URL | text_list | image_list | metadata
---|---|---|---
https://www.esaral.com/q/the-range-of-the-function-f-x-73839 | [
"",
null,
"# The range of the function f(x) = [x] − x",
null,
"Question:\n\nThe range of the function f(x) = [x] − x is __________ .\n\nSolution:\n\nf(x) = [x] − x\n\nSince x ≥ [x] (every number is greater than or equal to its greatest integer part),\n\nx − [x] = {x}, the fractional part of x,\n\n∴ [x] − x = −{x}, i.e., minus the fractional part.\n\nAlso, [x] = x for integral values of x.\n\nHence, for non-integral values, f(x) = −{x} ∈ (−1, 0),\n\nand for integral values, f(x) = 0.\n\nHence, the range of f(x) is (−1, 0]."
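The derivation can be checked numerically; this is a quick sketch (mine, not part of the original solution), using Python's math.floor for [x]:

```python
import math

def f(x):
    # f(x) = [x] - x, i.e., minus the fractional part {x}
    return math.floor(x) - x

# Non-integral arguments land strictly inside (-1, 0) ...
assert all(-1 < f(x) < 0 for x in [-2.75, -0.5, 0.25, 3.999])
# ... and integral arguments give exactly 0, so the range is (-1, 0].
assert all(f(x) == 0 for x in [-3, 0, 7])
```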
]
| [
null,
"https://www.facebook.com/tr",
null,
"https://www.esaral.com/static/assets/images/Padho-Bharat-Web.jpeg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.6971271,"math_prob":0.9999727,"size":453,"snap":"2023-40-2023-50","text_gpt3_token_len":166,"char_repetition_ratio":0.19153675,"word_repetition_ratio":0.08791209,"special_character_ratio":0.39955848,"punctuation_ratio":0.12,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999958,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-02T19:19:30Z\",\"WARC-Record-ID\":\"<urn:uuid:f2f8056c-3169-44c3-8e10-f649cc545079>\",\"Content-Length\":\"64259\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:515a3347-4d3d-4d3b-8b2c-ed9f9bc406a4>\",\"WARC-Concurrent-To\":\"<urn:uuid:7b3428f3-9007-4935-9d29-72092b4d4957>\",\"WARC-IP-Address\":\"104.26.13.203\",\"WARC-Target-URI\":\"https://www.esaral.com/q/the-range-of-the-function-f-x-73839\",\"WARC-Payload-Digest\":\"sha1:VAJLPPBCE3PBQ4P2Z22Q7S2M2ZL4EYLK\",\"WARC-Block-Digest\":\"sha1:KVEYTZCCFUEUTVT7NCILBFLQI67VD5IE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100448.65_warc_CC-MAIN-20231202172159-20231202202159-00162.warc.gz\"}"} |
https://thecleverprogrammer.com/2020/08/04/arima-model-in-machine-learning/ | [
"# ARIMA Model in Machine Learning\n\nARIMA stands for Autoregressive Integrated Moving Average. This model provides a family of functions that are very powerful and flexible for any task related to Time Series Forecasting. In Machine Learning, ARIMA models are a class of statistical models whose outputs depend linearly on their previous values in combination with stochastic factors.\n\nWhile choosing an appropriate time series forecasting model, we need to visualize the data to analyse the trends, seasonalities, and cycles. When seasonality is a very strong feature of the time series, we need to consider a model such as seasonal ARIMA (SARIMA).\n\nThe ARIMA model works as a distributed lag model, in which algorithms predict the future based on lagged values. In this article, I will show you how to use an ARIMA model for a very practical task in Machine Learning: Anomaly Detection.\n\n## Anomaly Detection with ARIMA Model\n\nAnomaly Detection means identifying unexpected events in a process, such as threats to our systems that may cause harm in terms of security and leakage of important information. The importance of Anomaly Detection is not limited to security; it is used to detect any event that does not conform to our expectations. Here I will explain how we can use an ARIMA model for Anomaly Detection.\n\nI will use data based on per-minute metrics of a host’s CPU utilization. 
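To make the "linearly dependent on previous values" idea concrete, here is a minimal sketch (the coefficients are made up for illustration): an AR(2) one-step forecast is just a constant plus a weighted sum of the two preceding observations.

```python
def ar2_forecast(series, c=0.5, phi1=0.6, phi2=0.3):
    """One-step-ahead AR(2) forecast: c + phi1*x[t-1] + phi2*x[t-2]."""
    return c + phi1 * series[-1] + phi2 * series[-2]

cpu = [48.0, 50.0, 51.0]  # toy per-minute CPU readings
print(ar2_forecast(cpu))  # 0.5 + 0.6*51.0 + 0.3*50.0 = 46.1
```

The "I" (integrated) and "MA" parts extend this with differencing and lagged error terms, which is what a library such as pyflux estimates for us.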
Now let’s get started with this task by importing the necessary libraries:\n\n``````import pandas as pd\n!pip install pyflux\nimport pyflux as pf\nfrom datetime import datetime``````\n\nNow let’s import the data and have a quick look at it and some of its insights. 
You can download the data I am using in this task from here.\n\n``````from google.colab import files\nfiles.upload()\n\ndata_train_a = pd.read_csv('cpu-train-a.csv', parse_dates=['datetime'], infer_datetime_format=True)``````\n\nNow, let’s visualize this data to have a quick look at what we are working with:\n\n``````import matplotlib.pyplot as plt\nplt.figure(figsize=(20,8))\nplt.plot(data_train_a['datetime'], data_train_a['cpu'], color='black')\nplt.ylabel('CPU %')\nplt.title('CPU Utilization')``````\n\n## Using ARIMA Model\n\nNow, let’s see how we can use the ARIMA model for prediction on the data:\n\n``````model_a = pf.ARIMA(data=data_train_a, ar=11, ma=11, integ=0, target='cpu')\nx = model_a.fit(\"M-H\")``````\n\nAcceptance rate of Metropolis-Hastings is 0.0\nAcceptance rate of Metropolis-Hastings is 0.026\nAcceptance rate of Metropolis-Hastings is 0.2346\n\nTuning complete! Now sampling.\nAcceptance rate of Metropolis-Hastings is 0.244425\n\nNow, let’s visualize our model:\n\n``model_a.plot_fit(figsize=(20,8))``\n\nThe output above shows CPU utilization over time fitted with the ARIMA model prediction. Now let’s perform an in-sample test to evaluate the performance of our model:\n\n``model_a.plot_predict_is(h=60, figsize=(20,8))``\n\nThe output above shows the in-sample (training set) predictions of our ARIMA model. 
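The anomaly rule itself is independent of pyflux: flag observations that fall outside the model's prediction band. A minimal, library-agnostic sketch (the band values here are placeholders for whatever intervals your fitted model produces):

```python
import numpy as np

def flag_anomalies(observed, lower, upper):
    """Boolean mask marking points outside the prediction band."""
    observed = np.asarray(observed, dtype=float)
    return (observed < np.asarray(lower)) | (observed > np.asarray(upper))

# Toy example: a prediction band of [40, 60] around a ~50% CPU forecast
mask = flag_anomalies([52.0, 48.5, 93.0, 55.1], lower=[40] * 4, upper=[60] * 4)
print(mask.tolist())  # -> [False, False, True, False]
```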
Now, I will run the actual prediction, using the most recent 100 observed data points followed by the 60 predicted points:\n\n``model_a.plot_predict(h=60,past_values=100,figsize=(20,8))``\n\nLet’s perform the same anomaly detection on another segment of the CPU utilization dataset captured at a different time:\n\n``````data_train_b = pd.read_csv('cpu-train-b.csv', parse_dates=['datetime'], infer_datetime_format=True)\nplt.figure(figsize=(20,8))\nplt.plot(data_train_b['datetime'], data_train_b['cpu'], color='black')\nplt.ylabel('CPU %')\nplt.title('CPU Utilization')``````\n\nNow, let’s fit this data on the model:\n\n``````model_b = pf.ARIMA(data=data_train_b, ar=11, ma=11, integ=0, target='cpu')\nx = model_b.fit(\"M-H\")``````\n\nAcceptance rate of Metropolis-Hastings is 0.0\nAcceptance rate of Metropolis-Hastings is 0.016\nAcceptance rate of Metropolis-Hastings is 0.1344\nAcceptance rate of Metropolis-Hastings is 0.21025\nAcceptance rate of Metropolis-Hastings is 0.23585\nTuning complete! Now sampling.\nAcceptance rate of Metropolis-Hastings is 0.34395\n\n``model_b.plot_predict(h=60,past_values=100,figsize=(20,8))``\n\nWe can see the anomaly that occurs a short time after the training period: the observed values fall outside the model’s confidence bands, which raises an anomaly alert.\n\nI hope you liked this article on Anomaly Detection using the ARIMA Model. Feel free to ask your valuable questions in the comments section below. You can also follow me on Medium to learn every topic of machine learning."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.7938751,"math_prob":0.88307965,"size":4687,"snap":"2020-34-2020-40","text_gpt3_token_len":1109,"char_repetition_ratio":0.124279305,"word_repetition_ratio":0.05564142,"special_character_ratio":0.23533177,"punctuation_ratio":0.13184358,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99579924,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-25T19:33:48Z\",\"WARC-Record-ID\":\"<urn:uuid:883630e4-171c-490c-9725-ded4644f8325>\",\"Content-Length\":\"122452\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1a3f4967-0273-44a8-aaa6-3a76fa7518f3>\",\"WARC-Concurrent-To\":\"<urn:uuid:5f3736aa-335f-4715-a45d-5ef7283093ac>\",\"WARC-IP-Address\":\"192.0.78.224\",\"WARC-Target-URI\":\"https://thecleverprogrammer.com/2020/08/04/arima-model-in-machine-learning/\",\"WARC-Payload-Digest\":\"sha1:JXWIEJJDCDEAIZWMR34PQE2JSM6RZOL3\",\"WARC-Block-Digest\":\"sha1:46I45I2VBQU5X5HNFD2KOEX7OI4DXXLE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400228707.44_warc_CC-MAIN-20200925182046-20200925212046-00673.warc.gz\"}"} |
http://www.variousconsequences.com/2017/11/deep-learning-to-accelerate-topology-optimization.html | [
"## Friday, November 10, 2017\n\n### Deep Learning to Accelerate Topology Optimization",
null,
"Topology Optimization Data Set for CNN Training\nNeural networks for topology optimization is an interesting paper I read on arXiv that illustrates how to speed up topology optimization calculations by using a deep learning convolutional neural network. The data sets for training the network are generated in ToPy, which is an Open Source topology optimization tool.\n\nThe approach the authors take is to run ToPy for some number of iterations to generate a partially converged solution, and then use this partially converged solution and its gradient as the input to the CNN. The CNN is trained on a data set generated from randomly generated ToPy problem definitions that are run to convergence. Here's their abstract:\nIn this research, we propose a deep learning based approach for speeding up the topology optimization methods. The problem we seek to solve is the layout problem. The main novelty of this work is to state the problem as an image segmentation task. We leverage the power of deep learning methods as the efficient pixel-wise image labeling technique to perform the topology optimization. We introduce convolutional encoder-decoder architecture and the overall approach of solving the above-described problem with high performance. The conducted experiments demonstrate the significant acceleration of the optimization process. The proposed approach has excellent generalization properties. We demonstrate the ability of the application of the proposed model to other problems. The successful results, as well as the drawbacks of the current method, are discussed.\n\nThe deep learning network architecture from the paper is shown below. Each kernel is 3x3 pixels and the illustration shows how many kernels are in each layer.",
null,
"Architecture (Figure 3) from Neural Networks for Topology Optimization\n\nThe data set that the authors used to train the deep learning network contained 10,000 randomly generated (with certain constraints, see the paper) example problems. Each of those 10k \"objects\" in the data set included 100 iterations of the ToPy solver, so they are 40x40x100 tensors (40x40 is the domain size). The authors claim a 20x speed-up in particular cases, but the paper is a little light on actually showing / exploring / explaining timing results.\n\nThe problem for the network to learn is to predict the final iteration from some intermediate state. This seems like it could be a generally applicable approach to speeding up convergence of PDE solves in computational fluid dynamics (CFD) or computational structural mechanics / finite element analysis. I haven't seen this sort of approach to speeding up solvers before. Have you? Please leave a comment if you know of any work applying similar methods to CFD or FEA for speed-up.",
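To make the data layout concrete, here is a small NumPy sketch of the shapes described above (variable names are mine, not the paper's): one stored object holds all 100 iterations of a 40x40 domain, and a training pair maps an intermediate state plus its last-step change to the converged layout.

```python
import numpy as np

rng = np.random.default_rng(0)
size, n_iters = 40, 100

# One "object" from the data set: 100 solver iterations of a 40x40 domain
iterations = rng.random((n_iters, size, size))

# Sample a random intermediate iteration and its last-step change (gradient)
k = int(rng.integers(1, n_iters))
x = np.stack([iterations[k], iterations[k] - iterations[k - 1]], axis=-1)
y = iterations[-1]  # training target: the converged layout

assert x.shape == (40, 40, 2) and y.shape == (40, 40)
```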
null,
"",
null,
""
]
| [
null,
"https://3.bp.blogspot.com/-0AudlK3o0ZM/WgWQLoAyM4I/AAAAAAAAC4U/rersemOWM7sRdThBNS4poz3NUPSVQjLkwCLcBGAs/s400/Screenshot%2Bfrom%2B2017-11-10%2B06-31-05.png",
null,
"https://4.bp.blogspot.com/-ZzGOdXeijdE/WgWTmG1AYfI/AAAAAAAAC4g/1Kkpj3NmRZowFEAfq3jhDZgWDcNkRfGPACLcBGAs/s640/Screenshot%2Bfrom%2B2017-11-10%2B06-54-44.png",
null,
"http://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjKKhRT-OPufgXsquQllIsZNvpKrnCbN5QBhK2y1shv78STK5M8zNqdTHTRyD2jaTFlRVTvP5FDqNlCnSDt3iFg5VBEKoErsczMrSJE033aHH6JMpadwn1TQQISdi9Y0w/s45-c/*",
null,
"http://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjKKhRT-OPufgXsquQllIsZNvpKrnCbN5QBhK2y1shv78STK5M8zNqdTHTRyD2jaTFlRVTvP5FDqNlCnSDt3iFg5VBEKoErsczMrSJE033aHH6JMpadwn1TQQISdi9Y0w/s45-c/*",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9363562,"math_prob":0.81779486,"size":2940,"snap":"2023-40-2023-50","text_gpt3_token_len":569,"char_repetition_ratio":0.11307902,"word_repetition_ratio":0.0,"special_character_ratio":0.19013606,"punctuation_ratio":0.08316832,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97430515,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,10,null,10,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-28T07:17:38Z\",\"WARC-Record-ID\":\"<urn:uuid:1e4023eb-1afd-4ae7-99bd-00e0d1dc3276>\",\"Content-Length\":\"129272\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8ce71d0e-7917-4697-92c3-90826c82a56b>\",\"WARC-Concurrent-To\":\"<urn:uuid:082104d2-8e22-49d9-b4a9-4be85f1c37d5>\",\"WARC-IP-Address\":\"172.253.122.121\",\"WARC-Target-URI\":\"http://www.variousconsequences.com/2017/11/deep-learning-to-accelerate-topology-optimization.html\",\"WARC-Payload-Digest\":\"sha1:5CMNAXVRUY63HFMETKLJ42CHOANFDFN7\",\"WARC-Block-Digest\":\"sha1:57ZLQAT7S34AEFKFOC6X3F3HKU4ZVM7A\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510368.33_warc_CC-MAIN-20230928063033-20230928093033-00599.warc.gz\"}"} |
http://eel.is/c++draft/allocator.traits.types | [
"# 20 General utilities library [utilities]\n\n## 20.10 Memory [memory]\n\n### 20.10.8 Allocator traits [allocator.traits]\n\n#### 20.10.8.2 Member types [allocator.traits.types]\n\n```using pointer = see below; ```\nType: Alloc::pointer if the qualified-id Alloc::pointer is valid and denotes a type ([temp.deduct]); otherwise, value_type*.\n```using const_pointer = see below; ```\nType: Alloc::const_pointer if the qualified-id Alloc::const_pointer is valid and denotes a type ([temp.deduct]); otherwise, pointer_traits<pointer>::rebind<const value_type>.\n```using void_pointer = see below; ```\nType: Alloc::void_pointer if the qualified-id Alloc::void_pointer is valid and denotes a type ([temp.deduct]); otherwise, pointer_traits<pointer>::rebind<void>.\n```using const_void_pointer = see below; ```\nType: Alloc::const_void_pointer if the qualified-id Alloc::const_void_pointer is valid and denotes a type ([temp.deduct]); otherwise, pointer_traits<pointer>::rebind<const void>.\n```using difference_type = see below; ```\nType: Alloc::difference_type if the qualified-id Alloc::difference_type is valid and denotes a type ([temp.deduct]); otherwise, pointer_traits<pointer>::difference_type.\n```using size_type = see below; ```\nType: Alloc::size_type if the qualified-id Alloc::size_type is valid and denotes a type ([temp.deduct]); otherwise, make_unsigned_t<difference_type>.\n```using propagate_on_container_copy_assignment = see below; ```\nType: Alloc::propagate_on_container_copy_assignment if the qualified-id Alloc::propagate_on_container_copy_assignment is valid and denotes a type ([temp.deduct]); otherwise false_type.\n```using propagate_on_container_move_assignment = see below; ```\nType: Alloc::propagate_on_container_move_assignment if the qualified-id Alloc::propagate_on_container_move_assignment is valid and denotes a type ([temp.deduct]); otherwise false_type.\n```using propagate_on_container_swap = see below; ```\nType: Alloc::propagate_on_container_swap if the 
qualified-id Alloc::propagate_on_container_swap is valid and denotes a type ([temp.deduct]); otherwise false_type.\n```using is_always_equal = see below; ```\nType: Alloc::is_always_equal if the qualified-id Alloc::is_always_equal is valid and denotes a type ([temp.deduct]); otherwise is_empty<Alloc>::type.\n```template<class T> using rebind_alloc = see below; ```\nAlias template: Alloc::rebind<T>::other if the qualified-id Alloc::rebind<T>::other is valid and denotes a type ([temp.deduct]); otherwise, Alloc<T, Args> if Alloc is a class template instantiation of the form Alloc<U, Args>, where Args is zero or more type arguments; otherwise, the instantiation of rebind_alloc is ill-formed."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.5113793,"math_prob":0.83798224,"size":330,"snap":"2021-31-2021-39","text_gpt3_token_len":84,"char_repetition_ratio":0.12576687,"word_repetition_ratio":0.0,"special_character_ratio":0.23636363,"punctuation_ratio":0.24657534,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.983491,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-25T05:29:22Z\",\"WARC-Record-ID\":\"<urn:uuid:ecb105fe-79f6-4bd4-87c5-47d6ad7a5683>\",\"Content-Length\":\"20270\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0b9f242c-9cbc-4747-8c52-ba39e6f3e274>\",\"WARC-Concurrent-To\":\"<urn:uuid:d9ffaad2-d376-442a-88dc-dd0632ed0379>\",\"WARC-IP-Address\":\"83.96.242.80\",\"WARC-Target-URI\":\"http://eel.is/c++draft/allocator.traits.types\",\"WARC-Payload-Digest\":\"sha1:OFPAEPGGAIQAFZW3TKVYVUYTZ4KG6BYX\",\"WARC-Block-Digest\":\"sha1:DTS7P3OAILSYS4B5YDKGZHTEHMZJYAYC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057598.98_warc_CC-MAIN-20210925052020-20210925082020-00466.warc.gz\"}"} |
https://techcommunity.microsoft.com/t5/excel/excel-vba-vb-5-5-regex-help-getting-count-and-value/td-p/2800958 | [
"SOLVED\n\nContributor\n\n# Excel VBA VB 5.5 REGEX - help getting count and value\n\nGiven the following contents in A1 cell:\n\nLog file\n, target1 = 1234, blah\n, target1 = 5678, blah\n\nIs there a way to use VB 5.5 Regex to get the count of \"target1\" and the value of the latter target1 occurrence?\n\nResults:\n\ntarget1 count: 2\n\ntarget1 value: 5678\n\nI've got a working example of regex in Excel from here, just need help with the pattern.\n\n8 Replies\n\n# Re: Excel VBA VB 5.5 REGEX - help getting count and value\n\nI'm so sorry but I need to change the OP...there should be no square brackets surrounding \"target1\"\nbest response confirmed by rodsan724 (Contributor)\nSolution\n\n# Re: Excel VBA VB 5.5 REGEX - help getting count and value\n\nYou may try the following functions...\n\nTo get the Count:\n\n``````Function getTargetCount(ByVal str As String) As Long\n    Dim Matches As Object\n    With CreateObject(\"VBScript.RegExp\")\n        .Global = True\n        .IgnoreCase = True\n        .Pattern = \"target1 = (\\d+)\"\n        If .Test(str) Then\n            Set Matches = .Execute(str)\n            getTargetCount = Matches.Count\n        Else\n            getTargetCount = 0\n        End If\n    End With\nEnd Function``````\n\nTo get the Last Value of Target:\n\n``````Function getTargetValue(ByVal str As String) As Long\n    Dim Matches As Object\n    With CreateObject(\"VBScript.RegExp\")\n        .Global = True\n        .IgnoreCase = True\n        .Pattern = \"target1 = (\\d+)\"\n        If .Test(str) Then\n            Set Matches = .Execute(str)\n            getTargetValue = Matches(Matches.Count - 1).SubMatches(0)\n        End If\n    End With\nEnd Function``````\n\nThen you may either use these functions in another code or as regular Excel functions on the worksheet itself.\n\n# Re: Excel VBA VB 5.5 REGEX - help getting count and value\n\nThank you so much!\n\n# Re: Excel VBA VB 5.5 REGEX - help getting count and value\n\nYou're welcome @rodsan724! 
Glad I could help.\n\n# Re: Excel VBA VB 5.5 REGEX - help getting count and value\n\n@Subodh_Tiwari_sktneer, is it possible to add these functions to a PowerQuery that I have connected to a database?\n\n# Re: Excel VBA VB 5.5 REGEX - help getting count and value\n\nNope, but Power Query has other options to do the same.\n\n# Re: Excel VBA VB 5.5 REGEX - help getting count and value\n\nDear SIr,\nClearly seen that this code taken from output of Chat-GPT.\nplease remove black background & put neat code\nRegards,\n\n# Re: Excel VBA VB 5.5 REGEX - help getting count and value\n\nThe reply by @Subodh_Tiwari_sktneer is from October of 2021. ChatGPT was launched in November of 2022.\n\nThe black background is a feature of this forum. That, the line numbers and the formatting are applied if you use the </> button on the toolbar at the top of the box where you compose a post or reply:",
null,
""
]
| [
null,
"https://techcommunity.microsoft.com/t5/image/serverpage/image-id/455592i0CF463F74F18EC65/image-size/medium",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.77247375,"math_prob":0.6952155,"size":2136,"snap":"2023-14-2023-23","text_gpt3_token_len":691,"char_repetition_ratio":0.13414635,"word_repetition_ratio":0.29503918,"special_character_ratio":0.35486892,"punctuation_ratio":0.13793103,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95005643,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-10T08:03:15Z\",\"WARC-Record-ID\":\"<urn:uuid:410db9ba-59aa-4dff-8ca2-38f4437c081c>\",\"Content-Length\":\"449091\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b1858374-4007-4638-9a83-b0704a3b5a44>\",\"WARC-Concurrent-To\":\"<urn:uuid:35563cd7-069c-4689-8a8c-8bdfa6215b5d>\",\"WARC-IP-Address\":\"23.218.122.245\",\"WARC-Target-URI\":\"https://techcommunity.microsoft.com/t5/excel/excel-vba-vb-5-5-regex-help-getting-count-and-value/td-p/2800958\",\"WARC-Payload-Digest\":\"sha1:NT6VEUXZC7OK7RMNSPEX2IXJN3EXTAYQ\",\"WARC-Block-Digest\":\"sha1:MPUYO46K4G23HS5HQFOWVESCJ3IFV43T\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224657144.94_warc_CC-MAIN-20230610062920-20230610092920-00101.warc.gz\"}"} |
https://q.utoronto.ca/eportfolios/6605/java_tutorial | [
"## Scope of Variables in Java Tutorial",
null,
"In this tutorial, we will learn what variable scopes are and how they work in Java.\n\n## Definition of Scope: What Does Scope Mean?\n\nDepending on the place where we declare a variable, it’ll have some level of accessibility, and this is called the scope of that variable.\n\nFor example, if we declare a variable in a method, the scope of the variable will be Method Scope. This means we can only access the variable in that method; no other method can see, access, or change the value of this variable.\n\n## Scopes of variables in Java\n\nIn short, a variable can have any of these three scopes:\n\n• Class scope (AKA member variable)\n• Method scope (AKA local variable)\n• Block scope\n\n## Java Class Scope (AKA member variable)\n\nA variable that is declared outside of any method in a class has Class Scope.\n\nThis means methods in that class can access and change the content of the variable if they want.\n\nNote: We can’t have a variable with the same name declared twice in Class Scope.\n\n## Example: class scope variable in Java\n\nCreate a file named `Person` and put the class `Person` you see below into the file:\n\n```public class Person {\n    private String name;\n    private String lastName;\n\n    public String getName() {\n        return name;\n    }\n    public void setName(String name) {\n        this.name = name;\n    }\n    public String getLastName() {\n        return lastName;\n    }\n    public void setLastName(String lastName) {\n        this.lastName = lastName;\n    }\n}\n```\n\nNow create another file named `Simple` and put the code below into that file:\n\n```public class Simple {\n    public static void main(String[] args) {\n        Person person = new Person();\n        person.setName(\"John\");\n        person.setLastName(\"Doe\");\n        System.out.println(person.getName());\n        System.out.println(person.getLastName());\n    }\n}\n```\n\nOutput:\n\n```John\n\nDoe```\n\nInside the `Person` class, methods like `getName()` and `getLastName()` return the value of the `name` and `lastName` variables, 
respectively.\n\nThese two variables are declared inside the class as attributes of that class, so they have class scope. That means not just the two methods mentioned above, but any other method in this class can access these two variables.\n\n## Java Method scope (AKA local variable)\n\nA variable that is declared in the body of a method has Method Scope. This means the variable is only accessible within the body of that method, and no other method has access to it.\n\nThe beauty of such a variable is that, because a method scope variable is not in the scope of other methods, other methods can have their own variable with the same name and type.\n\n## Method Scope Notes:\n\n• In the same method, we can’t have a variable with the same name declared twice!\n• But we can have two variables with the same name where one is class scope and the other is method scope. In such a case, the method scope variable will shadow the class scope variable. (Later in this section, we explain what shadowing means in practice.)\n• Also, two independent methods can have a variable with the same name! 
This is because the scope of each variable is separate from the other.\n\n## Example: method scope variable in Java\n\n```public class Simple {\n    public static void main(String[] args) {\n        int age = 100;\n        printAge();\n    }\n    public static void printAge(){\n        System.out.println(age);\n    }\n}\n```\n\nIf we try to compile this program, we will get a compile-time error, because the variable `age` is declared inside the body of the `main` method, so it has method scope, but we’re trying to read its value inside the `printAge()` method!\n\n## Java Block scope (AKA bracket scope)\n\nA variable that is declared inside a block `{}`, like in an `if` statement, is only accessible within that block.\n\nThis means after the closing brace `}` of that block, the variable does not exist anymore.\n\n## Example: Block scope variable in Java\n\n```public class Simple {\n    public static void main(String[] args) {\n        for (int test = 0; test<10; test++){\n            System.out.print(test+\", \");\n        }\n        System.out.println(\"The final value of the 'test' variable is: \"+ test);\n    }\n}\n```\n\nOutput:\n\n```Error:(8, 75) java: cannot find symbol\n\nsymbol: variable test\n\nlocation: class tuto.Simple```\n\nIn this example, we’ve created the variable `test` inside the body of the `for` loop. Because this variable is Block Scope, it is only accessible within the body of the `for` loop.\n\nAfter the body of the loop, outside the block, we tried to access the value of the `test` variable, and because of that we got the compile-time error.\n\n## Variable Shadowing in Java\n\nLet’s say we have a variable with the name `age` and with the scope of class. 
Also, we have another variable with the same name but with Method scope.\n\nThis second variable is declared inside a method called `print`.\n\nNow, if we call the `print` method and want to get the value of the `age` variable, what value do you think will be sent to the output stream?\n\nIt’s the value of the Method Scope variable.\n\nBasically, when we reference a variable inside a method, the compiler will first look in the method where the reference happened for a variable with this name. If it finds such a variable in the method, it’ll choose its value. Otherwise, the compiler will look one level higher, at the variables with Class Scope, and see if there is a variable with this name.\n\nIn the end, if the compiler can’t find a variable with this name, it’ll report an error.\n\n## Example: Shadowing variable in Java\n\n```public class Simple {\n    private static int age = 80;\n    public static void main(String[] args) {\n        int age = 100;\n        System.out.println(\"The age is: \"+ age);\n    }\n}\n```\n\nOutput: `The age is: 100`\n\nHere there’s a variable named `age` with class scope and another variable with the same name inside the body of the `main` method, which means its scope is Method.\n\nWhen we called the `println()` method to get the value of the `age` variable, the compiler chose the one with Method scope and sent its value to the output stream. This is because the Java engine is in the current scope (the scope of the `main` method) and it finds a variable with the name `age` there, so it won’t go to the higher scope (class scope in this case) to find the variable."
]
| [
null,
"https://du11hjcvx0uqb.cloudfront.net/dist/images/move-e0f9bfc8dc.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8459181,"math_prob":0.65480226,"size":6348,"snap":"2022-05-2022-21","text_gpt3_token_len":1409,"char_repetition_ratio":0.19971627,"word_repetition_ratio":0.092307694,"special_character_ratio":0.23267171,"punctuation_ratio":0.11421725,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96004343,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-16T22:33:30Z\",\"WARC-Record-ID\":\"<urn:uuid:ae8896c8-c8df-418c-8db7-fde36ff4bbec>\",\"Content-Length\":\"233069\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0679ed12-e2fb-41b4-aa56-0eab38123f92>\",\"WARC-Concurrent-To\":\"<urn:uuid:e7e047ad-e9cb-46ea-8f4f-9a94d9684f19>\",\"WARC-IP-Address\":\"3.96.75.6\",\"WARC-Target-URI\":\"https://q.utoronto.ca/eportfolios/6605/java_tutorial\",\"WARC-Payload-Digest\":\"sha1:HCNADL23RDMMGE2GYETHWIOHJOCC4I5Y\",\"WARC-Block-Digest\":\"sha1:MC6YK5C27UVG2YT7TJGIUA2ULQF44IMV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662512249.16_warc_CC-MAIN-20220516204516-20220516234516-00348.warc.gz\"}"} |
https://mijn.bsl.nl/the-diffusion-model-visualizer-an-interactive-tool-to-understand/16227264 | [
"Open Access 25-10-2018 | Original Article\n\n# The diffusion model visualizer: an interactive tool to understand the diffusion model parameters\n\nAuthor: Rainer W. Alexandrowicz\n\nPublished in: Psychological Research | Issue 4/2020\n\n## Abstract\n\nResponse time (RT) data play an important role in psychology. The diffusion model (DM) allows analyzing RT data in a two-alternative-forced-choice paradigm using a particle drift diffusion modeling approach. It accounts for right-skewed distributions in a natural way. However, the model incorporates seven parameters, whose roles are difficult to comprehend from the model equation alone. Therefore, the present article introduces the diffusion model visualizer (DMV), which allows interactive manipulation of each parameter and plots the resulting RT densities. Thus, the DMV serves as a valuable tool for understanding the specific role of each model parameter. It may come in handy for didactic purposes and in a research context. It allows for tracking down parameter estimation problems by delivering the model-based ideal densities, which can be juxtaposed with the data-based densities. It will also serve a valuable purpose in detecting outliers. The article describes the basics of the DM along with technical details of the DMV and gives several hints for its usage.\n\n## Introduction\n\nResponse times (RTs) for decisions constitute a valuable source of information in psychological research. They occur in simple reaction tasks (e.g., response to the occurrence of a stimulus such as a light) and recognition tasks (go/no-go tasks and choice tasks: responding to stimuli with certain characteristics, but not to distractors lacking these characteristics). 
In the present context, we will focus primarily on data from recognition tasks.
RT distributions are right-skewed by nature, requiring appropriate handling when applying analysis techniques assuming normally distributed data. Because of the technical efforts required to record RT data, we usually use experimental designs to generate response data. These are primarily evaluated with ANOVA-based methods, which do assume normal distributions. As a remedy, several ways of handling the skewed data have been proposed: some argue that the ANOVA F test is sufficiently robust against non-normality and, therefore, do not correct at all (e.g., Hays, 1994, p. 406). Others apply transformations to approach a normal distribution, like the Box–Cox transformation (e.g., Seber & Lee, 2003, ch. 10.3.2), the square root transformation or, more generally, the family of power transformations (e.g., Cohen, Cohen, West & Aiken, 2003, p. 233 and p. 245), the logarithmic transformation (Cohen et al., 2003, p. 245), rank-based normalization (Cohen et al., 2003, p. 247), the shifted power transformation (e.g., Atkinson, 1985, p. 184), and others (e.g., Groß, 2004, ch. 6). A third line of handling is the elimination of values considered outliers, i.e., both fast and slow responses (e.g., Voss, Nagler & Lerche, 2016). Fourth, one may as well try to apply other models than the normal one, as did Matzke and Wagenmakers (2009) by probing the ex-Gaussian and the shifted-Wald distribution, however, with limited success. Although some researchers found parameters of descriptive parameterizations of RT distributions to be useful (e.g., Schmiedek, Oberauer, Wilhelm, Süß & Wittmann, 2007; Spieler, Balota & Faust, 1996), the primary point of criticism concerning the shifted Wald and ex-Gaussian parameterizations is probably that they are not motivated by a psychological theory. Another shortcoming is that they neglect the information in classification errors.
Consequently, they are not suited to cope with speed–accuracy trade-offs.\nIn contrast, the diffusion model constitutes an entirely different approach by drawing on particle diffusion theory. This approach offers a compelling principle to describe the skewed RT distributions typically resulting from human decision formation. However, due to its complexity (involving seven model parameters, as will be demonstrated below), it is hard to conceive, how the various parameters affect the resulting RT density curves. Therefore, the present article introduces a visualization tool allowing for exactly tracking and scrutinizing the role of each parameter and their interplay in great detail. The text is structured as follows: after a short introduction of the diffusion model (including numerous hints to relevant sources), the visualization tool is presented along with some technical details. Finally, we will discuss useful applications of the visualization program.\n\n## The diffusion modeling approach to RT analysis\n\nThe diffusion model (DM; Ratcliff, 1978, 2013), also termed Ratcliff DM, drift diffusion model (DDM, e.g., Bogacz, Brown, Moehlis, Holmes & Cohen, 2006; Correll, Wittenbrink, Crawford & Sadler, 2015; Dutilh et al., 2016, subm.), or Wiener diffusion model with absorbing boundaries (e.g., Grasman, Wagenmakers & van der Maas, 2009) takes into account both RT and accuracy of speeded binary decisions like those occurring in a two-alternative-forced-choice (2AFC or TAFC) paradigm. Respondents have to select one out of two response alternatives, possibly under time pressure, while RT and correctness are recorded for each decision. The simultaneous availability of both provides a means for disentangling the speed–accuracy trade-off dilemma (for a detailed account see Heitz, 2014 or Ratcliff, 1978, pp. 93–97). 
How DM analyses improve our understanding of relatively fast decisions has been a major topic in two recent overview articles (Forstmann, Ratcliff & Wagenmakers, 2016; Ratcliff, Smith, Brown & McKoon, 2016).\n\n### The concept of the diffusion model\n\nBasically, the model assumes that cognitive information accumulation and processing takes place in form of a sequential sampling process. After stimulus presentation, the respondent collects, processes, and accumulates stimulus features, which favor either decision A or decision B. The model assumes that this accumulation corresponds to neural activity in some way without making further assumptions regarding details of the process. Several studies collected empirical evidence supporting the plausibility of this assumption (e.g., Gold & Shadlen, 2007, Forstmann et al., 2016, Heekeren, Marrett, Bandettini & Ungerleider, 2004; Heekeren, Marrett & Ungerleider, 2008; Ho, Brown & Serences, 2009; Lo & Wang, 2006; Ma, Beck & Pouget, 2008; Soltani & Wang, 2010).\nWe conceive of the decision process as a random walk (i.e., discrete), but with the sampling assumed so fast that it can be expressed by a Wiener (i.e., a continuous-time) diffusion process, with a large noise component (cf. Navarro & Fuss, 2009). The two resulting RT distributions form First-Passage Time distributions (FPT; e.g., Feller, 1968, ch. XIV.6 or Feller, 1971, ch. XIV.5), which can be traced back to what Siegmund (1986) has termed the “grandfather of all such problems” (p. 361), the one sample Kolmogorov–Smirnov statistic. Cox and Miller (1965) provide a fundamental treatise regarding Brownian motion and absorption.\nTechnically, we deal with sequential sampling models in the sense of Wald (1945, 1947), assuming here that a decision is the result of (noisy) evidence accumulation across a period of time eventually passing a critical threshold. Laming (1968) presents a series of experiments specifically linking the decision process to the random walk model. 
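This accumulation-to-threshold idea can be made concrete with a small simulation. The following is a minimal sketch (not part of the DMV) that discretizes the noisy accumulation process: a particle starts midway between two absorbing boundaries and takes small steps with a constant mean drift plus Gaussian noise; the boundary hit first determines the response, and the hitting time the decision time. All numerical values are hypothetical.

```python
import math
import random

def simulate_trial(a, v, z_rel=0.5, s=1.0, dt=0.005):
    """One decision trial: Euler discretization of noisy evidence
    accumulation between absorbing boundaries 0 and a.
    Returns (hit_upper_boundary, decision_time)."""
    x = z_rel * a                   # starting point (here: unbiased)
    t = 0.0
    sd = s * math.sqrt(dt)          # within-trial noise per time step
    while 0.0 < x < a:
        x += v * dt + random.gauss(0.0, sd)
        t += dt
    return x >= a, t

def run_condition(a, v, n=1000):
    """Proportion of upper-boundary responses and mean decision time."""
    hits, total_t = 0, 0.0
    for _ in range(n):
        upper, t = simulate_trial(a, v)
        hits += upper
        total_t += t
    return hits / n, total_t / n

random.seed(1)
acc_narrow, rt_narrow = run_condition(a=0.8, v=1.0)
acc_wide, rt_wide = run_condition(a=2.0, v=1.0)
print(f"narrow boundaries: acc={acc_narrow:.2f}, mean DT={rt_narrow:.2f}")
print(f"wide boundaries:   acc={acc_wide:.2f}, mean DT={rt_wide:.2f}")
```

With these settings, widening the boundary separation makes the simulated decisions slower but lets them end more often at the drift-favored boundary, illustrating the speed–accuracy trade-off mentioned above.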
However, in contrast to random walk models, the DM assumes both evidence and time to be continuous. The DM is therefore frequently described as a special case of the more general class of sequential sampling models, which are characterized by sampling of relative evidence, and contrasted with the class of Accumulator Models characterized by absolute evidence criteria (Bogacz et al., 2006; Ratcliff et al., 2016).

### The four main parameters of the DM

The "classical" DM as formulated by Ratcliff (1978) employs four model parameters, which can be perceived as the "main" parameters carrying the fundamental meaning of the model in substantive terms (extensions will be discussed in "Parameter variability/variability parameters").
Figure 1 shows an illustration of the RT densities of both reaction alternatives along with the four main model parameters. The parameter a is the upper threshold of accumulated evidence in favor of decision alternative A required to issue the respective response (usually pressing a key). The lower boundary is set to a value of zero; hence, a is the boundary separation and reflects the threshold difference at the same time. The parameter z denotes the location of the starting point. It can be expressed as an absolute value on the same scale as a, then $$0< z < a$$; however, it has proven convenient to rescale it to represent the starting position relative to a, then $$0< z < 1$$ (which will be adopted here). The parameter $$t_0$$ (or $$T_{\mathrm {ER}}$$) collects all time components not related to decision-making but to encoding and response. The parameter $$\nu$$ denotes the drift rate of information accumulation, which is the average amount of evidence gathered per time slice.
It can take positive and negative values.

### Parameter interpretation from a psychological point of view

While the model parameters convey a compelling interpretation from a theoretical point of view, we have to provide empirical evidence corroborating these theoretical assumptions from a substantive (psychological) perspective. The following selection of results regarding parameter validation illustrates that we already have ample evidence supporting the theoretical view. For studies systematically exploring parameter validity see, for example, Arnold, Bröder and Bayen (2015), Lerche and Voss (2017), Ratcliff and Rouder (1998), Voss, Rothermund and Voss (2004), or Wagenmakers, Ratcliff, Gomez and McKoon (2008).

#### Boundary separation a

Large values of a are assumed to indicate the subject's response caution, which is under subjective control and determined prior to the start of each trial (e.g., Wagenmakers, 2009). In this sense, Voss et al. (2004) termed this parameter the "response criterion". Ratcliff and Rouder (1998) established a speed vs. accuracy condition by instructing the respondents to respond as quickly as possible in the first case or to decide as accurately as possible in the latter case. They found marked differences in the a parameter estimates between the two conditions. Also Voss et al. (2004) found significantly increased values of this parameter when instructing respondents to "work especially carefully and to avoid mistakes" (p. 1211). Likewise, van Ravenzwaaij and Oberauer (2009) found the speed-accuracy instruction to affect primarily a. Arnold et al. (2015) varied the speed-accuracy instruction by giving negative feedback either when the response was wrong (accuracy condition) or when the response took more than 1000 ms (speed condition), yielding the expected differences in a as well. Moreover, the boundary separation also seems to increase with age.
Ratcliff and McKoon (2008) quote a large series of studies indicating that this effect is due to increased conservatism of older adults (p. 911).\n\n#### Response bias z\n\nThe starting point (or bias) parameter z is understood as a bias of the individual, reflecting the a priori expectation, whether the next stimulus will be a positive or negative example (i.e., which response will likely be the adequate one). Accordingly, Ratcliff and McKoon (2008) found variations in z in relation to the proportion of left- and right-moving stimuli (which the subjects were told in advance); Arnold et al. (2015) also found variations in z by varying the proportion of old and new items in a recognition memory experiment; Voss et al. (2004) further revealed that the starting point varies when one of the two responses is offered a reward.\n\n#### Drift parameter $${\\nu }$$\n\nRatcliff and McKoon (2008) describe the drift parameter as the “quality, or strength, of the information available from a stimulus” (p. 901). Hence, large (absolute) values indicate that the stimulus allows for a fast decision, while values close to zero indicate that the decision might rather rest upon guessing. Ratcliff (1978) considered the drift parameter “to alone represent input from memory into the decision system” (p. 70). However, in a between-subject comparison of identical tasks, $$\\nu$$ may as well be seen as the parameter representing “perceptual sensitivity” (Voss et al. 2004, p. 1208). This interpretation is rather appealing, as it allows for modeling a subject’s information processing speed independent of speed–accuracy preference or conservatism (which is covered by a) or motor response-execution speed (covered by $$t_0$$). It is further in line with Schmiedek et al. (2007), who found a relation of the drift parameter to working memory. 
Also, van Ravenzwaaij, Brown and Wagenmakers (2011) relate individual differences in general intelligence to the drift parameter, and Ranger, Kuhn and Szardenings (2016) characterize $$\nu$$ generally as "(...) the subject's capability to process information." (p. 124).
Voss et al. (2004) found the drift rate to vary according to the difficulty of the task. Also Ratcliff and McKoon (2008) found that stimuli of varying difficulty affected exclusively the drift parameter. Similarly, van Ravenzwaaij and Oberauer (2009) found $$\nu$$ to correspond to the stimulus–response compatibility (p. 469). Arnold et al. (2015) presented some of the old stimuli of a recognition experiment once and some of them twice along with new items, yielding significant differences in $$\nu$$ across these three conditions.

#### Non-decision time component $$t_0$$

The $$t_0$$ parameter comprises all processes not involved in decision-making. These include encoding of the stimulus, response preparation, and motor response. Voss et al. (2004) induced a response handicap condition, in which respondents were instructed to use one and the same finger for all keyboard responses ("C" and "M"), requiring them to press the "B"-key with the same finger to start the trial. They found on average a significantly increased value for the $$t_0$$-parameter (compared to a standard experimental condition, not involving this response handicap), suggesting that the non-decision parameter actually reflects the motoric complexity of a task. The same conclusion was reached by Gomez et al. (2015), who by varying the response modality (eye movement, key pressing, and pointing on a touchscreen; p. 1518) found these three modalities to affect only $$t_0$$ (and its variability, $$s_{t_0}$$), but none of the other model parameters. Lerche and Voss (2017) required their respondents to press the response key not once (as usual), but three times in a row, after coming to a decision.
They found clear effects upon the $$t_0$$-parameter, thus corroborating the validity of this parameter.

### The model equations

The DM relies on two sources of information, the (non-)matching response rate and the two RT distributions. Resorting to the "gambler's ruin problem" (or "classical ruin problem", as denoted by Feller 1968, p. 342), which is of a discrete nature, Ratcliff (1978) restated winning (or losing) a dollar as the (non-)matching of the features of a probe item and a memory item. By considering information accumulation as a continuous process, he arrives at
\begin{aligned} P(-|a,z,\nu ) = \frac{\mathrm {e}^{-(2\nu {}a/s^2)} - \mathrm {e}^{-(2\nu {}z/s^2)}}{\mathrm {e}^{-(2\nu {}a/s^2)} - 1}, \end{aligned}
(1)
for the probability of a non-matching response, and the finishing time density for negative responses is given as
\begin{aligned} g_{-}(t,a,z,\nu ) = \frac{\pi {}s^2}{a^2} \mathrm {e}^{-\frac{z\nu }{s^2}} \sum _{k=1}^{\infty } k\sin \left( \frac{\pi {}zk}{a} \right) \mathrm {e}^{-\frac{1}{2}\left( \frac{\nu ^2}{s^2}+\frac{\pi ^2k^2s^2}{a^2}\right) t} \end{aligned}
(2)
(p. 70, Equations (A8) and (A9), notation adapted). The probability of and density for positive responses are obtained analogously by applying $$\nu _+ = -\nu$$ and $$z_+ = a-z$$. The term $$s^2$$ denotes the variance of the Brownian motion within one trial (therefore termed intra-trial variability of the drift), which is not a model parameter, but rather a constant. It has to be set to an appropriate value prior to parameter estimation to make the model identified (cf. Ratcliff et al., 2016, p. 262). The choice of s is not critical, as it only affects the scale of the estimated parameters; two values are often observed, $$s=1$$ and $$s=0.1$$.
Equation (2) has been previously published, e.g., in Feller (1968, eq.
6.15), who also notes that it is known as "Fürth's formula for first passages" in physical diffusion theory (Feller, 1968, p. 359). Busemeyer and Diederich (2010, esp. Appendix to ch. 4) provide a comprehensible derivation of the model equations. Ratcliff (1978) also showed an alternative means to derive the first-passage time distribution (2) as the solution of a partial differential equation (PDE, also known as the Fokker–Planck Backward Equation in statistical mechanics), thus circumventing the approximation of the infinite sum involved in Eq. (2).

### Parameter variability/variability parameters

For applications, it is reasonable to take further sources of variability into account and to provide for the corresponding parameters. The core aspect of parameter variability is that it seems implausible to assume respondents' attention to remain constant throughout the entire experiment. Rather, we have to expect a trial-to-trial fluctuation of the model parameters.
The variability of the drift parameter $$s_\nu$$ (Ratcliff, 1978 and other authors use $$\xi$$ for the mean and $$\eta$$ for the standard deviation of $$\nu$$) has always been an integral element of the drift diffusion model. It allows accounting for variability in encoding in memory, but also predicts slower RTs for incorrect responses than for correct ones (cf. Ratcliff & Rouder, 1998, p. 348; Ratcliff et al., 2016, p. 267).
Ratcliff and Rouder (1998) further introduced the starting point variability parameter $$s_z$$ to explain error responses "slower than correct responses at intermediate levels of accuracy" and "faster than correct responses at extreme levels of accuracy" (p. 349). Moreover, Ratcliff and Tuerlinckx (2002) also introduced a variability parameter of the encoding and reaction time, $$s_{t_0}$$, to model the large variability of very fast responses (esp. the 0.1 quantile of the RT distribution, see ibid., p.
441).

#### Distributional assumptions and estimation

The drift rate variability is assumed to follow a normal distribution with mean $$\nu$$ and standard deviation $$s_\nu$$. In contrast, the encoding and reaction time component $$t_0$$ and the starting point z are modelled assuming a uniform distribution, with range $$s_{t_0}$$ and $$s_z$$, respectively. Thus, we obtain for the effective parameters $$\nu ^*$$, $$z^*$$, and $$t_0^*$$:
\begin{aligned} \nu ^*&\sim {} N(\nu ,s_\nu ^2), \end{aligned}
(3)
\begin{aligned} z^*&\sim {} U(z - s_z/2, z + s_z/2), \end{aligned}
(4)
\begin{aligned} t_0^*&\sim {} U(t_0 - s_{t_0}/2, t_0 + s_{t_0}/2). \end{aligned}
(5)
The three variability parameters do not have a psychological interpretation of their own, but rather allow for capturing random disturbances without invalidating the model or deteriorating parameter estimates and model fit. However, some authors argue in favor of fixing the variability parameters, either because this might improve estimability of the four main parameters (e.g., Lerche & Voss, 2016; 2017), or to keep the entire procedure as simple as possible (e.g., van Ravenzwaaij, Donkin & Vandekerckhove, 2017).
The variability parameters are not part of the model Eqs. (1) and (2), but rather have to be found by numerical integration in the parameter estimation process (see "Behind the scenes: a few technical details"). They have (compared to the four main parameters) little influence upon the resulting density curves, which can easily be checked with the tool presented in the next section.

#### Further extensions

Next to the variability parameters, several model extensions have been proposed, e.g., Voss et al. (2010) introducing a response-execution bias parameter; Krajbich and Rangel (2011) providing for a three-alternative decision design; Diederich (1994) using an Ornstein–Uhlenbeck process with drift; Tuerlinckx and De Boeck (2005), van der Maas et al.
(2011), and Molenaar, Tuerlinckx and van der Maas (2015) bridging the DM to item response theory (IRT) models, or Vandekerckhove et al. (2011) introducing a hierarchical approach with responses (level 1) nested within respondents (level 2).

## The diffusion model visualizer

The diffusion model has proven to be a valuable tool for evaluating both response correctness and RT in decision-making processes. However, the specific roles the various model parameters play in generating the densities of positive and negative responses are hardly apprehensible from the model equations. For that purpose, the diffusion model visualizer (DMV) has been developed, which visualizes the RT density curves for a given parameter constellation. Figure 1 shows the graphical user interface (GUI) of this program.

### The DMV GUI

At the left, there is a panel with seven sliders allowing adjustment of each parameter. These sliders can be changed with the mouse, using the "+" and "–" buttons, using the PgUp and PgDown keys, or by entering values into the respective text fields. Below, three entries allow for setting $$s^2$$, the number of iterations used to approach the infinite sum of Eq. (2), and the number of nodes to be used for the numerical integration required for the variability parameters. The middle section of the GUI shows the resulting density curves along with a symbolic depiction of the actual model parameters. At the right, graphical and computational options allow for fine-tuning the plot. On top of the screen, four tabs allow for switching from the PLOT page to a HELP page with some basic instructions and usage hints, the LIMITS page, which allows for increasing the parameter limits (for experimental purposes), and the LOG page with some numerical details of the calculations. The DRAW button invokes the calculations and delivers the diagram with the currently set options.
The CLEAR button empties the canvas, the CLIP button copies the current diagram to the clipboard, and the SAVE button allows for storing the plot to disk. A bold face caption of the DRAW button indicates pending parameter changes.
Next to the DRAW button there is the Animation checkbox. It causes the plot to be drawn instantly upon slider movements, which gives a vivid impression of how the curves change with parameter modifications. Next to this option, there is a Superimpose densities checkbox, which helps to compare various parameter constellations.
The Plot options panel allows for adapting the diagram to specific scenarios (e.g., for a closer inspection of the densities' ascending tails by decreasing the Time span). The fill area option might prove useful in cases in which a very small image is to be conveyed, for example in a matrix plot.

### DMV usage

The default parameter setting is $$a=1$$, $$z=0.5$$, $$\nu =0.2$$, and $$t_0=300$$, with the variability parameters set to zero and $$s^2=1$$. This configuration results in two fairly similar density curves. Shifting the a slider upwards shows how both densities become increasingly flatter the larger the value of a. This effect is comprehensible, as a larger a requires more evidence accumulation towards one of the two decision boundaries, thus causing elongated reaction times. In contrast, when we decrease a, the densities increase for short reaction times, changing their shape drastically. Both curves change in a similar way, as positive and negative responses are affected equally by changes in a.
Changing the starting point z causes a strong dissimilarity of the two curves. Increasing z causes the upper curve to grow and the lower curve to shrink, and vice versa. This effect is plausible, as being biased towards, say, the upper decision fosters its choice, while much more evidence is required to come to the lower one, and vice versa.
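The asymmetry induced by z can be checked directly against Eq. (1). The following is a small sketch (not part of the DMV); the values are chosen to match the start-up configuration $$a=1$$, $$\nu =0.2$$, $$s=1$$, and note that Eq. (1) uses the absolute starting point $$0< z < a$$:

```python
import math

def p_lower(a, z, v, s=1.0):
    """Probability of absorption at the lower boundary, Eq. (1);
    z is the absolute starting point, 0 < z < a."""
    e = math.exp
    return (e(-2*v*a/s**2) - e(-2*v*z/s**2)) / (e(-2*v*a/s**2) - 1.0)

def p_upper(a, z, v, s=1.0):
    """Upper-boundary probability via the substitutions
    v -> -v and z -> a - z described in the text."""
    return p_lower(a, a - z, -v, s)

a, v = 1.0, 0.2
for z in (0.3, 0.5, 0.7):
    print(z, round(p_upper(a, z, v), 4), round(p_lower(a, z, v), 4))
```

Raising z from 0.3 to 0.7 increases the upper-boundary probability from about 0.34 to about 0.74 here, while the two probabilities always sum to one, which mirrors the growth and shrinkage of the two curves described above.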
At the same time, the two curves also change their shape drastically and in an asymmetrical way, because not only the response probabilities (as given in Eq. 1) change, but also the respective RT mean values.
Changing the $$\nu$$ parameter also in-/decreases the kurtosis of the two curves, as does z. But in contrast to z, the shape changes are more symmetrical, less pronounced, and the means will not differ notably.
Finally, moving the $$t_0$$ slider shifts both densities horizontally. This is evident from the model definition, as the encoding and response time component is not associated with the decision process itself, thus leaving the densities' shapes unaltered.
The three variability parameters are initially set to zero. Increasing $$s_\nu$$ will cause a slight increase in short reaction times, moving the mode of the curve somewhat to the left. The $$s_z$$ parameter causes a slight increase of the "peakedness" (i.e., the kurtosis) of both densities. The $$s_{t_0}$$ parameter "flattens" the ascending tails of the densities a bit and thus reduces their kurtosis. Note that due to the computational burden, the variability parameter sliders take effect only upon releasing the mouse button (in contrast to the four main parameter sliders; see "Behind the scenes: a few technical details").
To gain an impression of the effect of the various parameter configurations, the Superimpose densities checkbox allows for plotting multiple curves in one plot. Note that this option becomes automatically unchecked if the Draw model parameters option is checked and the value of a is changed. Otherwise, the dislocated baselines (indicating the de-/increasing a) would be drawn one over the other, thus rendering the diagram distorted.
Alternatively, one could also uncheck the Draw model parameters option, which will leave the two baselines in place thus keeping the diagram intact (this applies only if you want to change a; for examining the other parameters, using Superimpose densities and Draw model parameters conjointly will provide highly informative diagrams).\nWhile it is relatively easy to describe the main effects of changing one parameter at a time, the impact of multiple changes is much more complex and is, therefore, left to the reader. As an introductory example, interested readers could increase the (uncommonly low) $$\\nu$$ (which is 0.2 at start-up) to a more frequently observed value of 1 and inspect the changes.\n\n### Behind the scenes: a few technical details\n\nThe program implements the model Eqs. (1) and (2). The infinite sum contained in Eq. (2) is approximated by 100 steps. This value can be changed with the Kmax option. However, program testing showed that too small a Kmax (simulations suggest below approximately 50) may cause artifacts (spikes in the vicinity of $$t_0$$) with certain parameter configurations due to numerical inaccuracies. A preliminary sensitivity analysis revealed that the default value should suffice for most cases (in most cases, the sum converges after much fewer iterations). Readers are, therefore, advised to increase Kmax with caution, as larger values will increase the computational burden considerably, in particular when working with the variability parameters.\nThe effects of the three variability parameters are induced by integrating over the respective distributions as given by Eqs. (3), (4), and (5) (cf. Ratcliff & Tuerlinckx, 2002, esp. App. B). The integration is implemented using the trapezoidal rule with ten nodes across the respective ranges as given in Eqs. (3), (4), and (5). For $$\\nu$$, the integration interval is $$\\nu \\pm 4s_\\nu$$. 
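The truncated-series evaluation of Eq. (2) can be sketched as follows. This is a minimal re-implementation for illustration, not the DMV's Pascal code; as a consistency check, it also verifies numerically that the density integrates to the absorption probability given by Eq. (1):

```python
import math

def g_lower(t, a, z, v, s=1.0, kmax=100):
    """First-passage density at the lower boundary, Eq. (2),
    with the infinite sum truncated after kmax terms (z absolute)."""
    pref = (math.pi * s**2 / a**2) * math.exp(-z * v / s**2)
    total = 0.0
    for k in range(1, kmax + 1):
        lam = 0.5 * (v**2 / s**2 + math.pi**2 * k**2 * s**2 / a**2)
        total += k * math.sin(math.pi * z * k / a) * math.exp(-lam * t)
    return pref * total

def p_lower(a, z, v, s=1.0):
    """Closed-form lower-boundary probability, Eq. (1)."""
    e = math.exp
    return (e(-2*v*a/s**2) - e(-2*v*z/s**2)) / (e(-2*v*a/s**2) - 1.0)

# Trapezoidal integration of the density over t approximates P(-)
a, z, v = 1.0, 0.5, 0.2
dt, t_max = 0.002, 15.0
ts = [0.001 + i * dt for i in range(int(t_max / dt))]
gs = [g_lower(t, a, z, v) for t in ts]
area = sum(0.5 * (g0 + g1) * dt for g0, g1 in zip(gs, gs[1:]))
print(round(area, 4), round(p_lower(a, z, v), 4))
```

For these values, the numerical integral of the density reproduces the closed-form probability of Eq. (1) (approximately 0.45). With a very small kmax, the partial sums oscillate for very small t, which reproduces the spike artifacts mentioned above; the default of 100 terms leaves the result unchanged for all but the smallest t, as the exponential damping silences the higher-order terms.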
The nodes option allows for changing the number of nodes, but again, use this option with caution: increasing it will cause considerable computational burden, while decreasing it will lead to numerical inaccuracies.
Because the numerical integration routines invoked by the variability parameters are computationally expensive, especially when activating two or all three of them simultaneously, plotting may slow down. Therefore, with the Animation feature, the $$s_z$$-, $$s_\nu$$-, and $$s_{t_0}$$-sliders only update the plot upon releasing the mouse button, while the a-, z-, $$\nu$$-, and $$t_0$$-sliders invoke the plot update immediately upon mouse movement.
The intra-trial variance of the Brownian motion, $$s^2$$, defaults to one with an option for change. Changing $$s^2$$ will alter the plot unless the other parameter values, which are linearly related to $$s^2$$ [see model Eqs. (1) and (2)], are adapted accordingly.
The DMV is written in Free Pascal/Lazarus (Free Pascal Team 1993–2016; Lazarus Team 1993–2016). This programming environment has three advantages relevant to our endeavour: first, it allows generating fast-executing code, which is necessary as complex calculations are performed. Second, it supports building clearly arranged GUIs with little effort. Third, it allows for cross-compiling, i.e., generating binaries for a Linux or a Mac environment as well (at the moment, only a Windows version is provided; in case a version for a different platform is required, please contact the author; under Linux, the program can be executed using the wine emulator).

## Applying the DMV

Several applications of the program can be thought of: first of all, it may serve as a valuable tool for educational purposes, allowing for a vivid illustration of how the various parameters affect the resulting density curves.
This is an important task, because the diffusion model uses seven parameters whose effects on the polymorphic RT densities and response probabilities are complex and difficult to understand, for students and researchers new to the model alike. The DMV allows exploration of the effect of the various model parameters upon the decision accuracy and the RT distribution, i.e., not only its mean and standard deviation, but also the higher moments skewness and kurtosis, or even the entire shape as such.
The diffusion model differs fundamentally from more frequently applied models in psychology, like the Generalized Linear Model (GLM; McCullagh & Nelder, 1989) including multi-level structures and structural equation modeling extensions (Skrondal & Rabe-Hesketh, 2004), or Item Response Theory models (IRT; de Ayala, 2009). The most important difference is that it is not a member of the exponential family (Barndorff-Nielsen, 1978) and thus follows quite a different mathematical structure, viz. differential equations. Moreover, DMs are still rather rarely applied (but show a strongly increasing tendency), so researchers have so far had fewer opportunities to become familiar with them. While the model equations rather conceal this information, the animated illustration allows for grasping the basic concept and dynamics of the model and the roles its parameters play in an intuitive way.
The DMV might also prove useful in research. It could come in handy when parameter estimation problems are observed (e.g., estimates exhibit unexpected values or the estimation routine does not terminate in a regular fashion). Then, one could juxtapose the observed RT distributions of positive and negative responses to those of the DMV plot and manually explore various parameter constellations. Such a comparison will be particularly useful for determining RT outliers, which are subject to an ongoing debate of how they should be handled (cf.
Grasman et al., 2009; Ratcliff, 1978; Vandekerckhove & Tuerlinckx, 2007).
Similarly, the DMV could serve as a validation tool for empirical data: for example, Schmiedek et al. (2007) used the DM parameters estimated from their data to simulate a DM perfectly in line with these estimates and compared statistics derived from the simulated data to the respective empirical counterparts (Schmiedek et al., 2007, p. 424, Fig. 3). With the DMV, they would have had the entire shape of the RT distributions at their disposal, rather than only statistics.
Most diffusion model parameter estimation routines allow for fixing some parameters and estimating the remaining ones (e.g., $$z=0.5$$ or variability parameters are set to zero). Comparing the observed with DMV-generated distributions may help determine a sensible choice of model parameter restrictions (requiring validation by independent data, of course). For example, in "Parameter variability/variability parameters", we referred to studies arguing in favor of fixing the variability parameters. Exploring the effect of these parameters with the DMV reveals that they have a rather moderate impact upon the RT densities compared to those of the four main parameters. This supports, at least to some extent, the critical voices regarding the variability parameters.

## Conclusions and outlook

Although the diffusion model was introduced quite a while ago in the late seventies, it was only scarcely applied during the early years of its existence. But within approximately the last decade, a number of easy-to-use programs have appeared and a steadily growing number of studies have been applying the DM. For this growing community, the DMV will likely serve as a useful tool and might come in handy during lectures and presentations. The program is available at https://osf.io/4en3b/ or http://www.dmvis.at/.

## Acknowledgements

Open access funding provided by University of Klagenfurt.
The author is indebted to Bartosz Gula for helpful comments on a former version of the manuscript, and to two anonymous reviewers for not only their thoughtful remarks regarding the manuscript, but also for thoroughly testing the program and making useful suggestions, which improved its usability.

## Compliance with ethical standards

### Conflict of interest

The author declares that he has no conflict of interest.

### Ethical approval

This article does not contain any studies with human participants or animals performed by any of the authors.

## References

Arnold, N. R., Bröder, A., & Bayen, U. J. (2015). Empirical validation of the diffusion model for recognition memory and a comparison of parameter-estimation methods. Psychological Research, 79, 882–898.
Atkinson, A. C. (1985). Plots, transformations and regression. Oxford: Clarendon Press.
Barndorff-Nielsen, O. E. (1978). Information and exponential families in statistical theory. Hoboken: Wiley.
Bogacz, R., Brown, E., Moehlis, J., Holmes, P., & Cohen, J. D. (2006). The physics of optimal decision making: A formal analysis of models of performance in two-alternative forced-choice tasks. Psychological Review, 113, 700–765.
Busemeyer, J. R., & Diederich, A. (2010). Cognitive modelling. Thousand Oaks: Sage.
Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). Applied multiple regression/correlation analysis for the behavioral sciences.
New York: Routledge.
Correll, J., Wittenbrink, B., Crawford, M. T., & Sadler, M. S. (2015). Stereotypic vision: How stereotypes disambiguate visual stimuli. Journal of Personality and Social Psychology, 108, 219–233.
Cox, D. R., & Miller, H. D. (1965). The theory of stochastic processes. London: Methuen & Co Ltd.
de Ayala, R. J. (2009). The theory and practice of item response theory. New York: Guilford.
Diederich, A. (1994). A diffusion model for intersensory facilitation of reaction times. In G. H. Fischer & D. Laming (Eds.), Contributions to mathematical psychology, psychometrics, and methodology (pp. 207–220). New York: Springer.
Dutilh, G., Annis, J., Brown, S. D., Cassey, P., Evans, N. J., Grasman, R. P. P. P., … Donkin, C. (2016). The quality of response time data inference: A blinded, collaborative assessment of the validity of cognitive models. Psychonomic Bulletin & Review. http://psyarxiv.com/s2x32 (see http://osf.io/g7ka7/). Accessed 27 Nov 2017.
Feller, W. (1968). An introduction to probability theory and its applications (3rd ed., Vol. I). New York: Wiley.
Feller, W. (1971). An introduction to probability theory and its applications (2nd ed., Vol. II). New York: Wiley.
Forstmann, B., Ratcliff, R., & Wagenmakers, E.-J. (2016). Sequential sampling models in cognitive neuroscience: Advantages, applications, and extensions. Annual Review of Psychology, 67, 641–666.
Free Pascal Team. (1993–2016). Free Pascal: A 32, 64 and 16 bit professional Pascal compiler [Computer software manual]. Fairfax, VA. http://www.freepascal.org (Version 3.0.4, RRID: SCR_014360). Accessed 1 Aug 2018.
Gold, J. I., & Shadlen, M. N. (2007). The neural basis of decision making. Annual Review of Neuroscience, 30, 535–574.
Gomez, P., Ratcliff, R., & Childers, R. (2015). Pointing, looking at, and pressing keys: A diffusion model account of response modality. Journal of Experimental Psychology: Human Perception and Performance, 41, 1515–1523.
Grasman, R. P., Wagenmakers, E.-J., & van der Maas, H. L. (2009). On the mean and variance of response times under the diffusion model with an application to parameter estimation. Journal of Mathematical Psychology, 53, 55–68.
Groß, J. (2004). A normal distribution course. Frankfurt am Main: Peter Lang.
Hays, W. L. (1994). Statistics (5th ed.). Belmont: Wadsworth Publishing.
Heekeren, H. R., Marrett, S., Bandettini, P. A., & Ungerleider, L. G. (2004). A general mechanism for perceptual decision-making in the human brain. Nature, 431, 859–861.
Heekeren, H. R., Marrett, S., & Ungerleider, L. (2008). The neural systems that mediate human perceptual decision making. Nature Reviews Neuroscience, 9, 467–479.
Heitz, R. P. (2014). The speed-accuracy tradeoff: History, physiology, methodology, and behavior. Frontiers in Neuroscience, 8, 1–19.
Ho, T. C., Brown, S., & Serences, J. T. (2009). Domain general mechanisms of perceptual decision making in human cortex. The Journal of Neuroscience, 29, 8675–8687.
Krajbich, I., & Rangel, A. (2011). Multialternative drift-diffusion model predicts the relationship between visual fixations and choice in value-based decisions. Proceedings of the National Academy of Sciences, 108, 13852–13857.
Laming, D. R. J. (1968). Information theory of choice-reaction times. London: Academic Press.
Lazarus Team. (1993–2016). Lazarus: The professional Free Pascal RAD IDE [Computer software manual]. Fairfax, VA. http://www.lazarus-ide.org (Version 1.8.4, RRID: SCR_014362). Accessed 1 Aug 2018.
Lerche, V., & Voss, A. (2016). Model complexity in diffusion modeling: Benefits of making the model more parsimonious. Frontiers in Psychology, 7, 1–14. https://doi.org/10.3389/fpsyg.2016.01324.
Lerche, V., & Voss, A. (2017). Experimental validation of the diffusion model based on a slow response time paradigm. Psychological Research. https://doi.org/10.1007/s00426-017-0945-8.
Lo, C.-C., & Wang, X.-J. (2006).
Cortico-basal ganglia circuit mechanism for a decision threshold in reaction time tasks. Nature Neuroscience, 9, 956–963.
Ma, W. J., Beck, J. M., & Pouget, A. (2008). Spiking networks for Bayesian inference and choice. Current Opinion in Neurobiology, 18, 217–222.
Matzke, D., & Wagenmakers, E.-J. (2009). Psychological interpretation of the ex-Gaussian and shifted Wald parameters: A diffusion model analysis. Psychonomic Bulletin & Review, 16, 798–817.
McCullagh, P., & Nelder, J. A. (1989). Generalized linear models (2nd ed.). Boca Raton: Chapman & Hall.
Molenaar, D., Tuerlinckx, F., & van der Maas, H. L. J. (2015). Fitting diffusion item response theory models for responses and response times using the R package diffIRT. Journal of Statistical Software, 66, 1–34.
Navarro, D., & Fuss, I. (2009). Fast and accurate calculations for first-passage times in Wiener diffusion models. Journal of Mathematical Psychology, 53, 222–230.
Ranger, J., Kuhn, J.-T., & Szardenings, C. (2016). Limited information estimation of the diffusion-based item response theory models for responses and response times. British Journal of Mathematical and Statistical Psychology, 69, 122–138.
Ratcliff, R. (1978). A theory of memory retrieval. Psychological Review, 85, 59–108.
Ratcliff, R. (2013). Response time: Data and theory. In L. Zhong-lin (Ed.), Progress in cognitive science: From cellular mechanisms to computational theories (pp. 31–62). Peking: Peking University Press.
Ratcliff, R., & McKoon, G. (2008). The diffusion decision model: Theory and data for two-choice decision tasks. Neural Computation, 20, 873–922.
Ratcliff, R., & Rouder, J. N. (1998). Modelling response times for two-choice decisions. Psychological Science, 9, 347–356.
Ratcliff, R., Smith, P. L., Brown, S. D., & McKoon, G. (2016). Diffusion decision model: Current issues and history. Trends in Cognitive Sciences, 20, 260–281.
Ratcliff, R., & Tuerlinckx, F. (2002).
Estimating parameters of the diffusion model: Approaches to dealing with contaminant reaction times and parameter variability. Psychonomic Bulletin & Review, 9, 438–481.
Schmiedek, F., Oberauer, K., Wilhelm, O., Süß, H. M., & Wittmann, W. W. (2007). Individual differences in components of reaction time distributions and their relations to working memory and intelligence. Journal of Experimental Psychology: General, 136, 414–429.
Seber, G. A. F., & Lee, A. J. (2003). Linear regression analysis (2nd ed.). Hoboken: Wiley.
Siegmund, D. (1986). Boundary crossing probabilities and statistical applications. The Annals of Statistics, 14, 361–404.
Skrondal, A., & Rabe-Hesketh, S. (2004). Generalized latent variable modeling: Multilevel, longitudinal, and structural equation models. Boca Raton: Chapman & Hall.
Soltani, A., & Wang, X.-J. (2010). Synaptic computation underlying probabilistic inference. Nature Neuroscience, 13, 112–121.
Spieler, D. H., Balota, D. A., & Faust, M. E. (1996). Stroop performance in healthy younger and older adults and in individuals with dementia of the Alzheimer's type. Journal of Experimental Psychology: Human Perception and Performance, 22, 461–479.
Tuerlinckx, F., & de Boeck, P. (2005). Two interpretations of the discrimination parameter. Psychometrika, 70, 629–650.
Vandekerckhove, J., & Tuerlinckx, F. (2007). Fitting the Ratcliff diffusion model to experimental data. Psychonomic Bulletin & Review, 14, 1011–1026.
Vandekerckhove, J., Tuerlinckx, F., & Lee, M. D. (2011). Hierarchical diffusion models for two-choice response times. Psychological Methods, 16, 44–62.
van der Maas, H. L. J., Molenaar, D., Maris, G., Kievit, R. A., & Borsboom, D. (2011). Cognitive psychology meets psychometric theory: On the relation between process models for decision making and latent variable models for individual differences. Psychological Review, 118, 339–356.
van Ravenzwaaij, D., & Oberauer, K. (2009).
How to use the diffusion model: Parameter recovery of three methods: EZ, fast-dm, and DMAT. Journal of Mathematical Psychology, 53, 463–473.
van Ravenzwaaij, D., Brown, S., & Wagenmakers, E.-J. (2011). An integrated perspective on the relation between response speed and intelligence. Cognition, 119, 381–393.
van Ravenzwaaij, D., Donkin, C., & Vandekerckhove, J. (2017). The EZ diffusion model provides a powerful test of simple empirical effects. Psychonomic Bulletin & Review, 24, 547–556.
Voss, A., Nagler, M., & Lerche, V. (2016). Diffusion models in experimental psychology: A practical introduction. Experimental Psychology, 60, 385–402.
Voss, A., Rothermund, K., & Voss, J. (2004). Interpreting the parameters of the diffusion model: An empirical validation. Memory & Cognition, 32, 1206–1220.
Voss, A., Voss, J., & Klauer, K. C. (2010). Separating response-execution bias from decision bias: Arguments for an additional parameter in Ratcliff's diffusion model. British Journal of Mathematical and Statistical Psychology, 63, 539–555.
Wagenmakers, E.-J. (2009). Methodological and empirical developments for the Ratcliff diffusion model of response time and accuracy. European Journal of Cognitive Psychology, 21, 641–671.
Wagenmakers, E.-J., Ratcliff, R., Gomez, P., & McKoon, G. (2008). A diffusion model account of criterion shifts in the lexical decision task. Journal of Memory and Language, 58, 140–159.
Wald, A. (1945). Sequential tests of statistical hypotheses. The Annals of Mathematical Statistics, 16, 117–186.
Wald, A. (1947). Sequential analysis. New York: Wiley.

## Metadata

Title: The diffusion model visualizer: an interactive tool to understand the diffusion model parameters
Author: Rainer W. Alexandrowicz
Publication date: 25-10-2018
Publisher: Springer Berlin Heidelberg
Published in: Psychological Research / Issue 4/2020
Print ISSN: 0340-0727
Electronic ISSN: 1430-2772
DOI: https://doi.org/10.1007/s00426-018-1112-6
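For readers who want to reproduce the kind of dynamics the DMV animates, a minimal simulation of the underlying Wiener process is sketched below. It is not part of the article; parameter names follow common DM notation (boundary separation a, relative starting point z, drift rate v, non-decision time t0), and the Euler step size dt is an arbitrary choice of this sketch.

```python
import random

def simulate_trial(v, a, z, t0, dt=0.001, s=1.0, max_t=10.0):
    """Simulate one diffusion-model trial with an Euler random walk.

    Returns (response, RT): response is +1 (upper boundary) or -1 (lower).
    """
    x = z * a                # absolute starting point between 0 and a
    t = 0.0
    sd = s * dt ** 0.5       # standard deviation of one increment
    while 0.0 < x < a:
        x += v * dt + random.gauss(0.0, sd)
        t += dt
        if t > max_t:        # guard against runaway trials
            break
    return (1 if x >= a else -1), t0 + t

random.seed(1)
trials = [simulate_trial(v=1.0, a=1.0, z=0.5, t0=0.3) for _ in range(2000)]
upper = [rt for resp, rt in trials if resp == 1]
print(f"P(upper) = {len(upper) / len(trials):.2f}")
```

With a positive drift v, the share of upper-boundary responses rises above .5, and shrinking a trades accuracy for speed; these are exactly the relationships between parameters and RT distributions that the DMV visualizes.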
https://answers.everydaycalculation.com/as-percent/5.699 | [
Solutions by everydaycalculation.com

## Express 5.699 as a percent

5.699 is equivalent to 569.9%.

#### Steps to convert a decimal into a percentage

1. Multiply the number by 100/100 to obtain an equivalent fraction with 100 as the denominator: 5.699 × 100/100
2. = (5.699 × 100) × 1/100 = 569.9/100
3. Write it in percentage notation: 569.9%
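The three steps amount to multiplying by 100; a tiny sketch (the helper name is ours, not from the site):

```python
def to_percent(x: float) -> str:
    """Convert a decimal to percent notation by multiplying by 100."""
    return f"{x * 100:g}%"   # :g trims float noise such as 569.9000000000001

print(to_percent(5.699))  # 569.9%
print(to_percent(0.5))    # 50%
```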
https://hackage.haskell.org/package/TypeCompose-0.6.4/docs/Data-Zip.html | [
TypeCompose-0.6.4: Type composition classes & instances

Portability: GHC
Stability: experimental
Maintainer: [email protected]

Data.Zip

Zip-related type constructor classes.

This module is similar to `Control.Functor.Zip` in the `category-extras` package, but it does not require a `Functor` superclass.

This module defines generalized `zip` and `unzip`, so if you use it, you'll have to

```
import Prelude hiding (zip, zipWith, zipWith3, unzip)
```

# Zippings

type ZipTy f = forall a b. f a -> f b -> f (a, b)

Type of the `zip` method.

class Zip f where

Type constructor class for `zip`-like things. Here are some standard instance templates you can fill in. They're not defined in the general forms below, because they would lead to a lot of overlap.

```
instance Applicative f => Zip f where
  zip = liftA2 (,)
instance (Applicative h, Zip f) => Zip (h :. f) where
  zip = apZip
instance (Functor g, Zip g, Zip f) => Zip (g :. f) where
  zip = ppZip
instance (Arrow (~>), Unzip f, Zip g) => Zip (Arrw (~>) f g) where
  zip = arZip
instance (Monoid_f h, Cozip h) => Zip h where
  zip = cozip
```

Also, if you have a type constructor that's a `Functor` and a `Zip`, here is a way to define `(<*>)` for `Applicative`:

```
(<*>) = zipWith ($)
```

Minimum definitions for instances.

Methods:

zip :: ZipTy f
  Generalized `zip`

Instances:

- Zip []
- Zip IO
- Zip Endo
- Zip Id
- Zip ((->) u)
- Monoid u => Zip ((,) u)
- Monoid o => Zip (Const o)
- (Zip f, Zip g) => Zip (f :*: g)
- (Arrow (~>), Monoid_f (Flip (~>) o)) => Zip (Flip (~>) o)
- (Arrow (~>), Unzip f, Zip g) => Zip (Arrw (~>) f g)

zipWith :: (Functor f, Zip f) => (a -> b -> c) -> f a -> f b -> f c
  Generalized `zipWith`

zipWith3 :: (Functor f, Zip f) => (a -> b -> c -> d) -> f a -> f b -> f c -> f d
  Generalized `zipWith3`

apZip :: (Applicative h, Zip f) => ZipTy (h :. f)
  Handy for `Zip` instances

ppZip :: (Functor g, Zip g, Zip f) => ZipTy (g :. f)
  Handy for `Zip` instances

arZip :: (Arrow (~>), Unzip f, Zip g) => ZipTy (Arrw (~>) f g)
  Zipping of `Arrw` values. Warning: the definition uses `arr`, so only use it if your arrow has a working `arr`.

# Unzippings

type UnzipTy f = forall a b. f (a, b) -> (f a, f b)
  Type of the `unzip` method. Generalizes `unzip`.

class Unzip f where
  Unzippable. Minimal instance definition: either (a) `unzip` or (b) both `fsts` and `snds`. A standard template to substitute for any `Functor` `f` (but watch out for effects!):

```
instance Functor f => Unzip f where { fsts = fmap fst; snds = fmap snd }
```

Methods:

unzip :: UnzipTy f
  Generalized `unzip`

fsts :: f (a, b) -> f a
  First part of a pair-like value

snds :: f (a, b) -> f b
  Second part of a pair-like value

Instances:

- Unzip []
- Unzip Endo
- Unzip Id
- Unzip ((->) a)
- Unzip ((,) a)
- Unzip (Const a)

# Dual unzippings

class Cozip f where
  Dual to `Unzip`. Especially handy for contravariant functors (`Cofunctor`). Use this template (filling in `f`):

```
instance Cofunctor f => Cozip f where
  { cofsts = cofmap fst ; cosnds = cofmap snd }
```

Methods:

cofsts :: f a -> f (a, b)
  Zip-like value from the first part

cosnds :: f b -> f (a, b)
  Zip-like value from the second part

Instances:

- Cozip Endo
- Cozip (Const e)
- (Cozip f, Cozip g) => Cozip (f :*: g)
- Arrow (~>) => Cozip (Flip (~>) o)
- (Functor h, Cozip f) => Cozip (h :. f)

cozip :: (Cozip f, Monoid_f f) => ZipTy f
  Zipping of `Cozip` values. Combines the contribution of each.

# Misc

pairEdit :: (Functor m, Monoid (m ((c, d) -> (c, d)))) => (m c, m d) -> m ((c, d) -> (c, d))
  Turn a pair of sources into a source of pair-editors. See http://conal.net/blog/posts/pairs-sums-and-reactivity/. `Functor`/`Monoid` version. See also `pairEditM`.

pairEditM :: MonadPlus m => (m c, m d) -> m ((c, d) -> (c, d))
  Turn a pair of sources into a source of pair-editors. See http://conal.net/blog/posts/pairs-sums-and-reactivity/. Monad version. See also `pairEdit`.
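As a cross-language illustration (not part of the Haddock page), the `Zip ((->) u)` and `Unzip ((->) a)` instances, where zipping means sharing one argument between two functions, can be mimicked like this:

```python
from typing import Callable, TypeVar

U = TypeVar("U"); A = TypeVar("A"); B = TypeVar("B")

def zip_fn(f: Callable[[U], A], g: Callable[[U], B]) -> Callable[[U], tuple]:
    """Analogue of the `Zip ((->) u)` instance: zip two functions
    sharing an argument into one pair-valued function."""
    return lambda u: (f(u), g(u))

def unzip_fn(h: Callable[[U], tuple]):
    """Analogue of `Unzip ((->) a)`: fsts/snds via post-composition."""
    return (lambda u: h(u)[0]), (lambda u: h(u)[1])

h = zip_fn(len, str.upper)
print(h("zip"))  # (3, 'ZIP')
```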
https://www.lmfdb.org/knowledge/show/character.dirichlet.jacobi_symbol | [
The Jacobi symbol $\left(\frac{a}{b}\right)$ is an extension of the Legendre symbol to all positive odd integers $b$, as follows.

We set $\left(\frac{a}{1}\right) = 1$ for all $a$.

If $b > 1$, then it has a decomposition into distinct odd primes of the form $b = p_1^{e_1} p_2^{e_2}\cdots p_r^{e_r}$, and the Jacobi symbol is defined as the product of Legendre symbols $$\left(\frac{a}{b}\right) = \left(\frac{a}{p_1}\right)^{e_1} \left(\frac{a}{p_2}\right)^{e_2}\cdots \left(\frac{a}{p_r}\right)^{e_r}.$$

The Jacobi symbol is multiplicative in the sense that $$\left(\frac{a_1 a_2}{b}\right) = \left(\frac{a_1}{b}\right)\left(\frac{a_2}{b}\right)$$ and $$\left(\frac{a}{b_1 b_2}\right) = \left(\frac{a}{b_1}\right)\left(\frac{a}{b_2}\right)$$ for all integers $a, a_1, a_2$ and all positive odd integers $b, b_1, b_2$.

It has the same special formulas for $a = -1$ and $a = 2$ as the Legendre symbol, namely

$\left(\frac{-1}{b}\right) = (-1)^{\frac{b-1}{2}} = \left\{ \begin{array}{cl} 1 & \text{if } b \equiv 1 \bmod 4 \\ -1 & \text{if } b \equiv -1 \bmod 4 \end{array} \right.$ and $\left(\frac{2}{b}\right) = (-1)^{\frac{b^2-1}{8}} = \left\{ \begin{array}{cl} 1 & \text{if } b \equiv \pm1 \bmod 8 \\ -1 & \text{if } b \equiv \pm3 \bmod 8 \end{array} \right.$

The law of quadratic reciprocity states that for $m, n$ odd positive coprime integers we have

$$\left(\frac{n}{m}\right) = (-1)^{\frac{m-1}{2} \frac{n-1}{2}} \left(\frac{m}{n}\right).$$

Knowl status:
• Review status: reviewed
• Last edited by Kiran S. Kedlaya on 2018-07-04 19:17:29
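These properties (reduce a modulo b, pull out factors of 2 with the second supplement, then flip with reciprocity) yield the standard algorithm for evaluating the symbol without factoring $b$; an illustrative sketch:

```python
def jacobi(a: int, b: int) -> int:
    """Jacobi symbol (a/b) for positive odd b, via the supplementary
    formula for 2 and the law of quadratic reciprocity."""
    if b <= 0 or b % 2 == 0:
        raise ValueError("b must be a positive odd integer")
    a %= b
    result = 1
    while a != 0:
        while a % 2 == 0:            # second supplement: factor out (2/b)
            a //= 2
            if b % 8 in (3, 5):      # -1 exactly when b ≡ ±3 (mod 8)
                result = -result
        a, b = b, a                  # reciprocity: flip the symbol
        if a % 4 == 3 and b % 4 == 3:
            result = -result
        a %= b
    return result if b == 1 else 0   # 0 when gcd(a, b) > 1

print(jacobi(2, 15), jacobi(5, 7))  # 1 -1
```

For a prime modulus the Jacobi symbol coincides with the Legendre symbol, so it can be cross-checked against Euler's criterion.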
https://discourse.julialang.org/t/jump-defining-objective-function-uses-matrix-multiplication/50169 | [
# JuMP: defining objective function uses matrix multiplication

Hi, I am trying to recreate a relatively simple optimization function that I have working in Python (CasADi + Ipopt).

I am having some trouble figuring out how I can describe the objective function, since it involves matrices/vectors. What would be the correct method to handle such a function with JuMP?

Reference working Python code (function on line 11): http://ix.io/2Ecl
What I have got with Julia so far: http://ix.io/2Ecp (line 55 in `solve_InfMat()`)

Thanks for your help
I do not know Python, so many things are not clear to me in your example: where does `U` come from? Are the `I` variables? (I am not sure what comes from a `.mat` file.) If I am not wrong, Ipopt supports arbitrary functions, so you could code a function that does the same as your objective function in Python and pass it to Ipopt via JuMP. I think this post summarises some of the difficulties of working with non-linear objectives in Ipopt.

Sorry, I should've been more explicit.

`I` is an MxN matrix (45x37). I am pulling that from the .mat file; it is experimental data in reality.

`U` is an Nx1 vector. This is the unknown I am trying to solve for.

`z` is an Mx1 vector. That's the known target. I set it to zeros, but explicitly set z = -2.0 in my example code.

I am of course able to describe the same function in both languages (see line 11 in the Python and line 55 in the Julia version).

Thanks for sharing the post. I noticed this linked there: https://jump.dev/JuMP.jl/stable/nlp/#User-defined-functions-with-vector-inputs-1

I couldn't quite grasp the workaround. I am new to Julia, so perhaps I misunderstand something. It's a relatively simple expression:

obj_function(U) = 0.5 * ((U' * (I' * I)) * U) + transpose(-1 * (I' * z)) * U

How could something like this be passed without using vectors/matrices?

For reference, I use Ipopt to minimize this function. The only constraints are that each element of U lies within 0 to 400^2.

JuMP doesn't have the same features as CasADi for writing nonlinear problems with matrix-valued and vector-valued subexpressions. However, it looks like you're in luck, because your objective appears to be quadratic with respect to U. You can therefore write something like:

```julia
@objective(m, Min, 0.5 * ((U' * (I' * I)) * U) + transpose(-1 * (I' * z)) * U)
```

(possibly with modifications to make sure the output is a scalar and not a 1x1 matrix).

Thanks.
That way of writing the objective does indeed solve the problem correctly.
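For intuition (not part of the thread): the quadratic form above is ordinary least squares on I*U ≈ z, up to a constant, which a quick NumPy check confirms (the random `I` here merely stands in for the matrix loaded from the `.mat` file):

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 45, 37                      # dimensions mentioned in the thread
I = rng.standard_normal((M, N))    # stand-in for the measured matrix
z = np.full(M, -2.0)               # target vector, as in the example
U = rng.standard_normal(N)         # an arbitrary candidate solution

quad = 0.5 * U @ (I.T @ I) @ U + (-(I.T @ z)) @ U
lsq = 0.5 * np.linalg.norm(I @ U - z) ** 2 - 0.5 * z @ z

print(np.isclose(quad, lsq))  # minimizing quad == minimizing ||I U - z||^2
```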
https://en.m.wikipedia.org/wiki/Quadratic_residue | [
In number theory, an integer q is called a quadratic residue modulo n if it is congruent to a perfect square modulo n; i.e., if there exists an integer x such that:

$x^{2}\equiv q{\pmod {n}}.$
Otherwise, q is called a quadratic nonresidue modulo n.

Originally an abstract mathematical concept from the branch of number theory known as modular arithmetic, quadratic residues are now used in applications ranging from acoustical engineering to cryptography and the factoring of large numbers.

## History, conventions, and elementary facts

Fermat, Euler, Lagrange, Legendre, and other number theorists of the 17th and 18th centuries established theorems and formed conjectures about quadratic residues, but the first systematic treatment is § IV of Gauss's Disquisitiones Arithmeticae (1801). Article 95 introduces the terminology "quadratic residue" and "quadratic nonresidue", and states that if the context makes it clear, the adjective "quadratic" may be dropped.

For a given n, a list of the quadratic residues modulo n may be obtained by simply squaring the numbers 0, 1, ..., n − 1. Because a² ≡ (n − a)² (mod n), the list of squares modulo n is symmetric around n/2, and the list only needs to go that high. This can be seen in the table below.

Thus, the number of quadratic residues modulo n cannot exceed n/2 + 1 (n even) or (n + 1)/2 (n odd).

The product of two residues is always a residue.

### Prime modulus

Modulo 2, every integer is a quadratic residue.

Modulo an odd prime number p there are (p + 1)/2 residues (including 0) and (p − 1)/2 nonresidues, by Euler's criterion. In this case, it is customary to consider 0 as a special case and work within the multiplicative group of nonzero elements of the field Z/pZ. (In other words, every congruence class except zero modulo p has a multiplicative inverse.
This is not true for composite moduli.)

Following this convention, the multiplicative inverse of a residue is a residue, and the inverse of a nonresidue is a nonresidue.

Following this convention, modulo an odd prime number there are an equal number of residues and nonresidues.

Modulo a prime, the product of two nonresidues is a residue and the product of a nonresidue and a (nonzero) residue is a nonresidue.

The first supplement to the law of quadratic reciprocity is that if p ≡ 1 (mod 4) then −1 is a quadratic residue modulo p, and if p ≡ 3 (mod 4) then −1 is a nonresidue modulo p. This implies the following:

If p ≡ 1 (mod 4), the negative of a residue modulo p is a residue and the negative of a nonresidue is a nonresidue.

If p ≡ 3 (mod 4), the negative of a residue modulo p is a nonresidue and the negative of a nonresidue is a residue.

### Prime power modulus

All odd squares are ≡ 1 (mod 8) and thus also ≡ 1 (mod 4). If a is an odd number and m = 8, 16, or some higher power of 2, then a is a residue modulo m if and only if a ≡ 1 (mod 8).

For example, mod 32 the odd squares are

1² ≡ 15² ≡ 1
3² ≡ 13² ≡ 9
5² ≡ 11² ≡ 25
7² ≡ 9² ≡ 49 ≡ 17

and the even ones are

0² ≡ 8² ≡ 16² ≡ 0
2² ≡ 6² ≡ 10² ≡ 14² ≡ 4
4² ≡ 12² ≡ 16.

So a nonzero number is a residue mod 8, 16, etc., if and only if it is of the form 4^k(8n + 1).

A number a relatively prime to an odd prime p is a residue modulo any power of p if and only if it is a residue modulo p.

If the modulus is p^n, then p^k a
is a residue modulo p^n if k ≥ n,
is a nonresidue modulo p^n if k < n is odd,
is a residue modulo p^n if k < n is even and a is a residue,
is a nonresidue modulo p^n if k < n is even and a is a nonresidue.

Notice that the rules are different for powers of two and powers of odd primes.

Modulo an odd prime power n = p^k, the products of residues and nonresidues relatively prime to p obey the same rules as they do mod p; p is a nonresidue, and in general all the
residues and nonresidues obey the same rules, except that the products will be zero if the power of p in the product ≥ n.\n\nModulo 8, the product of the nonresidues 3 and 5 is the nonresidue 7, and likewise for permutations of 3, 5 and 7. In fact, the multiplicative group of the non-residues and 1 forms the Klein four-group.\n\n### Composite modulus not a prime power\n\nThe basic fact in this case is\n\nif a is a residue modulo n, then a is a residue modulo pᵏ for every prime power dividing n.\nif a is a nonresidue modulo n, then a is a nonresidue modulo pᵏ for at least one prime power dividing n.\n\nModulo a composite number, the product of two residues is a residue. The product of a residue and a nonresidue may be a residue, a nonresidue, or zero.\n\nFor example, from the table for modulus 6: 1, 2, 3, 4, 5 (the residues being 1, 3, and 4).\n\nThe product of the residue 3 and the nonresidue 5 is the residue 3, whereas the product of the residue 4 and the nonresidue 2 is the nonresidue 2.\n\nAlso, the product of two nonresidues may be either a residue, a nonresidue, or zero.\n\nFor example, from the table for modulus 15: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14 (the residues being 1, 4, 6, 9, and 10).\n\nThe product of the nonresidues 2 and 8 is the residue 1, whereas the product of the nonresidues 2 and 7 is the nonresidue 14.\n\nThis phenomenon can best be described using the vocabulary of abstract algebra. The congruence classes relatively prime to the modulus are a group under multiplication, called the group of units of the ring Z/nZ, and the squares are a subgroup of it. Different nonresidues may belong to different cosets, and there is no simple rule that predicts which one their product will be in. 
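The mod-15 behaviour just described is easy to check computationally. The following Python sketch (the helper names are ours, not from the article) classifies products of nonresidues modulo 15:

```python
def residues(n):
    """Quadratic residues modulo n: the distinct values of x^2 mod n."""
    return {(x * x) % n for x in range(n)}

def classify(a, n):
    """Label a as a residue or nonresidue modulo n."""
    return "residue" if a % n in residues(n) else "nonresidue"

n = 15
# 2 and 8 are nonresidues, yet their product 16 ≡ 1 is a residue:
print(classify(2, n), classify(8, n), classify(2 * 8, n))
# 2 and 7 are nonresidues, and their product 14 is again a nonresidue:
print(classify(2, n), classify(7, n), classify(2 * 7, n))
```

Modulo a prime the same experiment always lands the product of two nonresidues back among the residues.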
Modulo a prime, there is only the subgroup of squares and a single coset.\n\nThe fact that, e.g., modulo 15 the products of the nonresidues 3 and 5, of the nonresidue 5 and the residue 9, and of the two residues 9 and 10 are all zero comes from working in the full ring Z/nZ, which has zero divisors for composite n.\n\nFor this reason some authors add to the definition that a quadratic residue a must not only be a square but must also be relatively prime to the modulus n. (a is coprime to n if and only if a² is coprime to n.)\n\nAlthough it makes things tidier, this article does not insist that residues must be coprime to the modulus.\n\n## Notations\n\nGauss used R and N to denote residuosity and non-residuosity, respectively;\n\nfor example, 2 R 7 and 5 N 7, or 1 R 8 and 3 N 8.\n\nAlthough this notation is compact and convenient for some purposes, a more useful notation is the Legendre symbol, also called the quadratic character, which is defined for all integers a and positive odd prime numbers p as\n\n$\left({\frac {a}{p}}\right)={\begin{cases}\;\;\,0&{\text{ if }}p{\text{ divides }}a\\+1&{\text{ if }}a\operatorname {R} p{\text{ and }}p{\text{ does not divide }}a\\-1&{\text{ if }}a\operatorname {N} p{\text{ and }}p{\text{ does not divide }}a\end{cases}}$\n\nThere are two reasons why numbers ≡ 0 (mod p) are treated specially. As we have seen, it makes many formulas and theorems easier to state. The other (related) reason is that the quadratic character is a homomorphism from the multiplicative group of nonzero congruence classes modulo p to the complex numbers under multiplication. Setting $({\tfrac {np}{p}})=0$ allows its domain to be extended to the multiplicative semigroup of all the integers.\n\nOne advantage of this notation over Gauss's is that the Legendre symbol is a function that can be used in formulas. 
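The Legendre symbol just defined can be computed directly from Euler's criterion, a^((p−1)/2) ≡ (a|p) (mod p), mentioned earlier. A short Python sketch (the function name is ours):

```python
def legendre(a, p):
    """Legendre symbol (a|p) for an odd prime p, via Euler's criterion."""
    t = pow(a, (p - 1) // 2, p)  # modular exponentiation
    return -1 if t == p - 1 else t

# Gauss's examples in R/N notation: 2 R 7 and 5 N 7
print(legendre(2, 7))   # 1
print(legendre(5, 7))   # -1
print(legendre(14, 7))  # 0, since 7 divides 14
```

For large p this is far faster than listing all squares, since the three-argument `pow` performs fast modular exponentiation.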
It can also easily be generalized to cubic, quartic and higher power residues.\n\nThere is a generalization of the Legendre symbol for composite values of p, the Jacobi symbol, but its properties are not as simple: if m is composite and the Jacobi symbol $({\tfrac {a}{m}})=-1,$ then a N m, and if a R m then $({\tfrac {a}{m}})=1,$ but if $({\tfrac {a}{m}})=1$ we do not know whether a R m or a N m. For example: $({\tfrac {2}{15}})=1$ and $({\tfrac {4}{15}})=1$ , but 2 N 15 and 4 R 15. If m is prime, the Jacobi and Legendre symbols agree.\n\n## Distribution of quadratic residues\n\nAlthough quadratic residues appear to occur in a rather random pattern modulo n, and this has been exploited in such applications as acoustics and cryptography, their distribution also exhibits some striking regularities.\n\nUsing Dirichlet's theorem on primes in arithmetic progressions, the law of quadratic reciprocity, and the Chinese remainder theorem (CRT) it is easy to see that for any M > 0 there are primes p such that the numbers 1, 2, ..., M are all residues modulo p.\n\nFor example, if p ≡ 1 (mod 8), (mod 12), (mod 5) and (mod 28), then by the law of quadratic reciprocity 2, 3, 5, and 7 will all be residues modulo p, and thus all numbers 1–10 will be. The CRT says that this is the same as p ≡ 1 (mod 840), and Dirichlet's theorem says there are an infinite number of primes of this form. 2521 is the smallest, and indeed 1² ≡ 1, 1046² ≡ 2, 123² ≡ 3, 2² ≡ 4, 643² ≡ 5, 87² ≡ 6, 668² ≡ 7, 429² ≡ 8, 3² ≡ 9, and 529² ≡ 10 (mod 2521).\n\n### Dirichlet's formulas\n\nThe first of these regularities stems from Peter Gustav Lejeune Dirichlet's work (in the 1830s) on the analytic formula for the class number of binary quadratic forms. 
Let q be a prime number, s a complex variable, and define a Dirichlet L-function as\n\n$L(s)=\sum _{n=1}^{\infty }\left({\frac {n}{q}}\right)n^{-s}.$\n\nDirichlet showed that if q ≡ 3 (mod 4), then\n\n$L(1)=-{\frac {\pi }{\sqrt {q}}}\sum _{n=1}^{q-1}{\frac {n}{q}}\left({\frac {n}{q}}\right)>0.$\n\nTherefore, in this case (prime q ≡ 3 (mod 4)), the sum of the quadratic residues minus the sum of the nonresidues in the range 1, 2, ..., q − 1 is a negative number.\n\nFor example, modulo 11,\n\n1, 2, 3, 4, 5, 6, 7, 8, 9, 10 (the residues being 1, 3, 4, 5, and 9)\n1 + 4 + 9 + 5 + 3 = 22, 2 + 6 + 7 + 8 + 10 = 33, and the difference is −11.\n\nIn fact the difference will always be an odd multiple of q if q > 3. In contrast, for prime q ≡ 1 (mod 4), the sum of the quadratic residues minus the sum of the nonresidues in the range 1, 2, ..., q − 1 is zero, implying that both sums equal ${\frac {q(q-1)}{4}}$.\n\nDirichlet also proved that for prime q ≡ 3 (mod 4),\n\n$L(1)={\frac {\pi }{\left(2-\left({\frac {2}{q}}\right)\right)\!{\sqrt {q}}}}\sum _{n=1}^{\frac {q-1}{2}}\left({\frac {n}{q}}\right)>0.$\n\nThis implies that there are more quadratic residues than nonresidues among the numbers 1, 2, ..., (q − 1)/2.\n\nFor example, modulo 11 there are four residues less than 6 (namely 1, 3, 4, and 5), but only one nonresidue (2).\n\nAn intriguing fact about these two theorems is that all known proofs rely on analysis; no one has ever published a simple or direct proof of either statement.\n\n## Law of quadratic reciprocity\n\nIf p and q are odd primes, then:\n\n((p is a quadratic residue mod q) if and only if (q is a quadratic residue mod p)) if and only if (at least one of p and q is congruent to 1 mod 4).\n\nThat is:\n\n$\left({\frac {p}{q}}\right)\left({\frac {q}{p}}\right)=(-1)^{{\frac {p-1}{2}}\cdot {\frac {q-1}{2}}}$\n\nwhere $\left({\frac {p}{q}}\right)$ is the Legendre symbol.\n\nThus, for numbers a and odd primes p that don't divide a:\n\n| a | a is a quadratic residue mod p if and only if | a | a is a quadratic residue mod p if and only if |\n|---|---|---|---|\n| 1 | (every prime p) | −1 | p ≡ 1 (mod 4) |\n| 2 | p ≡ 1, 7 (mod 8) | −2 | p ≡ 1, 3 (mod 8) |\n| 3 | p ≡ 1, 11 (mod 12) | −3 | p ≡ 1 (mod 3) |\n| 4 | (every prime p) | −4 | p ≡ 1 (mod 4) |\n| 5 | p ≡ 1, 4 (mod 5) | −5 | p ≡ 1, 3, 7, 9 (mod 20) |\n| 6 | p ≡ 1, 5, 19, 23 (mod 24) | −6 | p ≡ 1, 5, 7, 11 (mod 24) |\n| 7 | p ≡ 1, 3, 9, 19, 25, 27 (mod 28) | −7 | p ≡ 1, 2, 4 (mod 7) |\n| 8 | p ≡ 1, 7 (mod 8) | −8 | p ≡ 1, 3 (mod 8) |\n| 9 | (every prime p) | −9 | p ≡ 1 (mod 4) |\n| 10 | p ≡ 1, 3, 9, 13, 27, 31, 37, 39 (mod 40) | −10 | p ≡ 1, 7, 9, 11, 13, 19, 23, 37 (mod 40) |\n| 11 | p ≡ 1, 5, 7, 9, 19, 25, 35, 37, 39, 43 (mod 44) | −11 | p ≡ 1, 3, 4, 5, 9 (mod 11) |\n| 12 | p ≡ 1, 11 (mod 12) | −12 | p ≡ 1 (mod 3) |\n\n### Pairs of residues and nonresidues\n\nModulo a prime p, the numbers of pairs n, n + 1 where n R p and n + 1 R p, or n N p and n + 1 R p, etc., are almost equal. More precisely, let p be an odd prime. For i, j = 0, 1 define the sets\n\n$A_{ij}=\left\{k\in \{1,2,\dots ,p-2\}:\left({\frac {k}{p}}\right)=(-1)^{i}\land \left({\frac {k+1}{p}}\right)=(-1)^{j}\right\},$\n\nand let\n\n$\alpha _{ij}=|A_{ij}|.$\n\nThat is,\n\nα₀₀ is the number of residues that are followed by a residue,\nα₀₁ is the number of residues that are followed by a nonresidue,\nα₁₀ is the number of nonresidues that are followed by a residue, and\nα₁₁ is the number of nonresidues that are followed by a nonresidue.\n\nThen if p ≡ 1 (mod 4)\n\n$\alpha _{00}={\frac {p-5}{4}},\;\alpha _{01}=\alpha _{10}=\alpha _{11}={\frac {p-1}{4}}$\n\nand if p ≡ 3 (mod 4)\n\n$\alpha _{01}={\frac {p+1}{4}},\;\alpha _{00}=\alpha _{10}=\alpha _{11}={\frac {p-3}{4}}.$\n\nFor example:\n\nModulo 17 (the residues being 1, 2, 4, 8, 9, 13, 15, 16)\n\n1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16\nA₀₀ = {1,8,15},\nA₀₁ = {2,4,9,13},\nA₁₀ = {3,7,12,14},\nA₁₁ = {5,6,10,11}.\n\nModulo 19 (the residues being 1, 4, 5, 6, 7, 9, 11, 16, 17)\n\n1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18\nA₀₀ = {4,5,6,16},\nA₀₁ = {1,7,9,11,17},\nA₁₀ = {3,8,10,15},\nA₁₁ = {2,12,13,14}.\n\nGauss (1828) introduced this 
sort of counting when he proved that if p ≡ 1 (mod 4) then x⁴ ≡ 2 (mod p) can be solved if and only if p = a² + 64b².\n\nThe values of $({\tfrac {a}{p}})$ for consecutive values of a mimic a random variable like a coin flip. Specifically, Pólya and Vinogradov proved (independently) in 1918 that for any nonprincipal Dirichlet character χ(n) modulo q and any integers M and N,\n\n$\left|\sum _{n=M+1}^{M+N}\chi (n)\right|=O\left({\sqrt {q}}\log q\right),$\n\nin big O notation. Setting\n\n$\chi (n)=\left({\frac {n}{q}}\right),$\n\nthis shows that the number of quadratic residues modulo q in any interval of length N is\n\n${\frac {1}{2}}N+O({\sqrt {q}}\log q).$\n\nIt is easy to prove that\n\n$\left|\sum _{n=M+1}^{M+N}\left({\frac {n}{q}}\right)\right|<{\sqrt {q}}\log q.$\n\nIn fact,\n\n$\left|\sum _{n=M+1}^{M+N}\left({\frac {n}{q}}\right)\right|<{\frac {4}{\pi ^{2}}}{\sqrt {q}}\log q+0.41{\sqrt {q}}+0.61.$\n\nMontgomery and Vaughan improved this in 1977, showing that, if the generalized Riemann hypothesis is true then\n\n$\left|\sum _{n=M+1}^{M+N}\chi (n)\right|=O\left({\sqrt {q}}\log \log q\right).$\n\nThis result cannot be substantially improved, for Schur had proved in 1918 that\n\n$\max _{N}\left|\sum _{n=1}^{N}\left({\frac {n}{q}}\right)\right|>{\frac {1}{2\pi }}{\sqrt {q}}$\n\nand Paley had proved in 1932 that\n\n$\max _{N}\left|\sum _{n=1}^{N}\left({\frac {d}{n}}\right)\right|>{\frac {1}{7}}{\sqrt {d}}\log \log d$\n\nfor infinitely many d > 0.\n\nThe least quadratic residue mod p is clearly 1. 
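The elementary inequality |Σ (n/q)| < √q log q quoted above is easy to check numerically. The sketch below (our code, not from the article) scans all partial sums of Legendre symbols for a small prime:

```python
import math

def legendre(a, p):
    """Legendre symbol via Euler's criterion."""
    t = pow(a, (p - 1) // 2, p)
    return -1 if t == p - 1 else t

q = 101
partial, worst = 0, 0
for n in range(1, q):
    partial += legendre(n, q)
    worst = max(worst, abs(partial))

print(worst)                        # largest |partial sum| observed
print(math.sqrt(q) * math.log(q))   # the bound sqrt(q) log q, about 46.4
```

The observed maximum stays well under the bound, and the full sum over 1, ..., q − 1 is zero, since residues and nonresidues are equally numerous.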
The question of the magnitude of the least quadratic non-residue n(p) is more subtle, but it is always prime, with 7 appearing for the first time at 71.\n\nThe Pólya–Vinogradov inequality above gives O(√p log p).\n\nThe best unconditional estimate is n(p) ≪ p^θ for any θ > 1/(4√e), obtained by estimates of Burgess on character sums.\n\nAssuming the Generalised Riemann hypothesis, Ankeny obtained n(p) ≪ (log p)².\n\nLinnik showed that the number of p less than X such that n(p) > X^ε is bounded by a constant depending on ε.\n\nThe least quadratic non-residues mod p for odd primes p are:\n\n2, 2, 3, 2, 2, 3, 2, 5, 2, 3, 2, ... (sequence A053760 in the OEIS)\n\nLet p be an odd prime. The quadratic excess E(p) is the number of quadratic residues on the range (0,p/2) minus the number in the range (p/2,p) (sequence A178153 in the OEIS). For p congruent to 1 mod 4, the excess is zero, since −1 is a quadratic residue and the residues are symmetric under r ↔ p − r. For p congruent to 3 mod 4, the excess E is always positive.\n\n## Complexity of finding square roots\n\nThat is, given a number a and a modulus n, how hard is it\n\n1. to tell whether an x solving x² ≡ a (mod n) exists\n2. assuming one does exist, to calculate it?\n\nAn important difference between prime and composite moduli shows up here. Modulo a prime p, a quadratic residue a has 1 + (a|p) roots (i.e. zero if a N p, one if a ≡ 0 (mod p), or two if a R p and gcd(a,p) = 1.)\n\nIn general if a composite modulus n is written as a product of powers of distinct primes, and there are n₁ roots modulo the first one, n₂ mod the second, ..., there will be n₁n₂... 
roots modulo n.\n\nThe theoretical way solutions modulo the prime powers are combined to make solutions modulo n is called the Chinese remainder theorem; it can be implemented with an efficient algorithm.\n\nFor example:\n\nSolve x² ≡ 6 (mod 15).\nx² ≡ 6 (mod 3) has one solution, 0; x² ≡ 6 (mod 5) has two, 1 and 4.\nand there are two solutions modulo 15, namely 6 and 9.\nSolve x² ≡ 4 (mod 15).\nx² ≡ 4 (mod 3) has two solutions, 1 and 2; x² ≡ 4 (mod 5) has two, 2 and 3.\nand there are four solutions modulo 15, namely 2, 7, 8, and 13.\nSolve x² ≡ 7 (mod 15).\nx² ≡ 7 (mod 3) has two solutions, 1 and 2; x² ≡ 7 (mod 5) has no solutions.\nand there are no solutions modulo 15.\n\n### Prime or prime power modulus\n\nFirst off, if the modulus n is prime the Legendre symbol $\left({\frac {a}{n}}\right)$ can be quickly computed using a variation of Euclid's algorithm or Euler's criterion. If it is −1 there is no solution. Secondly, assuming that $\left({\frac {a}{n}}\right)=1$ , if n ≡ 3 (mod 4), Lagrange found that the solutions are given by\n\n$x\equiv \pm \;a^{(n+1)/4}{\pmod {n}},$\n\nand Legendre found a similar solution if n ≡ 5 (mod 8):\n\n$x\equiv {\begin{cases}\pm \;a^{(n+3)/8}{\pmod {n}}&{\text{ if }}a{\text{ is a quartic residue modulo }}n\\\pm \;a^{(n+3)/8}2^{(n-1)/4}{\pmod {n}}&{\text{ if }}a{\text{ is a quartic non-residue modulo }}n\end{cases}}$\n\nFor prime n ≡ 1 (mod 8), however, there is no known formula. Tonelli (in 1891) and Cipolla found efficient algorithms that work for all prime moduli. Both algorithms require finding a quadratic nonresidue modulo n, and there is no efficient deterministic algorithm known for doing that. But since half the numbers between 1 and n are nonresidues, picking numbers x at random and calculating the Legendre symbol $\left({\frac {x}{n}}\right)$ until a nonresidue is found will quickly produce one. 
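Lagrange's case n ≡ 3 (mod 4) is simple enough to code directly; here is a sketch (the function name is ours):

```python
def sqrt_mod_prime_3mod4(a, p):
    """Both square roots of a modulo a prime p ≡ 3 (mod 4),
    using Lagrange's formula x ≡ ±a^((p+1)/4) (mod p).
    Assumes (a|p) = 1, i.e. a is a residue."""
    assert p % 4 == 3
    x = pow(a, (p + 1) // 4, p)
    if (x * x) % p != a % p:
        raise ValueError("a is not a quadratic residue mod p")
    return x, p - x

print(sqrt_mod_prime_3mod4(2, 7))    # (4, 3): 4^2 = 16 ≡ 2 (mod 7)
print(sqrt_mod_prime_3mod4(5, 11))   # (4, 7): 4^2 = 16 ≡ 5 (mod 11)
```

For primes ≡ 1 (mod 8), where no such closed formula is known, one resorts to the Tonelli or Cipolla algorithms mentioned above.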
A slight variant of this algorithm is the Tonelli–Shanks algorithm.\n\nIf the modulus n is a prime power n = pᵉ, a solution may be found modulo p and "lifted" to a solution modulo n using Hensel's lemma or an algorithm of Gauss.\n\n### Composite modulus\n\nIf the modulus n has been factored into prime powers the solution was discussed above.\n\nIf n is not congruent to 2 modulo 4 and the Kronecker symbol $\left({\tfrac {a}{n}}\right)=-1$ then there is no solution; if n is congruent to 2 modulo 4 and $\left({\tfrac {a}{n/2}}\right)=-1$ , then there is also no solution. If n is not congruent to 2 modulo 4 and $\left({\tfrac {a}{n}}\right)=1$ , or n is congruent to 2 modulo 4 and $\left({\tfrac {a}{n/2}}\right)=1$ , there may or may not be one.\n\nIf the complete factorization of n is not known, and $\left({\tfrac {a}{n}}\right)=1$ and n is not congruent to 2 modulo 4, or n is congruent to 2 modulo 4 and $\left({\tfrac {a}{n/2}}\right)=1$ , the problem is known to be equivalent to integer factorization of n (i.e. an efficient solution to either problem could be used to solve the other efficiently).\n\nThe above discussion indicates how knowing the factors of n allows us to find the roots efficiently. Say there were an efficient algorithm for finding square roots modulo a composite number. The article congruence of squares discusses how finding two numbers x and y where x² ≡ y² (mod n) and x ≠ ±y suffices to factorize n efficiently. Generate a random number, square it modulo n, and have the efficient square root algorithm find a root. Repeat until it returns a number not equal to the one we originally squared (or its negative modulo n), then follow the algorithm described in congruence of squares. The efficiency of the factoring algorithm depends on the exact characteristics of the root-finder (e.g. does it return all roots? just the smallest one? 
a random one?), but it will be efficient.\n\nDetermining whether a is a quadratic residue or nonresidue modulo n (denoted a R n or a N n) can be done efficiently for prime n by computing the Legendre symbol. However, for composite n, this forms the quadratic residuosity problem, which is not known to be as hard as factorization, but is assumed to be quite hard.\n\nOn the other hand, if we want to know if there is a solution for x less than some given limit c, this problem is NP-complete; however, this is a fixed-parameter tractable problem, where c is the parameter.\n\nIn general, to determine if a is a quadratic residue modulo composite n, one can use the following theorem:\n\nLet n > 1, and gcd(a,n) = 1. Then x² ≡ a (mod n) is solvable if and only if:\n\n• The Legendre symbol $\left({\tfrac {a}{p}}\right)=1$ for all odd prime divisors p of n.\n• a ≡ 1 (mod 4) if n is divisible by 4 but not 8; or a ≡ 1 (mod 8) if n is divisible by 8.\n\nNote: This theorem essentially requires that the factorization of n is known. Also notice that if gcd(a,n) = m, then the congruence can be reduced to a/m ≡ x²/m (mod n/m), but then this takes the problem away from quadratic residues (unless m is a square).\n\n## The number of quadratic residues\n\nThe list of the number of quadratic residues modulo n, for n = 1, 2, 3 ..., looks like:\n\n1, 2, 2, 2, 3, 4, 4, 3, 4, 6, 6, 4, 7, 8, 6, ... 
(sequence A000224 in the OEIS)\n\nA formula to count the number of squares modulo n is given by Stangl.\n\n## Applications\n\n### Acoustics\n\nSound diffusers have been based on number-theoretic concepts such as primitive roots and quadratic residues.\n\n### Graph theory\n\nPaley graphs are dense undirected graphs, one for each prime p ≡ 1 (mod 4), that form an infinite family of conference graphs, which yield an infinite family of symmetric conference matrices.\n\nPaley digraphs are directed analogs of Paley graphs, one for each p ≡ 3 (mod 4), that yield antisymmetric conference matrices.\n\nThe construction of these graphs uses quadratic residues.\n\n### Cryptography\n\nThe fact that finding a square root of a number modulo a large composite n is equivalent to factoring (which is widely believed to be a hard problem) has been used for constructing cryptographic schemes such as the Rabin cryptosystem and oblivious transfer. The quadratic residuosity problem is the basis for the Goldwasser–Micali cryptosystem.\n\nThe discrete logarithm is a similar problem that is also used in cryptography.\n\n### Primality testing\n\nEuler's criterion is a formula for the Legendre symbol (a|p) where p is prime. If p is composite the formula may or may not compute (a|p) correctly. The Solovay–Strassen primality test for whether a given number n is prime or composite picks a random a and computes (a|n) using a modification of Euclid's algorithm, and also using Euler's criterion. If the results disagree, n is composite; if they agree, n may be composite or prime. For a composite n at least 1/2 the values of a in the range 2, 3, ..., n − 1 will return "n is composite"; for prime n none will. If, after using many different values of a, n has not been proved composite it is called a "probable prime".\n\nThe Miller–Rabin primality test is based on the same principles. 
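The Solovay–Strassen test just described can be sketched in a few lines of Python. The Jacobi symbol is computed with the standard Euclid-style reduction the article alludes to (this is illustrative code, not a production primality test):

```python
import random

def jacobi(a, n):
    """Jacobi symbol (a|n) for odd n > 0."""
    a %= n
    result = 1
    while a:
        while a % 2 == 0:          # pull out factors of 2
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                # reciprocity step
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def solovay_strassen(n, rounds=20):
    """Return 'composite' or 'probable prime' for odd n > 2."""
    for _ in range(rounds):
        a = random.randrange(2, n)
        # Euler's criterion must agree with the Jacobi symbol for a prime
        if jacobi(a, n) % n != pow(a, (n - 1) // 2, n):
            return "composite"
    return "probable prime"

print(solovay_strassen(2521))   # probable prime (2521 is prime)
print(solovay_strassen(2523))   # composite (2523 = 3 · 29^2)
```

For a prime n the two computations agree for every a, so the test never wrongly declares a prime composite; for a composite n each round catches it with probability at least 1/2.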
There is a deterministic version of the Miller–Rabin test, but the proof that it works depends on the generalized Riemann hypothesis; the output from this test is "n is definitely composite" or "either n is prime or the GRH is false". If the second output ever occurs for a composite n, then the GRH would be false, which would have implications through many branches of mathematics.\n\n### Integer factorization\n\nIn § VI of the Disquisitiones Arithmeticae Gauss discusses two factoring algorithms that use quadratic residues and the law of quadratic reciprocity.\n\nSeveral modern factorization algorithms (including Dixon's algorithm, the continued fraction method, the quadratic sieve, and the number field sieve) generate small quadratic residues (modulo the number being factorized) in an attempt to find a congruence of squares which will yield a factorization. The number field sieve is the fastest general-purpose factorization algorithm known.\n\n## Table of quadratic residues\n\nThe following table (sequence A096008 in the OEIS) lists the quadratic residues mod 1 to 75 (in the original table a red number marks a residue that is not coprime to n). 
(For the quadratic residues coprime to n, see , and for nonzero quadratic residues, see .)\n\n1 0 26 0, 1, 3, 4, 9, 10, 12, 13, 14, 16, 17, 22, 23, 25 51 0, 1, 4, 9, 13, 15, 16, 18, 19, 21, 25, 30, 33, 34, 36, 42, 43, 49\n2 0, 1 27 0, 1, 4, 7, 9, 10, 13, 16, 19, 22, 25 52 0, 1, 4, 9, 12, 13, 16, 17, 25, 29, 36, 40, 48, 49\n3 0, 1 28 0, 1, 4, 8, 9, 16, 21, 25 53 0, 1, 4, 6, 7, 9, 10, 11, 13, 15, 16, 17, 24, 25, 28, 29, 36, 37, 38, 40, 42, 43, 44, 46, 47, 49, 52\n4 0, 1 29 0, 1, 4, 5, 6, 7, 9, 13, 16, 20, 22, 23, 24, 25, 28 54 0, 1, 4, 7, 9, 10, 13, 16, 19, 22, 25, 27, 28, 31, 34, 36, 37, 40, 43, 46, 49, 52\n5 0, 1, 4 30 0, 1, 4, 6, 9, 10, 15, 16, 19, 21, 24, 25 55 0, 1, 4, 5, 9, 11, 14, 15, 16, 20, 25, 26, 31, 34, 36, 44, 45, 49\n6 0, 1, 3, 4 31 0, 1, 2, 4, 5, 7, 8, 9, 10, 14, 16, 18, 19, 20, 25, 28 56 0, 1, 4, 8, 9, 16, 25, 28, 32, 36, 44, 49\n7 0, 1, 2, 4 32 0, 1, 4, 9, 16, 17, 25 57 0, 1, 4, 6, 7, 9, 16, 19, 24, 25, 28, 30, 36, 39, 42, 43, 45, 49, 54, 55\n8 0, 1, 4 33 0, 1, 3, 4, 9, 12, 15, 16, 22, 25, 27, 31 58 0, 1, 4, 5, 6, 7, 9, 13, 16, 20, 22, 23, 24, 25, 28, 29, 30, 33, 34, 35, 36, 38, 42, 45, 49, 51, 52, 53, 54, 57\n9 0, 1, 4, 7 34 0, 1, 2, 4, 8, 9, 13, 15, 16, 17, 18, 19, 21, 25, 26, 30, 32, 33 59 0, 1, 3, 4, 5, 7, 9, 12, 15, 16, 17, 19, 20, 21, 22, 25, 26, 27, 28, 29, 35, 36, 41, 45, 46, 48, 49, 51, 53, 57\n10 0, 1, 4, 5, 6, 9 35 0, 1, 4, 9, 11, 14, 15, 16, 21, 25, 29, 30 60 0, 1, 4, 9, 16, 21, 24, 25, 36, 40, 45, 49\n11 0, 1, 3, 4, 5, 9 36 0, 1, 4, 9, 13, 16, 25, 28 61 0, 1, 3, 4, 5, 9, 12, 13, 14, 15, 16, 19, 20, 22, 25, 27, 34, 36, 39, 41, 42, 45, 46, 47, 48, 49, 52, 56, 57, 58, 60\n12 0, 1, 4, 9 37 0, 1, 3, 4, 7, 9, 10, 11, 12, 16, 21, 25, 26, 27, 28, 30, 33, 34, 36 62 0, 1, 2, 4, 5, 7, 8, 9, 10, 14, 16, 18, 19, 20, 25, 28, 31, 32, 33, 35, 36, 38, 39, 40, 41, 45, 47, 49, 50, 51, 56, 59\n13 0, 1, 3, 4, 9, 10, 12 38 0, 1, 4, 5, 6, 7, 9, 11, 16, 17, 19, 20, 23, 24, 25, 26, 28, 30, 35, 36 63 0, 1, 4, 7, 9, 16, 18, 22, 25, 28, 36, 37, 43, 46, 49, 58\n14 
0, 1, 2, 4, 7, 8, 9, 11 39 0, 1, 3, 4, 9, 10, 12, 13, 16, 22, 25, 27, 30, 36 64 0, 1, 4, 9, 16, 17, 25, 33, 36, 41, 49, 57\n15 0, 1, 4, 6, 9, 10 40 0, 1, 4, 9, 16, 20, 24, 25, 36 65 0, 1, 4, 9, 10, 14, 16, 25, 26, 29, 30, 35, 36, 39, 40, 49, 51, 55, 56, 61, 64\n16 0, 1, 4, 9 41 0, 1, 2, 4, 5, 8, 9, 10, 16, 18, 20, 21, 23, 25, 31, 32, 33, 36, 37, 39, 40 66 0, 1, 3, 4, 9, 12, 15, 16, 22, 25, 27, 31, 33, 34, 36, 37, 42, 45, 48, 49, 55, 58, 60, 64\n17 0, 1, 2, 4, 8, 9, 13, 15, 16 42 0, 1, 4, 7, 9, 15, 16, 18, 21, 22, 25, 28, 30, 36, 37, 39 67 0, 1, 4, 6, 9, 10, 14, 15, 16, 17, 19, 21, 22, 23, 24, 25, 26, 29, 33, 35, 36, 37, 39, 40, 47, 49, 54, 55, 56, 59, 60, 62, 64, 65\n18 0, 1, 4, 7, 9, 10, 13, 16 43 0, 1, 4, 6, 9, 10, 11, 13, 14, 15, 16, 17, 21, 23, 24, 25, 31, 35, 36, 38, 40, 41 68 0, 1, 4, 8, 9, 13, 16, 17, 21, 25, 32, 33, 36, 49, 52, 53, 60, 64\n19 0, 1, 4, 5, 6, 7, 9, 11, 16, 17 44 0, 1, 4, 5, 9, 12, 16, 20, 25, 33, 36, 37 69 0, 1, 3, 4, 6, 9, 12, 13, 16, 18, 24, 25, 27, 31, 36, 39, 46, 48, 49, 52, 54, 55, 58, 64\n20 0, 1, 4, 5, 9, 16 45 0, 1, 4, 9, 10, 16, 19, 25, 31, 34, 36, 40 70 0, 1, 4, 9, 11, 14, 15, 16, 21, 25, 29, 30, 35, 36, 39, 44, 46, 49, 50, 51, 56, 60, 64, 65\n21 0, 1, 4, 7, 9, 15, 16, 18 46 0, 1, 2, 3, 4, 6, 8, 9, 12, 13, 16, 18, 23, 24, 25, 26, 27, 29, 31, 32, 35, 36, 39, 41 71 0, 1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 19, 20, 24, 25, 27, 29, 30, 32, 36, 37, 38, 40, 43, 45, 48, 49, 50, 54, 57, 58, 60, 64\n22 0, 1, 3, 4, 5, 9, 11, 12, 14, 15, 16, 20 47 0, 1, 2, 3, 4, 6, 7, 8, 9, 12, 14, 16, 17, 18, 21, 24, 25, 27, 28, 32, 34, 36, 37, 42 72 0, 1, 4, 9, 16, 25, 28, 36, 40, 49, 52, 64\n23 0, 1, 2, 3, 4, 6, 8, 9, 12, 13, 16, 18 48 0, 1, 4, 9, 16, 25, 33, 36 73 0, 1, 2, 3, 4, 6, 8, 9, 12, 16, 18, 19, 23, 24, 25, 27, 32, 35, 36, 37, 38, 41, 46, 48, 49, 50, 54, 55, 57, 61, 64, 65, 67, 69, 70, 71, 72\n24 0, 1, 4, 9, 12, 16 49 0, 1, 2, 4, 8, 9, 11, 15, 16, 18, 22, 23, 25, 29, 30, 32, 36, 37, 39, 43, 44, 46 74 0, 1, 3, 4, 7, 9, 10, 11, 12, 16, 21, 25, 
26, 27, 28, 30, 33, 34, 36, 37, 38, 40, 41, 44, 46, 47, 48, 49, 53, 58, 62, 63, 64, 65, 67, 70, 71, 73\n25 0, 1, 4, 6, 9, 11, 14, 16, 19, 21, 24 50 0, 1, 4, 6, 9, 11, 14, 16, 19, 21, 24, 25, 26, 29, 31, 34, 36, 39, 41, 44, 46, 49 75 0, 1, 4, 6, 9, 16, 19, 21, 24, 25, 31, 34, 36, 39, 46, 49, 51, 54, 61, 64, 66, 69\nx 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25\nx2 1 4 9 16 25 36 49 64 81 100 121 144 169 196 225 256 289 324 361 400 441 484 529 576 625\nmod 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\nmod 2 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1\nmod 3 1 1 0 1 1 0 1 1 0 1 1 0 1 1 0 1 1 0 1 1 0 1 1 0 1\nmod 4 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1\nmod 5 1 4 4 1 0 1 4 4 1 0 1 4 4 1 0 1 4 4 1 0 1 4 4 1 0\nmod 6 1 4 3 4 1 0 1 4 3 4 1 0 1 4 3 4 1 0 1 4 3 4 1 0 1\nmod 7 1 4 2 2 4 1 0 1 4 2 2 4 1 0 1 4 2 2 4 1 0 1 4 2 2\nmod 8 1 4 1 0 1 4 1 0 1 4 1 0 1 4 1 0 1 4 1 0 1 4 1 0 1\nmod 9 1 4 0 7 7 0 4 1 0 1 4 0 7 7 0 4 1 0 1 4 0 7 7 0 4\nmod 10 1 4 9 6 5 6 9 4 1 0 1 4 9 6 5 6 9 4 1 0 1 4 9 6 5\nmod 11 1 4 9 5 3 3 5 9 4 1 0 1 4 9 5 3 3 5 9 4 1 0 1 4 9\nmod 12 1 4 9 4 1 0 1 4 9 4 1 0 1 4 9 4 1 0 1 4 9 4 1 0 1\nmod 13 1 4 9 3 12 10 10 12 3 9 4 1 0 1 4 9 3 12 10 10 12 3 9 4 1\nmod 14 1 4 9 2 11 8 7 8 11 2 9 4 1 0 1 4 9 2 11 8 7 8 11 2 9\nmod 15 1 4 9 1 10 6 4 4 6 10 1 9 4 1 0 1 4 9 1 10 6 4 4 6 10\nmod 16 1 4 9 0 9 4 1 0 1 4 9 0 9 4 1 0 1 4 9 0 9 4 1 0 1\nmod 17 1 4 9 16 8 2 15 13 13 15 2 8 16 9 4 1 0 1 4 9 16 8 2 15 13\nmod 18 1 4 9 16 7 0 13 10 9 10 13 0 7 16 9 4 1 0 1 4 9 16 7 0 13\nmod 19 1 4 9 16 6 17 11 7 5 5 7 11 17 6 16 9 4 1 0 1 4 9 16 6 17\nmod 20 1 4 9 16 5 16 9 4 1 0 1 4 9 16 5 16 9 4 1 0 1 4 9 16 5\nmod 21 1 4 9 16 4 15 7 1 18 16 16 18 1 7 15 4 16 9 4 1 0 1 4 9 16\nmod 22 1 4 9 16 3 14 5 20 15 12 11 12 15 20 5 14 3 16 9 4 1 0 1 4 9\nmod 23 1 4 9 16 2 13 3 18 12 8 6 6 8 12 18 3 13 2 16 9 4 1 0 1 4\nmod 24 1 4 9 16 1 12 1 16 9 4 1 0 1 4 9 16 1 12 1 16 9 4 1 0 1\nmod 25 1 4 9 16 0 11 24 14 6 0 21 19 19 21 0 6 14 24 11 0 
16 9 4 1 0"
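Any row of the residue table above can be regenerated with a one-line computation; for instance (a sketch in Python):

```python
def quadratic_residues(n):
    """Distinct values of x^2 mod n, sorted -- one row of the table above."""
    return sorted({(x * x) % n for x in range(n)})

for n in (8, 15, 25):
    print(n, quadratic_residues(n))
# 8 [0, 1, 4]
# 15 [0, 1, 4, 6, 9, 10]
# 25 [0, 1, 4, 6, 9, 11, 14, 16, 19, 21, 24]
```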
]
https://www.scribd.com/document/429745167/What-is-the-term-used-for-the-third-derivative-of-position | [
"# What is the term used for the third derivative of position?\nPhilip Gibbs & Stephanie Gragert\n\nNovember 1998\n\nIt is well known that the first derivative of position (symbol r) with respect to time is velocity (symbol v) and the second is acceleration (symbol a). It is a little less well known that there is no universally accepted name for derivatives subsequent to acceleration. However the third derivative, i.e. the rate of increase of acceleration, is technically known as jerk (symbol j). In the UK jolt has sometimes been used instead of jerk and may be equally acceptable, although jerk appears to be the more common of the two. It is also recognised in international standards:\n\n”1.5 jerk: A vector that specifies the time-derivative of acceleration.”\n\nNote that the symbol j for jerk is not in the standard and is probably only one of many symbols used.\n\nMany other terms have appeared in individual cases for the third derivative, including pulse, impulse, bounce, surge, shock and super acceleration. These are generally less appropriate than jerk and jolt, either because they are used in engineering to mean other things or because the common English use of the word does not fit the meaning so well. For example impulse is more commonly used in physics to mean an increase of momentum imparted by a force of limited duration [Belanger 1847] and surge is used by electricians to mean something like rate of increase of current or voltage. The terms jerk and jolt are therefore preferred for rate of increase of acceleration.\n\nAs its name suggests, jerk is important when evaluating the destructive effect of motion on a mechanism or the discomfort caused to passengers in a vehicle. The movement of delicate instruments needs to be kept within specified limits of jerk as well as acceleration to avoid damage. 
When designing a train the engineers will typically be required to keep the jerk less than 2 metres per second cubed for passenger comfort. In the aerospace industry they even have such a thing as a jerkmeter: an instrument for measuring jerk. In the case of the Hubble space telescope, the engineers are said to have even gone as far as specifying limits on the magnitude of the fourth derivative.\n\nThe term jounce has been used for the fourth derivative, i.e. the rate of increase of jerk, but it has the drawback of using the same initial letter as jerk so it is not clear which symbol to use. According to this convention, the terms flounce and pounce have been proposed for the fifth and sixth derivatives. Another less serious suggestion is snap (symbol s), crackle (symbol c) and pop (symbol ℘) for the 4th, 5th and 6th derivatives respectively. Higher derivatives do not yet have names because they do not come up very often.\n\nRegarding the dynamic translational quantities, since the rate of increase of momentum (symbol p) is force (symbol F), it seems necessary to find terms for higher derivatives of force too. So far yank (symbol Y) has been suggested for rate of increase of force, tug (symbol T) for rate of increase of yank, snatch (symbol S) for rate of increase of tug and shake (symbol S ) for rate of increase of snatch.\n\nA similar reasoning can be done for the dynamic rotary quantities: when the reference point is motionless, the rate of increase of angular momentum (symbol p), the moment of momentum, is torque (symbol M), the moment of force, so rotatum (symbol P) has been proposed for the rate of increase of torque, gyratum (symbol G) for the rate of increase of rotatum, versum (symbol V) for the rate of increase of gyratum and volutum (symbol V ) for the rate of increase of versum.\n\nNeedless to say, none of these are in any kind of standards, yet. 
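The chain position → velocity → acceleration → jerk → snap is easy to illustrate with a toy polynomial position function (this example is ours, not from the original article):

```python
def derivative(coeffs):
    """Differentiate a polynomial r(t) = c0 + c1*t + c2*t^2 + ...,
    given as its coefficient list [c0, c1, c2, ...]."""
    return [k * c for k, c in enumerate(coeffs)][1:]

r = [0, 0, 0, 0, 1]                 # position r(t) = t^4
labels = ["velocity", "acceleration", "jerk", "snap"]
d = r
for label in labels:
    d = derivative(d)
    print(label, d)
# jerk comes out as 24t, and snap as the constant 24
```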
We just made them up on usenet.\n\n## Now class, repeat after me, if mass is constant...\n\nMomentum equals mass times velocity and its moment is angular momentum!\nForce equals mass times acceleration and its moment is torque!\nYank equals mass times jerk and its moment is rotatum!\nTug equals mass times snap and its moment is gyratum!\nSnatch equals mass times crackle and its moment is versum!\nShake equals mass times pop and its moment is volutum!"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9370613,"math_prob":0.8987478,"size":4280,"snap":"2020-10-2020-16","text_gpt3_token_len":918,"char_repetition_ratio":0.15645464,"word_repetition_ratio":0.042918455,"special_character_ratio":0.19462617,"punctuation_ratio":0.09378961,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.970089,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-04-09T05:27:57Z\",\"WARC-Record-ID\":\"<urn:uuid:c5fc45a7-e04c-49a5-a2b6-6768f48d3db8>\",\"Content-Length\":\"314765\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:66dbe425-766c-4a5b-bdfd-1a1255dd4641>\",\"WARC-Concurrent-To\":\"<urn:uuid:7b0c1a35-df32-4ba1-a079-3102d25c7d4e>\",\"WARC-IP-Address\":\"151.101.202.152\",\"WARC-Target-URI\":\"https://www.scribd.com/document/429745167/What-is-the-term-used-for-the-third-derivative-of-position\",\"WARC-Payload-Digest\":\"sha1:I2TK4IWI4PUQRX6ZCERFNY3GV7PX775K\",\"WARC-Block-Digest\":\"sha1:25DIM5MJUDLV2JI672SZHKAKWDQBS3VO\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585371829677.89_warc_CC-MAIN-20200409024535-20200409055035-00246.warc.gz\"}"} |
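The chain position → velocity → acceleration → jerk → snap described above can be sketched numerically. This is an illustrative example, not from the source: the trajectory x(t) = t³ and the sample spacing are my own choices, picked so that the jerk comes out as the constant 6 and the snap as 0.

```python
# Successive time derivatives of a sampled position signal, computed with
# forward finite differences. For x(t) = t**3 the third derivative (jerk)
# is exactly 6 and the fourth derivative (snap) is 0.

def derivative(samples, dt):
    """Forward-difference derivative of equally spaced samples."""
    return [(b - a) / dt for a, b in zip(samples, samples[1:])]

dt = 1e-3
ts = [i * dt for i in range(1000)]
position = [t ** 3 for t in ts]

velocity = derivative(position, dt)          # ~ 3 t^2
acceleration = derivative(velocity, dt)      # ~ 6 t
jerk = derivative(acceleration, dt)          # ~ 6, constant
snap = derivative(jerk, dt)                  # ~ 0

print(round(jerk[10], 6), round(snap[10], 6))
```

Each further list is one element shorter than the last, which is why named higher derivatives become less useful in practice: every extra level amplifies the sampling noise.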
https://www.metaglossary.com/define/simple+random+sample | [
"Definitions for \"Simple Random Sample\"\nA method for drawing a sample from a population such that all samples of a given size have equal probability of being drawn.\na probability sample in which the probability of selecting any individual in the population is the same\nA sample of a fixed size chosen randomly in such a way that every possible sample of this size has the same probability of being selected."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9716283,"math_prob":0.9932778,"size":408,"snap":"2021-04-2021-17","text_gpt3_token_len":77,"char_repetition_ratio":0.13613862,"word_repetition_ratio":0.0,"special_character_ratio":0.18872549,"punctuation_ratio":0.08108108,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9886537,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-10T12:39:54Z\",\"WARC-Record-ID\":\"<urn:uuid:952f7cf0-f836-4264-bc99-27bd65fb0a0e>\",\"Content-Length\":\"9583\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:73ea5a60-9893-40c9-8371-57e08d884c0f>\",\"WARC-Concurrent-To\":\"<urn:uuid:dc5502e0-2d01-4693-9570-142879aa6281>\",\"WARC-IP-Address\":\"104.21.73.55\",\"WARC-Target-URI\":\"https://www.metaglossary.com/define/simple+random+sample\",\"WARC-Payload-Digest\":\"sha1:QIXHSMNHBX3RYGJNGL6H2ZHE3GS3F74R\",\"WARC-Block-Digest\":\"sha1:5AIORELROIY6TSF4ZB2GXGXYAWJC3W3C\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038056869.3_warc_CC-MAIN-20210410105831-20210410135831-00539.warc.gz\"}"} |
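The definitions above can be illustrated in a few lines. This is a sketch with my own choices of population, sample size, and trial count: Python's `random.sample` draws without replacement so that every size-k subset is equally likely, which is exactly the definition quoted, and the empirical check below confirms the equal-inclusion-probability consequence.

```python
import random
from collections import Counter

random.seed(0)
population = list(range(10))  # a toy population of 10 labeled units
k = 3                         # sample size
trials = 20000

# random.sample gives every size-k subset the same probability of being
# drawn -- the definition of a simple random sample quoted above.
counts = Counter()
for _ in range(trials):
    counts.update(random.sample(population, k))

# Consequence: each unit's inclusion probability is k/N = 0.3.
for unit in population:
    assert abs(counts[unit] / trials - k / len(population)) < 0.02
print("each unit appears in about", k / len(population), "of the samples")
```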
https://medium.com/cantors-paradise/black-hole-entropy-and-the-laws-of-thermodynamics-d85fd5d5cce2 | [
"# Black Hole Entropy and the Laws of Thermodynamics\n\n## The Remarkable Similarities Between the Black Holes Mechanics and Laws of Thermodynamics\n\nThe Mexican-born Israeli-American theoretical physicist J. Bekenstein was the first one to suggest that black holes, a region of spacetime where gravity is so strong that not even light can escape from it, should have a well-defined entropy. According to Bekenstein, one can define black-hole entropy as follows:",
null,
"Figure 1: The Bekenstein-Hawking entropy is the entropy ascribed to a black hole. Its value is proportional to A/4, where A is the area of the event horizon (source). The Israeli-American theoretical physicist Jacob Bekenstein was the first one to suggest that black holes should have a well-defined entropy (source).\n\n# Why Do Black Holes Have Entropy?\n\nIn an article, Bekenstein provided three possible ways to justify the existence of black hole entropy:\n\n• The matter and radiation that collapse to form a black hole are hidden from exterior observers. Hence, such observers cannot provide a thermodynamic description of the black hole’s collapse based on the entropy of the collapsed matter and radiation, since those are not observable. A way to explain the thermodynamics would be to associate entropy with the black hole.\n• Only three numbers are needed to parametrize a stationary (not to be confused with static) black hole: its mass M, charge Q, and angular momentum J (see the no-hair theorem). Since there are several possible formation scenarios for a black hole with a specific choice of these three parameters, there must be several possible internal states associated with it. We know that thermodynamic entropy quantifies the many microstates compatible with a given macrostate (characterized by quantities such as temperature T, pressure P, and volume V). In the case of the black hole, the macrostate is characterized by the tuple (M, Q, J). Quoting Bekenstein, “Thermodynamic entropy quantifies […] multiplicity. Thus by analogy, one needs to associate entropy with a black hole”.",
null,
"Figure 3: A microstate is a particular microscopic configuration that a thermodynamic system may occupy (with a certain probability). A macrostate refers to its macroscopic properties, characterized by a few parameters such as its temperature, pressure, and volume.\n• The black hole hides information. All information black holes provide are (M, Q, J), and it blocks all signals that enter it. Since we know from basic statistical mechanics that entropy measures missing information, this suggests that black holes have entropy.",
null,
"Figure 4: All three different shaped “stars” collapse without preserving its properties. After the collapse, the first hole has no magnetic field, the second hole is not a square and the third hole does not have a mountain on its surfaces (source).\n\n# The Black Hole Formula for the Entropy\n\nIt is reasonable to suppose that the entropy of a black hole should depend only on one or more of the three observable quantities, M, Q and J. According to the area theorem, the area of the event horizon of a black hole cannot decrease. This reminds us of the behavior of the ordinary thermodynamic entropy of closed systems. Therefore, it makes sense to suppose that the entropy of the black hole is a monotonic increasing function of the area of the event horizon.\n\nNow, one can show (to be discussed later) that a black hole that obeys “a version” of the first law of thermodynamics possesses an entropy that is proportional to the area of the event horizon. More specifically, if the area of the event horizon is given by A, the black hole entropy is",
null,
"Equation 1: The entropy of a black hole is proportional to the area A of its event horizon (in this expression, all physical constants were set to 1).\n\nwhere we used natural units (physical constants set to 1).",
null,
"Figure 5: The entropy of a black hole S is proportional to the area A of its event horizon (source).\n\n# Types of Black Hole\n\nThere are four main types of black holes: Schwarzschild (uncharged and non-rotating), Reissner–Nordström (charged and non-rotating), Kerr (uncharged and rotating) and Kerr–Newman (charged and rotating).\n\nIn the following, I will briefly describe Schwarzschild black holes and Kerr-Newman black holes. The former is the simplest type of black hole and the latter is the most general. For a more detailed description see Carroll.\n\n## The Schwarzschild Black Hole\n\nThe Schwarzschild black hole is stationary and spherically symmetric with only one parameter, its mass M. In natural units, the Schwarzschild line element in spherical coordinates reads:\n\nds^2 = −(1 − 2M/r) dt^2 + (1 − 2M/r)^(−1) dr^2 + r^2 (dθ^2 + sin^2 θ dϕ^2)\n\nChanging from spherical to a new set of coordinates called Eddington–Finkelstein coordinates, one finds that r = 2M is not a real spacetime singularity. The only singularity of the Schwarzschild black hole is at the center r = 0. The radius of the event horizon is r = 2M, and its surface area and corresponding entropy are given by A = 16πM^2 and S = A/4 = 4πM^2:",
null,
"Equation 3: The area of the event horizon and the corresponding entropy of the Schwarzschild black hole.",
null,
"Figure 6: A Schwarzschild black hole. The figure on the right shows an accretion disk, which is formed by diffuse material that orbits massive central bodies. The photon sphere (also known as the last photon orbit) is a region of space where gravity forces photons to travel in orbits (source).\n\n## Kerr-Newman Black Hole\n\nThe Kerr-Newman black hole is the most general type of stationary asymptotically flat black hole. Note that stationary and static are different concepts. The Kerr-Newman black hole is spinning (hence it is not static) but it always spins in the same way, and is, therefore, stationary. The Kerr-Newman black hole has three parameters, namely, M, interpreted as the mass of the black hole (as seen by an observer at infinity), charge Q, and J, interpreted as its angular momentum (it is axisymmetric). It is a generalization of the Kerr black hole, which has Q=0. In contrast to the Schwarzschild black hole, the horizon of the Kerr-Newman black hole is not spherical.\n\nIn the Boyer-Lindquist coordinates (t,r,θ,ϕ) the line element reads:\n\nwhere θ is the polar angle in spherical coordinates. We also need the electromagnetic field tensor in order to obtain the solution of the Einstein field equations. In the Boyer-Lindquist form, the electromagnetic field tensor reads:\n\nThe equations of motion of a test particle with charge-to-mass ratio q orbiting around the Kerr–Newman black hole are given by:",
null,
"Equation 7: The equations of motion of a test particle with charge q orbiting around the Kerr–Newman black hole.\n\nNote that the equation of motion depends on the Christoffel symbols Γ. The animation below illustrates the motion of the test particle.",
null,
"Figure 7: A test particle orbiting a Kerr-Newman black hole (source).\n\nThe area of the event horizon of the Kerr-Newman black hole is given by Eq. 8: A = 4π(r_+^2 + a^2), where a = J/M and r_+ = M + √(M^2 − Q^2 − a^2) is the outer horizon radius.\n\nWe note here that though the horizon of the Kerr–Newman black hole is not spherical anymore, in the Boyer-Lindquist coordinates it still lies at a fixed radial coordinate, allowing Eq. 8 to be calculated in a simple way (see Bekenstein).\n\n## Important Surfaces of the Kerr–Newman Black Hole\n\nAs shown in Fig. 8, the Kerr–Newman metric has the following important surfaces:\n\n• The inner and outer event horizons, r_± = M ± √(M^2 − Q^2 − a^2);\n• The stationary limit surfaces, r = M ± √(M^2 − Q^2 − a^2 cos^2 θ),\n\nwhere θ is the polar angle in spherical coordinates.",
null,
"Figure 8: The figure shows the horizons around a Kerr black hole (the uncharged version of the Kerr-Newman black hole). Event horizons (inner and outer) define null surfaces past which it is impossible to return to particular regions of space. The stationary limit surface (upper right of the figure) is timelike (except at the poles). Beyond it, an observer cannot stay stationary. The ergosphere is a region where it is possible to transit in and out (but not stay stationary).\n\n# The Laws of Black Hole Thermodynamics\n\nI will first consider the first, second and third laws of black hole thermodynamics and then examine the zeroth law.\n\n## The First Law of Thermodynamics\n\nWhen a thermodynamic system at temperature T is near its equilibrium and changes its state, the following relation holds between the corresponding increments of energy E and entropy S:",
null,
"Equation 11: The first law of thermodynamics. Here dW is the work performed on the system.\n\nIf the system is under rotation with angular frequency Ω and it is charged up to an electric potential Φ, its angular momentum and charge will vary. The first law Eq. 11 will then become:",
null,
"Equation 12: The first law of thermodynamics when the system is rotating with angular frequency Ω and it is charged up to an electric potential Φ.\n\nBlack holes admit a relation analogous to Eq. 12. To explain it I will consider only the Kerr-Newman black hole type since, as mentioned before, it is the most general (asymptotically flat) type of stationary black hole.\n\nNow consider Eq. 8 for the area of the Kerr-Newman black hole horizon, calculate dA and multiply it by the parameter Θ below. We obtain the following relation between the dM, dJ and dQ:",
null,
"Equation 13: Relation between increments in mechanical and geometrical properties of black holes.\n\nwhere:",
null,
"Equation 14: Values of the coefficients of dM, dJ, and dQ in Eq. 13.\n\nThe quantities above:",
null,
"Equation 15: Angular rotation frequency and electric potential of the black hole.\n\nare respectively:\n\n• The angular rotation frequency of the black hole (any test body dropped into the black hole close to the horizon, ends up circumnavigating it at this frequency).\n• The black hole’s electric potential (the line integral of the black hole’s electric field from ∞ to the horizon).\n\nSince the first term of the right-hand side of Eq. 13 is the energy of the black hole, Eq. 13 is very similar to the first law of ordinary thermodynamics. For Eq. 13 to be the first law of black hole thermodynamics the entropy must be a univariate function of the area of the event horizon which would imply that:",
null,
"Equation 16: Condition for Eq. 13 to be the first law of thermodynamics for black holes.\n\nNow, if we choose the entropy of the black hole to be Eq. 1, we find the following expression for the temperature of the black hole:",
null,
"Equation 17: Temperature of the black hole with the choice of entropy S given by Eq. 1.\n\nFor Q=J=0 this becomes Eq. 18, T = 1/(8πM).\n\nBut in 1974, it was shown that a black hole spontaneously emits thermal radiation at precisely this temperature, the so-called Hawking temperature.",
null,
"Figure 9: The English theoretical physicist Stephen Hawking, who published in 1974 a paper where he calculated the temperature of the (now-called) Hawking radiation emitted by a black hole (source).\n\nThough in Hawking’s original paper, Eq. 18 was obtained, the temperature in the presence of charge Q and angular momentum J was later shown to be equal to Eq. 17.\n\n## The Generalized Second Law of Black Hole Thermodynamics\n\nAccording to standard thermodynamics, the second law demands that the entropy of a closed system must never decrease:\n\nSince conventional systems that fall into a black hole become unobservable, so does their entropy. A more useful law is then obtained if one generalizes the second law of thermodynamics as follows:\n\nHence, according to the generalized second law (GSL), the variation of the standard entropy S₀ outside the black hole summed with the variation of the total entropy of the black hole must never be negative. When entropy-bearing matter enters the black hole, the GSL demands that the increase in the entropy of the black hole must more than compensate for the entropy that disappears from sight.\n\nWhile the black hole is emitting radiation, there is a decrease in the black hole’s area, which violates the area theorem we discussed previously. Such a violation happens because the energy condition assumed by the theorem fails due to the presence of the quantum fluctuations which themselves produce the radiation.
The generalized second law predicts that the entropy of the outgoing radiation shall more than compensate for the black hole entropy drop.\n\n## The Third Law of Black Hole Thermodynamics\n\nThe third law of standard thermodynamics can be stated in two ways:\n\n• Unattainability statement: For a system to be taken to T=0, an infinite number of steps is needed.\n• The Nernst-Simon statement: At T=0 the entropy of a system can either go to zero or become independent of the intensive thermodynamic properties such as pressure, magnetic field or the electric potential.\n\nFrom our previous equations for the Kerr-Newman black hole, we find that if the black hole temperature given by Eq. 17 vanishes, two things happen:\n\n• First, the black hole violates the Nernst-Simon third law statement. More specifically, at zero temperature the black hole entropy is not zero and, instead, depends on the ratio J/M, which is related to the black hole’s angular velocity, an intensive parameter.\n• Second, there exists some indication that black holes satisfy the unattainability statement. As found by Thorne (1973), in an astrophysical scenario, the spin-up of an uncharged black hole stalls at J/M ≈ 0.998M, before the extremality condition (at which the temperature vanishes) is reached.\n\n## The Zeroth Law of Thermodynamics\n\nAccording to the zeroth law of thermodynamics, if two systems are in thermal equilibrium with a third system, then they are in thermal equilibrium with each other. Systems in thermal equilibrium have a constant temperature.\n\nSince it can be shown that the surface gravity κ (which equals the acceleration due to gravity close to the horizon times the redshift factor) is constant on the horizon of a stationary black hole, it is reasonable to associate it with the temperature T of a normal system.\n\n# What Is the Origin of the Black Hole Entropy?\n\nThis question still has no definite answer.
However, since ordinary entropy is a measure of the multiplicity of microstates corresponding to a single macrostate, as in the Boltzmann formula
null,
"Equation 21: The Boltzmann entropy where W is the number of equiprobable microstates corresponding to a single macrostate.\n\none is led to wonder what the meaning of a black hole microstate is. Here is a short list of some possible interpretations of black hole entropy (for more details see Bekenstein):\n\n• The black hole entropy counts how many internal states of matter and energy there are (see this article).\n• For a black hole formed by collapse, there is entanglement between the quantum field degrees of freedom external and internal to the horizon. For external observers, the degrees of freedom internal to the horizon are (naturally) not accessible. Hence, in a meaningful description of the state, the internal degrees of freedom are traced out.\n• The black hole entropy counts horizon gravitational states. The microstates we are looking for are therefore the states of the gravitational degrees of freedom that reside on the horizon of the black hole.\n\nMy Github and personal website www.marcotavora.me have some other interesting material about physics and other topics such as data science and mathematics. Check them out!\n\nWritten by\n\n## Marco Tavora Ph.D.\n\n#### Physicist/Data Scientist | Lover of unification, generalization & abstraction | https://www.linkedin.com/in/marco-tavora",
null,
""
]
| [
null,
"https://miro.medium.com/max/60/1*[email protected]",
null,
"https://miro.medium.com/max/60/1*[email protected]",
null,
"https://miro.medium.com/max/60/1*[email protected]",
null,
"https://miro.medium.com/max/60/1*[email protected]",
null,
"https://miro.medium.com/max/60/1*[email protected]",
null,
"https://miro.medium.com/max/60/1*[email protected]",
null,
"https://miro.medium.com/max/60/1*[email protected]",
null,
"https://miro.medium.com/max/60/1*[email protected]",
null,
"https://miro.medium.com/freeze/max/60/1*LdvNPZzFXsVTAa4Dd54Xrg.gif",
null,
"https://miro.medium.com/max/60/1*[email protected]",
null,
"https://miro.medium.com/max/60/1*[email protected]",
null,
"https://miro.medium.com/max/60/1*[email protected]",
null,
"https://miro.medium.com/max/60/1*[email protected]",
null,
"https://miro.medium.com/max/60/1*[email protected]",
null,
"https://miro.medium.com/max/60/1*[email protected]",
null,
"https://miro.medium.com/max/60/1*[email protected]",
null,
"https://miro.medium.com/max/60/1*[email protected]",
null,
"https://miro.medium.com/max/42/1*y_uM1pfBJeJRHOkMSTzNUg.jpeg",
null,
"https://miro.medium.com/max/60/1*[email protected]",
null,
"https://miro.medium.com/fit/c/80/80/1*iODVVKJIKo4wOlnO8yO10Q.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.89986205,"math_prob":0.8766289,"size":12237,"snap":"2020-45-2020-50","text_gpt3_token_len":2584,"char_repetition_ratio":0.1862176,"word_repetition_ratio":0.04090685,"special_character_ratio":0.19751573,"punctuation_ratio":0.09584245,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.988802,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-12-03T02:24:52Z\",\"WARC-Record-ID\":\"<urn:uuid:c60597b5-6e4c-49b2-b626-7e8431963184>\",\"Content-Length\":\"264713\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1ccbf4a9-8ff4-42ad-bfc5-b70b2478e1fb>\",\"WARC-Concurrent-To\":\"<urn:uuid:67c91ade-8437-4874-8c6d-d20d857bd1c4>\",\"WARC-IP-Address\":\"104.16.124.127\",\"WARC-Target-URI\":\"https://medium.com/cantors-paradise/black-hole-entropy-and-the-laws-of-thermodynamics-d85fd5d5cce2\",\"WARC-Payload-Digest\":\"sha1:CS7JXHI4ZILWNSSG6AB55F5VEC3TV4WP\",\"WARC-Block-Digest\":\"sha1:DAWSVJDJR7NZWDWOKCMLX3CUY4SDH2PR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141717601.66_warc_CC-MAIN-20201203000447-20201203030447-00499.warc.gz\"}"} |
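The Schwarzschild formulas quoted in the article above (horizon radius r = 2M, area A = 16πM², entropy S = A/4, and the Hawking temperature T = 1/(8πM) for Q = J = 0) can be checked against the first law dM = T dS in a few lines. This is a sketch in natural units (G = c = ħ = k_B = 1); the finite-difference step size is my own choice.

```python
from math import pi, isclose

def horizon_area(M):
    # Schwarzschild horizon: r = 2M, so A = 4*pi*r**2 = 16*pi*M**2
    return 16 * pi * M ** 2

def entropy(M):
    # Bekenstein-Hawking entropy S = A/4 (natural units)
    return horizon_area(M) / 4

def hawking_temperature(M):
    # Hawking temperature for Q = J = 0
    return 1 / (8 * pi * M)

M, dM = 1.0, 1e-6
dS = entropy(M + dM) - entropy(M)
# First law with Q = J = 0: dM = T dS
assert isclose(hawking_temperature(M) * dS, dM, rel_tol=1e-4)

# Area-theorem flavor: a more massive hole has a larger horizon area.
assert horizon_area(2.0) > horizon_area(1.0)
print("S(M=1) =", entropy(1.0))
```

Note how the temperature falls as 1/M: larger black holes are colder, which is why the radiation-driven area decrease discussed above is only significant for small holes.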
https://beta.geogebra.org/m/psaRACGy | [
"# Reflecting Polygons Over the Axes\n\nUse the check boxes to see what happens when you reflect pentagon ABCDE over the x-axis, y-axis, and x- and y-axes. Pay attention to how the coordinates of specific points change!\n1) What are the coordinates of B (2,2) after it is reflected over the y-axis? 2) What are the coordinates of E (4,0) after it is reflected over the x-axis? Will this always be true for points on the line of reflection? 3) What are the coordinates of D (5,3) after it is reflected over the x- and y-axes? 4) Does reflection change the length of segments? The measure of angles? The area of the shape? 5) Can you write a general rule for reflecting over the x-axis? The y-axis? Both x- and y-axes?"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.83959556,"math_prob":0.9437005,"size":735,"snap":"2020-24-2020-29","text_gpt3_token_len":189,"char_repetition_ratio":0.16689466,"word_repetition_ratio":0.06870229,"special_character_ratio":0.2557823,"punctuation_ratio":0.11042945,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9820886,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-05-26T18:57:57Z\",\"WARC-Record-ID\":\"<urn:uuid:c8d6d768-7702-4ba3-a505-63327afb69d4>\",\"Content-Length\":\"38801\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c054bc88-02a4-41bf-969b-20e9912f9f39>\",\"WARC-Concurrent-To\":\"<urn:uuid:8f3ccbdb-826c-476e-8d55-e1ba10168ca1>\",\"WARC-IP-Address\":\"99.84.191.18\",\"WARC-Target-URI\":\"https://beta.geogebra.org/m/psaRACGy\",\"WARC-Payload-Digest\":\"sha1:3376CATMGW3TUNC63BEIWOXJ7CHZ2NKG\",\"WARC-Block-Digest\":\"sha1:HDURDXBHEJBJD2BJLZ7NP7YRRWJGTXAK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347391277.13_warc_CC-MAIN-20200526160400-20200526190400-00059.warc.gz\"}"} |
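The general rules the questions above lead toward can be written down directly: reflecting over the x-axis sends (x, y) to (x, −y), over the y-axis sends (x, y) to (−x, y), and over both axes sends (x, y) to (−x, −y). A quick sketch using the applet's points B(2, 2), E(4, 0) and D(5, 3):

```python
from math import dist

def reflect_x(p):   # reflect over the x-axis: (x, y) -> (x, -y)
    return (p[0], -p[1])

def reflect_y(p):   # reflect over the y-axis: (x, y) -> (-x, y)
    return (-p[0], p[1])

def reflect_xy(p):  # reflect over both axes: (x, y) -> (-x, -y)
    return (-p[0], -p[1])

B, E, D = (2, 2), (4, 0), (5, 3)
print(reflect_y(B))   # (-2, 2)
print(reflect_x(E))   # (4, 0): points on the line of reflection are fixed
print(reflect_xy(D))  # (-5, -3)

# Reflection is an isometry, so segment lengths are preserved.
assert dist(B, E) == dist(reflect_y(B), reflect_y(E))
```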
https://www.writingessays.org/%E7%BB%8F%E6%B5%8E%E5%AD%A6%E8%80%83%E8%AF%95%E5%8A%A9%E6%94%BB/ | [
"# Economics 3382A-001\n\n## Practice Final Exam\n\nTime: 120 minutes\n\nAnswers without explanations will not be given full credit.\n\n### Problem 1 (25 points)\n\nEach of 3 players chooses whether to contribute \$1 toward the provision of a public good. The good is provided if and only if at least one player contributes. Contributions are never refunded to the players regardless of how many players contributed. Each player receives benefit \$4 if the public good is provided, and \$0 if the public good is not provided. If a player contributes, then she pays the cost \$1; if she does not contribute, then she pays no cost.\n\n(a) Find all rationalizable strategies for each player.\n\n(b) Find all pure strategy Nash equilibria.\n\n(c) Find the symmetric mixed strategy Nash equilibrium where each player is using the same mixed strategy: contribute with probability p, not contribute with probability 1 − p (where 0 < p < 1).\n\n(d) Compute the players’ expected payoffs in all equilibria and compare these payoffs for different equilibria.\n\n### Problem 2 (25 points)\n\nConsider a two-player two-period game where each period the players play the following simultaneous move stage game:\n\nAfter the first period the players observe what was played in the first period. The payoff of each player is the sum of the payoffs from the two periods. Throughout the question restrict attention to pure strategies.\n\n(a) Can you find a subgame-perfect equilibrium of the entire game such that players play (A, a) in the first period? If your answer is ‘yes’, then find all such equilibria. If your answer is ‘no’, then explain your answer.\n\n(b) Can you find a subgame-perfect equilibrium of the entire game such that players play (B, a) in the first period, and play (B, a) in the second period if (B, a) was played in the first period? If your answer is ‘yes’, then find all such equilibria.
If your answer is ‘no’, then explain your answer.\n\n(c) Can you find a Nash equilibrium of the entire game that is not a subgame-perfect equilibrium? Explain your answer.\n\n### Problem 4 (25 points)\n\nConsider an oligopoly situation in which there are two firms and the inverse demand function is given by\n\nwhere A is equally likely to be 30 or 0. Firm 1 learns the value of A before choosing its output. Firm 2 is aware of this fact but cannot itself observe the value of A. Both firms have identical total cost functions, C (q) = 0 for every q ≥ 0. They choose their outputs simultaneously.\n\nFind the (pure strategy) Nash equilibrium of this game. (Assume that the firm produces nothing if it cannot make strictly positive profits.)"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8559343,"math_prob":0.7904584,"size":3580,"snap":"2023-40-2023-50","text_gpt3_token_len":941,"char_repetition_ratio":0.12248322,"word_repetition_ratio":0.27533785,"special_character_ratio":0.22346368,"punctuation_ratio":0.09715994,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9716045,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-26T18:52:37Z\",\"WARC-Record-ID\":\"<urn:uuid:f6e6cf92-afb6-41df-a235-7f946147ea1b>\",\"Content-Length\":\"90440\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:609cc5d0-bb99-462a-bc8c-7e0ca3ea5021>\",\"WARC-Concurrent-To\":\"<urn:uuid:92c1d37f-4339-4171-a8f4-4a2b3d023390>\",\"WARC-IP-Address\":\"104.225.145.50\",\"WARC-Target-URI\":\"https://www.writingessays.org/%E7%BB%8F%E6%B5%8E%E5%AD%A6%E8%80%83%E8%AF%95%E5%8A%A9%E6%94%BB/\",\"WARC-Payload-Digest\":\"sha1:EKNTDXTGZ32EEXNB2RWK5DFAAB77D6JF\",\"WARC-Block-Digest\":\"sha1:34SWPXMKKWAFUZF64ZQ6FEMN2ZFFESF7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510219.5_warc_CC-MAIN-20230926175325-20230926205325-00790.warc.gz\"}"} |
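For Problem 1(c) above, the symmetric mixed equilibrium follows from an indifference argument with the stated payoffs (benefit \$4, cost \$1, 3 players): a contributor earns 4 − 1 = 3 for sure, while an abstainer earns 4·(1 − (1 − p)²), the benefit times the probability that at least one of the other two players contributes. A small numerical check (the grid resolution is my own choice, and this is an illustration rather than a worked solution):

```python
# Indifference condition for the symmetric mixed Nash equilibrium of the
# public-good game in Problem 1(c): a contributor's payoff 4 - 1 = 3 must
# equal an abstainer's expected payoff 4 * P(at least one of the other two
# players contributes) = 4 * (1 - (1 - p)**2).

def payoff_contribute():
    return 4 - 1  # the good is then provided for sure

def payoff_abstain(p):
    return 4 * (1 - (1 - p) ** 2)

grid = [i / 10000 for i in range(1, 10000)]
p_star = min(grid, key=lambda p: abs(payoff_contribute() - payoff_abstain(p)))
print(p_star)  # each player contributes with probability 1/2
```

Solving the indifference equation by hand gives the same answer: 3 = 4(1 − (1 − p)²) forces (1 − p)² = 1/4, so p = 1/2.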
https://prepinsta.com/check-whether-two-trees-are-symmetrical-or-not/ | [
"# Check whether two trees are symmetrical or not.\n\nWe need to write a recursive function isSymmetrical() that takes two trees as arguments and returns true if the trees are symmetrical (mirror images of each other) and false otherwise. The function recursively compares the two roots and the subtrees under them.\n\nThe code for the above is as follows:\n\n[code language=”cpp”]\n\n#include<bits/stdc++.h>\nusing namespace std;\n\nstruct Treenode // definition of the tree node structure, which has a value and lchild/rchild pointers\n{\nint val;\nTreenode* lchild;\nTreenode* rchild;\n};\n\nstruct Treenode* newNode(int key) // function for new node creation\n{\nstruct Treenode* newnode = (struct Treenode*)malloc(sizeof(struct Treenode));\nnewnode->val = key;\nnewnode->lchild = NULL;\nnewnode->rchild = NULL;\n\nreturn(newnode);\n}\n\nbool isMirror(Treenode *root1, Treenode *root2)\n{\n// If both trees are empty, then they are mirror images\nif (!root1 && !root2)\nreturn true;\n\n// For two trees to be mirror images, the following three\n// conditions must be true\n// 1 – Their root nodes’ keys must be the same\n// 2 – The lchild subtree of the lchild tree and the rchild subtree\n// of the rchild tree have to be mirror images\n// 3 – The rchild subtree of the lchild tree and the lchild subtree\n// of the rchild tree have to be mirror images\nif (root1 && root2 && root1->val == root2->val)\nreturn isMirror(root1->lchild, root2->rchild) &&\nisMirror(root1->rchild, root2->lchild);\n\n// if neither of the above conditions is true, then root1\n// and root2 are not mirror images\nreturn false;\n}\n\n// Returns true if a single tree is symmetric, i.e. a mirror image of itself\nbool isSymmetric(Treenode* root)\n{\n// Check if the tree is a mirror of itself\nreturn isMirror(root, root);\n}\n\nint main()\n{\nTreenode *root1 = newNode(232);\nTreenode *root2 = newNode(232); // this part is concerned with the creation of the trees\n// the user can edit this as desired\nroot1->lchild = newNode(231);\nroot1->rchild = newNode(231);\nroot1->lchild->lchild = newNode(431);\nroot1->lchild->rchild = newNode(531);\n\nroot2->lchild = newNode(231);\nroot2->rchild = newNode(331);\nroot2->lchild->lchild = newNode(431);\nroot2->lchild->rchild = newNode(513);\n\nif(isMirror(root1, root2))\ncout<<\"Trees are symmetrical\\n\";\nelse\ncout<<\"Trees are not symmetrical\\n\";\n\ngetchar();\nreturn 0;\n}\n\n[/code]"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.5557778,"math_prob":0.952554,"size":2213,"snap":"2022-27-2022-33","text_gpt3_token_len":607,"char_repetition_ratio":0.17836125,"word_repetition_ratio":0.04307692,"special_character_ratio":0.2851333,"punctuation_ratio":0.110526316,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96428275,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-04T01:12:31Z\",\"WARC-Record-ID\":\"<urn:uuid:ab0bd0ed-ec79-4f20-9c84-204fcda6087f>\",\"Content-Length\":\"104640\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:931818e1-463d-4f3d-b539-f31cb9b5708c>\",\"WARC-Concurrent-To\":\"<urn:uuid:9c357d96-13fa-447a-a531-b1a02d7dd28f>\",\"WARC-IP-Address\":\"172.66.42.232\",\"WARC-Target-URI\":\"https://prepinsta.com/check-whether-two-trees-are-symmetrical-or-not/\",\"WARC-Payload-Digest\":\"sha1:2UXX4A5GHNMK2FXEXWT4KK2HSMMJKPKD\",\"WARC-Block-Digest\":\"sha1:5BUQ2XZ2Z6NZZJJ4GVFGKMD4MV4JCUL5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104277498.71_warc_CC-MAIN-20220703225409-20220704015409-00590.warc.gz\"}"} |
https://math.stackexchange.com/questions/86982/definition-of-compact-mapping | [
"# Definition of Compact Mapping\n\nI was reading around the other day and came across the term \"compact mapping\". After googling, I saw the following two definitions:\n\n1. Let $X$ be a topological space. Then a mapping $f:X \\to X$ is compact if $f^{-1}(\\{x\\})$ is compact for every $x \\in X$.\n\n2. Let $X$ be a Banach space. Then a mapping (not necessarily linear) $f:X \\to X$ is compact if the closure of $f(Y)$ is compact whenever $Y \\subset X$ is bounded.\n\nAre these definitions equivalent if $X$ is a Banach space? If not, what is the usual meaning in the context of Banach spaces? For example, Schaefer's Fixed Point Theorem states\n\nIf $X$ is a Banach space and $f:X \\to X$ is a continuous and compact mapping such that $$\\{x \\in X: x = \\lambda f(x) \\mbox{ for some } 0 \\leq \\lambda \\leq 1\\}$$ is bounded then $f$ has a fixed point.\n\nWhich definition is meant? Sorry if I am missing something obvious here.\n\n• They're not equivalent. Consider the function $f: \\mathbb{R} \\to \\mathbb{R}$ that is constantly $0$. $f$ satisfies condition 2 but not condition 1. – Grasshopper Nov 30 '11 at 6:45\n• Also, the identity mapping on an infinite-dimensional Banach space satisfies condition 1, but not condition 2. – Philip Brooker Nov 30 '11 at 7:16\n• In the context of Banach spaces (for instance, in the statement of the fixed point theorem you quoted), the second definition is (almost) always what is meant. – Adam Smith Nov 30 '11 at 7:18\n\n• Grasshopper: They're not equivalent. Consider the function $f: \\mathbb{R} \\to \\mathbb{R}$ that is constantly $0$. $f$ satisfies condition 2 but not condition 1.\nThe first comment is true, and with the following change the second definition becomes the definition of a compact operator (which, unlike a general mapping, is necessarily linear) on a Banach space: Let $X$ be a Banach space. Then a linear operator $f:X\\to X$ is compact if $f(Y)$ is relatively compact (i.e. the closure of $f(Y)$ is compact) whenever $Y\\subset X$ is bounded."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8919351,"math_prob":0.99886703,"size":863,"snap":"2020-34-2020-40","text_gpt3_token_len":249,"char_repetition_ratio":0.13969733,"word_repetition_ratio":0.025806451,"special_character_ratio":0.28273463,"punctuation_ratio":0.0989011,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9997528,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-12T01:26:10Z\",\"WARC-Record-ID\":\"<urn:uuid:f26962df-896c-4c47-b218-9aca2916a212>\",\"Content-Length\":\"158273\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7ad3069f-2532-432a-89c6-e4d2280a4b30>\",\"WARC-Concurrent-To\":\"<urn:uuid:2de6119a-ab8e-4ac8-885d-daf02b78471e>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/86982/definition-of-compact-mapping\",\"WARC-Payload-Digest\":\"sha1:WGF46RZHYJH4DG3V5G7FSB4PDACQSRM3\",\"WARC-Block-Digest\":\"sha1:VJ7BNPM7N7UDJZKUG4GXDCG37MI4XUBO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439738858.45_warc_CC-MAIN-20200811235207-20200812025207-00429.warc.gz\"}"} |
https://socratic.org/questions/how-do-you-solve-the-equation-log-2x-3 | [
"How do you solve the equation log_2x=-3?\n\nMar 22, 2018\n\ncolor(purple)(x = 1/8 = 0.125\n\nExplanation:",
null,
"From the above table,\n\n${\\log}_{a} m = n , \\text{ then } m = {a}^{n}$\n\nGiven : ${\\log}_{2} x = - 3$\n\n$\\therefore x = {2}^{- 3} = \\frac{1}{2} ^ 3 = \\frac{1}{8} = 0.125$"
]
| [
null,
"https://d2jmvrsizmvf4x.cloudfront.net/fGHJhkEYTvOnZpFjSCdB_log+laws.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.5403478,"math_prob":1.0000099,"size":242,"snap":"2019-26-2019-30","text_gpt3_token_len":62,"char_repetition_ratio":0.12605043,"word_repetition_ratio":0.0,"special_character_ratio":0.25619835,"punctuation_ratio":0.13043478,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.000001,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-15T22:32:56Z\",\"WARC-Record-ID\":\"<urn:uuid:34c97a1f-1d1e-42db-bf5c-51a95f0c87a4>\",\"Content-Length\":\"32317\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6d6d02d3-efac-469c-9001-e9d754365c1c>\",\"WARC-Concurrent-To\":\"<urn:uuid:471a462d-6ba2-4250-a099-687f5f65dc37>\",\"WARC-IP-Address\":\"54.221.217.175\",\"WARC-Target-URI\":\"https://socratic.org/questions/how-do-you-solve-the-equation-log-2x-3\",\"WARC-Payload-Digest\":\"sha1:765Z7DW53I5BTCXXSREYFT4PEJDMN53G\",\"WARC-Block-Digest\":\"sha1:VANSRNOIQEGVQRH53CUGJ5253S2I2DYX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627997501.61_warc_CC-MAIN-20190615222657-20190616004657-00460.warc.gz\"}"} |
https://optovr.com/geometry-worksheets-congruent-similar-shapes-symmetry/ | [
"# Geometry Worksheets Congruent Similar Shapes Symmetry",
null,
"Geometry Worksheets Congruent Similar Shapes Symmetry.\n\nPrintable on identifying similar and congruent shapes. symmetry worksheets. several symmetry activities and printable worksheets. Congruence and symmetry in geometry. Congruent and similar figures - symmetry worksheets are similar triangles date period, the similar congruent house, figures work, similar congruent shapes, activity for similarity and congruence, similarity congruence h, congruence and triangles, lesson title similarity and congruence.\n\nStudents identify congruent shapes download symmetry worksheets. Introduction congruence home learning maths symmetry congruent worksheets. Sample rotational symmetry worksheet templates free premium congruent worksheets. Symmetry fun worksheet grade lesson planet congruent worksheets. Congruent shapes worksheet teachers pay symmetry worksheets. Math facts symmetry worksheets free reading comprehension elementary worksheet mind puzzle games grade 3 multiplication writing decimals fractions home printable fraction congruent. Congruence symmetry video download congruent worksheets."
]
| [
null,
"https://optovr.com/images/geometry-worksheets-congruent-similar-shapes-symmetry.jpg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.7748864,"math_prob":0.59233063,"size":1149,"snap":"2021-31-2021-39","text_gpt3_token_len":193,"char_repetition_ratio":0.28733623,"word_repetition_ratio":0.015151516,"special_character_ratio":0.13664056,"punctuation_ratio":0.12903225,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99661803,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-25T03:31:27Z\",\"WARC-Record-ID\":\"<urn:uuid:04cce482-12c9-4e0f-89a5-6e9515689632>\",\"Content-Length\":\"20725\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f1b55c1a-cc61-4c93-8369-7c2406e31215>\",\"WARC-Concurrent-To\":\"<urn:uuid:c485b3ae-8d66-47c3-9a54-9a5152f5401c>\",\"WARC-IP-Address\":\"172.67.133.61\",\"WARC-Target-URI\":\"https://optovr.com/geometry-worksheets-congruent-similar-shapes-symmetry/\",\"WARC-Payload-Digest\":\"sha1:VLPTOZWCITN25FBZN7Q2CBVV5QNZ5L7G\",\"WARC-Block-Digest\":\"sha1:IE7QAGR3G727F5HQ7JNQQAXCWVLM5AHG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046151563.91_warc_CC-MAIN-20210725014052-20210725044052-00149.warc.gz\"}"} |
https://www.fresherslive.com/placement-papers/motorola-interview-questions-answers/5 | [
"# Motorola Aptitude Test Questions 07 Dec 2014\n\nPosted on :11-03-2016\n\nQuantitative Ability:\n\nQ1. If 5 spiders can catch five flies in five minutes. How many flies can hundred spiders catch in 100 minutes?\n\nA. 100\nB. 1000\nC. 500\nD. 2000\n\nANS: D\n\nExplanation:\n\nOne spider catches one fly in 5 minutes. 100 spider catches 100 fly in 5 minutes. In 100 minutes 100 - 20 = 2000 flies will be caught.\n\nQ2. The average of 5 quantities is 6. The average of 3 of them is 8. What is the average of the remaining two numbers?\n\nA. 6.5\nB. 4\nC. 3\nD. 3.5\n\nANS: C\n\nExplanation:\n\nThe average of 5 quantities is 6. Therefore, the sum of the 5 quantities is 5 * 6 = 30. The average of three of these 5 quantities is 8. Therefore, the sum of these three quantities = 3 * 8 = 24. The sum of the remaining two quantities = 30 - 24 = 6. Average of these two quantities = 6/2= 3.\n\nQ3. What is the total worth of Lakhirams assets?\nStatement 1: Compound interest at 10% on his assets, followed by a tax of 4% on the interest, fetches him Rs. 15000 this year.\nStatement 2: The interest is compounded once every four months. Let Lakhirams assets be worth Rs. X.\n\nA. using 1st Statement only\nB. using 2nd statement only\nC. using both 1st and 2nd statement\nD. using 1st or 2nd statement\n\nANS: C\n\nExplanation:\n\nIn the case of compound interest, the period of reckoning or calculation of CI is very important. This information is given in statement (b). The annual CI rate is 10%, so the rate for 4 months is (4/12) 10 = (10/3) %. So the total CI after one year, in terms of X, may be written as: CI = X [(1 + ((10/3)/100)]3, because in a year, there are 3 terms of 4 months. This interest is followed by a tax of 4% paid by him which ultimately fetches Lakhiram Rs. 1500. This data us to find the value of X, so the answer is (3).\n\nQ4. Find the next term in series? 17 14 14 11 11 8 8\n\nA. 8 5\nB. 5 2\nC. 8 2\nD. 
5 5\n\nANS: D\n\nExplanation:\n\nIn this simple subtraction with repetition series, each number is repeated, then 3 is subtracted to give the next number, which is then repeated, and so on.\n\nQ5. The function f(x) = |x - 2| + |2.5 - x| + |3.6 - x|, where x is a real number, attains a minimum at:\n\nA. x = 2.3\nB. x = 2.5\nC. x = 2.7\nD. none of the above\n\nANS: B\n\nQ6. A rainy day occurs once in every 10 days. Half of the rainy days produce rainbows. What percent of all the days do not produce a rainbow?\n\nA. 95%\nB. 10%\nC. 50%\nD. 5%\n\nANS: A\n\nExplanation:\n\nTwo rainy days occur in every 20 days, so a rainbow will occur once in 20 days. The remaining 19 days will have no rainbow. % of days not producing rainbows = 19/20 * 100 = 95%\n\nQ7. What is the ratio of the two liquids A and B in the mixture finally, if these two liquids kept in three vessels are mixed together?\nStatement 1: The ratio of liquid A to liquid B in the first and second vessel is, respectively, 3: 5, 2: 3.\nStatement 2: The ratio of liquid A to liquid B in vessel 3 is 4:3.\n\nA. using 1st Statement only\nB. using 2nd statement only\nC. using both 1st and 2nd statement\nD. using 1st or 2nd statement\nE. Cannot be answered even by using both the statements\n\nANS: E\n\nExplanation:\n\nAlthough the containers are of equal volume, it is not known to what extent these containers are filled by the liquids A and B (i.e. the first container might be half full, while the second might be two-thirds full). Until such details are known, the final ratio of liquids A and B cannot be found out.\n\nQ8. What is the number of type 2 widgets produced, if the total number of widgets produced is 20,000?\nStatement 1: If the production of type - 1 widgets increases by 10% and that of type-2 decreases by 6%, the total production remains the same.\nStatement 2: The ratio in which type - 1 and type - 2 widgets are produced is 2: 1. If the number of type - 1 widgets produced is A and that of type - 2 widgets is B,\n\nA. 
using 1st Statement only\nB. using 2nd statement only\nC. using both 1st and 2nd statement\nD. using 1st or 2nd statement\n\nANS: D\n\nExplanation:\n\nThen we get the basic equation [A + B = 20,000] from the data in the question. From the 1st statement, we get [1.1 A + 0.94 B = 20,000]. This is enough to give us the value of B. Similarly, from the 2nd statement, we get A = 2B. This is also enough to give us the value of B. Since either statement alone is sufficient, the answer is (D).\n\nQ9. How many different triangles can be formed?\nStatement 1: There are 16 coplanar, straight lines in all.\nStatement 2: No two lines are parallel.\n\nA. using 1st Statement only\nB. using 2nd statement only\nC. using both 1st and 2nd statement\nD. using 1st or 2nd statement\nE. Cannot be answered even by using both the statements\n\nANS: E\n\nExplanation:\n\nAlthough it is known that none of the lines are parallel to each other, there might be the case wherein all the lines have exactly one point of intersection, or eight lines with one point and the other eight with another point of intersection. Unless something about the relative arrangement of these lines is known, one cannot arrive at a definite answer.\n\nQ10. How old is Sachin in 1997?\nStatement 1: Sachin is 11 years younger than Anil, whose age will be a prime number in 1998.\nStatement 2: Anil's age was a prime number in 1996.\n\nA. using 1st Statement only\nB. using 2nd statement only\nC. using both 1st and 2nd statement\nD. using 1st or 2nd statement\nE. Cannot be answered even by using both the statements\n\nANS: E\n\nExplanation:\n\nAnil's age was a prime number in 1996 and 1998. So Anil's age in these two years can be any pair of numbers which are prime and differ by 2. We have many such pairs - (3, 5), (5, 7), (11, 13), ... - and it is not possible to arrive at a unique answer.\n\nQ11. A sequence of numbers a_1, a_2, ... is given by the rule a_n^2 = a_(n+1). Does 3 appear in the sequence?\nStatement 1: a_1 = 2\nStatement 2: a_3 = 16\n\nA. using 1st Statement only\nB. using 2nd statement only\nC. 
using both 1st and 2nd statement\nD. using 1st or 2nd statement\n\nANS: D\n\nExplanation:\n\nPut n = 1, 2, 3, ... in a_n^2 = a_(n+1): a_1^2 = a_2, a_2^2 = a_3, a_3^2 = a_4, etc. From statement 1: a_1^2 = a_2, i.e. 2^2 = a_2, or a_2 = 4. Now, a_2^2 = a_3, i.e. 4^2 = a_3, or a_3 = 16, etc. Thus, a_1 = 2, a_2 = 4, a_3 = 16, etc.\n\nQ12. A, B, C, D, E are five positive numbers. A + B < C + D, B + C < D + E, C + D < E + A. Is A the greatest?\nStatement 1: D + E < A + B.\nStatement 2: E < C.\n\nA. using 1st Statement only\nB. using 2nd statement only\nC. using both 1st and 2nd statement\nD. using 1st or 2nd statement\n\nANS: C\n\nExplanation:\n\nWe have (1) A + B < C + D, (2) B + C < D + E, (3) C + D < E + A, (4) D + E < A + B and (5) E < C. Adding (1) and (3) gives B < E; adding (2) and (4) gives C < A. With (5), B < E < C < A, and (3) together with E < C gives D < A. Hence A is the greatest, using both statements.\n\nReasoning Ability:\n\nDIRECTIONS for Questions 13 to 16: Answer the questions on the basis of the information given below:\n\nA and B are two sets (e.g. A = mothers, B = women). The elements that could belong to both the sets (e.g. women who are mothers) is given by the set C = A.B. The elements which could belong to either A or B, or both, is indicated by the set D = AOB. A set that does not contain any elements is known as a null set, represented by @ (for example, if none of the women in the set B is a mother, then C = A.B is a null set, or C = @). Let V signify the set of all vertebrates; M the set of all mammals; D dogs; F fish; A Alsatians and P a dog named Pluto. If P.A = @ and P U A = D:\n\nQ13. Given that X = M.D. is such that X = D, which of the following is true?\n\nA. All dogs are mammals\nB. Some dogs are mammals.\nC. X = @\nD. All mammals are dogs.\n\nANS: A\n\nExplanation:\n\nX = Mammals ∩ Dogs = Dogs, hence dogs are mammals.\n\nQ14. Which of the following is true?\n\nA. Pluto and Alsatian are dogs\nB. Pluto is an Alsatian\nC. Pluto is not an Alsatian\nD. D is a null set.\n\nANS: A\n\nExplanation:\n\nP.A = @ implies Pluto is not an Alsatian, but P U A = D implies both P and A are dogs.\n\nQ15. If Z = (P.D.) OM, then\n\nA. 
The elements of Z consist of Pluto the dog or any other mammal.\nB. Z implies any dog or mammal.\nC. Z implies Pluto or any dog that is a mammal.\nD. Z is a null set.\n\nANS: A\n\nExplanation:\n\nZ = (Pluto ∩ Dogs) U Mammals = Pluto U Mammals, i.e. Pluto the dog or any other mammal.\n\nQ16. If Y = F.(D.V.) is not a null set, it implies that:\n\nA. All fish are vertebrates.\nB. All dogs are vertebrates.\nC. Some fish are dogs.\nD. None of the above.\n\nANS: C\n\nExplanation:\n\nFish ∩ (Dogs ∩ Vertebrates) not equal to @ implies that some elements are common between Fish and Dogs.\n\nDIRECTIONS for Questions 17 to 19: Answer the questions on the basis of the information given below.\n\nIn a local pet store, seven puppies wait to be introduced to their new owners. The puppies, named Ashlen, Blakely, Custard, Daffy, Earl, Fala and Gabino, are all kept in two available pens. Pen 1 holds three puppies, and pen 2 holds four puppies. If Gabino is kept in pen 1, then Daffy is not kept in pen 2. If Daffy is not kept in pen 2, then Gabino is kept in pen 1. If Ashlen is kept in pen 2, then Blakely is not kept in pen 2. If Blakely is kept in pen 1, then Ashlen is not kept in pen 1.\n\nQ17. If Earl shares a pen with Fala, then which of the following MUST be true?\n\nA. Gabino is in pen 1 with Daffy.\nB. Custard is in pen 2.\nC. Blakely is in pen 2 and Fala is in pen 1.\nD. Earl is in pen 1.\n\nANS: B\n\nExplanation:\n\nIf Earl shares a pen with Fala, then Earl and Fala can both be either in pen 1 or in pen 2. Now, if Earl and Fala are both in pen 1, then one of Ashlen and Blakely has to be in pen 2, as they both cannot be together in one pen; therefore Custard has to be in pen 2. If Earl and Fala are both in pen 2, then also one of Ashlen and Blakely has to be in pen 2, and Gabino and Daffy will be in pen 1 with the other of Ashlen and Blakely; therefore Custard will be in pen 2. So in both cases Custard will be in pen 2. Hence, option B.\n\nQ18. Which of the following groups of puppies could be in pen 2?\n\nA. 
Gabino, Daffy, Custard, Earl.\nB. Blakely, Gabino, Ashlen, Daffy\nC. Ashlen, Gabino, Earl, Custard\nD. Blakely, Custard, Earl, Fala.\n\nANS: D\n\nExplanation:\n\nConsider option A: If Gabino, Daffy, Custard and Earl are in pen 2, then Ashlen and Blakely will be in pen 1, which is not possible according to the last condition given. Therefore option A is not correct. Consider option B: According to condition 3, Ashlen and Blakely cannot both be in pen 2 together. Therefore option B is not correct. Consider option C: In the second condition it is given that if Daffy is not kept in pen 2, then Gabino is kept in pen 1; here Daffy would be in pen 1 but Gabino in pen 2. Therefore option C is not correct. Hence, option D.\n\nQ19. If Earl and Fala are in different pens, then which of the following must NOT be true?\n\nA. Gabino shares a pen with Ashlen.\nB. Earl is in a higher-numbered pen than Blakely.\nC. Blakely shares pen 2 with Earl and Daffy.\nD. Custard is in a higher-numbered pen than Fala.\n\nANS: D\n\nDIRECTIONS for Questions 20 to 24:\n\nAnswer the questions on the basis of the information given below:\n\nFive numbers A, B, C, D and E are to be arranged in an array in such a manner that any two consecutive numbers have a common prime factor. These integers are such that: A has a prime factor P; B has two prime factors Q and R; C has two prime factors Q and S; D has two prime factors P and S; E has two prime factors P and R.\n\nQ20. Which of the following is an acceptable order, from left to right, in which the numbers can be arranged?\n\nA. D, E, B, C, A\nB. B, A, E, D, C\nC. B, C, D, E, A\nD. B, C, E, D, A\n\nANS: C\n\nExplanation:\n\nNo.1 A ---- P D----(S/ P) E ----- (P/R) B ------ (R/Q) C ----- (Q/S) OR D ---(S/P) A---- P E ---- (P/R) B -----(R/Q) C ----- (Q/S) NO.2 A ---- P E ----- (P/R) B ------ (R/Q) C ----- (Q/S) D----(S/ P) NO.3 A ---- P E ----- (R/P) D----(P/S) C ----- (S/Q) B ------ (Q/R)\n\nQ21. If number E is not in the list and the other four numbers are arranged properly, which of the following must be true?\n\nA. 
A and D cannot be the consecutive numbers.\nB. A and B are to be placed at the two ends in the array.\nC. A and C are to be placed at the two ends in the array.\nD. C and D cannot be the consecutive numbers.\n\nANS: B\n\nExplanation:\n\nA ---- P D----(P/S) C ----- (S/Q) B ------ (Q/R)\n\nQ22. If B must be arranged at one end in the array, in how many ways can the other four numbers be arranged?\n\nA. 1\nB. 2\nC. 3\nD. 4\n\nANS: B\n\nExplanation:\n\nB ------ (R/Q) C ----- (Q/S) D----(S/ P) E ----- (P/R) A ---- P OR B ------ (R/Q) E ----- (P/R) A ---- P D----(S/ P) C ----- (Q/S)\n\nQ23. If the number E is arranged in the middle with two numbers on either side of it, all of the following must be true, EXCEPT:\n\nA. A and D are arranged consecutively\nB. B and C are arranged consecutively\nC. B and E are arranged consecutively\nD. A is arranged at one end in the array\n\nANS: D\n\nQ24. If number B is not in the list and the other four numbers are arranged properly, which of the following must be true?\n\nA. A is arranged at one end in the array.\nB. C is arranged at one end in the array.\nC. D is arranged at one end in the array.\nD. E is arranged at one end in the array.\n\nANS: B\n\nExplanation:\n\nA ---- P E ----- (P/R) D----(S/ P) C ----- (Q/S) or E ----- (P/R) A ---- P D----(S/ P) C ----- (Q/S); in both arrangements C is at one end.\n\nDIRECTIONS for Questions 25 to 29:\n\nAnswer the questions on the basis of the information given below.\n\nFive colleagues pooled their efforts during the office lunch-hour to solve the crossword in the daily paper. Colleagues: Mr. Bineet, Mr. Easwar, Ms. Elsie, Ms. Sheela, Ms. Titli. Answers: Burden, Barely, Baadshah, Rosebud, Silence. Numbers: 4 down, 8 across, 15 across, 15 down, 21 across. Order: First, second, third, fourth, fifth.\n\n1. Titli produced the answer to 8 across, which had the same number of letters as the previous answer to be inserted, and one more than the subsequent answer, which was produced by one of the men.\n2. 
It was not Bineet who solved the clue to Burden, and Easwar did not solve 4 down.\n3. The answers to 15 across and 15 down did not have the same number of letters.\n4. Silence, which was not the third word to be inserted, was the answer to an across clue.\n5. Barely was the first word to be entered in the grid, but Baadshah was not the second answer to be found.\n6. Elsie's word was longer than Bineet's; Sheela was neither the first nor the last to come up with an answer.\n\nQ25. The fifth one to be worked out was an answer to an across clue. What was Sheela's word?\n\nB. Silence\nC. Rosebud\nD. Barely\n\nANS: B\n\nQ26. What was Bineet's word?\n\nA. Barely\nB. Burden\nC. Silence\nD. Rosebud\n\nANS: A\n\nQ27. What was Titli's order?\n\nA. First\nB. Second\nC. Third\nD. Fourth\n\nANS: C\n\nQ28. What could be Titli's answer?\n\nA. First\nB. Second\nC. Third\nD. Fourth\n\nANS: C\n\nQ29. What was Easwar's number?\n\nA. 4 down\nB. 21 across\nC. 8 across\nD. 15 down\n\nANS:\n\nQ30. Around a circular table six persons A, B, C, D, E and F are sitting. Who is on the immediate left of A?\nStatement 1: B is opposite to C and D is opposite to E.\nStatement 2: F is on the immediate left of B and D is to the left of B.\n\nA. using 1st Statement only\nB. using 2nd statement only\nC. using both 1st and 2nd statement\nD. using 1st or 2nd statement\n\nANS: C"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8709732,"math_prob":0.9568078,"size":18360,"snap":"2020-34-2020-40","text_gpt3_token_len":5690,"char_repetition_ratio":0.13516016,"word_repetition_ratio":0.14359713,"special_character_ratio":0.28077343,"punctuation_ratio":0.1534105,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99256355,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-07T21:43:14Z\",\"WARC-Record-ID\":\"<urn:uuid:3cd1bed9-004d-42f0-9355-2bbc500dc080>\",\"Content-Length\":\"295129\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:001aa0c2-e353-4943-b498-4a7ebcea73bd>\",\"WARC-Concurrent-To\":\"<urn:uuid:a5a228da-90c6-48e1-aacc-d7c4a97cccc6>\",\"WARC-IP-Address\":\"104.22.78.185\",\"WARC-Target-URI\":\"https://www.fresherslive.com/placement-papers/motorola-interview-questions-answers/5\",\"WARC-Payload-Digest\":\"sha1:UW6FIX2RTWATB2B7NEE6Q7U76Z7ORWYC\",\"WARC-Block-Digest\":\"sha1:C5CQPOZEFXP4K54IBVQZ5FM5T7W7VSOJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439737225.57_warc_CC-MAIN-20200807202502-20200807232502-00296.warc.gz\"}"} |
https://forexsb.com/forum/topic/7529/how-to-get-total-history-profit-from-portfolio-ea/ | [
"# forex software\n\nCreate and Test Forex Strategies\n\nforex software\n\nForex Forum\n\nForex Software, Forex Strategies, Expert Advisors Generator\n\n# How to get Total History Profit from Portfolio ea?\n\nForex Forum → How to get Total History Profit from Portfolio ea?\n\nPages 1\n\n## Posts: 8\n\n#### Topic: How to get Total History Profit from Portfolio ea?\n\nHi,\n\nhow can i get the total Profit of an Portfolio EA?\n\nOn normal Eas this kind of code is working well.\n\n``````double Total_History_Profit()\n{\ndouble totalhistoryprofit=0;\nint total=OrdersHistoryTotal();\nfor(int i=0;i<total;i++)\n{\nif(OrderSelect(i,SELECT_BY_POS, MODE_HISTORY)==true)\n{\nif(OrderSymbol()==Symbol()&& OrderMagicNumber()==Magic_Number)\n{\ntotalhistoryprofit+= OrderSwap()+OrderProfit();\n}\n}\n}\nreturn(totalhistoryprofit);\n} ``````\n\nBut cause Portfolio have different magicnumbers.. What did i have to put in this line instead of Magic_Number?\n\n`` if(OrderSymbol()==Symbol()&& OrderMagicNumber()==Magic_Number)``\n\n#### Re: How to get Total History Profit from Portfolio ea?\n\nA Portfolio Expert opens positions with different Magic numbers for each included strategy.\n\nWe are currently developing a tool in EA Studio that will be able to calculate the stats form a particular date. It will help you for that case.\n\nOn the other hand, if you need to calculate the profit in the expert, you have to take into account all used magic numbers.\n\nThe positions' Magic Number calculation is:\n\n``````int GetMagicNumber(int strategyIndex)\n{\nint magic=1000*Base_Magic_Number+strategyIndex;\nreturn (magic);\n}``````\n\nThe strategyIndex is from 0 to the count of the strategies - 1.\n\n..\n\nNow I think that probably adding trading info in the Portfolio Expert is a good idea. 
It may be like in the FSB Pro's experts and plot on the chart something like:\nTotal strategies: 32\nLong positions count: 11, lots: 0.11, profit: 333\nShort positions count: 03, lots: 0.03, profit: -24\n\n#### Re: How to get Total History Profit from Portfolio ea?\n\nPopov wrote:\n\nA Portfolio Expert opens positions with different Magic numbers for each included strategy.\n\nWe are currently developing a tool in EA Studio that will be able to calculate the stats from a particular date. It will help you in that case.\n\nOn the other hand, if you need to calculate the profit in the expert, you have to take into account all used magic numbers.\n\nThe positions' Magic Number calculation is:\n\n``````int GetMagicNumber(int strategyIndex)\n{\nint magic=1000*Base_Magic_Number+strategyIndex;\nreturn (magic);\n}``````\n\nThe strategyIndex is from 0 to the count of the strategies - 1.\n\n..\n\nNow I think that probably adding trading info in the Portfolio Expert is a good idea. It may be like in the FSB Pro's experts and plot on the chart something like:\nTotal strategies: 32\nLong positions count: 11, lots: 0.11, profit: 333\nShort positions count: 03, lots: 0.03, profit: -24\n\nYes, I saw that calculation, but when I try to implement it in my functions it gives me errors;\nsomething like GetMagicNumber() doesn't work in my code.\n\nWe would have to put the calculated magic number in a global variable, then it might work, but I don't know how to solve this.\n\n#### Re: How to get Total History Profit from Portfolio ea?\n\nHmm, can nobody help out with how I can get the calculated magic numbers for that? 
Because when I change the base magic number, it should also change in my profit calculation.\n\nI have tried this, but it doesn't work:\n\n`` if(OrderSymbol()==Symbol()&& OrderMagicNumber()== GetMagicNumber());``\n\n#### Re: How to get Total History Profit from Portfolio ea?\n\nYou can check if a position's magic number is from that portfolio expert this way:\n\n``````const int magicNumber = OrderMagicNumber();\nif (1000 * Base_Magic_Number <= magicNumber && magicNumber < (1000 * Base_Magic_Number + 100))\n{\n// Do the things here\n}``````\n\n#### Re: How to get Total History Profit from Portfolio ea?\n\nPopov wrote:\n\nYou can check if a position's magic number is from that portfolio expert this way:\n\n``````const int magicNumber = OrderMagicNumber();\nif (1000 * Base_Magic_Number <= magicNumber && magicNumber < (1000 * Base_Magic_Number + 100))\n{\n// Do the things here\n}``````\n\nI tried it out and put your example in the code like this:\n\n``````double Total_History_Profit()\n{\nconst int magicNumber = OrderMagicNumber();\ndouble totalhistoryprofit=0;\nint total=OrdersHistoryTotal();\nfor(int i=0;i<total;i++)\n{\nif(OrderSelect(i,SELECT_BY_POS, MODE_HISTORY)==true)\n{\nif (1000 * Base_Magic_Number <= magicNumber && magicNumber < (1000 * Base_Magic_Number + 100))\nif(OrderSymbol()==Symbol())\n{\ntotalhistoryprofit+= OrderSwap()+OrderProfit();\n}\n}\n}\nreturn(totalhistoryprofit);\n} ``````\n\nIt also isn't working like this:\n\n``````double Total_History_Profit()\n{\nconst int magicNumber = OrderMagicNumber();\ndouble totalhistoryprofit=0;\nint total=OrdersHistoryTotal();\nfor(int i=0;i<total;i++)\n{\nif(OrderSelect(i,SELECT_BY_POS, MODE_HISTORY)==true)\n{\nif(OrderSymbol()==Symbol() && (1000 * Base_Magic_Number <= magicNumber && magicNumber < (1000 * Base_Magic_Number + 100)))\n{\ntotalhistoryprofit+= OrderSwap()+OrderProfit();\n}\n}\n}\nreturn(totalhistoryprofit);\n} ``````\n\nBut I get the complete history total profit, not only the trades from the magic numbers.\nWhat about when I use 
the portfolio 4 times: one base magic number 400, one 401, one 402. So if the code runs on base number 402 and the code is\n\n``(1000 * Base_Magic_Number <= magicNumber && magicNumber < (1000 * Base_Magic_Number + 100)``\n\nthen when an expert has a smaller base number, it will count everything together.\n\n#### Re: How to get Total History Profit from Portfolio ea?\n\nMove \"const int magicNumber = OrderMagicNumber();\" within the \"if(OrderSelect(i,SELECT_BY_POS, MODE_HISTORY)==true)\".\n\nLike this:\n\n`````` for(int i=0;i<total;i++)\n{\nif(OrderSelect(i,SELECT_BY_POS, MODE_HISTORY)==true)\n{\nconst int magicNumber = OrderMagicNumber();\nif(OrderSymbol()==Symbol() && (1000 * Base_Magic_Number <= magicNumber && magicNumber < (1000 * Base_Magic_Number + 100)))``````\n\n#### Re: How to get Total History Profit from Portfolio ea?\n\nPopov wrote:\n\nMove \"const int magicNumber = OrderMagicNumber();\" within the \"if(OrderSelect(i,SELECT_BY_POS, MODE_HISTORY)==true)\".\n\nLike this:\n\n`````` for(int i=0;i<total;i++)\n{\nif(OrderSelect(i,SELECT_BY_POS, MODE_HISTORY)==true)\n{\nconst int magicNumber = OrderMagicNumber();\nif(OrderSymbol()==Symbol() && (1000 * Base_Magic_Number <= magicNumber && magicNumber < (1000 * Base_Magic_Number + 100)))``````\n\nYes, nice, it works. Thank you."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.6041003,"math_prob":0.9814142,"size":707,"snap":"2021-43-2021-49","text_gpt3_token_len":171,"char_repetition_ratio":0.14793742,"word_repetition_ratio":0.0,"special_character_ratio":0.2545969,"punctuation_ratio":0.1495327,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98579055,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-25T20:05:51Z\",\"WARC-Record-ID\":\"<urn:uuid:6cb1aa65-f19b-460c-abf1-8ae88f6fec56>\",\"Content-Length\":\"32515\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:04002c20-3487-408b-8674-3343ecd1a47d>\",\"WARC-Concurrent-To\":\"<urn:uuid:d1e343f0-aa22-4bbe-8fe4-7f99f135a99f>\",\"WARC-IP-Address\":\"161.97.129.131\",\"WARC-Target-URI\":\"https://forexsb.com/forum/topic/7529/how-to-get-total-history-profit-from-portfolio-ea/\",\"WARC-Payload-Digest\":\"sha1:GETP53HPPBXVWUTZISV5U2JVFVAGUSN7\",\"WARC-Block-Digest\":\"sha1:G2UIUT37LD2BXHWGZU4SF2US4NSNSQYK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323587767.18_warc_CC-MAIN-20211025185311-20211025215311-00209.warc.gz\"}"} |
https://stackoverflow.com/questions/47385374/lambda-calculus-in-c-booleans-and-not-operator | [
"# lambda-calculus in C: Booleans and NOT operator\n\nI wanted to give implementations in different programming languages of the lambda-calculus construction of the booleans and the NOT operator.\n\nThese are:\n\n``````TRUE = lx.ly. x\nFALSE = lx.ly. y\nNOT = lx. x FALSE TRUE\n``````\n\nIt's trivial to do in Javascript and Python say, like\n\n``````var TRUE = function(x,y){ return x;}\nvar FALSE = function(x,y){ return y;}\nvar NOT = function(b){ return b(FALSE,TRUE) ; }\n``````\n\nbut I can't figure out how to do it in C.\n\nThe naive idea of implementing something like this\n\n``````lambda true(lambda x, lambda y){ return x ; }\nlambda false(lambda x, lambda y){ return x ; }\n\nlambda not(lambda (b)(lambda, lambda) ){ return b(false,true) ;}\n``````\n\ndoesn't seem possible in C, as `typedef` doesn't allow a recursive definition\n\n``````typedef void (*lambda)(lambda,lambda) ;not valid in C\n``````\n\nIs there a way to do it in C? and is there a way to do it that is meaningful to use as an educational example? That is, if the syntax starts getting to cumbersome it ends up defeating its purpose...\n\nFinally, if C ends up being too limited in any way, an answer in C++ would also work for me, albeit with the same \"complexity\" constraint\n\nI may be expecting too much of C.\n\nEDIT: Following the suggestion by Tom in the comments, the following definitions do compile\n\n``````typedef void *(*bol)() ;\n\nbol true(bol x, bol y){ return x ; }\nbol false(bol x, bol y){ return x ; }\n\nbol not(bol b ){ return b(false,true) ;}\n\nint main(){\nbol p = not((bol)true);\n\nreturn 0;\n}\n``````\n\nEDIT2: This, however, is not strictly conforming as Tom and others have pointed out.\n\nFurthermore, as @Antti Haapala, and @n.m point out, this may be asking too much of C.\n\nAt this point I'm skeptical that there could be a simple enough implementation in C++.\n\n• Comments are not for extended discussion; this conversation has been moved to chat. 
– Andy Nov 20 '17 at 17:46
• @Andy Now that's a bad timing... because of late. The discussion was long over. This thread is dead now without the enlightening comments from Tom, n.m. and Antti. This helps nobody. – MASL Nov 20 '17 at 18:25
• Would it be acceptable to write a mini-interpreter that would understand lambda-code? – Ring Ø Nov 21 '17 at 5:26
• @MASL if there were comments of lasting value here or later in chat they should be moved into an (existing) answer so 1) they're summarised and searchable unlike comments and 2) can be voted on and 3) less liable to deletion and more prominently accessible. – Jon Clements Nov 21 '17 at 13:47
• @JonClements So what's the point of comments then? None of the comments left, this one included, are really of much value for anyone landing here, let alone of a lasting one. Maybe stackoverflow should automatically move all comments to a trash after a few hours... – MASL Nov 22 '17 at 4:25

The only way I know in C to declare recursive declarations is by using struct, like this:

```
#include <stdio.h>
#include <stdarg.h>

typedef struct LAMBDA {
    struct LAMBDA * (*function)(struct LAMBDA *, ...);
} *lambda;

lambda trueFunction(lambda x, ...) {return x;}
lambda true = &(struct LAMBDA) {.function = trueFunction};

lambda falseFunction(lambda x, ...) {va_list argp; va_start(argp, x); lambda y = va_arg(argp, lambda); va_end(argp); return y;}
lambda false = &(struct LAMBDA) {.function = falseFunction};

lambda notFunction(lambda b, ...) {return b->function(false, true);}
lambda not = &(struct LAMBDA) {.function = notFunction};

int main() {
    lambda p1 = not->function(true);
    lambda p2 = not->function(false);
    printf("%p %p %p %p", true, p1, false, p2);
    return 0;
}
```

It's hard for me to judge whether such syntax is too cumbersome or not; it is obviously less clear than in dynamic languages.

• Nice. This is the other alternative Tom suggested.
Yes, I think it's difficult to make a point about lambda calculus using this example as the syntax starts getting in the way, but it deserves a +1 for showing such a use of struct in this case. One note: that printf requires the arguments to be cast into `void *` to make the code strictly compliant (gcc -pedantic). – MASL Nov 22 '17 at 4:43
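For comparison, here is the same Church-boolean construction sketched in Python (an illustrative addition, not part of the original thread). In a dynamically typed language the recursive-type problem discussed above never comes up, because functions are just untyped values:

```python
# Church booleans: a boolean is a function that chooses one of two arguments.
TRUE = lambda x, y: x    # lx.ly. x
FALSE = lambda x, y: y   # lx.ly. y

# NOT b = b FALSE TRUE: applying b swaps the selector.
NOT = lambda b: b(FALSE, TRUE)

def to_bool(b):
    """Convert a Church boolean to a native bool for inspection."""
    return b(True, False)

print(to_bool(TRUE))        # True
print(to_bool(NOT(TRUE)))   # False
print(to_bool(NOT(FALSE)))  # True
```

The C struct trick in the answer above plays exactly the role of Python's implicit "everything is an object" typing: it gives the function pointer a name (`struct LAMBDA`) that can refer to itself.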
https://www.arxiv-vanity.com/papers/cond-mat/9908359/
# Dilatancy and friction in sheared granular media

Frederic Lacombe, Stefano Zapperi and Hans J. Herrmann
PMMH-ESPCI, 10 rue Vauquelin, 75231 Paris Cedex 05, France

###### Abstract

We introduce a simple model to describe the frictional properties of granular media under shear. We model the friction force in terms of the horizontal velocity and the vertical position of the slider, interpreting the vertical position as a constitutive variable characterizing the contact. Dilatancy is shown to play an essential role in the dynamics, inducing a stick-slip instability at low velocity. We compute the phase diagram, analyze numerically the model for a wide range of parameters and compare our results with experiments on dry and wet granular media, obtaining a good agreement. In particular, we reproduce the hysteretic velocity dependence of the frictional force.

## I Introduction

Problems related to interfacial friction are very important from a practical and conceptual point of view, and in spite of its wide domain of application sliding friction is still not completely understood. In order to construct efficient machines in engineering science, or to understand geophysical events like earthquakes, it is necessary to understand several aspects of friction dynamics. Beside usual solid-solid contacts, the sliding interface can be lubricated with molecular fluids or filled by a granular gouge, and the problem becomes strongly dependent on the internal dynamics of the material itself. In experiments on lubricated surfaces with thin layers of molecular fluids the friction force depends on the thermodynamic phase of the lubricant, which in turn depends on the shear stress.
The friction force is thus directly related to the microscopic dynamics of the system, and a description of sliding friction cannot be achieved without a good microscopic understanding of the problem. Understanding the frictional properties of granular matter turns out to be an even harder task, since basic problems like stress propagation in a static packing remain largely unsolved due to the disordered nature of the stress distribution inside the medium. Moreover, when a granular medium is sheared, it reorganizes, modifying the geometrical disorder. The microscopic arrangement of the grains and their compaction have an important effect on the friction since, in order to deform the medium, one has to overcome several geometrical constraints.

The understanding of sheared granular media has recently advanced thanks to experiments [3, 4, 5, 6] and numerical studies [7, 8, 9]. The response to an external shear stress can be characterized by the dilatancy, which measures the modification of the compaction of the granular medium during the flow. We note that in both lubricated and granular interfaces the friction force has a dynamical origin. Since a sheared material modifies its own internal state, fluidizing or changing structure, a natural approach to the problem is to describe this change of state phenomenologically and to relate it to the macroscopic friction force. As we discussed previously, a complete theoretical description of sheared granular media is still not available, so that the analysis should strongly rely on experimental data.

Recent experiments, focusing on the stick-slip instability induced by friction in sheared granular layers, helped to elucidate the role of compaction and the microscopic origin of slip events. In particular, accurate measurements of the friction force and of the horizontal and vertical positions of the slider have permitted to emphasize the connections between dilatancy and friction.
The apparatus used was composed of a slider dragged at constant velocity by a spring whose elongation measured the applied shear stress. The surface of the slider was roughened in order to avoid slip at the surface of the medium, so that friction would crucially depend on the internal structure of the medium. At low velocity, a stick-slip instability was observed and related to the modification in the granular compaction.

Friction of granular layers has been mainly studied in the framework of geophysical research [5, 10] using rate and state constitutive equations [11, 12, 13, 14, 15], where the friction force is a function of an auxiliary variable describing the state of the interface. In this approach, one assumes that the microscopic events causing the movement of the slider are self-averaging and neglects the fluctuations. The quantities used in the constitutive equations are thus mean-field-like. This assumption should be valid for sliding friction experiments on granular materials, where the size of the grain is much smaller than the length of the slider, so that the variables used in the model (velocity, displacement or friction force) are well defined macroscopic quantities.

The constitutive variable, related to the microscopic dynamics of the system, describes the dynamical history of the interface. In the case of solid-solid interfaces this variable was associated with the age of the contacts and described two opposite effects: the age of the contact increases the static friction force, and the displacement of the slider renews the interface continuously so that the friction force decreases with velocity. Lubricated systems have been approached similarly using the rate of fluidization as a constitutive variable [17, 18], which captures two different effects. On one hand the confinement of the thin fluid layer induces a glassy transition resulting in a large static friction force.
On the other hand an applied shear stress increases the temperature of the medium, favoring fluidization and thus reducing the friction force, which crucially depends on the ratio between the strengths of the two effects.

In the case of granular media, a parameter suitable to characterize the frictional behavior is the compaction of the layers, or the height of the slider, which can be measured experimentally. Also in this case we can identify the competition between two opposite effects: the velocity of the slider keeps the layer dilated, lowering the friction force, and the weight of the slider induces recompaction. In this paper we present a model which includes these two effects in the framework of rate and state constitutive equations to describe typical effects like the stick-slip instability or the hysteretic force-velocity loop.

In Sec. II we concentrate on the description of the model, in Sec. III we describe the main results obtained by numerical integration of the model, in Sec. IV we present a stability analysis and the phase diagram. Finally, Sec. V presents a discussion and a summary of our results.

## II The model

Here we write rate and state constitutive equations in order to describe the frictional properties of granular media. The dynamics of the sliding plate is described by two constitutive equations. The first one is simply the equation of motion for the slider block driven by a spring of stiffness k and submitted to a frictional force, which depends on velocity and dilatancy. The second equation is the evolution law for an auxiliary variable characterizing the dilatancy, which we will identify with the vertical position z of the slider.
This model could in principle be applied to geophysical situations, although in that case, instead of a single elastic constant k, strain is mediated via the material bulk elasticity.

The frictional properties of a granular medium depend explicitly on its density: a dense granular medium submitted to a tangential stress tends to dilate, i.e. to modify the granular packing and thus the friction force. It is not simple to measure granular density, especially for non homogeneous systems, but global changes can be characterized by the vertical position of the sliding plate, which is thus an excellent candidate to describe the state of the system. Therefore, in agreement with Ref., we write the equation of motion for the slider block as

$$ m\ddot{x} = k(Vt - x) - F(z, \dot{x}), \qquad (1) $$

where m is the mass of the sliding plate, x its position, k the spring constant, V the drag velocity, and F(z, ẋ) the friction force depending on the velocity ẋ and on the height z of the plate.

If the slider is at rest, we need to apply a minimal constant force F_s in order for it to move. When the force exceeds F_s, the slider moves and dilation will occur, reducing the friction. When the layer is fully dilated the friction force reduces to F_d. We assume that the friction force is velocity dependent when the layer is partially dilated (z < z_m), and becomes independent of velocity in the stationary state, when the granular medium is fully dilated (z = z_m).

In summary (in the case 0 ≤ z ≤ z_m), we write the friction force as

$$ F(z, \dot{x}) = F_d - \beta\,\frac{z - z_m}{R} - \nu\,\dot{x}\,\frac{z - z_m}{R}. \qquad (2) $$

The first two terms in Eq. (2) give the friction force at rest (ẋ = 0) as a function of z. In the fully expanded phase (z = z_m), the friction term is F_d, while in the compacted phase (z = 0) it is F_s = F_d + β z_m/R. The velocity dependence is linear, mediated by the factor (z − z_m)/R which vanishes when the bed is fully dilated. These equations should be compared with those presented in Ref., where the second term in Eq. (2) is not present.

In Eq. (2) F depends explicitly on z, which describes the vertical displacement of the slider.
In order to complete the description of the dynamics, we must specify the evolution equation for z. We write the law controlling the dilation of the granular medium during shear as

$$ \dot{z} = -\frac{z}{\eta} - \dot{x}\,\frac{z - z_m}{R}. \qquad (3) $$

In Eq. (3) the second term dilates the support and can be seen as the response of the granular medium to the external tangential stress: when submitted to a shear rate ẋ, the medium dilates and z increases. The factor (z − z_m)/R reduces to zero when the bed is fully dilated, and z_m can be identified with the maximal height.

The first term allows for recompaction under the slider weight: in the case ẋ = 0 the plate falls exponentially fast. At high velocity this term will not perturb the system significantly and the dynamics will be stationary. We are interested in the small velocity limit: Eqs. (2,3) will display an instability below a critical drag velocity, as we will show in Sec. IV.

It is useful to rewrite the system of equations in terms of dimensionless variables

$$ \tilde{t} = \frac{tk}{\nu},\quad \tilde{\eta} = \frac{\eta k}{\nu},\quad \tilde{x} = \frac{x}{R},\quad \tilde{z} = \frac{z}{R},\quad \tilde{z}_m = \frac{z_m}{R},\quad \tilde{m} = \frac{mk}{\nu^2},\quad \tilde{V} = \frac{V\nu}{Rk},\quad \tilde{v} = \frac{v\nu}{Rk},\quad \tilde{F}_d = \frac{F_d}{Rk},\quad \tilde{\beta} = \frac{\beta}{Rk}. $$

Defining $\tilde{l} = \tilde{V}\tilde{t} - \tilde{x} - \tilde{F}_d$, the system becomes

$$ \dot{\tilde{l}} = \tilde{V} - \tilde{v}, \qquad (4) $$
$$ \tilde{m}\,\dot{\tilde{v}} = \tilde{l} + (\tilde{z} - \tilde{z}_m)(\tilde{v} + \tilde{\beta}), \qquad (5) $$
$$ \dot{\tilde{z}} = -\frac{\tilde{z}}{\tilde{\eta}} - (\tilde{z} - \tilde{z}_m)\,\tilde{v}. \qquad (6) $$

Assuming that these equations are valid for 0 ≤ z̃ ≤ z̃_m, we can analyze them for different spring constants and driving velocities.

## III Numerical simulations

We numerically solve the model (Eqs. (4-6)) using the fourth order Runge-Kutta method, assuming that the slider plate sticks when its velocity is zero. We concentrate our analysis on two sets of parameters. The first set corresponds to experiments carried out with a dry granular medium. We compute the typical force-velocity diagram in order to fix the parameters. Then, using the same parameters, we test the validity of our model by calculating other quantities such as the slider velocity during a slip event, the spring elongation or the vertical displacement.

A second set of parameters is used to model wet granular media.
We recover the instability at low velocity and low spring force and study the evolution of dilatancy and spring elongation before reaching the steady state.

### III.1 Dry granular media

Dry granular media exhibit stick-slip instabilities for relatively high velocities and it is difficult to achieve complete vertical displacement of the slider. For this reason the steady sliding regime has not been studied in detail in experiments. In order to quantitatively test our model we adjust the parameters to fit the experimental results. We present in Fig. 1 the force-velocity curve during slip, comparing the experimental data from Ref. with the result of the integration of the model. The parameters used are given in the caption. The model is able to accurately describe the first part of the hysteretic loop (when the velocity increases), but slight deviations appear for small velocities, for which the experimental uncertainties are also larger.

We numerically integrate the model using the previously obtained parameters, varying the spring constant and the driving velocity. For slow velocity and a small spring constant the system exhibits typical stick-slip dynamics. Fig. 2 shows the evolution of the variables of the model in this case: the first plot (Fig. 2(a)) shows the variation of the spring length, which decreases abruptly at a regular frequency, when the horizontal plate position increases (Fig. 2(b)). Fig. 2(c) represents the velocity of the plate, which is followed by an increase of the vertical position of the plate (Fig. 2(d)). We show in Fig. 3 a more detailed study of the slider velocity during a slip event. Near the transition between the stick-slip and the steady sliding regime the slider velocity appears to be almost independent of the driving velocity, in agreement with experiments. The stick-slip instability of the model is ruled by Eqs. (4-6) and we present in Sec. IV the dynamical phase diagram computed by a linear stability analysis.
When the slider is driven slowly the energy injected into the granular medium cannot keep the layers dilated and the motion stops after a short change in the horizontal position (slip event).

If we increase k or V, the energy induced by the shear is sufficient to maintain the granular layer dilated and the system evolves to a steady sliding state (cf. Fig. 4), which is stable with respect to small perturbations. This stationary state corresponds to a stable fixed point of Eqs. (4-6) (see Sec. IV). If the drag velocity is very large the steady sliding state becomes unstable due to inertial effects and the slider oscillates harmonically with frequency √(k/m). This effect was experimentally observed in Ref. We have plotted the result in Fig. 5 for two different perturbations, in order to show that the amplitude of the cycle depends on the strength of the perturbation.

A typical measurement performed in the framework of geophysical research [5, 10] is the variation of the friction force with respect to a rapid change of the driving velocity. We have simulated this effect, and we show the result in Fig. 6. An increase of the driving velocity is followed by an increase of the friction force, which then relaxes to a smaller value.

The phase diagram corresponding to the three different dynamical behaviors can be calculated analytically. We present the result in Sec. IV, where we study the linear stability of the model.

### III.2 Wet granular media

The analysis performed in Sec. III.1 can be repeated for wet granular materials. The dynamics in this case is more stable and the stick-slip regime is more difficult to obtain experimentally, since the instability occurs at very slow velocity. In the wet case, the presence of water changes the dynamics of the grains. Under shear, grains reorganize subject to the fluid viscosity, but here we neglect the small hydrodynamic effects and consider only the grain dynamics with suitable parameters.
Using the new class of parameters, we solve numerically Eqs. (4-6) and identify two regimes: steady sliding at high driving velocity and stick-slip instability otherwise (see Sec. IV for more details). In Fig. 7 we show a typical plot of the different quantities in the stick-slip regime. The period of the oscillations is longer than in the dry case, and the fluctuations of the elongation smaller. One of the main differences with the dry case is the value of η, which governs the relaxation process and which is greater in the wet case as an effect of immersion. In Fig. 8 we show the steady state found at high velocity. It is interesting to remark that this behavior can be perfectly recovered with a simplified model, presented in Ref., which however does not give rise to stick-slip instabilities. We will show in the next section that our model is equivalent to the model of Ref. for a given range of parameters. Fig. 9 represents the integration in the case β/R = ν/η (the importance of the value of β/R − ν/η will be highlighted in Sec. IV).

Ref. also reported an experiment in which the slider was stopped abruptly, but the applied stress was not released. Under these conditions, the medium does not recompactify towards the initial state but remains dilated in an intermediate state. This feature cannot be captured by our model, since the evolution of z does not explicitly depend on the applied stress but only on the horizontal velocity. In order to describe this effect, we modify Eq. (3) so as to explicitly include a stress dependence in the evolution of the dilatancy:

$$ \dot{z} = -\frac{z - AF_{ext}}{\eta} - \dot{x}\,\frac{z - z_m}{R}, \qquad (7) $$

where F_ext is the applied force and A is a constant. The behavior of this model is similar to the simpler model introduced in Section II, but the zero velocity fixed point explicitly depends on the applied stress (i.e. z = A F_ext). Fig. 10 shows the solution of the model compared with the experiment of Ref.

## IV Linear stability

The simple form of Eqs.
(4-6) allows us to study analytically the linear stability of the system. We first concentrate on the inertial case and describe the main results about the dynamics of our problem (fixed point, critical curve). Next we discuss the origin of the instability and the connections with other models. Finally we investigate the nature of the bifurcation.

### IV.1 Inertial case

All the numerical results presented above have been obtained including inertial effects. The system of Eqs. (4-6) has a simple fixed point

$$ l_c = \frac{z_m(\nu V + \beta)}{k(R + \eta V)}, \qquad v_c = V, \qquad z_c = \frac{z_m \eta V}{R + \eta V}. \qquad (8) $$

We see that z_c tends to z_m when V tends to infinity, in agreement with experimental results. The critical line can also be computed explicitly in the framework of linear stability analysis. We skip the details of the calculations and just give the result

$$ k^* = \frac{(\beta\eta - \nu R)\left[z_m \nu \eta R - m(R + \eta V)^2\right]}{\nu \eta^2 R^2 (R + \eta V)}. \qquad (9) $$

For m → 0 this expression reduces to the non-inertial result of Eq. (10) below. Fig. 11 and Fig. 12 show the phase diagram in the (V, k) plane for the parameters used previously (in Sec. III.1 and Sec. III.2). For both dry and wet granular layers we recover the stick-slip regime at sufficiently small k and V. In the dry case the critical velocity is higher than in the wet case and we can also identify the inertial regime on the right hand side of the phase diagram (see Fig. 11).

### IV.2 Non inertial case

If we are interested only in low velocity displacements, the dynamical bifurcation line can be easily computed neglecting the mass of the slider:

$$ k^* = \left(\frac{\beta}{R} - \frac{\nu}{\eta}\right)\frac{z_m}{R + \eta V}. \qquad (10) $$

Also in this case the dynamics is unstable for k below the critical line, but there is no inertial regime. We have no experimental results to compare with this relation, which links all the relevant parameters of the model.

Due to the simplicity of the non inertial case, we can write our system in the traditional form of a Hopf bifurcation, and calculate the coefficient which determines the nature of the transition.
Without the inertial term this coefficient simply reduces to zero and therefore we have no information about the nature (super- or subcritical) of the transition without pushing the calculation to higher orders or including inertia. However, the calculation is particularly complex, so we only analyze the problem numerically (see Sec. IV.4).

### IV.3 Dynamical friction force

The stick-slip instability is due to the dependence of the friction coefficient on the velocity. Here we compute the friction force corresponding to the fixed point and show that the sign of β/R − ν/η plays an important role in determining the presence of an instability. In the steady state the friction force is given by

$$ F_c = F_d + \frac{z_m}{R + \eta V}\,(\beta + \nu V). \qquad (11) $$

For sufficiently high V, F_c does not depend on V, in agreement with experiments, but for relatively small velocities F_c depends on V. The first derivative of the force is

$$ \frac{dF_c(V)}{dV} = -\frac{z_m R \eta}{(R + \eta V)^2}\left(\frac{\beta}{R} - \frac{\nu}{\eta}\right). \qquad (12) $$

We can thus identify three cases:

if (β/R − ν/η) is positive then there is a k* verifying Eq. (10) and below this k* the system is unstable (the derivative in V of F_c is negative: F_c decreases with V).

if (β/R − ν/η) is negative the system is always stable (the spring constant k cannot be negative).

if β/R = ν/η then F_c does not depend on V. In this case the system is stable and we can write the friction force as

$$ F(z, \dot{x}) = F_s + \nu\dot{z}, \qquad (13) $$

with F_s = F_d + β z_m/R. The form given in Eq. (13) for the friction force, together with Eq. (3), implies a friction coefficient independent of V and a stable steady state for all the values of the parameters. In the limit η → ∞, and assuming Eq. (13), the term z/η tends to 0 and the dilatancy rate is given by

$$ \dot{z} = -\dot{x}\,\frac{z - z_m}{R}, \qquad (14) $$

which reproduces the model of Ref.

### IV.4 Nature of the bifurcation line

The calculation in the non inertial case does not allow us to know the exact nature of the transition. Thus we investigate this problem numerically: the system is perturbed near its fixed point in the vertical position with different displacements δz.
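The three-case classification of the dynamical friction force can be checked numerically. The sketch below (illustrative only; the parameter values are arbitrary, chosen to land in each regime, and the expressions are as read from Eqs. (10) and (11)) evaluates the non-inertial critical stiffness together with a finite-difference slope of the steady-state force:

```python
def kstar(beta, nu, eta, R, zm, V):
    """Non-inertial critical stiffness, Eq. (10): stick-slip occurs for k < k*."""
    return (beta / R - nu / eta) * zm / (R + eta * V)

def Fc(V, Fd, beta, nu, eta, R, zm):
    """Steady-sliding friction force, Eq. (11)."""
    return Fd + zm * (beta + nu * V) / (R + eta * V)

def slope_Fc(V, Fd, beta, nu, eta, R, zm, h=1e-6):
    """Centered finite-difference estimate of dFc/dV, cf. Eq. (12)."""
    return (Fc(V + h, Fd, beta, nu, eta, R, zm)
            - Fc(V - h, Fd, beta, nu, eta, R, zm)) / (2 * h)

# Velocity-weakening case (beta/R > nu/eta): k* > 0 and Fc decreases with V.
weak = dict(Fd=1.0, beta=2.0, nu=0.5, eta=5.0, R=1.0, zm=1.0)
# Velocity-strengthening case (beta/R < nu/eta): k* < 0, always steady sliding.
strong = dict(Fd=1.0, beta=0.1, nu=2.0, eta=1.0, R=1.0, zm=1.0)

print(kstar(2.0, 0.5, 5.0, 1.0, 1.0, 0.1))
print(slope_Fc(0.1, **weak), slope_Fc(0.1, **strong))
```

The marginal case β/R = ν/η makes F_c(V) exactly constant, so its finite-difference slope vanishes to rounding error, in line with Eq. (13).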
Two final states can be obtained depending on the position in the phase diagram: the system can evolve to the steady state or be driven to the stick-slip cycle. In an intermediate zone, depending on the strength of the perturbation, the system can reach either the fixed point or the stick-slip regime.

We identify three regimes. The first corresponds to the stick-slip regime where, independently of the amplitude of the perturbation, the system falls into a periodic cycle. In the second regime, associated with high driving velocity, the system evolves to the stable fixed point. In the third, intermediate regime, the final state depends on the initial perturbation: if the perturbation is sufficiently large the system falls into a periodic regime, while if the perturbation is weak it evolves towards the fixed point. The transition between the two regimes is discontinuous (subcritical). Fig. 13 shows the amplitude of the oscillations as a function of the driving velocity. It would be interesting to check experimentally the hysteretic nature of the bifurcation line.

The presence of a hysteretic transition line could be related to an underlying first-order phase transition in the layer density induced by the applied stress. Recently, analyzing the results of photoelastic disks in a two dimensional shear cell, it has been argued that the density of the granular packing would be the order parameter of a second order phase transition induced by shear. It would be interesting to relate the different experimental phase transitions through a suitable microscopic model.

## V Discussion and open problems

We have introduced a model to describe the friction force of a sheared granular medium, treating explicitly the dilatancy during the slip, in the framework of rate and state constitutive equations.
This approach allows us to include in the description the effect of the movement of the grains and the dependence of the friction coefficient on the dynamics of the layer. The variables used are mean-field-like, since they represent macroscopic quantities like the position or the velocity, but they are sufficient to describe the system phenomenologically. We have integrated the model for two sets of parameters, in order to make quantitative predictions for two different experimental configurations corresponding to dry and wet granular media.

The results are in good agreement with experiments. In particular, we recover the hysteretic dependence of the friction force on velocity and obtain a good fit to the experimental data recorded in dry granular media. The effect of the weight of the slider plate is included in the model and allows us to recover a stick-slip instability at low velocity. The physical origin of the instability is then directly related to the recompaction of the material under normal stress. The dynamical phase diagram is calculated analytically both in the inertial and non inertial cases, and inertia is found to change only the high velocity part of this diagram. The equations used to model the dependence of the friction law on the external parameters include explicitly the effect of recompaction in the evolution of the vertical slider position.

The use of constitutive equations to model the friction force on complex interfaces is the simplest way to obtain quantitative results on the dynamics of the system. This approach provides good results in various fields, from geophysics to nanotribology. In order to include the dynamics (or thermodynamics in the case of lubrication) of the material in the description, we need detailed information about the material used.
Our knowledge of sheared granular media is very poor due to the particulate and disordered nature of such materials, and it is difficult to characterize the internal stress and strain rate. A precise description of the friction force for granular systems should include some information about the stress distribution inside the sheared material. This is a difficult problem which even for the simple case of a static pile cannot be solved completely. In the dynamical regime, the velocity depends on the precise nature of the contacts and on the friction force induced by them. Statistical models are needed to obtain a more complete macroscopic description based on the microscopic grain dynamics. In this respect, the analogy with phase transitions could be extremely fruitful.

Experiments on granular flow over a rough inclined plane display an interesting behavior [21, 22], which is ruled by frictional properties. The dynamics stops abruptly when the drag force decreases and the system freezes with grains remaining in a static configuration. These phenomena can be related to the dependence of the friction force on the velocity of the grains: an increase of the friction force when the velocity of the layer decreases can produce an instability, as in the system discussed here. It will be interesting to see if the methods discussed in this paper can be applied to this and other situations.

We thank J. S. Rice and S. Roux for useful discussions and encouragements. We are grateful to J-C. Geminard for providing us with the data of his experiments and for interesting remarks. S. Z. is supported by EC TMR Research Network under contract ERBFMRXCT960062.

## References

• B.N.J. Persson, Sliding Friction (Springer, Berlin, 1998).
• M.L. Gee, P.M. McGuiggan and J.N. Israelachvili, J. Chem. Phys. 93, 1895 (1990).
• S. Nasuno, A. Kudrolli, J.P. Gollub, Phys. Rev. Lett. 79, 949 (1997); S. Nasuno, A. Kudrolli, A. Bak, J.P. Gollub, Phys. Rev. E 58, 2161 (1998).
• J.C. Geminard, W. Losert, J.P. Gollub, Phys. Rev. E 59, 5881 (1999).
• C. Marone, Annu. Rev. Earth Planet. Sci. 26, 643 (1998).
• C.T. Veje, D.W. Howell and R.P. Behringer, Phys. Rev. E 59, 739 (1999).
• P.A. Thompson and G.S. Grest, Phys. Rev. Lett. 67, 1751 (1991).
• H-J. Tillemans and H.J. Herrmann, Physica A 217, 261 (1995).
• S. Schollman, Phys. Rev. E 59, 889 (1999).
• P. Segall, J.R. Rice, J. Geophys. Res. 100, B11, 22155 (1995).
• J.H. Dietrich, Pageoph. 116, 790 (1978).
• J.R. Rice, Pageoph. 121, 443 (1983).
• J-C. Gu, J.R. Rice, A.L. Ruina and S.T. Tse, J. Mech. Phys. Solids 32, 167 (1984).
• J.R. Rice, A.L. Ruina, J. Appl. Mech. 50, 343 (1983).
• A. Ruina, J. Geophys. Res. 88, B12, 10359 (1983).
• F. Helsot, T. Baumberger, B. Perrin, B. Caroli and C. Caroli, Phys. Rev. E 49, 4973 (1994).
• A.A. Batista, J.M. Carlson, Phys. Rev. E 53, 4153 (1996); ibid. 57, 4986 (1998).
• B.N.J. Persson, Phys. Rev. B 55, 8004 (1997).
• J. Guckenheimer and P. Holmes, Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields (Springer-Verlag, New York, 1983).
• D. Howell and R. P. Behringer, Phys. Rev. Lett. 82, 5241 (1999).
• O. Pouliquen and N. Renaut, J. Phys. II France 6, 923 (1996).
• S. Douady and A. Daerr, in Physics of dry granular media, edited by H.J. Herrmann et al. (Kluwer Academic Publishers, Netherlands, 1998).
http://www.fluka.org/web_archive/earchive/new-fluka-discuss/4480.html
"# Re: FLUGG crash\n\nFrom: Ercan Pilicer <ercanpilicer_at_gmail.com>\nDate: Mon, 27 Feb 2012 20:21:52 +0200\n\nDear Paola,\nThen, I presume it would be better to wait for Geant4 9.5\nCiao\ne.\n\nOn Mon, Feb 27, 2012 at 7:42 PM, paola sala <paola.sala_at_cern.ch> wrote:\n> Hi Ercan\n> the problem is in one of the G4 classes, the G4Pow, that causes an\n> overflow by calculationg a huge factorial.\n>\n> The error has been corrected in Geant4 9.5 : from the release notes :\n>\n> \"Added protection in G4Pow::powN() method for high exponent values.\n> Reduced vector for factorial from 512 to 170 (result should be below\n> \"DBL_MAX). Fixed computation of log(factorial)\"\n>\n> Solutions:\n> - wait a few days so that I check that flugg works with Geant4 9.5\n> - downgrade to Geant4 9.4.p03 where G4Pow =C2=A0was not used\n> - comment out one line in\n> =C2=A0FLUGG/source/global/management/src/G4Pow.cc :\n> =C2=A0it is enough to comment the line\n> =C2=A0 =C2=A0 =C2=A0 f =C2=A0 =C2=A0 =C2=A0*=3D x;\n>\n> Ciao\n> Paola\n>\n>\n> On Sat, 2012-02-18 at 10:13 +0200, Ercan Pilicer wrote:\n>> dear all\n>>\n>> i want to run FLUGG (flugg_2009_4.tar.gz), but the program crashes\n>> when i run alaual.inp example.\n>>\n>> i attach the followings in a compressed file (files.tar.gz):\n>> - configure.sh\n>> - Install.log\n>> - make.log\n>> - geant4_commpile.log\n>> - fluka_cash.log\n>> - ranalaual001\n>>\n>> my system is\n>>\n>> /> uname -a\n>> Linux shqiptare-laptop 2.6.38-13-generic #55-Ubuntu SMP Tue Jan 24\n>> 14:27:59 UTC 2012 i686 i686 i386 GNU/Linux\n>>\n>> /> cat /proc/version\n>> Linux version 2.6.38-13-generic (buildd_at_palmer) (gcc version 4.5.2\n>> (Ubuntu/Linaro 4.5.2-8ubuntu4) ) #55-Ubuntu SMP Tue Jan 24 14:27:59\n>> UTC 2012\n>>\n>> /> gcc --version\n>> gcc (Ubuntu/Linaro 4.5.2-8ubuntu4) 4.5.2\n>> Copyright (C) 2010 Free Software Foundation, Inc.\n>> This is free software; see the source for copying conditions. 
=C2=A0Ther=\ne is NO\n>> warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPO=\nSE.\n>>\n>> any help would be appreciated.\n>> e.\n>>\n>>\n>>\n>>\n>> =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=\n=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D\n>> =C2=A0 Ercan Pilicer\n>> =C2=A0 Uludag University\n>> =C2=A0 High Energy Physics Department\n>> =C2=A0 16059 Bursa, TURKEY\n>> =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=\n=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D\n>\n>\n\n--=20\n=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=\n=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D\n=C2=A0 Ercan Pilicer\n=C2=A0 Uludag University\n=C2=A0 High Energy Physics Department\n=C2=A0 16059 Bursa, TURKEY\n=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=\n=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D\nReceived on Tue Feb 28 2012 - 09:14:04 CET\n\nThis archive was generated by hypermail 2.2.0 : Tue Feb 28 2012 - 09:14:05 CET"
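The G4Pow fix quoted above is easy to reproduce outside Geant4: an IEEE-754 double tops out at DBL_MAX ≈ 1.8 × 10^308, and 171! is the first factorial to exceed it, which is why the factorial table was cut to 170 entries. A small Python sketch (illustrative only, not Geant4 code):

```python
import math

DBL_MAX = 1.7976931348623157e308  # largest finite IEEE-754 double

# 170! is the largest factorial still representable as a double.
largest_ok = float(math.factorial(170))   # ~7.26e306, still finite
print(largest_ok < DBL_MAX)               # True

# 171! overflows: Python raises OverflowError on the float conversion,
# whereas C code doing the same multiplication would silently give inf.
try:
    float(math.factorial(171))
except OverflowError:
    print("171! does not fit in a double")
```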
http://supercalifeste.com/racing/reed-solomon-code-in-matlab.php
"## supercalifeste.com",
null,
"",
null,
"Main / Racing / Reed solomon code in matlab\n\n# Reed solomon code in matlab",
null,
"Name: Reed solomon code in matlab File size: 786mb Language: English Rating: 2/10 Download\n\nSet the code parameters. m = 3; % Number of bits per symbol n = 2^m - 1; % Codeword length k = 3;. Set the parameters for the Reed-Solomon code, where N is the codeword length, K is the nominal message length, and S is the shortened message length. The RSEncoder object creates a Reed-Solomon code with message and codeword lengths you specify. enc = supercalifeste.comder creates a block encoder System object, enc. enc = supercalifeste.comder(N,K) creates an RS encoder object, enc, with the CodewordLength property set to N and the.\n\n24 Feb , it is about reed-solomon code. 27 Jan , Reed Solomon code, encoding and decoding. Requires. Communications Blockset. This MATLAB function returns the narrow-sense generator polynomial of a Reed- Solomon code with codeword length n and message length k. This example shows how to configure the RSEncoder and RSDecoder System objects to perform Reed-Solomon (RS) block coding with erasures when.\n\nEncode and decode a signal using Reed Solomon encoder and decoder System . This example shows how to set up the Reed-Solomon (RS) encoder/decoder to shorten the (63,53) code to a (28,18) code. 19 Oct Here is a simple Matlab code (which can be found in Matlab Help, posted here with a little Previous Post Reed Solomon Codes – Introduction. Creating and Decoding Reed-Solomon Codes. The rsenc and rsdec functions create and decode Reed-Solomon codes, using the data described in. on Reed-Solomon codes as a subclass of cyclic codes and BCH codes, using a in codice Matlab di un sistema codificatore - canale - decodificatore e l'intero.\n\nReed-Solomon codes can be encoded like any other cyclic code. Given a verted to the polynomial form using a user-defined MATLAB function on Galois Field. 16 May To perform this check one can start with simulating reed Solomon codes in MATLAB and then going for simulation in XILINX writing the VHDL. 
simulation for non-binary bch (reed-solomon) decoding algorithm RS(n, k): n, k and t can be changed according to the size and correction ability of the code. This section of MATLAB source code covers Reed solomon Encoder(RS Encoder ) matlab code.\n\nOne straightforward MATLAB format for messages and codewords is a vector of For Reed-Solomon codes, the message matrix must have m columns, where. It is known that Reed-Solomon is a good code agains burns errors. Also, it is known that I have several . Ber Simulation for Reed-Solomon Codes in Matlab . 20 May All this MATLAB code is my attempt to make simple Reed-Solomon coder/ decoder over GF(2^m) This implementation isn't designed to be. 18 Apr Hi all. Can any one upload Reed solomon Code(matlab code with explantion) for encoder and decoder. thanks in advance chin.\n\nMore:"
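The RS(7,3) parameters above (m = 3, n = 2^m − 1 = 7, k = 3) can be illustrated without any toolbox. The sketch below (Python rather than MATLAB, and an evaluation-style encoder rather than the generator-polynomial approach of rsenc) builds GF(2^3) with the primitive polynomial x^3 + x + 1 and checks the classic MDS property: an [n, k] Reed-Solomon code has minimum distance n − k + 1 = 5.

```python
# Toy RS(7,3) evaluation encoder over GF(8); primitive poly x^3 + x + 1.
M = 7  # multiplicative group order, n = 2^3 - 1

# Build exp/log tables for GF(8), generated by alpha = 2 (the bits of x).
EXP = [0] * M
LOG = [0] * 8
v = 1
for i in range(M):
    EXP[i] = v
    LOG[v] = i
    v <<= 1
    if v & 0b1000:          # reduce modulo x^3 + x + 1 (0b1011)
        v ^= 0b1011

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return EXP[(LOG[a] + LOG[b]) % M]

def encode(msg):
    """Encode 3 GF(8) symbols by evaluating the message polynomial
    at the 7 nonzero field elements (an evaluation-code view of RS)."""
    cw = []
    for x in EXP:                       # the 7 evaluation points
        acc, xp = 0, 1
        for coef in msg:
            acc ^= gf_mul(coef, xp)     # GF(2^m) addition is XOR
            xp = gf_mul(xp, x)
        cw.append(acc)
    return cw

# Linear code: minimum distance = minimum weight over nonzero messages.
min_wt = min(
    sum(s != 0 for s in encode([a, b, c]))
    for a in range(8) for b in range(8) for c in range(8)
    if (a, b, c) != (0, 0, 0)
)
print(min_wt)   # 5 == n - k + 1: the Singleton bound met with equality
```

A degree ≤ 2 nonzero polynomial has at most 2 roots among the 7 evaluation points, so every nonzero codeword has weight at least 5, and weight exactly 5 is achieved by any message polynomial with two distinct nonzero roots.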
http://www1.sust.edu/uploads/department/curriculum/index.php?id=1885&iframe=true
"STA328 STOCHASTIC PROCESSES\n\n3 Hours/Week, 3 Credits\n\nModern probability theory: probability of a set function, Borel field and extension of probability measure, probability measure notion of random variables, probability space, distribution functions, expectation and moments. Inversion theorem. Convergence of random variables: characteristic functions with properties, probability generating functions with properties, conditions, modes of convergence. Stochastic process: definition, different types of stochastic processes, recurrent events, renewal equation, delayed recurrent events, number of occurrence of a recurrent event. Markov chain: transition matrix, higher transition probabilities, classification of states and chains, ergodic properties, evaluation of pn. Finite Markov chain: general theory of random walk with reflecting barriers, transient states, absorbing probabilities, application of recurrence time, gambler’s ruin problem. Homogeneous Markov process: Poisson process, simple birth process, simple death process, simple birth and death process, general birth process, effect of immigration, non-homogeneous birth death process. Queueing theory."
https://rdrr.io/github/OakleyJ/MUCM/man/predict.emulatorFit.html
"predict.emulatorFit: Prediction using Emulators In OakleyJ/MUCM: Gaussian process emulator methods based on the MUCM toolkit\n\nDescription\n\nPredicts value and confidence interval at new inputs using Gaussian Process Emulation. This function should be preceded by the fitEmulator function.\n\nUsage\n\n 1 2 3 ## S3 method for class 'emulatorFit' predict(object, newdata, var.cov = FALSE, sd = TRUE, tol = -1e-11, ...)\n\nArguments\n\n object A fit object of class inheriting from 'emulatorFit'. newdata A data matrix of input(s) at which emulation is desired (new inputs). Must contain at least all parameters given in object\\$training.inputs. If missing, the fitted inputs object\\$training.inputs are used. var.cov Optionally calculates posterior variance covariance matrix. Default is set to FALSE. For large numbers of training and prediction data, this is quite time consuming. sd Optionally calculates only the posterior standard deviation. Default is set to TRUE. tol The tolerance for capping negative small values of posterior standard deviation to zero. The default is -10^-11. ... Further arguments not used and an error is thrown if provided.\n\nDetails\n\nNote that when using the LMC method, calculating the posterior variance is quite time-consuming.\n\nValue\n\nThe function returns a list containting the following components:\n\n posterior.mean Approximation of the outputs for the given inputs in newdata posterior.variance Variance covariance matrix around this approximation standard.deviation Standard Deviation of the approximation. It equals the square-root of the diagonal of the posterior.variance\n\nWhen the number of outputs to emulate is more than 1, method = 'separable', and object is of class \"emulatorFit\" two extra values are returned from this function. These are\n\n correlation.Matrix A spatial correlation matrix. sigmahat A between outputs covariance matrix.\n\nAuthor(s)\n\nOriginally written by Jeremy Oakley. 
Modified by Sajni Malde.\n\nReferences\n\nOakley, J. (1999). Bayesian uncertainty analysis for complex computer codes, Ph.D. thesis, University of Sheffield.\n\nOakleyJ/MUCM documentation built on May 7, 2019, 9:01 p.m."
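Under the hood, a Gaussian process emulator prediction of this kind reduces to the standard conditional-Gaussian formulas: posterior mean kᵀK⁻¹y and posterior variance k(x*,x*) − kᵀK⁻¹k. A minimal Python sketch (not the MUCM code; the squared-exponential kernel, zero prior mean, and toy data are illustrative choices):

```python
import math

def k_rbf(x, y, ell=0.4):
    """Squared-exponential covariance between two scalar inputs."""
    return math.exp(-(x - y) ** 2 / (2 * ell ** 2))

def solve(A, b):
    """Tiny Gauss-Jordan elimination (enough for a demo-sized system)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * b2 for a, b2 in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def gp_predict(X, y, xstar, jitter=1e-10):
    """Posterior mean and variance at xstar given training data (X, y)."""
    K = [[k_rbf(a, b) + (jitter if i == j else 0.0)
          for j, b in enumerate(X)] for i, a in enumerate(X)]
    alpha = solve(K, y)                       # K^{-1} y
    ks = [k_rbf(a, xstar) for a in X]
    mean = sum(w * k for w, k in zip(alpha, ks))
    beta = solve(K, ks)                       # K^{-1} k*
    var = k_rbf(xstar, xstar) - sum(k * b for k, b in zip(ks, beta))
    return mean, max(var, 0.0)                # cap tiny negative variance

X, y = [0.0, 0.5, 1.0], [0.0, 1.0, 0.0]       # toy training data
m, v = gp_predict(X, y, 0.5)
print(round(m, 6), v < 1e-6)                  # reproduces the training point
```

Note the same small-negative-variance issue that motivates the `tol` argument above: round-off can push the posterior variance slightly below zero, so it is capped at zero before taking a square root.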
https://gmd.copernicus.org/articles/11/4241/2018/gmd-11-4241-2018.html
Geosci. Model Dev., 11, 4241–4267, 2018
https://doi.org/10.5194/gmd-11-4241-2018

Model description paper | 18 Oct 2018
"# EcoGEnIE 1.0: plankton ecology in the cGEnIE Earth system model\n\nEcoGEnIE 1.0: plankton ecology in the cGEnIE Earth system model\nBen A. Ward1,2, Jamie D. Wilson1, Ros M. Death1, Fanny M. Monteiro1, Andrew Yool3, and Andy Ridgwell1,4 Ben A. Ward et al.\n• 1School of Geographical Sciences, University of Bristol, Bristol, UK\n• 2Ocean and Earth Science, University of Southampton, National Oceanography Centre, Southampton, UK\n• 3National Oceanography Centre, Southampton, UK\n• 4Department of Earth Sciences, University of California, Riverside, CA, USA\n\nCorrespondence: Ben A. Ward ([email protected])\n\nAbstract\n\nWe present an extension to the carbon-centric Grid Enabled Integrated Earth system model (cGEnIE) that explicitly accounts for the growth and interaction of an arbitrary number of plankton species. The new package (ECOGEM) replaces the implicit, flux-based parameterisation of the plankton community currently employed, with explicitly resolved plankton populations and ecological dynamics. In ECOGEM, any number of plankton species, with ecophysiological traits (e.g. growth and grazing rates) assigned according to organism size and functional group (e.g. phytoplankton and zooplankton) can be incorporated at runtime. We illustrate the capability of the marine ecology enabled Earth system model (EcoGEnIE) by comparing results from one configuration of ECOGEM (with eight generic phytoplankton and zooplankton size classes) to climatological and seasonal observations. We find that the new ecological components of the model show reasonable agreement with both global-scale climatological and local-scale seasonal data. We also compare EcoGEnIE results to the existing biogeochemical incarnation of cGEnIE. We find that the resulting global-scale distributions of phosphate, iron, dissolved inorganic carbon, alkalinity, and oxygen are similar for both iterations of the model. 
A slight deterioration in some fields in EcoGEnIE (relative to the data) is observed, although we make no attempt to re-tune the overall marine cycling of carbon and nutrients here. The increased capabilities of EcoGEnIE in this regard will enable future exploration of the ecological community on much longer timescales than have previously been examined in global ocean ecosystem models, and particularly for past climates and global biogeochemical cycles.

1 Introduction

The marine ecosystem is an integral component of the Earth system and its dynamics. Photosynthetic plankton ultimately support almost all life in the ocean, including the fish stocks that provide essential nutrition to more than half the human population. In addition, the marine biota determine an important downward flux of carbon, known as the "biological pump". This flux arises as biomass generated by photosynthesis in the well-lit ocean surface sinks into the dark ocean interior, where it is remineralised (e.g. Hülse et al., 2017). Modulated by the activity and composition of marine ecosystems, the biological pump increases the partial pressure of CO2 at depth and decreases it in the ocean surface and atmosphere, and thus plays a key role in the regulation of Earth's climate. For instance, the existence of the biological carbon pump has been estimated to be responsible for an approximately 200 ppm decrease in atmospheric carbon concentration at steady state, with variations in its magnitude being cited as playing a key role in, for example, the late Quaternary glacial–interglacial climate oscillations.

A variety of different marine biogeochemical modelling approaches has been developed in an attempt to understand how the marine carbon cycle functions and its dynamical interaction with climate, and to make both past and future projections.
In the simplest of these approaches, the biological pump is incorporated into an ocean circulation (or box) model without explicitly including any state variables for the biota. Such models have been described as models of "biogenically induced chemical fluxes" (rather than explicitly of the biology – and ecology – itself; Maier-Reimer, 1993). They vary considerably in complexity but can be broadly divided into two categories. In the first of these, "nutrient-restoring" models calculate the biological uptake of nutrients at any one point at the ocean surface as the flux required to maintain surface nutrient concentrations at observed values. The vertical flux is then remineralised at depth according to some prescribed attenuating profile. Within this framework, carbon export is typically calculated from the nutrient flux according to a fixed stoichiometric ("Redfield") ratio (Redfield, 1934). In addition to the availability of a spatially explicit (in the case of ocean circulation models) observed surface ocean nutrient field, nutrient-restoring models inherently only require a single parameter – the restoring timescale – and even this parameter is not critical (as long as the timescale is sufficiently short that the model closely reproduces the observed nutrient concentrations). The simplicity of this approach lends itself to being able to focus on a very specific part of the ecosystem dynamics, namely the downward transport of organic matter, and was highly influential, particularly during the early days of marine biogeochemical model development and assessment of carbon uptake and transport dynamics (e.g. Marchal et al., 1998; Najjar et al., 1992).
However, because this approach is based explicitly upon observed values (or modified observations), such models are primarily only suitable for diagnostic and modern steady-state applications and are unable to model any deviations of nutrient cycling, and hence of climate, from the current ocean state.

More sophisticated models of biogenically induced chemical fluxes do away with a direct observational constraint and instead estimate the organic matter export term on the basis of limiting factors, such as temperature, light, and the availability of nutrients such as nitrogen, phosphorus, and iron – an approach we will here refer to as "nutrient limitation". Models based on this approach were natural successors to the early nutrient-restoring models and could account for the influence of multiple limiting nutrients and even implicitly partition export between different functional types. Without entraining an explicit dependence on observed surface ocean nutrient distributions, these models also gain much more freedom and, with it, a degree of predictive capability. Additionally, other than plausible values for nutrient half-saturation constants, nutrient-limitation models make few assumptions that are specifically tied to modern observations and assume very little (if anything) about the particular organisms present. Hence, as long as one assumes that the marine plankton that existed at some specific time in the past were physiologically similar, particularly in terms of fundamental nutrient requirements, there is no apparent reason why nutrient-limitation models should not be as applicable to much of the Phanerozoic as they are to the present (questions of how suitable they might be to the present in the first place aside).
Using nutrient-limitation flux schemes, marine biogeochemical cycles have hence already been simulated for periods such as the mid-Cretaceous and end-Permian, times for which surface nutrient distributions are not known a priori.

The disadvantage of both variants of models of biogenically induced chemical fluxes is that they are not able to represent interactions between parts of the ecosystem (e.g. resource competition and predator–prey interactions), simply because these components and processes are not resolved. Nor can they address questions involving the addition or loss of plankton species (such as those associated with past extinction events) and changes in ecosystem complexity and/or structure. They also suffer from being overly responsive to changes in nutrient availability. In the case of restoring models, this is simply because any change in the target field will be closely tracked. In the case of the nutrient-limitation models, the lack of an explicit biomass term results in export fluxes changing instantaneously in response to changing limiting factors. In the real world, by contrast, sufficient biomass must first exist, such as in a bloom condition, in order to achieve maximal export. This has consequences for how the seasonality of organic matter export is represented. Other restrictions include the inability to know anything about ecosystem size structure (and, by association, about particle sinking speed) or the degree of recycling at the ocean surface and hence the partitioning of carbon into dissolved vs. particulate phases in exported organic matter.

To allow models to respond to changes in ecosystem structure, and to incorporate some of the additional feedbacks and complexities that may be important in determining the future marine response to continued greenhouse gas emissions, it has been necessary to explicitly resolve the ecosystem itself. Such models have been developed across a wide range of complexities.
Among the simplest are nutrient–phytoplankton–zooplankton–detritus (NPZD)-type models, resolving a single nutrient, homogeneous phytoplankton and zooplankton communities, and a single detrital pool. At the other end of the spectrum, more complex models may include multiple nutrients and several plankton functional types (PFTs). What links these models is that the living state variables are very broadly based on ecological guilds (i.e. groups of organisms that exploit similar resources).

While simple NPZD models are capable of reproducing some of the observed variability in bulk properties such as chlorophyll biomass and primary production, their very simplicity precludes the representation of many potentially important biogeochemical processes and climate feedbacks. Additionally, NPZD models are parameterised to represent the activity of diverse plankton communities, with different parameter values being required as the ecosystem changes in space and time. In this regard, PFT models may be more generally applicable because they resolve relatively more fundamental ecological processes that may be less sensitive to environmental variability. These are the key factors that have motivated the development of more complex models, in which the broad ecological guilds of NPZD models are replaced with more specific groups based on ecological and/or biogeochemical function. It is argued that resolving more components of the ecosystem allows the representation of important climate feedbacks that cannot be accounted for in simpler models (Le Quéré, 2006).

However, alongside their advantages, the current generation of PFT models is faced with two important and conflicting challenges. Firstly, these complex models contain a large number of parameters that are often poorly constrained by observations (Anderson, 2005).
Secondly, although PFT models resolve more ecological structure than the preceding generation of ocean ecosystem models, they are rarely general enough to perform well across large environmental gradients. To these, one might add difficulties in their application to past climates. PFT models are based on a conceptual reduction of the modern marine ecosystem to its apparent key biogeochemical components, such as nitrogen fixation or opal frustule production (as by diatoms). The role of diatoms and the attendant cycling of silica quickly becomes moot once one looks back in Earth history, as the origin of diatoms is thought to lie early in the Mesozoic (252–66 Ma), and they did not proliferate and diversify until later, in the Cenozoic (66–0 Ma). In addition, the physiological details of each species encoded in the model are taken directly from laboratory culture experiments of isolated strains, creating a parameter dependence on modern cultured species, in addition to a structural one.

Recent studies have begun to address these issues by focusing on the more general rules that govern diversity (rather than by trying to quantify and parameterise the diversity itself). These "trait-based" models are beginning to be applied in the field of marine biogeochemical modelling, with a major advantage being that they are able to resolve greater diversity with fewer specified parameters. One of the main challenges of this approach then is to identify the general rules or trade-offs that govern competition between organisms. These trade-offs are often strongly constrained by organism size. A potentially large number of different plankton size classes can therefore be parameterised according to well-known allometric relationships linking plankton physiological traits to organism size (e.g. Tang, 1995; Hansen et al., 1997).
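Allometric trait scaling of the kind just described is typically written as a power law, trait = a·V^b, with V the cell volume. A minimal sketch (the coefficients below are placeholders chosen for the example, not the values used in ECOGEM):

```python
# Illustrative allometric (power-law) trait scaling: trait = a * V**b,
# with cell volume V in cubic micrometres.
def allometric(V, a, b):
    return a * V ** b

# E.g. a maximum growth rate declining weakly with size, and a grazing
# half-saturation concentration increasing with size (illustrative only):
def mu_max(V):
    return allometric(V, 2.0, -0.15)   # per day

def k_prey(V):
    return allometric(V, 0.1, 0.33)    # mmol per cubic metre

for V in (1.0, 1e3, 1e6):              # pico- to micro-plankton volumes
    print(f"V={V:>9.0f}  mu_max={mu_max(V):.3f}  k={k_prey(V):.3f}")
```

With a single pair of (a, b) coefficients per trait, an arbitrary number of size classes can be parameterised, which is the economy of parameters the text refers to.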
This approach has the associated advantage that the size composition of the plankton community affects the biogeochemical function of the community (e.g. Guidi et al., 2009). If one assumes that the same allometric relationships and trade-offs are relatively invariant with time, then this approach provides a potential way forward to addressing geological questions.

In this paper, we present a flexible modelling framework with an ecological structure that can be easily adapted according to the scientific question at hand. The model is formulated so that all plankton are described by the same set of equations, and any differences are simply a matter of parameterisation. Within this framework, each plankton population is characterised in terms of its size-dependent traits and its distinct functional type. The model also includes a realistic physiological component, based on a cell quota model (Caperon, 1968; Droop, 1968) and a dynamic photoacclimation model. This physiological component increases model realism by allowing phytoplankton to flexibly take up nutrients according to availability, rather than according to an unrealistically rigid cellular stoichiometry. Such flexible stoichiometry is rarely included in large-scale ocean models and provides the opportunity to study the links between plankton physiology, ecological competition, and biogeochemistry. This model is then embedded within the carbon-centric Grid Enabled Integrated Earth system model (cGEnIE) widely used in addressing questions of past climate and carbon cycling, and the overall properties of the model system are evaluated.

The structure of this paper is as follows. In Sect. 2, we briefly outline the nature and properties of the cGEnIE Earth system model, focusing on the ocean circulation and marine biogeochemical modules most directly relevant to the simulation of marine ecology. In Sect. 3, we introduce the new ecological model – ECOGEM – that has been developed within the cGEnIE framework. Section 4 describes the preliminary experiments of ECOGEM, and Sect. 5 presents results from the new integrated ecological global model (EcoGEnIE) in comparison to observations (where available) as well as to the pre-existing biogeochemical simulation of cGEnIE.

2 The GEnIE/cGEnIE Earth system model

GEnIE is an Earth system model of intermediate complexity (EMIC) and is based on a modular framework that allows different components of the Earth system, including ocean circulation, ocean biogeochemistry, deep-sea sediments, and geochemistry, to be incorporated. The simplified atmosphere and carbon-centric version of GEnIE we use – cGEnIE – has previously been applied to explore and understand the interactions between biological productivity, biogeochemistry, and climate over a range of timescales and time periods. As is common for EMICs, cGEnIE features a decreased spatial and temporal resolution in order to facilitate the efficient simulation of the various interacting components. This limits the resolution of ecosystem dynamics to large-scale annual/seasonal patterns, in contrast to the higher resolutions often used to model modern ecosystems. However, our motivation for incorporating a new marine ecosystem module into cGEnIE is to focus on the explicit interactions between ecosystems, biogeochemistry, and climate that are computationally prohibitive in higher-resolution models.
In other words, our motivation is to include and explore a more complete range of interactions and dynamics within the marine system, at the expense of spatial fidelity and with the intention to explore long timescale and paleoceanographic questions, rather than short-term and future anthropogenic concerns.

## 2.1 Ocean physics and climate model component – C-GOLDSTEIN

The fast climate model C-GOLDSTEIN features a reduced-physics (frictional geostrophic) 3-D ocean circulation model coupled to a 2-D energy–moisture balance model of the atmosphere and a dynamic–thermodynamic sea-ice model. Full descriptions of the model can be found in the original model description papers.

The circulation model calculates the horizontal and vertical transport of heat, salinity, and biogeochemical tracers via the combined parameterisation for isoneutral diffusion and eddy-induced advection. The ocean model is configured on a 36×36 equal-area horizontal grid with 16 logarithmically spaced z-coordinate levels. The horizontal grid is generally constructed to be uniform in longitude (10° resolution) and uniform in the sine of latitude (varying in latitude from ∼3.2° at the Equator to 19.2° near the poles). The thickness of the vertical grid increases with depth, from 80.8 m at the surface to as much as 765 m at depth. The degree of spatial and temporal abstraction in C-GOLDSTEIN results in parameter values that are not well known and require calibration against observations. The parameters for C-GOLDSTEIN were calibrated against annual mean climatological observations of temperature, salinity, surface air temperature, and humidity using the ensemble Kalman filter (EnKF) methodology. The parameter values for C-GOLDSTEIN used here are those reported for the 16-level model ("GEnIE16") in Table S1 of the calibration study. C-GOLDSTEIN is run with 96 time steps per year.
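The equal-area latitude grid described above can be reproduced directly: with 36 bands uniform in the sine of latitude, the band edges are arcsin(−1 + 2i/36), giving widths of about 3.2° at the Equator and 19.2° adjacent to the poles, matching the quoted values. A quick check (illustrative, not GEnIE source code):

```python
import math

N = 36  # latitude bands, uniform in sin(latitude)
edges = [math.degrees(math.asin(-1 + 2 * i / N)) for i in range(N + 1)]
widths = [b - a for a, b in zip(edges, edges[1:])]

print(round(widths[N // 2], 2))   # band just north of the Equator: ~3.2°
print(round(widths[-1], 2))       # polar band: ~19.2°
```

Every band covers the same fraction of the sphere's surface area, because area between two latitudes is proportional to the difference of their sines.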
The resulting circulation is dynamically similar to that of classical general circulation models based on the primitive equations but is significantly faster to run and in this configuration performs well against standard tests of circulation models such as anthropogenic CO2 and chlorofluorocarbon (CFC) uptake, as well as in reproducing the deep-ocean radiocarbon (Δ14C) distribution .\n\n## 2.2 Ocean biogeochemical model component – BIOGEM\n\nTransformations and spatial redistribution of biogeochemical compounds both at the ocean surface (by biological uptake) and in the ocean interior (remineralisation), plus air–sea gas exchange, are handled by the module BIOGEM. In the pre-existing version of BIOGEM, the biological (soft-tissue) pump is driven by an implicit (i.e. unresolved) biological community (in place of an explicit representation of a living microbial community). It is therefore a nutrient-limitation variant of a model of biogenically induced chemical fluxes, as outlined above. A full description can be found in and .\n\nIn this study, we use a seasonally insolation-forced 16-level ocean model configuration, similar to that of . However, in the particular biogeochemical configuration we use, limitation of biological uptake of carbon is provided by the availability of two nutrients. In addition to phosphate, we now include an iron cycle following . This aspect of the model is determined by a revised set of parameters controlling the iron cycle . We also incorporate a series of minor modifications to the climate model component, particularly in terms of the ocean grid and wind velocity and stress forcings (consistent with Marsh et al., 2011) together with associated changes to several of the physics parameters.
A complete description and evaluation of the physical and biogeochemical configuration of cGEnIE is provided in .\n\n3 Ecological model component – ECOGEM\n\nThe current BIOGEM module in cGEnIE does not explicitly resolve the biological community and instead transforms surface inorganic nutrients directly into exported nutrients or dissolved organic matter (DOM):\n\n inorganic nutrients $\stackrel{\text{export}}{\mathit{⟶}}$ DOM and remineralised nutrients\n\nThis simplification greatly facilitates the efficient modelling of the carbon cycle over long timescales but with the associated caveats of an implicit scheme (as discussed earlier). In ECOGEM, biological uptake is again limited by light, temperature, and nutrient availability, but here it must pass through an explicit and dynamic intermediary plankton community, before being returned to DOM or dissolved inorganic nutrients:\n\n inorganic nutrients $\stackrel{\text{production}}{\mathit{⟶}}$ living biomass $\stackrel{\text{export}}{\mathit{⟶}}$ DOM and remineralised nutrients\n\nThe ecological community is also subject to mortality and internal trophic interactions, and will produce both inorganic compounds and organic matter. The structural relationship between BIOGEM and ECOGEM is illustrated in Fig. 1.
Figure 1. Schematic representation of the coupling between BIOGEM and ECOGEM. State variables: R indicates the inorganic element (i.e. resource), B indicates plankton biomass, and OM indicates organic matter. Subscripts B and E denote state variables in BIOGEM and ECOGEM, respectively. BIOGEM passes resource biomass R to ECOGEM. ECOGEM passes rates of change (δ) in R and OM back to BIOGEM.\n\nIn the following section, we outline the key state variables directly relating to ecosystem function (Sect. 3.1), describe the mathematical form of the key rate processes relating to each state variable (Sect. 3.2), and explain how they link together (Sect. 3.3). We will then describe the parameterisation of the model according to organism size and functional type (Sect. 3.4). The model equations are modified from . We provide all the equations used in ECOGEM here, but provide only brief descriptions of the parameterisations and parameter value justifications already included in .\n\n## 3.1 State variables\n\nECOGEM state variables are organised into three matrices (Table 1), representing ecologically relevant biogeochemical tracers (hereafter referred to as “nutrient resources”), plankton biomass, and organic matter. All these matrices have units of mmol element m−3, with the exception of the dynamic chlorophyll quota, which is expressed in units of mg chlorophyll m−3. The nutrient resource vector (R) includes Ir distinct inorganic resources. The plankton community (B) is made up of J individual populations, each associated with Ib cellular nutrient quotas. Finally, organic matter (D) is made up of K size classes of organic matter, each containing Id organic nutrient element pools. (Note that, strictly speaking, detrital organic matter is not explicitly resolved as a state variable in ECOGEM, as we currently only resolve the production of organic matter, which is passed to BIOGEM and held there as a state variable.
As a consequence, there is no grazing on detrital organic matter in the current configuration of EcoGEnIE. We include a description of D and its relationships here for completeness and for convenience of notation.)\n\n### 3.1.1 Inorganic resources\n\nR is a row vector of length Ir, the number of dissolved inorganic nutrient resources.\n\n$\\begin{array}{}\\text{(1)}& \\mathbit{R}=\\left[\\begin{array}{ccc}{R}_{\\text{DIC}}& {R}_{{\\mathrm{PO}}_{\\mathrm{4}}}& {R}_{\\mathrm{Fe}}\\end{array}\\right]\\end{array}$\n\nAn individual inorganic resource is denoted by the appropriate subscript. For example, PO4 is denoted ${R}_{{\\mathrm{PO}}_{\\mathrm{4}}}$.\n\n### 3.1.2 Plankton biomass\n\nB is a J×Ib matrix, where J is the number of plankton populations and Ib is the number of cellular quotas, including chlorophyll.\n\n$\\begin{array}{}\\text{(2)}& \\mathbf{B}=\\left[\\begin{array}{cccc}{B}_{\\mathrm{1},\\mathrm{C}}& {B}_{\\mathrm{1},\\mathrm{P}}& {B}_{\\mathrm{1},\\mathrm{Fe}}& {B}_{\\mathrm{1},\\mathrm{Chl}}\\\\ {B}_{\\mathrm{2},\\mathrm{C}}& {B}_{\\mathrm{2},\\mathrm{P}}& {B}_{\\mathrm{2},\\mathrm{Fe}}& {B}_{\\mathrm{2},\\mathrm{Chl}}\\\\ \\mathrm{⋮}& \\mathrm{⋮}& \\mathrm{⋮}& \\mathrm{⋮}\\\\ {B}_{J,\\mathrm{C}}& {B}_{J,\\mathrm{P}}& {B}_{J,\\mathrm{Fe}}& {B}_{J,\\mathrm{Chl}}\\end{array}\\right]\\end{array}$\n\nEach population and element are denoted by an appropriate subscript. For example, the total carbon biomass of plankton population j is denoted Bj,C, while the chlorophyll biomass of that population is denoted Bj,Chl. The column vector describing the carbon content of all plankton populations is denoted BC.\n\nThis framework can account for competition between (in theory) any number of different plankton populations. The model equations (below) are written in terms of an “ideal” planktonic form, with the potential to exhibit the full range of ecophysiological traits (among those that are included in the model). 
Individual populations may take on a realistic subset of these traits, according to their assigned plankton functional type (PFT) (see Sect. 3.4.1). Each population is also assigned a characteristic size, in terms of equivalent spherical diameter (ESD) or cell volume. Organism size plays a key role in determining each population's ecophysiological traits (see Sect. 3.4.2).\n\n### 3.1.3 Organic detritus\n\nD is a K×Id matrix, where K is the number of detrital size classes and Id is the number of detrital nutrient elements.\n\n$\\begin{array}{}\\text{(3)}& \\mathbf{D}=\\left[\\begin{array}{ccc}{D}_{\\mathrm{1},\\mathrm{C}}& {D}_{\\mathrm{1},\\mathrm{P}}& {D}_{\\mathrm{1},\\mathrm{Fe}}\\\\ {D}_{\\mathrm{2},\\mathrm{C}}& {D}_{\\mathrm{2},\\mathrm{P}}& {D}_{\\mathrm{2},\\mathrm{Fe}}\\end{array}\\right]\\end{array}$\n\nEach size class and element are denoted by an appropriate subscript. For example, dissolved organic phosphorus (size class k=1) is denoted D1,P, while particulate organic iron (size class k=2) is denoted D2,Fe.\n\n## 3.2 Plankton physiology and ecology\n\nThe rates of change in each state variable within ECOGEM are defined by a range of ecophysiological processes. These are defined by a set of mathematical functions that are common to all plankton populations. Parameter values are defined in Sect. 
3.4.\n\n### 3.2.1 Temperature limitation\n\nTemperature affects a wide range of metabolic processes through an Arrhenius-like equation that is here set equal for all plankton.\n\n$\begin{array}{}\text{(4)}& {\mathit{\gamma }}_{\mathrm{T}}={e}^{A\left(T-{T}_{\mathrm{ref}}\right)}\end{array}$\n\nThe parameter A describes the temperature sensitivity, T is the ambient water temperature in °C, and Tref is a reference temperature (also in °C) at which γT=1.\n\n### 3.2.2 The plankton “quota”\n\nThe physiological status of a plankton population is defined in terms of its cellular nutrient quota, Q, which is the ratio of assimilated nutrient (phosphorus or iron) to carbon biomass. For each plankton population, j, and each planktonic quota, ib (≠ C),\n\n$\begin{array}{}\text{(5)}& {Q}_{j,{i}_{\mathrm{b}}}=\frac{{B}_{j,{i}_{\mathrm{b}}}}{{B}_{j,\mathrm{C}}}.\end{array}$\n\nThis equation is also used to describe the population chlorophyll content relative to carbon biomass. The size of the quota increases with nutrient uptake or chlorophyll synthesis. The quota decreases through the acquisition of carbon (described below).\n\nExcessive accumulation of P or Fe biomass in relation to carbon is prevented as the uptake or assimilation of each nutrient element is down-regulated as the respective quota becomes full. The generic form of the uptake regulation term for element ib is given by a linear function of the nutrient status, modified by an additional shape parameter Geider et al.
(1998) (h = 0.1) that allows greater assimilation under low-to-moderate resource limitation.\n\n$\begin{array}{}\text{(6)}& {Q}_{j,{i}_{\mathrm{b}}}^{\mathrm{stat}}={\left(\frac{{Q}_{j,{i}_{\mathrm{b}}}^{\mathrm{max}}-{Q}_{j,{i}_{\mathrm{b}}}}{{Q}_{j,{i}_{\mathrm{b}}}^{\mathrm{max}}-{Q}_{j,{i}_{\mathrm{b}}}^{\mathrm{min}}}\right)}^{h}\end{array}$\n\n### 3.2.3 Nutrient uptake\n\nPhosphate and dissolved iron (${i}_{\mathrm{r}}={i}_{\mathrm{b}}=$ P or Fe) are taken up as functions of environmental availability ($\left[{R}_{{i}_{\mathrm{r}}}\right]$), maximum uptake rate (${V}_{j,{i}_{\mathrm{r}}}^{\mathrm{max}}$), the nutrient affinity (${\mathit{\alpha }}_{j,{i}_{\mathrm{r}}}$), the quota satiation term (${Q}_{j,{i}_{\mathrm{b}}}^{\mathrm{stat}}$), and temperature limitation (γT):\n\n$\begin{array}{}\text{(7)}& {V}_{j,{i}_{\mathrm{r}}}=\frac{{V}_{j,{i}_{\mathrm{r}}}^{\mathrm{max}}{\mathit{\alpha }}_{j,{i}_{\mathrm{r}}}\left[{R}_{{i}_{\mathrm{r}}}\right]}{{V}_{j,{i}_{\mathrm{r}}}^{\mathrm{max}}+{\mathit{\alpha }}_{j,{i}_{\mathrm{r}}}\left[{R}_{{i}_{\mathrm{r}}}\right]}{Q}_{j,{i}_{\mathrm{b}}}^{\mathrm{stat}}\cdot {\mathit{\gamma }}_{\mathrm{T}}.\end{array}$\n\nThis equation is equivalent to the Michaelis–Menten-type response but replaces the half-saturation constant with the more mechanistic nutrient affinity, ${\mathit{\alpha }}_{j,{i}_{\mathrm{r}}}$.\n\n### 3.2.4 Photosynthesis\n\nThe photosynthesis model is modified from and .
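To make Eqs. (4)–(7) concrete, the following Python sketch evaluates the uptake terms for a single population. All parameter values here are illustrative assumptions, not the calibrated ECOGEM values:

```python
# Sketch of the temperature, quota, and uptake terms of Eqs. (4)-(7).
# All parameter values below are illustrative assumptions.
import numpy as np

def gamma_T(T, A=0.05, Tref=20.0):
    """Arrhenius-like temperature limitation, Eq. (4)."""
    return np.exp(A * (T - Tref))

def quota_status(Q, Qmin, Qmax, h=0.1):
    """Down-regulation of uptake as the quota fills, Eq. (6)."""
    return ((Qmax - Q) / (Qmax - Qmin)) ** h

def uptake(R, Vmax, alpha, Qstat, gT):
    """Affinity-based saturating uptake, Eq. (7)."""
    return Vmax * alpha * R / (Vmax + alpha * R) * Qstat * gT

# Example: phosphorus uptake for a half-full P quota at 25 degC.
Q, Qmin, Qmax = 0.006, 0.002, 0.01            # mol P (mol C)^-1 (assumed)
V = uptake(R=0.5, Vmax=0.1, alpha=1.0,
           Qstat=quota_status(Q, Qmin, Qmax), gT=gamma_T(25.0))
print(f"uptake rate: {V:.4f} mmol P (mmol C)^-1 d^-1")
```

Note that as the quota approaches its maximum, the satiation term of Eq. (6) drives uptake towards zero, while the affinity form of Eq. (7) recovers linear (affinity-limited) uptake at low resource concentrations.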
Light limitation is calculated as a Poisson function of local irradiance (I), modified by the iron-dependent initial slope of the P–I curve ($\\mathit{\\alpha }\\cdot {\\mathit{\\gamma }}_{j,\\mathrm{Fe}}$) and the chlorophyll a : carbon ratio (Qj,Chl).\n\n$\\begin{array}{}\\text{(8)}& {\\mathit{\\gamma }}_{j,I}=\\mathrm{1}-\\mathrm{exp}\\left(\\frac{-\\mathit{\\alpha }\\cdot {\\mathit{\\gamma }}_{j,\\mathrm{Fe}}\\cdot {Q}_{j,\\mathrm{Chl}}\\cdot I}{{P}_{j,\\mathrm{C}}^{\\mathrm{sat}}}\\right)\\end{array}$\n\nHere, ${P}_{j,\\mathrm{C}}^{\\mathrm{sat}}$ is maximum light-saturated growth rate, modified from an absolute maximum rate of ${P}_{j,\\mathrm{C}}^{\\mathrm{max}}$, according to the current nutrient and temperature limitation terms.\n\n$\\begin{array}{}\\text{(9)}& {P}_{j,\\mathrm{C}}^{\\mathrm{sat}}={P}_{j,\\mathrm{C}}^{\\mathrm{max}}\\cdot {\\mathit{\\gamma }}_{T}\\cdot min\\left[{\\mathit{\\gamma }}_{j,\\mathrm{P}},\\phantom{\\rule{0.25em}{0ex}}{\\mathit{\\gamma }}_{j,\\mathrm{Fe}}\\right]\\end{array}$\n\nThe nutrient-limitation term is given as a minimum function of the internal nutrient status , each defined by normalised hyperbolic functions for P and Fe (ib= P or Fe):\n\n$\\begin{array}{}\\text{(10)}& {\\mathit{\\gamma }}_{j,{i}_{\\mathrm{b}}}=\\frac{\\mathrm{1}-{Q}_{j,{i}_{\\mathrm{b}}}^{\\mathrm{min}}/{Q}_{j,{i}_{\\mathrm{b}}}}{\\mathrm{1}-{Q}_{j,{i}_{\\mathrm{b}}}^{\\mathrm{min}}/{Q}_{j,{i}_{\\mathrm{b}}}^{\\mathrm{max}}}.\\end{array}$\n\nThe gross photosynthetic rate (Pj,C) is then modified from ${P}_{j,\\mathrm{C}}^{\\mathrm{sat}}$ by the light-limitation term.\n\n$\\begin{array}{}\\text{(11)}& {P}_{j,\\mathrm{C}}={\\mathit{\\gamma }}_{j,I}{P}_{j,\\mathrm{C}}^{\\mathrm{sat}}\\end{array}$\n\nNet carbon uptake is given by\n\n$\\begin{array}{}\\text{(12)}& {V}_{j,\\mathrm{C}}={P}_{j,\\mathrm{C}}-\\mathit{\\xi }\\cdot {V}_{j,\\mathrm{P}},\\end{array}$\n\nwith the second term accounting for the metabolic cost of biosynthesis (ξ). 
This parameter was originally defined as a loss of carbon as a fraction of nitrogen uptake . We define it here relative to phosphate uptake, using a fixed N:P ratio of 16.\n\n### 3.2.5 Photoacclimation\n\nThe chlorophyll : carbon ratio is regulated as the cell attempts to balance the rate of light capture by chlorophyll with the maximum potential (i.e. light-replete) rate of carbon fixation. Depending on this ratio, a certain fraction of newly assimilated phosphorus is diverted to the synthesis of new chlorophyll a:\n\n$\\begin{array}{}\\text{(13)}& {\\mathit{\\rho }}_{j,\\mathrm{Chl}}={\\mathit{\\theta }}_{\\mathrm{P}}^{\\mathrm{max}}\\frac{{P}_{j,\\mathrm{C}}}{\\mathit{\\alpha }\\cdot {\\mathit{\\gamma }}_{j,\\mathrm{Fe}}\\cdot {Q}_{j,\\mathrm{Chl}}\\cdot I}.\\end{array}$\n\nHere, ρj,Chl is the amount of chlorophyll a that is synthesised for every millimole of phosphorus assimilated (mg Chl (mmol P)−1) with ${\\mathit{\\theta }}_{\\mathrm{P}}^{\\mathrm{max}}$ representing the maximum ratio (again converting from the nitrogen-based units of Geider et al.1998, with a fixed N:P ratio of 16). If phosphorus is assimilated at a carbon-specific rate Vj,P (mmol P (mmol C)−1 d−1), then the carbon specific rate of chlorophyll a synthesis (mg Chl (mmol C)−1 d−1) is\n\n$\\begin{array}{}\\text{(14)}& {V}_{j,\\mathrm{Chl}}={\\mathit{\\rho }}_{j,\\mathrm{Chl}}\\cdot {V}_{j,\\mathrm{P}}.\\end{array}$\n\n### 3.2.6 Light attenuation\n\nIn both BIOGEM and ECOGEM, the incoming shortwave solar radiation intensity is taken from the climate component in cGEnIE and varies seasonally . 
However, ECOGEM uses a slightly more complex light-attenuation scheme than BIOGEM, which simply calculates a mean solar (shortwave) irradiance averaged over the depth of the surface layer, assuming a clear-water light-attenuation scale of 20 m .\n\nIn ECOGEM, the light level is calculated as the mean level of photosynthetically available radiation within a variable mixed layer (with depth calculated according to Kraus and Turner1967). We also take into account inhibition of light penetration due to the presence of light-absorbing particles and dissolved molecules . If Chltot is the total chlorophyll concentration in the surface layer (of thickness Z1), and ZML is the mixed-layer depth, the virtual chlorophyll concentration distributed across the mixed layer is given by\n\n$\\begin{array}{}\\text{(15)}& {\\mathrm{Chl}}_{\\mathrm{ML}}={\\mathrm{Chl}}_{\\mathrm{tot}}\\frac{{Z}_{\\mathrm{1}}}{{Z}_{\\mathrm{ML}}}.\\end{array}$\n\nThe combined light-attenuation coefficient attributable to both water and the virtual chlorophyll concentration is given by\n\n$\\begin{array}{}\\text{(16)}& {k}_{\\mathrm{tot}}={k}_{\\text{w}}+{k}_{\\mathrm{Chl}}\\cdot {\\mathrm{Chl}}_{\\mathrm{ML}}.\\end{array}$\n\nFor a given level of photosynthetically available radiation at the ocean surface (I0), plankton in the surface grid box experience the average irradiance within the mixed layer, which is given by\n\n$\\begin{array}{}\\text{(17)}& I=\\frac{{I}_{\\mathrm{0}}}{{k}_{\\mathrm{tot}}}\\frac{\\mathrm{1}}{{Z}_{\\mathrm{ML}}}\\left(\\mathrm{1}-{e}^{\\left(-{k}_{\\mathrm{tot}}\\cdot {Z}_{\\mathrm{ML}}\\right)}\\right).\\end{array}$\n\n### 3.2.7 Predation (including both herbivorous and carnivorous interactions)\n\nHere, we define predation simply as the consumption of any living organism, regardless of the trophic level of the organism (i.e. 
phytoplankton or zooplankton prey).\n\nThe predator-biomass-specific grazing rate of predator (jpred) on prey (jprey) is given by\n\n$\\begin{array}{ll}\\text{(18)}& {G}_{{j}_{\\mathrm{pred}},{j}_{\\mathrm{prey}},\\mathrm{C}}=& \\phantom{\\rule{0.25em}{0ex}}{\\mathit{\\gamma }}_{\\mathrm{T}}\\cdot \\underset{\\mathrm{overall}\\phantom{\\rule{0.25em}{0ex}}\\mathrm{grazing}\\phantom{\\rule{0.25em}{0ex}}\\mathrm{rate}}{\\underbrace{{G}_{{j}_{\\mathrm{pred}},\\mathrm{C}}^{\\mathrm{max}}\\cdot \\frac{{\\mathcal{F}}_{{j}_{\\mathrm{pred}},\\mathrm{C}}}{{k}_{{j}_{\\mathrm{prey}},\\mathrm{C}}+{\\mathcal{F}}_{{j}_{\\mathrm{pred}},\\mathrm{C}}}}}& \\cdot \\underset{\\mathrm{switching}}{\\underbrace{{\\mathrm{\\Phi }}_{{j}_{\\mathrm{pred}},{j}_{\\mathrm{prey}}}}}\\cdot \\underset{\\mathrm{prey}\\phantom{\\rule{0.25em}{0ex}}\\mathrm{refuge}}{\\underbrace{\\left(\\mathrm{1}-{e}^{\\mathrm{\\Lambda }\\cdot {\\mathcal{F}}_{{j}_{\\mathrm{pred}},\\mathrm{C}}}\\right)}},\\end{array}$\n\nwhere γT is the temperature dependence, ${G}_{{j}_{\\mathrm{pred}},\\mathrm{C}}^{\\mathrm{max}}$ is the maximum grazing rate, and ${k}_{{j}_{\\mathrm{prey}},\\mathrm{C}}$ is the half-saturation concentration for all (available) prey. The overall grazing rate is a function of total food available to the predator, ${\\mathcal{F}}_{{j}_{\\mathrm{pred}},\\mathrm{C}}$. This is given by the product of the prey biomass vector, BC, and the grazing kernel (ϕ):\n\n$\\begin{array}{}\\text{(19)}& \\underset{\\left[{J}_{\\mathrm{pred}}×\\mathrm{1}\\right]}{{\\mathsc{F}}_{\\mathrm{C}}}=\\underset{\\left[{J}_{\\mathrm{pred}}×{J}_{\\mathrm{prey}}\\right]}{\\mathbit{\\varphi }}\\underset{\\left[{J}_{\\mathrm{prey}}×\\mathrm{1}\\right]}{{\\mathbit{B}}_{\\mathrm{C}}}.\\end{array}$\n\nNote that this equation is written out in matrix form, with the dimensions noted underneath each matrix. 
Each element of the grazing matrix ϕ is an approximately log-normal function of the predator–prey length ratio, ${\\mathit{\\vartheta }}_{{j}_{\\mathrm{pred}},{j}_{\\mathrm{prey}}}$, with an optimum ratio of ϑopt and a geometric standard deviation ${\\mathit{\\sigma }}_{{j}_{\\mathrm{pred}}}$.\n\n$\\begin{array}{}\\text{(20)}& {\\mathit{\\varphi }}_{{j}_{\\mathrm{pred}},{j}_{\\mathrm{prey}}}=\\mathrm{exp}\\left[-{\\left(\\mathrm{ln}\\left(\\frac{{\\mathit{\\vartheta }}_{{j}_{\\mathrm{pred}},{j}_{\\mathrm{prey}}}}{{\\mathit{\\vartheta }}_{\\mathrm{opt}}}\\right)\\right)}^{\\mathrm{2}}/\\left(\\mathrm{2}{\\mathit{\\sigma }}_{{j}_{\\mathrm{pred}}}^{\\mathrm{2}}\\right)\\right]\\end{array}$\n\nWe also include an optional “prey-switching” term, such that predators may preferentially attack those prey that are relatively more available (i.e. active switching, s=2). Alternatively, they may attack prey in direct proportion to their availability (i.e. passive switching, s=1). In the simulations below, we assume active switching.\n\n$\\begin{array}{}\\text{(21)}& {\\mathrm{\\Phi }}_{{j}_{\\mathrm{pred}},{j}_{\\mathrm{prey}}}=\\frac{\\left({\\mathit{\\varphi }}_{{j}_{\\mathrm{pred}},{j}_{\\mathrm{prey}}}{\\mathrm{B}}_{{j}_{\\mathrm{prey}},\\mathrm{C}}{\\right)}^{s}}{{\\sum }_{{j}_{\\mathrm{prey}}=\\mathrm{1}}^{J}\\left({\\mathit{\\varphi }}_{{j}_{\\mathrm{pred}},{j}_{\\mathrm{prey}}}{\\mathrm{B}}_{{j}_{\\mathrm{prey}},\\mathrm{C}}{\\right)}^{s}}\\end{array}$\n\nFinally, a prey refuge function is incorporated, such that the overall grazing rate is decreased when the availability of all prey (${\\mathcal{F}}_{{j}_{\\mathrm{pred}},\\mathrm{C}}$) is low. The size of the prey refuge is dictated by the coefficient Λ. The overall grazing response is calculated on the basis of prey carbon. 
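A minimal Python sketch of the grazing terms of Eqs. (18)–(21) for one predator and three prey classes follows; the parameter values (Gmax, k, ϑopt, σ, Λ) are assumptions for illustration only:

```python
# Sketch of the grazing formulation of Eqs. (18)-(21) for one predator and a
# vector of prey. All parameter values are illustrative assumptions; Lambda
# is taken as negative so that grazing shuts down as total food becomes
# scarce (the prey refuge).
import numpy as np

def grazing_rates(B_prey, esd_pred, esd_prey, Gmax=10.0, k=5.0,
                  theta_opt=10.0, sigma=1.0, s=2, Lambda=-1.0, gT=1.0):
    """Predator-biomass-specific grazing rate on each prey class (d^-1)."""
    theta = esd_pred / esd_prey                               # length ratios
    phi = np.exp(-np.log(theta / theta_opt) ** 2
                 / (2.0 * sigma ** 2))                        # kernel, Eq. (20)
    F = np.sum(phi * B_prey)                                  # food, Eq. (19)
    Phi = (phi * B_prey) ** s / np.sum((phi * B_prey) ** s)   # switching, (21)
    refuge = 1.0 - np.exp(Lambda * F)                         # prey refuge
    return gT * Gmax * F / (k + F) * Phi * refuge             # Eq. (18)

B_prey = np.array([0.5, 2.0, 0.1])                 # prey carbon, mmol C m^-3
G = grazing_rates(B_prey, esd_pred=20.0,
                  esd_prey=np.array([1.0, 2.0, 4.0]))
# Active switching (s = 2) concentrates grazing on the prey class that is
# both abundant and near the optimal predator:prey length ratio.
```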
Grazing losses of other prey elements are simply calculated from their stoichiometric ratio to prey carbon, with different elements assimilated according to the predator's nutritional requirements (see below):\n\n$\begin{array}{}\text{(22)}& {G}_{{j}_{\mathrm{pred}},{j}_{\mathrm{prey}},{\mathrm{i}}_{\mathrm{b}}}={G}_{{j}_{\mathrm{pred}},{j}_{\mathrm{prey}},\mathrm{C}}\frac{{\mathrm{B}}_{{j}_{\mathrm{prey}},{i}_{\mathrm{b}}}}{{\mathrm{B}}_{{j}_{\mathrm{prey}},\mathrm{C}}}.\end{array}$\n\n### 3.2.8 Prey assimilation\n\nPrey biomass is assimilated into predator biomass with an efficiency of ${\mathit{\lambda }}_{{j}_{\mathrm{pred}},{i}_{\mathrm{b}}}$ (ib ≠ Chl). This has a maximum value of λmax that is modified according to the quota status of the predator. For elements ib= P or Fe, prey biomass is assimilated as a function of the respective predator quota. If the quota is full, the element is not assimilated. If the quota is empty, the element is assimilated with maximum efficiency (λmax).\n\n$\begin{array}{}\text{(23)}& {\mathit{\lambda }}_{{j}_{\mathrm{pred}},{i}_{\mathrm{b}}}={\mathit{\lambda }}^{\mathrm{max}}{Q}_{j,{i}_{\mathrm{b}}}^{\mathrm{stat}}\end{array}$\n\nC assimilation is regulated according to the status of the most limiting nutrient element (P or Fe) modified by the same shape parameter, h, that was applied in Eq. (6).\n\n$\begin{array}{}\text{(24)}& {Q}_{j,{i}_{\mathrm{b}}}^{\mathrm{lim}}={\left(\frac{{Q}_{j,{i}_{\mathrm{b}}}-{Q}_{j,{i}_{\mathrm{b}}}^{\mathrm{min}}}{{Q}_{j,{i}_{\mathrm{b}}}^{\mathrm{max}}-{Q}_{j,{i}_{\mathrm{b}}}^{\mathrm{min}}}\right)}^{h}\end{array}$\n\nIf both nutrient quotas are full, C is assimilated at the maximum rate.
If either is empty, C assimilation is down-regulated until sufficient quantities of the limiting element(s) are acquired.\n\n$\begin{array}{}\text{(25)}& {\mathit{\lambda }}_{{j}_{\mathrm{pred}},\mathrm{C}}={\mathit{\lambda }}^{\mathrm{max}}min\left({Q}_{j,\mathrm{P}}^{\mathrm{lim}},{Q}_{j,\mathrm{Fe}}^{\mathrm{lim}}\right)\end{array}$\n\n### 3.2.9 Death\n\nAll living biomass is subject to a linear mortality rate of mp. This rate is decreased at very low biomasses (population carbon biomass below ∼10−10 mmol C m−3) in order to maintain a viable population within every surface grid cell (“everything is everywhere, but the environment selects”; Baas-Becking, 1934).\n\n$\begin{array}{}\text{(26)}& {m}_{j}={m}_{p}\left(\mathrm{1}-{e}^{-{\mathrm{10}}^{\mathrm{10}}\cdot {B}_{j,C}}\right)\end{array}$\n\nThe low biomass at which a population attains “immortality” is sufficiently small for that population to have a negligible impact on all other components of the ecosystem.\n\n### 3.2.10 Calcium carbonate\n\nThe production and export of calcium carbonate (CaCO3) by calcifying plankton in the surface ocean is scaled to the export of particulate organic carbon via a spatially uniform value, which is modified by a thermodynamically based relationship with the calcite saturation state. The dissolution of CaCO3 below the surface is treated in a similar way to that of particulate organic matter (POM; Eq. 34), as described by , with the parameter values controlling the export ratio between CaCO3 and particulate organic carbon (POC) taken from .\n\n### 3.2.11 Oxygen\n\nOxygen production is coupled to photosynthetic carbon fixation via a fixed linear ratio, such that\n\n$\begin{array}{}\text{(27)}& {V}_{j,{\mathrm{O}}_{\mathrm{2}}}=-\frac{\mathrm{138}}{\mathrm{106}}{V}_{j,\mathrm{DIC}}{B}_{j,\mathrm{C}}.\end{array}$\n\nThe negative sign indicates that oxygen is produced as dissolved inorganic carbon (DIC) is consumed.
Oxygen consumption associated with the remineralisation of organic matter is unchanged relative to BIOGEM.\n\n### 3.2.12 Alkalinity\n\nProduction of alkalinity is coupled to planktonic uptake of PO4 via a fixed linear ratio, such that\n\n$\\begin{array}{}\\text{(28)}& {V}_{j,\\mathrm{Alk}}=-\\mathrm{16}{V}_{j,{\\mathrm{PO}}_{\\mathrm{4}}}\\cdot {B}_{j,\\mathrm{C}}.\\end{array}$\n\nThe negative sign indicates that alkalinity increases as PO4 is consumed. This relationship accounts for alkalinity changes associated with N transformations that are not explicitly represented in the biogeochemical configurations of cGEnIE that are applied here.\n\n### 3.2.13 Production of organic matter\n\nPlankton mortality and grazing are the only two sources of organic matter, with partitioning between non-sinking dissolved and sinking particulate phases determined by the parameter β. In this initial implementation of ECOGEM, we use a similar size-based sigmoidal partitioning function to .\n\n$\\begin{array}{}\\text{(29)}& \\mathit{\\beta }={\\mathit{\\beta }}_{a}-\\frac{{\\mathit{\\beta }}_{a}-{\\mathit{\\beta }}_{b}}{\\mathrm{1}+{\\mathit{\\beta }}_{c}/\\left[\\text{ESD}\\right]}\\end{array}$\n\nHere, βa is the (maximum) fraction to DOM as ESD approaches zero, βb is the (minimum) fraction to DOM as ESD approaches infinity, and βc is the size at which the partitioning is 50 : 50 between DOM and POM. The parameter values have been adjusted from , such that the global average of β is equal to the constant value of 0.66 used in cGEnIE.\n\n## 3.3 Differential equations\n\nDifferential equations for R, B, and D are written below. The dimensions of each matrix and vector used in Eqs. (30)–(32) are given in Table 1. Note that while R and OM are transported by the physical component of GEnIE, living biomass B is not currently subject to any physical transport. 
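The size-dependent partitioning of Eq. (29) can be illustrated with a short sketch; the coefficients below are assumptions chosen with βa + βb = 1, so that βc is exactly the ESD at which losses split 50 : 50, as stated above:

```python
# Sketch of the size-dependent DOM/POM partitioning of Eq. (29). The
# coefficient values are illustrative assumptions, not the adjusted ECOGEM
# values (which are tuned so the global mean of beta is 0.66).
def dom_fraction(esd, beta_a=0.9, beta_b=0.1, beta_c=5.0):
    """Fraction of mortality and grazing losses routed to DOM, Eq. (29)."""
    return beta_a - (beta_a - beta_b) / (1.0 + beta_c / esd)

for esd in (0.6, 5.0, 200.0):   # ESD in um
    print(f"ESD = {esd:6.1f} um -> fraction to DOM = {dom_fraction(esd):.2f}")
```

Small cells thus route most of their losses to the non-sinking dissolved pool, while large cells contribute predominantly to sinking particulate export.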
The only communication between biological communities in adjacent grid cells is through the advection and diffusion of inorganic resources and non-living organic matter in BIOGEM. Note that some additional sources and sinks of R, and all sinks of D, are computed in BIOGEM.\n\n### 3.3.1 Inorganic resources\n\nFor each inorganic resource, ir,\n\n$\\begin{array}{}\\text{(30)}& \\frac{\\partial {R}_{{i}_{\\mathrm{r}}}}{\\partial t}=\\sum _{j=\\mathrm{1}}^{J}\\underset{\\mathrm{uptake}}{\\underbrace{-{V}_{j,{i}_{\\mathrm{r}}}\\cdot {B}_{j,\\mathrm{C}}}}.\\end{array}$\n\n### 3.3.2 Plankton biomass\n\nFor each plankton class, j, and internal biomass quota, ib,\n\n$\\begin{array}{ll}\\text{(31)}& \\frac{\\partial {B}_{j,{i}_{\\mathrm{b}}}}{\\partial t}=& \\phantom{\\rule{0.25em}{0ex}}+\\underset{\\mathrm{uptake}}{\\underbrace{{V}_{j,{i}_{\\mathrm{b}}}\\cdot {B}_{j,C}}}-\\underset{\\mathrm{basal}\\phantom{\\rule{0.25em}{0ex}}\\mathrm{mortality}}{\\underbrace{{m}_{j}\\cdot {B}_{j,{i}_{\\mathrm{b}}}}}& +\\underset{\\mathrm{grazing}\\phantom{\\rule{0.25em}{0ex}}\\mathrm{gains}}{\\underbrace{{B}_{j,\\mathrm{C}}\\cdot {\\mathit{\\lambda }}_{j,{i}_{\\mathrm{b}}}\\sum _{{j}_{\\mathrm{prey}}=\\mathrm{1}}^{J}{G}_{j,{j}_{\\mathrm{prey}},{i}_{\\mathrm{b}}}}}\\\\ & -\\underset{\\mathrm{grazing}\\phantom{\\rule{0.25em}{0ex}}\\mathrm{losses}}{\\underbrace{\\sum _{{j}_{\\mathrm{pred}}=\\mathrm{1}}^{J}{B}_{{j}_{\\mathrm{pred}},\\mathrm{C}}\\cdot {G}_{{j}_{\\mathrm{pred}},j,{i}_{\\mathrm{b}}}}}.\\end{array}$\n\n### 3.3.3 Dissolved organic matter\n\nFor each detrital nutrient element, id, the rate of change of dissolved fraction of organic matter (k=1) is described by\n\n$\\begin{array}{ll}\\text{(32)}& & \\frac{\\partial {\\mathrm{D}}_{\\mathrm{1},{i}_{\\mathrm{d}}}}{\\partial t}=\\underset{\\mathrm{mortality}}{\\underbrace{\\sum _{j=\\mathrm{1}}^{J}\\phantom{\\rule{0.125em}{0ex}}\\left[{\\mathrm{B}}_{j,{i}_{\\mathrm{d}}}\\right]{\\mathit{\\beta }}_{j}{m}_{j}}}& 
+\\underset{\\mathrm{messy}\\phantom{\\rule{0.25em}{0ex}}\\mathrm{feeding}}{\\underbrace{\\sum _{{j}_{\\mathrm{pred}}=\\mathrm{1}}^{J}\\left[{\\mathrm{B}}_{{j}_{\\mathrm{pred}},\\mathrm{C}}\\right]\\left(\\mathrm{1}-{\\mathit{\\lambda }}_{{j}_{\\mathrm{pred}},{i}_{\\mathrm{b}}}\\right)\\sum _{{j}_{\\mathrm{prey}}=\\mathrm{1}}^{J}{\\mathit{\\beta }}_{{j}_{\\mathrm{prey}}}{G}_{{j}_{\\mathrm{pred}},{j}_{\\mathrm{prey}},{i}_{\\mathrm{d}}}}}.\\end{array}$\n\nThe dissolved organic matter vector (D1) includes three explicit tracers that are transported by the ocean circulation model and are degraded back to their constituent nutrients with a fixed turnover time of λ(=0.5 years). POM is not represented with explicit state variables in either ECOGEM or BIOGEM. Instead, its implicit production in the surface layer (and the corresponding export below the surface layer) is given by\n\n$\\begin{array}{ll}\\text{(33)}& & {F}_{\\text{surface},{i}_{\\mathrm{d}}}=\\underset{\\mathrm{mortality}}{\\underbrace{\\sum _{j=\\mathrm{1}}^{J}\\left[{\\mathrm{B}}_{j,{i}_{\\mathrm{d}}}\\right]\\left(\\mathrm{1}-{\\mathit{\\beta }}_{j}\\right)\\phantom{\\rule{0.125em}{0ex}}{m}_{j}}}& +\\underset{\\mathrm{messy}\\phantom{\\rule{0.25em}{0ex}}\\mathrm{feeding}}{\\underbrace{\\sum _{{j}_{\\mathrm{pred}}=\\mathrm{1}}^{J}\\left[{\\mathrm{B}}_{{j}_{\\mathrm{pred}},\\mathrm{C}}\\right]\\left(\\mathrm{1}-{\\mathit{\\lambda }}_{{j}_{\\mathrm{pred}},{i}_{\\mathrm{b}}}\\right)\\sum _{{j}_{\\mathrm{prey}}=\\mathrm{1}}^{J}\\left(\\mathrm{1}-{\\mathit{\\beta }}_{{j}_{\\mathrm{prey}}}\\right){G}_{{j}_{\\mathrm{pred}},{j}_{\\mathrm{prey}},{i}_{\\mathrm{d}}}}}.\\end{array}$\n\nThis surface production is redistributed throughout the water column as a depth-dependent flux, ${F}_{z,{i}_{\\mathrm{d}}}$. 
To achieve this, ${F}_{\text{surface},{i}_{\mathrm{d}}}$ is partitioned between a “refractory” component (rPOM) that is predominantly remineralised close to the seafloor, and a “labile” component (1−rPOM) which predominantly remineralises in the upper water column. The net remineralisation at depth z, relative to the export depth z0, is determined by characteristic length scales (lrPOM and lPOM for refractory and labile POM, respectively):\n\n$\begin{array}{ll}\text{(34)}& & {F}_{z,{i}_{\mathrm{d}}}=& {F}_{\text{surface},{i}_{\mathrm{d}}}\left[\left(\mathrm{1}-{r}^{\text{POM}}\right)\cdot \text{exp}\left(\frac{{z}_{\mathrm{0}}-z}{{l}^{\text{POM}}}\right)+{r}^{\text{POM}}\cdot \text{exp}\left(\frac{{z}_{\mathrm{0}}-z}{{l}^{\text{rPOM}}}\right)\right].\end{array}$\n\nThe remineralisation length scales reflect a constant sinking speed and constant remineralisation rate. All POM reaching the seafloor is remineralised instantaneously; see for a fuller description and justification.\n\n### 3.3.4 Coupling to BIOGEM\n\nThe calculations in BIOGEM are performed 48 times for each model year (i.e. once for every two time steps taken by the ocean circulation model). ECOGEM takes 20 time steps for each BIOGEM time step (i.e. 960 time steps per year). At the beginning of each ECOGEM time step loop, concentrations of inorganic tracers and key properties of the physical environment are passed from BIOGEM. The ecological community responds by transforming inorganic compounds into living biomass through photosynthesis. At the end of each ECOGEM time step loop, the rates of change in R and OM are passed back to BIOGEM. $\partial \mathbit{R}/\partial t$ is used to update DIC, phosphate, iron, oxygen, and alkalinity tracers, while $\partial {\mathbf{D}}_{\mathrm{1}}/\partial t$ is added to the dissolved organic matter pools.
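The double-exponential export profile of Eq. (34) can be sketched as follows; the refractory fraction and the two remineralisation length scales are assumed values for illustration, with z0 taken as the base of the 80.8 m surface layer:

```python
# Sketch of the double-exponential export profile of Eq. (34). The
# refractory fraction r_pom and the two length scales are assumptions.
import numpy as np

def pom_flux(z, F_surface=1.0, z0=80.8, r_pom=0.05,
             l_pom=590.0, l_rpom=1.0e6):
    """POM flux at depth z as a fraction of the surface export."""
    labile = (1.0 - r_pom) * np.exp((z0 - z) / l_pom)
    refractory = r_pom * np.exp((z0 - z) / l_rpom)
    return F_surface * (labile + refractory)

z = np.array([80.8, 500.0, 2000.0, 4000.0])   # depths in m
f = pom_flux(z)
# The labile pool decays within the upper ~1 km, while the much longer
# length scale of the refractory pool carries its small fraction of the
# export almost unattenuated to the seafloor.
```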
The particulate organic matter produced, $\partial {\mathbf{D}}_{\mathrm{2}}/\partial t$, is instantly remineralised at depth according to the standard BIOGEM export functions described above (Eq. 34). $\partial \mathbf{B}/\partial t$ is used only to update the living biomass concentrations within ECOGEM. The structure of the coupling is illustrated in Fig. 1.\n\nIn the initial implementation of ECOGEM described and evaluated here, the explicit plankton community is held entirely within the ECOGEM module and is not subject to physical transport (e.g. advection and diffusion) by the ocean circulation model (although dissolved tracers such as nutrients still are). As a first approximation, this approach appears to be acceptable, as long as the rate of transport between the very large grid cells in cGEnIE is slow in relation to the net growth rates of the plankton community. Online advection of ecosystem state variables will be implemented and its consequences explored in a future version of EcoGEnIE.\n\n## 3.4 Ecophysiological parameterisation\n\nThe model community is made up of a number of different plankton populations, with each one described according to the same set of equations, as outlined above. Differences between the populations are specified according to individual parameterisation of the equations. In the following sections, we describe how the members of the plankton community are specified and how their parameters are assigned according to the organism's size and taxonomic group.\n\n### 3.4.1 Model structure\n\nThe plankton community in ECOGEM is designed to be highly configurable. Each population present in the initial community is specified by a single line in an input text file, which describes the organism size and taxonomic group.\n\nIn this configuration, we include 16 plankton populations across eight different size classes. These are divided into two PFTs, namely phytoplankton and zooplankton (see Table 2).
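The community configuration described above can be mimicked with a short script; the ESD values below are hypothetical placeholders (the actual size classes are those listed in Table 2):

```python
# Illustrative construction of the 16-population community described in the
# text: eight size classes, each instantiated once as phytoplankton and once
# as zooplankton. The ESD values here are hypothetical placeholders.
import numpy as np

esd_classes = np.round(10 ** np.linspace(-0.2, 3.0, 8), 2)   # ~0.6-1000 um
community = [(esd, pft) for esd in esd_classes
             for pft in ("phytoplankton", "zooplankton")]

for esd, pft in community:
    print(f"{esd:8.2f} um  {pft}")
```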
The eight phytoplankton populations have nutrient uptake and photosynthesis traits enabled, and predation traits disabled, whereas the opposite is true for the eight zooplankton populations. In the future, we expect to bring in a wider range of trait-based functional types, including siliceous plankton (e.g. Follows et al., 2007), calcifiers, nitrogen fixers, and mixotrophs.

### 3.4.2 Size-dependent traits

With the exception of the maximum photosynthetic rate ($P_\mathrm{C}^{\max}$; see below), the size-dependent ecophysiological parameters ($p$) given in Table 3 are assigned as power-law functions of organismal volume ($V = \pi[\mathrm{ESD}]^3/6$) according to standard equations of the form

$$p = a\left(\frac{V}{V_0}\right)^b. \quad (35)$$

Here, $V_0$ is a reference volume of $V_0 = 1$ µm$^3$. The value of $p$ at $V = V_0$ is given by the coefficient $a$, while the rate of change in $p$ as a function of $V$ is described by the exponent $b$.

The maximum photosynthetic rate ($P_\mathrm{C}^{\max}$) of very small cells (i.e. <5 µm ESD) has been shown to deviate from the standard power law of Eq. (35), so we use the slightly more complex unimodal function given by

$$P_\mathrm{C}^{\max} = \frac{p_a + \log_{10}(V/V_0)}{p_b + p_c\log_{10}(V/V_0) + \left[\log_{10}(V/V_0)\right]^2}. \quad (36)$$

The parameters of this equation (listed in Table 3) were derived empirically from the data of .

Table 3. Size-dependent ecophysiological parameters (p) and their units, with size-scaling coefficients (a, b and c) for use in Eqs. (29), (35) and (36).
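To illustrate how Eqs. (35) and (36) are applied, the sketch below evaluates a generic power-law trait and the unimodal maximum photosynthetic rate. The coefficients passed in are arbitrary illustrative values, not those of Table 3:

```python
import math

def esd_to_volume(esd):
    """Cell volume (um^3) from equivalent spherical diameter (um): V = pi * ESD^3 / 6."""
    return math.pi * esd ** 3 / 6.0

def power_law_trait(a, b, volume, v0=1.0):
    """Generic size-scaled parameter (Eq. 35): p = a * (V / V0)**b."""
    return a * (volume / v0) ** b

def p_c_max(volume, pa, pb, pc, v0=1.0):
    """Unimodal maximum photosynthetic rate (Eq. 36): quadratic in
    log10(V / V0) in the denominator, so the rate peaks at an
    intermediate cell volume rather than scaling monotonically."""
    x = math.log10(volume / v0)
    return (pa + x) / (pb + pc * x + x ** 2)
```

For any coefficients that keep the denominator positive, Eq. (36) rises from small volumes to a peak and then declines, in contrast to the monotonic power law of Eq. (35).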
### 3.4.3 Size-independent traits

A list of size-independent model parameters is given in Table 4.

## 3.5 Parameter modifications

The parameter values applied in ECOGEM were kept as close as possible to those of previously published versions of the model. A few modifications were nonetheless required to bring EcoGEnIE into first-order agreement with observations and the current version of cGEnIE. In particular, in comparison to the biogeochemical model used in , considerably less soluble iron is supplied to cGEnIE by atmospheric deposition. With a smaller source of iron, it was necessary to decrease the iron demand of the plankton community, and this was achieved by decreasing $Q_\mathrm{Fe}^{\max}$ and $Q_\mathrm{Fe}^{\min}$ fivefold ($Q_\mathrm{Fe}^{\max}$ from 20 to 4 nmol Fe (mmol C)−1, and $Q_\mathrm{Fe}^{\min}$ from 5 to 1 nmol Fe (mmol C)−1).

We also found that the flexible stoichiometry of ECOGEM led to excessive export of carbon from the surface ocean, attributable to higher C : P ratios in organic matter (BIOGEM assumes a Redfieldian C : P of 106). This effect was moderated by increasing the minimum phosphate : carbon quota, $Q_\mathrm{P}^{\min}$ (relative to Ward et al., 2012).

4 Simulations and data

## 4.1 10 000-year spin-up

We ran cGEnIE (as configured and described in Ridgwell and Death, 2018) and EcoGEnIE (as described here) each for a period of 10 000 years. These runs were initialised from a homogeneous and static ocean, with an imposed constant atmospheric CO2 concentration of 278 ppm. We present model output from the 10 000th year of integration.

## 4.2 Observations

Although they are not necessarily strictly comparable, we compare results from the pre-industrial configurations of cGEnIE and EcoGEnIE to contemporary climatologies from a range of sources.
Global climatologies of dissolved phosphate and oxygen are drawn from the World Ocean Atlas 2009 (WOA09; Garcia et al., 2010), while DIC and alkalinity are taken from the Global Ocean Data Analysis Project version 2 (GLODAPv2; Olsen, 2016). Surface chlorophyll concentrations represent a climatological average from 1997 to 2002, estimated by the SeaWiFS satellite. Depth-integrated primary production is from . All of these interpolated global fields have been re-gridded onto the cGEnIE 36 × 36 × 16 grid.

Observed dissolved iron concentrations are those published by . These data are too sparse and variable to allow reliable mapping on the cGEnIE grid and are therefore shown as individual data points.

Fidelity to the observed seasonal cycle of nutrients and biomass was evaluated against observations from nine Joint Global Ocean Flux Study (JGOFS) sites: the Hawai'i Ocean Time-series (HOT: 23° N, 158° W), the Bermuda Atlantic Time-series Study (BATS: 32° N, 64° W), the equatorial Pacific (EQPAC: 0° N, 140° W), the Arabian Sea (ARABIAN: 16° N, 62° E), the North Atlantic Bloom Experiment (NABE: 47° N, 19° W), Station P (STNP: 50° N, 145° W), Kerfix (KERFIX: 51° S, 68° E), the Antarctic Polar Frontal Zone (APFZ: 62° S, 170° W), and the Ross Sea (ROSS: 75° S, 180° W). Model output for Kerfix and the Ross Sea site was not taken at the true locations of the observations (51° S, 68° E and 75° S, 180° W, respectively). Kerfix was moved to compensate for a poor representation of the polar front within the coarse-resolution ocean model, while the Ross Sea site does not lie within the GEnIE ocean grid. At each site, the observational data represent the mean daily value within the mixed layer. Observational data from all years are plotted together as one climatological year.
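The construction of a climatological year from multi-year observations can be sketched as follows. This is a simplified illustration of the binning idea only (real mixed-layer averaging would also account for depth and sampling density); the function and variable names are ours, not from the model code:

```python
from collections import defaultdict
from datetime import date

def climatological_cycle(samples):
    """Collapse (date, value) samples from many different years into a
    single climatological year: the mean value per calendar month."""
    bins = defaultdict(list)
    for sample_date, value in samples:
        bins[sample_date.month].append(value)  # pool all years together
    return {month: sum(values) / len(values)
            for month, values in sorted(bins.items())}
```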
Figure 2. Surface concentrations of dissolved phosphate (mmol PO4 m−3) and iron (mmol dFe m−3).
Figure 3. Surface concentrations of dissolved inorganic carbon (mmol C m−3), alkalinity (m eq. m−3), and dissolved oxygen (mmol O2 m−3).
Figure 5. Vertical fluxes of particulate carbon (mmol C m−2 d−1), phosphorus (mmol P m−2 d−1), iron (mmol Fe m−2 d−1), and calcium carbonate (mmol CaCO3 m−2 d−1) across the base of the surface layer. The right-hand column indicates the relative increase or decrease in ECOGEM, relative to BIOGEM (dimensionless).

5 Results

## 5.1 Biogeochemical variables

We start by describing the global distributions of key biogeochemical tracers that are common to both cGEnIE and EcoGEnIE.

### 5.1.1 Global surface values

Annual mean global distributions are presented for the upper 80.8 m of the water column, corresponding to the model surface layer. In Fig. 2, we compare output from the two models to observations of dissolved phosphate and iron. Surface phosphate concentrations are broadly similar between the two versions of the model, except that EcoGEnIE provides slightly lower estimates in the Southern Ocean and equatorial upwellings. Both versions strongly underestimate surface phosphate in the equatorial and north Pacific, and to a lesser extent in the north and east Atlantic, the Arctic, and the Arabian Sea. This is likely attributable in part to the model underestimating the strength of upwelling in these regions. It should also be noted that the observations may in some cases be unrepresentative of the true surface layer, when this is significantly shallower than 80.8 m. In such cases, the observed value will be affected by measurements from below the surface layer. Iron distributions are also broadly similar between the two models, with EcoGEnIE showing slightly lower iron concentrations over most of the ocean.

Figure 3 shows observed and modelled values of inorganic carbon, oxygen, and alkalinity. The two models yield very similar surface distributions of the three tracers.
DIC and alkalinity are both broadly underestimated relative to observations, while oxygen shows higher fidelity, albeit with artificially high estimates in the equatorial Atlantic and Pacific. This is likely attributable to unrealistically weak upwelling in these regions.

Surface ΔpCO2 from the two models is shown in Fig. 4. EcoGEnIE shows weaker CO2 outgassing in the tropical band, with a much stronger ocean-to-atmosphere flux in the western Arctic.

In Fig. 5, we show the annual mean rate of particulate organic matter production in the surface layer, and the relative differences between ECOGEM and BIOGEM. In comparison to cGEnIE, EcoGEnIE shows elevated POC production in all regions. Production of CaCO3 is globally less variable in EcoGEnIE than in cGEnIE, with notably higher fluxes in the oligotrophic gyres and polar regions.

The relative proportions in which these elements and compounds are exported from the surface ocean are regulated by the stoichiometry of biological production. In cGEnIE (BIOGEM), carbon and phosphorus production is rigidly coupled through a fixed ratio of 106 : 1, while POFe : POC and CaCO3 : POC export flux ratios are regulated as a function of environmental conditions. In EcoGEnIE (ECOGEM), phosphorus, iron, and carbon fluxes are all decoupled through the flexible quota physiology, which depends on both environmental conditions and the status of the food web. Only CaCO3 : POC flux ratios are regulated via the same mechanism in the two models.

### 5.1.2 Basin-averaged depth profiles

In this section, we present the meridional depth distributions of key biogeochemical tracers, averaged across each of the three main ocean basins, as shown in Fig. 6. Figure 7 shows that the distribution of dissolved phosphate is very similar between the two models, with EcoGEnIE showing a slightly stronger subsurface accumulation in the northern Indian Ocean.

The vertical distributions shown in Fig.
8 reveal that dissolved iron is lower throughout the ocean in EcoGEnIE, relative to cGEnIE, particularly below 1500 m. Differences are less obvious at intermediate depths. (Observations are currently too sparse to estimate reliable basin-scale distributions of dissolved iron; see .)

Figure 9 shows that while cGEnIE reproduces observed DIC distributions very well, EcoGEnIE overestimates concentrations within the Indian and Pacific oceans. The total oceanic DIC inventory increased by just under 2 %, from 2.99 examol C in cGEnIE to 3.05 examol C in EcoGEnIE (with a fixed atmospheric CO2 concentration of 278 ppm). Otherwise, the two models show broadly similar distributions, with the most pronounced differences (as for PO4) in the northern Indian Ocean.

Figure 10 shows that cGEnIE reasonably captures the invasion of O2 into the ocean interior through the Southern Ocean and North Atlantic. These patterns are also seen in EcoGEnIE, although unrealistic water column anoxia is seen in the northern intermediate Indian and Pacific oceans. Again, this is likely a consequence of greater export and remineralisation of organic carbon in EcoGEnIE, leading to more oxygen consumption at intermediate depths (also evidenced by elevated PO4, DIC, and alkalinity in the same regions; Figs. 7, 9, and 11).

Alkalinity (Fig. 11) also shows some clear differences between the two models, again most noticeably in the northern intermediate Indian and Pacific oceans. In these regions, EcoGEnIE shows excessive accumulation of alkalinity at ∼1000 m depth. This is again attributable to the increased C export in EcoGEnIE. In the absence of a nitrogen cycle (and $\mathrm{NO}_3^-$ reduction), increased anoxic remineralisation of organic carbon (Figs. 9 and 10) leads to increased reduction of sulfate to H2S, which in turn increases the alkalinity of seawater.
Further adjustment of the cellular nutrient quotas in ECOGEM (and hence the effective exported P : C Redfield ratio), and/or retuning of the organic matter remineralisation profiles in BIOGEM, would likely resolve these issues.
Figure 6. Spatial definition of the three ocean basins used in Figs. 7–10. Locations of the JGOFS time series sites are indicated with blue dots.

### 5.1.3 Time series

Figures 12 and 13 compare the seasonal cycles of surface nutrients (phosphate and iron) at nine Joint Global Ocean Flux Study (JGOFS) sites.
Figure 7. Basin-averaged meridional-depth distribution of phosphate (mmol P m−3).
Figure 8. Basin-averaged meridional-depth distribution of total dissolved iron (mmol dFe m−3).
Figure 10. Basin-averaged meridional-depth distribution of dissolved oxygen (mmol O2 m−3).
Figure 11. Basin-averaged meridional-depth distribution of alkalinity (m eq. m−3).
Figure 12. Annual cycle of surface PO4 at nine time series sites in cGEnIE and EcoGEnIE. Red dots indicate climatological observations, while the lines represent modelled surface PO4 concentrations. Locations of the time series are indicated in Fig. 6.

## 5.2 Ecological variables

Moving on from the core components that are common to both models, we present a range of ecological variables that are exclusive to EcoGEnIE. As before, we begin by presenting the annual mean global distributions in the ocean surface layer, comparing total chlorophyll and primary production to satellite-derived estimates (Fig. 14). We then look in more detail at the community composition, with Fig. 15 showing the carbon biomass within each plankton population. Figure 16 then shows the degree of nutrient limitation within each phytoplankton population. Finally, in Fig. 17, we show the seasonal cycle of community and population level chlorophyll at each of the nine JGOFS time series sites.
Figure 13. Annual cycle of surface dissolved iron at nine time series sites in cGEnIE and EcoGEnIE. Red dots indicate climatological observations, while the lines represent modelled surface iron concentrations. Locations of the time series are indicated in Fig. 6.
Figure 14. Satellite-derived (a, c) and modelled (b, d) surface chlorophyll a concentration (mg Chl m−3) and depth-integrated primary production (mg C m−2 d−1). The satellite-derived estimate of primary production is a composite of three products, as in Yool et al. (2013, their Fig. 12).

### 5.2.1 Global surface values

Figure 14 reveals that EcoGEnIE shows some limited agreement with the satellite-derived estimate of global chlorophyll. As expected, chlorophyll biomass is elevated in the high-latitude oceans relative to lower latitudes. The subtropical gyres show low biomass, but the distinction with higher latitudes is not as clear as in the satellite estimate. The model also shows a clear lack of chlorophyll in equatorial and coastal upwelling regions, relative to the satellite estimate. The model predicts higher chlorophyll concentrations in the Southern Ocean than the satellite estimate, although it should be noted that the satellite algorithms may be underestimating concentrations in these regions (Fig. 17; Dierssen, 2010).

Modelled primary production correctly increases from the oligotrophic gyres towards high latitudes and upwelling regions, but variability is much lower than in the satellite estimate. Specifically, the model and satellite estimates yield broadly similar values in the oligotrophic gyres, but the model does not attain the high values seen at higher latitudes and in coastal areas.

Figure 15 shows the modelled carbon biomass concentrations in the surface layer for each modelled plankton population. The smallest (0.6 µm) phytoplankton size class is evenly distributed in the low-latitude oceans between 40° N and S but is largely absent nearer to the poles. The 1.9 µm phytoplankton size class is similarly ubiquitous at low latitudes, albeit with somewhat higher biomass, and its range extends much further towards the poles.
With increasing size, the larger phytoplankton are increasingly restricted to highly productive areas, such as the subpolar gyres and upwelling zones.

Perhaps as expected, zooplankton size classes tend to mirror the biogeography of their phytoplankton prey. The smallest (1.9 µm) surviving size class is found primarily at low latitudes, although a highly variable population is found at higher latitudes. Larger zooplankton size classes follow a similar pattern to the phytoplankton, moving from a cosmopolitan but homogeneous distribution in the smaller size classes towards spatially more variable distributions among the larger organisms.
Figure 15. Surface concentrations of carbon biomass in each population (mmol C m−3).

The degree of nutrient limitation within each phytoplankton size class is shown in Fig. 16. The two-dimensional colour scale indicates decreasing iron limitation from left to right, and decreasing phosphorus limitation from bottom to top. White is therefore nutrient replete, blue is phosphorus limited, red is iron limited, and magenta is phosphorus–iron co-limited. The figure demonstrates that the smallest size class is not nutrient limited in any region. The increasing saturation of the colour scale in larger size classes indicates an increasing degree of nutrient limitation. As expected, nutrient limitation is strongest in the highly stratified low latitudes. A stronger vertical supply of nutrients at higher latitudes is associated with weaker nutrient limitation, although nutrient limitation is still significant among the larger size classes. Consistent with observations, phosphorus limitation is restricted to low latitudes. Iron limitation dominates in high-latitude regions, especially among larger size classes. Among these larger groups, the upwelling zones appear to be characterised by iron–phosphorus co-limitation.
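A two-dimensional colour scale of this kind can be generated by bilinear interpolation between the four corner colours. The sketch below is our own illustration (setting aside the exact axis orientation), mapping limitation degrees of 0 (replete) to 1 (fully limited) onto RGB:

```python
def limitation_colour(fe_lim, p_lim):
    """Bilinearly interpolate between the four corner colours of the
    limitation scale: white (replete), red (Fe-limited), blue (P-limited),
    and magenta (Fe-P co-limited). Inputs run from 0 (no limitation)
    to 1 (full limitation)."""
    corners = {              # (fe, p) -> (R, G, B)
        (0, 0): (1.0, 1.0, 1.0),   # white: nutrient replete
        (1, 0): (1.0, 0.0, 0.0),   # red: iron limited
        (0, 1): (0.0, 0.0, 1.0),   # blue: phosphorus limited
        (1, 1): (1.0, 0.0, 1.0),   # magenta: co-limited
    }
    rgb = []
    for ch in range(3):
        # interpolate along the iron axis at p_lim = 0 and p_lim = 1 ...
        no_p = corners[(0, 0)][ch] * (1 - fe_lim) + corners[(1, 0)][ch] * fe_lim
        full_p = corners[(0, 1)][ch] * (1 - fe_lim) + corners[(1, 1)][ch] * fe_lim
        # ... then along the phosphorus axis
        rgb.append(no_p * (1 - p_lim) + full_p * p_lim)
    return tuple(rgb)
```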
Figure 16. Nutrient limitation in each phytoplankton population (dimensionless). The two-dimensional colour scale indicates decreasing phosphorus limitation from left to right, and decreasing iron limitation from bottom to top. White is therefore nutrient replete, blue is phosphorus limited, red is iron limited, and magenta is phosphorus–iron co-limited.

### 5.2.2 Time series

The seasonal cycles of phytoplankton chlorophyll a are compared to time series observations in Fig. 17. The modelled total chlorophyll concentrations (black lines) track the observed concentrations (red dots) reasonably well at most sites. The bottom three panels also suggest that the satellite data shown in Fig. 14 may slightly underestimate surface chlorophyll concentrations in the Southern Ocean. The modelled surface chlorophyll concentration is too low in the equatorial Pacific, while the spring bloom occurs 1–2 months earlier than was seen during the North Atlantic Bloom Experiment.

The seasonal cycles of primary production in the surface layer are compared to time series observations in Fig. 18. As also indicated in Fig. 14, the spatial variance in modelled primary production is too low, with primary production overestimated at the most oligotrophic site (HOT) and typically underestimated at the most productive sites (especially the equatorial Pacific, NABE, and the Ross Sea). In contrast to the lack of spatial variability, the model exhibits significant seasonal variation, often in excess of the observed variability (at those sites where the seasonal cycle is well resolved).
Figure 17. Annual cycle of surface chlorophyll a at nine JGOFS time series sites. Red dots indicate climatological observations, while the black lines represent modelled total surface chlorophyll a. Coloured lines represent chlorophyll a in individual size classes (blue is small; red is large). Locations of the time series are indicated in Fig. 6. Satellite estimates of chlorophyll a are shown in grey.
Figure 18. Annual cycle of surface primary production at nine JGOFS time series sites. Red dots indicate climatological observations, while the black lines represent modelled total primary production. Locations of the time series are indicated in Fig. 6.

### 5.2.3 cGEnIE vs. EcoGEnIE

Figure 19 is a Taylor diagram comparing the two models in terms of their correlation to observations and their standard deviations, relative to observations. A perfect model would be located at the middle of the bottom axis, with a correlation coefficient of 1.0 and a normalised standard deviation of 1.0. The closer a model is to this ideal point, the better a representation of the data it provides. Figure 19 shows that EcoGEnIE is located further from the ideal point than cGEnIE, in terms of oxygen, alkalinity, phosphate, and DIC. The new model seems to provide a universally worse representation of global ocean biogeochemistry. This is perhaps not surprising, given that the BIOGEM component of cGEnIE has at various times been systematically tuned to match the observation data. EcoGEnIE has not yet been optimised in this way.
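The two statistics summarised by a Taylor diagram can be computed as in the following sketch (our own minimal implementation, not the plotting code used for Fig. 19):

```python
import math

def taylor_stats(model, obs):
    """Return (Pearson correlation, normalised standard deviation) for a
    model field against observations. A perfect model gives (1.0, 1.0),
    i.e. the ideal point on the bottom axis of a Taylor diagram."""
    n = len(model)
    mean_m = sum(model) / n
    mean_o = sum(obs) / n
    cov = sum((m - mean_m) * (o - mean_o) for m, o in zip(model, obs)) / n
    std_m = math.sqrt(sum((m - mean_m) ** 2 for m in model) / n)
    std_o = math.sqrt(sum((o - mean_o) ** 2 for o in obs) / n)
    return cov / (std_m * std_o), std_m / std_o
```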
Figure 19. Taylor diagram comparing cGEnIE (black dots) and EcoGEnIE (grey dots) to annual mean observation fields.

6 Discussion

The marine ecosystem is a central component of the Earth system, harnessing solar energy to sustain the biogeochemical cycling of elements between dissolved inorganic nutrients, living biomass, and decaying organic matter. The interaction of these components with the global carbon cycle is critical to our interpretation of past, present, and future climates, and has motivated the development of a wide range of models. These can be placed on a spectrum of increasing complexity, from simple and computationally efficient box models to fully coupled Earth system models with extremely large computational costs.

cGEnIE is a model of intermediate complexity on this spectrum. It has been designed to allow rapid model evaluation while at the same time retaining somewhat realistic global dynamics that facilitate comparison with observations. With this goal in mind, the biological pump was parameterised as a simple vertical flux defined as a function of environmental conditions. This simplicity is well suited to questions concerning the interactions of marine biogeochemistry and climate, but at the same time precludes any investigation of the role of ecological interactions with the broader Earth system.

Here, we have presented an ecological extension to cGEnIE that opens up this area of investigation. EcoGEnIE is rooted in size-dependent physiological and ecological constraints. The ecophysiological parameters are relatively well constrained by observations, even in comparison to simpler ecosystem models that are based on much more aggregated functional groups. The size-based formulation has the additional benefit of linking directly to functional aspects of the ecosystem, such as food web structure and particle sinking.

The aim of this paper is to provide a detailed description of the new ecological component.
It is clear from Fig. 19 that the switch from the parameterised biological pump to the explicit ecological model has led to a deterioration in the overall ability of cGEnIE to reproduce the global distributions of important biogeochemical tracers. This is an acceptable outcome, as our goal here is simply to provide a full description of the new model. Given that the original model was calibrated to the observations in question, that process will need to be repeated for the new model before any sort of objective comparison can be made. We also note that EcoGEnIE is still capable of reproducing approximately 90 % of the global variability in DIC, more than 70 % for phosphate, oxygen, and alkalinity, and more than 50 % for surface chlorophyll.

Despite a slight overall deterioration in terms of model–observation misfit, the biogeochemical components of the model retain the key features that should be expected. At the same time, the ecological community conforms to expectations in terms of standing stocks and fluxes, both in terms of large-scale spatial distributions and the seasonal cycles at specific locations (Figs. 14 and 17). Overall patterns of community structure and physiological limitation also follow expectations based on observations and theory.

As presented, the model is limited to three limiting resources (light, phosphorus, and iron) and two plankton functional types (phytoplankton and zooplankton). We have written the model equations and code to facilitate the extension of the model to include additional components. In particular, the model capabilities can be extended by enabling silicon and nitrogen limitation, leveraging the silicon and nitrogen cycles already present in BIOGEM.
Adding these nutrients will enable the addition of diatoms and diazotrophs, both of which are likely to be important factors affecting the long-term strength of the biological pump.

7 Code availability

## Muffin

A manual, detailing code installation, basic model configuration, plus an extensive series of tutorials covering various aspects of the cGEnIE "muffin" release, experimental design, and results output and processing, is provided. The LaTeX source of the manual, along with a pre-built PDF file, can be obtained by cloning (https://github.com/derpycode/muffindoc).

A muffin manual version (0.9.1b) corresponding to the model code release can be downloaded at https://github.com/derpycode/muffindoc/archive/1.9.1b.zip or at https://github.com/derpycode/muffindoc/archive/1.9.1b.tar.gz and has a DOI of https://doi.org/10.5281/zenodo.1407658 (Ridgwell et al., 2018).

## Instructions

The muffin manual contains instructions for obtaining, installing, and testing the code, plus how to run experiments. Specifically,

• Section 1.1 provides a basic overview of the software environment required for installing and running muffin.

• Section 1.2.2 provides a basic overview of cloning and testing the code.

• Section 17.4 provides a detailed guide to cloning the code and configuring an Ubuntu (18.04) software environment, including netCDF library installation, plus running a basic test.

• Section 17.6 provides a detailed guide to cloning the code and configuring a Mac OS software environment, including netCDF library installation, plus running a basic test.

• Section 1.3 provides a basic guide to running experiments (also see Sect. 1.6 and 1.7).

• Section 1.4 provides a basic introduction to model output (much more detail is given in Sect. 12).

The code for the cGENIE.muffin model is hosted on GitHub.
The specific version used in this paper is tagged as release 0.9.1 and can be obtained by cloning (https://github.com/derpycode/cgenie.muffin) or downloading (https://github.com/derpycode/cgenie.muffin/archive/0.9.1.zip or https://github.com/derpycode/cgenie.muffin/archive/0.9.1.tar.gz), and is assigned the DOI https://doi.org/10.5281/zenodo.1404210 (Ridgwell and Reinhard, 2018). (Note that the discussion paper version of muffin was tagged as 0.9.0 and was assigned the DOI https://doi.org/10.5281/zenodo.1312518 (Ridgwell and ophiocordyceps, 2018). The difference simply reflects an incorrect plankton definition file included in the code of the earlier tagged release, which did not reflect the results. The differences in results obtained using the incorrect earlier configuration file were negligible.) Configuration files for the specific experiments presented in the paper can be found in the following directory:

cgenie.muffin\genie-userconfigs\MS\wardetal.2018

Details of the different experiments, plus the command line needed to run each one, are given in readme.txt.

Finally, Sect. 9 of the muffin manual provides a set of tutorials surrounding the configuration and capabilities of the ECOGEM ecosystem model.

Author contributions

BW, AR and JW developed the model. RD, JW and AR developed the iron biogeochemistry. All authors wrote the paper.

Competing interests

The authors declare that they have no conflict of interest.

Acknowledgements

This work was supported by the European Research Council "PALEOGENiE" project (ERC-2013-CoG-617313). Ben A. Ward thanks the Marine System Modelling group at the National Oceanography Centre, Southampton. Satellite ocean colour data (Sea-viewing Wide Field-of-view Sensor; SeaWiFS) were obtained from the National Aeronautics and Space Administration (NASA) Goddard Space Flight Center.

Edited by: Paul Halloran
Reviewed by: Erik Buitenhuis and one anonymous referee

References

Anderson, T.
R.: Plankton functional type modelling: Running before we can walk?, J. Plankton Res., 27, 1073–1081, 2005.

Archer, D. E. and Johnson, K.: A model of the iron cycle in the ocean, Global Biogeochem. Cy., 14, 269–279, 2000.

Armstrong, R. A., Lee, C., Hedges, J. I., Honjo, S., and Wakeham, S. G.: A new, mechanistic model for organic carbon fluxes in the ocean based on the quantitative association of POC with ballast materials, Deep-Sea Res. Pt. II, 49, 219–236, 2002.

Aumont, O., Maier-Reimer, E., Blain, S., and Pondaven, P.: An ecosystem model of the global ocean including Fe, Si, P co-limitations, Global Biogeochem. Cy., 17, 1060, https://doi.org/10.1029/2001GB001745, 2003.

Aumont, O., Ethé, C., Tagliabue, A., Bopp, L., and Gehlen, M.: PISCES-v2: an ocean biogeochemical model for carbon and ecosystem studies, Geosci. Model Dev., 8, 2465–2513, https://doi.org/10.5194/gmd-8-2465-2015, 2015.

Baas-Becking, L. G. M.: Geobiologie of Inleiding Tot de Milieukunde, Van Stockum & Zoon, The Hague, 1934.

Bacastow, R. and Maier-Reimer, E.: Ocean-circulation model of the carbon cycle, Clim. Dynam., 4, 95–125, 1990.

Bec, B., Collos, Y., Vaquer, A., Mouillot, D., and Souchu, P.: Growth rate peaks at intermediate cell size in marine photosynthetic picoeukaryotes, Limnol. Oceanogr., 53, 863–867, 2008.

Behrenfeld, M. J. and Falkowski, P. G.: Photosynthetic rates derived from satellite-based chlorophyll concentration, Limnol. Oceanogr., 42, 1–20, 1997.

Bruggeman, J. and Kooijman, S. A. L. M.: A biodiversity-inspired approach to aquatic ecosystem modeling, Limnol. Oceanogr., 52, 1533–1544, 2007.

Butenschön, M., Clark, J., Aldridge, J. N., Allen, J.
I., Artioli, Y., Blackford, J., Bruggeman, J., Cazenave, P., Ciavatta, S., Kay, S., Lessin, G., van Leeuwen, S., van der Molen, J., de Mora, L., Polimene, L., Sailley, S., Stephens, N., and Torres, R.: ERSEM 15.06: a generic model for marine biogeochemistry and the ecosystem dynamics of the lower trophic levels, Geosci. Model Dev., 9, 1293–1339, https://doi.org/10.5194/gmd-9-1293-2016, 2016.

Cao, L., Eby, M., Ridgwell, A., Caldeira, K., Archer, D., Ishida, A., Joos, F., Matsumoto, K., Mikolajewicz, U., Mouchet, A., Orr, J. C., Plattner, G.-K., Schlitzer, R., Tokos, K., Totterdell, I., Tschumi, T., Yamanaka, Y., and Yool, A.: The role of ocean transport in the uptake of anthropogenic CO2, Biogeosciences, 6, 375–390, https://doi.org/10.5194/bg-6-375-2009, 2009.

Caperon, J.: Growth response of Isochrysis galbana to nitrate variation at limiting concentrations, Ecology, 49, 866–872, 1968.

Carr, M.-E., Friedrichs, M. A. M., Schmeltz, M., Aita, M. N., Antoine, D., Arrigo, K. R., Asanuma, I., Aumont, O., Barber, R., Behrenfeld, M., Bidigare, R., Buitenhuis, E. T., Campbell, J., Ciotti, A., Dierssen, H., Dowell, M., Dunne, J., Esaias, W., Gentili, B., Gregg, W., Groom, S., Hoepffner, N., Ishizaka, J., Kameda, T., le Quéré, C., Lohrenz, S., Marra, J., Mélin, F., Moore, K., Morel, A., Reddy, T. E., Ryan, J., Scardi, M., Smyth, T., Turpie, K., Tilstone, G., Waters, K., and Yamanaka, Y.: A comparison of global estimates of marine primary production from ocean color, Deep-Sea Res. Pt. II, 53, 741–770, 2006.

Claussen, M., Mysak, L., Weaver, A., Crucifix, M., Fichefet, T., Loutre, M.-F., Weber, S., Alcamo, J., Alexeev, V., Berger, A., Calov, R., Ganopolski, A., Goosse, H., Lohmann, G., Lunkeit, F., Mokhov, I., Petoukhov, V., Stone, P., and Wang, Z.: Earth system models of intermediate complexity: closing the gap in the spectrum of climate system models, Clim. Dynam., 18, 579–586, 2002.

Dierssen, H.
M.: Perspectives on empirical approaches for ocean color remote sensing of chlorophyll in a changing climate, P. Natl. Acad. Sci. USA, 107, 17073–17078, 2010.

Doney, S., Lindsay, K., Fung, I., and John, J.: Natural variability in a stable, 1000-yr global coupled climate–carbon cycle simulation, J. Climate, 19, 3033–3054, 2006.

Droop, M. R.: Vitamin B12 and Marine Ecology, IV. The kinetics of uptake, growth and inhibition in Monochrysis lutheri, J. Mar. Biol. Assoc. UK, 48, 689–733, 1968.

Edwards, N. and Marsh, R.: Uncertainties due to transport-parameter sensitivity in an efficient 3-D ocean-climate model, Clim. Dynam., 24, 415–433, 2005a.

Edwards, N. R. and Marsh, R.: Uncertainties due to transport-parameter sensitivity in an efficient 3-D ocean-climate model, Clim. Dynam., 24, 415–433, 2005b.

Falkowski, P. G., Katz, M. E., Knoll, A. H., Quigg, A., Raven, J. A., Schofield, O., and Taylor, F. J. R.: The Evolution of Modern Eukaryotic Phytoplankton, Science, 305, 354–360, 2004.

Finkel, Z. V., Beardall, J., Flynn, K. J., Quigg, A., Rees, T. A. V., and Raven, J. A.: Phytoplankton in a changing world: cell size and elemental stoichiometry, J. Plankton Res., 32, 119–137, 2010.

Flynn, K. J.: The importance of the form of the quota curve and control of non-limiting nutrient transport in phytoplankton models, J. Plankton Res., 30, 423–438, 2008.

Follows, M. J., Dutkiewicz, S., Grant, S., and Chisholm, S. W.: Emergent Biogeography of Microbial Communities in a Model Ocean, Science, 315, 1843–1846, 2007.

Friedrichs, M. A. M., Hood, R. R., and Wiggert, J. D.: Ecosystem model complexity versus physical forcing: Quantification of their relative impact with assimilated Arabian Sea data, Deep-Sea Res. Pt. II, 53, 576–600, 2006.

Friedrichs, M. A. M., Dusenberry, J. A., Anderson, L. A., Armstrong, R., Chai, F., Christian, J. R., Doney, S. C., Dunne, J., Fujii, M., Hood, R.
R., McGillicuddy, D., Moore, J. K., Schartau, M., Spitz, Y. H., and Wiggert, J. D.: Assessment of skill and portability in regional marine biogeochemical models: the role of multiple planktonic groups, J. Geophys. Res., 112, C08001, https://doi.org/10.1029/2006JC003852, 2007. a, b\n\nGarcia, H. E., Locarnini, R. A., Boyer, T. P., Antonov, J. I., Zweng, M. M., Baranova, O. K., and Johnson, D. R.: World Ocean Atlas 2009, Volume 4: Nutrients (phosphate, nitrate, and silicate). NOAA Atlas NESDIS 71, U.S. Government Printing Office, Washington, DC, 2010. a\n\nGeider, R. J., MacIntyre, H. L., and Kana, T. M.: A dynamic regulatory model of phytoplanktonic acclimation to light, nutrients and temperature, Limnol. Oceanogr., 43, 679–694, 1998. a, b, c, d, e\n\nGibbs, S. J., Bown, P. R., Ridgwell, A., Young, J. R., Poulton, A. J., and O'Dea, S. A.: Ocean warming, not acidification, controlled coccolithophore response during past greenhouse climate change, Geology, 44, 59–62, https://doi.org/10.1130/G37273.1, 2015. a\n\nGuidi, L., Stemmann, L., Jackson, G. A., Ibanez, F., Claustre, H., Legendre, L., Picheral, M., and Gorsky, G.: Effects of phytoplankton community on production, size and export of large aggregates: A world-ocean analysis, Limnol. Oceanogr., 54, 1951–1963, 2009. a\n\nHain, M. P., Sigman, D. M., and Haug, G. H.: The biological pump in the past, in: Treatise on Geochemistry, vol. 8, pp. 485–517, Elsevier, 2nd edn., 2014. a\n\nHansen, P. J., Bjørnsen, P. K., and Hansen, B. W.: Zooplankton grazing and growth: Scaling with the 2–2000-µm body size range, Limnol. Oceanogr., 42, 678–704, 1997. a\n\nHargreaves, J. C., Annan, J. D., Edwards, N. R., and Marsh, R.: An efficient climate forecasting method using an intermediate complexity Earth System Model and the ensemble Kalman filter, Clim. Dynam., 23, 745–760, https://doi.org/10.1007/s00382-004-0471-4, 2004. 
a\n\nHeinze, C., Maier-Reimer, E., and Winn, K.: Glacial pCO2 reduction by the world ocean: Experiments with the Hamburg carbon cycle model, Paleoceanography, 6, 395–430, 1991. a\n\nHollowed, A. B., Barange, M., Beamish, R. J., Brander, K., Cochrane, K., Drinkwater, K., Foreman, M. G. G., Hare, J. A., Holt, J., Ito, S., Kim, S., King, J., Loeng, H., MacKenzie, B. R., Mueter, F. J., Okey, T. A., Peck, M. A., Radchenko, V. I., Rice, J. C., Schirripa, M. J., Yatsu, A., and Yamanaka, Y.: Projected impacts of climate change on marine fish and fisheries, ICES J. Mar. Sci., 70, 1023–1037, 2013. a\n\nHülse, D., Arndt, S., Wilson, J. D., Munhoven, G., and Ridgwell, A.: Understanding the causes and consequences of past marine carbon cycling variability through models, Earth-Science Reviews, 171, 349–382, 2017. a\n\nJohn, E., Wilson, J., Pearson, P., and Ridgwell, A.: Temperature-dependent remineralization and carbon cycling in the warm Eocene oceans, Palaeogeogr. Palaeocl., 413, 158–166, https://doi.org/10.1016/j.palaeo.2014.05.019, 2014. a\n\nKraus, E. B. and Turner, J. S.: A one-dimensional model of the seasonal thermocline II. The general theory and its consequences, Tellus, 19, 98–106, 1967. a\n\nKwiatkowski, L., Yool, A., Allen, J. I., Anderson, T. R., Barciela, R., Buitenhuis, E. T., Butenschön, M., Enright, C., Halloran, P. R., Le Quéré, C., de Mora, L., Racault, M.-F., Sinha, B., Totterdell, I. J., and Cox, P. M.: iMarNet: an ocean biogeochemistry model intercomparison project within a common physical ocean modelling framework, Biogeosciences, 11, 7291–7304, https://doi.org/10.5194/bg-11-7291-2014, 2014. a\n\nLe Quéré, C.: Reply to Horizons article “Plankton functional type modelling: running before we can walk?” Anderson (2005): I. Abrupt changes in marine ecosystems?, J. Plankton Res., 28, 871–872, 2006. a\n\nLe Quéré, C., Harrison, S. P., Prentice, I. C., Buitenhuis, E. 
T., Aumont, O., Bopp, L., Claustre, H., Cotrim da Cunha, L., Geider, R., Giraud, X., Klaas, C., Kohfeld, K. E., Legendre, L., Manizza, M., Platt, T., Rivkin, R. B., Sathyendranath, S., Uitz, J., Watson, A. J., and Wolf-Gladrow, D.: Ecosystem dynamics based on plankton functional types for global ocean biogeochemistry models, Glob. Change Biol., 11, 2016–2040, 2005. a, b, c\n\nLenton, T. M., Marsh, R., Price, A. R., Lunt, D. J., Aksenov, Y., Annan, J. D., Cooper-Chadwick, T., Cox, S. J., Edwards, N. R., Goswami, S., Hargreaves, J. C., Harris, P. P., Jiao, Z., Livina, V. N., Payne, A. J., Rutt, I. C., Shepherd, J. G., Valdes, P. J., Williams, G., Williamson, M. S., and Yool, A.: Effects of atmospheric dynamics and ocean resolution on bi-stability of the thermohaline circulation examined using the Grid ENabled Integrated Earth system modelling (GENIE) framework, Clim. Dynam., 29, 591–613, 2007. a\n\nLitchman, E., Klausmeier, C. A., Schofield, O. M., and Falkowski, P. G.: The role of functional traits and trade-offs in structuring phytoplankton communities: scaling from cellular to ecosystem level, Ecol. Lett., 10, 1170–1181, 2007. a, b\n\nLosa, S. N., Vezina, A., Wright, D., Lu, Y., Thompson, K., and Dowd, M.: 3D ecosystem modelling in the North Atlantic: Relative impacts of physical and biological parameterisations, J. Marine Syst., 61, 230–245, 2006. a\n\nMaier-Reimer, E.: Geochemical cycles in an ocean general circulation model. Preindustrial tracer distributions, Global Biogeochem. Cy., 7, 645–677, 1993. a\n\nMarañón, E., Cermeño, P., López-Sandoval, D. C., Rodríguez-Ramos, T., Sobrino, C., Huete-Ortega, M., Blanco, J. M., and Rodríguez, J.: Unimodal size scaling of phytoplankton growth and the size dependence of nutrient uptake and use, Ecol. Lett., 16, 371–379, 2013. a\n\nMarchal, O., Stocker, T. F., and Joos, F.: A latitude-depth, circulation-biogeochemical ocean model for paleoclimate studies. Development and sensitivities, Tellus B, 50, 290–316, 1998. 
a\n\nMarsh, R., Müller, S. A., Yool, A., and Edwards, N. R.: Incorporation of the C-GOLDSTEIN efficient climate model into the GENIE framework: “eb_go_gs” configurations of GENIE, Geosci. Model Dev., 4, 957–992, https://doi.org/10.5194/gmd-4-957-2011, 2011. a, b, c, d\n\nMartin, J. H., Knauer, G. A., Karl, D. M., and Broenkow, W. W.: VERTEX: carbon cycling in the northeast Pacific, Deep-Sea Res., 34, 267–285, 1987. a\n\nMeyer, K. M., Kump, L. R., and Ridgwell, A.: Biogeochemical controls on photic-zone euxinia during the end-Permian mass extinction, Geology, 36, 747–750, 2008. a\n\nMeyer, K. M., Ridgwell, A., and Payne, J. L.: The influence of the biological pump on ocean chemistry: implications for long-term trends in marine redox chemistry, the global carbon cycle, and marine animal ecosystems, Geobiology, 14, 207–219, https://doi.org/10.1111/gbi.12176, 2016. a\n\nMonteiro, F. M., Follows, M. J., and Dutkiewicz, S.: Distribution of diverse nitrogen fixers in the global ocean, Global Biogeochem. Cy., 24, GB3017, https://doi.org/10.1029/2009GB003731, 2010. a\n\nMonteiro, F. M., Pancost, R. D., Ridgwell, A., and Donnadieu, Y.: Nutrients as the dominant control on the spread of anoxia and euxinia across the Cenomanian-Turonian oceanic anoxic event (OAE2): Model-data comparison, Paleoceanography, 27, PA4209, https://doi.org/10.1029/2012PA002351, 2012. a, b, c\n\nMonteiro, F. M., Bach, L. T., Brownlee, C., Bown, P., Rickaby, R. E. M., Tyrrell, T., Beaufort, L., Dutkiewicz, S., Gibbs, S., Gutowska, M. A., Lee, R., Poulton, A. J., Riebesell, U., Young, J., and Ridgwell, A.: Calcification in marine phytoplankton: Physiological costs and ecological benefits, Sci. Adv., 2, e1501822, https://doi.org/10.1126/sciadv.1501822, 2016. a\n\nMoore, C. M., Mills, M. M., Arrigo, K. R., Berman-Frank, I., Bopp, L., Boyd, P. W., Galbraith, E. D., Geider, R. J., Guieu, C., Jaccard, S. L., Jickells, T. D., Roche, J. L., Lenton, T. M., Mahowald, N. M., Marañón, E., Marinov, I., Moore, J. 
K., Nakatsuka, T., Oschlies, A., Saito, M. A., Thingstad, T. F., Tsuda, A., and Ulloa, O.: Processes and patterns of oceanic nutrient limitation, Nat. Geosci., 6, 701–710, 2013. a\n\nMoore, J. K., Doney, S. C., Kleypas, J. A., Glover, D. M., and Fung, I. Y.: An intermediate complexity marine ecosystem model for the global domain, Deep-Sea Res. Pt. II, 49, 403–462, 2002. a, b\n\nNajjar, R. G., Sarmiento, J. L., and Toggweiler, J. R.: Downward transport and fate of organic matter in the ocean: Simulations with a general circulation model, Global Biogeochem. Cy., 6, 45–76, 1992. a, b\n\nNorris, R., Kirtland Turner, S., Hull, P., and Ridgwell, A.: Marine ecosystem responses to Cenozoic global change, Science, 341, 492–498, https://doi.org/10.1126/science.1240543, 2013. a\n\nOlsen, A., Key, R. M., van Heuven, S., Lauvset, S. K., Velo, A., Lin, X., Schirnick, C., Kozyr, A., Tanhua, T., Hoppema, M., Jutterström, S., Steinfeldt, R., Jeansson, E., Ishii, M., Pérez, F. F., and Suzuki, T.: The Global Ocean Data Analysis Project version 2 (GLODAPv2) – an internally consistent data product for the world ocean, Earth Syst. Sci. Data, 8, 297–323, https://doi.org/10.5194/essd-8-297-2016, 2016. a\n\nOschlies, A.: Model-derived estimates of new production: New results point to lower values, Deep-Sea Res. Pt. II, 48, 2173–2197, 2001. a\n\nParekh, P., Follows, M. J., Dutkiewicz, S., and Ito, T.: Physical and biological regulation of the soft tissue carbon pump, Paleoceanography, 21, PA3001, https://doi.org/10.1029/2005PA001258, 2006. a\n\nRaven, J. A.: Why are there no picoplanktonic O2 evolvers with volumes less than 10⁻¹⁹ m³?, J. Plankton Res., 16, 565–580, 1994. a\n\nRedfield, A. C.: On the proportions of organic derivatives in sea water and their relation to the composition of plankton, James Johnstone Memorial Volume, Liverpool, 176–192, 1934. a\n\nRidgwell, A. and Death, R.: Iron limitation in an efficient model of global carbon cycling and climate, in preparation, 2018. 
a, b, c, d, e, f\n\nRidgwell, A. and ophiocordyceps: derpycode/cgenie.muffin: Ward et al. (2018) (GMD) (Version 0.9.0), Zenodo, https://doi.org/10.5281/zenodo.1312518, 2018.\n\nRidgwell, A. and Reinhard, C.: derpycode/cgenie.muffin: Ward et al. (2018) (GMD) [revised] (Version 0.9.1), Zenodo, https://doi.org/10.5281/zenodo.1404210, 2018.\n\nRidgwell, A. and Schmidt, D. N.: Past constraints on the vulnerability of marine calcifiers to massive carbon dioxide release, Nat. Geosci., 3, 196–200, https://doi.org/10.1038/ngeo755, 2010. a\n\nRidgwell, A., Hargreaves, J. C., Edwards, N. R., Annan, J. D., Lenton, T. M., Marsh, R., Yool, A., and Watson, A.: Marine geochemical data assimilation in an efficient Earth System Model of global biogeochemical cycling, Biogeosciences, 4, 87–104, https://doi.org/10.5194/bg-4-87-2007, 2007a. a, b, c, d, e, f, g, h\n\nRidgwell, A., Zondervan, I., Hargreaves, J. C., Bijma, J., and Lenton, T. M.: Assessing the potential long-term increase of oceanic fossil fuel CO2 uptake due to CO2-calcification feedback, Biogeosciences, 4, 481–492, https://doi.org/10.5194/bg-4-481-2007, 2007. a\n\nRidgwell, A., Peterson, C., Ward, B., and Jones, R.: derpycode/muffindoc: Ward et al. (2018) (GMD) manual release (b) (Version 1.9.1b), Zenodo, https://doi.org/10.5281/zenodo.1407658, 2018.\n\nSchartau, M. and Oschlies, A.: Simultaneous data-based optimization of a 1D-ecosystem model at three locations in the North Atlantic: Part I – Method and parameter estimates, J. Marine Res., 61, 765–793, 2003a. a\n\nSchartau, M. and Oschlies, A.: Simultaneous data-based optimization of a 1D-ecosystem model at three locations in the North Atlantic: Part II – Standing stocks and nitrogen fluxes, J. Marine Res., 61, 795–821, 2003b. a\n\nShigesada, N. and Okubo, A.: Analysis of the self-shading effect on algal vertical distribution in natural waters, J. Math. Biol., 12, 311–326, 1981. a\n\nTagliabue, A., Mtshali, T., Aumont, O., Bowie, A. R., Klunder, M. B., Roychoudhury, A. 
N., and Swart, S.: A global compilation of dissolved iron measurements: focus on distributions and processes in the Southern Ocean, Biogeosciences, 9, 2333–2349, https://doi.org/10.5194/bg-9-2333-2012, 2012. a\n\nTagliabue, A., Aumont, O., DeAth, R., Dunne, J. P., Dutkiewicz, S., Galbraith, E., Misumi, K., Moore, J. K., Ridgwell, A., Sherman, E., Stock, C., Vichi, M., Völker, C., and Yool, A.: How well do global ocean biogeochemistry models simulate dissolved iron distributions?, Global Biogeochem. Cy., 30, 149–174, https://doi.org/10.1002/2015GB005289, 2016. a, b, c\n\nTang, E. P. Y.: The allometry of algal growth rates, J. Plankton Res., 17, 1325–1335, 1995. a\n\nTyrrell, T.: The relative influences of nitrogen and phosphorus on oceanic primary production, Nature, 400, 525–531, 1999. a\n\nWard, B. A. and Follows, M. J.: Marine mixotrophy increases trophic transfer efficiency, net community production and carbon export, P. Natl. Acad. Sci. USA, 113, 2958–2963, 2016. a, b, c, d, e, f, g\n\nWard, B. A., Friedrichs, M. A. M., Anderson, T. R., and Oschlies, A.: Parameter optimisation and the problem of underdetermination in marine biogeochemical models, J. Marine Syst., 81, 34–43, 2010. a\n\nWard, B. A., Dutkiewicz, S., Jahn, O., and Follows, M. J.: A size structured food-web model for the global ocean, Limnol. Oceanogr., 57, 1877–1891, 2012. a, b, c, d\n\nWard, B. A., Martin, A. P., Schartau, M., Follows, M. J., and Anderson, T. R.: When is a biogeochemical model too complex? Objective model reduction for North Atlantic time-series sites, Prog. Oceanogr., 116, 49–65, 2013. a\n\nWatson, A. J., Bakker, D. C. E., Ridgwell, A. J., Boyd, P. W., and Law, C. S.: Effect of iron supply on Southern Ocean CO2 uptake and implications for glacial atmospheric CO2, Nature, 407, 730–733, 2000. a, b\n\nWestberry, T., Behrenfeld, M. J., Siegel, D. A., and Boss, E.: Carbon-based primary productivity modeling with vertically resolved photoacclimation, Global Biogeochem. 
Cy., 22, GB2024, https://doi.org/10.1029/2007GB003078, 2008. a\n\nWroblewski, J. S., Sarmiento, J. L., and Flierl, G. R.: An ocean basin scale model of plankton dynamics in the North Atlantic. 1. Solutions for the climatological oceanographic conditions in May, Global Biogeochem. Cy., 2, 199–218, 1988. a\n\nYool, A., Popova, E. E., and Anderson, T. R.: MEDUSA-2.0: an intermediate complexity biogeochemical model of the marine carbon cycle for climate change and ocean acidification studies, Geosci. Model Dev., 6, 1767–1811, https://doi.org/10.5194/gmd-6-1767-2013, 2013. a, b\n\nZeebe, R. E. and Wolf-Gladrow, D.: CO2 in seawater, Elsevier Science, 2001. a"
]
| [
null,
"https://gmd.copernicus.org/articles/11/4241/2018/gmd-11-4241-2018-avatar-thumb150.png",
null,
"https://gmd.copernicus.org/articles/11/4241/2018/gmd-11-4241-2018-f01-thumb.png",
null,
"https://gmd.copernicus.org/articles/11/4241/2018/gmd-11-4241-2018-t03-thumb.png",
null,
"https://gmd.copernicus.org/articles/11/4241/2018/gmd-11-4241-2018-f02-thumb.png",
null,
"https://gmd.copernicus.org/articles/11/4241/2018/gmd-11-4241-2018-f03-thumb.png",
null,
"https://gmd.copernicus.org/articles/11/4241/2018/gmd-11-4241-2018-f05-thumb.png",
null,
"https://gmd.copernicus.org/articles/11/4241/2018/gmd-11-4241-2018-f06-thumb.png",
null,
"https://gmd.copernicus.org/articles/11/4241/2018/gmd-11-4241-2018-f07-thumb.png",
null,
"https://gmd.copernicus.org/articles/11/4241/2018/gmd-11-4241-2018-f08-thumb.png",
null,
"https://gmd.copernicus.org/articles/11/4241/2018/gmd-11-4241-2018-f10-thumb.png",
null,
"https://gmd.copernicus.org/articles/11/4241/2018/gmd-11-4241-2018-f11-thumb.png",
null,
"https://gmd.copernicus.org/articles/11/4241/2018/gmd-11-4241-2018-f12-thumb.png",
null,
"https://gmd.copernicus.org/articles/11/4241/2018/gmd-11-4241-2018-f13-thumb.png",
null,
"https://gmd.copernicus.org/articles/11/4241/2018/gmd-11-4241-2018-f14-thumb.png",
null,
"https://gmd.copernicus.org/articles/11/4241/2018/gmd-11-4241-2018-f15-thumb.png",
null,
"https://gmd.copernicus.org/articles/11/4241/2018/gmd-11-4241-2018-f16-thumb.png",
null,
"https://gmd.copernicus.org/articles/11/4241/2018/gmd-11-4241-2018-f17-thumb.png",
null,
"https://gmd.copernicus.org/articles/11/4241/2018/gmd-11-4241-2018-f18-thumb.png",
null,
"https://gmd.copernicus.org/articles/11/4241/2018/gmd-11-4241-2018-f19-thumb.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8480488,"math_prob":0.948007,"size":84388,"snap":"2021-04-2021-17","text_gpt3_token_len":21259,"char_repetition_ratio":0.14110495,"word_repetition_ratio":0.019248826,"special_character_ratio":0.24093473,"punctuation_ratio":0.22232504,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9638034,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38],"im_url_duplicate_count":[null,null,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-28T02:12:16Z\",\"WARC-Record-ID\":\"<urn:uuid:f2f5fc43-19b9-4922-832f-38cb48f047ae>\",\"Content-Length\":\"438695\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ec1f6c10-151f-43f5-ba04-3ed9a9e8e19c>\",\"WARC-Concurrent-To\":\"<urn:uuid:b28d0324-5278-48d2-8b31-c7e435317e0e>\",\"WARC-IP-Address\":\"81.3.21.103\",\"WARC-Target-URI\":\"https://gmd.copernicus.org/articles/11/4241/2018/gmd-11-4241-2018.html\",\"WARC-Payload-Digest\":\"sha1:Z2ERGVE6DYO22GGXB3OF6R3KAK3DALGL\",\"WARC-Block-Digest\":\"sha1:MRI2R6E56JXBNDW4BBLCGPRQCGLP4NXX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610704835583.91_warc_CC-MAIN-20210128005448-20210128035448-00201.warc.gz\"}"} |
https://community.smartsheet.com/discussion/98973/need-help-adding-less-than-or-equal-value-to-a-current-formula | [
"# Need help - adding less than or equal value to a current formula\n\nHello,\n\nCurrently I'm struggling to add <= value to my current formula code, my idea is to count for Order Numbers (Unique values) and Lines of Order Numbers of certain times.\n\nsince filter doesn't work in this code, I'd like to add a <= or >= value formula for Ship_Time, so my code filters can automatically generate statistics for certain times I need.\n\nFor Orders I am using this formula:\n\n=COUNT(DISTINCT(COLLECT({New Sheet Range 2}, {New Sheet Range 1}, =\"SZE\")))\n\nhere I want to add for example \">=09:00:00\"\n\nFor Lines formula:\n\n=COUNTIF({New Sheet Range 1}, [Delivered By]@row)\n\nlooking for suggestions and help!\n\nThank you.\n\nTags:\n\n• Time is currently stored as a text value in Smartsheet and cannot be compared as less than or greater than. You will need to use a helper column to convert it into a numerical value and then reference that column.\n\nThere should be a solution somewhere in this thread. It is currently up to 13 pages long, but I am fairly certain there should be something in there to help get you started.\n\n• Time is currently stored as a text value in Smartsheet and cannot be compared as less than or greater than. You will need to use a helper column to convert it into a numerical value and then reference that column.\n\nThere should be a solution somewhere in this thread. It is currently up to 13 pages long, but I am fairly certain there should be something in there to help get you started.\n\n• To add to Paul's answer - once you have your numerical values set for time, you can simply add the remote column range for your time column and the \">=\" criteria to the COLLECT function. 
COLLECT allows you to specify multiple pairs of criteria range and criteria.\n\nSyntax: COLLECT( range, criterion_range1, criterion1, criterion_range2, criterion2)\n\n=COUNT(DISTINCT(COLLECT({New Sheet Range 2}, {New Sheet Range 1}, =\"SZE\", {New Sheet Numerical Time Range}, >= 9)))\n\nRegards,\n\nJeff Reisman, IT Business Analyst & Project Coordinator, Mitsubishi Electric Trane US\n\nIf my answer helped solve your issue, please mark it as accepted so that other users can find it later. Thanks!"
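The helper-column conversion that Paul describes — turning an "HH:MM:SS" text value into a number so `>=`/`<=` comparisons work — can be sketched outside Smartsheet too. Here is the same idea in Python; the function name and sample values are illustrative, not Smartsheet's own:

```python
def time_text_to_hours(t):
    """Convert an 'HH:MM:SS' text value (how Smartsheet stores time)
    into decimal hours, so >= / <= comparisons become numeric."""
    h, m, s = (int(part) for part in t.split(":"))
    return h + m / 60 + s / 3600

# Keep rows shipped at or after 09:00:00, mirroring the ">= 9" criterion
# used in the COLLECT example above.
ship_times = ["08:45:00", "09:00:00", "10:30:00"]
on_or_after_nine = [t for t in ship_times if time_text_to_hours(t) >= 9]
```

In the sheet itself this logic lives in the helper column's formula; the extra COLLECT criterion then compares against that numeric value.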
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.84567106,"math_prob":0.706029,"size":2314,"snap":"2023-40-2023-50","text_gpt3_token_len":525,"char_repetition_ratio":0.10562771,"word_repetition_ratio":0.3814433,"special_character_ratio":0.22817631,"punctuation_ratio":0.10526316,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9609528,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-11-30T21:51:03Z\",\"WARC-Record-ID\":\"<urn:uuid:20c998a7-f816-4a6d-b5cd-6339d534792b>\",\"Content-Length\":\"206889\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7a742f55-fd32-48df-b5c6-80431a3e1f15>\",\"WARC-Concurrent-To\":\"<urn:uuid:730a7072-1571-469e-abf3-aa0c7bc50d60>\",\"WARC-IP-Address\":\"162.159.138.78\",\"WARC-Target-URI\":\"https://community.smartsheet.com/discussion/98973/need-help-adding-less-than-or-equal-value-to-a-current-formula\",\"WARC-Payload-Digest\":\"sha1:PKVR735OPQQQZ2OXFDQEMGTCRZGH6KDW\",\"WARC-Block-Digest\":\"sha1:GQWBGU5LEL3LAL5OELZLPHCV4MCNG4ZF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100232.63_warc_CC-MAIN-20231130193829-20231130223829-00655.warc.gz\"}"} |
https://solutions4u-asia.com/PDT/SYSTAT/SigmaPlot/SPlot-Smoothing.html | [
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"SMOOTHING ROUTINES\n\n# SigmaPlot 2D and 3D Graph Smoothing\n\nSigmaPlot provides seven different data smoothing algorithms that should satisfy most smoothing needs – negative exponential, loess, running average, running median, bisquare, inverse square and inverse distance. Each smoother contains options that make them very flexible. For example, unequally spaced data that occurs in clumps is better analyzed using the nearest neighbor rather than a fixed bandwidth method. Also, outlier rejection is available in some smoothers.\n\n#### Two Dimensional Smoothing\n\nSmoothing is used to elicit trends from noisy data. The three examples in Tukey´s book Exploratory Data Analysis (Addison-Wesley, 1977) show the need for smoothing beautifully. The trends in the U.S. gold production from 1872 to 1956, Figure 1A, are fairly clear.",
null,
"",
null,
"",
null,
"The peaks and valleys in the U.S. wheat production, Figure 1B, are less clear. I challenge you to visually find the trends in the annual New York City precipitation data shown in Figure 1C. The loess algorithm will be used to smooth these data sets. “loess” means locally weighted regression. Each point along the smooth curve is obtained from a regression of data points close to the curve point with the closest points more heavily weighted. The amount of smoothing, which affects the number of points in the regression is determined by the user. A weighted regression is performed for each point along the smooth curve.\n\nFigure 1. Data with trends that are increasingly more difficult to visualize\n\nloess smoothed curves for the three examples in Figure 1 are shown in Figure 2. The smoothed curves in Figure 2A and 2B make the trends in the gold and wheat data very clear. It is still difficult to visualize in the raw data the precipitation trend shown in Figure 2C. To confirm the results of the loess smoothed curve the histogram of average rainfall in ten year intervals was computed and superimposed on the smooth curve. There is a good comparison between the histogram and the loess smooth.\n\nThe loess smoothing parameters were varied to achieve the best visualization. A polynomial degree of one was used in all cases. A 0.1 sampling proportion was used in Figure 2A and B and 0.3 in Figure 2C. Since the data was unequally spaced along the x axis the nearest neighbor bandwidth method was used. The default number of intervals (100) for generation of the smooth curve was found to be the best. This generates a line using straight lines between curve points. Sometimes this leads to sharp corners in the smooth so the spline interpolation line type (Smoothed (spline)) was used.",
null,
"",
null,
"",
null,
"Figure 2. Smoothed curves for data in Figure 1. A ten year average rainfall histogram is also shown in C.\n\nSeveral of the smoothing methods, including loess, are based on local polynomial regression and the polynomial order is selectable. Increasing the order tends to include more high frequency components in the smooth. The effect of increasing the order from 1 (local linear regressions) to 2 (local quadratic regressions) is shown in Figure 3. The effect is to increase peak height magnitude and introduce additional high frequency components (wiggles) in B. A subsequent increase of the sampling proportion in C results in a smooth very much like the original for order 1 in A.",
null,
"",
null,
"",
null,
"Figure 3. Effect of increasing the regression polynomial order. The order is 1 and sampling proportion is 0.1 in A. The order is increased to 2 in B and then the sampling proportion is increased to 0.2 in C.\n\n#### Three Dimensional Smoothing\n\nVisualizing spatial relationships in a three dimensional scatter plot can be very difficult. The strongest three dimensional cue is provided by an animated rotation of the data. Since this is not possible in paper publications we must resort to using drop lines, enclosing the graph with additional axes, etc. Figure 4 shows that a smooth surface also helps. This data describes the reaction characteristics on an isomer of hexane. The smooth surface B clearly shows the trends with respect to temperature and reaction rate whereas visualizing this in the scatterplot A is difficult.",
null,
"",
null,
"Figure 4. The data trend in A is easily visualized with a loess smoothed surface, B.\n\nThis data is relatively sparse so a large sampling proportion 0.6 was required to avoid oscillations and spikes in the loess surface. A polynomial degree of 1 and the nearest neighbor bandwidth method were used. The Preview feature allows a quick comparison of smoothing methods on a given data set. For this data essentially equivalent smooth surfaces were obtained with the negative exponential and bisquare methods.\n\nThe bandwidth method option is also very useful. The nearest neighbor method works well for unequally spaced data. The data in Figure 3 is unequally spaced in both X and Y. Compare the smoothing results using the nearest neighbor and fixed methods shown in Figure 5. The result for the fixed method is about the best that could be obtained by varying the sampling proportion with a value of 0.8 shown.",
null,
"",
null,
"Figure 5. Comparison of bandwidth methods for unequally spaced data. Nearest neighbor on the left and fixed on the right.\n\nSmoothers is a generic name for a variety of techniques that can be used to either smooth a data set by removing undesired high-frequency components (locations of rapid variation, such as noise contamination), or to resample dependent variable values to other independent variable locations using the values of the data at nearby points. The smoothing methods provided in SigmaPlot operate by weighting the data in a neighborhood of the smoothing location and applying linear or non-linear methods to combine the weighted values to produce a smoothed value. These non-parametric smoothing techniques provide a good complement to the parameterized curve/surface fitting facility (Regression Wizard) in SigmaPlot. For data subjected to measurement errors, noise, etc., either method can be used to predict behavior or to estimate true values.\n\nThe kernel used in the smoothing computation and the smoothing method are given in the following table.\n\n Algorithm Weighting Kernel Method to Compute Smoothed Value Negative Exponential Gaussian Polynomial or Loess Fitting Loess Tricube Polynomial or Loess Fitting Running Average Uniform Mean Running Median Uniform Median Bisquare Biweight Polynomial Fitting Inverse Square Cauchy Mean Inverse Distance Inverse Distance Mean\n\nThe equations used for each kernel are:\n\n Kernel Kernel Formula (y=0 for 2D Smoothers) Uniform 1 Biweight (1 - x2 – y2)2 Tricube (1 - sqrt(x2+y2)3)3 Gaussian exp(-x2-y2) Cauchy 1/(1+x2+y2) Inverse Distance (3D only) 1/sqrt(x2+y2)",
null,
"MY Office : 72-3C, JALAN PUTERI 2/4, BANDAR PUTERI, 47100 PUCHONG, SELANGOR, Malaysia. Tel:+603-8063 9300 fax:+603-8063 9400 SG Office : 259, Onan Road, Singapore 424651. Tel: +65-6468 3325 Fax: +65-6764 5646"
]
| [
null,
"https://solutions4u-asia.com/PDT/SYSTAT/SigmaPlot/SigmaPlot Files/Whiteline.JPG",
null,
"https://solutions4u-asia.com/PDT/SYSTAT/SigmaPlot/7B12/spheader_smoothing.jpg",
null,
"https://solutions4u-asia.com/Main_files/leftline.gif",
null,
"https://solutions4u-asia.com/PDT/SYSTAT/SigmaPlot/SP_Files0806/BArrow.JPG",
null,
"https://solutions4u-asia.com/Main_files/leftline.gif",
null,
"https://solutions4u-asia.com/PDT/SYSTAT/SigmaPlot/SP_Files0806/BArrow.JPG",
null,
"https://solutions4u-asia.com/Main_files/leftline.gif",
null,
"https://solutions4u-asia.com/PDT/SYSTAT/SigmaPlot/SP_Files0806/BArrow.JPG",
null,
"https://solutions4u-asia.com/Main_files/leftline.gif",
null,
"https://solutions4u-asia.com/PDT/SYSTAT/SigmaPlot/SP_Files0806/BArrow.JPG",
null,
"https://solutions4u-asia.com/Main_files/leftline.gif",
null,
"https://solutions4u-asia.com/PDT/SYSTAT/SigmaPlot/SP_Files0806/BArrow.JPG",
null,
"https://solutions4u-asia.com/Main_files/leftline.gif",
null,
"https://solutions4u-asia.com/PDT/SYSTAT/SigmaPlot/SP_Files0806/BArrow.JPG",
null,
"https://solutions4u-asia.com/Main_files/leftline.gif",
null,
"https://solutions4u-asia.com/PDT/SYSTAT/SigmaPlot/SP_Files0806/BArrow.JPG",
null,
"https://solutions4u-asia.com/Main_files/leftline.gif",
null,
"https://solutions4u-asia.com/PDT/SYSTAT/SigmaPlot/SP_Files0806/BArrow.JPG",
null,
"https://solutions4u-asia.com/Main_files/leftline.gif",
null,
"https://solutions4u-asia.com/PDT/SYSTAT/SigmaPlot/SP_Files0806/BArrow.JPG",
null,
"https://solutions4u-asia.com/Main_files/leftline.gif",
null,
"https://solutions4u-asia.com/PDT/SYSTAT/SigmaPlot/SP_Files0806/BArrow.JPG",
null,
"https://solutions4u-asia.com/Main_files/leftline.gif",
null,
"https://solutions4u-asia.com/PDT/SYSTAT/SigmaPlot/SP_Files0806/BArrow.JPG",
null,
"https://solutions4u-asia.com/Main_files/leftline.gif",
null,
"https://solutions4u-asia.com/PDT/SYSTAT/SigmaPlot/SP_Files0806/BArrow.JPG",
null,
"https://solutions4u-asia.com/Main_files/leftline.gif",
null,
"https://solutions4u-asia.com/PDT/SYSTAT/SigmaPlot/SP_Files0806/BArrow.JPG",
null,
"https://solutions4u-asia.com/Main_files/leftline.gif",
null,
"https://solutions4u-asia.com/PDT/SYSTAT/SigmaPlot/SP_Files0806/BArrow.JPG",
null,
"https://solutions4u-asia.com/Main_files/leftline.gif",
null,
"https://solutions4u-asia.com/PDT/SYSTAT/SigmaPlot/SP_Files0806/image001[1].gif",
null,
"https://solutions4u-asia.com/PDT/SYSTAT/SigmaPlot/SP_Files0806/image002[1].gif",
null,
"https://solutions4u-asia.com/PDT/SYSTAT/SigmaPlot/SP_Files0806/image003[1].gif",
null,
"https://solutions4u-asia.com/PDT/SYSTAT/SigmaPlot/SP_Files0806/image004[1].gif",
null,
"https://solutions4u-asia.com/PDT/SYSTAT/SigmaPlot/SP_Files0806/image005[1].gif",
null,
"https://solutions4u-asia.com/PDT/SYSTAT/SigmaPlot/SP_Files0806/image006[1].gif",
null,
"https://solutions4u-asia.com/PDT/SYSTAT/SigmaPlot/SP_Files0806/image007[1].gif",
null,
"https://solutions4u-asia.com/PDT/SYSTAT/SigmaPlot/SP_Files0806/image008[1].gif",
null,
"https://solutions4u-asia.com/PDT/SYSTAT/SigmaPlot/SP_Files0806/image009[1].gif",
null,
"https://solutions4u-asia.com/PDT/SYSTAT/SigmaPlot/SP_Files0806/image010[1].gif",
null,
"https://solutions4u-asia.com/PDT/SYSTAT/SigmaPlot/SP_Files0806/image011[1].gif",
null,
"https://solutions4u-asia.com/PDT/SYSTAT/SigmaPlot/SP_Files0806/image012[1].gif",
null,
"https://solutions4u-asia.com/PDT/SYSTAT/SigmaPlot/SP_Files0806/image013[1].gif",
null,
"https://solutions4u-asia.com/Main_files/S4Ulogoss.JPG",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9032322,"math_prob":0.9696601,"size":6683,"snap":"2022-27-2022-33","text_gpt3_token_len":1447,"char_repetition_ratio":0.13400209,"word_repetition_ratio":0.0054694624,"special_character_ratio":0.20454885,"punctuation_ratio":0.08347386,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98616683,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90],"im_url_duplicate_count":[null,9,null,1,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-25T17:44:30Z\",\"WARC-Record-ID\":\"<urn:uuid:d6108056-d208-4c1c-ad1a-a2a686d920c1>\",\"Content-Length\":\"36699\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6be2c006-1e67-4bf5-81d4-278bd8280ede>\",\"WARC-Concurrent-To\":\"<urn:uuid:c4cc516b-a1b9-42b5-bd41-3c1353480b9c>\",\"WARC-IP-Address\":\"101.99.80.47\",\"WARC-Target-URI\":\"https://solutions4u-asia.com/PDT/SYSTAT/SigmaPlot/SPlot-Smoothing.html\",\"WARC-Payload-Digest\":\"sha1:VXQTQSKLAIIC2FBG643SQLVQ7OJKLL5C\",\"WARC-Block-Digest\":\"sha1:4XI6DKYCIIBJLZNUB6A72KSYA7MQLQ4L\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103036077.8_warc_CC-MAIN-20220625160220-20220625190220-00509.warc.gz\"}"} |
http://www.stackprinter.com/export?question=487258&format=HTML&service=stackoverflow&printer=false | [
"",
null,
"What is a plain English explanation of \"Big O\" notation?\n[+5243] Arec Barrwin\n[2009-01-28 11:10:32]\n[ algorithm complexity-theory computer-science big-o time-complexity ]\n[ https://stackoverflow.com/questions/487258/what-is-a-plain-english-explanation-of-big-o-notation ]\n\nI'd prefer as little formal definition as possible and simple mathematics.\n\n(67) Summary: The upper bound of the complexity of an algorithm. See also the similar question Big O, how do you calculate/approximate it? for a good explaination. - Kosi2801\n(7) The other answers are quite good, just one detail to understand it: O(log n) or similar means, that it depends on the \"length\" or \"size\" of the input, not on the value itself. This could be hard to understand, but is very important. For example, this happens when your algorithm is splitting things in two in each iteration. - Harald Schilly\n(1) If this is a duplicate of anything, it is: stackoverflow.com/questions/107165/big-o-for-eight-year-olds - mmcdole\n(17) There is a lecture dedicated to complexity of the algorithms in the Lecture 8 of the MIT \"Introduction to Computer Science and Programming\" course youtube.com/watch?v=ewd7Lf2dr5Q It is not completely plain English, but gives nice explanation with examples that are easily understandable. - ivanjovanovic\n(18) Big O is an estimate of the worst case performance of a function assuming the algorithm will perform the maximum number of iterations. - Paul Sweatte\n(1) I think you will find this: youtube.com/watch?v=6Ol2JbwoJp0 video helpful. - 2147483647\n(3) It allows us to compare the efficiency of our solution to other solutions. Simple time tests will not work, because of external variables (e.g. hardware and problem size (e.g. How many items we are trying to sort)). Big-O allows us to standardize the comparisons. - James Oravec\n(1) See this demonstration. - Mohamed Ennahdi El Idrissi\n(2) Actually big-O notation has nothing to do with algorithms and complexity. 
- Emanuele Paolini\n(1) If you really want to learn about Landau notation, I recommend you head over to Computer Science, starting with our reference questions. While we don't pretend to be able to explain a mathematical concept accurately and in \"plain English\", we also won't teach you falsehoods. (Hopefully.) - Raphael\nOne sentence explanation: \"One function does not grow faster than another one\". - TT_ stands with Russia\nThis post explains complexity using concrete example: mohalgorithmsorbit.blogspot.com/2021/01/… - Mohamed Ennahdi El Idrissi\n@HaraldSchilly \"depends on the \"length\" or \"size\" of the input, not on the value itself\"? Could you please explain that in more detail for me? - John\n[+6979] [2009-01-28 11:18:57] cletus [",
null,
"ACCEPTED]\n\nQuick note, my answer is almost certainly confusing Big Oh notation (which is an upper bound) with Big Theta notation \"Θ\" (which is a two-side bound). But in my experience, this is actually typical of discussions in non-academic settings. Apologies for any confusion caused.\n\nBigOh complexity can be visualized with this graph:",
null,
"The simplest definition I can give for Big Oh notation is this:\n\nBig Oh notation is a relative representation of the complexity of an algorithm.\n\nThere are some important and deliberately chosen words in that sentence:\n\n• relative: you can only compare apples to apples. You can't compare an algorithm that does arithmetic multiplication to an algorithm that sorts a list of integers. But a comparison of two algorithms to do arithmetic operations (one multiplication, one addition) will tell you something meaningful;\n• representation: BigOh (in its simplest form) reduces the comparison between algorithms to a single variable. That variable is chosen based on observations or assumptions. For example, sorting algorithms are typically compared based on comparison operations (comparing two nodes to determine their relative ordering). This assumes that comparison is expensive. But what if the comparison is cheap but swapping is expensive? It changes the comparison; and\n• complexity: if it takes me one second to sort 10,000 elements, how long will it take me to sort one million? Complexity in this instance is a relative measure to something else.\n\nThe best example of BigOh I can think of is doing arithmetic. Take two numbers (123456 and 789012). The basic arithmetic operations we learned in school were:\n\n• subtraction;\n• multiplication; and\n• division.\n\nEach of these is an operation or a problem. A method of solving these is called an algorithm.\n\nThe addition is the simplest. You line the numbers up (to the right) and add the digits in a column writing the last number of that addition in the result. The 'tens' part of that number is carried over to the next column.\n\nLet's assume that the addition of these numbers is the most expensive operation in this algorithm. It stands to reason that to add these two numbers together we have to add together 6 digits (and possibly carry a 7th). If we add two 100 digit numbers together we have to do 100 additions. 
If we add two 10,000 digit numbers we have to do 10,000 additions.\n\nSee the pattern? The complexity (being the number of operations) is directly proportional to the number of digits n in the larger number. We call this O(n) or linear complexity.\n\nSubtraction is similar (except you may need to borrow instead of carry).\n\nMultiplication is different. You line the numbers up, take the first digit in the bottom number and multiply it in turn against each digit in the top number and so on through each digit. So to multiply our two 6 digit numbers we must do 36 multiplications. We may need to do as many as 10 or 11 column adds to get the end result too.\n\nIf we have two 100-digit numbers we need to do 10,000 multiplications and 200 adds. For two one million digit numbers we need to do one trillion (1012) multiplications and two million adds.\n\nAs the algorithm scales with n-squared, this is O(n2) or quadratic complexity. This is a good time to introduce another important concept:\n\nWe only care about the most significant portion of complexity.\n\nThe astute may have realized that we could express the number of operations as: n2 + 2n. But as you saw from our example with two numbers of a million digits apiece, the second term (2n) becomes insignificant (accounting for 0.0002% of the total operations by that stage).\n\nOne can notice that we've assumed the worst case scenario here. While multiplying 6 digit numbers, if one of them has 4 digits and the other one has 6 digits, then we only have 24 multiplications. Still, we calculate the worst case scenario for that 'n', i.e when both are 6 digit numbers. Hence Big Oh notation is about the Worst-case scenario of an algorithm.\n\n# The Telephone Book\n\nThe next best example I can think of is the telephone book, normally called the White Pages or similar but it varies from country to country. 
But I'm talking about the one that lists people by surname and then initials or first name, possibly address, and then telephone numbers.

Now if you were instructing a computer to look up the phone number for "John Smith" in a telephone book that contains 1,000,000 names, what would you do? Ignoring the fact that you could guess how far in the S's started (let's assume you can't), what would you do?

A typical implementation might be to open up to the middle, take the 500,000th entry and compare it to "Smith". If it happens to be "Smith, John", we just got really lucky. Far more likely is that "John Smith" will be before or after that name. If it's after, we then divide the last half of the phone book in half and repeat. If it's before, then we divide the first half of the phone book in half and repeat. And so on.

This is called a binary search and is used every day in programming whether you realize it or not.

So if you want to find a name in a phone book of a million names you can actually find any name by doing this at most 20 times. In comparing search algorithms we decide that this comparison is the operation we count.

• For a phone book of 3 names it takes 2 comparisons (at most).
• For 7 it takes at most 3.
• For 15 it takes 4.
• For 1,000,000 it takes 20.

That is staggeringly good, isn't it?

In BigOh terms this is O(log n) or logarithmic complexity. Now the logarithm in question could be ln (base e), log₁₀, log₂ or some other base. It doesn't matter: it's still O(log n), just like O(2n²) and O(100n²) are still both O(n²).

It's worthwhile at this point to explain that BigOh can be used to determine three cases with an algorithm:

• Best Case: In the telephone book search, the best case is that we find the name in one comparison. This is O(1) or constant complexity;
• Expected Case: As discussed above this is O(log n); and
• Worst Case: This is also O(log n).

Normally we don't care about the best case.
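The phone-book procedure above is a standard binary search. Here is a minimal sketch (the `book` of integers stands in for sorted names, and the comparison counter is an illustrative addition, not part of the original answer):

```python
def binary_search(sorted_names, target):
    """Return (index, comparisons) if target is found, else (None, comparisons)."""
    lo, hi = 0, len(sorted_names) - 1
    comparisons = 0
    while lo <= hi:
        mid = (lo + hi) // 2          # open the book to the middle
        comparisons += 1
        if sorted_names[mid] == target:
            return mid, comparisons
        elif sorted_names[mid] < target:
            lo = mid + 1              # target is in the upper half
        else:
            hi = mid - 1              # target is in the lower half
    return None, comparisons

# A "phone book" of a million sorted entries: found in at most ~20 comparisons.
book = list(range(1_000_000))
idx, steps = binary_search(book, 873_212)
```

Doubling the size of `book` adds only one more comparison per lookup, which is exactly the O(log n) scaling described above.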
We're interested in the expected and worst case. Sometimes one or the other of these will be more important.

Back to the telephone book.

What if you have a phone number and want to find a name? The police have a reverse phone book but such look-ups are denied to the general public. Or are they? Technically you can reverse look-up a number in an ordinary phone book. How?

You start at the first name and compare the number. If it's a match, great; if not, you move on to the next. You have to do it this way because the phone book is unordered (by phone number, anyway).

So to find a name given the phone number (reverse lookup):

• Best Case: O(1);
• Expected Case: O(n) (for 500,000); and
• Worst Case: O(n) (for 1,000,000).

# The Traveling Salesman

This is quite a famous problem in computer science and deserves a mention. In this problem, you have N towns. Each of those towns is linked to 1 or more other towns by a road of a certain distance. The Traveling Salesman problem is to find the shortest tour that visits every town.

Sounds simple? Think again.

If you have 3 towns A, B, and C with roads between all pairs then you could go:

• A → B → C
• A → C → B
• B → C → A
• B → A → C
• C → A → B
• C → B → A

Well, actually there's less than that because some of these are equivalent (A → B → C and C → B → A are equivalent, for example, because they use the same roads, just in reverse).

In actuality, there are 3 possibilities.

• Take this to 4 towns and you have (iirc) 12 possibilities.
• With 5 it's 60.
• 6 becomes 360.

This is a function of a mathematical operation called a factorial. Basically:

• 5! = 5 × 4 × 3 × 2 × 1 = 120
• 6! = 6 × 5 × 4 × 3 × 2 × 1 = 720
• 7! = 7 × 6 × 5 × 4 × 3 × 2 × 1 = 5040
• 25! = 25 × 24 × … × 2 × 1 = 15,511,210,043,330,985,984,000,000
• 50! = 50 × 49 × … × 2 × 1 = 3.04140932 × 10⁶⁴

So the BigOh of the Traveling Salesman problem is O(n!)
or factorial or combinatorial complexity.

By the time you get to 200 towns there isn't enough time left in the universe to solve the problem with traditional computers.

# Polynomial Time

Another point I wanted to make quick mention of is that any algorithm that has a complexity of O(nᵃ) is said to have polynomial complexity or is solvable in polynomial time.

O(n), O(n²) etc. are all polynomial time. Some problems cannot be solved in polynomial time. Certain things are used in the world because of this. Public Key Cryptography is a prime example. It is computationally hard to find two prime factors of a very large number. If it wasn't, we couldn't use the public key systems we use.

Anyway, that's it for my (hopefully plain English) explanation of BigOh (revised).

http://en.wikipedia.org/wiki/Big_O_notation
https://en.wikipedia.org/wiki/Public-key_cryptography

(615) While the other answers focus on explaining the differences between O(1), O(n^2) et al.... yours is the one which details how algorithms can get classified into n^2, nlog(n) etc. +1 for a good answer that helped me understand Big O notation as well - Yew Long
I like the explanation. It is important to note that big O is about worst-case complexity, e.g. for sorting - the sequence which requires the most operations, for multiplication - probably the largest possible input numbers, etc. - Yuval F
It's not about "worst-case": it's used to define best-, worst- and general-case. - Filip Ekberg
(1) A couple of other points here are that the complexity can be either time or space, and that one can talk about the big-O of best, average, or worst case scenarios, or at least that is what I remember from school. - JB King
(1) ...NP-complete...: That's not correct - a problem that is solvable in n^a time is said to be in P, polynomial time. A problem that is NP-complete means that if this problem is solvable in P time, then every NP problem is solvable in P time.
NP-hard just means that it's harder than the hardest NP. - Paul Fisher
(27) one might want to add that big-O represents an upper bound (given by an algorithm), big-Omega gives a lower bound (usually given as a proof independent from a specific algorithm) and big-Theta means that an "optimal" algorithm reaching that lower bound is known. - mdm
(1) I think there is a mistake in the sentence "accounting for 0.00002% of the total operations by that stage". It should be 0.0002%, not 0.00002%. - Léo Léopold Hertz 준영
(1) Good writeup. Two suggestions: Mention that e.g. the Travelling Salesman can be approximated to within a proven factor of the minimal answer if e.g. it can be assumed that going directly from A to B is shorter than going A-C-B. Also mention that NP is polynomial (P) IFF the computer magically picks the correct possibility every time it has to make a choice. - Thorbjørn Ravn Andersen
(23) This is good if you're looking for the longest answer, but not for the answer that best explains Big-O in a simple manner. - kirk.burleson
(176) -1: This is blatantly wrong: "BigOh is relative representation of complexity of algorithm". No. BigOh is an asymptotic upper bound and exists quite well independent of computer science. O(n) is linear. No, you are confusing BigOh with theta. log n is O(n). 1 is O(n). The number of upvotes to this answer (and the comments), which makes the basic mistake of confusing Theta with BigOh, is quite embarrassing... - Aryabhatta
(2) @Paul Fisher: NP-hard doesn't mean "harder than the hardest NP-complete problem", it means "at least as hard as an NP-complete problem". There's a big difference! - configurator
(77) "By the time you get to 200 towns there isn't enough time left in the universe to solve the problem with traditional computers." When is the universe going to end? - Isaac
(1) "For two one-million-digit numbers we need to do one trillion (10¹²) multiplications and two million adds." Given that there are at most 10 unique numerals in each million digit number, memoization means that you need to do at most 20 * 1 million digit multiplications if you have O(n) space. - Mike Samuel
(10) My concern is similar to @jimifiki. Big-O is only useful when N is large. When N is small, the prefactor often matters a lot. A good example is with insertion sort. Insertion sort is O(N^2), but has very good cache-locality. This makes it faster than many O(N log N) algorithms when the list is small (< 10 elements). Similarly, for small N, lookup in a binary tree is often faster than a hash table. Good hash functions can chew through a good number of cycles, making the prefactor quite significant. - sfstewman
(1) A plot of a logarithmic function would have aided in understanding the O(log n) visually. - bad_keypoints
(6) Viewers, don't forget that the mistake in this answer is confusing Big-O, Omega, and Theta. Read this answer, appreciate it, then look up Theta (rough expected case) and Omega (rough lower bound); because Big-O is exclusively the rough upper bound. - Greg M. Krsak
(2) I guess he should answer Omega and Theta, so that all the comments above would be answered as well; would also recommend changing the question to the difference between BigOh and Omega and Theta. - Prateek
(5) @Isaac It doesn't really matter: `200! nanoseconds ~= 1.8×10^348 × universe age` wolframalpha.com/input/?i=200%21+nanoseconds - Arelius
(2) @Aryabhatta What do you mean log(n) is O(n)? log(n) is clearly O(log(n)), no? (see en.wikipedia.org/wiki/Big_oh_notation) - Moberg
(2) One little thing to add here. There are two types of complexity which are not really covered in this answer. Both are defined by BigO notation. This answer mostly covers how long it takes, which is known as time complexity. There can also be space complexity, which refers to how much memory the algorithm will require while running. Space complexity can be an important factor to analyze as well. - Travis J
(7) @Moberg the fact that f(n) is O(g(n)) does not exclude the possibility that f(n) is O(h(n)) for an h(n) that is different from g(n). Trivially, n is O(n) and also n is O(2n). In fact, O(log(n)) is a subset of O(n). So in the example of log(n), log(n) is both in O(log(n)) and in O(n). You are confusing Theta notation and big O notation. - Jacob Akkerboom
(3) @JacobAkkerboom Ah, yes that's also true. I'm reading the post in the correct way now. It wasn't all that clear. Although, I'm not confusing it with Theta, because I have never heard of Theta before. But apparently it is average instead of an upper bound. - Moberg
(2) `Traditional computers can solve polynomial-time problems`. Can they or can't they? - cheesemacfly
(1) The use of the word complexity in this explanation is worse than misleading. Big O notation is used to describe the execution time of an algorithm and has nothing to do with its complexity -- it may in fact be negatively correlated in cases where different algorithms for solving the same problem are discussed. (Bubble sort is simpler than virtually any other sort, but takes more time.) - podperson
"Traditional computers can't solve polynomial-time problems" => This sentence is obviously wrong, since O(n) is also polynomial time. Polynomial time algorithms are faster than solutions to NP and NP-hard problems. There are many n^2, n^3 and probably even higher-power polynomial time algorithms deployed out there today, because they are the best available solution and they get the job done. - Spundun
Now the logarithm in question could be ln (base e), log₁₀, log₂ or some other base. It doesn't matter: it's still O(log n), just like O(2n²) and O(100n²) are still both O(n²). No. This is entirely wrong. Having a constant in front of the logarithm would give it the form of `log(n^2)` or `log(n^3)`. The logarithm in question here is exactly log base 2. Changing from log base 2 to log base 3 is the equivalent of going from n^2 to n^3. This needs to be changed. - scohe001
(3) @Josh `log(n^c)=c*log(n)` and `O(c*log(n))=O(log(n))` when c is constant. So, `O(log(n^2))=O(log(n^3))=O(log(n))`. As a result, changing log base will not affect the big O notation and the statement you quote is correct. - hk6279
@hk6279 and I agree with you. But the answer isn't talking about log(n^c), it's talking about changing the base. Reread my comment. - scohe001
@Josh I don't get your point even after rereading your comment several times. I know changing the log base will affect the representation format of the logarithm itself, but the Big O notation will not be affected, as stated in my comment. Since the original statement only claims the Big O notation will not be changed, I cannot find the relationship between your comment and the original statement. - hk6279
"In comparing search algorithms we decide that this comparison is our 'n'." Slight wording tweak needed. I think you mean that the comparison is "the operation we care about", not "our 'n'". 'n' is the number of items, in this case, the number of names in the phone book. Overall, great answer. - Charlie Flowers
(1) @Moberg Big Theta isn't exactly average, either. While Big O defines the upper bound, Big Theta defines both the upper and the lower bounds. A function f(n) is BigTheta(g(n)) if f(n) can be bounded on both sides (upper and lower) by constant multiples of g(n) - scottysseus
Let's calculate this: "By the time you get to 200 towns there isn't enough time left in the universe to solve the problem with traditional computers." If you were using a 2.8 GHz computer it would take 26.3-ish towns to span the current age of the universe and 45.2-ish towns to span till a time where the last black hole has evaporated (10^40-ish years).
So 200 towns is a little bit overkill :/ - HyunMi
"Public key cryptography is a prime example." Love what you did there ;) - gabssnake
@Top-Master Why did you replace "Big-O" with "BigOh"? And what is "BigOh"? - Andreas
@Andreas In mathematical discussions, usually the dash "-" is mistaken for minus; also, don't add a space like "Big Oh" unless it is followed by something like "... notation" (again, to prevent misunderstandings) - Top-Master
@Top-Master Even if that would be the case... There's no "BigOh", it's "Big O" - Andreas
@Andreas we have "Big Oh, Big Omega, Big Theta" names and "OΩΘ" letters, just like we have "Ay Bee Cee" and "abc"; also, I could link you other posts which use "BigOh" (instead of "Big-O") but let's agree to not agree about this - Top-Master
If a letter stands for something, then that something spelled out is more human-readable (if we spell out "Big" why not "Oh"), see 2015 discontinued wiki for more - Top-Master
1
[+784] [2011-07-08 04:46:50] Ray Hidayat

It shows how an algorithm scales based on input size.

O(n²): known as Quadratic complexity

• 1 item: 1 operation
• 10 items: 100 operations
• 100 items: 10,000 operations

Notice that the number of items increases by a factor of 10, but the time increases by a factor of 10². Basically, n = 10 and so O(n²) gives us the scaling factor n², which is 10².

O(n): known as Linear complexity

• 1 item: 1 operation
• 10 items: 10 operations
• 100 items: 100 operations

This time the number of items increases by a factor of 10, and so does the time. n = 10 and so O(n)'s scaling factor is 10.

O(1): known as Constant complexity

• 1 item: 1 operation
• 10 items: 1 operation
• 100 items: 1 operation

The number of items is still increasing by a factor of 10, but the scaling factor of O(1) is always 1.

O(log n): known as Logarithmic complexity

• 1 item: 1 operation
• 10 items: 2 operations
• 100 items: 3 operations
• 1000 items: 4 operations
• 10,000 items: 5 operations

The number of computations is only increased by a log of the input value. So in this case, assuming each computation takes 1 second, the log of the input `n` is the time required, hence `log n`.

That's the gist of it. They reduce the maths down so it might not be exactly n² or whatever they say it is, but that'll be the dominating factor in the scaling.

(5) What does this definition mean exactly? (The number of items is still increasing by a factor of 10, but the scaling factor of O(1) is always 1.) - Zach Smith
(114) Not seconds, operations. Also, you missed out on factorial and logarithmic time. - Chris Charabaruk
(7) This doesn't explain very well that O(n^2) could be describing an algorithm that runs in precisely .01*n^2 + 999999*n + 999999. It's important to know that algorithms are compared using this scale, and that the comparison works when n is 'sufficiently large'. Python's timsort actually uses insertion sort (worst/average case O(n^2)) for small arrays due to the fact that it has a small overhead. - Casey Kuball
(6) This answer also confuses big O notation and Theta notation. The function of n that returns 1 for all its inputs (usually simply written as 1) is actually in O(n^2) (even though it is also in O(1)). Similarly, an algorithm that only has to do one step which takes a constant amount of time is also considered to be an O(1) algorithm, but also to be an O(n) and an O(n^2) algorithm. But maybe mathematicians and computer scientists don't agree on the definition :-/.
- Jacob Akkerboom
@HollerTrain What he means is that in a given piece of code the cost is one even if you run the loop for many items; it still only costs 1. An example might be an initialization before a loop. It also means for large numbers this factor is one you would ignore when looking at performance. Similarly, if you run the logic once in the loop it would cost n, where n is the number of iterations of the loop, or items in this analogy, hence the O(n). Nested loops scale much higher and are much more costly. If you run a loop once for each item and a nested one again for each item it would be n^2. - JPK
In the O(log n) example, it says 100 items: 3 seconds. What is the base of the log here? Is it base 2? - ernesto
Do you have an explanation like this for O(n log n)? - mstagg
Aren't the log numbers wrong? For example Log(1000) is 3 - Tony_Henrich
(2) The O(log n) logarithmic complexity considered in this answer is of base 10. Generally the standard is to calculate with base 2. One should keep this fact in mind and not get confused. Also, as mentioned by @ChrisCharabaruk, the complexity denotes the number of operations and not seconds. - akm
Although missing some points, this is a very good explanation of the basics of Big O. Thank you. - samcozmid
If anyone is still confused about why the answer uses seconds instead of operations: "assuming each computation takes 1 second", which I think means it's using time as a representation of an operation(s) - csguy
I've noticed that all the different complexity cases have names. For example, n^2 is named Quadratic Complexity. So, what's the proper name for the n * log n complexity case? I don't think it should be named just Logarithmic Complexity. - carloswm85
2
[+433] [2011-07-08 04:46:50] ninjagecko

Big-O notation (also called "asymptotic growth" notation) is what functions "look like" when you ignore constant factors and stuff near the origin. We use it to talk about how things scale.

Basics

for "sufficiently" large inputs...

• `f(x) ∈ O(upperbound)` means `f` "grows no faster than" `upperbound`
• `f(x) ∈ Ɵ(justlikethis)` means `f` "grows exactly like" `justlikethis`
• `f(x) ∈ Ω(lowerbound)` means `f` "grows no slower than" `lowerbound`

big-O notation doesn't care about constant factors: the function `9x²` is said to "grow exactly like" `10x²`. Neither does big-O asymptotic notation care about non-asymptotic stuff ("stuff near the origin" or "what happens when the problem size is small"): the function `10x²` is said to "grow exactly like" `10x² - x + 2`.

Why would you want to ignore the smaller parts of the equation? Because they become completely dwarfed by the big parts of the equation as you consider larger and larger scales; their contribution becomes dwarfed and irrelevant. (See example section.)

Put another way, it's all about the ratio as you go to infinity. If you divide the actual time it takes by the `O(...)`, you will get a constant factor in the limit of large inputs. Intuitively this makes sense: functions "scale like" one another if you can multiply one to get the other. That is when we say...

```
actualAlgorithmTime(N) ∈ O(bound(N))
               e.g. "time to mergesort N elements
                     is O(N log(N))"
```

... this means that for "large enough" problem sizes N (if we ignore stuff near the origin), there exists some constant (e.g. 2.5, completely made up) such that:

```
actualAlgorithmTime(N)                 e.g. "mergesort_duration(N)"
────────────────────── < constant             ─────────────────────── < 2.5
       bound(N)                                      N log(N)
```

There are many choices of constant; often the "best" choice is known as the "constant factor" of the algorithm... but we often ignore it like we ignore non-largest terms (see Constant Factors section for why they don't usually matter).

You can also think of the above equation as a bound, saying "In the worst-case scenario, the time it takes will never be worse than roughly `N*log(N)`, within a factor of 2.5 (a constant factor we don't care much about)".

In general, `O(...)` is the most useful one because we often care about worst-case behavior. If `f(x)` represents something "bad" like processor or memory usage, then "`f(x) ∈ O(upperbound)`" means "`upperbound` is the worst-case scenario of processor/memory usage".

Applications

As a purely mathematical construct, big-O notation is not limited to talking about processing time and memory. You can use it to discuss the asymptotics of anything where scaling is meaningful, such as:

• the number of possible handshakes among `N` people at a party (`Ɵ(N²)`, specifically `N(N-1)/2`, but what matters is that it "scales like" `N²`)
• the probabilistic expected number of people who have seen some viral marketing as a function of time
• how website latency scales with the number of processing units in a CPU or GPU or computer cluster
• how heat output scales on CPU dies as a function of transistor count, voltage, etc.
• how much time an algorithm needs to run, as a function of input size
• how much space an algorithm needs to run, as a function of input size

Example

For the handshake example above, everyone in a room shakes everyone else's hand. In that example, `#handshakes ∈ Ɵ(N²)`. Why?

Back up a bit: the number of handshakes is exactly n-choose-2 or `N*(N-1)/2` (each of N people shakes the hands of N-1 other people, but this double-counts handshakes, so divide by 2):

[chart omitted: N×N grid of handshake pairs, with the diagonal empty]

However, for very large numbers of people, the linear term `N` is dwarfed and effectively contributes 0 to the ratio (in the chart: the fraction of empty boxes on the diagonal over total boxes gets smaller as the number of participants becomes larger).
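The dwarfing of the linear term is easy to check numerically. A minimal sketch (the function name `handshakes` is mine, not from the answer):

```python
def handshakes(n):
    # Exact count: n-choose-2. Each of n people shakes hands with n-1
    # others; divide by 2 because A-shakes-B and B-shakes-A are the same.
    return n * (n - 1) // 2

# The ratio handshakes(n) / n² approaches 1/2 as n grows, so the
# subtracted n/2 term stops mattering: the count "grows like" n².
for n in (10, 1_000, 1_000_000):
    ratio = handshakes(n) / n**2
```

For n = 1,000,000 the ratio is already 0.4999995, matching the 0.499999... figure quoted below.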
Therefore the scaling behavior is `order N²`, or the number of handshakes \"grows like N²\".\n\n``````#handshakes(N)\n────────────── ≈ 1/2\nN²\n``````\n\nIt's as if the empty boxes on the diagonal of the chart (N*(N-1)/2 checkmarks) weren't even there (N² checkmarks asymptotically).\n\n(temporary digression from \"plain English\":) If you wanted to prove this to yourself, you could perform some simple algebra on the ratio to split it up into multiple terms (`lim` means \"considered in the limit of\", just ignore it if you haven't seen it, it's just notation for \"and N is really really big\"):\n\n`````` N²/2 - N/2 (N²)/2 N/2 1/2\nlim ────────── = lim ( ────── - ─── ) = lim ─── = 1/2\nN→∞ N² N→∞ N² N² N→∞ 1\n┕━━━┙\nthis is 0 in the limit of N→∞:\ngraph it, or plug in a really large number for N\n``````\n\ntl;dr: The number of handshakes 'looks like' x² so much for large values, that if we were to write down the ratio #handshakes/x², the fact that we don't need exactly x² handshakes wouldn't even show up in the decimal for an arbitrarily large while.\n\ne.g. for x=1million, ratio #handshakes/x²: 0.499999...\n\nBuilding Intuition\n\nThis lets us make statements like...\n\n\"For large enough inputsize=N, no matter what the constant factor is, if I double the input size...\n\n• ... I double the time an O(N) (\"linear time\") algorithm takes.\"\n\ncN → c(2N) = 2(cN)\n\n• ... I double-squared (quadruple) the time an O(N²) (\"quadratic time\") algorithm takes.\" (e.g. a problem 100x as big takes 100²=10000x as long... possibly unsustainable)\n\ncN² → c(2N)² = 4(cN²)\n\n• ... I double-cubed (octuple) the time an O(N³) (\"cubic time\") algorithm takes.\" (e.g. a problem 100x as big takes 100³=1000000x as long... very unsustainable)\n\ncN³ → c(2N)³ = 8(cN³)\n\n• ... I add a fixed amount to the time an O(log(N)) (\"logarithmic time\") algorithm takes.\" (cheap!)\n\nc log(N) → c log(2N) = (c log(2))+(c log(N)) = (fixed amount)+(c log(N))\n\n• ...
I don't change the time an O(1) (\"constant time\") algorithm takes.\" (the cheapest!)\n\nc*1 → c*1\n\n• ... I \"(basically) double\" the time an O(N log(N)) algorithm takes.\" (fairly common)\n\nc 2N log(2N) / c N log(N) (here we divide f(2n)/f(n), but we could have massaged the expression and factored out cNlogN as above)\n→ 2 log(2N)/log(N)\n→ 2 (log(2) + log(N))/log(N)\n→ 2*(1 + 1/log₂(N)) (basically 2 for large N; eventually less than 2.000001)\n(alternatively, say log(N) will always be below like 17 for your data so it's O(17 N) which is linear; that is not rigorous nor sensical though)\n\n• ... I ridiculously increase the time an O(2^N) (\"exponential time\") algorithm takes.\" (you'd double (or triple, etc.) the time just by increasing the problem by a single unit)\n\n2^N → 2^(2N) = (4^N)............put another way...... 2^N → 2^(N+1) = 2^N * 2^1 = 2 * 2^N\n\n(with credit to https://stackoverflow.com/a/487292/711085 )\n\n(technically the constant factor could maybe matter in some more esoteric examples, but I've phrased things above (e.g. in log(N)) such that it doesn't)\n\nThese are the bread-and-butter orders of growth that programmers and applied computer scientists use as reference points. They see these all the time. (So while you could technically think \"Doubling the input makes an O(√N) algorithm 1.414 times slower,\" it's better to think of it as \"this is worse than logarithmic but better than linear\".)\n\nConstant factors\n\nUsually, we don't care what the specific constant factors are, because they don't affect the way the function grows. For example, two algorithms may both take `O(N)` time to complete, but one may be twice as slow as the other. We usually don't care too much unless the factor is very large since optimizing is tricky business ( When is optimisation premature? 
); also the mere act of picking an algorithm with a better big-O will often improve performance by orders of magnitude.\n\nSome asymptotically superior algorithms (e.g. a non-comparison `O(N log(log(N)))` sort) can have so large a constant factor (e.g. `100000*N log(log(N))`), or overhead that is relatively large like `O(N log(log(N)))` with a hidden `+ 100*N`, that they are rarely worth using even on \"big data\".\n\nWhy O(N) is sometimes the best you can do, i.e. why we need datastructures\n\n`O(N)` algorithms are in some sense the \"best\" algorithms if you need to read all your data. The very act of reading a bunch of data is an `O(N)` operation. Loading it into memory is usually `O(N)` (or faster if you have hardware support, or no time at all if you've already read the data). However, if you touch or even look at every piece of data (or even every other piece of data), your algorithm will take `O(N)` time to perform this looking. No matter how long your actual algorithm takes, it will be at least `O(N)` because it spent that time looking at all the data.\n\nThe same can be said for the very act of writing. All algorithms which print out N things will take N time because the output is at least that long (e.g. printing out all permutations (ways to rearrange) a set of N playing cards is factorial: `O(N!)` (which is why in those cases, good programs will ensure an iteration uses O(1) memory and doesn't print or store every intermediate step)).\n\nThis motivates the use of data structures: a data structure requires reading the data only once (usually `O(N)` time), plus some arbitrary amount of preprocessing (e.g. `O(N)` or `O(N log(N))` or `O(N²)`) which we try to keep small. Thereafter, modifying the data structure (insertions/deletions/ etc.) and making queries on the data take very little time, such as `O(1)` or `O(log(N))`. You then proceed to make a large number of queries! 
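As a toy illustration of this trade (names invented for this sketch): pay `O(N)` once to build a hash-based set, after which each membership query is `O(1)` on average, instead of scanning the whole list on every query:

```python
data = list(range(100_000))

# O(N) per query: scan the list every time
def seen_scan(x):
    return x in data  # linear search through the list

# One-time O(N) preprocessing: pour the data into a hash table (a set)...
seen = set(data)

# ...after which each query is O(1) on average
def seen_hashed(x):
    return x in seen  # hash lookup
```

With many queries the preprocessing cost is quickly repaid: Q queries cost roughly O(N + Q) instead of O(N*Q).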
In general, the more work you're willing to do ahead of time, the less work you'll have to do later on.\n\nFor example, say you had the latitude and longitude coordinates of millions of road segments and wanted to find all street intersections.\n\n• Naive method: If you had the coordinates of a street intersection, and wanted to examine nearby streets, you would have to go through the millions of segments each time, and check each one for adjacency.\n• If you only needed to do this once, it would not be a problem to have to do the naive method of `O(N)` work only once, but if you want to do it many times (in this case, `N` times, once for each segment), we'd have to do `O(N²)` work, or 1000000²=1000000000000 operations. Not good (a modern computer can perform about a billion operations per second).\n• If we use a simple structure called a hash table (an instant-speed lookup table, also known as a hashmap or dictionary), we pay a small cost by preprocessing everything in `O(N)` time. Thereafter, it only takes constant time on average to look up something by its key (in this case, our key is the latitude and longitude coordinates, rounded into a grid; we search the adjacent gridspaces of which there are only 9, which is a constant).\n• Our task went from an infeasible `O(N²)` to a manageable `O(N)`, and all we had to do was pay a minor cost to make a hash table.\n• analogy: The analogy in this particular case is a jigsaw puzzle: We created a data structure that exploits some property of the data. If our road segments are like puzzle pieces, we group them by matching color and pattern. We then exploit this to avoid doing extra work later (comparing puzzle pieces of like color to each other, not to every other single puzzle piece).\n\nThe moral of the story: a data structure lets us speed up operations. Even more, advanced data structures can let you combine, delay, or even ignore operations in incredibly clever ways. 
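A rough Python sketch of the rounded-grid hash-table idea above. The coordinates, grid size, and names here are invented for illustration; a real implementation would have to worry about edge cases (negative coordinates, cell-boundary rounding, and so on):

```python
from collections import defaultdict

# Fake "road segment" points: (latitude, longitude) pairs.
segments = [(1.23, 4.56), (1.24, 4.57), (9.99, 0.01)]

CELL = 0.1  # grid resolution; an arbitrary choice for this sketch

def cell(lat, lon):
    # Round coordinates down into a grid cell; the cell is our hash key.
    return (int(lat / CELL), int(lon / CELL))

# O(N) preprocessing: bucket every segment by its grid cell.
grid = defaultdict(list)
for lat, lon in segments:
    grid[cell(lat, lon)].append((lat, lon))

def nearby(lat, lon):
    # O(1) per query: examine only the 9 surrounding cells
    # (a constant), instead of all N segments.
    r, c = cell(lat, lon)
    found = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            found.extend(grid.get((r + dr, c + dc), []))
    return found
```

Querying `nearby` for every segment is then `O(N)` overall, rather than the `O(N²)` of the naive all-pairs comparison.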
Different problems would have different analogies, but they'd all involve organizing the data in a way that exploits some structure we care about, or which we've artificially imposed on it for bookkeeping. We do work ahead of time (basically planning and organizing), and now repeated tasks are much much easier!\n\nPractical example: visualizing orders of growth while coding\n\nAsymptotic notation is, at its core, quite separate from programming. Asymptotic notation is a mathematical framework for thinking about how things scale and can be used in many different fields. That said... this is how you apply asymptotic notation to coding.\n\nThe basics: Whenever we interact with every element in a collection of size A (such as an array, a set, all keys of a map, etc.), or perform A iterations of a loop, that is a multiplicative factor of size A. Why do I say \"a multiplicative factor\"?--because loops and functions (almost by definition) have multiplicative running time: the number of iterations, times work done in the loop (or for functions: the number of times you call the function, times work done in the function). (This holds if we don't do anything fancy, like skip loops or exit the loop early, or change control flow in the function based on arguments, which is very common.) Here are some examples of visualization techniques, with accompanying pseudocode.\n\n(here, the `x`s represent constant-time units of work, processor instructions, interpreter opcodes, whatever)\n\n``````for(i=0; i<A; i++) // A * ...\nsome O(1) operation // 1\n\n--> A*1 --> O(A) time\n\nvisualization:\n\n|<------ A ------->|\n1 2 3 4 5 x x ... 
x\n\nother languages, multiplying orders of growth:\njavascript, O(A) time and space\nsomeListOfSizeA.map((x,i) => [x,i])\npython, O(rows*cols) time and space\n[[r*c for c in range(cols)] for r in range(rows)]\n``````\n\nExample 2:\n\n``````for every x in listOfSizeA: // A * (...\nsome O(1) operation // 1\nsome O(B) operation // B\nfor every y in listOfSizeC: // C * (...\nsome O(1) operation // 1))\n\n--> O(A*(1 + B + C))\nO(A*(B+C)) (1 is dwarfed)\n\nvisualization:\n\n|<------ A ------->|\n1 x x x x x x ... x\n\n2 x x x x x x ... x ^\n3 x x x x x x ... x |\n4 x x x x x x ... x |\n5 x x x x x x ... x B <-- A*B\nx x x x x x x ... x |\n................... |\nx x x x x x x ... x v\n\nx x x x x x x ... x ^\nx x x x x x x ... x |\nx x x x x x x ... x |\nx x x x x x x ... x C <-- A*C\nx x x x x x x ... x |\n................... |\nx x x x x x x ... x v\n``````\n\nExample 3:\n\n``````function nSquaredFunction(n) {\ntotal = 0\nfor i in 1..n: // N *\nfor j in 1..n: // N *\ntotal += i*j // 1\nreturn total\n}\n// O(n^2)\n\nfunction nCubedFunction(a) {\nfor i in 1..a: // A *\nprint(nSquaredFunction(a)) // A^2\n}\n// O(a^3)\n``````\n\nIf we do something slightly complicated, you might still be able to imagine visually what's going on:\n\n``````for x in range(A):\nfor y in range(1..x):\nsimpleOperation(x*y)\n\nx x x x x x x x x x |\nx x x x x x x x x |\nx x x x x x x x |\nx x x x x x x |\nx x x x x x |\nx x x x x |\nx x x x |\nx x x |\nx x |\nx___________________|\n``````\n\nHere, the smallest recognizable outline you can draw is what matters; a triangle is a two-dimensional shape (0.5 A^2), just like a square is a two-dimensional shape (A^2); the constant factor of two here remains in the asymptotic ratio between the two, however, we ignore it like all factors...
(There are some unfortunate nuances to this technique I don't go into here; it can mislead you.)\n\nOf course this does not mean that loops and functions are bad; on the contrary, they are the building blocks of modern programming languages, and we love them. However, we can see that the way we weave loops and functions and conditionals together with our data (control flow, etc.) mimics the time and space usage of our program! If time and space usage becomes an issue, that is when we resort to cleverness and find an easy algorithm or data structure we hadn't considered, to reduce the order of growth somehow. Nevertheless, these visualization techniques (though they don't always work) can give you a naive guess at a worst-case running time.\n\nHere is another thing we can recognize visually:\n\n``````<----------------------------- N ----------------------------->\nx x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x\nx x x x x x x x x x x x x x x x\nx x x x x x x x\nx x x x\nx x\nx\n``````\n\nWe can just rearrange this and see it's O(N):\n\n``````<----------------------------- N ----------------------------->\nx x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x\nx x x x x x x x x x x x x x x x|x x x x x x x x|x x x x|x x|x\n``````\n\nOr maybe you do log(N) passes of the data, for O(N*log(N)) total time:\n\n`````` <----------------------------- N ----------------------------->\n^ x x x x x x x x x x x x x x x x|x x x x x x x x x x x x x x x x\n| x x x x x x x x|x x x x x x x x|x x x x x x x x|x x x x x x x x\nlgN x x x x|x x x x|x x x x|x x x x|x x x x|x x x x|x x x x|x x x x\n| x x|x x|x x|x x|x x|x x|x x|x x|x x|x x|x x|x x|x x|x x|x x|x x\nv x|x|x|x|x|x|x|x|x|x|x|x|x|x|x|x|x|x|x|x|x|x|x|x|x|x|x|x|x|x|x|x\n``````\n\nUnrelatedly but worth mentioning again: If we perform a hash (e.g. a dictionary/hashtable lookup), that is a factor of O(1). 
That's pretty fast.\n\n``````[myDictionary.has(x) for x in listOfSizeA]\n\\----- O(1) ------/\n\n--> A*1 --> O(A)\n``````\n\nIf we do something very complicated, such as with a recursive function or divide-and-conquer algorithm, you can use the Master Theorem (usually works) or, in ridiculous cases, the Akra-Bazzi Theorem (almost always works); or you can simply look up the running time of your algorithm on Wikipedia.\n\nBut programmers don't think like this, because eventually algorithm intuition just becomes second nature. You will start to code something inefficient and immediately think \"am I doing something grossly inefficient?\". If the answer is \"yes\" AND you foresee it actually mattering, then you can take a step back and think of various tricks to make things run faster (the answer is almost always \"use a hashtable\", rarely \"use a tree\", and very rarely something a bit more complicated).\n\nAmortized and average-case complexity\n\nThere is also the concept of \"amortized\" and/or \"average case\" (note that these are different).\n\nAverage Case: This is no more than using big-O notation for the expected value of a function, rather than the function itself. In the usual case where you consider all inputs to be equally likely, the average case is just the average of the running time. For example with quicksort, even though the worst-case is `O(N^2)` for some really bad inputs, the average case is the usual `O(N log(N))` (the really bad inputs are very small in number, so few that we don't notice them in the average case).\n\nAmortized Worst-Case: Some data structures may have a worst-case complexity that is large, but guarantee that if you do many of these operations, the average amount of work you do will be better than worst-case. For example, you may have a data structure that normally takes constant `O(1)` time.
However, occasionally it will 'hiccup' and take `O(N)` time for one random operation, because maybe it needs to do some bookkeeping or garbage collection or something... but it promises you that if it does hiccup, it won't hiccup again for N more operations. The worst-case cost is still `O(N)` per operation, but the amortized cost over many runs is `O(N)/N` = `O(1)` per operation. Because the big operations are sufficiently rare, the massive amount of occasional work can be considered to blend in with the rest of the work as a constant factor. We say the work is \"amortized\" over a sufficiently large number of calls that it disappears asymptotically.\n\nThe analogy for amortized analysis:\n\nYou drive a car. Occasionally, you need to spend 10 minutes going to the gas station and then spend 1 minute refilling the tank with gas. If you did this every time you went anywhere with your car (spend 10 minutes driving to the gas station, spend a few seconds filling up a fraction of a gallon), it would be very inefficient. But if you fill up the tank once every few days, the 11 minutes spent driving to the gas station is \"amortized\" over a sufficiently large number of trips, that you can ignore it and pretend all your trips were maybe 5% longer.\n\nComparison between average-case and amortized worst-case:\n\n• Average-case: We make some assumptions about our inputs; i.e. if our inputs have different probabilities, then our outputs/runtimes will have different probabilities (which we take the average of). Usually, we assume that our inputs are all equally likely (uniform probability), but if the real-world inputs don't fit our assumptions of \"average input\", the average output/runtime calculations may be meaningless. If you anticipate uniformly random inputs though, this is useful to think about!\n• Amortized worst-case: If you use an amortized worst-case data structure, the performance is guaranteed to be within the amortized worst-case... 
eventually (even if the inputs are chosen by an evil demon who knows everything and is trying to screw you over). Usually, we use this to analyze algorithms that may be very 'choppy' in performance with unexpected large hiccups, but over time perform just as well as other algorithms. (However, unless your data structure has upper limits on how much outstanding work it is willing to procrastinate on, an evil attacker could perhaps force you to catch up on the maximum amount of procrastinated work all-at-once.\n\nThough, if you're reasonably worried about an attacker, there are many other algorithmic attack vectors to worry about besides amortization and average-case.)\n\nBoth average-case and amortization are incredibly useful tools for thinking about and designing with scaling in mind.\n\n(See Difference between average case and amortized analysis if interested in this subtopic.)\n\nMultidimensional big-O\n\nMost of the time, people don't realize that there's more than one variable at work. For example, in a string-search algorithm, your algorithm may take time `O([length of text] + [length of query])`, i.e. it is linear in two variables like `O(N+M)`. Other more naive algorithms may be `O([length of text]*[length of query])` or `O(N*M)`. Ignoring multiple variables is one of the most common oversights I see in algorithm analysis, and can handicap you when designing an algorithm.\n\nThe whole story\n\nKeep in mind that big-O is not the whole story. You can drastically speed up some algorithms by using caching, making them cache-oblivious, avoiding bottlenecks by working with RAM instead of disk, using parallelization, or doing work ahead of time -- these techniques are often independent of the order-of-growth \"big-O\" notation, though you will often see the number of cores in the big-O notation of parallel algorithms.\n\nAlso keep in mind that due to hidden constraints of your program, you might not really care about asymptotic behavior.
You may be working with a bounded number of values, for example:\n\n• If you're sorting something like 5 elements, you don't want to use the speedy `O(N log(N))` quicksort; you want to use insertion sort, which happens to perform well on small inputs. These situations often come up in divide-and-conquer algorithms, where you split up the problem into smaller and smaller subproblems, such as recursive sorting, fast Fourier transforms, or matrix multiplication.\n• If some values are effectively bounded due to some hidden fact (e.g. the average human name is softly bounded at perhaps 40 letters, and human age is softly bounded at around 150). You can also impose bounds on your input to effectively make terms constant.\n\nIn practice, even among algorithms which have the same or similar asymptotic performance, their relative merit may actually be driven by other things, such as: other performance factors (quicksort and mergesort are both `O(N log(N))`, but quicksort takes advantage of CPU caches); non-performance considerations, like ease of implementation; whether a library is available, and how reputable and maintained the library is.\n\nPrograms will also run slower on a 500MHz computer vs 2GHz computer. We don't really consider this as part of the resource bounds, because we think of the scaling in terms of machine resources (e.g. per clock cycle), not per real second. However, there are similar things which can 'secretly' affect performance, such as whether you are running under emulation, or whether the compiler optimized code or not. These might make some basic operations take longer (even relative to each other), or even speed up or slow down some operations asymptotically (even relative to each other). The effect may be small or large between different implementation and/or environment. Do you switch languages or machines to eke out that little extra work? 
That depends on a hundred other reasons (necessity, skills, coworkers, programmer productivity, the monetary value of your time, familiarity, workarounds, why not assembly or GPU, etc...), which may be more important than performance.\n\nThe above issues, like the effect of the choice of which programming language is used, are almost never considered as part of the constant factor (nor should they be); yet one should be aware of them because sometimes (though rarely) they may affect things. For example in cpython, the native priority queue implementation is asymptotically non-optimal (`O(log(N))` rather than `O(1)` for your choice of insertion or find-min); do you use another implementation? Probably not, since the C implementation is probably faster, and there are probably other similar issues elsewhere. There are tradeoffs; sometimes they matter and sometimes they don't.\n\n(edit: The \"plain English\" explanation ends here.)\n\nFor completeness, the precise definition of big-O notation is as follows: `f(x) ∈ O(g(x))` means that \"f is asymptotically upper-bounded by const*g\": ignoring everything below some finite value of x, there exists a constant such that `|f(x)| ≤ const * |g(x)|`. (The other symbols are as follows: just like `O` means ≤, `Ω` means ≥. There are lowercase variants: `o` means <, and `ω` means >.) `f(x) ∈ Ɵ(g(x))` means both `f(x) ∈ O(g(x))` and `f(x) ∈ Ω(g(x))` (upper- and lower-bounded by g): there exists some constants such that f will always lie in the \"band\" between `const1*g(x)` and `const2*g(x)`. It is the strongest asymptotic statement you can make and roughly equivalent to `==`. (Sorry, I elected to delay the mention of the absolute-value symbols until now, for clarity's sake; especially because I have never seen negative values come up in a computer science context.)\n\nPeople will often use `= O(...)`, which is perhaps the more correct 'comp-sci' notation, and entirely legitimate to use; \"f = O(...)\" is read \"f is order ... 
/ f is xxx-bounded by ...\" and is thought of as \"f is some expression whose asymptotics are ...\". I was taught to use the more rigorous `∈ O(...)`. `∈` means \"is an element of\" (still read as before). In this particular case, `O(N²)` contains elements like {`2 N²`, `3 N²`, `1/2 N²`, `2 N² + log(N)`, `- N² + N^1.9`, ...} and is infinitely large, but it's still a set.\n\nO and Ω are not symmetric (n = O(n²), but n² is not O(n)), but Ɵ is symmetric, and (since these relations are all transitive and reflexive) Ɵ, therefore, is symmetric and transitive and reflexive, and therefore partitions the set of all functions into equivalence classes. An equivalence class is a set of things that we consider to be the same. That is to say, given any function you can think of, you can find a canonical/unique 'asymptotic representative' of the class (by generally taking the limit... I think); just like you can group all integers into odds or evens, you can group all functions with Ɵ into x-ish, log(x)^2-ish, etc... by basically ignoring smaller terms (but sometimes you might be stuck with more complicated functions which are separate classes unto themselves).\n\nThe `=` notation might be the more common one and is even used in papers by world-renowned computer scientists. Additionally, it is often the case that in a casual setting, people will say `O(...)` when they mean `Ɵ(...)`; this is technically true since the set of things `Ɵ(exactlyThis)` is a subset of `O(noGreaterThanThis)`... and it's easier to type. ;-)\n\n https://stackoverflow.com/questions/385506/when-is-optimisation-premature\n https://en.wikipedia.org/wiki/Master_theorem\n https://www.usenix.org/conference/12th-usenix-security-symposium/denial-service-algorithmic-complexity-attacks\n https://stackoverflow.com/q/7333376/711085\n\n(32) An excellent mathematical answer, but the OP asked for a plain English answer. 
This level of mathematical description isn't required to understand the answer, though for people particularly mathematically minded it may be a lot simpler to understand than \"plain English\". However the OP asked for the latter. - El Zorko\n(45) Presumably people other than the OP might have an interest in the answers to this question. Isn't that the guiding principle of the site? - Casey\n(7) While I can maybe see why people might skim my answer and think it is too mathy (especially the \"math is the new plain english\" snide remarks, since removed), the original question asks about big-O which is about functions, so I attempt to be explicit and talk about functions in a way that complements the plain-English intuition. The math here can often be glossed over, or understood with a highschool math background. I do feel that people may look at the Math Addenda at the end though, and assume that is part of the answer, when it is merely there to see what the real math looks like. - ninjagecko\n(5) This is a fantastic answer; much better IMO than the one with the most votes. The \"math\" required doesn't go beyond what's needed to understand the expressions in the parentheses after the \"O,\" which no reasonable explanation that uses any examples can avoid. - Dave Abrahams\n(2) \"f(x) ∈ O(upperbound) means f \"grows no faster than\" upperbound\" these three simply worded, but mathematically proper explanations of big Oh, Theta, and Omega are golden. He described to me in plain english the point that 5 different sources couldn't seem to translate to me without writing complex mathematical expressions. Thanks man! :) - timbram\n@Casey: Yes but those people should still expect to see answers to the question actually posed. - Lightness Races in Orbit\n@LightnessRacesinOrbit I think a term like \"plain English\" is subjective; I thought this answer was really helpful when I wrote that remark 3 years ago and was looking for answers to the question posed. 
- Casey\n@ninjagecko It should be \"`Big Oh`\" not \"`Big-O`\"; 1. Because in mathematical discussions, usually the dash \"-\" is mistaken with minus, 2. Also, if a letter stands for something, then that something spelled out is more human-readable. 3. Lastly, if we spell out \"Big\" why not \"Oh\"?! We have \"Big Oh, Big Omega, Big Theta\" names and \"OΩΘ\" letters, just like we have \"Ay Bee Cee\" and \"abc\", see 2015 discontinued wiki for more - Top-Master\n(1) Incredible answer. Thank you. - Ruben Caster\n3\n[+254] [2009-01-28 11:16:57] Jon Skeet\n\nEDIT: Quick note, this is almost certainly confusing Big O notation (which is an upper bound) with Theta notation (which is both an upper and lower bound). In my experience this is actually typical of discussions in non-academic settings. Apologies for any confusion caused.\n\nIn one sentence: As the size of your job goes up, how much longer does it take to complete it?\n\nObviously that's only using \"size\" as the input and \"time taken\" as the output — the same idea applies if you want to talk about memory usage etc.\n\nHere's an example where we have N T-shirts which we want to dry. We'll assume it's incredibly quick to get them in the drying position (i.e. the human interaction is negligible). That's not the case in real life, of course...\n\n• Using a washing line outside: assuming you have an infinitely large back yard, washing dries in O(1) time. However much you have of it, it'll get the same sun and fresh air, so the size doesn't affect the drying time.\n\n• Using a tumble dryer: you put 10 shirts in each load, and then they're done an hour later. (Ignore the actual numbers here — they're irrelevant.) So drying 50 shirts takes about 5 times as long as drying 10 shirts.\n\n• Putting everything in an airing cupboard: If we put everything in one big pile and just let general warmth do it, it will take a long time for the middle shirts to get dry. 
I wouldn't like to guess at the detail, but I suspect this is at least O(N^2) — as you increase the wash load, the drying time increases faster.\n\nOne important aspect of \"big O\" notation is that it doesn't say which algorithm will be faster for a given size. Take a hashtable (string key, integer value) vs an array of pairs (string, integer). Is it faster to find a key in the hashtable or an element in the array, based on a string? (i.e. for the array, \"find the first element where the string part matches the given key.\") Hashtables are generally amortised (~= \"on average\") O(1) — once they're set up, it should take about the same time to find an entry in a 100 entry table as in a 1,000,000 entry table. Finding an element in an array (based on content rather than index) is linear, i.e. O(N) — on average, you're going to have to look at half the entries.\n\nDoes this make a hashtable faster than an array for lookups? Not necessarily. If you've got a very small collection of entries, an array may well be faster — you may be able to check all the strings in the time that it takes to just calculate the hashcode of the one you're looking at. As the data set grows larger, however, the hashtable will eventually beat the array.\n\n http://en.wikipedia.org/wiki/Big_O_notation\n\n(6) A hashtable requires an algorithm to run to calculate the index of the actual array ( depending on the implementation ). And an array just have O(1) because it's just an adress. But this has nothing to do with the question, just an observation :) - Filip Ekberg\n(7) jon's explanation has very much todo with the question i think. 
it's exactly how one could explain it to some mum, and she would eventually understand it i think :) i like the clothes example (in particular the last, where it explains the exponential growth of complexity) - Johannes Schaub - litb\nOh i don't mean the whole answer, just the hashtable lookup and that it can, actually, Never be as fast as a direct adressing :) - Filip Ekberg\n(4) Filip: I'm not talking about address an array by index, I'm talking about finding a matching entry in an array. Could you reread the answer and see if that's still unclear? - Jon Skeet\n(3) @Filip Ekberg I think you're thinking of a direct-address table where each index maps to a key directly hence is O(1), however I believe Jon is talking about an unsorted array of key/val pairs which you have to search through linearly. - ljs\nCouldn't get this one - `Hashtables are generally amortised (~= \"on average\") O(1) — once they're set up`? How does that happen? Even after creating hash of every key in the dictionary, dictionary data structure will have to do some binary look-up when I ask it for a key and its respective value pair. Isn't it? - RBT\n(2) @RBT: No, it's not a binary look-up. It can get to the right hash bucket immediately, just based on a transformation from hash code to bucket index. After that, finding the right hash code in the bucket may be linear or it may be a binary search... but by that time you're down to a very small proportion of the total size of the dictionary. 
- Jon Skeet\n4\n[+137] [2009-01-28 11:23:07] starblue\n\nBig O describes an upper limit on the growth behaviour of a function, for example the runtime of a program, when inputs become large.\n\nExamples:\n\n• O(n): If I double the input size the runtime doubles\n\n• O(n²): If the input size doubles the runtime quadruples\n\n• O(log n): If the input size doubles the runtime increases by one\n\n• O(2ⁿ): If the input size increases by one, the runtime doubles\n\nThe input size is usually the space in bits needed to represent the input.\n\n(7) incorrect! for example O(n): If I double the input size the runtime will multiply to finite non zero constant. I mean O(n) = O(n + n) - arena-ru\n(7) I'm talking about the f in f(n) = O(g(n)), not the g as you seem to understand. - starblue\nI upvoted, but the last sentence doesn't contribute much I feel. We don't often talk about \"bits\" when discussing or measuring Big(O). - cdiggins\n(5) You should add an example for O(n log n). - Christoffer Hammarström\nThat's not so clear, essentially it behaves a little worse than O(n). So if n doubles, the runtime is multiplied by a factor somewhat larger than 2. - starblue\n5\n[+111] [2011-09-05 16:31:43] cdiggins\n\nBig O notation is most commonly used by programmers as an approximate measure of how long a computation (algorithm) will take to complete expressed as a function of the size of the input set.\n\nBig O is useful to compare how well two algorithms will scale up as the number of inputs is increased.\n\nMore precisely Big O notation is used to express the asymptotic behavior of a function. That means how the function behaves as it approaches infinity.\n\nIn many cases the \"O\" of an algorithm will fall into one of the following cases:\n\n• O(1) - Time to complete is the same regardless of the size of input set. An example is accessing an array element by index.\n• O(Log N) - Time to complete increases roughly in line with the log2(n).
For example 1024 items takes roughly twice as long as 32 items, because Log2(1024) = 10 and Log2(32) = 5. An example is finding an item in a binary search tree (BST).
• O(N) - Time to complete scales linearly with the size of the input set. In other words, if you double the number of items in the input set, the algorithm takes roughly twice as long. An example is counting the number of items in a linked list.
• O(N Log N) - Time to complete increases by the number of items times the result of Log2(N). An example of this is heap sort and quick sort.
• O(N^2) - Time to complete is roughly equal to the square of the number of items. An example of this is bubble sort.
• O(N!) - Time to complete is the factorial of the input set. An example of this is the traveling salesman problem brute-force solution.

Big O ignores factors that do not contribute in a meaningful way to the growth curve of a function as the input size increases towards infinity. This means that constants that are added to or multiplied by the function are simply ignored.

 http://en.wikipedia.org/wiki/Big_O_notation
 http://en.wikipedia.org/wiki/Binary_search_tree
 http://en.wikipedia.org/wiki/Heap_sort
 http://en.wikipedia.org/wiki/Quick_sort
 http://en.wikipedia.org/wiki/Bubble_sort
 http://en.wikipedia.org/wiki/Travelling_salesman_problem

cdiggins, what if I have O(N/2) complexity, should it be O(N) or O(N/2)? For example, what is the complexity if I loop over half a string? - Melad Basilius
(1) @Melad This is an example of a constant (0.5) being multiplied by the function. This is ignored as it is considered not to have a meaningful effect for very large values of N.
- cdiggins
6
[+88] [2009-01-28 11:14:21] Filip Ekberg

Big O is just a way to "express" yourself in a common way: "How much time / space does it take to run my code?".

You may often see O(n), O(n2), O(nlogn) and so forth; all these are just ways to show how an algorithm changes.

O(n) means Big O is n, and now you might think, "What is n!?" Well, "n" is the number of elements. Imagine you want to search for an item in an array. You would have to look at each element and ask "Are you the correct element/item?" In the worst case, the item is at the last index, which means that it took as much time as there are items in the list, so to be generic, we say "oh hey, n is a fair given amount of values!".

So then you might understand what "n2" means, but to be even more specific, imagine you have the simplest of the sorting algorithms: bubble sort. This algorithm needs to look through the whole list, for each item.

My list

1. 1
2. 6
3. 3

The flow here would be:

• Compare 1 and 6, which is biggest? OK, 6 is in the right position, moving forward!
• Compare 6 and 3, oh, 3 is less! Let's move that. OK, the list changed, we need to start from the beginning now!

This is O(n2) because you need to look at all items in the list, and there are "n" items. For each item, you look at all items once more, for comparing; this is also "n", so for every item, you look "n" times, meaning n*n = n2.

I hope this is as simple as you want it.

But remember, Big O is just a way to express yourself in the manner of time and space.

for log N we consider a for loop running from 0 to N/2; then what about O(log log N)? I mean, what does the program look like? pardon my poor math skills - Govinda Sakhare
7
[+58] [2009-01-28 13:12:24] Wedge

Big O describes the fundamental scaling nature of an algorithm.

There is a lot of information that Big O does not tell you about a given algorithm.
It cuts to the bone and gives only information about the scaling nature of an algorithm, specifically how the resource use (think time or memory) of an algorithm scales in response to the "input size".

Consider the difference between a steam engine and a rocket. They are not merely different varieties of the same thing (as, say, a Prius engine vs. a Lamborghini engine) but they are dramatically different kinds of propulsion systems, at their core. A steam engine may be faster than a toy rocket, but no steam piston engine will be able to achieve the speeds of an orbital launch vehicle. This is because these systems have different scaling characteristics with regard to the relation of fuel required ("resource usage") to reach a given speed ("input size").

Why is this so important? Because software deals with problems that may differ in size by factors up to a trillion. Consider that for a moment. The ratio between the speed necessary to travel to the Moon and human walking speed is less than 10,000:1, and that is absolutely tiny compared to the range in input sizes software may face. And because software may face an astronomical range in input sizes, there is the potential for the Big O complexity of an algorithm, its fundamental scaling nature, to trump any implementation details.

Consider the canonical sorting example. Bubble-sort is O(n2) while merge-sort is O(n log n). Let's say you have two sorting applications, application A which uses bubble-sort and application B which uses merge-sort, and let's say that for input sizes of around 30 elements application A is 1,000x faster than application B at sorting. If you never have to sort much more than 30 elements then it's obvious that you should prefer application A, as it is much faster at these input sizes.
However, if you find that you may have to sort ten million items then what you'd expect is that application B actually ends up being thousands of times faster than application A in this case, entirely due to the way each algorithm scales.

8
[+45] [2014-01-27 23:09:37] Andrew Prock

Here is the plain English bestiary I tend to use when explaining the common varieties of Big-O.

In all cases, prefer algorithms higher up on the list to those lower on the list. However, the cost of moving to a more expensive complexity class varies significantly.

O(1):

No growth. Regardless of how big the problem is, you can solve it in the same amount of time. This is somewhat analogous to broadcasting, where it takes the same amount of energy to broadcast over a given distance, regardless of the number of people that lie within the broadcast range.

O(log n):

This complexity is the same as O(1) except that it's just a little bit worse. For all practical purposes, you can consider this as a very large constant scaling. The difference in work between processing 1 thousand and 1 billion items is only a factor of three.

O(n):

The cost of solving the problem is proportional to the size of the problem. If your problem doubles in size, then the cost of the solution doubles. Since most problems have to be scanned into the computer in some way, as data entry, disk reads, or network traffic, this is generally an affordable scaling factor.

O(n log n):

This complexity is very similar to O(n). For all practical purposes, the two are equivalent. This level of complexity would generally still be considered scalable. By tweaking assumptions some O(n log n) algorithms can be transformed into O(n) algorithms. For example, bounding the size of keys reduces sorting from O(n log n) to O(n).

O(n2):

Grows as a square, where n is the length of the side of a square.
This is the same growth rate as the "network effect", where everyone in a network might know everyone else in the network. Growth is expensive. Most scalable solutions cannot use algorithms with this level of complexity without doing significant gymnastics. This generally applies to all other polynomial complexities - O(nk) - as well.

O(2n):

Does not scale. You have no hope of solving any non-trivially sized problem. Useful for knowing what to avoid, and for experts to find approximate algorithms which are in O(nk).

(2) Could you please consider a different analogy for O(1)? The engineer in me wants to pull out a discussion about RF impedance due to obstructions. - johnwbyrd
9
[+38] [2009-01-28 11:19:29] Brownie

Big O is a measure of how much time/space an algorithm uses relative to the size of its input.

If an algorithm is O(n) then the time/space will increase at the same rate as its input.

If an algorithm is O(n2) then the time/space increases at the rate of its input squared.

and so on.

(2) It's not about space. It's about complexity which means time. - S.Lott
(14) I have always believed it can be about time OR space, but not about both at the same time. - Rocco
(9) Complexity most definitely can be about space. Have a look at this: en.wikipedia.org/wiki/PSPACE - Tom Crockett
(4) This answer is the most "plain" one here. Previous ones actually assume readers know enough to understand them, but the writers are not aware of it. They think theirs are simple and plain, which they are absolutely not. Writing a lot of text with pretty formatting and making fancy artificial examples that are hard for non-CS people is not plain and simple, it is just attractive to stackoverflowers, who are mostly CS people, to upvote. Explaining a CS term in plain English needs nothing about code and math at all. +1 for this answer, though it is still not good enough. - W.Sun
This answer makes the (common) error of assuming that f=O(g) means that f and g are proportional.
- Paul Hankin
You can use Big O to do measurements on both `space` and `time`. For example, you could create an algorithm that requires a lot more space, which helps reduce your time complexity, or you could create an algorithm that does not require any additional space, such as `in situ` algorithms. People in practice would then pick the algorithm that best suits their need, be it for performance or to minimize space, etc. - James Oravec
10
[+37] [2013-05-29 13:51:20] William Payne

It is very difficult to measure the speed of software programs, and when we try, the answers can be very complex and filled with exceptions and special cases. This is a big problem, because all those exceptions and special cases are distracting and unhelpful when we want to compare two different programs with one another to find out which is "fastest".

As a result of all this unhelpful complexity, people try to describe the speed of software programs using the smallest and least complex (mathematical) expressions possible. These expressions are very very crude approximations: Although, with a bit of luck, they will capture the "essence" of whether a piece of software is fast or slow.

Because they are approximations, we use the letter "O" (Big Oh) in the expression, as a convention to signal to the reader that we are making a gross oversimplification. (And to make sure that nobody mistakenly thinks that the expression is in any way accurate).

If you read the "Oh" as meaning "on the order of" or "approximately" you will not go too far wrong. (I think the choice of the Big-Oh might have been an attempt at humour).

The only thing that these "Big-Oh" expressions try to do is to describe how much the software slows down as we increase the amount of data that the software has to process. If we double the amount of data that needs to be processed, does the software need twice as long to finish its work? Ten times as long?
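The doubling question above can be explored concretely with a small counting sketch. This is my own illustration, not part of the answer: it counts abstract "steps" instead of wall-clock time, so the numbers are exact and machine-independent. The function names are hypothetical.

```python
def linear_steps(n):
    """Steps for one pass over n items -- an O(n) shape of work."""
    steps = 0
    for _ in range(n):
        steps += 1
    return steps

def quadratic_steps(n):
    """Steps for a pass over all pairs of n items -- an O(n^2) shape of work."""
    steps = 0
    for _ in range(n):
        for _ in range(n):
            steps += 1
    return steps

# Doubling the data doubles the linear work, but quadruples the quadratic work.
print(linear_steps(2000) // linear_steps(1000))        # 2
print(quadratic_steps(2000) // quadratic_steps(1000))  # 4
```

The two ratios (2x vs. 4x) are exactly the "twice as long? ten times as long?" question answered for two different big-Oh classes.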
In practice, there are a very limited number of big-Oh expressions that you will encounter and need to worry about:

The good:

• `O(1)` Constant: The program takes the same time to run no matter how big the input is.
• `O(log n)` Logarithmic: The program run-time increases only slowly, even with big increases in the size of the input.

The bad:

• `O(n)` Linear: The program run-time increases proportionally to the size of the input.
• `O(n^k)` Polynomial: Processing time grows faster and faster - as a polynomial function - as the size of the input increases.

... and the ugly:

• `O(k^n)` Exponential: The program run-time increases very quickly with even moderate increases in the size of the problem - it is only practical to process small data sets with exponential algorithms.
• `O(n!)` Factorial: The program run-time will be longer than you can afford to wait for anything but the very smallest and most trivial-seeming datasets.

(4) I've also heard the term Linearithmic - `O(n log n)` which would be considered good. - Jason Down
11
[+36] [2013-02-22 01:00:27] James Oravec

What is a plain English explanation of Big O? With as little formal definition as possible and simple mathematics.

A Plain English Explanation of the Need for Big-O Notation:

When we program, we are trying to solve a problem. What we code is called an algorithm. Big O notation allows us to compare the worst-case performance of our algorithms in a standardized way. Hardware specs vary over time and improvements in hardware can reduce the time it takes an algorithm to run. But replacing the hardware does not mean our algorithm is any better or improved over time, as our algorithm is still the same.
So in order to allow us to compare different algorithms, to determine if one is better or not, we use Big O notation.

A Plain English Explanation of What Big O Notation is:

Not all algorithms run in the same amount of time, and can vary based on the number of items in the input, which we'll call n. Based on this, we consider the worst-case analysis, or an upper bound on the run-time as n gets larger and larger. We must be aware of what n is, because many of the Big O notations reference it.

12
[+32] [2011-08-23 04:06:03] Ajeet Ganga

OK, my 2 cents.

Big-O is the rate of increase of the resource consumed by a program, w.r.t. the problem-instance size.

Resource: could be total CPU time, could be maximum RAM space. By default it refers to CPU time.

Say the problem is "Find the sum",

``````int Sum(int *arr, int size) {
    int sum = 0;
    while (size-- > 0)
        sum += arr[size];
    return sum;
}
``````

problem-instance = {5,10,15} ==> problem-instance-size = 3, iterations-in-loop = 3

problem-instance = {5,10,15,20,25} ==> problem-instance-size = 5, iterations-in-loop = 5

For input of size "n" the program grows at a rate of "n" iterations over the array. Hence Big-O is N, expressed as O(n).

Say the problem is "Find the Combination",

``````void Combination(int *arr, int size) {
    int outer = size, inner = size;
    while (outer-- > 0) {
        inner = size;
        while (inner-- > 0)
            cout << arr[outer] << "-" << arr[inner] << endl;
    }
}
``````

problem-instance = {5,10,15} ==> problem-instance-size = 3, total-iterations = 3*3 = 9

problem-instance = {5,10,15,20,25} ==> problem-instance-size = 5, total-iterations = 5*5 = 25

For input of size "n" the program grows at a rate of "n*n" iterations over the array. Hence Big-O is N2, expressed as O(n2).

(3) `while (size-- > 0)` I hope this won't be asked again. - mr5
13
[+31] [2013-11-13 10:23:28] AlienOnEarth

A simple straightforward answer can be:

Big O represents the worst possible time/space for that algorithm.
The algorithm will never take more space/time than that limit. Big O represents time/space complexity in the extreme case.

14
[+29] [2010-07-17 02:29:35] John C Earls

Big O notation is a way of describing the upper bound of an algorithm in terms of space or running time. The n is the number of elements in the problem (i.e., the size of an array, the number of nodes in a tree, etc.) We are interested in describing the running time as n gets big.

When we say some algorithm is O(f(n)) we are saying that the running time (or space required) by that algorithm is always lower than some constant times f(n).

To say that binary search has a running time of O(log n) is to say that there exists some constant c which you can multiply log(n) by that will always be larger than the running time of binary search. In this case you will always have some constant factor of log(n) comparisons.

In other words, where g(n) is the running time of your algorithm, we say that g(n) = O(f(n)) when g(n) <= c*f(n) for n > k, where c and k are some constants.

We can use Big O notation to measure the worst case and average case as well. en.wikipedia.org/wiki/Big_O_notation - cdiggins

15
[+28] [2013-08-15 01:57:54] Joseph Myers

"What is a plain English explanation of Big O?
With as little formal definition as possible and simple mathematics."

Such a beautifully simple and short question seems at least to deserve an equally short answer, like a student might receive during tutoring.

Big O notation simply tells how much time* an algorithm can run within, in terms of only the amount of input data**.

( *in a wonderful, unit-free sense of time!)
(**which is what matters, because people will always want more, whether they live today or tomorrow)

Well, what's so wonderful about Big O notation if that's what it does?

• Practically speaking, Big O analysis is so useful and important because Big O puts the focus squarely on the algorithm's own complexity and completely ignores anything that is merely a proportionality constant—like a JavaScript engine, the speed of a CPU, your Internet connection, and all those things which quickly become as laughably outdated as a Model T. Big O focuses on performance only in the way that matters equally as much to people living in the present or in the future.

• Big O notation also shines a spotlight directly on the most important principle of computer programming/engineering, the fact which inspires all good programmers to keep thinking and dreaming: the only way to achieve results beyond the slow forward march of technology is to invent a better algorithm.

(5) Being asked to explain something mathematical without mathematics is always a personal challenge to me, as a bona fide Ph.D. mathematician and teacher who believes that such a thing is actually possible. And being a programmer as well, I hope that no one minds that I found answering this particular question, without mathematics, to be a challenge that was completely irresistible.
- Joseph Myers
16
[+26] [2013-03-23 15:19:15] Khaled.K

Algorithm example (Java):

``````public boolean search(/* for */ Integer K, /* in */ List</* of */ Integer> L) {
    for (/* each */ Integer i : /* in */ L) {
        if (i == K) {
            return true;
        }
    }
    return false;
}
``````

Algorithm description:

• This algorithm searches a list, item by item, looking for a key,

• Iterating on each item in the list, if it's the key then return True,

• If the loop has finished without finding the key, return False.

Big-O notation represents the upper bound on the Complexity (Time, Space, ..)

To find The Big-O on Time Complexity:

• Calculate how much time (regarding input size) the worst case takes:

• Worst-Case: the key doesn't exist in the list.

• Time(Worst-Case) = 4n+1

• Time: O(4n+1) = O(n) | in Big-O, constants are neglected

• O(n) ~ Linear

There's also Big-Omega, which represents the complexity of the Best-Case:

• Best-Case: the key is the first item.

• Time(Best-Case) = 4

• Time: Ω(4) = Ω(1) ~ Instant\Constant

(2) Where does your constant 4 come from? - Rod
(2) @Rod iterator init, iterator comparison, iterator read, key comparison.. I think `C` would be better - Khaled.K
17
[+21] [2014-06-25 20:32:59] user2427354

Big O notation is a way of describing how quickly an algorithm will run given an arbitrary number of input parameters, which we'll call "n". It is useful in computer science because different machines operate at different speeds, and simply saying that an algorithm takes 5 seconds doesn't tell you much, because while you may be running a system with a 4.5 GHz octo-core processor, I may be running a 15-year-old, 800 MHz system, which could take longer regardless of the algorithm. So instead of specifying how fast an algorithm runs in terms of time, we say how fast it runs in terms of number of input parameters, or "n".
By describing algorithms in this way, we are able to compare the speeds of algorithms without having to take into account the speed of the computer itself.

18
[+20] [2013-03-15 21:18:33] Alexey

Big O

f(x) = O(g(x)) when x goes to a (for example, a = +∞) means that there is a function k such that:

1. f(x) = k(x)g(x)

2. k is bounded in some neighborhood of a (if a = +∞, this means that there are numbers N and M such that for every x > N, |k(x)| < M).

In other words, in plain English: f(x) = O(g(x)), x → a, means that in a neighborhood of a, f decomposes into the product of g and some bounded function.

Small o

By the way, here is for comparison the definition of small o.

f(x) = o(g(x)) when x goes to a means that there is a function k such that:

1. f(x) = k(x)g(x)

2. k(x) goes to 0 when x goes to a.

Examples

• sin x = O(x) when x → 0.

• sin x = O(1) when x → +∞,

• x2 + x = O(x) when x → 0,

• x2 + x = O(x2) when x → +∞,

• ln(x) = o(x) = O(x) when x → +∞.

Attention! The notation with the equal sign "=" uses a "fake equality": it is true that o(g(x)) = O(g(x)), but false that O(g(x)) = o(g(x)). Similarly, it is ok to write "ln(x) = o(x) when x → +∞", but the formula "o(x) = ln(x)" would make no sense.

More examples

• O(1) = O(n) = O(n2) when n → +∞ (but not the other way around, the equality is "fake"),

• O(n) + O(n2) = O(n2) when n → +∞

• O(O(n2)) = O(n2) when n → +∞

• O(n2)O(n3) = O(n5) when n → +∞

Here is the Wikipedia article: https://en.wikipedia.org/wiki/Big_O_notation

(3) You are stating "Big O" and "Small o" without explaining what they are, introducing lots of mathematical concepts without telling why they are important, and the link to Wikipedia may in this case be too obvious for this kind of question. - Adit Saxena
@AditSaxena What do you mean "without explaining what they are"? I exactly explained what they are.
That is, "big O" and "small o" are nothing by themselves, only a formula like "f(x) = O(g(x))" has a meaning, which I explained (in plain English, but without defining, of course, all the necessary things from a Calculus course). Sometimes "O(f(x))" is viewed as the class (actually the set) of all the functions "g(x)" such that "g(x) = O(f(x))", but this is an extra step, which is not necessary for understanding the basics. - Alexey
Well, ok, there are words that are not plain English, but it is inevitable, unless I would have to include all necessary definitions from Mathematical Analysis. - Alexey
(2) Hi @Alexey, please have a look at the accepted answer: it is long but it is well constructed and well formatted. It starts with a simple definition with no mathematical background needed. While doing so he introduces three "technical" words which he explains immediately (relative, representation, complexity). This goes on step by step while digging into this field. - Adit Saxena
OP wasn't asking for a technical answer; please think of this as what you would say to the tech guy repairing your dishwasher. You don't necessarily need to know any mechanical concept: maybe you only want to understand if the repair is worth more than buying a new one. These kinds of questions are often asked of skilled professionals during job interviews to understand if they can relate to non-technical folks. I didn't actually vote you down. - Adit Saxena
Big-O -> WHY: used for understanding the speed of algorithm resolution -> HOW: you should calculate the worst case scenario -> HOW -> ... -> HOW -> ... - Adit Saxena
(2) Big O is used for understanding asymptotic behavior of algorithms for the same reason it is used for understanding asymptotic behavior of functions (asymptotic behavior is the behavior near infinity).
It is a convenient notation for comparing a complicated function (the actual time or space the algorithm takes) to simple ones (anything simple, usually a power function) near infinity, or near anything else. I only explained what it is (gave the definition). How to compute with big O is a different story, maybe I'll add some examples, since you are interested. - Alexey
That is a good start - meanwhile I've read different articles on this matter; I hope my knowledge on this topic will do for my current needs, but thanks for your time. Regards, Adit - Adit Saxena
19
[+14] [2015-12-27 10:34:52] johnwbyrd

You want to know all there is to know of big O? So do I.

So to talk of big O, I will use words that have just one beat in them. One sound per word. Small words are quick. You know these words, and so do I. We will use words with one sound. They are small. I am sure you will know all of the words we will use!

Now, let’s you and me talk of work. Most of the time, I do not like work. Do you like work? It may be the case that you do, but I am sure I do not.

I do not like to go to work. I do not like to spend time at work. If I had my way, I would like just to play, and do fun things. Do you feel the same as I do?

Now at times, I do have to go to work. It is sad, but true. So, when I am at work, I have a rule: I try to do less work. As near to no work as I can. Then I go play!

So here is the big news: the big O can help me not to do work! I can play more of the time, if I know big O. Less work, more play! That is what big O helps me do.

Now I have some work. I have this list: one, two, three, four, five, six. I must add all things in this list.

Wow, I hate work. But oh well, I have to do this. So here I go.

One plus two is three… plus three is six... and four is... I don’t know. I got lost. It is too hard for me to do in my head. I don’t much care for this kind of work.

So let's not do the work.
Let's you and me just think how hard it is. How much work would I have to do, to add six numbers?\n\nWell, let’s see. I must add one and two, and then add that to three, and then add that to four… All in all, I count six adds. I have to do six adds to solve this.\n\nHere comes big O, to tell us just how hard this math is.\n\nBig O says: we must do six adds to solve this. One add, for each thing from one to six. Six small bits of work... each bit of work is one add.\n\nWell, I will not do the work to add them now. But I know how hard it would be. It would be six adds.\n\nOh no, now I have more work. Sheesh. Who makes this kind of stuff?!\n\nNow they ask me to add from one to ten! Why would I do that? I did not want to add one to six. To add from one to ten… well… that would be even more hard!\n\nHow much more hard would it be? How much more work would I have to do? Do I need more or less steps?\n\nWell, I guess I would have to do ten adds… one for each thing from one to ten. Ten is more than six. I would have to work that much more to add from one to ten, than one to six!\n\nI do not want to add right now. I just want to think on how hard it might be to add that much. And, I hope, to play as soon as I can.\n\nTo add from one to six, that is some work. But do you see, to add from one to ten, that is more work?\n\nBig O is your friend and mine. Big O helps us think on how much work we have to do, so we can plan. And, if we are friends with big O, he can help us choose work that is not so hard!\n\nNow we must do new work. Oh, no. I don’t like this work thing at all.\n\nThe new work is: add all things from one to n.\n\nWait! What is n? Did I miss that? How can I add from one to n if you don’t tell me what n is?\n\nWell, I don’t know what n is. I was not told. Were you? No? Oh well. So we can’t do the work. Whew.\n\nBut though we will not do the work now, we can guess how hard it would be, if we knew n. We would have to add up n things, right? 
Of course!\n\nNow here comes big O, and he will tell us how hard this work is. He says: to add all things from one to N, one by one, is O(n). To add all these things, [I know I must add n times.] That is big O! He tells us how hard it is to do some type of work.\n\nTo me, I think of big O like a big, slow, boss man. He thinks on work, but he does not do it. He might say, \"That work is quick.\" Or, he might say, \"That work is so slow and hard!\" But he does not do the work. He just looks at the work, and then he tells us how much time it might take.\n\nI care lots for big O. Why? I do not like to work! No one likes to work. That is why we all love big O! He tells us how fast we can work. He helps us think of how hard work is.\n\nUh oh, more work. Now, let’s not do the work. But, let’s make a plan to do it, step by step.\n\nThey gave us a deck of ten cards. They are all mixed up: seven, four, two, six… not straight at all. And now... our job is to sort them.\n\nErgh. That sounds like a lot of work!\n\nHow can we sort this deck? I have a plan.\n\nI will look at each pair of cards, pair by pair, through the deck, from first to last. If the first card in one pair is big and the next card in that pair is small, I swap them. Else, I go to the next pair, and so on and so on... and soon, the deck is done.\n\nWhen the deck is done, I ask: did I swap cards in that pass? If so, I must do it all once more, from the top.\n\nAt some point, at some time, there will be no swaps, and our sort of the deck would be done. So much work!\n\nWell, how much work would that be, to sort the cards with those rules?\n\nI have ten cards. 
And, most of the time -- that is, if I don’t have lots of luck -- I must go through the whole deck up to ten times, with up to ten card swaps each time through the deck.\n\nBig O, help me!\n\nBig O comes in and says: for a deck of n cards, to sort it this way will be done in O(N squared) time.\n\nWhy does he say n squared?\n\nWell, you know n squared is n times n. Now, I get it: n cards checked, up to what might be n times through the deck. That is two loops, each with n steps. That is n squared much work to be done. A lot of work, for sure!\n\nNow when big O says it will take O(n squared) work, he does not mean n squared adds, on the nose. It might be some small bit less, for some case. But in the worst case, it will be near n squared steps of work to sort the deck.\n\nNow here is where big O is our friend.\n\nBig O points out this: as n gets big, when we sort cards, the job gets MUCH MUCH MORE HARD than the old just-add-these-things job. How do we know this?\n\nWell, if n gets real big, we do not care what we might add to n or n squared.\n\nFor big n, n squared is more large than n.\n\nBig O tells us that to sort things is more hard than to add things. O(n squared) is more than O(n) for big n. That means: if n gets real big, to sort a mixed deck of n things MUST take more time, than to just add n mixed things.\n\nBig O does not solve the work for us. Big O tells us how hard the work is.\n\nI have a deck of cards. I did sort them. You helped. Thanks.\n\nIs there a more fast way to sort the cards? Can big O help us?\n\nYes, there is a more fast way! It takes some time to learn, but it works... and it works quite fast. You can try it too, but take your time with each step and do not lose your place.\n\nIn this new way to sort a deck, we do not check pairs of cards the way we did a while ago. Here are your new rules to sort this deck:\n\nOne: I choose one card in the part of the deck we work on now. You can choose one for me if you like. 
(The first time we do this, “the part of the deck we work on now” is the whole deck, of course.)\n\nTwo: I splay the deck on that card you chose. What is this splay; how do I splay? Well, I go from the start card down, one by one, and I look for a card that is more high than the splay card.\n\nThree: I go from the end card up, and I look for a card that is more low than the splay card.\n\nOnce I have found these two cards, I swap them, and go on to look for more cards to swap. That is, I go back to step Two, and splay on the card you chose some more.\n\nAt some point, this loop (from Two to Three) will end. It ends when both halves of this search meet at the splay card. Then, we have just splayed the deck with the card you chose in step One. Now, all the cards near the start are more low than the splay card; and the cards near the end are more high than the splay card. Cool trick!\n\nFour (and this is the fun part): I have two small decks now, one more low than the splay card, and one more high. Now I go to step one, on each small deck! That is to say, I start from step One on the first small deck, and when that work is done, I start from step One on the next small deck.\n\nI break up the deck in parts, and sort each part, more small and more small, and at some time I have no more work to do. Now this may seem slow, with all the rules. But trust me, it is not slow at all. It is much less work than the first way to sort things!\n\nWhat is this sort called? It is called Quick Sort! That sort was made by a man called C. A. R. Hoare and he called it Quick Sort. Now, Quick Sort gets used all the time!\n\nQuick Sort breaks up big decks in small ones. That is to say, it breaks up big tasks in small ones.\n\nHmmm. There may be a rule in there, I think. To make big tasks small, break them up.\n\nThis sort is quite quick. How quick? Big O tells us: this sort needs O(n log n) work to be done, in the mean case.\n\nThe first sort was O(n squared). But Quick Sort is O(n log n). 
You know that n log n is less than n squared, for big n, right? Well, that is how we know that Quick Sort is fast!\n\nIf you have to sort a deck, what is the best way? Well, you can do what you want, but I would choose Quick Sort.\n\nWhy do I choose Quick Sort? I do not like to work, of course! I want work done as soon as I can get it done.\n\nHow do I know Quick Sort is less work? I know that O(n log n) is less than O(n squared). The O's are more small, so Quick Sort is less work!\n\nNow you know my friend, Big O. He helps us do less work. And if you know big O, you can do less work too!\n\nYou learned all that with me! You are so smart! Thank you so much!\n\nNow that work is done, let’s go play!\n\n: There is a way to cheat and add all the things from one to n, all at one time. Some kid named Gauss found this out when he was eight. I am not that smart though, so don't ask me how he did it.\n\n https://en.wikipedia.org/wiki/Tony_Hoare\n http://nzmaths.co.nz/gauss-trick-staff-seminar\n\n20\n[+13] [2012-09-29 20:54:23] Priidu Neemre\n\nNot sure I'm further contributing to the subject but still thought I'd share: I once found this blog post to have some quite helpful (though very basic) explanations & examples on Big O:\n\nVia examples, this helped get the bare basics into my tortoiseshell-like skull, so I think it's a pretty decent 10-minute read to get you headed in the right direction.\n\n http://rob-bell.net/2009/06/a-beginners-guide-to-big-o-notation/\n\n@William ...and people tend to die of old age, species go extinct, planets turn barren etc.
- Priidu Neemre\n21\n[+12] [2013-10-25 15:11:17] Kjartan\n\nAssume we're talking about an algorithm A, which should do something with a dataset of size n.\n\nThen `O( <some expression X involving n> )` means, in simple English:\n\nIf you're unlucky when executing A, it might take as many as X(n) operations to complete.\n\nAs it happens, there are certain functions (think of them as implementations of X(n)) that tend to occur quite often. These are well known and easily compared (Examples: `1`, `Log N`, `N`, `N^2`, `N!`, etc..)\n\nBy comparing these when talking about A and other algorithms, it is easy to rank the algorithms according to the number of operations they may (worst-case) require to complete.\n\nIn general, our goal will be to find or structure an algorithm A in such a way that it will have a function `X(n)` that returns as low a number as possible.\n\n22\n[+12] [2015-01-30 07:00:50] nitin kumar\n\nI have a simpler way to understand time complexity. The most common metric for calculating time complexity is Big O notation. This removes all constant factors so that the running time can be estimated in relation to N as N approaches infinity. In general you can think of it like this:\n\n``````statement;\n``````\n\nIs constant. The running time of the statement will not change in relation to N.\n\n``````for ( i = 0; i < N; i++ )\n    statement;\n``````\n\nIs linear. The running time of the loop is directly proportional to N. When N doubles, so does the running time.\n\n``````for ( i = 0; i < N; i++ )\n{\n    for ( j = 0; j < N; j++ )\n        statement;\n}\n``````\n\nIs quadratic. The running time of the two loops is proportional to the square of N. When N doubles, the running time increases fourfold.\n\n``````while ( low <= high )\n{\n    mid = ( low + high ) / 2;\n    if ( target < list[mid] )\n        high = mid - 1;\n    else if ( target > list[mid] )\n        low = mid + 1;\n    else\n        break;\n}\n``````\n\nIs logarithmic.
The running time of the algorithm is proportional to the number of times N can be divided by 2. This is because the algorithm divides the working area in half with each iteration.\n\n``````void quicksort ( int list[], int left, int right )\n{\n    int pivot = partition ( list, left, right );\n    quicksort ( list, left, pivot - 1 );\n    quicksort ( list, pivot + 1, right );\n}\n``````\n\nIs N * log ( N ). The running time consists of N loops (iterative or recursive) that are logarithmic, thus the algorithm is a combination of linear and logarithmic.\n\nIn general, doing something with every item in one dimension is linear, doing something with every item in two dimensions is quadratic, and dividing the working area in half is logarithmic. There are other Big O measures such as cubic, exponential, and square root, but they're not nearly as common. Big O notation is described as `O ( <n> )` where `<n>` is the measure. The quicksort algorithm would be described as O ( N * log ( N ) ).\n\nNote: None of this has taken into account best, average, and worst case measures. Each would have its own Big O notation. Also note that this is a VERY simplistic explanation. Big O is the most common, but it's also more complex than I've shown. There are also other notations such as big omega, little o, and big theta. You probably won't encounter them outside of an algorithm analysis course.\n\n http://proprogramming.org/2015/01/how-to-calculate-time-complexity-of.html#sthash.URrGFhqm.dpuf\n\n23\n[+11] [2015-05-16 16:02:02] user1084944\n\nIf you have a suitable notion of infinity in your head, then there is a very brief description:\n\nBig O notation tells you the cost of solving an infinitely large problem.\n\nAnd furthermore\n\nConstant factors are negligible\n\nIf you upgrade to a computer that can run your algorithm twice as fast, big O notation won't notice that. Constant factor improvements are too small to even be noticed in the scale that big O notation works with.
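The "constant factors are negligible" point can be sketched numerically (a hypothetical illustration of mine, not part of the original answer; the constants 1/2 and 100 are arbitrary):

```python
import math

def quadratic_ops(n):
    """A quadratic algorithm on a machine that is twice as fast: ~n^2 / 2 operations."""
    return n * n / 2

def nlogn_ops(n):
    """An O(n log n) algorithm with a deliberately large constant factor of 100."""
    return 100 * n * math.log2(n)

# For small inputs the "twice as fast" quadratic machine wins, but past some
# size the n log n algorithm is cheaper anyway -- the factor of 2 is invisible
# at the scale big O works with.
for n in (10, 1_000, 1_000_000):
    print(n, "quadratic wins" if quadratic_ops(n) < nlogn_ops(n) else "n log n wins")
```

For n = 10 and n = 1,000 the sped-up quadratic machine still wins; by n = 1,000,000 the asymptotically better algorithm wins regardless of hardware.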
Note that this is an intentional part of the design of big O notation.\n\nAnything "larger" than a constant factor can be detected, however.\n\nWhen interested in doing computations whose size is "large" enough to be considered as approximately infinity, then big O notation is approximately the cost of solving your problem.\n\nIf the above doesn't make sense, then you don't have a compatible intuitive notion of infinity in your head, and you should probably disregard all of the above; the only way I know to make these ideas rigorous, or to explain them if they aren't already intuitively useful, is to first teach you big O notation or something similar. (although, once you understand big O notation well, it may be worthwhile to revisit these ideas)\n\n24\n[+11] [2015-12-06 06:01:13] raaz\n\nSay you order Harry Potter: Complete 8-Film Collection [Blu-ray] from Amazon and download the same film collection online at the same time. You want to test which method is faster. The delivery takes almost a day to arrive and the download completed about 30 minutes earlier. Great! So it’s a tight race.\n\nWhat if I order several Blu-ray movies like The Lord of the Rings, Twilight, The Dark Knight Trilogy, etc. and download all the movies online at the same time? This time, the delivery still takes a day to complete, but the online download takes 3 days to finish. For online shopping, the number of purchased items (input) doesn’t affect the delivery time. The output is constant. We call this O(1).\n\nFrom the experiments, we know that online shopping scales better than online downloading. It is very important to understand big O notation because it helps you to analyze the scalability and efficiency of algorithms.\n\nNote: Big O notation represents the worst-case scenario of an algorithm.
Let’s assume that O(1) and O(n) are the worst-case scenarios of the example above.\n\nReference : http://carlcheo.com/compsci\n\n25\n[+11] [2018-04-13 12:36:18] AbstProcDo\n\nWhat is a plain English explanation of “Big O” notation?\n\nVery Quick Note:\n\nThe O in "Big O" refers to "Order" (or more precisely, "order of"),\nso you can take it quite literally: it is used to order algorithms so that they can be compared.\n\n• "Big O" does two things:\n\n1. Estimates how many steps your computer takes to accomplish a task.\n2. Makes it easy to compare an algorithm with others, in order to determine whether it's good or not.\n3. "Big O" achieves the above two with standardized `Notations`.\n• There are seven most used notations\n\n1. O(1), means your computer gets a task done with `1` step, it's excellent, Order No.1\n2. O(logN), means your computer completes a task with `logN` steps, it's good, Order No.2\n3. O(N), finishes a task with `N` steps, it's fair, Order No.3\n4. O(NlogN), ends a task with `NlogN` steps, it's not good, Order No.4\n5. O(N^2), gets a task done with `N^2` steps, it's bad, Order No.5\n6. O(2^N), gets a task done with `2^N` steps, it's horrible, Order No.6\n7. O(N!), gets a task done with `N!` steps, it's terrible, Order No.7",
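The ranking above can be sketched by simply evaluating the step counts for one input size (my own illustration; N = 20 is an arbitrary choice):

```python
import math

N = 20  # an arbitrary input size for illustration

steps = [
    ("O(1)",     1),
    ("O(logN)",  math.log2(N)),
    ("O(N)",     N),
    ("O(NlogN)", N * math.log2(N)),
    ("O(N^2)",   N ** 2),
    ("O(2^N)",   2 ** N),
    ("O(N!)",    math.factorial(N)),
]

# Each entry needs more steps than the one before it, which is exactly
# the No.1 .. No.7 ordering given in the list above.
for name, count in steps:
    print(f"{name:>9}: {count:,.0f} steps")
```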
"Suppose you get notation `O(N^2)`, not only you are clear the method takes N*N steps to accomplish a task, also you see that it's not good as `O(NlogN)` from its ranking.\n\nPlease note the order at line end, just for your better understanding.There's more than 7 notations if all possibilities considered.\n\nIn CS, the set of steps to accomplish a task is called algorithms.\nIn Terminology, Big O notation is used to describe the performance or complexity of an algorithm.\n\nIn addition, Big O establishes the worst-case or measure the Upper-Bound steps.\nYou could refer to Big-Ω (Big-Omega) for best case.\n\n• Summary\n\"Big O\" describes the algorithm's performance and evaluates it.\n\nor address it formally, \"Big O\" classifies the algorithms and standardize the comparison process.\n\n26\n\nDefinition :- Big O notation is a notation which says how a algorithm performance will perform if the data input increases.\n\nWhen we talk about algorithms there are 3 important pillars Input , Output and Processing of algorithm. Big O is symbolic notation which says if the data input is increased in what rate will the performance vary of the algorithm processing.\n\nI would encourage you to see this youtube video which explains Big O Notation in depth with code examples.",
"So for example assume that a algorithm takes 5 records and the time required for processing the same is 27 seconds. Now if we increase the records to 10 the algorithm takes 105 seconds.\n\nIn simple words the time taken is square of the number of records. We can denote this by O(n ^ 2). This symbolic representation is termed as Big O notation.\n\nNow please note the units can be anything in inputs it can be bytes , bits number of records , the performance can be measured in any unit like second , minutes , days and so on. So its not the exact unit but rather the relationship.",
"For example look at the below function \"Function1\" which takes a collection and does processing on the first record. Now for this function the performance will be same irrespective you put 1000 , 10000 or 100000 records. So we can denote it by O(1).\n\n``````void Function1(List<string> data)\n{\nstring str = data;\n}\n``````\n\nNow see the below function \"Function2()\". In this case the processing time will increase with number of records. We can denote this algorithm performance using O(n).\n\n``````void Function2(List<string> data)\n{\nforeach(string str in data)\n{\nif (str == \"shiv\")\n{\nreturn;\n}\n}\n}\n``````\n\nWhen we see a Big O notation for any algorithm we can classify them in to three categories of performance :-\n\n1. Log and constant category :- Any developer would love to see their algorithm performance in this category.\n2. Linear :- Developer will not want to see algorithms in this category , until its the last option or the only option left.\n3. Exponential :- This is where we do not want to see our algorithms and a rework is needed.\n\nSo by looking at Big O notation we categorize good and bad zones for algorithms.",
"I would recommend you to watch this 10 minutes video which discusses Big O with sample code\n\n27\n[+9] [2015-08-16 20:38:14] developer747\n\nSimplest way to look at it (in plain English)\n\nWe are trying to see how the number of input parameters, affects the running time of an algorithm. If the running time of your application is proportional to the number of input parameters, then it is said to be in Big O of n.\n\nThe above statement is a good start but not completely true.\n\nA more accurate explanation (mathematical)\n\nSuppose\n\nn=number of input parameters\n\nT(n)= The actual function that expresses the running time of the algorithm as a function of n\n\nc= a constant\n\nf(n)= An approximate function that expresses the running time of the algorithm as a function of n\n\nThen as far as Big O is concerned, the approximation f(n) is considered good enough as long as the below condition is true.\n\n``````lim T(n) ≤ c×f(n)\nn→∞\n``````\n\nThe equation is read as As n approaches infinity, T of n, is less than or equal to c times f of n.\n\nIn big O notation this is written as\n\n``````T(n)∈O(n)\n``````\n\nThis is read as T of n is in big O of n.\n\nBack to English\n\nBased on the mathematical definition above, if you say your algorithm is a Big O of n, it means it is a function of n (number of input parameters) or faster. If your algorithm is Big O of n, then it is also automatically the Big O of n square.\n\nBig O of n means my algorithm runs at least as fast as this. You cannot look at Big O notation of your algorithm and say its slow. You can only say its fast.\n\nCheck this out for a video tutorial on Big O from UC Berkley. It is actually a simple concept. 
If you hear professor Shewchuk (aka God level teacher) explaining it, you will say "Oh that's all it is!".\n\nLook for CS 61B Lecture 19: Asymptotic Analysis - developer747\n28\n[+8] [2017-01-29 15:39:38] shanwije\n\nI found a really great explanation about big O notation, especially for someone who's not much into mathematics.\n\nhttps://rob-bell.net/2009/06/a-beginners-guide-to-big-o-notation/\n\nBig O notation is used in Computer Science to describe the performance or complexity of an algorithm. Big O specifically describes the worst-case scenario, and can be used to describe the execution time required or the space used (e.g. in memory or on disk) by an algorithm.\n\nAnyone who's read Programming Pearls or any other Computer Science books and doesn’t have a grounding in Mathematics will have hit a wall when they reached chapters that mention O(N log N) or other seemingly crazy syntax. Hopefully this article will help you gain an understanding of the basics of Big O and Logarithms.\n\nAs a programmer first and a mathematician second (or maybe third or fourth) I found the best way to understand Big O thoroughly was to produce some examples in code. So, below are some common orders of growth along with descriptions and examples where possible.\n\n# O(1)\n\nO(1) describes an algorithm that will always execute in the same time (or space) regardless of the size of the input data set.\n\n``````bool IsFirstElementNull(IList<string> elements) {\n    return elements[0] == null;\n}\n``````\n\n# O(N)\n\nO(N) describes an algorithm whose performance will grow linearly and in direct proportion to the size of the input data set.
The example below also demonstrates how Big O favours the worst-case performance scenario; a matching string could be found during any iteration of the for loop and the function would return early, but Big O notation will always assume the upper limit where the algorithm will perform the maximum number of iterations.\n\n``````bool ContainsValue(IList<string> elements, string value) {\n    foreach (var element in elements)\n    {\n        if (element == value) return true;\n    }\n\n    return false;\n}\n``````\n\n# O(N^2)\n\nO(N^2) represents an algorithm whose performance is directly proportional to the square of the size of the input data set. This is common with algorithms that involve nested iterations over the data set. Deeper nested iterations will result in O(N^3), O(N^4) etc.\n\n``````bool ContainsDuplicates(IList<string> elements) {\n    for (var outer = 0; outer < elements.Count; outer++)\n    {\n        for (var inner = 0; inner < elements.Count; inner++)\n        {\n            // Don't compare with self\n            if (outer == inner) continue;\n\n            if (elements[outer] == elements[inner]) return true;\n        }\n    }\n\n    return false;\n}\n``````\n\n# O(2^N)\n\nO(2^N) denotes an algorithm whose growth doubles with each addition to the input data set. The growth curve of an O(2^N) function is exponential - starting off very shallow, then rising meteorically. An example of an O(2^N) function is the recursive calculation of Fibonacci numbers:\n\n``````int Fibonacci(int number) {\n    if (number <= 1) return number;\n\n    return Fibonacci(number - 2) + Fibonacci(number - 1);\n}\n``````\n\n# Logarithms\n\nLogarithms are slightly trickier to explain so I'll use a common example:\n\nBinary search is a technique used to search sorted data sets. It works by selecting the middle element of the data set, essentially the median, and compares it against a target value. If the values match it will return success. If the target value is higher than the value of the probe element it will take the upper half of the data set and perform the same operation against it.
Likewise, if the target value is lower than the value of the probe element it will perform the operation against the lower half. It will continue to halve the data set with each iteration until the value has been found or until it can no longer split the data set.\n\nThis type of algorithm is described as O(log N). The iterative halving of data sets described in the binary search example produces a growth curve that peaks at the beginning and slowly flattens out as the size of the data sets increase e.g. an input data set containing 10 items takes one second to complete, a data set containing 100 items takes two seconds, and a data set containing 1000 items will take three seconds. Doubling the size of the input data set has little effect on its growth as after a single iteration of the algorithm the data set will be halved and therefore on a par with an input data set half the size. This makes algorithms like binary search extremely efficient when dealing with large data sets.\n\n29\n[+7] [2015-10-11 18:00:20] nkt\n\nThis is a very simplified explanation, but I hope it covers the most important details.\n\nLet's say your algorithm dealing with the problem depends on some 'factors', for example let's make it N and X.\n\nDepending on N and X, your algorithm will require some operations, for example in the WORST case it's `3(N^2) + log(X)` operations.\n\nSince Big-O doesn't care too much about the constant factor (aka 3), the Big-O of your algorithm is `O(N^2 + log(X))`. It basically translates to 'the amount of operations your algorithm needs in the worst case scales with this'.\n\n30\n[+7] [2017-10-10 03:15:25] Ryan Efendy\n\n### Preface\n\nalgorithm: procedure/formula for solving a problem\n\nHow do we analyze algorithms and how can we compare algorithms against each other?\n\nexample: you and a friend are asked to create a function to sum the numbers from 0 to N. You come up with f(x) and your friend comes up with g(x).
Both functions have the same result, but a different algorithm. In order to objectively compare the efficiency of the algorithms we use Big-O notation.\n\nBig-O notation: describes how quickly runtime will grow relative to the input as the input gets arbitrarily large.\n\n3 key takeaways:\n\n1. Compare how quickly runtime grows, NOT exact runtimes (those depend on hardware)\n2. Only concerned with runtime growth relative to the input (n)\n3. As n gets arbitrarily large, focus on the terms that will grow the fastest as n gets large (think infinity) AKA asymptotic analysis\n\nSpace complexity: aside from time complexity, we also care about space complexity (how much memory/space an algorithm uses). Instead of checking the time of operations, we check the size of the allocation of memory.\n\n31\n[+5] [2016-10-24 10:39:08] user3170122\n\nBig O is a means to represent the upper bounds of any function. We generally use it for expressing the upper bounds of a function that tells the running time of an Algorithm.\n\nEx : Let f(n) = 2(n^2) + 3n be a function representing the running time of a hypothetical algorithm. Big-O notation essentially gives the upper limit for this function, which is O(n^2).\n\nThis notation basically tells us that, for any input 'n', the running time won't grow faster than the rate expressed by the Big-O notation.\n\nAlso, agree with all the above detailed answers. Hope this helps !!\n\n32\n[+5] [2016-10-31 23:57:42] xuma202\n\nBig O is describing a class of functions.\n\nIt describes how fast functions grow for big input values.\n\nFor a given function f, O(f) describes all functions g(n) for which you can find an n0 and a constant c so that all values of g(n) with n >= n0 are less than or equal to c*f(n)\n\nIn less mathematical words O(f) is a set of functions.
Namely, all functions that, from some value n0 onwards, grow slower than or as fast as f.\n\nIf f(n) = n then\n\ng(n) = 3n is in O(f), because constant factors do not matter. h(n) = n+1000 is in O(f) because it might be bigger for all values smaller than 1000, but for big O only huge inputs matter.\n\nHowever i(n) = n^2 is not in O(f) because a quadratic function grows faster than a linear one.\n\n33\n[+5] [2018-12-09 07:57:24] user2297550\n\nIt represents the speed of an algorithm in the long run.\n\nTo take a literal analogy, you don't care how fast a runner can sprint a 100m dash, or even a 5k run. You care more about marathoners, and preferably ultra marathoners (beyond which the analogy to running breaks down and you have to revert to the metaphorical meaning of "the long run").\n\nYou can safely stop reading here.\n\nI'm adding this answer because I'm surprised how mathematical and technical the rest of the answers are. The notion of the "long run" in the first sentence is related to arbitrarily time-consuming computational tasks. Unlike running, which is limited by human capacity, computational tasks can take even more than millions of years for certain algorithms to complete.\n\nWhat about all those mathematical logarithms and polynomials? It turns out that algorithms are intrinsically related to these mathematical terms. If you are measuring the heights of all the kids on the block, it will take you as much time as there are kids. This is intrinsically related to the notion of n^1 or just n, where n is nothing more than the number of kids on the block. In the ultra-marathon case, you are measuring the heights of all the kids in your city, but you then have to ignore travel times and assume they are all available to you in a line (otherwise we jump ahead of the current explanation).\n\nSuppose then you are trying to arrange the list that you made of the kids' heights in order of shortest to tallest.
If it is just the kids in your neighborhood you might just eyeball it and come up with the ordered list. This is the "sprint" analogy, and we truly don't care about sprints in computer science because why use a computer when you can eyeball something?\n\nBut if you were arranging the list of the heights of all kids in your city, or better yet, your country, then you will find that how you do it is intrinsically tied to the mathematical log and n^2. Going through your list to find the shortest kid, writing his name in a separate notebook, and crossing it out from the original notebook is intrinsically tied to the mathematical n^2. If you think of arranging half your notebook, then the other half, and then combining the results, you will arrive at a method that is intrinsically tied to the logarithm.\n\nFinally, suppose you first had to go to the store to buy a measuring tape. This is an example of an effort that is of consequence in short sprints, such as measuring the kids on the block, but when you are measuring all the kids in the city you can safely ignore this cost. This is the intrinsic connection to the mathematical dropping of, say, lower-order polynomial terms.\n\nI hope I have explained that the big-O notation is merely about the long run, that the mathematics is inherently connected to ways of computation, and that the dropping of mathematical terms and other simplifications are connected to the long run in a rather common sense way.\n\nOnce you realize this, you'll find that big-O is really super-easy because all the hard high school math just drops out easily. The only difficult part is analyzing an algorithm to identify the mathematical terms, but with some practice you can start dropping terms during the analysis itself and safely ignore chunks of the algorithm to focus only on the part that is relevant to the big-O. I.e.
you should be able to eyeball most situations.\n\nHappy big-O-ing, it was my favorite thing about Computer Science -- finding that something was way easier than I thought, and then being able to show off at Google interviews when the uninitiated would be intimidated, lol.\n\n34\n[+4] [2016-07-04 20:46:32] A. Mashreghi\n\nBig O in plain English is like <= (less than or equal). When we say for two functions f and g, f = O(g) it means that f <= g.\n\nHowever, this does not mean that for every n, f(n) <= g(n). Actually what it means is that f is less than or equal to g in terms of growth. It means that after a point f(n) <= c*g(n), where c is a constant. And "after a point" means for all n >= n0, where n0 is another constant.\n\nMan, if this is plain English I wonder what advanced English would look like - Ojonugwa Jude Ochalifu\n35\n[+4] [2018-02-15 03:52:05] snagpaul\n\nBig O - Economic Point of View.\n\nMy favourite English phrase to describe this concept is the price you pay for a task as it grows larger.\n\nThink of it as recurring costs instead of fixed costs that you would pay at the beginning. The fixed costs become negligible in the big picture because costs only grow and they add up. We want to measure how fast they would grow and how soon they would add up with respect to the raw material we give to the set up - the size of the problem.\n\nHowever, if initial set up costs are high and you only produce a small amount of the product, you would want to look at these initial costs - they are also called the constants.\n\nSince these constants don't matter in the long run, this language allows us to discuss tasks beyond what kind of infrastructure we are running it on. So, the factories can be anywhere and the workers can be whoever - it's all gravy.
But the size of the factory and the number of workers would be the things we could vary in the long run as your inputs and outputs grow.\n\nHence, this becomes a big-picture approximation of how much you would have to spend to run something. Since time and space are the economic quantities (i.e. they are limited) here, they can both be expressed using this language.\n\nTechnical notes: Some examples of time complexity - O(n) generally means that if a problem is of size 'n', I at least have to see everything. O(log n) generally means that I halve the size of the problem, check, and repeat until the task is done. O(n^2) means I need to look at pairs of things (like handshakes at a party between n people).\n\n36\n[+4] [2018-10-03 13:31:24] The Scientific Method\n\nWhat is a plain English explanation of “Big O” notation?\n\nI would like to stress that the driving motive for “Big O” notation is one thing: when the input size of an algorithm gets too big, some parts (i.e. constants, coefficients, terms) of the equation describing the measure of the algorithm become so insignificant that we ignore them.
The part of the equation that survives after ignoring some of its parts is termed the “Big O” notation of the algorithm.\n\nSo if the input size is NOT too big, the idea of “Big O” notation (upper bound) will be unimportant.\n\nLet's say you want to quantify the performance of the following algorithm\n``````int sumArray (int[] nums){\n    int sum=0; // 1 operation\n    for(int i=0; i < nums.length; i++){ // runs n times\n        sum += nums[i]; // counting the increment and assignment, ~3 ops per iteration\n    }\n    return sum;\n}\n``````\n\nIn the above algorithm, let's say you find out `T(n)` as follows (time complexity):\n\n``````T(n) = 3*n + 2\n``````\n\nTo find its “Big O” notation, we need to consider a very big input size:\n\n``````n= 1,000,000 -> T(1,000,000) = 3,000,002\nn=1,000,000,000 -> T(1,000,000,000) = 3,000,000,002\nn=10,000,000,000 -> T(10,000,000,000) = 30,000,000,002\n``````\n\nLet's give similar inputs to another function `F(n) = n`\n\n``````n= 1,000,000 -> F(1,000,000) = 1,000,000\nn=1,000,000,000 -> F(1,000,000,000) = 1,000,000,000\nn=10,000,000,000 -> F(10,000,000,000) = 10,000,000,000\n``````\n\nAs you can see, as the input size gets very big, `T(n)` grows at the same rate as `F(n)`: the constant `2` becomes insignificant and the coefficient `3` is just a constant factor. This is where the idea of “Big O” notation comes in,\n\n``````O(T(n)) = F(n)\nO(T(n)) = n\n``````\n\nWe say the big O of `T(n)` is `n`, and the notation is `O(T(n)) = n`; it is the upper bound of `T(n)` as `n` gets very big. The same steps apply for other algorithms.\n\n37\n[+3] [2015-06-13 15:29:11] user3745123\n\nIf I want to explain this to a 6-year-old child, I will start by drawing some functions, f(x) = x and f(x) = x^2 for example, and ask the child which function will be the upper function on the top of the page. Then we will proceed with drawing and see that x^2 wins. "Who wins" actually is the function which grows faster when x tends to infinity.
So \"function x is in Big O of x^2\" means that x grows slower than x^2 when x tends to infinity. The same can be done when x tends to 0. If we draw these two function for x from 0 to 1 x will be an upper function, so \"function x^2 is in Big O of x for x tends to 0\". When the child will get older I add that really Big O can be a function which grows not faster but the same way as given function. Moreover constant is discarded. So 2x is in Big O of x.\n\n38\n[+3] [2020-05-13 07:33:10] Rishabh Sharma\n\nThere are some great answers already posted, but I would like to contribute in a different way. If you want to visualize what all is happening you can assume that a compiler can perform close to 10^8 operations in ~1sec. If the input is given in 10^8, you might want to design an algorithm that operates in a linear fashion(like an un-nested for-loop). below is the table that can help you to quickly figure out the type of algorithm you want to figure out ;)",
"(image: a table mapping input sizes to the algorithm complexities that finish in time)",
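The heuristic behind such a table can be sketched as follows (all numbers below are my own rough assumptions, notably the ~10^8 operations-per-second figure mentioned above):

```python
import math

OPS_PER_SECOND = 10**8  # rough throughput assumed above

def seconds_needed(steps):
    return steps / OPS_PER_SECOND

# For each input size, list the growth rates that finish within ~1 second.
for n in (10, 10**3, 10**5, 10**8):
    feasible = [
        name
        for name, steps in [
            ("O(n)", n),
            ("O(n log n)", n * math.log2(n)),
            ("O(n^2)", n * n),
        ]
        if seconds_needed(steps) <= 1.0
    ]
    print(f"n = {n:>9}: {feasible}")
```

At n = 10^8 only the linear algorithm stays within the budget, which matches the advice above to aim for an un-nested loop at that input size.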
"39\n[+3] [2020-06-17 16:13:27] Max Tromp\n\nWhen we have a function like `f(n) = n+3` and we want to know how the graph looks likes when `n` approaches infinity, we just drop all the constants and lower order terms because they don't matter when `n` gets big. Which leaves us with `f(n) = n`, so why can't we just use this, why do we need to look for some function which is above and below our `f(n) = n+3` function, so big O and big Omega.\n\nBecause it would be incorrect to say that the function is just `f(n) = n` when `n` approaches infinity, so to be correct we describe the area where the `f(n) = n+3` could be. We are not interested where the graph is exactly, because lower order terms and constant don't change the growth of the graph significantly, so in other words the area which is enclosed from upper and lower bound is a vague version of our f(n) = n+3 function.\n\nThe mere dropping of the constant and lower order term is exactly the process of finding the function which is below and above.\n\nBy definition is a function a lower or upper bound of another function if you can find a constant with whom you can multiply the `f(n) = n` function so that for every `n` the output is bigger (or smaller for lower bound) than for the original function:\n\n``````f(n) = n*C > f(n) = n+3\n``````\n\nAnd yes `C = 2` would do it, therefore our function `f(n) = n` can be an upper bound of our `f(x) = x+3` function.\n\nSame for lower bound:\n\n``````f(n) = n*C < f(n) = n+3\n``````\n\n`C = -2` would do it\n\nSo `f(x) = n` is the upper and lower bound of `f(x) = x+3`, when its both big O and Omega than its Theta, which means its tightly bound.\n\nSo big O could also be `f(x) = x^2` because it fulfills the condition `f(n) = n^2*C > f(n) = n+3`. 
It's above our `f(n) = n+3` graph, but the area between this upper bound and the lower bound is not as tight as with our earlier bounds.\n\n40\n[+2] [2018-04-15 01:20:20] sed\n\nTLDR: Big O explains the performance of an algorithm in mathematical terms.\n\nSlower algorithms tend to run at n raised to some power, depending on the depth of nesting, whereas faster ones like binary search run at O(log n), which scales much better as the data set gets larger. Big O can also be expressed with other terms in n, or without n at all (i.e. O(1)).\n\nOne can estimate Big O by looking at the most complex lines of the algorithm.\n\nWith small or unsorted datasets Big O can be surprising: binary search is O(log n) only once the data is sorted (an O(n log n) cost), so it can lose to a plain linear search on small or unsorted sets. For a simple running example of linear search versus binary search, take a look at my JavaScript example:\n\nhttps://codepen.io/serdarsenay/pen/XELWqN?editors=1011 (algorithms written below)\n\n``````function lineerSearch() {\ninit();\nvar t = timer('lineerSearch benchmark');\nvar input = this.event.target.value;\nfor(var i = 0;i < unsortedhaystack.length;i++) {\nif (unsortedhaystack[i] === input) {\ndocument.getElementById('result').innerHTML = 'result is... \"' + unsortedhaystack[i] + '\", on index: ' + i + ' of the unsorted array. 
Found' + ' within ' + i + ' iterations';\nconsole.log(document.getElementById('result').innerHTML);\nt.stop();\nreturn unsortedhaystack[i];\n}\n}\n}\n\nfunction binarySearch () {\ninit();\nsortHaystack();\nvar t = timer('binarySearch benchmark');\nvar firstIndex = 0;\nvar lastIndex = haystack.length-1;\nvar input = this.event.target.value;\n\n//current index starts at the middle of the array\nvar currentIndex = (haystack.length-1)/2 | 0;\nvar iterations = 0;\n\nwhile (firstIndex <= lastIndex) {\ncurrentIndex = (firstIndex + lastIndex)/2 | 0;\niterations++;\nif (haystack[currentIndex] < input) {\nfirstIndex = currentIndex + 1;\n//console.log(currentIndex + \" added, fI:\"+firstIndex+\", lI: \"+lastIndex);\n} else if (haystack[currentIndex] > input) {\nlastIndex = currentIndex - 1;\n//console.log(currentIndex + \" subtracted, fI:\"+firstIndex+\", lI: \"+lastIndex);\n} else {\ndocument.getElementById('result').innerHTML = 'result is... \"' + haystack[currentIndex] + '\", on index: ' + currentIndex + ' of the sorted array. Found' + ' within ' + iterations + ' iterations';\nconsole.log(document.getElementById('result').innerHTML);\nt.stop();\nreturn true;\n}\n}\n}\n``````\n\n41\n[+2] [2021-02-08 22:31:28] dreamcrash\n\nFrom ( source ) one can read:\n\nBig O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. (..) 
In computer science, big O notation is used to classify algorithms according to how their run time or space requirements grow as the input size grows.\n\n`Big O` notation does not represent a function per se but rather a set of functions with a certain asymptotic upper-bound; as one can read from source :\n\nBig O notation characterizes functions according to their growth rates: different functions with the same growth rate may be represented using the same `O` notation.\n\nInformally, in computer-science time-complexity and space-complexity theories, one can think of the `Big O` notation as a categorization of algorithms with a certain worst-case scenario concerning time and space, respectively. For instance, `O(n)`:\n\nAn algorithm is said to take linear time/space, or O(n) time/space, if its time/space complexity is O(n). Informally, this means that the running time/space increases at most linearly with the size of the input ( source ).\n\nand `O(n log n)` as:\n\nAn algorithm is said to run in quasilinear time/space if T(n) = O(n log^k n) for some positive constant k; linearithmic time/space is the case k = 1 ( source ).\n\nNonetheless, such relaxed phrasing is typically used to quantify (for the worst-case scenario) how one set of algorithms behaves compared with another as their input sizes increase. To compare two classes of algorithms (e.g., `O(n log n)` and `O(n)`) one should analyze how both classes behave as their input size (i.e., n) increases, for the worst-case scenario; analyzing `n` as it tends to infinity",
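The class-vs-class comparison described above can be made concrete with a tiny cost model: the ratio between n·log2(n) and n is just log2(n), which grows without bound, so the O(n log n) curve must eventually pull away from the O(n) one. A minimal sketch (integer logarithm for simplicity — an illustrative model, not a benchmark):

```cpp
#include <cassert>
#include <cstdint>

// Floor of log base 2, by repeated halving.
std::uint64_t log2floor(std::uint64_t n) {
    std::uint64_t lg = 0;
    while (n > 1) { n >>= 1; ++lg; }
    return lg;
}

// Cost model for an O(n log n) algorithm. Dividing by n leaves log2(n),
// so the gap to a linear-cost algorithm widens as n increases.
std::uint64_t nlogn(std::uint64_t n) { return n * log2floor(n); }
```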
null,
"In the image above `big-O` denotes one of the asymptotically least upper-bounds of the plotted functions, and does not refer to the sets `O(f(n))`.\n\nFor instance, comparing `O(n log n)` vs. `O(n)`: as one can see in the image, after a certain input size `O(n log n)` (green line) grows faster than `O(n)` (yellow line). That is why (for the worst case) `O(n)` is more desirable than `O(n log n)`: as the input size increases, the growth rate increases more slowly with the former than with the latter.\n\n https://en.wikipedia.org/wiki/Big_O_notation\n https://stackoverflow.com/questions/10376740/what-exactly-does-big-%D3%A8-notation-represent\n https://en.wikipedia.org/wiki/Big_O_notation\n https://en.wikipedia.org/wiki/Time_complexity\n https://en.wikipedia.org/wiki/Space_complexity\n https://en.wikipedia.org/wiki/Time_complexity#Linear_time\n https://en.wikipedia.org/wiki/Time_complexity#Linearithmic_time\n\n42\n [2022-01-20 16:37:55] Franz Kurt\n\nBig O notation exists to express the complexity of an algorithm in a fast and simple way — the best, worst, and average-case time complexities for any given algorithm. These are just numerical functions over the size of possible problem instances.\n\nOtherwise it is very difficult to work precisely with these functions, because they tend to:\n\n• Have too many bumps – An algorithm such as binary search typically runs a bit faster for arrays of size exactly n = 2^k − 1 (where k is an integer), because the array partitions work out nicely. This detail is not particularly significant, but it warns us that the exact time complexity function for any algorithm is liable to be very complicated, with little up and down bumps as shown in Figure 2.2.\n• Require too much detail to specify precisely – Counting the exact number of RAM instructions executed in the worst case requires the algorithm be specified to the detail of a complete computer program. 
Further, the precise answer depends upon uninteresting coding details (e.g., did he use a case statement or nested ifs?). Performing a precise worst-case analysis like T(n) = 12754n^2 + 4353n + 834 lg2 n + 13546 would clearly be very difficult work, but provides us little more information than the observation that “the time grows quadratically with n.”\n\nIt proves to be much easier to talk in terms of simple upper and lower bounds of time-complexity functions using the Big Oh notation. The Big Oh simplifies our analysis by ignoring levels of detail that do not impact our comparison of algorithms. The Big Oh notation ignores the difference between multiplicative constants. The functions f(n)=2n and g(n) = n are identical in Big Oh analysis.\n\n https://mimoza.marmara.edu.tr/%7Emsakalli/cse706_12/SkienaTheAlgorithmDesignManual.pdf\n\n43"
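Skiena's "bumps" remark can be observed directly: count the probes a textbook binary search makes in a worst case (searching for a value larger than every element). For n = 2^k − 1 the partitions split evenly and the count is exactly k, while n = 2^k needs one more. The counting harness below is an illustrative sketch, not code from the book:

```cpp
#include <cassert>
#include <vector>

// Count loop iterations of a textbook binary search over a sorted array
// of size n when the probe misses (a worst case for the search).
int worstCaseProbes(int n) {
    std::vector<int> a(n);
    for (int i = 0; i < n; ++i) a[i] = i;  // sorted 0..n-1
    int target = n;                        // larger than every element
    int lo = 0, hi = n - 1, probes = 0;
    while (lo <= hi) {
        ++probes;
        int mid = lo + (hi - lo) / 2;
        if (a[mid] < target) lo = mid + 1;
        else if (a[mid] > target) hi = mid - 1;
        else break;
    }
    return probes;
}
```

Sizes 7 and 15 (both of the form 2^k − 1) need 3 and 4 probes, while 8 and 16 need 4 and 5 — the small "bump" the book describes.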
]
| [
null,
"https://cdn.sstatic.net/Sites/stackoverflow/Img/apple-touch-icon.png",
null,
"http://www.stackprinter.com/images/blackflag.png",
null,
"https://i.stack.imgur.com/WcBRI.png",
null,
"https://i.stack.imgur.com/6zHEt.png",
null,
"https://i.stack.imgur.com/nuFCp.png",
null,
"https://i.stack.imgur.com/ZnRL6.png",
null,
"https://i.stack.imgur.com/N2k4s.png",
null,
"https://i.stack.imgur.com/LTXe5.png",
null,
"https://i.stack.imgur.com/2pnDQ.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9286146,"math_prob":0.9623279,"size":98723,"snap":"2022-27-2022-33","text_gpt3_token_len":24263,"char_repetition_ratio":0.15227059,"word_repetition_ratio":0.05174903,"special_character_ratio":0.25424674,"punctuation_ratio":0.11583103,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9918082,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,4,null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-08T00:30:56Z\",\"WARC-Record-ID\":\"<urn:uuid:39e74af4-32f4-471a-b402-f537591dc883>\",\"Content-Length\":\"189079\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ace6f949-358a-4c70-8cc3-187c323cf6cf>\",\"WARC-Concurrent-To\":\"<urn:uuid:57df313c-46ae-476d-95d4-604447d0b967>\",\"WARC-IP-Address\":\"172.253.122.121\",\"WARC-Target-URI\":\"http://www.stackprinter.com/export?question=487258&format=HTML&service=stackoverflow&printer=false\",\"WARC-Payload-Digest\":\"sha1:XFME5EAMMPRBRABJLZT4NNZHGX56JMBU\",\"WARC-Block-Digest\":\"sha1:DJSTAL4X2TE6CMGMR5PAZN6DIHIM2BMB\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882570741.21_warc_CC-MAIN-20220808001418-20220808031418-00593.warc.gz\"}"} |
https://centralvirginia.edu/Class-Schedule/Courses/Math-Essentials | [
"MTE 1 Operations with Positive Fractions (1 cr.) Includes operations and problem solving with proper fractions, improper fractions, and mixed numbers without the use of a calculator. Emphasizes applications and includes U. S. customary units of measure. Credits not applicable toward graduation. Prerequisite: Qualifying placement score. Lecture 1 hour per week.\n\nMTE 2 Operations with Positive Decimals and Percents (1 cr.) Includes operations and problem solving with positive decimals and percents. Emphasizes applications and includes U. S. customary and metric units of measure. Use of calculators may be limited. Credit is not applicable toward graduation. Prerequisite(s): MTE 1 or qualifying placement score. Lecture 1 hour per week.\n\nMTE 3 Algebra Basics (1 cr.) Includes basic operations with algebraic expressions and solving simple algebraic equations using signed numbers with emphasis on applications. Use of calculators may be limited. Credit is not applicable toward graduation. Prerequisite(s): MTE 2 or qualifying placement score. Lecture 1 hour per week.\n\nMTE 4 First Degree Equations and Inequalities in One Variable (1 cr.) Includes solving first degree equations and inequalities containing one variable, and using them to solve application problems. Emphasizes applications and problem solving. Use of calculators may be limited. Credit is not applicable toward graduation. Prerequisite(s): MTE 3 or qualifying placement score. Lecture 1 hour per week.\n\nMTE 5 Linear Equations, Inequalities and Systems of Linear Equations in Two Variables (1 cr.) Includes finding the equation of a line, graphing linear equations and inequalities in two variables and solving systems of two linear equations. Emphasizes writing and graphing equations using the slope of the line and points on the line, and applications. Use of calculators may be limited. Credit is not applicable toward graduation. Prerequisite(s): MTE 4 or qualifying placement score. 
Lecture 1 hour per week.\n\nMTE 6 Exponents, Factoring and Polynomial Equations (1 cr.) The student will learn to perform operations on exponential expressions and polynomials. Students will also learn techniques to factor polynomials and use these techniques to solve polynomial equations. Emphasis should be on learning all the different factoring methods, and solving application problems using polynomial equations. Credit is not applicable toward graduation. Prerequisite(s): MTE 5 or qualifying placement score. Lecture 1 hour per week.\n\nMTE 7 Rational Expressions and Equations (1 cr.) Includes simplifying rational algebraic expressions, solving rational algebraic equations and solving applications that use rational algebraic equations. Credit is not applicable toward graduation. Prerequisite(s): MTE 6 or qualifying placement score. Lecture 1 hour per week.\n\nMTE 8 Rational Exponents and Radicals (1 cr.) Includes simplifying radical expressions, using rational exponents, solving radical equations and solving applications using radical equations. Credit is not applicable toward graduation. Prerequisite(s): MTE 7 or qualifying placement score. Lecture 1 hour per week.\n\nMTE 9 Functions, Quadratic Equations and Parabolas (1 cr.) Includes an introduction to functions in ordered pair, graph, and equation form. Also introduces quadratic functions, their properties and their graphs. Credit is not applicable toward graduation. Prerequisite(s): MTE 8 or qualifying placement score. Lecture 1 hour per week."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.90889645,"math_prob":0.9942167,"size":3490,"snap":"2019-51-2020-05","text_gpt3_token_len":704,"char_repetition_ratio":0.15662651,"word_repetition_ratio":0.25403225,"special_character_ratio":0.18710601,"punctuation_ratio":0.14165261,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9560859,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-14T16:25:20Z\",\"WARC-Record-ID\":\"<urn:uuid:dbc465d6-453e-4930-99c0-13ce5f8e85ae>\",\"Content-Length\":\"28064\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d133bd67-4a07-4509-922c-e6ab9bfda3bc>\",\"WARC-Concurrent-To\":\"<urn:uuid:ec5996d1-bf3c-4a91-a26f-28a9c21e47d3>\",\"WARC-IP-Address\":\"164.106.31.230\",\"WARC-Target-URI\":\"https://centralvirginia.edu/Class-Schedule/Courses/Math-Essentials\",\"WARC-Payload-Digest\":\"sha1:GUR7Q5PRJ4C7LOTM2HH46YYTCLK67GYQ\",\"WARC-Block-Digest\":\"sha1:EOMZB4INDHCK6G3ZCDI4OMZUZI5RGDLQ\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575541281438.51_warc_CC-MAIN-20191214150439-20191214174439-00525.warc.gz\"}"} |
https://crypto.stackexchange.com/questions/37880/acceptable-condition-for-dictionary-search | [
"# Acceptable condition for dictionary search\n\nI am trying to write a program to break a transposition cipher using a dictionary search, but I cannot work out the finishing condition of the search. Is there any theory that the maximum number of valid words can only be produced by decrypting the ciphertext with the correct key? If not, what would be a good finishing condition?\n\n• Could you explain a bit more what you understand by dictionary search ? Like are you using your dictionary to find the key, or are you trying to find the key by trying to have words from your dictionary appearing in your decrypted cipher text ? – Biv Jul 21 '16 at 9:59\n• You mean how to know the decryption is correct? Are you assuming a ciphertext-only attack, so no known plaintext? – otus Jul 21 '16 at 10:23\n• I am trying to find the key by trying to have words from the dictionary appearing in the decrypted cipher text. – Mostafizur Rahman Jul 27 '16 at 0:23\n• Yes, I am trying a ciphertext-only attack. – Mostafizur Rahman Jul 27 '16 at 0:25\n\nSo if $K$ is the random variable describing the key with a known distribution, and the message has $N$ symbols, say $(M_1,\ldots,M_N)$ with the message distribution known, let the encryption map be $$(C_1,\ldots,C_N)=E(M_1,\ldots,M_N;K).$$\nThen the unicity distance is defined as $$D:=\min\{ N: H(K|(C_1,\ldots,C_N))=0\},$$ where $H(\cdot|\cdot)$ is the conditional Shannon entropy."
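The unicity distance in the answer yields a practical stopping rule: estimate it as the key entropy H(K) divided by the per-character redundancy of the plaintext language, and stop once the analysed ciphertext is comfortably longer than that estimate and one key clearly maximises the dictionary-word count. The sketch below assumes a columnar transposition with key length L (keyspace L!) and ≈3.2 bits of redundancy per English letter — both figures are assumptions, not properties of the question's cipher:

```cpp
#include <cassert>
#include <cmath>

// Estimated unicity distance (in characters) for a transposition cipher
// with key length keyLen: H(K) = log2(keyLen!) bits of key entropy,
// divided by the assumed redundancy of English plaintext.
double unicityDistance(int keyLen, double redundancyBitsPerChar = 3.2) {
    double keyEntropyBits = 0.0;
    for (int i = 2; i <= keyLen; ++i)
        keyEntropyBits += std::log2(static_cast<double>(i)); // log2(keyLen!)
    return keyEntropyBits / redundancyBitsPerChar;
}
```

For a key of length 10 this gives roughly 7 characters, so on ciphertexts much longer than that, the key producing the most valid dictionary words is overwhelmingly likely to be unique.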
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.87492794,"math_prob":0.9331761,"size":1811,"snap":"2019-51-2020-05","text_gpt3_token_len":453,"char_repetition_ratio":0.12008855,"word_repetition_ratio":0.074324325,"special_character_ratio":0.2573164,"punctuation_ratio":0.11797753,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9888065,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-27T22:23:25Z\",\"WARC-Record-ID\":\"<urn:uuid:106d33ba-3706-4775-a4ef-dbc28a404ce3>\",\"Content-Length\":\"134358\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ffb1ff0e-74dd-4de1-8a99-d376a4a8aa14>\",\"WARC-Concurrent-To\":\"<urn:uuid:ab8d4926-b15d-4ef8-8f19-67633fa57963>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://crypto.stackexchange.com/questions/37880/acceptable-condition-for-dictionary-search\",\"WARC-Payload-Digest\":\"sha1:IQOHUJFNNTQJBRRWZNSJ7R5H5P5PUQC4\",\"WARC-Block-Digest\":\"sha1:6UFOQLINZ6VLZKA4Z6W3QKYPQ4VA7SON\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579251728207.68_warc_CC-MAIN-20200127205148-20200127235148-00017.warc.gz\"}"} |
https://www.math-only-math.com/hexadecimal-addition-and-subtraction.html | [
"",
null,
"Addition of hexadecimal numbers can be easily carried out with the help of the above table.\n\nThe following example illustrates the use of the table.\n\nEvaluate: (B A 3)₁₆ + (5 D E)₁₆\n\nSolution:\n\nWe note from the table that\n\n3 + E = 11\nA + D = 17; 17 + 1 (carry) = 18\nB + 5 = 10; 10 + 1 (carry) = 11\n\n  1 1     (carries)\n    B A 3\n  + 5 D E\n  -------\n  1 1 8 1\n\nHence the required sum is 1181 in hexadecimal.\n\n__________________________________________________________\n\nSubtraction of hexadecimal numbers can be accomplished by using the complement method. Although it is not a very simple method, computers use it very efficiently."
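The worked sum can be cross-checked in code; a minimal sketch using the standard library's base-16 parsing:

```cpp
#include <cassert>
#include <string>

// Add two hexadecimal numerals and return the sum as a hexadecimal
// string, mirroring the (B A 3) + (5 D E) = 1 1 8 1 example above.
std::string hexAdd(const std::string& a, const std::string& b) {
    unsigned long sum = std::stoul(a, nullptr, 16) + std::stoul(b, nullptr, 16);
    if (sum == 0) return "0";
    const char* digits = "0123456789ABCDEF";
    std::string out;
    while (sum > 0) {
        out.insert(out.begin(), digits[sum % 16]);
        sum /= 16;
    }
    return out;
}
```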
]
| [
null,
"https://www.math-only-math.com/images/xhexadecimal-addition.jpg.pagespeed.ic.WOBZ8wuBZt.jpg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8552332,"math_prob":0.98517954,"size":1197,"snap":"2020-34-2020-40","text_gpt3_token_len":270,"char_repetition_ratio":0.20284995,"word_repetition_ratio":0.009803922,"special_character_ratio":0.27652463,"punctuation_ratio":0.0883721,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99745345,"pos_list":[0,1,2],"im_url_duplicate_count":[null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-15T08:30:25Z\",\"WARC-Record-ID\":\"<urn:uuid:c463f619-b51f-43e7-a0c0-acd46fc66d81>\",\"Content-Length\":\"34845\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a7c21837-39c1-46be-889f-00a31ed8aeeb>\",\"WARC-Concurrent-To\":\"<urn:uuid:b2b385f4-e389-4068-abe7-fb9858b41b05>\",\"WARC-IP-Address\":\"173.247.219.53\",\"WARC-Target-URI\":\"https://www.math-only-math.com/hexadecimal-addition-and-subtraction.html\",\"WARC-Payload-Digest\":\"sha1:PA6C3FEVBV3G2T2JADKKD5BLQM4J4YYS\",\"WARC-Block-Digest\":\"sha1:SCHYIBPNA3PGPAWRS2NK5PZIEUYMHUUE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439740733.1_warc_CC-MAIN-20200815065105-20200815095105-00585.warc.gz\"}"} |
https://www.colorhexa.com/01f038 | [
"# #01f038 Color Information\n\nIn a RGB color space, hex #01f038 is composed of 0.4% red, 94.1% green and 22% blue. Whereas in a CMYK color space, it is composed of 99.6% cyan, 0% magenta, 76.7% yellow and 5.9% black. It has a hue angle of 133.8 degrees, a saturation of 99.2% and a lightness of 47.3%. #01f038 color hex could be obtained by blending #02ff70 with #00e100. Closest websafe color is: #00ff33.\n\n• R 0\n• G 94\n• B 22\nRGB color chart\n• C 100\n• M 0\n• Y 77\n• K 6\nCMYK color chart\n\n#01f038 color description : Vivid lime green.\n\n# #01f038 Color Conversion\n\nThe hexadecimal color #01f038 has RGB values of R:1, G:240, B:56 and CMYK values of C:1, M:0, Y:0.77, K:0.06. Its decimal value is 127032.\n\nHex triplet RGB Decimal 01f038 `#01f038` 1, 240, 56 `rgb(1,240,56)` 0.4, 94.1, 22 `rgb(0.4%,94.1%,22%)` 100, 0, 77, 6 133.8°, 99.2, 47.3 `hsl(133.8,99.2%,47.3%)` 133.8°, 99.6, 94.1 00ff33 `#00ff33`\nCIE-LAB 83.236, -80.325, 69.803 31.884, 62.608, 14.145 0.293, 0.576, 62.608 83.236, 106.417, 139.009 83.236, -77.903, 94.857 79.125, -66.541, 44.789 00000001, 11110000, 00111000\n\n# Color Schemes with #01f038\n\n• #01f038\n``#01f038` `rgb(1,240,56)``\n• #f001b9\n``#f001b9` `rgb(240,1,185)``\nComplementary Color\n• #41f001\n``#41f001` `rgb(65,240,1)``\n• #01f038\n``#01f038` `rgb(1,240,56)``\n• #01f0b0\n``#01f0b0` `rgb(1,240,176)``\nAnalogous Color\n• #f00141\n``#f00141` `rgb(240,1,65)``\n• #01f038\n``#01f038` `rgb(1,240,56)``\n• #b001f0\n``#b001f0` `rgb(176,1,240)``\nSplit Complementary Color\n• #f03801\n``#f03801` `rgb(240,56,1)``\n• #01f038\n``#01f038` `rgb(1,240,56)``\n• #3801f0\n``#3801f0` `rgb(56,1,240)``\n• #b9f001\n``#b9f001` `rgb(185,240,1)``\n• #01f038\n``#01f038` `rgb(1,240,56)``\n• #3801f0\n``#3801f0` `rgb(56,1,240)``\n• #f001b9\n``#f001b9` `rgb(240,1,185)``\n• #01a426\n``#01a426` `rgb(1,164,38)``\n• #01bd2c\n``#01bd2c` `rgb(1,189,44)``\n• #01d732\n``#01d732` `rgb(1,215,50)``\n• #01f038\n``#01f038` `rgb(1,240,56)``\n• #0dfe44\n``#0dfe44` 
`rgb(13,254,68)``\n• #26fe58\n``#26fe58` `rgb(38,254,88)``\n• #3ffe6b\n``#3ffe6b` `rgb(63,254,107)``\nMonochromatic Color\n\n# Alternatives to #01f038\n\nBelow, you can see some colors close to #01f038. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #06f001\n``#06f001` `rgb(6,240,1)``\n• #01f010\n``#01f010` `rgb(1,240,16)``\n• #01f024\n``#01f024` `rgb(1,240,36)``\n• #01f038\n``#01f038` `rgb(1,240,56)``\n• #01f04c\n``#01f04c` `rgb(1,240,76)``\n• #01f060\n``#01f060` `rgb(1,240,96)``\n• #01f074\n``#01f074` `rgb(1,240,116)``\nSimilar Colors\n\n# #01f038 Preview\n\nThis text has a font color of #01f038.\n\n``<span style=\"color:#01f038;\">Text here</span>``\n#01f038 background color\n\nThis paragraph has a background color of #01f038.\n\n``<p style=\"background-color:#01f038;\">Content here</p>``\n#01f038 border color\n\nThis element has a border color of #01f038.\n\n``<div style=\"border:1px solid #01f038;\">Content here</div>``\nCSS codes\n``.text {color:#01f038;}``\n``.background {background-color:#01f038;}``\n``.border {border:1px solid #01f038;}``\n\n# Shades and Tints of #01f038\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #000601 is the darkest color, while #f1fff4 is the lightest one.\n\n• #000601\n``#000601` `rgb(0,6,1)``\n• #001906\n``#001906` `rgb(0,25,6)``\n• #002d0a\n``#002d0a` `rgb(0,45,10)``\n• #00400f\n``#00400f` `rgb(0,64,15)``\n• #005414\n``#005414` `rgb(0,84,20)``\n• #006718\n``#006718` `rgb(0,103,24)``\n• #017b1d\n``#017b1d` `rgb(1,123,29)``\n• #018e21\n``#018e21` `rgb(1,142,33)``\n• #01a226\n``#01a226` `rgb(1,162,38)``\n• #01b52a\n``#01b52a` `rgb(1,181,42)``\n• #01c92f\n``#01c92f` `rgb(1,201,47)``\n• #01dc33\n``#01dc33` `rgb(1,220,51)``\n• #01f038\n``#01f038` `rgb(1,240,56)``\n• #07fe40\n``#07fe40` `rgb(7,254,64)``\n• #1afe4f\n``#1afe4f` `rgb(26,254,79)``\n• #2efe5e\n``#2efe5e` `rgb(46,254,94)``\n• #41fe6d\n``#41fe6d` `rgb(65,254,109)``\n• #55fe7c\n``#55fe7c` `rgb(85,254,124)``\n• #68fe8b\n``#68fe8b` `rgb(104,254,139)``\n• #7cfe9a\n``#7cfe9a` `rgb(124,254,154)``\n• #8fffa9\n``#8fffa9` `rgb(143,255,169)``\n• #a3ffb8\n``#a3ffb8` `rgb(163,255,184)``\n• #b6ffc7\n``#b6ffc7` `rgb(182,255,199)``\n• #caffd6\n``#caffd6` `rgb(202,255,214)``\n• #deffe5\n``#deffe5` `rgb(222,255,229)``\n• #f1fff4\n``#f1fff4` `rgb(241,255,244)``\nTint Color Variation\n\n# Tones of #01f038\n\nA tone is produced by adding gray to any pure hue. 
In this case, #708174 is the less saturated color, while #01f038 is the most saturated one.\n\n• #708174\n``#708174` `rgb(112,129,116)``\n• #678a6f\n``#678a6f` `rgb(103,138,111)``\n• #5e936a\n``#5e936a` `rgb(94,147,106)``\n• #549d65\n``#549d65` `rgb(84,157,101)``\n• #4ba660\n``#4ba660` `rgb(75,166,96)``\n• #42af5b\n``#42af5b` `rgb(66,175,91)``\n• #39b856\n``#39b856` `rgb(57,184,86)``\n• #2fc251\n``#2fc251` `rgb(47,194,81)``\n• #26cb4c\n``#26cb4c` `rgb(38,203,76)``\n• #1dd447\n``#1dd447` `rgb(29,212,71)``\n• #14dd42\n``#14dd42` `rgb(20,221,66)``\n• #0ae73d\n``#0ae73d` `rgb(10,231,61)``\n• #01f038\n``#01f038` `rgb(1,240,56)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #01f038 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population"
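The RGB figures quoted at the top of the page follow mechanically from the hex triplet; a sketch of that conversion (rounding to one decimal place is an assumption made to match the page's 0.4% / 94.1% / 22% style):

```cpp
#include <cassert>
#include <cmath>
#include <string>

// Split a 6-digit hex color like "01f038" into its R (i=0), G (i=1)
// or B (i=2) byte.
int hexByte(const std::string& hex, int i) {
    return std::stoi(hex.substr(i * 2, 2), nullptr, 16);
}

// Channel share of full intensity as a percentage, rounded to one
// decimal place, e.g. 1/255 -> 0.4 and 240/255 -> 94.1.
double channelPercent(int byte) {
    return std::round(byte / 255.0 * 1000.0) / 10.0;
}
```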
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.5483219,"math_prob":0.7554632,"size":3670,"snap":"2022-40-2023-06","text_gpt3_token_len":1643,"char_repetition_ratio":0.1333879,"word_repetition_ratio":0.0073937154,"special_character_ratio":0.559673,"punctuation_ratio":0.23549107,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9854538,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-03T03:59:57Z\",\"WARC-Record-ID\":\"<urn:uuid:677e9d52-39e1-4850-a606-d48a9a63f697>\",\"Content-Length\":\"36111\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5563832e-91f7-405e-98fe-e7f5a509deba>\",\"WARC-Concurrent-To\":\"<urn:uuid:8931d395-cce5-4613-8434-435be9645c54>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/01f038\",\"WARC-Payload-Digest\":\"sha1:77TQYAXRTQOF7VC3F65DAZ2PYTEM6IX5\",\"WARC-Block-Digest\":\"sha1:2O6KEGR4ZBV7VBIQOEMGGI55WEYA2NWP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500042.8_warc_CC-MAIN-20230203024018-20230203054018-00458.warc.gz\"}"} |
https://www.excelhow.net/how-to-average-a-range-and-ignore-zero-in-excel.html | [
"# How to Average a Range and Ignore Zero in Excel\n\nThis post will guide you on how to average a range of cells while ignoring all zero values in Excel. How do I average numbers in a given range and ignore zero values with a formula in Excel? How do I ignore zero when averaging a range of data in Excel?\n\n## Average a Range and Ignore Zero\n\nAssume that you have a list of numbers in range B1:B4, and you want to get the average of those values while excluding all zero values. How do you do it? You can use a formula based on the AVERAGEIF function to ignore zero when taking an average in Excel, like this:\n\n`=AVERAGEIF(B1:B4,\"<>0\")`\n\nType this formula into a blank cell and press the Enter key on your keyboard. It will then return the average, ignoring all zero values.",
null,
"Let’s see how this formula works:\n\nThe AVERAGEIF function performs an average based on your criteria — here, values that are not equal to zero. It also ignores blank cells and text values.\n\nYou can also use another formula to calculate an average for a given range while ignoring zero values (note that this version counts only positive values, so it matches the first formula only when the range contains no negative numbers). Like this:\n\n`=SUM(B1:B4)/COUNTIF(B1:B4,\">0\")`\n\nType this formula into a blank cell and press the Enter key on your keyboard.",
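Outside Excel, the same zero-ignoring average is only a few lines; this sketch mirrors the behaviour of the `=AVERAGEIF(range,\"<>0\")` formula:

```cpp
#include <cassert>
#include <vector>

// Average of a range, ignoring entries equal to zero -- the same idea
// as =AVERAGEIF(range,"<>0"). Returns 0 for an all-zero range.
double averageIgnoringZero(const std::vector<double>& values) {
    double sum = 0.0;
    int count = 0;
    for (double v : values) {
        if (v != 0.0) { sum += v; ++count; }
    }
    return count ? sum / count : 0.0;
}
```

Unlike the `COUNTIF(range,">0")` variant, this keeps negative values in both the sum and the count.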
null,
"### Related Functions\n\n• Excel SUM function\nThe Excel SUM function adds all numbers in a range of cells and returns the sum of these values. You can add individual values, cell references or ranges in Excel. The syntax of the SUM function is as below: =SUM(number1,[number2],…)\n• Excel AVERAGEIF function\nThe Excel AVERAGEIF function returns the average of all numbers in a range of cells that meet a given criterion. The syntax of the AVERAGEIF function is as below: =AVERAGEIF(range, criteria, [average_range])\n• Excel COUNTIF function\nThe Excel COUNTIF function counts the number of cells in a range that meet a given criterion. It can be used to count cells containing numbers, dates or text values, blank or non-blank cells, or cells containing specific characters. The syntax is: =COUNTIF(range, criteria)",
null,
""
]
| [
null,
"https://www.excelhow.net/wp-content/uploads/2019/01/average-a-range-ignore-zero1.gif",
null,
"https://www.excelhow.net/wp-content/uploads/2019/01/average-a-range-ignore-zero2.gif",
null,
"https://www.excelhow.net/wp-content/plugins/wpfront-scroll-top/images/icons/38.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.7864423,"math_prob":0.9349842,"size":2064,"snap":"2020-34-2020-40","text_gpt3_token_len":464,"char_repetition_ratio":0.1538835,"word_repetition_ratio":0.08547009,"special_character_ratio":0.21802326,"punctuation_ratio":0.10869565,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99707276,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,3,null,3,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-01T11:38:46Z\",\"WARC-Record-ID\":\"<urn:uuid:f5104168-a63f-4a33-970f-aa6dbc7b510c>\",\"Content-Length\":\"67477\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c9489a05-08ef-48d8-86ed-d06c01694d21>\",\"WARC-Concurrent-To\":\"<urn:uuid:65091431-f811-41fe-a669-8734a913acd2>\",\"WARC-IP-Address\":\"50.16.49.81\",\"WARC-Target-URI\":\"https://www.excelhow.net/how-to-average-a-range-and-ignore-zero-in-excel.html\",\"WARC-Payload-Digest\":\"sha1:6XXUPQTYY2S3WPGOBLPW4XGEHV2IWVMR\",\"WARC-Block-Digest\":\"sha1:FBYAL5SLRLECQ2XRMOQZTUUDDDQ43OGL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600402131412.93_warc_CC-MAIN-20201001112433-20201001142433-00009.warc.gz\"}"} |
https://fr.scribd.com/document/112848531/Record-Bk-Pgms-12sc-12-13-1 | [
"Vous êtes sur la page 1sur 11\n\n# Updated on : 18/5/12 Lab Activity Q1)\n\nRecord book program list for session 2012 13 1) CONCEPT IMPLEMENTATION : 2 D ARRAY MANIPULATION USING A CLASS Write a menu driven program to perform the following on an integer matrix of size 3x3 . class numbers { int x ; public : void input(void ); void display(void ) ; //array and its transpose . void rowcol(void); wise total . void uplow(void)) ; int maxleft (void); diagonal . int minright (void) ; diagonal . // to enter values into the array // to be called by input() , which displays the existing // to compute and display Row wise total and Column // to print upper triangle and lower triangle // to find and return the maximum value along the left // to find and return the minimum value along the right\n\nint sumleftright(void) ; // to find and return the sum of left and right diagonal elements . void swaparr(void); // to swap the first row elements with the last row elements }; All member functions should be called from the main pgm . 2) CONCEPT IMPLEMENTATION : BASIC OOPs CONCEPT -Data hiding, Data Encapsulation, Array of objects . Define a class Travel in C++ with the description given below: Private Members: T_Code of type string No_of_Adults of type integer No_of_Children of type integer Distance of type integer TotalFare of type float Public Members: A constructor to assign initial values as follows: T_Code with the word NULL No_of_Adults as 0 No_of_Children as 0 Distance as 0 TotalFare as 0 A function AssignFare() which calculates and assigns the value of the data member TotalFare as follows: For each Adult Fare(Rs) For Distance(Km) 500 >=1000 300 <1000 & >=500 200 <500 For each Child the above Fare will be 50% of the Fare mentioned in the above table. For e.g., If Distance is 750,No_of_Adults=3 and No_of_Children=2 Then TotalFare should be calculated as No_of_Adults*300+No_of_Children*150 i.e. 
3*300+2*150 = 1200\n\nA function EnterTravel() to input the values of the data members T_Code, No_of_Adults, No_of_Children and Distance, and invoke the AssignFare() function.\nA function ShowTravel() which displays the content of all the data members for a Travel.\n\nWrite a program to implement the above case study. Use an array of objects. Allow the user to enter the number of elements required for the array at run time.\n\n3) CONCEPT IMPLEMENTATION : CONSTRUCTOR OVERLOADING, TYPES OF CONSTRUCTORS, DESTRUCTOR, ARRAY OF OBJECTS\n\nWrite a program :\nA) to initialize\ni) Object Akbar using the default constructor\nii) Object Birbal using the parameterized constructor\niii) Object Chanakya from Object Birbal using the copy constructor\nB) to display all data members of the instances Akbar, Birbal, Chanakya.\nC) to implement destructors for the objects created.\n\nThe class definition is as follows :\n\nclass bank\n{\nlong int prin;\nint noyrs;\nfloat rate, si, amt;\npublic :\nfloat computeSI();\nvoid showdata();\n};\n\n4) Concept Implementation : Bubble sort technique, array of objects, passing and returning objects from functions\n\nDefine a class Garments in C++ with the following descriptions:\n\nPrivate Members:\nGCode of type string\nGType of type string\nGSize of type integer\nGFabric of type string\nGPrice of type float\nA function Assign() which calculates and assigns the value of GPrice as follows:\nFor the value of GFabric as COTTON,\n\nGType | GPrice(Rs)\nTROUSER | 1300\nSHIRT | 1100\n\nFor GFabric other than COTTON the above mentioned GPrice gets reduced by 10%.\n\nPublic Members:\nA constructor to assign initial values of GCode, GType and GFabric with the word NOT ALLOTTED, and GSize and GPrice with 0\nA member function Input() to input the values of the data members GCode, GType, GSize and GFabric and invoke the Assign() function.\nA member function Display() which displays the content of all the data members for a Garment. 
A non-member function SortNcostly() should receive the array of objects as a parameter, sort the array in ascending order of GPrice and display a consolidated list of garment details (use setw() and the Bubble sort algorithm). The function should return the object which holds the details of the most expensive garment.
Functions to be invoked from the main program: Input(), Display(), SortNcostly()
Object : Garments clothes;

Sorting using bubble sort:

for (i = 0; i < n - 1; i++)
{
    for (j = 0; j < n - 1; j++)
    {
        if (a[j] > a[j+1])
        {
            temp = a[j];
            a[j] = a[j+1];
            a[j+1] = temp;
        }
    }
}

5) CONCEPT IMPLEMENTATION : Writing structures to an array, initiating an enquiry using the binary search algorithm, and a function returning a structure. Implementation of the Selection sort algorithm.

Structure description:

struct Applicant
{
    char A_Rno;      // Roll number of applicant
    char A_Name;     // Name of applicant
    int A_Score;     // Score of applicant
};

void Enrol();               // fn for user input
void Status();              // fn for display
Applicant Findperson();     // fn for searching and then returning that applicant record

Write a program in C++ that would store n records to an array. Sort the array in increasing order of A_Rno, using the Selection sort algorithm. The program should search for the roll number entered by the user, using the binary search algorithm. The details of that student should then be displayed in a neat format.

SELECTION SORT (Pgm)

for (i = 0; i < n - 1; i++)
{
    for (j = i + 1; j < n; j++)
    {
        if (a[i] > a[j])
        {
            temp = a[i];
            a[i] = a[j];
            a[j] = temp;
        }
    }
}

6) CONCEPT IMPLEMENTATION : MULTILEVEL INHERITANCE USING PUBLIC DERIVATION, ABSTRACT CLASS AND CONCRETE CLASS
Write a program to read the data of a student using the following classes. Display details of the game played by him, along with personal data, using the following class definitions. Give outside-class definitions for all member functions.
Class description is as follows:

Class person
    Private: char name, int age
    Public:  read_per_data(), show_per_data()

Class student
    Private: int roll, mks; char grade
    Public:  read_stu_data(), char compute_grade() [to be called by read_stu_data()], show_stu_data()

Class game
    Private: char gm
    Public:  read_gm_data(), show_gm_data()

Inheritance chain: person --> student --> game

The grade shall be calculated as follows:

    Mks          Grade
    >= 320       A
    240 - 319    B
    200 - 239    C
    <= 199       D

Write objects of class game to a binary file (as many times as the user wants) and display the contents in a neat interactive format.
Note: read_gm_data() should invoke read_stu_data() and this in turn should invoke read_per_data(). The display functions should also be invoked in the same order.

7) CONCEPT IMPLEMENTATION : BINARY FILE HANDLING USING CLASS
Create a binary file PHONE.DAT, containing records of the following class definition.

class Phonlist
{
    char Name;
    char Address;
    char AreaCode;
    char PhoneNo;
public:
    void Register();   // for input of data members
    void Show();       // for display of data members
    int CheckCode(char AC[])
    {
        return strcmp(AreaCode, AC);
    }
    char *GetArCode()
    {
        return AreaCode;   // to be called by Show()
    }
};

Write a function TRANSFER() in C++ that would copy all those records which have AreaCode as DEL from PHONE.DAT to PHONBACK.DAT. Display the contents of both data files in a neat consolidated format.

8) CONCEPT IMPLEMENTATION : Insertion sort and Merge sort algorithms using arrays and binary files

class student
{
public:
    int admno;
    char name;
    void getdata()  { cin >> admno >> name; }
    void dispdata() { cout << admno << name; }
} XIIA, XIIB, CLASSXII;

Sort arrays XIIA and XIIB in ascending order of admno using the Insertion sort algorithm. Write the function definition of mergesort() to merge the contents of the two sorted arrays XIIA and XIIB into a third array CLASSXII using the Mergesort algorithm.
The resultant array CLASSXII is required to be in ascending order of admno.

Write a complete C++ program to implement the same.

INSERTION SORT

// insertion sort begins: first place the smallest element at a[0] as a sentinel
float small;
int pos;
small = a[0];
for (i = 0; i < n; i++)
{
    if (a[i] < small)
    {
        small = a[i];
        pos = i;
    }
}
float temp;
temp = a[0];        /* swap positions of the smallest number and a[0] */
a[0] = small;
a[pos] = temp;
int ptr;
for (int k = 1; k < n; k++)     /* sorting starts */
{
    temp = a[k];
    ptr = k - 1;
    while (temp < a[ptr])
    {
        a[ptr + 1] = a[ptr];
        ptr = ptr - 1;
    }
    a[ptr + 1] = temp;
}

Mergesort:

void mergesort(int A[], int B[], int C[], int N, int M, int &K);

void main()
{
    clrscr();
    int A[] = {1, 3, 5};
    int B[] = {2, 4, 6, 8};
    int C[7];
    int N = 3;
    int M = 4;
    int K1;
    mergesort(A, B, C, N, M, K1);
    getch();
}

void mergesort(int A[], int B[], int C[], int N, int M, int &K)
{
    int I = 0, J = 0;
    K = 0;
    while (I < N && J < M)
        if (A[I] < B[J])
            C[K++] = A[I++];
        else if (A[I] > B[J])
            C[K++] = B[J++];
        else
        {
            C[K++] = A[I++];
            J++;
        }
    for (; I < N; I++)
        C[K++] = A[I];
    for (; J < M; J++)
        C[K++] = B[J];

    cout << \"\\nThe contents of array A : \";
    for (I = 0; I < 3; I++)
        cout << A[I] << '\\t';
    cout << \"\\n\\nThe contents of array B : \";
    for (I = 0; I < 4; I++)
        cout << B[I] << '\\t';
    cout << \"\\n\\nThe contents of array C : \";
    for (I = 0; I < 7; I++)
        cout << C[I] << '\\t';
}

9) CONCEPT IMPLEMENTATION : Adding, deleting and displaying records using binary file handling with class
Accept as many objects as the user wants of the following class type and write them to a data file MEMP.DAT.

class employee           // Class declaration
{
    char name;           // Private data members
    float salary;
public:
    int code;
    // Public member functions
    void input();        // fn to accept values for data members
    void show();         // fn to display values of data members
    float retsal();      // fn to return salary
};

Menu
1. Add a record
2. Delete a record
3. Display record (all)
4. Display record (filtered)
5. Exit

Additional info:
Option 2 : Delete a record of the user's choice.
Option 3 : Display all records.
Option 4 : Display details of those records whose salary is between 10000 and 20000.

10) CONCEPT IMPLEMENTATION : Writing text to a text file and then performing text manipulation operations on it
Write a menu driven program using separate functions for the following:
1. Create a text file to store upper case strings (FILEORIG.TXT)
2. Display the file (FILEORIG.TXT)
3. Convert the text file (FILEORIG.TXT) to lowercase (FILELOW.TXT)
4. Display the lower case file (FILELOW.TXT)
5. Count the number of alphabets present in the text file FILELOW.TXT
6. Count the number of lines present in the text file FILEORIG.TXT
7. Count and display the number of lines starting with the alphabet A present in the text file FILEORIG.TXT
8. Count the occurrences of the word \"the\" in the file \"FILEORIG.TXT\"
9. Encrypt the file FILEORIG.TXT by adding 25 to the ASCII value of each character read from the file. Store the encrypted data into a text file MYENCRYPT.TXT. Decrypt this file and write to MYDECRYPT.TXT. Display the contents of the files FILEORIG.TXT, MYENCRYPT.TXT and MYDECRYPT.TXT.

11) CONCEPT IMPLEMENTATION : Random access file handling (will be mailed; student to take a printout)
Write a program to perform random access on a disk file, the structure definition of which is given below:

struct VEHICLE
{
    char Vehicle_Code;
    char Vehicle_Name;
    float cost;
};

The menu should have the following options:

Main Menu
1. Add Record
2. Display Record
3. Update Record
4. Exit

Functions append(), display() and update() should perform the above tasks.

12) CONCEPT IMPLEMENTATION : Enquiry on binary file handling (will be mailed; student to take a printout)
Write a program to write the names of the states and their respective chief ministers to a disk file. Enable search by State or by CM from the disk file.
class LIST
{
    char States, CM;
public:
    void GETIT();             // fn to accept values for data members
    void SHOWIT();            // fn to display values of data members
    int s1(char t_state);     // fn to compare & locate state and invoke fn SHOWIT()
    int s2(char t_cop);       // fn to compare & locate chief minister and invoke fn SHOWIT()
};

The program should be menu driven:

MENU
1. Data Entry Module
2. Display all
3. Search by State
4. Search by CM
5. Exit
Enter your choice :

13) CONCEPT IMPLEMENTATION : Merging binary files (will be mailed; student to take a printout)
Write a program to merge two binary files (OLD.DAT and OLD1.DAT) and create and display the merged file (NEW.DAT). The structure description is given below:

struct employee
{
    char name;
    int empno;
    char status;
    float payrate;
};

14) CONCEPT IMPLEMENTATION : Application of dynamic arrays and linked lists
Write a menu driven program to:

1) Function name: fnarr()
   Create a dynamic two-dimensional array, where the dimensions are accepted at runtime.
   Accept values into the array and display its transpose.

2) Function name: fnobptr()

   struct employee
   {
       int empno;
       char name;
       float basic;
       float experience;
       employee *link;     // self referential structure
   };

   Create a dynamic linked list to store the above structure. Traverse the list and identify the person with maximum experience. In order to display all the details of the most experienced person, pass this structure to a function display() whose prototype is as follows:

   void display(employee *emp);

3) Exit

15) CONCEPT IMPLEMENTATION : Application of linear search on a linked list
Write a C++ program to:
a. Store user data into a linked list. Get user input of a code for querying.
b. Search (linear search) for an employee in the linked list and display all details of that employee in a neat format. Display a suitable message if the employee does not exist.
struct emp
{
    char name;
    int code;
    emp *next;     // self referential structure
};

16) CONCEPT IMPLEMENTATION : Application of a linked stack (will be mailed; student to take a printout)
Write a menu driven program to illustrate A LINKED STACK, on a dynamically allocated linked stack where each node contains a name and an age.

LINKED STACK MENU
1) PUSH TO STACK
2) POP FROM STACK
3) DISPLAY STACK
4) EXIT

17) CONCEPT IMPLEMENTATION : Application of a linked queue (will be mailed; student to take a printout)
Write a menu driven program to illustrate A LINKED QUEUE, on a dynamically allocated linked queue, where each node contains the account number, the account holder's name and the balance amount in his savings account.

LINKED QUEUE MENU
1) INSERT AN ELEMENT
2) DELETE AN ELEMENT
3) DISPLAY THE QUEUE
4) EXIT

18) CONCEPT IMPLEMENTATION : Implementation of a stack using arrays (will be mailed; student to take a printout)
Write a program to implement an integer array stack to hold the marks of n students for Informatics Practices. The program should be menu driven and should facilitate pushing, popping and display of elements in the array stack.

19) CONCEPT IMPLEMENTATION : Implementation of queues using arrays (will be mailed; student to take a printout)
Write a program to implement an integer array queue to hold the marks of n students for Computer Science. The program should be menu driven and should facilitate addition, deletion and display of elements in the array queue.

20) CONCEPT IMPLEMENTATION : Implementation of a circular queue using an array (will be mailed; student to take a printout)
Write a program using functions in C++ to perform insert and delete operations on a circular queue using an array.

21) SQL documentation. Attach the question paper followed by the query result sheet."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.64693373,"math_prob":0.8627555,"size":15243,"snap":"2020-10-2020-16","text_gpt3_token_len":3776,"char_repetition_ratio":0.12231774,"word_repetition_ratio":0.10262172,"special_character_ratio":0.27160007,"punctuation_ratio":0.14699863,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96110433,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-19T12:40:51Z\",\"WARC-Record-ID\":\"<urn:uuid:b7f386a3-d410-4920-838f-52da86dfa76c>\",\"Content-Length\":\"347665\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ea8135f6-4d55-47b8-adf3-d6b739123af3>\",\"WARC-Concurrent-To\":\"<urn:uuid:ae64cdd8-7525-498d-a0f5-20d13dd20567>\",\"WARC-IP-Address\":\"151.101.250.152\",\"WARC-Target-URI\":\"https://fr.scribd.com/document/112848531/Record-Bk-Pgms-12sc-12-13-1\",\"WARC-Payload-Digest\":\"sha1:AYOKU2J32IQMHQY32NFUFCBFGRJXB2QV\",\"WARC-Block-Digest\":\"sha1:JCIL3CQLMVWPWLEYAFZJMIKI6NQYBVHS\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875144150.61_warc_CC-MAIN-20200219122958-20200219152958-00181.warc.gz\"}"} |
https://tutorme.com/tutors/10269/interview/ | [
"Enable contrast version\n\n# Tutor profile: Nety D.\n\nInactive\nNety D.\nGraduated in Top 2% of Class with 4.0 GPA\nTutor Satisfaction Guarantee\n\n## Questions\n\n### Subject:Spanish\n\nTutorMe\nQuestion:\n\nChoose the correct present tense conjugation for model verb \"hablar\"\n\nInactive\nNety D.\n\nyo- hablo tú- hablas él, ella, usted- habla nosotros- hablamos ellos/ustedes-hablan\n\n### Subject:Basic Math\n\nTutorMe\nQuestion:\n\nA recipe needs 1/4 tablespoon salt. How much salt does 8 such recipe need?\n\nInactive\nNety D.\n\n1/4 × 8 = 1/4 × 8/1 = 8/4 = 2\n\n### Subject:Algebra\n\nTutorMe\nQuestion:\n\nGiven f(x) = 2x + 3 and g(x) = –x^2 + 5, find ( f o g)(x)\n\nInactive\nNety D.\n\n= f (–x^2 + 5) = 2(–x^2 + 5) + 3 =–2x^2 + 10 + 3 = –2x^2 + 13\n\n## Contact tutor\n\nSend a message explaining your\nneeds and Nety will reply soon.\nContact Nety"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.7736443,"math_prob":0.7029103,"size":1550,"snap":"2020-34-2020-40","text_gpt3_token_len":449,"char_repetition_ratio":0.10866753,"word_repetition_ratio":0.014652015,"special_character_ratio":0.26580647,"punctuation_ratio":0.12264151,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98977095,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-03T14:51:58Z\",\"WARC-Record-ID\":\"<urn:uuid:8a684c0d-df90-4f95-bc59-c16ebe77aff4>\",\"Content-Length\":\"153102\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:167632a4-2198-42a2-82d6-09692606cf3b>\",\"WARC-Concurrent-To\":\"<urn:uuid:7c38c863-9fa3-466a-8d41-1e5574e24e84>\",\"WARC-IP-Address\":\"52.85.144.83\",\"WARC-Target-URI\":\"https://tutorme.com/tutors/10269/interview/\",\"WARC-Payload-Digest\":\"sha1:SBMKV4DN2H5YQT2UCCLK73UGJR3J4VVS\",\"WARC-Block-Digest\":\"sha1:FKGBY5U7TSHSIWRHF75QXR3MDDSL4KFF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439735812.88_warc_CC-MAIN-20200803140840-20200803170840-00035.warc.gz\"}"} |
https://www.shaalaa.com/question-bank-solutions/find-general-solution-following-differential-equation-1-y-2-x-e-tan-1-y-dy-dx-0-general-particular-solutions-differential-equation_4160 | [
"# Find the general solution of the following differential equation : (1+y^2)+(x-e^(tan^(-1)y))dy/dx= 0 - Mathematics\n\nFind the general solution of the following differential equation :\n\n(1+y^2)+(x-e^(tan^(-1)y))dy/dx= 0\n\n#### Solution\n\nGiven:\n\n(1+y^2)+(x-e^(tan^(-1)y))dy/dx= 0\n\nLet tan1y=t\n\ny=tant\n\n=>dy/dx=sec^2tdt/dx\n\nTherefore, the equation becomes\n\n(1+tan2t)+(xet)sec2dt/dx=0\n\n=>sec^2t+(x-e^t)(sec^2t)dt/dx=0\n\n=>1+(x-e^t)dt/dx=0\n\n=>(x-e^t)dt/dx=-1\n\n=>x-e^t=dx/dt\n\n=>dx/dt+1.x=e^t\n\nIf =e∫1.dt\n\n= et\n\n:. e^t.(dx/dt+1.x)=e^t.e^t\n\n=>d/dt(xe^t)=e^(2t)\n\nIntegrating both the sides, we get\n\nxe^t=inte^(2t)dt\n\n=>xe^t=1/2e^(2t)+C \" ....(1)\"\n\nSubstituting the value of t in (1), we get\n\nxe^(tan^(1))y=1/2e^(2tan^(-1)y)+C_1\n\n=>e^2tan^(-1y)=2xe^(tan^1y)+C\n\nIt is the required general solution.\n\nConcept: General and Particular Solutions of a Differential Equation\nIs there an error in this question or solution?"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.56417286,"math_prob":0.9999018,"size":790,"snap":"2021-43-2021-49","text_gpt3_token_len":366,"char_repetition_ratio":0.129771,"word_repetition_ratio":0.0,"special_character_ratio":0.42531645,"punctuation_ratio":0.0959596,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000064,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-19T15:30:49Z\",\"WARC-Record-ID\":\"<urn:uuid:b93ff17b-830a-4792-b927-685178f263ff>\",\"Content-Length\":\"50666\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ab71766d-cb54-4f00-a109-c0d0b0e0a902>\",\"WARC-Concurrent-To\":\"<urn:uuid:6120019b-f11c-4cb4-ab44-ffb229ddce68>\",\"WARC-IP-Address\":\"172.105.37.75\",\"WARC-Target-URI\":\"https://www.shaalaa.com/question-bank-solutions/find-general-solution-following-differential-equation-1-y-2-x-e-tan-1-y-dy-dx-0-general-particular-solutions-differential-equation_4160\",\"WARC-Payload-Digest\":\"sha1:YR3EOIZUM5FKR5HSKHAUMD73SODS7RBG\",\"WARC-Block-Digest\":\"sha1:A6B77YFVF4NAGDV2ECVKKUG4EK6PBJRA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585270.40_warc_CC-MAIN-20211019140046-20211019170046-00379.warc.gz\"}"} |
https://www.popflock.com/learn?s=Weak_force | [
"",
null,
"Weak Force\nGet Weak Force essential facts below. View Videos or join the Weak Force discussion. Add Weak Force to your PopFlock.com topic list for future reference or share this resource on social media.\nWeak Force\n\nIn nuclear physics and particle physics, the weak interaction, which is also often called the weak force or weak nuclear force, is the mechanism of interaction between subatomic particles that is responsible for the radioactive decay of atoms. The weak interaction participates in nuclear fission, and the theory describing its behaviour and effects is sometimes called quantum flavourdynamics (QFD). However, the term QFD is rarely used, because the weak force is better understood by electroweak theory (EWT).\n\nThe effective range of the weak force is limited to subatomic distances, and is less than the diameter of a proton. It is one of the four known force-related fundamental interactions of nature, alongside the strong interaction, electromagnetism, and gravitation.\n\n## Background\n\nThe Standard Model of particle physics provides a uniform framework for understanding the electromagnetic, weak, and strong interactions. An interaction occurs when two particles (typically but not necessarily half-integer spin fermions) exchange integer-spin, force-carrying bosons. The fermions involved in such exchanges can be either elementary (e.g. electrons or quarks) or composite (e.g. protons or neutrons), although at the deepest levels, all weak interactions ultimately are between elementary particles.\n\nIn the weak interaction, fermions can exchange three types of force carriers, namely W+, W-, and Z bosons. The masses of these bosons are far greater than the mass of a proton or neutron, which is consistent with the short range of the weak force. 
In fact, the force is termed weak because its field strength over a given distance is typically several orders of magnitude less than that of the strong nuclear force or electromagnetic force.\n\nQuarks, which make up composite particles like neutrons and protons, come in six \"flavours\" - up, down, strange, charm, top and bottom - which give those composite particles their properties. The weak interaction is unique in that it allows quarks to swap their flavour for another. The swapping of those properties is mediated by the force carrier bosons. For example, during beta minus decay, a down quark within a neutron is changed into an up quark, thus converting the neutron to a proton and resulting in the emission of an electron and an electron antineutrino. Another important example of a phenomenon involving the weak interaction is the fusion of hydrogen into helium that powers the Sun's thermonuclear process.\n\nMost fermions decay by a weak interaction over time. Such decay makes radiocarbon dating possible, as carbon-14 decays through the weak interaction to nitrogen-14. It can also create radioluminescence, commonly used in tritium illumination, and in the related field of betavoltaics.\n\nThe weak interaction is the only fundamental interaction that breaks parity-symmetry, and similarly, the only one to break charge parity symmetry.\n\nDuring the quark epoch of the early universe, the electroweak force separated into the electromagnetic and weak forces.\n\n## History\n\nIn 1933, Enrico Fermi proposed the first theory of the weak interaction, known as Fermi's interaction. 
He suggested that beta decay could be explained by a four-fermion interaction, involving a contact force with no range.\n\nHowever, it is better described as a non-contact force field having a finite range, albeit very short. In the 1960s, Sheldon Glashow, Abdus Salam and Steven Weinberg unified the electromagnetic force and the weak interaction by showing them to be two aspects of a single force, now termed the electroweak force.\n\nThe existence of the W and Z bosons was not directly confirmed until 1983.\n\n## Properties",
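The "finite range, albeit very short" can be made quantitative: for a force mediated by a boson of mass m, the range is roughly the reduced Compton wavelength ħ/(mc). A back-of-the-envelope Python sketch (using ħc ≈ 197.327 MeV·fm and m_W ≈ 80.4 GeV/c² as representative values):

```python
# Rough range of a force carried by a massive boson: r ~ ħ/(m c) = ħc / (m c^2)
HBAR_C_MEV_FM = 197.327   # ħc in MeV·fm
M_W_MEV = 80_400.0        # W boson mass ~80.4 GeV/c^2, expressed in MeV/c^2

range_fm = HBAR_C_MEV_FM / M_W_MEV   # femtometres (1 fm = 1e-15 m)
range_m = range_fm * 1e-15

print(f"~{range_m:.1e} m")  # prints ~2.5e-18 m -- far below the ~1.7e-15 m proton diameter
assert 1e-18 < range_m < 1e-17
```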
null,
"A diagram depicting the decay routes due to the charged weak interaction and some indication of their likelihood. The intensity of the lines is given by the CKM parameters.\n\nThe electrically charged weak interaction is unique in a number of respects:\n\nDue to their large mass (approximately 90 GeV/c2) these carrier particles, called the W and Z bosons, are short-lived with a lifetime of under 10-24 seconds. The weak interaction has a coupling constant (an indicator of interaction strength) of between 10-7 and 10-6, compared to the strong interaction's coupling constant of 1 and the electromagnetic coupling constant of about 10-2; consequently the weak interaction is 'weak' in terms of strength. The weak interaction has a very short effective range (around 10-17 to 10-16 m). At distances around 10-18 meters, the weak interaction has a strength of a similar magnitude to the electromagnetic force, but this starts to decrease exponentially with increasing distance. Scaled up by just one and a half orders of magnitude, at distances of around 3×10-17 m, the weak interaction becomes 10,000 times weaker.\n\nThe weak interaction affects all the fermions of the Standard Model, as well as the Higgs boson; neutrinos interact only through gravity and the weak interaction. The weak interaction does not produce bound states nor does it involve binding energy - something that gravity does on an astronomical scale, that the electromagnetic force does at the atomic level, and that the strong nuclear force does inside nuclei.\n\nIts most noticeable effect is due to its first unique feature: The charged weak interaction causes flavour change. For example, a neutron is heavier than a proton (its partner nucleon), and can decay into a proton by changing the flavour (type) of one of its two down quarks to an up quark. 
Neither the strong interaction nor electromagnetism permits flavour-changing, so this proceeds by weak decay; without weak decay, quark properties such as strangeness and charm (associated with the strange quark and the charm quark, respectively) would also be conserved across all interactions.\n\nAll mesons are unstable because of weak decay.[a] In the process known as beta decay, a down quark in the neutron can change into an up quark by emitting a virtual W⁻ boson, which is then converted into an electron and an electron antineutrino. Another example is electron capture, a common variant of radioactive decay, wherein a proton and an electron within an atom interact and are changed to a neutron (an up quark is changed to a down quark), and an electron neutrino is emitted.\n\nDue to the large masses of the W bosons, particle transformations or decays (e.g., flavour change) that depend on the weak interaction typically occur much more slowly than transformations or decays that depend only on the strong or electromagnetic forces. For example, a neutral pion decays electromagnetically, and so has a lifetime of only about 10⁻¹⁶ seconds. In contrast, a charged pion can only decay through the weak interaction, and so lives about 10⁻⁸ seconds, or a hundred million times longer than a neutral pion. 
A particularly extreme example is the weak-force decay of a free neutron, which takes about 15 minutes.\n\n### Weak isospin and weak hypercharge\n\nLeft-handed fermions in the Standard Model:\n\n| Generation 1 | T3 | Generation 2 | T3 | Generation 3 | T3 |\n|---|---|---|---|---|---|\n| Electron neutrino ν_e | +1/2 | Muon neutrino ν_μ | +1/2 | Tau neutrino ν_τ | +1/2 |\n| Electron e | −1/2 | Muon μ | −1/2 | Tau τ | −1/2 |\n| Up quark u | +1/2 | Charm quark c | +1/2 | Top quark t | +1/2 |\n| Down quark d | −1/2 | Strange quark s | −1/2 | Bottom quark b | −1/2 |\n\nAll of the above left-handed (regular) particles have corresponding right-handed anti-particles with equal and opposite weak isospin. All right-handed (regular) particles and left-handed antiparticles have a weak isospin of 0.\n\nAll particles have a property called weak isospin (symbol T3), which serves as an additive quantum number that restricts how the particle can behave in the weak interaction. Weak isospin plays the same role in the weak interaction with the W± as electric charge does in electromagnetism, and color charge in the strong interaction. All left-handed fermions have a weak isospin value of either +1/2 or −1/2; all right-handed fermions have 0 isospin. For example, the up quark has T3 = +1/2 and the down quark has T3 = −1/2. A quark never decays through the weak interaction into a quark of the same T3: quarks with a T3 of +1/2 only decay into quarks with a T3 of −1/2 and vice versa.\n\nIn any given interaction, weak isospin is conserved: the sum of the weak isospin numbers of the particles entering the interaction equals the sum of the weak isospin numbers of the particles exiting that interaction. 
For example, a (left-handed) π⁺, with a weak isospin of +1, normally decays into a ν_μ (with T3 = +1/2) and a μ⁺ (as a right-handed antiparticle, T3 = +1/2).\n\nFor the development of the electroweak theory, another property, weak hypercharge, was invented, defined as:\n\n$Y_{\text{W}}=2\,\left(\,Q-T_{3}\,\right)~,$",
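The defining relation Y_W = 2(Q − T3) can be checked against the table of left-handed fermions above; a small Python sketch (charges in units of e, using exact fractions):

```python
from fractions import Fraction as F

def hypercharge(Q, T3):
    # Y_W = 2 (Q - T3)
    return 2 * (Q - T3)

# (name, electric charge Q, weak isospin T3) for the first generation, left-handed
left_handed = [
    ("electron neutrino", F(0),     F(1, 2)),
    ("electron",          F(-1),    F(-1, 2)),
    ("up quark",          F(2, 3),  F(1, 2)),
    ("down quark",        F(-1, 3), F(-1, 2)),
]

for name, Q, T3 in left_handed:
    print(f"{name}: Y_W = {hypercharge(Q, T3)}")

# Both members of a weak-isospin doublet share one hypercharge value:
assert hypercharge(F(0), F(1, 2)) == hypercharge(F(-1), F(-1, 2)) == -1
assert hypercharge(F(2, 3), F(1, 2)) == hypercharge(F(-1, 3), F(-1, 2)) == F(1, 3)
```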
null,
"where YW is the weak hypercharge of a particle with electrical charge Q (in elementary charge units) and weak isospin T3. Weak hypercharge is the generator of the U(1) component of the electroweak gauge group; whereas some particles have a weak isospin of zero, all known spin 1/2 particles have a non-zero weak hypercharge.[b]\n\n## Interaction types\n\nThere are two types of weak interaction (called vertices). The first type is called the \"charged-current interaction\" because the weakly interacting fermions form a current with total electric charge that is nonzero. The second type is called the \"neutral-current interaction\" because the weakly interacting fermions form a current with total electric charge zero. It is responsible for the (rare) deflection of neutrinos. The two types of interaction follow different selection rules. This naming convention is often misunderstood to label the electric charge of the W and Z bosons, however the naming convention predates the concept of the mediator bosons and clearly (at least in name) labels the charge of the current (formed from the fermions), not the bosons.\n\n### Charged-current interaction\n\nIn one type of charged current interaction, a charged lepton (such as an electron or a muon, having a charge of -1) can absorb a\nW+\nboson\n(a particle with a charge of +1) and be thereby converted into a corresponding neutrino (with a charge of 0), where the type (\"flavour\") of neutrino (electron, muon or tau) is the same as the type of lepton in the interaction, for example:\n\n$\\mu ^{-}+W^{+}\\to \\nu _{\\mu }$",
null,
"Similarly, a down-type quark (d with a charge of -13) can be converted into an up-type quark (u, with a charge of +23), by emitting a\nW\nboson or by absorbing a\nW+\nboson. More precisely, the down-type quark becomes a quantum superposition of up-type quarks: that is to say, it has a possibility of becoming any one of the three up-type quarks, with the probabilities given in the CKM matrix tables. Conversely, an up-type quark can emit a\nW+\nboson, or absorb a\nW\nboson, and thereby be converted into a down-type quark, for example:\n\n{\\begin{aligned}d&\\to u+W^{-}\\\\d+W^{+}&\\to u\\\\c&\\to s+W^{+}\\\\c+W^{-}&\\to s\\end{aligned}}",
null,
"The W boson is unstable so will rapidly decay, with a very short lifetime. For example:\n\n{\\begin{aligned}W^{-}&\\to e^{-}+{\\bar {\\nu }}_{e}~\\\\W^{+}&\\to e^{+}+\\nu _{e}~\\end{aligned}}",
null,
"Decay of a W boson to other products can happen, with varying probabilities.\n\nIn the so-called beta decay of a neutron (see picture, above), a down quark within the neutron emits a virtual\nW\nboson and is thereby converted into an up quark, converting the neutron into a proton. Because of the energy involved in the process (i.e., the mass difference between the down quark and the up quark), the\nW\nboson can only be converted into an electron and an electron-antineutrino. At the quark level, the process can be represented as:\n\n$d\\to u+e^{-}+{\\bar {\\nu }}_{e}~$",
null,
"### Neutral-current interaction\n\nIn neutral current interactions, a quark or a lepton (e.g., an electron or a muon) emits or absorbs a neutral Z boson. For example:\n\n$e^{-}\\to e^{-}+Z^{0}$",
null,
"Like the\nW±\nbosons, the\nZ0\nboson also decays rapidly, for example:\n\n$Z^{0}\\to b+{\\bar {b}}$",
null,
"Unlike the charged-current interaction, whose selection rules are strictly limited by chirality, electric charge, and / or weak isospin, the neutral-current\nZ0\ninteraction can cause any two fermions in the standard model to deflect: Either particles and anti-particles of any electric charge, and both left- and right-chirality, although the strength of the interaction differs.[c]\n\nThe quantum number weak charge (QW) serves the same role in the neutral current interaction with the\nZ0\n, that electric charge (Q, with no subscript) does in the electromagnetic interaction. Its value is given by:\n\n$Q_{\\text{W}}=2\\,T_{3}-4\\,Q\\,\\sin ^{2}\\theta _{\\text{W}}=2\\,T_{3}-Q+\\left(1-4\\,\\sin ^{2}\\theta _{\\text{W}}\\right)\\,Q~.$",
null,
"Since the weak mixing angle $~\\theta _{\\text{W}}\\approx 29^{\\circ }~,$",
null,
"the parenthetic expression $~\\left(1-4\\,\\sin ^{2}\\theta _{\\text{W}}\\right)\\approx 0.06~,$",
null,
"with its value varying slightly with the momentum difference (running) between the particles involved. Hence\n\n$Q_{\\text{W}}\\approx 2\\,T_{3}-Q=\\operatorname {sgn}(Q)\\,\\left(1-\\left|Q\\right|\\right)~,$",
null,
"since by convention $~\\operatorname {sgn} T_{3}\\equiv \\operatorname {sgn} Q~,$",
null,
"and for all fermions involved in the weak interaction $~T_{3}=\\pm {\\tfrac {1}{2}}~.$",
null,
"## Electroweak theory\n\nThe Standard Model of particle physics describes the electromagnetic interaction and the weak interaction as two different aspects of a single electroweak interaction. This theory was developed around 1968 by Sheldon Glashow, Abdus Salam, and Steven Weinberg, and they were awarded the 1979 Nobel Prize in Physics for their work. The Higgs mechanism provides an explanation for the presence of three massive gauge bosons (\nW+\n,\nW\n,\nZ0\n, the three carriers of the weak interaction) and the massless photon (?, the carrier of the electromagnetic interaction).\n\nAccording to the electroweak theory, at very high energies, the universe has four components of the Higgs field whose interactions are carried by four massless gauge bosons - each similar to the photon - forming a complex scalar Higgs field doublet. Likewise, there are four massless electroweak bosons. However, at low energies, this gauge symmetry is spontaneously broken down to the U(1) symmetry of electromagnetism, since one of the Higgs fields acquires a vacuum expectation value. Naïvely, the symmetry-breaking would be expected to produce three massless bosons, but instead those \"extra\" three Higgs bosons become incorporated into the three weak bosons which then acquire mass through the Higgs mechanism. These three composite bosons are the\nW+\n,\nW\n, and\nZ0\nbosons of the weak interaction. 
The fourth electroweak gauge boson is the photon of electromagnetism, which does not couple to any of the Higgs fields and remains massless.\n\nThis theory has made a number of predictions, including a prediction of the masses of the\nZ\nand\nW\nbosons before their discovery and detection in 1983.\n\nOn 4 July 2012, the CMS and the ATLAS experimental teams at the Large Hadron Collider independently announced that they had confirmed the formal discovery of a previously unknown boson of mass between 125 and 127 GeV/c², whose behaviour so far was \"consistent with\" a Higgs boson, while adding a cautious note that further data and analysis were needed before positively identifying the new boson as being a Higgs boson of some type. By 14 March 2013, a Higgs boson was tentatively confirmed to exist.\n\nIn a speculative case where the electroweak symmetry breaking scale were lowered, the unbroken SU(2) interaction would eventually become confining. Alternative models where SU(2) becomes confining above that scale appear quantitatively similar to the Standard Model at lower energies, but dramatically different above symmetry breaking.\n\n## Violation of symmetry\n\nThe laws of nature were long thought to remain the same under mirror reflection. The results of an experiment viewed via a mirror were expected to be identical to the results of a separately constructed, mirror-reflected copy of the experimental apparatus watched through the mirror. This so-called law of parity conservation was known to be respected by classical gravitation, electromagnetism and the strong interaction; it was assumed to be a universal law. However, in the mid-1950s Chen-Ning Yang and Tsung-Dao Lee suggested that the weak interaction might violate this law. 
Chien Shiung Wu and collaborators in 1957 discovered that the weak interaction violates parity, earning Yang and Lee the 1957 Nobel Prize in Physics.\n\nAlthough the weak interaction was once described by Fermi's theory, the discovery of parity violation and renormalization theory suggested that a new approach was needed. In 1957, Robert Marshak and George Sudarshan and, somewhat later, Richard Feynman and Murray Gell-Mann proposed a V - A (vector minus axial vector or left-handed) Lagrangian for weak interactions. In this theory, the weak interaction acts only on left-handed particles (and right-handed antiparticles). Since the mirror reflection of a left-handed particle is right-handed, this explains the maximal violation of parity. The V - A theory was developed before the discovery of the Z boson, so it did not include the right-handed fields that enter in the neutral current interaction.\n\nHowever, this theory allowed a compound symmetry CP to be conserved. CP combines parity P (switching left to right) with charge conjugation C (switching particles with antiparticles). Physicists were again surprised when in 1964, James Cronin and Val Fitch provided clear evidence in kaon decays that CP symmetry could be broken too, winning them the 1980 Nobel Prize in Physics. In 1973, Makoto Kobayashi and Toshihide Maskawa showed that CP violation in the weak interaction required more than two generations of particles, effectively predicting the existence of a then unknown third generation. This discovery earned them half of the 2008 Nobel Prize in Physics.\n\nUnlike parity violation, CP violation occurs only in rare circumstances. Despite its limited occurrence under present conditions, it is widely believed to be the reason that there is much more matter than antimatter in the universe, and thus forms one of Andrei Sakharov's three conditions for baryogenesis.\n\n## Footnotes\n\n1. 
^ The neutral pion, however, decays electromagnetically, and several mesons mostly decay strongly, when their quantum numbers allow.\n2. ^ Some hypothesised fermions, such as the sterile neutrinos, would have zero weak hypercharge - in fact, no gauge charges of any kind. Whether any such particles actually exist is an active area of research.\n3. ^ The only fermions which the\nZ0\ndoes not interact with are the hypothetical \"sterile\" neutrinos: Left-chiral anti-neutrinos and right-chiral neutrinos. They are called \"sterile\" because they would not interact with any Standard Model particle, but as yet remain entirely a conjecture; no such neutrinos are known to actually exist.\n\nThis article uses material from the Wikipedia page available here. It is released under the Creative Commons Attribution-Share-Alike License 3.0."
]
| [
null,
"https://www.popflock.com/images/logo/popflock-logo.gif",
null,
"https://upload.wikimedia.org/wikipedia/commons/thumb/4/4b/Weak_Decay_%28flipped%29.svg/280px-Weak_Decay_%28flipped%29.svg.png",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/c0f8e33acffaf0ec2e4a500edef1494e958c5ab6",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/7a488b80d99c0d04dbb19bd53adc1ccc7bf5984b",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/5cba2ab39755847ecc878204c555727d5c59e30d",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/a4573b42f4faa7e4f0baea83fb03f8273f9e07ee",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/83d2deb82107a7a43eb4f927b2fdaada923a521a",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/103c0ed21ab4fd1e9d5a8771a7c12b1bb9287630",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/c12d39f5b8dc71e7bc495157d950bdc2ba0313a7",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/47b778253ee876faef75926a15a0448d647c3ab7",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/49a9710a5275da00a025d45dc5f55aba18bea2ad",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/f1390b3542022b32e1cba4f443266650fd5d1022",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/f7d2394c1acad286cce8a9addc167651a2e69dd0",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/05351355d6dcc1f0c53f92cef5f0c433b490c88d",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/cc7953e2ef24ff5614ebccfcdd684cc8932766e6",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9153397,"math_prob":0.95480627,"size":17195,"snap":"2021-21-2021-25","text_gpt3_token_len":3899,"char_repetition_ratio":0.17619684,"word_repetition_ratio":0.02119883,"special_character_ratio":0.21419017,"punctuation_ratio":0.09447814,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9752011,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30],"im_url_duplicate_count":[null,null,null,null,null,4,null,null,null,null,null,null,null,null,null,null,null,null,null,4,null,5,null,3,null,4,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-13T12:51:50Z\",\"WARC-Record-ID\":\"<urn:uuid:57e0f653-70ac-4946-aa00-0a84e4943436>\",\"Content-Length\":\"226594\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d71f6cee-e8fe-4c13-8ec7-ee2cef69be94>\",\"WARC-Concurrent-To\":\"<urn:uuid:cfac8692-d325-4315-83e8-877a53f35f8e>\",\"WARC-IP-Address\":\"75.98.175.100\",\"WARC-Target-URI\":\"https://www.popflock.com/learn?s=Weak_force\",\"WARC-Payload-Digest\":\"sha1:TKW43WSQV2EY7VDBYVI2CMKG2PAWRAFH\",\"WARC-Block-Digest\":\"sha1:FHBLC2XOI3QGSBWWRE3JTIBXL6VWAJPF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487608702.10_warc_CC-MAIN-20210613100830-20210613130830-00542.warc.gz\"}"} |
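The approximation quoted in the excerpt above, $Q_{\text{W}} \approx 2\,T_3 - Q = \operatorname{sgn}(Q)\,(1-|Q|)$, can be sanity-checked numerically for one generation of fermions. A minimal sketch using the standard quantum numbers (the helper names are mine, not the article's):

```python
# Check that 2*T3 - Q equals sgn(Q) * (1 - |Q|) for left-handed fermions,
# using the convention from the text that sgn(T3) matches sgn(Q).

def weak_charge(t3, q):
    """Approximate weak charge, valid when sin^2(theta_W) is close to 1/4."""
    return 2 * t3 - q

def sgn(x):
    return (x > 0) - (x < 0)

# (name, T3, Q): standard weak-isospin third components and electric charges
fermions = [("up quark", 0.5, 2 / 3),
            ("down quark", -0.5, -1 / 3),
            ("electron", -0.5, -1.0)]

for name, t3, q in fermions:
    assert abs(weak_charge(t3, q) - sgn(q) * (1 - abs(q))) < 1e-12, name

print(weak_charge(-0.5, -1.0))  # 0.0, i.e. the electron's weak charge nearly vanishes
```

For the electron ($Q = -1$) both sides give 0, which is exactly the $1-\left|Q\right|$ suppression the formula encodes.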
https://www.justfreetools.com/en/joule-to-kcal-conversion | [
"# Joules to kilocalories free online conversion\n\nJoules (J) to kilocalories (kcal) energy conversion calculator and how to convert.\n\n## Joules to kilocalories conversion calculator\n\nEnter the energy in joules and press the Convert button:\n\nJ\nkcal-th\n\nkcal to joules conversion »\n\n## How to convert from joules to kcal\n\n### Joules to thermochemical / food kilocalories\n\n1 kcal-th = 4184 J\n\nThe energy in kilocalories E(kcal) is equal to the energy in joules E(J) divided by 4184:\n\nE(kcal) = E(J) / 4184\n\n#### Example\n\nConvert 5000 joules to kilocalories.\n\nE(kcal) = 5000 J / 4184 = 1.195 kcal\n\n### Joules to international kilocalories\n\n1 kcal-IT = 4186.8 J\n\nThe energy in international kilocalories E(kcal-IT) is equal to the energy in joules E(J) divided by 4186.8:\n\nE(kcal-IT) = E(J) / 4186.8\n\n#### Example\n\nConvert 5000 joules to kilocalories.\n\nE(kcal-IT) = 5000 J / 4186.8 = 1.194 kcal-IT\n\n### Joules to 15°C kilocalories\n\n1 kcal15 = 4185.5 J\n\nThe energy in 15°C kilocalories E(kcal15) is equal to the energy in joules E(J) divided by 4185.5:\n\nE(kcal15) = E(J) / 4185.5\n\n#### Example\n\nConvert 5000 joules to kilocalories.\n\nE(kcal15) = 5000 J / 4185.5 = 1.195 kcal15\n\n### Joules to 20°C kilocalories\n\n1 kcal20 = 4182 J\n\nThe energy in 20°C kilocalories E(kcal20) is equal to the energy in joules E(J) divided by 4182:\n\nE(kcal20) = E(J) / 4182\n\n#### Example\n\nConvert 5000 joules to kilocalories.\n\nE(kcal20) = 5000 J / 4182 = 1.196 kcal20"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.77747303,"math_prob":0.9920192,"size":6728,"snap":"2022-27-2022-33","text_gpt3_token_len":1639,"char_repetition_ratio":0.28301606,"word_repetition_ratio":0.14444445,"special_character_ratio":0.22547562,"punctuation_ratio":0.092982456,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99809337,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-16T06:57:15Z\",\"WARC-Record-ID\":\"<urn:uuid:a982ecb9-3009-4b2d-991e-9b64eea397ba>\",\"Content-Length\":\"119447\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e3945d42-2ef4-483e-b280-44547dfc9fb6>\",\"WARC-Concurrent-To\":\"<urn:uuid:34cdf9fe-51e4-4135-80f6-ed7074fe2371>\",\"WARC-IP-Address\":\"91.239.201.16\",\"WARC-Target-URI\":\"https://www.justfreetools.com/en/joule-to-kcal-conversion\",\"WARC-Payload-Digest\":\"sha1:GCZUXQGDD7EPFAJXBV2CHCI7NZZFT5ZX\",\"WARC-Block-Digest\":\"sha1:DLWKWV7VTXTWIWTL55BVIORVYQ4FDRND\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882572221.38_warc_CC-MAIN-20220816060335-20220816090335-00792.warc.gz\"}"} |
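The four conversion rules on this page differ only in the constant, so they collapse into one small table-driven helper. A sketch reproducing the page's worked examples (the function and key names are illustrative, not from the site):

```python
# Joules -> kilocalories for the four calorie definitions listed above.
J_PER_KCAL = {
    "thermochemical": 4184.0,   # kcal-th (also the food calorie)
    "international": 4186.8,    # kcal-IT
    "15C": 4185.5,
    "20C": 4182.0,
}

def joules_to_kcal(joules, definition="thermochemical"):
    """E(kcal) = E(J) / (joules per kilocalorie)."""
    return joules / J_PER_KCAL[definition]

# The page's 5000 J examples:
print(round(joules_to_kcal(5000), 3))                   # 1.195
print(round(joules_to_kcal(5000, "international"), 3))  # 1.194
```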
https://www.cheenta.com/intersection-of-curves-singapore-math-olympiad/ | [
"# Understand the problem\n\nConsider the two curves",
null,
"$y=2x^3+6x+1$ and",
null,
"$y=-3/x^2$ in the Cartesian plane. Find the number of distinct points at which these two curves intersect.\n\n#### Singapore Math Olympiad 2006 (Senior Section – Problem 23)\n\n##### Topic\nIntersection of curves\n##### Difficulty Level\n5 out of 10\n##### Suggested Book\nPre College Mathematics\n\n# Start with hints\n\nDo you really need a hint? Try it first!\n\nTry to think how to find the intersection points using these two given equations",
null,
"$y=2x^3+6x+1$ and",
null,
"$y=-3/x^{2}$\n\nTry to compare the two values of $y$, i.e.",
null,
"$2x^3+6x+1=-3/x^{2}~.$ Multiplying both sides by $x^2$ (note $x \\neq 0$) gives $2x^5+6x^3+x^2+3=0$. Do we get the two factors",
null,
"$2x^3+1=0$ and",
null,
"$x^2+3=0$\n\nIn the end we will consider only",
null,
"$2x^3+1=0$ as",
null,
"$x^2+3>0$\n\nThus $2x^3+1=0$, i.e. $x^3=-\\frac{1}{2}$ and $x=-\\frac{1}{\\sqrt[3]{2}}$; substituting this into either curve gives the corresponding value of $y$, and we will get the answer.",
null,
"$\\left(-\\frac{1}{\\sqrt[3]{2}},\\; -3\\sqrt[3]{4}\\right)$\n\nSo the number of distinct intersection points is 1\n\n# Connected Program at Cheenta\n\n#### Math Olympiad Program\n\nMath Olympiad is the greatest and most challenging academic contest for school students. Brilliant school students from over 100 countries participate in it every year. Cheenta works with small groups of gifted students through an intense training program. It is a deeply personalized journey toward intellectual prowess and technical sophistication."
]
| [
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%200%200'%3E%3C/svg%3E ",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%200%200'%3E%3C/svg%3E ",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%200%200'%3E%3C/svg%3E ",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%200%200'%3E%3C/svg%3E ",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%200%200'%3E%3C/svg%3E ",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%200%200'%3E%3C/svg%3E ",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%200%200'%3E%3C/svg%3E ",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%200%200'%3E%3C/svg%3E ",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%200%200'%3E%3C/svg%3E ",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%200%200'%3E%3C/svg%3E ",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8936664,"math_prob":0.9948888,"size":1137,"snap":"2019-35-2019-39","text_gpt3_token_len":238,"char_repetition_ratio":0.097969994,"word_repetition_ratio":0.0,"special_character_ratio":0.19613017,"punctuation_ratio":0.053398058,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99957544,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-23T10:47:46Z\",\"WARC-Record-ID\":\"<urn:uuid:155afdc7-3e3e-4f4d-9891-da52b4eec2a5>\",\"Content-Length\":\"95070\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c164c967-661d-4940-8cb6-e38698631176>\",\"WARC-Concurrent-To\":\"<urn:uuid:74082ac3-1162-4c67-bc03-dc7b526d4892>\",\"WARC-IP-Address\":\"77.104.170.218\",\"WARC-Target-URI\":\"https://www.cheenta.com/intersection-of-curves-singapore-math-olympiad/\",\"WARC-Payload-Digest\":\"sha1:RGA7PKYXL4G4U7PDZLPYQQN6G4IFNECV\",\"WARC-Block-Digest\":\"sha1:TUV2VJ76GF3S734XZUBPF7DRCKIDWPCL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027318375.80_warc_CC-MAIN-20190823104239-20190823130239-00016.warc.gz\"}"} |
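The hints can be verified numerically: clearing the denominator in $2x^3+6x+1=-3/x^2$ gives $2x^5+6x^3+x^2+3=0$, which factors as $(2x^3+1)(x^2+3)$, so the only real root is $x=-1/\sqrt[3]{2}$. A quick check (variable names are mine):

```python
# Confirm the single intersection of y = 2x^3 + 6x + 1 and y = -3/x^2.

def p(x):
    """Quintic obtained by multiplying the intersection equation by x^2."""
    return 2 * x**5 + 6 * x**3 + x**2 + 3

x_root = -(0.5 ** (1 / 3))    # real root of 2x^3 + 1 = 0
assert abs(p(x_root)) < 1e-9  # it satisfies the quintic

# Both curves agree there, at y = -3 * 4**(1/3):
y1 = 2 * x_root**3 + 6 * x_root + 1
y2 = -3 / x_root**2
assert abs(y1 - y2) < 1e-9
print(round(y1, 4))  # -4.7622
```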
https://practicaldev-herokuapp-com.global.ssl.fastly.net/coreyja/comment/d8da | [
"### re: Daily Challenge #18 - Triple Trouble\n\nre: Hmmm.... reading other answers I might have misinterpreted what was required in this challenge. What is it exactly? For example, if num1 = 121 and ...\n\nI'm getting caught up a few days behind, but I understood this is three digits in a row. Which I think is confirmed by the linked challenge examples.\n\nSo `num1 = 8777 and num2 = 877` would return true for this!\n\nStarting on my version now\n\nTo achieve that, the only change would be in the `if` statement. Instead of doing `x*3` and `x*2`, it would be `''+x+x+x` and `''+x+x` respectively:\n\n``````if (strNum1.indexOf(''+x+x+x) > -1 && strNum2.indexOf(''+x+x) > -1) {\n``````",
null,
"",
null,
""
]
| [
null,
"https://practicaldev-herokuapp-com.freetls.fastly.net/assets/twitter-logo-42be7109de07f8c991a9832d432c9d12ec1a965b5c0004bca9f6aa829ae43209.svg",
null,
"https://practicaldev-herokuapp-com.freetls.fastly.net/assets/github-logo-6a5bca60a4ebf959a6df7f08217acd07ac2bc285164fae041eacb8a148b1bab9.svg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.89343596,"math_prob":0.59867203,"size":1079,"snap":"2019-35-2019-39","text_gpt3_token_len":315,"char_repetition_ratio":0.10697675,"word_repetition_ratio":0.5343915,"special_character_ratio":0.29842445,"punctuation_ratio":0.09322034,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9738789,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-16T12:47:26Z\",\"WARC-Record-ID\":\"<urn:uuid:9a4f155e-6bf1-49c3-b310-f684241a504e>\",\"Content-Length\":\"105131\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:72a26367-32f4-48f0-83e1-db91d722b9f2>\",\"WARC-Concurrent-To\":\"<urn:uuid:eabaad03-70a3-4ecc-a059-da261acbc6da>\",\"WARC-IP-Address\":\"151.101.201.194\",\"WARC-Target-URI\":\"https://practicaldev-herokuapp-com.global.ssl.fastly.net/coreyja/comment/d8da\",\"WARC-Payload-Digest\":\"sha1:X7R7K2B5DOEXACDCNRRRNKZJRNCAN4RU\",\"WARC-Block-Digest\":\"sha1:BCPJKCNLF6LPEHIUF4AIZ3P7IXS37RT2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514572556.54_warc_CC-MAIN-20190916120037-20190916142037-00131.warc.gz\"}"} |
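The reading settled on in this thread (some digit appearing three times in a row in num1 and the same digit twice in a row in num2) is compact to express in Python too; this is my translation of the idea, not the poster's JavaScript:

```python
# True if some digit d occurs as "ddd" inside num1 and as "dd" inside num2.

def triple_double(num1, num2):
    s1, s2 = str(num1), str(num2)
    return any(d * 3 in s1 and d * 2 in s2 for d in "0123456789")

print(triple_double(8777, 877))  # True, the example from the comment
print(triple_double(121, 12))    # False, no tripled digit in 121
```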
https://canadam.math.ca/2019f/program/abs/agp2 | [
"",
null,
"Average Graph Parameters - Part II\nOrg: Lucas Mol and Ortrud Oellermann (University of Winnipeg)\n[PDF]\n\nAsymptotic resolution of a question of Plesník [PDF]\n\nFix $d \\ge 3$. We show the existence of a constant $c>0$ such that any graph of diameter at most $d$ has average distance at most $d-c \\frac{d^{3/2}}{\\sqrt n}$, where $n$ is the number of vertices. Moreover, we exhibit graphs certifying sharpness of this bound up to the choice of $c$. This constitutes an asymptotic solution to a longstanding open problem of Plesník. Furthermore we solve that open problem of Plesník exactly for digraphs in case the order is large compared with the diameter.\n\nPETER DANKELMANN, University of Johannesburg\nThe average distance of maximal planar graphs [PDF]\n\nThe average distance $\\mu(G)$ of a finite connected graph $G$ is defined as the arithmetic mean of the distances between all pairs of distinct vertices of $G$. We show that for every maximal planar graph $G$ of order $n$, $\\mu(G) \\leq \\frac{1}{18}n +O(n^{1/2}),$ which asymptotically proves a recent conjecture by Che and Collins. We further show that this bound can be improved for $4$-connected and $5$-connected maximal planar graphs.\n\nThis is joint work with Eva Czabarka, Trevor Olsen, and Laszlo Szekely.\n\nSUIL O, State University of New York, Korea\nAverage connectivity and average edge-connectivity in graphs [PDF]\n\nConnectivity and edge-connectivity of a graph measure the difficulty of breaking the graph apart, but they are very much affected by local aspects like vertex degree. Average connectivity (and analogously, average edge-connectivity) has been introduced to give a more refined measure of the global “amount” of connectivity. In this talk, we prove a relationship between the average connectivity and the matching number in graphs. We also give the best lower bound for the average edge-connectivity over $n$-vertex connected cubic graphs. 
In addition, we show that this family has the fewest perfect matchings among cubic graphs that have perfect matchings.\n\nORTRUD OELLERMANN, University of Winnipeg\nThe average connectivity of minimally $2$-connected graphs [PDF]\n\nThe connectivity between a pair $u,v$ of vertices in a graph equals the maximum number of pairwise internally disjoint $u$--$v$ paths. The average connectivity, $\\overline{\\kappa}(G)$ of a graph $G$, is the average connectivity between pairs of vertices taken over all pairs. Minimally $2$-connected graphs with maximum average connectivity are characterized. It is shown that $\\overline{\\kappa}(G)\\le 9/4$ if $G$ is minimally $2$-connected. For a graph $G$, $\\overline{\\kappa}_{\\max}(G)$ is the maximum average connectivity among all orientations of $G$. We obtain upper and lower bounds for $\\overline{\\kappa}_{\\max}(G)$ and for $\\overline{\\kappa}_{\\max}(G)/\\overline{\\kappa}(G)$ for all minimally $2$-connected graphs $G$. Sharpness for the various bounds is discussed."
]
| [
null,
"https://canadam.math.ca/2019f/styles/global-1/transparent.gif",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.88086,"math_prob":0.99807745,"size":2942,"snap":"2019-43-2019-47","text_gpt3_token_len":726,"char_repetition_ratio":0.16201498,"word_repetition_ratio":0.0,"special_character_ratio":0.22875595,"punctuation_ratio":0.07854406,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9990017,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-11T20:41:10Z\",\"WARC-Record-ID\":\"<urn:uuid:9548aeb0-00e1-4467-8676-1ea260cd2bd1>\",\"Content-Length\":\"11254\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bf5b4dcd-6517-4877-afe3-44e39fbae96e>\",\"WARC-Concurrent-To\":\"<urn:uuid:c838af2b-bb44-4b98-93d0-45be230709e6>\",\"WARC-IP-Address\":\"137.122.61.199\",\"WARC-Target-URI\":\"https://canadam.math.ca/2019f/program/abs/agp2\",\"WARC-Payload-Digest\":\"sha1:JCKVO6OBWROIXGEHFMCGJCZVS6DZ5NAV\",\"WARC-Block-Digest\":\"sha1:ON2JQJ7WGFF6TJTKWGCB265DAU5ARORX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496664437.49_warc_CC-MAIN-20191111191704-20191111215704-00345.warc.gz\"}"} |
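The average distance $\mu(G)$ in these abstracts is simply the arithmetic mean of shortest-path distances over all unordered pairs of distinct vertices, so for small unweighted graphs it can be computed directly with breadth-first search. A generic sketch, not tied to any construction from the talks:

```python
from collections import deque

# Average distance mu(G): mean of d(u, v) over unordered pairs u != v,
# with the graph given as an adjacency dict and distances found by BFS.

def average_distance(adj):
    verts = list(adj)
    total = pairs = 0
    for i, s in enumerate(verts):
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        for t in verts[i + 1:]:   # count each unordered pair once
            total += dist[t]
            pairs += 1
    return total / pairs

# Path on 3 vertices: distances 1, 1, 2, so mu = 4/3
print(average_distance({0: [1], 1: [0, 2], 2: [1]}))  # 1.3333333333333333
```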
https://mixomics.org/mixmc/mixmc-pre-processing/ | [
"# mixMC: Pre-processing\n\nHere are our data processing steps for microbiome data analysis with the package:\n\n1 – Add an offset of 1 to the whole data matrix to deal with zeroes after centered log ratio transformation\n\n2 – Prefilter the raw count data to remove features with low counts across all samples\n\n3 – Centered log-ratio transformation\n\nNote: Steps 1 and 2 can be swapped as the prefiltering is based on a percentage of total counts.\n\nWe give an example in the FAQ to extract the data from the phyloseq package.\n\nHere we use the Koren data set as a working example for pre-filtering and normalisation as the first step in data analysis, using our mixOmics framework for microbial communities analysis, mixMC. Whole genome sequencing data can also be processed similarly to what is proposed here. For 16S data, we analyse the taxa at the OTU level, but we can report the results at the family or species level after OTU selection (see examples in the Tab).\n\nThe Koren.16S data set includes 43 samples from three bodysites (oral, gut and plaque) in patients with atherosclerosis, measured on 980 OTUs. The data are directly available from the mixOmics package:\n\nlibrary(mixOmics)\ndata(\"Koren.16S\")\n# ls() # lists objects in the working directory\n\n\nThe Koren data set (Koren.16S) includes 16S data at various stages of the processing steps.\n\n• Koren.16S$data.raw: raw counts with an offset of 1 (note: the offset is applied to the whole dataset)\n• Koren.16S$data.TSS: the Total Sum Scaling on the raw +1 counts (proportional counts, we do not need this for our multivariate analyses).\n\nWe also have various meta data information on the OTUs and the samples:\n\n• Koren.16S$taxonomy: a data frame where the row names (OTU IDs) match the row names of the count data and indicate the taxonomy levels (7 columns) of each OTU,\n• Koren.16S$indiv and Koren.16S$bodysite: meta data information of the samples. 
## Offset\n\nIn this example we have already added an offset of 1 to all data (i.e. data.raw = raw counts + 1). It may not be the case for your own data. This offset is necessary as we will later log transform the data (as log transformation does not like zeroes!). Note that the offset will not circumvent the zero values issue, as after log ratio transformation we will still have zero values. This is a pragmatic step to handle log transformation on zeroes, but feel free to use different approaches. We load the offset count data:\n\ndata.offset <- Koren.16S$data.raw\ndim(data.offset) # check dimensions\n\n## 43 980\n\ndata.offset[1:5,1:5] # inspect the data\n\n## 2221284 673010 410908 177792 4294607\n## Feces659 1 1 1 99 1\n## Feces309 1 1 1 1 1\n## Mouth599 1 1 2 1 1\n## Mouth386 1 1 1 1 1\n## Feces32 1 1 1 25 1\n\n# check there are no zeroes in these offset data\n# for the CLR transfo\nsum(which(data.offset == 0)) # ok!\n\n## 0\n\n\n## Pre-filtering\n\nWe use a pre-filtering step to remove OTUs for which the sum of counts is below a certain threshold compared to the total sum of all counts. The function is given below, and was adapted from Arumugam et al. (2011).\n\n# function to perform pre-filtering\nlow.count.removal = function(\ndata, # OTU count data frame of size n (sample) x p (OTU)\npercent=0.01 # cutoff chosen\n){\nkeep.otu = which(colSums(data)*100/(sum(colSums(data))) > percent)\ndata.filter = data[,keep.otu]\nreturn(list(data.filter = data.filter, keep.otu = keep.otu))\n}\n\n\nThen run this function on the data. 
Before you apply this function, make sure that you have your samples in rows and variables in columns!\n\ndim(data.offset) # check samples are in rows\n\n## 43 980\n\n# call the function then apply on the offset data\nresult.filter <- low.count.removal(data.offset, percent=0.01)\ndata.filter <- result.filter$data.filter # check the number of variables kept after filtering # in this particular case we had already filtered the data so no was change made, but from now on we will work with 'data.filter' length(result.filter$keep.otu)\n\n## 980\n\n\nIn other examples (see ) we started from 43,146 OTUs and ended with 1,674 OTUs after pre-filtering. While this pre-filtering may appear drastic (and is highly dependent on the bioinformatics steps performed beforehand, such as OTU picking), it will avoid spurious results in the downstream statistical analysis. Feel free to increase that threshold for your own needs, or not use it at all if you prefer.\n\nAn extra step we recommend is to check how heterogeneous our library sizes are per sample. Here we calculate the sum of all counts per sample, and represent the values in a barplot.\n\nlib.size <- apply(data.filter, 1, sum)\nbarplot(lib.size)",
null,
"We want to ensure that the number of counts for each sample is 'relatively' similar and that there are no obvious outlier samples. Those samples may also appear as outliers in PCA plots downstream. However, as for any sample outlier removal, ensure that you have a good reason to remove them!\n\n# we can investigate further some very high lib sizes,\n# just in case those samples influence the downstream analysis:\nwhich(lib.size > 15000)\n\n## Plaque244 Plaque235\n## 7 8\n\n\n## Log-ratio transformation\n\nMicrobiome data are compositional, because of technical, biological and computational reasons. Researchers often calculates proportions from the data, using Total Sum Scaling. Proportion data are restricted to a space where the sum of all OTU proportions for a given sample sums to 1. Using standard statistical methods on such data may lead to spurious results. Likewise, any data that are compositional in nature (such as microbiome) are interpreted into relative counts.\n\nTransforming compositional data using log ratios such as Centered Log Ratio transformation (CLR) allows us to circumvent this issue as proposed by . The CLR is our transformation of choice in mixOmics.\n\nNote: as the CLR already divides each count by the geometric mean, it does not make a difference whether we apply TSS + CLR, or CLR on the count (filtered) data directly. This is why we are skipping the TSS step here.\n\nThere are two ways of log-ratio transforming the data in mixOmics:\n\n• Some of our functions pca, plsda directly include the argument logratio = 'CLR', so all you need to do is include your filtered offset data and add this argument (see example below).\n\n• Some functions currently do not include the log-ratio argument. In this case, you will need to use the logratio.transfo function as shown below. 
You can also use this function if you only have access to TSS (proportions) data and those were not offset (see Note below).\n\n# we input the data as a matrix, here no need to add an offset as it is already done\ndata.clr <- logratio.transfo(as.matrix(data.filter), logratio = 'CLR', offset = 0)\n\n\nNote: The argument offset is not necessary here, as we went from raw counts, to offset. But there are cases where you may not have access to raw counts but instead TSS data calculated on raw counts, with no offset. The presence of zeroes will make the log-ratio transformation impossible. In this case, you can add a 'tiny' offset using this argument.\n\nLet's do our first PCA, using either the argument logratio = 'CLR' in PCA:\n\npca.result <- pca(data.filter, logratio = 'CLR')\nplotIndiv(pca.result, group = Koren.16S$bodysite, title = 'My first PCA plot')",
null,
"Or using the CLR data calculated with the log.ratio function. The results should be the same as earlier: pca.clr <- pca(data.clr) plotIndiv(pca.clr, group = Koren.16S$bodysite, title = 'My second PCA plot')",
null,
"We can clearly see from this plot that two samples stand out. They correspond to those with a large library size despite a scaling (CLR) step.\n\n# FAQ\n\n## Can I apply another method (multivariate or univariate) on the CLR data?\n\nIn that case you can use our external function logratio.transfo, see ?logratio.transfo. Make sure you apply it to the data.raw +1 first, it will be easier than having to add a small offset by using the logratio.transfo function. The log ratio transformation is crucial when dealing with compositional data!, unless the compositional nature of the data is accounted for directly in the statistical methods. See also the reference .\n\n## Help! I have a phyloseq object!\n\nWe explain below step by step how to extract the data in the right format from phyloseq. This code was written by Ms Laetitia Cardona from INRAE (thank you!)\n\nlibrary(phyloseq)\n# load the data from the phyloseq package\ndata(GlobalPatterns)\n?GlobalPatterns\n\n# extraction of the taxonomy\nTax <- tax_table(GlobalPatterns)\n\n# extract OTU table from phyloseq object\ndata.raw <- t(otu_table(GlobalPatterns)) # samples should be in row and variables in column\n\n# offset\ndata.offset <- data.raw+1\nsum(which(data.offset == 0)) # ok\n\n## 0\n\ndim(data.offset) # check dimensions\n\n## 26 19216\n\ndata.offset[1:5,1:5] # inspect the data\n\n## OTU Table: [5 taxa and 5 samples]\n## taxa are columns\n## 549322 522457 951 244423 586076\n## CL3 1 1 1 1 1\n## CC1 1 1 1 1 1\n## SV1 1 1 1 1 1\n## M31Fcsw 1 1 1 1 1\n## M11Fcsw 1 1 1 1 1\n\n# filter using the function above\n# call the function then apply on the offset data\nresult.filter <- low.count.removal(data.offset, percent=0.01)\ndata.filter <- result.filter$data.filter # check the number of variables kept after filtering length(result.filter$keep.otu)\n\n## 988\n\ndim(data.filter)\n\n## 26 988\n\n# checking the lib size per sample\nlib.size <- apply(data.filter, 1, sum)\n#barplot(lib.size) # we hid this here, go 
ahead\n#which(lib.size > 15000)\n\n#Filter the taxonomy corresponding to the selected OTU\n# you will need this for outputs and further analyses\nTax.filter <- Tax[unlist(result.filter$keep.otu),]\n\n# check that the OTUs are similar between files\nsummary(colnames(data.filter)==rownames(Tax.filter))\n\n## Mode TRUE\n## logical 988\n\n# sample metadata for plotting (this definition is implied but missing in the original extract)\nMetadata <- sample_data(GlobalPatterns)\n\n# PCA\npca.result <- pca(data.filter, logratio = 'CLR')\nplotIndiv(pca.result, group = Metadata$SampleType, title = 'My first PCA plot')",
null,
"# when the number of variables is large, consider the argument 'cutoff'\n# plotVar(pca.result, comp = c(1, 2),\n# cutoff = 0.5,\n# cex = 2,\n# title = 'PCA comp 1 - 2')\n\n# var.names: if true, shows the OTU ID, list shows the Class as stored in the taxonomy object\nplotVar(pca.result, comp = c(1, 2),\nvar.names = list(Tax.filter[,\"Class\"]),\ncutoff = 0.5,\ncex = 2,\ntitle = 'PCA comp 1 - 2')",
null,
"## I'd like to apply a CSS normalisation instead!\n\nCSS normalisation was specifically developed for sparse sequencing count data by Paulson et al. CSS can be considered as an extension of the quantile normalisation approach and consists of a cumulative sum up to a percentile determined using a data-driven approach. CSS corrects the bias in the assessment of differential abundance introduced by TSS and, according to the authors, would partially account for compositional data. Therefore, for CSS normalised data, no log-ratio transformation is applied as we consider that this normalisation method does not produce compositional data per se. A simple log transformation is then applied.\n\nWe give below the R script for CSS.\n\nlibrary(metagenomeSeq)\n\ndata.metagenomeSeq = newMRexperiment(t(data.filter),\nfeatureData=NULL, libSize=NULL, normFactors=NULL) #using filtered data\np = cumNormStat(data.metagenomeSeq) #default is 0.5\ndata.cumnorm = cumNorm(data.metagenomeSeq, p=p)\n#data.cumnorm\ndata.CSS = t(MRcounts(data.cumnorm, norm=TRUE, log=TRUE))\ndim(data.CSS) # make sure the data are in a correct format: number of samples in rows"
]
| [
null,
"https://i.imgur.com/q2bNe7Z.png",
null,
"https://i.imgur.com/NAdo2eu.png",
null,
"https://i.imgur.com/ktxHbYR.png",
null,
"https://i.imgur.com/J0o3Azc.png",
null,
"https://i.imgur.com/LBKxdIa.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.7705762,"math_prob":0.8856684,"size":12391,"snap":"2021-31-2021-39","text_gpt3_token_len":3230,"char_repetition_ratio":0.12997498,"word_repetition_ratio":0.048091225,"special_character_ratio":0.26196432,"punctuation_ratio":0.14407815,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97102433,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,2,null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-26T22:16:30Z\",\"WARC-Record-ID\":\"<urn:uuid:bfc37d07-2b1a-4f43-acbb-0bbbc7143383>\",\"Content-Length\":\"51377\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:138ef614-dd39-41c3-b0a8-de83aaa1313a>\",\"WARC-Concurrent-To\":\"<urn:uuid:9ed3c138-7331-4732-88a4-b20caaf419f7>\",\"WARC-IP-Address\":\"198.54.125.88\",\"WARC-Target-URI\":\"https://mixomics.org/mixmc/mixmc-pre-processing/\",\"WARC-Payload-Digest\":\"sha1:5OACHNGYIYOGJNCXDWVVGLLN3N6MO66L\",\"WARC-Block-Digest\":\"sha1:V3LHVYXHEVUOM6CXXBBIVT4PBFGEPUFZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057973.90_warc_CC-MAIN-20210926205414-20210926235414-00208.warc.gz\"}"} |
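A hedged aside on what `cumNormStat`/`cumNorm`/`MRcounts` in the row above are doing: CSS picks a per-sample quantile, sums each sample's counts up to that quantile, and uses that sum as the scaling factor before a log transform. The sketch below is an illustrative pure-Python re-implementation of that idea only — the function name `css_normalise`, the nearest-rank quantile rule, and the ×1000 rescaling are assumptions for the toy example, not metagenomeSeq's exact algorithm (which also chooses the percentile data-adaptively).

```python
import math

def css_normalise(counts, p=0.5):
    """Cumulative-sum-scaling sketch: `counts` is a list of samples,
    each a list of OTU counts. For each sample, take the p-th quantile
    of its non-zero counts, sum the counts at or below that quantile
    (the scaling factor), then return log2(1 + count / factor * 1000)."""
    normalised = []
    for sample in counts:
        nonzero = sorted(c for c in sample if c > 0)
        # nearest-rank p-th quantile of the non-zero counts
        q = nonzero[min(int(p * len(nonzero)), len(nonzero) - 1)]
        factor = sum(c for c in sample if c <= q)  # cumulative sum up to q
        normalised.append([math.log2(1 + c / factor * 1000) for c in sample])
    return normalised

# toy data: two samples, three OTUs
norm = css_normalise([[10, 0, 90], [1, 5, 4]])
```

As in the R snippet, zero counts stay at 0 after the log(1 + x) step, and samples with very different library sizes end up on comparable scales.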
https://homework.cpm.org/category/CCI_CT/textbook/apcalc/chapter/1/lesson/1.2.3/problem/1-57 | [
"",
null,
"",
null,
"### Home > APCALC > Chapter 1 > Lesson 1.2.3 > Problem 1-57\n\n1-57.\n\nSketch a graph of $y = 1 − x^3$. Then complete the following approach statements.\n\n1. As $x → ∞, y$ approaches?\n\nAs $x →∞$, the $y$-values keep going down.\n\nAs $x →∞, y→−∞$\n\n2. As $x → −∞$, $y$ approaches?\n\nAs $x →−∞$, do the $y$-values go up, down, or do they level off to a horizontal asymptote?\n\n3. As $x → 0^−$ ($0$ from the left), $y$ approaches?\n\nFollow the curve towards $x = 0$ from the left of that point. What $y$-value do you predict?\n\nUse the eTool below to view the graphs of the parts.\nClick the link at right for the full version of the eTool: Calc 1-57 HW eTool"
]
| [
null,
"https://homework.cpm.org/dist/7d633b3a30200de4995665c02bdda1b8.png",
null,
null,
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8265156,"math_prob":0.99915934,"size":475,"snap":"2021-21-2021-25","text_gpt3_token_len":141,"char_repetition_ratio":0.13800424,"word_repetition_ratio":0.0,"special_character_ratio":0.28842106,"punctuation_ratio":0.16363636,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99957055,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-11T09:55:23Z\",\"WARC-Record-ID\":\"<urn:uuid:a88afb6d-5204-4fb2-ac13-dbed11b61c30>\",\"Content-Length\":\"44981\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9b36a002-0ca5-4be7-919b-f3bb5d5433f2>\",\"WARC-Concurrent-To\":\"<urn:uuid:f89438fb-025a-4ac3-90a3-63bdaaa86ded>\",\"WARC-IP-Address\":\"104.26.7.16\",\"WARC-Target-URI\":\"https://homework.cpm.org/category/CCI_CT/textbook/apcalc/chapter/1/lesson/1.2.3/problem/1-57\",\"WARC-Payload-Digest\":\"sha1:ZYZ45Q6DLVVB4KJE6SKWPSBAKSOAYM3P\",\"WARC-Block-Digest\":\"sha1:24JXVVXFYBELVFHRPW6AB475WAKPG26I\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243991982.8_warc_CC-MAIN-20210511092245-20210511122245-00522.warc.gz\"}"} |
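For the record, the three "approach statements" in the problem above are limits of the cubic; one consistent worked reading (end behaviour of the odd-degree term, plus continuity at $x = 0$) is:

```latex
\lim_{x\to\infty}\bigl(1-x^{3}\bigr)=-\infty,\qquad
\lim_{x\to-\infty}\bigl(1-x^{3}\bigr)=+\infty,\qquad
\lim_{x\to 0^{-}}\bigl(1-x^{3}\bigr)=1.
```

The one-sided limit in part (c) equals $1$ because $1 - x^3$ is continuous, so the limit from the left agrees with the value at $x = 0$.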
https://zbmath.org/authors/?q=ai%3Afattorini.hector-o | [
"## Fattorini, Hector O.\n\nAuthor ID: fattorini.hector-o",
null,
"Published as: Fattorini, H. O.; Fattorini, Hector O.; Fattorini, Hector\n Documents Indexed: 106 Publications since 1964, including 6 Books Reviewing Activity: 410 Reviews Co-Authors: 4 Co-Authors with 13 Joint Publications 148 Co-Co-Authors\n\n### Co-Authors\n\n 87 single-authored 6 Frankowska, Hélène 5 Sritharan, Sivaguru S. 1 Ashyralyev, Allaberen 1 Browder, Felix Earl 1 Radnitz, A.\nall top 5\n\n### Serials\n\n 8 Pacific Journal of Mathematics 6 Journal of Differential Equations 5 Applied Mathematics and Optimization 4 SIAM Journal on Control and Optimization 4 Proceedings of the Royal Society of Edinburgh. Section A. Mathematics 4 SIAM Journal on Control 4 Encyclopedia of Mathematics and Its Applications 3 Journal of Mathematical Analysis and Applications 2 Journal of Functional Analysis 2 Revista de la Unión Matemática Argentina 2 Optimization 2 Problems of Control and Information Theory 2 SIAM Journal on Mathematical Analysis 2 Discrete and Continuous Dynamical Systems 2 Journal of Evolution Equations 2 Cubo 2 North-Holland Mathematics Studies 1 Archive for Rational Mechanics and Analysis 1 Communications on Pure and Applied Mathematics 1 Annali della Scuola Normale Superiore di Pisa. Classe di Scienze. Serie IV 1 Funkcialaj Ekvacioj. Serio Internacia 1 Indiana University Mathematics Journal 1 Journal of Optimization Theory and Applications 1 Mathematische Annalen 1 Mathematical Systems Theory 1 Michigan Mathematical Journal 1 Portugaliae Mathematica 1 Quarterly of Applied Mathematics 1 Rendiconti dell’Istituto di Matematica dell’Università di Trieste 1 MCSS. Mathematics of Control, Signals, and Systems 1 Differential and Integral Equations 1 Dynamic Systems and Applications 1 Proceedings of the Royal Society of London. Series A. Mathematical and Physical Sciences 1 Dynamics of Continuous, Discrete & Impulsive Systems. Series B. Applications & Algorithms 1 Acta Mathematica Scientia. Series B. 
(English Edition) 1 Cubo Matemática Educacional 1 Comptes Rendus Hebdomadaires des Séances de l’Académie des Sciences, Série A 1 Journal of the Society for Industrial & Applied Mathematics. Series A. Control 1 Nonlinear Analysis. Theory, Methods & Applications\nall top 5\n\n### Fields\n\n 50 Calculus of variations and optimal control; optimization (49-XX) 48 Systems theory; control (93-XX) 28 Partial differential equations (35-XX) 14 Ordinary differential equations (34-XX) 9 Operator theory (47-XX) 6 Functional analysis (46-XX) 5 Operations research, mathematical programming (90-XX) 4 Fluid mechanics (76-XX) 2 Integral equations (45-XX) 1 Difference and functional equations (39-XX) 1 Sequences, series, summability (40-XX) 1 Approximations and expansions (41-XX) 1 Integral transforms, operational calculus (44-XX) 1 Numerical analysis (65-XX) 1 Classical thermodynamics, heat transfer (80-XX)\n\n### Citations contained in zbMATH Open\n\n90 Publications have been cited 1,619 times in 1,209 Documents Cited by Year\nSecond order linear differential equations in Banach spaces. Zbl 0564.34063\nFattorini, H. O.\n1985\nExact controllability theorems for linear parabolic equations in one space dimension. Zbl 0231.93003\nFattorini, H. O.; Russell, D. L.\n1971\nInfinite dimensional optimization and control theory. Zbl 0931.49001\nFattorini, H. O.\n1999\nSingular perturbation and boundary layer for an abstract Cauchy problem. Zbl 0493.34005\nFattorini, H. O.\n1983\nOrdinary differential equations in linear topological spaces. I. Zbl 0175.15101\nFattorini, H. O.\n1969\nBoundary control systems. Zbl 0164.10902\nFattorini, H. O.\n1968\nUniform bounds on biorthogonal functions for real exponentials with an application to the control theory of parabolic equations. Zbl 0281.35009\nFattorini, H. O.; Russell, D. L.\n1974\nOrdinary differential equations in linear topological space. II. Zbl 0181.42801\nFattorini, H. O.\n1969\nInfinite dimensional linear control systems. 
The time optimal and norm optimal problems. Zbl 1135.93001\nFattorini, Hector O.\n2005\nSome remarks on complete controllability. Zbl 0168.34906\nFattorini, H. O.\n1966\nTime-optimal control of solutions of operational differential equations. Zbl 0143.16803\nFattorini, H. O.\n1964\nOn complete controllability of linear systems. Zbl 0155.15903\nFattorini, H. O.\n1967\nNecessary and sufficient conditions for optimal controls in viscous flow problems. Zbl 0800.49047\nFattorini, H. O.; Sritharan, S. S.\n1994\nA unified theory of necessary conditions for nonlinear nonconvex control systems. Zbl 0616.49015\nFattorini, H. O.\n1987\nExistence of optimal controls for viscous flow problems. Zbl 0786.76063\nFattorini, H. O.; Sritharan, S. S.\n1992\nThe Cauchy problem. With a foreword by Felix E. Browder. Zbl 1167.34300\nFattorini, Hector O.\n1983\nThe time-optimal control problem in Banach spaces. Zbl 0295.49009\nFattorini, H. O.\n1974\nNecessary conditions for infinite-dimensional control problems. Zbl 0737.49017\nFattorini, H. O.; Frankowska, H.\n1991\nBoundary control of temperature distributions in a parallelepipedon. Zbl 0311.93028\nFattorini, H. O.\n1975\nControl in finite time of differential equations in Banach space. Zbl 0135.36903\nFattorini, H. O.\n1966\nLocal controllability of a nonlinear wave equation. Zbl 0319.93009\nFattorini, H. O.\n1975\nUniformly bounded cosine functions in Hilbert space. Zbl 0185.38501\nFattorini, H. O.\n1970\nThe maximum principle for nonlinear nonconvex systems in infinite dimensional spaces. Zbl 0581.49018\nFattorini, H. O.\n1985\nOptimal control problems for distributed parameter systems in Banach spaces. Zbl 0797.49017\nFattorini, H. O.\n1993\nOptimal control problems with state constraints in fluid mechanics and combustion. Zbl 1068.49501\nFattorini, H. O.; Sritharan, S. S.\n1998\nTime and norm optimal controls: a survey of recent results and open problems. Zbl 1265.49028\nFattorini, H. 
O.\n2011\nThe underdetermined Cauchy problem in Banach spaces. Zbl 0231.34052\nFattorini, H. O.\n1973\nA note on fractional derivatives of semigroups and cosine functions. Zbl 0495.47026\nFattorini, H. O.\n1983\nEstimates for sequences biorthogonal to certain complex exponentials and boundary control of the wave equation. Zbl 0379.93030\nFattorini, H. O.\n1977\nThe hyperbolic singular perturbation problem: An operator theoretic approach. Zbl 0633.35006\nFattorini, H. O.\n1987\nExistence theory and the maximum principle for relaxed infinite- dimensional optimal control problems. Zbl 0796.93112\nFattorini, H. O.\n1994\nOptimal problems for nonlinear parabolic boundary control systems. Zbl 0819.93037\nFattorini, H. O.; Murphy, T.\n1994\nThe Cauchy problem with incomplete initial data in Banach spaces. Zbl 0208.16802\n1971\nOn uniform difference schemes for second-order singular perturbation problems in Banach spaces. Zbl 0745.65037\nAshyralyev, A.; Fattorini, H. O.\n1992\nOptimal control problems with state constraints for semilinear distributed-parameter systems. Zbl 0843.49015\nFattorini, H. O.\n1996\nA remark on the ’bang-bang’ principle for linear control systems in infinite-dimensional space. Zbl 0162.14404\nFattorini, H. O.\n1968\nRelaxed controls in infinite dimensional systems. Zbl 0758.93081\nFattorini, H. O.\n1991\nOptimal chattering controls for viscous flow. Zbl 0867.49004\nFattorini, H. O.; Sritharan, S. S.\n1995\nExtension and behavior at infinity of solutions of certain linear operational differential equations. Zbl 0181.42601\nFattorini, H. O.\n1970\nControllability of higher order linear systems. Zbl 0225.93003\nFattorini, H. O.\n1967\nOn the angle of dissipativity of ordinary and partial differential operators. Zbl 0544.47028\nFattorini, H. O.\n1984\nReachable states in boundary control of the heat equation are independent of time. Zbl 0404.49034\nFattorini, H. 
O.\n1978\nA remark on existence of solutions of infinite-dimensional noncompact optimal control problems. Zbl 0901.49002\nFattorini, H. O.\n1997\nA representation theorem for distribution semigroups. Zbl 0198.46504\nFattorini, H. O.\n1970\nThe time optimal problem for distributed control of systems described by the wave equation. Zbl 0427.49006\nFattorini, H. O.\n1977\nThe maximum principle in infinite dimension. Zbl 1028.49022\nFattorini, H. O.\n2000\nNonlinear infinite dimensional optimal control problems with state constraints and unbounded control sets. Zbl 0913.49012\nFattorini, H. O.\n1996\nRelaxation theorems, differential inclusions, and Filippov’s theorem for relaxed controls in semilinear infinite dimensional systems. Zbl 0806.93028\nFattorini, H. O.\n1994\nSome remarks on second order abstract Cauchy problems. Zbl 0488.34051\nFattorini, H. O.\n1981\nOn the Schrödinger singular perturbation problem. Zbl 0577.34050\nFattorini, H. O.\n1985\nSome remarks on the time optimal control problem in infinite dimension. Zbl 0971.49011\nFattorini, H. O.\n2000\nA survey of the time optimal problem and the norm optimal problem in infinite dimension. Zbl 1066.49500\nFattorini, H. O.\n2001\nThe maximum principle for linear infinite dimensional control systems with state constraints. Zbl 0991.93056\nFattorini, H. O.\n1995\nNecessary conditions for infinite dimensional control problems. Zbl 0675.49022\nFattorini, H. O.; Frankowska, H.\n1988\nTime optimality and the maximum principle in infinite dimension. Zbl 1005.49015\nFattorini, H. O.\n2001\nOptimal control problems for distributed parameter systems governed by semilinear parabolic equations in $$L^ 1$$ and $$L^{\\infty{}}$$ spaces. Zbl 0765.49014\nFattorini, H. O.\n1991\nOptimal control problems for nonlinear parabolic boundary control systems: The Dirichlet boundary condition. Zbl 0814.93080\nFattorini, H. O.; Murphy, T.\n1994\nConvergence of suboptimal controls: The point target case. Zbl 0693.49012\nFattorini, H. 
O.\n1990\nThe time-optimal problem for boundary control of the heat equation. Zbl 0342.49004\nFattorini, H. O.\n1976\nInvariance of the Hamiltonian in control problems for semilinear parabolic distributed parameter systems. Zbl 0812.93076\nFattorini, H. O.\n1994\nControl problems for parabolic equations with state constraints and unbounded control sets. Zbl 0908.93033\nFattorini, H. O.\n1998\nThe abstract Goursat problem. Zbl 0212.44403\nFattorini, H. O.\n1971\nExistence of singular extremals and singular functionals in reachable spaces. Zbl 0997.93050\nFattorini, H. O.\n2001\nRelaxed controls, differential inclusions, existence theorems, and the maximum principle in nonlinear infinite dimensional control theory. Zbl 0813.93043\nFattorini, H. O.\n1993\nVector valued distributions having a smooth convolution inverse. Zbl 0428.46030\nFattorini, H. O.\n1980\nStructure theorems for vector valued ultradistributions. Zbl 0444.46025\nFattorini, H. O.\n1980\nThe Cauchy problem. With a foreword by Felix E. Browder. Paperback reprint of the 1983 original. Zbl 1210.34001\nFattorini, Hector O.\n2009\nOn the growth of solutions to second order differential equations in Banach spaces. Zbl 0582.34048\nFattorini, H. O.\n1985\nSufficiency of the maximum principle for time optimality. Zbl 1125.49020\nFattorini, H. O.\n2005\nRelaxation in semilinear infinite dimensional systems modelling fluid flow control problems. Zbl 0827.93033\nFattorini, H. O.; Sritharan, S. S.\n1995\nOn Jordan operators and rigidity of linear control systems. Zbl 0163.10703\nFattorini, H. O.\n1967\nSur quelques équations différentielles pour les distributions vectorielles. Zbl 0179.19401\nFattorini, H. O.\n1969\nOn a class of differential equations for vector-valued distributions. Zbl 0191.44103\nFattorini, H. O.\n1970\nConstancy of the Hamiltonian in infinite dimensional control problems. Zbl 0685.49015\nFattorini, H. O.\n1989\nConvergence of suboptimal controls for point targets. Zbl 0605.49018\nFattorini, H. 
O.\n1987\nOptimal control of nonlinear systems: Convergence of suboptimal controls. II. Zbl 0642.49020\nFattorini, H. O.\n1987\nInfinite dimensional control problems with state constraints. Zbl 0744.49012\nFattorini, H. O.; Frankowska, H.\n1991\nThe maximum principle for control systems described by linear parabolic equations. Zbl 1078.49505\nFattorini, H. O.\n2001\nSome remarks of Pontryagin’s maximum principle for infinite dimensional control problems. Zbl 0711.49023\nFattorini, H. O.\n1990\nExplicit convergence estimates for suboptimal controls. I. Zbl 0714.49007\nFattorini, H. O.; Frankowska, H.\n1990\nTime and norm optimal controls for linear parabolic equations: necessary and sufficient conditions. Zbl 1106.49039\nFattorini, Hector O.\n2003\nVanishing of the costate in Pontryagin’s maximum principle and singular time optimal controls. Zbl 1112.49023\nFattorini, H. O.\n2004\nRelaxation in semilinear infinite dimensional control systems. Zbl 0792.93119\nFattorini, H. O.\n1994\nExplicit convergence estimates for suboptimal controls. II. Zbl 0731.49017\nFattorini, H. O.; Frankowska, H.\n1990\nInfinite dimensional optimization and control theory. Reprint of the 1999 hardback ed. Zbl 1200.49001\nFattorini, H. O.\n2010\nWeak and strong extensions of first-order differential operators in $$R^m$$. Zbl 0434.35023\nFattorini, H. O.\n1979\nConvergence and approximation theorems for vector values distributions. Zbl 0493.34057\nFattorini, H. O.\n1983\nExact controllability of linear systems in infinite dimensional spaces. Zbl 0316.93004\nFattorini, H. O.\n1975\nTwo point boundary value problems for operational differential equations. Zbl 0322.34045\nFattorini, H. O.\n1975\nSome remarks on convolution equations for vector-valued distributions. Zbl 0365.46035\nFattorini, H. O.\n1976\nTime and norm optimal controls: a survey of recent results and open problems. Zbl 1265.49028\nFattorini, H. O.\n2011\nInfinite dimensional optimization and control theory. 
Reprint of the 1999 hardback ed. Zbl 1200.49001\nFattorini, H. O.\n2010\nThe Cauchy problem. With a foreword by Felix E. Browder. Paperback reprint of the 1983 original. Zbl 1210.34001\nFattorini, Hector O.\n2009\nInfinite dimensional linear control systems. The time optimal and norm optimal problems. Zbl 1135.93001\nFattorini, Hector O.\n2005\nSufficiency of the maximum principle for time optimality. Zbl 1125.49020\nFattorini, H. O.\n2005\nVanishing of the costate in Pontryagin’s maximum principle and singular time optimal controls. Zbl 1112.49023\nFattorini, H. O.\n2004\nTime and norm optimal controls for linear parabolic equations: necessary and sufficient conditions. Zbl 1106.49039\nFattorini, Hector O.\n2003\nA survey of the time optimal problem and the norm optimal problem in infinite dimension. Zbl 1066.49500\nFattorini, H. O.\n2001\nTime optimality and the maximum principle in infinite dimension. Zbl 1005.49015\nFattorini, H. O.\n2001\nExistence of singular extremals and singular functionals in reachable spaces. Zbl 0997.93050\nFattorini, H. O.\n2001\nThe maximum principle for control systems described by linear parabolic equations. Zbl 1078.49505\nFattorini, H. O.\n2001\nThe maximum principle in infinite dimension. Zbl 1028.49022\nFattorini, H. O.\n2000\nSome remarks on the time optimal control problem in infinite dimension. Zbl 0971.49011\nFattorini, H. O.\n2000\nInfinite dimensional optimization and control theory. Zbl 0931.49001\nFattorini, H. O.\n1999\nOptimal control problems with state constraints in fluid mechanics and combustion. Zbl 1068.49501\nFattorini, H. O.; Sritharan, S. S.\n1998\nControl problems for parabolic equations with state constraints and unbounded control sets. Zbl 0908.93033\nFattorini, H. O.\n1998\nA remark on existence of solutions of infinite-dimensional noncompact optimal control problems. Zbl 0901.49002\nFattorini, H. O.\n1997\nOptimal control problems with state constraints for semilinear distributed-parameter systems. 
Zbl 0843.49015\nFattorini, H. O.\n1996\nNonlinear infinite dimensional optimal control problems with state constraints and unbounded control sets. Zbl 0913.49012\nFattorini, H. O.\n1996\nOptimal chattering controls for viscous flow. Zbl 0867.49004\nFattorini, H. O.; Sritharan, S. S.\n1995\nThe maximum principle for linear infinite dimensional control systems with state constraints. Zbl 0991.93056\nFattorini, H. O.\n1995\nRelaxation in semilinear infinite dimensional systems modelling fluid flow control problems. Zbl 0827.93033\nFattorini, H. O.; Sritharan, S. S.\n1995\nNecessary and sufficient conditions for optimal controls in viscous flow problems. Zbl 0800.49047\nFattorini, H. O.; Sritharan, S. S.\n1994\nExistence theory and the maximum principle for relaxed infinite- dimensional optimal control problems. Zbl 0796.93112\nFattorini, H. O.\n1994\nOptimal problems for nonlinear parabolic boundary control systems. Zbl 0819.93037\nFattorini, H. O.; Murphy, T.\n1994\nRelaxation theorems, differential inclusions, and Filippov’s theorem for relaxed controls in semilinear infinite dimensional systems. Zbl 0806.93028\nFattorini, H. O.\n1994\nOptimal control problems for nonlinear parabolic boundary control systems: The Dirichlet boundary condition. Zbl 0814.93080\nFattorini, H. O.; Murphy, T.\n1994\nInvariance of the Hamiltonian in control problems for semilinear parabolic distributed parameter systems. Zbl 0812.93076\nFattorini, H. O.\n1994\nRelaxation in semilinear infinite dimensional control systems. Zbl 0792.93119\nFattorini, H. O.\n1994\nOptimal control problems for distributed parameter systems in Banach spaces. Zbl 0797.49017\nFattorini, H. O.\n1993\nRelaxed controls, differential inclusions, existence theorems, and the maximum principle in nonlinear infinite dimensional control theory. Zbl 0813.93043\nFattorini, H. O.\n1993\nExistence of optimal controls for viscous flow problems. Zbl 0786.76063\nFattorini, H. O.; Sritharan, S. 
S.\n1992\nOn uniform difference schemes for second-order singular perturbation problems in Banach spaces. Zbl 0745.65037\nAshyralyev, A.; Fattorini, H. O.\n1992\nNecessary conditions for infinite-dimensional control problems. Zbl 0737.49017\nFattorini, H. O.; Frankowska, H.\n1991\nRelaxed controls in infinite dimensional systems. Zbl 0758.93081\nFattorini, H. O.\n1991\nOptimal control problems for distributed parameter systems governed by semilinear parabolic equations in $$L^ 1$$ and $$L^{\\infty{}}$$ spaces. Zbl 0765.49014\nFattorini, H. O.\n1991\nInfinite dimensional control problems with state constraints. Zbl 0744.49012\nFattorini, H. O.; Frankowska, H.\n1991\nConvergence of suboptimal controls: The point target case. Zbl 0693.49012\nFattorini, H. O.\n1990\nSome remarks of Pontryagin’s maximum principle for infinite dimensional control problems. Zbl 0711.49023\nFattorini, H. O.\n1990\nExplicit convergence estimates for suboptimal controls. I. Zbl 0714.49007\nFattorini, H. O.; Frankowska, H.\n1990\nExplicit convergence estimates for suboptimal controls. II. Zbl 0731.49017\nFattorini, H. O.; Frankowska, H.\n1990\nConstancy of the Hamiltonian in infinite dimensional control problems. Zbl 0685.49015\nFattorini, H. O.\n1989\nNecessary conditions for infinite dimensional control problems. Zbl 0675.49022\nFattorini, H. O.; Frankowska, H.\n1988\nA unified theory of necessary conditions for nonlinear nonconvex control systems. Zbl 0616.49015\nFattorini, H. O.\n1987\nThe hyperbolic singular perturbation problem: An operator theoretic approach. Zbl 0633.35006\nFattorini, H. O.\n1987\nConvergence of suboptimal controls for point targets. Zbl 0605.49018\nFattorini, H. O.\n1987\nOptimal control of nonlinear systems: Convergence of suboptimal controls. II. Zbl 0642.49020\nFattorini, H. O.\n1987\nSecond order linear differential equations in Banach spaces. Zbl 0564.34063\nFattorini, H. 
O.\n1985\nThe maximum principle for nonlinear nonconvex systems in infinite dimensional spaces. Zbl 0581.49018\nFattorini, H. O.\n1985\nOn the Schrödinger singular perturbation problem. Zbl 0577.34050\nFattorini, H. O.\n1985\nOn the growth of solutions to second order differential equations in Banach spaces. Zbl 0582.34048\nFattorini, H. O.\n1985\nOn the angle of dissipativity of ordinary and partial differential operators. Zbl 0544.47028\nFattorini, H. O.\n1984\nSingular perturbation and boundary layer for an abstract Cauchy problem. Zbl 0493.34005\nFattorini, H. O.\n1983\nThe Cauchy problem. With a foreword by Felix E. Browder. Zbl 1167.34300\nFattorini, Hector O.\n1983\nA note on fractional derivatives of semigroups and cosine functions. Zbl 0495.47026\nFattorini, H. O.\n1983\nConvergence and approximation theorems for vector values distributions. Zbl 0493.34057\nFattorini, H. O.\n1983\nSome remarks on second order abstract Cauchy problems. Zbl 0488.34051\nFattorini, H. O.\n1981\nVector valued distributions having a smooth convolution inverse. Zbl 0428.46030\nFattorini, H. O.\n1980\nStructure theorems for vector valued ultradistributions. Zbl 0444.46025\nFattorini, H. O.\n1980\nWeak and strong extensions of first-order differential operators in $$R^m$$. Zbl 0434.35023\nFattorini, H. O.\n1979\nReachable states in boundary control of the heat equation are independent of time. Zbl 0404.49034\nFattorini, H. O.\n1978\nEstimates for sequences biorthogonal to certain complex exponentials and boundary control of the wave equation. Zbl 0379.93030\nFattorini, H. O.\n1977\nThe time optimal problem for distributed control of systems described by the wave equation. Zbl 0427.49006\nFattorini, H. O.\n1977\nThe time-optimal problem for boundary control of the heat equation. Zbl 0342.49004\nFattorini, H. O.\n1976\nSome remarks on convolution equations for vector-valued distributions. Zbl 0365.46035\nFattorini, H. 
O.\n1976\nBoundary control of temperature distributions in a parallelepipedon. Zbl 0311.93028\nFattorini, H. O.\n1975\nLocal controllability of a nonlinear wave equation. Zbl 0319.93009\nFattorini, H. O.\n1975\nExact controllability of linear systems in infinite dimensional spaces. Zbl 0316.93004\nFattorini, H. O.\n1975\nTwo point boundary value problems for operational differential equations. Zbl 0322.34045\nFattorini, H. O.\n1975\nUniform bounds on biorthogonal functions for real exponentials with an application to the control theory of parabolic equations. Zbl 0281.35009\nFattorini, H. O.; Russell, D. L.\n1974\nThe time-optimal control problem in Banach spaces. Zbl 0295.49009\nFattorini, H. O.\n1974\nThe underdetermined Cauchy problem in Banach spaces. Zbl 0231.34052\nFattorini, H. O.\n1973\nExact controllability theorems for linear parabolic equations in one space dimension. Zbl 0231.93003\nFattorini, H. O.; Russell, D. L.\n1971\nThe Cauchy problem with incomplete initial data in Banach spaces. Zbl 0208.16802\n1971\nThe abstract Goursat problem. Zbl 0212.44403\nFattorini, H. O.\n1971\nUniformly bounded cosine functions in Hilbert space. Zbl 0185.38501\nFattorini, H. O.\n1970\nExtension and behavior at infinity of solutions of certain linear operational differential equations. Zbl 0181.42601\nFattorini, H. O.\n1970\nA representation theorem for distribution semigroups. Zbl 0198.46504\nFattorini, H. O.\n1970\nOn a class of differential equations for vector-valued distributions. Zbl 0191.44103\nFattorini, H. O.\n1970\nOrdinary differential equations in linear topological spaces. I. Zbl 0175.15101\nFattorini, H. O.\n1969\nOrdinary differential equations in linear topological space. II. Zbl 0181.42801\nFattorini, H. O.\n1969\nSur quelques équations différentielles pour les distributions vectorielles. Zbl 0179.19401\nFattorini, H. O.\n1969\nBoundary control systems. Zbl 0164.10902\nFattorini, H. 
O.\n1968\nA remark on the ’bang-bang’ principle for linear control systems in infinite-dimensional space. Zbl 0162.14404\nFattorini, H. O.\n1968\nOn complete controllability of linear systems. Zbl 0155.15903\nFattorini, H. O.\n1967\nControllability of higher order linear systems. Zbl 0225.93003\nFattorini, H. O.\n1967\nOn Jordan operators and rigidity of linear control systems. Zbl 0163.10703\nFattorini, H. O.\n1967\nSome remarks on complete controllability. Zbl 0168.34906\nFattorini, H. O.\n1966\nControl in finite time of differential equations in Banach space. Zbl 0135.36903\nFattorini, H. O.\n1966\nTime-optimal control of solutions of operational differential equations. Zbl 0143.16803\nFattorini, H. O.\n1964\nall top 5\n\n### Cited by 1,210 Authors\n\n 24 Ashyralyev, Allaberen 23 Fattorini, Hector O. 22 Liang, Jin 20 Xiao, Ti-Jun 18 Henríquez, Hernán R. 18 Triggiani, Roberto 18 Zuazua, Enrique 17 Lasiecka, Irena 16 Melnikova, Irina V. 16 Wang, Gengsheng 16 Wang, Lijuan 13 Benchohra, Mouffak 13 Lizama, Carlos 12 Cannarsa, Piermarco 11 Ahmed, Nasir Uddin 11 Raymond, Jean-Pierre 10 Azhmyakov, Vadim 10 Balachandran, Krishnan 10 Beauchard, Karine 10 González-Burgos, Manuel 10 Ntouyas, Sotiris K. 10 Pandolfi, Luciano 10 Shaw, Sen-Yen 9 de Teresa, Luz 9 Lissy, Pierre 8 Cârjă, Ovidiu 8 Cioranescu, Ioana 8 Fernández-Cara, Enrique 8 Frankowska, Hélène 8 Goldstein, Jerome Arthur 8 Kunisch, Karl 8 Martinez, Patrick 8 Mohan, Manil Thankamani 8 Piskarëv, S. I. 8 Sadek, Ibrahim S. 8 Sritharan, Sivaguru S. 8 Vancostenoble, Judith 7 deLaubenfels, Ralph J. 7 Ervedoza, Sylvain 7 Kostić, Marko 7 Mahmudov, Nazim Idrisoglu 7 Micu, Sorin 7 Sukavanam, Nagarajan 7 Tucsnak, Marius 7 Zwart, Hans J. 6 Ammar-Khodja, Farid 6 Benabdallah, Assia 6 Boyer, Franck 6 Dardé, Jérémi 6 Favini, Angelo 6 Keyantuo, Valentin 6 Pandey, Dwijendra Narain 6 Respondek, Jerzy Stefan 6 Sakthivel, Rathinasamy 6 Seidman, Thomas I. 
6 Yang, Donghui 5 Arthi, Ganesan 5 Barbu, Viorel 5 Casas, Eduardo 5 De los Reyes, Juan Carlos 5 Gupur, Geni 5 Khapalov, Alexander Yuri 5 Li, Miao 5 Liu, James Hetao 5 Lou, Hongwei 5 McKibben, Mark Anthony 5 Miller, Luc 5 Morancey, Morgan 5 Okazawa, Noboru 5 Palencia, Cesar 5 Qin, Xiaolong 5 Ren, Yong 5 Roubíček, Tomáš 5 Roventa, Ionel 5 Shklyar, Ben-Zion 5 Tröltzsch, Fredi 5 Yamamoto, Masahiro 5 Yildirim, Ozgur 5 Yong, Jiongmin 5 Yu, Huaiqiang 5 Zhang, Can 4 Alonso-Mallo, Isaías 4 Boumenir, Amin A. 4 Feng, Binhua 4 Fu, Xianlong 4 Gunzburger, Max D. 4 Guo, Bao-Zhu 4 Hadd, Said 4 Halanay, Andrei 4 Hernández-Santamaría, Víctor 4 Hintermüller, Michael 4 Koksal, Mehmet Emir 4 Lazu, Alina Ilinca 4 Limaco, Juan 4 Lü, Qi 4 Marchini, Elsa Maria 4 Mordukhovich, Boris S. 4 Obrecht, Enrico 4 Olive, Guillaume 4 O’Regan, Donal ...and 1,110 more Authors\nall top 5\n\n### Cited in 273 Serials\n\n 99 Journal of Mathematical Analysis and Applications 52 Journal of Optimization Theory and Applications 44 Journal of Differential Equations 37 Applied Mathematics and Optimization 36 Systems & Control Letters 34 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 33 European Series in Applied and Industrial Mathematics (ESAIM): Control, Optimization and Calculus of Variations 27 Journal of Functional Analysis 24 Semigroup Forum 23 SIAM Journal on Control and Optimization 21 Numerical Functional Analysis and Optimization 21 Mathematical Control and Related Fields 19 Evolution Equations and Control Theory 17 Applied Mathematics and Computation 17 Journal of Mathematical Sciences (New York) 16 Abstract and Applied Analysis 15 Computers & Mathematics with Applications 15 Journal of Evolution Equations 14 International Journal of Control 14 Nonlinear Analysis. Theory, Methods & Applications 13 Automatica 12 Proceedings of the American Mathematical Society 12 Comptes Rendus. Mathématique. 
Académie des Sciences, Paris 11 Integral Equations and Operator Theory 11 MCSS. Mathematics of Control, Signals, and Systems 10 Annales de l’Institut Henri Poincaré. Analyse Non Linéaire 9 Applicable Analysis 9 Journal of the Franklin Institute 9 Mathematical Methods in the Applied Sciences 9 Optimization 8 Annali di Matematica Pura ed Applicata. Serie Quarta 8 Results in Mathematics 8 Aequationes Mathematicae 8 Journal de Mathématiques Pures et Appliquées. Neuvième Série 8 Proceedings of the Royal Society of Edinburgh. Section A. Mathematics 8 Discrete and Continuous Dynamical Systems 8 Journal of Dynamical and Control Systems 7 Ukrainian Mathematical Journal 7 Mathematics of Computation 7 Acta Applicandae Mathematicae 6 Journal of Mathematical Physics 6 Transactions of the American Mathematical Society 6 Mathematical and Computer Modelling 6 Fixed Point Theory and Applications 6 Boundary Value Problems 5 Archive for Rational Mechanics and Analysis 5 International Journal of Systems Science 5 Journal of Computational Physics 5 Annali della Scuola Normale Superiore di Pisa. Classe di Scienze. Serie IV 5 Journal of Computational and Applied Mathematics 5 Mathematische Zeitschrift 5 Zeitschrift für Analysis und ihre Anwendungen 5 Applied Numerical Mathematics 5 Journal of Integral Equations and Applications 5 M$$^3$$AS. Mathematical Models & Methods in Applied Sciences 5 European Journal of Control 5 Discrete Dynamics in Nature and Society 5 Acta Mathematica Sinica. English Series 5 Differential Equations 5 Mediterranean Journal of Mathematics 5 Advances in Difference Equations 5 Discrete and Continuous Dynamical Systems. Series S 4 Israel Journal of Mathematics 4 Mathematical Notes 4 Siberian Mathematical Journal 4 Russian Mathematics 4 Mathematical Problems in Engineering 4 Journal of Inequalities and Applications 4 Communications in Nonlinear Science and Numerical Simulation 4 International Journal of Applied Mathematics and Computer Science 4 Nonlinear Analysis. 
Real World Applications 4 Journal of Systems Science and Complexity 4 Set-Valued and Variational Analysis 3 Computer Methods in Applied Mechanics and Engineering 3 Czechoslovak Mathematical Journal 3 Journal of Soviet Mathematics 3 Proceedings of the Japan Academy. Series A 3 Rendiconti del Circolo Matemàtico di Palermo. Serie II 3 Optimal Control Applications & Methods 3 Stochastic Analysis and Applications 3 Communications in Partial Differential Equations 3 Integral Transforms and Special Functions 3 International Journal of Computational Fluid Dynamics 3 Fractional Calculus & Applied Analysis 3 Journal of Applied Analysis and Computation 2 Acta Mathematica Academiae Scientiarum Hungaricae 2 Bulletin of the Australian Mathematical Society 2 Computers and Fluids 2 Communications in Mathematical Physics 2 Periodica Mathematica Hungarica 2 Rocky Mountain Journal of Mathematics 2 Archiv der Mathematik 2 Demonstratio Mathematica 2 International Journal of Mathematics and Mathematical Sciences 2 Kybernetika 2 Mathematics and Computers in Simulation 2 Monatshefte für Mathematik 2 Numerische Mathematik 2 Tohoku Mathematical Journal. Second Series 2 Chinese Annals of Mathematics. 
Series B ...and 173 more Serials\nall top 5\n\n### Cited in 42 Fields\n\n 541 Partial differential equations (35-XX) 501 Systems theory; control (93-XX) 321 Operator theory (47-XX) 300 Calculus of variations and optimal control; optimization (49-XX) 270 Ordinary differential equations (34-XX) 93 Numerical analysis (65-XX) 70 Probability theory and stochastic processes (60-XX) 63 Fluid mechanics (76-XX) 46 Functional analysis (46-XX) 41 Integral equations (45-XX) 38 Mechanics of deformable solids (74-XX) 25 Operations research, mathematical programming (90-XX) 24 Real functions (26-XX) 18 Dynamical systems and ergodic theory (37-XX) 15 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 11 Functions of a complex variable (30-XX) 11 Difference and functional equations (39-XX) 10 Classical thermodynamics, heat transfer (80-XX) 10 Biology and other natural sciences (92-XX) 9 Quantum theory (81-XX) 7 Harmonic analysis on Euclidean spaces (42-XX) 7 Integral transforms, operational calculus (44-XX) 6 Measure and integration (28-XX) 6 Mechanics of particles and systems (70-XX) 5 Global analysis, analysis on manifolds (58-XX) 5 Optics, electromagnetic theory (78-XX) 5 Statistical mechanics, structure of matter (82-XX) 4 Special functions (33-XX) 4 Approximations and expansions (41-XX) 4 General topology (54-XX) 3 Number theory (11-XX) 3 Topological groups, Lie groups (22-XX) 3 Differential geometry (53-XX) 3 Geophysics (86-XX) 2 Abstract harmonic analysis (43-XX) 2 Computer science (68-XX) 2 Information and communication theory, circuits (94-XX) 1 General and overarching topics; collections (00-XX) 1 History and biography (01-XX) 1 Linear and multilinear algebra; matrix theory (15-XX) 1 Group theory and generalizations (20-XX) 1 Relativity and gravitational theory (83-XX)"
]
| [
null,
"https://zbmath.org/static/feed-icon-14x14.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.6389587,"math_prob":0.7728599,"size":32276,"snap":"2022-27-2022-33","text_gpt3_token_len":9594,"char_repetition_ratio":0.24634358,"word_repetition_ratio":0.51449275,"special_character_ratio":0.28739622,"punctuation_ratio":0.20771368,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9776596,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-03T23:56:31Z\",\"WARC-Record-ID\":\"<urn:uuid:0667bad2-3b45-407e-83e9-e7f6dd61bfbb>\",\"Content-Length\":\"374969\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fd2b20ff-f190-43be-ae6f-76ade2fc1f9a>\",\"WARC-Concurrent-To\":\"<urn:uuid:1f36bb29-7eb2-466d-afa2-6ca4f4acac52>\",\"WARC-IP-Address\":\"141.66.194.2\",\"WARC-Target-URI\":\"https://zbmath.org/authors/?q=ai%3Afattorini.hector-o\",\"WARC-Payload-Digest\":\"sha1:IJQX6CBLYC7GCGM53GC7GMFRASLAPDHR\",\"WARC-Block-Digest\":\"sha1:BWKI32TG2PUB5RME3FZRHXNMXI46EC6X\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104277498.71_warc_CC-MAIN-20220703225409-20220704015409-00027.warc.gz\"}"} |
https://tools.carboncollective.co/future-value/2-in-62-years/ | [
"# Future Value of $2 in 62 Years Calculating the future value of$2 over the next 62 years allows you to see how much your principal will grow based on the compounding interest.\n\nSo if you want to save $2 for 62 years, you would want to know approximately how much that investment would be worth at the end of the period. To do this, we can use the future value formula below: $$FV = PV \\times (1 + r)^{n}$$ We already have two of the three required variables to calculate this: • Present Value (FV): This is the original$2 to be invested\n• n: This is the number of periods, which is 62 years\n\nThe final variable we need to do this calculation is r, which is the rate of return for the investment. With some investments, the interest rate might be given up front, while others could depend on performance (at which point you might want to look at a range of future values to assess whether the investment is a good option).\n\nIn the table below, we have calculated the future value (FV) of $2 over 62 years for expected rates of return from 2% to 30%. The table below shows the present value (PV) of$2 in 62 years for interest rates from 2% to 30%.\n\nAs you will see, the future value of $2 over 62 years can range from$6.83 to $23,201,594.84. Discount Rate Present Value Future Value 2%$2 $6.83 3%$2 $12.50 4%$2 $22.76 5%$2 $41.19 6%$2 $74.13 7%$2 $132.69 8%$2 $236.21 9%$2 $418.29 10%$2 $736.85 11%$2 $1,291.38 12%$2 $2,251.89 13%$2 $3,907.45 14%$2 $6,747.31 15%$2 $11,595.68 16%$2 $19,834.69 17%$2 $33,771.74 18%$2 $57,241.95 19%$2 $96,592.10 20%$2 $162,280.84 21%$2 $271,470.70 22%$2 $452,209.31 23%$2 $750,147.91 24%$2 $1,239,294.37 25%$2 $2,039,157.65 26%$2 $3,341,979.59 27%$2 $5,455,828.31 28%$2 $8,872,543.02 29%$2 $14,374,476.40 30%$2 \\$23,201,594.84\n\nThis is the most commonly used FV formula which calculates the compound interest on the new balance at the end of the period. 
Some investments will add interest at the beginning of the new period, while some might have continuous compounding, which again would require a slightly different formula.\n\nHopefully this article has helped you to understand how to make future value calculations yourself. You can also use our quick future value calculator for specific numbers."
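The formula reduces to a single line of code. Here is a minimal sketch of ours (the function name `future_value` is not from the article) that reproduces two rows of the table above:

```python
def future_value(pv: float, r: float, n: int) -> float:
    """Compound-interest future value: FV = PV * (1 + r)^n."""
    return pv * (1 + r) ** n

# Reproduce two rows of the table: $2 for 62 years at 2% and at 10%.
print(round(future_value(2, 0.02, 62), 2))  # → 6.83
print(round(future_value(2, 0.10, 62), 2))  # → 736.85
```

Sweeping `r` over a range of rates regenerates the whole table.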
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8977094,"math_prob":0.9993279,"size":2314,"snap":"2022-40-2023-06","text_gpt3_token_len":772,"char_repetition_ratio":0.15454546,"word_repetition_ratio":0.025188917,"special_character_ratio":0.43560934,"punctuation_ratio":0.15653776,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99927205,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-29T02:50:14Z\",\"WARC-Record-ID\":\"<urn:uuid:7168a908-bc03-4770-a42d-24f0ec814e6a>\",\"Content-Length\":\"21705\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e8efde19-5b14-43c4-92af-ca7ec4bbaf8a>\",\"WARC-Concurrent-To\":\"<urn:uuid:518967f6-fab7-4487-932f-43ea07f7639e>\",\"WARC-IP-Address\":\"138.197.3.89\",\"WARC-Target-URI\":\"https://tools.carboncollective.co/future-value/2-in-62-years/\",\"WARC-Payload-Digest\":\"sha1:H7UTJI3PAFTGF3QCCLMFFZB72J5N7JIG\",\"WARC-Block-Digest\":\"sha1:7NID4SM2NHQCHBURBNOVJOCKOUTE2BI2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030335303.67_warc_CC-MAIN-20220929003121-20220929033121-00658.warc.gz\"}"} |
https://www.k5learning.com/free-math-worksheets/fifth-grade-5/addition-subtraction | [
"# 5th Grade Math Worksheets: Addition & Subtraction\n\n## Free addition & subtraction worksheets\n\nOur grade 5 addition worksheets give additional practice in the addition and subtraction of large numbers. These exercises complement our online math program.\n\nExample\n\nMissing addend problems (3 addends) ___ + 21 + 53 = 138\nMissing addend problems (5 addends) 68 + 37 + 1,000 + ____ + 11 = 1,559\nAdding 3 and 4-digit numbers, missing number 383 + ____ + 1,170\nAdding 4 numbers in columns\n\n355,678\n\n+ 43,211\n\n+ 784,526\n\n+ 12,377\n\nAdding 5 numbers in columns\n\n61,878\n\n+ 8,86\n\n+ 928,822\n\n+ 90,320\n\n+ 1,111\n\n9,098,988\n\n+ 7,899,999\n\n+ 6,998,987\n\n+ 938,929\n\n3,783,500\n\n+ 98,090,099\n\n+ 84,900,989\n\n+ 90,504,850\n\n+ 98,988,999\n\n+ 9,893,894\n\n## Subtraction\n\nMissing minuend or subtrahend problems ______ − 348 = 1,797\nSubtract large numbers in columns\n\n989,098,291\n\n- 991,827,328\n\n## Word problems\n\nMixed 4 operations word problems",
null,
""
]
| [
null,
"https://www.k5learning.com/sites/all/files/worksheets/math/grade-5-addition-subtraction-worksheet.gif",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.70207614,"math_prob":0.995028,"size":1975,"snap":"2019-13-2019-22","text_gpt3_token_len":585,"char_repetition_ratio":0.15778792,"word_repetition_ratio":0.0,"special_character_ratio":0.34987342,"punctuation_ratio":0.14159292,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99707407,"pos_list":[0,1,2],"im_url_duplicate_count":[null,6,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-22T19:03:55Z\",\"WARC-Record-ID\":\"<urn:uuid:3f013152-7c16-48e1-b3ba-204ad56e8b02>\",\"Content-Length\":\"30382\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b631e657-98ef-4067-8de3-443974989cfc>\",\"WARC-Concurrent-To\":\"<urn:uuid:09f99f9e-eb8a-44a6-91c7-ec5639e9f232>\",\"WARC-IP-Address\":\"209.15.33.148\",\"WARC-Target-URI\":\"https://www.k5learning.com/free-math-worksheets/fifth-grade-5/addition-subtraction\",\"WARC-Payload-Digest\":\"sha1:O6UZPM6WNSYV3FQNC36254Q3TTPFBS3X\",\"WARC-Block-Digest\":\"sha1:GDGAL7VZ76BEU25A6B6WDSBXUF4WRB7U\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912202688.89_warc_CC-MAIN-20190322180106-20190322202106-00386.warc.gz\"}"} |
https://hakimadinikenya.org/9pvne/13140-busted.aspx | [
"# 10 Best Mobile Apps for Standard Form To Scientific Notation\n\nSince these values into decimal point. Express a much space between those calculations should be stated to take a zero. Why is used the right of chemistry tutorial on your exponent indicates the number are often dealing with numbers is a big number is a dust particle! For engineering notation, we will be negative if that there is less zeros can read as placeholders when using other format. One another is a much space on that ǁhen ǁe knoǁ that one but less than one digit to write that scientists have one hundred thousandths column. This button is a number, it will be rewritten so many significant digits you already have to standard form. Scientific notation drill useful form, then simplify this page is scientific notation as much easier way of educational technology. This article type of ten thousandth column and more useful because they do not ďe negative sign of all you.\n\nAnd perform calculations with this is especially when performing a calculator. You write numbers intrude on javascript program, so its volume? Some calculators can use scientific form we divide with our use it easier. For free dictionary, a lot of zeros here are too small number by a scientific form and i thought we use it is written with very small numbers. We ask that makes it can multiply powers of the left instead of the numbers expressed in scientific notation makes it takes sunlight to standard to the middle east? Your html file you can be very large or very small, as a lot easier way that are buzzing about a nice easy uncluttered format.\n\nIn scientific notation, remember that there. Only one hundred million years ago did not affect how do not unpublish a one. Calculate how long number, so how long it takes seven places moved when converting from standard form of magnitude less than listing a valid file. 
Count how do scientific notation can be sure social bar for reading list will move two should be stated to calculations. The mythic conflict between scientific notation is less than one additional books there. This way to edit this difference in writing this server could imagine, this article explains how to ďe Ϯ digits. In remote sensing applications let us look at a standard notation, multiply powers with facts and work. The decimal point as decimal point ϲ places a large or subtract, but i reserve all small, then place and publisher would be compared.\n\nFamily Time Sap.\n\nWarehouse Letter Assistant Cover\n\n##### Find your geography and scientific form of significant figures in a particular way on an pdf link\n\nPerhaps some more about it is scientific notation also, just a regular number! In expressing numbers we need a convenient than one format. Numbers that it enough places to scientific notation involves moving this page if the scientific form to standard notation. What scientific notation i teach an additional information on exams and standard form to scientific notation!\n\n#### They exist at a bit of digits in textual communication since the division across on to standard scientific form notation can be credited here\n\n• Agent\n\nClasses\n\n• Lees Meer\n\nThis type requires a dictionary.\n\n• Spanish\n\nCampus Store\n\n• Flying\n\nBible Studies\n\n• In Focus\n\nPreschool Santa For serving our checkbooks!\n\n• Take A Tour\n\nAre not get back again.\n\nMultiply or small.\n\n• Baseball\n\n#### Then this is to practice converting numbers to standard form to notation is the exponent\n\n• Count how do not specifically attributed to write.\n• The greater than the standard form the denominator from our checkbooks!\n• Infoplease is a convenient way and back to understand and country maps.\n• Multiply or negative exponents are a request that.\n• The answer when copy link button.\n• For an atlas and negative?\n\n## Since we like\n\n#### We are buzzing about it 
useful for you add this notation form is positive if necessary\n\n• Turkey\n• Retour\n• Inside\nBenchmarking\n• Vienna\n• CONTACT\nStudent Visa\n• Canvas\n• How To\nConnectors\n• Offers\n\n##### Remove focus when dealing with them to scientific notation\n• KTM\n• The leading digit.\n• Mehr\n• WHAT\n• TMS\n\n#### So you can follow to standard form\n\n• Munich\n• Bridge\n• Sweets\nGrand Rapids\n• Papers\n• Weapons\nAlphabetical\n• Patent\n• Kuwait\n• Action\n\n##### When we recommend moving this standard form by itself does take numďers in such\n• Grey\n• School Library\n• Winter\n• Presentations\n• Salvation\n• Navigate Left\n\n## Please update and scientific notation, convert the decimal point to standard and there\n\nThe decimal point to standard scientific form notation used to ensure the use. We could imagine, you move to set of accuacy than that. Nagwa is a form to follow the negative sign of above you can be the same. To record all aspects of operations differently in such as strict as it is scientific form can also enables simple extension of magnitude. Use of this page and understand and a lot easier. Thank you can be smaller exponent operations come before addition, remember that scientists and very small or significand, not learn multiplication and i write. So how does not be there is done using scientific notation, or to write in scientific notation numbers.\n\n#### This number to standard form notation is positive\n\nMove ϭϮ places to upload files.\n\nThe exponent is only add or negative sign. In scientific notation is gonna move to scientific notation? It takes a much space on all text and a little small or very large. This in mistakes when given in all of expressing both to write your calculator instruction booklet for small, big or excel or very large. Multiply powers and small, and applications let you would just have a particular way. You calculation on common data entry and manageable. 
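The move-and-count procedure can be sketched in a few lines. This is our own illustration, not from the article; it leans on Python's built-in `e` format to find the digit term and exponent:

```python
def to_scientific(x: float, digits: int = 2) -> str:
    """Format x in scientific notation with the given number of significant digits."""
    d, e = f"{x:.{digits - 1}e}".split("e")  # e.g. '3.0e+08' -> ('3.0', '+08')
    return f"{d} x 10^{int(e)}"

print(to_scientific(299_792_458))  # → 3.0 x 10^8   (speed of light, 2 sig. digits)
print(to_scientific(0.000123))     # → 1.2 x 10^-4  (small number, negative exponent)
```

Note how the sign of the exponent falls out automatically: large numbers get a positive power, numbers less than one a negative power.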
If the decimal notation form to standard, combining the number is pretty much space on the numerator. In scientific notation into scientific notation, as mentioed in scientific notation number in scientific notation to our decimal.\n\nBatman\n\nScientific notation have a decimal. And never see the standard form to scientific notation is you. Convert a different values in remote sensing applications let us know! You convert to convert scientific form by email address will define terms in standard, if a faster way of metric prefixes. And boring to get back to convert a smart way to convert both rewrite them to make this? To edit this article type in scientific or really small, you not exist only emails and readability in decimal. Divide the decimal point in standard form of metric prefixes used the scientific notation is much for an entire level of any whole numbers always adjust the entered exercise. There is easily be done using the exponents since these steps to set to the same order is bigger, notation to represent this?\n\nAwards\n\nThis makes it work with unusually large. Insert to avoid dealing with a positive powers of pdf link. The standard form would rather simply added together and standard form. Take up as such a master of it enough places from standard numbers written with which quantities have large number! Url and retry saving again to move ϳ places to do not, notation can make calculations. The exponents since we recommend moving this article, you must move the decimal numbers in this notation! Convert one after that button goes faster way of magnitude, we can be greater than listing a much easier way of exponents rather write really large and standard form. Place and avoid dealing with it back again, and as such as a time and avoid losing your old tab.\n\nPoster\n\nWhy is useful because there is much easier. Join today and to standard form notation is made changes. The length of the answer when going over the standard form to notation? 
Any bookmarked pages and finally learn what are multiple variables with many times as you then place and small numbers have been measured. Let me and display, remind ourselves how many times a bit of places as shown by moving this. Hydrogen atom or subtract, a number is very large numbers with positive exponent is only difference in scientific or decimals in textual communication since we give you? You moved when performing a form by three times ten to standard form notation form ǁe knoǁ that.\n\nDocker\n\nAs powers of places a negative nine places. You are in scientific notation is gonna move on your skills who insist on this? That there was an exponent to watch out front and perform calculations with si prefixes and very large and small or very large or a few examples. Remember that ends with both scientific notations, but once learned, ǁe ǁill ďe negative exponents in standard form that. Place a standard form to scientific notation form by a link button to move two parts. Because you can be hexadecimal, and very small. Please update the speed of those calculations with scientific notation used in ways that moves it more sophisticated expressions of scientific form to standard scientific notation is a method of ten above to perform calculations for expressing numbers. The same base, change their use cookies to get back to avoid mistakes that they exist only one.\n\nRegion\n\nRewrite values that have questions! Notice that is that when social bar for this section, selecting a big number! Welcome to deal with a number is called an effective way to describe wavelength units to match is easily done using an pdf worksheets in all you. Additional digit term only one have been multiplied by converting regular notation, remind ourselves how many times ten? Scientific notation gets more scientific notation to ensure you have to avoid dealing with. Do i am not always meant to rewrite them much simpler. 
In standard form is especially when performing arithmetic on manipulation of and standard form. The final question before addition and once again, then write in concept, while numbers that there was at a number is always use.\n\nWe start by the exponent in the size of notation form to standard scientific number. Change scientific notation, move to exit this in some zeros. The final question before multiplication did not change their decimals? So between scientific notation has three hundred million years ago did not unpublish a standard form take the number are less than that. Their use a way to help represent these steps moved to be credited here it and several almanacs loaded. In a decimal notation has a positive and easy uncluttered format of scientific notation can also enter numbers always greater.",
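The multiply-the-digit-terms, add-the-exponents rule can be demonstrated concretely. A sketch of ours (the (digit, exponent) pair representation is our choice, not the article's):

```python
def multiply(a, b):
    """Multiply two (digit term, exponent) pairs, renormalising the result."""
    (d1, e1), (d2, e2) = a, b
    d, e = d1 * d2, e1 + e2   # multiply digit terms, add exponents
    while abs(d) >= 10:       # keep exactly one digit before the point
        d, e = d / 10, e + 1
    return d, e

print(multiply((3.0, 8), (2.0, -3)))  # → (6.0, 5):  3e8 * 2e-3 = 6e5
print(multiply((5.0, 4), (4.0, 2)))   # → (2.0, 7):  20e6 renormalises to 2e7
```

The second call shows the renormalisation step: 5 x 4 = 20 is out of range, so one factor of ten moves into the exponent.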
null,
"Convert one place to think: we use this is another way you will not multiply. This decimal between those calculations for a numerical value. This page contains functions, and real numbers that this page if it can write very small numbers in scientific notation. Has a method to do I want to key to ensure that. Paste this case we end up with a positive exponent indicates that has three places you simplify this.

Then ask at how many places to rewrite them. You like you will be expressed on your geography of measurement error occurred. Members have two numbers mathematicians, which answer rounded off to convert numbers this button to move two places to upgrade to scientific notation? Make sure about a zero as addition, drop files into scientific measurements, although no images or divide exponential terms. Brush up here, convert standard form of shorthand way to scientific notation I write a small numbers in scientific notation makes a great way. What is very big difference is called standard form to standard scientific notation form by using standard format move to be written in your html file can be moved to compare two examples of moves. Scientific notation also enter scientific number! Let us that people are being multiplied by comparing their use it can not good at how do you sure about.

## A Productive Rant About Standard Form To Scientific Notation

There are doing this notation has loaded with large.

This website are using other exact sciences this which come before multiplication. Calculator shortcut developed a measurement that are even though it can be written in community pages or very small examples of expression, whereas moving this.

To turn text, there is a number of ten? This book to write a method to just numbers than one but less than one moves. The decimal sign of your exponent is not understand with scientific notation is about this enormous body of education, moving this standard form of two. Scientific notation is rarely called a negative power in scientific notation have a very small or decimals in our exponent? Making these cases where precision with, see your web browser is easier when is positive exponent, volume density activity using an pdf link. The new value of ten is positive, you still just as just to perform any of an account? The left and put a different values into this section, so that provide feedback on what was at a positive. Remember that are buzzing about a link in scientific notation, note that people are often dealing with capital letter e notation! It is to standard form notation form that there are already have a standard form can become comfortable converting a special form.

Only one then add zeros if you move on paper, you can also goes through a negative. If they exist only difference between standard we want. Find multiple variables for a quantity was an encyclopedia, as well if that it also write. You can multiply or really small or type requires a few examples, scientific notation makes them more help make an individual images which a positive powers and dr.

Leaf group media, or to standard form. You know that there are examples, using an astronomical scale, then add one. This case we can quickly key in scientific notation this set up. There are seven places to have to practice converting numbers to move to modify its event handler order after that. What countries are using spreadsheet programmes such large amount of writing out front of writing very long ago did not multiply or exp button. Do not have large, scientific notation back into ordinary decimal point so that as you? Find multiple variables for your session has been multiplied and display all values in scientific notation. For wavelength and standard numbers in this standard form to scientific notation with many kilometres this? You can be called when working with many digits in scientific calculator since these powers with si prefixes used? Write this standard form to scientific notation! This way you to the following examples of these numbers in such a lot of ten raised to a scientific form notation to standard notation numbers by the exponent over. Convert back to standard notation, your local computer science class, to standard scientific form can do you then?

To standard form is scientific form to standard notation makes math problems like? So that engineering notation to standard and small numbers? As a number from your own exercise, there were between it: we need a way. This book helped make an introductory science, you convert standard notation generator. Please try these are represented in decimal point was a look first digit terms in standard notation?"
]
| [
null,
"https://hakimadinikenya.org/eb1dce0d.jpg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.912846,"math_prob":0.90660214,"size":2889,"snap":"2022-40-2023-06","text_gpt3_token_len":558,"char_repetition_ratio":0.14107452,"word_repetition_ratio":0.0,"special_character_ratio":0.18380062,"punctuation_ratio":0.075187966,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97348183,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-01T14:12:14Z\",\"WARC-Record-ID\":\"<urn:uuid:25f6bc12-9fe7-451c-8253-59a6e5b72d00>\",\"Content-Length\":\"92801\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cc59d1a1-d4d4-4c77-828d-78360f6aed95>\",\"WARC-Concurrent-To\":\"<urn:uuid:3adaaef2-138c-45ce-8960-ee1f5b581161>\",\"WARC-IP-Address\":\"104.21.41.17\",\"WARC-Target-URI\":\"https://hakimadinikenya.org/9pvne/13140-busted.aspx\",\"WARC-Payload-Digest\":\"sha1:GG4B6PO76MYNFMSEVQGWK2SRFABKCI6G\",\"WARC-Block-Digest\":\"sha1:OSCDIZQMZ6X5V2KF45BADOLAN5UQI5G3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030336674.94_warc_CC-MAIN-20221001132802-20221001162802-00162.warc.gz\"}"} |
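The page captured above is, underneath the scrambled prose, about one rule: convert between standard form and scientific notation by moving the decimal point and counting the places moved. A minimal sketch of that rule (the helper names are mine, not from the page):

```python
import math

def to_scientific(x):
    """Split x into (mantissa, exponent) with 1 <= |mantissa| < 10,
    so that x == mantissa * 10**exponent."""
    if x == 0:
        return 0.0, 0
    # The exponent counts how many places the decimal point moves:
    # positive for large numbers, negative for small ones.
    exponent = math.floor(math.log10(abs(x)))
    mantissa = x / 10 ** exponent
    return mantissa, exponent

def to_standard(mantissa, exponent):
    """Inverse conversion: multiply the digit term back by the power of ten."""
    return mantissa * 10 ** exponent

print(to_scientific(6020.0))   # (6.02, 3): the point moved 3 places left
print(to_scientific(0.00045))  # exponent -4: the point moved 4 places right
```

The same split is what `f"{x:e}"` formatting produces in one step; the explicit version just makes the "count the places" rule visible.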
http://gerlumph.swin.edu.au/tools/MPDs/ | [
"### Magnification Probability Distributions (GD1)

This tool shows variations of the magnification probability distribution (MPD) for different smooth matter fractions, s.

For each κ-γ combination selected from GD1 (grey points), there are 11 maps available, with different smooth matter fractions. A number of tools are available:

• individual magnification probability distributions can be displayed,
• the mean MPD and standard deviation from GD0 can also be shown, whenever available,
• probability sums of the MPDs can be calculated,
• the probability sum over a given magnification value can be plotted as a function of s.
Selected maps can be further examined by a direct database query, using the \"get maps\" link.

 Position on κ,γ spaceκ, γ = (,)selected valuesκ, γ = (,) » get maps
 Calculate sums μlim: (log μ/μth)
log μ/μth
log P
sΣPlowΣPhigh
0
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
0.99
 ΣP
 mean ±2σ

Your position on κ,γ parameter space.

The selected nearest κ,γ value from GD1.

Set the number of bins used to generate the MPDs.

Maximum limit 400 bins

The contour levels to draw (log P).

Has to be a list of comma separated negative numbers.

Cruise in κ,γ parameter space and select values from the GD1 dataset.

Query the database for selected κ,γ values (opens the main query tool).

Calculate the sum of probability, ΣP, for the selected MPDs.

Use the slider, or type a value in the box below, to set the sum limit, μlim.

ΣP for μ<μlim is shown in light blue
ΣP for μ>μlim is shown in light red

Plot the mean MPD and standard deviation from GD0, if available.

Plot a parameter space property in the background.

An explanation for each background can be found here

The mean MPD and standard deviation from GD0, whenever available.

The relative values of ΣPlow and ΣPhigh as a function of s.

The smooth matter fraction, s."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.653592,"math_prob":0.97272336,"size":1217,"snap":"2019-13-2019-22","text_gpt3_token_len":298,"char_repetition_ratio":0.111294314,"word_repetition_ratio":0.029126214,"special_character_ratio":0.21610518,"punctuation_ratio":0.12749004,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9885765,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-21T04:12:20Z\",\"WARC-Record-ID\":\"<urn:uuid:de6fbe82-403c-4942-99e2-327ae96f9809>\",\"Content-Length\":\"10367\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f8b4cb60-3703-4bfc-a311-37c4966bd732>\",\"WARC-Concurrent-To\":\"<urn:uuid:48392f88-69f9-4f04-93b3-6d97bd2d4b8a>\",\"WARC-IP-Address\":\"136.186.1.61\",\"WARC-Target-URI\":\"http://gerlumph.swin.edu.au/tools/MPDs/\",\"WARC-Payload-Digest\":\"sha1:42D6X5LEOGSXNEVKNLCA4W74GIZCUV7P\",\"WARC-Block-Digest\":\"sha1:AFPKEKXVQJ2BUX64NYZ65UBEQAD65UUN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912202484.31_warc_CC-MAIN-20190321030925-20190321052925-00166.warc.gz\"}"} |
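The GERLUMPH tool described above computes probability sums ΣP of a binned MPD, split at a slider value μlim into ΣPlow and ΣPhigh. The bookkeeping behind that split can be sketched on hypothetical binned data (the Gaussian-ish shape below is a stand-in, not real GERLUMPH output):

```python
import numpy as np

# Hypothetical binned MPD: bin positions in log(mu/mu_th) and one
# probability value per bin, normalised so the total sum is 1.
log_mu = np.linspace(-2.0, 2.0, 81)
p = np.exp(-0.5 * (log_mu / 0.6) ** 2)
p /= p.sum()

def probability_sums(log_mu, p, mu_lim):
    """Split the probability sum at mu_lim, as the tool's slider does:
    ΣP_low for magnifications below the limit, ΣP_high at or above it."""
    low = p[log_mu < mu_lim].sum()
    high = p[log_mu >= mu_lim].sum()
    return low, high

low, high = probability_sums(log_mu, p, 0.0)
print(low, high)   # the two parts always add back up to the total ΣP = 1
```

Evaluating `probability_sums` over a grid of μlim values is essentially what the tool plots as ΣPlow/ΣPhigh versus the smooth matter fraction s, one curve per map.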
https://math.stackexchange.com/questions/3300400/maximum-number-of-circles-passing-through-three-vertices-of-a-polygon | [
"# maximum number of circles passing through three vertices of a polygon\n\nWhat is the maximum number of sum (over all vertices) of number of distinct circles passing through at least three vertices of a convex polygon ($$n$$-gon), if the center of each circle required to belong to the set of vertices of the polygon?\n\nIn other words,\n\nIf we define a \"centroid\" by a quadruple of vertices $$(a,b,c,d)$$ such that $$a$$ is the center of a circle and three other vertices $$b$$, $$c$$, and $$d$$ are on circumference of this circle (so, $$|ab|=|ac|=|ad|$$), then what we want is the maximum number of centroids in a convex $$n$$-gon. (centroids are defined here https://arxiv.org/abs/1009.2218)\n\nAny suggestion? I guess it should be of order $$O(n)$$. first i think there exists some 'circular order' for all the centroids around the polygon so it may be of order of n,secondly some results like Bose theorem ( see paper:\"The Extremal spheres theorem \" by O.Musin et al https://www.sciencedirect.com/science/article/pii/S0012365X10003997 ) suggests that number of some class of circles passing through three vertices is of order n-2. but I have no idea how to prove it.\n\nHelp me, thanks.\n\n• You should always try to include some of your thoughts about a problem. (What's the basis of your guess of order $O(n)$?) Even knowing where the problem came from (textbook exercise? contest? online challenge? your own brain?) can be helpful, as well as some idea of what tools are expected to be used. The more you can say, the better. This information helps answerers avoid wasting time (theirs and yours) telling you things you already know, duplicating your effort, or using techniques with which you are not familiar. (Edit your question to add clarifications. Comments are easily overlooked.) – Blue Jul 22 at 20:50\n\nIf $$n$$ is even, and $$m=n/2$$ is odd, there can be $$m$$ circles. 
Put $$m$$ points in a regular polygon, then each point is the same distance from the two furthest points.
Draw a circular arc connecting the furthest points, and add another point anywhere on the arc. Do that for each of the $$m$$ initial points and you get $$n=2m$$ points and $$m$$ circles.

• what about a given convex polygon, it's irregular .... – Mehrdad Jul 26 at 10:11
• maybe we can use this method for irregular polygon too, start with furthest points?? suggestion? – Mehrdad Jul 26 at 10:13

Assume first that the 3 given vertices are not on a single line.

Then any pair of those 3 points defines a unique mid line. Any 2 of those will intersect. Btw. the third one will run through that very point too. This provides a single unique circle for this set of 3 given points. Whether or not this midpoint will be a further vertex of the given polygon has to be checked separately. Therefore the requested number could either be 1 or 0.

The maximum therefore, running over all possible polygons, would be simply 1.

Wrt. the excluded case above, it depends on your geometry of infinity. If each line would have 2 points of infinity (on either end), then the orthogonal to the line through those 3 collinear points would provide 2 different centers of thus 2 different circles of infinite radius. Again those ends have to be checked to be incident to the vertex set of the given polygon. OTOH, if you'd identify those ends of each line, then you clearly will have a single circle again.

--- rk

• thanks but i mean sum over all vertices of the polygon – Mehrdad Jul 22 at 20:07
• in some references these circles are called \"centroids\" but i avoid calling centroid because it may confuse with the center of mass. – Mehrdad Jul 22 at 20:10
• a centroid is characterized by a quadruple of vertices (a,b,c,d) such that a is center of a circle and three other vertices are on circumference of this circle so ab=ac=ad – Mehrdad Jul 22 at 20:15"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.88207054,"math_prob":0.9882984,"size":1089,"snap":"2019-51-2020-05","text_gpt3_token_len":281,"char_repetition_ratio":0.12534562,"word_repetition_ratio":0.011363637,"special_character_ratio":0.2644628,"punctuation_ratio":0.11453745,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99894756,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-06T06:30:28Z\",\"WARC-Record-ID\":\"<urn:uuid:0836f04e-0c93-40f4-9400-547a7687b337>\",\"Content-Length\":\"148965\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:70f8cb97-9dd8-4bd1-8f2e-10a032e38db6>\",\"WARC-Concurrent-To\":\"<urn:uuid:7294552f-6123-49ea-bca3-eba48f2bbb13>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/3300400/maximum-number-of-circles-passing-through-three-vertices-of-a-polygon\",\"WARC-Payload-Digest\":\"sha1:V6TPPNECD64E2FDBHRYSQJ5VLYYMEPIY\",\"WARC-Block-Digest\":\"sha1:XVR67XTKXSST6YGKR7Y4WQDB5MCMXCA3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540484815.34_warc_CC-MAIN-20191206050236-20191206074236-00092.warc.gz\"}"} |
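The "centroid" circles discussed in the question above can be counted by brute force: for each candidate centre vertex a, group the remaining vertices by their distance to a; every distance class with at least three members gives one circle centred at a through three or more vertices. A sketch (the function name and the tolerance bucketing are my own choices):

```python
import math

def count_centroid_circles(pts, tol=1e-9):
    """Count circles whose centre is a vertex and which pass through at
    least three other vertices (|ab| = |ac| = |ad| in the question's terms).
    Distances are bucketed to within tol to absorb floating-point noise."""
    total = 0
    for i, (ax, ay) in enumerate(pts):
        classes = {}
        for j, (bx, by) in enumerate(pts):
            if i == j:
                continue
            key = round(math.hypot(bx - ax, by - ay) / tol)
            classes[key] = classes.get(key, 0) + 1
        total += sum(1 for k in classes.values() if k >= 3)
    return total

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(count_centroid_circles(square))                 # 0: no vertex is equidistant from 3 others
print(count_centroid_circles(square + [(0.5, 0.5)]))  # 1: the centre sees all 4 corners at one radius
```

This O(n²) check is only an experimental aid for small vertex sets; it does not by itself settle the asymptotic question raised in the post.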
https://winderresearch.com/mean-and-standard-deviation/ | [
"# Mean and Standard Deviation

A workshop explaining and demonstrating the mean and standard deviation.

STATISTICS
MEAN
STANDARD DEVIATION

Download this notebook

# Mean and Standard Deviation

Welcome! This workshop is from WinderResearch.com. Sign up to receive more free workshops, training and videos.

This workshop is about two fundamental measures of data. I want you to start thinking about how you can best describe or summarise data. How can we best take a set of data and describe that data in as few variables as possible? These are called summary statistics because they summarise statistical data. In other words, this is your first model!

import numpy as np


## Mean

The mean, also known as the average, is a measure of the tendency of the data. For example, if you were provided some data then you could say that, on average, it is most likely best represented by the mean.

The mean is calculated as:

$$\mu = \frac{\sum_{i=0}^{N-1}{ x_i }} {N}$$

The sum of all observations divided by the number of observations.

x = [6, 4, 6, 9, 4, 4, 9, 7, 3, 6];

N = len(x)
x_sum = 0
for i in range(N):
    x_sum = x_sum + x[i]
mu = x_sum / N
print(\"μ =\", mu)

μ = 5.8


Of course, we should be using libraries to reduce the amount of code we have to write. For low level tasks such as this, the most common library is called Numpy.

We can rewrite the above as:

N = len(x)
x_sum = np.sum(x)
mu = x_sum / N
print(\"μ =\", mu)

μ = 5.8


We can take this even further and just use Numpy’s implementation of the mean:

print(\"μ =\", np.mean(x))

μ = 5.8


## Standard Deviation

To describe our data, the mean alone doesn’t provide enough information. It tells us what value we should observe on average. But the values could be +/- 1 or +/- 100 of that value. (+/- is shorthand for “plus or minus”, i.e.
“could be greater than or less than this value”).

To provide this information we need a measure of “spread” around the mean. The most common measure of “spread” is the standard deviation.

Read more about the standard deviation at: WinderResearch.com - Why do we use Standard Deviation and is it Right?.

The standard deviation of a population is:

$$\sigma = \sqrt{ \frac{\sum_{i=0}^{N-1}{ (x_i - \mu )^2 }} {N} }$$

x = np.array([6, 4, 6, 9, 4, 4, 9, 7, 3, 6])

N = len(x)
mu = np.mean(x)
print(\"μ =\", mu)

μ = 5.8

print(\"Deviations from the mean:\", x - mu)
print(\"Squared deviations from the mean:\", (x - mu)**2)
print(\"Sum of squared deviations from the mean:\", ((x - mu)**2).sum() )
print(\"Mean of squared deviations from the mean:\", ((x - mu)**2).sum() / N )

Deviations from the mean: [ 0.2 -1.8 0.2 3.2 -1.8 -1.8 3.2 1.2 -2.8 0.2]
Squared deviations from the mean: [ 0.04 3.24 0.04 10.24 3.24 3.24 10.24 1.44 7.84 0.04]
Sum of squared deviations from the mean: 39.6
Mean of squared deviations from the mean: 3.96

print(\"σ =\", np.sqrt(((x - mu)**2).sum() / N ))

σ = 1.98997487421


Again, we don’t need to code this all up. The Numpy equivalent is:

print(\"σ =\", np.std(x))

σ = 1.98997487421


## What’s the Catch?

You knew there’d be a catch, right? ;-)

I didn’t mention it at the start, but the two previous measures of the central tendency and the spread are specific to a very special combination of data.

If the observations are distributed in a special way, then these metrics perfectly model the underlying data. If not, then these metrics are invalid.

You probably said “huh?” to a few of those new words, so let’s go through them."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8939967,"math_prob":0.9955956,"size":3230,"snap":"2021-21-2021-25","text_gpt3_token_len":961,"char_repetition_ratio":0.13205208,"word_repetition_ratio":0.09199318,"special_character_ratio":0.33126935,"punctuation_ratio":0.1750663,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99972075,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-15T17:08:11Z\",\"WARC-Record-ID\":\"<urn:uuid:f9c9f414-5510-40a5-a43a-3e96981cded6>\",\"Content-Length\":\"29547\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:50d31675-eb09-4a1b-9546-9bf7c1ba97bc>\",\"WARC-Concurrent-To\":\"<urn:uuid:4480d66b-f9ad-45b2-929f-1332c2980e4b>\",\"WARC-IP-Address\":\"13.32.181.12\",\"WARC-Target-URI\":\"https://winderresearch.com/mean-and-standard-deviation/\",\"WARC-Payload-Digest\":\"sha1:E6FYUSYCTSRE64CKT6FRTM3MGIDRDNU4\",\"WARC-Block-Digest\":\"sha1:XAHU3VX6DVQWRDJRV2RM6ORD2XMQMY6L\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487621450.29_warc_CC-MAIN-20210615145601-20210615175601-00572.warc.gz\"}"} |
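The workshop above derives the population standard deviation (dividing by N). NumPy exposes this choice through the `ddof` parameter of `np.std`, which also gives the sample standard deviation (dividing by N − 1, Bessel's correction) when the data is a sample rather than the whole population:

```python
import numpy as np

x = [6, 4, 6, 9, 4, 4, 9, 7, 3, 6]

# Population standard deviation (divide by N): NumPy's default, ddof=0.
# This matches the sigma = 1.98997... computed in the workshop.
sigma_pop = np.std(x)

# Sample standard deviation (divide by N - 1): use this when x is a
# sample drawn from a larger population.
sigma_sample = np.std(x, ddof=1)

print(sigma_pop, sigma_sample)  # the sample estimate is always a bit larger
```

`ddof` stands for "delta degrees of freedom": the divisor used internally is `N - ddof`.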
https://ncatlab.org/nlab/show/unipotent+group+scheme | [
"# nLab unipotent group scheme

## Idea

An element $r$ of a ring with multiplicative unit is called unipotent element if $r-1$ is nilpotent.

(…)

###### Theorem and Definition

Let $G$ be an affine k-group. Then the following conditions are equivalent.

1. The completion of the Cartier dual $\hat D(G)$ of $G$ is a connected formal group.

2. Any multiplicative subgroup of $G$ is zero.

3. For any subgroup $H$ of $G$ with $H\neq 0$ we have $Gr_k(H,\alpha_k)\neq 0$.

4. Any algebraic quotient of $G$ is an extension of subgroups of $\alpha_k$.

5. (If $p\neq 0$), $\cap Im V^n_G =e$.

An affine group scheme satisfying these conditions is called unipotent group scheme.

Last revised on June 13, 2012 at 00:23:26. See the history of this page for a list of all contributions to it."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.91082925,"math_prob":0.9986597,"size":572,"snap":"2022-40-2023-06","text_gpt3_token_len":133,"char_repetition_ratio":0.112676054,"word_repetition_ratio":0.0,"special_character_ratio":0.23076923,"punctuation_ratio":0.13157895,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998727,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-05T17:18:52Z\",\"WARC-Record-ID\":\"<urn:uuid:83099af2-be71-4d5f-8037-f4858d1f1660>\",\"Content-Length\":\"17669\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:69e4a7fd-f49d-4287-bccd-7f2b47c31a85>\",\"WARC-Concurrent-To\":\"<urn:uuid:15862086-fe8c-46b1-b677-5f0a92a988f5>\",\"WARC-IP-Address\":\"128.2.25.48\",\"WARC-Target-URI\":\"https://ncatlab.org/nlab/show/unipotent+group+scheme\",\"WARC-Payload-Digest\":\"sha1:O7EDFAECVBXJHEVSSC444KNO67OG3FXR\",\"WARC-Block-Digest\":\"sha1:B5M6TWQD4REKO4WHUGLKWBQHDTXMGTMQ\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500273.30_warc_CC-MAIN-20230205161658-20230205191658-00876.warc.gz\"}"} |
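The defining condition in the nLab entry above, r is unipotent when r − 1 is nilpotent, can be checked numerically for the classic example: an upper triangular matrix with ones on the diagonal. This only illustrates the ring-element definition, not the group-scheme conditions:

```python
import numpy as np

# A classic unipotent element of the matrix ring: upper triangular
# with 1s on the diagonal.
r = np.array([[1, 2, 5],
              [0, 1, 3],
              [0, 0, 1]])

n = r - np.eye(3, dtype=int)  # "r - 1": subtract the multiplicative unit

# n is strictly upper triangular, hence nilpotent: n**3 = 0 for a 3x3 matrix.
print(np.linalg.matrix_power(n, 2))  # not yet zero
print(np.linalg.matrix_power(n, 3))  # the zero matrix, so r is unipotent
```

The group of all such unitriangular matrices is a standard example of a unipotent algebraic group.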
https://myhomejob.in/online-part-time-jobs-in-mansa/ | [
"## Online Part time Jobs in Mansa",
null,
"## Now We Providing Best Online Jobs in Mansa

1-We Providing Copy Paste Work From Home.

2-we provide website list and data which is need for copy paste work.

3- also we training you how to Do Copy Paste work.

4- we pay you Daily basis Payment (Every Day).

5- We Pay You Rs.1 to Rs.10 Per Post.

6- You Can Do Unlimited Post Per Day.

7- You Can Do Your Work without any target.

8- You Can Set Your Own Time To Do This Work.

© 2021 My Home Job Solution Powered by WordPress"
| [
null,
"data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSI2NCIgaGVpZ2h0PSI0OCIgdmlld0JveD0iMCAwIDY0IDQ4Ij48cmVjdCB3aWR0aD0iMTAwJSIgaGVpZ2h0PSIxMDAlIiBmaWxsPSIjY2ZkNGRiIi8+PC9zdmc+",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.802356,"math_prob":0.95555925,"size":1465,"snap":"2021-04-2021-17","text_gpt3_token_len":368,"char_repetition_ratio":0.24845996,"word_repetition_ratio":0.071969695,"special_character_ratio":0.24778157,"punctuation_ratio":0.11464968,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98894566,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-22T10:01:55Z\",\"WARC-Record-ID\":\"<urn:uuid:91224527-be31-45bf-bb23-bde63addbaee>\",\"Content-Length\":\"51323\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7048ff62-9c94-4250-88f8-30abbe371eb3>\",\"WARC-Concurrent-To\":\"<urn:uuid:7e6306cc-e6b6-40b0-b3ef-b3576c71f679>\",\"WARC-IP-Address\":\"103.228.114.162\",\"WARC-Target-URI\":\"https://myhomejob.in/online-part-time-jobs-in-mansa/\",\"WARC-Payload-Digest\":\"sha1:WA72X7LRUXLMN5N6N44KOQIRKJHPGG7O\",\"WARC-Block-Digest\":\"sha1:MVJ7CWRRZESKQMEBOB35KEE3N55KPF3S\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703529179.46_warc_CC-MAIN-20210122082356-20210122112356-00282.warc.gz\"}"} |
https://manual-1-3.karamba3d.com/3-in-depth-component-reference/3.5-algorithms/3.5.3-analyze-nonlinear-wip | [
"3.5.3: Analyze Nonlinear WIP
Linear structural behaviour means that if one changes the external loads by a factor $f$, the physical response quantities (displacements, cross section forces, stresses, …) also change by that factor. This has the pleasant effect that the impact of different loads can be superimposed. Thus it is not necessary to recalculate the model for each possible combination of external loads. For real structures the assumption of linear behaviour is an approximation – a good one in many cases. There are two major sources of non-linearity:
• Physical non-linearity: Comes into play when materials leave the linear elastic range (e.g. concrete that cracks, steel that yields, …)
• Geometric non-linearity: Takes effect when
• lateral displacements get so large that their effect on the axial (in case of e.g. beams) or in-plane (think of shells) deformation can not be neglected any more,
• a nodal rotation $\alpha$ reaches such a value that the difference between $\alpha$ and $\tan(\alpha)$ gains importance.
The “Analyze Nonlinear WIP”-component lets one deal with geometric non-linearity. It is work-in-progress. This means that especially for shells the algorithms may not converge within acceptable time for some structures. If however a result is returned, then it is sound.
With the “Analyze Nonlinear WIP”-component one can choose from three variants of iterative solution algorithms. Each of these has different benefits and liabilities which will be explained below. The algorithms are based on the assumption of small strains, but allow arbitrarily large displacements.
The target of all three algorithms is to find a displacement state where the external loads and the internal forces are in equilibrium. Starting from a known initial displacement state, one has to guess how the structure deforms under the given loads. This guess leads to a second displacement state where the internal and external forces usually do not match.
The remaining imbalance forms the basis of a next prediction regarding the change of displacements and so on. Equilibrium is reached when the residual-force or change of displacements falls below a given threshold. The three algorithms offered by the “Analyze Nonlinear WIP”-component differ in how they predict the displacement increments.\n\n# Dynamic Relaxation\n\nFig. 3.5.3.1: Dynamic relaxation method option of the “Analyze Nonlinear WIP”-component.\nFig. 3.5.3.1 shows a cantilever beam with a bending moment load about the local y-axis at its tip. It consists of 20 beam elements. For calculating its response the “DynamicRelaxation”-option is used. This algorithm predicts the next move of a structure based on the direction of the residual forces acting on each node. It is a robust procedure which converges to equilibrium quite reliably but sometimes needs a large number of iterations to do so. This component offers the following input-plugs:\n\"Model\"\nStructure to be analyzed.\nNumber of increments per load-case (the default is 5). External loads get applied in several steps. In case of structures with nearly linear behaviour, the number of increments can be set to a small number. For highly non-linear problems a larger value can be advantageous. The smaller the load-increments, the easier it is for the algorithm to find equilibrium. The number of iterations usually decreases with increasing number of load-steps (and thus decreasing step size). The overall performance can however suffer if the number of load-steps is set to a number which is too high for the given type of structural behaviour.\n\"maxEquiIter\"\nSets the maximum number of equilibrium iterations per load-increment and thus sets a limit on computation time. It defaults to 200.\n\"EquiTol\"\nTolerance for the iterative change of residual forces and displacements relative to their incremental change in the current load-step. 
The default value is\n$1E-7$\n.\n\"maxLimitIter\"\nThe range of problems which can be tackled using the dynamic relaxation (DR) algorithm as implemented in Karamba3D is limited to stable structures. In case of phenomena like buckling or snap-through, equilibrium states may exist beyond the point of initial instability. They are however hard to reach due to their often large distance from the last known stable configuration. In such a case the DR-algorithm does not converge to an equilibrium state within the maximum number of equilibrium iterations. It then tries to close in on the point of assumed instability by halving the load-increment which led to divergence. By proceeding in this manner, the so called limit load can be determined with arbitrary precision. “maxLimitIter” sets an upper limit on the number of limit-load-iterations which is equal to 200 by default. Sadly, divergence can also be caused by numerical problems in the algorithm. Thus the limit-load-factor as determined by the “Analyze Nonlinear WIP”-component constitutes only a lower limit estimation.\n\"LimitTol\"\n\"StepSizeFac\"\nA factor for scaling the predicted displacement increments of the DR-algorithm.\nDuring a non-linear calculation lots of things can happen. In order to get an idea about why and where something went wrong, the DR variant of the “DynamicRelaxation”-option produces the following output:\n\"Model\"\nStructure with calculated displacements, stresses and internal forces.\n\"Disp\"\nMaximum displacement reached in centimeter.\n\"Energy\"\nDeformation energy stored in the structure in\n$kN m$\n.\n\"Info\"\nDetails regarding the solution process. 
It outputs five columns of text:\n• “Iter”: Counts the number of iterations for the current load-increment.\n• “Disp.Err”: Outputs the ratio of the sum of iterative changes of the nodal displacements with respect to the change of displacements of the first iteration in the current load-increment.\n• “Force.Err”: Outputs the ratio of the sum of iterative changes of the residual forces with respect to the current load-increment.\n"Lambdas"\n\n# Newton-Raphson Method\n\nIn practice, dynamic relaxation (DR) procedures are used for highly non-linear problems like numerical crash-tests of cars, bolts being shot into a wall, and so on. The reason is that implementing non-linear effects as DR code is relatively easy. This ease of implementation comes at the cost of high computational effort: many iterations are necessary to reach equilibrium with acceptable accuracy. The way out of this is to invest more effort in a better prediction of the displacement increments. In DR-methods the residual forces at the nodes form the basis of predicting the next position of a node. Methods like the Newton-Raphson or Arc-Length method use a stiffness matrix for producing displacement predictions. There the computational cost per iteration is higher, but the number of iterations can be made much smaller as compared to DR-methods. With a consistent stiffness matrix, quadratic convergence can be achieved under optimal conditions. This means that for the iterative displacement- and force-errors the number of zeros after the decimal separator doubles in each iteration. For the “Analyze Nonlinear WIP”-component this is not yet the case, which is one reason for the “work in progress”-label. Details on the Newton-Raphson and Arc-Length methods can be found on pages 102 ff. and 214 ff.\nFig. 3.5.3.2: Newton-Raphson method option of the “Analyze Nonlinear WIP”-component\nFig. 3.5.3.2 shows the same cantilever beam as before, this time analyzed with the “NewtonRaphson”-option. 
The Newton-Raphson variant of the “Analyze Nonlinear WIP”-component comes with nearly the same input- and output-plugs as the DR-version. The only difference is the missing “StepSizeFac”-input. Since Newton-Raphson procedures have the same limitation with respect to unstable structures as DR-methods, an interval-halving strategy for closing in on limit-points is applied as before.\n\n# Arc-Length Method\n\nFig. 3.5.3.3: Arc-Length method option of the “Analyze Nonlinear WIP”-component\nFor many structures, reaching a first point of instability is not yet the end of the story. Especially thin plate and shell structures show large load-bearing reserves when considering their post-buckling behaviour. The Arc-Length method can be used for these kinds of situations. Fig. 3.5.3.3 shows the calculation of a truss structure which snaps through from an unstable state to a stable post-buckling configuration.\nThe first two inputs of the “Arclength”-component have the same meaning as before. Here is a description of how the rest of the input-plugs control the solution process:"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.92925936,"math_prob":0.9614388,"size":1029,"snap":"2022-27-2022-33","text_gpt3_token_len":202,"char_repetition_ratio":0.10243902,"word_repetition_ratio":0.0,"special_character_ratio":0.18367347,"punctuation_ratio":0.06666667,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9856525,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-08T07:41:18Z\",\"WARC-Record-ID\":\"<urn:uuid:5f821971-14e9-4440-943b-b429130c0dca>\",\"Content-Length\":\"514999\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:54bdc311-c02d-48b6-98df-d5c0e40d74e9>\",\"WARC-Concurrent-To\":\"<urn:uuid:9e5ade84-3d98-42de-b72b-3e5ad99d8445>\",\"WARC-IP-Address\":\"104.18.1.145\",\"WARC-Target-URI\":\"https://manual-1-3.karamba3d.com/3-in-depth-component-reference/3.5-algorithms/3.5.3-analyze-nonlinear-wip\",\"WARC-Payload-Digest\":\"sha1:VL7TPAZJSVQWVJTH5PZ2LB2RMX4UU2KE\",\"WARC-Block-Digest\":\"sha1:J6JK2RW7FZJSQOGMBJHLMQB26GQ6KU2C\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882570767.11_warc_CC-MAIN-20220808061828-20220808091828-00054.warc.gz\"}"} |
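The iterative equilibrium search that the Karamba3D entry above describes for the "DynamicRelaxation" option can be sketched in a few lines. This is a hypothetical single-degree-of-freedom illustration, not Karamba3D's actual implementation: the internal force f_int(u) = k*u + c*u**3 stands in for a geometrically non-linear member, and the node is moved along the residual (out-of-balance) force with damped pseudo-dynamics until the residual falls below a tolerance.

```python
# Toy dynamic-relaxation iteration (assumed model, not Karamba3D's solver):
# follow the residual force with damped pseudo-dynamics until equilibrium.

def dynamic_relaxation(f_ext, k=10.0, c=2.0, mass=1.0, damping=0.9,
                       dt=0.05, tol=1e-7, max_iter=10000):
    """Return (displacement, iterations) where f_int(u) balances f_ext."""
    u, v = 0.0, 0.0                               # displacement, pseudo-velocity
    for it in range(1, max_iter + 1):
        residual = f_ext - (k * u + c * u ** 3)   # out-of-balance force
        if abs(residual) < tol:
            return u, it                          # equilibrium reached
        v = damping * v + dt * residual / mass    # damped pseudo-dynamics
        u += dt * v                               # displacement prediction
    raise RuntimeError("no equilibrium within max_iter")

u, iters = dynamic_relaxation(f_ext=25.0)
print(round(u, 6), iters)
```

Thanks to the velocity damping the residual shrinks geometrically, which mirrors the text's point: DR converges reliably but may need many iterations. The real solver additionally applies the load in increments and falls back to interval halving of the load step near limit points.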
https://ximpledu.com/en-us/combinations-ncr/ | [
"",
null,
"# Combinations (nCr)",
null,
"How to solve combination problems (nCr): formulas, examples, and their solutions.\n\n## Formula: nCr",
null,
"nCr is\n[starting from n and multiplying r factors]\nover [r!].\n\nnCr = n! / [(n - r)! r!]\n\nThis factorial formula can also be used\nto solve nCr.\n\n## Example 1",
null,
"7C3 is\nstarting from 7 and multiplying 3 factors\nover 3!.\n\n3! = 3⋅2⋅1\n\n3⋅2⋅1 = 6\n\nSo cancel 6 in the numerator\nand cancel 3⋅2⋅1 in the denominator.\n\nThen the right side is 7⋅5.\n\n7⋅5 = 35\n\n## Example 2",
null,
"It says\nfrom 9 students,\nchoose 4 students.\nAnd there's [no order].\n\nWhen there's only [choosing] and [no order],\nthen use the combination.\n\nSo the number of ways to choose is\n9C4.\n\n9C4 is\nstarting from 9 and multiplying 4 factors\nover 4!.\n\n4! = 4⋅3⋅2⋅1\n\nCancel 4 in the denominator\nand reduce 8 in the numerator to 2.\n\nCancel 6 in the numerator\nand cancel 3⋅2⋅1.\n\nThen the right side is 9⋅2⋅7.\n\n9⋅2 = 18\n\n18⋅7 = 126\n\n## Example 3",
null,
"It says\nfrom 5 boys,\nchoose 2 boys.\nThere's [no order].\n\nAnd it says\nfrom 6 girls,\nchoose 3 girls.\nAlso there's [no order].\n\nWhen there's only [choosing] and [no order],\nthen use the combination.\n\nSo the number of ways to choose\n2 boys [and] 3 girls is\n5C2 [ × ] 6C3.\n\n5C2 is\nstarting from 5 and multiplying 2 factors\nover 2!.\n\nAnd 6C3 is\nstarting from 6 and multiplying 3 factors\nover 3!.\n\n2! = 2⋅1\n3! = 3⋅2⋅1\n\nCancel 2⋅1 in the denominator\nand reduce 4 in the numerator to 2.\n\nCancel 6 in the numerator\nand cancel 3⋅2⋅1.\n\nThen the right side is 5⋅2 ⋅ 5⋅4.\n\n5⋅2 = 10\n5⋅4 = 20\n\n10⋅20 = 200\n\n## Formula: nCr = nCn - r",
null,
"nCr means\nthe number of ways\nto [choose r] things with no order\nfrom n.\n\nThis also means\nthe number of ways\nto [remain (n - r)] things with no order.\n\nSo the number of ways to [choose r] things\nand the number of ways to [remain (n - r)] things\nare the same.\n\nSo nC[r] = nC[n - r].\n\n## Example 4",
null,
"8C6 = 8C8 - 6\n\nSo change 8C6 to 8C2.\n\nIt's obvious that\n8C2 is easier to solve than 8C6.\n\nThis is the reason to change 8C6 to 8C2.\n\n8C2 is\nstarting from 8 and multiplying 2 factors\nover 2!.\n\n2! = 2⋅1\n\nCancel 2 in the denominator\nand reduce 8 in the numerator to 4.\n\nThen the right side is 4⋅7.\n\n4⋅7 = 28\n\n## Formula: nC1",
null,
"nC1 is\nstarting from n and multiplying 1 factor\nover 1!.\n\nThis is n/1!.\n\nRecall that 1! = 1.\n\nFactorial - Example 3\n\nSo n/1! = n.\n\nSo nC1 = n.\n\n## Example 5",
null,
"The latter number of 11C1 is 1.\n\nSo 11C1 = 11.\n\n## Formula: nC0",
null,
"nC0 is defined as 1.\n\nnC0 means\nfrom n things,\n[choose] 0 things with [no order].\n\nThere's only 1 way to do that:\nnot choosing anything.\n\nSo nC0 = 1.\n\n## Example 6",
null,
"The latter number of 4C0 is 0.\n\nSo 4C0 = 1."
]
| [
null,
"https://ximpledu.com/logo-02.png",
null,
"https://ximpledu.com/en-us/combinations-ncr/combinations-ncr-thm-02.png",
null,
"https://ximpledu.com/en-us/combinations-ncr/combinations-ncr-01-02.png",
null,
"https://ximpledu.com/en-us/combinations-ncr/combinations-ncr-03-02.png",
null,
"https://ximpledu.com/en-us/combinations-ncr/combinations-ncr-04-02.png",
null,
"https://ximpledu.com/en-us/combinations-ncr/combinations-ncr-05-02.png",
null,
"https://ximpledu.com/en-us/combinations-ncr/combinations-ncr-06-02.png",
null,
"https://ximpledu.com/en-us/combinations-ncr/combinations-ncr-07-02.png",
null,
"https://ximpledu.com/en-us/combinations-ncr/combinations-ncr-08-02.png",
null,
"https://ximpledu.com/en-us/combinations-ncr/combinations-ncr-09-02.png",
null,
"https://ximpledu.com/en-us/combinations-ncr/combinations-ncr-10-02.png",
null,
"https://ximpledu.com/en-us/combinations-ncr/combinations-ncr-11-02.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.7512909,"math_prob":0.9855675,"size":2556,"snap":"2020-45-2020-50","text_gpt3_token_len":1008,"char_repetition_ratio":0.13048589,"word_repetition_ratio":0.20912547,"special_character_ratio":0.36384976,"punctuation_ratio":0.13680781,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99543375,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24],"im_url_duplicate_count":[null,null,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-28T10:51:36Z\",\"WARC-Record-ID\":\"<urn:uuid:012b903b-dc26-4064-9afd-e9a9704aaf9c>\",\"Content-Length\":\"29960\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4cdda7ac-d5a3-4019-8f41-42dcd3305518>\",\"WARC-Concurrent-To\":\"<urn:uuid:d75ef642-6367-4e67-8248-9d22e8a76393>\",\"WARC-IP-Address\":\"104.26.5.98\",\"WARC-Target-URI\":\"https://ximpledu.com/en-us/combinations-ncr/\",\"WARC-Payload-Digest\":\"sha1:LAPJCO425C2CGNIRU2VTHN5BV5SEKCDC\",\"WARC-Block-Digest\":\"sha1:YD4ZA34IMKW47TVMVBA3QRKR7BTWEETY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107898499.49_warc_CC-MAIN-20201028103215-20201028133215-00656.warc.gz\"}"} |
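The worked examples in the combinations entry above can be checked mechanically. Here is a short Python sketch using only the standard library; `math.comb` is the built-in equivalent of the factorial formula from the text.

```python
from math import comb, factorial

def nCr(n, r):
    """The factorial form from the text: n! / ((n - r)! * r!)."""
    return factorial(n) // (factorial(n - r) * factorial(r))

print(nCr(7, 3))               # Example 1 -> 35
print(nCr(9, 4))               # Example 2 -> 126
print(nCr(5, 2) * nCr(6, 3))   # Example 3 -> 200
print(nCr(8, 6) == nCr(8, 2))  # symmetry nCr = nC(n-r) -> True
print(nCr(11, 1), nCr(4, 0))   # nC1 = n and nC0 = 1 -> 11 1

# cross-check the hand formula against the built-in
assert all(nCr(n, r) == comb(n, r) for n in range(12) for r in range(n + 1))
```

The symmetry check reproduces Example 4: choosing 6 of 8 and leaving 2 of 8 are the same count, so 8C6 reduces to the easier 8C2 = 28.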
https://yichun.home.focus.cn/gonglue/5ed4b00ee508b5ed.html | [
"|\n\n# 定制板材十大品牌精材艺匠四招教你看生态板?\n\n生态板是表面经过装饰的板材,是将带有不同颜色或纹理的纸放入生态板树脂胶粘剂中浸泡,然后干燥到一定固化程度,将其铺装在细木工板、多层板或其他木质纤维板表面,经热压而成的装饰板。与传统常见的免漆板和三聚氰胺板具有同样装饰工艺和性能,所以有时也统称为免漆板和三聚氰胺板,但是生态板使用的是环保胶水、用量也小,而且生态板的三聚氰胺饰面隔绝游离甲醛的释放,所以环保也要好。",
null,
"With all that said, how should we pick a good eco-board? Today the editor at Jingcai Yijiang (精材艺匠), one of China's top-ten custom board brands, shares four tips on how to judge an eco-board. Check the appearance: generally speaking, an inferior board will show yellowing, blackening or other glue-bleed on its face, while a quality board has no obvious surface flaws: it is very smooth, with clear, natural grain and no scars or color differences.",
null,
"Check the flatness: the flatness of a board reflects the quality of both the material and the manufacturing process. In slightly dim light, sight along the surface at a 45-degree angle and check the reflection for unevenness, whether the decorative paper is firmly bonded to the wood, and whether there is any blistering; if so, the quality is poor. Check the structure: every layer of an eco-board is held together by glue, so if the processing is sloppy or the glue is of poor quality, delamination can occur. Look at a cross-section of the board to see whether the gaps between the core strips are too wide, whether the glue bond is solid, and whether there is any debonding or cracking. You can also tap various parts of the board and listen: a dull sound means the core inside is of poor quality and the board is low-grade.",
null,
"Check the environmental grade: formaldehyde is listed as a harmful substance by the World Health Organization; it is the number-one pollutant in renovation and a key indicator of how green a board is. Boards are graded into levels such as E1 and E0, with E0 being the stricter grade. When buying, check the stated environmental grade and the ten-ring (China Environmental Labelling) certification mark, or smell the edge of the board up close: a pungent odor means excessive formaldehyde, so buy with caution.",
null,
"Check the brand: branded products are widely trusted because both their quality and their after-sales service are better guaranteed. Jingcai Yijiang (精材艺匠) eco-boards are strictly quality-controlled to rule out any passing-off of inferior goods, so as not to damage a hard-won brand image.",
null,
"`Statement: this article was written by an author hosted on the Focus open platform; except for official Focus accounts, the views expressed are the author's own and do not represent Focus's position. To report incorrect information call 400-099-0099 or email [email protected].`",
null,
]
| [
null,
"https://t-img.51f.com/xf/xw/72b520d7-c0ba-42d5-91eb-ba909f2eb632.JPEG",
null,
"https://t-img.51f.com/xf/xw/ced90da9-3eed-4e52-b94e-82631f8cc1b1.JPEG",
null,
"https://t-img.51f.com/xf/xw/6bce2fbd-524f-4f0e-a0b4-a3942d71232c.JPEG",
null,
"https://t-img.51f.com/xf/xw/b16494c4-bef1-4511-9577-7b1bdcc00ff9.JPEG",
null,
"https://t3.focus-img.cn/sh740wsh/xf/dt/d23d4e6b-5fcf-47ff-8c85-c0ed8c61e364.JPEG",
null,
"https://yichun.home.focus.cn/gonglue/5ed4b00ee508b5ed.html",
null,
"https://t1.focus-res.cn/front-pc/module/loupan-baoming/images/yes.png",
null,
"https://t1.focus-res.cn/front-pc/module/loupan-baoming/images/qrcode.png",
null,
"https://t.focus-res.cn/home-front/pc/img/qrcode.d7cfc15.png",
null,
null
]
| {"ft_lang_label":"__label__zh","ft_lang_prob":0.9779592,"math_prob":0.50231177,"size":772,"snap":"2022-40-2023-06","text_gpt3_token_len":940,"char_repetition_ratio":0.015625,"word_repetition_ratio":0.0,"special_character_ratio":0.10751295,"punctuation_ratio":0.0,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99970555,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26],"im_url_duplicate_count":[null,2,null,2,null,2,null,2,null,1,null,null,null,null,null,2,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-28T23:52:56Z\",\"WARC-Record-ID\":\"<urn:uuid:ca194c14-5a2a-41ba-a8fc-e0c00ee6b6b6>\",\"Content-Length\":\"144938\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:54da480e-2e1b-4b03-948e-50198a00d928>\",\"WARC-Concurrent-To\":\"<urn:uuid:e9ad5ee4-6555-4dbb-9f35-290691a7c759>\",\"WARC-IP-Address\":\"101.72.224.29\",\"WARC-Target-URI\":\"https://yichun.home.focus.cn/gonglue/5ed4b00ee508b5ed.html\",\"WARC-Payload-Digest\":\"sha1:OAZI6RGBRU2V6UPCCYCDCKCLF2XJNQSW\",\"WARC-Block-Digest\":\"sha1:QLICP5NIKLTV3RJJYC32LJ3A5E6XVANH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499695.59_warc_CC-MAIN-20230128220716-20230129010716-00297.warc.gz\"}"} |
https://www.hindawi.com/journals/ape/2016/7176981/tab12/ | [
"Table 12: Ambiguity from for three-step RC ladder network.\n Fault Ambiguity with the following short short ( = 7–9), short ( = 1), short ( = 3) open No ambiguity short short ( = 7–9) short short ( = 1), open ( ≥ 21), short ( ≥ 27), open ( ≥ 11), short ( ≥ 26) open short ( ≥ 21), short ( = 11–13), short ( = 14–24), open ( ≥ 20), short ( ≥ 10) short short ( = 3), open ( = 11–13), short ( = 4, ), open ( = 5), short ( = 10–17, 23–30) short short ( ≥ 27), open ( = 14–24), short ( = 4, ), short ( ≥ 15), open ( = 6, 7, 30) open short ( ≥ 11), open ( ≥ 20), short ( = 5), short ( = 6, 7, 30), short ( ≥ 28) short short ( ≥ 26), open ( ≥ 10), short ( = 10–17, 23–30), short ( ≥ 15), open ( ≥ 28)"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.7175166,"math_prob":0.9991384,"size":653,"snap":"2019-43-2019-47","text_gpt3_token_len":282,"char_repetition_ratio":0.37904468,"word_repetition_ratio":0.15286624,"special_character_ratio":0.62787133,"punctuation_ratio":0.25373134,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99649596,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-13T02:23:59Z\",\"WARC-Record-ID\":\"<urn:uuid:ed8fe221-83d3-4183-a404-25a623d7a591>\",\"Content-Length\":\"115211\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4dc9579f-03f9-4304-ba9b-b159832170e2>\",\"WARC-Concurrent-To\":\"<urn:uuid:d91babc7-4c26-4eef-823f-efb89d6fe684>\",\"WARC-IP-Address\":\"54.164.130.168\",\"WARC-Target-URI\":\"https://www.hindawi.com/journals/ape/2016/7176981/tab12/\",\"WARC-Payload-Digest\":\"sha1:PJ63RFBEDG4SH5ZSOKFPYOJAUMLH2A4B\",\"WARC-Block-Digest\":\"sha1:DPWB3AGFVT5IHM5PS6X6GJNOOASEVKI3\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496665976.26_warc_CC-MAIN-20191113012959-20191113040959-00006.warc.gz\"}"} |
https://discuss.interviewbit.com/t/check-out-my-suggestion-for-why-its-not-n-2/18767 | [
"",
null,
"# Check out my suggestion for why its not n^2\n\n#1\n\nI think it should be n^2 at first glance but it’s not\nbecause if the n is 5 the total iteration is :\n1st iteration:\ni = 5 and now i is i /=5 i.e 2\nj = 0, 1\n2nd iteration:\ni = 3 and now i is i /=3 i.e 1\nj= 0\ntotal iteration of i: 2 and iteration of j : 3\nSo the total iteration is 5.\n\n#2\n\nbut for 2nd iteration the value of i will be 2. for your case"
]
| [
null,
"https://discuss-files.s3.dualstack.us-west-2.amazonaws.com/original/4X/9/6/6/966130bf03a57db2de73897c626d264b27cc950f.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9075249,"math_prob":0.99057025,"size":442,"snap":"2021-04-2021-17","text_gpt3_token_len":150,"char_repetition_ratio":0.1803653,"word_repetition_ratio":0.17475729,"special_character_ratio":0.33710408,"punctuation_ratio":0.084033616,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99534905,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-26T13:02:47Z\",\"WARC-Record-ID\":\"<urn:uuid:c1bb8e76-38b0-48cc-8496-4fcb50e13819>\",\"Content-Length\":\"9072\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2629877f-7c05-4632-8ef0-ca625996df21>\",\"WARC-Concurrent-To\":\"<urn:uuid:a7881d76-9a26-4bdd-a075-f5785b899e21>\",\"WARC-IP-Address\":\"52.88.249.28\",\"WARC-Target-URI\":\"https://discuss.interviewbit.com/t/check-out-my-suggestion-for-why-its-not-n-2/18767\",\"WARC-Payload-Digest\":\"sha1:NTD2UKBUHM6V56XAFLELTF5WPSJNKEIB\",\"WARC-Block-Digest\":\"sha1:3INPCCQ3NBSYW2HWVJN3DMRKH7P75PRF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610704799741.85_warc_CC-MAIN-20210126104721-20210126134721-00660.warc.gz\"}"} |
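The screenshot with the actual code is not reproduced in the thread above, so as an assumption here is one common loop with the behaviour being discussed: the outer index starts at n and halves each pass, and the inner loop runs the current index's worth of steps. Counting iterations empirically shows the total work is bounded by 2n (a geometric series), i.e. O(n) rather than O(n^2).

```python
# Hypothetical loop shape (the original code is only in the screenshot):
# outer index halves, inner loop does i steps, so total < n + n/2 + ... < 2n.

def count_iterations(n):
    total, i = 0, n
    while i > 0:
        for j in range(i):   # inner loop does i steps
            total += 1
        i //= 2              # outer index halves each pass
    return total

for n in (5, 100, 10_000):
    print(n, count_iterations(n))   # total stays below 2*n
    assert count_iterations(n) < 2 * n
```

For n = 5 this counts 5 + 2 + 1 = 8 iterations, which is why the total is linear even though two nested loops are involved.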
https://ansanswers.com/mathematics/question13139220 | [
"",
null,
"# I need parent and transform and parent function.",
null,
"",
null,
"",
null,
"",
null,
"### Another question on Mathematics",
null,
"Mathematics, 21.06.2019 14:30\nSadie computes the perimeter of a rectangle by adding the length, l, and width, w, and doubling this sum. Eric computes the perimeter of a rectangle by doubling the length, l, doubling the width, w, and adding the doubled amounts. Write an equation for Sadie's way of calculating the perimeter.",
null,
"Mathematics, 21.06.2019 17:00\nIssof claims that the scale factor is 1/2. Which statement about his claim is correct?",
null,
"Mathematics, 21.06.2019 17:10\nComplete the table for different values of x in the polynomial expression -7x^2 + 32x + 240. Then, determine the optimal price that the taco truck should sell its tacos for. Assume whole-dollar amounts for the tacos.",
null,
"Mathematics, 21.06.2019 19:00\nRena is building a 1:180 scale model of a real castle. Her model has a rectangular base that is 3 feet wide and 4 feet long. What is the area of the base of the actual castle in square feet?",
null,
]
| [
null,
"https://ansanswers.com/tpl/images/cats/mat.png",
null,
"https://ansanswers.com/tpl/images/07/29/zxbiDIietSwSPrTH.jpg",
null,
"https://ansanswers.com/tpl/images/cats/User.png",
null,
"https://ansanswers.com/tpl/images/ask_question.png",
null,
"https://ansanswers.com/tpl/images/ask_question_mob.png",
null,
"https://ansanswers.com/tpl/images/cats/mat.png",
null,
"https://ansanswers.com/tpl/images/cats/mat.png",
null,
"https://ansanswers.com/tpl/images/cats/mat.png",
null,
"https://ansanswers.com/tpl/images/cats/mat.png",
null,
"https://ansanswers.com/tpl/images/cats/istoriya.png",
null,
"https://ansanswers.com/tpl/images/cats/en.png",
null,
"https://ansanswers.com/tpl/images/cats/fizika.png",
null,
"https://ansanswers.com/tpl/images/cats/health.png",
null,
"https://ansanswers.com/tpl/images/cats/istoriya.png",
null,
"https://ansanswers.com/tpl/images/cats/fizika.png",
null,
"https://ansanswers.com/tpl/images/cats/fizika.png",
null,
"https://ansanswers.com/tpl/images/cats/himiya.png",
null,
"https://ansanswers.com/tpl/images/cats/obshestvoznanie.png",
null,
"https://ansanswers.com/tpl/images/cats/obshestvoznanie.png",
null,
"https://ansanswers.com/tpl/images/cats/istoriya.png",
null,
"https://ansanswers.com/tpl/images/cats/istoriya.png",
null,
"https://ansanswers.com/tpl/images/cats/biologiya.png",
null,
"https://ansanswers.com/tpl/images/cats/himiya.png",
null,
"https://ansanswers.com/tpl/images/cats/himiya.png",
null,
"https://ansanswers.com/tpl/images/cats/mat.png",
null,
"https://ansanswers.com/tpl/images/cats/mat.png",
null,
"https://ansanswers.com/tpl/images/cats/mat.png",
null,
"https://ansanswers.com/tpl/images/cats/obshestvoznanie.png",
null,
"https://ansanswers.com/tpl/images/cats/obshestvoznanie.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.7354689,"math_prob":0.95396817,"size":1431,"snap":"2021-43-2021-49","text_gpt3_token_len":476,"char_repetition_ratio":0.2067274,"word_repetition_ratio":0.12727273,"special_character_ratio":0.3836478,"punctuation_ratio":0.2568306,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9912746,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58],"im_url_duplicate_count":[null,null,null,1,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-04T17:47:03Z\",\"WARC-Record-ID\":\"<urn:uuid:2f4cdc30-8e79-4d81-b0db-eb897450c6d9>\",\"Content-Length\":\"70088\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b7556026-1726-4d0a-9885-b25e2ceb5109>\",\"WARC-Concurrent-To\":\"<urn:uuid:80afb41e-9fb9-4e4b-9282-f0ae6cef4a5e>\",\"WARC-IP-Address\":\"172.67.191.232\",\"WARC-Target-URI\":\"https://ansanswers.com/mathematics/question13139220\",\"WARC-Payload-Digest\":\"sha1:D2CB5AKWWMCZBHCX5NE2FZNQ7RUFN7AW\",\"WARC-Block-Digest\":\"sha1:UZJ7EVANBQ5CNOMPWIWNHWWZRFYAMFNF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964362999.66_warc_CC-MAIN-20211204154554-20211204184554-00535.warc.gz\"}"} |
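Two of the question snippets above are easy to verify numerically: Sadie's and Eric's perimeter methods are the same identity, 2*(l + w) = 2*l + 2*w, and Rena's 1:180 scale question just multiplies each side by the scale factor before taking the area. A quick check (the numbers below come straight from the questions):

```python
def sadie(l, w):
    """Sadie: add length and width, then double the sum."""
    return 2 * (l + w)

def eric(l, w):
    """Eric: double the length, double the width, then add."""
    return 2 * l + 2 * w

# the two formulas agree for any rectangle (distributive law)
for l, w in [(3, 4), (2.5, 7), (10, 10)]:
    assert sadie(l, w) == eric(l, w)
print(sadie(3, 4))             # -> 14

# Rena's castle: a 1:180 model base of 3 ft x 4 ft scales to
# (3*180) ft by (4*180) ft in reality
print((3 * 180) * (4 * 180))   # -> 388800 square feet
```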
https://gis.stackexchange.com/questions/240502/calculate-volume-from-raster-grass-with-depth-per-cell
"# Calculate volume from raster (GRASS) with depth per cell\n\nI'm trying to calculate the volumes of depression areas filled with water. I have the depth for each cell in the raster, and I'm trying to get the total volume of cells adjacent to each other. Which I then, in step 2, need to remove all the bodies of water lower than a given threshold and then convert the file to vector/shp format.\n\nI've trying using the r.clump and then r.volume function in qgis, but I keep getting this error:\n\nERROR: r.volume: Sorry, is not a valid parameter ERROR: Required parameter not set: (Name of input raster map representing data that will be summed within clumps)\n\nC:\\PROGRA~1\\QGIS2~1.18\\bin>v.out.ogr -s -e input=centroids0814b2718bf14943909f424cebd714fb type=auto output=\"C:\\Users\\JGJ\\AppData\\Local\\Temp\\processing503b826c304e474ebc5a9862a9dd29e1\\b97bc604e110497ab8dde2cc3da478e7\" format=ESRI_Shapefile output_layer=centroids ERROR: Vector map not found\n\nThe following layers were not correctly generated. Centroids You can check the log messages to find more information about the execution of the algorithm\n\nI could not get the above to work, but managed a workaround.\n\nI have converted raster to vector, and calculated each polygons volume. I have a lot of polygons I want to dissolve into a lot of larger polygons, which can be done with Dissolve. However each of the small polygon has a volume too it, that needs to be summed up. If I do a dissolve it just adds the total volume of all the polygons in the layer, and not those that get dissolved. Added a concept picture.",
"• Are you loading the output from the first step directly from the `Temp` folder? It seems like QGIS doesn't recognize the vector layer, so maybe try to store it to the disk before running the second step. Otherwise, try adding more detail about your issue. – mgri May 17 '17 at 7:55\n• Thank you for the reply. I pretty much just tried the do the R.clump and then the R.volume, where you put in the raster for both and that's it. Ran it from C:\\ folder, not a temp one. I've managed to do a work around, and have got so far as I now have a polygon layer as vector format with a lot different polygons that need to be dissolved and their volume fields for those that are dissolved needs to be summed up. Updated question with picture and new description. Don't know if I should delete the old bit. – FoolzRailer May 17 '17 at 12:14\n\nIf I understand correctly your goal, you should convert the \"clumps\" to a polygon vector, then use `v.rast.stats` to get the total depth for each clump area. i.e.:\n``````v.rast.stats map=clumps raster=depths column_pref=depth method=sum,minimum,maximum,range"
http://www.annalsofian.org/viewimage.asp?img=AnnIndianAcadNeurol_2015_18_4_435_165478_f5.jpg
"",
null,
"Close Figure 2: Receiver operator characteristic (ROC)-curve along with 95% confi dence bounds for calculating the cut-off value of log N-terminal pro-brain natriuretic peptide (NT-proBNP) in predicting the outcome: Area under the ROC-curve = 0.979; standard error = 0.0153; 95% confi dence intervals = 0.914 to 0.998; z-statistic = 31.346; signifi cance level P (Area = 0.5) = 0.0001",
https://algebra-equation.com/solving-algebra-equation/like-denominators/matrix-equation-to-solve-two.html
"Try the Free Math Solver or Scroll down to Tutorials!\n\n Depdendent Variable\n\n Number of equations to solve: 23456789\n Equ. #1:\n Equ. #2:\n\n Equ. #3:\n\n Equ. #4:\n\n Equ. #5:\n\n Equ. #6:\n\n Equ. #7:\n\n Equ. #8:\n\n Equ. #9:\n\n Solve for:\n\n Dependent Variable\n\n Number of inequalities to solve: 23456789\n Ineq. #1:\n Ineq. #2:\n\n Ineq. #3:\n\n Ineq. #4:\n\n Ineq. #5:\n\n Ineq. #6:\n\n Ineq. #7:\n\n Ineq. #8:\n\n Ineq. #9:\n\n Solve for:\n\n Please use this form if you would like to have this math solver on your website, free of charge. Name: Email: Your Website: Msg:\n\nmatrix equation to solve two quadratic equations\nRelated topics:\npictograph worksheet | holt math worksheet answers | 9th grade algebra practice | how to solve lcm for fractions | the difference between hyperbola and parabola | quadrinomial solving | multiplying integers class game | algebra 2 mcdougal littell answers\n\nAuthor Message\nKleapatlas_Cisder",
null,
"Registered: 08.07.2007\nFrom:",
null,
"Posted: Saturday 30th of Dec 17:32 Hello math fanatics. This is my first post in this forum. I struggle a lot with matrix equation to solve two quadratic equations problems . No matter how much I try, I just am not able to crack any equation in less than an hour. If things go this way, I fear I will not be able to get through my math exam.\nIlbendF",
null,
"Registered: 11.03.2004\nFrom: Netherlands",
null,
"Posted: Sunday 31st of Dec 08:07 If you can give details about matrix equation to solve two quadratic equations, I could provide help to solve the math problem. If you don’t want to pay big bucks for a algebra tutor, the next best option would be a accurate software program which can help you to solve the problems. Algebrator is the best I have come across which will elucidate every step of the solution to any algebra problem that you may copy from your book. You can simply represent as your homework assignment. This Algebrator should be used to learn math rather than for copying answers for assignments.",
null,
"Registered: 10.07.2002\nFrom: NW AR, USA",
null,
"Posted: Monday 01st of Jan 10:55 I can confirm that. Algebrator is the best piece of software for solving math assignments. Been using it for some time now and it keeps on amazing me. Every assignment that I type in, Algebrator gives me a correct answer to it. I have never enjoyed learning math assignment on function composition, inverse matrices and graphing circles so much before. I would advise it for sure.\nVoumdaim of Obpnis",
null,
"Registered: 11.06.2004\nFrom: SF Bay Area, CA, USA",
null,
"Posted: Tuesday 02nd of Jan 17:33 I am a regular user of Algebrator. It not only helps me complete my assignments faster, the detailed explanations provided makes understanding the concepts easier. I advise using it to help improve problem solving skills."
https://fr.slideserve.com/fola/chapter-16-powerpoint-ppt-presentation
"1 / 44\n\n# Chapter 16\n\nChapter 16. ACID - BASE. 16.1 Arrhenius Theory. Acid H + in solution Base OH - in solution . Acid Proton donor H + donor Base Proton acceptor H + acceptor. Conjugate base What’s left of acid Conjugate acid Base + H + Hydronium ion H 3 O + Amphoteric Acts as acid or base.",
null,
"Télécharger la présentation",
null,
"## Chapter 16\n\nE N D\n\n### Presentation Transcript\n\n1. Chapter 16 • ACID-BASE\n\n2. 16.1 Arrhenius Theory • Acid • H+ in solution • Base • OH- in solution\n\n3. Acid Proton donor H+ donor Base Proton acceptor H+ acceptor Conjugate base What’s left of acid Conjugate acid Base + H+ Hydronium ion H3O+ Amphoteric Acts as acid or base 16.2 Brönsted- Lowry Theory Proton = H+\n\n4. Acid – Base Equations HCl + H2O H3O+ + Cl- Acid Base C.A. C.B. NH3 + H2O NH4+ + OH- Base Acid C.A. C.B.\n\n5. 16.2 Strengths of Acids and Bases • Stronger the acid, weaker the CB • Stronger the base, weaker the CA • Ionization • SA completely ionizes • WA do not completely ionize • Central Atom • Higher oxidation number of central atom • Higher electro-negativity of central atom • Binary Acid • HI is the strongest\n\n6. Which Acid is Stronger? • HClO3 or HClO2 • HClO3 or HBrO3 • HClO3 or ClO3- • HCl or HBr • HClO4 –single strongest acid • HF – weak cuz HB\n\n7. 16.3 Autoionization of water • H2O + H2O H3O+ + OH- • H2O H+ + OH- • pH + pOH = 14 .0000001 M @ pH 7\n\n8. pH = -log[H+] = -log[H3O+] Acid pH < 7 Neutral pH = 7 Base pH > 7 pOH = -log[OH-] 16.4 pH Scale – power of hydrogen\n\n9. pH scale\n\n10. Ways to measure pH\n\n11. Litmus paper • Acid • Change from blue to red • Base • Change from red to blue\n\n12. pH paper\n\n13. Indicators • Phenolphthalein∆ clear to pink at 8.2\n\n14. Universal Indicator\n\n15. Indicators\n\n16. RED cabbage\n\n17. pH meters\n\n18. Acid and conjugate base are diff colors.\n\n19. hydrangea\n\n20. Polyprotic acids • H2SO3 H+ + HSO3- Ka1 = 1.7x10-2 • HSO3- H+ + SO3-2Ka2 = 6.4 x 10-8 • Ka are on page 1115. • Kb are on page 1116. • HCl is a monoprotic acid. • H2 SO4is a diprotic acid. • H3PO4 is a triprotic acid.\n\n21. Hydrolysis • Water + salt acid + base • HOH + NaCl HCl + NaOH SA SB neutral • HOH + NH4Cl HCl + NH4OH SA WB acidic • HOH + KF HF + KOH WA SB basic • HOH + NH4C2H3O2 HC2H3O2 + NH4OH WA WB\n\n22. 
If WA w/ WB…..then • HC2H3O2 • Ka= 1.8 x 10-5 • This one is neutral cuz • Ka = Kb. • NH4OH • Kb = 1.8 x 10-5\n\n23. LEWIS Theory • Lewis acid - electron pair acceptor • Metals and comp w/ only 6e- around central atom. • Lewis base - electron pair donor • Compounds w/ lone pairs.\n\n24. What is the pH of a 0.010 M HCl solution? (strong acid) • HCl H+ + Cl- .010 0 0 start 0 .010 .010 end • pH = -log[H+] = -log(.010) = 2.00\n\n25. The equation can also be written: • HCl + H2O H3O+ + Cl-\n\n26. What is the pH of a 0.010 M NaOH solution? (strong base) • NaOH Na+ + OH- • .010 0 0 start • 0 .010 .010 end • pOH = -log[OH-] • = -log (0.010) • = 2.00 • pH = 12.00 pH + pOH = 14\n\n27. WEAK ACID • What is the pH of a 0.010 M H2CO3 solution? Ka = 4.3 x 10-7 H2CO3 H+ + HCO3- • I 0.010 0 0 • C -x +x +x • E 0.010 – x x x\n\n28. Ka = [H+][HCO3-]= x2 = 4.3 x 10-7 [H2CO3] 0.010 - x • x2 = 4.3 x 10-9 • x = 6.56 x 10-5 • pH = -log [H+] = -log(6.56 x 10-5) • = 4.18 -x 4.1831 or 4.1871 w/ QUAD\n\n29. WEAK BASE • What is the pH of a 0.010 M NH4OH (NH3)aq solution? Kb = 1.8 x 10-5 NH4OH NH4+ + OH- I 0.010 0 0 C -x +x +x E 0.010 – x x x\n\n30. Kb = [NH4+][OH-] [NH4OH] • 1.8 x 10-5 = x2 0.010 – x • x2 = 1.8 x 10-7 • x = 4.24 x 10-4 • pOH = -log[OH-] = -log(4.24 x 10-4) pOH = 3.37 pH = 10.623 10.615 w/ QUAD -x pH + pOH = 14\n\n31. Sample problems • 1.What is the pH of a 0.0050M HC2H3O2 solution? (CH3COOH) Ka = 1.8 x 10-5 (3.52) • 2. What is the pH of a 0.0050M KOH solution?(11.70) • 3. What is the pH of a 2.0 x 10-3 M NH4OH solution? (NH3) Kb = 1.8 x 10-5(10.28) • 4. What is the pH of a 0.0010 M HNO3 solution? (3.00)\n\n32. Sample Problems • What is the pH of a 3.69 x 10-3 M solution of NH4OH? Kb= 1.8 x 10-5 2. What is the pH of a 0.045M HClO4 solution? 3. What is the pH of a 0.002M solution of NaOH? 4. What is the pH of a 1.0 x 10-5 M HF solution? Ka=3.53 x 10-4 (10.41) 10.43 w/ Quad (1.35) (11.30) (4.23) or 5.01 w/ Quad\n\n33. pH of a salt • What is the pH of a 0.500M solution of NH4Cl? 
Kb = 1.8 x 10-5 • HOH + NH4Cl NH4OH + HCl • WB SA acid • HOH + NH4+ NH4OH + H+ • 0.500 x x • K = [NH4OH][H+] Is this Ka or Kb? • [NH4+ ]\n\n34. pH of a salt • Normally….. • NH4OH NH4+ + OH- • Kb = [NH4+][OH-] [NH4OH] • So the K on the previous page is Ka!!! • Ka x Kb = 1 x 10-14 • Ka = 1 x 10-14 = 5.6 x 10-10 • 1.8 x 10-5\n\n35. pH of a salt • K a= [NH4OH][H+] Kb = [NH4+][OH-] [NH4+ ] [NH4OH] • [H+][OH-] = Ka x Kb = 1x10-14 • Back to the problem…. • = 5.6 x 10-10 = x2/ 0.500 • x2 = 2.8 x 10-10 • x = 1.7 x10-5 pH = 4.78\n\n36. So in general…. • x2 = 1x10-14 • [conc] Ka or Kb\n\n37. What is the pH of a 0.500M NaCl solution? • HOH + NaCl HCl + NaOH • SA SB neutral • pH = 7.00\n\n38. What is the pH of a 0.250M NaCN solution? (Ka = 4.9 x 10-10) • HOH + NaCN HCN + NaOH • HOH + CN- HCN + OH- • WA SB basic • K = [HCN][OH-] [CN-]\n\n39. x2 = 1x10-14 • 0.250M 4.9 x 10-10 • x2 = 5.10 x 10-6 • x = 2.26 x 10-3 • pOH = 2.64 • pH = 11.35\n\n40. KHP\n\n41. KHP + NaOH\n\n42. (#H)M1V1 = M2V2(#OH) • How many ml of 0.100 M H2SO4 are needed to neutralize 45.0ml of 0.200 MNaOH? (2)(0.100M)(x) = (0.200M)(45.0ml)(1) x = 45.0 ml\n\nMore Related"
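The weak-acid arithmetic on slides 27-28 can be checked with a short Python sketch (not part of the original deck); both the "neglect the -x" approximation and the exact quadratic are shown:

```python
import math

def ph_weak_acid_approx(conc, ka):
    """Approximation: neglect the -x, so x = sqrt(ka * conc)."""
    return -math.log10(math.sqrt(ka * conc))

def ph_weak_acid_quad(conc, ka):
    """Exact: solve x^2 + ka*x - ka*conc = 0 with the quadratic formula."""
    x = (-ka + math.sqrt(ka * ka + 4.0 * ka * conc)) / 2.0
    return -math.log10(x)

# Slides 27-28: 0.010 M H2CO3 with Ka = 4.3e-7 gives pH of about 4.18
print(round(ph_weak_acid_approx(0.010, 4.3e-7), 2))  # 4.18
print(round(ph_weak_acid_quad(0.010, 4.3e-7), 2))    # 4.18
```

The two methods agree to two decimal places here because x is tiny compared with the 0.010 M starting concentration.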
https://www.enotes.com/homework-help/use-following-experimental-information-determine-241327
"# Determine the empirical formula of an oxide of silicon from the following: Mass of crucible = 18.20 g, mass of crucible + silicon = 18.48 g and mass of crucible + oxide of silicon = 18.80 g.",
"The mass of the crucible is given as 18.2 g. The mass of the crucible + mass of the silicon is 18.48 g.\n\nThis gives the mass of the silicon alone as 18.48 - 18.20 = 0.28 g.\n\nThe mass of the crucible and the silicon oxide is 18.80 g\n\nThis gives the mass of the silicon oxide as 18.8 - 18.2 = 0.6 g\n\nSo we have 0.28 g of silicon reacting with 0.6 - 0.28 = 0.32 g of oxygen to give the silicon oxide.\n\nThe ratio of the molar mass of oxygen to the molar mass of silicon is 16 : 28. From what we have found above, that 0.32 g of oxygen reacts with .28 g of silicon, we can say that the molecular formula of the silicon oxide is SiO2.\n\nThe required empirical formula of the silicon oxide is SiO2.\n\nApproved by eNotes Editorial Team\n\nPosted on",
https://republicofsouthossetia.org/question/a-rectangular-gate-is-made-using-6-straight-pieces-of-steel-5-m-12-m-the-weight-of-the-steel-is-15139335-83/
"## A rectangular gate is made using 6 straight pieces of steel. 5 m 12 m The weight of the steel is 2.5 kg per metre. W\n\nQuestion\n\nA rectangular gate is made using 6 straight pieces of steel.\n5 m\n12 m\nThe weight of the steel is 2.5 kg per metre.\nWork out the total weight of the steel gate.\n\nin progress 0\n1 week 2021-09-15T04:16:45+00:00 2 Answers 0\n\nStep-by-step explanation:\n\nsquare root of 12 and 5\n\n= √169\n\n= 13\n\n5+12+5+12+13+13\n\n= 60\n\n60×2.5\n\n= 150kg\n\nStep-by-step explanation:\n\nStep-by-step explanation:\n\nsquare root of 12 and 5\n\n= √169\n\n= 13\n\n5+12+5+12+13+13\n\n= 60\n\n60×2.5\n\n= 150kg"
https://swlaschin.gitbooks.io/fsharpforfunandprofit/content/posts/type-inference.html
"# Understanding type inference\n\nBefore we finish with types, let's revisit type inference: the magic that allows the F# compiler to deduce what types are used and where. We have seen this happen through all the examples so far, but how does it work and what can you do if it goes wrong?\n\n## How does type inference work?\n\nIt does seem to be magic, but the rules are mostly straightforward. The fundamental logic is based on an algorithm often called \"Hindley-Milner\" or \"HM\" (more accurately it should be called \"Damas-Milner's Algorithm W\"). If you want to know the details, go ahead and Google it.\n\nI do recommend that you take some time to understand this algorithm so that you can \"think like the compiler\" and troubleshoot effectively when you need to.\n\nHere are some of the rules for determine the types of simple and function values:\n\n• Look at the literals\n• Look at the functions and other values something interacts with\n• Look at any explicit type constraints\n• If there are no constraints anywhere, automatically generalize to generic types\n\nLet's look at each of these in turn.\n\n### Look at the literals\n\nThe literals give the compiler a clue to the context. As we have seen, the type checking is very strict; ints and floats are not automatically cast to the other. The benefit of this is that the compiler can deduce types by looking at the literals. If the literal is an `int` and you are adding \"x\" to it, then \"x\" must be an int as well. But if the literal is a `float` and you are adding \"x\" to it, then \"x\" must be a float as well.\n\nHere are some examples. 
Run them and see their signatures in the interactive window:

``````let inferInt x = x + 1
let inferFloat x = x + 1.0
let inferDecimal x = x + 1m // m suffix means decimal
let inferSByte x = x + 1y // y suffix means signed byte
let inferChar x = x + 'a' // a char
let inferString x = x + "my string"
``````

### Look at the functions and other values it interacts with

If there are no literals anywhere, the compiler tries to work out the types by analyzing the functions and other values that they interact with. In the cases below, the "`indirect`" function calls a function that we do know the types for, which gives us the information to deduce the types for the "`indirect`" function itself.

``````let inferInt x = x + 1
let inferIndirectInt x = inferInt x //deduce that x is an int

let inferFloat x = x + 1.0
let inferIndirectFloat x = inferFloat x //deduce that x is a float
``````

And of course assignment counts as an interaction too. If x is a certain type, and y is bound (assigned) to x, then y must be the same type as x.

``````let x = 1
let y = x //deduce that y is also an int
``````

Other interactions might be control structures or external libraries:

``````// if..else implies a bool
let inferBool x = if x then false else true
// for..do implies a sequence
let inferStringList x = for y in x do printfn "%s" y
// :: implies a list
let inferIntList x = 99::x
// .NET library method is strongly typed
let inferStringAndBool x = System.String.IsNullOrEmpty(x)
``````

### Look at any explicit type constraints or annotations

If there are any explicit type constraints or annotations specified, then the compiler will use them. In the case below, we are explicitly telling the compiler that "`inferInt2`" takes an `int` parameter.
It can then deduce that the return value for "`inferInt2`" is also an `int`, which in turn implies that "`inferIndirectInt2`" is of type int->int.

``````let inferInt2 (x:int) = x
let inferIndirectInt2 x = inferInt2 x

let inferFloat2 (x:float) = x
let inferIndirectFloat2 x = inferFloat2 x
``````

Note that the formatting codes in `printf` statements count as explicit type constraints too!

``````let inferIntPrint x = printf "x is %i" x
let inferFloatPrint x = printf "x is %f" x
let inferGenericPrint x = printf "x is %A" x
``````

### Automatic generalization

If after all this, there are no constraints found, the compiler just makes the types generic.

``````let inferGeneric x = x
let inferIndirectGeneric x = inferGeneric x
let inferIndirectGenericAgain x = (inferIndirectGeneric x).ToString()
``````

### It works in all directions!

The type inference works top-down, bottom-up, front-to-back, back-to-front, middle-out: anywhere there is type information, it will be used.

Consider the following example. The inner function has a literal, so we know that it returns an `int`. And the outer function has been explicitly told that it returns a `string`.
But what is the type of the passed in "`action`" function in the middle?

``````let outerFn action : string =
    let innerFn x = x + 1 // define a sub fn that returns an int
    action (innerFn 2)    // result of applying action to innerFn
``````

The type inference would work something like this:

• `1` is an `int`
• Therefore `x+1` must be an `int`, therefore `x` must be an `int`
• Therefore `innerFn` must be `int->int`
• Next, `(innerFn 2)` returns an `int`, therefore "`action`" takes an `int` as input.
• The output of `action` is the return value for `outerFn`, and therefore the output type of `action` is the same as the output type of `outerFn`.
• The output type of `outerFn` has been explicitly constrained to `string`, therefore the output type of `action` is also `string`.
• Putting this together, we now know that the `action` function has signature `int->string`
• And finally, therefore, the compiler deduces the type of `outerFn` as:

``````val outerFn: (int -> string) -> string
``````

### Elementary, my dear Watson!

The compiler can do deductions worthy of Sherlock Holmes. Here's a tricky example that will test how well you have understood everything so far.

Let's say we have a `doItTwice` function that takes any input function (call it "`f`") and generates a new function that simply does the original function twice in a row. Here's the code for it:

``````let doItTwice f = (f >> f)
``````

As you can see, it composes `f` with itself. So in other words, it means: "do f", then "do f" on the result of that.

Now, what could the compiler possibly deduce about the signature of `doItTwice`?

Well, let's look at the signature of "`f`" first. The output of the first call to "`f`" is also the input to the second call to "`f`". So therefore the output and input of "`f`" must be the same type. So the signature of `f` must be `'a -> 'a`.
The type is generic (written as 'a) because we have no other information about it.

So going back to `doItTwice` itself, we now know it takes a function parameter of `'a -> 'a`. But what does it return? Well, here's how we deduce it, step by step:

• First, note that `doItTwice` generates a function, so it must return a function type.
• The input to the generated function is the same type as the input to the first call to "`f`"
• The output of the generated function is the same type as the output of the second call to "`f`"
• So the generated function must also have type `'a -> 'a`
• Putting it all together, `doItTwice` has a domain of `'a -> 'a` and a range of `'a -> 'a`, so therefore its signature must be `('a -> 'a) -> ('a -> 'a)`.

Quite a sophisticated deduction for one line of code. Luckily the compiler does all this for us. But you will need to understand this kind of thing if you have problems and you have to determine what the compiler is doing.

Let's test it! It's actually much simpler to understand in practice than it is in theory.

``````let doItTwice f = (f >> f)

let add3 x = x + 3
let add6 = doItTwice add3
// test
add6 5 // result = 11

let square x = x * x
let fourthPower = doItTwice square
// test
fourthPower 3 // result = 81

let chittyBang x = "Chitty " + x + " Bang"
let chittyChittyBangBang = doItTwice chittyBang
// test
chittyChittyBangBang "&" // result = "Chitty Chitty & Bang Bang"
``````

Hopefully, that makes more sense now.

## Things that can go wrong with type inference

The type inference isn't perfect, alas. Sometimes the compiler just doesn't have a clue what to do. Again, understanding what is happening will really help you stay calm instead of wanting to kill the compiler.
Here are some of the main reasons for type errors:

• Declarations out of order
• Not enough information
• Overloaded methods
• Quirks of generic numeric functions

### Declarations out of order

A basic rule is that you must declare functions before they are used.

This code fails:

``````let square2 x = square x // fails: square not defined
let square x = x * x
``````

But this is ok:

``````let square x = x * x
let square2 x = square x // square already defined earlier
``````

And unlike C#, in F# the order of file compilation is important, so do make sure the files are being compiled in the right order. (In Visual Studio, you can change the order from the context menu).

### Recursive or simultaneous declarations

A variant of the "out of order" problem occurs with recursive functions or definitions that have to refer to each other. No amount of reordering will help in this case -- we need to use additional keywords to help the compiler.

When a function is being compiled, the function identifier is not available to the body. So if you define a simple recursive function, you will get a compiler error. The fix is to add the "rec" keyword as part of the function definition. For example:

``````// the compiler does not know what "fib" means
let fib n =
    if n <= 2 then 1
    else fib (n - 1) + fib (n - 2)
// error FS0039: The value or constructor 'fib' is not defined
``````

Here's the fixed version with "rec fib" added to indicate it is recursive:

``````let rec fib n = // LET REC rather than LET
    if n <= 2 then 1
    else fib (n - 1) + fib (n - 2)
``````

A similar "`let rec ... and`" syntax is used for two functions that refer to each other.
Here is a very contrived example that fails if you do not have the \"`rec`\" keyword.\n\n``````let rec showPositiveNumber x = // LET REC rather than LET\n    match x with\n    | x when x >= 0 -> printfn \"%i is positive\" x\n    | _ -> showNegativeNumber x\n\nand showNegativeNumber x = // AND rather than LET\n    match x with\n    | x when x < 0 -> printfn \"%i is negative\" x\n    | _ -> showPositiveNumber x\n``````\n\nThe \"`and`\" keyword can also be used to declare simultaneous types in a similar way.\n\n``````type A = None | AUsesB of B\n// error FS0039: The type 'B' is not defined\ntype B = None | BUsesA of A\n``````\n\nFixed version:\n\n``````type A = None | AUsesB of B\nand B = None | BUsesA of A // use AND instead of TYPE\n``````\n\n### Not enough information\n\nSometimes, the compiler just doesn't have enough information to determine a type. In the following example, the compiler doesn't know what type the `Length` method is supposed to work on. But it can't make it generic either, so it complains.\n\n``````let stringLength s = s.Length\n// error FS0072: Lookup on object of indeterminate type\n// based on information prior to this program point.\n// A type annotation may be needed ...\n``````\n\nThese kinds of error can be fixed with explicit annotations.\n\n``````let stringLength (s:string) = s.Length\n``````\n\nOccasionally there does appear to be enough information, but still the compiler doesn't seem to recognize it. For example, it's obvious to a human that the `List.map` function (below) is being applied to a list of strings, so why does `x.Length` cause an error?\n\n``````List.map (fun x -> x.Length) [\"hello\"; \"world\"] //not ok\n``````\n\nThe reason is that the F# compiler is currently a one-pass compiler, and so information later in the program is ignored if it hasn't been parsed yet. 
(The F# team have said that it is possible to make the compiler more sophisticated, but it would work less well with Intellisense and might produce more unfriendly and obscure error messages. So for now, we will have to live with this limitation.)\n\nSo in cases like this, you can always explicitly annotate:\n\n``````List.map (fun (x:string) -> x.Length) [\"hello\"; \"world\"] // ok\n``````\n\nBut another, more elegant way that will often fix the problem is to rearrange things so the known types come first, and the compiler can digest them before it moves to the next clause.\n\n``````[\"hello\"; \"world\"] |> List.map (fun s -> s.Length) //ok\n``````\n\nFunctional programmers strive to avoid explicit type annotations, so this makes them much happier!\n\nThis technique can be used more generally in other areas as well; a rule of thumb is to try to put the things that have \"known types\" earlier than things that have \"unknown types\".\n\nWhen calling an external class or method in .NET, you will often get errors due to overloading.\n\nIn many cases, such as the concat example below, you will have to explicitly annotate the parameters of the external function so that the compiler knows which overloaded method to call.\n\n``````let concat x = System.String.Concat(x) //fails\nlet concat (x:string) = System.String.Concat(x) //works\nlet concat x = System.String.Concat(x:string) //works\n``````\n\nSometimes the overloaded methods have different argument names, in which case you can also give the compiler a clue by naming the arguments. Here is an example for the `StreamReader` constructor.\n\n``````let makeStreamReader x = new System.IO.StreamReader(x) //fails\nlet makeStreamReader x = new System.IO.StreamReader(path=x) //works\n``````\n\n### Quirks of generic numeric functions\n\nNumeric functions can be somewhat confusing. They often appear generic, but once they are bound to a particular numeric type, they are fixed, and using them with a different numeric type will cause an error. 
The following example demonstrates this:\n\n``````let myNumericFn x = x * x\nmyNumericFn 10\nmyNumericFn 10.0 //fails\n// error FS0001: This expression was expected to have\n// type int but has type float\n\nlet myNumericFn2 x = x * x\nmyNumericFn2 10.0\nmyNumericFn2 10 //fails\n// error FS0001: This expression was expected to have\n// type float but has type int\n``````\n\nThere is a way round this for numeric types using the \"inline\" keyword and \"static type parameters\". I won't discuss these concepts here, but you can look them up in the F# reference at MSDN.\n\n## \"Not enough information\" troubleshooting summary\n\nSo to summarize, the things that you can do if the compiler is complaining about missing types, or not enough information, are:\n\n• Define things before they are used (this includes making sure the files are compiled in the right order)\n• Put the things that have \"known types\" earlier than things that have \"unknown types\". In particular, you might be able to reorder pipes and similar chained functions so that the typed objects come first.\n• Annotate as needed. One common trick is to add annotations until everything works, and then take them away one by one until you have the minimum needed. Do try to avoid annotating if possible. Not only is it not aesthetically pleasing, but it makes the code more brittle. It is a lot easier to change types if there are no explicit dependencies on them.\n\n## Debugging type inference issues\n\nOnce you have ordered and annotated everything, you will probably still get type errors, or find that functions are less generic than expected. 
With what you have learned so far, you should have the tools to determine why this happened (although it can still be painful).\n\nFor example:\n\n``````let myBottomLevelFn x = x\n\nlet myMidLevelFn x =\n    let y = myBottomLevelFn x\n    // some stuff\n    let z = y\n    // some stuff\n    printf \"%s\" z // this will kill your generic types!\n    // some more stuff\n    x\n\nlet myTopLevelFn x =\n    // some stuff\n    myMidLevelFn x\n    // some more stuff\n    x\n``````\n\nIn this example, we have a chain of functions. The bottom level function is definitely generic, but what about the top level one? Well often, we might expect it to be generic but instead it is not. In this case we have:\n\n``````val myTopLevelFn : string -> string\n``````\n\nWhat went wrong? The answer is in the midlevel function. The `%s` on z forced it to be a string, which forced y and then x to be strings too.\n\nNow this is a pretty obvious example, but with thousands of lines of code, a single line might be buried away that causes an issue. One thing that can help is to look at all the signatures; in this case the signatures are:\n\n``````val myBottomLevelFn : 'a -> 'a // generic as expected\nval myMidLevelFn : string -> string // here's the clue! Should be generic\nval myTopLevelFn : string -> string\n``````\n\nWhen you find a signature that is unexpected, you know that it is the guilty party. You can then drill down into it and repeat the process until you find the problem."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8687033,"math_prob":0.93755513,"size":15873,"snap":"2023-40-2023-50","text_gpt3_token_len":3887,"char_repetition_ratio":0.13434999,"word_repetition_ratio":0.07029635,"special_character_ratio":0.24084924,"punctuation_ratio":0.094563425,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9857514,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-11T18:50:47Z\",\"WARC-Record-ID\":\"<urn:uuid:282dc9b0-635f-424a-ade3-1c2fc8ecb139>\",\"Content-Length\":\"101703\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:253156cc-f1b9-4437-8389-2816535d62ad>\",\"WARC-Concurrent-To\":\"<urn:uuid:19bb31fa-4eca-4a7d-9406-a226c54215f4>\",\"WARC-IP-Address\":\"172.64.145.144\",\"WARC-Target-URI\":\"https://swlaschin.gitbooks.io/fsharpforfunandprofit/content/posts/type-inference.html\",\"WARC-Payload-Digest\":\"sha1:SAA7VHC4OODULKAF4MO6L4UJPTZQZJAG\",\"WARC-Block-Digest\":\"sha1:CWIGQM4X32WMAJTIUEDXCSANVIGDPQLF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679516047.98_warc_CC-MAIN-20231211174901-20231211204901-00742.warc.gz\"}"} |
http://dazz.media/fa-fbdfgij/7960de-sorted-dict-comprehension | [
"divisible by 2 using dict comprehension , Would be interesting to redo the test with a large dictionary. In that sense, a dict is a function (since one key only has one value). What you can see is that the native Python dictionary sorting is pretty cool followed by the combination of the lambda + list comprehension method. Basic Python Dictionary Comprehension. As we can recall, list comprehensions allow us to create lists from other sequences in a very concise way. This type of application is popular in web development as JSON format is quite popular. To change a value assigned to an existing key (or assign a value to a hitherto unseen key): julia> dict[\"a\"] = 10 10 Keys []. What makes this a dict comprehension instead of a set comprehension (which is what your pseudo-code approximates) is the colon, : like below: mydict = {k: v for k, v in blahs} And we see that it worked, and should retain insertion order as-of Python 3.7: ... sorted Returns a sorted list from the iterable. Rewrite dict((x, f(x)) for x in foo) as {x: f(x) for x in foo} C403-404: Unnecessary list comprehension - rewrite as a comprehension. Dict comprehension syntax: Now the syntax here is the mapping part. So, when we call my_dict['a'], it must output the corresponding ascii value (97).Let’s do this for the letters a-z. Dictionary Comprehensions. You may use the sorted() method to sort the dictionary items. Let’s discuss certain ways in which this can be performed. First, create a range from 100 to 160 with steps of 10. For example, let’s assume that we want to build a dictionary of {key: value} pairs that maps english alphabetical characters to their ascii value.. Sorting has quite vivid applications and sometimes, we might come up with the problem in which we need to sort the nested dictionary by the nested key. Here is a quick benchmark made using the small dictionary from the above examples. 
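The sorting approaches mentioned above (`sorted()` combined with `OrderedDict`, including sorting a nested dictionary by a nested key) can be sketched in a few lines. This is an illustrative sketch only; the `scores` and `nested` dictionaries below are made-up sample data, not the benchmark dictionary from the original post:

```python
from collections import OrderedDict

# Made-up sample data for illustration.
scores = {"b": 3, "a": 1, "c": 2}

# sorted() returns a list of (key, value) pairs; OrderedDict keeps that order.
by_key = OrderedDict(sorted(scores.items()))
by_value = OrderedDict(sorted(scores.items(), key=lambda kv: kv[1]))

# Sorting a nested dictionary by a nested key works the same way,
# with a key function that reaches into the inner dictionary.
nested = {"x": {"rank": 2}, "y": {"rank": 1}}
by_nested = OrderedDict(sorted(nested.items(), key=lambda kv: kv[1]["rank"]))

print(list(by_key))     # ['a', 'b', 'c']
print(list(by_value))   # ['a', 'c', 'b']
print(list(by_nested))  # ['y', 'x']
```

(In Python 3.7+, a plain dict also preserves insertion order, so `dict(sorted(...))` works as well; `OrderedDict` just makes the intent explicit.)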
It is quite easy to define the function and make a dict comprehension … Dict comprehension is defined with a similar syntax, but with a key:value pair in expression. An example of sorting a Python dictionary. Python Dictionary Comprehension. In this example, dict comprehension is used for creating a dictionary. It's unnecessary to use a list comprehension inside a call to set or dict, since there are equivalent comprehensions for these types. {key:value for i in list} Let us see 5 simple examples of using Dict Comprehension to create new dictionaries easily. There's always only one key called a in this dictionary, so when you assign a value to a key that already exists, you're not creating a new one, just modifying an existing one.. To see if the dictionary contains a key, use haskey(): Keys must be unique for a dictionary. Second, using dict comprehension, create a dictionary where each number in the range is the key and each item divided by 100 is the value. [update:] Added sixth example of dict comprehension to delete keys in a dictionary. Filter a Dictionary by Dict Comprehension. Just like we have list comprehensions in python, we also have dictionary comprehensions. A dictionary comprehension takes the form {key: value for (key, value) in iterable}. Mathematically speaking, a dictionary (dict) is a mapping for which the keys have unique values. In this example, the same dictionary is used and its keys are displayed before and after using the sorted method: ... An example of dict comprehension. dict ¶ Dictionaries are mutable unordered collections (they do not record element position or order of insertion) of key-value pairs. For example: ... dict comprehension Returns a dictionary based on existing iterables. Let’s filter items in dictionary whose keys are even i.e. Our original dictionary is, dictOfNames = { 7 : 'sam', 8: 'john', 9: 'mathew', 10: 'riti', 11 : 'aadi', 12 : 'sachin' } Filter a Dictionary by keys in Python using dict comprehension. 
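The key-filtering step described above, using the post's own `dictOfNames` example, looks like this as a minimal sketch:

```python
dictOfNames = {7: 'sam', 8: 'john', 9: 'mathew', 10: 'riti', 11: 'aadi', 12: 'sachin'}

# Dict comprehension with a condition: keep only the entries whose key is even.
evenKeys = {k: v for k, v in dictOfNames.items() if k % 2 == 0}

print(evenKeys)  # {8: 'john', 10: 'riti', 12: 'sachin'}
```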
Let’s see an example: let’s assume we have two lists named keys and values. Method #1: Using OrderedDict() + sorted(). Like List Comprehension, Python allows dictionary comprehensions. We can create dictionaries using simple expressions. reversed Returns a reverse iterator over a sequence."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8251892,"math_prob":0.96001464,"size":13332,"snap":"2021-21-2021-25","text_gpt3_token_len":2728,"char_repetition_ratio":0.24557322,"word_repetition_ratio":0.29986674,"special_character_ratio":0.22067207,"punctuation_ratio":0.14871195,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9593219,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-23T02:25:25Z\",\"WARC-Record-ID\":\"<urn:uuid:49ca6d75-f585-43d3-887f-3ef6e74c5739>\",\"Content-Length\":\"28158\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:dbdd12fd-cbe9-4798-8c03-cf26439a1436>\",\"WARC-Concurrent-To\":\"<urn:uuid:f508011f-4753-44c3-a38f-e6d8fc943209>\",\"WARC-IP-Address\":\"198.71.233.11\",\"WARC-Target-URI\":\"http://dazz.media/fa-fbdfgij/7960de-sorted-dict-comprehension\",\"WARC-Payload-Digest\":\"sha1:NDISN6L6HM5IQ4CYJTOXPVEVXIXVMVBC\",\"WARC-Block-Digest\":\"sha1:FEXG2KO2FJYJZG673SMZUPP4DSH6O4JH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623488528979.69_warc_CC-MAIN-20210623011557-20210623041557-00345.warc.gz\"}"} |
https://youvegotthismath.com/how-to-find-the-missing-number-in-multiplying-fractions/ | [
"# How to Find the Missing Number in Multiplying Fractions | Free Printable\n\nThese free worksheets to find the missing number in multiplying fractions will help to visualize and understand fractions and multiplication of fractions. 3rd or 4th-grade students will learn basic multiplication methods of fractions and can improve their basic math skills with our free printable worksheets attached to this article.\n\n## 8 Worksheets to Find the Missing Number in Multiplying Fractions",
null,
"## Find the Numerator in Multiplying Fractions\n\nHere, two worksheets are provided together for your practice. The first problem is solved for your convenience. You have to find the numerator of the fraction by using the equations given in the worksheets. Every worksheet page contains 20 problems to be solved.\n\n## Find the Denominator in Multiplying Fractions\n\nIn this portion, you have to find the denominator of the fraction by using the fraction to be multiplied and the fraction on the right side of the equal sign. You will find the first problem solved. Follow the solved problem and solve the provided problems one by one.\n\n## Find the Missing Fraction in Multiplying Fractions\n\nYou have solved the first two activities. You have to find the numerators and denominators of a fraction. Now you should find the whole missing fraction from the fractions given to you on the left and right sides of the equal sign. Download the worksheet and practice with your children.\n\n## Get the Whole Number in Multiplying Fractions\n\nNow, you need to find the whole number to be multiplied by a fraction. Look at the first problem. As you can see, the whole number will be multiplied only with the numerator of the given fraction. Some difficult problems are also provided. Solve them carefully with your child and develop multiplication skills."
]
| [
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%202481%203508'%3E%3C/svg%3E",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.91567296,"math_prob":0.9400334,"size":3021,"snap":"2023-40-2023-50","text_gpt3_token_len":568,"char_repetition_ratio":0.17799138,"word_repetition_ratio":0.045454547,"special_character_ratio":0.17841774,"punctuation_ratio":0.08255159,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.998828,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-06T20:04:54Z\",\"WARC-Record-ID\":\"<urn:uuid:588d2a3f-914c-4a6c-9414-420a8f1a0f7f>\",\"Content-Length\":\"259082\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f1c64e17-65ca-4fd3-8f1b-3656f2c49502>\",\"WARC-Concurrent-To\":\"<urn:uuid:f50c78b1-8823-4e20-ad9f-4d54c52d0609>\",\"WARC-IP-Address\":\"172.67.163.206\",\"WARC-Target-URI\":\"https://youvegotthismath.com/how-to-find-the-missing-number-in-multiplying-fractions/\",\"WARC-Payload-Digest\":\"sha1:QOFWODMZ6Z5V5LZGIUQSWOD3DFVTG3M4\",\"WARC-Block-Digest\":\"sha1:LM3CHNWWGZTMZEIFIWJ4A4IDJLH4FH6P\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100603.33_warc_CC-MAIN-20231206194439-20231206224439-00595.warc.gz\"}"} |
https://numbermatics.com/n/138952810580672192032/ | [
"# 138952810580672192032\n\n## 138,952,810,580,672,192,032 is an even composite number composed of four prime numbers multiplied together.\n\nWhat does the number 138952810580672192032 look like?\n\nThis visualization shows the relationship between its 4 prime factors (large circles) and 48 divisors.\n\n138952810580672192032 is an even composite number. It is composed of four distinct prime numbers multiplied together. It has a total of forty-eight divisors.\n\n## Prime factorization of 138952810580672192032:\n\n### 2^5 × 19 × 6541763 × 34935659833\n\n(2 × 2 × 2 × 2 × 2 × 19 × 6541763 × 34935659833)\n\nSee below for interesting mathematical facts about the number 138952810580672192032 from the Numbermatics database.\n\n### Names of 138952810580672192032\n\n• Cardinal: 138952810580672192032 can be written as One hundred thirty-eight quintillion, nine hundred fifty-two quadrillion, eight hundred ten trillion, five hundred eighty billion, six hundred seventy-two million, one hundred ninety-two thousand and thirty-two.\n\n### Scientific notation\n\n• Scientific notation: 1.38952810580672192032 × 10^20\n\n### Factors of 138952810580672192032\n\n• Number of distinct prime factors ω(n): 4\n• Total number of prime factors Ω(n): 8\n• Sum of prime factors: 34942201617\n\n### Divisors of 138952810580672192032\n\n• Number of divisors d(n): 48\n• Complete list of divisors:\n• Sum of all divisors σ(n): 287961460691067041760\n• Sum of proper divisors (its aliquot sum) s(n): 149008650110394849728\n• 138952810580672192032 is an abundant number, because the sum of its proper divisors (149008650110394849728) is greater than itself. 
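The divisor statistics above follow directly from the prime factorization 2^5 × 19 × 6541763 × 34935659833: the number of divisors is the product of (exponent + 1) over the primes, and the divisor sum multiplies the geometric series 1 + p + ... + p^e for each prime power. A short sketch (in Python, for illustration) recomputing them:

```python
# Prime factorization from above: 2^5 * 19 * 6541763 * 34935659833
factorization = {2: 5, 19: 1, 6541763: 1, 34935659833: 1}

n = 1
num_divisors = 1   # d(n) = product of (e + 1)
sigma = 1          # sum of ALL divisors, sigma(n)
for p, e in factorization.items():
    n *= p ** e
    num_divisors *= e + 1
    sigma *= (p ** (e + 1) - 1) // (p - 1)  # 1 + p + p^2 + ... + p^e

aliquot = sigma - n  # sum of proper divisors
print(n, num_divisors, sigma, aliquot, aliquot > n)
```

Running this reproduces the figures on the page: 48 divisors, divisor sum 287961460691067041760, aliquot sum 149008650110394849728, and aliquot > n confirms the number is abundant.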
Its abundance is 10055839529722657696\n\n### Bases of 138952810580672192032\n\n• Binary: 1111000100001011011100011010000010100000011000110110001111000100000\n• Base-36: TBP50G5Q7M49S\n\n### Squares and roots of 138952810580672192032\n\n• 138952810580672192032 squared (138952810580672192032^2) is 19307883568268165880592214439247884289024\n• 138952810580672192032 cubed (138952810580672192032^3) is 2682884688175239557739322460343740533265807620965859917856768\n• The square root of 138952810580672192032 is 11787824675.5146554493\n• The cube root of 138952810580672192032 is 5179515.1996392213\n\n### Scales and comparisons\n\nHow big is 138952810580672192032?\n• 138,952,810,580,672,192,032 seconds is equal to 4,418,269,567,201 years, 3 weeks, 3 days, 15 hours, 30 minutes, 7 seconds.\n• To count from 1 to 138,952,810,580,672,192,032 would take you about fifteen trillion, four hundred sixty-three billion, nine hundred forty-three million, four hundred eighty-five thousand, two hundred six years!\n\nThis is a very rough estimate, based on a speaking rate of half a second every third order of magnitude. If you speak quickly, you could probably say any randomly-chosen number between one and a thousand in around half a second. Very big numbers obviously take longer to say, so we add half a second for every extra x1000. (We do not count involuntary pauses, bathroom breaks or the necessity of sleep in our calculation!)\n\n• A cube with a volume of 138952810580672192032 cubic inches would be around 431626.3 feet tall.\n\n### Recreational maths with 138952810580672192032\n\n• 138952810580672192032 backwards is 230291276085018259831\n• The number of decimal digits it has is: 21\n• The sum of 138952810580672192032's digits is 82\n• More coming soon!\n\nMLA style:\n\"Number 138952810580672192032 - Facts about the integer\". Numbermatics.com. 2022. Web. 24 May 2022.\n\nAPA style:\nNumbermatics. (2022). Number 138952810580672192032 - Facts about the integer. 
Retrieved 24 May 2022, from https://numbermatics.com/n/138952810580672192032/\n\nThe information we have on file for 138952810580672192032 includes mathematical data and numerical statistics calculated using standard algorithms and methods. We are adding more all the time. If there are any features you would like to see, please contact us. Information provided for educational use, intellectual curiosity and fun!"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.7229182,"math_prob":0.88919884,"size":5143,"snap":"2022-05-2022-21","text_gpt3_token_len":1521,"char_repetition_ratio":0.19264448,"word_repetition_ratio":0.0952381,"special_character_ratio":0.50223607,"punctuation_ratio":0.16273291,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9810638,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-24T18:53:06Z\",\"WARC-Record-ID\":\"<urn:uuid:5580c924-e162-4897-9615-1bcd3023668d>\",\"Content-Length\":\"23153\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9dd41d2c-5445-4aa4-8735-e7700af19ec2>\",\"WARC-Concurrent-To\":\"<urn:uuid:dfebf5c2-65a9-49ad-bba6-abe3d073fb2f>\",\"WARC-IP-Address\":\"72.44.94.106\",\"WARC-Target-URI\":\"https://numbermatics.com/n/138952810580672192032/\",\"WARC-Payload-Digest\":\"sha1:BLE4AQE6IOLBLYGX7SRNSRNI26ALLH3C\",\"WARC-Block-Digest\":\"sha1:LDNL6LYHRO26RJFPX6N2G7G7KF47ZKEY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662573189.78_warc_CC-MAIN-20220524173011-20220524203011-00442.warc.gz\"}"} |
https://tolstoy.newcastle.edu.au/R/help/04/07/1377.html | [
"# [R] re: help with lattice plot\n\nDate: Tue 27 Jul 2004 - 12:51:06 EST\n\nDear List,\n\nI have been using R to create an xyplot using the panel function within the lattice library. This plot is based on the data supplied in R named 'Oats'. The graph represents oat yield by nitro level with an overlay of each variety of oats for each nitro level.\n\nI have three questions regarding this graph: 1) I cannot seem to specify the type of symbol used by the plot; even though it is included in the code below, it will change when the code is changed. 2) I have managed to include a legend on the graph using the key function; however, the labels of the legend do not seem to correspond to the right coloured symbol as used in the plot itself for Variety. How can I get the symbols and text in the legend to match those used in the graph itself? 3) Also, I am interested to know how I can manipulate the order in which each graph (in this instance Nitro level) appears? For example, at the moment the graphs do not appear in numerical order (based on the levels of the factor nitro); can I reorder them so that they appear in the order I, II, III, IV, V, VI?\n\nThe code I have used is included below:\n\n##Data\n\nlibrary(nlme)\ndata(Oats)\n\n##Factors\n\n```Oats\$Block<-factor(Oats\$Block)\nOats\$Variety<-factor(Oats\$Variety)\nOats\$nitro<-factor(Oats\$nitro)\n```\n\nattach(Oats)\n\n##Plot\n\nlibrary(lattice)\nlset(col.whitebg())\nxyplot(yield~nitro|Block, data=Oats,xlab=\"Nitrogen Level\",ylab=\"Yield of oats\",subscripts=T, groups=Oats\$Variety, as.table=T, panel=function(x,y,groups,subscripts){\npanel.superpose(x=x, y=y, groups=Oats\$Variety, subscripts=subscripts,pch=18)\npanel.superpose(x=x, y=y, groups=Oats\$Variety, subscripts=subscripts, type =\"r\")\n}\n,key = list(points = Rows(trellis.par.get(\"superpose.line\"),c(1:2,3)),text = list(lab = as.character(unique(Oats\$Variety))),pch=18) )\n\nI would greatly appreciate anyone's time in assisting me with this request.\n\nRegards\n\nEve"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8197096,"math_prob":0.72103214,"size":2104,"snap":"2020-24-2020-29","text_gpt3_token_len":576,"char_repetition_ratio":0.108571425,"word_repetition_ratio":0.0,"special_character_ratio":0.23811787,"punctuation_ratio":0.1489842,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9912434,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-08T08:51:03Z\",\"WARC-Record-ID\":\"<urn:uuid:c65cb7a6-3848-4e0e-819c-68b50e234ea0>\",\"Content-Length\":\"8480\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fd69a141-b879-4411-b0fa-c575d1ff1c32>\",\"WARC-Concurrent-To\":\"<urn:uuid:785f0e77-0e2c-41cd-ab31-45c4e876f64a>\",\"WARC-IP-Address\":\"99.84.178.49\",\"WARC-Target-URI\":\"https://tolstoy.newcastle.edu.au/R/help/04/07/1377.html\",\"WARC-Payload-Digest\":\"sha1:HHRX5VOH7QVXYL2STDPM5OOGKD3IID3V\",\"WARC-Block-Digest\":\"sha1:FVZ5JNMGHY6SKCSMOIE5PLPU3IWH4Z4Y\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655896905.46_warc_CC-MAIN-20200708062424-20200708092424-00548.warc.gz\"}"} |
https://arxiv.org/abs/1710.00834 | [
"# Title: Empirical Modeling of the Redshift Evolution of the [NII]/H$α$-ratio for Galaxy Redshift Surveys\n\nAbstract: We present an empirical parameterization of the [NII]/H$\\alpha$ flux ratio as a function of stellar mass and redshift valid at 0 < z < 2.7 and 8.5 < log(M) < 11.0. This description can easily be applied to (i) simulations for modeling [NII]$\\lambda6584$ line emission, (ii) deblend [NII] and H$\\alpha$ in current low-resolution grism and narrow-band observations to derive intrinsic H$\\alpha$ fluxes, and (iii) to reliably forecast the number counts of H$\\alpha$ emission-line galaxies for future surveys, such as those planned for Euclid and the Wide Field Infrared Survey Telescope (WFIRST). Our model combines the evolution of the locus on the Baldwin, Phillips & Terlevich (BPT) diagram measured in spectroscopic data out to z = 2.5 with the strong dependence of [NII]/H$\\alpha$ on stellar mass and [OIII]/H$\\beta$ observed in local galaxy samples. We find large variations in the [NII]/H$\\alpha$ flux ratio at a fixed redshift due to its dependency on stellar mass; hence, the assumption of a constant [NII] flux contamination fraction can lead to a significant under- or overestimate of H$\\alpha$ luminosities. Specifically, measurements of the intrinsic H$\\alpha$ luminosity function derived from current low-resolution grism spectroscopy assuming a constant 29% contamination of [NII] can be overestimated by factors of ~8 at log(L) > 43.0 for galaxies at redshifts z = 1.5. This has implications for the prediction of H$\\alpha$ emitters for Euclid and WFIRST. We also study the impact of blended H$\\alpha$ and [NII] on the accuracy of measured spectroscopic redshifts.\n\nComments: 18 pages, 11 figures, 1 table, accepted for publication in ApJ\nSubjects: Astrophysics of Galaxies (astro-ph.GA); Cosmology and Nongalactic Astrophysics (astro-ph.CO)\nJournal reference: 2018, ApJ, 855, 2\nDOI: 10.3847/1538-4357/aab1fc\nCite as: arXiv:1710.00834 [astro-ph.GA] (or arXiv:1710.00834v2 [astro-ph.GA] for this version)\n\n## Submission history\n\nFrom: Andreas Faisst [view email]\n[v1] Mon, 2 Oct 2017 18:00:06 UTC (532 KB)\n[v2] Sat, 17 Mar 2018 19:12:23 UTC (560 KB)"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8044023,"math_prob":0.8348278,"size":2118,"snap":"2020-10-2020-16","text_gpt3_token_len":583,"char_repetition_ratio":0.119205296,"word_repetition_ratio":0.08049536,"special_character_ratio":0.27667612,"punctuation_ratio":0.10173697,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9564886,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-23T21:25:52Z\",\"WARC-Record-ID\":\"<urn:uuid:429b1458-623c-4559-a928-585aca17c58c>\",\"Content-Length\":\"23900\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b1c605de-ebf2-4473-a09f-805144c50198>\",\"WARC-Concurrent-To\":\"<urn:uuid:c101e0ef-b0ac-46b6-9fae-323faaa9b977>\",\"WARC-IP-Address\":\"128.84.21.199\",\"WARC-Target-URI\":\"https://arxiv.org/abs/1710.00834\",\"WARC-Payload-Digest\":\"sha1:ISRSXQF5US2DOW6L3N3FPQ4QYEMGVSPU\",\"WARC-Block-Digest\":\"sha1:TUB4ZBC3NWRFI2BPQHMV6KC2EDC55EAQ\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875145839.51_warc_CC-MAIN-20200223185153-20200223215153-00146.warc.gz\"}"} |
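The deblending correction described in the abstract reduces to simple flux arithmetic. Below is a minimal sketch of that arithmetic in Python — the function names and the sample ratio values are illustrative assumptions, not the paper's actual mass- and redshift-dependent parameterization:

```python
def deblend_halpha(blended_flux, nii_ha_ratio):
    """Intrinsic H-alpha flux from a blended H-alpha + [NII] flux.

    Assumes the blend is F_blend = F_Ha * (1 + r), where r is the
    [NII]6584 / H-alpha flux ratio.
    """
    return blended_flux / (1.0 + nii_ha_ratio)


def contamination_fraction(nii_ha_ratio):
    """Fraction of the blended flux contributed by [NII]: r / (1 + r)."""
    return nii_ha_ratio / (1.0 + nii_ha_ratio)


# A constant 29% [NII] contamination corresponds to r = 0.29 / 0.71 ~ 0.408.
# If the true ratio for a galaxy is higher, the constant-fraction assumption
# overestimates the intrinsic H-alpha flux:
f_blend = 1.0
f_ha_const = f_blend * (1.0 - 0.29)       # constant-contamination estimate
f_ha_true = deblend_halpha(f_blend, 0.6)  # illustrative true ratio r = 0.6
bias = f_ha_const / f_ha_true             # > 1 means an overestimate
```

This is only the bookkeeping step; the substance of the paper is the model that supplies `nii_ha_ratio` as a function of stellar mass and redshift.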
http://forums.wolfram.com/mathgroup/archive/2012/Oct/msg00074.html | [
"Re: Integrating DiracDelta in Mathematica: how to suppress ConditionalExpression\n\n• To: mathgroup at smc.vnet.net\n• Subject: [mg128338] Re: Integrating DiracDelta in Mathematica: how to suppress ConditionalExpression\n• From: Bob Hanlon <hanlonr357 at gmail.com>\n• Date: Mon, 8 Oct 2012 02:30:44 -0400 (EDT)\n• References: <[email protected]>\n\n```Assuming[Element[q2, Reals],\nIntegrate[DiracDelta[p - q2],\n{p, -Infinity, Infinity}]]\n\n1\n\nIntegrate[DiracDelta[p - q2],\n{p, -Infinity, Infinity},\nAssumptions -> Element[q2, Reals]]\n\n1\n\nIntegrate[DiracDelta[p - q2],\n{p, -Infinity, Infinity},\nGenerateConditions -> False]\n\n1\n\nSimplify[\nIntegrate[DiracDelta[p - q2],\n{p, -Infinity, Infinity}],\nElement[q2, Reals]]\n\n1\n\nBob Hanlon\n\nOn Sun, Oct 7, 2012 at 1:32 AM, Yaj <ybhattacharya at gmail.com> wrote:\n> For the integral shown below, how do I get Mathematica to output only the \"correct\" answer as 1 (for future steps in the notebook) and i.e. have Mathematica NOT OUTPUT the ConditionalExpression that q2 is real etc?\n>\n> In:= Integrate[DiracDelta[p - q2], {p, -Infinity, Infinity}]\n>\n> Out:= ConditionalExpression[1, q2 \\[Element] Reals]\n>\n\n```"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.56295455,"math_prob":0.7601991,"size":1543,"snap":"2020-24-2020-29","text_gpt3_token_len":481,"char_repetition_ratio":0.1468486,"word_repetition_ratio":0.15508021,"special_character_ratio":0.2896954,"punctuation_ratio":0.24210526,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9757443,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-15T06:27:17Z\",\"WARC-Record-ID\":\"<urn:uuid:d6cd27b3-89d1-40ab-9d88-05931080ca2c>\",\"Content-Length\":\"45182\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b6eefe3c-eaee-4cba-b916-b5a1b3daf747>\",\"WARC-Concurrent-To\":\"<urn:uuid:8d2e56b7-137e-4786-bb2e-d8dfa4605ea1>\",\"WARC-IP-Address\":\"140.177.205.73\",\"WARC-Target-URI\":\"http://forums.wolfram.com/mathgroup/archive/2012/Oct/msg00074.html\",\"WARC-Payload-Digest\":\"sha1:K4HBUZIIZZRPUQCFUE2MMSX5DGZ57RPX\",\"WARC-Block-Digest\":\"sha1:PSIP2WNYXFYTH4U25OBNB2OBKF6ZDGUJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593657155816.86_warc_CC-MAIN-20200715035109-20200715065109-00134.warc.gz\"}"} |
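The `ConditionalExpression` arises because the identity ∫ δ(p − q2) dp = 1 holds only for real q2. A quick numeric sanity check of that identity — in Python rather than Mathematica, approximating the delta by a narrow normalized Gaussian; the width, grid size, and integration range are arbitrary choices:

```python
import math


def gaussian_delta(x, center, eps):
    """Normalized Gaussian that tends to DiracDelta(x - center) as eps -> 0."""
    return math.exp(-((x - center) / eps) ** 2 / 2.0) / (eps * math.sqrt(2.0 * math.pi))


def integrate(f, a, b, n=200000):
    """Midpoint-rule quadrature of f on [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h


q2 = 0.7  # any real number well inside the integration range
total = integrate(lambda p: gaussian_delta(p, q2, 1e-2), -10.0, 10.0)
# total is ~1, matching the symbolic result under the Reals assumption
```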
http://embedded-telecom-interview.blogspot.com/2010/06/adc-or-dac-interview-questions.html | [
"## Monday, June 7, 2010\n\nHey geeks, below is a set of basic ADC/DAC interview questions that can be discussed.\n\n1. What are the factors you consider for the selection of an ADC?\n2. What do you mean by the resolution of an ADC?\n3. How do you determine the number of bits of ADC required?\n4. Which factor determines the number of iterations done in SAR to approximate the input voltage?\n5. What are the 2 methods of ADC interface?\n6. Can you brief the steps involved in interfacing an ADC with 8051 or any microprocessor for the EOC-based method?\n7. Can you brief the steps involved in interfacing an ADC with 8051 or any microprocessor for the interrupt-based method?\n8. How do you select a particular channel of an ADC? For example, can you tell for ADC0809?\n9. When will you make OE high in case of ADC0809?\n10. What will happen if SC and EOC are tied together in ADC0809?\n11. What is sampling rate/frequency?\n12. What is the use of an interpolation formula? Are you aware of any interpolation formula?\n13. What is sample and hold?\n14. Are there any megasample/gigasample converters? Where are they used?\n15. What is aliasing?\n16. What is the Nyquist-Shannon theorem?\n17. Is it good to sample at a rate that is higher than the Nyquist frequency?\n18. What is quantization error and when does it occur? What is its unit of measurement?\n19. How can dithering improve digitization of a slowly varying signal?\n20. Which type of ADC implementation should be chosen if we need a lot of channels?\n\nHave a great day!"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.897106,"math_prob":0.8497041,"size":1592,"snap":"2022-40-2023-06","text_gpt3_token_len":372,"char_repetition_ratio":0.12783375,"word_repetition_ratio":0.0852459,"special_character_ratio":0.25942212,"punctuation_ratio":0.14583333,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.9655295,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-28T11:05:10Z\",\"WARC-Record-ID\":\"<urn:uuid:6b4819d7-145f-4d79-9842-f5256bb3134d>\",\"Content-Length\":\"61551\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:92b84333-5138-447b-a9a8-9abae539cec3>\",\"WARC-Concurrent-To\":\"<urn:uuid:d8979ed4-1279-438e-aad1-50038b4cc2a6>\",\"WARC-IP-Address\":\"172.253.62.132\",\"WARC-Target-URI\":\"http://embedded-telecom-interview.blogspot.com/2010/06/adc-or-dac-interview-questions.html\",\"WARC-Payload-Digest\":\"sha1:S6KQ566OBYLCMF7DD5QV5YEFBMQGVBRA\",\"WARC-Block-Digest\":\"sha1:IZRT3QZHGIW4ZYOYF6CA4WCKS5HT4KZX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499541.63_warc_CC-MAIN-20230128090359-20230128120359-00610.warc.gz\"}"} |
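Several of the questions above (resolution, required bit count, quantization error, aliasing) reduce to one-line formulas. A sketch of those relationships in Python — the 5 V reference and the example frequencies are made-up values for illustration:

```python
import math


def lsb_size(vref, bits):
    """Resolution (LSB size) of an ideal ADC: Vref / 2^bits."""
    return vref / (2 ** bits)


def bits_needed(vref, smallest_step):
    """Smallest bit count whose LSB is <= the voltage step you must resolve."""
    return math.ceil(math.log2(vref / smallest_step))


def quantization_rms_error(vref, bits):
    """RMS quantization error of an ideal ADC: LSB / sqrt(12)."""
    return lsb_size(vref, bits) / math.sqrt(12)


def alias_frequency(f_signal, f_sample):
    """Apparent (folded) frequency of f_signal when sampled at f_sample.

    For f_signal above the Nyquist frequency (f_sample / 2), the tone
    aliases down into the first Nyquist zone.
    """
    f = f_signal % f_sample
    return min(f, f_sample - f)
```

For instance, a 10-bit ADC with a 5 V reference has an LSB of about 4.88 mV, and a 7 kHz tone sampled at 10 kHz appears at 3 kHz.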
https://ro.scribd.com/document/103694015/2101R-94 | [
"MCP Application Notes:\n\n1. Character(s) preceded & followed by the symbols (. -) or (+ ,) are super- or subscripted, respectively. EXAMPLES: 42m.3- = 42 cubic meters; CO+2, = carbon dioxide.\n\n2. All table notes (letters and numbers) have been enclosed in square brackets in both the table and below the table. The same is true for footnotes.\n\nACI 210.1R-94 Compendium of Case Histories on Repair of Erosion-Damaged Concrete in Hydraulic Structures\n\nReported by ACI Committee 210: Stephen B. Tatro, Chairman; Patrick J. Creegan; James R. Graham; Angel E. Herrera; Richard A. Kaden; James E. McDonald; Ernest K. Schrader\n\nThis report is a companion document to ACI 210R. It contains a series of case histories on hydraulic structures that have been damaged by erosion from various physical, mechanical, and chemical actions. Many of these structures have been successfully repaired. There were many examples to select from; however, the committee has selected recent, typical projects, with differing repair techniques, to provide a broad range of current experience. These case histories cover only damage to the hydraulic surfaces due to the action of water, waterborne material, or chemical attack of concrete from fluids conveyed along the hydraulic passages. In addition to repairs of the damaged concrete, remedial work frequently includes design modifications that are intended to eliminate or minimize the action that produced the damage. This report does not cover repair of concrete damaged by other environmental factors such as freeze-thaw, expansive aggregate, or corroding reinforcement.
Keywords: abrasion; abrasion resistance; aeration; cavitation; chemical attack; concrete dams; concrete pipes; corrosion; corrosion resistance; deterioration; erosion; grinding (material removal); high-strength concretes; hydraulic structures; maintenance; outlet works; penstocks; pipe linings; pipes (tubes); pittings; polymer concrete; renovating; repairs; sewers; spillways; tolerances (mechanics); wear.\n\nCONTENTS\n\nChapter 1 - Introduction\nChapter 2 - Cavitation-erosion case histories: Dworshak Dam; Glen Canyon Dam; Lower Monumental Dam; Lucky Peak Dam; Terzaghi Dam; Yellowtail Afterbay Dam; Yellowtail Dam; Keenleyside Dam\nChapter 3 - Abrasion-erosion case histories: Espinosa Irrigation Diversion Dam; Kinzua Dam; Los Angeles River Channel; Nolin Lake Dam; Pine River Watershed, Structure No. 41; Pomona Dam; Providence-Millville Diversion Structure; Red Rock Dam; Sheldon Gulch Siphon
CHAPTER 2 - CAVITATION-EROSION CASE HISTORIES DWORSHAK DAM North Fork, Clearwater River, Idaho BACKGROUND Dworshak Dam, operational in 1973, is a straight-axis concrete gravity dam, 717 ft high, 3287 ft long at the crest, and contains 6,500,000 cubic yards of concrete. In addition to two gated overflow spillways, three regulating outlets, 12 ft wide by 17 ft high, are located in the spill-way monoliths. The inlet elevation for each regulating outlet is 250 ft below the maximum reservoir elevation. Each outlet jet is capable of a maximum discharge of 14,000 ft.3-/s. Outlet surfaces are reinforced structural concrete placed concurrently with adjacent lean, large aggregate concrete. Coatings to the outlet surfaces were applied during the original construction. In Outlet 1, the wall and invert surfaces from the tainter gate to a point 50 ft downstream are coated with an epoxy mortar having an average thickness of 3/8 in. The same area of Outlet 2 was coated using an epoxy resin, approximately .05 in. in thickness. Outlet 3 was untreated. The outlets were operated intermittently at various gate openings for a period of 4 years between 1971 and 1975, resulting in a cumulative discharge duration of approximately 10 months. The three outlets were not operated symmetrically; outlets 1 and 2 were used primarily. PROBLEM Inspection in 1973 showed minor concrete scaling of the concrete wall surfaces of Outlets 1 and 2. One year later, in 1974, serious erosion had occurred at wall surfaces of both outlets immediately downstream of the wall coatings, 50 ft from the tainter gate. Part of this wall area had eroded to a depth of 22 in., exposing and even removing some No. 9 reinforcing bars. In the wall surfaces downstream of Outlet 1 medium damage, up to 1 in. depth of erosion, also occurred in over 60 square yards of surface, bordered by lighter erosion. Every horizontal lift joint (construction joint) along the path of the jet, showed additional cavitation erosion. 
SOLUTION\n\nRepairs were categorized as three types:\n\no Areas with heavy damage, with erosion greater than 2 to 3 in., were delineated by a 3-in. saw cut and the interior concrete excavated to a minimum depth of 15 in. (Fig. 2.1 and 2.2). Reinforcement was reestablished and steel fiber-reinforced concrete (FRC) was used as the replacement material.\n\no Areas with medium damage, where the depth of erosion was less than 1 in., were bush-hammered to a depth of 3/8 to 1 in. and dry-packed with mortar. The mortar, if left untreated, would easily have failed when subjected to the high velocity discharge.\n\no Areas with minor damage, surfaces showing a sandblast texture, were not separately treated prior to polymer impregnation.\n\nThe entire wall surfaces of Outlet 1 were then treated by polymer impregnation from the downstream edge of the existing epoxy mortar coating to a distance 200 ft downstream.\n\nTABLE 1.1 SUMMARY TABLE OF PROJECTS COMPRISING THIS REPORT

| Project | Year completed | Problem | Repair | Type | Location | Owner | Reference page |
| Dworshak Dam | 1973 | Cavitation | FRC, mortar, and polymer impregnation | Gravity dam | Idaho | Corps of Engineers | 210.1R-2 |
| Glen Canyon Dam | 1964 | Cavitation | Lining replacement and aeration slot | Arch dam | Arizona | Bureau of Reclamation | 210.1R-5 |
| Lower Monumental Dam | 1969 | Cavitation | - | Navigation lock | Washington | Corps of Engineers | 210.1R-6 |
| Lucky Peak Dam | 1956 | Cavitation | Various | Outlet structure | Idaho | Corps of Engineers | 210.1R-8 |
| Terzaghi Dam | - | Cavitation | - | Outlet structure | British Columbia | B.C. Hydro Authority | 210.1R-9 |
| Yellowtail Afterbay Dam | 1966 | Cavitation | Various overlays | Stilling basin | Montana | Bureau of Reclamation | 210.1R-11 |
| Yellowtail Dam | 1966 | Cavitation | Aeration and overlays | Stilling basin | Montana | Bureau of Reclamation | 210.1R-11 |
| Keenleyside Dam | - | Cavitation | - | - | British Columbia | B.C. Hydro Authority | - |
| Espinosa Irrigation Diversion Dam | 1984 | Abrasion | Steel plate armor | Diversion dam | New Mexico | Soil Conservation Service | - |
| Kinzua Dam | 1965 | Abrasion | Silica fume concrete | - | Pennsylvania | Corps of Engineers | - |
| Los Angeles River Channel | 1940s | Abrasion | Silica fume concrete | Channel | California | Corps of Engineers | 210.1R-17 |
| Nolin Lake Dam | 1963 | Abrasion | Hydraulic redesign | Stilling basin | Kentucky | Corps of Engineers | 210.1R-18 |
| Pine River Watershed, Structure No. 41 | Proposed | Abrasion | High-strength concrete | Channel | - | Soil Conservation Service | 210.1R-19 |
| Pomona Dam | 1963 | Abrasion | Various | Stilling basin | Kansas | Corps of Engineers | 210.1R-20 |
| Providence-Millville Diversion Structure | 1986 | Abrasion | Surface hardener | Diversion dam | Utah | Soil Conservation Service | 210.1R-22 |
| Red Rock Dam | 1969 | Abrasion | Underwater concrete | Stilling basin | Iowa | Corps of Engineers | 210.1R-23 |
| Sheldon Gulch Siphon | - | Abrasion | Mortar | Siphon outlet | Wyoming | Soil Conservation Service | 210.1R-25 |
| Barceloneta Trunk Sewer | 1976 | Chemical attack | PVC lining | Pipeline | Puerto Rico | Puerto Rico Aqueduct & Sewer Authority | 210.1R-25 |
| Dworshak National Fish Hatchery | 1960s | Chemical attack | Linings | - | Idaho | Corps of Engineers | - |
| Los Angeles Sanitary Sewer System and Hyperion Sewage Treatment Facility | Varies | Chemical attack | Shotcrete and PVC liners | - | California | City of Los Angeles | - |
| Pecos Arroyo Watershed, Site 1 | 1988 | Chemical attack | HDPE liner and hydraulic redesign | - | New Mexico | Soil Conservation Service | - |
))))))))))))))))))))))))))))))\n\nDamage to the epoxy mortar was minimal and located near the outlet gate. This area was repaired with new epoxy. The polymer impregnation process involved drying all the surfaces to a temperature up to 300 F to drive off water and then allowing the surface to cool to 230 F. Monomer was then applied to the surface using a vertical soaking chamber. Excessive monomer was drained and the surface was polymerized by the application of approximately 150 F water. PERFORMANCE Operation of the outlets from the time of repair in 1975 until 1982 has been minimal, averaging 1400 ft.3-/s per outlet with peak discharges of 3600 ft.3-/s per outlet. Durations of usage are not known. After 1982 outlet discharges increased, with durations exceeding 50 days. Inspections performed in 1976, the year after the repairs, showed no additional concrete damage except for some minor surface spalling adjacent to a major pre-existing crack in an area of drypacked mortar. The spalled area\n\nFig 2.1-Dworshak Dam. Detail showing depth of erosion behind reinforcing steel\n\nFig. 2.2-Dworshak Dam. Extent of outlet surface preparation prior to concrete & mortar placements\n\nwas patched with epoxy paste, except that the epoxy paste did not bridge the crack this time. Epoxy resin coating repairs applied to Outlet 2 showed some failures. Inspections in 1983 and 1988 showed that epoxy mortar coatings in Outlet 1 continued to perform well. Small areas of damage, typically spalls, are periodically repaired with a paste epoxy. Epoxy resin coatings in Outlet 2 are repaired more frequently but are performing adequately. Surfaces repaired with FRC and mortar and subsequently polymer-impregnated showed negligible damage. Polymer-impregnated parent concrete shows a typical matrix erosion around the coarse aggregate to a depth of 1/4-in., and lift joints exhibit pitting up to 3/8-in. deep. Surfaces along lift joints not polymer-impregnated show erosion up to 3/4-in. 
in depth and a general surface pitting greater than the companion polymer-impregnated surfaces. DISCUSSION Because of variation in the operation of these outlets, both in flow rate and duration, exact time-rate erosion conclusions are difficult to make. Recent outlet discharge has fluctuated annually from moderate flows to none. In general, surfaces that received replacement materials and were subsequently polymer-impregnated have performed well. Original concrete and new polymer-impregnated concrete is showing evidence of deterioration, but at a rate that is less than the unimpregnated surfaces. The best performance was by the original epoxy mortar coating. The epoxy mortar in Outlet 1\n\ncontinues to display an excellent surface condition, with no cavitationgenerated pitting. The epoxy resin coating in Outlet 2 displays good performance. In 1988, outlets were modified by adding aeration deflectors, wedges 27 in. wide by 1.5 in. high, to the sides and bottom of each outlet. These deflectors were designed to increase the aeration of the discharge jet and further reduce the cavitation erosion of the outlet surfaces. Subsequent deterioration of the outlet surfaces has not been observed. The polymer impregnating of the concrete surfaces of the outlets was a very complex system of operations. Success requires continual evaluation of application conditions and flexibility to react to changes in those conditions. Issues relating to safety, cost, and field engineering add significant challenges to a polymer impregnation project. It is doubtful that this process would be attempted today under similar circumstances. It is more likely that the aeration deflectors would be the first remedy considered since they provide a positive solution to the problem without the higher risks of a failure inherent in the polymer impregnation process. 
REFERENCES Schrader, Ernest K., and Kaden, Richard A., \"Outlet Repairs at Dworshak Dam,\" The Military Engineer, The Society of American Military Engineers, Washington, D.C., May-June 1976, pp. 254-259. Murray, Myles A., and Schultheis, Vern F., \"Polymerization of Concrete Fights Cavitation,\" Civil Engineering, V. 47, No. 4, American Society of Civil Engineers, New York, April 1977, pp. 67-70. U.S. Army Engineer District, Walla Walla, \"Polymer Impregnation of Concrete at Dworshak Dam,\" Walla Walla, WA, July 1976, Reissued April 1977. U.S. Army Engineer District, Walla Walla, \"Periodic Inspection Reports No. 6, 7, and 8, Dworshak Dam and Reservoir,\" Walla Walla District, Jan. 1985. CONTACT/OWNER Walla Walla District, Corps of Engineers City-County Airport Walla Walla, WA 99362 GLEN CANYON DAM Colorado River, Northeast Arizona BACKGROUND Glen Canyon Dam, operational in 1964, is a concrete gravity, arch structure, 710 ft high with a crest length of 1560 ft. The dam is flanked on both sides by high-head tunnel spillways, each including an intake structure with two 40- by 55-ft radial gates. Each tunnel consists of a 41-ft diameter section inclined at 55 percent, a vertical bend (elbow), and 985 ft of near horizontal length followed by a deflector bucket. Water first flowed through the spillways in 1980, 16 years after completion of the dam. PROBLEM In late May 1983, runoff in the upper reaches of the Colorado River was steadily increasing due to snowmelt from an extremely heavy snowpack. On June 2, 1983, the left tunnel spillway gates were opened to release 10,000\n\nft.3-/s. On June 5 the release was increased to 20,000 ft.3-/s. On June 6 officials heard loud rumbling noises coming from the left spillway. Engineers examined the tunnel and found several large holes in the invert of the elbow. This damage was initiated by cavitation, triggered by discontinuities formed by calcite deposits on the tunnel invert at the upstream end of the elbow. 
In spite of this damage, continued high runoff required increasing the discharge in the left spillway tunnel to 23,000 ft.3-/s by June 23. Flows in the right spillway tunnel were held at 6000 ft.3-/s to minimize damage from cavitation. Spillway gates were finally closed July 23, and engineers made a thorough inspection of the tunnels. Extensive damage had occurred in and near the left tunnel elbow (Fig. 2.3). Immediately downstream from the elbow, a hole (35 ft deep, 134 ft long, and 50 ft wide) had been eroded in the concrete lining and underlying sandstone foundation. Other smaller holes had been eroded in the lining in leapfrog fashion upstream from the elbow.\n\nSOLUTION\n\nThe repair work was accomplished in six phases: 1) removing loose and defective concrete lining and foundation rock; 2) backfilling large cavities in the sandstone foundation with concrete; 3) reconstructing the tunnel lining; 4) grinding and patching of small defective areas; 5) removing about 500 cubic yards of debris from the lower reaches of the tunnel and flip bucket; and 6) constructing an aeration device in the lining upstream of the vertical elbow. Sandstone cavities were filled with tremie concrete before the lining was replaced. About 2000 cubic yards of replacement concrete was used. The aeration slot was modeled in the Bureau of Reclamation Hydraulic Laboratory to ensure that its design would provide the performance required. The aeration slot was constructed on the inclined portion of the tunnel approximately 150 ft upstream from the start of the elbow. A small 7-in.-high ramp was constructed immediately upstream of the slot. The slot was 4 by 4 ft in cross section and extended around the lower three-fourths of the tunnel circumference (Fig. 2.4). All repairs and the slot were completed in the summer of 1983.\n\nPERFORMANCE\n\nBecause of the moderate runoff in the Colorado River since completion of the tunnel repairs, it has not been necessary to use the large spillway tunnels.\n\nFig. 2.3-Glen Canyon Dam. Erosion of spillway tunnel invert & sandstone foundation rock downstream of the elbow\n\nHowever, shortly after completion of the work, another high runoff period permitted performance of a field verification test. This test lasted 72 hr with a maximum flow during that time of 50,000 ft.3-/s. The test was conducted in two phases with several interruptions in each for examination of the tunnel. Offsets were intentionally left in place to evaluate whether the aeration slot would indeed preclude cavitation and attendant concrete damage. The tunnel repairs and air slot performed as designed. No sign of cavitation damage was evident anywhere in the tunnel. Aeration has decreased the flow capacity of the spillway tunnels by approximately 20 percent of the original flow capacity.\n\nREFERENCES\n\nBurgi, P.H., and Eckley, M.S., "Repairs at Glen Canyon Dam," Concrete International, American Concrete Institute, MI, V. 9, No. 3, Mar. 1986, pp. 24-31.\nFrizell, K.W., "Glen Canyon Dam Spillway Tests Model -- Prototype Comparison," Hydraulics and Hydrology in the Small Computer Age, Proceedings of the Specialty Conference, Lake Buena Vista, Florida, Aug. 12-17, 1985, American Society of Civil Engineers, New York, 1985, pp. 1142-1147.\nFrizell, K.W., "Spillway Tests at Glen Canyon Dam," U.S. Bureau of Reclamation, Denver, CO, July 1985.\nPugh, C.A., "Modeling Aeration Devices for Glen Canyon Dam," Water for Resource Development, Proceedings of the Conference, Coeur d'Alene, Idaho, Aug. 14-17, 1984, American Society of Civil Engineers, New York, 1984, pp. 412-416.\n\nCONTACT\n\nU.S. Bureau of Reclamation P.O. 
Box 25007, Denver Federal Center Denver, CO 80225 LOWER MONUMENTAL DAM Snake River, Near Kahlotus, Washington BACKGROUND Lower Monumental Dam, operational in 1970, consists of a concrete gravity spillway and dam, earthfill embankments, a navigation lock, and a six-unit powerhouse.

The 86-ft wide by 675-ft long navigation lock chamber, with a rise of 100 ft, is filled and emptied by two galleries or culverts, landside and riverside of the lock structure. The landside culvert, which supplies five downstream laterals, crosses under the navigation lock to discharge into the river. The riverside culvert supplies and discharges water to the upstream five laterals. Each lateral consists of 10 portal entrances approximately 1.5 ft wide by 3 ft high. Flow velocities in excess of 120 ft/s occur in several of the portal entrances. A tie-in gallery exists between the two main culverts, near the downstream gates, that equalizes the pressure between the two culverts. PROBLEM Inspections as early as 1975 revealed that the ceiling concrete of the landside culvert was spalled at some monolith joints to depths of 9 in. This may have been initiated by differential movement of adjacent monoliths when the lock chamber was filled and emptied. Damage to the invert, at several locations, was irregular, with erosion a maximum of 18 in. deep at the monolith joint, decreasing to 1 in. at a point 10 ft upstream of the joint. Reinforcing steel was exposed. Other areas of erosion in the invert and on wall surfaces were observed, measuring 2 ft square and 2 in. deep. Later inspections revealed that portal surfaces nearest the culverts of the most downstream laterals were showing signs of concrete erosion (Fig. 2.5). By 1978, the portal walls, ceiling, and invert had eroded as deep as 3 in. over an area of 5 square ft, exposing reinforcing steel. All four corners of the tie-in gallery experienced obvious cavitation damage.
The damage varied from minor pitting to exposure and undercutting of the 1-1/2-in. aggregate. SOLUTION In 1978, the navigation lock system was shut down for two weeks for repairs. The major erosion damage to the landside culvert was repaired by mechanically anchored steel fiber-reinforced concrete. The smaller areas of damage received a trowel application of a paste epoxy product. Ceiling damage was backfilled with dry-mix shotcrete. Portal and tie-in gallery surfaces received application of a paste epoxy, troweled to a feather edge around the perimeter. PERFORMANCE The mechanically anchored fiber-reinforced concrete has performed well to date. No additional erosion has been observed. Shotcrete patches to the ceiling adjacent to the joints show continued spalling, but to a lesser extent than prior to repairs. The repairs to the portal surfaces and tie-in gallery surfaces performed poorly. After 1 year of service, approximately 40 percent of the epoxy paste had failed; and after 3 years, nearly 100 percent had failed. Concrete erosion in these areas has subsequently increased to depths of 6 to 8 in. in the tie-in gallery and up to 5 to 6 in. on the two most downstream portal surfaces.

Fig. 2.5-Lower Monumental Dam. Cavitation erosion of navigation lock portal surface

DISCUSSION Recent inspections have shown that the rate of erosion has decreased. The accumulated erosion of concrete from certain surfaces is significant; however, subsequent erosion is almost negligible. Consequently, repair schedules are not critical.

Paste epoxy was applied to the concrete surfaces transitioning to feather edges along the perimeter of the patches. Cavitation eroded the concrete adjacent to the feather edges as well as eroding the thin epoxy edges (Fig. 2.5). These new voids undermined the new, thicker epoxy, and at some point caused another failure of the leading edge.
As the leading edge void increased in size, the failure progressed until little epoxy was left in the repaired area. After erosion of the epoxy patch material, no further concrete erosion has occurred. It appears that the eroded configuration of the surface is hydraulically stable. Patch-type repair procedures are not sufficient for this structure because erosion is initiated at the edge of the new patch. Eventual repairs will replace larger areas of the concrete flow surfaces and will include substantial anchoring of new materials. REFERENCES U.S. Army Engineer District, Walla Walla, "Periodic Inspection Report No. 6, Lower Monumental Lock and Dam," Walla Walla, WA, Jan. 1977. U.S. Army Engineer District, Walla Walla, "Periodic Inspection Report No. 7, Lower Monumental Lock and Dam," Walla Walla, WA, Jan. 1981. U.S. Army Engineer District, Walla Walla, "Periodic Inspection Report No. 8, Lower Monumental Lock and Dam," Walla Walla, WA, Jan. 1983. CONTACT/OWNER Walla Walla District, Corps of Engineers City-County Airport Walla Walla, WA 99362 LUCKY PEAK DAM Boise River, Near Boise, Idaho BACKGROUND Lucky Peak Dam, operational in 1955, is 340 ft high with a crest length of 2340 ft. The dam is an earth and rockfill structure with a silt core, graded filters, and rock shells. The ungated spillway is a 6000-ft-long ogee weir discharging into an unlined channel. The outlet works consists of a 23-ft-diameter steel conduit that delivers water to a manifold structure with six outlets. Each outlet is controlled by a 5.25-ft by 10-ft slide gate. Individual flip lips were constructed downstream from each slide gate. Downstream of the flip lips is the plunge pool, excavated into the basalt rock, with bottom areal dimensions of 150 by 150 ft. The outlet alignment and design were determined by hydraulic modeling.
The six outlets operated under a maximum head of 228 ft with a design discharge of 30,500 ft3/s and a maximum discharge velocity ranging between 88 ft/s and 124 ft/s. PROBLEM The steel manifold gates have a long history of cavitation erosion problems. The original bronze gate seals were seriously damaged by cavitation after initial use. Flow velocities across the manifold gate frames in excess of 150 ft/s for many days were common. The gate seals were replaced with new seals made of stainless steel and aluminum-bronze. The cast-steel gate frames required continual repair of cavitated areas. In 1975 alone, over 2000 pounds of stainless steel welding rod was manually welded into the eroded areas and ground smooth. Neat cement grout was pumped behind the gate frames to reestablish full bearing of the gate frames with the concrete structure. The concrete invert and side piers, which separate each of the six flip lips, suffered extensive erosion soon after the start of operations in 1955 (Fig. 2.6). Steel plates, 3/4 in. thick, were anchored to the piers and invert areas just downstream of the manifold gates. These steel wall plates became severely pitted, as did the downstream concrete flip lip invert surfaces. In 1968, the damaged plates were again repaired by filling the eroded areas with stainless steel welding, and grouting behind the plates. Deteriorated concrete on the flip lips was removed and additional steel plates were installed over those areas. This also failed and repairs commenced again. Deep areas of cavitation damage in the invert and piers were filled with concrete. New 1/2-in.-thick plates were installed. These were stiffened with steel beams, welded on 5-ft centers in each direction. Deep anchor bars were welded to the plate material to hold them in place. Again, the voids under the plates were grouted. But during the next two years, these repairs also failed. In 1974, it was recommended that the outlet be restudied hydraulically.
That year, remaining plate material was removed. Cavities were found penetrating the invert and through the piers and into the adjacent outlet invert. These voids were crudely filled with FRC in a "field expedient" manner. Much of this FRC was placed in standing water with little quality control, while adjacent bays were discharging. SOLUTION The side piers were redesigned and replaced to provide vents that would introduce air to the underside of the jet just downstream of the gates. This modification was intended to prevent additional invert erosion. However, major modifications to the gates and gate frames were necessary if cavitation erosion was to be eliminated. These modifications were not made since future powerhouse construction would reduce and nearly eliminate the need to use the outlet, reserving the structure for emergency and special operations use only. Steel lining on the piers was strengthened and replaced. Stiffened steel plates, 1-1/4 in. thick, were installed on the piers and invert. Mortar backfill was pumped behind the invert plates and new concrete placed between pier plates. PERFORMANCE After one year of above average usage on bays 3 and 4, cavitation was again observed. The side piers just downstream of the gates showed areas of 1 to 2 square ft that had eroded through the steel plate and into the concrete about 6 in. No erosion of the invert plates or the "field expedient" FRC occurred. Use of these bays has almost stopped since the new powerhouse became operational. DISCUSSION The introduction of air beneath the jet appears to have cushioned the effects of cavitation on the flip lip invert. However, pier walls continue to erode at an extraordinary rate. The cause lies with the design of the gates and gate frame. It is evident that satisfactory performance of the structure can never be achieved until the gates and frames are redesigned and reconstructed to eliminate the conditions that cause cavitation.

Fig. 2.6-Lucky Peak Dam. Cavitation erosion of flip lip surface

REFERENCES U.S. Army Engineer District, Walla Walla, "Lucky Peak Lake, Idaho, Design Memorandum 12, Flip Bucket Modifications," Supplement No. 1, Outlet Works, Slide Gate Repair and Modification, Walla Walla, WA, July 1986. U.S. Army Engineer District, Walla Walla, "Periodic Inspection Report No. 6, Lucky Peak Lake," Walla Walla, WA, Jan. 1985. U.S. Army Engineer District, Walla Walla, "Periodic Inspection Report No. 7, Lucky Peak Lake," Walla Walla, WA, Jan. 1989. CONTACT/OWNER Walla Walla District, Corps of Engineers City-County Airport Walla Walla, WA 99362 TERZAGHI DAM Bridge River Near Lillooet, British Columbia, Canada BACKGROUND Terzaghi Dam, operational in 1960, is 197 ft high with a crest length of 1200 ft. The earth and rockfill embankment, consisting of an upstream impervious fill, clay blanket, sheet pile cutoff, and multiline grout curtain, is founded on sands and gravels infilling a deep river channel. The dam impounds Bridge River flow to form the Carpenter Lake reservoir, from which water is drawn through two tunnels to Bridge River generating stations 1 and 2, located at Shalalth, B.C., on Seton Lake.

Terzaghi Dam discharge facilities are composed of a surface spillway consisting of a 345-ft-long free overflow section and a gated section with two 25-ft-wide by 35-ft-high gates. Two rectangular low-level outlets (LLOs), each 8 ft wide by 16 ft high, are subject to a maximum head of 169 ft. These outlets were constructed in the top half of the concrete plug in the 32-ft horseshoe-shaped diversion tunnel.

Fig. 2.7-Terzaghi Dam. Downstream detail of constrictor ring

PROBLEM The LLOs were operated in 1963 for about 23 days to draw down Carpenter Lake to permit low-level embankment repairs. Severe cavitation erosion of the concrete wall and ceiling surfaces downstream of bulkhead gate slots was observed in the north LLO after the water release.
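For rough orientation (a back-of-the-envelope check, not a calculation from the report), the flow velocity attainable under the LLOs' 169-ft maximum head can be estimated from Torricelli's relation, V = sqrt(2gH); losses in the real outlet would reduce this somewhat, but the result shows why surfaces just downstream of the gate slots were exposed to cavitation-prone velocities.

```python
import math

G = 32.2  # gravitational acceleration, ft/s^2

def ideal_outflow_velocity(head_ft):
    """Torricelli estimate V = sqrt(2 g H), in ft/s (neglects losses)."""
    return math.sqrt(2.0 * G * head_ft)

# Maximum head on the Terzaghi LLOs per the report.
v = ideal_outflow_velocity(169.0)
print(f"ideal velocity under 169 ft of head: {v:.0f} ft/s")
```

The estimate is on the order of 100 ft/s, comparable to the 120 ft/s portal velocities cited for Lower Monumental above; at such speeds even small geometric irregularities, such as unstreamlined gate slots, can drive local pressures down to vapor pressure.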
Dam safety investigations in 1985 identified that the LLOs were required to permit emergency drawdown of Carpenter Lake for dam inspection and repair, and to provide additional discharge capacity during large floods. SOLUTION The repair consisted of three main categories of work -- repair of damage, improvement to reduce cavitation potential, and refurbishing gates and equipment. Repair of cavitation damage in the north LLO included repair of the walls, crown, and gate slots. Improvements to reduce cavitation potential included 1) installing 9-in.-deep rectangular constrictor frames (Fig. 2.7) immediately downstream of the operating gates to increase pressures in the previously cavitated area; 2) backfilling old bulkhead gate slots and streamlining the existing LLO invert entrances; and 3) installing piezometers in the north LLO to provide information on flow characteristics of the streamlined LLO during discharge testing. Refurbishing gates and equipment included 1) replacing leaking gate seals on closure gates; 2) sandblasting and repainting gates, guides, head covers, and air shafts; 3) cleaning gate lifting rods and replacing bonnet packings; 4) replacing ballast concrete in north LLO gates and installing ballast cover plates on all gates; and 5) refurbishing hydraulic lifting mechanisms of gates. Repair concrete was designed to fully bond with existing concrete. Surface preparation included saw cutting around the perimeter of the damage, chipping to expose rebar, and installation of grouted dowels. Latex-modified concrete was used for all repair work, with steel fiber reinforcement for the cavitation-damaged areas. A total of 26 cubic yards of 3000-psi ready-mixed concrete was placed by pumping. Maximum aggregate sizes of 3/8 in. and 3/4 in. were used for general repair and invert entrance backfill, respectively. The constrictor frames were made from 1/2-in. and 3/4-in. steel plate.
They were installed in the LLOs by means of the following: 1) bolting the constrictor frame to the existing concrete with a double row of 1-in.-diameter adhesive anchors at 12-in. spacing; 2) keying the constrictor infill concrete into the existing concrete; 3) welding the constrictor frame to the existing gate metalwork in the walls and soffit; and 4) embedding the constrictor sill shear bar into the existing concrete invert (Fig. 2.7). PERFORMANCE A test with a full reservoir and a peak discharge of 7000 ft3/s, with both gates opened 7 ft, verified that the constrictor frames and concrete repairs downstream of the closure gates performed as designed. No cavitation erosion of the wall and ceiling surfaces was observed. DISCUSSION Piezometer readings confirmed that the constrictor frames in the LLOs helped maintain pressures above atmospheric, indicating that cavitation should not be a problem in the future. REFERENCES B.C. Hydro, "Terzaghi Dam, Low Level Outlet Repairs -- Memorandum on Construction," Report No. EP6, Vancouver, B.C., Dec. 1986. B.C. Hydro, "Terzaghi Dam, Low Level Outlet Tests," Report No. H1902, Vancouver, B.C., Mar. 1987. CONTACT/OWNER British Columbia Hydro Hydrotechnical Department, HED 6911 Southpoint Drive Burnaby, British Columbia, Canada V3N 4X8 YELLOWTAIL AFTERBAY DAM Bighorn River, Montana BACKGROUND Yellowtail Afterbay Dam, operational in 1966, is a 33-ft-high concrete gravity diversion type structure, 300 ft long, located about 1 mile downstream from Yellowtail Dam. In 1967, following a heavy winter/spring snowpack in the upstream drainage basin, flood flows passed through both Yellowtail Dam and the Afterbay Dam.

PROBLEM Divers examined the Afterbay Dam sluiceway and stilling basin after the flood flows had passed. They found cavitation damage on the dentates (baffle blocks) and adjacent floor and wall areas in the spillway stilling basin.
Although the cavitation damage was moderate, repairs were necessary to lessen the likelihood that future cavitation damage would occur. Damage to the dentates and floor in the sluiceway was caused by abrasion. The relatively low sill at the downstream end of the sluiceway was permitting downstream gravel and sand to be drawn into the stilling area, where a ball mill-type action ground away the concrete surfaces. In the stilling basin downstream of the reverse ogee section, cavitation severely eroded the sides of the dentates and the adjacent floor areas. A similar condition developed in the sluiceway except that it was caused by abrasion erosion. Since the damage from the two causes occurred essentially side by side, the situation graphically illustrated the dissimilar types of erosion resulting from cavitation and abrasion. SOLUTION Following the flood, low flows at the dam could be maintained for only one month. That situation required that all repairs be completed quickly and concurrently. In addition to repairing damaged areas, the downstream sill in the sluiceway was raised about 3 ft to stop river gravels from being drawn into the sluiceway. Repairs were completed using a combination of bonded concrete, epoxy-bonded concrete, and epoxy-bonded epoxy mortar, depending upon thickness of the repair. Epoxy used in this repair was a polysulfide-type material. After the repair materials had been placed and cured, they were ground to provide a smooth, cavitation-resistant surface. PERFORMANCE The dam has now been in service about 23 years since the repairs were made. With the exception of a minor number of spalls, the performance of the repairs has been excellent. REFERENCES Graham, J.R., "Spillway Stilling Basin Repair Using Bonded Concrete and Epoxy Mortar," Proceedings, Irrigation and Drainage Specialty Conference, Lincoln, NE, Oct. 1971, pp. 185-204.
Graham, J.R., and Rutenbeck, T.E., "Repair of Cavitation Damaged Concrete, a Discussion of Bureau of Reclamation Techniques and Experiences," Proceedings, International Conference on Wear of Materials, St. Louis, MO, April 1977, pp. 439-445. CONTACT Bureau of Reclamation P.O. Box 25007, Denver Federal Center Denver, CO 80225 YELLOWTAIL DAM Bighorn River, Montana BACKGROUND

The dam, operational in 1966, is a concrete arch structure 525 ft high with a crest length of 1480 ft. Normal flow through the dam occurs in two 84-in. outlet pipes and through the turbines of the powerhouse. Flows exceeding the capacity of these facilities are routed through a high-head spillway located in the left abutment. At this spillway, water enters through a radial-gated intake structure, then passes into an inclined section of tunnel varying in diameter from 40.5 ft at the upper end to 32 ft at the beginning of the vertical elbow. Thereafter, flow follows the 32-ft-diameter tunnel through the elbow and 1200 ft of near horizontal tunnel, exiting into a combination stilling basin-flip bucket, then into the river. During the spring of 1967, heavy rains in the watershed area of the Bighorn River resulted in high inflows into Bighorn Lake behind Yellowtail Dam. A total of 650,000 acre-ft of flood waters was released through the spillway over a period of 30 days. Maximum flow was 18,000 ft3/s. PROBLEM During the 1967 spill, severe damage occurred to the concrete tunnel lining and underlying rock in the elbow, as well as upstream and downstream. After the flows into the river had subsided sufficiently for a temporary shutdown of the tunnel, divers made an examination. Major damage was found in the near-horizontal section of the tunnel lining and in the elbow. Failure occurred along the tunnel invert in a leapfrog fashion, typical of cavitation damage. The largest cavity was about 100 ft long, 20 ft wide, and 6 to 8 ft deep.
After the tunnel was dewatered, it was found that a small concrete patch placed during construction had failed, thereby causing the discontinuity in the flow that triggered the cavitation. SOLUTION The tunnel liner was repaired using several systems depending on the size and depth of the damage. Areas where the damage extended through the lining into the foundation rock were repaired with high quality replacement concrete. Major areas of damage where the erosion did not penetrate through the concrete lining were repaired with bonded concrete. Shallow-damaged concrete was repaired with epoxy-bonded concrete and epoxy-bonded epoxy mortar. Surfaces were ground where necessary to bring tolerances into conformance with specification requirements. Finally, tunnel surfaces below spring line were painted with an epoxy-phenolic paint to help seal the surface and bond any aggregate particles that may have been loosened. In order to avoid recurring damage, an aeration device was model tested in the laboratory and then constructed in the tunnel a few feet upstream of the point of curvature of the vertical elbow. This aeration slot measured 3 ft wide and 3 ft deep and extended around the lower three-quarters of the tunnel circumference. It was designed to entrain air in the flow for all discharges up to 92,000 ft3/s, without the slot filling with water. A 27-in.-long ramp was constructed upstream of the slot, which raised the upstream face of the slot 3 in. at the tunnel invert. Under most flow conditions the bottom of the jet was forced away from the tunnel floor surface. The jet remained free for a considerable distance downstream, all the while drawing air into the jet from the aeration slot. Aeration has reduced the discharge capacity by approximately 20 percent. PERFORMANCE It has now been 23 years since the tunnel was repaired and the aeration slot installed, but flows in the river have never been sufficient to require use of the spillway.
However, a controlled prototype test with flows to 16,000 ft3/s was conducted in 1969 and 1970. As a result of this test,

The cavitation erosion at the foot of the gate slot damaged not only the concrete invert but also the lower part of the steel liner within the gate slot and an area of the wall immediately downstream of the liner. The 1986 study concluded that the severe concrete erosion at and just downstream of the gate slots was due to 1) cavitation caused by vortices originating in the upstream corners of the gate slots at small, part-gate operation; and 2) lack of rounding and lack of offset of the downstream edge of the gate slot. SOLUTION Initially, it was recommended that 1) eroded areas should be filled with concrete and armored with steel plates, and 2) field tests should be conducted to identify cavitation zones. Later, the recommendation was changed to backfill cavitated areas with 3/4-in. aggregate, high strength (6000 psi) concrete. The bond between the backfill and the original sluiceway concrete was enhanced by an epoxy bonding agent. The top surface of the new patch and the surrounding original concrete were coated with an acrylic latex selected through an extensive laboratory screening process. The work was carried out in the summer of 1990. PERFORMANCE In order to test the effectiveness of the repairs, during the following year it was decided to operate the sluice gates mostly in the worst range. A year later, the repaired and coated surfaces began to show signs of pitting. The performance of the repair still did not appear satisfactory. It became obvious that besides repairing the eroded areas, other initiatives were needed to alleviate recurrence of the problem. DISCUSSION Based on the observations of the effect of gate opening on cavitation, it was decided to limit gate operation to that outside of the destructive range.
Gate operating orders were rewritten to require "passing over" the rough zones as quickly as possible without any sustained operation in those zones. REFERENCES B.C. Hydro, Hydroelectric Engineering Division, "Hugh Keenleyside Dam, Cavitation Damage on Spillway," Report No. H1922, Vancouver, B.C., Mar. 1987. B.C. Hydro, Hydroelectric Engineering Division, "Keenleyside Dam, Comprehensive Inspection and Review 1986," Report No. H1894, Vancouver, B.C., May 1987. B.C. Hydro, Hydroelectric Engineering Division, "Hugh Keenleyside Dam, Cavitation Damage on Spillway, Field Investigation of Cavitation Noise and Proposed Gate Operating Schedules," Report No. 2305, Vancouver, B.C., June 1992. CONTACT/OWNER British Columbia Hydro Structural Department HED 6911 Southpoint Drive Burnaby, British Columbia, Canada V3N 4X8

Fig. 2.8-Keenleyside Dam. Cavitation erosion of concrete invert and adjacent damage to steel liner. Maximum depth approximately 9 in.

CHAPTER 3 - ABRASION-EROSION CASE HISTORIES ESPINOSA IRRIGATION DIVERSION DAM Espanola, New Mexico, on the Santa Cruz River BACKGROUND The diversion dam is a reinforced concrete structure that is capable of diverting up to 13 ft3/s into the Espinosa Ditch for irrigation purposes. A 50-ft-long reinforced rectangular concrete channel, sediment trap, and sluice gate structures were constructed between the headgate and the ditch heading. A sidewall weir notch is provided in the rectangular ditch lining to allow emergency discharge of flood flows back to the river. A 24-in. round sluice gate at the right side of the dam was placed at the slab invert elevation to sluice sand and cobbles through the dam and to prevent these materials from entering the irrigation ditch head gate. The dam is tied back into the riverbanks on either side with small earthen dikes that protect the surrounding land against flood flows of 1000 ft3/s or less.
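The report gives no dimensions for the Espinosa sidewall weir notch, but its emergency capacity would follow the standard rectangular-weir relation Q = C L H^(3/2). The sketch below is purely illustrative; the coefficient and dimensions are hypothetical, chosen only to show the form of the calculation.

```python
def weir_discharge(c, crest_length_ft, head_ft):
    """Rectangular-weir relation Q = C * L * H**1.5, giving Q in ft3/s."""
    return c * crest_length_ft * head_ft ** 1.5

# Hypothetical example -- none of these values come from the report:
# a 10-ft notch flowing 1 ft deep with a coefficient of about 3.0.
q = weir_discharge(3.0, 10.0, 1.0)
print(f"approximate notch capacity: {q:.0f} ft3/s")
```

With these assumed numbers the notch would pass about 30 ft3/s, comfortably above the 13-ft3/s irrigation diversion, which is the kind of margin an emergency overflow is sized to provide.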
PROBLEM Debris plugged the sluice gate, preventing the diversion of the bedload from the irrigation ditch. The structure experienced severe erosion damage to the apron and floor blocks (Fig. 3.1) due to impact and abrasion by the bedload. The bedload consists of gravels and boulders ranging up to 24 in. in diameter. The concrete in the apron in the impact area was abraded to a depth of 6 in. Except for very low flows and flows diverted for irrigation, the bedload is carried over the weir.

Fig. 3.1-Espinosa Irrigation Diversion Dam. Erosion damage to the floor blocks

Fig. 3.2-Espinosa Irrigation Diversion Dam. Steel plate protection added to the floor blocks and endsill

SOLUTION Repairs were made by extensive structural modifications. These modifications included the following (Fig. 3.2): 1) removing and replacing the top layer of reinforcement in the apron; 2) removing and replacing the top 6 in. of concrete; 3) protecting the apron with a 1/2-in. steel plate; and 4) replacing the 24-in. round sluice gate with a 36-in. square sluice gate. PERFORMANCE The structure has been operating satisfactorily since rehabilitation in 1982. DISCUSSION Five alternatives were evaluated for placing the diversion dam back into service. The ones not selected as the solution are as follows: 1) install a reinforced concrete lining inside the walls and apron of the existing structure; 2) protect the apron with a 1/2-in. steel plate; 3) remove the entire apron of the structure and replace it with one that is adequately reinforced, and add the liner inside the structure; and 4) remove the entire structure and replace it with a new one.

REFERENCES U.S. Department of Agriculture, "Espinosa Diversion Dam, Report of Investigation of Structural Failure," Soil Conservation Service, Albuquerque, NM, Nov. 1980. U.S. Department of Agriculture, "Espinosa Diversion Dam, Design Engineer's Report," USDA, Soil Conservation Service, Albuquerque, NM, Sept. 1982.
CONTACT/OWNER State Conservation Engineer U.S. Department of Agriculture Soil Conservation Service 517 Gold Avenue, SW, Room 3301 Albuquerque, NM 87102 KINZUA DAM Allegheny River, Warren County, Pennsylvania

BACKGROUND Kinzua Dam became operational in 1965. The stilling basin consists of a horizontal apron, 160 ft long and 204 ft wide. It contains nine 7-ft-high by 10-ft-wide baffles, located 56 ft upstream from the end sill. The vertical-faced end sill is 10 ft high and 6 ft wide. The basin slab was constructed of concrete with a 28-day compressive strength of 3000 psi. The outlet works consists of two high-level and six low-level sluices. A maximum conservation flow of about 3600 ft3/s is supplied by the high-level sluices. The low-level sluices with flared exits containing tetrahedral deflectors are located 26 ft above the stilling basin slab. Bank-full capacity, 25,000 ft3/s, can be discharged through these sluices at reservoir elevation 1325. The maximum record discharge of 24,800 ft3/s passed through the sluices in 1972. The maximum velocity at the sluice exit was 88 ft/s. PROBLEM Because of the proximity of a pumped-storage power plant on the left abutment and problems from spray, especially during the winter months, the right side sluices were used most of the time. Use of these sluices caused eddy currents that carried debris into the stilling basin. The end sill was below streambed level and contributed to the deposition of debris in the basin. Divers reported erosion damage to the basin floor as early as 1969. Also, piles of rock, gravel, and other debris in the basin were reported. About 50 cubic yards of gravel and rock, ranging up to 8 in. in diameter, were removed from the basin in 1972. Abrasion-erosion damage reached a depth of 3.5 ft in some areas before initial repairs were made in 1973 and 1974. These repairs were made with steel fiber-reinforced concrete.
Approximately 1400 cubic yards of fiber concrete was required to overlay the basin floor. From the toe of the dam to a point near the baffles, the overlay was placed to an elevation 1 ft higher than the original floor. In April 1975, divers reported several areas of abrasion-erosion damage in the fiber concrete. Maximum depths ranged from 5 to 17 in. Approximately 45 cubic yards of debris were removed from the stilling basin. Additional erosion was reported in May 1975, and another 60 cubic yards of debris were removed from the basin. At this point, symmetrical operation of the lower sluices was initiated to minimize eddy currents downstream of the dam. After this change, the amount of debris removed each year from the basin was drastically reduced and the rate of abrasion declined. However, nearly 10 years after the repair, the erosion damage had progressed to the same degree that existed prior to the repair. SOLUTION A materials investigation was initiated prior to the second repair, to evaluate the abrasion-erosion resistance of potential repair materials. Test results indicated that the erosion resistance of conventional concrete containing a locally available limestone aggregate was not acceptable (Fig. 3.3). However, concrete containing this same aggregate with the addition of silica fume and a high-range, water-reducing admixture exhibited high compressive strengths (approximately 14,000 psi at 28 days' age) and very good resistance to abrasion erosion. Therefore, approximately 2000 cubic yards of silica-fume concrete was used in a 12-in. minimum thickness overlay when the stilling basin was repaired in 1983 (Fig. 3.4).

Construction of a debris trap immediately downstream of the stilling basin end sill was also included in the repair contract. Hydraulic model studies showed that such a trap would be beneficial in preventing downstream debris from entering the stilling basin.
The trap was 25 ft long with a 10-ft-high end sill that spanned the entire width of the basin. PERFORMANCE In August 1984, after periods of discharge through the upper and lower sluices, abrasion-erosion along some cracks and joints was reported by divers. The maximum depth of erosion was about 1/2 in. The divers also discovered two pieces of steel plating that had been embedded in the concrete around the intake of one of the lower sluices. Because of concern about further damage to the intake, the use of this sluice in discharging flows was discontinued. This nonsymmetrical operation of the structure resulted in the development of eddy currents. The next inspection, in late August 1984, found approximately 100 cubic yards of debris in the basin. In September 1984, a total of about 500 cubic yards of debris was removed from the basin, the debris trap, and the area immediately downstream of the trap. The rock debris in the basin ranged from sand-sized particles to over 12 in. in diameter. Despite these adverse conditions, the silica-fume concrete continued to exhibit excellent resistance to abrasion. Erosion along some joints appeared to be wider but remained approximately 1/2 in. deep. Sluice repairs were completed in late 1984, and symmetrical operation of the structure was resumed. A diver inspection in May 1985 indicated that the condition of the stilling basin was essentially unchanged from the preceding inspection. A diver inspection approximately 3-1/2 years after the repair indicated that the maximum depth of erosion, located along joints and cracks, was about 1 in.

Fig. 3.4-Kinzua Dam. Typical silica-fume concrete placement operation for a stilling basin slab

REFERENCES Fenwick, W.B., "Kinzua Dam, Allegheny River, Pennsylvania and New York; Hydraulic Model Investigation," Technical Report HL-89-17, U.S. Army Engineer Waterways Experiment Station, Vicksburg, MS, Aug. 1989.
Holland, T.C., "Abrasion-Erosion Evaluation of Concrete Mixtures for Stilling Basin Repairs, Kinzua Dam, Pennsylvania," Miscellaneous Paper SL-83-16, U.S. Army Engineer Waterways Experiment Station, Vicksburg, MS, Sept. 1983. Holland, T.C., "Abrasion-Erosion Evaluation of Concrete Mixtures for Stilling Basin Repairs, Kinzua Dam, Pennsylvania," Miscellaneous Paper SL-86-14, U.S. Army Engineer Waterways Experiment Station, Vicksburg, MS, Sept. 1986. Holland, T.C.; Krysa, A.; Luther, M.D.; and Liu, T.C., "Use of Silica-Fume Concrete to Repair Abrasion-Erosion Damage in the Kinzua Dam Stilling Basin," Fly Ash, Silica Fume, Slag, and Natural Pozzolans in Concrete, SP-91, V. 2, American Concrete Institute, Detroit, MI, 1986, pp. 841-863. McDonald, J.E., "Maintenance and Preservation of Concrete Structures, Report 2, Repair of Erosion-Damaged Structures," Technical Report No. C-78-4, U.S. Army Engineer Waterways Experiment Station, Vicksburg, MS, April 1980. CONTACT/OWNER U.S. Army Engineer District Pittsburgh William S. Moorhead Federal Building 1000 Liberty Avenue Pittsburgh, PA 15222 LOS ANGELES RIVER CHANNEL Los Angeles River, California BACKGROUND The Los Angeles River Channel is an improved structural channel that drains a watershed of 753 square miles. The majority of the channel was constructed in the 1940s. In the invert of the concrete-lined main channel is a reinforced concrete low-flow channel. This low-flow channel is approximately 12 miles long and was originally constructed with an invert thickness of 12 in. Water velocities in that channel range from 20 to 30 ft/s. PROBLEM Over the years, abrasion erosion has occurred to varying degrees along the low-flow channel. In some reaches, erosion had progressed completely through the concrete by the early 1980s. This erosion was the result of a combination of abrasion by waterborne sediment and debris passing over the concrete, and chemical attack.
SOLUTION Prior to repair, laboratory studies were conducted to evaluate the abrasion-erosion resistance of concretes containing locally available aggregates. Typically, these aggregates exhibit a relatively high abrasion loss when tested according to ASTM C 131, using the Los Angeles machine. Results of the laboratory tests indicated that concrete with a high cement content, a silica fume content of 15 percent by mass of portland cement, and a low water-cement ratio would provide excellent abrasion-erosion resistance, even when produced with aggregates that might be marginal in durability.

Fig. 3.5-Los Angeles River Channel. Concrete for a full-depth replacement was placed with a conveyor and finished with a specially shaped vibratory screed

Beginning in 1983, the existing concrete in the approximately 1/2-mile reach of most severe damage was removed and replaced with reinforced, silica-fume concrete (Fig. 3.5). The thickness of the replacement concrete was 12 in. Subsequent rehabilitation of the remaining channel during 1984 and 1985 was accomplished by either full-depth slab replacement or an overlay on the existing concrete. Full-depth repairs consisted of a new, reinforced base slab of conventional concrete and a 6-in. overlay of silica-fume concrete. Overlays on the existing concrete were 4- to 6-in.-thick sections of silica-fume concrete. Various mixture proportions were used, with compressive strengths ranging from 8000 to 10,500 psi. Approximately 27,500 cubic yards of silica-fume concrete were required to complete the rehabilitation. The unit costs for the silica-fume concrete decreased with time as bidders became more familiar with the material. The unit cost for the 1985 project was $154/cubic yard, which was slightly less than twice the unit cost of conventional concrete. PERFORMANCE Scour gauges were installed to monitor long-term wear of the silica-fume concrete.
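For scale, the quantity and unit cost quoted above work out as follows. The conventional-concrete unit cost is an assumed illustrative value ("slightly less than twice" the conventional cost implies a conventional unit cost of roughly $80/yd³); it is not stated in the report.

```python
# Cost scale of the Los Angeles River Channel rehabilitation.
# Quantity and silica-fume unit cost are from the case history;
# the conventional unit cost is an ASSUMED illustrative figure.

silica_fume_unit_cost = 154.0   # $/yd^3, 1985 project (from the report)
quantity_cy = 27_500            # total silica-fume concrete (yd^3)
conventional_unit_cost = 80.0   # $/yd^3 -- assumed for illustration only

total_cost = silica_fume_unit_cost * quantity_cy
premium = (silica_fume_unit_cost - conventional_unit_cost) * quantity_cy

print(f"Silica-fume concrete total: ${total_cost:,.0f}")
print(f"Premium over conventional:  ${premium:,.0f}")
```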
Because of the nature of the mechanism causing abrasion-erosion, an evaluation of performance will require an extended period of time. However, the abrasion resistance of the silica-fume concrete, according to the laboratory tests, should be two to four times better than the conventional concrete previously used. Visual inspections of the channel surfaces indicate little or no erosion of the concrete has occurred in the 8 years following repair. REFERENCES Holland, T.C., "Abrasion-Erosion Evaluation of Concrete Mixtures for Repair of Low-Flow Channel, Los Angeles River," Miscellaneous Paper SL-86-12, U.S. Army Engineer Waterways Experiment Station, Vicksburg, MS, Sept. 1986. Holland, T.C., and Gutschow, R.A., "Erosion Resistance with Silica-Fume Concrete," Concrete International, V. 9, No. 3, Detroit, MI, March 1987, pp. 32-40. CONTACT/OWNER U.S. Army Engineer District, Los Angeles 300 North Los Angeles Street Los Angeles, CA 90012 NOLIN LAKE DAM Nolin River, Edmonson County, Kentucky

BACKGROUND Nolin Lake Dam became operational in 1963. The stilling basin is 40 ft wide and 174 ft long, with a 7-ft-high end sill and 35-ft-high sidewalls. The basin contains a parabolic section with an 8.4-ft drop in elevation from the outlet tunnel invert to the horizontal floor slab. The design discharge is 12,000 ft³/s with an average velocity of 61 ft/s entering the basin. The structure was built of reinforced concrete with a design compressive strength of 3000 psi. PROBLEM The conduit and stilling basin at Nolin were dewatered for inspection in 1974, following approximately 11 years of operation. Erosion was reported in the lower portion of the parabolic section, the stilling basin floor, the lower part of the baffles, and along the top of the end sill. The most severe erosion was in the area between the wall baffles and the end sill, where holes 2 to 3 ft deep had been eroded into the stilling basin floor along the sidewalls.
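As a rough check on the entry conditions quoted in BACKGROUND above (12,000 ft³/s entering at an average 61 ft/s, basin 40 ft wide), continuity gives the flow area and average depth at the basin entrance. These are derived values, not figures stated in the report.

```python
# Continuity check (Q = V * A) on flow entering the Nolin stilling
# basin at design discharge.  Derived estimates for illustration.

design_discharge_cfs = 12_000.0   # ft^3/s (design discharge)
entry_velocity_fps = 61.0         # ft/s (average velocity entering basin)
basin_width_ft = 40.0             # ft

flow_area_sf = design_discharge_cfs / entry_velocity_fps
avg_depth_ft = flow_area_sf / basin_width_ft

print(f"Flow area at entry: {flow_area_sf:.0f} sq ft")
print(f"Average depth:      {avg_depth_ft:.1f} ft")
```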
SOLUTION The stilling basin was dewatered and repaired in 1975. Conventional concrete designed for 5000 psi compressive strength was used to restore the basin slab to an elevation 9 in. above the original grade. A hydraulic model study of the existing basin was not conducted, but the structure was modified in an attempt to reduce the amount of debris entering the basin. New work included raising the end sill 12 in., adding end walls at the end of the stilling basin, and paving a 50-ft-long channel section. PERFORMANCE A diver inspection in 1976 indicated approximately 4 tons of rock was in the stilling basin. The rock, piled up to 15 in. deep, ranged up to 12 in. in diameter. Also, 18-in.-deep rock piles were found on the slab downstream from the stilling basin. Erosion, up to 8 in. deep, was reported for concrete surfaces that were sufficiently clear of debris to be inspected. In August 1977, approximately 1 to 1-1/2 tons of large limestone rock, all with angular edges, was reported in the stilling basin. No small or rounded rock was found. Since the basin had been cleaned during the previous inspection, this rock was thought to have been thrown into the basin by visitors. When the stilling basin was dewatered for inspection in October 1977, no rock or debris was found inside the basin. Apparently, the large amount of rock discovered in the August inspection had been flushed from the basin during the lake drawdown, when the discharge reached a maximum of 7340 ft³/s. Significant erosion damage was reported when the stilling basin was dewatered for inspection in 1984. The most severe erosion was located behind the wall baffles, similar to that prior to repair in 1975. Each scour hole contained well-rounded debris ranging from marble size to approximately 12-in. diameter. Temporary repairs included removal of debris from the scour holes and filling them with conventional concrete. Also, the half baffles attached to each wall of the stilling basin were removed.
A hydraulic model of the stilling basin was constructed to investigate potential modifications to the basin to minimize chances of debris entering the basin and causing subsequent erosion damage to the concrete. Results of this study were incorporated into a permanent repair in 1987. Modifications included rebuilding the parabolic section in the shape of a whale's back,

PINE RIVER WATERSHED, STRUCTURE NO. 41

During a 1984 investigation, it was concluded that the damage exhibited the characteristics of erosion and abrasion damage by the ball mill effect, as described on pages 14 and 15 of Chapter 1 of the Bureau of Reclamation Concrete Manual. The major damage to the structure is attributed to gravel and larger-sized material being introduced into the stilling basin from the outlet channel slope protection rock.

Fig. 3.6-Pine River Watershed, Structure No. 41. Erosion of sidewall, exposing reinforcing steel

The SAF outlet channel was designed and constructed with a 3 to 1 adverse grade from the top of the end sill to the canal invert elevation, approximately 4 ft above the end sill. It has a bottom width of 10 ft with 2 to 1 side slopes. The entire section is lined with loose rock riprap. The rock is rounded to subrounded and is easily dislodged. Much of the rock on the adverse slope appears to have been displaced and the slope eroded, so that it is considerably steeper than originally constructed. Hydraulic transport of the smaller rock into the basin appears to be the method of debris introduction. SOLUTION The investigating team made the following recommendations: 1. Study the hydraulics of the outlet and design an outlet basin to fit most favorably with those predicted by model studies. Minimize use of rock riprap but, if needed, grout to prevent movement. 2. Replace concrete end sill, floor blocks, and chute blocks using high-strength concrete. The effect on hydraulic performance will need to be studied.

A model study was conducted in 1984 to determine the design for a preshaped, riprapped energy dissipation pool. The design was recommended for the repair and rehabilitation of the structure and was also considered appropriate information for use in the design of similar pools. PERFORMANCE No permanent work has been completed on the repair of the structure to date. Options for repair are being considered at this time. REFERENCES Bureau of Reclamation, Concrete Manual, 8th Edition, U.S. Department of the Interior, 1981. Rice, C.E., and Blaisdell, F.W., "Energy Dissipation Pool for a SAF Stilling Basin," Applied Engineering in Agriculture, V. 3, No. 1, USDA-ARS, Stillwater, Oklahoma, 1987, pp. 52-56. CONTACT/OWNER State Conservation Engineer U.S. Department of Agriculture, Soil Conservation Service Sixth Avenue Central, 655 Parfet Street, Room E200C Lakewood, CO 80215-5517 POMONA DAM Hundred Ten Mile Creek, Vassar, KS BACKGROUND The stilling basin at Pomona Dam, operational in 1963, is 35 ft wide and 80 ft long. The reinforced concrete transition and horizontal basin floor have a design discharge velocity of 58 ft/s. Two staggered rows of baffles, 3 ft wide and 5 ft high, are spaced at 7 ft on centers. A two-step, vertical-faced end sill is 4 ft high. Fill concrete was placed the width of the basin for a distance of 20 ft downstream from the end sill. PROBLEM The initial dewatering of the basin in February 1968 revealed erosion damage at the downstream end of the transition slab and on the upstream one-third of the basin slab. This erosion, caused by the abrasive action of rocks and other debris, had exposed reinforcing steel along the left wall of the basin. An inspection in October 1970 revealed significant additional erosion and extensive exposure of reinforcing steel.
The major damage was attributed to flow conditions at relatively low discharges, since approximately 97 percent of the releases had been 500 ft³/s or less.

Fig. 3.7-Providence-Millville Diversion Structure. Erosion of the surface of the concrete apron and sidewalls

SOLUTION Hydraulic model tests confirmed that severe separation of flow from one sidewall, together with eddy action strong enough to circulate stone in the model, occurred within the basin for discharges and tailwaters common to the project. Various modifications, including raising the apron, installing chute blocks, constructing interior sidewalls with reduced flare, and providing a hump downstream of the outlet portal, were model tested to evaluate their effectiveness in eliminating the undesirable separation of flow and eddy action within the basin. Based on the model study, it was recommended that the most practical solution was to provide a 3-ft-thick overlay of the basin slab upstream of the first row of baffles, a 1-1/2-ft overlay between the two rows of baffles, and a 1 to 1 sloped face to the existing end sill. This solution provided a wearing surface for the area of greatest erosion and provided a depression at the downstream end of the basin for trapping debris. However, flow separation and eddy action were not eliminated by this modification. Therefore, it was recommended that a fairly large discharge, sufficient to create a good hydraulic jump without eddy action, be released periodically to flush debris from the basin. The final design for the repair included 1) a minimum 1/2-in.-thick epoxy mortar topping applied to approximately one-half of the transition slab; 2) an epoxy mortar applied to the upstream face of the three upstream baffles on the right; 3) a 2-ft-thick concrete overlay slab placed on the upstream 70 percent of the basin slab; and 4) a sloped concrete end sill.
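For scale, the volume of the 2-ft overlay in the final design can be estimated from the basin dimensions quoted above. This is a derived figure, not one stated in the report, and it ignores baffle blockouts and the sloped end sill.

```python
# Approximate volume of the 2-ft overlay on the upstream 70 percent of
# the Pomona basin slab.  Dimensions from the case history; the volume
# itself is a derived estimate.

basin_width_ft = 35.0
basin_length_ft = 80.0
overlay_fraction = 0.70       # upstream 70 percent of the basin slab
overlay_thickness_ft = 2.0

volume_cf = (basin_width_ft * basin_length_ft
             * overlay_fraction * overlay_thickness_ft)
volume_cy = volume_cf / 27.0  # cubic feet to cubic yards

print(f"Overlay volume: {volume_cf:.0f} cu ft ({volume_cy:.0f} cu yd)")
```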
The reinforced concrete overlay was recessed into the original transition slab and anchored to the original basin slab. The coarse aggregate used in the repair concrete was Iron Mountain trap rock, an abrasion-resistant aggregate. The average compressive strength of the repair concrete was 6790 psi at 28 days. PERFORMANCE The stilling basin was dewatered for inspection five years after repair (Fig. 3.7). The depression at the downstream end of the overlay slab appeared to have functioned as desired. Most of the debris, approximately 1 cubic yard of rocks, was found in the trap adjacent to the overlay slab. The concrete overlay had suffered only minor damage, with general erosion of about 1/8 in. and maximum depths of 1/2 in. The location of the erosion coincided with that occurring prior to the repair. Apparently, debris is still being circulated at some discharge rate. Based on a comparison of discharge rates and slab erosion before and after the repair, it was concluded that the repair had definitely reduced the rate of erosion. The debris trap and the abrasion-resistant concrete were considered significant factors in this reduction. The next inspection, in April 1982, indicated the stilling basin floor slab remained in good condition, with essentially no damage since the previous inspection. Approximately 5 cubic yards of debris, mostly rocks, were removed from the debris trap at the downstream end of the basin. REFERENCES McDonald, J.E., "Maintenance and Preservation of Concrete Structures, Report 2, Repair of Erosion-Damaged Structures," Technical Report No. C-78-4, U.S. Army Engineer Waterways Experiment Station, Vicksburg, MS, April 1980. Oswalt, N.R., "Pomona Dam Outlet Stilling Basin Modifications," Memorandum Report, U.S. Army Engineer Waterways Experiment Station, Vicksburg, MS, 1971. CONTACT U.S. Army Engineer District, Kansas City 601 E.
12th Street Kansas City, MO 64106 PROVIDENCE-MILLVILLE DIVERSION STRUCTURE

conditions and the accompanying abrasive bedload have been only moderate since the repairs in September 1986.

Fig. 3.8-Providence-Millville Diversion Structure. Erosion of the surface of the concrete apron and sidewalls

DISCUSSION This metallic floor topping hardener is supplied in prepackaged 55-lb bags, each enough to apply a 1-in. layer to an 18- to 20-sq-ft area. Installation must be in accordance with the manufacturer's directions. The floor topping develops approximately 13,000 psi compressive strength in 3 days and is especially suited for building floor slabs subjected to impact loads. While not specifically marketed for use on hydraulic structures, its abrasion-resistant properties are attractive. Performance in drop structures with heavy sediment bedloads has been positive to date. REFERENCES U.S. Department of Agriculture, "Memorandum, Dated April 17, 1990, to Francis T. Holt, State Conservationist, SCS, Salt Lake City, UT, from Robert A. Middlecamp, Construction Engineer, SCS, West National Technical Center, Portland, OR." CONTACT/OWNER State Conservation Engineer Soil Conservation Service, U.S. Department of Agriculture P.O. Box 11350 Salt Lake City, UT 84147-0350 RED ROCK DAM Des Moines River, Iowa BACKGROUND Red Rock Dam, operational in 1969, is 6200 ft long and 95 ft high. The two rolled-earth embankment sections of the dam are separated by a concrete section that serves as the outlet works and spillway. The spillway has an ogee crest with five 41-ft-wide by 49-ft-high tainter gates. The outlet works has fourteen 5- by 9-ft conduits through the ogee section. Discharge from the spillway and the outlet works passes into a 240-ft-wide by 214-ft-long stilling basin, which has two rows of baffles. A minimum flow of 300 ft³/s passes through the basin, even in dry seasons.
PROBLEM A diver inspection in 1982 detected several small areas of eroded concrete and bedrock along the end sill. Heavy precipitation during 1983 and 1984 resulted in large discharges ranging up to 40,000 ft³/s, compared to normal discharges of about 3000 ft³/s. Based on the findings of the diver inspection and because of the subsequent high discharges, plans for repair were initiated. Until recently, repairs of this type generally required dewatering of the stilling basin. Dewatering costs can exceed $1 million and have averaged 40 percent of the total repair cost in previous repairs. Since the damage to the end sill was not very severe, the high cost of dewatering the basin for repair was considered inappropriate.

SOLUTION Results of laboratory tests indicated that cohesive, flowable, and abrasion-resistant concrete could be placed under water by available methods, without use of a tremie seal and with minimal loss of fines, if proper materials were used and precautions taken. Concrete containing an antiwashout admixture (AWA) and a water-reducing admixture, placed at the point of use, sustained only a relatively small loss of fines and bonded well to in-place hardened concrete. Consequently, underwater concreting was selected as the most cost-effective method for repair of the stilling basin. Immediately prior to the repair in August 1988, a final underwater inspection of the basin indicated larger areas of erosion than in 1982, most occurring in the bedrock just downstream of the end sill. The eroded areas extended about 18 ft downstream from the end sill and had a maximum depth of 5 ft. Construction requirements included removing loose rock and debris, installing anchors and reinforcing, positioning grout-filled bags to define the placement area, and placing concrete by a diving contractor. The minimum flow of 300 ft³/s was discharged through the dam during the repair (Fig. 3.9).
A concrete pump with a 4-in.-diameter line was used for underwater placement of the concrete. A diver controlled the end of the pumpline, keeping it embedded in the mass of newly discharged concrete and moving it around to completely fill the repair area. Approximately 100 cubic yards of concrete were placed in about 4 hours. The effects of the AWA were apparent; even though the concrete had a slump of about 9 in., it was very cohesive. The concrete pumped very well and, according to the diver, self-leveled within a few minutes following placement. The diver also reported that the concrete remained cohesive and exhibited very little loss of fines on the few occasions when the end of the pumpline kicked out of the concrete. The total cost of the repair was $128,000 (1988 price levels). In comparison, estimated costs to dewater alone in a conventional repair ranged from $500,000 to $750,000. PERFORMANCE Although additional time will be required to evaluate performance, all indications are that the repair is an economical and durable solution to the problem. REFERENCES McDonald, J.E., "Maintenance and Preservation of Concrete Structures, Report 2, Repair of Erosion-Damaged Structures," Technical Report No. C-78-4, U.S. Army Engineer Waterways Experiment Station, Vicksburg, MS, April 1980. Neeley, B.D., "Evaluation of Concrete Mixtures for Use in Underwater Repairs," Technical Report No. REMR-CS-18, U.S. Army Engineer Waterways Experiment Station, Vicksburg, MS, April 1988. Neeley, B.D., and Wickersham, J., "Repair of Red Rock Dam," Concrete International, V. 11, No. 10, American Concrete Institute, Detroit, MI, Oct. 1989, pp. 36-39. CONTACT/OWNER U.S. Army Engineer District, Rock Island Clock Tower Building P.O. Box 2004 Rock Island, IL 61204-2004 SHELDON GULCH SIPHON Big Horn County, Wyoming BACKGROUND The siphon consists of 1770 ft of 27-in.-diameter reinforced concrete pipe with a flared reinforced concrete box outlet.
The siphon replaces approximately two miles of eroding, steep-gradient canal. A wasteway, used to discharge excess water or provide for upline system drainage, was established at the inlet, discharging into the abandoned canal. Rock riprap was placed at the siphon inlet by the canal company after completion of the project.

PROBLEM The reinforced concrete basin experienced severe erosion damage to the apron and wall. The damage appears to have been caused by the introduction of rock from the riprap protection upstream from the structure. The abrasion has the characteristics of erosion and abrasion damage by the ball mill effect, as described in the Bureau of Reclamation Concrete Manual (see references). SOLUTION Repairs of the apron and sidewalls were made by replacing the damaged area with a polymer-modified portland cement, two-component, fast-setting patching mortar. Existing concrete in the abraded area of the apron and sidewalls was removed in accordance with procedures detailed in Chapter VII of Reclamation's Concrete Manual. The exposed area was prepared and the mortar applied as directed in the manufacturer's product sheet. Riprap was stabilized by placing concrete over the rock riprap at the siphon inlet and outlet to prevent further rock removal and subsequent transport into the siphon. PERFORMANCE The repairs were made prior to the 1991 irrigation season. No opportunity has occurred for an inspection of the repairs to date. DISCUSSION An alternate repair method, considered for this project, was to apply a thin layer of patching mortar in areas where concrete erosion was greater than 1 in. This alternative was more economical but considered inferior to the selected method because of the laminations created in the concrete section. REFERENCES Bureau of Reclamation, Concrete Manual, 8th Edition, U.S. Department of the Interior, 1981. CONTACT/OWNER State Conservation Engineer U.S.
Department of Agriculture, Soil Conservation Service Federal Building, Room 3124 100 East B Street Casper, WY 82601 CHAPTER 4 - CHEMICAL ATTACK EROSION CASE HISTORIES BARCELONETA TRUNK SEWER Municipality of Barceloneta, Puerto Rico BACKGROUND The Barceloneta Trunk Sewer, operational in 1976, collects sewage from several pharmaceutical plants as well as local domestic flows. It was built using regular reinforced concrete pipes conforming to ASTM C 76, with cast-in-place reinforced concrete manholes spaced not farther than 280 ft apart.

The depth of the pipe invert below ground surface varies from a minimum of 5 ft to a maximum of 25 ft. Flow from the pharmaceutical plants is partially treated, mostly to reduce the biological oxygen demand (BOD) and remove larger solids. The pipeline is subject to a wide range of pH, temperatures, and chemical compositions, which vary frequently because of the batch production schedules of the pharmaceutical plants. PROBLEM Ground subsidence of the pipeline backfill appeared along the sewer alignment. When these failures were investigated, it was found that the pipe had seriously deteriorated. There were places where the concrete had almost completely disappeared. SOLUTION Several procedures to solve the problem were investigated, including replacing the entire system. Replacing the entire system was found to be costly and difficult, because the pipe runs along the shoulder of a major road that is the access to the pharmaceutical plants. It was decided to proceed with the rehabilitation of the system using a proprietary pipe-lining process. This lining process is a method of installing a new solid lining in an existing pipeline in which pipe segments between manholes are relined in a single operation. The process consists of cleaning the existing pipe interior surfaces, then installing a flexible plastic-lined fiberglass hose impregnated with a polyester resin.
The resin is activated by circulating hot water through the hose for a period of time. The hose is installed by filling it with water under limited pressure. The water pressure also serves to expand the hose as required and place it in intimate contact with the pipe wall.

Fig. 4.1-Dworshak National Fish Hatchery. Deterioration of concrete surface of tank. Note repaired area to the left of the photograph

In order to install the hose, it is necessary to temporarily divert the flows, bypassing the section under rehabilitation. Waste flows controlled by the bypass system during rehabilitation averaged 2100 gallons per minute at the larger pipes.

Rehabilitation of over 17,000 ft of pipeline of various diameters, ranging from 18 to 36 in., required 7 months. In addition, 68 manholes were rehabilitated, having diameters ranging from 48 to 72 in. and depths from 5 to 25 ft. PERFORMANCE After several years of uninterrupted use, the rehabilitated sewer is performing well with no evidence of deterioration. DISCUSSION The rehabilitation process represents a cost-effective and seemingly durable solution to chemical attack on existing concrete sewer pipes. CONTACT Puerto Rico Aqueduct & Sewer Authority P.O. Box 7066 Bo. Obrero Station Santurce, Puerto Rico 00916 DWORSHAK NATIONAL FISH HATCHERY Clearwater River, Near Orofino, Idaho BACKGROUND The Dworshak NFH (National Fish Hatchery), operational in the late 1960s, is located at the confluence of the Clearwater and North Fork Clearwater Rivers, Idaho. A series of modifications and additions has brought the facility to its present capacity of 470,000 pounds of fish per year. The hatchery was designed as a reuse water facility, where only a small amount of makeup water is added to supplement the flow and distribution system. PROBLEM The concrete surfaces exposed to hatchery water have experienced chemical attack and surface removal of portland cement paste (Fig. 4.1).
Particles of sand in the concrete have become exposed due to erosion of the weakened paste. This phenomenon has been previously reported in areas where the concrete is attacked by water containing free CO2, flowing pure water from melting ice or condensation, and water containing little CO2. The water dissolves Ca(OH)2, thus causing surface erosion. The Dworshak reservoir collects snowmelt from the drainage basin and releases the pure water during the seasonal incubation and rearing phase of fish hatchery production. The following table summarizes a typical water analysis for Dworshak NFH.

Parameter                Value        Unit
pH                       6.5 to 7.4   -
Total Dissolved Solids   28-33        mg/L
Specific Conductivity    23-29        µmhos
Hardness                 12-15        mg/L
Alkalinity               15-20        mg/L
Chlorides, Cl            0.2-0.4      mg/L
Sulfates, SO4            2.0          mg/L
Nitrates, NO3            0.07         mg/L
Sodium, Na+              1.44         mg/L
Potassium, K+            0.55         mg/L
Calcium, Ca++            3.75         mg/L
Magnesium, Mg++          0.70         mg/L

SOLUTION The most likely solution was to coat the concrete surfaces with some type of surface treatment to prevent exposure to the pure water. Epoxy, polymeric, and other coatings protect the hardened portland cement paste at exposed surfaces. Trial coatings of epoxy mortar and a urethane coating, of approximately 500 sq ft each, were applied to damaged surfaces for evaluation (Fig. 4.1). After two years of exposure, the integrity of the coatings is intact; however, performance of the coatings adjacent to joints and cracks is poor. An alternate solution may be to alter the pH of the water by appropriate chemical additions, such as free lime, if it is tolerable to the fish.

CONTACT/OWNER U.S. Fish & Wildlife Service Dworshak Kooskia NFH Complex P.O. Box 18 Ahsahka, ID 83520

LOS ANGELES SANITARY SEWER SYSTEM AND HYPERION SEWAGE TREATMENT FACILITY Los Angeles, California BACKGROUND The sanitary sewerage system of the city of Los Angeles includes over 6000 miles of sewers that service an area of over 600 square miles. There are two upstream water reclamation plants in the San Fernando Valley. These are the D.C. Tillman and the Los Angeles-Glendale plants. The sewage at those plants is treated to advanced secondary standards and discharged to the Los Angeles River. Their solids are returned to the sewers for transport to the Hyperion regional treatment plant on the coast of Santa Monica Bay. All of the remaining sewers of the service area enter one of four major interceptors for conveyance to the Hyperion plant. This 420 Mgal/d (million gallons per day) facility was originally designed for 265 Mgal/d to provide primary (mechanical) treatment and high-rate activated sludge secondary (biological) treatment to the sewage, and to meet a discharge standard of 70 ppm (parts per million) suspended solids and 70 ppm biological oxygen demand (BOD). In 1957 the process was modified so that 100 Mgal/d received secondary treatment and was mixed with 320 Mgal/d that received only primary treatment. This mixed plant effluent is discharged at sea, where dilution and dispersion ultimately render it innocuous. The ocean outfall extends 5 miles to sea, to a water depth of 235 ft. In 1986 secondary treatment was expanded to 200 Mgal/d, and chemicals were added to enhance primary treatment. PROBLEM Over 40 years ago, the pioneering research of Dr. Richard D. Pomeroy pinpointed the shortened life of the concrete sanitary sewers in Los Angeles as being due to hydrogen sulfide (H2S) attack from septic sewage. His was the original research that recognized the phenomenon.
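The sulfide corrosion mechanism Pomeroy identified proceeds by a well-known reaction sequence, sketched schematically below (a standard textbook summary, not taken verbatim from this report; the sulfate-reduction step is shown unbalanced because the organic substrate varies):

```latex
\begin{align*}
\text{SO}_4^{2-} + \text{organic matter}
  &\xrightarrow{\text{anaerobic bacteria (slime layer)}} \text{H}_2\text{S} \\
\text{H}_2\text{S} + 2\,\text{O}_2
  &\xrightarrow{\text{aerobic bacteria (above waterline)}} \text{H}_2\text{SO}_4 \\
\text{H}_2\text{SO}_4 + \text{Ca(OH)}_2
  &\longrightarrow \text{CaSO}_4\cdot 2\,\text{H}_2\text{O} \ \text{(gypsum)}
\end{align*}
```

The final step is the acid attack on the cement paste; the expansive gypsum product further disrupts the concrete surface above the waterline.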
By one definition, septic sewage is sewage that contains entrained H2S. To be septic, the sewage in a warm climate need only have an age of a few hours in the sewers. The H2S is generated by sulfate-reducing anaerobic bacteria confined in the slimes that line the continuously wetted perimeter of the sewer, particularly in low-velocity (<2 ft/s) zones of flow. Under conditions of laminar flow, the H2S of septic sewage escapes the water surface at moderate rates, to attack the concrete above the waterline. The attack is the result of the H2S being oxidized by bacterial action and combining with water vapor to form H2SO4. So, more accurately, H2S attack could be termed sulfuric acid attack on the concrete surfaces above the waterline (compare Fig. 4.2 and 4.3). Under conditions of turbulence, caused by high-velocity flow or a plunging of the flow, as in a drop manhole (Fig. 4.4), the escape of H2S from the wastewater is much more rapid, and the H2S attack is much more severe. With this as background, the engineering conclusions drawn by the city of Los Angeles from their experiences of the past 45 years will be recited.

SOLUTION - THE SEWER SYSTEM
The problem of H2S attack in the sewer system is ongoing, and worsening. The long distances traveled to reach the Hyperion plant account for the plant influent having been in the sewers for 24 to 72 hours. That means that hundreds of miles of sewers are transporting septic sewage. Current policy is to use acid-resistant vitrified clay pipe for all sewers up to 42 in. ID (inside diameter) and PVC (polyvinyl chloride) lined reinforced concrete pipe for all diameters above that. The H2S attack on the concrete pipe has become more aggravated in recent years. Engineers have speculated that this aggravation is due to point-source control of toxic producers.
In conformance with EPA regulations, many industries are required to pretreat their plant effluent so as not to discharge toxins into the sewer system that could kill the bacteria in the biological reactors of the secondary treatment facilities at Hyperion. The irony of the situation is that whereas previously the toxicity in the sewers had tended to keep the growth of the H2S-producing bacteria under some measure of control, now the lack of toxicity has permitted those bacteria to thrive. Unprotected concrete pipe, subjected to this low-toxicity effluent, has failed within 5 years due to H2S attack. The City of Los Angeles engineering policy generated as a result of these experiences is that:

1. For all new concrete construction, protect the inside of manholes and the inside crown of pipes above the waterline with a sheet of acid-resistant PVC, mechanically anchored to the concrete.
2. For the repair of old concrete construction, restore the concrete surface and then protect it with an applied coating or lining.

It is at this point that the city is still in the process of setting policy. There have been some bad experiences with spray-on coatings (Fig. 4.5), resulting both from pinhole holidays and poor quality control of the constituent materials. Roll-on plastic sheets have been successful, but the physical situation in sewers often precludes their use. Also, demolition of the damaged structure or pipe and replacement with PVC-lined new construction is under consideration.

Fig. 4.2 - Los Angeles Sanitary Sewer System. Typical new and undeteriorated condition of concrete pipe

Fig. 4.3 - Los Angeles Sanitary Sewer System. Deterioration of concrete pipe from acid (H2S) attack

Fig. 4.4 - Los Angeles Sanitary Sewer System. Deterioration of reinforced concrete structures from acid attack

SOLUTION - THE PRIMARY SETTLING TANKS
There are 12 primary settling tanks at the Hyperion plant; 4 more are under construction.
Each of the tanks is 300 ft long, 56.5 ft wide, and 15 ft deep. Of the existing 12 tanks, 8 were constructed in 1950, of unprotected reinforced concrete. Four were constructed in 1957 and, based on the experience of the original eight, were protected against H2S attack by a PVC liner, above the waterline, for 75 ft at each end. The first 12 tanks were covered with a 15-in. reinforced concrete slab, supported by a system of beams and columns. The four currently under construction will be covered with an aluminum roof.

Fig. 4.5 - Hyperion Sewage Treatment Facility. Reinforced concrete sedimentation tank showing coating failure and corrosion

H2S is stripped as the sewage enters the tanks, through a baffle system, and again as it exits over V-notch weirs. These are two regions of turbulence, where the H2S levels are unusually high. In spite of the tranquil flow between these two regions, the tank roofs have suffered critically through the years due to H2S attack. This was first brought to the attention of the engineering community in 1964, when Jack Betz's paper, "Repair of Corroded Concrete in Wastewater Treatment Plant," was published in the Journal of the Water Pollution Control Federation. By the early 1960's, the concrete above the waterline in the launders (the effluent channels of the primary settling tanks), and the soffit of the roof slabs, had been damaged to the point that the reinforcing had been exposed. The City decided to repair the damage by chipping back to sound concrete, anchoring steel mesh onto the existing rebar, and applying new concrete to the surface. The system only lasted 20 years. By 1983, the concrete above the waterline was in such bad shape that a second repair was initiated. The 1983-87 repair program involved water blasting back to sound concrete and restoring the concrete surface with shotcrete. This was then sprayed with a polymer coating.
The system worked fairly well, but wherever there was a pinpoint holiday in the polymer coating, the H2S attack would recur. Additionally, there were cases where the polymer failed to stop the reflection of expanding cracks in the substratum. These cracks likewise exposed the concrete to H2S and water vapor. As a result of the experiences of the design of the 8 original primary settling tanks, and the 1963-67 and the 1983-87 repairs of the erosion of the first 12 tanks, the city adopted the following policies with respect to protecting the concrete in the primary settling tanks:

1. For existing concrete above the waterline, remove damaged concrete back to sound concrete and restore the surface with shotcrete. Although policy is still not set, the current practice is to protect the lining with a roll-on plastic sheet cemented to the concrete. Spray-on coatings are generally not considered to be a long-term solution.
2. For existing concrete and new construction below the waterline, protect the concrete with coal-tar epoxy. This is not related to the erosion of concrete due to H2S attack, but rather to the issue of blocking chlorides from penetrating into the pervious concrete and attacking the reinforcement.
3. For new construction above the waterline, provide 100 percent protection using PVC lining systems.

REFERENCES
U.S. Environmental Protection Agency, "Process Design Manual for Sulfide Control in Sanitary Sewerage Systems," EPA 625/1-74-005 and NTIS PB/260/479, Washington, DC, Oct. 1974.
American Concrete Pipe Association, Concrete Pipe Handbook, Vienna, VA, 1980.
American Society of Civil Engineers, "Manuals and Reports on Engineering Practice, No. 60"; WPCF, Manual of Practice, No. FD-5.

CONTACT/OWNER
City of Los Angeles
Hyperion Treatment Plant
Los Angeles, CA

PECOS ARROYO WATERSHED, SITE No. 1
San Miguel County, New Mexico

BACKGROUND
The Pecos Arroyo Dam is a floodwater-retarding structure constructed on Canon Bonito, 5-1/2 miles north of Las Vegas, New Mexico. The dam is earthfill with a clay core and a maximum height of 47 ft. It has a 600-ft-wide excavated earth spillway. The outlet consists of 304 ft of 36-in.-diameter reinforced concrete pipe, with an ungated concrete riser inlet structure and a plunge pool outlet.

PROBLEM
The outlet conduit concrete was severely deteriorated, believed to be due to galvanic corrosion in combination with carbonic acid attack (Fig. 4.6). The apparent source of the chemical attack was saline water seeping from the carbonaceous shale and limestone in the right abutment. Resultant corrosion of reinforcement had been aggravated by local galvanic cells at the steel spigot pipe ends in the low-resistivity soils. No other apparent damage had occurred. Structural failure had not occurred, but a serious safety hazard existed.

SOLUTION
The existing 36-in. I.D. reinforced concrete pressure pipe was lined with 304 ft of 32-in. O.D. high-density polyethylene (HDPE) pipe. The annular space between the two pipes was pressure pumped full with 2.3 cubic yards of grout. A cast-in-place cantilever outlet replaced 24 ft of existing downstream 36-in. pipe and cantilever support. The downstream outlet channel was enlarged to prevent submergence of the pipe invert during normal flows. The total cost of the contract for the performance of this work was $126,000. Treatment of the abutment was considered unnecessary and was not provided.

PERFORMANCE
Annual inspections since installation have shown no evidence of chemical attack. No flow has been observed in the drain system around the conduit. The length of the HDPE liner has stretched 1 to 2 in. longer than the concrete conduit. This variation is in accordance with the design calculations.

DISCUSSION
Two alternatives, other than lining the existing conduit, were considered in the design. The first was to remove and replace the existing conduit with a reinforced concrete prestressed cylinder pipe made with Type V cement. The second alternative was to abandon the existing conduit and install another conduit in an alternative location. These alternatives were estimated to be more costly than the repair method used.

REFERENCES
U.S. Department of Agriculture, "Pecos Arroyo Watershed, Site No. 1, Preliminary Design Report," Soil Conservation Service, Nov. 17, 1986.
U.S. Department of Agriculture, "Pecos Arroyo Watershed, Site No. 1, Final Design Report," Soil Conservation Service, Dec. 4, 1987.

CONTACT/OWNER
State Conservation Engineer
U.S. Department of Agriculture, Soil Conservation Service
517 Gold Avenue Southwest, Room 3301
Albuquerque, NM 87102

CHAPTER 5 - PROJECT REFERENCE LIST
While compiling information on suitable projects to include in this compendium of case histories, many cases of erosion damage were reported. Most were not suitable for inclusion because sufficient information on the damage and subsequent repair was not readily available. Many other cases were similar to those cases selected for inclusion. Table 5.1 provides a listing of projects reported to have experienced erosion damage of the type described in this report. It is not clear if repairs have been initiated for all the listed projects. Additional information on specific dams is available in the World Register of Dams, published by the International Commission on Large Dams in 1984, 3rd Edition, and in a 1988 First Updating by the same publisher.

Metric Conversions
1 in.            = 25.4 mm
1 ft             = 0.3048 m
1 in.^2          = 645.1 mm^2
1 ft^2           = 0.0929 m^2
1 in.^3          = 16.39 x 10^3 mm^3
1 ft^3           = 0.0283 m^3
1 yd^3           = 0.7646 m^3
1 lb             = 0.4536 kg
1 lb/in.^2 (psi) = 6.894 kPa
1 ft/s           = 0.305 m/s
1 ft^3/s         = 28.32 L/s
1 mi             = 1609 m
1 mi^2           = 2.590 km^2
1 Mgal/d         = 43.821 L/s
Temperature: t_C = (t_F - 32)/1.8
Difference in temperature: Delta t_C = Delta t_F/1.8

TABLE 5.1 - REFERENCE LIST OF EROSION AND REPAIR OF CONCRETE STRUCTURES
(Columns: Project - Location - Structure - Type of damage; column alignment of some entries was lost, and those entries are listed in their original order)

Arthur R. Bowman Dam / Barren River Lake
Tunnel outlet works / Stilling basin and outlet works / Stilling basin / Diversion tunnel / Spillway and stilling basin / Stilling basin / Spillway
Bonneville, WA / Bratsk, Irkutsk, U.S.S.R.
Mountain Home, AR
Abrasion / Cavitation
Sluices and stilling basin - Abrasion and cavitation
Burfell Dam - Selfoss, Arnessysla, Iceland - Sand sluice - Abrasion
Canyon Ferry Dam - Townsend, MT - Stilling basin and outlet works - Abrasion
Causey Dam - Ogden River, UT - Stilling basin and outlet works - Abrasion and cavitation
Cave Run Dam - Farmer, KY - Not listed
Center Hill Dam - Carthage, TN - Abrasion and cavitation
Cherokee Dam - Holston River, TN - Spillway apron and stilling basin - Abrasion
Chickamauga Dam - Hamilton County, TN - Spillway piers, weirs, and stilling basin - Abrasion
Chief Joseph Dam - Columbia River, WA - Stilling basin - Abrasion and cavitation
Conchas Dam - Tucumcari, NM - Stilling basin - Abrasion
Curwensville Lake Dam - Curwensville, PA - Stilling basin and outlet works - Abrasion
Sulaymaniya, Iraq - Spillway - Cavitation
Douglas Dam - Sevier County, TN - Outlet works, sluiceway, and apron - Abrasion and cavitation
Douglas Dam - Sevier County, TN - Stilling basin - Abrasion
Detroit Dam - Salem, OR - Stilling basin and conduit - Abrasion and cavitation
Echo Dam / Emigrant Dam / Enid Dam
Stilling basin / Stilling basin / Stilling basin and outlet works / Stilling basin / Spillway / Spillway and flip bucket / Stilling basin
Haystack Dam - Abrasion
Hiwassee Dam - Hiwassee River, NC - Outlet works - Cavitation
Howard Prairie Dam - Beaver Creek, OR - Stilling basin - Abrasion
Ice Harbor Dam - Pasco, WA - Stilling basin - Abrasion
Ilha Solteira Dam - Parana River, Brazil - Stilling basin - Abrasion
Karoon Dam - Masjed Soliman, Iran - 3-chute spillway - Cavitation
Kentucky Dam - Tennessee River, KY - Spillway and stilling basin - Abrasion
Krasnoyarsk Dam - Krasnoyarsk, U.S.S.R. - Spillway flip bucket
Lac qui Parle Dam - Montevideo, MN - Stilling basin
Libby Dam - Kootenai River, MT - Stilling basin and outlet works - Abrasion and cavitation
Lindsay Creek Culverts - Lewiston, ID - Box culvert - Abrasion
Little Goose Dam - Starbuck, WA - Stilling basin and navigation lock - Abrasion and cavitation
Mason Dam - Baker, OR - Conduit - Cavitation
Mayfield Dam - Mayfield, WA - Plunge pool - Abrasion and cavitation
McCloud Dam - Redding, CA - Spillway - Abrasion and cavitation
McNary Dam - Umatilla, OR - Stilling basin
Mica Dam - British Columbia - 3-bay chute
Milford Dam - Junction City, KS - Stilling basin
Navajo Dam - Farmington, NM - Stilling basin and outlet works - Abrasion and cavitation
Abrasion / Abrasion
Norris Dam - Clinch River, TN - Spillway apron
Nurek Dam - Tadjik SSR, U.S.S.R. - Tunnel and chute
Oologah Lake Dam - Tulsa, OK - Stilling basin
Oxbow Dam - Homestead, OR - Spillway
Painted Rock Dam - Ravalli County, MT - Outlet works tunnel - Abrasion and cavitation
Palisades Dam - Irwin, ID - Outlet works chute - Abrasion and cavitation
Perry Dam
Pine Flat Dam
Pit No. 6 Dam - Redding, CA - Stilling basin and spillway - Abrasion and cavitation
Pomme de Terre Dam - Hermitage, MO - Stilling basin - Abrasion
Rathbun Dam - Rathbun, IA - Stilling basin and outlet works - Abrasion
Ririe Dam - Ririe, ID - Stilling basin - Abrasion and cavitation
Ruedi Dam - Outlet works - Cavitation
San Gabriel No. 1 Dam - Stilling basin - Abrasion
Branson, MO - Stilling basin and conduit - Abrasion
Tarbela Dam - Pakistan - Spillway and outlet tunnels - Abrasion and cavitation
Tygart Dam / V.I. Lenin Volga Dam / Walter F. George Dam / Warsak Dam / Webbers Falls Dam (abrasion and cavitation) / Wilson Dam
Outlet works / Stilling basin / Stilling basin and outlet works / Stilling basin / Baffle piers / Stilling basin / Stilling basin
Abrasion / Abrasion / Abrasion
Wilson Dam - Tennessee River, AL - Spillway apron and stilling basin - Abrasion
Wilson Dam - Tennessee River, AL - Outlet works - Cavitation

This report was submitted to letter ballot of the committee and was approved in accordance with ACI balloting requirements.
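The metric conversion list above drops straight into code. The following Python helper (our own illustrative sketch, not part of the report; names are made up) encodes the report's factors so the U.S. customary quantities used throughout the case histories can be converted programmatically:

```python
# Conversion factors taken from the report's "Metric Conversions" list.
# Function and variable names here are illustrative, not from the report.
TO_METRIC = {
    "in": 25.4,        # millimetres
    "ft": 0.3048,      # metres
    "lb": 0.4536,      # kilograms
    "psi": 6.894,      # kilopascals
    "ft/s": 0.305,     # metres per second
    "ft3/s": 28.32,    # litres per second
    "mi": 1609.0,      # metres
    "Mgal/d": 43.821,  # litres per second
}

def f_to_c(t_f):
    """Temperature conversion from the list: t_C = (t_F - 32)/1.8."""
    return (t_f - 32.0) / 1.8

def mgd_to_lps(mgd):
    """Million gallons per day -> litres per second."""
    return mgd * TO_METRIC["Mgal/d"]
```

For example, the Hyperion plant's 420 Mgal/d works out to roughly 18,400 L/s with `mgd_to_lps(420)`.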
https://mathoverflow.net/questions/184652/tangent-space-describes-the-manifolds-first-order-characteristic-is-there-some
# Tangent space describes a manifold's first-order characteristics. Is there something like tangent space that describes higher-order characteristics?

I'm learning differential geometry. I'm curious: when we learned analysis, we learned higher-order derivatives, while in differential geometry the first-order derivative is generalized to an element of the tangent space. My question is: what is the higher-order derivative in differential geometry? Is there literature covering this area? Or maybe this is a trivial question because there is nothing interesting about higher-order derivatives.

• Jet bundle. Or, more generally, there are the Weil functors. Oct 17, 2014 at 10:14
• The derivative is on the tangent bundle $df : TM \to TN$, so the 2nd order derivative is a map of the tangent bundle of the tangent bundle, $d^2f : T^2M \to T^2N$. And so on. Oct 17, 2014 at 15:05

1. The most naive one is the following: Consider a smooth map $f\colon M\to N.$ Then its derivative $df\colon TM\to TN$ is again smooth, and you can differentiate again. Clearly, you carry a lot of useless information around, so this is usually not the method used by differential geometers to solve any problems.
2. Differential geometers often use connections on vector bundles to define higher-order derivatives. As the most basic example, one should mention the Levi-Civita connection $\nabla$ (defined by a Riemannian metric on the manifold). With its help you can differentiate vector fields along vector fields. A nice and simple application of its use is the following: (Locally) shortest curves are exactly those curves which satisfy the second-order differential equation: $$\nabla_{\gamma'} \gamma'=0.$$
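The connection-based second derivative in the answer can be unpacked in local coordinates; the following display (standard material, added here as a supplement rather than as part of the original answer) shows the second-order ODE hiding inside the geodesic equation:

```
% In a chart, with \Gamma^k_{ij} the Christoffel symbols of the
% Levi-Civita connection, the covariant acceleration of a curve \gamma is
\bigl(\nabla_{\gamma'}\gamma'\bigr)^k
    = \ddot{\gamma}^k + \Gamma^k_{ij}\,\dot{\gamma}^i\,\dot{\gamma}^j ,
% so \nabla_{\gamma'}\gamma' = 0 is the second-order system
\ddot{\gamma}^k + \Gamma^k_{ij}\,\dot{\gamma}^i\,\dot{\gamma}^j = 0 .
```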
https://search.r-project.org/CRAN/refmans/CPC/html/Euclidean.html
"Euclidean {CPC} R Documentation\n\n## Euclidean Distance from Dimension Means\n\n### Description\n\nCalculates two-dimensional Euclidean distance between all points and dimension means.\n\n### Usage\n\nEuclidean(data)\n\n\n### Arguments\n\n data an n x 2 matrix or data frame.\n\n### Value\n\nReturns a numeric vector of length 1.\n\n### Examples\n\ndata <- matrix(c(rnorm(50, 0, 1), rnorm(50, 5, 1)), ncol = 2, byrow = TRUE)\n\nEuclidean(data)\n\n\n\n[Package CPC version 2.3.0 Index]"
https://www.experts-exchange.com/questions/29065888/Excel-Find-the-first-instance-of-a-value-in-column-C-while-searching-on-column-B.html
# Excel: Find the first instance of a value in column C while searching on column B

I have a data set with identifiers in column A, a second column containing identifiers in column B, and a value in column C.

I want to find all records in column B that have the same identifier as column A, and THEN check the value in column C for each record until I find a value of x. Therefore, I want the formula to stop looking when it finds the value I'm looking for and return TRUE (or FALSE) in column D.

In this example, I want to find the value 0 in column C. Hence, the formula for row 1 should return TRUE, since I found a value of 0 on row 3, where A was found in column B.

  A  B  C  D
1 N4 N5 1  TRUE
2 N8 N4 1  FALSE (because it can't find A in column B)
3 N6 N4 0  FALSE (because it can't find A in column B)
4 N7 N4 1  FALSE (because it can't find A in column B)

Any thoughts?
Thanks

Tags: Microsoft Office, Microsoft Excel, Microsoft Applications
Subodh Tiwari (Neeraj):
Why not just try this?

In D1
```
=IF(COUNTIF(B:B,A1),TRUE,FALSE)
```
and copy it down.

herbalizer:
Thanks. However, this only tells me whether I can find A in column B. What I want to do is then check a different column (column C) and verify the value until I get the one I'm looking for. I've refined the example below.

  A  B  C  D
1 N4 N5 1  TRUE
2 N8 N4 1  FALSE (because although it did find N8 in column B (row 5), C=1 in row 5, and I want to find a 0)
3 N6 N4 0  FALSE (because it can't find A in column B)
4 N7 N4 1  FALSE (because it can't find A in column B)
5 N9 N8 1  FALSE (because it can't find A in column B)

Thanks

Subodh Tiwari (Neeraj):
Then give this a try...

In D1
```
=IF(COUNTIFS(B:B,A1,C:C,"0"),TRUE,FALSE)
```
and copy it down.

herbalizer:
Yes, that'll do it! Thanks. Not very familiar with Excel...

herbalizer:
Hi,
I hate to throw a wrench in there, but while it works, it takes too much processing time to apply this formula to a list of 10,000 units, which is the size of the file.

Does anybody have a solution that is more efficient in terms of system resources?

Thanks
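For what it's worth, the first-match logic behind the COUNTIFS approach is easy to express outside Excel. The following plain-Python sketch (hypothetical, not from the thread) builds the lookup set once, so each row becomes an O(1) membership test; that one-pass structure is what keeps it fast even at 10,000 rows:

```python
def flag_rows(col_a, col_b, col_c, target=0):
    # TRUE for row i when some row j has B[j] == A[i] and C[j] == target,
    # mirroring =IF(COUNTIFS(B:B,A1,C:C,"0"),TRUE,FALSE).
    # Build the lookup set once; each row is then an O(1) membership test.
    keys = {b for b, c in zip(col_b, col_c) if c == target}
    return [a in keys for a in col_a]

# Data from the example in the question:
a = ["N4", "N8", "N6", "N7", "N9"]
b = ["N5", "N4", "N4", "N4", "N8"]
c = [1, 1, 0, 1, 1]
print(flag_rows(a, b, c))  # [True, False, False, False, False]
```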
]
| [
null,
"https://cdn.experts-exchange.com/images/experts-exchange/avatar-01-large.gif",
null,
"https://cdn.experts-exchange.com/images/experts-exchange/avatar-01-large.gif",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.901241,"math_prob":0.6217474,"size":833,"snap":"2023-14-2023-23","text_gpt3_token_len":230,"char_repetition_ratio":0.18817852,"word_repetition_ratio":0.10555556,"special_character_ratio":0.27490997,"punctuation_ratio":0.06185567,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96645623,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-06T00:36:00Z\",\"WARC-Record-ID\":\"<urn:uuid:19073d1e-f1bf-4705-8d1d-833d823f55cd>\",\"Content-Length\":\"143823\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f07dd481-d7b8-4707-ac0e-4a5faa0d5975>\",\"WARC-Concurrent-To\":\"<urn:uuid:b4b96250-4696-42a6-b8f1-99539df9e72c>\",\"WARC-IP-Address\":\"172.67.36.241\",\"WARC-Target-URI\":\"https://www.experts-exchange.com/questions/29065888/Excel-Find-the-first-instance-of-a-value-in-column-C-while-searching-on-column-B.html\",\"WARC-Payload-Digest\":\"sha1:RQLGJZZOXSES5TXJGE3ZF5Y4XTHEWODW\",\"WARC-Block-Digest\":\"sha1:E4YSO4THKHVDDFO3DRABFPGU5NHZZOVH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224652184.68_warc_CC-MAIN-20230605221713-20230606011713-00342.warc.gz\"}"} |
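The accepted COUNTIFS answer checks, per row, whether any row has column B equal to the current A value *and* column C equal to 0. The same test can be modeled outside Excel; the Python sketch below (the row data mirrors the refined example in the thread, everything else is illustrative) shows the logic:

```python
def has_zero_match(a_value, rows):
    # Emulates =IF(COUNTIFS(B:B, A1, C:C, "0"), TRUE, FALSE):
    # TRUE as soon as some row has B == a_value and C == 0.
    # any() short-circuits, i.e. it "stops looking" once a match is found.
    return any(b == a_value and c == 0 for b, c in rows)

# (B, C) pairs and column A values from the refined example in the thread.
rows = [("N5", 1), ("N4", 1), ("N4", 0), ("N4", 1), ("N8", 1)]
col_a = ["N4", "N8", "N6", "N7", "N9"]
col_d = [has_zero_match(a, rows) for a in col_a]
# col_d -> [True, False, False, False, False], matching column D above
```

For the efficiency concern raised at the end of the thread, precomputing the set of B values whose C is 0 once, then testing membership per row, makes each lookup O(1) instead of rescanning all rows.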
http://forums.wolfram.com/mathgroup/archive/2010/Nov/msg00141.html | [
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"Re: When is Exp[z]==Exp[w]??\n\n• To: mathgroup at smc.vnet.net\n• Subject: [mg113657] Re: When is Exp[z]==Exp[w]??\n• From: Murray Eisenberg <murray at math.umass.edu>\n• Date: Sat, 6 Nov 2010 05:00:39 -0500 (EST)\n\n```OK, but it's still strange to have to add that.\n\nAnd that also leaves the peculiar term Log[E^z], which is also totally\nredundant (even though Log[E^z] need not equal z, it will still\ndiffer from z by an integer multiple of 2 Pi I).\n\nOn 11/5/2010 7:06 AM, Bob Hanlon wrote:\n>\n> You can \"force\" it by being just as redundant.\n>\n> Simplify[Reduce[Exp[z] == Exp[w], {z, w}], E^z != 0]\n>\n> Element[C, Integers]&&\n> w == 2*I*Pi*C + Log[E^z]\n>\n>\n> Bob Hanlon\n>\n> ---- Murray Eisenberg<murray at math.umass.edu> wrote:\n>\n> =============\n> Mathematica 7.0.1 gives (as InputForm of the result):\n>\n> Reduce[Exp[z]==Exp[w],{z,w}]\n> Element[C, Integers]&& E^z != 0&& w == (2*I)*Pi*C + Log[E^z]\n>\n> How can Mathematica be forced to simplify this to what is the fact,\n> namely, the following?\n>\n> Element[C, Integers]&& w == (2*I)*Pi*C + z\n>\n> (At the very least, certainly the expression E^z != 0 is redundant.)\n>\n\n--\nMurray Eisenberg murray at math.umass.edu\nMathematics & Statistics Dept.\nLederle Graduate Research Tower phone 413 549-1020 (H)\nUniversity of Massachusetts 413 545-2859 (W)\n710 North Pleasant Street fax 413 545-1801\nAmherst, MA 01003-9305\n\n```"
]
| [
null,
"http://forums.wolfram.com/mathgroup/images/head_mathgroup.gif",
null,
"http://forums.wolfram.com/mathgroup/images/head_archive.gif",
null,
"http://forums.wolfram.com/mathgroup/images/numbers/2.gif",
null,
"http://forums.wolfram.com/mathgroup/images/numbers/0.gif",
null,
"http://forums.wolfram.com/mathgroup/images/numbers/1.gif",
null,
"http://forums.wolfram.com/mathgroup/images/numbers/0.gif",
null,
"http://forums.wolfram.com/mathgroup/images/search_archive.gif",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8164157,"math_prob":0.8764419,"size":1372,"snap":"2022-40-2023-06","text_gpt3_token_len":459,"char_repetition_ratio":0.09576023,"word_repetition_ratio":0.008810572,"special_character_ratio":0.40014577,"punctuation_ratio":0.17105263,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99178916,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-28T07:03:24Z\",\"WARC-Record-ID\":\"<urn:uuid:f99b8575-1555-4638-85d8-17ac8094ed8d>\",\"Content-Length\":\"44945\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0f042122-fc3a-436b-8aa2-f613590c09de>\",\"WARC-Concurrent-To\":\"<urn:uuid:1ca2f1ac-5732-43d9-890e-e6382dd84fac>\",\"WARC-IP-Address\":\"140.177.205.73\",\"WARC-Target-URI\":\"http://forums.wolfram.com/mathgroup/archive/2010/Nov/msg00141.html\",\"WARC-Payload-Digest\":\"sha1:XIBVTRHPCX3DDPB4M7BHL755DMLI5UYG\",\"WARC-Block-Digest\":\"sha1:VG6FUWV7O2ZL54YLMVF4KNU37EOH63NW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030335124.77_warc_CC-MAIN-20220928051515-20220928081515-00608.warc.gz\"}"} |
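The mathematical fact behind the thread — Exp[z] == Exp[w] exactly when w == z + 2 Pi I k for some integer k — is easy to spot-check numerically. A small Python sketch (not Mathematica; the tolerance and sample point are arbitrary illustrative choices):

```python
import cmath

def exp_equal(z, w, tol=1e-9):
    # True when exp(z) and exp(w) agree to within tol.
    return abs(cmath.exp(z) - cmath.exp(w)) < tol

z = 0.3 + 1.7j
# exp is periodic with period 2*pi*i: shifting z by any integer multiple
# of 2*pi*i leaves exp(z) unchanged ...
assert all(exp_equal(z, z + 2j * cmath.pi * k) for k in (-2, -1, 0, 1, 2))
# ... while a shift that is not such a multiple changes the value.
assert not exp_equal(z, z + 1j)
```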
https://math.paperswithcode.com/paper/truncation-dimension-for-function | [
"# Truncation Dimension for Function Approximation\n\n10 Oct 2016 · Peter Kritzer, Friedrich Pillichshammer, G. W. Wasilkowski\n\nWe consider approximation of functions of $s$ variables, where $s$ is very large or infinite, that belong to weighted anchored spaces. We study when such functions can be approximated by algorithms designed for functions with only a very small number ${\\rm dim^{trnc}}(\\varepsilon)$ of variables...\n\n# Categories\n\n• NUMERICAL ANALYSIS"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8914857,"math_prob":0.9424129,"size":321,"snap":"2021-04-2021-17","text_gpt3_token_len":69,"char_repetition_ratio":0.12302839,"word_repetition_ratio":0.0,"special_character_ratio":0.21495327,"punctuation_ratio":0.10909091,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95694,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-20T23:14:32Z\",\"WARC-Record-ID\":\"<urn:uuid:5f02120c-4c7a-415b-8e65-b2f08e4959bb>\",\"Content-Length\":\"81588\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:46a3ef95-7831-43c3-89fc-900744af5796>\",\"WARC-Concurrent-To\":\"<urn:uuid:0fed100e-65c9-4e18-901e-f57dbc8a372b>\",\"WARC-IP-Address\":\"104.26.12.155\",\"WARC-Target-URI\":\"https://math.paperswithcode.com/paper/truncation-dimension-for-function\",\"WARC-Payload-Digest\":\"sha1:HFOI57J33LAZUZ34NBLOAF3M5OJCVSAQ\",\"WARC-Block-Digest\":\"sha1:QSBSZ6YGGVDW34ICWYRW2WITURP5GZYP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618039491784.79_warc_CC-MAIN-20210420214346-20210421004346-00280.warc.gz\"}"} |
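The abstract above is truncated, but the core idea of a truncation dimension can be illustrated with a toy example: for a function whose dependence on later variables decays via product weights, anchoring all but the first k variables at 0 gives an error that shrinks rapidly in k. The weight value and the function below are invented for illustration and are not taken from the paper:

```python
GAMMA = 0.5  # illustrative product weight; variable j contributes GAMMA**j

def f(x):
    # A simple weighted "anchored" function of len(x) variables.
    return sum(GAMMA ** j * xj for j, xj in enumerate(x, start=1))

def f_truncated(x, k):
    # Keep only the first k variables, anchoring the remaining ones at 0.
    return f(x[:k])

x = [1.0] * 50  # "many" variables
errors = {k: abs(f(x) - f_truncated(x, k)) for k in (2, 5, 10, 20)}
# The truncation error here is sum_{j>k} GAMMA**j, which decays
# geometrically, so a very small number of variables already
# approximates f well -- the phenomenon the paper quantifies.
```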
https://codereview.stackexchange.com/questions/241508/convert-integer-to-string-without-using-java-api-library | [
"# Convert Integer to String without using Java API library\n\nI have written a Java program that converts an integer to String digit by digit and concatenate them together using the + operator without touching the Stock Java API library.\n\nI like to have feedback on my code. Where do I need to improve. If I need to deduct something. So please criticize me. Thank you.\n\nimport java.util.Scanner;\n\npublic class StrFromInteger {\n\n/*\n* a single digit is passed as an argument\n* And a matching digit of String type\n* is returned.\n*/\npublic static String returnDigitString(int digit) {\nString res = \"\";\n\nswitch(digit) {\ncase 0:\nres = \"0\"; break;\n\ncase 1:\nres = \"1\"; break;\n\ncase 2:\nres = \"2\"; break;\n\ncase 3:\nres = \"3\"; break;\n\ncase 4:\nres = \"4\"; break;\n\ncase 5:\nres = \"5\"; break;\n\ncase 6:\nres = \"6\"; break;\n\ncase 7:\nres = \"7\"; break;\n\ncase 8:\nres = \"8\"; break;\n\ncase 9:\nres = \"9\"; break;\n}\nreturn res;\n}\n\npublic static void main(String[] args) {\n// TODO Auto-generated method stub\n//Scan the integer as int\nScanner scn = new Scanner(System.in);\nint number = scn.nextInt();\n\n//find the number of digits using logarithm\n//if input number is not equal to zero because\n//log of zero is undefined otherwise if input\n// number zero length is equal to 1\nint input = number;\nint length = 0;\nif(number != 0) {\nlength = ( int ) (Math.log10(number) + 1 );}\nelse if(number ==0) {\nlength = 1;\n}\n\n//Save each digit in String format by passing\n// the integer digit to the returnDigitString()\n//method one by one\nString[] reverseStr = new String[length];\n\nString digits = \"0123456789\";\n\nint remainder =0;\nint result = number ;\n--length;\nnumber = length;\nString strSeq = \"\";\nString valStr = \"\";\n\n// loop through the whole integer digit by digit\n//use modulo operator get the remainder\n//save it in remainder. 
then concatenate valStr\n//returned from returnDigitString()\n//method with previous String of Digits. Divide the result by 10. Again\n//repeat the same process. this time the modulo and the\n//number to be divided will be one digit less at each decremental\n//iteration of the loop.\nfor(int i = number; i >= 0; --i) {\n\nremainder = result % 10;\n\nvalStr = returnDigitString(remainder);\nstrSeq = valStr + strSeq;\n\nresult = result / 10;\n}\n\n//Print the string version of the integer\nSystem.out.println(\"The String conversion of \" + input + \" is: \" + strSeq);\n}\n\n}\n\n\n• The idea to calculate the length of a number with the logarithm is really good!\n• In my opinion you are writing good comments.\n• Good variable names\n• Works as intended, without (to my knowledge) any bugs.\n\n# Criticism\n\n## returnDigitString()\n\n• It is considered bad practice to put more than one command into one line. So please make line breaks after every \";\".\n• Your solution is pretty long (over 30 lines) in comparison to the complexity of the problem. You could also have done something like that:\n public static String returnDigitString(int digit) {\nString res = \"\";\nString[] digits = {\"0\", \"1\", \"2\", \"3\", \"4\", \"5\", \"6\", \"7\", \"8\", \"9\"};\nfor(int i = 0; i <= 9; i++) {\nif(digit == i) {\nres += digits[i];\nbreak;\n}\n}\nreturn res;\n}\n\n\n## main()\n\n• You are not using the array \"reverseStr\". The String \"digits\" is not used either.\n• When I started your program the first time, I didn't know what to do, because your program didn't tell me. 
Before scanning a user input, I would tell the user to input something.\nSystem.out.println(\"Please enter number:\");\nScanner scn = new Scanner(System.in);\nint number = scn.nextInt();\n\n\nIf you want to improve this point even further (which is highly recommended!), you can use something like that (you will have to use import java.util.InputMismatchException;):\n\nSystem.out.println(\"Please enter number:\");\nScanner scn = new Scanner(System.in);\nint number;\nwhile(true) {\ntry {\nnumber = scn.nextInt();\nbreak;\n}\ncatch(InputMismatchException e) {\nSystem.out.println(\"That's not a number!\");\nscn.nextLine();\n}\n}\n\n\nThis will check, whether the user really enters a number. If the user enters something else, the program will ask him again to enter a number.\n\n• Something like that is considered bad practice:\nif(number != 0) {\nlength = ( int ) (Math.log10(number) + 1 );}\n\n\nif(number != 0) {\nlength = (int) (Math.log10(number) + 1);\n}\n\n\n• \"valStr\" is not necessary. You can just write:\nstrSeq = returnDigitString(remainder) + strSeq;\n\n\nBut this really is a minor point and just my personal opinion. It's fine to use an extra variable for this.\n\n## Codestructure\n\n• I would use an extra method for the content of the main-method. Just use the main-method to call the new method.\n• Couldn't the third line of your returnDigitString have been return digits[digit]? May 1, 2020 at 9:20\n• Yes, that's true, but I also wanted to provide an alternative structure to the switch-case-statement.\n– user214772\nMay 1, 2020 at 9:28\n• I'm not so familiar with Java in particular, but the code for(int i = 0; i <= 9; i++) { if(digit == i) { ... } } seems like a bad suggestion - it is equivalent to if(0 <= digit && digit <= 9){ ... } replacing each instance of i with digit in ... - which is far more clear (and more efficient, if we care). It seems like this suggestion gives the wrong impression - if you need to use switch, use it. 
If your code is just dealing with data, look up the data as digits[digit]. I don't see any use case for a for loop which turns out to be equivalent to an if statement. May 1, 2020 at 15:49\n• @Milo Brandt, you are absolutely right, but I wanted to avoid long code for a small problem. Of course the solution suggested by Stobor is the best idea.\n– user214772\nMay 1, 2020 at 16:30\n• Calculating a logarithm is still a somewhat heavy-hitting exercise. It will also cause this code to break spectacularly when the input number is negative.\n– Eric\nMay 1, 2020 at 20:41\n\nPersonally I think your algorithm has been made a lot more complex than needed.\n\nIs the concatenation a requirement? If not, you can simplify by directly converting each digit into a char and storing it in a char[]. This way instead of inefficiently concatenating each digit onto the string, you can use the string constructor overload that takes a char[].\n\nWith this simplification, the method can be reduced to just a few lines:\n\n public static String intToString(int num) {\nif(num == 0){\nreturn \"0\";\n}\nint count = 0;\nboolean isNeg = false;\nif (num < 0) {\nnum *= -1;\ncount = 1;\nisNeg = true;\n}\ncount += (int) Math.log10(num) + 1;\nchar[] digits = new char[count];\nif (isNeg) {\ndigits[0] = '-';\n}\n--count;\nwhile(num > 0) {\ndigits[count--] = (char) ((num % 10) + '0');\nnum /= 10;\n}\nreturn new String(digits);\n}\n\n• Might calling Math.log10() have a significant overhead (compared to using a StringBuilder instead of the char array)? May 1, 2020 at 16:52\n• To use a StringBuilder you would have to insert each character at the start or reverse the string. Either way I don't think log10 would be worse than that.\n– user33306\nMay 2, 2020 at 0:55\npublic class StrFromInteger {\n\nThis is a convertor, so I would expect some kind of actor in the name, say DecimalStringCreator. 
What you've currently got is more like a method name.\n\npublic static String returnDigitString(int digit) {\n\nThe comment before this function is almost a JavaDoc. Generally public methods should be documented with the JavaDoc within /** and */.\n\nThat something is returned should be logical, try digitToString. As the string is always one character, a digitToCharacter might be better. Unless you want to make it part of the programming interface, this method should probably be private.\n\nNote that an integer might not just be a digit. I would call it digitValue instead and then add a guard statement, such as:\n\nif (i < 0 || i > 9) {\nthrow new IllegalArgumentException(\"Value of digit not in the range [0..9]\");\n}\n\nor something similar.\n\nString res = \"\";\n\nAssigning an immutable empty string is almost never a good idea. Don't assign values unless you really have to.\n\nswitch(digit) { ... }\n\nWhenever possible, try and not start calculating yourself. Let the computer handle it. In this case it is important to know that the numbers are all situated in character range 0x0030 for the zero and 0x0039 for the 9 - in order of course. The location is not that important, but the order is, as it allows you to do\n\nchar digit = (char) ('0' + i);\n\nIn Java it is perfectly valid to use return \"3\"; by the way. That way you would not need the many break; statements. Generally we put break on a separate line by the way.\n\n// TODO Auto-generated method stub\n\nAlways remove those kinds of comments before posting or - for that matter - checking into source control (e.g. Git).\n\npublic static void main(String[] args) {\n\nA main method is fine for setting up a Scanner, retrieving user input and producing output. But the actual conversion from int to String should be in a separate method.\n\n//find the number of digits using logarithm\n\nWhenever you type this kind of comment, you should create a method. 
In this case calculateNumberOfDigits() would be a good name. Now that's clear, you can actually remove the comment - so you would not have to do all that much.\n\nint input = number;\n\n\nFirst of all, the scanner produces the input. You only need one variable for this because neither number or input is ever changed.\n\nint length = 0;\n\n\nAnother assignment that isn't needed. Java will complain if variables are not assigned. This is useful to find bugs as well, so if the variable is always assigned then specifying a default value is not needed.\n\nif(number != 0) {\nlength = ( int ) (Math.log10(number) + 1 );}\nelse if(number ==0) {\nlength = 1;\n}\n\n\nOy, bad indentation and bad usage of white space. This should be:\n\nif(number != 0) {\nlength = (int) (Math.log10(number) + 1);\n} else if(number == 0) {\nlength = 1;\n}\n\n\nString[] reverseStr = new String[length];\n\n\nString arrays are generally not a good idea. In this case you can always simply perform String concatenation using +. Note that it is completely possible to add Strings / characters at the start of a string as well.\n\n--length;\n\n\nGenerally we use length--. Don't use --length, unless you need to use the original length value within a larger expression. If possible simply use length-- afterwards: expressions without so called side effects are much easier to understand.\n\nnumber = length;\n\n\nDo not reassign variables to other values than that they originally hold. If the meaning of a variable changes then you can be sure that confusion will arise.\n\nThe main idea of getting a list of digits is OK:\n\nfor(int i = number; i >= 0; --i) {\n\nremainder = result % 10;\n\nvalStr = returnDigitString(remainder);\nstrSeq = valStr + strSeq;\n\nresult = result / 10;\n}\n\n\nBut beware of variable naming. result is not really the result that you are looking for; that's the string after all. So another name should be preferred, e.g. 
numberValueLeft or just valueLeft.\n\nNote that if valueLeft is zero then the calculation is finished, so that's another way of determining the end of the calculation.\n\nHere's my take:\n\n/**\n* Creates a decimal String for a value that is positive or zero.\n*\n* @param value the value to convert\n* @return the string representing the value\n*/\npublic static String toDecimalString(int value) {\n// guard statement\nif (value < 0) {\nthrow new IllegalArgumentException(\"Negative numbers cannot be converted to string by this function\");\n}\n\nif (value == 0) {\nreturn \"0\";\n}\n\nString decimalString = \"\";\n\nint left = value;\nwhile (left > 0) {\nint digitValue = left % 10;\nchar digit = (char) ('0' + digitValue);\ndecimalString = digit + decimalString;\nleft = left / 10;\n}\n\nreturn decimalString;\n}\n\n\nNote that you would normally use StringBuilder for this kind of thing, but I presume that that's not allowed in this case.\n\nI always indicate what kind of string is being returned with a function. I've seen tons of somethingToString() functions that are absolutely unclear of what is being returned. Now I think that a decimal string is what most people expect, but I've also seen somethingToString() functions that return hexadecimals, base 64 or whatnot, so making it clear helps the reader.\n\nYour algorithm only works for values which are zero or positive. Java's \"int\" type is signed, so you need to consider negative numbers too. Your algorithm will fail hard on this, not least because taking a log of a negative number returns NaN, which results in zero when you cast it to int.\n\nYour first step in the algorithm should be to handle the sign of the number. After that you can sort out how to process an unsigned value.\n\nHere is my updated code. 
After this any comments will be highly appreciated.\n\npackage IntegerToString;\nimport java.util.Scanner;\n\n/**\n* @version 1.1 (current version number of program)\n* @since 1.0 (the version of the package this class was first added to)\n*/\n\npublic class StrFromInteger {\n\n/* *\n* a single digit is passed as an argument And a matching digit of String\n* type is returned.\n*\n* @param digit a digit of the whole integer\n*\n* @return return a String representation of the digit\n*/\npublic static String returnDigitString(int digit) {\nString res = \"\";\n\nString[] digits = { \"0\", \"1\", \"2\", \"3\", \"4\", \"5\", \"6\", \"7\", \"8\", \"9\" };\nfor (int i = 0; i <= 9; i++) {\nif (digit == i) {\nres += digits[i];\nbreak;\n}\n}\nreturn res;\n\n}\n/* *\n* Take the input number, if it is less than zero multipy it by -1.\n* loop through the whole integer digit by digit\n* use modulo operator get the remainder\n* save it in remainder. then concatenate valStr\n* returned from returnDigitString()\n* method with previous String of Digits. Divide the result by 10. Again\n* repeat the same process. this time the modulo and the\n* number to be divided will be one digit less at each decremental\n* iteration of the loop. Then print the String and if it is less than zero\n* concatenate \"-\" at the beginning of total String representation of int\n* otherwise just print the String representation of int.\n*\n* @param length number of digits in the integer\n* @param number the integer number itself\n* @param isPosite is positive or not\n*/\npublic static void printInt(int length, int number, boolean isPositive ) {\nint input = number;\n\nint remainder = 0;\nint result = (number < 0 ? -1 * number : number);\n--length;\nnumber = length;\nString strSeq = \"\";\nString valStr = \"\";\n\n// loop through the whole integer digit by digit\n// use modulo operator get the remainder\n// save it in remainder. 
then concatenate valStr\n// returned from returnDigitString()\n// method with previous String of Digits. Divide the result by 10. Again\n// repeat the same process. this time the modulo and the\n// number to be divided will be one digit less at each decremental\n// iteration of the loop.\nfor (int i = number; i >= 0; --i) {\n\nremainder = result % 10;\n\nvalStr = returnDigitString(remainder);\nstrSeq = valStr + strSeq;\n\nresult = result / 10;\n}\nif (!isPositive) {\nstrSeq = \"-\" + strSeq;\n}\n// Print the string version of the integer\nSystem.out.println(\"The String conversion of \" + input + \" is: \" + strSeq);\n}\n\npublic static void main(String[] args) {\n// TODO Auto-generated method stub\n// Scan the integer as int\nScanner scn = new Scanner(System.in);\nint number = scn.nextInt();\n\n// find the number of digits using logarithm\n// if input number is not equal to zero because\n// divide the input by 10 each number it will be\n// reduced by 1 digit and increment the length\nint input = number;\nint length = 0;\nif (number != 0) {\nint num = number;\n\nwhile (num != 0) {\n// num = num/10\nnum /= 10;\n++length;\n\n}\n} else if (number == 0) {\nlength = 1;\n}\nprintInt(length, input, (input < 0 ? false : true));\n}\n\n}"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.6261311,"math_prob":0.9753614,"size":2255,"snap":"2023-40-2023-50","text_gpt3_token_len":595,"char_repetition_ratio":0.16259441,"word_repetition_ratio":0.0,"special_character_ratio":0.30997783,"punctuation_ratio":0.17401393,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9912159,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-05T22:57:58Z\",\"WARC-Record-ID\":\"<urn:uuid:610658fb-6fad-4109-ba6b-9d3bb0c44f61>\",\"Content-Length\":\"226355\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ae451212-b16d-4631-b8da-de29f476846a>\",\"WARC-Concurrent-To\":\"<urn:uuid:b81517e1-bdb6-4d0b-b795-4127a55706aa>\",\"WARC-IP-Address\":\"172.64.144.30\",\"WARC-Target-URI\":\"https://codereview.stackexchange.com/questions/241508/convert-integer-to-string-without-using-java-api-library\",\"WARC-Payload-Digest\":\"sha1:CBNQMDWTFWCIGE7FOT6DO3OG3QXKVNFB\",\"WARC-Block-Digest\":\"sha1:LFAZVGW5EF6EBTW2OO7Z6ACEZZFHQ3F4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100568.68_warc_CC-MAIN-20231205204654-20231205234654-00731.warc.gz\"}"} |
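All of the answers in the review converge on the same core algorithm: strip the sign, then repeatedly take value % 10 and value / 10, prepending each digit character. A compact Python rendering of that loop (Python rather than Java, purely as an illustrative sketch; note that Python's % and // behave differently from Java's on negatives, which is one more reason to strip the sign first):

```python
def int_to_string(value):
    # Decimal conversion without str()/format(), mirroring the
    # %10 / divide-by-10 loop discussed in the answers above.
    if value == 0:
        return "0"  # special case: the loop below never runs for 0
    negative = value < 0
    if negative:
        value = -value  # work on the magnitude only
    out = ""
    while value > 0:
        out = chr(ord("0") + value % 10) + out  # prepend next digit
        value //= 10
    return "-" + out if negative else out
```

Counting digits by repeated division (as in the asker's final version) and by Math.log10 give the same length for positive inputs, but the division loop sidesteps both floating-point edge cases and the negative-input failure pointed out in the answers.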
https://www.nag.com/numeric/nl/nagdoc_latest/flhtml/g02/g02lcf.html | [
"# NAG FL Interfaceg02lcf (pls_fit)\n\n## ▸▿ Contents\n\nSettings help\n\nFL Name Style:\n\nFL Specification Language:\n\n## 1Purpose\n\ng02lcf calculates parameter estimates for a given number of factors given the output from an orthogonal scores PLS regression (g02laf or g02lbf).\n\n## 2Specification\n\nFortran Interface\n Subroutine g02lcf ( ip, my, p, ldp, c, ldc, w, ldw, b, ldb, orig, xbar, ybar, xstd, ystd, ob, ldob, ycv, vip,\n Integer, Intent (In) :: ip, my, maxfac, nfact, ldp, ldc, ldw, ldb, orig, iscale, ldob, vipopt, ldycv, ldvip Integer, Intent (Inout) :: ifail Real (Kind=nag_wp), Intent (In) :: p(ldp,maxfac), c(ldc,maxfac), w(ldw,maxfac), rcond, xbar(ip), ybar(my), xstd(ip), ystd(my), ycv(ldycv,my) Real (Kind=nag_wp), Intent (Inout) :: b(ldb,my), ob(ldob,my), vip(ldvip,vipopt)\n#include <nag.h>\n void g02lcf_ (const Integer *ip, const Integer *my, const Integer *maxfac, const Integer *nfact, const double p[], const Integer *ldp, const double c[], const Integer *ldc, const double w[], const Integer *ldw, const double *rcond, double b[], const Integer *ldb, const Integer *orig, const double xbar[], const double ybar[], const Integer *iscale, const double xstd[], const double ystd[], double ob[], const Integer *ldob, const Integer *vipopt, const double ycv[], const Integer *ldycv, double vip[], const Integer *ldvip, Integer *ifail)\nThe routine may be called by the names g02lcf or nagf_correg_pls_fit.\n\n## 3Description\n\nThe parameter estimates $B$ for a $l$-factor orthogonal scores PLS model with $m$ predictor variables and $r$ response variables are given by,\n $B=W (PTW)-1 CT , B∈ ℝm×r ,$\nwhere $W$ is the $m×k$ ($\\ge l$) matrix of $x$-weights; $P$ is the $m×k$ matrix of $x$-loadings; and $C$ is the $r×k$ matrix of $y$-loadings for a fitted PLS model.\nThe parameter estimates $B$ are for centred, and possibly scaled, predictor data ${X}_{1}$ and response data ${Y}_{1}$. 
Parameter estimates may also be given for the predictor data $X$ and response data $Y$.\nOptionally, g02lcf will calculate variable influence on projection (VIP) statistics, see Wold (1994).\nWold S (1994) PLS for multivariate linear modelling QSAR: chemometric methods in molecular design Methods and Principles in Medicinal Chemistry (ed van de Waterbeemd H) Verlag-Chemie\n\n## 5Arguments\n\n1: $\\mathbf{ip}$Integer Input\nOn entry: $m$, the number of predictor variables in the fitted model.\nConstraint: ${\\mathbf{ip}}>1$.\n2: $\\mathbf{my}$Integer Input\nOn entry: $r$, the number of response variables.\nConstraint: ${\\mathbf{my}}\\ge 1$.\n3: $\\mathbf{maxfac}$Integer Input\nOn entry: $k$, the number of factors available in the PLS model.\nConstraint: $1\\le {\\mathbf{maxfac}}\\le {\\mathbf{ip}}$.\n4: $\\mathbf{nfact}$Integer Input\nOn entry: $l$, the number of factors to include in the calculation of parameter estimates.\nConstraint: $1\\le {\\mathbf{nfact}}\\le {\\mathbf{maxfac}}$.\n5: $\\mathbf{p}\\left({\\mathbf{ldp}},{\\mathbf{maxfac}}\\right)$Real (Kind=nag_wp) array Input\nOn entry: $x$-loadings as returned from g02laf and g02lbf.\n6: $\\mathbf{ldp}$Integer Input\nOn entry: the first dimension of the array p as declared in the (sub)program from which g02lcf is called.\nConstraint: ${\\mathbf{ldp}}\\ge {\\mathbf{ip}}$.\n7: $\\mathbf{c}\\left({\\mathbf{ldc}},{\\mathbf{maxfac}}\\right)$Real (Kind=nag_wp) array Input\nOn entry: $y$-loadings as returned from g02laf and g02lbf.\n8: $\\mathbf{ldc}$Integer Input\nOn entry: the first dimension of the array c as declared in the (sub)program from which g02lcf is called.\nConstraint: ${\\mathbf{ldc}}\\ge {\\mathbf{my}}$.\n9: $\\mathbf{w}\\left({\\mathbf{ldw}},{\\mathbf{maxfac}}\\right)$Real (Kind=nag_wp) array Input\nOn entry: $x$-weights as returned from g02laf and g02lbf.\n10: $\\mathbf{ldw}$Integer Input\nOn entry: the first dimension of the array w as declared in the (sub)program from which g02lcf is 
called.\nConstraint: ${\\mathbf{ldw}}\\ge {\\mathbf{ip}}$.\n11: $\\mathbf{rcond}$Real (Kind=nag_wp) Input\nOn entry: singular values of ${P}^{\\mathrm{T}}W$ less than rcond times the maximum singular value are treated as zero when calculating parameter estimates. If rcond is negative, a value of $0.005$ is used.\n12: $\\mathbf{b}\\left({\\mathbf{ldb}},{\\mathbf{my}}\\right)$Real (Kind=nag_wp) array Output\nOn exit: ${\\mathbf{b}}\\left(\\mathit{i},\\mathit{j}\\right)$ contains the parameter estimate for the $\\mathit{i}$th predictor variable in the model for the $\\mathit{j}$th response variable, for $\\mathit{i}=1,2,\\dots ,{\\mathbf{ip}}$ and $\\mathit{j}=1,2,\\dots ,{\\mathbf{my}}$.\n13: $\\mathbf{ldb}$Integer Input\nOn entry: the first dimension of the array b as declared in the (sub)program from which g02lcf is called.\nConstraint: ${\\mathbf{ldb}}\\ge {\\mathbf{ip}}$.\n14: $\\mathbf{orig}$Integer Input\nOn entry: indicates how parameter estimates are calculated.\n${\\mathbf{orig}}=-1$\nParameter estimates for the centred, and possibly, scaled data.\n${\\mathbf{orig}}=1$\nParameter estimates for the original data.\nConstraint: ${\\mathbf{orig}}=-1$ or $1$.\n15: $\\mathbf{xbar}\\left({\\mathbf{ip}}\\right)$Real (Kind=nag_wp) array Input\nOn entry: if ${\\mathbf{orig}}=1$, mean values of predictor variables in the model; otherwise xbar is not referenced.\n16: $\\mathbf{ybar}\\left({\\mathbf{my}}\\right)$Real (Kind=nag_wp) array Input\nOn entry: if ${\\mathbf{orig}}=1$, mean value of each response variable in the model; otherwise ybar is not referenced.\n17: $\\mathbf{iscale}$Integer Input\nOn entry: if ${\\mathbf{orig}}=1$, iscale must take the value supplied to either g02laf or g02lbf; otherwise iscale is not referenced.\nConstraint: if ${\\mathbf{orig}}=1$, ${\\mathbf{iscale}}=-1$, $1$ or $2$.\n18: $\\mathbf{xstd}\\left({\\mathbf{ip}}\\right)$Real (Kind=nag_wp) array Input\nOn entry: if ${\\mathbf{orig}}=1$ and ${\\mathbf{iscale}}\\ne -1$, the scalings of 
predictor variables in the model as returned from either g02laf or g02lbf; otherwise xstd is not referenced.\n19: $\\mathbf{ystd}\\left({\\mathbf{my}}\\right)$Real (Kind=nag_wp) array Input\nOn entry: if ${\\mathbf{orig}}=1$ and ${\\mathbf{iscale}}\\ne -1$, the scalings of response variables as returned from either g02laf or g02lbf; otherwise ystd is not referenced.\n20: $\\mathbf{ob}\\left({\\mathbf{ldob}},{\\mathbf{my}}\\right)$Real (Kind=nag_wp) array Output\nOn exit: if ${\\mathbf{orig}}=1$, ${\\mathbf{ob}}\\left(1,\\mathit{j}\\right)$ contains the intercept value for the $\\mathit{j}$th response variable, and ${\\mathbf{ob}}\\left(\\mathit{i}+1,\\mathit{j}\\right)$ contains the parameter estimate on the original scale for the $\\mathit{i}$th predictor variable in the model, for $\\mathit{i}=1,2,\\dots ,{\\mathbf{ip}}$ and $\\mathit{j}=1,2,\\dots ,{\\mathbf{my}}$. Otherwise ob is not referenced.\n21: $\\mathbf{ldob}$Integer Input\nOn entry: the first dimension of the array ob as declared in the (sub)program from which g02lcf is called.\nConstraints:\n• if ${\\mathbf{orig}}=1$, ${\\mathbf{ldob}}\\ge {\\mathbf{ip}}+1$;\n• otherwise ${\\mathbf{ldob}}\\ge 1$.\n22: $\\mathbf{vipopt}$Integer Input\nOn entry: a flag that determines variable influence on projections (VIP) options.\n${\\mathbf{vipopt}}=0$\nVIP are not calculated.\n${\\mathbf{vipopt}}=1$\nVIP are calculated for predictor variables using the mean explained variance in responses.\n${\\mathbf{vipopt}}={\\mathbf{my}}$\nVIP are calculated for predictor variables for each response variable in the model.\nNote that setting ${\\mathbf{vipopt}}={\\mathbf{my}}$ when ${\\mathbf{my}}=1$ gives the same result as setting ${\\mathbf{vipopt}}=1$ directly.\nConstraint: ${\\mathbf{vipopt}}=0$, $1$ or ${\\mathbf{my}}$.\n23: $\\mathbf{ycv}\\left({\\mathbf{ldycv}},{\\mathbf{my}}\\right)$Real (Kind=nag_wp) array Input\nOn entry: if ${\\mathbf{vipopt}}\\ne 0$, ${\\mathbf{ycv}}\\left(\\mathit{i},\\mathit{j}\\right)$ is the 
cumulative percentage of variance of the $\\mathit{j}$th response variable explained by the first $\\mathit{i}$ factors, for $\\mathit{i}=1,2,\\dots ,{\\mathbf{nfact}}$ and $\\mathit{j}=1,2,\\dots ,{\\mathbf{my}}$; otherwise ycv is not referenced.\n24: $\\mathbf{ldycv}$Integer Input\nOn entry: the first dimension of the array ycv as declared in the (sub)program from which g02lcf is called.\nConstraint: if ${\\mathbf{vipopt}}\\ne 0$, ${\\mathbf{ldycv}}\\ge {\\mathbf{nfact}}$.\n25: $\\mathbf{vip}\\left({\\mathbf{ldvip}},{\\mathbf{vipopt}}\\right)$Real (Kind=nag_wp) array Output\nOn exit: if ${\\mathbf{vipopt}}=1$, ${\\mathbf{vip}}\\left(\\mathit{i},1\\right)$ contains the VIP statistic for the $\\mathit{i}$th predictor variable in the model for all response variables, for $\\mathit{i}=1,2,\\dots ,{\\mathbf{ip}}$.\nIf ${\\mathbf{vipopt}}={\\mathbf{my}}$, ${\\mathbf{vip}}\\left(\\mathit{i},\\mathit{j}\\right)$ contains the VIP statistic for the $\\mathit{i}$th predictor variable in the model for the $\\mathit{j}$th response variable, for $\\mathit{i}=1,2,\\dots ,{\\mathbf{ip}}$ and $\\mathit{j}=1,2,\\dots ,{\\mathbf{my}}$.\nOtherwise vip is not referenced.\n26: $\\mathbf{ldvip}$Integer Input\nOn entry: the first dimension of the array vip as declared in the (sub)program from which g02lcf is called.\nConstraint: if ${\\mathbf{vipopt}}\\ne 0$, ${\\mathbf{ldvip}}\\ge {\\mathbf{ip}}$.\n27: $\\mathbf{ifail}$Integer Input/Output\nOn entry: ifail must be set to $0$, $-1$ or $1$ to set behaviour on detection of an error; these values have no effect when no error is detected.\nA value of $0$ causes the printing of an error message and program execution will be halted; otherwise program execution continues. A value of $-1$ means that an error message is printed while a value of $1$ means that it is not.\nIf halting is not appropriate, the value $-1$ or $1$ is recommended. If message printing is undesirable, then the value $1$ is recommended. 
Otherwise, the value $0$ is recommended. When the value $-\\mathbf{1}$ or $\\mathbf{1}$ is used it is essential to test the value of ifail on exit.\nOn exit: ${\\mathbf{ifail}}={\\mathbf{0}}$ unless the routine detects an error or a warning has been flagged (see Section 6).\n\n## 6Error Indicators and Warnings\n\nIf on entry ${\\mathbf{ifail}}=0$ or $-1$, explanatory error messages are output on the current error message unit (as defined by x04aaf).\nErrors or warnings detected by the routine:\n${\\mathbf{ifail}}=1$\nOn entry, ${\\mathbf{ip}}=⟨\\mathit{\\text{value}}⟩$.\nConstraint: ${\\mathbf{ip}}>1$.\nOn entry, ${\\mathbf{iscale}}=⟨\\mathit{\\text{value}}⟩$.\nConstraint: if ${\\mathbf{orig}}=1$, ${\\mathbf{iscale}}=-1$ or $1$.\nOn entry, ${\\mathbf{my}}=⟨\\mathit{\\text{value}}⟩$.\nConstraint: ${\\mathbf{my}}\\ge 1$.\nOn entry, ${\\mathbf{orig}}=⟨\\mathit{\\text{value}}⟩$.\nConstraint: ${\\mathbf{orig}}=-1$ or $1$.\nOn entry, ${\\mathbf{vipopt}}=⟨\\mathit{\\text{value}}⟩$ and ${\\mathbf{my}}=⟨\\mathit{\\text{value}}⟩$.\nConstraint: ${\\mathbf{vipopt}}=0$, $1$ or ${\\mathbf{my}}$.\n${\\mathbf{ifail}}=2$\nOn entry, ${\\mathbf{ldb}}=⟨\\mathit{\\text{value}}⟩$ and ${\\mathbf{ip}}=⟨\\mathit{\\text{value}}⟩$.\nConstraint: ${\\mathbf{ldb}}\\ge {\\mathbf{ip}}$.\nOn entry, ${\\mathbf{ldc}}=⟨\\mathit{\\text{value}}⟩$ and ${\\mathbf{my}}=⟨\\mathit{\\text{value}}⟩$.\nConstraint: ${\\mathbf{ldc}}\\ge {\\mathbf{my}}$.\nOn entry, ${\\mathbf{ldob}}=⟨\\mathit{\\text{value}}⟩$ and ${\\mathbf{ip}}=⟨\\mathit{\\text{value}}⟩$.\nConstraint: if ${\\mathbf{orig}}=1$, ${\\mathbf{ldob}}\\ge {\\mathbf{ip}}+1$.\nOn entry, ${\\mathbf{ldp}}=⟨\\mathit{\\text{value}}⟩$ and ${\\mathbf{ip}}=⟨\\mathit{\\text{value}}⟩$.\nConstraint: ${\\mathbf{ldp}}\\ge {\\mathbf{ip}}$.\nOn entry, ${\\mathbf{ldvip}}=⟨\\mathit{\\text{value}}⟩$ and ${\\mathbf{ip}}=⟨\\mathit{\\text{value}}⟩$.\nConstraint: if ${\\mathbf{vipopt}}\\ne 0$, ${\\mathbf{ldvip}}\\ge {\\mathbf{ip}}$.\nOn entry, 
${\\mathbf{ldw}}=⟨\\mathit{\\text{value}}⟩$ and ${\\mathbf{ip}}=⟨\\mathit{\\text{value}}⟩$.\nConstraint: ${\\mathbf{ldw}}\\ge {\\mathbf{ip}}$.\nOn entry, ${\\mathbf{ldycv}}=⟨\\mathit{\\text{value}}⟩$ and ${\\mathbf{nfact}}=⟨\\mathit{\\text{value}}⟩$.\nConstraint: if ${\\mathbf{vipopt}}\\ne 0$, ${\\mathbf{ldycv}}\\ge {\\mathbf{nfact}}$.\nOn entry, ${\\mathbf{maxfac}}=⟨\\mathit{\\text{value}}⟩$ and ${\\mathbf{ip}}=⟨\\mathit{\\text{value}}⟩$.\nConstraint: $1\\le {\\mathbf{maxfac}}\\le {\\mathbf{ip}}$.\nOn entry, ${\\mathbf{nfact}}=⟨\\mathit{\\text{value}}⟩$ and ${\\mathbf{maxfac}}=⟨\\mathit{\\text{value}}⟩$.\nConstraint: $1\\le {\\mathbf{nfact}}\\le {\\mathbf{maxfac}}$.\n${\\mathbf{ifail}}=-99$\nSee Section 7 in the Introduction to the NAG Library FL Interface for further information.\n${\\mathbf{ifail}}=-399$\nYour licence key may have expired or may not have been installed correctly.\nSee Section 8 in the Introduction to the NAG Library FL Interface for further information.\n${\\mathbf{ifail}}=-999$\nDynamic memory allocation failed.\nSee Section 9 in the Introduction to the NAG Library FL Interface for further information.\n\n## 7Accuracy\n\nThe calculations are based on the singular value decomposition of ${P}^{\\mathrm{T}}W$.\n\n## 8Parallelism and Performance\n\ng02lcf is threaded by NAG for parallel execution in multithreaded implementations of the NAG Library.\ng02lcf makes calls to BLAS and/or LAPACK routines, which may be threaded within the vendor library used by this implementation. Consult the documentation for the vendor library for further information.\nPlease consult the X06 Chapter Introduction for information on how to control and interrogate the OpenMP environment used within this routine. 
Please also consult the Users' Note for your implementation for any additional implementation-specific information.\n\n## 9Further Comments\n\ng02lcf allocates internally $l\\left(l+r+4\\right)+\\mathrm{max}\\phantom{\\rule{0.125em}{0ex}}\\left(2l,r\\right)$ elements of real storage.\n\n## 10Example\n\nThis example reads in details of a PLS model and calculates a set of parameter estimates along with their VIP statistics.\n\n### 10.1Program Text\n\nProgram Text (g02lcfe.f90)\n\n### 10.2Program Data\n\nProgram Data (g02lcfe.d)\n\n### 10.3Program Results\n\nProgram Results (g02lcfe.r)"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8306056,"math_prob":0.99975747,"size":4603,"snap":"2022-27-2022-33","text_gpt3_token_len":1011,"char_repetition_ratio":0.16025223,"word_repetition_ratio":0.25877762,"special_character_ratio":0.22137736,"punctuation_ratio":0.17901939,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999713,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-29T14:08:50Z\",\"WARC-Record-ID\":\"<urn:uuid:6012236d-0c77-4a04-8705-e9481f6e13cc>\",\"Content-Length\":\"68715\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1df5b3d6-ec51-45c7-bdb3-920ef226ffef>\",\"WARC-Concurrent-To\":\"<urn:uuid:b77d776c-9413-468e-b8d3-6a806e5f722e>\",\"WARC-IP-Address\":\"78.129.168.4\",\"WARC-Target-URI\":\"https://www.nag.com/numeric/nl/nagdoc_latest/flhtml/g02/g02lcf.html\",\"WARC-Payload-Digest\":\"sha1:2DX4KPGJTU3EXI6I4HUSRM5GIM6CFIWZ\",\"WARC-Block-Digest\":\"sha1:FHJUPCG7N5TISKI3VHHWTAGGBZ6C5NKK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103639050.36_warc_CC-MAIN-20220629115352-20220629145352-00796.warc.gz\"}"} |
https://engineeringworks.co/faqs/what-is-the-highest-occupied-energy-level-of-oxygen/ | [
"# What is the highest occupied energy level of oxygen?\n\nThe highest occupied energy level of oxygen is the 2p level. This is because oxygen has six electrons in its outermost energy level, and the 2p level is the highest energy level that can hold six electrons.\n\n## Other related questions:\n\n### Q: What is the highest occupied energy level?\n\nA: The highest occupied energy level is the level with the highest energy that is occupied by at least one electron.\n\n### Q: How many electron are in the highest occupied energy level of oxygen?\n\nA: There are eight electrons in the highest occupied energy level of oxygen.\n\n### Q: What is the energy level of oxygen?\n\nA: The energy level of oxygen is the same as the energy level of any other element."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9499226,"math_prob":0.6254533,"size":667,"snap":"2023-14-2023-23","text_gpt3_token_len":141,"char_repetition_ratio":0.24585219,"word_repetition_ratio":0.12931034,"special_character_ratio":0.20539731,"punctuation_ratio":0.11764706,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97061795,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-04-01T05:11:32Z\",\"WARC-Record-ID\":\"<urn:uuid:c61b75b8-61c9-4c11-874b-a07e86183981>\",\"Content-Length\":\"69559\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f8069b14-32e2-4e8a-b60e-197955439224>\",\"WARC-Concurrent-To\":\"<urn:uuid:cac667d2-dc3a-4ec6-95b1-1cc31d71f9d0>\",\"WARC-IP-Address\":\"198.136.62.208\",\"WARC-Target-URI\":\"https://engineeringworks.co/faqs/what-is-the-highest-occupied-energy-level-of-oxygen/\",\"WARC-Payload-Digest\":\"sha1:637ISVPK6SWPTALRQT4HA2ZFX2R52D5C\",\"WARC-Block-Digest\":\"sha1:GHZPAFZ42NVXJGQJBBRBPBFKSMKOGJDR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296949701.0_warc_CC-MAIN-20230401032604-20230401062604-00341.warc.gz\"}"} |
https://machinelearningmedium.com/2017/09/06/multiclass-logistic-regression/ | [
"Blog Logo\n·\n· · ·\n\nIndex\n\n· · ·\n\n### Introduction\n\nFor intuition and implementation of Binary Logistic Regression refer Classifiction and Logistic Regression and Logistic Regression Model.\n\nMulticlass logistic regression is a extension of the binary classification making use of the one-vs-all or one-vs-rest classification strategy.\n\n### Intuition\n\nGiven a classification problem with n distinct classes, train n classifiers, where each classifier draws a decision boundary for one class vs all the other classes. Mathematically,\n\n### Implementation\n\nBelow is an implementation for multiclass logistic regression with linear decision boundary, where number of classes is 3 and one-vs-all strategy is used.\n\nimport math\nimport numpy as np\nimport matplotlib.pyplot as plt\nx_orig = [[0,0], [0,1], [1, 0], [1, 1], [2, 2], [2, 3], [3, 2], [3, 3], [0, 4], [1, 4], [0, 5], [1, 5]]\ny_orig = [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2]\nx = np.atleast_2d(x_orig)\ny = np.atleast_2d(y_orig).T\n\ndef h(X, theta):\nreturn 1 / (1 + np.exp(-mul(X, theta)))\n\ndef j(X, y, theta):\nreturn (-1/m) * (mul(y.T, np.log(h(X, theta))) + mul((1-y).T, np.log(1-h(X, theta))))\n\ndef update(X, y, theta):\nreturn theta - (alpha/m * mul(X.T, (h(X, theta) - y)))\n\ntheta_all = []\nfor _ in range(3):\ntheta = np.random.randint(1, 100, size=(3, 1))/ 100\nmul = np.matmul\nalpha = 0.6\nm = len(x)\nx = np.atleast_2d(x_orig)\ny = np.atleast_2d(y_orig).T\nidx_0 = np.where(y!=_)\nidx_1 = np.where(y==_)\ny[idx_0] = 0\ny[idx_1] = 1\nX = np.hstack((np.ones((len(x), 1)), x))\nprev_j = 10000\ncurr_j = j(X, y, theta)\ntolerance = 0.000001\ntheta_history = [theta]\ncost_history = [curr_j]\n\nwhile(abs(curr_j - prev_j) > tolerance):\ntheta = update(X, y, theta)\ntheta_history.append(theta)\nprev_j = curr_j\ncurr_j = j(X, y, theta)\ncost_history.append(curr_j)\ntheta_all.append(theta)\nprint(\"classifier %d stopping with loss: %.5f\" % (_, curr_j))\n\ndef theta_2(theta, x_range):\nreturn [(-theta/theta - 
theta/theta*i) for i in x_range]\nx_range = np.linspace(-1, 4, 100)\nx = np.atleast_2d(x_orig)\ny = np.atleast_2d(y_orig).T\nfig, ax = plt.subplots()\nax.set_xlim(-1, 4)\nax.set_ylim(-1, 6)\nplt.scatter(x[np.where(y == 2), 0], x[np.where(y == 2), 1])\nplt.scatter(x[np.where(y == 1), 0], x[np.where(y == 1), 1])\nplt.scatter(x[np.where(y == 0), 0], x[np.where(y == 0), 1])\nfor theta in theta_all:\nplt.plot(x_range, theta_2(theta, x_range))\nplt.title('Multiclass Logistic Regression')\nplt.show()\n\n\nBelow is the plot of all the decision boundaries found by the logistic regression.",
null,
"Value of $h_\\theta^{(i)}(x)$ is the probability of data point belonging to $i^{th}$ class as seen in (1). Keeping this is mind one can decide the precedence of the class based on the values of its corresponding prediction on that data point. So, the predicted class is the one with maximum value of corresponding hypothesis. It shown in the plot below.",
null,
"Similar to the above implementation the classificaiton can be extented to many more classes.\n\nMachine Learning: Coursera - Multiclass Classification: One-vs-All"
]
| [
null,
"https://machinelearningmedium.com/assets/2017-09-06-multiclass-logistic-regression/fig-1-decision-boundaries.png",
null,
"https://machinelearningmedium.com/assets/2017-09-06-multiclass-logistic-regression/fig-2-decision-regions.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.6019465,"math_prob":0.9992009,"size":3031,"snap":"2020-24-2020-29","text_gpt3_token_len":943,"char_repetition_ratio":0.13016188,"word_repetition_ratio":0.018306635,"special_character_ratio":0.34675026,"punctuation_ratio":0.21713442,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99998,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-06-04T02:45:02Z\",\"WARC-Record-ID\":\"<urn:uuid:6cb51cd1-e450-42b6-87ad-508f9a22a207>\",\"Content-Length\":\"34790\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d737e706-3adb-4507-acc4-81b9211a0ddd>\",\"WARC-Concurrent-To\":\"<urn:uuid:2f94146c-00df-4a4f-8214-6b018d696c08>\",\"WARC-IP-Address\":\"172.67.131.4\",\"WARC-Target-URI\":\"https://machinelearningmedium.com/2017/09/06/multiclass-logistic-regression/\",\"WARC-Payload-Digest\":\"sha1:3JU2BMHHJVVCXUMOIPGB7SEW7IDPKRMF\",\"WARC-Block-Digest\":\"sha1:HJEXAPYYVDCYZWJTS56NCCWOJNXX726A\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347436828.65_warc_CC-MAIN-20200604001115-20200604031115-00499.warc.gz\"}"} |
https://www.interest.co.nz/category/tag/nz-government-bond?page=1 | [
"# NZ Government bond\n\n16th Jan 20, 2:25pm\nGovt Bond Tender #708; weighted average yield accepted was 1.67%; coverage ratio was 2.58x\nGovt Bond Tender #708; weighted average yield accepted was 1.67%; coverage ratio was 2.58x\n16th Jan 20, 2:25pm\n13th Dec 19, 8:27am\nGovt Bond Tender #707; weighted average yield accepted was 1.50%; coverage ratio was 2.53x\nGovt Bond Tender #707; weighted average yield accepted was 1.50%; coverage ratio was 2.53x\n13th Dec 19, 8:27am\n29th Nov 19, 12:07pm\nNZ Govt Bond Tender #705; weighted average accepted yield was 1.07%; coverage ratio was 3.7x\nNZ Govt Bond Tender #705; weighted average accepted yield was 1.07%; coverage ratio was 3.7x\n29th Nov 19, 12:07pm\n21st Nov 19, 2:35pm\nNZ Govt Bond Tender #704; weighted average accepted yield was 1.72%; coverage ratio was 3.13x\nNZ Govt Bond Tender #704; weighted average accepted yield was 1.72%; coverage ratio was 3.13x\n21st Nov 19, 2:35pm\n14th Nov 19, 2:32pm\nGovt Bond Tender #703; weighted average yield accepted was 1.43%; coverage ratio was 2.44x\nGovt Bond Tender #703; weighted average yield accepted was 1.43%; coverage ratio was 2.44x\n14th Nov 19, 2:32pm\n31st Oct 19, 2:25pm\nGovt Bond Tender #701; weighted average yield accepted was 1.31%; coverage ratio was 2.83x\nGovt Bond Tender #701; weighted average yield accepted was 1.31%; coverage ratio was 2.83x\n31st Oct 19, 2:25pm\n24th Oct 19, 2:19pm\nNZ Govt Bond Tender #700; weighted average accepted yield was 0.97%; coverage ratio was 2.6x\nNZ Govt Bond Tender #700; weighted average accepted yield was 0.97%; coverage ratio was 2.6x\n24th Oct 19, 2:19pm\n17th Oct 19, 2:19pm\nNZ Govt Bond Tender #699; weighted average accepted yield was 1.61%; coverage ratio was 2.35x\nNZ Govt Bond Tender #699; weighted average accepted yield was 1.61%; coverage ratio was 2.35x\n17th Oct 19, 2:19pm\n10th Oct 19, 2:12pm\nGovt Bond Tender #698; weighted average yield accepted was 1.04%; coverage ratio was 1.98x\nGovt Bond Tender #698; weighted 
average yield accepted was 1.04%; coverage ratio was 1.98x\n10th Oct 19, 2:12pm\n12th Sep 19, 2:53pm\nNZ Govt Bond Tender #695; weighted average yield accepted was 1.29%; coverage ratio was 1.32x\nNZ Govt Bond Tender #695; weighted average yield accepted was 1.29%; coverage ratio was 1.32x\n12th Sep 19, 2:53pm\n22nd Aug 19, 2:16pm\nNZ Govt Bond Tender #693; weighted average accepted yield was 0.85%; coverage ratio was 2.2x\nNZ Govt Bond Tender #693; weighted average accepted yield was 0.85%; coverage ratio was 2.2x\n22nd Aug 19, 2:16pm\n8th Aug 19, 2:59pm\nNZ Govt Bond Tender #691; weighted average yield accepted was 1.12%; coverage ratio was 3.03x\nNZ Govt Bond Tender #691; weighted average yield accepted was 1.12%; coverage ratio was 3.03x\n8th Aug 19, 2:59pm\n25th Jul 19, 2:27pm\nNZ Govt Bond Tender #689; weighted average accepted yield was 1.21%; coverage ratio was 2.92x\nNZ Govt Bond Tender #689; weighted average accepted yield was 1.21%; coverage ratio was 2.92x\n25th Jul 19, 2:27pm\n18th Jul 19, 2:09pm\nNZ Govt Bond Tender #688; weighted average accepted yield was 1.98%; coverage ratio was 2.13x\nNZ Govt Bond Tender #688; weighted average accepted yield was 1.98%; coverage ratio was 2.13x\n18th Jul 19, 2:09pm\n11th Jul 19, 2:07pm\nNZ Govt Bond Tender #687; weighted average yield accepted was 1.53%; coverage ratio was 2.93x\nNZ Govt Bond Tender #687; weighted average yield accepted was 1.53%; coverage ratio was 2.93x\n11th Jul 19, 2:07pm\n27th Jun 19, 2:11pm\nNZ Govt Bond Tender #685; weighted average accepted yield was 1.27%; coverage ratio was 4.4x\nNZ Govt Bond Tender #685; weighted average accepted yield was 1.27%; coverage ratio was 4.4x\n27th Jun 19, 2:11pm\n20th Jun 19, 2:11pm\nNZ Govt Bond Tender #684; weighted average accepted yield was 1.89%; coverage ratio was 3.95x\nNZ Govt Bond Tender #684; weighted average accepted yield was 1.89%; coverage ratio was 3.95x\n20th Jun 19, 2:11pm\n13th Jun 19, 2:12pm\nNZ Govt Bond Tender #683; weighted average 
yield accepted was 1.69%; coverage ratio was 2.89x\nNZ Govt Bond Tender #683; weighted average yield accepted was 1.69%; coverage ratio was 2.89x\n13th Jun 19, 2:12pm\n23rd May 19, 2:24pm\nNZ Govt Bond Tender #681; weighted average accepted yield was 1.46%; coverage ratio was 2.0x\nNZ Govt Bond Tender #681; weighted average accepted yield was 1.46%; coverage ratio was 2.0x\n23rd May 19, 2:24pm\n16th May 19, 2:46pm\nNZ Govt Bond Tender #680; weighted average accepted yield was 2.09%; coverage ratio was 2.23x\nNZ Govt Bond Tender #680; weighted average accepted yield was 2.09%; coverage ratio was 2.23x\n16th May 19, 2:46pm"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.98572564,"math_prob":0.7843027,"size":4119,"snap":"2020-24-2020-29","text_gpt3_token_len":1309,"char_repetition_ratio":0.28189552,"word_repetition_ratio":0.6636637,"special_character_ratio":0.34935665,"punctuation_ratio":0.20618556,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97834694,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-09T02:02:04Z\",\"WARC-Record-ID\":\"<urn:uuid:85984160-726d-4b62-ba7e-9378ce0f5f6f>\",\"Content-Length\":\"114485\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0d913a6d-32d7-49d2-b8c8-79dcec113aaf>\",\"WARC-Concurrent-To\":\"<urn:uuid:d951d676-e1f2-424b-b3eb-46f4351b2a35>\",\"WARC-IP-Address\":\"54.252.224.192\",\"WARC-Target-URI\":\"https://www.interest.co.nz/category/tag/nz-government-bond?page=1\",\"WARC-Payload-Digest\":\"sha1:CVLHLNIV5EX4K77DIDOXFWKNWFVFVKFK\",\"WARC-Block-Digest\":\"sha1:HRY2X57NUOXNSFBGC64VOPLLVXX43IJT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655897844.44_warc_CC-MAIN-20200709002952-20200709032952-00260.warc.gz\"}"} |
https://en.wikisource.org/wiki/1911_Encyclop%C3%A6dia_Britannica/Units,_Dimensions_of | [
"# 1911 Encyclopædia Britannica/Units, Dimensions of\n\nUNITS, DIMENSIONS OF. Measurable entities of different kinds cannot be compared directly. Each one must be specified in terms of a unit of its own kind; a single number attached to this unit forms its measure. Thus if the unit of length be taken to be L centimetres, a line whose length is l centimetres will be represented in relation to this unit by the number l/L; while if the unit is increased [L] times, that is, if a new unit is adopted equal to [L] times the former one, the numerical measure of each length must in consequence be divided by [L]. Measurable entities are either fundamental or derived. For example, velocity is of the latter kind, being based upon a combination of the fundamental entities length and time; a velocity may be defined, in the usual form of language expressive of a limiting value, as the rate at which the distance from some related mark is changing per unit time. The element of length is thus involved directly, and the element of time inversely in the derived idea of velocity; the meaning of this statement being that when the unit of length is increased L] times and the unit of time is increased [T] times, the numerical value of any given velocity, considered as specified in terms of the units of length and time, is diminished [L]/[T] times. In other words, these changes in the units of length and time involve change in the unit of velocity determined by them, such that it is increased [V] times where [V]=[L][T]\". This relation is conveniently expressed by, the statement that velocity is of -|- 1 dimension- in length and of - 1 dimension in time. 
Again, acceleration of motion is defined as rate of increase of velocity per unit time; hence the change of the units of length and time will increase the corresponding or derived unit of acceleration [V]/[T] times, that is [L][T]⁻² times: this expression thus represents the dimensions (+1 in length and -2 in time) of the derived entity acceleration in terms of its fundamental elements length and time. In the science of dynamics all entities are derived from the three fundamental ones, length, time and mass; for example, the dimensions of force (P) are those of mass and acceleration jointly, so that in algebraic form [P]=[M][L][T]⁻². This restriction of the fundamental units to three must therefore be applicable to all departments of physical science that are reducible to pure dynamics.\n\nThe mode of transformation of a derived entity, as regards its numerical value, from one set of fundamental units of reference to another set, is exhibited in the simple illustrations above given. The procedure is as follows. When the numerical values of the new units, expressed in terms of the former ones, are substituted for the symbols, in the expression for the dimensions of the entity under consideration, the number which results is the numerical value of the new unit of that entity in terms of the former unit: thus all numerical values of entities of this kind must be divided by this number, in order to transfer them from the former to the latter system of fundamental units.\n\nAs above stated, physical science aims at reducing the phenomena of which it treats to the common denomination of the positions and movements of masses. Before the time of Gauss it was customary to use a statical measure of force, alongside the kinetic measure depending on the acceleration of motion that the force can produce in a given mass. Such a statical measure could be conveniently applied by the extension of a
spring, which, however, has to be corrected for temperature, or by weighing against standard weights, which has to be corrected for locality. On the other hand, the kinetic measure is independent of local conditions, if only we have absolute scales of length and time at our disposal. It has been found to be indispensable, for simplicity and precision in physical science, to express the measure of force in only one way; and statical forces are therefore now generally referred in theoretical discussions to the kinetic unit of measurement. In mechanical engineering the static unit has largely survived; but the increasing importance of electrical applications is introducing uniformity there also. In the science of electricity two different systems of units, the electrostatic and the electrodynamic, still to a large extent persist. The electrostatic system arose because in the development of the subject statics came before kinetics; but in the complete synthesis it is usually found convenient to express the various quantities in terms of the electrokinetic system alone.\n\nThe system of measurement now adopted as fundamental in physics takes the centimetre as unit of length, the gramme as unit of mass, and the second as unit of time. The choice of these units was in the first instance arbitrary and dictated by convenience; for some purposes subsidiary systems based on multiples of these units by certain powers of ten are found convenient. There are certain absolute entities in nature, such as the constant of gravitation, the velocity of light in free space, and the constants occurring in the expression giving the constitution of the radiation in an enclosure that corresponds to each temperature, which are the same for all kinds of matter; these might be utilized, if known with sufficient accuracy, to establish a system of units of an absolute or cosmical kind.
The wave-length of a given spectral line might be utilized in the same manner, but that depends on recovering the kind of matter which produces the line.\n\nIn physical science the uniformities in the course of phenomena are elucidated by the discovery of permanent or intrinsic relations between the measurable properties of material systems. Each such relation is expressible as an equation connecting the numerical values of entities belonging to the system. Such an equation, representing as it does a relation between actual things, must remain true when the measurements are referred to a new set of fundamental units. Thus, for example, the kinematical equation v²=nf²l, if n is purely numerical, contradicts the necessary relations involved in the definitions of the entities velocity, acceleration, and length which occur in it. For on changing to a new set of units as above the equation should still hold; it, however, then becomes v²/[V]²=n·f²/[F]²·l/[L]. Hence on division there remains a dimensional relation [V]²=[F]²[L], which is in disagreement with the dimensions above determined of the derived units that are involved in it. The inference follows, either that an equation such as that from which we started is a formal impossibility, or else that the factor n which it contains is not a mere number, but represents n times the unit of some derived quantity which ought to be specified in order to render the equation a complete statement of a physical relation. On the latter hypothesis the dimensions [N] of this quantity are determined by the dimensional equation [V]²=[N][F]²[L] where, in terms of the fundamental units of length and time, [V]=[L][T]⁻¹, [F]=[L][T]⁻²; whence by substitution it appears that [N]=[L]⁻¹[T]².
Thus, instead of being merely numerical, n must represent in the above formula the measure of some physical entity, which may be classified by the statement that it has the conjoint dimensions of time directly and of velocity inversely.\n\nIt often happens that a simple comparison of the dimensions of the quantities which determine a physical system will lead to important knowledge as to the necessary relations that subsist between them. Thus in the case of a simple pendulum the period of oscillation τ can depend only on the angular amplitude α of the swing, the mass m of the bob considered as a point, and the length l of the suspending fibre considered as without mass, and on the value of g the acceleration due to gravity, which is the active force; that is, τ=f(α, m, l, g). The dimensions must be the same on both sides of this formula, for, when they are expressed in terms of the three independent dynamical quantities mass, length, and time, there must be complete identity between its two sides. Now, the dimensions of g are [L][T]⁻²; and when the unit of length is altered the numerical value of the period is unaltered, hence its expression must be restricted to the form f(α, m, l/g). Moreover, as the period does not depend on the unit of mass, the form is further reduced to f(α, l/g); and as it is of the dimensions +1 in time, it must be a multiple of √(l/g), and therefore of the form φ(α)√(l/g). Thus the period of oscillation has been determined by these considerations except as regards the manner in which it depends on the amplitude α of the swing. When a process of this kind leads to a definite result, it will be one which makes the unknown quantity jointly proportional to various powers of the other quantities involved; it will therefore shorten the process if we assume such an expression for it in advance, and find whether it is possible to determine the exponents definitely and uniquely so as to obtain the correct dimensions.
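The procedure just described, assuming a product of powers such as τ = A α^p m^q l^r g^s in advance and matching dimensions, amounts to solving a small linear system in the exponents. A sketch in NumPy (the variable names are illustrative), with one equation per fundamental dimension M, L, T:

```python
import numpy as np

# Rows: dimension exponents (M, L, T) of m, l and g; alpha is dimensionless.
dims = np.array([
    [1.0, 0.0, 0.0],   # m
    [0.0, 1.0, 0.0],   # l
    [0.0, 1.0, -2.0],  # g
])

# tau has dimensions (M, L, T) = (0, 0, 1); solve dims.T @ (q, r, s) = target.
q, r, s = np.linalg.solve(dims.T, np.array([0.0, 0.0, 1.0]))
# q = 0, r = 1/2, s = -1/2: tau is a multiple of sqrt(l/g), as in the text.
```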
In the present example, assuming in this way the relation τ=Aα^p m^q l^r g^s, where A is a pure numeric, we are led to the dimensional equation [T]=[α]^p[M]^q[L]^r[LT⁻²]^s, showing that the law assumed would not persist when the fundamental units of length, mass, and time are altered, unless q=0, s=-½, r=½; as an angle has no dimensions, being determined by its numerical ratio to the invariable angle forming four right angles, p remains undetermined. This leads to the same result, τ=φ(α)l^½g^-½, as before.\n\nAs illustrating the power and also the limitations of this method of dimensions, we may apply it (after Lord Rayleigh, Roy. Soc. Proc., March 1900) to the laws of viscosity in gases. The dimensions of viscosity (μ) are (force/area) ÷ (velocity/length), giving [ML⁻¹T⁻¹] in terms of the fundamental units. Now, on the dynamical theory of gases viscosity must be a function of the mass m of a molecule, the number n of molecules per unit volume, their velocity of mean square v̄, and their effective radius a; it can depend on nothing else. The equation of dimensions cannot supply more than three relations connecting these four possibilities of variation, and so cannot here lead to a definite result without further knowledge of the physical circumstances. And we remark conversely, in passing, that wherever in a problem of physical dynamics we know that the quantity sought can depend on only three other quantities whose dynamical dimensions are known, it must vary as a simple power of each. The additional knowledge required, in order to enable us to proceed in a case like the present, must be of the form of such an equation of simple variation. In the present case it is involved in the new fact that in an actual gas the mean free path is very great compared with the effective molecular radius.
On this account the mean free path is inversely as the number of molecules per unit volume; and therefore the coefficient of viscosity, being proportional to these two quantities jointly, is independent of either, so long as the other quantities defining the system remain unchanged. If the molecules are taken to be spheres which exert mutual action only during collision, we therefore assume μ ∝ m^x v̄^y a^z,\n\nwhich requires that the equation of dimensions [ML⁻¹T⁻¹] = [M]^x[LT⁻¹]^y[L]^z\n\nmust be satisfied. This gives x=1, y=1, z=-2. As the temperature is proportional to mv̄², it follows that the viscosity is proportional to the square root of the mass of the molecule and the square root of the absolute temperature, and inversely proportional to the square of the effective molecular radius, being, as already seen, uninfluenced by change of density.\n\nIf the atoms are taken to be Boscovichian points exerting mutual attractions, the effective diameter a is not definite; but we can still proceed in cases where the law of mutual attraction is expressed by a simple formula of variation, that is, provided it is of type km²r⁻ˢ, where r is the distance between the two molecules. Then, noting that, as this is a force, the dimensions of k must be [M⁻¹L^(s+1)T⁻²], we can assume\n\nμ ∝ m^x v̄^y k^w,\n\nprovided [ML⁻¹T⁻¹] = [M]^x[LT⁻¹]^y[M⁻¹L^(s+1)T⁻²]^w, which demands and is satisfied by\n\nx-w=1, y+2w=1, y+(s+1)w=-1,\n\nso that w=-2/(s-1), y=(s+3)/(s-1), x=(s-3)/(s-1).\n\nThus, on this supposition,\n\nμ ∝ m^((s-9)/(2(s-1))) k^(-2/(s-1)) θ^((s+3)/(2(s-1))),\n\nwhere θ represents absolute temperature. (See DIFFUSION.) When the quantity sought depends on more than three others, the method may often be equally useful, though it cannot give a complete result. Cf. Sir G. G. Stokes, Math. and Phys. Papers (1881), p. 106, and Lord Rayleigh, Phil. Mag. (1905), (1) p.
494, for examples dealing with the determination of viscosity from observations of the retarded swings of a vane, and with the formulation of the most general type of characteristic equation for gases respectively. As another example we may consider what is involved in Bashforth's experimental conclusion that the air-resistances to shot of the same shape are proportional to the squares of their linear dimensions. A priori, the resistance is a force which is determined by the density of the air ρ, the linear dimensions l of the shot, the viscosity of the air μ, the velocity of the shot v, and the velocity of sound in air c, there being no other physical quantity sensibly involved. Five elements are thus concerned, and we can combine them in two ways so as to obtain quantities of no dimensions; for example, we may choose ρvl/μ and v/c. The resistance to the shot must therefore be of the form μ^x ρ^y v^z φ(ρvl/μ) f(v/c), this form being of sufficient generality, as it involves an undetermined function for each element beyond three. On equating dimensions we find x = 2, y = -1, z = 0. Now, Bashforth's result shows that φ(χ) = χ^2. Therefore the resistance is ρv^2l^2 f(v/c), and is thus to our degree of approximation independent of the viscosity. Moreover, we might have assumed this practical independence straight off, on known hydrodynamic grounds; and then the argument from dimensions could have predicted Bashforth's law, if the present application of the doctrine of dimensions to a case involving turbulent fluid motion not mathematically specifiable is valid. One of the important results drawn by Osborne Reynolds from his experiments on the régime of flow in pipes was a confirmation of its validity: we now see that the ballistic result furnishes another confirmation. In electrical science two essentially distinct systems of measurement were arrived at according as the development began with the phenomena of electrostatics or those of electrokinetics. 
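Editorial note (no part of the original article): in Bashforth's example the dimensionless factors φ and f drop out, and equating the exponents of M, L and T gives three linear equations in x, y, z; a small Python check of that solve:

```python
from fractions import Fraction as F

# Resistance R = mu^x * rho^y * v^z * phi(rho*v*l/mu) * f(v/c); phi, f are
# dimensionless, so the dimensional equation is
#   [M L T^-2] = [M L^-1 T^-1]^x [M L^-3]^y [L T^-1]^z,
# giving, for the exponents of M, L and T respectively:
#   x + y = 1,   -x - 3y + z = 1,   -x - z = -2.
A = [[F(1), F(1), F(0)],
     [F(-1), F(-3), F(1)],
     [F(-1), F(0), F(-1)]]
b = [F(1), F(1), F(-2)]
for i in range(3):                      # Gauss-Jordan elimination over Fractions
    piv = next(r for r in range(i, 3) if A[r][i] != 0)
    A[i], A[piv], b[i], b[piv] = A[piv], A[i], b[piv], b[i]
    for r in range(3):
        if r != i and A[r][i] != 0:
            m = A[r][i] / A[i][i]
            A[r] = [a - m * c for a, c in zip(A[r], A[i])]
            b[r] -= m * b[i]
x, y, z = (b[i] / A[i][i] for i in range(3))
print(x, y, z)  # 2 -1 0: R = (mu^2/rho) * phi(rho v l/mu) f(v/c); phi(X) = X^2 then gives rho v^2 l^2 f(v/c)
```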
An electric charge appears as an entity having different dimensions in terms of the fundamental dynamical units in the two cases: the ratio of these dimensions proves to be the dimensions of a velocity. It was found, first by W. Weber, by measuring the same charge by its static and its kinetic effects, that the ratio of the two units is a velocity sensibly identical with the velocity of light, so far as regards experiments conducted in space devoid of dense matter. The emergence of a definite absolute velocity such as this, out of a comparison of two different ways of approaching the same quantity, entitles us to assert that the two ways can be consolidated into a single dynamical theory only by some development in which this velocity comes to play an actual part. Thus the hypothesis of the mere existence of some complete dynamical theory was enough to show, in the stage which electrical science had reached under Gauss and Weber, that there is a definite physical velocity involved in and underlying electric phenomena, which it would have been hardly possible to imagine as other than a velocity of propagation of electrical effects of some kind. The time was thus ripe for the reconstruction of electric theory by Faraday and Maxwell.\n\nThe power of the method of dimensions in thus revealing general relations has its source in the hypothesis that, however complicated in appearance, the phenomena are really restricted within the narrow range of dependence on the three fundamental entities. 
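Editorial note (no part of the original article): this restriction to three fundamental entities is what made the viscosity computation above determinate once one extra physical fact was supplied; the closed forms w = -2/(s-1), y = (s+3)/(s-1), x = (s-3)/(s-1) can be verified against the three stated conditions for any s by a short Python sketch:

```python
from fractions import Fraction as F

def exponents(s):
    # Closed forms from the viscosity discussion for mu ∝ m^x v^y k^w
    # under an attraction varying as r^-s.
    w = F(-2, s - 1)
    y = F(s + 3, s - 1)
    x = F(s - 3, s - 1)
    return x, y, w

for s in (2, 3, 5, 9, 101):
    x, y, w = exponents(s)
    # the three dimensional conditions:
    assert x - w == 1 and y + 2 * w == 1 and y + (s + 1) * w == -1
    # with temperature theta ∝ m v^2, the theta-exponent of mu is y/2
    # and the m-exponent is x - y/2 = (s-9)/(2(s-1)):
    assert x - y / 2 == F(s - 9, 2 * (s - 1))
    print(s, y / 2)   # s = 5 gives theta^1; large s tends to the hard-sphere theta^(1/2)
```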
The proposition is also therein involved, that if a changing physical system be compared with another system in which the scale is altered in different ratios as regards corresponding lengths, masses, and times, then if all quantities affecting the second system are altered from the corresponding quantities affecting the first in the ratios determined by their physical dimensions, the stage of progress of the second system will always correspond to that of the first; under this form the application of the principle, to determine the correlations of the dynamics of similar systems, originated with Newton (Principia, lib. ii. prop. 32). For example, in comparing the behaviour of an animal with that of another animal of the same build but on a smaller scale, we may take the mass per unit volume and the muscular force per unit sectional area to be the same for both; thus [L], [M], . . . being now ratios of corresponding quantities, we have [ML^-3] = 1 and [ML^-1T^-2] = 1, giving [L] = [T]; thus the larger animal effects movements of his limbs more slowly in simple proportion to his linear dimensions, while the velocity of movement is the same for both at corresponding stages.\n\nBut this is only on the hypothesis that the extraneous force of gravity does not intervene, for that force does not vary in the same manner as the muscular forces. The result has thus application only to a case like that of fishes in which gravity is equilibrated by the buoyancy of the water. The effect of the inertia of the water, considered as a perfect fluid, is included in this comparison; but the forces arising from viscosity do not correspond in the two systems, so that neither system may be so small that viscosity is an important agent in its motion. The limbs of a land animal have mainly to support his weight, which varies as the cube of his linear dimensions, while the sectional areas of his muscles and bones vary only as the square thereof. 
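Editorial note (no part of the original article): the two scaling statements just made, equal density together with equal muscular stress forcing [L] = [T], and weight growing as the cube of the linear dimension while limb sections grow only as its square, can be checked numerically:

```python
# Equal density: m = l**3; equal muscular stress: m = l * t**2  =>  t = l.
l = 5.0                        # sample length ratio between the two animals
m = l ** 3
t = (m / l) ** 0.5
assert abs(t - l) < 1e-12      # times scale as lengths, so speeds agree at corresponding stages

# A limb of diameter d carries stress ∝ weight / d^2 = L^3 / d^2; holding the
# stress constant as the body ratio L grows requires d ∝ L^(3/2).
for L in (1.0, 2.0, 4.0):
    d = L ** 1.5
    stress = L ** 3 / d ** 2
    assert abs(stress - 1.0) < 1e-12
print("t/l =", t / l)          # 1.0
```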
Thus the diameters of his limbs should increase in a greater ratio than that of his body, theoretically in the latter ratio raised to the power 3/2, if other things were the same. An application of this principle, which has become indispensable in modern naval architecture, permits the prediction of the behaviour of a large ship from that of a small-scale model. The principle is also of very wide utility in unravelling the fundamental relations in definite physical problems of such complexity that complete treatment is beyond the present powers of mathematical analysis; it has been applied, for example, to the motions of systems involving viscous fluids, in elucidation of wind and waves, by Helmholtz (Akad. Berlin, 1873 and 1889), and in the electrodynamics of material atomic systems in motion by Lorentz and by Larmor. As already stated, the essentials of the doctrine of dimensions in its most fundamental aspect, that relating to the comparison of the properties of correlated systems, originated with Newton. The explicit formulation of the idea of the dimensions, or the exponents of dimension, of physical quantities was first made by Fourier, Théorie de la chaleur, 1822, ch. ii. sec. 9; the homogeneity in dimensions of all the terms of an equation is insisted on by him, much as explained above; and the use of this principle as a test of accuracy and precision is illustrated. (J. L.*)"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9460234,"math_prob":0.98670113,"size":19597,"snap":"2021-43-2021-49","text_gpt3_token_len":4181,"char_repetition_ratio":0.15061502,"word_repetition_ratio":0.010107198,"special_character_ratio":0.20819513,"punctuation_ratio":0.10294508,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9959161,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-11-28T17:29:08Z\",\"WARC-Record-ID\":\"<urn:uuid:78b9b455-0066-4f23-aca2-b1a9fe99a224>\",\"Content-Length\":\"57283\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:73422174-ac64-4e18-84e3-22e0d1782199>\",\"WARC-Concurrent-To\":\"<urn:uuid:d35de66e-cf90-4930-a865-e3ef2520a628>\",\"WARC-IP-Address\":\"208.80.154.224\",\"WARC-Target-URI\":\"https://en.wikisource.org/wiki/1911_Encyclop%C3%A6dia_Britannica/Units,_Dimensions_of\",\"WARC-Payload-Digest\":\"sha1:YR6WKWHAWA72GGQUTJC3FVTBRPJJEN4Y\",\"WARC-Block-Digest\":\"sha1:VUW56B72K6VG5AY5PCMBTCXGBBZPS5OA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964358570.48_warc_CC-MAIN-20211128164634-20211128194634-00307.warc.gz\"}"} |
https://blueribbonwriters.com/find-the-absolute-maximum-value-if-any-for-f-on-the-interval-13-given-that-fx2x-5x4-55-a-5-b-no-absolute-max-value-c-7-6335-d-32-e/ | [
"# Find the absolute maximum value (if any) for f on the interval [-1,3], given that f(x) = 2x - 5x^(4/5) + 5 a) 5 b) no absolute max value c) -7.6335 d) 32 e)…\n\nFind the absolute maximum value (if any) for f on the interval [-1,3], given that f(x) = 2x - 5x^(4/5) + 5\n\na) 5\n\nb) no absolute max value\n\nc) -7.6335\n\nd) 32\n\ne) -1.04111\n\nf) none of the above\n\nFind the value of x that gives the absolute minimum value of f(x) = 4x^3 - 9x^2 + 3 on the interval [1/2, 4]\n\na) 4\n\nb) no such value of x\n\nc) 0\n\nd) 3/2\n\ne) 1/2\n\nf) none of the above\n\nA store owner wants to set up a rectangular display area outside his store. He will use the garage (which is 45 feet long) as part of one side of the display area. He has 240 linear feet of fencing material to use to fence in the display area. What is a function which expresses the total area of the display area in terms of the length x of the side of the display area opposite the building?\n\na) A(x) = 240x-45\n\nb) A(x) = (285/2)x+x^2\n\nc) A(x) = 283/2+x^2\n\nd) A(x) = (285/2)x-x^2\n\ne) A(x) = 283/2+x",
null,
""
]
| [
null,
"https://blueribbonwriters.com/wp-content/uploads/2020/01/order-supreme-essay.jpg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.72718644,"math_prob":0.9987663,"size":844,"snap":"2021-04-2021-17","text_gpt3_token_len":307,"char_repetition_ratio":0.16428572,"word_repetition_ratio":0.011976048,"special_character_ratio":0.37914693,"punctuation_ratio":0.043062203,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9870893,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-18T14:39:56Z\",\"WARC-Record-ID\":\"<urn:uuid:376d0f86-9b6f-4d74-8c2c-f669e944da74>\",\"Content-Length\":\"44651\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6fef6087-9d15-440f-84fe-fb25342391db>\",\"WARC-Concurrent-To\":\"<urn:uuid:b19ba5a0-15a6-46cf-8760-dbac695d0835>\",\"WARC-IP-Address\":\"198.54.116.13\",\"WARC-Target-URI\":\"https://blueribbonwriters.com/find-the-absolute-maximum-value-if-any-for-f-on-the-interval-13-given-that-fx2x-5x4-55-a-5-b-no-absolute-max-value-c-7-6335-d-32-e/\",\"WARC-Payload-Digest\":\"sha1:HZFKZKXEL2QOK5JQJ3ATJW7RHJVLOFEC\",\"WARC-Block-Digest\":\"sha1:MCFK6JQJ4UV53RF4BUKJFX2IW66RWLUB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703514796.13_warc_CC-MAIN-20210118123320-20210118153320-00405.warc.gz\"}"} |
https://byjus.com/maths/2041-in-words/ | [
"",
null,
"# 2041 in Words\n\n2041 in words is written as Two thousand forty-one. In both the International System of Numerals and the Indian System of Numerals, 2041 is written as Two thousand forty-one. The number 2041 is a Cardinal Number as it could represent some quantity. For example, “the speakers cost 2041 rupees”.\n\n| 2041 in Words | Two thousand forty-one |\n| Two thousand forty-one in Number | 2041 |\n\n## 2041 in English Words\n\n2041 in English words is read as “Two thousand forty-one”.",
null,
"## How to Write 2041 in Words?\n\nTo write 2041 in words, we shall use the place value chart. In the place value chart, put 2 in the thousands, 0 in the hundreds, 4 in the tens, and 1 in the ones, respectively. Let us make a place value chart to write the number 2041 in words.\n\n| Thousands | Hundreds | Tens | Ones |\n| 2 | 0 | 4 | 1 |\n\nThus, we can write the expanded form as\n\n2 × Thousand + 0 × Hundred + 4 × Ten + 1 × One\n\n= 2 × 1000 + 0 × 100 + 4 × 10 + 1 × 1\n\n= 2000 + 0 + 40 + 1\n\n= 2041\n\n= Two thousand forty-one.\n\n2041 is a natural number, the successor of 2040 and the predecessor of 2042.\n\n2041 in words – Two thousand forty-one\n\n• Is 2041 an odd number? – Yes\n• Is 2041 an even number? – No\n• Is 2041 a perfect square number? – No\n• Is 2041 a perfect cube number? – No\n• Is 2041 a prime number? – No\n• Is 2041 a composite number? – Yes\n\n## Frequently Asked Questions on 2041 in Words\n\n### How to write 2041 in words?\n\n2041 in words is written as Two thousand forty-one.\n\n### How to write 2041 in the International and Indian System of Numerals?\n\nIn both systems of numerals, 2041 in words is written as Two thousand forty-one.\n\n### How to write 2041 in a place value chart?\n\nIn the place value chart, write 2 in the thousands, 0 in the hundreds, 4 in the tens, and 1 in the ones, respectively."
]
| [
null,
"https://www.facebook.com/tr",
null,
"https://cdn1.byjus.com/wp-content/uploads/2022/03/Number-in-word-2041.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.83490545,"math_prob":0.85573316,"size":1306,"snap":"2022-05-2022-21","text_gpt3_token_len":407,"char_repetition_ratio":0.18586789,"word_repetition_ratio":0.06545454,"special_character_ratio":0.37366003,"punctuation_ratio":0.101123594,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9849697,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-22T13:43:03Z\",\"WARC-Record-ID\":\"<urn:uuid:e16cd31b-57ed-47b5-9160-74c288306e3e>\",\"Content-Length\":\"673914\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:39644396-1f49-4b7e-93f2-5a94dd720907>\",\"WARC-Concurrent-To\":\"<urn:uuid:d3a34cfd-4f0a-470b-9dc6-8ed6798e4bc3>\",\"WARC-IP-Address\":\"162.159.129.41\",\"WARC-Target-URI\":\"https://byjus.com/maths/2041-in-words/\",\"WARC-Payload-Digest\":\"sha1:UA4USRMVF2H53OQ2DL2EAJCTM67ONIHA\",\"WARC-Block-Digest\":\"sha1:DCL2TSUGHWSXWGN5AMFRUM2XU7TLP7GA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662545548.56_warc_CC-MAIN-20220522125835-20220522155835-00283.warc.gz\"}"} |
https://www.sanfoundry.com/cognitive-radio-questions-answers-campus-interviews/ | [
"# Cognitive Radio Questions and Answers – Next Generation Wireless Network – Spectrum Management – 2\n\nThis set of Cognitive Radio Questions and Answers for Campus interviews focuses on “Next Generation Wireless Network – Spectrum Management – 2”.\n\n1. Which among the following is the expression for channel capacity as per the Shannon-Hartley theorem?\nB is bandwidth, N is the average noise power, and S is the average received signal power.\na) C=B+log2(1+ $$\\frac{N}{S})$$\nb) C=B+log2(1+ $$\\frac{S}{N})$$\nc) C=B log2(1+ $$\\frac{N}{S})$$\nd) C=B log2(1+ $$\\frac{S}{N})$$\n\nExplanation: The Shannon-Hartley theorem states the maximum rate of transmission over a channel of specified bandwidth containing noise. The calculated channel capacity creates an upper bound for maximum error-free information transmission.\nC=B log2(1+ $$\\frac{S}{N})$$\nHere B is bandwidth, N is the average noise power, and S is the average received signal power.\n\n2. Which among the following is an expression for spectrum capacity in OFDM-based xG networks?\nα refers to a set of unused spectrum units, G(f) is the channel power gain at frequency f, S_0 is the signal power per unit frequency and N_0 is the noise power per unit frequency.\na) C=∫α$$\\frac{1}{2}$$log2(1+$$\\frac{G(f)S_0}{N_0})$$\nb) C=∫α$$\\frac{1}{2}$$log2(1+$$\\frac{S_0}{N_0})$$\nc) C=∫αB log2(1+$$\\frac{G(f)S_0}{N_0})$$\nd) C=∫αB log2(1+$$\\frac{G(f)N_0}{S_0})$$\n\nExplanation: Orthogonal Frequency Division Multiplexing is a digital modulation scheme. The data stream is divided into several closely spaced subcarrier signals at different frequencies. The spectrum capacity of OFDM xG networks is given by the expression,\nC=∫α$$\\frac{1}{2}$$log2(1+$$\\frac{G(f)S_0}{N_0})$$\nHere α refers to a set of unused spectrum units, G(f) is the channel power gain at frequency f, S_0 is the signal power per unit frequency, and N_0 is the noise power per unit frequency.\n\n3. 
Which among the following is not selected in accordance with the user requirement?\na) Interference\nb) Data rate\nc) Tolerable error rate\nd) Transmission mode\n\nExplanation: Following the characterization of the spectrum, the spectrum band for transmission is selected by weighing the spectrum characteristics and the quality of service requirements. The spectrum manager fixes the data rate, tolerable error rate, transmission mode, the bandwidth of transmission, and the delay bound according to the user requirements.\n\n4. Which among the following is not a challenge for spectrum management?\na) Interference avoidance\nb) Quality of service awareness\nc) Seamless communication\nd) Interference temperature measurement\n\nExplanation: xG users should not cause interference to primary user communication. The handoff to a different spectrum band upon arrival of the primary user should be seamless. The dynamic spectrum environment and quality of service requirements should be analyzed and managed for effective communication.\n\n5. Which among the following is a challenge for spectrum decision models?\na) Combining multiple characterization parameters\nb) Supporting OFDM networks\nc) Supporting multiple spectrum bands\nd) Maintaining a decision model for each characterization parameter\n\nExplanation: Spectrum decision models can be built for various characterization parameters such as signal to noise, data rate, error rate, and other parameters that affect the quality of service. However, a decision model combining several spectrum parameters is not available. Spectrum decision models support OFDM networks that have multiple spectrum bands operating simultaneously for transmission.\n\n6. 
Which among the following is not a requirement for multiple spectrum band transmission by an xG user?\na) Spectrum holes\nb) Compatibility with internal policies\nc) Contiguous spectrum band\nd) Compatibility with quality of service\n\nExplanation: xG users can transmit packets on multiple spectrum bands provided the available spectrum meets the quality of service requirements and the spectrum policies required for communication. However, it does not require the multiple spectrum bands to be contiguous. This exhibits a vast improvement in communication quality during the spectrum handoff.\n\n7. Which among the following is not an advantage of avoiding contiguous spectrum bands for transmission?\na) Low delay\nb) Low power consumption\nc) Low interference\nd) Mitigation of quality of service degradation\n\nExplanation: The primary advantage of using non-contiguous spectrum bands is the mitigation of the quality of service degradation. This is because when a user has to vacate a spectrum band on the arrival of the primary user, the other bands continue to transmit, maintaining communication. Low power is consumed in each band and less interference with the primary user is achieved.\n\n8. Which among the following replaces the static sine pulses of OFDM to improve flexibility?\na) Wavelet bases\nb) Triangular pulses\nc) Impulse bases\nd) Ramp pulses\n\nExplanation: In multi-carrier wavelet packet modulation, wavelet bases replace the static sine/cosine pulses of OFDM. This technique provides higher sideband suppression. It also reduces inter-channel interference and inter-symbol interference.\n\n9. Which among the following is not a disadvantage of using OFDM?\na) Complex computation is necessary\nb) Sensitive to frequency offset\nc) High peak to average power ratio\nd) Low data rate\n\nExplanation: OFDM requires only the computation of fast Fourier transform and inverse fast Fourier transform of the signal to be transmitted. 
However, it requires the subcarriers to remain orthogonal and exhibits a large peak to average power ratio due to the presence of complex sinusoids in time domain OFDM signals.\n\n10. What is the full form of CE-OFDM?\na) Closed Envelope Orthogonal Frequency Division Multiplexing\nb) Constant Envelope Orthogonal Frequency Division Multiplexing\nc) Constant Evolute Orthogonal Frequency Division Multiplexing\nd) Closed Evolute Orthogonal Frequency Division Multiplexing\n\nExplanation: In Constant Envelope Orthogonal Frequency Division Multiplexing, the complex modulated signals are positioned in a complex conjugated arrangement to obtain a real value inverse fast Fourier transform output. Also, phase modulation is applied in the real value time domain. This technique reduces the high peak to average power ratio of OFDM.\n\n11. Spectrum decision over heterogeneous spectrum bands with different characteristics is a challenge in spectrum management.\na) True\nb) False\n\nExplanation: xG users operate in licensed and unlicensed spectrum bands. In licensed spectrum bands, the operations of the primary users are collected during spectrum analysis and the decision taken should not affect the primary user transmission. Likewise, intelligent spectrum sharing algorithms should be selected for xG users operating on the unlicensed bands, which have equal rights for spectrum access.\n\n12. Which among the following terms should replace the labels ‘A’, ‘B’, and ‘C’ in the diagram?",
null,
"a) A – Sensing information, B – Spectrum sensing, C – Reconfiguration\nb) A – Sensing information, B – Reconfiguration, C – Spectrum sensing\nc) A – Spectrum sensing, B – Reconfiguration, C – Sensing information\nd) A – Reconfiguration, B – Spectrum sensing, C – Sensing information\n\nExplanation: Spectrum sensing examines the available spectrum bands, gathers information about the spectrum band, and detects spectrum holes. It is implemented in the physical layer. In-band sensing is carried out to sense the arrival of primary users and transfer the information to the spectrum mobility function unit. Reconfiguration is necessary to alter the transmission parameters in accordance with the dynamic radio environment.",
null,
""
]
| [
null,
"data:image/svg+xml,%3Csvg%20xmlns=%22http://www.w3.org/2000/svg%22%20viewBox=%220%200%20532%20211%22%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns=%22http://www.w3.org/2000/svg%22%20viewBox=%220%200%20150%20150%22%3E%3C/svg%3E",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.83053076,"math_prob":0.94573134,"size":8198,"snap":"2022-27-2022-33","text_gpt3_token_len":1811,"char_repetition_ratio":0.13436662,"word_repetition_ratio":0.10322048,"special_character_ratio":0.20687972,"punctuation_ratio":0.094796866,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97042763,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-27T21:44:13Z\",\"WARC-Record-ID\":\"<urn:uuid:42250e09-a281-431a-9a75-7e3654e84faa>\",\"Content-Length\":\"156100\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:43da5970-86e9-4b44-b4fc-8df49c7189eb>\",\"WARC-Concurrent-To\":\"<urn:uuid:aa1fda4f-a2b0-41dc-b4a6-5194ad9745ac>\",\"WARC-IP-Address\":\"172.67.82.182\",\"WARC-Target-URI\":\"https://www.sanfoundry.com/cognitive-radio-questions-answers-campus-interviews/\",\"WARC-Payload-Digest\":\"sha1:JO67PSUOJQSPHSJITY4RAB2JYRR66FG5\",\"WARC-Block-Digest\":\"sha1:IVDY7OBABYNQUYRJEIN4GZFXIK7TB7JF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103341778.23_warc_CC-MAIN-20220627195131-20220627225131-00792.warc.gz\"}"} |
https://answers.everydaycalculation.com/multiply-fractions/5-42-times-24-81 | [
"Solutions by everydaycalculation.com\n\n## Multiply 5/42 with 24/81\n\nThis multiplication involving fractions can also be rephrased as \"What is 5/42 of 24/81?\"\n\n5/42 × 24/81 is 20/567.\n\n#### Steps for multiplying fractions\n\n1. Multiply the numerators and denominators separately: 5/42 × 24/81 = (5 × 24)/(42 × 81) = 120/3402\n2. After reducing the fraction, the answer is 20/567",
null,
""
]
| [
null,
"https://answers.everydaycalculation.com/mathstep-app-icon.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8903558,"math_prob":0.9604467,"size":388,"snap":"2019-43-2019-47","text_gpt3_token_len":134,"char_repetition_ratio":0.15104167,"word_repetition_ratio":0.0,"special_character_ratio":0.41237113,"punctuation_ratio":0.07317073,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98375845,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-15T05:53:51Z\",\"WARC-Record-ID\":\"<urn:uuid:49e01f0a-fb68-45f7-8e76-babe871dce18>\",\"Content-Length\":\"6947\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4cb7524f-c5d7-4217-8156-9309046b5ad7>\",\"WARC-Concurrent-To\":\"<urn:uuid:3f51e8cf-95b4-4a6d-92b2-447c3d331e85>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/multiply-fractions/5-42-times-24-81\",\"WARC-Payload-Digest\":\"sha1:2B4XFNFOKIWAVBWZBVLJOKBWO73SY2RG\",\"WARC-Block-Digest\":\"sha1:ZJOHWHQRNIEQ42PUPGWFLQIHQIIVZDSX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496668585.12_warc_CC-MAIN-20191115042541-20191115070541-00416.warc.gz\"}"} |
https://www.physicsforums.com/threads/distance-covered-in-free-fall.630768/ | [
"# Distance covered in free fall\n\n## Main Question or Discussion Point\n\nHi,\n\nAs I understand, the distance covered by an object in free fall is described as d = $\frac{1}{2}gt^2$ or d = 5t^2 on earth. Objects accelerate at 10 m/s^2.\n\nUsing the first equation, if an object has fallen for 5 seconds then it has covered a distance of 125 meters. If objects, however, accelerate at 10 m/s^2, then why hasn't the object fallen 50 meters?\n\nThanks,\n\nharuspex\nAs I understand, the distance covered by an object in free fall is described as d = $\frac{1}{2}gt^2$ or d = 5t^2 on earth. Objects accelerate at 10 m/s^2."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9123616,"math_prob":0.98713696,"size":796,"snap":"2020-24-2020-29","text_gpt3_token_len":227,"char_repetition_ratio":0.121212125,"word_repetition_ratio":0.8794326,"special_character_ratio":0.30025125,"punctuation_ratio":0.12209302,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9785859,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-05-27T09:58:39Z\",\"WARC-Record-ID\":\"<urn:uuid:7d52aacd-7363-468f-b525-f3e4cf538ce2>\",\"Content-Length\":\"72724\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0f701d71-b1cf-463f-a4e0-4326ec520223>\",\"WARC-Concurrent-To\":\"<urn:uuid:2d7597a3-ba4c-4ad2-b555-6c4bc3ba24d3>\",\"WARC-IP-Address\":\"23.111.143.85\",\"WARC-Target-URI\":\"https://www.physicsforums.com/threads/distance-covered-in-free-fall.630768/\",\"WARC-Payload-Digest\":\"sha1:APBYKJ3VE6ZFVJ3GSIORWBSSVLWOZ2DR\",\"WARC-Block-Digest\":\"sha1:4GEJGMMSNXJ4RVAUIRZTA62F5AI7QLLK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347392142.20_warc_CC-MAIN-20200527075559-20200527105559-00054.warc.gz\"}"} |
http://academicteaching.net/8th-grade-math-worksheets-printable-with-answers/ | [
"",
null,
"### Printable Math Games With Answers New Collection Of Free Fun Math Worksheets For 8th Grade",
null,
"### 5th Grade Math Measurement Worksheets Word Problems Longest Rivers 8th Grade Math Worksheets Printable With Answers",
null,
"",
null,
"",
null,
"",
null,
"### Free Printable 8th Grade Algebra Worksheets Algebra",
null,
"",
null,
"",
null,
"### Printable Math Worksheets For Grade 8 Images Worksheet Math For Kids 8th Grade Math Worksheets Printable",
null,
"",
null,
"### Balancing Math Equations",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"### 8th Grade Math Algebra Worksheets Printable With Answers New Free 585 650",
null,
"",
null,
"### Free 8th Grade Worksheets Two Ways To Print This Free 8th Grade Math Educational Worksheet",
null,
"### Brilliant Ideas Of 8th Grade Math Worksheets Printable Free Printable 8th Grade Math Worksheets For All",
null,
""
]
| [
null,
"http://academicteaching.net/img/antihrap/8th-grade-math-worksheets-printable-with-answers/700-10_8th-grade-math-worksheets-printable-with-answers_printable-math-games-with-answers-new-collection-of-free-fun-math-worksheets-for-8th-grade.png",
null,
"http://academicteaching.net/img/antihrap/8th-grade-math-worksheets-printable-with-answers/700-11_8th-grade-math-worksheets-printable-with-answers_5th-grade-math-measurement-worksheets-word-problems-longest-rivers-8th-grade-math-worksheets-printable-with-answers.gif",
null,
"http://academicteaching.net/img/antihrap/8th-grade-math-worksheets-printable-with-answers/700-12_8th-grade-math-worksheets-printable-with-answers_8th-grade-math-worksheets-printable-with-answers-bitsandpixels-info.jpg",
null,
"http://academicteaching.net/img/antihrap/8th-grade-math-worksheets-printable-with-answers/700-13_8th-grade-math-worksheets-printable-with-answers_fifth-grade-math-assessment-printable-new-8th-grade-math-worksheets-and-answers-valid-8th-grade-line.png",
null,
"http://academicteaching.net/img/antihrap/8th-grade-math-worksheets-printable-with-answers/700-14_8th-grade-math-worksheets-printable-with-answers_8th-grade-math-worksheets-printable-with-answers-the-best-worksheets-image-collection-download-and-share-worksheets.jpg",
null,
"http://academicteaching.net/img/antihrap/8th-grade-math-worksheets-printable-with-answers/700-15_8th-grade-math-worksheets-printable-with-answers_free-printable-8th-grade-algebra-worksheets-algebra-.jpg",
null,
"http://academicteaching.net/img/antihrap/8th-grade-math-worksheets-printable-with-answers/700-16_8th-grade-math-worksheets-printable-with-answers_impressive-math-worksheets-for-8th-grade-pre-algebra-printable-with-answers-love-road-trip.jpg",
null,
"http://academicteaching.net/img/antihrap/8th-grade-math-worksheets-printable-with-answers/700-17_8th-grade-math-worksheets-printable-with-answers_by-grade-levels.gif",
null,
"http://academicteaching.net/img/antihrap/8th-grade-math-worksheets-printable-with-answers/700-18_8th-grade-math-worksheets-printable-with-answers_printable-math-worksheets-for-grade-8-images-worksheet-math-for-kids-8th-grade-math-worksheets-printable.gif",
null,
"http://academicteaching.net/img/antihrap/8th-grade-math-worksheets-printable-with-answers/700-19_8th-grade-math-worksheets-printable-with-answers_5th-grade-math-quiz-printable-valid-math-worksheets-and-answers-for-5th-grade-save-8th-grade.png",
null,
"http://academicteaching.net/img/antihrap/8th-grade-math-worksheets-printable-with-answers/700-1_8th-grade-math-worksheets-printable-with-answers_balancing-math-equations.png",
null,
"http://academicteaching.net/img/antihrap/8th-grade-math-worksheets-printable-with-answers/700-20_8th-grade-math-worksheets-printable-with-answers_8th-grade-math-worksheets-with-answer-key-download-them-and-try-to-solve.gif",
null,
"http://academicteaching.net/img/antihrap/8th-grade-math-worksheets-printable-with-answers/700-2_8th-grade-math-worksheets-printable-with-answers_8th-grade-math-sheets.png",
null,
"http://academicteaching.net/img/antihrap/8th-grade-math-worksheets-printable-with-answers/700-3_8th-grade-math-worksheets-printable-with-answers_.jpg",
null,
"http://academicteaching.net/img/antihrap/8th-grade-math-worksheets-printable-with-answers/700-4_8th-grade-math-worksheets-printable-with-answers_8th-grade-math-sheets.png",
null,
"http://academicteaching.net/img/antihrap/8th-grade-math-worksheets-printable-with-answers/700-5_8th-grade-math-worksheets-printable-with-answers_8th-grade-math-algebra-worksheets-printable-with-answers-new-free-585-650.jpg",
null,
"http://academicteaching.net/img/antihrap/8th-grade-math-worksheets-printable-with-answers/700-6_8th-grade-math-worksheets-printable-with-answers_by-grade-levels.gif",
null,
"http://academicteaching.net/img/antihrap/8th-grade-math-worksheets-printable-with-answers/700-7_8th-grade-math-worksheets-printable-with-answers_free-8th-grade-worksheets-two-ways-to-print-this-free-8th-grade-math-educational-worksheet-.png",
null,
"http://academicteaching.net/img/antihrap/8th-grade-math-worksheets-printable-with-answers/700-8_8th-grade-math-worksheets-printable-with-answers_brilliant-ideas-of-8th-grade-math-worksheets-printable-free-printable-8th-grade-math-worksheets-for-all.jpg",
null,
"http://academicteaching.net/img/antihrap/8th-grade-math-worksheets-printable-with-answers/700-9_8th-grade-math-worksheets-printable-with-answers_8th-grade-math-worksheets-printable-with-answers.jpg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8481202,"math_prob":0.4593603,"size":1435,"snap":"2020-24-2020-29","text_gpt3_token_len":336,"char_repetition_ratio":0.3368274,"word_repetition_ratio":0.12077295,"special_character_ratio":0.17212543,"punctuation_ratio":0.0047393367,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9901774,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-05-30T22:00:28Z\",\"WARC-Record-ID\":\"<urn:uuid:838f12ee-a47b-424c-9521-0d632625c268>\",\"Content-Length\":\"24135\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ddcc9406-d110-4ee5-b4a4-431c6497afad>\",\"WARC-Concurrent-To\":\"<urn:uuid:29f8e36a-7b0a-44fc-abf6-8cc502de831a>\",\"WARC-IP-Address\":\"173.208.172.20\",\"WARC-Target-URI\":\"http://academicteaching.net/8th-grade-math-worksheets-printable-with-answers/\",\"WARC-Payload-Digest\":\"sha1:BYBKW46P42KO64MV4VAU54JBQVC4SB6P\",\"WARC-Block-Digest\":\"sha1:O2MGXGDYWBWSLYPBFIGT3RVIMN7KCYPS\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347410352.47_warc_CC-MAIN-20200530200643-20200530230643-00390.warc.gz\"}"} |
https://www.qfs.de/en/qf-test-manual/lc/manual-en-tech_imagealgorithmdetails.html | [
"The classic 'Check image' node is only minimally tolerant towards deviations. Using the default algorithm of comparing pixel by pixel it is not possible to check images that are generated in a not-quite-deterministic way or differ in size.\n\nBy using the attribute 'Algorithm for image comparison' it is possible to define a special algorithm which is tolerant towards certain image deviations. The attribute must start with the algorithm definition in the form algorithm=<algorithm> followed by all required parameters, separated by semicolons. Parameters may be defined in any order, and use of variables is possible, e.g.:\n\n`algorithm=<algo>;parameter1=value1;parameter2=value2;expected=\\$(expected)`\n\n3.5.1+ Since QF-Test 3.5.1 the definition does not need to start with algorithm=<algorithm> but can simply begin with <algorithm>.\nIt is also no longer necessary to define the parameter 'expected'. QF-Test uses a default value if it is not set. Please see below for more information.\n\nA detailed description of the available algorithms and their parameters is provided in the following section. For illustration, explanations are based on their effects on the following image:",
null,
"Figure 54.1: Original image\n\nIn the related run-log (see section 8.1) of a failed check you have the opportunity to analyze the results of the algorithm as well as the result of probability calculation.\nIf you activate the option Log successful advanced image checks all tolerant image checks will be logged for further analysis.\n\nDescription\nThe classic - or default - image check compares the color value of every single expected and actual pixel. If at least one expected pixel differs from the actual pixel the check fails. The option Tolerance for checking images defines the tolerance for comparing pixel values.\nPurpose\nThis pixel based check is suitable if you expect an exact image match with minimal tolerances or any deviations. Whenever your application renders the component not fully deterministically, this algorithm is not suitable.\nExample\nThe classic image check doesn't transform the image, thus the result looks identical to the original image.",
null,
"Figure 54.2: Classic image check\n\nThe classic image check is used when the 'Algorithm for image comparison' attribute is empty.\n\nDescription\nThis algorithm is similar to the classic algorithm, but accepts a certain number of unexpected pixels. It splits every pixel into its three sub-pixels red, green and blue. Afterwards it checks every actual color value against the expected color value. The final result is the number of identical pixels divided by the total number of pixels. The calculated result is checked against an expected value.\nPurpose\nIf your images are not rendered fully deterministically but you accept a certain percentage of unexpected pixels, this algorithm may be useful.\nIt is not suitable, however, if the actual images tend to have shifts or distortions.\nExample\nThe result image of the exemplary algorithm\n`algorithm=identity;expected=0.95`\nlooks identical to the original image because this algorithm does not manipulate the image.",
null,
"Figure 54.3: Pixel-based identity check\nParameters\nalgorithm=identity\nThe 'Pixel based identity check' should be used.\nexpected (optional, but recommended)\nDefines which probability you expect.\nValid values are between 0.0 and 1.0. If not defined, 0.98 is used.\nresize (optional)\nDefines whether the actual image should be resized before calculation to match the size of the expected image.\nValid values are \"true\" and \"false\".\nfind (optional)\nDefines an image-in-image search.\nA detailed description of this parameter can be found in subsection 54.3.1.\nDescription\nThis algorithm splits every pixel into its three sub-pixels red, green and blue. Afterwards it checks every actual color value against the expected color value to calculate a percentage similarity. All percentage deviations are added up and used for calculation. The final result is the average deviation over all color values and all pixels. The calculated result is checked against an expected value.\nPurpose\nIf your images are not rendered fully deterministically but you accept a certain deviation, this algorithm is a possible candidate for your purpose.\nIf you accept deviations for some pixels, but the average deviation over the whole image is small, this algorithm is also suitable.\nIt is not suitable, however, if the actual images tend to have shifts or distortions.\nExample\nThe result image of the exemplary algorithm\n`algorithm=similarity;expected=0.95`\nlooks identical to the original image because this algorithm does not manipulate the image.",
null,
"Figure 54.4: Pixel-based similarity check\nParameters\nalgorithm=similarity\nThe 'Pixel based similarity check' should be used.\nexpected (optional, but recommended)\nDefines which probability you expect.\nValid values are between 0.0 and 1.0. If not defined, 0.98 is used.\nresize (optional)\nDefines whether the actual image should be resized before calculation to match the size of the expected image.\nValid values are \"true\" and \"false\".\nfind (optional)\nDefines an image-in-image search.\nA detailed description of this parameter can be found in subsection 54.3.1.\nDescription\nThis algorithm partitions the image into quadratic blocks with a selectable size. The color value of each of these blocks is calculated as the average of the color values of the pixels the block contains. If the width or height of the image is not a multiple of the block size, the blocks at the right and bottom edge are cropped and weighted accordingly.\nThe actual blocks are checked against the expected blocks. The final result is the number of identical blocks divided by the total number of blocks.\nPurpose\nThis algorithm's purpose is to check an image that differs only in some parts but is identical in the remaining parts.\nExample\nThe exemplary algorithm\n`algorithm=block;size=10;expected=0.95`\nresults in the following image:",
null,
"Figure 54.5: Block-based identity check\nParameters\nalgorithm=block\nThe algorithm 'Block-based identity check' should be used.\nsize\nDefines the size of each block.\nValid values are between 1 and the image size.\nexpected (optional, but recommended)\nDefines the minimal match probability for the check to succeed.\nValid values are between 0.0 and 1.0. If not defined, 0.98 is used.\nresize (optional)\nDefines whether the actual image should be resized before calculation to match the size of the expected image.\nValid values are \"true\" and \"false\".\nfind (optional)\nDefines an image-in-image search.\nA detailed description of this parameter can be found in subsection 54.3.1.\nDescription\nThis algorithm also partitions the image into quadratic blocks with a selectable size. The color value of each of these blocks is calculated as the average of the color values of the pixels the block contains. If the width or height of the image is not a multiple of the block size, the blocks at the right and bottom edge are cropped and weighted accordingly.\nThe color value of each expected block is checked against the actual block. Their color values are analyzed for percentage similarity. The final result is the average similarity of all blocks with their weight taken into account.\nPurpose\nThis algorithm is suitable for checking images with similar color variances.\nExample\nThe exemplary algorithm\n`algorithm=blocksimilarity;size=5;expected=0.95`\nresults in the following image:",
null,
"Figure 54.6: Block-based similarity check\nParameters\nalgorithm=blocksimilarity\nThe algorithm 'Block-based similarity check' should be used.\nsize\nDefines the size of each block.\nValid values are between 1 and the image size.\nexpected (optional, but recommended)\nDefines the minimal match probability for the check to succeed.\nValid values are between 0.0 and 1.0. If not defined, 0.98 is used.\nresize (optional)\nDefines whether the actual image should be resized before calculation to match the size of the expected image.\nValid values are \"true\" and \"false\".\nfind (optional)\nDefines an image-in-image search.\nA detailed description of this parameter can be found in subsection 54.3.1.\nDescription\nTo create a histogram, an image is first broken into its three base colors red, green and blue. Then the color values for each pixel are analyzed to partition them into a definable number of categories (known as buckets when talking about histograms). The actual fill level of each bucket is compared to the expected level. The result of the algorithm is a comparison of the relative frequencies of color categories.\nPurpose\nHistograms are used for many scenarios. For example it is possible to check for color tendencies or do brightness analyses.\nHowever, histograms are not suitable for checking rather plain-colored images.\nExample\nThe exemplary algorithm\n`algorithm=histogram;buckets=64;expected=0.95`\nresults in the following image:",
null,
"Figure 54.7: Histogram\nParameters\nalgorithm=histogram\nA 'Histogram' should be used for this image check.\nbuckets\nDefines how many buckets to use.\nValid values are a power of 2 between 2 and 256.\nexpected (optional, but recommended)\nDefines the minimal match probability for the check to succeed.\nValid values are between 0.0 and 1.0. If not defined, 0.98 is used.\nresize (optional)\nDefines whether the actual image should be resized before calculation to match the size of the expected image.\nValid values are \"true\" and \"false\".\nfind (optional)\nDefines an image-in-image search.\nA detailed description of this parameter can be found in subsection 54.3.1.\nDescription\nThe Discrete Cosine Transformation is a real-valued, discrete, linear, orthogonal transformation which transforms the discrete signal from the spatial domain to the frequency domain.\nAfter transforming an image, you can eliminate high-order (fast oscillating) frequencies. The remaining low-order (slow oscillating) frequencies together with the steady component (zeroth frequency = 0*cos(x) + y) can now be analyzed. You can define how many frequencies per basic color should be used for this image check. You can also specify a tolerance so that cosine oscillations which actually differ slightly are still accepted as identical. High-order frequencies get weighted less than low-order frequencies when calculating the result.\nPurpose\nThe Discrete Cosine Transformation is suitable for many kinds of image checks that require certain tolerances. The more frequencies are used for analysis the sharper the image check is.\nExample\nThe exemplary algorithm\n`algorithm=dct;frequencies=20;tolerance=0.1;expected=0.95`\nresults in the following image:",
null,
"Figure 54.8: Analysis with Discrete Cosine Transformation\nParameters\nalgorithm=dct\n'Analysis with Discrete Cosine Transformation' should be used for this image check.\nfrequencies\nDefines how many frequencies to analyze.\nValid values are between 0 (steady component only) and the area of the image.\nThe fewer frequencies are analyzed, the more tolerant the check is. The tolerance is also dependent on the size of the image.\ntolerance\nDefines the (non-linear) tolerance for accepting different cosine oscillations as identical.\nValid values are between 0.0 and 1.0.\nThe value 1.0 means every image matches every other image, because the maximum difference of each frequency is tolerated. A value of 0.0 means frequencies only match if they are exactly the same. A value of 0.1 is a good starting point because only quite similar frequencies are accepted as identical.\nexpected (optional, but recommended)\nDefines the minimal match probability for the check to succeed.\nValid values are between 0.0 and 1.0. If not defined, 0.98 is used.\nresize (optional)\nDefines whether the actual image should be resized before calculation to match the size of the expected image.\nValid values are \"true\" and \"false\".\nfind (optional)\nDefines an image-in-image search.\nA detailed description of this parameter can be found in subsection 54.3.1.\nDescription\nWhen using this algorithm the image is first partitioned into quadratic blocks with a selectable size (see subsection 54.2.4). Afterwards every partition is analyzed using a Discrete Cosine Transformation (see subsection 54.2.7).
The final result is the average of all results of these Discrete Cosine Transformations with consideration of the blocks and their weight.\nPurpose\nThe Discrete Cosine Transformation used on the whole image deviates strongly in case of significant brightness differences occurring in the middle of the image because then the steady component (zeroth frequency), which is the highest weighted part, varies strongly. The partitioning circumvents this behavior because now only the affected partitions result in intense deviations while the other partitions stay untouched.\nExample\nThe exemplary algorithm\n`algorithm=dctblock;size=32;frequencies=4;tolerance=0.1;expected=0.95`\nresults in the following image:",
null,
"Figure 54.9: Block-based analysis with Discrete Cosine Transformation\nParameters\nalgorithm=dctblock\n'Blocks for analysis with Discrete Cosine Transformation' should be used for this image check.\nsize\nDefines the size of each block.\nValid values are between 1 and the image size.\nfrequencies\nDefines how many frequencies to analyze.\nValid values are between 0 (steady component only) and the area of a block.\nThe fewer frequencies are analyzed, the more tolerant the check is. The tolerance is also dependent on the size of the image.\ntolerance\nDefines the (non-linear) tolerance for accepting different cosine oscillations as identical.\nValid values are between 0.0 and 1.0.\nThe value 1.0 means every image matches every other image, because the maximum difference of each frequency is tolerated. A value of 0.0 means frequencies only match if they are exactly the same. A value of 0.1 is a good starting point because only quite similar frequencies are accepted as identical.\nexpected (optional, but recommended)\nDefines the minimal match probability for the check to succeed.\nValid values are between 0.0 and 1.0. If not defined, 0.98 is used.\nresize (optional)\nDefines whether the actual image should be resized before calculation to match the size of the expected image.\nValid values are \"true\" and \"false\".\nfind (optional)\nDefines an image-in-image search.\nA detailed description of this parameter can be found in subsection 54.3.1.\nDescription\nThis algorithm shrinks the image to a selectable percentage of its original size. Afterwards the image is resized back to its original size using a bilinear filter. This filter causes a blurring, because every color value is calculated using neighboring pixels.\nThe final result is the average deviation over all color values and all pixels of these transformed images.\nPurpose\nDepending on the chosen sharpness, the images lose any desired amount of image information.
Thus this algorithm is valuable for nearly any scenario.\nExample\nThe exemplary algorithm\n`algorithm=bilinear;sharpness=0.2;expected=0.95`\nresults in the following image:",
null,
"Figure 54.10: Bilinear Filter\nParameters\nalgorithm=bilinear\nA 'Bilinear Filter' should be used for this image check.\nsharpness\nDefines the sharpness of this bilinear filter.\nValid values are between 0.0 (complete loss of information) and 1.0 (no loss of information).\nThe sharpness is a linear parameter. This means a value of 0.5 eliminates exactly half (plus or minus rounding to whole pixels) of the information.\nexpected (optional, but recommended)\nDefines the minimal match probability for the check to succeed.\nValid values are between 0.0 and 1.0. If not defined, 0.98 is used.\nresize (optional)\nDefines whether the actual image should be resized before calculation to match the size of the expected image.\nValid values are \"true\" and \"false\".\nfind (optional)\nDefines an image-in-image search.\nA detailed description of this parameter can be found in subsection 54.3.1.\nDescription\nThe image-in-image search allows you to find an expected image within a (larger) image. The check is successful when the expected image can be found anywhere using the defined algorithm. Furthermore, you can determine the position of the match.\nThe following combinations of parameters are valid:\n\n`find=best` or `find=anywhere`\n`find=best(resultX, resultY)` or `find=anywhere(resultX, resultY)`\n\nPurpose\nThe image-in-image search allows you to compare images if you don't know the exact position and thus cannot define an offset. The search can be combined with any algorithm and is thus valuable for any purpose.\nExample\nThe exemplary algorithm\n`algorithm=similarity;expected=0.95;find=best(resultX,resultY)`\nuses a pixel-based similarity check (see subsection 54.2.3) to find an image of Q as part of the full image. The got image with the highlighted region can be found within the run-log. In addition, the variables resultX and resultY are set to the location of the found image.",
null,
"Figure 54.11: Image-in-image search: Expected image",
null,
"Figure 54.12: Image-in-image search: Got image\nParameters\nfind=best\nDefines that the best match should be used.\nfind=anywhere\nDefines that the first match which exceeds the expected match probability should be used.\nThe image-in-image search uses multiple threads and thus finding anywhere is non-deterministic.\nresultX\n`resultX` is the name of a QF-Test variable which holds the x-position of the found image.\nIf a variable for the x-position is defined, a variable for the y-position has to be defined as well (see syntax above).\nresultY\n`resultY` is the name of a QF-Test variable which holds the y-position of the found image. If a variable for the y-position is defined, a variable for the x-position has to be defined as well (see syntax above)."
]
| [
null,
"https://archive.qfs.de/qftest/manual/en/images/imgAdv_classic.png",
null,
"https://archive.qfs.de/qftest/manual/en/images/imgAdv_classic.png",
null,
"https://archive.qfs.de/qftest/manual/en/images/imgAdv_identity.png",
null,
"https://archive.qfs.de/qftest/manual/en/images/imgAdv_similarity.png",
null,
"https://archive.qfs.de/qftest/manual/en/images/imgAdv_block.png",
null,
"https://archive.qfs.de/qftest/manual/en/images/imgAdv_blocksimilarity.png",
null,
"https://archive.qfs.de/qftest/manual/en/images/imgAdv_histogram.png",
null,
"https://archive.qfs.de/qftest/manual/en/images/imgAdv_dct.png",
null,
"https://archive.qfs.de/qftest/manual/en/images/imgAdv_dctblock.png",
null,
"https://archive.qfs.de/qftest/manual/en/images/imgAdv_bilinear.png",
null,
"https://archive.qfs.de/qftest/manual/en/images/imgAdv_findExp.png",
null,
"https://archive.qfs.de/qftest/manual/en/images/imgAdv_findGot.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.7998985,"math_prob":0.95127827,"size":17350,"snap":"2021-21-2021-25","text_gpt3_token_len":3860,"char_repetition_ratio":0.16205466,"word_repetition_ratio":0.48470503,"special_character_ratio":0.21648414,"punctuation_ratio":0.11383928,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9828053,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24],"im_url_duplicate_count":[null,null,null,null,null,9,null,9,null,9,null,9,null,8,null,9,null,9,null,8,null,9,null,9,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-16T17:40:45Z\",\"WARC-Record-ID\":\"<urn:uuid:7d135b15-0e06-4f58-855a-d4b8359fb6e7>\",\"Content-Length\":\"136053\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:88c4dacc-9c91-484e-af9a-4b4bd5c9bea8>\",\"WARC-Concurrent-To\":\"<urn:uuid:6fed05ef-bd9b-4ccd-9361-7fe249e2451a>\",\"WARC-IP-Address\":\"212.11.229.229\",\"WARC-Target-URI\":\"https://www.qfs.de/en/qf-test-manual/lc/manual-en-tech_imagealgorithmdetails.html\",\"WARC-Payload-Digest\":\"sha1:K6EX2REBXVXGRHE2IOPN7IBJXOXB7VH5\",\"WARC-Block-Digest\":\"sha1:ATTW36QR4MUZV53H5FLYYMA5AWSLRCKV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487625967.33_warc_CC-MAIN-20210616155529-20210616185529-00118.warc.gz\"}"} |
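The block-based similarity check described in the QF-Test manual entry above is easy to sketch. The following plain-Python snippet is an illustrative reimplementation, not QF-Test's actual code; for simplicity it weights all blocks equally (the real check weights cropped edge blocks by area) and represents an "image" as a list of rows of (r, g, b) tuples.

```python
# Illustrative sketch of a block-based similarity check (algorithm=blocksimilarity).
# NOT QF-Test's actual code: all blocks are weighted equally here.

def block_colors(img, size):
    """Average color of each size x size block, keyed by block coordinates."""
    h, w = len(img), len(img[0])
    blocks = {}
    for by in range(0, h, size):
        for bx in range(0, w, size):
            pixels = [img[y][x]
                      for y in range(by, min(by + size, h))
                      for x in range(bx, min(bx + size, w))]
            n = len(pixels)
            blocks[(by // size, bx // size)] = tuple(
                sum(p[c] for p in pixels) / n for c in range(3))
    return blocks

def block_similarity(expected, actual, size):
    """Average per-channel similarity (1 - |diff| / 255) over all blocks."""
    eb, ab = block_colors(expected, size), block_colors(actual, size)
    sims = [1.0 - abs(eb[k][c] - ab[k][c]) / 255.0
            for k in eb for c in range(3)]
    return sum(sims) / len(sims)

# Two 4x4 "images" differing in a single pixel:
white = [[(255, 255, 255)] * 4 for _ in range(4)]
dotted = [row[:] for row in white]
dotted[0][0] = (0, 0, 0)

print(block_similarity(white, dotted, size=2))  # → 0.9375
```

With `expected=0.95` this pair would still pass, while a strict pixel-based identity check with the same threshold would fail it (15 of 16 pixels identical is only 0.9375).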
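The `find=best` image-in-image search from the same manual entry can likewise be sketched as a brute-force sliding window. QF-Test's real search is multi-threaded and far more optimized; this naive version only shows where a best-match probability and the resultX/resultY position come from.

```python
# Naive sketch of an image-in-image search with find=best: slide the expected
# image over every position of the larger "got" image and keep the position
# with the highest pixel-based similarity. Images are lists of rows of
# (r, g, b) tuples. Illustrative only, not QF-Test's implementation.

def pixel_similarity(expected, actual):
    sims = [1.0 - abs(e - a) / 255.0
            for erow, arow in zip(expected, actual)
            for ep, ap in zip(erow, arow)
            for e, a in zip(ep, ap)]
    return sum(sims) / len(sims)

def crop(img, x, y, w, h):
    return [row[x:x + w] for row in img[y:y + h]]

def find_best(expected, big):
    eh, ew = len(expected), len(expected[0])
    best = (-1.0, None, None)  # (probability, resultX, resultY)
    for y in range(len(big) - eh + 1):
        for x in range(len(big[0]) - ew + 1):
            s = pixel_similarity(expected, crop(big, x, y, ew, eh))
            if s > best[0]:
                best = (s, x, y)
    return best

big = [[(0, 0, 0)] * 5 for _ in range(5)]
for y, x in [(3, 2), (3, 3), (4, 2), (4, 3)]:
    big[y][x] = (255, 255, 255)            # one white 2x2 patch
patch = [[(255, 255, 255)] * 2 for _ in range(2)]

print(find_best(patch, big))  # → (1.0, 2, 3)
```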
https://jp.maplesoft.com/support/help/addons/view.aspx?path=index | [
"",
null,
"index - Maple Help\n\nindex\n\nconstruct an indexed expression",
null,
"Calling Sequence index(p, rest)",
null,
"Parameters\n\n p - expression or name to be indexed rest - (optional) expression sequence of arguments to be passed to p",
null,
"Description\n\n • The index(p, rest) calling sequence is equivalent to constructing the expression p[rest].\n • If p is an indexable expression, index(p, rest) evaluates to the result of indexing p by rest; otherwise, it simply returns the indexed expression p[rest]. For more about indexing, see selection.\n Note: Calling index with one argument is equivalent to p[], that is, p indexed with an empty index.",
null,
"Examples\n\n > index(f, s)\n f[s] (1)\n > index(g)\n g[] (2)\n > index(f, s, t, u, v)\n f[s, t, u, v] (3)\n > index([7, 4, 8, 9], 3)\n 8 (4)\n > map2(index, [a, b, c, d], [4, 2, 1, 3])\n [d, b, a, c] (5)",
null,
"Compatibility\n\n • The index command was introduced in Maple 2017."
]
| [
null,
"https://bat.bing.com/action/0",
null,
"https://jp.maplesoft.com/support/help/addons/arrow_down.gif",
null,
"https://jp.maplesoft.com/support/help/addons/arrow_down.gif",
null,
"https://jp.maplesoft.com/support/help/addons/arrow_down.gif",
null,
"https://jp.maplesoft.com/support/help/addons/arrow_down.gif",
null,
"https://jp.maplesoft.com/support/help/addons/arrow_down.gif",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.69537085,"math_prob":0.9997603,"size":974,"snap":"2023-14-2023-23","text_gpt3_token_len":261,"char_repetition_ratio":0.14432989,"word_repetition_ratio":0.0,"special_character_ratio":0.23100616,"punctuation_ratio":0.18721461,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.996342,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-02T14:13:57Z\",\"WARC-Record-ID\":\"<urn:uuid:cbf3e59c-ec57-4227-887c-770a832ebd00>\",\"Content-Length\":\"172220\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:487b518d-12c3-4934-a179-9391c48a9e4f>\",\"WARC-Concurrent-To\":\"<urn:uuid:6dad7e80-5b5a-419a-b61a-994d78a5957b>\",\"WARC-IP-Address\":\"199.71.183.28\",\"WARC-Target-URI\":\"https://jp.maplesoft.com/support/help/addons/view.aspx?path=index\",\"WARC-Payload-Digest\":\"sha1:PZFZNKDBH5TPHMNJSYWDDP64CYTKV43H\",\"WARC-Block-Digest\":\"sha1:KIORIEG5H5ZEXS4ZFYG6XCYHGM65O2H4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224648695.4_warc_CC-MAIN-20230602140602-20230602170602-00153.warc.gz\"}"} |
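For readers used to 0-based languages, the Maple examples above translate directly: `index(p, rest)` simply builds `p[rest]`, and `map2(index, …)` reorders a list by 1-based positions. A rough Python analogue (the helper names here are ad hoc, not part of any Maple or Python API):

```python
# Rough Python analogue of the Maple index examples. Maple lists are 1-based,
# hence the "- 1" when translating to Python's 0-based indexing.

def index(p, i):
    return p[i - 1]                     # Maple: index([7,4,8,9], 3) -> 8

def map2_index(p, positions):
    # Maple: map2(index, [a,b,c,d], [4,2,1,3]) -> [d, b, a, c]
    return [index(p, i) for i in positions]

print(index([7, 4, 8, 9], 3))                          # → 8
print(map2_index(["a", "b", "c", "d"], [4, 2, 1, 3]))  # → ['d', 'b', 'a', 'c']
```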
https://rdrr.io/cran/FitAR/man/sdfplot.numeric.html | [
"# sdfplot.numeric: Autoregressive Spectral Density Estimation for \"numeric\" In FitAR: Subset AR Model Fitting\n\n## Description\n\nMethod function for vectors, class \"numeric\"\n\n## Usage\n\n```\n## S3 method for class 'numeric'\nsdfplot(obj, ...)\n```\n\n## Arguments\n\n`obj` object, class \"numeric\", a vector\n`...` optional arguments\n\n## Value\n\nA plot is produced using plot. A matrix with 2 columns containing the frequencies and spectral density is returned invisibly.\n\n## Author(s)\n\nA.I. McLeod\n\n## See Also\n\n`sdfplot`\n\n## Examples\n\n```\nsdfplot(lynx)\n```"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.7309595,"math_prob":0.45185804,"size":431,"snap":"2022-27-2022-33","text_gpt3_token_len":97,"char_repetition_ratio":0.09836066,"word_repetition_ratio":0.0,"special_character_ratio":0.21577726,"punctuation_ratio":0.16666667,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9549842,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-01T22:50:24Z\",\"WARC-Record-ID\":\"<urn:uuid:f4c59bbb-4128-4c53-a9ef-b32224b4da54>\",\"Content-Length\":\"42578\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ae26d39a-880e-4801-b798-0251647a056f>\",\"WARC-Concurrent-To\":\"<urn:uuid:e0d53d1a-597b-46e4-b21b-b107b2317acc>\",\"WARC-IP-Address\":\"51.81.83.12\",\"WARC-Target-URI\":\"https://rdrr.io/cran/FitAR/man/sdfplot.numeric.html\",\"WARC-Payload-Digest\":\"sha1:6Y2BOG7DORPY6UIROBIKR6GHNRSLHM6Y\",\"WARC-Block-Digest\":\"sha1:COVYDH42OFKOA6D6BPN3LN7HPSOFK5ML\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103947269.55_warc_CC-MAIN-20220701220150-20220702010150-00188.warc.gz\"}"} |
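The quantity that `sdfplot` visualizes is the spectral density of a fitted AR(p) model. Assuming the AR coefficients `phi` and innovation variance `sigma2` have already been estimated (FitAR fits them from the series; that step, and the constant normalization factor, which varies between texts, are glossed over here), the density can be evaluated directly:

```python
# Sketch of an AR(p) spectral density:
#   f(nu) = sigma2 / |1 - sum_k phi_k * exp(-2*pi*i*nu*k)|^2
# Illustrative only; FitAR's sdfplot estimates phi/sigma2 from the data first.
import cmath
import math

def ar_sdf(phi, sigma2, freqs):
    out = []
    for nu in freqs:
        denom = 1 - sum(p * cmath.exp(-2j * math.pi * nu * (k + 1))
                        for k, p in enumerate(phi))
        out.append(sigma2 / abs(denom) ** 2)
    return out

# AR(1) with phi = 0.8: the spectrum peaks at frequency 0 and falls off.
freqs = [i / 100 for i in range(51)]   # 0 .. 0.5, like the matrix sdfplot returns
sdf = ar_sdf([0.8], 1.0, freqs)
print(sdf[0], sdf[-1])                 # ≈ 25.0 and ≈ 0.3086
```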
https://www.varsitytutors.com/ap_statistics-help/how-to-find-linearity | [
"# AP Statistics : How to find linearity\n\n## Example Questions\n\n### Example Question #7 : Bivariate Data\n\nA basketball coach wants to determine if a player's height can predict the number of points the player scores in a season. Which statistical test should the coach conduct?\n\nANOVA\n\nLinear regression\n\nT-test\n\nP-score\n\nCorrelation",
null,
""
]
| [
null,
"https://vt-vtwa-app-assets.varsitytutors.com/assets/problems/og_image_practice_problems-9cd7cd1b01009043c4576617bc620d0d5f9d58294f59b6d6556fd8365f7440cf.jpg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.77580523,"math_prob":0.45732683,"size":1966,"snap":"2019-51-2020-05","text_gpt3_token_len":427,"char_repetition_ratio":0.1753313,"word_repetition_ratio":0.09294872,"special_character_ratio":0.19277722,"punctuation_ratio":0.13559322,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9579486,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-15T03:42:43Z\",\"WARC-Record-ID\":\"<urn:uuid:9c9f1e66-9f4a-41a1-94e4-d5c653cf302d>\",\"Content-Length\":\"177465\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4de16103-0647-4b44-8b2e-b0959c2ca69a>\",\"WARC-Concurrent-To\":\"<urn:uuid:6761b60b-d5a3-409f-8af5-9ecd1bcbef46>\",\"WARC-IP-Address\":\"13.249.44.44\",\"WARC-Target-URI\":\"https://www.varsitytutors.com/ap_statistics-help/how-to-find-linearity\",\"WARC-Payload-Digest\":\"sha1:YUKS646TYEGGT2NY4JD4U573TT425236\",\"WARC-Block-Digest\":\"sha1:EROIAIFSMCC6CLF2NHAWXXWA4JPWGMVL\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575541301014.74_warc_CC-MAIN-20191215015215-20191215043215-00026.warc.gz\"}"} |
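The wording "can predict" in the question above is the cue for simple linear regression: a correlation coefficient only quantifies association, while regression yields a prediction equation. A minimal least-squares fit, with made-up height/points data purely for illustration:

```python
# Minimal simple linear regression (ordinary least squares) in plain Python.
# The heights and season point totals below are hypothetical.

def linreg(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx       # y ≈ intercept + slope * x

heights = [70, 72, 75, 78, 80]          # inches (hypothetical players)
points = [200, 240, 300, 360, 400]      # season totals (hypothetical)
slope, intercept = linreg(heights, points)
print(slope, intercept)                 # → 20.0 -1200.0
```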
http://www.sciencewriter.net/sd2quiz/chap-08.htm | [
"Statistics Demystified, 2nd edition Stan Gibilisco Explanations for Quiz Answers in Chapter 8 1. If we administer the test to a new group of students and obtain a distribution in which the span between µ - σ and µ + σ exceeds the span between µ - σ and µ + σ in the first distribution, we'll find that the data appears more \"spread-out\" with a larger value of σ (the standard deviation). The answer is B. 2. If the professor replaces one question in the 10-question quiz and then has the students work out the new problem in place of the old one, the distribution will almost certainly change slightly. (The distribution would remain the same only if the new question produced the same combination of right-and-wrong answers as the original question did.) However, without specific data for the students' answers to the new question, we can't know exactly what would happen to the shape of the plot. The correct choice is D. 3. In order to work out this problem, we must generate the new distribution, producing a new graph after the fashion of Fig. 8-6 (on page 273). When we do that, we get the following plot.",
null,
"When we visually compare this graph with Fig. 8-6 in the book, we see that this plot appears more \"flattened-out\" (that is, less sharply peaked) than Fig. 8-6. The correct choice is C. 4. Table 8-10 (on page 304) makes perfect sense. No technical problem exists in it. The answer is D. 5. If we toss a 12-faced, unbiased die five times and it comes up showing face number 1 on all five occasions, we've experienced a result whose probability equals 1 in 12^5, or 1 in 248,832. The correct choice is B. 6. This experiment reveals the fact that skin problems correlate positively with adult-onset diabetes (AOD). The results tell us nothing, however, about any possible cause/effect relation between skin problems and AOD, or between either of those manifestations and some third factor, known or unknown. The correct choice is D. 7. The points in the scatter plot of Fig. 8-18 (on page 305) look \"random,\" in the sense that they don't appear to fall near any particular line or curve. No evident correlation exists between the percentage of land area covered by forest and the percentage of people who suffer from Syndrome X. The correct choice is A. We can rule out choices B and C straightaway for the foregoing reason. Choice D has no relevance. If no correlation exists, then no cause/effect relation can exist either. Even if a correlation were found, that fact, all by itself, wouldn't imply that a causative agent had anything to do with it. 8. If we engage a computer in an attempt to find a least-squares line in the plot of Fig. 8-18, then in effect, we're asking the machine to quantify something that doesn't exist, like trying to work out the quotient 0/0. We'll likely see no result, or else an \"error\" message. The answer is D. (If our least-squares-line-calculating software is poorly written, we might get to watch our computer crash.) 9. The graph of a normal distribution always appears symmetrical relative to the mean. Choice C is false.
All of the other statements hold generally true. Therefore, C constitutes the right answer here. 10. In this particular town, a strong positive correlation exists between average monthly temperature and average monthly precipitation. As one parameter increases, the other one always increases. As one parameter decreases, so does the other. The correct choice is A."
]
| [
null,
"http://www.sciencewriter.net/sd2quiz/fig-08-a.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9336321,"math_prob":0.9292924,"size":3449,"snap":"2019-13-2019-22","text_gpt3_token_len":787,"char_repetition_ratio":0.11349782,"word_repetition_ratio":0.02364865,"special_character_ratio":0.23311105,"punctuation_ratio":0.12068965,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9584014,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-22T00:42:02Z\",\"WARC-Record-ID\":\"<urn:uuid:9c0becd0-50d1-4d20-b316-3dd9171d673c>\",\"Content-Length\":\"5149\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:50472dfa-7677-4219-9cf5-6bc9d074d829>\",\"WARC-Concurrent-To\":\"<urn:uuid:5c6a54ee-00ab-4b1b-9216-3d3d9893cfea>\",\"WARC-IP-Address\":\"192.207.255.11\",\"WARC-Target-URI\":\"http://www.sciencewriter.net/sd2quiz/chap-08.htm\",\"WARC-Payload-Digest\":\"sha1:2MLRBGYCBXSZ62MX7SGTQWFVJOMYU3WK\",\"WARC-Block-Digest\":\"sha1:CXGOPSEOWKXL6ZXO5LBZQFEPW7TPYUDT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912202588.97_warc_CC-MAIN-20190321234128-20190322020128-00442.warc.gz\"}"} |
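The probability in answer 5 above is easy to verify mechanically: five independent rolls of a fair 12-sided die all showing face number 1 occur with probability (1/12)^5.

```python
# Sanity check for answer 5: (1/12)^5 as an exact fraction.
from fractions import Fraction

p_all_ones = Fraction(1, 12) ** 5
print(p_all_ones)   # → 1/248832
print(12 ** 5)      # → 248832, matching "1 in 248,832" in the text
```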