https://www.w3resource.com/java-exercises/datetime/java-datetime-exercise-31.php

# Java DateTime, Calendar Exercises: Compute the difference between two dates (Hours, minutes, milli, seconds and nano)

## Java DateTime, Calendar: Exercise-31 with Solution

Write a Java program to compute the difference between two dates (hours, minutes, milliseconds, seconds and nanoseconds).

Sample Solution:

Java Code:

```java
import java.time.*;
import java.util.*;

public class Exercise31 {
    public static void main(String[] args) {
        LocalDateTime dateTime = LocalDateTime.of(2016, 9, 16, 0, 0);
        LocalDateTime dateTime2 = LocalDateTime.now();
        int diffInNano = java.time.Duration.between(dateTime, dateTime2).getNano();
        long diffInSeconds = java.time.Duration.between(dateTime, dateTime2).getSeconds();
        long diffInMilli = java.time.Duration.between(dateTime, dateTime2).toMillis();
        long diffInMinutes = java.time.Duration.between(dateTime, dateTime2).toMinutes();
        long diffInHours = java.time.Duration.between(dateTime, dateTime2).toHours();
        System.out.printf("\nDifference is %d Hours, %d Minutes, %d Milli, %d Seconds and %d Nano\n\n",
                diffInHours, diffInMinutes, diffInMilli, diffInSeconds, diffInNano);
    }
}
```

Sample Output:

```
Difference is 6686 Hours, 401180 Minutes, 24070844780 Milli, 24070844 Seconds and 780000000 Nano
```

N.B.: The result may vary for your system date and time.

Flowchart: https://www.w3resource.com/w3r_images/java-datetime-exercise-flowchart-31.png
https://www.projecteuclid.org/euclid.dmj/1461252850

Duke Mathematical Journal

# Gamma classes and quantum cohomology of Fano manifolds: Gamma conjectures

## Abstract

We propose Gamma conjectures for Fano manifolds which can be thought of as a square root of the index theorem. Studying the exponential asymptotics of solutions to the quantum differential equation, we associate a principal asymptotic class $A_{F}$ to a Fano manifold $F$. We say that $F$ satisfies Gamma conjecture I if $A_{F}$ equals the Gamma class $\widehat{\Gamma}_{F}$. When the quantum cohomology of $F$ is semisimple, we say that $F$ satisfies Gamma conjecture II if the columns of the central connection matrix of the quantum cohomology are formed by $\widehat{\Gamma}_{F}\operatorname{Ch}(E_{i})$ for an exceptional collection $\{E_{i}\}$ in the derived category of coherent sheaves $\mathcal{D}^{b}_{\mathrm{coh}}(F)$. Gamma conjecture II refines a part of a conjecture by Dubrovin. We prove Gamma conjectures for projective spaces and Grassmannians.

## Article information

Source: Duke Math. J., Volume 165, Number 11 (2016), 2005-2077.

Dates: Received: 18 June 2014; Revised: 6 September 2015; First available in Project Euclid: 21 April 2016

Permanent link to this document: https://projecteuclid.org/euclid.dmj/1461252850

Digital Object Identifier: doi:10.1215/00127094-3476593

Mathematical Reviews number (MathSciNet): MR3536989

Zentralblatt MATH identifier: 1350.14041

## Citation

Galkin, Sergey; Golyshev, Vasily; Iritani, Hiroshi. Gamma classes and quantum cohomology of Fano manifolds: Gamma conjectures. Duke Math. J. 165 (2016), no. 11, 2005--2077. doi:10.1215/00127094-3476593. https://projecteuclid.org/euclid.dmj/1461252850
https://open.bccampus.ca/browse-our-collection/find-open-textbooks/?uuid=c732fe64-79c3-4638-aba0-b2ca258244a1&contributor=&keyword=&subject=Math

## Elementary Differential Equations with Boundary Value Problems

September 30, 2014 | Updated: May 10, 2019
Author: William F. Trench, Trinity University

Elementary Differential Equations with Boundary Value Problems is written for students in science, engineering, and mathematics who have completed calculus through partial differentiation. If your syllabus includes Chapter 10 (Linear Systems of Differential Equations), your students should have some preparation in linear algebra. In writing this book, the author aimed to make the language easy to understand. As an elementary text, this book is written in an informal but mathematically accurate way, illustrated by appropriate graphics. The author has tried to formulate mathematical concepts succinctly in language that students can understand and has minimized the number of explicitly stated theorems and definitions, preferring to deal with concepts in a more conversational way, copiously illustrated by 299 completely worked-out examples. Where appropriate, concepts and results are depicted in 188 figures. This text also includes 2041 numbered exercises, many with several parts. They range in difficulty from routine to very challenging.

Subject Areas: Math/Stats, Math - General

Original source: digitalcommons.trinity.edu
https://www.hackerearth.com/practice/basic-programming/complexity-analysis/time-and-space-complexity/tutorial/

# Time and Space Complexity

Sometimes there is more than one way to solve a problem, so we need to learn how to compare the performance of different algorithms and choose the best one for a particular problem. While analyzing an algorithm, we mostly consider time complexity and space complexity. Time complexity of an algorithm quantifies the amount of time taken by an algorithm to run as a function of the length of the input. Similarly, space complexity of an algorithm quantifies the amount of space or memory taken by an algorithm to run as a function of the length of the input.

Time and space complexity depend on lots of things like hardware, operating system, processors, etc. However, we don't consider any of these factors while analyzing the algorithm. We will only consider the execution time of an algorithm.

Let's start with a simple example. Suppose you are given an array $A$ and an integer $x$ and you have to find if $x$ exists in array $A$.

A simple solution to this problem is to traverse the whole array $A$ and check if any element is equal to $x$.

```
for i : 1 to length of A
    if A[i] is equal to x
        return TRUE
return FALSE
```

Each operation in a computer takes approximately constant time. Let each operation take $c$ time. The number of lines of code executed actually depends on the value of $x$. During the analysis of an algorithm we will mostly consider the worst-case scenario, i.e., when $x$ is not present in the array $A$. In the worst case, the if condition will run $N$ times, where $N$ is the length of the array $A$. So in the worst case, the total execution time will be $(N * c + c)$: $N * c$ for the if condition and $c$ for the return statement (ignoring some operations like the assignment of $i$).

As we can see, the total time depends on the length of the array $A$. If the length of the array increases, the execution time will also increase.

Order of growth is how the time of execution depends on the length of the input. In the above example, we can clearly see that the time of execution depends linearly on the length of the array. Order of growth will help us to compute the running time with ease. We will ignore the lower-order terms, since they are relatively insignificant for large input. We use different notations to describe the limiting behavior of a function.

$O$-notation:
To denote an asymptotic upper bound, we use $O$-notation. For a given function $g(n)$, we denote by $O(g(n))$ (pronounced "big-oh of g of n") the set of functions:
$O(g(n)) =$ { $f(n)$ : there exist positive constants $c$ and $n_0$ such that $0 \le f(n) \le c * g(n)$ for all $n \ge n_0$ }

$\Omega$-notation:
To denote an asymptotic lower bound, we use $\Omega$-notation. For a given function $g(n)$, we denote by $\Omega(g(n))$ (pronounced "big-omega of g of n") the set of functions:
$\Omega(g(n)) =$ { $f(n)$ : there exist positive constants $c$ and $n_0$ such that $0 \le c * g(n) \le f(n)$ for all $n \ge n_0$ }

$\Theta$-notation:
To denote an asymptotic tight bound, we use $\Theta$-notation. For a given function $g(n)$, we denote by $\Theta(g(n))$ (pronounced "big-theta of g of n") the set of functions:
$\Theta(g(n)) =$ { $f(n)$ : there exist positive constants $c_1, c_2$ and $n_0$ such that $0 \le c_1 * g(n) \le f(n) \le c_2 * g(n)$ for all $n \gt n_0$ }
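As a rough, purely illustrative check of the linear-search analysis above (this snippet is not part of the original tutorial), the following Python sketch counts the comparisons performed in the worst case, i.e. when $x$ is absent; the count grows proportionally to the input length, which is exactly the $O(N)$ behavior described:

```python
# Illustrative sketch: count worst-case comparisons of the linear search above.
def linear_search_comparisons(a, x):
    comparisons = 0
    for value in a:
        comparisons += 1          # one "A[i] equals x" check per element
        if value == x:
            return True, comparisons
    return False, comparisons

for n in (10, 100, 1000, 10000):
    found, ops = linear_search_comparisons(list(range(n)), -1)  # -1 is never in the array
    print(n, ops)  # ops equals n: the work grows linearly with the input length
```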
"Time complexity notations\n\nWhile analysing an algorithm, we mostly consider $O$-notation because it will give us an upper limit of the execution time i.e. the execution time in the worst case.\n\nTo compute $O$-notation we will ignore the lower order terms, since the lower order terms are relatively insignificant for large input.\nLet $f(N) = 2 * N^2 + 3 * N + 5$\n$O(f(N)) = O(2 * N^2 + 3 * N + 5) = O(N^2)$\n\nLets consider some example:\n\n1.\n\nint count = 0;\nfor (int i = 0; i < N; i++)\nfor (int j = 0; j < i; j++)\ncount++;\n\n\nLets see how many times count++ will run.\n\nWhen $i = 0$, it will run $0$ times.\nWhen $i = 1$, it will run $1$ times.\nWhen $i = 2$, it will run $2$ times and so on.\n\nTotal number of times count++ will run is $0 + 1 + 2 + ... + (N-1) = \\frac{N * (N-1)}{2}$. So the time complexity will be $O(N^2)$.\n\n2.\n\nint count = 0;\nfor (int i = N; i > 0; i /= 2)\nfor (int j = 0; j < i; j++)\ncount++;\n\nThis is a tricky case. In the first look, it seems like the complexity is $O(N * logN)$. $N$ for the $j's$ loop and $logN$ for $i's$ loop. But its wrong. Lets see why.\n\nThink about how many times count++ will run.\n\nWhen $i = N$, it will run $N$ times.\nWhen $i = N / 2$, it will run $N / 2$ times.\nWhen $i = N / 4$, it will run $N / 4$ times and so on.\n\nTotal number of times count++ will run is $N + N / 2 + N / 4 + ... + 1 = 2 * N$. So the time complexity will be $O(N)$.\n\nThe table below is to help you understand the growth of several common time complexities, and thus help you judge if your algorithm is fast enough to get an Accepted ( assuming the algorithm is correct ).\n\nLength of Input (N) Worst Accepted Algorithm\n$\\le [10..11]$ $O(N!), O(N^6)$\n$\\le [15..18]$ $O(2^N * N^2)$\n$\\le [18..22]$ $O(2^N * N)$\n$\\le 100$ $O(N^4)$\n$\\le 400$ $O(N^3)$\n$\\le 2K$ $O(N^2 * logN)$\n$\\le 10K$ $O(N^2)$\n$\\le 1M$ $O(N * logN)$\n$\\le 100M$ $O(N), O(logN), O(1)$\n\nGo to next tutorial\n\nContributed by: Akash Sharma"
https://answers.everydaycalculation.com/add-fractions/12-5-plus-12-6

Solutions by everydaycalculation.com

Add 12/5 and 12/6

1st number: 2 2/5, 2nd number: 2 0/6

12/5 + 12/6 is 22/5.

Steps for adding fractions

1. Find the least common denominator or LCM of the two denominators: the LCM of 5 and 6 is 30.
2. For the 1st fraction, since 5 × 6 = 30, 12/5 = (12 × 6)/(5 × 6) = 72/30.
3. Likewise, for the 2nd fraction, since 6 × 5 = 30, 12/6 = (12 × 5)/(6 × 5) = 60/30.
4. Add the two fractions: 72/30 + 60/30 = (72 + 60)/30 = 132/30.
5. After reducing the fraction, the answer is 22/5.
6. In mixed form: 4 2/5.
https://analyticsindiamag.com/hands-on-to-reagent-end-to-end-platform-for-applied-reinforcement-learning/

# Hands-on to ReAgent: End-to-End Platform for Applied Reinforcement Learning

Facebook ReAgent, previously known as Horizon, is an end-to-end platform for using applied reinforcement learning to solve industrial problems. The main purpose of this framework is to make the development and experimentation of deep reinforcement learning algorithms fast. ReAgent is built on Python. It uses the PyTorch framework for data modelling and training, and TorchScript for serving. ReAgent provides different algorithms for data preprocessing, feature engineering, model training and evaluation, and finally optimized serving. ReAgent was first presented in the research paper "Horizon: Facebook's Open Source Applied Reinforcement Learning Platform" by Jason Gauci, Edoardo Conti, Yitao Liang, Kittipat Virochsiri, Yuchen He, Zachary Kaden, Vivek Narayanan, Xiaohui Ye, Zhengxing Chen, and Scott Fujimoto.

The key features of Facebook's ReAgent are:

- Capable of handling large-dimension datasets.
- Provides optimized algorithms for data preprocessing, training, etc.
- Gives a highly efficient production environment for model serving.

Algorithms Supported by ReAgent

Workflow of ReAgent

The image below shows the overall workflow of ReAgent for decision making and reasoning. It starts its decision-making process by using predefined rules; then, with the help of feedback, multi-armed bandits optimize those decisions; and at last, contextual feedback trains the contextual bandits and reinforcement learning models. These trained models are then deployed via the TorchScript library.

Dependencies of the ReAgent Platform

Python >= 3.7

Installation

ReAgent can be installed using the docker image or manually. In this case, we are cloning the GitHub repository and installing all the required dependencies of ReAgent via pip.

```
%%bash
%cd /content/ReAgent/
```

Then install the required Python packages:

```
!pip install -r requirements.txt
!pip install pytorch_lightning
!pip install ".[gym]"
```

You can check the detailed installation process here, and the usage of ReAgent is discussed here. To know more about the ReAgent Serving Platform (RASP), you can check this documentation.

Demo: Reinforcement Learning on the CartPole Problem

This demo explains the usage of ReAgent on CartPole reinforcement learning. The code below uses the OpenAI Gym environment.

1. Import all the required modules and packages.

```python
from reagent.gym.envs.gym import Gym
import pandas as pd
from matplotlib import pyplot as plt
import seaborn as sns
import numpy as np
import torch
import torch.nn.functional as F
import tqdm.autonotebook as tqdm
```

2. Define the environment by passing CartPole to the Gym class.

```python
env = Gym('CartPole-v0')

def reset_env(env, seed):
    np.random.seed(seed)
    env.seed(seed)
    env.action_space.seed(seed)
    torch.manual_seed(seed)
    env.reset()

reset_env(env, seed=0)
```

3. Next, create a policy which contains a simple scorer (a multilayer perceptron network) and a softmax sampler.

```python
from reagent.net_builder.discrete_dqn.fully_connected import FullyConnected
from reagent.gym.utils import build_normalizer

norm = build_normalizer(env)
net_builder = FullyConnected(sizes=, activations=["linear"])  # the sizes list did not survive extraction
cartpole_scorer = net_builder.build_q_network(
    state_feature_config=None,
    state_normalization_data=norm['state'],
    output_dim=len(norm['action'].dense_normalization_parameters))
```

The idea behind the policy is that the agent will simply execute this cart pole environment.

```python
from reagent.gym.policies.policy import Policy
from reagent.gym.policies.samplers.discrete_sampler import SoftmaxActionSampler
from reagent.gym.agents.agent import Agent

policy = Policy(scorer=cartpole_scorer, sampler=SoftmaxActionSampler())
agent = Agent.create_for_env(env, policy)
```

4. Now, create a trainer that takes a reinforcement learning algorithm to train. The trainer can be created with the commands below:

```python
from reagent.training.reinforce import (
    Reinforce, ReinforceParams
)
from reagent.optimizer.union import classes

trainer = Reinforce(policy, ReinforceParams(
    gamma=0.99,
))
```

5. After creating a trainer, start the training of the model by creating a function that changes the observed transitions into a training batch, and then evaluate the reward for all episodes via the RL interaction loop. The code for it is shown below:

```python
import reagent.types as rlt

def to_train_batch(trajectory):
    # Note: the call that wraps these keyword arguments did not survive extraction.
    state=rlt.FeatureData(torch.from_numpy(np.stack(trajectory.observation)).float()),
    action=F.one_hot(torch.from_numpy(np.stack(trajectory.action)), 2),
    reward=torch.tensor(trajectory.reward),
    log_prob=torch.tensor(trajectory.log_prob)
    )
```

Run the agent on the environment and record the rewards.

```python
from reagent.gym.runners.gymrunner import evaluate_for_n_episodes

eval_rewards = evaluate_for_n_episodes(100, env, agent, 500, num_processes=20)
```

Start the training loop:

```python
num_episodes = 200
reward_min = 20
max_steps = 200
reward_decay = 0.8
train_rewards = []
running_reward = reward_min

from reagent.gym.runners.gymrunner import run_episode

with tqdm.trange(num_episodes, unit=" epoch") as t:
    for i in t:
        trajectory = run_episode(env, agent, max_steps=max_steps, mdp_id=i)
        batch = to_train_batch(trajectory)
        trainer.train(batch)
        ep_reward = trajectory.calculate_cumulative_reward(1.0)
        running_reward *= reward_decay
        running_reward += (1 - reward_decay) * ep_reward
        train_rewards.append(ep_reward)
        t.set_postfix(reward=running_reward)
```

6. Finally, plot the rewards for each training episode as a graph.

Conclusion

In this write-up, we have discussed the ReAgent platform (aka Horizon) and demonstrated it with an example of the CartPole problem with reinforcement learning.

Note: The code mentioned above is not suitable for Colab due to some dependency issues. A Jupyter Notebook file is available to reproduce the above experiment.

Official codes, docs and tutorials are available at:
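The reward plot referred to in step 6 does not survive in this extract. As a purely illustrative stand-in, the train_rewards list collected in the training loop above could be plotted with matplotlib roughly as follows:

```python
import matplotlib.pyplot as plt

# Assumes the training loop above has run and filled train_rewards.
plt.plot(range(1, len(train_rewards) + 1), train_rewards)
plt.xlabel("Training episode")
plt.ylabel("Episode reward")
plt.title("CartPole: reward per training episode")
plt.show()
```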
https://achemmic.com/tag/tazarotene-ic50/

## Heterosis, also known as hybrid vigor, occurs when the mean phenotype of hybrid offspring is better than that of their two inbred parents

Heterosis, also known as hybrid vigor, occurs when the mean phenotype of hybrid offspring is better than that of their two inbred parents. Studying expression data for each gene separately can produce biased and highly variable estimates and unreliable tests of heterosis. To deal with these drawbacks, we develop a hierarchical model to borrow information across genes. Applying our modeling framework, we derive empirical Bayes estimators and an inference strategy to identify gene expression heterosis. Simulation results show that our proposed method outperforms the more traditional method used to detect gene expression heterosis. This article has supplementary material online.

We consider the two inbred parents (genotypes = 1, 2) as well as the offspring (= 3). Let = 1, …, […] index the genes, where […] denotes the total number of genes under study. We use […] to represent the mean expression level of gene […] of genotype […]. Gene […] exhibits HPH, LPH or MPH if and only if […] > 0, […] > 0 or […] ≠ 0, respectively.

Past work on estimating gene expression heterosis using microarray data (Swanson-Wagner et al. 2006; Wang et al. 2006; Bassene et al. 2010) has used separate estimates for each gene, obtained by replacing population means (= 1, 2, 3) with sample averages. Such estimators are problematic because they are biased and tend to underestimate heterosis (see Appendix A). Though the sample average estimator of mid-parent heterosis is unbiased, with only a few observations for each gene in a typical microarray experiment the sample average estimators may each be highly variable. Because high-throughput technologies measure expression of hundreds of thousands of genes simultaneously, we can utilize information across genes to improve estimation and testing of gene expression heterosis for each individual gene.

For gene […], the quantities […] help to develop statistical inferences for all three types of gene expression heterosis. We model […] and […], with […] equal to the absolute value of a draw from a normal distribution with probability 1 − […], and draw inferences about gene expression heterosis from estimates of these posteriors. We compare the empirical Bayes method with the sample average method through simulation studies where datasets were generated based on real heterosis microarray experiments or hypothetical probability models. The simulation studies show that the empirical Bayes estimators have smaller mean square errors (MSEs) than the sample average estimators that have been used previously. Furthermore, the empirical Bayes estimators are less biased than the sample average estimators, and the inferences we draw using our empirical Bayes approach are superior to traditional approaches for detecting all forms of heterosis.

The remainder of the paper proceeds as follows. Section 2 presents the proposed hierarchical model in full detail. Section 3 derives the empirical Bayes estimators and the inference strategy based on the framework constructed in Section 2. Section 4 summarizes analysis results of two real experiments. Section 5 presents results of several simulation studies. Section 6 summarizes our work. R code and C code for the analysis of the real experiments in Section 4, the simulation studies in Section 5, and the implementation of all our algorithms are available upon request.

## 2 Hierarchical gene expression heterosis model

Let […] denote the normalized log-scale gene expression measurement for genotype […], where […] is the total number of replicates for genotype […] (= 1, 2, 3). […] The prior given in (3) follows Smyth (2004). The mixture model in (1) describes the cases where parental means are equal and where parental means differ, respectively; its hyperparameter specifies the proportion of genes that are expressed equally between the two parents. Likewise, the mixture model in (2) describes the cases where mean gene expression in the offspring is, or is not, equal to the average of the two parental means. When necessary, the model (1)–(3) may be modified as needed to better capture the features of a given dataset. For example, the mixture model could include more than one normal distribution component. The natural sample average estimators of […], given […], follow a two-component mixture distribution where each component density is itself an infinite mixture of normal distributions.
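For orientation, the three kinds of heterosis discussed above (high-parent, low-parent and mid-parent) are commonly defined through the parental and offspring means. The following Python sketch is purely illustrative and uses the standard textbook definitions rather than the paper's exact notation, which did not survive extraction:

```python
# Illustrative only: heterosis measures for one gene from the three group means.
def heterosis_measures(mu_parent1, mu_parent2, mu_offspring):
    mid_parent = 0.5 * (mu_parent1 + mu_parent2)
    return {
        "high_parent": mu_offspring - max(mu_parent1, mu_parent2),  # HPH if > 0
        "low_parent": min(mu_parent1, mu_parent2) - mu_offspring,   # LPH if > 0
        "mid_parent": mu_offspring - mid_parent,                    # MPH if != 0
    }

print(heterosis_measures(4.0, 6.0, 7.5))
# {'high_parent': 1.5, 'low_parent': -3.5, 'mid_parent': 2.5}
```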
https://www.analyticsvidhya.com/blog/2021/04/simple-understanding-and-implementation-of-knn-algorithm/

Sai Patwardhan — April 21, 2021
This article was published as a part of the Data Science Blogathon.

## Overview

K Nearest Neighbor (KNN) is an intuitive and easy-to-implement algorithm. Beginners can master it even in the early phases of their Machine Learning studies.

This KNN article aims to:

- Understand the K Nearest Neighbor (KNN) algorithm's representation and prediction.
- Understand how to choose the K value and the distance metric.
- Cover the required data preparation methods and the pros and cons of the KNN algorithm.
- Present pseudocode and a Python implementation.

## Introduction

The K Nearest Neighbor algorithm falls under the Supervised Learning category and is used for classification (most commonly) and regression. It is a versatile algorithm also used for imputing missing values and resampling datasets. As the name suggests, it considers the K nearest neighbors (data points) to predict the class or continuous value for the new data point.

The algorithm's learning is:

1. Instance-based learning: here we do not learn weights from training data to predict output (as in model-based algorithms) but use the entire set of training instances to predict output for unseen data.
2. Lazy learning: the model is not learned from the training data beforehand, and the learning process is postponed to the time when a prediction is requested on the new instance.
3. Non-parametric: in KNN, there is no predefined form of the mapping function.

## How does KNN work?

### Principle

Consider the following figure. Let us say we have plotted data points from our training set on a two-dimensional feature space. As shown, we have a total of 6 data points (3 red and 3 blue). Red data points belong to 'class1' and blue data points belong to 'class2'. The yellow data point in the feature space represents the new point for which a class is to be predicted. Obviously, we say it belongs to 'class1' (red points).

Why?

Because its nearest neighbors belong to that class!

Yes, this is the principle behind K Nearest Neighbors. Here, nearest neighbors are those data points that have minimum distance in feature space from our new data point. And K is the number of such data points we consider in our implementation of the algorithm. Therefore, the distance metric and the K value are two important considerations while using the KNN algorithm. Euclidean distance is the most popular distance metric. You can also use Hamming distance, Manhattan distance or Minkowski distance as per your need. For predicting the class/continuous value for a new data point, the algorithm considers all the data points in the training dataset and finds the new data point's K nearest neighbors (data points) in feature space, together with their class labels or continuous values.

Then:

For classification: the class label assigned to the majority of the K nearest neighbors from the training dataset is considered the predicted class for the new data point.

For regression: the mean or median of the continuous values assigned to the K nearest neighbors from the training dataset is the predicted continuous value for our new data point.

### Model representation

Here, we do not learn weights and store them; instead, the entire training dataset is stored in memory. Therefore, the model representation for KNN is the entire training dataset.

## How to choose the value for K?

K is a crucial parameter in the KNN algorithm. Some suggestions for choosing the K value are listed below.

1. Using error curves: the figure below shows error curves for different values of K for training and test data.

At low K values, there is overfitting of data/high variance. Therefore the test error is high and the train error is low. At K=1 in train data, the error is always zero, because the nearest neighbor to that point is the point itself. Therefore, though the training error is low, the test error is high at lower K values. This is called overfitting. As we increase the value of K, the test error is reduced.

But after a certain K value, bias/underfitting is introduced and the test error goes high. So we can say that initially the test data error is high (due to variance), then it goes low and stabilizes, and with a further increase in the K value it again increases (due to bias). The K value at which the test error stabilizes and is low is considered the optimal value for K. From the above error curve we can choose K=8 for our KNN algorithm implementation.

2. Also, domain knowledge is very useful in choosing the K value.

3. The K value should be odd when considering binary (two-class) classification.

## Required data preparation

1. Data scaling: to locate the data point in multidimensional feature space, it is helpful if all features are on the same scale. Hence normalization or standardization of the data will help.

2. Dimensionality reduction: KNN may not work well if there are too many features. Hence dimensionality reduction techniques like feature selection or principal component analysis can be applied.

3. Missing value treatment: if, out of M features, one feature's data is missing for a particular example in the training set, then we cannot locate or calculate the distance from that point. Therefore deleting that row or imputation is required.

## Python implementation

Implementation of the K Nearest Neighbor algorithm using Python's scikit-learn library:

### Step 1: Get and prepare data

```python
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn import metrics
```

After loading the important libraries, we create our data using sklearn.datasets with 200 samples, 8 features, and 2 classes. Then the data is split into train (80%) and test (20%) sets and scaled using StandardScaler.

```python
X, Y = make_classification(n_samples=200, n_features=8, n_informative=8, n_redundant=0, n_repeated=0, n_classes=2, random_state=14)
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2, random_state=32)
sc = StandardScaler()
sc.fit(X_train)
X_train = sc.transform(X_train)
sc.fit(X_test)
X_test = sc.transform(X_test)
X.shape
```

```
(200, 8)
```

### Step 2: Find the value for K

For choosing the K value, we use error curves, and the K value with optimal variance and bias error is chosen as the K value for prediction purposes. With the error curve plotted below, we choose K=7 for the prediction.

```python
error1 = []
error2 = []
for k in range(1, 15):
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(X_train, y_train)
    y_pred1 = knn.predict(X_train)
    error1.append(np.mean(y_train != y_pred1))
    y_pred2 = knn.predict(X_test)
    error2.append(np.mean(y_test != y_pred2))
# plt.figure(figsize=(10, 5))
plt.plot(range(1, 15), error1, label="train")
plt.plot(range(1, 15), error2, label="test")
plt.xlabel('k Value')
plt.ylabel('Error')
plt.legend()
```

### Step 3: Predict

In step 2, we have chosen the K value to be 7. Now we substitute that value and get the accuracy score = 0.9 for the test data.

```python
knn = KNeighborsClassifier(n_neighbors=7)
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test)
metrics.accuracy_score(y_test, y_pred)
```

```
0.9
```

## Pseudocode for K Nearest Neighbor (classification)

This is pseudocode for implementing the KNN algorithm from scratch:
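The pseudocode listing itself did not survive in this extract. As a stand-in, here is a compact, purely illustrative from-scratch sketch of the same idea (Euclidean distance plus majority vote); it is not the article's original listing:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_new, k=7):
    # 1. Distance from the new point to every training point
    distances = np.linalg.norm(np.asarray(X_train) - np.asarray(x_new), axis=1)
    # 2. Indices of the k nearest neighbours
    nearest = np.argsort(distances)[:k]
    # 3. Majority vote among their class labels
    labels = np.asarray(y_train)[nearest]
    return Counter(labels).most_common(1)[0][0]

# Example usage with the variables defined in the article's code above:
# y_hat = knn_predict(X_train, y_train, X_test[0], k=7)
```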
https://arxiv.org/abs/1609.03895

# Lepton mass and mixing in a simple extension of the Standard Model based on T7 flavor symmetry

Abstract: A simple Standard Model Extension based on $T_7$ flavor symmetry which accommodates lepton mass and mixing with non-zero $\theta_{13}$ and CP violation phase is proposed. At the tree level, the realistic lepton mass and mixing pattern is derived through the spontaneous symmetry breaking by just one vacuum expectation value ($v$) which is the same as in the Standard Model. Neutrinos get small masses from one $SU(2)_L$ doublet and two $SU(2)_L$ singlets in which one being in $\underline{1}$ and the two others in $\underline{3}$ and $\underline{3}^*$ under $T_7$, respectively. The model also gives a remarkable prediction of Dirac CP violation $\delta_{CP}=172.598^\circ$ in both normal and inverted hierarchies which is still missing in the neutrino mixing matrix.

Comments: 23 pages, 6 figures
Subjects: High Energy Physics - Phenomenology (hep-ph)
Journal reference: Physics of Atomic Nuclei, Vol. 82, No 2, (2019), pp. 168--182
DOI: 10.1134/S1063778819020133
Cite as: arXiv:1609.03895 [hep-ph] (or arXiv:1609.03895v1 [hep-ph] for this version)

## Submission history

From: Vo Van Vien [view email]
[v1] Tue, 13 Sep 2016 15:25:13 UTC (34 KB)
https://www.daniweb.com/programming/software-development/threads/398790/basic-broken-code-emptying-form-fields-return-problem

Hi guys,

Question 1. I'm a pretty rubbish coder, haven't really done much of it in the couple of years since my degree, and at work they've recently expressed an interest in getting me back into it. My problem is that I feel as though I've been taught the syntax (which I totally understand); I can easily write up a for loop or whatever, but my problem is knowing how to apply that. So I guess I would define it as knowing the words of another language but not having any idea how to speak fluently lol. Any ideas where I can start getting some help? Useful links would be appreciated, preferably anything that helped you guys that is more about how to code rather than syntax (I plan to spend more time on here etc).

Question 2, which is sort of an example of my problem (see below). I actually broke this when trying to fix it (trying to add in the return 1, 2, 3 for the different possibilities), but basically I'm currently struggling with using forms. Using 2 random numbers, the user must enter the correct answer for multiplying the random numbers. How can I empty the fields and basically reset the form after a correct answer?

```vb
Public Class MathLoader

    Dim generator As New Random

    Dim a1 As Integer = generator.Next(1, 100)
    Dim b1 As Integer = generator.Next(1, 100)

    Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load

    End Sub

    Private Function correctCheck() As Integer

        Dim z1 As Double = a1 * b1
        Dim x1 As Integer = 0

        If TextBox1.Text = "" Then
            Label3.Text = "Please enter a value"
            Return 2

        ElseIf TextBox1.Text = z1 Then
            Label3.Text = "Correct !"
            ans1.Text = z1
            Beep()
            Return 1

        Else
            Label3.Text = "Wrong..."
            Return 3

        End If

    End Function

    Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
        'Try

        ran1.Text = a1
        ran2.Text = b1

        If ans1.Text = "" Then

        ElseIf ans1.Text <> "" Then
            For index = 1 To 3
                If Me.correctCheck() = 1 Then
                    MessageBox.Show("Complete")
                ElseIf Me.correctCheck() = 2 Then
                    MessageBox.Show("2")
                ElseIf Me.correctCheck() = 3 Then
                    MessageBox.Show("3")
                Else
                    MessageBox.Show("4")
                End If
            Next
            MessageBox.Show("You're out of guesses")

        End If

        'Catch e As Exception
        ' MessageBox.Show(e.ToString)

        'End Try

    End Sub

End Class
```

## All 2 Replies

I learned how to code by writing code. I refer to the msdn library because it is the instructions on how to use the language. You're pretty much just going to have to dive right into it. It will work itself out naturally. When you get stuck, find the answer and keep going. Can't find the answer? Post it here.

Your above code could look like below. Probably not going to do you any good other than seeing some other code that's not yours. You should look at the variable names though. One thing you can start right now is naming your variables with meaningful names. You will end up doing it eventually if you like to learn the hard way, or you could also start now at the beginning.

```vb
Public Class Form3

    Private Sub Form3_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
    End Sub

    Private Sub ButtonGuessAnswer_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles ButtonGuessAnswer.Click
        'Use static to keep track of how many tries.
        Static NumberOfGuesses As Integer

        'Should be self explanatory
        '(the opening If of this block is missing from the extract)
            MessageBox.Show("Wrong.")
        Else
            Dim Number1 As Double = CDbl(LabelRandomNumber1.Text)
            Dim Number2 As Double = CDbl(LabelRandomNumber2.Text)

            If CDbl(TextBoxAnswer.Text) = Number1 * Number2 Then
                MessageBox.Show("Correct.")
                'Got it right. Start over.
                NumberOfGuesses = 0
            Else
                MessageBox.Show("Wrong.")
            End If
        End If

        NumberOfGuesses += 1

        If NumberOfGuesses = 3 Then
            'Tried three times and couldn't get it.
            'reset the number of guesses and reset
            'the display
            NumberOfGuesses = 0
        End If
    End Sub

    'Resets the screen with new values
    Dim Generator As New Random

    LabelRandomNumber1.Text = Generator.Next(1, 100)
    LabelRandomNumber2.Text = Generator.Next(1, 100)
```
https://www.numbers.education/41367.html

Is 41367 a prime number? What are the divisors of 41367?

## Parity of 41 367

41 367 is an odd number, because it is not evenly divisible by 2.

## Is 41 367 a perfect square number?

A number is a perfect square (or a square number) if its square root is an integer; that is to say, it is the product of an integer with itself. Here, the square root of 41 367 is about 203.389.

Thus, the square root of 41 367 is not an integer, and therefore 41 367 is not a square number.

## What is the square number of 41 367?

The square of a number (here 41 367) is the result of the product of this number (41 367) by itself (i.e., 41 367 × 41 367); the square of 41 367 is sometimes called "raising 41 367 to the power 2", or "41 367 squared".

The square of 41 367 is 1 711 228 689 because 41 367 × 41 367 = 41 367² = 1 711 228 689.

As a consequence, 41 367 is the square root of 1 711 228 689.

## Number of digits of 41 367

41 367 is a number with 5 digits.

## What are the multiples of 41 367?

The multiples of 41 367 are all integers evenly divisible by 41 367, that is all numbers such that the remainder of the division by 41 367 is zero. There are infinitely many multiples of 41 367. The smallest multiples of 41 367 are: 41 367, 82 734, 124 101, 165 468, 206 835, and so on.
https://www.teachoo.com/subjects/cbse-maths/class-10th/ch15-10th-probability/

# Chapter 15 Class 10 Probability

Get NCERT Solutions for Chapter 15 Class 10 free at teachoo. Solutions to all exercise questions, examples and optional questions are available with detailed explanations.

In Class 9, we studied Empirical or Experimental Probability.

In this chapter, we will study

• Theoretical Probability, that is, P(E) = Number of outcomes with E / Total possible outcomes
• Probability of a complementary event, i.e., P(not E)
• Probability of impossible and sure events
• Probability questions where a die is thrown twice
• Probability questions about cards
• Finding probability using distance and area

Click on an exercise or a topic link below to get started.
https://amses-journal.springeropen.com/articles/10.1186/s40323-020-00159-0 | [
"# Cross section shape optimization of wire strands subjected to purely tensile loads using a reduced helical model\n\n## Abstract\n\nThis paper introduces a shape optimization of wire strands subjected to tensile loads. The structural analysis relies on a recently developed reduced helical finite element model characterized by an extreme computational efficacy while accounting for complex geometries of the wires. The model is extended to consider interactions between components and its applicability is demonstrated by comparison with analytical and finite element models. The reduced model is exploited in a design optimization identifying the optimal shape of a 1 + 6 strand by means of a genetic algorithm. A novel geometrical parametrization is applied and different objectives, such as stress concentration and area minimization, and constraints, corresponding to operational limitations and requirements, are analyzed. The optimal shape is finally identified and its performance improvements are compared and discussed against the reference strand. Operational benefits include lower stress concentration and higher load at plastification initiation.\n\n## Introduction\n\nWire ropes are basic structural elements in engineering and construction. Thanks to their complex hierarchical composition, wire ropes design permits to achieve a response tailored for specific applications and load cases. They are used as a structural link in bridges and cranes, for lifting objects or as tracks in cable-ways. They offer high longitudinal stiffness, while keeping a low transversal bending stiffness. This allows for easy storage, movement and deployment, thanks to the use of drums, sheaves and pulleys .\n\nEven though many different designs have been proposed throughout history [2, 3], the general composition of a rope (see Fig. 1) has not changed. The basic element is the helical wire, arranged in a bundle to form a strand. The obtained strands can be themselves helically arranged to obtain the stranded rope. Compared to fibre materials used in fibre ropes (that have been in use for millennia ), the use of structural materials allow for an increased load carrying capability.\n\nThe mining field in the 19th century was a driving industry for the development of wire ropes: the aim was to replace the employed metal chain—characterized by small damage tolerance—with an element that would be comparable in structural response. This is achieved by using rope designs, where multiple load paths architecture provides time for servicing and replacing the damaged component, avoiding catastrophic failure .\n\nOut of the vast number of rope applications, this work focuses on those where loads are mainly tensile, in which case stationary ropes are employed . They are utilized, for example, in cable-stayed bridge, in cable-ways and in cranes as guy lines for booms suspension (see Fig. 2). They can also be found in civil constructions in the form of pre-stressed concrete strands . Stationary ropes are usually multi-layered strands, having therefore each component in a single helical configuration, as opposed to stranded ropes described above.\n\nRopes have many geometrical parameters in defining the overall response of the strand . While at the level of the strand and the rope there are numerous combinations of parameters (number of wires, layout, lay-factor), the very basic component, the wire, offers most often only its diameter as degree of freedom, due to the ease and lower costs in manufacturing circular wires. 
As a consequence, strands present local stress concentrations due to the radial pressure concentration at the wire-to-wire contact locations. Departing from the geometrical constraint of round wires, the aforementioned design drawbacks are mitigated, permitting optimized overall operational characteristics to be achieved.\n\nFor the case of tensile-dominated applications where single strands are used, the focus will be on providing examples of how the geometry could be optimized for minimum weight or minimum stress concentration, while satisfying application-dependent requirements such as limit load, axial stiffness, axial load at plastification or bending stiffness. The 1 + 6 strand is a basic, yet among the most widely used, strand designs. It has been chosen as the reference strand in this work for its relatively straightforward geometry. Accordingly, a novel geometrical parametrization for 1 + 6 wire strands is proposed.\n\nFor the analysis to come, a reduced model is employed. Rope theory literature has been developed since the 1860s and a plethora of models have been proposed. The complexity of analytical models for wire strands spans from the simple assumption of helical springs in parallel to a more refined curved beam theory, mainly based on Love's theory or on the general theory of rods by Green and Laws, accounting for bending and torsion. In addition, finite element (FE) models have also been employed to model more complex phenomena such as residual stress after manufacturing, contact and friction, and electromagnetic interactions within power cables. Reduced helical models [7, 13,14,15], introduced in more recent years, utilize the concept of helical symmetry to reduce the computational domain and have been successfully used in various fields. The computational efficiency of reduced models and their ability to model complex geometries make it possible to challenge the limitation of purely circular wires and to propose an alternative approach to strand design, by means of a shape optimization. In order to permit such a procedure, part of the considered domain needs to be modified to allow for the contact definition.\n\nThis paper is structured as follows: “Modeling techniques comparison” presents the modeling technique used and how it stands against alternative techniques; in the “Optimization procedure” section the optimization framework is introduced and the selection of objectives and constraints is discussed; the “Results” section contains the discussion on the performance benefits of the optimal shape compared to the reference and a sensitivity analysis carried out on the resulting strand; finally, conclusions are drawn in the “Conclusions” section.\n\n## Modeling techniques comparison\n\n### Reduced helical model\n\nWhen a helical structure is deformed uniformly along its entire length, the state variables (strains and stresses) are uniform along helical lines. Its overall response can be exactly analysed by taking a representative two-dimensional surface. This is a property called translational invariance, and it is exploited to derive a reduced finite element model whose formulation is similar in idea to the generalized plane strain elements. Other models have been proposed that use this same property, such as those by Zubov, Treyssede, Frikha et al. and Karathanasopoulos and Kress. Unlike the aforementioned models, the one used in this work has been derived within the finite strain framework and is therefore able to better describe the wire motions. 
Additionally, it was developed for complex geometries and interactions on the transverse cross section.\n\nThe reduced model permits a complex geometry while keeping a low number of elements. This allows fine meshes and local strains and stresses to be studied, without the need for a volumetric FE model and very computationally expensive simulations. On the other hand, it is limited by its derivation assumption: only uniform load cases can be studied, such as axial elongation and twist, radial compaction and thermal expansion. Accordingly, any load case in which each transverse cross section of the structure behaves identically can be considered.\n\n### Requirements on modeling approaches\n\nFor our optimization, four requirements must be satisfied by the chosen modeling technique. An analytical model as found in Feyrer, and two three-dimensional FE models (based either on solid volumetric or on beam elements) are compared to the reduced model.\n\nAxial response As the axial elongation is the load case to optimize for, our model needs to be able to fully capture the interaction between wires, including stiffening due to contact among wires and material plasticity. Figure 3 shows how all models are able to predict the overall axial behaviour.\n\nComputational efficiency A main focus when approaching an optimization routine is to ensure that the core simulation, which computes the objective value, is as efficient as possible, as it is run multiple times. Therefore, Fig. 4 shows a comparison of solution times to quantify the speed of each model. Apart from the analytical model, the beam and reduced models are comparable in solving the analysis, with the solid FE model being significantly slower.\n\nComplex geometries With the goal of setting up a shape optimization, the chosen model needs to be able to fully describe the geometry of the strand (and in particular of the outer wire). Solid and reduced FE models are the only ones that satisfy this requirement, because both the analytical and the beam FE models rely on a narrow database of cross sections for the contact definition.\n\nBending response A calculation of the bending response is also required in the optimization routine, to constrain the strand flexibility. Solid and beam FE models and analytical models can directly describe such a load case. The reduced model, on the other hand, is inherently not capable of modeling bending, because under bending the transverse slices would not behave independently of their axial location.\n\nTable 1 highlights how the reduced model stands out against the alternative modeling approaches.\n\n### Extension of the reduced helical model to account for contact\n\nBecause the influence of contact between wires is important to fully characterize the stress state within the strand, an extension of the previously developed model was required (Fig. 5b). The model was originally developed for the analysis of a single constituent, either free helices or solid regions (e.g. a solid cylinder with inclusions). Strands, instead, have distinct components that are free to rotate and move relative to each other. Therefore, an interaction law needs to be introduced. Instead of simply merging the contact points, the current work uses a contact law with exponential pressure-overclosure behaviour.\n\nIn order to use the contact definitions already available in Abaqus, a geometrical expedient is introduced. 
Since each component is locally planar and there is a relative out-of-plane rotation, in order to enable a three-dimensional contact, an auxiliary master surface must be defined. This allows the interaction to actually represent a surface-to-surface contact rather than a line-to-line one, which would otherwise create an artificial, localized kink. This surface is obtained by extruding the nodes of the inner core perpendicularly to the reference plane. These nodes are then connected by shell elements and rigidly constrained to the corresponding parent nodes to guarantee the helical symmetry. Figure 5b shows such a contact surface, highlighting the nodes connected to the corresponding master node lying on the reference cross section.\n\n### Approximation of the bending stiffness\n\nAs suggested in the work by Foti, the bending of a strand exhibits two distinct extremes.\n\n• Stick phase, where the bending curvature is low enough that the friction between components prevents them from sliding relative to each other. All wires form a cross section with connected elements, associated with a high bending stiffness.\n\n• Slip phase, where curvatures are high enough that friction can be ignored and each component is assumed to freely bend about its neutral plane, determining an overall reduction in bending stiffness.\n\nThe two stiffness values, in the stick and in the slip phase, are well approximated by the bending stiffness of the straight rod having the same transverse cross section.\n\n\\begin{aligned} K_{stick} = E_{0} I_{0} + \\sum \\limits _{i=1}^6 E_{i} {\\tilde{I}}_{i} \\end{aligned}\n(1)\n\\begin{aligned} K_{slip} = E_{0} I_{0} + \\sum \\limits _{i=1}^6 E_{i} I_{i} \\end{aligned}\n(2)\n\nwhere E is the Young modulus, I is the moment of inertia of each wire with respect to its own neutral plane, and $${\\tilde{I}}$$ is the moment of inertia with respect to the strand neutral plane. Subscript 0 refers to the core wire, while values of $$i>0$$ refer to the outer wires ($$i=1 \\cdots 6$$).\n\nThis approximation allows us to consider bending without involving more complex models. Figure 6 shows how the analytically computed stiffness values match the results obtained by Foti. However, the ability to characterize the transition between the two phases (which depends on the friction coefficient $$\\mu$$) is not maintained.\n\nThe axial force applied to the strand also influences the bending response, due to the increased friction at the contact between wires when the strand is elongated. Since, for the applications considered in this work, axial forces are high and curvatures are low, the stick-phase stiffness $$K_{stick}$$ will be considered. 
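To give a feel for the two bounds in Eqs. (1) and (2), the short sketch below evaluates them for a strand of round wires with the reference diameters used later in this paper (2.50 mm core, 2.25 mm outer wires). The Young's modulus and the touching-wires helix radius are placeholder assumptions of mine rather than values from Table 2, and helix-angle corrections are ignored, so the numbers are only indicative.

```python
import numpy as np

# Placeholder material value (NOT from Table 2); diameters are the reference strand's.
E = 200e9                                  # Young's modulus [Pa], assumed the same for all wires
d_core, d_out = 2.50e-3, 2.25e-3           # wire diameters [m]
R_helix = (d_core + d_out) / 2             # outer-wire centre distance from the strand axis, touching wires assumed

I = lambda d: np.pi * d**4 / 64            # second moment of a circular section about its own centroid
A = lambda d: np.pi * d**2 / 4             # area of a circular section

phi = np.arange(6) * np.pi / 3             # angular positions of the 6 outer wires
z = R_helix * np.sin(phi)                  # distance of each wire centre from the strand neutral plane

# Eq. (2): each wire bends individually about its own neutral plane
K_slip = E * I(d_core) + 6 * E * I(d_out)
# Eq. (1): wires act as one composite section (parallel-axis term added, helix angle ignored)
K_stick = E * I(d_core) + np.sum(E * (I(d_out) + A(d_out) * z**2))

print(f"K_slip  = {K_slip:.2f} N m^2")     # roughly 1.9 N m^2 with these assumptions
print(f"K_stick = {K_stick:.2f} N m^2")    # roughly 15.4 N m^2 with these assumptions
```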
### Material model\n\nThroughout all simulations presented here, the material model is an elastic-ideally plastic constitutive law. Figure 7 shows the stress-strain curve corresponding to the material parameters in Table 2. This choice of constitutive law allows failure to be modelled by a limit load analysis. The material of the analysed structure is replaced by an ideally plastic material with a lower yield stress. This makes the limit load, i.e. the maximum load the structure can sustain before plastic collapse, representative of the breaking load.\n\n## Optimization procedure\n\n### Objectives\n\nThe aim is to obtain wire shapes which reduce local stress concentration and therefore reduce plastification and fatigue damage, thereby extending life time. In addition, lightweight design increases structural efficiency and decreases material costs. As a result, it has been chosen to consider two objectives.\n\nThe first is stress concentration minimization, defined as\n\n\\begin{aligned} \\gamma = \\max \\left( \\frac{\\sigma _{VM}^{max}}{\\sigma _{VM}^n}\\right) \\end{aligned}\n(3)\n\nwhere $$\\sigma _{VM}^{max}$$ is the largest Von Mises stress acting in the cross section (located at the wire-to-wire contact point) and $$\\sigma _{VM}^n$$ is the nominal value at the center of the core wire, i.e. the tensile stress occurring as a result of the applied deformation. Because of the nonlinear local response, the stress concentration at the contact point varies with the applied load history. In particular, it reaches its maximum value $$\\gamma$$ at the initiation of plastification (Fig. 10).\n\nThe second objective is area minimization, which, at constant lay-length, directly translates into weight reduction. It is considered as the effective area covered by the material in the transverse cross section. Due to the choice of the ideally plastic constitutive law, when a limit load is given, the minimum value of the area is bounded by the yield stress.\n\n### Constraints\n\nOptimization procedures need constraints that prevent infeasible solutions from being accepted. For instance, simplifying a rope structure to a single isotropic rod would prevent any stress concentration, therefore minimizing the objective. In such a case, though, the rope would lose the favourable bending flexibility and damage tolerance, thereby not fulfilling fundamental requirements of rope structures. Such characteristics are main factors in the selection of ropes for an application and need to be maintained. While the damage tolerance is kept by solely considering a shape optimization (which keeps the multi-component nature of the strand, contrary to a topology optimization), the bending stiffness is taken as an inequality constraint, where the upper bound is defined by the bending stiffness $$K_{stick}$$ of the reference strand.\n\nAdditionally, each application sets a maximum load the rope is required to carry. The breaking load of the selected rope needs to be higher than this value. Therefore, because the optimal shape needs to satisfy the same requirements as its respective initial geometry, the breaking load is considered as a constraint as well.\n\n### Geometrical setup\n\nFigure 8 shows the geometrical parametrization used in the considered procedure. The optimization aims at a wide variability, while keeping the number of design parameters reasonably low. It presents a straight core wire and 6 helical wires around it. The analysis keeps the number of wires and the lay-length (i.e. the axial length corresponding to a full turn of an outer wire) constant.\n\nFigure 8b shows the degrees of freedom of our shape parametrization. Besides the total strand wire radius R and the outer wire diameter d, the shape is parametrized by the use of two auxiliary circles that can be moved and scaled on the cross section. These fillets bring in a total of 3 parameters ($$\\rho _1$$, $$r_1$$ and $$r_2$$).\n\nTo fully define the geometry, the following geometrical constraints are imposed as well:\n\n• Minimum interwire distance (gap) is set to be at the mirror plane (highlighted point 1 in Fig. 
8b), to allow for the contact initiation;\n\n• Concave outer shape, with a curvature corresponding to the radius of the strand, R (point 2);\n\n• Flat outer wire surface (point 3) with a given angular distance $$\\Omega$$, which permits relative movement between adjacent outer wires without contact.\n\nWhen reducing the concentration at the contact point is the objective, the optimal shape is expected to morph into one that allows a surface-to-surface contact. Doing so would provide a larger area for radial force transmission and thus reduce the localization. The geometry is nevertheless encoded so that the contact surface may be either concave or convex, in order not to restrict the design space. Figure 8c shows potential candidates satisfying the geometrical constraints.\n\n### Optimization routine\n\nBecause of the complex geometry and the geometrical constraints to be considered, a genetic algorithm has been chosen to find a global minimum of the considered problem. A pool of 100 different feasible geometries, based on the parametrization, has been created as the initial population. The optimization is allowed to have up to 100 generations, with Matlab default values for mutation and crossover.\n\nEach optimization has either area minimization or stress concentration minimization as a single objective, as discussed in Section 3.1.\n\nConstraints are enforced by a multiplicative penalty factor as follows:\n\n\\begin{aligned} {\\tilde{f}} = f \\prod _{i=1}^{n} \\left( 1+\\frac{|g_i-{\\hat{g}}_i|}{{\\hat{g}}_i} \\right) \\prod _{j=1}^{m} \\left( 1+\\frac{\\mathrm {max}(0,h_j-{\\hat{h}}_j)}{{\\hat{h}}_j} \\right) \\end{aligned}\n\nwhere $$g_i$$ and $$h_j$$ are the current values of the n equality constraint functions and of the m inequality constraint functions, $${\\hat{g}}_i$$ and $${\\hat{h}}_j$$ are the given constraint values, and $${\\tilde{f}}$$ is the objective value of the constrained problem.\n\n## Results\n\nThe reference strand taken as the initial design is characterized by a 1 + 6 layout, with a core diameter of 2.50 mm and an outer wire diameter of 2.25 mm. Additional constant geometry properties, such as the number of wires n, lay-factor LL, gap g and angular distance $$\\Omega$$, are listed in Table 3. The material properties used correspond to an ideally plastic law, as discussed in the “Material model” section. The strand is extended axially to a nominal axial strain of $$1\\%$$. The objective is stress concentration minimization and the selected constraints are the limit load and bending flexibility of the reference strand.\n\nThe optimization procedure is coordinated by the built-in Matlab R2018a Optimization Toolbox, while each simulation is solved by Abaqus 6.14 with a custom subroutine (called User Element) developed in a previous work.\n\nThe optimal shape is shown in Fig. 9 on the right, where the contour of the Von Mises stress concentration, i.e. the stress normalized with the nominal value measured at the center of the core wire, is displayed. The increment plotted refers to the nominal strain at which plastification starts, which corresponds (as shown in Fig. 10) to the highest stress concentration within the loading history. The pressure distribution associated with the surface-to-surface contact results in the reduction of the stress concentration into a more homogeneous field in the new geometry. 
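As a small aside, the concentration measure $$\gamma$$ of Eq. (3) quoted in the comparison below is simply the largest ratio of peak to nominal Von Mises stress over the load history; a minimal sketch of how it could be evaluated (hypothetical array names, not code from the paper) is:

```python
import numpy as np

def stress_concentration(vm_peak, vm_nominal):
    # Eq. (3): gamma = max over the load history of (peak von Mises / nominal von Mises).
    # vm_peak    : largest von Mises stress in the cross section at each load increment
    # vm_nominal : von Mises stress at the centre of the core wire at the same increments
    vm_peak = np.asarray(vm_peak, dtype=float)
    vm_nominal = np.asarray(vm_nominal, dtype=float)
    return np.max(vm_peak / vm_nominal)

# Toy load history with made-up numbers, only to show the call signature.
print(stress_concentration([120.0, 300.0, 470.0], [100.0, 200.0, 320.0]))  # 1.5
```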
The reference strand has a maximum local Von Mises stress $$148\\%$$ higher than the nominal stress (corresponding to a concentration $$\\gamma = 2.48$$), while the optimal shape presents only a $$4\\%$$ higher Von Mises stress ($$\\gamma = 1.04$$). The initiation of plastification therefore happens at significantly larger strains, as shown in Fig. 11, where the accumulated equivalent plastic strain at the location of plastic initiation is plotted against the loading history. Delayed plastification also means that the axial load the structure can bear without any local defects is increased, as highlighted in Fig. 12 by the value $$F_p^{opt}$$ ($$34.8\\,\\mathrm {kN}$$) being 3.74 times higher than $$F_p^{ref}$$ ($$9.3\\,\\mathrm {kN}$$).\n\nContrary to Fig. 11, the curves in Fig. 12 do not show any effect of the early local plastification. This is due to the very small area affected by this phenomenon, which makes its contribution to the axial force negligible. Figure 12 shows the force-strain curves, and it can be seen that the limit load is kept as required by the constraint, with the optimal strand being more compliant than the reference strand by less than $$2.5\\%$$. As for the bending flexibility, it showed the same value as the reference, with less than $$0.1\\%$$ variation.\n\nFigure 13 presents another optimal shape, obtained with an alternative choice of objectives and constraints. When the axial force at plastic initiation $$F_p$$ is considered as the only constraint, the required transverse surface is allowed to decrease, because it is not bounded by the limit load requirement. If area minimization is therefore considered, the resulting shape shown in Fig. 13b is obtained, with an area of $$10.8\\, \\text {mm}^2$$, corresponding to $$37\\%$$ of the reference strand area ($$A = 28.9\\,\\text {mm}^2$$). A delayed plastification start can provide margin for reducing the safety factor and consequently the required weight.\n\n### Sensitivity analysis\n\nAny production process is subject to tolerances, and thus it could be expected that the manufactured optimal wire would not match the computed one perfectly. To study this sensitivity, it has been chosen to slightly vary the parameter $$\\rho _1$$, which determines the curvature of the inner surface of the outer wire. Figure 14 shows the maximum stress concentration measured when adding a small perturbation $$\\Delta \\rho _1$$ to the optimal $${\\hat{\\rho }}_1$$. In both directions (whose corresponding geometries are illustrated in Fig. 14) there is a detrimental effect, due to the reintroduction of a local concentration, which partially cancels the benefit of the optimal shape. The values remain, however, significantly lower than the reference strand value ($$\\gamma = 2.48$$). In particular, the results show that a larger $$\\rho _1$$ ($$\\Delta \\rho _1 > 0$$, corresponding to a smaller curvature of the inner contact surface) is safer, as $$\\gamma$$ increases less for such values.\n\n## Conclusions\n\nThe capability of the reduced helical model to resolve local stresses has proven essential to allow for the proposed optimization. It computes stress concentrations without resorting to a solid FE model, which would have rendered the entire routine computationally very expensive. 
Within the limitations set by the reduced helical model assumptions, the applicability and potential of the chosen approach were demonstrated by showing that an optimized design of the strand, and in particular of the outer wires, was found.\n\nSuch an optimization framework complements the state-of-the-art design of strands, since an optimal cross section, providing beneficial characteristics, could be tailored to each application. The strand manufacturer would need ways to produce the resulting geometry by successive drawing through custom-made dies. While this surely increases the complexity and cost of the strand manufacturing processes, it is feasible, as non-circular wires have already been used in full-locked spiral rope. Compared to the compaction of strands (the process of radially compressing a strand that originally had round wires), the approach proposed in this work reduces dirt infiltration and determines a better contact pressure distribution, without introducing unwanted pre-stresses. This analysis can directly be extended to more complex geometries such as multi-layer strands, and it could also be coupled with other models to analyse the next hierarchical level, the wire rope. For instance, the reduced model could compute the homogenized properties of the wire strand to be used in a beam model, which would effectively simulate a stranded wire rope.\n\n## Availability of data and materials\n\nData available on request from the authors. Plot data is found in the additional material “Plot_data.xlsx”.\n\n## References\n\n1. Cardou A, Jolicoeur C. Mechanical models of helical strands. Appl Mech Rev. 1997;50(1):1.\n\n2. Verreet R. Die Geschichte des Drahtseiles. Drahtwelt. 1989;75(6):100–6.\n\n3. Sayenga D. The Birth and Evaluation of the American Wire Rope Industry. First Annual Wire Rope Proceedings. Pullman, Washington 99164: Engineering Extension Service. Washington: Washington State University; 1980.\n\n4. Costello GA. Theory of wire rope. Mechanical Engineering Series. New York: Springer; 1997.\n\n5. Feyrer K. Wire ropes: tension, endurance, reliability, vol. 14. Berlin: Springer; 2015.\n\n6. Raoof M. Wire recovery length in a helical strand under axial-fatigue loading. Int J Fatig. 1991;13(2):127–32.\n\n7. Filotto FM, Kress G. Nonlinear planar model for helical structures. Comput Struct. 2019;224:106111.\n\n8. Love AEH. A treatise on the mathematical theory of elasticity. 4th edn. 1944; p. 643.\n\n9. Green AE, Laws N. A General Theory of Rods. Mech Gener Cont. 1968;293:49–56.\n\n10. Frigerio M, Buehlmann PB, Buchheim J, Holdsworth SR, Dinser S, Franck CM, et al. Analysis of the tensile response of a stranded conductor using a 3D finite element model. Int J Mech Sci. 2016;106:176–83.\n\n11. Xiang L, Wang HY, Chen Y, Guan YJ, Dai LH. Elastic-plastic modeling of metallic strands and wire ropes under axial tension and torsion loads. Int J Solids Struct. 2017;129:103–18.\n\n12. Del-Pino-López JC, Hatlo M, Cruz-Romero P. On simplified 3D finite element simulations of three-core armored power cables. Energies. 2018;11:11.\n\n13. Treyssède F. Elastic waves in helical waveguides. Wave Motion. 2008;45(4):457–70.\n\n14. Frikha A, Cartraud P, Treyssède F. Mechanical modeling of helical structures accounting for translational invariance. Part 1: static behavior. Int J Solids Struct. 2013;50(9):1373–82.\n\n15. Karathanasopoulos N, Kress G. Two dimensional modeling of helical structures, an application to simple strands. Comput Struct. 
2016;174:79–84. https://doi.org/10.1016/j.compstruc.2015.08.016.\n\n16. Cheng AHD, Rencis JJ, Abousleiman Y. Generalized plane strain elasticity problems. Trans Model Simul. 1995;10:167–74.\n\n17. Zubov LM. Exact nonlinear theory of tension and torsion of helical springs. Doklady Phys. 2002;47(8):623–6.\n\n18. Foti F, Martinelli L. An analytical approach to model the hysteretic bending behavior of spiral strands. Appl Math Modell. 2016;40(13–14):6451–67.\n\n19. The MathWorks I. Global Optimization Toolbox User’s Guide; 2018.\n\n20. Puzzi S, Carpinteri A. A double-multiplicative dynamic penalty approach for constrained evolutionary optimization. Struct Multidiscip Optimiz. 2008;35(5):431–45.\n\n21. Dassault Systèmes. Abaqus 6.14 Online Documentation; 2014.\n\n22. Bergen Cable Technology I. Cable 101. https://bergencable.com/cable-101.\n\n23. Wikipedia. Chords Bridge. https://en.wikipedia.org/wiki/Chords_Bridge.\n\n24. Kobelco. Kobelco Construction Machinery Europe B.V. https://www.kobelco-europe.com.\n\nNot applicable.\n\n## Funding\n\nThe authors acknowledge the support of the Swiss National Science Foundation (project No. 159583 and Grant No. 200020_1595831).\n\n## Author information\n\nAuthors\n\n### Contributions\n\nFMF performed the optimization simulations and analyses and wrote the paper. FR created the beam and solid models used in the present work and reviewed the paper. GK commented and reviewed the paper. All authors read and approved the final manuscript.\n\n### Corresponding author\n\nCorrespondence to Francesco Maria Filotto.\n\n## Ethics declarations\n\n### Competing interests\n\nThe authors declare that they have no competing interests.",
null,
""
] | [
null,
"https://amses-journal.springeropen.com/track/article/10.1186/s40323-020-00159-0",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9080596,"math_prob":0.95265114,"size":29553,"snap":"2022-05-2022-21","text_gpt3_token_len":6148,"char_repetition_ratio":0.13577448,"word_repetition_ratio":0.010585586,"special_character_ratio":0.2082699,"punctuation_ratio":0.119885825,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9781507,"pos_list":[0,1,2],"im_url_duplicate_count":[null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-20T14:06:33Z\",\"WARC-Record-ID\":\"<urn:uuid:dc0f73a4-c9cf-4837-89ab-40e824990211>\",\"Content-Length\":\"274323\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4c99807f-357a-479f-a60b-0c82997eaa98>\",\"WARC-Concurrent-To\":\"<urn:uuid:09debd49-68e9-4d6b-ade4-ab9cc06effe5>\",\"WARC-IP-Address\":\"146.75.36.95\",\"WARC-Target-URI\":\"https://amses-journal.springeropen.com/articles/10.1186/s40323-020-00159-0\",\"WARC-Payload-Digest\":\"sha1:5APU6XFCHVBTS4O6CVVDPAO47TUO7DSE\",\"WARC-Block-Digest\":\"sha1:C2FTZMOK7CL55FC6UP7D6Y72ZNTSO6CM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662532032.9_warc_CC-MAIN-20220520124557-20220520154557-00364.warc.gz\"}"} |
https://cre8math.com/2018/03/17/the-geometry-of-polynomials/ | [
"# The Geometry of Polynomials\n\nI recently needed to make a short demo lecture, and I thought I’d share it with you. I’m sure I’m not the first one to notice this, but I hadn’t seen it before and I thought it was an interesting way to look at the behavior of polynomials where they cross the x-axis.\n\nThe idea is to give a geometrical meaning to an algebraic procedure: factoring polynomials. What is the geometry of the different factors of a polynomial?\n\nLet’s look at an example in some detail:",
null,
"$f(x)=2(x-4)(x-1)^2.$",
null,
"Now let’s start looking at the behavior near the roots of this polynomial.",
null,
"Near",
null,
"$x=1,$ the graph of the cubic looks like a parabola — and that may not be so surprising given that the factor",
null,
"$(x-1)$ occurs quadratically.",
null,
"And near",
null,
"$x=4,$ the graph passes through the x-axis like a line — and we see a linear factor of",
null,
"$(x-4)$ in our polynomial.\n\nBut which parabola, and which line? It’s actually pretty easy to figure out. Here is an annotated slide which illustrates the idea.",
null,
"All you need to do is set aside the quadratic factor of",
null,
"$(x-1)^2,$ and substitute the root,",
null,
"$x=1,$ in the remaining terms of the polynomial, then simplify. In this example, we see that the cubic behaves like the parabola",
null,
"$y=-6(x-1)^2$ near the root",
null,
"$x=1.$ Note the scales on the axes; if they were the same, the parabola would have appeared much narrower.\n\nWe perform a similar calculation at the root",
null,
"$x=4.$",
null,
"Just isolate the linear factor",
null,
"$(x-4),$ substitute",
null,
"$x=4$ in the remaining terms of the polynomial, and then simplify. Thus, the line",
null,
"$y=18(x-4)$ best describes the behavior of the graph of the polynomial as it passes through the x-axis. Again, note the scale on the axes.\n\nWe can actually use this idea to help us sketch graphs of polynomials when they’re in factored form. Consider the polynomial",
null,
"$f(x)=x(x+1)^2(x-2)^3.$ Begin by sketching the three approximations near the roots of the polynomial. This slide also shows the calculation for the cubic approximation.",
null,
"Now you can begin sketching the graph, starting from the left, being careful to closely follow the parabola as you bounce off the x-axis at",
null,
"$x=-1.$",
null,
"Continue, following the red line as you pass through the origin, and then the cubic as you pass through",
null,
"$x=2.$ Of course you’d need to plot a few points to know just where to start and end; this just shows how you would use the approximations near the roots to help you sketch a graph of a polynomial.",
null,
"Why does this work? It is not difficult to see, but here we need a little calculus. Let’s look, in general, at the behavior of",
null,
"$f(x)=p(x)(x-a)^n$ near the root",
null,
"$x=a.$ Given what we’ve just been observing, we’d guess that the best approximation near",
null,
"$x=a$ would just be",
null,
"$y=p(a)(x-a)^n.$\n\nJust what does “best approximation” mean? One way to think about approximating, calculuswise, is matching derivatives — just think of Maclaurin or Taylor series. My claim is that the first",
null,
"$n$ derivatives of",
null,
"$f(x)=p(x)(x-a)^n$ and",
null,
"$y=p(a)(x-a)^n$ match at",
null,
"$x=a.$\n\nFirst, observe that the first",
null,
"$n-1$ derivatives of both of these functions at",
null,
"$x=a$ must be 0. This is because",
null,
"$(x-a)$ will always be a factor — since at most",
null,
"$n-1$ derivatives are taken, there is no way for the",
null,
"$(x-a)^n$ term to completely “disappear.”\n\nBut what happens when the",
null,
"$n$th derivative is taken? Clearly, the",
null,
"$n$th derivative of",
null,
"$p(a)(x-a)^n$ at",
null,
"$x=a$ is just",
null,
"$n!p(a).$ What about the",
null,
"$n$th derivative of",
null,
"$f(x)=p(x)(x-a)^n$?\n\nThinking about the product rule in general, we see that the form of the",
null,
"$n$th derivative must be",
null,
"$f^{(n)}(x)=n!p(x)+ (x-a)(\\text{terms involving derivatives of } p(x)).$ When a derivative of",
null,
"$p(x)$ is taken, that means one factor of",
null,
"$(x-a)$ survives.\n\nSo when we take",
null,
"$f^{(n)}(a),$ we also get",
null,
"$n!p(a).$ This makes the",
null,
"$n$th derivatives match as well. And since the first",
null,
"$n$ derivatives of",
null,
"$p(x)(x-a)^n$ and",
null,
"$p(a)(x-a)^n$ match, we see that",
null,
"$p(a)(x-a)^n$ is the best",
null,
"$n$th degree approximation near the root",
null,
"$x=a.$\n\nI might call this observation the geometry of polynomials. Well, perhaps not the entire geometry of polynomials…. But I find that any time algebra can be illustrated graphically, students’ understanding gets just a little deeper.\n\nThose who have been reading my blog for a while will be unsurprised at my geometrical approach to algebra (or my geometrical approach to anything, for that matter). Of course a lot of algebra was invented just to describe geometry — take the Cartesian coordinate plane, for instance. So it’s time for algebra to reclaim its geometrical heritage. I shall continue to be part of this important endeavor, for however long it takes….",
null,
"### Vince Matsko\n\nMathematician, educator, consultant, artist, puzzle designer, programmer, blogger, etc., etc. @cre8math\n\n## 6 thoughts on “The Geometry of Polynomials”\n\n1.",
null,
"William Meisel says:\n\nVince – this is really interesting and I am surprised that I have never come across it anywhere before.\nDoes a similar thing work for functions of two variables? I’m thinking of things that are particularly tough to sketch by hand, like Descarte’s Folium (x^3 + y^3 = 3xy). I have read articles that suggest using something called Newton’s Polygon to sketch these, but I will admit I have not been able to follow the steps that clearly. (My intuition is that this method will not generalize to higher dimensions, because I guess you would have to use partial derivatives, but I am often wrong.)\n\nLike\n\n2.",
null,
"Vince Matsko says:\n\nWilliam – yes, I got it to work with the Folium of Descartes! And in several other cases that I tried as well. Too much to go into here, but I plan to write a more detailed blog post on what I found out in a few weeks (this weekend I’ll be writing about the Bay Area Mathematical Artists). Sorry to keep you in suspense….but thanks for the inspiration!\n\nLiked by 1 person\n\n3.",
null,
"Rachel Campos says:\n\nWell written post.\n\nLike"
] | [
null,
"https://s0.wp.com/latex.php",
null,
"https://vincematsko.files.wordpress.com/2018/03/poly0b.png",
null,
"https://vincematsko.files.wordpress.com/2018/03/poly0c.png",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://vincematsko.files.wordpress.com/2018/03/poly0d.png",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://vincematsko.files.wordpress.com/2018/03/day137poly1.png",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://vincematsko.files.wordpress.com/2018/03/day137poly2.png",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://vincematsko.files.wordpress.com/2018/03/day137poly3.png",
null,
"https://s0.wp.com/latex.php",
null,
"https://vincematsko.files.wordpress.com/2018/03/poly1d.png",
null,
"https://s0.wp.com/latex.php",
null,
"https://vincematsko.files.wordpress.com/2018/03/poly1f.png",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://2.gravatar.com/avatar/27c6b760e350d92f9864bdedf4dc755d",
null,
"https://0.gravatar.com/avatar/cdb944b28cbd9b46801bfd351b55c087",
null,
"https://2.gravatar.com/avatar/27c6b760e350d92f9864bdedf4dc755d",
null,
"https://1.gravatar.com/avatar/7e8814c77793c3318957b3974db8c27a",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9595482,"math_prob":0.99262357,"size":5069,"snap":"2020-45-2020-50","text_gpt3_token_len":1087,"char_repetition_ratio":0.13030602,"word_repetition_ratio":0.011086474,"special_character_ratio":0.20694417,"punctuation_ratio":0.08730159,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9996388,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122],"im_url_duplicate_count":[null,null,null,8,null,8,null,null,null,null,null,8,null,null,null,null,null,8,null,null,null,null,null,null,null,null,null,null,null,8,null,null,null,null,null,null,null,null,null,9,null,null,null,8,null,null,null,8,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-23T22:37:49Z\",\"WARC-Record-ID\":\"<urn:uuid:8bc79dcf-db98-4ec2-8a62-c5baebf501fe>\",\"Content-Length\":\"113952\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:18b06089-8383-49d9-b05d-cb582ab13877>\",\"WARC-Concurrent-To\":\"<urn:uuid:e5d3f802-9705-4b22-b644-e2dedf15f0bc>\",\"WARC-IP-Address\":\"192.0.78.25\",\"WARC-Target-URI\":\"https://cre8math.com/2018/03/17/the-geometry-of-polynomials/\",\"WARC-Payload-Digest\":\"sha1:LN2SI3HRL364KLUIERVI3FGNFMEBKOY7\",\"WARC-Block-Digest\":\"sha1:BGY55U3DDOGUYRJIMM5SP7T4IEFUDWAF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141168074.3_warc_CC-MAIN-20201123211528-20201124001528-00412.warc.gz\"}"} |
https://stats.stackexchange.com/questions/560688/why-is-my-highest-coefficient-not-significant-but-lower-ones-are-poisson-regre | [
"# Why is my highest coefficient not significant but lower ones are? (Poisson regression)\n\nI'm looking to see if income level affects the number of pets a person has, controlling for number of children. I have run a GLM (Poisson family) model, and found that if a person is categorised as 'Low' income, their number of expected pets decreases by 21.3%. I also found that for every child a person has, their number of expected pets increases by 4.03%.\n\nHowever, the 21.3% decrease is not significant, and the 4.03% increase is significant. I'm wondering if this is incorrect/whether I have done something wrong? I am aware I may be confused about how statistical significance works, so just want to check whether the below makes sense!\n\nThe R outputs for estimates and p-values look like this:\n\n Estimate Pr(>|z|)\nLevel_of_IncomeLow: -0.239043 0.0830 .\nNum_of_children 0.039527 7.54e-10 ***\n\n\nMy calculations:\n\nExponent of -0.239043: 0.787381\n1-0.787381=0.212619\n= 21.3% decrease\n\nExponent of 0.039527: 1.040319\n= 4.03% increase\n\n\nEssentially, is it okay that the larger effect is not statistically significant, but the smaller effect is? It just feels wrong!\n\n• Significance of a test depends on the ratio of the coefficient to its standard error (which you should also report in your analyses). The standard error is determined (i) the sample size, (ii) the variance of the residuals of the model, and (iii) the variation in the independent variables. Components (i) and (ii) affect the tests for both coefficients. However, it's possible that for (iii) there is more \"relative\" variation in the number of children in your sample than the level of income. Jan 16, 2022 at 21:51\n• This means that it is easier to \"detect\" an effect. There could still be an income effect but the sample size is too low to detect it. Jan 16, 2022 at 21:51\n• The fact that it depends on a ratio is extremely important. For example, if you rescale the variable, it affects both the numerator and denominator, and doesn't alter the significance. This makes significance a measure that doesn't depend on the units of measurement. Jan 16, 2022 at 21:54\n• Thank you for all the helpful answers! I will also add standard error into my report Jan 16, 2022 at 23:32\n• You can't compare the size of coefficients that are in different units. There's no basis on which to call one coefficient \"larger\" or \"smaller\" when you're comparing incommensurable units. Jan 17, 2022 at 0:46\n\nlow (vs. high) has a restricted range (i.e., only 2 values). Number of children presumably has a much wider range. The narrower the range of an $$X$$ variable, the less power there is to test it. That's a general principle. Specific to this example, it sounds like the underlying variable (income) was subjected to a median split, or something similar. If so, you should be aware that categorizing a continuous variable has long been considered poor statistical practice.\n\nSignificance is not purely about the magnitude of an effect. For instance, if the parameter/effect is $$1$$ light-year or $$9.46×10^{12}$$ km, then it is different figures but the same distance.\n\n(You might argue that this example is bad because I changed the units, but how are you gonna make sure that the units match when you compare 'pets per unit of income' with 'pets per number of kids'. 
When you change the currency, then you change the effect size.)\n\nSignificance expresses the magnitude of an observed effect in terms of how likely or unlikely it is to see at least as strong a deviation from the value predicted by some null hypothesis, given that the null hypothesis is true.\n\nWhen this probability is low, you might consider the effect an anomaly from the point of view of the null hypothesis. If an effect with a small magnitude is detected as an anomaly, but an effect with a large magnitude is not, it means that the experiment has different sensitivity for the two effects.\n\n> is it okay that the larger effect is not statistically significant, but the smaller effect is? It just feels wrong!\n\nIt depends on the relative effect size, that is, the size relative to the error of the estimate, and this can be different for the two coefficients.\n\nBelow you see an example where the estimated intercept of 1.724 is different from 0, but it is not more significant than the estimated slope of 0.095 being different from 0.\n\nThe slope is a smaller effect if you compare the plain figures (0.095 < 1.724), but the slope has a much smaller variation from sample to sample, and so even this small effect is significant.",
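A small, self-contained illustration of the coefficient-to-standard-error point made in this answer: the standard errors are not shown in the question, but they can be backed out from the reported estimates and p-values if one assumes the usual Wald z-tests reported by glm. The sketch below (Python, although the original output is from R) recovers the implied |z| and SE and also reproduces the question's percent-change calculations.

```python
import numpy as np
from scipy.stats import norm

# Estimates and two-sided p-values as reported in the question's R output.
# Standard errors were not posted, so the implied |z| and SE are recovered
# under the assumption of a Wald z-test: z = estimate / SE, p = 2 * P(Z > |z|).
coefs = {"Level_of_IncomeLow": (-0.239043, 0.0830),
         "Num_of_children":    ( 0.039527, 7.54e-10)}

for name, (est, p) in coefs.items():
    z = norm.isf(p / 2)             # implied |z| from the two-sided p-value
    se = abs(est) / z               # implied standard error
    pct = 100 * (np.exp(est) - 1)   # percent change in the expected count per unit
    print(f"{name:20s} est={est:+.4f}  ~SE={se:.4f}  |z|={z:.2f}  effect={pct:+.1f}%")
```

With these numbers the implied |z| is roughly 1.7 for the income coefficient and above 6 for the number-of-children coefficient, which is exactly why the numerically larger coefficient is the less significant one.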
null,
""
] | [
null,
"https://i.stack.imgur.com/0XtCS.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.91200876,"math_prob":0.9321947,"size":1057,"snap":"2023-14-2023-23","text_gpt3_token_len":290,"char_repetition_ratio":0.10161444,"word_repetition_ratio":0.012121212,"special_character_ratio":0.307474,"punctuation_ratio":0.17256637,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99199045,"pos_list":[0,1,2],"im_url_duplicate_count":[null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-27T19:56:33Z\",\"WARC-Record-ID\":\"<urn:uuid:062a11b8-cc47-4cd0-8d94-79b0b93817a7>\",\"Content-Length\":\"167111\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:77bc0e1f-e38a-428d-bb69-363a6f77e3fd>\",\"WARC-Concurrent-To\":\"<urn:uuid:17002d2d-0227-4fc8-a9b3-aeb94bfa69b7>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://stats.stackexchange.com/questions/560688/why-is-my-highest-coefficient-not-significant-but-lower-ones-are-poisson-regre\",\"WARC-Payload-Digest\":\"sha1:W4OEA6TVWJIOQ35J7PAT6VR6GITBAEYC\",\"WARC-Block-Digest\":\"sha1:N35JYWUDWBDZHHTSSXOV3YE3FNBBX265\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296948684.19_warc_CC-MAIN-20230327185741-20230327215741-00560.warc.gz\"}"} |
https://simple.m.wikipedia.org/wiki/Torque | [
"# Torque\n\ntendency of a force to rotate an object\n\nIn physics, torque is the tendency of a force to turn or twist. If a force is used to begin to spin an object, or to stop an object from spinning, a torque is made.\n\nThe force applied to a lever, multiplied by the distance from the lever's fulcrum, multiplied again by the sine of the angle created, is described as torque. This is also known as \"r cross f,\" or \"force times fulcrum distance times sine theta.\"\n\n## Fulcrum\n\nFulcrum is the axis of rotation or point of support on which a lever turns in raising or moving something.\n\n## Equation\n\nThe equation for torque is:\n\n${\\boldsymbol {\\tau }}=\\mathbf {r} \\times \\mathbf {F} \\,\\!$\n\nwhere F is the net force vector and r is the vector from the axis of rotation to the point where the force is acting. The Greek letter Tau is used to represent torque.\n\nThe units of torque are force multiplied by distance. The SI unit of torque is the newton-metre. The most common English unit is the foot-pound."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8505344,"math_prob":0.98042774,"size":1427,"snap":"2023-14-2023-23","text_gpt3_token_len":343,"char_repetition_ratio":0.12508784,"word_repetition_ratio":0.0,"special_character_ratio":0.23686054,"punctuation_ratio":0.10469314,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99955446,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-05-30T09:18:45Z\",\"WARC-Record-ID\":\"<urn:uuid:f0f2a30f-745b-488c-9155-24064448c478>\",\"Content-Length\":\"60683\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e541b948-09ab-4316-bf00-422cee67610e>\",\"WARC-Concurrent-To\":\"<urn:uuid:e94a9ced-d2f3-49be-9f5b-cc83b760aed1>\",\"WARC-IP-Address\":\"208.80.154.224\",\"WARC-Target-URI\":\"https://simple.m.wikipedia.org/wiki/Torque\",\"WARC-Payload-Digest\":\"sha1:5DQNDGZQCFPXYBSZRFFQPLQYHTY6OFTN\",\"WARC-Block-Digest\":\"sha1:F6EEWUTZ274ERYFAFBCOHE3V3DEBQTCI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224645417.33_warc_CC-MAIN-20230530063958-20230530093958-00442.warc.gz\"}"} |
https://www.millisecond.com/support/docs/current/html/language/attributes/position.htm | [
"",
null,
"",
null,
"Inquisit Language Reference\n\n# position attribute\n\nThe position attribute controls the screen location at which stimuli are presented.\n\n## Member of\n\n<button> <caption> <checkboxes> <clock> <dropdown> <image> <likert> <listbox> <openended> <picture> <radiobuttons> <shape> <slider> <slidertrial> <text> <textbox> <video>\n\n## Syntax\n\n/ position = (x value, y value)\n\n## Parameters\n\n x value A value or property indicating the horizontal screen coordinate in pixels, percent(default), or points. If x is set to null and y is specified, x is set to the pixel value of y. y value A value or property indicating the vertical screen coordinate in pixels, percent(default), or points. If y is set to null and x is specified, y is set to the pixel value of x.\n\n## Remarks\n\nHorizontal position is relative to the left side. 0% is the left edge of the screen and 100% is the right edge. Vertical position is relative to the top of the screen, with 0% placing the stimulus at the top and 100% placing it at the bottom edge. Percentages may be specified as decimals (e.g., 52.968) for increased precision. The default position is the middle of the screen (50%, 50%).\n\n## Examples\n\nThe following sets the position of the text to the lower left corner of the screen:\n\n``<text sometext>/ items = (\"ipsum\")/ position = (0, 100)</text>``\n\nThe following sets the position of the text to the middle of the screen with 800 X 600 resolution:\n\n``<text sometext>/ items = (\"ipsum\")/ position = (400px, 300px)</text>``\n\nThe following sets the position based on the trial number:\n\n``<text sometext>/ items = (\"ipsum\")/ position = (trial.mytrial.currenttrialnumber * 5, trial.mytrial.currenttrialnumber * 5)</text>``"
] | [
null,
"https://www.facebook.com/tr",
null,
"https://www.millisecond.com/support/docs/current/html/stylesheets/images/up.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7054482,"math_prob":0.96307707,"size":1560,"snap":"2022-27-2022-33","text_gpt3_token_len":401,"char_repetition_ratio":0.15167095,"word_repetition_ratio":0.22134387,"special_character_ratio":0.27051282,"punctuation_ratio":0.113074206,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9505468,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-17T08:29:39Z\",\"WARC-Record-ID\":\"<urn:uuid:19ca24f9-ea09-4728-9022-9d6c0293e9b7>\",\"Content-Length\":\"7511\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7debe4ec-2075-44e7-b2d4-3db09488f327>\",\"WARC-Concurrent-To\":\"<urn:uuid:2536910b-0666-42bc-baa7-9204e6ac9826>\",\"WARC-IP-Address\":\"44.242.33.6\",\"WARC-Target-URI\":\"https://www.millisecond.com/support/docs/current/html/language/attributes/position.htm\",\"WARC-Payload-Digest\":\"sha1:BIDUF644PHJSJGE74C3KB47D4O6LAGSB\",\"WARC-Block-Digest\":\"sha1:LXF7UXPC2LCA4M657PD74MTIG55FBTIL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882572870.85_warc_CC-MAIN-20220817062258-20220817092258-00249.warc.gz\"}"} |
https://nbviewer.ipython.org/github/gpeyre/numerical-tours/blob/master/matlab/sparsity_7_sudoku.ipynb | [
"# Sudoku using POCS and Sparsity¶\n\nImportant: Please read the installation page for details about how to install the toolboxes. $\\newcommand{\\dotp}{\\langle #1, #2 \\rangle}$ $\\newcommand{\\enscond}{\\lbrace #1, #2 \\rbrace}$ $\\newcommand{\\pd}{ \\frac{ \\partial #1}{\\partial #2} }$ $\\newcommand{\\umin}{\\underset{#1}{\\min}\\;}$ $\\newcommand{\\umax}{\\underset{#1}{\\max}\\;}$ $\\newcommand{\\umin}{\\underset{#1}{\\min}\\;}$ $\\newcommand{\\uargmin}{\\underset{#1}{argmin}\\;}$ $\\newcommand{\\norm}{\\|#1\\|}$ $\\newcommand{\\abs}{\\left|#1\\right|}$ $\\newcommand{\\choice}{ \\left\\{ \\begin{array}{l} #1 \\end{array} \\right. }$ $\\newcommand{\\pa}{\\left(#1\\right)}$ $\\newcommand{\\diag}{{diag}\\left( #1 \\right)}$ $\\newcommand{\\qandq}{\\quad\\text{and}\\quad}$ $\\newcommand{\\qwhereq}{\\quad\\text{where}\\quad}$ $\\newcommand{\\qifq}{ \\quad \\text{if} \\quad }$ $\\newcommand{\\qarrq}{ \\quad \\Longrightarrow \\quad }$ $\\newcommand{\\ZZ}{\\mathbb{Z}}$ $\\newcommand{\\CC}{\\mathbb{C}}$ $\\newcommand{\\RR}{\\mathbb{R}}$ $\\newcommand{\\EE}{\\mathbb{E}}$ $\\newcommand{\\Zz}{\\mathcal{Z}}$ $\\newcommand{\\Ww}{\\mathcal{W}}$ $\\newcommand{\\Vv}{\\mathcal{V}}$ $\\newcommand{\\Nn}{\\mathcal{N}}$ $\\newcommand{\\NN}{\\mathcal{N}}$ $\\newcommand{\\Hh}{\\mathcal{H}}$ $\\newcommand{\\Bb}{\\mathcal{B}}$ $\\newcommand{\\Ee}{\\mathcal{E}}$ $\\newcommand{\\Cc}{\\mathcal{C}}$ $\\newcommand{\\Gg}{\\mathcal{G}}$ $\\newcommand{\\Ss}{\\mathcal{S}}$ $\\newcommand{\\Pp}{\\mathcal{P}}$ $\\newcommand{\\Ff}{\\mathcal{F}}$ $\\newcommand{\\Xx}{\\mathcal{X}}$ $\\newcommand{\\Mm}{\\mathcal{M}}$ $\\newcommand{\\Ii}{\\mathcal{I}}$ $\\newcommand{\\Dd}{\\mathcal{D}}$ $\\newcommand{\\Ll}{\\mathcal{L}}$ $\\newcommand{\\Tt}{\\mathcal{T}}$ $\\newcommand{\\si}{\\sigma}$ $\\newcommand{\\al}{\\alpha}$ $\\newcommand{\\la}{\\lambda}$ $\\newcommand{\\ga}{\\gamma}$ $\\newcommand{\\Ga}{\\Gamma}$ $\\newcommand{\\La}{\\Lambda}$ $\\newcommand{\\si}{\\sigma}$ $\\newcommand{\\Si}{\\Sigma}$ $\\newcommand{\\be}{\\beta}$ $\\newcommand{\\de}{\\delta}$ $\\newcommand{\\De}{\\Delta}$ $\\newcommand{\\phi}{\\varphi}$ $\\newcommand{\\th}{\\theta}$ $\\newcommand{\\om}{\\omega}$ $\\newcommand{\\Om}{\\Omega}$\n\nThis numerical tour explores the use of numerical schemes to solve the Sudoku game.\n\nThis tour was written by <http://lcav.epfl.ch/~lu/ Yue M. Lu> and <http://www.ceremade.dauphine.fr/~peyre/ Gabriel Peyr >.\n\nThe idea of encoding the Sudoku rule using a higer dimensional lifting, linear constraints and binary constraint is explained in:\n\nAndrew C. Bartlett, Amy N. Langville, An Integer Programming Model for the Sudoku Problem, The Journal of Online Mathematics and Its Applications, Volume 8. May 2008.\n\nThe idea of removing the binary constraint and using sparsity constraint is exposed in:\n\nP. Babu, K. Pelckmans, P. Stoica, and J. Li, Linear Systems, Sparse Solutions, and Sudoku, IEEE Signal Processing Letters, vol. 17, no. 1, pp. 40-42, 2010.\n\nThis tour elaborarates on these two ideas. In particular it explains why $L^1$ minimization is equivalent to a POCS (projection on convex sets) method to find a feasible point inside a convex polytope.\n\nIn :\naddpath('toolbox_signal')\n\n\n## Game Encoding and Decoding¶\n\nThe basic idea is to use a higher dimensional space of size |(n,n,n)| to represent a Sudoku matrix of size |(n,n)|. In this space, the arrays are constrained to have binary entries.\n\nSize of the Sudoku. 
This number must be a square.\n\nIn :\nn = 9;\n\n\nCreate a random integer matrix with entries in 1...9.\n\nIn :\nx = floor(rand(n)*n)+1;\n\n\nComparison matrix used for encoding to binary format.\n\nIn :\nU = repmat( reshape(1:n, [1 1 n]), n,n );\n\n\nEncoding in binary format.\n\nIn :\nencode = @(x)double( repmat(x, [1 1 n])==U );\nX = encode(x);\n\n\nThe resulting matrix has binary entries and has size |(n,n,n)|. One has |x(i,j)=k| if |X(i,j,k)=1| and |X(i,j,l)=0| for |l~=k|.\n\nDecoding from binary format. One use a |min| to be able to recover even if the matrix |X| is not binary (because of computation errors).\n\nIn :\n[tmp,x1] = min( abs(X-1), [], 3 );\n\n\nShow that decoding is ok.\n\nIn :\ndisp(['Should be 0: ' num2str(norm(x-x1,'fro')) '.']);\n\nShould be 0: 0.\n\n\n## Encoding Constraints¶\n\nFor |X| to be a valid encoded matrix, it should be binary and satifies that each |X(i,j,:)| contains only a single |1|, which can be represented using a linear contraint |Aenc*X(:)=1|.\n\nNow we construct one encoding constraint.\n\nIn :\ni = 4; j = 4;\nZ = zeros(n,n,n);\nZ(i,j,:) = 1;\n\n\nAdd this constraint to the encoding constraint matrix.\n\nIn :\nAenc = [];\nAenc(end+1,:) = Z(:)';\n\n\nShow that constraint is satisfied.\n\nIn :\ndisp(['Should be 1: ' num2str(Aenc*X(:)) '.']);\n\nShould be 1: 1.\n\n\nExercise 1\n\nBuild the encoding matrix |Aenc|. Display it.\n\nIn :\nexo1()",
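For readers following along outside MATLAB, here is a rough NumPy transcription of the lifting and of the per-cell constraint matrix |Aenc| asked for in Exercise 1. It is my own sketch and uses C-order flattening consistently, so the indexing is self-consistent but not byte-compatible with MATLAB's column-major layout.

```python
import numpy as np

n = 9
x = np.random.randint(1, n + 1, size=(n, n))      # random grid with entries in 1..n

# lifting: X[i, j, k] = 1 exactly when x[i, j] == k + 1
X = (x[:, :, None] == np.arange(1, n + 1)).astype(float)
assert (X.argmax(axis=2) + 1 == x).all()          # decoding recovers the grid

# Aenc: one row per cell, selecting that cell's n lifted entries (their sum must be 1)
Aenc = np.zeros((n * n, n ** 3))
row = 0
for i in range(n):
    for j in range(n):
        Z = np.zeros((n, n, n))
        Z[i, j, :] = 1
        Aenc[row] = Z.ravel()                     # flattening convention used throughout
        row += 1

print(np.abs(Aenc @ X.ravel() - 1).max())         # 0.0, i.e. Aenc * X(:) = 1 holds
```

The row, column, block and inpainting matrices of the following sections are built the same way, only with different index patterns in Z.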
null,
"In :\n%% Insert your code here.\n\n\nShow that constraint |Aenc*X(:)=1| is satisfied.\n\nIn :\ndisp(['Should be 0: ' num2str(norm(Aenc*X(:)-1)) '.']);\n\nShould be 0: 0.\n\n\n## Sudoku Rules Constraints¶\n\nIn a Sudoku valid matrix |x|, each column, row and sub-square of |x| should contains all the values in 0...n. This can be encoded on the high dimensional |X| using linear constraints |ArowX=1|, |AcolX=1| and |Ablock*X=1|.\n\nA valid Sudoku matrix.\n\nIn :\nx = [8 1 9 6 7 4 3 2 5;\n5 6 3 2 8 1 9 4 7;\n7 4 2 5 9 3 6 8 1;\n6 3 8 9 4 5 1 7 2;\n9 7 1 3 2 8 4 5 6;\n2 5 4 1 6 7 8 9 3;\n1 8 5 7 3 9 2 6 4;\n3 9 6 4 5 2 7 1 8;\n4 2 7 8 1 6 5 3 9]\n\nx =\n\n8 1 9 6 7 4 3 2 5\n5 6 3 2 8 1 9 4 7\n7 4 2 5 9 3 6 8 1\n6 3 8 9 4 5 1 7 2\n9 7 1 3 2 8 4 5 6\n2 5 4 1 6 7 8 9 3\n1 8 5 7 3 9 2 6 4\n3 9 6 4 5 2 7 1 8\n4 2 7 8 1 6 5 3 9\n\n\n\nEncode it in binary format.\n\nIn :\nX = encode(x);\n\n\nSelect the index of the entries of a row.\n\nIn :\ni=3; k=5;\nZ = zeros(n,n,n);\nZ(i,:,k) = 1;\n\n\nFill the first entries of the row matrix.\n\nIn :\nArow = [];\nArow(end+1,:) = Z(:)';\n\n\nShow that constraint is satisfied.\n\nIn :\ndisp(['Should be 1: ' num2str(Arow*X(:)) '.']);\n\nShould be 1: 1.\n\n\nExercise 2\n\nBuild the full row matrix |Arow|. Display it.\n\nIn :\nexo2()",
null,
"In :\n%% Insert your code here.\n\n\nShow that constraint |Arow*X(:)=1| is satisfied.\n\nIn :\ndisp(['Should be 0: ' num2str(norm(Arow*X(:)-1)) '.']);\n\nShould be 0: 0.\n\n\nExercise 3\n\nBuild the full column matrix |Acol|. Display it.\n\nIn :\nexo3()",
null,
"In :\n%% Insert your code here.\n\n\nShow that constraint |Acol*X(:)=1| is satisfied.\n\nIn :\ndisp(['Should be 0: ' num2str(norm(Acol*X(:)-1)) '.']);\n\nShould be 0: 0.\n\n\nNow we proceed to block constraints.\n\nSize of a block.\n\nIn :\np = sqrt(n);\n\n\nThe upper left square should contain all numbers in |{1,...,n}|.\n\nIn :\nk = 1;\nZ = zeros(n,n,n);\nZ(1:p,1:p,k) = 1;\n\n\nAdd it as the first row of the block constraint matrix.\n\nIn :\nAblock = [];\nAblock(end+1,:) = Z(:)';\n\n\nShow that constraint is satisfied.\n\nIn :\ndisp(['Should be 1: ' num2str(Ablock*X(:)) '.']);\n\nShould be 1: 1.\n\n\nExercise 4\n\nCreate the full block matrix. Display it.\n\nIn :\nexo4()",
null,
"In :\n%% Insert your code here.\n\n\nShow that constraint |Ablock*X(:)=1| is satisfied.\n\nIn :\ndisp(['Should be 0: ' num2str(norm(Ablock*X(:)-1)) '.']);\n\nShould be 0: 0.\n\n\n## Inpainting Constraint¶\n\nA Sudoku game asks to fill the missing entries of a partial Sudoku matrix |x1| to obtain a full Sudoku matrix |x|.\n\nThe fact that for each available entry |(i,j)| on must have |x(i,j)=x1(i,j)| can be encoded using a linear constraint.\n\nLoad a Sudoku with missing entries, that are represented as 0. This is an easy grid.\n\nIn :\nx1 = [0 1 0 0 0 0 3 0 0;\n0 0 3 0 8 0 0 4 0;\n7 0 2 0 0 3 0 0 1;\n0 3 0 9 4 0 1 0 0;\n9 0 0 0 0 0 0 0 6;\n0 0 4 0 6 7 0 9 0;\n1 0 0 7 0 0 2 0 4;\n0 9 0 0 5 0 7 0 0;\n0 0 7 0 0 0 0 3 0];\n\n\nRetrieve the indexes of the available entries.\n\nIn :\n[I,J] = ind2sub( [n n], find(x1(:)~=0) );\nv = x1(x1(:)~=0);\n\n\nCreate a vector corresponding to the constraint that |x1(I(i),J(i))==v(i)|.\n\nIn :\ni = 1;\nZ = zeros(n,n,n);\nZ(I(i), J(i), v(i)) = 1;\n\n\nFill the first entries of the row matrix.\n\nIn :\nAinp = [];\nAinp(end+1,:) = Z(:)';\n\n\nExercise 5\n\nBuild the full inpainting matrix |Ainp|. Display it.\n\nIn :\nexo5()",
null,
"In :\n%% Insert your code here.\n\n\nShow that constraint |Ainp*X1(:)=1| is satisfied.\n\nIn :\nX1 = encode(x1);\ndisp(['Should be 0: ' num2str(norm(Ainp*X1(:)-1)) '.']);\n\nShould be 0: 0.\n\n\n## Solving the Sudoku by Binary Integer Programming¶\n\nThe whole set of constraints can be written |A*X(:)=1|, where the matrix |A| is defined as a concatenation of all the constraint matrices.\n\nIn :\nA = [Aenc; Arow; Acol; Ablock; Ainp];\n\n\nPre-compute the pseudo-inverse of A.\n\nIn :\npA = pinv(A);\n\n\nIf the Sudoku game has an unique solution |x|, then the corresponding lifted vector |X| is the only solution to |A*X(:)=1| under the constraint that |X| is binary.\n\nThis is the idea proposed in:\n\nAndrew C. Bartlett, Amy N. Langville, An Integer Programming Model for the Sudoku Problem, The Journal of Online Mathematics and Its Applications, Volume 8. May 2008.\n\nUnfortunately, solving a linear system under binary constraints is difficult, in fact solving a general integer program is known to be NP-hard. It means that such a method is very slow to solve Sudoku for large |n|.\n\nOne can use branch-and-bounbs methods to solve the binary integer program, but this might be slow. One can use for instance the command |bintprog| of Matlab (optimization toolbox), with an arbitrary objective function (since one wants to solve a feasability problem, no objective is needed).\n\nExercise 6\n\nImplement the Soduku solver using an interger linear programming algorithm.\n\nIn :\nexo6()\n\nIn :\n%% Insert your code here.\n\n\n## Removing the Binary Constraint¶\n\nIf one removes the binary constraint, one simply wants to compute a solution to the linear system |A*X(:)=1|. But unfortunately it has an infinite number of solutions (and the set of solutions is not bounded).\n\nIt is thus unlikely that chosing a solution at random will work, but let's try it by projecting any vector on the constraint |A*X(:)=1|.\n\nFirst define the orthogonal projector on the constraint |{X \\ A*X(:)=1}|.\n\nIn :\nprojector = @(u)reshape( u(:) - pA*(A*u(:)-1), [n n n]);\n\n\nWe project an arbitrary vector (that does not satisfy the constraint) onto the constraint |A*X(:)=1|.\n\nIn :\nXproj = projector( zeros(n,n,n) );\n\n\nCheck that |Xproj| projects onto itself because it satisfies the constraints.\n\nIn :\nd = projector(Xproj)-Xproj;\ndisp(['Should be 0: ' num2str(norm(d(:), 'fro')) '.']);\n\nShould be 0: 4.1665e-14.\n\n\nPlot the histogrtam of the entries of |Xproj|. As you can see, they are not binary, meaning that the binary constraint is violated.\n\nIn :\nclf;\nhist(Xproj(:), 30);\naxis('tight');",
null,
"It is thus not a solution to the Sudoku problem. We emphasize this by counting the number of violated constraints after decoding / re-encoding.\n\nIn :\n[tmp,xproj] = min( abs(Xproj-1), [], 3 );\nXproj1 = encode(xproj);\ndisp(['Number of violated constraints: ' num2str(sum(A*Xproj1(:)~=1)) '.']);\n\nNumber of violated constraints: 119.\n\n\n## Solving the Sudoku by Projection on Convex Sets¶\n\nA way to improve the quality of the result is to find a vector that satisfies both |AX(:)=1| and |0<=X<=1|. Note that this last constraint can be modified to |X>=0| because of the fact that the entries of |X(i,j,:)| must sum to 1 because of |AX(:)|.\n\nA way to find a point inside this polytope |P = {X \\ A*X(:)=1 and X>=0}| is to start from a random initial guess.\n\nIn :\nXproj = zeros(n,n,n);\n\n\nAnd iteratively project on each constraint. This corresponds to the POCS algorithm to find a feasible point into the (non empty) intersection of convex sets.\n\nIn :\nXproj = max( projector(Xproj),0 );\n\n\nExercise 7\n\nPerform iterative projections (POCS) on the two constraints |AXproj(:)=1| and |Xproj>=0|. Display the decay of the error |norm(AXproj(:)-1)| in logarithmic scale.\n\nIn :\nexo7()",
null,
"In :\n%% Insert your code here.\n\n\nDisplay the histogram of the recovered values.\n\nIn :\nclf;\nhist(Xproj(:), 30);\naxis('tight');",
null,
"As you can see, the resulting vector is (nearly, up to convergence errors of the POCS) a binary one, meaning that it is actually the (unique) solution to the Sudoku problem.\n\nWe check this by counting the number of violated constraints after decoding and re-encoding.\n\nIn :\n[tmp,xproj] = min( abs(Xproj-1), [], 3 );\nXproj1 = encode(xproj);\ndisp(['Number of violated constraints: ' num2str(sum(A*Xproj1(:)~=1)) '.']);\n\nNumber of violated constraints: 0.\n\n\nExercise 8\n\nProve (numerically) that for this grid, the polytope of constraints |P={X \\ A*X(:)=1 and X>=0}| is actually reduced to a singleton, which is the solution of the Sudoku problem.\n\nIn :\nexo8()\n\nIn :\n%% Insert your code here.\n\n\nUnfortunately, this is not always the case. For more difficult grids, |P| might not be reduced to a singleton.\n\nThis is a difficult grid.\n\nIn :\nx1 = [0 0 3 0 0 9 0 8 1;\n0 0 0 2 0 0 0 6 0;\n5 0 0 0 1 0 7 0 0;\n8 9 0 0 0 0 0 0 0;\n0 0 5 6 0 1 2 0 0;\n0 0 0 0 0 0 0 3 7;\n0 0 9 0 2 0 0 0 8;\n0 7 0 0 0 4 0 0 0;\n2 5 0 8 0 0 6 0 0];\n\n\nExercise 9\n\nTry the iterative projection on convexs set (POCS) method on this grid (remember that you need to re-define |A| and |pA|). What is your conclusion ? ill the constraint matrix OCS heck wether this is a valid solution.\n\nIn :\nexo9()\n\nNumber of violated constraints: 20.",
null,
"In :\n%% Insert your code here.\n\n\n## Decoding Using L1 Sparsity¶\n\nThe true solution has exactly |n^2| non zero entries, while a feasible point within the convex polytope |P| is usually not as sparse.\n\nCompute the sparsity of a projected vector.\n\nIn :\nXproj = projector( zeros(n,n,n) );\ndisp(['Sparsity: ' num2str(sum(Xproj(:)~=0)) ' (optimal: ' num2str(n*n) ').']);\n\nSparsity: 729 (optimal: 81).\n\n\nOne can prove that any solution to |AX(:)=1| has more than |n^2| non zeros, and that the true Sudoku solution is the unique solution to |AX(:)=1| with |n^2| entries.\n\nOne can thus (in principle) solve the Sudoku by finding the solution to |A*X(:)=1| with minimal L0 norm.\n\nUnfortunately, solving this problem is known to be in some sense NP-hard.\n\nA classical method to approximate the solution to the minimum L0 norm problem it to replace it by a minimum L1 norm solution, which can be computed with polynomial time algorithms.\n\nThis idea is put forward in:\n\nP. Babu, K. Pelckmans, P. Stoica, and J. Li, Linear Systems, Sparse Solutions, and Sudoku, IEEE Signal Processing Letters, vol. 17, no. 1, pp. 40-42, 2010.\n\nThe L1 norm of the Sudoku solution is |n^2|. The L1 norm of a projected vector is usually larger.\n\nIn :\ndisp(['L1 norm: ' num2str(norm(Xproj(:),1)) ' (optimal: ' num2str(n*n) ').']);\n\nL1 norm: 102.7585 (optimal: 81).\n\n\nUnfortunately, all the vectors in the (bouned) polytope |A*X(:)=1| and |X>=0| has the same L1 norm, equal to |81|.\n\nThis shows that the L1 minimization has the same properties as the POCS algorithm. They work if and only if the polytope is reduced to a single point.\n\nNevertheless, one can compute the solution with minimum L1 norm which corresponds to the Basis pursuit problem. This problem is equivalent to a linear program, and can be solved using standard interior points method (other algorithms, such as Douglas-Rachford could be used as well).\n\nDefine a shortcut for the resolution of basis pursuit.\n\nIn :\nsolvel1 = @(A)reshape(perform_solve_bp(A, ones(size(A,1),1), n^3, 30, 0, 1e-10), [n n n]);\n\n\nSolve the L1 minimization.\n\nIn :\nXbp = solvel1(A);\n\n\nCompute the L1 norm of the solution, to check that it is indeed equal to the minimal possible L1 norm |n^2|.\n\nIn :\ndisp(['L1 norm: ' num2str(norm(Xbp(:),1)) ' (optimal: ' num2str(n*n) ').']);\n\nL1 norm: 81 (optimal: 81).\n\n\nUnfortunately, on this difficult problem, similarely to POCS, the L1 method does not works.\n\nIn :\n[tmp,xbp] = min( abs(Xbp-1), [], 3 );\nXbp1 = encode(xbp);\ndisp(['Number of violated constraints: ' num2str(sum(A*Xbp1(:)~=1)) '.']);\n\nNumber of violated constraints: 16.\n\n\n## Decoding Using more Aggressive Sparsity¶\n\nSince the L1 norm does not perform better than POCS, it is tempting to use a more agressive sparsity measure, like |L^alpha| norm for |alpha<1|.\n\nThis leads to non-convex problems, and one can compute a (not necessarily globally optimal) local minimum.\n\nAn algorithm to find a local minimum of the energy is the reweighted L1 minimization, described in:\n\nE. J. Cand s, M. Wakin and S. Boyd, Enhancing sparsity by reweighted l1 minimization, J. Fourier Anal. Appl., 14 877-905.\n\nThis idea is introduced in the paper:\n\nP. Babu, K. Pelckmans, P. Stoica, and J. Li, Linear Systems, Sparse Solutions, and Sudoku, IEEE Signal Processing Letters, vol. 17, no. 1, pp. 
40-42, 2010.\n\nAt each iteration of the algorithm, one minimizes\n\n$$min \\sum 1/u_k |x_k|$$\n\nsubject to $$A x=1$$\n\nThe weights are then updated as\n\n$$u_k=|x_k|^{1-\\alpha} + \\epsilon$$\n\nThe weighted L1 minimization can be recasted as a traditional L1 minimization using a change of variables.\n\n$$min |y|_1$$\n\nsubject to $$A diag(u) y=1$$ and $$x_k = y_k u_k$$\n\nSet the target |alpha|, that should be in |0<=alpha<=1|.\n\nIn :\nalpha = 0;\n\n\nSet the regularisation parameter |epsilon|, that avoids a division by zero.\n\nIn :\nepsilon = 0.1;\n\n\nInitial set of weights.\n\nIn :\nu = ones(n,n,n);\n\n\nSolve the weighted L1 minimization.\n\nIn :\nXrw = solvel1( A*diag(u(:)) ) .* u;\n\n\nUpdate the weights.\n\nIn :\nu = (abs(Xrw).^(1-alpha)+epsilon);\n\n\nExercise 10\n\nCompute the solution using the reweighted L1 minimization. Track the evolution of the number of invalidated constraints as the algorithm iterates.\n\nIn :\nexo10()\n\nNumber of violated constraints: 0.\n\nIn :\n%% Insert your code here.\n\n\nDisplay the histogram.\n\nIn :\nhist(Xrw(:), 30);",
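A sketch of the reweighted L1 loop of Exercise 10 above, iterating the two steps just shown; the number of reweighting passes is an arbitrary choice, not the tour's reference solution:

```matlab
% Sketch of Exercise 10: iteratively reweighted L1 via the change of variables above.
u = ones(n,n,n);
for it=1:4                                   % a few reweighting passes
    Xrw = solvel1( A*diag(u(:)) ) .* u;      % weighted basis pursuit
    u = abs(Xrw).^(1-alpha) + epsilon;       % update the weights
end
[tmp,xrw] = min( abs(Xrw-1), [], 3 );        % decode and count violations
Xrw1 = encode(xrw);
disp(['Number of violated constraints: ' num2str(sum(A*Xrw1(:)~=1)) '.']);
```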
null,
"While reweighting L1 works for reasonnably complicated Sudoku, it might fail on very difficult one.\n\nThis is the Al Escargot puzzle, believed to be the hardest Sudoku available.\n\nIn :\nx1 = [1 0 0 0 0 7 0 9 0;\n0 3 0 0 2 0 0 0 8;\n0 0 9 6 0 0 5 0 0;\n0 0 5 3 0 0 9 0 0;\n0 1 0 0 8 0 0 0 2;\n6 0 0 0 0 4 0 0 0;\n3 0 0 0 0 0 0 1 0;\n0 4 0 0 0 0 0 0 7;\n0 0 7 0 0 0 3 0 0];\n\n\nExercise 11\n\nTry reweighted L1 on this puzzle. ill matrix. olve\n\nIn :\nexo11()",
null,
"In :\n%% Insert your code here.\n\n\nExercise 12\n\nTry other sparsity-enforcing minimization methods, such as Orthogonal Matching Pursuit (OMP), or iterative hard thresholding.\n\nIn :\nexo12()\n\nIn :\n%% Insert your code here.\n\n\nExercise 13\n\nTry the different methods of this tour on a large number of Sudokus.\n\nIn :\nexo13()\n\nIn :\n%% Insert your code here.\n\n\nExercise 14\n\nTry the different methods of this tour on larger Sudokus, for |n=4,5,6|.\n\nIn :\nexo14()\n\nIn :\n%% Insert your code here."
] | [
null,
"data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAkAAAAGwCAIAAADOgk3lAAAACXBIWXMAAAsSAAALEgHS3X78AAAA IXRFWHRTb2Z0d2FyZQBBcnRpZmV4IEdob3N0c2NyaXB0IDguNTRTRzzSAAAKDElEQVR4nO3c0U4z RxZGURjN+78yc/FH0ciIUDFgf5te6y65OjrVZpcbkte3t7cXAKj5z7MHAIB7CBgASQIGQJKAAZAk YAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKA AZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIG QJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgA SQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAk CRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAk YAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKA AZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIG QJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgA SQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAk CRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAk YAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKA AZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQNJ/nz3Aj3t9ff3/f3x7 e7v5Nws2pxq0uajNqQZZ1KGRRb29vT17hE9c7hvY+8di4ZAWHtZPWdRH3l+SnjXJP1iYavP4bmwu amGqQZcL2Huffqg8On9Y1KHNRW1ONWhzUZtTPZ2Afc79+pD79aHN+7XjO7S5qMSPqW8nYP9a4vF9 7yIP9Kc2F7U51SCLOnSRRQnYczz+6dns7qc2F7U51SCLOmRR97liwBauHomnZ3NRm1MNsqhDm4ta mGrfFQN2kS/XX7e5qM2pBlnUoc1FbU615ooB+5RH59DmojanGmRRhzYXtTnVgwnYPbwYObS5qMTf ay1MtXl8NzYXtTnV7yNgv5YL2qHNRW1ONWhzUZtT/T4CtmLzz5B8zF5WFxW9X28uanMqPiVgLy8b P6Y3H2hv2w5tHt+NzUVtTjVoYVFrBOzlxff9Y5uL2pxq0OaiNqcaZFHvCdgRF7RD7td3s6iPeA1w aPP4fpSA/R6bF7TNqQZZ1KHNRW1O9esJ2NNs/t54c6pBFnVoc1GbU/FvXTRgC7ehzQd6c6obju/Q 5qI2pxq0sKhxFw1Y9Pv+5lSPFz2+x9tc1OZUgyzqUxcN2Kc2H53NqQZZ1KHNRW1ONciiBOxOm68g /L3Woc3ju7G5qM2pBlnUAwjYb7Z5QducatDmojanGmRRDyBgQzb/MmpzqkGbi9qcapBFFQnYXxZu Q17X3M2iPuKt8qHN47uxsKgpAvaXze/7m1MNii7Krf+P6PE9nkXdELBTvh7dzaI+4uvRoc3ju7G5 qIWpfo6A3c9t6JBFHdpc1OZUgzYXtTnVd7lQwLyuOWRRhzYXtTnVoM1FbU4160IB87rmUOKB3lzU 5lSDLOojiR9TOy4UsBuJx/c9D/Qfm4vanGqQRR2yqH923YBFeaAPbS5qc6pB0UV5AfhgAvadFj5U iQd6c1GbUw2yqI8kXgBuTnUfAftO0Wvj420uanOqQRZ1aHNRm1Pd59IB833/0OaiNqcaZFGHNhe1 OdWISwcscU4Lt6HNRXldc2jz+G5sLmpzKv526YAl/Kbv+z9q83Pu+A5tLmpzKv4mYF+y8Phu/uC+ sbCo9xamcnyHNhe1OdWNheP7IQL2JS5ohzYXtTnVoOiiNqd6vOjxnbhWwPw69NDmojanGrS5qM2p BlnUuWsFLHFOC7chv82+m0V9xB/dHNo8vk3XCljC5vf9zakGWdShzUVtTsVHBKxn8zO2OdUgizq0 uajNqS5LwL7ZwuPrBeDdLOrQ5qI2pxq0sKhvIWDfbPOCtjnVIIs6tLmozakG/ZpFCdijVa6NC1MN qizq6VNtfhGpHN/TbR7fewL2fJu3oc2pBm0uanOqQRaVJmABm5+xzakGbS5qc6pBFrVMwL5q4fHd fDGSeAthUR/xn20d2jy+GwuL+gkC9lWbF7TNqQZFF+V/1vBH9Pge77cu6jU6NwAX5xsYAEkCBkCS gAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkC BkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkY AEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAA JAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQ JGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCS gAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkC BkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkY AEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAA JAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQ JGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCS gAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkC 
BkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkY AEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkDS/wAZqjaI cWvurwAAAABJRU5ErkJggg== ",
null,
"data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAkAAAAGwCAIAAADOgk3lAAAACXBIWXMAAAsSAAALEgHS3X78AAAA IXRFWHRTb2Z0d2FyZQBBcnRpZmV4IEdob3N0c2NyaXB0IDguNTRTRzzSAAAIX0lEQVR4nO3b24ob ORRAUXvI//9yzcOEMCS4q+NL1dmqtR4b0RiieEvKyX3bthsA1Pxz9gcAgGcIGABJAgZAkoABkCRg ACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoAB kCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZA koABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJ AgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJ GABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRg ACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoAB kCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZA koABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJ AgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJ GABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRg ACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoAB kCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZA0o+zP8DH3e/3sz/Cvm3b vv6cBywA+L9t287+CDvcwEbYTcsBC3Y364QFAL8IGD9NiGjijvhnZX/7yQELgNvtdl/+L0biOxFe NOEV2jP1YubXwQ0MVjDhfpyo14SX8PlhqLhiwGxQuKwJIU+UPuGKAZuwQSdEVGWBtCsGbIIJEU0c AydMTCg9zGSIA9ZkaoMXza+DGxisacIlfsJL+PxvYZ4mYMCnTIho4o74xP/8k/abgM1hv8Jl/VlZ af8OAZtiwn41MQGEGOKANzAxwXrm18ENDN5g5gV64AJ4IwGDRUyIaOKOaGJiGRcNmP0Kl2ViYhkX DdiE/fpEAk1MAPxiiIMeExNwgPl1uOgNjLQJF+iECbd8+BwBg2U90enffvL6AhMTfI6AAR9kYuJp Sr9LwM70+muMHQyrUvpdAnamv32N2f0Nn1ggosBMAsaOCRFNMDEBBxMweI/ExMQpz9TwIQIG63g9 gd/5nX+7IMFTedGFAjbhKGoHw0yeyosuFLAJR1ETEwDvcqGAJUyIaIKJCUDASJowMTHThEu80nMM AYOlTLjEeyrnGAIGHG1CRGc64KF7pfYL2MkmHEVD+xXWdsBD90rtF7CTTTiKnvLgY2ICeJGAse8T jTQxcdgCWJWAwfMm3I9NTHBZAgaLmxDRmUxM1F0rYBOOovYrDGFiou5aAZtwFDUxAfAW1wpYgomJ Rybcj5Ue5hAwMibcj5X+0YLdWz68nYDBak4J+e4t/+0TE58YqaDlvvyfaOLIDPCfbdu+/tY6bMH8 Olz9BmZiAhhlwkt45dx/9YCZmHhkwkCE0gNfuHrAeGTCKU/pHy0wMQE3AYMXmZh4RFb5tMA/070o cYoHTjFnYuLrz3mK+XVYP2AALMkTIgBJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZA koABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJ AgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJ GABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRg ACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoAB kCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZA koABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJ AgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJ GABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRg ACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoAB kCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZA koABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJ AgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJ GABJAgZAkoABkCRgACQJGABJAgZA0r83UVooSa356wAAAABJRU5ErkJggg== ",
null,
"data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAkAAAAGwCAIAAADOgk3lAAAACXBIWXMAAAsSAAALEgHS3X78AAAA IXRFWHRTb2Z0d2FyZQBBcnRpZmV4IEdob3N0c2NyaXB0IDguNTRTRzzSAAAHsElEQVR4nO3bwW7b MBQAwTjo//+ye2hRtEkPdODIb8mZm2+E8agVBel2v9/fAKDm/dULAICvEDAAkgQMgCQBAyBJwABI EjAAkgQMgCQBAyBJwABIEjAAkgQMgCQBAyBJwABIEjAAkgQMgCQBAyBJwABIEjAAkgQMgCQBAyBJ wABIEjAAkgQMgCQBAyBJwABIEjAAkgQMgCQBAyBJwABIEjAAkgQMgCQBAyBJwABIEjAAkgQMgCQB AyBJwABIEjAAkgQMgCQBAyBJwABIEjAAkgQMgCQBAyBJwABIEjAAkgQMgCQBAyBJwABIEjAAkgQM gCQBAyBJwABIEjAAkgQMgCQBAyBJwABIEjAAkgQMgCQBAyBJwABIEjAAkgQMgCQBAyBJwABIEjAA kgQMgCQBAyBJwABIEjAAkgQMgCQBAyBJwABIEjAAkgQMgCQBAyBJwABIEjAAkgQMgCQBAyBJwABI EjAAkgQMgCQBAyBJwABIEjAAkgQMgCQBAyBJwABIEjAAkgQMgCQBAyBJwABIEjAAkgQMgCQBAyBJ wABIEjAAkgQMgCQBAyBJwABIEjAAkgQMgCQBAyBJwABIEjAAkgQMgCQBAyBJwABIEjAAkgQMgCQB AyBJwABIEjAAkgQMgCQBAyBJwABIEjAAkgQMgCQBAyBJwABIEjAAkgQMgCQBAyBJwABIEjAAkgQM gCQBAyBJwABIEjAAkgQMgCQBAyBJwABIEjAAkgQMgCQBAyBJwABIEjAAkgQMgCQBAyBJwABIEjAA kgQMgCQBAyBJwABIEjAAkgQMgCQBAyBJwABIEjAAkgQMgCQBAyBJwABIEjAAkgQMgCQBAyBJwABI EjAAkgQMgCQBAyBJwABIEjAAkgQMgCQBAyBJwABIEjAAkgQMgCQBAyBJwABIEjAAkgQMgCQBAyBJ wABIEjAAkgQMgCQBAyBJwABIEjAAkgQMgCQBAyBJwABIEjAAkgQMgKQfr17At7vdbq9ewn/c7/e/ f85cJHCyD5epgfYP2EwDi/V5WAcuEuAPAeM3uVqk9DCEgMFjZuZKVjmQgMEO5GqR0u/kuIAZXziZ /b6T4wI2c3xlFeBRxwVsJrlapPTAHwJGiVyt860h2xMw2NPAYjlA81wCBlxErhYp/SIBG8rzHziW /b5IwIYaOMHuCoFRBIxVcrVI6eEaAgZPNjNXssp+BAyOIFeLlD5EwD4yvnAy+z1EwD6aOb6yCvCB gDXI1SKlh3MIGFuRq3W+NaROwOBQA4vlAM1DBAyYQq4WKf0vAlbl+Q8cy37/RcCqBk6wu0LgSgLG 08jVIqWHpxAwuNrMXMkqOQIGvL3J1TKln0PAHmZ84WT2+xwC9rCZ4yurwGkEbBNytUjpYRsCxlnk ap1vDRlOwID/G1gsB2j+JmBAhlwtOqT0ArYtz3/gWIfsdwHb1sAJPuSuELiGgHEduVqk9LBCwGCc mbmSVaYRMGCJXC1S+ssI2LfwAgUcy36/jIB9i4ET7K4Q2IyAnUKuFik9VAgY/EOu1nlUzmsJGPBF A4vlAH0UAQP2IVeL9ii9gF1hj1kBtrHHJUjArrDHrFxA6YF1AsYgcrXOCxQgYJA0sFgO0FxMwIDn kKtFSv8st89/JQDM9/7qBQDAVwgYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCS gAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkC BkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkY AEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAA JAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQ JGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCS gAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkC BkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkY AEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAA JAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQ JGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCS gAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkC BkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkY AEkCBkCSgAGQJGAAJAkYAEk/AVHwoEspQNTnAAAAAElFTkSuQmCC ",
null,
"data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAkAAAAGwCAIAAADOgk3lAAAACXBIWXMAAAsSAAALEgHS3X78AAAA IXRFWHRTb2Z0d2FyZQBBcnRpZmV4IEdob3N0c2NyaXB0IDguNTRTRzzSAAAH0UlEQVR4nO3bOZIi QRAAQRjb/3+5V1hldgbjPjKq3UUMoYQ2gkyo47ZtBwCo+fr0AQDgHgIGQJKAAZAkYAAkCRgASQIG QJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgA SQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAk CRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAk YAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKA AZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIG QJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgA SQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAk CRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAk YAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKA AZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIG QJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgA SQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZD059MHeLnj8fjpI5ywbdv3g23b dvj/qD/eAPBm/z6XJjvOP+KDZOBKJyP6/ZXfbwAWNr8OAga3UXp2Yn4dBAxWcHELLavcan4dBAzY EQP09ebXQcB+8nwDHARsgjVKY0EEvNn8OggYSzFAw7PMr4OAwU65jMh58+sgYMAUBuhR5tdBwABi 3lP6+XUQsGVZEAGPmF8HAeN9LIggZH4dBAzGuVj6gwGa15tfBwEDruIy4t7Mr4OAATzTMqvy+XXY S8AuPkCVRwrgPebXYS8BG2iZr2nAkubXQcAIcCUA3m9+HQQM1mFVzhPNr4OAAS/kSkDX/DoIGMAI 0wbo+XUQsJJpzzewsPl1EDAe5R8WsKT5dRAweBMDNC3z6yBgsF8uI3LG/DoIGMBl16zKD2t94Myv g4A9jQURsJL5dRCwlVkQAXebXwcBg8PBAA2/zK+DgAE3UPr9mF8HAQPyXEZ8hfl1EDCAvbhpgJ5f BwH7MP+zAGaaXwcB4wS/cwDz6yBgcL+Lv7UoPV3z6yBgsBoDNE8xvw4CBuyU0p83vw4Cdo4FEbBb 8+sgYD2+NgJvML8OAsayDNDwiPl1EDDYO5cROWl+HQQMmMiq/OPm10HAAMJetyqfXwcBezJfG4E1 zK+DgPEZSg/Dza+DgMFo/mHBp8yvg4ABNzNA78H8OggYwKukLyPOr8P6AQNgSV+fPgAA3EPAAEgS MACSBAyAJAEDIEnAAEgSMACSBAyAJAEDIEnAAEgSMACSBAyAJAEDIEnAAEgSMACSBAyAJAEDIEnA AEgSMACSBAyAJAEDIEnAAEgSMACSBAyAJAEDIEnAAEgSMACSBAyAJAEDIEnAAEgSMACSBAyAJAED IEnAAEgSMACSBAyAJAEDIEnAAEgSMACSBAyAJAEDIEnAAEgSMACSBAyAJAEDIEnAAEgSMACSBAyA JAEDIEnAAEgSMACSBAyAJAEDIEnAAEgSMACSBAyAJAEDIEnAAEgSMACSBAyAJAEDIEnAAEgSMACS BAyAJAEDIEnAAEgSMACSBAyAJAEDIEnAAEgSMACSBAyAJAEDIEnAAEgSMACSBAyAJAEDIEnAAEgS MACSBAyAJAEDIEnAAEgSMACSBAyAJAEDIEnAAEgSMACSBAyAJAEDIEnAAEgSMACSBAyAJAEDIEnA AEgSMACSBAyAJAEDIEnAAEgSMACSBAyAJAEDIEnAAEgSMACSBAyAJAEDIEnAAEgSMACSBAyAJAED IEnAAEgSMACSBAyAJAEDIEnAAEgSMACSBAyAJAEDIEnAAEgSMACSBAyAJAEDIEnAAEgSMACSBAyA JAEDIEnAAEgSMACSBAyAJAEDIEnAAEgSMACSBAyAJAEDIEnAAEgSMACSBAyAJAEDIEnAAEgSMACS BAyAJAEDIEnAAEgSMACSBAyAJAEDIEnAAEgSMACSBAyAJAEDIEnAAEgSMACSBAyAJAEDIEnAAEgS MACSBAyAJAEDIEnAAEgSMACSBAyAJAEDIEnAAEgSMACSBAyAJAEDIEnAAEgSMACSBAyAJAEDIEnA AEgSMACSBAyAJAEDIEnAAEgSMACSBAyAJAEDIEnAAEgSMACSBAyAJAEDIEnAAEgSMACSBAyAJAED IEnAAEgSMACSBAyAJAEDIEnAAEgSMACSBAyAJAEDIEnAAEgSMACSBAyAJAEDIEnAAEj6C67W5LkE kFfoAAAAAElFTkSuQmCC ",
null,
"data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAkAAAAGwCAIAAADOgk3lAAAACXBIWXMAAAsSAAALEgHS3X78AAAA IXRFWHRTb2Z0d2FyZQBBcnRpZmV4IEdob3N0c2NyaXB0IDguNTRTRzzSAAAGvklEQVR4nO3ZQVbE IBBAwcHn/a+Mexej8cXAT6pO0Ct+A2PO+QKAmo/VAwDAXwgYAEkCBkCSgAGQJGAAJAkYAEkCBkCS gAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkC BkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkY AEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAA JAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQ JGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCS gAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkC BkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkY AEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAA JAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQ JGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCS gAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkC BkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkY AEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkDS5+oB/t0YY/UIAD1zztUj/MANDICk xwVs/50CgN94XMC8KALcw+MCBsA9CBgASQIGkORHX8AAjtmkHH70BQzgGOXYhIABkCRgQM8mj3is JWBcx6HDWTzi8RKwJ9gnGw4d4EQCdn+ycdQ+yQfeEDD4TvIhQcAuYqkHOJeAXcRSD3Cu4WYAQJEb GABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRg ACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoAB kCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZA koABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJ AgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJ GABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRg ACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoAB kCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZA koABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJ AgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJ GABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRg ACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoAB kCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZA koABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJ AgZA0hdDQyRbIyuWEgAAAABJRU5ErkJggg== ",
null,
"data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAkAAAAGwCAIAAADOgk3lAAAACXBIWXMAAAsSAAALEgHS3X78AAAA IXRFWHRTb2Z0d2FyZQBBcnRpZmV4IEdob3N0c2NyaXB0IDguNTRTRzzSAAAKFklEQVR4nO3dQXLa SABAUXrKB/ZRfOOeBRWVBoMDzjjwW+8tUmCzUFtEn26EGHPOEwDU/PPsDQCA7xAwAJIEDIAkAQMg ScAASBIwAJIEDIAkAQMgScAASBIwAJIEDIAkAQMgScAASBIwAJIEDIAkAQMgScAASBIwAJIEDIAk AQMgScAASBIwAJIEDIAkAQMgScAASBIwAJIEDIAkAQMgScAASBIwAJIEDICkt2dvwGPGGHPO7fb5 xvknF3cvHgPAQ/bH0teUCdjVFO3TtQ/b/u9+sQ8ufruSxNDGGKfTx90Pf7+1W1ey8NBOS49u4aGd Iq/+M0uIc86L58rCTx0AfiszA7vl/CLoixcLidcRAE+XO1qGA3axcnjL5yXEH9wmgKzcCQSZJcSr vrGKuPDC48JDOy09uoWHdlp6dAsPraI6Azu/OtifeXj1LEQAVhUL2Banz5XSLYBDaS8hAnBYAgZA koABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJ AgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJ GABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRg ACQJGABJAgZA0tuzN+AxY4w553b7fOP8k4u7AKwtE7CtT9vdi5Lt72oYwPIyS4hzTlkCYJOZgX3b xdRNBQGuujhavr71A6ZYAPfYHy0TMcssIQLAXnUGNud0FiLAkcUCto/TRah0C+BQLCECkCRgACQJ GABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJse8D48KjX/vt W9OAZQjYAj7ufuT7D24FwN9lCRGAJAEDIEnAAEgSMACSBAyAJAEDIEnAAEgSMACSBAyAJAEDIMml pI7l0Wsn3s9VFoG/TMCO5qELJ7rKIvC6LCECkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoAB kCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkPT27A34I2OM0+k0 59xub3cBWFs4YGOMfbq2bm0/B2BhlhABSArPwE67ude2fnjrMRuTM4CrvjiQvqZqwPbrhF//0RUL 4B77o2UiZpYQAUiqzsD2y4bOQgQ4oGrATp9CpVsAh2IJEYAkAQMgScAASBIwAJIEDIAkAQMgScAA SBIwAJIEDIAkAQMgScAASBIwAJIEDICk8NXoeSmJr78DViJg/F8+7nvY+89uBXAYlhABSBIwAJIE DIAkAQMgScAASBIwAJIEDIAkAQMgScAASBIwAJIEDIAkAQMgScAASBIwAJIEDIAkAQMgScAASBIw AJIEDIAkAQMgScAASBIwAJIEDIAkAQMgScAASBIwAJIEDIAkAQMgScAASBIwAJIEDIAkAQMgScAA SHp79gZwaYzx7E0ACBCw1/Rx9yPff3ArAF6YJUQAksIzsG2pbc75+S4Aa6sG7Jyrfbq2bo0xNAxg ee0lxDGGXAEcU3UGdvrvlOuLh138Vu0ArsqdAh0O2J0UC+Ae+6NlImbtJUQADqs6A5tzOgsR4Miq ATt9CpVuARyKJUQAkgQMgCQBAyAp/B4YR/DQubzeB4VDETBenAvzA9dZQgQgScAASBIwAJIEDIAk AQMgScAASBIwAJIEDIAkAQMgScAASBIwAJIEDIAkAQMgScAASBIwAJIEDIAkAQMgScAASBIwAJIE DIAkAQMgScAASBIwAJIEDIAkAQMgScAASBIwAJIEDIAkAQMgScAASBIwAJIEDIAkAQMgScAASBIw AJIEDIAkAQMgScAASBIwAJIEDIAkAQMgScAASHp79gYcwhjj2ZsAsBoB+2s+7n7k+w9uBcAqLCEC kJQP2LY6N3557vYA8He0lxD39Zpzfr4NwKrCMzChAjiy9gzsHheLipoHcFXuLZhqwM5/6P2/tygW wD32R8tEzKpLiPOXk0QBHFJ1BnZhzrm9XtAzgCPIB2zLlW4BHEp1CRGAgxMwAJIEDIAkAQMgScAA SBIwAJIEDIAkAQMgScAASBIwAJIEDIAkAQMgScAASBIwAJIEDIAkAQMgScAASBIwAJIEDIAkAQMg ScAASBIwAJIEDIAkAQMgScAASBIwAJIEDIAkAQMgScAASBIwAJLenr0BVWOMZ28CwAPWO2oJ2J/4 uPuR7z+4FQD3WuqoZQkRgCQBAyBJwABIEjAAkgQMgCQBAyBJwABI8jkw1vHQ5zTnnD+3JcBfIGCs ZKkPaQJfs4QIQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQFL4ShzbdYPO1wS6uAvA2sIB O/03XVu3xhgaBrC88BKiSgEcWXsGdvo13/riMuQXv5I9gDWEA3axcniLYgEsKbyEeBIngAOrzsDO 06/9mYfOQgQ4lGrAPldKtwAOpb2ECMBhCRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgA SQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZD09uwN eCFjjGdvAgD3ErALH3c/8v0HtwKA37GECECSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCS gAGQ5FJSHNRDl76cc/7clgDfI2AcluteQpslRACSBAyAJAEDIEnAAEha/yQO37MMsKT1A+ZkM4Al WUIEIEnAAEgSMACSBAyAJAEDIEnA4MnW/qTHwqNbeGgVRziNHv6US9fDC1oqYNtRxhGE/5tPE8LL WSdgY4ytW/vbACxpnQP91YBZpAb4ntevwzozsKtefwcA8D3OQgQgScAASFrnPbCTsxABjmSpgAFw HIufxPHFnGyx6dpxhrPASL8eQv1DILdGt/aOO/+qO7SrXv+puHLAfvvJsO1U+xffSb+12Gfg1t5x X4+u/sGPW6PbH9/X23H7z+0Uh/ZZ5Xl43JM41nieHdDCO26Zw98tY4zlx7iGOWdiN608A7uH/05R dlzOSosEny25hPj6lgrYQ9NeT7ioVXfceVzbv+sNcFWLLeC3LBWwR99O8FSLWnLHOQjCoxb/r/L5 rKHz0eGibQv8ERY4xWtv7R13MbpbZwRE3RrdAk/R3z4tu0O76vWfiq++fQBw1XHPQgQgTcAASBIw 
AJIEDIAkAQMgScAASBIwAJIEDIAkAQMgScAASBIwAJIEDIAkAQMgScAASBIwAJIEDIAkAQMgScAA SBIwAJIEDIAkAQMgScAASBIwAJIEDIAkAQMgScAASBIwAJIEDIAkAQMgScAASBIwAJIEDIAkAQMg ScAASPoXUmBqvigpS1YAAAAASUVORK5CYII= ",
null,
"data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAkAAAAGwCAIAAADOgk3lAAAACXBIWXMAAAsSAAALEgHS3X78AAAA IXRFWHRTb2Z0d2FyZQBBcnRpZmV4IEdob3N0c2NyaXB0IDguNTRTRzzSAAAORElEQVR4nO3d0Xbb OLZFUeKO/P8v4z6orWJEipYUisAG5nyokbJdLbYSe/kcInKptS4AkOb/Wl8AAHxCwACIJGAARBIw ACIJGACRBAyASAIGQCQBAyCSgAEQScAAiCRgAEQSMAAiCRgAkQQMgEgCBkAkAQMgkoABEEnAAIgk YABEEjAAIgkYAJEEDIBIAgZAJAEDIJKAARBJwACIJGAARBIwACL9aX0BZyql3H5Ra217JQB82zgB K6Xcu7X+NQBDskIEINI4E9iuUpZlKa2vAiBP/3uswQO29PF70MlKs4fL6OEaXEZv1+AyeruGZXWk oGdWiABEGmcCq7U6hQgwj3ECtugWwEy6WLZ+TynL0P//AL6ik1txx9wDAyCSgAEQScAAiCRgAEQS MAAiCRgAkQQMgEgCBkAkAQMgkoABEEnAAIgkYABEEjAAIgkYAJEEDIBIAgZAJAEDIJKAARBJwACI JGAARBIwACIJGACRBAyASAIGQCQBAyCSgAEQScAAiCRgAEQSMAAiCRgAkf60voC3lVJuv6i1PnvX 7nsBGElYwEop9zKtf32nWwCTGG2FWEpZz2EAjCpsAvvVbQJ7GNS2HwDAg7jv/rsO2Lvt2f0AxQJ4 xfqrZUTMug7YW+3ZvSUGwKi6DthWrXV7CvGWrt13ATCqsIAte3G6v0W3AOYx2ilEACYhYABEEjAA IgkYAJEEDIBIAgZAJAEDIJKAARBJwACIJGAARBIwACIJGACRBAyASAIGQCQBAyCSgAEQScAAiCRg AEQSMAAiCRgAkQQMgEgCBkAkAQMgkoABEEnAAIgkYABEEjAAIgkYAJEEDIBIAgZAJAEDINKAASul tL4EAL7uT+sLOJN0AcxjqAms1lprbX0VAFxhqAls18NYpnAAu+KWWKkBez1LigXwivVXy4iYpQZM lgAmN9Q9sD0R30YA8LYBA2Y4A5jBgAEDYAYCBkAkAQMgkoABEEnAAIgkYABEEjAAIgkYAJEEDIBI AgZAJAEDIJKAARBJwACIJGAARBIwACIJGACRBAyASAIGQCQBAyCSgAEQScAAiCRgAEQSMAAiCRgA kQQMgEgCBkAkAQMgkoABEEnAAIgkYABEEjAAIv1pfQEfKqXUWrdvvP96/d5Sls3HApAtL2DrSm1t q1brcvhfABApb4VYa91W6q6Uclw4AMaQN4Edu7VtvWAspSxLvVftIH4AM4v77r/rgD08m7+2Z/cD aq2l6BbAL/4+OhAQs64D9lZ1do91ADCqrgP2olu6arUnBJhIasDWibr/WrcA5pF3ChEAFgEDIJSA ARBploAlnAgF4A1TBMzZDoDxTBEwAMYjYABEEjAAIgkYAJEEDIBIEwXMSXqAkcwSMCfpAQYzS8AA GIyAARBJwACINFfAnOMAGMZEAXOOA2AkEwUMgJEIGACRpguY22AAY5grYG6DAQxjroABMIwZA2aL CDCA6QJmiwgwhukCBsAYJg2YLSJAuhkDZosIMIAZA3ZjCAOINmnAbkOYhgHkmjRgi0UiQLh5A3Zj CAMI9af1BZyp/OSovjZe1SpgAKlGm8BqrbXW8nKXNAwg1FABe3Hw2tIwgDhDrRBvSinrkj1MY9vI 3YawUhzrAKb2+u6qE6kB283S7Y0PiXplLLNIBDj41r9PqQF7lqWPt4jLYggDSJIasK3b9wvvHkS8 uy8SF39FDCDBOAH7l9nr539hWRb3wwAyDHUK8RReZQogwjgT2InWDTOKAfTJBPbUeqMIQG8E7Eit NooAnRKw3xnFADokYC9Zj2IyBtADAXuDjAH0Q8DeJmMAPRCwD8kYQFsC9k9kDKAVATuBjAFcT8BO I2MAV/JSUie7v/SUV6IC+CoB+5aHl/BQMoBzWSF+l70iwJcI2BVkDOB0AnYdGQM4kYBdTcYATiFg bcgYwD8SsJZkDOBjAtaejAF8QMB6IWMAbxGwvsgYwIsErEcyBvArAeuXjAEcELDeyRjALi/mm+Hh Re4Xrw4MTM8EFsZABnAjYJFkDEDAgskYMDP3wOJtG+b2GDCDvICVn6/TdfN1uqzGkO17x7Y+5XF7 GiZ7AoDp5AVs+YlTKWVbqdm6tfUwkE3/fADDygvYcaJuQ9j6Y8rfd4cmKZyMAe8qaffS8wJ2szt+ LXvD2STF2uX2GPC6g2/9+9R1wHaHp+2M9fABPHB7DBhS1wF7FqTdtz+bybizVwRG0nXAtm7j18NB xFu6aq0HBxS5kzFgDGEBO94c6tbr3B4D0oUFjHO5PQbk8lJSLItXpQICCRj/kTEgiBUij/zsMSCC CYynDGRAzwSMX8gY0CcrRF7i2D3QGwHjDY7dA/2wQuQT9opAcwLG52QMaMgKkX/l9hjQhIBxDrfH gItZIXIye0XgGgLGV8gY8G1WiHyR22PA9wgYX+f2GPANVohcx14ROJGAcTUZA05hhUgbfmgL8I9M YDRmIAM+I2B0QcaAd1kh0hHH7oHXCRjdceweeIUVIv2yVwQOCBi9kzFglxUiGdweAx4IGEncHgPu rBCJZK8ICBjBZAxmZoVIPK9KBXMygTEOAxlMZagJrPx80aq+A5/YQ8P8WYBRDRWw5SddpRQNm5xj 9zC8oQK2G63y9y5J2Kbi2D28rqRt3ocK2LL3G6BYLPaK8IL1V8uImKUG7NlcdV8hNrgmuidjMJLU gG3nKve9eJHbYzCG1IBt1VqdQuR1bo9BunECtugWH7FXhFBDBQw+Zq8IcQQM/mOvCEG8lBTs8KpU 0D8Bg6dkDHpmhQi/cHsM+iRg8BK3x6A3VojwHntF6ISAwSdkDJqzQoTP+WHQ0JAJDE5gIIPrCRic RsbgSlaIcDLH7uEaAgZf4dg9fJsVInyXvSJ8iYDBFWQMTmeFCNdx7B5OZAKDBgxk8O8EDJqRMfgX VojQmGP38BkBgy44dg/vskKEvtgrwosEDHokY/ArK0Tol2P3cMAEBgEMZLAlYBBDxmDNChHCOHYP NwIGkRy7BytEyGavyLQEDEYgY0zIChHG4fYYU8kLWPn51KybT82y+s5z+16YhNtjTCIvYMtPnEop 20rpFtw9DGQ+ORhMXsCOE3UbwmQM7uwVGVVewJa/V4UPtsPZwwdrG3OyV+RXB19a+9R1wJ61516p h4/fjZNiwZq9Is+sv1pGxKzrgO0e03gWpIN3AQ9kjAF0HbCtWuv2FOItXbvvAg64PUa0sIAte3F6 WC0Cb3F7jFBeiQP4Hy/nQRYBA/4iY6TIWyECF/DDoOmfCQw4YiCjWwIG/E7G6JAVIvAqx+7pioAB 
73Hsnk5YIQIfslekLQED/omM0YoVInACx+65ngkMOJOBjMsIGHA+GeMCVojAtzh2z1cJGPBdjt3z JVaIwEXsFTmXgAGXkjHOYoUINODYPf/OBAa0ZCDjYwIGtCdjfMAKEeiFY/e8RcCAvjh2z4usEIFO 2StyTMCArskYz1ghAgHcHmNLwIAYbo+xZoUI5LFXZDGBAbnsFScnYEA2e8VpWSECg7BXnI2AAUOR sXlYIQIDcntsBgNOYMU3XcCyLD/TmIFsVKMFTL2ALRkb0lABK6VUawLgCRkbzPj3wB5mMoWDyflh 0M/EbbBSR5ZtlnZDZSYDjsnYrogvnqkT2PaZvb8l4nkHOvGwVPTFI0hqwABO5Nh9ogEDZvwCPuNV qbIMdQoR4BTOK0YQMIB9Mta5AVeIACdy7L5bJjCAlxjIeiNgAG+QsX5YIQK8zbH7HggYwIccu2/L ChHgX9krNiFgAOeQsYtZIQKcybH7y5jAAL7CQPZtAgbwRTL2PVaIAF/n2P03CBjARRy7P5cVIsDV 7BVPIWAAbcjYP7JCBGjJ7bGPCRhAe26PfcAKEaAj9oqvM4EBdMde8RUCBtApe8VjVogAvbNX3CVg ABlk7IEVIkASt8fuBAwgj9tjixUiQLSZ94oCBhBvzoxZIQIMYrYfBm0CAxjNJAOZgAGMafiMWSEC jGzgY/epASul1M1vQll9j7F9L8C0hjx2H7lCLM+H4frjyuv51cEFX6mHy+jhGhaX0dk1LC7jwmsY aa+YF7Dd2Wv93h7+CAL0bIyMpa4Qn7m1bR25h571NpwBtLI9dr8sSTXrOmDb9tzecv/nQ41246RY AMfu09iy1Nu/RqyyjtZxPdvW62Hquo9iDS4OINj/vpD2H4euJ7AX3XJ1n8+W1dQVmmcAfpU6gQEw ubxTiACwCBgAoQQMgEgjHOLo0PY4ycPbt++6wPHfAb/4oRs+Fc9+dxo+bqtno9VTcfDQPkfWb7n/ utWf1esf+i0Cdr7dA/13Tf40NPzrBMev+3XllWwf+vqvVgeP2+rZaPVUHDx0w4Q0edyDh27+OdI5 K8SrNXmxq4avD3nw0K1e96ttKp5p9Qfj4kd85aGbPBVdzV7rd7XKasTL8pnArtbwG97etH0qenvc Vs9Gt2OHz5Gly/m4KyawS/X8R+FiDZ+K27eWTe6vHNerid6m84Y79vU/e3joPufjrgjYdfqfxy/T /KnoaovYcEfU5HEPHrrVJa1/DNP1t0V3H7rD350OdT0e5no4YbV9bcaulldXPnTzp6LVN7m7j9vP s9H8FGLzp+J+VT5H2j70WwQMgEhWiABEEjAAIgkYAJEEDIBIAgZAJAED4C8pfxVMwAD4T0q9FgED 4K7zFz98IGAA/McEBkC/7pUqP27/2vDFnT/gx6kATGQ9YD376bspDTOBAQzo2U+HyZqxjgkYAJEE DGBAtdasI4UfEDCAAd3qFXSk8AMOcQCM5sXZa124xFlt8AETgFFZIQIQScAAiCRgAEQSMAAiCRgA kQQMgEgCBkAkAQMgkoABEEnAAIgkYABEEjAAIgkYAJEEDIBIAgZAJAEDIJKAARDp/wGkzaT4Z2PN oQAAAABJRU5ErkJggg== ",
null,
"data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAkAAAAGwCAIAAADOgk3lAAAACXBIWXMAAAsSAAALEgHS3X78AAAA IXRFWHRTb2Z0d2FyZQBBcnRpZmV4IEdob3N0c2NyaXB0IDguNTRTRzzSAAAJz0lEQVR4nO3d0W7i VhRAUbvKB/dT+se3D1Yt12ESZhSwt73WQwQOEqcwYeObSzqPMSYAqPnr6AEA4E8IGABJAgZAkoAB kCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZA koABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZA0sfRA7zWPM9HjwCQ NMY4eoRvXDxg0zRN0z9P3/LvFz1h8zyf4Z/CGcY4wwzGONsMxjjbDFPk3b8lRACSBAyAJAEDIEnA 3uEMK9rTOcY4wwyTMU42w2SMk81QIWAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkY AEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJH0cPcCz5nleLqz/u9Ldkc83AODCGgFb4rSGaoyx fP383fUGB00KwJs0ArbYhQqAO8sEzAkWwEutv4ipyAQMgJfanhskYmYXIgBJjTOwZdfGevnhEbsQ AW6lEbDpUZZ2R3QL4FYsIQKQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkC BkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkY AEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAA JAkYAEkCBkCSgAGQJGAAJAkYAEkfRw/wrHme18tjjO2Rh1cBuLZMwKb/l2me5/Xqkq7tVQ0DuLxS wHahAuAHbRe6EkoBW5cKNQzgx+1WuQ6c5EmZTRyiBcBWI2CJ9wIAvFNjCXGMsdtk+PmIXYgAt9II 2PQoS7sjugVwK40lRADYETAAkgQMgCQBAyBJwABIEjAAkgQMgCQBAyBJwABIEjAAkgQMgCQBAyBJ wABIEjAAkgQMgCQBAyBJwABIEjAAkgQMgCQBAyBJwABIEjAAkgQMgCQBAyBJwABIEjAAkgQMgCQB AyBJwABIEjAAkgQMgCQBAyBJwABIEjAAkgQMgCQBAyBJwABIEjAAkgQMgCQBAyBJwABIEjAAkgQM gCQBAyBJwABIEjAAkj6OHuD3zPM8xlgvLxeWI7urAFxbKWBroqZHJdte1TCAy8ssIcoSAFulMzAA Xme7ypXQCNjysK5fnYoB/LjtS2siZo2A+f0WADuNgH02xrALEeDOYgHbxmkXKt0CuJXMLkQA2BIw AJIEDIAkAQMgScAASBIwAJIEDIAkAQMgScAASBIwAJIEDIAkAQMgScAASBIwAJIEDIAkAQMgScAA SBIwAJIEDIAkAQMgScAASBIwAJIEDIAkAQMgScAASBIwAJIEDIAkAQMgScAASBIwAJIEDIAkAQMg ScAASBIwAJIEDIAkAQMgScAASBIwAJIEDIAkAQMgScAASBIwAJIEDIAkAQMgScAASBIwAJI+jh7g WfM8LxfGGA+PfL4BABeWCdi0CdUYY/m6HF/Stb2qYQCXl1lC1CQAtkpnYOsiIQA/LvcaWwrY7ndd APyg7UJX4pW2sYSYeCgBeKfGGdiya2O9/PCIXYgAt9II2PQoS7sjugVwK40lRADYETAAkgQMgCQB AyBJwABIEjAAkgQMgCQBAyBJwABIEjAAkgQMgCQBAyBJwABIEjAAkgQMgCQBAyBJwABIEjAAkgQM gCQBAyBJwABIEjAAkgQMgCQBAyBJwABIEjAAkgQMgCQBAyBJwABIEjAAkgQMgCQBAyBJwABIEjAA kgQMgCQBAyBJwABIEjAAkgQMgCQBAyBJwABIEjAAkgQMgCQBAyBJwABIEjAAkgQMgKSPowd41jzP y4UxxsMjn28AwIVlAjZtQjXGWL4ux5d0ba9qGMDlZZYQNQmArdIZ2OTsCuBl1l/EVGQCtlsnBOBn bV9gEzHLLCFO6gXARuMMbHkvsN1nuOzjWK9OdiEC3EwjYA+btDuoWwC3UlpCBICVgAGQJGAAJAkY AEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAA JAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQ JGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQ9HH0AL9nnucxxnp5 ubAc2V0F4NoyAVv7tF7dlWx7VcMALi+zhDjGkCUAVpkzMABearfQdX4CBsA0/X8DQSJmmSVEANiq noGNMexCBLizWMC2cdqFSrcAbsUSIgBJAgZAkoABkCRgACQJGABJsV2IAPyZxGeTf4uAAdzHP0/f 8u8XTvFDLCECkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRg ACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoAB kCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJ2DvM83z0CNN0jjHOMMNkjJPNMBnj ZDNUCBgASR9HD/CT1ncuY4xjJwHg1a4TsHme125tLwNwSdd5oX8YMKvJAH/m/HW4zhnYQ+d/AgD4 MzZxAJAkYAAkXed3YJNdiAB3cqmAAXAfV97EceAJ2dd3/Z5d/l/M8M5H5lf39eZnp/KMPPzue8Y4 wzOy2zZ81EOx/e59fkY+O//nkS4bsAM/FvbFXb9tW/+3//nrxwxe+sh8PcZ7Zvh2jPc8KU8+I0eN sX2xPvAZeecPy7f/Ko79GTn2g62VDyDZxPFWY4wzvKMxw+o87zHneT78VWOZ4QwPyEnGuK2TvFJ9 67JnYHzr8NeIw1+vT+VtZz9fD3DsDKfytiXEZ8bgIQG7o5P8ZB7+11KWu16/HviAHP5cnMrhBT3D 36UbY6w/pxr2K5YQb+rwF4gD7301/jMd+oCc5NHgVJZwemfztSuvFZxnF+LnrRyHbG1a7vfYjV6f /0zlIXveTvWMPPzW28Y42zPyzjMez8i3g508EGefDwAesoQIQJKAAZAkYAAkCRgASQIGQJKAAZAk YAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKA 
AZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZD0L7nNCHvnrJ9B AAAAAElFTkSuQmCC ",
null,
"data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAkAAAAGwCAIAAADOgk3lAAAACXBIWXMAAAsSAAALEgHS3X78AAAA IXRFWHRTb2Z0d2FyZQBBcnRpZmV4IEdob3N0c2NyaXB0IDguNTRTRzzSAAAM2UlEQVR4nO3d4XLi NhiGUauT+79l9Qcb1ouNgQQsvdI50+mkgRm7hPDkEwJKrXUBgDT/tT4BAPgJAQMgkoABEEnAAIgk YABEEjAAIgkYAJEEDIBIAgZAJAEDIJKAARBJwACIJGAARBIwACIJGACRBAyASAIGQCQBAyCSgAEQ ScAAiCRgAEQSMAAiCRgAkQQMgEgCBkAkAQMgkoABEEnAAIj01foE3qmUcvmi1tr2TAD4tHECVkq5 dmv9NQBDsoQIQKRxJrBdpSzLUlqfBUCe/texBg/Y0sfPoJMlzR5Oo4dzcBq9nYPT6O0cltWWgp5Z QgQg0jgTWK3VLkSAeYwTsEW3AGbSxWLr55SyDP3/B/ARnTwVd8xzYABEEjAAIgkYAJEEDIBIAgZA JAEDIJKAARBJwACIJGAARBIwACIJGACRBAyASAIGQCQBAyCSgAEQScAAiCRgAEQSMAAiCRgAkQQM gEgCBkAkAQMgkoABEEnAAIgkYABEEjAAIgkYAJEEDIBIAgZAJAEDIJKAARBJwACIJGAARBIwACIJ GACRBAyASAIGQCQBAyCSgAEQScAAiPTV+gReVkq5fFFrvXfR7qUAjCQsYKWUa5nWX1/pFsAkwgL2 0GUIW2dsPZYtCgdwx82jZf9GC9ilT+vhTLEAnnHwp3+fug7Yq8OTVgHMo+uAvRSk3afEABhV1wHb qrVudyFe0rV7EQCjCgvYshcnT3cBTMgLmQGIJGAARBIwACIJGACRBAyASAIGQCQBAyCSgAEQScAA iCRgAEQSMAAiCRgAkQQMgEgCBkAkAQMgkoABEEnAAIgkYABEEjAAIgkYAJEEDIBIAgZAJAEDIJKA ARBJwACIJGAARBIwACIJGACRBAyASAIGQCQBAyCSgAEQScAAiCRgAEQSMAAiCRgAkQQMgEgCBkAk AQMgkoABEGnAgJVSWp8CAB/31foE3km6AOYx1ARWa621tj4LAM4w1AS262YsUziAXXGLWKkBez5L igXwjPWjZUTMUgMmSwCTG+o5MADmMWDADGcAMxgwYADMQMAAiCRgAEQSMAAijR+whBczAPCy4QMm XwBjGj5gAIxJwACIJGAARBIwACIJGACRBAyASAIGQCQBAyCSgAEQScAAiCRgAEQSMAAiCRgAkQQM gEgCBkAkAQMgkoABEEnAAIgkYABEEjAAIgkYAJEEDIBIAgZAJAEDIJKAARBJwACIJGAARBIwACIJ GACRBAyASAIGQCQBAyCSgAEQScAAiCRgAET6an0CP1RKqbVuv3n9enspACPJC9i6Ulu6BTCJvIBd EnUvY5fv32TMZAbw0PF40KG8gB275m0dKtECeGj9UBkRs64DdnMLPuyQUAHMo+uAvRSk3W0dAIyq 64A96ZKuWut1YlMygOGlBmz3KS7dApiHFzIDEEnAAIgkYABEEjAAIgkYAJEEDIBI4wes1iXhLVEA eM34AQNgSAIGQCQBAyCSgAEQScAAiCRgAEQSMAAiCRgAkQQMgEgCBkAkAQMgkoABEGmWgHk/X4DB TBGwWlufAQDvNkXAABiPgAEQScAAiCRgAEQSMAAiCRgAkWYJWK1eCgYwlFkCBsBgBAyASAIGQCQB AyDSXAGzjwNgGBMFzFv6AoxkooABMBIBAyCSgAEQabqA2ccBMIa5AmYfB8Aw5goYAMP4an0C71S+ 1werUQtgdKNNYLXWWmvxTBfA6IYK2DODl89VARjDUEuIF6WUdclupjGriwC74tauUgO2m6XLN28S pVgAzzj4079PqQG7l6Unc1WKLfUA2VIDtnX5e+GZjYieBgMYwDgBs1QIMJWhdiECMI95A2YVESDa pAGz3AiQbtKAAZBu6oBZRQTINW/ArCICRJs3YABEmz1gVhEBQk0dMKuIALmmDhgAuQTMKiJApNkD ZhURINTsAQMglIAti1VEgEACZhURIJKAARBJwP6wigiQRcCWxSoiQCABAyCSgP1lFREgiID9YRUR IIuAARBJwP6q1SoiQAwBAyCSgN0yhAFEELB/2MoBkELAdhjCAPonYLcMYQARBGyfIQygcwK24zKE aRhAzwRsn4YBdE7A7tIwgJ4J2BENA+iWgD2gYQB9ErDHNAygQwL2FA0D6I2APUvDALoiYC/QMIB+ CNhrNAygEwL2smvDZAygIQH7iVqNYgCNfbU+gZeV72jUzfvGl1VPtpe+Xa1/5jBvYA9wvryALd9x KqVsK3VCt/49nIYBtJEXsONEXYaw9XXKv8t8by+chgFjKGlPiuQF7GJ3/Fr2hjNriQDPOPjTv09d B2x3eNrOWDdXaOLasMVnOgOcouuA3QvS7vfvzWSnWW+v1zCAT+s6YFuX8etmI+IlXbXWgw2Kp7Gc CHCOsIAdrxy2ncCuNAzgBF7I/BFe5gzwaWETWJB1w4xiAG9nAvssoxjAhwjYx2kYwCcI2Bk0DODt BOwkPoQF4L0E7Dw+hAXgjQTsbBoG8BYC1oDlRIDfE7A2LCcC/JKAtaRhAD8mYI1ZTgT4GQFrz3Ii wA8IWC80DOAlAtYRy4kAzxOwvlhOBHiSgPXIKAbwkIB1yigGcMwHWnbNp2IC3GMCC2BFEWBLwDKs VxRlDGARsCwyBnDlObA824Z5egyYkIClukbrWjIZA6ZiCTGedUVgTiawQVhXBGYjYEOxrgjMQ8DG ZCADhidgIzOQAQMTsCmsS3bzHYBQAjYXS4vAMARsRpYWgQEI2NQMZEAuAcNABkQSMP4ykAFBBIxb 24FsUTKgPwLGXZYWgZ4JGI9ZWgQ6JGA8y0AGdEXAeJn39QB6MFTAyvcDavWAegpLi0BDQwVs+U5X KUXDTmPXItDEUAETrbaUDDjTUAFbVquI974jcidQMki0ffzsXGrA7mXpuoS4vYjz7ZZsETPo0vrR MiJmqQHbZsnzXj1b/2TEDHiL1IBt1VrtQoxwELNFz4CnjROwRbcC3fzEDGfA84YKGOkMZ8DzBIxO Gc6AYwJGBsMZcEPAyHM8nG2vAAxJwIhnsRHmJGCMxmIjTELAGJnhDAYmYEzEcAYjETAmZScIpBMw WBaLjRBIwGCHxUbon4DBA4Yz6JOAwWu2PTu4FPgcAYNfsdgIrQgYvI3FRjiTgMGnGM7gowQMzuBl Z/B2AgYNPOzZ9jrADQGD9ratMqLBQwIGPTKiwUMCBgGMaLAlYBDJrhAQMBiBJUcmJGAwIEuOzEDA YApGNMYjYDAjIxoDEDBgWewKIZCAATssOdI/AQMes+RIhwQM+AkjGs0JGPAGRjTOJ2DARxjR+DQB 
A85gROPtBAxow4jGLwkY0AUjGq8SMKBTRjSOCRiQwYjGDQEDUhnRJidgwCCMaLMRMGBYu0l7eB1S CBgwEW+6P5K8gJXvu1vd3NHK6p64vRTghhEtWl7Alu84lVK2ldIt4DdsDAmSF7DjRF2GMBkD3sLG kJ7lBWz5d6nwxnY4u7mytgG/MfCIdvDQ2qeuA3avPddK3Vx/N06KBXzOSCPa+tEyImZdB2x3m8a9 IB1cBHCagUe03nQdsK1a63YX4iVduxcBtDXSiNabsIAte3G6WVoE6JkR7V3yAgYwEiPajwkYQF+M aE8SMICuGdHu+a/1CUyhkw2pPZxGD+ewOI3OzmFxGi+eQ63//LN8J239zwxMYADZnhnRdq+WTsAA RjPJqqOAAYxvyI0hg797RQ8r2gAJti+xbXIaLxg8YACMyi5EACIJGACRBAyASAIGQCTb6D/i3ge7 rHdFnr99puFHpm0P3fCmaPWxOwfHbXVrNPwEIr8jDw/dw+/I+Yd+iYC93/qOuL1TNrk3NHw5wcGh G/5iXD/U++RzODhuq1uj1U1xcOiGCWly3INDN/8d6ZwlxLOVUs7/Pbl84OfJB3146CY3xdI6Ffe0 umOcfMRnDt3kpuhq9lpf1CqrDQ/9PBPY2Rr+wdubtjdFb8dtdWt0O3b4HVm6nI+7YgI7Vc93hZM1 vCkuf1o2eX7luF5N9DadN1xjX/+7h0P3OR93RcDO0/88fprmN0VXq4gN14iaHPfg0K1OqX5bTr9v 3Dt0hz+dDnU9Hua62WF1/bu74aavpelqwPoWaHtTtPojd/e4/dwazXchNr8prmfld6TtoV8iYABE soQIQCQBAyCSgAEQScAAiCRgAEQSMAD+kfJSMAED4K+Uei0CBsBV529+eEPAAPjLBAZAv66VKt8u /9nwzZ1/wMepAExkPWDd+/TdlIaZwAAGdO/TYbJmrGMCBkAkAQMYUK01a0vhDwgYwIAu9QraUvgD NnEAjObJ2WtduMRZbfABE4BRWUIEIJKAARBJwACIJGAARBIwACIJGACRBAyASAIGQCQBAyCSgAEQ ScAAiCRgAEQSMAAiCRgAkQQMgEgCBkAkAQMg0v/UpxQCczdINgAAAABJRU5ErkJggg== ",
null,
"data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAkAAAAGwCAIAAADOgk3lAAAACXBIWXMAAAsSAAALEgHS3X78AAAA IXRFWHRTb2Z0d2FyZQBBcnRpZmV4IEdob3N0c2NyaXB0IDguNTRTRzzSAAAKAUlEQVR4nO3c0W6r xgJAUbjKB/dT+sfTB3QRJW7iE8WGDWs9RIZY8tQ03p7x+MxjjAkAav539AAA4CcEDIAkAQMgScAA SBIwAJIEDIAkAQMgScAASBIwAJIEDIAkAQMgScAASBIwAJIEDIAkAQMgScAASBIwAJIEDIAkAQMg ScAASBIwAJIEDIAkAQMgScAASBIwAJIEDIAkAQMgScAASPo4egBPmed5ezjG2J58eAjAtTUCtm3S Eqp5nteTy5ntoYYBXF5sCVGcAFg0ZmA/tlt7BOBJ558tlAL2s+nXGa7BSSaOZxjGGcZgGGcbg2Gc bQxT5N1/bAkRABaZGdjuXckYwy5EgDvLBOxzlnZndAvgVk6x2Po6J1lNBmhJvHj6DAyAJAEDIEnA AEgSMACSBAyAJAEDIEnAAEgSMACSBAyAJAEDIEnAAEgSMACSBAyAJAEDIEnAAEgSMACSBAyAJAED IEnAAEgSMACSPo4ewMvN8/z8nccYrxsJAL/o+gGbpr+fvudfLxwFAL/KEiIASQIGQJKAAZAkYAAk CRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZAkYAAkCRgASQIGQJKAAZD0 cfQAnjXP83JjjPHwzOc7AHBhjYAtcVpDNcZYfn7+7XqHg0YKwJs0ArbYhQqAO8sEzAQL4KXWD2Iq MgED4KW2c4NEzOxCBCCpMQNbdm2stx+esQsR4FYaAZseZWl3RrcAbsUSIgBJAgZAkoABkCRgACQJ GABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRg ACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoAB kCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkPRx9ACeNc/zenuMsT3z8BCAa8sEbPp3meZ5 Xg+XdG0PNQzg8kpLiPM8b+dhANxZbwZmggXwCrkZQiZgogXwUruPaQ4cyZMaS4iJpxKAd2rMwMYY u02Gn8/YhQhwK42ATY+ytDujWwC30lhCBIAdAQMgScAASBIwAJIEDIAkAQMgScAASBIwAJIEDIAk AQMgScAASBIwAJIEDIAkAQMgScAASBIwAJIEDIAkAQMgScAASBIwAJIEDIAkAQMgScAASBIwAJIE DIAkAQMgScAASBIwAJIEDIAkAQMgScAASBIwAJIEDIAkAQMgScAASBIwAJIEDIAkAQMgScAASBIw AJIEDIAkAQMg6ePoAfyZeZ7HGOvt5cZyZncIwLWVArYmanpUsu2hhgFcXmYJUZYA2CrNwAB4ne0q V0IjYMvTuv40FQP4dduX1kTMGgHz+RYAO42AfTbGsAsR4M5iAdvGaRcq3QK4lcwuRADYEjAAkgQM gCQBAyBJwABIEjAAkgQMgCQBAyBJwABIEjAAkgQMgCQBAyBJwABIEjAAkgQMgCQBAyBJwABIEjAA kgQMgCQBAyBJwABIEjAAkgQMgCQBAyBJwABIEjAAkgQMgCQBAyBJwABIEjAAkgQMgCQBAyBJwABI EjAAkgQMgCQBAyBJwABIEjAAkgQMgCQBAyBJwABIEjAAkj6OHsCz5nlebowxHp75fAcALiwTsGkT qjHG8nM5v6Rre6hhAJeXWULUJAC2SjOwdZEQgF+Xe40tBWz3WRcAv2i70JV4pW0sISaeSgDeqTED W3ZtrLcfnrELEeBWGgGbHmVpd0a3AG6lsYQIADsCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkY AEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAA JAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQ JGAAJAkYAEkCBkCSgAGQ9HH0AJ41z/NyY4zx8MznOwBwYZmATZtQjTGWn8v5JV3bQw0DuLzMEqIm AbBVmoFNZlcAL7N+EFORCdhunRCA37V9gU3ELLOEOKkXABuNGdjyXmC7z3DZx7EeTnYhAtxMI2AP m7Q7qVsAt1JaQgSAlYABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZA koABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJ AgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkPRx9AD+ zDzPY4z19nJjObM7BODaMgFb+7Qe7kq2PdQwgMvLLCGOMWQJgFVmBgbAS+0Wus5PwACYpn9vIEjE LLOECABb1RnYGMMuRIA7iwVsG6ddqHQL4FYsIQKQJGAAJAkYAEkCBkCSgAGQFNuFCMDPJL6b/EcE DOA+/n76nn+9cBS/xBIiAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAA JAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgAGQ JGAAJAkYAEkCBkCSgAGQJGAAJAkYAEkCBkCSgL3DPM9HD2GazjGMM4xhMoyTjWEyjJONoULAAEj6 OHoAv2l95zLGOHYkALzadQI2z/Pare1tAC7pOi/0DwNmNRngZ85fh+vMwB46/wUA4Gds4gAgScAA SLrOZ2CTXYgAd3KpgAFwH1fexHHghOzrh37PLv8vxvDOZ+a/HuvNV6dyRR7+9j3DOMMV2W0bPuqp 2P72Pn8jn53/+0iXDdiBXwv74qHftq3/2//89WsGL31mvh7Ge8bw7TDec1GevCJHDWP7Yn3gFXnn H8u3/1cc+zdy7BdbK19AsonjrcYYZ3hHYwyr87zHnOf58FeNZQxneEJOMozbOskr1bcuOwPjW4e/ Rhz+en0qb5v9fD2AY8dwKm9bQnxmGDwkYHd0kr/Mw/+1lOWh158HPiGHX4tTObygZ/h36cYY69+p hv0XS4g3dfgLxIGPvhr/Nx36hJzk2eBUlnB6Z/O1K68VnGcX4uetHIdsbVoe99iNXp//mcpD9ryd 6oo8/NXbhnG2K/LOGY8r8u3ATh6Is48PAB6yhAhAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJ 
AgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJ GABJAgZAkoABkCRgACQJGABJAgZAkoABkCRgACQJGABJAgZAkoABkPQPAZImmArXUQoAAAAASUVO RK5CYII= ",
null,
"data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAkAAAAGwCAIAAADOgk3lAAAACXBIWXMAAAsSAAALEgHS3X78AAAA IXRFWHRTb2Z0d2FyZQBBcnRpZmV4IEdob3N0c2NyaXB0IDguNTRTRzzSAAAOeUlEQVR4nO3d3Xab uhqGUdij93/L2gduWBT/YQeQXmnOgzXSNDGYLvuJPrAzl1ImAEjzv9o7AADfEDAAIgkYAJEEDIBI AgZAJAEDIJKAARBJwACIJGAARBIwACIJGACRBAyASAIGQCQBAyCSgAEQScAAiCRgAEQSMFo3z/PD jz/93qP24Te3fPuuzfe+uKm3WznqDu6/tWdfc+yewB5/au8AvDfPcynl7dfcPiilXPBk+nZ/Tvre 6qJ3ns4IGAFuTVo/dS5/XH9+85nNcmf5ss1nbn/cPC/f5/DhDmxKud7E5m/ffu/07yJmvd2Ht//g MD25a3u+cXM3l2Nyv/WHd+3hylLqOJuA0bmHUdmk7r5e99/17Ol487S+/uIX2dhs4n5Xp3/7cb+V +1u+//zmXmxuanPHN7f89i5sbuftgYLDCRgZjh0MXnPC5sAzdp/e1J7DdR/IY7erZJxNwBjO23na IZ6tk774+i92+Ist3vfm07uw5zbhQAJGjM2qYv9T6vLt074lwtfLiBfzvRebuN/VZ5+/vwuvd/v1 vXi2D5vTdc/uwsP9vOYKGrjxIxIAkbwODIBIAgZAJAEDIFL2RRyb1/TcPumsHsAIggO2ecMFF+8C DMUIEYBIwSuw6dE77jz8AgA+1f4oKzVg+98joIV/g0ammi3sRgv7YDda2we70do+TCE//RshAhAp dQX26bvmANCZ1IBNd6HSLYChNDFsPU8j02SALBFPns6BARBJwACIJGAARBIwACIJGACRBAyASAIG QCQBAyCSgAEQScAAiCRgAEQSMAAiCRgAkQQMgEg9B2yep2kqCb8XG4CPdRuwdbfmeZIxgM50GzAA +tZnwKy3ALrXZ8AA6F6fASul9h4AcLI+A3ZP0gA6023ASrlF6+/ZMGfFADrTbcAA6Fv/ATM8BOhS /wFbmCIC9GSggAHQkyECZooI0J8hArYwRQToxlgBA6AbowRsmSJahAH0YZSAAdAZAQMg0kABM0UE 6MlAAQOgJ2MFzAvCALoxVsAWpogA6QYNGADphguYKSJAH4YL2MIUESDauAEDINqIAfOCMIAOjBgw ADogYABEGjRgpogA6QYNGADpxg2YF4QBRBs3YAtTRIBEAgZApKEDZooIkGvogC1MEQHi/Km9A1+a /21OKWX5TLGwAhhAasDWlZrneZ7n5TPrj3fcjuUXQKT4EeJHuXp5O7+/DQCuk7oC2+9+2FhrTwBa Nqf9IJ8dsD3Lrx1f8Hf5Nc+uSwTGtTk1U3FPdoofIQIwpuAV2Hr59curEF3KARAnOGCbUB11KYcp IkAEI0QAIgnYXxZeAFkEbMvJMIAIAgZAJAH7jykiQBABe8AUEaB9AgZAJAH7xzJFtAgDaJyAARBJ wACIJGBbpogAEQQMgEgC9oAXhAG0T8BeMUUEaJaAARBJwB4zRQRonIC9YYoI0CYBAyCSgD3lBWEA LRMwACIJGACRBOwVU0SAZgkYAJEE7A0vCANok4DtZYoI0BQBAyCSgL1nigjQIAH7gCkiQDsEDIBI AraLF4QBtEbAAIgkYABEErC9TBEBmiJgAEQSsA94QRhAOwTsG6aIANUJGACRBOwzpogAjRCwL5ki AtQlYABEErCPmSICtEDAvmeKCFCRgAEQScC+4W2lAKoTMAAiCdiXXMoBUJeA/ZYpIkAVAgZAJAH7 nikiQEUCdgBTRIDrCRgAkf7U3oHvzT8Ln1LK/R+vUYrlF0AdqQG75WqdrqVb8zxf2bCfjTolBnCp 1IDdLOman6+DNn91fdsAIrx4Im1TcMDWS649X3babvydIlqEAdHWz5YRMXMRBwCRBAyASKkjxPV5 r4pXIf5s0RQR4GqpAZvuQuXqDIChGCEeQz0BLiZgB0u4cgegBwIGQCQBO4wpIsCVBOx4pogAFxAw ACIJ2JGWKaJFGMDZBAyASAIGQCQBO5gpIsA1BAyASAJ2PC8IA7iAgJ3IFBHgPAIGQCQBO4UpIsDZ BOxcpogAJxEwACIJ2Fm8IAzgVAIGQCQBAyCSgJ3IFBHgPAIGQCQBO5cXhAGcRMAuYooIcCwBAyCS gJ3OFBHgDAJ2HVNEgAMJGACRBOwKpogAhxOwS5kiAhxFwACIJGAX8bZSAMcSMAAiCdh1XMoBcCAB q8AUEeD3BAyASAJ2KVNEgKMIWB2miAC/JGAARBKwq5kiAhxCwKoxRQT4DQEDIJKAVeBtpQB+T8AA iCRgAEQSsDpMEQF+ScAAiCRg1XhBGMBvCFh9pogAXxAwACL9qb0D35tXK5dSyvLHkjObK8XyC+BL wQGbVq2a5/nhxynm2SkxgM9kjxDneZ4tYQCG1MMK7HXDNn/b2uJsmSJahAF1xa0HggO2M0WtFQug Tetny4iYpY4QIw4uAOdJXYHdX3aYeBXijSkiwBdSAzbdhSquWwD8RuoIsTPiC/ApAWuLU3sAOwkY AJEErBWmiAAfEbDmmCIC7CFgAEQSsIYsU0SLMIC3BAyASAIGQCQBa4spIsBOAgZAJAFrjheEAewh YO0yRQR4QcAAiCRgLTJFBHhLwJpmigjwjIABEEnAGmWKCPCagLXOFBHgIQEDIJKAtcvbSgG8IGAA RBKwprmUA+AZActgigiwIWAARBKw1pkiAjwkYDFMEQHWBAyASAIWwBQR4J6AJTFFBFgIGACRBCyD t5UC2BAwACIJGACRBCyGKSLAmoABEEnAknhBGMBCwCKZIgIIGACRBCyMKSLAjYClMkUEBidgAEQS sDxeEAYwCRgAoQQMgEgCFskUEUDAAIgkYKm8IAwYnIDFM0UExiRgAEQSsGCmiMDI4gM2/0zQ5h91 96eKIe80MLo/tXfgV9b1Kj/rkfXHAPQqeAUmVJMXhAEDy16B7bEZKmoewENxp2BSA3Y70Ov/PqNY AHusny0jYpY6Qiw/puETZYoIjCl1BbZRSll+Xhi8ZwCDiA/YkquRu1WK5RcwnNQRIg/JGDAOAQMg koB1YuABKjAoAeuNKSIwCAEDIJKA9cMUERiKgHXIFBEYgYABEEnAuuJtpYBxCBgAkQSsNy7lAAYh YN0yRQT6JmAARBKwDpkiAiMQsJ6ZIgIdEzAAIglYn0wRge4JWOdMEYFeCRgAkQSsW95WCuibgAEQ ScAAiCRgPTNFBDomYABEErDOeUEY0CsBG4UpItAZAQMgkoD1b30ph3UY0A0B698mWhoG9EHARqRh QAcEDIBIAgZAJAEblCkikE7A+vfstcwaBkT7U3sHuMKmYUu65tlbdQCprMBGtI6WF4cBoQRsUM/W 
ZAApBGxcpWyXYgBBBGx0GgaEEjCcEgMiCRjTZJwIBBIw/qNhQBAB4x8aBqQQMLacEgMiCBgPOCUG tE/AeErDgJYJGK8YJwLNEjDe8KZTQJsEjPecEgMaJGDspWFAUwSMDzglBrQj+Bdazj9Pn6WU+z9y klL+6ZZfiQnUEhyw6d90Ld2a51nDTnU7un6tM1BX8AhRpepySgyoK3sFNu944tx8jewdaD1OvH3g 6EKuPc+oTckO2Obs14uv4STGidCN9bNlRMxSR4gRB3ccxonA9VJXYKUUVyE2ZTNO9I8AnC01YNNd qHSrOqfEgCuljhBpkzedAi4jYBxPw4ALCBin8KZTwNkEjLP4PSzAqQSMEzklBpxHwDidhgFnEDCu 4JQYcDgB4yLGicCxBIxLaRhwFAHjahoGHELAqMApMeD3BIw6nBIDfknAqEnDgK8JGJVpGPAdAaM+ p8SALwgYTXBKDPiUgNEQDQP2EzDaYpwI7CRgNMfvYQH2EDBa5JQY8JaA0S4NA14QMJrmlBjwjIDR OuNE4CEBI4OGARsCRgwNA9YEjCROiQELASOMU2LAjYARScMAASOVhsHgBIxgTonByASMbE6JwbAE jB5oGAxIwOiEcSKMRsDoh9/DAkMRMLrilBiMQ8DokIbBCASMPjklBt0TMLplnAh9EzA697BhYgbv lPdfUtuf2jsApyvlQbpuH5SAB+lZNhUf+VBMjsZK0GPECowhPHsQWootRj4UI9/3jftD0fLBsQJj FOt12FrLj8+LORQLhyKCFRgAkazAGF3LI/7zPFxhjHkoJkdjJWvpaQXGQO6fksZ8kpoGvuM7DXt8 sh4jAsZY1o/Glh+ZF1i/Tm7zmrnROBRrQY8RI0SG0/hj8mKOxsKhWJQyzfNcmj8iVmAARBIwACIJ 2BXmNq7saWE3WtiHyW40tg+T3WhsH1IIGACRgi/iWH5OuZ1p3PwRgL4FB2z6N11LtyIungHgl3p4 rn8RMNNkgO+0X4fsFdj0k6tnoWr/HwCA7wQHbLPwAmAo2VchqhfAsFLPgW1mhuspYug9AuAjqQED YHDB58B2qnhV/f2m1wvHK/eq4vL02aZrHYoXu1RxuxWPxuQxUnW7Lzbdwi41vsLpOWAVr6F/sela /0Msr5m7fgeebbriY6PW0Xix3YoJqbLdF5uu+xhpatNVdml9fVzjL6vNvojjtVJKxUfCs03P83z9 U0aDj8yp0qGYWn1+rHI0mlp7rf+qyv8Ytbb7YtPVd6nlek19r8DaVPHnmtaereoeiou3+Ha7ET/w XqPB9XGtTdc9FFU2/ZGeV2ANqtiPWv8jPtt03UdFrdX5s+1WnBSt/9vCpttcH1fZdMvlaISAXafu +1o19fhs8LxLxe3W2qXyY7r8f49nm27wX6fWpr0N3h5NLw8P0cLcbP3B7a9qXV5VfdN1D8Vmr6pf 2dXC0Zg8Rn72pMp27zdd/VDU3fRH+g8YAF0yQgQgkoABEEnAAIgkYABEEjAAIgkYAJEEDIBIAgZA JAEDIJKAARBJwACIJGAARBIwACIJGACRBAyASAIGQCQBAyCSgAEQScAAiCRgAEQSMAAiCRgAkQQM gEgCBkAkAQMgkoABEEnAAIgkYABEEjAAIgkYAJEEDIBIAgZAJAEDIJKAARDp/8NDhgC2uIvwAAAA AElFTkSuQmCC ",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7084246,"math_prob":0.99336517,"size":16536,"snap":"2021-43-2021-49","text_gpt3_token_len":5262,"char_repetition_ratio":0.15352045,"word_repetition_ratio":0.11471037,"special_character_ratio":0.33913884,"punctuation_ratio":0.16625446,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99741644,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-24T13:27:05Z\",\"WARC-Record-ID\":\"<urn:uuid:1e04b434-9eea-4afe-9d78-ea95bf1ca2d3>\",\"Content-Length\":\"141519\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f2e24f95-8ca2-4bff-bc4f-e7db7d67afad>\",\"WARC-Concurrent-To\":\"<urn:uuid:6296871e-d8d7-4a9a-ba51-eb6b208f81a1>\",\"WARC-IP-Address\":\"104.21.25.233\",\"WARC-Target-URI\":\"https://nbviewer.ipython.org/github/gpeyre/numerical-tours/blob/master/matlab/sparsity_7_sudoku.ipynb\",\"WARC-Payload-Digest\":\"sha1:U5DPEFQCWHHJQYIOUXGA4MRUUARATKF3\",\"WARC-Block-Digest\":\"sha1:GS7NMF2TNOIEBQSCZJ2ZRUH2YXIIOGXA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585997.77_warc_CC-MAIN-20211024111905-20211024141905-00446.warc.gz\"}"} |
https://www.irrigationbox.co.nz/irrigatable-area-calculator-by-water-supply | [
"# Irrigatable Area\n\nThis calculator finds the land area that can be irrigated with a given flow of water. The minimum system capacity (supply) is the available water from the supply. The water needs is the peak crop water need during a specific time period. The available hours of operation per day is measured as the available hours for irrigation on a worst case day. The system efficiency is based on the irrigation efficiency and distribution uniformity.\n\n## Irrigatable Area As Limited By Water Supply\n\nMinimum System Capacity (Supply):\nWater Needs:\nOperation Hours Per Day:\nSystem Efficiency:\nIrrigated Area:\n\n### The Equation\n\nThis calulator uses this formula to determine the Irrigatable area.",
null,
"Where:",
null,
"= Irrigatable Area (sq. ft)",
null,
"= Minimum system capacity, or supply (gpm)",
null,
"= Water needs (in/day)",
null,
"= Operation hours per day (hrs)",
null,
"= System Efficiency (as a decimal)\n\nReference: Washington State University"
] | [
null,
"https://www.irrigationbox.co.nz/Images/uploaded/Calculator-Images/Irrigatable_Area.gif",
null,
"https://www.irrigationbox.co.nz/Images/uploaded/Calculator-Images/A.gif",
null,
"https://www.irrigationbox.co.nz/Images/uploaded/Calculator-Images/S.gif",
null,
"https://www.irrigationbox.co.nz/Images/uploaded/Calculator-Images/Wsubn.gif",
null,
"https://www.irrigationbox.co.nz/Images/uploaded/Calculator-Images/hrs.gif",
null,
"https://www.irrigationbox.co.nz/Images/uploaded/Calculator-Images/E.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7855242,"math_prob":0.82860565,"size":274,"snap":"2021-31-2021-39","text_gpt3_token_len":65,"char_repetition_ratio":0.08148148,"word_repetition_ratio":0.0,"special_character_ratio":0.229927,"punctuation_ratio":0.11363637,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97661215,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,1,null,4,null,4,null,1,null,2,null,7,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-08-03T15:00:18Z\",\"WARC-Record-ID\":\"<urn:uuid:51d8dce4-1914-46ad-9a85-700c95d670ba>\",\"Content-Length\":\"77241\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:751823fb-4d49-4608-97f5-ffc8a285601e>\",\"WARC-Concurrent-To\":\"<urn:uuid:541849d0-dbaa-479d-a0a8-8cb6f3c0076b>\",\"WARC-IP-Address\":\"13.70.72.44\",\"WARC-Target-URI\":\"https://www.irrigationbox.co.nz/irrigatable-area-calculator-by-water-supply\",\"WARC-Payload-Digest\":\"sha1:HGRZRAB2JSAT5L4LQKOPSGUWN6LNP7NA\",\"WARC-Block-Digest\":\"sha1:XOX7K4VATPJ7VP4ICREUIKEVHDQWQIHF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046154459.22_warc_CC-MAIN-20210803124251-20210803154251-00494.warc.gz\"}"} |
https://downloads.haskell.org/~ghc/6.8.1/docs/html/users_guide/data-type-extensions.html | [
"## 8.4. Extensions to data types and type synonyms\n\n### 8.4.1. Data types with no constructors\n\nWith the `-fglasgow-exts` flag, GHC lets you declare a data type with no constructors. For example:\n\n``` data S -- S :: *\ndata T a -- T :: * -> *\n```\n\nSyntactically, the declaration lacks the \"= constrs\" part. The type can be parameterised over types of any kind, but if the kind is not `*` then an explicit kind annotation must be used (see Section 8.7.3, “Explicitly-kinded quantification”).\n\nSuch data types have only one value, namely bottom. Nevertheless, they can be useful when defining \"phantom types\".\n\n### 8.4.2. Infix type constructors, classes, and type variables\n\nGHC allows type constructors, classes, and type variables to be operators, and to be written infix, very much like expressions. More specifically:\n\n• A type constructor or class can be an operator, beginning with a colon; e.g. `:*:`. The lexical syntax is the same as that for data constructors.\n\n• Data type and type-synonym declarations can be written infix, parenthesised if you want further arguments. E.g.\n\n``` data a :*: b = Foo a b\ntype a :+: b = Either a b\nclass a :=: b where ...\n\ndata (a :**: b) x = Baz a b x\ntype (a :++: b) y = Either (a,b) y\n```\n\n• Types, and class constraints, can be written infix. For example\n\n```\tx :: Int :*: Bool\nf :: (a :=: b) => a -> b\n```\n\n• A type variable can be an (unqualified) operator e.g. `+`. The lexical syntax is the same as that for variable operators, excluding \"(.)\", \"(!)\", and \"(*)\". In a binding position, the operator must be parenthesised. For example:\n\n``` type T (+) = Int + Int\nf :: T Either\nf = Left 3\n\nliftA2 :: Arrow (~>)\n=> (a -> b -> c) -> (e ~> a) -> (e ~> b) -> (e ~> c)\nliftA2 = ...\n```\n\n• Back-quotes work as for expressions, both for type constructors and type variables; e.g. `Int `Either` Bool`, or `Int `a` Bool`. Similarly, parentheses work the same; e.g. `(:*:) Int Bool`.\n\n• Fixities may be declared for type constructors, or classes, just as for data constructors. However, one cannot distinguish between the two in a fixity declaration; a fixity declaration sets the fixity for a data constructor and the corresponding type constructor. For example:\n\n``` infixl 7 T, :*:\n```\n\nsets the fixity for both type constructor `T` and data constructor `T`, and similarly for `:*:`. `Int `a` Bool`.\n\n• Function arrow is `infixr` with fixity 0. (This might change; I'm not sure what it should be.)\n\n### 8.4.3. Liberalised type synonyms\n\nType synonyms are like macros at the type level, and GHC does validity checking on types only after expanding type synonyms. That means that GHC can be very much more liberal about type synonyms than Haskell 98:\n\n• You can write a `forall` (including overloading) in a type synonym, thus:\n\n``` type Discard a = forall b. Show b => a -> b -> (a, String)\n\nf x y = (x, show y)\n\ng :: Discard Int -> (Int,String) -- A rank-2 type\ng f = f 3 True\n```\n\n• You can write an unboxed tuple in a type synonym:\n\n``` type Pr = (# Int, Int #)\n\nh :: Int -> Pr\nh x = (# x, x #)\n```\n\n• You can apply a type synonym to a forall type:\n\n``` type Foo a = a -> a -> Bool\n\nf :: Foo (forall b. b->b)\n```\n\nAfter expanding the synonym, `f` has the legal (in GHC) type:\n\n``` f :: (forall b. b->b) -> (forall b. b->b) -> Bool\n```\n\n• You can apply a type synonym to a partially applied type synonym:\n\n``` type Generic i o = forall x. 
i x -> o x\ntype Id x = x\n\nfoo :: Generic Id []\n```\n\nAfter expanding the synonym, `foo` has the legal (in GHC) type:\n\n``` foo :: forall x. x -> [x]\n```\n\nGHC currently does kind checking before expanding synonyms (though even that could be changed.)\n\nAfter expanding type synonyms, GHC does validity checking on types, looking for the following mal-formedness which isn't detected simply by kind checking:\n\n• Type constructor applied to a type involving for-alls.\n\n• Unboxed tuple on left of an arrow.\n\n• Partially-applied type synonym.\n\nSo, for example, this will be rejected:\n\n``` type Pr = (# Int, Int #)\n\nh :: Pr -> Int\nh x = ...\n```\n\nbecause GHC does not allow unboxed tuples on the left of a function arrow.\n\n### 8.4.4. Existentially quantified data constructors\n\nThe idea of using existential quantification in data type declarations was suggested by Perry, and implemented in Hope+ (Nigel Perry, The Implementation of Practical Functional Programming Languages, PhD Thesis, University of London, 1991). It was later formalised by Laufer and Odersky (Polymorphic type inference and abstract data types, TOPLAS, 16(5), pp1411-1430, 1994). It's been in Lennart Augustsson's hbc Haskell compiler for several years, and proved very useful. Here's the idea. Consider the declaration:\n\n``` data Foo = forall a. MkFoo a (a -> Bool)\n| Nil\n```\n\nThe data type `Foo` has two constructors with types:\n\n``` MkFoo :: forall a. a -> (a -> Bool) -> Foo\nNil :: Foo\n```\n\nNotice that the type variable `a` in the type of `MkFoo` does not appear in the data type itself, which is plain `Foo`. For example, the following expression is fine:\n\n``` [MkFoo 3 even, MkFoo 'c' isUpper] :: [Foo]\n```\n\nHere, `(MkFoo 3 even)` packages an integer with a function `even` that maps an integer to `Bool`; and ```MkFoo 'c' isUpper``` packages a character with a compatible function. These two things are each of type `Foo` and can be put in a list.\n\nWhat can we do with a value of type `Foo`?. In particular, what happens when we pattern-match on `MkFoo`?\n\n``` f (MkFoo val fn) = ???\n```\n\nSince all we know about `val` and `fn` is that they are compatible, the only (useful) thing we can do with them is to apply `fn` to `val` to get a boolean. For example:\n\n``` f :: Foo -> Bool\nf (MkFoo val fn) = fn val\n```\n\nWhat this allows us to do is to package heterogenous values together with a bunch of functions that manipulate them, and then treat that collection of packages in a uniform manner. You can express quite a bit of object-oriented-like programming this way.\n\n#### 8.4.4.1. Why existential?\n\nWhat has this to do with existential quantification? Simply that `MkFoo` has the (nearly) isomorphic type\n\n``` MkFoo :: (exists a . (a, a -> Bool)) -> Foo\n```\n\nBut Haskell programmers can safely think of the ordinary universally quantified type given above, thereby avoiding adding a new existential quantification construct.\n\n#### 8.4.4.2. Type classes\n\nAn easy extension is to allow arbitrary contexts before the constructor. For example:\n\n```data Baz = forall a. Eq a => Baz1 a a\n| forall b. Show b => Baz2 b (b -> b)\n```\n\nThe two constructors have the types you'd expect:\n\n```Baz1 :: forall a. Eq a => a -> a -> Baz\nBaz2 :: forall b. Show b => b -> (b -> b) -> Baz\n```\n\nBut when pattern matching on `Baz1` the matched values can be compared for equality, and when pattern matching on `Baz2` the first matched value can be converted to a string (as well as applying the function to it). 
So this program is legal:\n\n``` f :: Baz -> String\nf (Baz1 p q) | p == q = \"Yes\"\n| otherwise = \"No\"\nf (Baz2 v fn) = show (fn v)\n```\n\nOperationally, in a dictionary-passing implementation, the constructors `Baz1` and `Baz2` must store the dictionaries for `Eq` and `Show` respectively, and extract it on pattern matching.\n\nNotice the way that the syntax fits smoothly with that used for universal quantification earlier.\n\n#### 8.4.4.3. Record Constructors\n\nGHC allows existentials to be used with records syntax as well. For example:\n\n```data Counter a = forall self. NewCounter\n{ _this :: self\n, _inc :: self -> self\n, _display :: self -> IO ()\n, tag :: a\n}\n```\n\nHere `tag` is a public field, with a well-typed selector function `tag :: Counter a -> a`. The `self` type is hidden from the outside; any attempt to apply `_this`, `_inc` or `_display` as functions will raise a compile-time error. In other words, GHC defines a record selector function only for fields whose type does not mention the existentially-quantified variables. (This example used an underscore in the fields for which record selectors will not be defined, but that is only programming style; GHC ignores them.)\n\nTo make use of these hidden fields, we need to create some helper functions:\n\n```inc :: Counter a -> Counter a\ninc (NewCounter x i d t) = NewCounter\n{ _this = i x, _inc = i, _display = d, tag = t }\n\ndisplay :: Counter a -> IO ()\ndisplay NewCounter{ _this = x, _display = d } = d x\n```\n\nNow we can define counters with different underlying implementations:\n\n```counterA :: Counter String\ncounterA = NewCounter\n{ _this = 0, _inc = (1+), _display = print, tag = \"A\" }\n\ncounterB :: Counter String\ncounterB = NewCounter\n{ _this = \"\", _inc = ('#':), _display = putStrLn, tag = \"B\" }\n\nmain = do\ndisplay (inc counterA) -- prints \"1\"\ndisplay (inc (inc counterB)) -- prints \"##\"\n```\n\nAt the moment, record update syntax is only supported for Haskell 98 data types, so the following function does not work:\n\n```-- This is invalid; use explicit NewCounter instead for now\nsetTag :: Counter a -> a -> Counter a\nsetTag obj t = obj{ tag = t }\n```\n\n#### 8.4.4.4. Restrictions\n\nThere are several restrictions on the ways in which existentially-quantified constructors can be use.\n\n• When pattern matching, each pattern match introduces a new, distinct, type for each existential type variable. These types cannot be unified with any other type, nor can they escape from the scope of the pattern match. For example, these fragments are incorrect:\n\n```f1 (MkFoo a f) = a\n```\n\nHere, the type bound by `MkFoo` \"escapes\", because `a` is the result of `f1`. One way to see why this is wrong is to ask what type `f1` has:\n\n``` f1 :: Foo -> a -- Weird!\n```\n\nWhat is this \"`a`\" in the result type? Clearly we don't mean this:\n\n``` f1 :: forall a. Foo -> a -- Wrong!\n```\n\nThe original program is just plain wrong. Here's another sort of error\n\n``` f2 (Baz1 a b) (Baz1 p q) = a==q\n```\n\nIt's ok to say `a==b` or `p==q`, but `a==q` is wrong because it equates the two distinct types arising from the two `Baz1` constructors.\n\n• You can't pattern-match on an existentially quantified constructor in a `let` or `where` group of bindings. 
So this is illegal:\n\n``` f3 x = a==b where { Baz1 a b = x }\n```\n\nInstead, use a `case` expression:\n\n``` f3 x = case x of Baz1 a b -> a==b\n```\n\nIn general, you can only pattern-match on an existentially-quantified constructor in a `case` expression or in the patterns of a function definition. The reason for this restriction is really an implementation one. Type-checking binding groups is already a nightmare without existentials complicating the picture. Also an existential pattern binding at the top level of a module doesn't make sense, because it's not clear how to prevent the existentially-quantified type \"escaping\". So for now, there's a simple-to-state restriction. We'll see how annoying it is.\n\n• You can't use existential quantification for `newtype` declarations. So this is illegal:\n\n``` newtype T = forall a. Ord a => MkT a\n```\n\nReason: a value of type `T` must be represented as a pair of a dictionary for `Ord t` and a value of type `t`. That contradicts the idea that `newtype` should have no concrete representation. You can get just the same efficiency and effect by using `data` instead of `newtype`. If there is no overloading involved, then there is more of a case for allowing an existentially-quantified `newtype`, because the `data` version does carry an implementation cost, but single-field existentially quantified constructors aren't much use. So the simple restriction (no existential stuff on `newtype`) stands, unless there are convincing reasons to change it.\n\n• You can't use `deriving` to define instances of a data type with existentially quantified data constructors. Reason: in most cases it would not make sense. For example:;\n\n```data T = forall a. MkT [a] deriving( Eq )\n```\n\nTo derive `Eq` in the standard way we would need to have equality between the single component of two `MkT` constructors:\n\n```instance Eq T where\n(MkT a) == (MkT b) = ???\n```\n\nBut `a` and `b` have distinct types, and so can't be compared. It's just about possible to imagine examples in which the derived instance would make sense, but it seems altogether simpler simply to prohibit such declarations. Define your own instances!\n\n### 8.4.5. Declaring data types with explicit constructor signatures\n\nGHC allows you to declare an algebraic data type by giving the type signatures of constructors explicitly. For example:\n\n``` data Maybe a where\nNothing :: Maybe a\nJust :: a -> Maybe a\n```\n\nThe form is called a \"GADT-style declaration\" because Generalised Algebraic Data Types, described in Section 8.4.6, “Generalised Algebraic Data Types (GADTs)”, can only be declared using this form.\n\nNotice that GADT-style syntax generalises existential types (Section 8.4.4, “Existentially quantified data constructors ”). For example, these two declarations are equivalent:\n\n``` data Foo = forall a. MkFoo a (a -> Bool)\ndata Foo' where { MKFoo :: a -> (a->Bool) -> Foo' }\n```\n\nAny data type that can be declared in standard Haskell-98 syntax can also be declared using GADT-style syntax. The choice is largely stylistic, but GADT-style declarations differ in one important respect: they treat class constraints on the data constructors differently. Specifically, if the constructor is given a type-class context, that context is made available by pattern matching. 
For example:\n\n``` data Set a where\nMkSet :: Eq a => [a] -> Set a\n\nmakeSet :: Eq a => [a] -> Set a\nmakeSet xs = MkSet (nub xs)\n\ninsert :: a -> Set a -> Set a\ninsert a (MkSet as) | a `elem` as = MkSet as\n| otherwise = MkSet (a:as)\n```\n\nA use of `MkSet` as a constructor (e.g. in the definition of `makeSet`) gives rise to a `(Eq a)` constraint, as you would expect. The new feature is that pattern-matching on `MkSet` (as in the definition of `insert`) makes available an `(Eq a)` context. In implementation terms, the `MkSet` constructor has a hidden field that stores the `(Eq a)` dictionary that is passed to `MkSet`; so when pattern-matching that dictionary becomes available for the right-hand side of the match. In the example, the equality dictionary is used to satisfy the equality constraint generated by the call to `elem`, so that the type of `insert` itself has no `Eq` constraint.\n\nThis behaviour contrasts with Haskell 98's peculiar treatment of contexts on a data type declaration (Section 4.2.1 of the Haskell 98 Report). In Haskell 98 the definition\n\n``` data Eq a => Set' a = MkSet' [a]\n```\n\ngives `MkSet'` the same type as `MkSet` above. But instead of making available an `(Eq a)` constraint, pattern-matching on `MkSet'` requires an `(Eq a)` constraint! GHC faithfully implements this behaviour, odd though it is. But for GADT-style declarations, GHC's behaviour is much more useful, as well as much more intuitive.\n\nFor example, a possible application of GHC's behaviour is to reify dictionaries:\n\n``` data NumInst a where\nMkNumInst :: Num a => NumInst a\n\nintInst :: NumInst Int\nintInst = MkNumInst\n\nplus :: NumInst a -> a -> a -> a\nplus MkNumInst p q = p + q\n```\n\nHere, a value of type `NumInst a` is equivalent to an explicit `(Num a)` dictionary.\n\nThe rest of this section gives further details about GADT-style data type declarations.\n\n• The result type of each data constructor must begin with the type constructor being defined. If the result type of all constructors has the form `T a1 ... an`, where `a1 ... an` are distinct type variables, then the data type is ordinary; otherwise is a generalised data type (Section 8.4.6, “Generalised Algebraic Data Types (GADTs)”).\n\n• The type signature of each constructor is independent, and is implicitly universally quantified as usual. Different constructors may have different universally-quantified type variables and different type-class constraints. For example, this is fine:\n\n``` data T a where\nT1 :: Eq b => b -> T b\nT2 :: (Show c, Ix c) => c -> [c] -> T c\n```\n\n• Unlike a Haskell-98-style data type declaration, the type variable(s) in the \"`data Set a where`\" header have no scope. Indeed, one can write a kind signature instead:\n\n``` data Set :: * -> * where ...\n```\n\nor even a mixture of the two:\n\n``` data Foo a :: (* -> *) -> * where ...\n```\n\nThe type variables (if given) may be explicitly kinded, so we could also write the header for `Foo` like this:\n\n``` data Foo a (b :: * -> *) where ...\n```\n\n• You can use strictness annotations, in the obvious places in the constructor type:\n\n``` data Term a where\nLit :: !Int -> Term Int\nIf :: Term Bool -> !(Term a) -> !(Term a) -> Term a\nPair :: Term a -> Term b -> Term (a,b)\n```\n\n• You can use a `deriving` clause on a GADT-style data type declaration. 
For example, these two declarations are equivalent\n\n``` data Maybe1 a where {\nNothing1 :: Maybe1 a ;\nJust1 :: a -> Maybe1 a\n} deriving( Eq, Ord )\n\ndata Maybe2 a = Nothing2 | Just2 a\nderiving( Eq, Ord )\n```\n\n• You can use record syntax on a GADT-style data type declaration:\n\n``` data Person where\nAdult { name :: String, children :: [Person] } :: Person\nChild { name :: String } :: Person\n```\n\nAs usual, for every constructor that has a field `f`, the type of field `f` must be the same (modulo alpha conversion).\n\nAt the moment, record updates are not yet possible with GADT-style declarations, so support is limited to record construction, selection and pattern matching. For example\n\n``` aPerson = Adult { name = \"Fred\", children = [] }\n\nshortName :: Person -> Bool\nhasChildren (Adult { children = kids }) = not (null kids)\nhasChildren (Child {}) = False\n```\n\n• As in the case of existentials declared using the Haskell-98-like record syntax (Section 8.4.4.3, “Record Constructors”), record-selector functions are generated only for those fields that have well-typed selectors. Here is the example of that section, in GADT-style syntax:\n\n```data Counter a where\nNewCounter { _this :: self\n, _inc :: self -> self\n, _display :: self -> IO ()\n, tag :: a\n}\n:: Counter a\n```\n\nAs before, only one selector function is generated here, that for `tag`. Nevertheless, you can still use all the field names in pattern matching and record construction.\n\n### 8.4.6. Generalised Algebraic Data Types (GADTs)\n\nGeneralised Algebraic Data Types generalise ordinary algebraic data types by allowing constructors to have richer return types. Here is an example:\n\n``` data Term a where\nLit :: Int -> Term Int\nSucc :: Term Int -> Term Int\nIsZero :: Term Int -> Term Bool\nIf :: Term Bool -> Term a -> Term a -> Term a\nPair :: Term a -> Term b -> Term (a,b)\n```\n\nNotice that the return type of the constructors is not always `Term a`, as is the case with ordinary data types. This generality allows us to write a well-typed `eval` function for these `Terms`:\n\n``` eval :: Term a -> a\neval (Lit i) \t = i\neval (Succ t) = 1 + eval t\neval (IsZero t) = eval t == 0\neval (If b e1 e2) = if eval b then eval e1 else eval e2\neval (Pair e1 e2) = (eval e1, eval e2)\n```\n\nThe key point about GADTs is that pattern matching causes type refinement. For example, in the right hand side of the equation\n\n``` eval :: Term a -> a\neval (Lit i) = ...\n```\n\nthe type `a` is refined to `Int`. That's the whole point! A precise specification of the type rules is beyond what this user manual aspires to, but the design closely follows that described in the paper Simple unification-based type inference for GADTs, (ICFP 2006). The general principle is this: type refinement is only carried out based on user-supplied type annotations. So if no type signature is supplied for `eval`, no type refinement happens, and lots of obscure error messages will occur. However, the refinement is quite general. For example, if we had:\n\n``` eval :: Term a -> a -> a\neval (Lit i) j = i+j\n```\n\nthe pattern match causes the type `a` to be refined to `Int` (because of the type of the constructor `Lit`), and that refinement also applies to the type of `j`, and the result type of the `case` expression. Hence the addition `i+j` is legal.\n\nThese and many other examples are given in papers by Hongwei Xi, and Tim Sheard. 
There is a longer introduction on the wiki, and Ralf Hinze's Fun with phantom types also has a number of examples. Note that papers may use different notation to that implemented in GHC.\n\nThe rest of this section outlines the extensions to GHC that support GADTs. The extension is enabled with `-XGADTs`.\n\n• A GADT can only be declared using GADT-style syntax (Section 8.4.5, “Declaring data types with explicit constructor signatures”); the old Haskell-98 syntax for data declarations always declares an ordinary data type. The result type of each constructor must begin with the type constructor being defined, but for a GADT the arguments to the type constructor can be arbitrary monotypes. For example, in the `Term` data type above, the type of each constructor must end with `Term ty`, but the `ty` may not be a type variable (e.g. the `Lit` constructor).\n\n• You cannot use a `deriving` clause for a GADT; only for an ordinary data type.\n\n• As mentioned in Section 8.4.5, “Declaring data types with explicit constructor signatures”, record syntax is supported. For example:\n\n``` data Term a where\nLit { val :: Int } :: Term Int\nSucc { num :: Term Int } :: Term Int\nPred { num :: Term Int } :: Term Int\nIsZero { arg :: Term Int } :: Term Bool\nPair { arg1 :: Term a\n, arg2 :: Term b\n} :: Term (a,b)\nIf { cnd :: Term Bool\n, tru :: Term a\n, fls :: Term a\n} :: Term a\n```\n\nHowever, for GADTs there is the following additional constraint: every constructor that has a field `f` must have the same result type (modulo alpha conversion) Hence, in the above example, we cannot merge the `num` and `arg` fields above into a single name. Although their field types are both `Term Int`, their selector functions actually have different types:\n\n``` num :: Term Int -> Term Int\narg :: Term Bool -> Term Int\n```"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8017702,"math_prob":0.95439124,"size":20383,"snap":"2023-40-2023-50","text_gpt3_token_len":5179,"char_repetition_ratio":0.13935915,"word_repetition_ratio":0.055555556,"special_character_ratio":0.26340577,"punctuation_ratio":0.1612825,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9686569,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-11T05:54:59Z\",\"WARC-Record-ID\":\"<urn:uuid:d621c7b5-672f-4526-9b84-c8bb1f14d490>\",\"Content-Length\":\"35005\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a75f70ae-67f2-4f15-bd48-c55c97906cd8>\",\"WARC-Concurrent-To\":\"<urn:uuid:ab3bde90-27fa-45b9-aa2c-f7357c6c8781>\",\"WARC-IP-Address\":\"146.75.29.175\",\"WARC-Target-URI\":\"https://downloads.haskell.org/~ghc/6.8.1/docs/html/users_guide/data-type-extensions.html\",\"WARC-Payload-Digest\":\"sha1:QLIFJPJ2MUW22UK2PFH24J7MRNYCF3O6\",\"WARC-Block-Digest\":\"sha1:XUZGKJ6HUBDQXS2RC7U346ZKOO47OC6S\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679103558.93_warc_CC-MAIN-20231211045204-20231211075204-00684.warc.gz\"}"} |
http://essays.grokearth.com/2012/10/golden-ratio-in-nature.html | [
"## Saturday, October 20, 2012\n\n### Golden Ratio in Nature\n\nIn words, the golden ratio is:\nThe ratio of the sum of two quantities over the larger quantity is equal to the ratio of the larger quantity over the smaller one.\nThe golden ratio expressed as an equation is:\n\n(a + b) / a = 1.618 = a / b\n\nFrom the golden ratio, we get a mathematical constant - a number. It is the number 1.6 which, by convention, is represented by the Greek letter φ (pronounced \"fee\").\n\nThe number φ is a mathematical curiosity. Like the mathematical constant π, φ appears everywhere.",
null,
"Ginkgo Leaf\nThe good, of course, is always beautiful, and the beautiful never lacks proportion.\nPlato\nThe golden ratio is found in the proportions of leaves. I measured the golden ratio in the leaf of a ginkgo tree.\n\nUsing the typographical units of pica, I measured 28 pica from the notch to the base of the stem of a ginkgo leaf. I then measured 17 pica from the top of the stem to the base of the stem (length a).\n\nThe quotient of 28 pica over 17 pica yields the golden ratio 1.6.\n\nScientists have recently observed nanoscopic symmetry. The symmetry they observed had the attributes of the golden ratio, demonstrating this curious proportion at the quantum level.\nThe universe cannot be read until we have learnt the language and become familiar with the characters in which it is written. It is written in mathematical language, and the letters are triangles, circles and other geometrical figures, without which means it is humanly impossible to comprehend a single word.\nGalileo Galilei\nReferences"
] | [
null,
"http://4.bp.blogspot.com/-zfFTrdhC39c/UILBQU_hISI/AAAAAAAADPM/1vwDj94wMiI/s200/ginkgoGoldenRatio2.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9014342,"math_prob":0.9574846,"size":1447,"snap":"2022-40-2023-06","text_gpt3_token_len":325,"char_repetition_ratio":0.12820514,"word_repetition_ratio":0.016064256,"special_character_ratio":0.2211472,"punctuation_ratio":0.11785714,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9883596,"pos_list":[0,1,2],"im_url_duplicate_count":[null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-27T23:48:07Z\",\"WARC-Record-ID\":\"<urn:uuid:479db622-c728-4208-a6b9-1bcea7e5a3a8>\",\"Content-Length\":\"288956\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ef1eec13-d11e-423e-9431-c770b0f80e52>\",\"WARC-Concurrent-To\":\"<urn:uuid:f1f56eb6-ae39-47b4-b461-92c953896213>\",\"WARC-IP-Address\":\"172.253.115.121\",\"WARC-Target-URI\":\"http://essays.grokearth.com/2012/10/golden-ratio-in-nature.html\",\"WARC-Payload-Digest\":\"sha1:V2BYWEEI6JEY2K2KP4OFCFOU5RDJ4X6R\",\"WARC-Block-Digest\":\"sha1:AWGHMW4POORYYXBGH66STKUWGPECL543\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499468.22_warc_CC-MAIN-20230127231443-20230128021443-00304.warc.gz\"}"} |
https://www.mathstrength.org/grade-4 | [
"",
null,
"## Formative Assessment and Bridging activities",
null,
"These materials are part of an iterative design process and will continue to be refined during the 2021-2022 school year. Feedback is being accepted at the link below.\n*Share Feedback for Grade 4 Modules\n\nNote: Links marked with will open in a new tab\nThe Bridging Standards in bold below are currently live. Others are coming soon!\n\nStandard 4.3c Standard 4.3d\n\nStandard 4.5a\n\nStandard 4.5b\n\nStandard 4.5c\n\nStandard 4.6a\n\nStandard 4.6b Standard 4.8a\n\nStandard 4.8b\n\nStandard 4.8d\n\nStandard 4.10a\n\nStandard 4.14b\n\nStandard 4.15\n\nStandard 4.16\n\n## Standard 4.2a\n\nStandard 4.2a Compare and order fractions and mixed numbers.\n\n(Pull down for more)\n\nUnderstanding the Learning Progression\n\nBig Ideas:\n\n• Initial fraction ideas begin with creating experiences with a variety of visual representations (area models, number lines, set models, etc.) which can be utilized to compare and order fractions. Over time, reasoning strategies and generalizations are developed through modeling and mental imagery. There may be a transition from creating models to applying those generalizations and reasoning strategies to solve problems.\n\n• A variety of reasoning strategies can be applied to compare fractions. Potential strategies include but are not limited to:\n\n• reasoning about relative magnitude ;\n\n• unit fraction reasoning;\n\n• benchmark reasoning ( i.e. I know 4/8 is ½ so ⅝ is a little more); and\n\n• equivalence.\n\n• Behr and Post (1992) indicate that “a child’s understanding of the ordering of two fractions needs to be based on an understanding of the ordering of unit fractions” (1992, p.21).\n\n• The reasoning strategy applied is often impacted by number choice. For example, I might use the benchmark of ½ if I am comparing 3/10 (a little less than ½) and ⅝ (a little more than ½).\n\nImportant Assessment Look-fors:\n\n• Students recognize and utilize benchmark fractions.\n\n• Students use their understanding of the ordering of unit fractions to ordering of two fractions.\n\n• Students recognize and apply their understanding of equivalent fractions.\n\n• Students recognize fractions equivalent to ½.\n\n• Students represent and work with fractions greater than 1.\n\n• Students organize their thinking in a way that helps them make sense of the problem. Student representations support their thinking/reasoning.\n\n• Student strategies make sense for the given set of numbers.\n\n• It is evident in the strategy used that the student understands the relationship between the numerator and the denominator.\n\nPurposeful questions:\n\n• Can you share where you started and why?\n\n• How might you use your understanding of unit fractions to compare and order these fractions?\n\n• How can benchmark be used to help you understand the value of these fractions?\n\n• Is that fraction greater than 1? Less than 1? How do you know?\n\n• What do you know about the value of these numbers?\n\n• What relationships do you see?\n\n• I’m looking at your number line and I’m noticing _____. Tell me more about that...",
null,
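"A small illustrative aside (added here; it is not part of the original module): the benchmark reasoning described above - 3/10 is a little less than ½ and ⅝ is a little more than ½ - can be checked with Python’s fractions module, with the numbers chosen only as examples.\n\n```python\nfrom fractions import Fraction\n\nhalf = Fraction(1, 2)\na, b = Fraction(3, 10), Fraction(5, 8)\n\n# Benchmark reasoning: 3/10 < 1/2 and 5/8 > 1/2, so 3/10 < 5/8.\nprint(a < half, b > half)   # True True\nprint(a < b)                # True\n\n# Ordering a small set of fractions, least to greatest: 3/10, 5/8, 3/2.\nnumbers = [Fraction(5, 8), Fraction(3, 2), Fraction(3, 10)]\nprint(sorted(numbers))\n```",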
"### Student Strengths\n\nStudents can name, write, represent and compare fractions and mixed numbers represented by a model\n\n### Bridging Concepts\n\nStudents can make generalizations about fractional relationships across representations (i.e., when the numerator is half of the denominator, the fraction is always equal to ½).\n\n### Standard 4.2a\n\nStudents can compare and order fractions and mixed numbers with and without models",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"## Standard 4.2b\n\nStandard 4.2b Represent equivalent fractions.\n\n(Pull down for more)\n\nUnderstanding the Learning Progression\n\nBig Ideas:\n\n• When two fractions are equivalent that means there are two ways of describing the same amount by using different sized fractional parts. (Van de Walle et al, 2019)\n\n• A variety of representations and models can be used to identify different names for equivalent fractions: region/area, set and measurement models. (Students should use area representations, strips of paper, tape diagrams, number lines, counters and other manipulatives to reason about equivalence.)\n\n• Intuitive methods using drawings and manipulatives support student understanding. Students can develop an understanding of equivalent fractions and also develop from that understanding a conceptually based algorithm. Delay sharing “a rule.” (Van de Walle et al, 2019)\n\nImportant Assessment Look-fors:\n\n• Evidence that the student is able to identify equivalent fractions modeled using a variety of different models: area, set and/or measurement.\n\n• The student uses strategies like removing lines or adding additional lines to the area and number line models to show how the models are equivalent.\n\n• The student is able to use a variety of strategies and can justify their reasoning as to why fractions are equivalent.\n\n• When the student represents their fraction as models, they represent the whole using the same size and shape model. Click this link for more information about student representations.\n\nPurposeful questions:\n\n• What strategy (or strategies) did you use to determine which fraction models are equivalent?\n\n• Which fraction models are the easiest for you to identify? What made it easy?\n\n• Which fraction models are the hardest for you to identify? What made it difficult?\n\n• What relationships do you see?\n\n• Can you create another equivalent model that is different from ones shown?",
null,
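"One possible way to illustrate the idea above that equivalent fractions are simply different names for the same amount (an added sketch, not from the source page) is to split every part of ½ into smaller equal parts and confirm that the results name the same number:\n\n```python\nfrom fractions import Fraction\n\n# 2/4 and 3/6 are different names for the same amount as 1/2.\nprint(Fraction(2, 4) == Fraction(1, 2))   # True\nprint(Fraction(3, 6) == Fraction(1, 2))   # True\n\n# Splitting each part of 1/2 into n smaller equal parts gives n/(2n).\nfor n in range(1, 5):\n    print(f'{n}/{2 * n}')                 # 1/2, 2/4, 3/6, 4/8\n```",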
"### Student Strengths\n\nStudents can name and write fractions and mixed numbers represented by a model. Students can represent fractions and mixed numbers with models and symbols.\n\n### Bridging Concepts\n\nStudents can create a model to represent a fraction. (Area models tend to be easier for students to grasp while set and measurement models tend to be more difficult. Students may need additional support bridging their understanding of area models to help support their understanding of set and measurement models.)\n\n### Standard 4.2b\n\nStudents can represent equivalent fractions.",
null,
"",
null,
"",
null,
"",
null,
"## Standard 4.2C\n\nStandard 4.2c Identify the division statement that represents a fraction, with models and in context.\n\n(Pull down for more)\n\nUnderstanding the Learning Progression\n\nBig Ideas:\n\n• Fractions have multiple meanings and interpretations. Generally there are five main interpretations: fractions as parts of wholes or parts of sets; fractions as the result of dividing two numbers; fractions as the ratio of two quantities; fractions as operators; and fractions as measures (Behr, Harel, Post, and Lesh 1992; Kieren 1988; Lamon 1999). When a fraction is presented in symbolic form, devoid of context, the intended interpretation of the fraction is not evident. The various interpretations are needed, however, in order to make sense of fraction problems and situations. Students need to explore and understand that fractional parts are equal shares of a whole or set model.\n\n• A fraction can also represent the result obtained when two numbers are divided. This interpretation of fraction is sometimes referred to as the quotient meaning, since the quotient is the answer to a division problem. Chapin and Johnson (2000) give these examples, “the number of gumdrops each child receives when 40 gumdrops are shared among 5 children can be expressed as 40/5 , 8/1 , or 8; when two steaks are shared equally among three people, each person gets 2/ 3 of a steak for dinner. We often express the quotient as a mixed number rather than an improper fraction – 15 feet of rope can be divided to make two jump ropes, each 7 1/ 2 ( 15/2 ) feet long” (p .99– 101).\n\n• When partitioning a whole into more equal shares the parts become smaller. (Teaching Student-Centered Mathematics, Grades 3-5, John Van de Walle)\n\n• When exploring the concept of fractions and connecting it to the division statement, students should be able to identify and recognize that the fraction is the amount that each person would receive when dividing equally.\n\nImportant Assessment Look-fors:\n\n• The student is able to accurately partition models when exploring fair share problems.\n\n• When given context, the student is able to successfully use models to identify the division statement that represents the fraction.\n\n• The student can interpret the fraction and division statement as the amount each person would receive.\n\n• The student can identify the difference between a division statement that represents an improper fraction versus a proper fraction within context. The student can also represent a fair share problem when the numerator is larger or smaller than the denominator.\n\nPurposeful questions:\n\n• Will each person receive more or less than a whole? Explain your reasoning.\n\n• Identify the division statements that represent this fraction as presented in the context.\n\n• How much will each person receive when shared equally?\n\n• Once the students have discovered how much each person will receive, ask the students if each person was to combine their equal share together, what would be the combined total? What do you notice?\n\n• Example: 2 cookies shared with 3 students. Each person would receive ⅔ of a cookie. When combining the equal shares from each of the 3 people, ⅔ + ⅔ + ⅔ = 6/3 or 2. The sum is equivalent to 2 whole cookies or the original amount of cookies that was shared.\n\n• If the dividend is smaller than the divisor, how does this relate to how much each person would receive if the amount was shared equally? 
Would this be true if the dividend was larger than the divisor?\n\n• Example: 2 pizzas shared with 3 people versus 3 pizzas shared with 2 people.",
null,
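"To connect the cookie example above to symbols (an illustration added here, not taken from the source), a short sketch showing that the division statement 2 ÷ 3 and the fraction ⅔ name the same fair share, and that three shares of ⅔ recombine into the original 2 cookies:\n\n```python\nfrom fractions import Fraction\n\ncookies, students = 2, 3\n\n# The division statement 2 divided by 3 names the same amount as 2/3.\nshare = Fraction(cookies, students)\nprint(share)                          # 2/3\n\n# Recombining the three equal shares gives back the original amount.\nprint(share * students == cookies)    # True\n```",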
"### Student Strengths\n\nStudents can name and write fractions and mixed numbers represented by a model.\n\n### Bridging Concepts\n\nStudents understand the concept of division and know what the dividend and divisor represents.\n\n### Standard 4.2C\n\nStudents can identify the division statement that represents a fraction, with models and in context.",
null,
"",
null,
"",
null,
"## Standard 4.3a\n\nStandard 4.3a Read, write, represent, and identify decimals expressed through thousandths.\n\n(Pull down for more)\n\nUnderstanding the Learning Progression\n\nBig Ideas:\n\n• The structure of the base-ten number system is based upon a simple pattern of tens, where each place is ten times the value of the place to its right. This is known as a ten-to-one place value relationship (Van de Walle et al., 2019)\n\n• Decimals is another form of writing fractions and the connection between the two is important in understanding the concepts of decimals (i.e. connect 1/10 as 0.1 and 1/100 as 0.01 and 1/1000 as 0.001 - Reading the decimal fractions will help students “hear” the connection).\n\n• Understanding of the base-ten system to the relationship between adjacent places and how numbers compare can help support students round for decimals to thousandths. For example, it is important to deepen understanding and fluency with decimals in the different forms, seeing .57 as 5 tenths and 7 hundredths as well as 57 hundredths (Common Core Standards Writing Team, 2019, p. 64). This ability to rename and decompose decimals can help students round to the nearest whole number, tenth or hundredth.\n\n• The decimal point separates the whole from the fractional part. The place value system extends infinitely in both directions of the decimal point, to very large and very small numbers. Connected uses of decimals in real life using money, metric measurements, batting averages can support student understanding.\n\nImportant Assessment Look-fors:\n\n• The student is able to identify the decimal represented using a variety of different models.\n\n• The student can model a given decimal using base ten blocks when the whole is defined.\n\n• The student is able to read and write decimals, especially with zero placeholders?\n\n• The student is able to represent decimals in a variety of forms?\n\n• When given a decimal, the student can identify the place value position and value of each digit.\n\nPurposeful questions:\n\n• Is the decimal represented more or less than a whole? Explain your answer.\n\n• If the whole changed, how would that affect the decimal represented?\n\n• When modeling with base ten blocks, what would the cube represent if the whole is equal to a rod? Explain your answer.\n\n• How can you represent this decimal in a variety of ways, such as number line, money, or 10-by-10 grid?",
null,
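"Echoing the .57 example above, here is a small added sketch (not part of the original materials) that decomposes a decimal through thousandths by place value; 0.357 is an arbitrary example:\n\n```python\nfrom fractions import Fraction\n\n# 0.357 read as a decimal fraction: 357 thousandths.\nn = Fraction(357, 1000)\n\n# Decompose by place value: 3 tenths + 5 hundredths + 7 thousandths.\ndecomposed = Fraction(3, 10) + Fraction(5, 100) + Fraction(7, 1000)\nprint(decomposed == n)    # True\nprint(float(n))           # 0.357\n```",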
"### Student Strengths\n\nStudents can identify the ten-to-one relationship within the base-ten system of whole numbers.\nStudents can read and write various amounts of money and recognize that the number before the decimal point represents whole dollars and the amount to the right of the decimal point represent a part of a dollar.\n\n### Bridging Concepts\n\nStudents connect the idea that money is a model or representation of decimals (i.e., that 10 dimes equals a dollar or 100 pennies is equal to a dollar). Students can extend their understanding of ten-to-one place value relationships to decimals.\n\n### Standard 4.3a\n\nRead, write, represent, and identify decimals expressed through thousandths.",
null,
"",
null,
"",
null,
"",
null,
"## Standard 4.3c\n\nStandard 4.3c Compare and order decimals.\n\n(Pull down for more)\n\nUnderstanding the Learning Progression\n\nBig Ideas:\n\n• This standard builds upon the work students did in previous grades in understanding place value and comparing and ordering whole numbers. In grade 4, in addition to comparing greater numbers, students began relating decimal fractions and decimal numbers and comparing decimals using visual models. The place value understanding that supports the ability to compare decimals also supports the understanding of rounding decimals, which is introduced in grade 5 as SOL 5.1.\n\n• Concepts of whole numbers, fractions, and decimals are connected and applied when comparing and ordering.\n\n• Using manipulatives to construct decimals helps students develop an understanding of the relative size of decimal numbers for comparing and ordering.\n\n• It is important for students to connect decimal number sense concepts such as representations, decimals benchmarks, and/or fractions when comparing and ordering decimals.\n\nImportant Assessment Look-fors:\n\n• The student can compare decimals with different amounts of digits. (Example 0.9 and 0.234)\n\n• The student can justify which decimal is larger or smaller using a variety of strategies that focus on number sense such as models, decimal benchmarks and/or identifying the value of the greatest place value.\n\n• The student can order decimals least to greatest or greatest to least.\n\n• The student can apply a variety of strategies when ordering decimals with similar digits and/or different amounts of digits. (Example: 0.9; 0.901; 0.09; 0.009)\n\nPurposeful questions:\n\n• Identify which decimal is the greatest? Which one is the least? Explain your answer.\n\n• Which decimal(s) can be placed in the space provided so that the decimals are in order from least to greatest? (Example: 0.142; 0.45 ______; 0.8)\n\n• Compare the following decimals using two different strategies to justify which one is greater or least.",
null,
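"The ordering example given above (0.9; 0.901; 0.09; 0.009) can be checked directly; this added sketch uses Python’s decimal module so that place value is kept exact rather than relying on binary floats:\n\n```python\nfrom decimal import Decimal\n\n# Comparing decimals with different numbers of digits: 0.9 > 0.234.\nprint(Decimal('0.9') > Decimal('0.234'))    # True\n\n# Ordering the example set from least to greatest: 0.009, 0.09, 0.9, 0.901.\nvalues = [Decimal('0.9'), Decimal('0.901'), Decimal('0.09'), Decimal('0.009')]\nprint(sorted(values))\n```",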
"### Student Strengths\n\nStudents can compare and order whole numbers with similar numbers of digits and/or smaller numbers.\n\n### Bridging Concepts\n\nStudents use understanding of the ten-to-one base ten relationships to create decimal representations (i.e., base ten blocks, decimal circles/squares, etc.).\n\n### Standard 4.3c\n\nStudents can compare and order decimals.",
null,
"",
null,
"",
null,
"## Standard 4.4a\n\nStandard 4.4a Demonstrate fluency with multiplication facts through 12 x 12, and corresponding division facts.\n\n(Pull down for more)\n\nUnderstanding the Learning Progression\n\nBig Ideas:\n\n• Computational fluency is the ability to think flexibly in order to choose appropriate strategies to solve problems accurately and efficiently (VDOE Grade 4 Curriculum Framework).\n\n• All of the facts are conceptually related so students can figure out new or unknown facts using what they already know (Van de Walle et al, 2018).\n\n• The development of computational fluency relies on quick access to number facts. There are patterns and relationships that exist in the facts. These relationships can be used to learn and retain the facts (VDOE Grade 4 Curriculum Framework).\n\n• Mastering the basic facts is a developmental process. Students move through phases, starting with counting, then more efficient reasoning strategies, and eventually quick recall and mastery. Instruction must help students through these phases without rushing them to know their facts only through memorization (Van de Walle et al, 2018).\n\n• When students struggle with developing basic fact fluency, they may need to return to foundational ideas. Just providing additional drill will not resolve their challenges and can negatively affect their confidence and success in mathematics (Van de Walle et al, 2018).\n\n• In order to develop and use strategies to learn the multiplication facts through the twelves table, students should use concrete materials, a hundreds chart, and mental mathematics. Strategies to learn the multiplication facts include an understanding of multiples, properties of zero and one as factors, commutative property, and related facts (VDOE Grade 4 Curriculum Framework).\n\nImportant Assessment Look-fors:\n\n• The student knows and is able to apply a variety of strategies (i.e., partial products, using friendly numbers, repeated addition, and/or decomposition strategies, recall, etc.).\n\n• The student demonstrates an understanding of the term “product” and uses a strategy to find a product that leads to a correct answer.\n\n• The student identifies multiple number sentences with the same product.\n\n• Student’s work shows understanding of the inverse relationship between multiplication and division.\n\nPurposeful questions:\n\n• What strategies are most efficient for this fact (or set of facts) and how do you know?\n\n• How can knowing one fact help you with another fact (or a fact that you don’t know)?\n\n• What makes one fact related to another fact? How can knowing related facts be helpful?\n\n• What do you know about the relationship between multiplication and division? How can that relationship help you solve problems?\n\n• What are some ways that you can break apart these numbers to make this problem easier?",
null,
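"As a brief added illustration (not from the original page) of the decomposition and related-facts strategies named above - for example, building 6 × 7 from the known fact 5 × 7:\n\n```python\n# Decomposition strategy: 6 x 7 can be built from the known fact 5 x 7.\nprint(6 * 7 == 5 * 7 + 1 * 7)           # True\n\n# Related facts: multiplication and division undo each other.\nprint(42 // 7 == 6 and 42 // 6 == 7)    # True\n\n# The commutative property halves the number of facts to learn.\nprint(7 * 6 == 6 * 7)                   # True\n```",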
"### Student Strengths\n\nStudents developed an understanding of the meanings of multiplication and division of whole numbers through activities and practical problems involving equal-sized groups, arrays, and length models.\n\n### Bridging Concepts\n\nStudents have developed efficient reasoning strategies that will lead them to quick recall and memorization of facts of 0, 1, 2, 5 and 10.\n\n### Standard 4.4A\n\nThe student will demonstrate fluency with multiplication facts through 12 × 12, and the corresponding division facts.",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"## Standard 4.4b\n\nStandard 4.4b Estimate and determine sums, differences, and products of whole numbers.\n\n(Pull down for more)\n\nUnderstanding the Learning Progression\n\nBig Ideas:\n\n• Flexible methods of computation for all four operations involve taking apart (decomposing) and combining (composing) numbers in a variety of ways (Van de Walle et al, 2018).\n\n• Students should explore and apply the properties of addition and multiplication as strategies for solving addition, subtraction, multiplication, and division problems using a variety of representations (e.g., manipulatives, diagrams, and symbols). (VDOE Grade 4 Curriculum Framework)\n\n• Flexible methods for computation require deep understanding of the operations and the properties of operations (commutative property, associative property, and the distributive property). How addition and subtraction, as well as multiplication and division, are related as inverse operations is also critical knowledge (Van de Walle et al, 2018).\n\n• Estimation can be used to determine the approximation for and then to verify the reasonableness of sums, differences, products, and quotients of whole numbers. An estimate is a number that lies within a range of the exact solution, and the estimation strategy used in a particular problem determines how close the number is to the exact solution. (VDOE Grade 4 Curriculum Framework)\n\nImportant Assessment Look-fors:\n\n• Students' work shows that they understand and can apply terms such as estimate, sum, difference and product.\n\n• Students’ can select a strategy that they understand and can apply successfully.\n\n• Students can explain their solution and justify the reasonableness of their answer.\n\n• Student’s work contains a variety of strategies and representations.\n\n• Students are able to use estimation to determine the reasonableness of their answer (i.e., a little more than, a little less than, closer to, etc.).\n\nPurposeful questions:\n\n• Tell me about the operation you decided to use and why it makes sense.\n\n• How did you represent your thinking?\n\n• How do you know __ is closest to __?\n\n• Why did you choose that place value to estimate?",
null,
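"To make the idea of an estimate as a number that lies within a range of the exact solution concrete, one possible worked sketch (the numbers are invented for illustration) comparing a rounding-based estimate with the exact product:\n\n```python\n# Estimate 23 x 48 by rounding each factor to a friendly number.\nestimate = 20 * 50    # 1,000\nexact = 23 * 48       # 1,104\n\nprint(estimate, exact)\n# The estimate is close enough to judge whether an answer is reasonable.\nprint(abs(exact - estimate))    # 104\n```",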
"### Student Strengths\n\nThe student will estimate and determine the sum or difference of two whole numbers up to 9,999.\nThe student will represent multiplication and division through 10 × 10, using a variety of approaches and models.\n\n### Bridging Concepts\n\nStudents use their place value understanding and ability to round to estimate sums and differences of two whole numbers.Students use strategies based on place value and properties of the operations to multiply whole numbers.\n\n### Standard 4.4B\n\nThe student will estimate and determine sums, differences, and products of whole numbers.",
null,
"",
null,
"",
null,
"",
null,
"## Standard 4.4c\n\nStandard 4.4c Estimate and determine quotients of whole numbers, with and without remainders.\n\n(Pull down for more)\n\nUnderstanding the Learning Progression\n\nBig Ideas:\n\n• Flexible methods of computation involve taking apart (decomposing) and combining (composing) numbers in a variety of ways (Van de Walle et al, 2018).\n\n• Students should explore and apply the properties of addition and multiplication as strategies for solving division problems using a variety of representations (e.g., manipulatives, diagrams, and symbols). (VDOE Grade 4 Curriculum Framework)\n\n• Flexible methods for computation require deep understanding of the operations and the properties of operations (commutative property, associative property, and the distributive property). How addition and subtraction, as well as multiplication and division, are related as inverse operations is also critical knowledge (Van de Walle et al, 2018).\n\n• Estimation can be used to determine the approximation for and then to verify the reasonableness of quotients of whole numbers. An estimate is a number that lies within a range of the exact solution, and the estimation strategy used in a particular problem determines how close the number is to the exact solution (VDOE Grade 4 Curriculum Framework).\n\n• Some division situations will produce a remainder, but the remainder will always be less than the divisor. If the remainder is greater than the divisor, that means at least one more can be given to each group (fair sharing) or at least one more group of the given size (the dividend) may be created (Georgia Department of Education Grade 4 Curriculum).\n\nImportant Assessment Look-fors:\n\n• Students’ use methods they understand and can explain.\n\n• Students' work shows that they understand and can apply the term quotient(s).\n\n• Students’ work demonstrates an understanding of place value and identifying related facts that correlate with the problem.\n\n• Students can explain their solution and the reasonableness of their answer.\n\n• Student’s work contains a variety of strategies and representations.\n\n• The student is able to use estimation to determine the reasonableness of their answer (ie, a little more than, a little less than, closer to, etc.).\n\nProbing questions:\n\n• How did you represent your thinking?\n\n• How do you know __ is closest to __?\n\n• Why did you choose that place value for your estimate?\n\n• What is the meaning of a remainder in a division problem?\n\n• What effect does a remainder have on a quotient?\n\n• How are remainders and divisors related?",
null,
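"An added worked example (the numbers are arbitrary, not from the source) of a quotient with a remainder, showing that the remainder stays smaller than the divisor and that the parts recombine into the dividend:\n\n```python\ndividend, divisor = 53, 4\n\nquotient, remainder = divmod(dividend, divisor)\nprint(quotient, remainder)                           # 13 1\n\n# The remainder is always less than the divisor...\nprint(remainder < divisor)                           # True\n# ...and the parts recombine into the dividend.\nprint(quotient * divisor + remainder == dividend)    # True\n```",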
"### Student Strengths\n\nStudents developed an understanding of the meanings of multiplication and division of whole numbers through activities and practical problems involving equal-sized groups, arrays, and length models.\nThe students have worked on fluency of facts for 0, 1, 2, 5, and 10.\n\n### Bridging Concepts\n\nStudents may still be working on developing efficient reasoning strategies that will lead them to quick recall and memorization of facts from 0 - 12.\n\n### Standard 4.4c\n\nThe student will estimate and determine quotients of whole numbers, with and without remainders.",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"## Standard 4.4d\n\nStandard 4.4d Create and solve single-step and multistep practical problems involving addition, subtraction, and multiplication of whole numbers and single step practical problems with division.\n\n(Pull down for more)\n\nUnderstanding the Learning Progression\n\nBig Ideas:\n\n• Use reasoning and a variety of strategies that the student understands and is able to explain.\n\n• Makes sense of the problem rather than relying on keywords that may not always be helpful. (See VDOE Standards of Learning- Grade 4 Document p.19).\n\n• Inverse operations are related and can flexibly be used to solve problems.\n\n• Estimation, based on number sense and an understanding of place value, can be used to determine if an answer is reasonable.\n\n• Students will be stronger problem solvers given opportunities to engage with a variety of problem types. For more information about addition/subtraction problem types see the Grade 3 VDOE Standards of Learning Document p. 15 and for multiplication/division problem types see the Grade 4 VDOE Standards of Learning Document pp.20-21.\n\nImportant Assessment Look-fors:\n\n• Students are able to create a problem using the given information.\n\n• Students are able to correctly represent the problem they created and it matches the way they solved it.\n\n• Student work shows that they understand the problem because they have planned an approach to solve the problem and/or a way to represent their thinking.\n\n• Student work shows that they understand what each number in the problem represents.\n\n• Student work shows that they understand what operations can be used, the meaning behind the operation or how the operations are related.\n\nPurposeful questions:\n\n• How did you represent your thinking? How do you know it matches the story?\n\n• Is there another way that you could solve this problem?\n\n• What equation/number sentence matches your thinking?\n\n• What operations did you decide to use? Why?",
null,
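"A hypothetical multistep problem, invented here purely to illustrate the standard (it does not appear on the page): a class orders 4 boxes of 24 markers and gives 15 of them away; how many markers are left? The sketch shows the two steps and an estimation check:\n\n```python\nboxes, per_box, given_away = 4, 24, 15\n\n# Step 1: multiply to find the total number of markers.\ntotal = boxes * per_box       # 96\n# Step 2: subtract the markers given away.\nleft = total - given_away     # 81\n\n# Estimation check: 4 x 25 = 100 and 100 - 15 = 85, so 81 is reasonable.\nprint(total, left)            # 96 81\n```",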
"### Student Strengths\n\nStudents determine the sum or difference of two whole numbers to 4 digits.\nStudents create and solve single-step and multistep practical problems involving sums or differences of two whole numbers, each 9,999 or less.\n\n### Bridging Concepts\n\nStudents use their understanding of multiplication and division to solve a variety of single step contextual problems.\nStudents use place value understanding and properties of operations to solve multiplication problems up to 10 x 10.\n\n### Standard 4.4D\n\nStudents create and solve single-step and multistep practical problems involving addition, subtraction, and multiplication of whole numbers and single step practical problems with division.",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"### Routines:",
null,
""
] | [
null,
"https://lh5.googleusercontent.com/JOS0I-1o3om_IZUGVqddbtKiXCLc8B5RyrAcqaJ23-UGZ8--76afGuURhlmHok1NTE0GINJptyEK0P7ylAO4nwYNDOL_pmSpU6YVWuxyKPiTHPFAcEQzmQCgXJpKBPbqXQ=w1280",
null,
"https://lh6.googleusercontent.com/TP39rW-t4OecnRy0I6FQV7VURCJavsXMXyW2VYnnUoRnGUZPp-9v7MgshcMZNlnKYztjiP2nYgmxQeZLdZBNaPxzz1fdvc83CNhelO4-PCSjtRCgfykgXhahHFqhPqWswQ=w1280",
null,
"https://lh4.googleusercontent.com/lV9Zwtzqn9KnDIN916nxk7KbYp2ijDo7SAWZIkwtd0IsyIs45yYAHbfECVwDG0ElHmEGWxAtoT3SRck_N14Wmq2Hg8nZsf0zMM05vz0aancrLJIxVscWjcYbgw97DKjvkA=w1280",
null,
"https://lh6.googleusercontent.com/4oNwQYBPlvS_vr5lYqNGLbxjsMRjLifsvgVilDQjbKBtmmw-zKxkKNrGaHOQrv20kP8ac_U2MMmptP--IMIFafo=w16383",
null,
"https://lh6.googleusercontent.com/fmvuGGrjvIcWVu5NP2B_511A7DkxA7xZrLURHFX0uctoFu6zY6hDUfy86a_qijz-0p8CGWTE99pdE5AdvqBOmGU=w16383",
null,
"https://lh6.googleusercontent.com/cZVtCO90hUKQxAObfdg_SjIO1NbNSQwOTA494FIWfYI4vzD3tJ8WJm4hSlZ8pZjeh3GS_I0OIKGudCzIad_1-IY=w16383",
null,
"https://lh5.googleusercontent.com/78qi8usLuP7UB9SWzDVpLfK_5hT1ZM3goGJNedQFdaWs1CHxpFHkUBdwf4E8CQUH0U1a5nMrCQ96p3MK06FVYg=w16383",
null,
"https://lh3.googleusercontent.com/IbU5S2ND5EGwCHUvqJV6x7R6C4OrrvFPYcGLh5uMHcae7EcFe7TKWZe7pFa93fsKuuDq6Q4ffCli7d0rRjtBHGM=w16383",
null,
"https://lh4.googleusercontent.com/lV9Zwtzqn9KnDIN916nxk7KbYp2ijDo7SAWZIkwtd0IsyIs45yYAHbfECVwDG0ElHmEGWxAtoT3SRck_N14Wmq2Hg8nZsf0zMM05vz0aancrLJIxVscWjcYbgw97DKjvkA=w1280",
null,
"https://lh4.googleusercontent.com/66_UgI1Po00_-lK1aNvJi6rg1Kw9Xh1YsIK0lnCrCPvWfmOlaas9FmWh9hiLTjawJzQLZa-cWBoKw6yP0O7YZaM=w16383",
null,
"https://lh5.googleusercontent.com/7mvV_MzEwWOVLKRgLZmDNK5c6iEpPZ4ysHaE-Zz4Lif1ri2Aoa1ejrqXQdAzq98gyPHZ9jnNKvIsZpNblpW6Wec=w16383",
null,
"https://lh5.googleusercontent.com/Z_N-jYrXSxMAZg0c_ulOOKGRJsYBoXTosR-ji2XJrnbxqzAL8f9LcEs1RRy5N1eIjZ_wLaSv_rNXEwOKXZo3sRU=w16383",
null,
"https://lh6.googleusercontent.com/myk_L2T6hulZuBxBTKVhdiQinY5ejumsgFuoo2ZK4EwOCc1JotpWqqxBMpRnHPQJEZETpJBfbMXjdXjNRZ46iyY=w16383",
null,
"https://lh4.googleusercontent.com/lV9Zwtzqn9KnDIN916nxk7KbYp2ijDo7SAWZIkwtd0IsyIs45yYAHbfECVwDG0ElHmEGWxAtoT3SRck_N14Wmq2Hg8nZsf0zMM05vz0aancrLJIxVscWjcYbgw97DKjvkA=w1280",
null,
"https://lh6.googleusercontent.com/4-okX-9Nq8Ch6qEmfOuZGVm_ZLnYiTT19rAWDnOwE7zKIwZLmQmKm-EgpV7igdRryNL4sUEvXo7Fpb9FYLNp8wM=w16383",
null,
"https://lh3.googleusercontent.com/n6WwhSusemGasqy--v6WaAF6SEn_sq7SBCofLSuhk6dlqHJ_9JsXIkUgZVNDDU0QZ4dUcO_3n5T1zmCH8j_QuEY=w16383",
null,
"https://lh5.googleusercontent.com/0cDcZYUfFBMGkXoorH-WMSXv3eiT1qrCoujOCdDjmmieKILeQopShWOf-xrOZiPBVofUp41uOS5jJkDhkX9show=w16383",
null,
"https://lh4.googleusercontent.com/lV9Zwtzqn9KnDIN916nxk7KbYp2ijDo7SAWZIkwtd0IsyIs45yYAHbfECVwDG0ElHmEGWxAtoT3SRck_N14Wmq2Hg8nZsf0zMM05vz0aancrLJIxVscWjcYbgw97DKjvkA=w1280",
null,
"https://lh5.googleusercontent.com/kCAYkBaAZH_HjOEAaJc8gLKyhsBWwrgrewgmgpw2T8YJ4sUZN9dEowO40GFdXMv3lG-ScBv23XT32iw-_7vBsPM=w16383",
null,
"https://lh6.googleusercontent.com/hSB3n9-5XaDAWs5KMC_ldeAJZbFzmPRGoZGWjE7PJSRMbi-pV3eFuOBNruHy09t5rkn5f9ZOJm80EKz3bgAoreY=w16383",
null,
"https://lh6.googleusercontent.com/GSMZEs46Ej_O5JeJ19MJwO9fntpol31gDXGhgW1DapHeMpmLvXO128h2WR1iLYnYh5ZnAzhLGPWvTGSxDfk7caE=w16383",
null,
"https://lh6.googleusercontent.com/3OgXimLk4-1mLvkL83d9-vXt8lZQokjn5ZP1U97YmEEyraY505JbSRq2bu3kS5xJuZxYFXNjwwq_-9nYGELOKRg=w16383",
null,
"https://lh4.googleusercontent.com/lV9Zwtzqn9KnDIN916nxk7KbYp2ijDo7SAWZIkwtd0IsyIs45yYAHbfECVwDG0ElHmEGWxAtoT3SRck_N14Wmq2Hg8nZsf0zMM05vz0aancrLJIxVscWjcYbgw97DKjvkA=w1280",
null,
"https://lh4.googleusercontent.com/j92VKZljTgD4RAJwqSFdqUThGS6u2IIjb_sHqHCSergswxJj3Y34Quf561e7AbRgm95VCPZPnEfX15-Eo-lzb0Y=w16383",
null,
"https://lh5.googleusercontent.com/o9MastFJNjTSGBhiniAe-vYDPTa_KV8MUstOqcUOkK3TvIrAt2mHDP2y1M-2dwdjvnsO8G0cJfyBA8OkO2mI2nU=w16383",
null,
"https://lh6.googleusercontent.com/GVr61KAStiSE5Ls1cSx8BZ2_XwRcnjysS8hVIYJfgdnw5ZOONQW_jE7blRAw23iNRFI7GpCOhqzgaeS-uHyC8r8=w16383",
null,
"https://lh4.googleusercontent.com/lV9Zwtzqn9KnDIN916nxk7KbYp2ijDo7SAWZIkwtd0IsyIs45yYAHbfECVwDG0ElHmEGWxAtoT3SRck_N14Wmq2Hg8nZsf0zMM05vz0aancrLJIxVscWjcYbgw97DKjvkA=w1280",
null,
"https://lh6.googleusercontent.com/-Co7wqf1jbGueNcaB8HOYe2HBlBv1Mm4aR59GUVuZ4dmcVGfkT_ZcpBQxWpsFUpSPxpKMCjIxpDpvfbIwOw=w16383",
null,
"https://lh3.googleusercontent.com/Sd6I6oAQ6z_QR7RQBQVnl6p2Ldx5S0AUrffsqlfNRI891gOSIJiIkrsCI62x_gadAkiknDnAcO-VfmdrkbtlDrw=w16383",
null,
"https://lh4.googleusercontent.com/BggW62yVEb1jV5PVNFqZeSRlfuhdf8Ew_nB9NBo9W_Z1EYx53rDpuxrAayJUv2CW79Pxrn8EftWCjw2mDSX-N_U=w16383",
null,
"https://lh3.googleusercontent.com/Ybg_fOlKF0hQQm3KZvezKKtniMh80YLmemBsgVf2iYVfW1LDVjxPbNE_UjlSoZ0tinurDjTKafXre7z-RQjvZg=w16383",
null,
"https://lh3.googleusercontent.com/oCmnqINoiI-FwlV_pP0t5pBxjZDBLQuOT4YkXGtTGuaLD5sAqm80f_qpACtGj_rlKQweDX6arFrSQtXoNiMtvqU=w16383",
null,
"https://lh4.googleusercontent.com/lV9Zwtzqn9KnDIN916nxk7KbYp2ijDo7SAWZIkwtd0IsyIs45yYAHbfECVwDG0ElHmEGWxAtoT3SRck_N14Wmq2Hg8nZsf0zMM05vz0aancrLJIxVscWjcYbgw97DKjvkA=w1280",
null,
"https://lh4.googleusercontent.com/GcZqUOrsWpLANk1n0Byi1P4JYFRhgTMLtVPeW2_H4pXVcI3AxuN5Qg8nQ3TbDoCgpfnoVo_0vuL6fEzKEBLxbqA=w16383",
null,
"https://lh5.googleusercontent.com/IaINqrrzi_-e_wl6lnRUMcafwhg-WNYXxS90weULufHrXxqBDHRuuBvQ5zCDyEEqHQUbVrKszv4j2QIC54O9H2k=w16383",
null,
"https://lh6.googleusercontent.com/nE5edGIsx2b_0m2FaOLh9ygoZ3qRcSinZ_p14eY-S0umtA4mR0cQ6ll1WANbsgUpbKQDLhkvMebRJ6MMpayKFmA=w16383",
null,
"https://lh4.googleusercontent.com/oqrqR7h8KfIoW0X8OLjwIF_KE_WSDqdDSpwJ4lywYe_5DoZuxX8kY0d5umoq1HihOt5vYfU2vG5PUwpmpWoPJKc=w16383",
null,
"https://lh4.googleusercontent.com/lV9Zwtzqn9KnDIN916nxk7KbYp2ijDo7SAWZIkwtd0IsyIs45yYAHbfECVwDG0ElHmEGWxAtoT3SRck_N14Wmq2Hg8nZsf0zMM05vz0aancrLJIxVscWjcYbgw97DKjvkA=w1280",
null,
"https://lh5.googleusercontent.com/LbHyTV7IbYJrnHCc67dHPuVmNDx-LsCB_SvSE-hDbI_AR36t93rsEzHUHPYQWnRInxjWgZqNWBP1Vg2bBziPqow=w16383",
null,
"https://lh3.googleusercontent.com/dWRQ5Ze3SVZxI868aGp2_SYuRMDEhX2KabD7uPmUFrWPp83ufN_nRchgdiqpKd47UxlpGtjFzcp-g-Vw7FUdHHg=w16383",
null,
"https://lh5.googleusercontent.com/6ePkrkl9LJ15JCouWCQvwJEBuzuqmeXedNIfJ6Qmu_pmFpYTtQxs3iJdSO0aDrqmQSNpi6CdvLR99ag2KcKT2qw=w16383",
null,
"https://lh4.googleusercontent.com/mLOzY1Dp-kz2NpMFdpBxsRC0Cd1w0emZ_ywxhNEeEqRwnjmdYTrPz19vjKF9RRFsJics6lOr-FuT0k_8fkG-RDI=w16383",
null,
"https://lh3.googleusercontent.com/aSptd1KY1JERrckZKECl1zBWSvUpBr3jSc8MOZ4jbCXg0y5pUhMC8ScaHuSpys770wTJ0-B9CbIP3c07XMOE9GE=w16383",
null,
"https://lh4.googleusercontent.com/lV9Zwtzqn9KnDIN916nxk7KbYp2ijDo7SAWZIkwtd0IsyIs45yYAHbfECVwDG0ElHmEGWxAtoT3SRck_N14Wmq2Hg8nZsf0zMM05vz0aancrLJIxVscWjcYbgw97DKjvkA=w1280",
null,
"https://lh5.googleusercontent.com/Qc1CZZ3fyjhiTiJHsSh-ktv3kKdy1YoBNA_DJOGq1WBt_HzaVovjBdgQrWC4dFJFVwmPOJ8Wf77qbcCyvmjkvkU=w16383",
null,
"https://lh3.googleusercontent.com/yxzz7NjTtTn1r_M6cbLHErYYuGlpBtTLbPKNg-yxSoh0pbIB2IKrgYW8y9fATidslLMNnBlb4zt-Jikwoo_5tEs=w16383",
null,
"https://lh6.googleusercontent.com/3Aw9qdXLUfy6QM9M8rOkJsOzVutrordNzJkdzGp0Mc8_RqStBXBJbR-5QmJ-voUKnyKCnExZgVQfjwdEjVAUeg=w16383",
null,
"https://lh4.googleusercontent.com/1B6Kgo4lA7RYDwRpV3xZti5XLhp5B41O8jL8H5g4PPzsuvIuZ7kQOn4-11wAN7peGHQFOEH2FbLJfVkSSyBp9n0=w16383",
null,
"https://lh6.googleusercontent.com/i9IIuxE_2Q_HS4jJsNYP3k7WMZuucs-BNfQUPJ8I6vq5v-14cQte-Q4_SpB__dwKZ6V_9E0hsvXKhPcf8Dsiwt4=w16383",
null,
"https://lh6.googleusercontent.com/6GJyqYeLjUFiU0KnyA8y2ay63bcblRYbk94WfeYxd_EWEuaO4gagm-qV-aDZ7WVB9hhiC9ofiaQE5lZJH0JsUNQ=w16383",
null,
"https://lh6.googleusercontent.com/1hlNORWRx-eP_xqmEVfH2-9qnB-630SP_0iAYeCjZa8VLDpJCjJGDhDx_JIbfmjzWWjFODFmY3JpiFFXYbJSGrR4sqWWCKpe3G7-CfaKQ9ABV_rdpeTs_2JjAB5kW3cSFQ=w1280",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8924141,"math_prob":0.92724955,"size":28346,"snap":"2021-43-2021-49","text_gpt3_token_len":5815,"char_repetition_ratio":0.16452615,"word_repetition_ratio":0.21246849,"special_character_ratio":0.19692373,"punctuation_ratio":0.11666009,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99375194,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102],"im_url_duplicate_count":[null,1,null,1,null,9,null,1,null,1,null,1,null,1,null,1,null,9,null,1,null,1,null,1,null,1,null,9,null,1,null,1,null,1,null,9,null,1,null,1,null,1,null,1,null,9,null,1,null,1,null,1,null,9,null,1,null,1,null,1,null,1,null,1,null,9,null,1,null,1,null,1,null,1,null,9,null,1,null,1,null,1,null,1,null,1,null,9,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-25T05:05:50Z\",\"WARC-Record-ID\":\"<urn:uuid:68602855-56af-4a68-86f1-2ae283b682a9>\",\"Content-Length\":\"599350\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5e7845b6-7964-4f69-a043-e97b94c0ee98>\",\"WARC-Concurrent-To\":\"<urn:uuid:56f9ffee-edfd-4f82-9094-317f5f431013>\",\"WARC-IP-Address\":\"142.251.45.19\",\"WARC-Target-URI\":\"https://www.mathstrength.org/grade-4\",\"WARC-Payload-Digest\":\"sha1:AWPJFMOKQUN7MTMKZUX55MPPQTKUAVTC\",\"WARC-Block-Digest\":\"sha1:XMCY5QRZHB27NCDPRUBI7735XYIXDIDC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323587623.1_warc_CC-MAIN-20211025030510-20211025060510-00005.warc.gz\"}"} |
http://techtalks.tv/events/13/?page=1 | [
"## TechTalks from event: IEEE FOCS 2010\n\n51st Annual IEEE Symposium on Foundations of Computer Science\n\n• Geometric complexity theory (GCT) Authors: Ketan Mulmuley\nGeometric complexity theory (GCT) is an approach towards the fundamental lower bound problems of complexity theory through algebraic geometry and representation theory. This talk will give a brief introduction to GCT without assuming any background in algebraic geometry or representation theory.\n• How to Grow Your Lower Bounds Authors: Mihai Patrascu\nI will survey the state of the art in proving hardness for data structure problems in the cell-probe model of computation.\n• How To Think About Mechanism Design Authors: Tim Roughgarden, Stanford University\nMechanism design studies optimization problems where the underlying data --- such as the value of a good or the cost of performing a task --- is initially unknown to the algorithm designer. Auction settings are canonical examples, where the private data is the willingness to pay the bidders for the goods on sale, and the optimization problem is to allocate the goods to maximize some objective, such as revenue or overall value to society. A \"mechanism\" is a protocol that interacts with participants and determines a solution to the underlying optimization problem. We first explain why designing mechanisms with good game-theoretic properties boils down to algorithm design in a certain \"constrained computational model.\" We differentiate between single-parameter problems, where this computational model is well understood, and multi-parameter problems, where it is not. We then study two specific problem domains: revenue-maximizing auctions, and the recent idea of analyzing auctions via a \"Bayesian thought experiment\"; and welfare-maximizing mechanisms, with an emphasis on black-box reductions that transmute approximation algorithms into truthful approximation mechanisms.\n• Constructive Algorithms for Discrepancy Minimization Authors: Nikhil Bansal\nGiven a set system (V,S), V=[n] and S={S_1,ldots,S_m}, the minimum discrepancy problem is to find a 2-coloring X:V -> {-1,+1}, such that each set is colored as evenly as possible, i.e. find X to minimize max_{j in [m]} |sum_{i in S_j} X(i)|. In this paper we give the first polynomial time algorithms for discrepancy minimization, that achieve bounds similar to those known existentially using the so-called Entropy Method. We also give a first approximation-like result for discrepancy. Specifically we give efficient randomized algorithms to: 1. Construct an \\$O(sqrt{n})\\$ discrepancy coloring for general sets systems when \\$m=O(n)\\$, matching the celebrated result of Spencer up to constant factors. Previously, no algorithmic guarantee better than the random coloring bound, i.e. O((n log n)^{1/2}), was known. More generally, for \\$mgeq n\\$, we obtain a discrepancy bound of O(n^{1/2} log (2m/n)). 2. Construct a coloring with discrepancy \\$O(t^{1/2} log n)\\$, if each element lies in at most \\$t\\$ sets. This matches the (non-constructive) result of Srinivasan cite{Sr}. 3. Construct a coloring with discrepancy O(lambdalog(nm)), where lambda is the hereditary discrepancy of the set system. In particular, this implies a logarithmic approximation for hereditary discrepancy. 
The main idea in our algorithms is to gradually produce a coloring by solving a sequence of semidefinite programs, while using the entropy method to guide the choice of the semidefinite program at each stage.\n• Bounded Independence Fools Degree-2 Threshold Functions Authors: Ilias Diakonikolas and Daniel M. Kane and Jelani Nelson\nLet x be a random vector coming from any k-wise independent distribution over {-1,1}^n. For an n-variate degree-2 polynomial p, we prove that E[sgn(p(x))] is determined up to an additive eps for k = poly(1/eps). This gives a large class of explicit pseudo-random generators against such functions and answers an open question of Diakonikolas et al. (FOCS 2009).\n\nIn the process, we develop a novel analytic technique we dub multivariate FT-mollification. This provides a generic tool to approximate bounded (multivariate) functions by low-degree polynomials (with respect to several different notions of approximation). A univariate version of the method was introduced by Kane et al. (SODA 2010) in the context of streaming algorithms. In this work, we refine it and generalize it to the multivariate setting. We believe that our technique is of independent mathematical interest. To illustrate its generality, we note that it implies a multidimensional generalization of Jackson's classical result in approximation theory due to (Ganzburg 1979).\n\nTo obtain our main result, we combine the FT-mollification technique with several linear algebraic and probabilistic tools. These include the invariance principle of of Mossell, O'Donnell and Oleszkiewicz, anti-concentration bounds for low-degree polynomials, an appropriate decomposition of degree-2 polynomials, and a generalized hyper-contractive inequality for quadratic forms which takes the operator norm of the associated matrix into account. Our analysis is quite modular; it readily adapts to show that intersections of halfspaces and degree-2 threshold functions are fooled by bounded independence. From this it follows that Omega(1/eps^2)-wise independence derandomizes the Goemans-Williamson hyperplane rounding scheme.\n\nOur techniques unify, simplify, and in some cases improve several recent results in the literature concerning threshold functions. For the case of ``regular'' halfspaces we give a simple proof of an optimal independence bound of Th\n\n• The Coin Problem, and Pseudorandomness for Branching Programs / Pseudorandom Generators for Regular Branching Programs Authors: Joshua Brody and Elad Verbin / Mark Braverman and Anup Rao and Ran Raz and Amir Yehudayoff\nThe emph{Coin Problem} is the following problem: a coin is given, which lands on head with probability either \\$1/2 + \beta\\$ or \\$1/2 - \beta\\$. We are given the outcome of \\$n\\$ independent tosses of this coin, and the goal is to guess which way the coin is biased, and to be correct with probability \\$ge 2/3\\$. When our computational model is unrestricted, the majority function is optimal, and succeeds when \\$\beta ge c /sqrt{n}\\$ for a large enough constant \\$c\\$. The coin problem is open and interesting in models that cannot compute the majority function.\n\nIn this paper we study the coin problem in the model of emph{read-once width-\\$w\\$ branching programs}. We prove that in order to succeed in this model, \\$\beta\\$ must be at least \\$1/ (log n)^{Theta(w)}\\$. 
For constant \\$w\\$ this is tight by considering the recursive tribes function.\n\nWe generalize this to a emph{Dice Problem}, where instead of independent tosses of a coin we are given independent tosses of one of two \\$m\\$-sided dice. We prove that if the distributions are too close, then the dice cannot be distinguished by a small-width read-once branching program.\n\nWe suggest one application for this kind of theorems: we prove that Nisan's Generator fools width-\\$w\\$ read-once emph{permutation} branching programs, using seed length \\$O(w^4 log n log log n + log n log (1/eps))\\$. For \\$w=eps=Theta(1)\\$, this seedlength is \\$O(log n log log n)\\$. The coin theorem and its relatives might have other connections to PRGs. This application is related to the independent, but chronologically-earlier, work of Braverman, Rao, Raz and Yehudayoff (which might be submitted to this FOCS).\n\n• Settling the Polynomial Learnability of Mixtures of Gaussians Authors: Ankur Moitra and Gregory Valiant\nGiven data drawn from a mixture of multivariate Gaussians, a basic problem is to accurately estimate the mixture parameters. We give an algorithm for this problem that has a running time, and data requirement polynomial in the dimension and the inverse of the desired accuracy, with provably minimal assumptions on the Gaussians. As simple consequences of our learning algorithm, we can perform near-optimal clustering of the sample points and density estimation for mixtures of \\$k\\$ Gaussians, efficiently.\n\nThe building blocks of our algorithm are based on the work (Kalai emph{et al}, STOC 2010)~cite{2Gs} that gives an efficient algorithm for learning mixtures of two Gaussians by considering a series of projections down to one dimension, and applying the emph{method of moments} to each univariate projection. A major technical hurdle in~cite{2Gs} is showing that one can efficiently learn emph{univariate} mixtures of two Gaussians. In contrast, because pathological scenarios can arise when considering univariate projections of mixtures of more than two Gaussians, the bulk of the work in this paper concerns how to leverage an algorithm for learning univariate mixtures (of many Gaussians) to yield an efficient algorithm for learning in high dimensions. Our algorithm employs emph{hierarchical clustering} and rescaling, together with delicate methods for backtracking and recovering from failures that can occur in our univariate algorithm.\n\nFinally, while the running time and data requirements of our algorithm depend exponentially on the number of Gaussians in the mixture, we prove that such a dependence in necessary.\n\n• Polynomial Learning of Distribution Families Authors: Mikhail Belkin and Kaushik Sinha\nThe question of polynomial learnability of probability distributions, particularly Gaussian mixture distributions, has recently received significant attention in theoretical computer science and machine learning. However, despite major progress, the general question of polynomial learnability of Gaussian mixture distributions still remained open. The current work resolves the question of polynomial learnability for Gaussian mixtures in high dimension with an arbitrary but fixed number of components.\n\nThe result for Gaussian distributions relies on a very general result of independent interest on learning parameters of distributions belonging to what we call {it polynomial families}. 
These families are characterized by their moments being polynomial of parameters and, perhaps surprisingly, include almost all common probability distributions as well as their mixtures and products. Using tools from real algebraic geometry, we show that parameters of any distribution belonging to such a family can be learned in polynomial time.\n\nTo estimate parameters of a Gaussian mixture distribution the general results on polynomial families are combined with a certain deterministic dimensionality reduction allowing learning a high-dimensional mixture to be reduced to a polynomial number of parameter estimation problems in low dimension.\n\n• Agnostically learning under permutation invariant distributions Authors: Karl Wimmer\nWe generalize algorithms from computational learning theory that are successful under the uniform distribution on the Boolean hypercube \\${0,1}^n\\$ to algorithms successful on permutation invariant distributions, distributions where the probability mass remains constant upon permutations in the instances. While the tools in our generalization mimic those used for the Boolean hypercube, the fact that permutation invariant distributions are not product distributions presents a significant obstacle.\n\nUnder the uniform distribution, halfspaces can be agnostically learned in polynomial time for constant \\$eps\\$. The main tools used are a theorem of Peres~cite{Peres04} bounding the {it noise sensitivity} of a halfspace, a result of~cite{KOS04} that this theorem this implies Fourier concentration, and a modification of the Low-Degree algorithm of Linial, Mansour, Nisan~cite{LMN:93} made by Kalai et. al.~cite{KKMS08}. These results are extended to arbitrary product distributions in~cite{BOWi08}.\n\nWe prove analogous results for permutation invariant distributions; more generally, we work in the domain of the symmetric group. We define noise sensitivity in this setting, and show that noise sensitivity has a nice combinatorial interpretation in terms of Young tableaux. The main technical innovations involve techniques from the representation theory of the symmetric group, especially the combinatorics of Young tableaux. We show that low noise sensitivity implies concentration on ``simple'' components of the Fourier spectrum, and that this fact will allow us to agnostically learn halfspaces under permutation invariant distributions to constant accuracy in roughly the same time as in the uniform distribution over the Boolean hypercube case.\n\n• Learning Convex Concepts from Gaussian Distributions with PCA Authors: Santosh Vempala\nWe present a new algorithm for learning a convex set in \\$R^n\\$ given labeled examples drawn from any Gaussian distribution. The efficiency of the algorithm depends on the dimension of the {em normal subspace}, the span of vectors orthogonal to supporting hyperplanes of the convex set. The key step of the algorithm uses a Singular Value Decomposition (SVD) to approximate the relevant normal subspace. The complexity of the resulting algorithm is \\$poly(n)2^{ ilde{O}(k)}\\$ for an arbitrary convex set with normal subspace of dimension \\$k\\$. For the important special case of the intersection of \\$k\\$ halfspaces, the complexity is [ poly(n,k,1/eps) + n cdot min , k^{O(log k/eps^4)}, (k/eps)^{O(klog (1/eps))} ] to learn a hypothesis that correctly classifies \\$1-eps\\$ of the unknown Gaussian distribution. This improves substantially on existing algorithms and is the first algorithm to achieve a fixed polynomial dependence on \\$n\\$. 
The proof of our main result is based on a monotonicity property of Gaussian space."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8510702,"math_prob":0.9573388,"size":17901,"snap":"2019-26-2019-30","text_gpt3_token_len":3762,"char_repetition_ratio":0.12471364,"word_repetition_ratio":0.024806201,"special_character_ratio":0.18708453,"punctuation_ratio":0.07083043,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99331903,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-16T16:47:59Z\",\"WARC-Record-ID\":\"<urn:uuid:73a160af-4226-4403-8b15-b23e6a0e6c6c>\",\"Content-Length\":\"79414\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:72105938-d6bc-4cb9-998c-3aabcfd47152>\",\"WARC-Concurrent-To\":\"<urn:uuid:c9d7d0d1-928f-4e7f-9d98-40d5434e5adb>\",\"WARC-IP-Address\":\"173.230.131.71\",\"WARC-Target-URI\":\"http://techtalks.tv/events/13/?page=1\",\"WARC-Payload-Digest\":\"sha1:76RPHQPRSDNB4745OZ6JAX3MSUU5BQFH\",\"WARC-Block-Digest\":\"sha1:AOMZ7SCWLOHYCBWGW6RIPJPPOF4W6OZX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195524679.39_warc_CC-MAIN-20190716160315-20190716182315-00348.warc.gz\"}"} |
https://calculomates.com/en/divisors/of/2629/ | [
"# Divisors of 2629\n\n## Divisors of 2629\n\nThe list of all positive divisors (that is, the list of all integers that divide 22) is as follows :\n\nAccordingly:\n\n2629 is multiplo of 1\n\n2629 is multiplo of 11\n\n2629 is multiplo of 239\n\n2629 has 3 positive divisors\n\n## Parity of 2629\n\n2629is an odd number,as it is not divisible by 2\n\n## The factors for 2629\n\nThe factors for 2629 are all the numbers between -2629 and 2629 , which divide 2629 without leaving any remainder. Since 2629 divided by -2629 is an integer, -2629 is a factor of 2629 .\n\nSince 2629 divided by -2629 is a whole number, -2629 is a factor of 2629\n\nSince 2629 divided by -239 is a whole number, -239 is a factor of 2629\n\nSince 2629 divided by -11 is a whole number, -11 is a factor of 2629\n\nSince 2629 divided by -1 is a whole number, -1 is a factor of 2629\n\nSince 2629 divided by 1 is a whole number, 1 is a factor of 2629\n\nSince 2629 divided by 11 is a whole number, 11 is a factor of 2629\n\nSince 2629 divided by 239 is a whole number, 239 is a factor of 2629\n\n## What are the multiples of 2629?\n\nMultiples of 2629 are all integers divisible by 2629 , i.e. the remainder of the full division by 2629 is zero. There are infinite multiples of 2629. The smallest multiples of 2629 are:\n\n0 : in fact, 0 is divisible by any integer, so it is also a multiple of 2629 since 0 × 2629 = 0\n\n2629 : in fact, 2629 is a multiple of itself, since 2629 is divisible by 2629 (it was 2629 / 2629 = 1, so the rest of this division is zero)\n\n5258: in fact, 5258 = 2629 × 2\n\n7887: in fact, 7887 = 2629 × 3\n\n10516: in fact, 10516 = 2629 × 4\n\n13145: in fact, 13145 = 2629 × 5\n\netc.\n\n## Is 2629 a prime number?\n\nIt is possible to determine using mathematical techniques whether an integer is prime or not.\n\nfor 2629, the answer is: No, 2629 is not a prime number.\n\n## How do you determine if a number is prime?\n\nTo know the primality of an integer, we can use several algorithms. The most naive is to try all divisors below the number you want to know if it is prime (in our case 2629). We can already eliminate even numbers bigger than 2 (then 4 , 6 , 8 ...). Besides, we can stop at the square root of the number in question (here 51.274 ). Historically, the Eratosthenes screen (which dates back to Antiquity) uses this technique relatively effectively.\n\nMore modern techniques include the Atkin screen, probabilistic tests, or the cyclotomic test."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9168329,"math_prob":0.9876593,"size":2120,"snap":"2021-04-2021-17","text_gpt3_token_len":642,"char_repetition_ratio":0.20274103,"word_repetition_ratio":0.09178744,"special_character_ratio":0.37216982,"punctuation_ratio":0.13822894,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9992803,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-27T22:26:45Z\",\"WARC-Record-ID\":\"<urn:uuid:2e4b0209-006a-4358-9fa8-62620e978c31>\",\"Content-Length\":\"16159\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d3baf991-3eec-4dad-adef-791f61f97d26>\",\"WARC-Concurrent-To\":\"<urn:uuid:3017ca2f-04ce-467f-919a-4257ca132d41>\",\"WARC-IP-Address\":\"104.21.88.17\",\"WARC-Target-URI\":\"https://calculomates.com/en/divisors/of/2629/\",\"WARC-Payload-Digest\":\"sha1:BWEIXWRYQBZQUOXE2FUVVKFCZRQQ35P4\",\"WARC-Block-Digest\":\"sha1:F3OK6SS2XD325GNECHG6Z77Y5B4V5AK7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610704833804.93_warc_CC-MAIN-20210127214413-20210128004413-00577.warc.gz\"}"} |
https://projecteuler.net/problem=502 | [
"",
null,
"## Counting Castles",
null,
"Published on Saturday, 7th February 2015, 04:00 pm; Solved by 219;\nDifficulty rating: 100%\n\n### Problem 502\n\nWe define a block to be a rectangle with a height of 1 and an integer-valued length. Let a castle be a configuration of stacked blocks.\n\nGiven a game grid that is w units wide and h units tall, a castle is generated according to the following rules:\n\n1. Blocks can be placed on top of other blocks as long as nothing sticks out past the edges or hangs out over open space.\n2. All blocks are aligned/snapped to the grid.\n3. Any two neighboring blocks on the same row have at least one unit of space between them.\n4. The bottom row is occupied by a block of length w.\n5. The maximum achieved height of the entire castle is exactly h.\n6. The castle is made from an even number of blocks.\n\nThe following is a sample castle for w=8 and h=5:",
null,
"Let F(w,h) represent the number of valid castles, given grid parameters w and h.\n\nFor example, F(4,2) = 10, F(13,10) = 3729050610636, F(10,13) = 37959702514, and F(100,100) mod 1 000 000 007 = 841913936.\n\nFind (F(1012,100) + F(10000,10000) + F(100,1012)) mod 1 000 000 007."
] | [
null,
"https://projecteuler.net/images/print_page_logo.png",
null,
"https://projecteuler.net/images/icon_info.png",
null,
"https://projecteuler.net/project/images/p502_castles.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8612652,"math_prob":0.9797678,"size":1027,"snap":"2019-51-2020-05","text_gpt3_token_len":291,"char_repetition_ratio":0.1202346,"word_repetition_ratio":0.0,"special_character_ratio":0.34469327,"punctuation_ratio":0.118421055,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9775317,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-12T09:49:49Z\",\"WARC-Record-ID\":\"<urn:uuid:5c1ce188-7c4a-48b4-93ce-7b8248646400>\",\"Content-Length\":\"6012\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cb556af4-00e6-4686-9536-c7250d66032b>\",\"WARC-Concurrent-To\":\"<urn:uuid:b01b35ef-5387-4a29-b730-f6e4b535b8fc>\",\"WARC-IP-Address\":\"31.170.122.77\",\"WARC-Target-URI\":\"https://projecteuler.net/problem=502\",\"WARC-Payload-Digest\":\"sha1:TSKCI6Q4OYXSARQWJJRUAEYA4XB24O7A\",\"WARC-Block-Digest\":\"sha1:HBZDBXB725QJEKKKT3GB2HR3JHDMMM3O\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540542644.69_warc_CC-MAIN-20191212074623-20191212102623-00409.warc.gz\"}"} |
https://reason.town/neural-network-from-scratch-pytorch/ | [
"# How to Build a Neural Network from Scratch with Pytorch\n\nThis blog post will teach you how to build a neural network from scratch with Pytorch. We’ll cover everything from loading data to training and evaluating the network.\n\nCheckout this video:\n\n## Introduction\n\nNeural networks are a powerful machine learning tool that is used for both supervised and unsupervised learning tasks. In this tutorial, we will learn how to build a neural network from scratch using Pytorch, a popular deep learning framework. We will also learn how to train and evaluate our neural network. By the end of this tutorial, you will be able to build your own neural networks and use them for various tasks.\n\n## What is a neural network?\n\nA neural network is a series of layers, or nodes, that process information in a way that is similar to the brain. Each node is connected to several others, and they work together to solve complex problems. Neural networks are used in a variety of fields, including image recognition, natural language processing, and predictive analytics.\n\nBuilding a neural network from scratch can be a daunting task, but Pytorch makes it easy with its pre-built functions and classes. In this tutorial, we’ll show you how to build a simple neural network with Pytorch.\n\nFirst, we need to import the torch library. This will give us access to all of the functions we need to build our network.\n\nNext, we’ll create our first layer. This is called the input layer, and it will be responsible for taking in our data and feeding it into the rest of the network. We can create an input layer with the following code:\n\n## How do neural networks work?\n\nNeural networks are computer systems that are modeled after the human brain. These systems are able to learn and recognize patterns. Neural networks are composed of a series of layers. The first layer is the input layer. The second layer is the hidden layer. The hidden layer is made up of neurons. The third layer is the output layer.\n\nNeural networks are trained by providing them with a set of training data. This data is fed into the input layer. The hidden layer then learns to recognize patterns in the data. The outputlayer produces the results of the neural network.\n\nPytorch is a deep learning framework that provides a way to implement neural networks on a variety of devices including GPUs and CPUs. Pytorch allows for easy and flexible implementation of neural networks.\n\n## What is Pytorch?\n\nPytorch is a Python-based deep learning framework that enables you to build neural networks from scratch. It offers an easy-to-use API and can be used for both research and production. In this tutorial, you will learn how to build a neural network from scratch using Pytorch.\n\n## Why use Pytorch to build a neural network from scratch?\n\nPytorch is a powerful python library that makes it easy to build complex neural networks from scratch. In this tutorial, we will use Pytorch to build a simple neural network from scratch. We will also discuss some of the advantages of using Pytorch over other libraries like Tensorflow. By the end of this tutorial, you will be able to build and train your own neural networks with Pytorch.\n\n## What are the steps to build a neural network from scratch with Pytorch?\n\nThere are a few steps to building a neural network from scratch with Pytorch. First, you need to define the network architecture, which includes the number of layers and the number of neurons in each layer. 
Next, you need to initialize the weights and biases for each neuron. After that, you need to define the forward propagation function, which takes in an input and produces an output. Finally, you need to define the backpropagation function, which calculates the gradient of the error with respect to the weights and biases.\n\n## Conclusion\n\nIn this tutorial, we’ve seen how to build a neural network from scratch using Pytorch. We’ve covered the basic concepts of neural networks, such as layers and weights, and seen how to implement a simple network using only numpy. Then, we used Pytorch to build a more complex network and train it on a dataset. Finally, we saw how to evaluate the network on new data and make predictions.\n\nIf you’re interested in learning more about neural networks, we recommend checking out the following resources:\n\n– [A Complete Beginner’s Guide to Neural Networks](https://tosche.net/neural-networks/)\n– [How to Build a Neural Network from Scratch with Pytorch](https://medium.com/@jamesloyys/how-to-build-a-neural-network-from-scratch-with-pytorch-84f2e75bb2cf)\n– [https://towardsdatascience.com/build-your-first-neural network from scratch with Python and Keras](https://towardsdatascience.com/build-your first neural network from scratch with Python and Keras)\n\n-Aurélien Géron, “Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems”, 2nd Edition, O’Reilly Media, Inc., 2019.\n-“Neural Networks and Deep Learning”, Michael Nielsen, Determination Press, 2015."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.888888,"math_prob":0.61088556,"size":5477,"snap":"2023-14-2023-23","text_gpt3_token_len":1163,"char_repetition_ratio":0.19568792,"word_repetition_ratio":0.110593714,"special_character_ratio":0.19883148,"punctuation_ratio":0.11376673,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9688573,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-20T19:16:16Z\",\"WARC-Record-ID\":\"<urn:uuid:41d12956-0651-4fb7-8aa2-2ce5750d0acc>\",\"Content-Length\":\"154380\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ffe0fa2c-1434-4088-b847-e936f097bec9>\",\"WARC-Concurrent-To\":\"<urn:uuid:9cf79c30-24ef-47ee-8d1c-b13a48653a36>\",\"WARC-IP-Address\":\"104.21.38.119\",\"WARC-Target-URI\":\"https://reason.town/neural-network-from-scratch-pytorch/\",\"WARC-Payload-Digest\":\"sha1:3CS7BCDW2OVWVTOP3K7DK3INX744CQPP\",\"WARC-Block-Digest\":\"sha1:NS4CJBPXCWVVBBGWBVCSG6KXHVAJUD4I\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296943555.25_warc_CC-MAIN-20230320175948-20230320205948-00104.warc.gz\"}"} |
https://dontgetserious.com/step-by-step-strategy-for-solving-word-problems/ | [
"# Step by Step Strategy for Solving Word Problems\n\n63\n\nSolving word problems can be trickier and more intimidating than solving a math equation. Most students are comfortable working with given numbers, but the simple addition of reading is enough to send a shiver down the spines of most students. However, solving even the most challenging word problems is not difficult provided that you comprehend the mathematical concept behind the question.\n\n## Strategies Used to Solve word problems",
null,
"The following steps can help equip students with the tools necessary to help them become confident in solving the mystery of word problems.\n\n## 1) Understanding the problem\n\n”A recent Adobe Education Exchange article said that the first step to successfully solve word problems is to read the question carefully”. This helps you understand and figure out the scenario of the problem. Be careful not to jump to any conclusion about solving it until you have a full understanding of the problem. The word problem will provide you with all the necessary information that you need to find a solution. Once you have the bigger picture, it’s time to identify the problem and come up with the measurements for finding the answer.\n\n## 2) Gather information\n\nSometimes curriculum writers add extra information that is not necessary for finding the answer to a math problem. The additional details are meant to train the students to ignore the unnecessary information and stay focused on finding the real problem. You will have to filter through the question to identify the details needed to solve that particular problem. Every situation requires different formats but using a visual representation makes it easier. You can use a list, table, or chart to note down the information given and leave blanks for the information not provided.\n\n## 3) Create the equation\n\nIn this step, you will need to identify the keywords. You can use a pencil to underline or circle the phrases that tell you what you need to find. These terms include words like sum, addition, more than, increased, which all mean to add. Once you know what you need to solve, you can determine the formula, equation, and steps you need to use to come up with the correct answer. The step also helps reinforce thinking to identify the information you don’t need.\n\n## 4) Solve the problem\n\nUsing the information provided and the formula, you can now key in the values to develop an equation that will help you solve the problem to get the unknown variable. Always be keen as you do your calculation to avoid mistakes and ensure that you are using the correct order of operations.\n\nOnce done with the computation, review your answer to ensure that it’s accurate. Using the information given and you can test it to see if it’s within the expected margin of the result. If it is outrageous or unreasonable, review your steps and calculation for errors. Going through the problem carefully will help you figure out where you made a mistake.\n\n## 6) Practice word problems often\n\nThe practice is the key to mastering word problem-solving. When you practice word problems, they will often become easier to solve as you notice similarities and patterns in solving them. These help you gain confidence even when you are dealing with challenging word problems.\n\n## Conclusion\n\nWhile the level of difficulty varies, the above steps are the basic planned approach that can help you answer word problems.\n\n63"
] | [
null,
"https://dontgetserious.com/wp-content/uploads/2020/11/Solve-word-problems.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.92251337,"math_prob":0.7836634,"size":3370,"snap":"2021-31-2021-39","text_gpt3_token_len":636,"char_repetition_ratio":0.1553773,"word_repetition_ratio":0.0,"special_character_ratio":0.18664688,"punctuation_ratio":0.0762987,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9595149,"pos_list":[0,1,2],"im_url_duplicate_count":[null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-27T09:54:11Z\",\"WARC-Record-ID\":\"<urn:uuid:cff49b6d-7457-470e-9e7d-13742be91831>\",\"Content-Length\":\"90914\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bdc69e9b-d1de-4f6e-baa7-a1e81b6a5bbe>\",\"WARC-Concurrent-To\":\"<urn:uuid:39e74a72-0a2f-4211-a8d8-f0f5734e1f6f>\",\"WARC-IP-Address\":\"104.21.22.225\",\"WARC-Target-URI\":\"https://dontgetserious.com/step-by-step-strategy-for-solving-word-problems/\",\"WARC-Payload-Digest\":\"sha1:QZE7V5DBUDTIFBWZEDPYY7O5Y4GI5ZMU\",\"WARC-Block-Digest\":\"sha1:3OEKAV4CY7F4NAB362PVM7GA2FY5FCEQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780058415.93_warc_CC-MAIN-20210927090448-20210927120448-00536.warc.gz\"}"} |
https://cellrank.readthedocs.io/en/latest/pancreas_basic.html | [
"# Pancreas Basics¶\n\nThis tutorial shows how to apply CellRank in order to infer initial or terminal states of a developmental process and how to compute probabilistic fate mappings. The first part of this tutorial is very similar to scVelo’s tutorial on pancreatic endocrinogenesis. The data we use here comes from Bastidas-Ponce et al. (2018). For more info on scVelo, see the documentation or read the article.\n\nThis is the high level mode to interact with CellRank, which is quick and simple but does not offer as many options as the lower level mode, in which you interact directly with our kernels and estimators. If you would like to get to know this more advanced way of interacting with CellRank, see the pancreas advanced tutorial.\n\n## Import packages & data¶\n\nEasiest way to start is to download Miniconda3 along with the environment file found here. To create the environment, run conda create -f environment.yml.\n\n:\n\nimport scvelo as scv\nimport scanpy as sc\nimport cellrank as cr\nimport numpy as np\n\nscv.settings.verbosity = 3\nscv.settings.set_figure_params('scvelo')\ncr.settings.verbosity = 2\n\n\nFirst, we need to get the data. The following commands will download the adata object and save it under datasets/endocrinogenesis_day15.5.h5ad.\n\n:\n\nadata = cr.datasets.pancreas()\n\nAbundance of ['spliced', 'unspliced']: [0.81 0.19]\n\n:\n\nAnnData object with n_obs × n_vars = 2531 × 27998\nobs: 'day', 'proliferation', 'G2M_score', 'S_score', 'phase', 'clusters_coarse', 'clusters', 'clusters_fine', 'louvain_Alpha', 'louvain_Beta', 'palantir_pseudotime'\nvar: 'highly_variable_genes'\nuns: 'clusters_colors', 'clusters_fine_colors', 'day_colors', 'louvain_Alpha_colors', 'louvain_Beta_colors', 'neighbors', 'pca'\nobsm: 'X_pca', 'X_umap'\nlayers: 'spliced', 'unspliced'\nobsp: 'connectivities', 'distances'\n\n\n## Pre-process the data¶\n\nFilter out genes which don’t have enough spliced/unspliced counts, normalize and log transform the data and restrict to the top highly variable genes. Further, compute principal components and moments for velocity estimation. These are standard scanpy/scvelo functions, for more information about them, see the scVelo API.\n\n:\n\nscv.pp.filter_and_normalize(adata, min_shared_counts=20, n_top_genes=2000)\n\nFiltered out 22024 genes that are detected 20 counts (shared).\nNormalized count data: X, spliced, unspliced.\nExctracted 2000 highly variable genes.\nLogarithmized X.\ncomputing moments based on connectivities\n'Ms' and 'Mu', moments of un/spliced abundances (adata.layers)\n\n\n## Run scVelo¶\n\nWe will use the dynamical model from scVelo to estimate the velocities. The first step, estimating the parameters of the dynamical model, may take a while (~10min). To make sure we only have to run this once, we developed a caching extension called scachepy. scachepy does not only work for recover_dynamics, but it can cache the output of almost any scanpy or scvelo function. To install it, simply run\n\npip install git+https://github.com/theislab/scachepy\n\n\nIf you don’t want to install scachepy now, don’t worry, the below cell will run without it as well and this is the only place in this tutorial where we’re using it.\n\n:\n\ntry:\nimport scachepy\nc = scachepy.Cache('../../cached_files/basic_tutorial/')\nexcept ModuleNotFoundError:\nprint(\"You don't seem to have scachepy installed, but that's fine, you just have to be a bit patient (~10min). 
\")\n\nYou don't seem to have scachepy installed, but that's fine, you just have to be a bit patient (~10min).\nrecovering dynamics\n'fit_pars', fitted parameters for splicing dynamics (adata.var)\n\n\nOnce we have the parameters, we can use these to compute the velocities and the velocity graph. The velocity graph is a weighted graph that specifies how likely two cells are to transition into another, given their velocity vectors and relative positions.\n\n:\n\nscv.tl.velocity(adata, mode='dynamical')\n\ncomputing velocities\n'velocity', velocity vectors for each individual cell (adata.layers)\ncomputing velocity graph\n'velocity_graph', sparse matrix with cosine correlations (adata.uns)\n\n:\n\nscv.pl.velocity_embedding_stream(adata, basis='umap', legend_fontsize=12, title='', smooth=.8, min_mass=4)\n\ncomputing velocity embedding",
null,
"## Run CellRank¶\n\nCellRank builds on the velocities computed by scVelo to compute initial and terminal states of the dynamical process. It further uses the velocities to calculate how likely each cell is to develop from each initial state or towards each terminal state.\n\n## Identify terminal states¶\n\nTerminal states can be computed by running the following command:\n\n:\n\ncr.tl.terminal_states(adata, cluster_key='clusters', weight_connectivities=0.2)\n\nComputing transition matrix based on logits using 'deterministic' mode\nEstimating softmax_scale using 'deterministic' mode\nSetting softmax_scale=3.7951\nFinish (0:00:09)\nUsing a connectivity kernel with weight 0.2\nComputing transition matrix based on connectivities\nFinish (0:00:00)\nComputing eigendecomposition of the transition matrix\nAdding .eigendecomposition\nadata.uns['eig_fwd']\nFinish (0:00:00)\nComputing Schur decomposition\nUnable to import PETSc or SLEPc.\nYou can install it from: https://slepc4py.readthedocs.io/en/stable/install.html\nDefaulting to method='brandts'.\nAdding .eigendecomposition\nadata.uns['eig_fwd']\n.schur\n.schur_matrix\nFinish (0:00:09)\nComputing 3 macrostates\nINFO: Using pre-computed schur decomposition\nAdding .macrostates_memberships\n.macrostates\n.schur\n.coarse_T\n.coarse_stationary_distribution\nFinish (0:00:00)\nAdding adata.obs['terminal_states_probs']\nadata.obs['terminal_states']\nadata.obsm['macrostates_fwd']\n.terminal_states_probabilities\n.terminal_states\n\n\nThe most important parameters in the above function are:\n\n• estimator: this determines what’s going to behind the scenes to compute the terminal states. Options are cr.tl.estimators.CFLARE (“Clustering and Filtering of Left and Right Eigenvectors”) or cr.tl.estimators.GPCCA (“Generalized Perron Cluster Cluster Analysis”). The latter is the default, it computes terminal states by coarse graining the velocity-derived Markov chain into a set of macrostates that represent the slow-time scale dynamics of the process, i.e. it finds the states that you are unlikely to leave again, once you have entered them.\n\n• cluster_key: takes a key from adata.obs to retrieve pre-computed cluster labels, i.e. ‘clusters’ or ‘louvain’. These labels are then mapped onto the set of terminal states, to associate a name and a color with each state.\n\n• n_states: number of expected terminal states. This parameter is optional - if it’s not provided, this number is estimated from the so-called ‘eigengap heuristic’ of the spectrum of the transition matrix.\n\n• method: This is only relevant for the estimator GPCCA. It determines the way in which we compute and sort the real Schur decomposition. The default, krylov, is an iterative procedure that works with sparse matrices which allows the method to scale to very large cell numbers. It relies on the libraries SLEPSc and PETSc, which you will have to install separately, see our installation instructions. If your dataset is small (<5k cells), and you don’t want to install these at the moment, use method='brandts'. The results will be the same, the difference is that brandts works with dense matrices and won’t scale to very large cells numbers.\n\n• weight_connectivities: additionally to the velocity-based transition probabilities, we use a transition matrix computed on the basis of transcriptomic similarities to make the algorithm more robust. 
Essentially, we are taking a weighted mean of these two sources of information, where the weight for transcriptomic similarities is defined by weight_connectivities.\n\nWhen running the above command, CellRank adds a key terminal_states to adata.obs and the result can be plotted as:\n\n:\n\ncr.pl.terminal_states(adata)",
null,
"## Identify initial states¶\n\nThe same sort of analysis can now be repeated for the initial states, only that we use the function cr.tl.initial_states this time:\n\n:\n\ncr.tl.initial_states(adata, cluster_key='clusters')\n\nComputing transition matrix based on logits using 'deterministic' mode\nEstimating softmax_scale using 'deterministic' mode\nSetting softmax_scale=3.7951\nFinish (0:00:08)\nUsing a connectivity kernel with weight 0.2\nComputing transition matrix based on connectivities\nFinish (0:00:00)\nComputing eigendecomposition of the transition matrix\nAdding .eigendecomposition\nadata.uns['eig_bwd']\nFinish (0:00:00)\nWARNING: For 1 macrostate, stationary distribution is computed\nAdding .macrostates_memberships\n.macrostates\nFinish (0:00:00)\nAdding adata.obs['initial_states_probs']\nadata.obs['initial_states']\nadata.obsm['macrostates_bwd']\n.terminal_states_probabilities\n.terminal_states",
null,
"We found one initial state, located in the Ngn3 low EP cluster.\n\n## Compute fate maps¶\n\nOnce we know the terminal states, we can compute associated fate maps - for each cell, we ask how likely is the cell to develop towards each of the identified terminal states.\n\n:\n\ncr.tl.lineages(adata)\n\nComputing lineage probabilities towards terminal states\nComputing absorption probabilities\nAdding adata.obsm['to_terminal_states']\nadata.obs['to_terminal_states_dp']\n.absorption_probabilities\n.diff_potential\nFinish (0:00:00)\nAdding lineages to adata.obsm['to_terminal_states']\nFinish (0:00:00)",
null,
"We can aggregate the above into a single, global fate map where we associate each terminal state with color and use the intensity of that color to show the fate of each individual cell:\n\n:\n\ncr.pl.lineages(adata, same_plot=True)",
null,
This shows that the dominant terminal state at E15.5 is Beta, consistent with known biology, see e.g. Bastidas-Ponce et al. (2018).\n\n## Directed PAGA¶\n\nWe can further aggregate the individual fate maps into a cluster-level fate map using an adapted version of PAGA with directed edges. We first compute scVelo's latent time with the CellRank-identified root_key and end_key, which are the probabilities of being an initial or a terminal state, respectively.\n\n:\n\nscv.tl.recover_latent_time(adata, root_key='initial_states_probs', end_key='terminal_states_probs')\n\ncomputing latent time using initial_states_probs, terminal_states_probs as prior\n\n\nNext, we can use the inferred pseudotime along with the initial and terminal states probabilities to compute the directed PAGA.\n\n:\n\nscv.tl.paga(adata, groups='clusters', root_key='initial_states_probs', end_key='terminal_states_probs',\nuse_time_prior='velocity_pseudotime')\n\nrunning PAGA using priors: ['velocity_pseudotime', 'initial_states_probs', 'terminal_states_probs']\n\n:\n\ncr.pl.cluster_fates(adata, mode=\"paga_pie\", cluster_key=\"clusters\", basis='umap',\nlegend_kwargs={'loc': 'top right out'}, legend_loc='top left out',\nnode_size_scale=5, edge_width_scale=1, max_edge_width=4, title='directed PAGA')",
null,
"We use pie charts to show cell fates averaged per cluster. Edges between clusters are given by transcriptomic similarity between the clusters, just as in normal PAGA.\n\nGiven the fate maps/probabilistic trajectories, we can ask interesting questions like:\n\nTo find out more, check out the CellRank API.\n\n## Compute lineage drivers¶\n\nWe can compute the driver genes for all or just the subset of lineages. We can also restric this to some subset of clusters by specifying clusters=... (not shown below).\n\nIn the resulting dataframe, we also see the p-value, the corrected p-value (q-value) and the 95% confidence interval for the correlation statistic.\n\n:\n\ncr.tl.lineage_drivers(adata)\n\nComputing correlations for lineages ['Epsilon' 'Alpha' 'Beta'] restricted to clusters None in layer X with use_raw=False\nAdding .lineage_drivers\nadata.var['to Epsilon corr']\nadata.var['to Alpha corr']\nadata.var['to Beta corr']\nFinish (0:00:00)\n\n:\n\nEpsilon corr Epsilon pval Epsilon qval Epsilon ci low Epsilon ci high Alpha corr Alpha pval Alpha qval Alpha ci low Alpha ci high Beta corr Beta pval Beta qval Beta ci low Beta ci high\nindex\nGhrl 0.802445 0.000000e+00 0.000000e+00 0.788123 0.815898 -0.102534 2.297290e-07 1.708022e-06 -0.140933 -0.063827 -0.409512 4.725654e-106 6.750934e-104 -0.441431 -0.376558\nAnpep 0.456914 7.357066e-136 7.357066e-133 0.425527 0.487202 -0.063882 1.298457e-03 4.477439e-03 -0.102589 -0.024982 -0.228949 1.017801e-31 2.867046e-30 -0.265542 -0.191697\nGm11837 0.449616 6.364047e-131 4.242698e-128 0.417977 0.480167 -0.045986 2.068035e-02 4.854542e-02 -0.084796 -0.007037 -0.238269 2.593631e-34 7.980404e-33 -0.274681 -0.201175\nIrx2 0.399584 1.901441e-100 9.507205e-98 0.366325 0.431823 0.517187 3.359556e-182 3.359556e-179 0.488060 0.545163 -0.640866 0.000000e+00 0.000000e+00 -0.663266 -0.617318\nCcnd2 0.384303 3.214070e-92 1.285628e-89 0.350590 0.417020 0.152544 1.074181e-14 1.732551e-13 0.114262 0.190375 -0.351178 6.074690e-76 4.672839e-74 -0.384874 -0.316547\n... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...\nDlk1 -0.323106 1.064602e-63 1.252473e-61 -0.357567 -0.287767 -0.168021 1.477981e-17 2.869865e-16 -0.205637 -0.129910 0.325836 7.865248e-65 4.494427e-63 0.290563 0.360224\nGng12 -0.342606 4.647161e-72 7.149478e-70 -0.376541 -0.307752 -0.330585 7.894530e-67 9.868163e-65 -0.364847 -0.295428 0.462704 7.128450e-140 2.376150e-137 0.431522 0.492782\nNkx6-1 -0.349302 4.411242e-75 7.352071e-73 -0.383051 -0.314622 -0.318417 8.753797e-62 8.753797e-60 -0.352999 -0.282965 0.457423 3.288706e-136 9.396304e-134 0.426055 0.487693\nNnat -0.357368 7.895266e-79 1.579053e-76 -0.390887 -0.322902 -0.324286 3.462537e-64 4.073572e-62 -0.358716 -0.288976 0.466845 8.497522e-143 3.399009e-140 0.435810 0.496771\nPdx1 -0.370468 3.605169e-85 8.011487e-83 -0.403604 -0.336361 -0.332613 1.076943e-67 1.435924e-65 -0.366821 -0.297507 0.481221 2.648652e-153 1.765768e-150 0.450709 0.510609\n\n2000 rows × 15 columns\n\nAfterwards, we can plot the top 5 driver genes (based on the correlation), e.g. for the Alpha lineage:\n\n:\n\ncr.pl.lineage_drivers(adata, lineage=\"Alpha\", n_genes=5)",
null,
""
] | [
null,
"https://cellrank.readthedocs.io/en/latest/_images/pancreas_basic_15_1.png",
null,
"https://cellrank.readthedocs.io/en/latest/_images/pancreas_basic_22_0.png",
null,
"https://cellrank.readthedocs.io/en/latest/_images/pancreas_basic_25_3.png",
null,
"https://cellrank.readthedocs.io/en/latest/_images/pancreas_basic_29_1.png",
null,
"https://cellrank.readthedocs.io/en/latest/_images/pancreas_basic_31_0.png",
null,
"https://cellrank.readthedocs.io/en/latest/_images/pancreas_basic_38_0.png",
null,
"https://cellrank.readthedocs.io/en/latest/_images/pancreas_basic_44_0.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6925549,"math_prob":0.7475577,"size":17036,"snap":"2021-04-2021-17","text_gpt3_token_len":4856,"char_repetition_ratio":0.12576327,"word_repetition_ratio":0.053114437,"special_character_ratio":0.32102606,"punctuation_ratio":0.21275946,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9629222,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,2,null,2,null,1,null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-26T18:24:27Z\",\"WARC-Record-ID\":\"<urn:uuid:57048424-377c-4568-9036-566b316a3644>\",\"Content-Length\":\"134503\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1fd1596d-043a-42c0-b919-270a9c729f12>\",\"WARC-Concurrent-To\":\"<urn:uuid:2de858aa-f340-4ead-ada5-53e91014e851>\",\"WARC-IP-Address\":\"104.17.32.82\",\"WARC-Target-URI\":\"https://cellrank.readthedocs.io/en/latest/pancreas_basic.html\",\"WARC-Payload-Digest\":\"sha1:4RJIZTGNFJ42GIYWRSKBFHWYFWHOJRTA\",\"WARC-Block-Digest\":\"sha1:HGDQU72VCT3HC4GIUJ7R3ZIFQXKQ3PMJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610704803308.89_warc_CC-MAIN-20210126170854-20210126200854-00796.warc.gz\"}"} |
https://number.academy/132036 | [
"# Number 132036\n\nNumber 132,036 spell 🔊, write in words: one hundred and thirty-two thousand and thirty-six . Ordinal number 132036th is said 🔊 and write: one hundred and thirty-two thousand and thirty-sixth. Color #132036. The meaning of number 132036 in Maths: Is Prime? Factorization and prime factors tree. The square root and cube root of 132036. What is 132036 in computer science, numerology, codes and images, writing and naming in other languages. Other interesting facts related to 132036.\n\n## What is 132,036 in other units\n\nThe decimal (Arabic) number 132036 converted to a Roman number is (C)(X)(X)(X)MMXXXVI. Roman and decimal number conversions.\n\n#### Weight conversion\n\n132036 kilograms (kg) = 291086.6 pounds (lbs)\n132036 pounds (lbs) = 59891.1 kilograms (kg)\n\n#### Length conversion\n\n132036 kilometers (km) equals to 82044 miles (mi).\n132036 miles (mi) equals to 212492 kilometers (km).\n132036 meters (m) equals to 433184 feet (ft).\n132036 feet (ft) equals 40246 meters (m).\n132036 centimeters (cm) equals to 51982.7 inches (in).\n132036 inches (in) equals to 335371.4 centimeters (cm).\n\n#### Temperature conversion\n\n132036° Fahrenheit (°F) equals to 73335.6° Celsius (°C)\n132036° Celsius (°C) equals to 237696.8° Fahrenheit (°F)\n\n#### Time conversion\n\n(hours, minutes, seconds, days, weeks)\n132036 seconds equals to 1 day, 12 hours, 40 minutes, 36 seconds\n132036 minutes equals to 3 months, 1 week, 16 hours, 36 minutes\n\n### Codes and images of the number 132036\n\nNumber 132036 morse code: .---- ...-- ..--- ----- ...-- -....\nSign language for number 132036:",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"Number 132036 in braille:",
null,
"QR code Bar code, type 39",
null,
"",
null,
"Images of the number Image (1) of the number Image (2) of the number",
null,
"",
null,
"More images, other sizes, codes and colors ...\n\n## Share in social networks",
null,
"## Mathematics of no. 132036\n\n### Multiplications\n\n#### Multiplication table of 132036\n\n132036 multiplied by two equals 264072 (132036 x 2 = 264072).\n132036 multiplied by three equals 396108 (132036 x 3 = 396108).\n132036 multiplied by four equals 528144 (132036 x 4 = 528144).\n132036 multiplied by five equals 660180 (132036 x 5 = 660180).\n132036 multiplied by six equals 792216 (132036 x 6 = 792216).\n132036 multiplied by seven equals 924252 (132036 x 7 = 924252).\n132036 multiplied by eight equals 1056288 (132036 x 8 = 1056288).\n132036 multiplied by nine equals 1188324 (132036 x 9 = 1188324).\nshow multiplications by 6, 7, 8, 9 ...\n\n### Fractions: decimal fraction and common fraction\n\n#### Fraction table of 132036\n\nHalf of 132036 is 66018 (132036 / 2 = 66018).\nOne third of 132036 is 44012 (132036 / 3 = 44012).\nOne quarter of 132036 is 33009 (132036 / 4 = 33009).\nOne fifth of 132036 is 26407,2 (132036 / 5 = 26407,2 = 26407 1/5).\nOne sixth of 132036 is 22006 (132036 / 6 = 22006).\nOne seventh of 132036 is 18862,2857 (132036 / 7 = 18862,2857 = 18862 2/7).\nOne eighth of 132036 is 16504,5 (132036 / 8 = 16504,5 = 16504 1/2).\nOne ninth of 132036 is 14670,6667 (132036 / 9 = 14670,6667 = 14670 2/3).\nshow fractions by 6, 7, 8, 9 ...\n\n### Calculator\n\n 132036\n\n#### Is Prime?\n\nThe number 132036 is not a prime number. The closest prime numbers are 132019, 132047.\n\n#### Factorization and factors (dividers)\n\nThe prime factors of 132036 are 2 * 2 * 3 * 11003\nThe factors of 132036 are 1 , 2 , 3 , 4 , 6 , 12 , 11003 , 22006 , 33009 , 44012 , 66018 , 132036\nTotal factors 12.\nSum of factors 308112 (176076).\n\n#### Powers\n\nThe second power of 1320362 is 17.433.505.296.\nThe third power of 1320363 is 2.301.850.305.262.656.\n\n#### Roots\n\nThe square root √132036 is 363,367582.\nThe cube root of 3132036 is 50,921062.\n\n#### Logarithms\n\nThe natural logarithm of No. ln 132036 = loge 132036 = 11,79083.\nThe logarithm to base 10 of No. log10 132036 = 5,120692.\nThe Napierian logarithm of No. log1/e 132036 = -11,79083.\n\n### Trigonometric functions\n\nThe cosine of 132036 is 0,413998.\nThe sine of 132036 is 0,910278.\nThe tangent of 132036 is 2,198751.\n\n### Properties of the number 132036\n\nIs a Friedman number: No\nIs a Fibonacci number: No\nIs a Bell number: No\nIs a palindromic number: No\nIs a pentagonal number: No\nIs a perfect number: No\n\n## Number 132036 in Computer Science\n\nCode typeCode value\nPIN 132036 It's recommendable to use 132036 as a password or PIN.\n132036 Number of bytes128.9KB\nCSS Color\n#132036 hexadecimal to red, green and blue (RGB) (19, 32, 54)\nUnix timeUnix time 132036 is equal to Friday Jan. 2, 1970, 12:40:36 p.m. 
GMT\nIPv4, IPv6Number 132036 internet address in dotted format v4 0.2.3.196, v6 ::2:3c4\n132036 Decimal = 100000001111000100 Binary\n132036 Decimal = 20201010020 Ternary\n132036 Decimal = 401704 Octal\n132036 Decimal = 203C4 Hexadecimal (0x203c4 hex)\n132036 BASE64MTMyMDM2\n132036 MD5604b3c7cafdff80cb186630886d13bb1\n132036 SHA178996f85e3ce8f3f962cdd5be09c8bf9dbb53fbd\n132036 SHA22412bdf339a08576297853773d909556f3bd706acb39371186d0003bed\n132036 SHA384ea72dd285ecef550d64b7843821fecbf56976908dce741d09983dedc113948863476d0ea33b7eb3206e67a545a8ba13e\nMore SHA codes related to the number 132036 ...\n\nIf you know something interesting about the 132036 number that you did not find on this page, do not hesitate to write us here.\n\n## Numerology 132036\n\n### Character frequency in number 132036\n\nCharacter (importance) frequency for numerology.\n Character: Frequency: 1 1 3 2 2 1 0 1 6 1\n\n### Classical numerology\n\nAccording to classical numerology, to know what each number means, you have to reduce it to a single figure, with the number 132036, the numbers 1+3+2+0+3+6 = 1+5 = 6 are added and the meaning of the number 6 is sought.\n\n## Interesting facts about the number 132036\n\n### Asteroids\n\n• (132036) 2002 CC125 is asteroid number 132036. It was discovered by LINEAR, Lincoln Near-Earth Asteroid Research from White Sands Observatory in Socorro on 2/7/2002.\n\n## № 132,036 in other languages\n\nHow to say or write the number one hundred and thirty-two thousand and thirty-six in Spanish, German, French and other languages. The character used as the thousands separator.\n Spanish: 🔊 (número 132.036) ciento treinta y dos mil treinta y seis German: 🔊 (Anzahl 132.036) einhundertzweiunddreißigtausendsechsunddreißig French: 🔊 (nombre 132 036) cent trente-deux mille trente-six Portuguese: 🔊 (número 132 036) cento e trinta e dois mil e trinta e seis Chinese: 🔊 (数 132 036) 十三万二千零三十六 Arabian: 🔊 (عدد 132,036) مائة و اثنان و ثلاثون ألفاً و ستة و ثلاثون Czech: 🔊 (číslo 132 036) sto třicet dva tisíce třicet šest Korean: 🔊 (번호 132,036) 십삼만 이천삼십육 Danish: 🔊 (nummer 132 036) ethundrede og toogtredivetusindseksogtredive Dutch: 🔊 (nummer 132 036) honderdtweeëndertigduizendzesendertig Japanese: 🔊 (数 132,036) 十三万二千三十六 Indonesian: 🔊 (jumlah 132.036) seratus tiga puluh dua ribu tiga puluh enam Italian: 🔊 (numero 132 036) centotrentaduemilatrentasei Norwegian: 🔊 (nummer 132 036) en hundre og tretti-to tusen og tretti-seks Polish: 🔊 (liczba 132 036) sto trzydzieści dwa tysiące trzydzieści sześć Russian: 🔊 (номер 132 036) сто тридцать две тысячи тридцать шесть Turkish: 🔊 (numara 132,036) yüzotuzikibinotuzaltı Thai: 🔊 (จำนวน 132 036) หนึ่งแสนสามหมื่นสองพันสามสิบหก Ukrainian: 🔊 (номер 132 036) сто тридцять двi тисячi тридцять шiсть Vietnamese: 🔊 (con số 132.036) một trăm ba mươi hai nghìn lẻ ba mươi sáu Other languages ...\n\n## News to email\n\nPrivacy Policy.\n\n## Comment\n\nIf you know something interesting about the number 132036 or any natural number (positive integer) please write us here or on facebook."
] | [
null,
"https://numero.wiki/s/senas/lenguaje-de-senas-numero-1.png",
null,
"https://numero.wiki/s/senas/lenguaje-de-senas-numero-3.png",
null,
"https://numero.wiki/s/senas/lenguaje-de-senas-numero-2.png",
null,
"https://numero.wiki/s/senas/lenguaje-de-senas-numero-0.png",
null,
"https://numero.wiki/s/senas/lenguaje-de-senas-numero-3.png",
null,
"https://numero.wiki/s/senas/lenguaje-de-senas-numero-6.png",
null,
"https://number.academy/img/braille-132036.svg",
null,
"https://numero.wiki/img/codigo-qr-132036.png",
null,
"https://numero.wiki/img/codigo-barra-132036.png",
null,
"https://numero.wiki/img/a-132036.jpg",
null,
"https://numero.wiki/img/b-132036.jpg",
null,
"https://numero.wiki/s/share-desktop.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5324778,"math_prob":0.9660858,"size":7203,"snap":"2022-27-2022-33","text_gpt3_token_len":2612,"char_repetition_ratio":0.1579386,"word_repetition_ratio":0.0088573955,"special_character_ratio":0.43301404,"punctuation_ratio":0.16027088,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99168646,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,1,null,1,null,1,null,1,null,1,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-18T09:51:46Z\",\"WARC-Record-ID\":\"<urn:uuid:bcd0ec60-c7cd-48bd-b1a0-d83832909ac3>\",\"Content-Length\":\"41520\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cca9acee-7498-4bb4-bf4b-085df5c4660e>\",\"WARC-Concurrent-To\":\"<urn:uuid:a2971986-12dc-48e2-b95e-4ff97b92bd64>\",\"WARC-IP-Address\":\"162.0.227.212\",\"WARC-Target-URI\":\"https://number.academy/132036\",\"WARC-Payload-Digest\":\"sha1:R52YHHJM7E7Y4TDDE54T5L7HJPTKJXL6\",\"WARC-Block-Digest\":\"sha1:VTYFU4O5T3R46RJKO33T4KPHPZHCAXEO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882573193.35_warc_CC-MAIN-20220818094131-20220818124131-00219.warc.gz\"}"} |
https://www.sharkbase.ca/shark-functions.html | [
"# LIST OF ALL SHARK FUNCTIONS:\n\nA function is just like an operation; the addition operation (+) requires two arguments (numbers) and returns their sum; a function requires some arguments (0 to 3) and returns some value.\n\nEvery function has a type: numeric, string, or logical, depending on the values it returns.\n\nA function has arguments; the values put into the function between the parentheses, separated by commas. Shark functions have at most three arguments; a few have none. Some arguments may be optional.\n\nThe arguments can be variables or expressions of type string: str var or str exp, or of type numeric: num var or <num exp>. (Expressions are discussed in Section 3.5)\n\n# Within the above list are some specific Mathematical Functions. The mathematical functions may be divided into five groups::\n\n#### Logarithmic functions:\n\nEXP( e to the power <num exp>\nLOG( natural logarithm of <num exp>\nLOG10( base 10 logarithm of <num exp>\nPOW( <num exp1> to the power <num exp2>\nSQRT( square root of <num exp>\n\n#### Trigonometric functions:\n\nSIN( sine of <num exp> in radians\nCOS( cosine of <num exp> in radians\nTAN( tangent of <num exp> in radians\nASIN( arc sine of <num exp>; returns a value in radians between -𝜋/2 and 𝜋/2\nACOS( arc cosine of <num exp>; returns a value in radians between 0 and 𝜋\nATAN( arc tangent of <num exp>; returns a value in radians between -𝜋/2 and 𝜋/2\n\n#### Hyperbolic functions:\n\nSINH( hyperbolic sine of <num exp>\nCOSH( hyperbolic cosine of <num exp>\nTANH( hyperbolic tangent of <num exp>\n\n#### Integer-valued functions:\n\nCEIL( ceiling integer: the integer equal to or just above a given number\nFLOOR( floor integer: the integer equal to or just below a given number\nINT( the integer part of a given number (the fractional part is discarded); useful for currency \"rounding\", for example.\n\nNote that for positive numbers, INT( and FLOOR( return the same result, but for negative numbers, INT( and CEIL( produce the same result. This is because discarding the decimal part of a real number reduces its distance from zero.\n\n#### Real-valued functions:\n\nABS( displays the absolute value of <num exp>\nMOD( evaluates remainder of a division of a <num exp> by a different <num exp>\n\n#### Examples:\n\n```\n1>:PICTURE='999.999999'\n1>? EXP(1)\n2.718282 ;this is the value of e\n1>? LOG(3)\n1.098612\n1>? LOG10(3)\n0.477121\n1>? POW(2,4)\n16.000000\n1>? POW(2,.5)\n1.414214\n1>? SQRT(4)\n2.000000\n1>? SQRT(2)\n1.414214\n1>? SIN(2) ;2 is in radians\n0.909297\n1>? ASIN(1)\n1.570796\n1>? 2*ASIN(1)\n3.141593 ; this is, of course, 𝜋; store this to a variable if you need 𝜋\n1>? CEIL(3.14)\n4.000000\n1>? FLOOR(3.14)\n3.000000\n1>? INT(23.45)\n23.000000\n1>? INT(-23.45)\n-23.000000\n1>x=23.45\n1>? INT(10*x)\n234.000000\n1>x=23.999\n1>? INT(x)\n23.000000\n1>? INT(3.14)\n3.000000\n1>? FLOOR(-3.14)\n-4.000000\n1>? INT(-3.14) ;note that for negative numbers, INT( and FLOOR( give different values\n-3.000000\n1>? MOD(5,2) ;this is the remainder of 5 divided by 2\n1.000000\n1>? MOD(-3.14, .7)\n-0.340000\n1>? ABS(-3.14)\n3.140000\n1>:PICTURE='9999999.99'\n1>x=15.689\n1>? INT(x*100+.5)/100 Note: this is an example of rounding to the nearest 1 cent\n15.69\n1>x=15.244\n1>? 
INT(x*100+.5)/100\n15.24\n```\n\n# Within the above list of all functions are some specific Comparative Functions:\n\n```MASK(: Manipulates bits in a string based on comparison between 2 strings\nMAX(: Compare two expressions of any type and return the larger.\n```\n\n# ALPHABETICAL LIST OF ALL SHARK FUNCTIONS:\n\n## !(\n\nConverts a string to upper case.\n\n```!(<str exp>)\n```\n\n<str exp> is the text to be converted to upper case\n\nType: character\n\nAll lower-case letters in the <str exp> are converted into upper case by the !( function. See also the LOWER( function.\n\nExamples:\n\n```1>a='Aa12b'\n1>? !(a)\nAA12B\n1>? !('David!')\nDAVID!\n```\n\nNote that only the lower-case letters are changed.\n\n## #\n\nGets the current record number.\n\n`#`\n\nType: numeric\n\nThis function returns the record number of the current record of the selected file. Note that ? # displays the current record number in the form specified by the system variable :PICTURE (see Section 2.7). Shark also has a more general form of this function, RECNO(, which allows the user to specify a file other than the selected file.\n\nWhen used with the option RECNO(filenum), it gives the record number of the current record in file filenum.\n\nExamples:\n\n```1>USE employee\n1>? #\n1.00\n1>GO BOTTOM\n1>? #\n6.00\n1>GO TOP\n1>? #\n1.00\n1>SKIP 2\n1>? #\n3.00\n1>? recno(2); selects current record in file #2\n```\n\n## \\$( (substring)\n\nThis is shorthand for the SUBSTR( function.\n\n## ASTERISK (*)\n\nDetermines whether a record is deleted. Type: logical\n\nIn the selected file, the current record pointer points at a record. If this record has been marked for deletion (in BROWSE or EDIT, or with the DELETE command), then * gives the value T; otherwise, it is false.\n\nExample:\n\n```1>USE employee\n1>DELETE RECORD 2\n1 DELETE(S)\n1>GO 2\n1>? *\nT\n```\n\nDo not confuse this function with * command (NOTE), or with the * comment marker.\n\n## @( (at)\n\nGets the location of a substring.\n\n```@(find str exp, <str exp> [,start)\n\nfind str exp the string searched for\n<str exp> the text to be searched\n```\n\nOption:\n\nstart byte position at which to start looking for find str exp within the string: <str exp>. If it occurs, the function returns the character position of the first (left-most) substring of <str exp> which is the same as find str exp; if it does not occur, the function returns a 0.\n\n@( and its equivalent, the AT( function, can easily search for successive occurrences of a substring within a string.\n\nConsider the previous paragraph. How many occurences are there of the substring \"string\"? How many of \"s\"? This function can determine the answer as follows if the paragraph is stored as a single string in the variable STR:\n\nSTR='@( and its equivalent, the AT( function, can easily search for successive occurrences of a substring within a string.\n\n```SUBSTR='s'\nCOUNT=0\nSTART=0\nLEN_STR=LEN(STR)\nDO WHILE START<LEN_STR\nSTART=@(SUBSTR,STR,START+1)\nIF START>0\nCOUNT=COUNT+1\nELSE\nBREAK\nENDIF\nENDDO\n```\n\nBe sure that start always advances; if it stays in the same place, it will perform the same operation indefinitely since it will never get to the end.\n\nAmbitious programmers will have noted that, if substr is longer than one byte, start should begin at 1-LEN(SUBSTR) and advance each time by the length. 
Furthermore, the condition on the DO WHILE line should then be:\n\n```start<=len_str-len_sub\n```\n\n(assuming you've saved the length of the substring in this last variable).\n\nExamples:\n\n```1>greeting='Good morning'\n1>? @('oo', greeting)\n2.00\n1>? @('good',greeting)\n0.00\n```\n\nIn a program:\n\n```IF @(answer,'YNQynq')>0\n```\n\nchecks whether the user response is correct.\n\nThis program segment finds and marks every occurrence of a specified string in a page of text displayed on the screen:\n\n```CLS ;start by erasing the screen\nDO WHILE t ;set up continuous loop -break embedded in segment\nIF row()<mLineNumber ;stop at end of screen\nIF .not. read(mPrintLine,1) ;be sure there's more to display\nBREAK ;if not, break out of loop\nENDIF\n?? mPrintLine ;display the line, no linefeed\nIF mText>' ' ;if looking for input text\nmPrintLine=!(mPrintLine) ;capitalize line from file\nmSearchPos=1 ;start search at 1\nDO WHILE t\nmSearchPos=@(mText,mPrintLine,mSearchPos)\nIF mSearchPos=0 ;no more to find, so...\nBREAK ; get out\nELSE\n* we found something...highlight it and increment search-start position\nCOLOR colorFind,row(),mSearchPos-1,row(),mSearchPos+mTextLen-2\nmSearchPos=mSearchPos+mTextLen\nENDIF\nENDDO\nENDIF\n? ;emit carriage return/linefeed pair\nELSE\nBREAK ;stop when screen is full\nENDIF\nENDDO\n\n```\n\n## ABS(\n\ndisplays the absolute value of a <num exp>\n\nExample:\n\n```1>? abs(\n1>? @('oo', greeting)\n2.00\n1>? @('good',greeting)\n0.00\n```\n\n## ACOS(\n\narc cosine of a <num exp> returns a value in radians between 0 and π\n\n## ASC(\n\nConverts a character to its ASCII number.\n\n```ASC(<str exp>)\n<str exp> the first character of this string is converted\n\nType: numeric\n```\n\nThe characters in the character set used by the computer are numbered from 0 to 255. For the first character of the string <str exp>, ASC( returns the corresponding number. RANK( is a synonym for ASC(. See also the functions CHR(, CTONUM(, and NUMTOC(.\n\nExamples:\n\n```1>? ASC('x')\n120.00\n1>? ASC('xyz')\n120.00\n```\n\nNote that only the first character of the string matters.\n\n## ASIN(\n\narc sine of <num exp> returns a value in radians between -π/2 and π/2\n\n## ATAN(\n\narc tangent of <num exp> returns a value in radians between -π/2 and π/2\n\n## AT(\n\nGets the location of a substring . . . a synonym for @(\n\n```AT(find str exp, <str exp>, [start]\n```\n\nfind str exp the string searched for <str exp> the text to be searched\n\nOption:\n\n```[start] byte position at which to start looking for\n```\n\nType: numeric\n\nThis function finds out whether a string: find str exp occurs in the string: <str exp>. If it occurs, the function returns the character position of the first (left-most) substring of <str exp> which is the same as find str exp; if it does not occur, the function returns a 0.\n\n@( and its equivalent, the AT( function, can easily search for successive occurences of a substring within a string.\n\nConsider the previous paragraph. How many occurrences are there of the suBstring \"string\"? How many of \"s\"? 
This function can determine the answer as follows if the paragraph is stored as a single string in the variable STR:\n\nExample:\n\n```SUBSTR='s'\nCOUNT=0\nSTART=0\nLEN_STR=len(str)\nDO WHILE start<len_str\nstart=@(substr,str,start+1)\nIF start>0\ncount=count+1\nELSE\nBREAK\nENDIF\nENDDO\n```\n\nBe sure that start always advances; if it stays in the same place, it will perform the same operation indefinitely since it will never get to the end.\n\nAmbitious programmers will have noted that, if SUBSTR is longer than one byte, START should begin at 1-LEN(SUBSTR) and advance each time by the length. Furthermore, the condition on the DO WHILE line should then be\n\n```START<=LEN_STR-LEN_SUB\n```\n\n(assuming you've saved the length of the substring in this last variable).\n\nExamples:\n\n```1>greeting='Good morning'\n1>? @('oo', greeting)\n2.00\n1>? @('good',greeting)\n0.00\n```\n\nIn a program:\n\n```IF @(answer,'YNQynq')>0\n```\n\nchecks whether the user response is correct.\n\nThis program segment finds and marks every occurrence of a specified string in a page of text displayed on the screen:\n\n```CLS ;start by erasing the screen\nDO WHILE t ;break embedded in segment\nIF row()<mLineNumber ;stop at end of screen\nIF .not. read(mPrintLine,1) ;be sure there's more to display\nBREAK ;if not, get out\nENDIF\n?? mPrintLine ;display the line, no linefeed\nIF mText>' ' ;if looking for input text\nmPrintLine=!(mPrintLine) ;capitalize line from file\nmSearchPos=1 ;start search at 1\nDO WHILE t\nmSearchPos=@(mText,mPrintLine,mSearchPos)\nIF mSearchPos=0 ;no more to find, so...\nBREAK ; get out\nELSE\n* we found something...highlight it and increment search-start position\nCOLOR colorFind,row(),mSearchPos-1,row(),\nmSearchPos+mTextLen-2\nmSearchPos=mSearchPos+mTextLen\nENDIF\nENDDO\nENDIF\n? ;emit carriage return/linefeed pair\nELSE\nBREAK ;sop when screen is full\nENDIF\nENDDO\n```\n\n## BIT(\n\nBit-set function determines if a given bit is 0 or 1\n\n```BIT(string,bit position)\n```\n\nstring a string or string variable to test bit position a numeric expression; position of a given bit within string\n\nType: logical\n\nEach character in the ASCII character set is identified by an eight-bit binary number from 0 to 255 inclusive. Each bit may be either 0 or 1; for example, the letter A has a decimal value of 65 and a binary value of 01000001.\n\nWhen used with the SET( and RESET( functions, which turn specific bits to 1 or 0 respectively, the BIT( function can be used to access large amounts of logical data much more compactly than in a set of logical variables. BIT( returns T (true) if the specified bit is set (0), F (false) if not set (0).\n\nNOTE: Bit positions are counted differently than in some other schemes. In these functions, all bits are counted from the left of the string starting at 1, so that each character contains bits numbered as follows: 1. Bits 1 to 8. 2. Bits 9 to 16. 3. Bits 17 to 24. . . . and so on\n\nExample in a program:\n\nTo print the binary value of each character in an input string:\n\n```SET RAW ON ;eliminates spaces between listed output\nDO WHILE t\nACCEPT 'Enter a short string or binary representation: ' TO string\nIF string=' '\nBREAK\nENDIF\n?\nREPEAT LEN(string)*8 TIMES VARYING position\n?? IFF(BIT(string,position),'1','0')\nIF MOD(position,8)=0\n?? 
' '\nENDIF\nENDREPEAT\nENDDO\n```\n\nNow run the program:\n\n```Enter a short string for binary representation: Bit <--- enter text \"Bit\"\n01000010 01101001 01110100 <--- result displays on screen\nEnter a short string for binary representation: <--- prompt reappears\n```\n\n## BLANK(\n\nCreates a string of blanks or other specified characters.\n\n`BLANK(<num exp>[,charnum)`\n\n<num exp> a number from 0 to 255\n\nOption:\n\n```charnum the ASCII number of the character used to fill the\nblank string; default is 32, the blank character\n```\n\nType: character\n\nThis function creates a string of <num exp> blanks or other specified characters.\n\nWhen charnum is specified, many interesting effects can be created, particularly by using the special pattern characters in the IBM screen character set, 176-178, and the solid block character, 219.\n\nExamples:\n\n```1>name='DAVID'\n1>? name+BLANK(15)+name\nDAVID DAVID\n1>num=23\n1>? name+BLANK(num+5)+name\nDAVID DAVID\n1>? BLANK(20,65)\nAAAAAAAAAAAAAAAAAAAA\n1>? BLANK(20,196)\n--------------------\n```\n\n## BOF\n\nGives the beginning-of-file flag for the currently selected data file.\n\n`BOF`\n\nType: logical\n\nIf the current record pointer is on the first record of the file in use and a SKIP -1 is issued, BOF returns T (true); otherwise it is F (false). Since SKIP -n is treated as n SKIP -1 commands, BOF returns true if SKIP -n goes past the last record.\n\nTo check an open data file other than the currently selected one, use the BOF( function.\n\nExamples:\n\n```1>USE employee\n1>GO 3\n1>SKIP -2\n1>? #\n1.00\n1>? BOF\nF\n1>SKIP -1\n1>? BOF\nT\n1>GO 4\n1>SKIP -5\n1>? #\n1.00\n1>? BOF\nT\n```\n\n## BOF(\n\nGives the beginning-of-file flag for a specified data file.\n\n`BOF(<filenum>)`\n\nOption:\n\n`<filenum> the number of the data file to be checked`\n\nType: logical\n\nFor the data file number specified, if the current record pointer is on the first record and a SKIP -1 is issued, BOF( returns T (true); otherwise it is F (false). Since SKIP -n is treated as n SKIP -1 commands, BOF( returns true if SKIP n goes past the first record.\n\nAn extended form of the BOF function which, since it takes no parameter, works only on the currently selected data file. If no <filenum> is specified, the current file is assumed.\n\nExamples:\n\n```1>USE employee\n1>GO 3\n1>SKIP -2\n1>? #\n1.00\n1>? BOF()\nF\n1>SELECT 2\n2>USE inventry\n2>GO TOP\n2>? #\n1.00\n2>? BOF()\nF\n2>SKIP -1\n2>? #\n1.00\n2>? BOF()\nT\n2>SELECT 1\n1>? BOF(2)\nT\n```\n\n## CEIL(\n\nceiling integer: the integer equal to or just above a given number\n\nRelated functions: CEIL( = the integer equal to or just above a given number; FLOOR( = the integer equal to or just below a given number; INT( = the integer part of a mixed number after discarding the fractional part; MOD( = the balance after truncating a mixed number.\n\n## CEN(\n\nCenters a line of text.\n\n```CEN(<str exp>,<num exp>)\n\n<str exp> the text to be centered\n<num exp> the line width```\n\nType: character\n\nThis function centers (from the present position) the text <str exp> in a line (column) with <num exp> characters.\n\nExamples:\n\n```1>compiler='Shark'\n1>? 
CEN(compiler,40)\nShark\n1>@ 10,20 SAY CEN('Center this',40)```\n\nNote: the last command centers the text between columns 20 and 60.\n\n## CHR(\n\nConverts an ASCII number to character.\n\n(To convert a numeric value to a string, use STR()\n\n```CHR(<num exp>)\n\n<num exp> a number from 0 to 255```\n\nType: character\n\nThe characters in the character set used by the computer are numbered from 0 to 255; this number for a character is called the ASCII number. For a given <num exp> in this range, CHR(<num exp>) is the corresponding character.\n\nThis function is useful to send control codes to the printer. For instance,\n\n`1>? CHR(27)+CHR(120)+CHR(1)`\n\nputs the Epson LQ-1500 printer into letter quality mode.\n\nThe functions ASC( and RANK( do the reverse. These functions combine nicely. If the memory variable LETTER contains a letter of the alphabet (other than z or Z), then LETTER=CHR(ASC(LETTER)+1) places in LETTER the next letter of the alphabet.\n\nExamples:\n\n1. To send unprintable or other characters to a printer\n\n`1>? CHR(27)`\n\nsends ESC to the printer\n\n```1>letter='C'\n1>? CHR(RANK(letter)+1)```\n\nsends \"D\" to the printer\n\n2. To set a standard IBM or Epson printer into double-wide mode:\n\n`1>SET PRINT ON`\n`1>? CHR(14)+'First line.'`\n\nprints:\n\n`First line. `\n\nNOTE: the character '0' is difficult to send to a printer since it represents the end-of-line instruction. A better method of sending characters is the\n\n`PSTR`\n\ncommand, which permits sending any character to the printer. Example:\n\n`PSTR 27,W,1Bh,41h,3h`\n\n## CLOSE(\n\nCloses a file.\n\n`CLOSE([filenum])`\n\nOption:\n\n`<filenum> the DOS file number (between 1 and 4)`\n\nType: logical\n\nThis function closes the DOS file (in particular, the sequential file) opened with the ROPEN( or WOPEN( function. It returns T if successful, F otherwise. See the functions ROPEN(, WOPEN(, SEEK(, SSEEK(, READ(, WRITE(, GET(, PUT(, IN(, OUT(, and CLOSE).\n\nIf filenum is not specified, filenum=1 is the default.\n\nExample:\n\n```1>ok=ROPEN('a:label.prg',3)\n1>? ok\nT\n1>ok=CLOSE(3)\n```\n\n## COL(\n\nGets print column position.\n\n`COL()`\n\nType: numeric\n\nThis function gives the current column position of the cursor; if the printer is on, it returns the column position of the printer head. See the commands SET PRINT ON and SET FORMAT TO PRINT, and the function ROW(.\n\nExample:\n\n`@ ROW(),COL()+3 SAY 'Hello'`\n\nprints 'Hello' starting three characters to the right of the end of the last printing.\n\n## COS(\n\ncosine of <num exp> in radians\n\n## COSH(\n\nhyperbolic cosine of <num exp>\n\n## CTONUM(\n\nConvert a hexadecimal string into a decimal number.\n\n```CTONUM(type,string exp)\n\ntype the length of the numeric value to be returned\nstring exp the string to be evaluated as a hexadecimal value```\n\nType: numeric\n\nA general conversion function for converting hexadecimal values into decimal numbers. Input can be any length string or string variable up to eight characters as follows:\n\n``` Type String Length Returns\n1 1 byte integer 0 to 255\n2 2 bytes integer -32768 to 32767\n4 4 bytes integer +/- 2 billion\n8 8 bytes a floating point number```\n\nIf string is shorter, conversion still assumes the string is the format of the given width. 
When type is 1, this function is equivalent to RANK( or ASC(.\n\nThe NUMTOC( and CHR( functions convert numbers into strings.\n\nDo not confuse these function with STR( and VAL(, which convert decimal numbers into their string representations, and vice versa.\n\nExamples:\n\n```1>? CTONUM(1,'a')\n97.00\n1>? CTONUM(2,'ab')\n25185.00\n1>? CTONUM(4,'abc')\n6513249.00\n1>? CTONUM(4,'abcd'); number too large for format in :PICTURE\n```\n\n## DATE(\n\nTo display a date in a specific format, or to update :DATE with the computer's system date.\n\n`DATE([type],<str exp>)`\n\ntype being one of nine type basic date-output formats:\n\n`DATE(1)`\n`DATE(2)`\n`DATE(3), etc`\n\nOption: <str exp> the date to be converted\n\nType: character\n\nThis function has three distinctly different purposes and results:\n\n1. with only type specified in the range 1-9, rewrites the current Shark date in :DATE in the format specified by the type, and returns the result.\n\n2. with two parameters (the format in the range 1-9, and the date) returns the given date in the specified format. :DATE is not affected.\n\n3. with only type specified as zero, updates the Shark date with the computer's current system date, and the computer time with the current system time. The Shark and system dates are always the same when Shark is started, but the Shark does not automatically advance at midnight as the computer's system date should. When Shark can be in use overnight, or even for days at a time, it may be important to ensure that these dates are kept in synchronization. Date(0) returns a string of length zero.\n\nThe type can be given in either of two forms, a name or number (numeric expression) as follows:\n\n```Type Date-output format\n----------- -----------------------------------------------------------------\n0 a string of length zero representing the full current system date\ndisplayed as in Type 4 on this list\n1 or YMD 6-character format without slashes: yymmdd\n2 or MDY 8-character format with slashes: mm/dd/yy\n3 or Char Spelled out: Month dd, yyyy\n4 or Full Spelled out: Weekday, Month dd, yyyy\n5 or Lchar Last day of month spelled out in format 3 (Char)\n6 or DMY 11-byte string in format dd-MMM-yyyy (example 03-NOV-1990)\n7 or Variable formatted without slashes according to SET DATE TO command (See SET DATE TO)\n8 or Long 8-character format without slashes: yyyymmdd\n9 or Last Last day of month in format 1 (YMD) or 8 (Long), depending on whether SET DATE TO command sets year to 'YY' or 'YYYY'\n```\n\nShortcut: When specifying type by name, only the first character is usually required. The exception is for Lchar, Long, and Last, which require two characters to resolve ambiguity. If only one is given, Lchar is assumed.\n\nNote: No name equivalent is provided for type 0, which updates the Shark date from the computer BIOS calendar setting.\n\n```<str exp> must contain the date in one of the following formats:\n\nmmddyy ddmmyy yymmdd mmddyyyy ddmmyyyy yyyymmdd (<-- This last format is the ISO-8601 STANDARD format)\n```\n\nOptionally, a slash, a hyphen, or a space may be used to separate the elements of these formats. For example, YY/MM/DD, YY-MM-DD, DD MM YYYY are all equally valid.\n\nThere should be two digits each for month and the day, and two or four digits for the year. 01 3 92 is not acceptable. If <str exp> is not acceptable, then DATE( returns a string of blanks.\n\nIn the event of ambiguity, dates will be decoded in accordance with the format set in the SET DATE TO command. 
For example:\n\n```SET DATE TO date is interpreted as Comment\n------------- ----- --------------- --------\n'ddmyy' 11/03/99 March 11, 1999 Difficult to use for commerce\n'mmddyy' 11/03/99 November 3, 1999 \"\n'mmddyy' 11/03/60 November 3, 2060 \"\n'yyyymmdd' 20110419 2011 April 19 ISO-8601 STANDARD FORMAT\n```\n\nThis variety of date formats caters to many varied uses. However, the only reliable date format in business and science is the completely unambiguous ISO8601 'yyyymmdd' format shown above.\n\nSee also the system variable :DATE and the command SET DATE TO.\n\nExamples:\n\n```1>:DATE= '10/05/22'\n1>? :DATE\n10/05/22\n1>? DATE()\n20221005\n1>? :DATE\n20221005\n1>? DATE(1)\n221005\n1>? :DATE\n221005\n1>? DATE(2)\n10/05/22\n1>? :DATE\n10/05/22\n1>? DATE(3)\nOctober 5, 2022\n1>? :DATE\nOctober 5, 2022\n1>? DATE(4)\nSaturday, October 5, 2022\n1>? :DATE\nSaturday, October 5, 2022\n1>? DATE(4,'12/08/21')\nTuesday, August 21, 2012\n1>? :DATE\nSunday, December 8, 2021\n1>? DATE(5,'20-03-27')\nMarch 31, 2020\n1>? DATE(6,'21-03-27')\n27-Mar-2021\n1>? :DATE\nSaturday, October 5, 2022\n1>? :DATE(8)\n20110419\n```\n\nThe date can be recalled with any of the above date() functions, and styled to suit the user. For example, using the PIC( function, put the following in the SHARKNET.CNF file so any Shark program can access it at any time:\n\n```:DATE=DATE(8)\n:UNDOC=PIC(DATE(8),\"XXXX.XX.XX\")\n```\n\nThen the system variable :UNDOC when recalled will display:\n\n```1>? :UNDOC\n1>? 2011.04.19```\n\nThus, a preferred date format can be stored in Shark's .CNF file as a system variable. It's then visible to all applications in Shark:\n\n`:UNDOC=PIC(DATE(8),\"XXXX.XX.XX\")`\n\nNOTE: If the time has passed midnight during the current run of Shark, you can update the Shark date with DATE(0):\n\n```1>? DATE(3)\nSunday, October 5, 2019\n1>? DATE(0)\n1>? DATE(4)\nSunday, October 6, 2019```\n\nNOTE: Formatting the date display has no effect on date calculations.\n\nA NOTE FOR PROGRAMMERS & TECH TYPES:\n\nShark keeps all file dates in pre-Y2K MS-DOS format. Each date component before year 2000 is saved simply as the last 2 digits of each component. The 20th century is epoch \"00\". \"1998\" for example is stored as \"0098\". An old file from \"April 15,1998\" will show its file date as \"4-15-98\". It's actually saying \"04-15-0098\", but omitting the zeros. If you type \"DIR\" or \"DIRF(\" at the prompt, this is the date format you will see.\n\nAfter the epoch rolls over to the next millenium (the 21st century), the \"00\" of the 19th millenium will become \"01\" and Shark will display the DOS date not as \"0120\", but as \"120\", again dropping the leading zero. \"April 15,2000\" will thus display as \"4-15-120\".\n\nShark automates this formatting using the \"SET DATE TO\" instruction, so the display task is automated. However, the date display when listing files still remains in the older MS-DOS display format. 
Only rarely will the Shark programmer see this.\n\nWhen doing occasional file searches or backups, Shark programmers sometimes work with this display quirk by writing a simple formatting subroutine to always display the dates in a preferred display format when listing files:\n\n```* DISPDATE.PRG\n* A date display subroutine to sort out a file listing display:\n*\nif VAL(\\$(dirx(5),7,3))<100\nmdatprefx='19'; set a 20th century date prefix\nMDAT=mdatprefx+\\$(DIRX(5),7,2)+\".\"+IFF(\\$(DIRX(5),1,1)=' ','0'+\\$(DIRX(5),2,1),\\$(DIRX(5),1,2))+\".\"+\\$(DIRX(5),4,2)\nelse\nmdatprefx='20'; set a 21st century date prefix\nMDAT=mdatprefx+\\$(DIRX(5),8,2)+\".\"+IFF(\\$(DIRX(5),1,1)=' ','0'+\\$(DIRX(5),2,1),\\$(DIRX(5),1,2))+\".\"+\\$(DIRX(5),4,2)\n\nendif```\n\nThus you can set your date display as you prefer. The dates in Shark are absolute and are unaffected by any display choices, so your files and dates will always be correct regardless of your display choices. The above subroutine will show dates in a program like this:\n\n```BEFORE:\nVPI.EXE 260326 4-08-91 10:56a\nVPI.MSG 15809 4-08-101 10:57a\nVPI.SGN 1972 4-08-121 10:58a\n\nAFTER displaying file dates with the above subroutine:\nVPI.EXE 260326 1991.04.08 10:56a\nVPI.MSG 15809 2001.04.08 10:57a\nVPI.SGN 1972 2021.04.08 10:58a```\n\nMost of the time, the Shark programmer will deal with preferred formats stored in a system variable such as :UNDOC, and will rarely change date formats once a preferred setup is implemented.\n\n## DAYS(\n\nComputes dates and date differences in days.\n\n```DAYS(<str exp1>,<str exp2>) Example: DAYS(2005-03-18,2005-06-27)\nDAYS(<str exp>,<num exp>) Example: DAYS(2005-03-18,30)\n```\n\nIn the first form: str exp1 and str exp2 are dates\n\nIn the second form: <str exp> is a date and <num exp> is a number\n\nType: numeric/character\n\nIn the first form, DAYS( returns the number of days between two dates. The result is an integer.\n\nIn the second form, DAYS( returns the date (as a string) which is <num exp> days past or before the date <str exp>.\n\nThe string expressions containing dates can be of many different formats (see the DATE( function for more examples):\n\n```\tyyyy/mm/dd 2005/03/18\nyyyy-mm-dd 2005-03-18\nyyyy mm dd 2005 03 18\nmm/dd/yy 03/18/05\nmm-dd-yy 03-18-05\nmm dd yy 03 18 05\n```\n\nThere should be two digits each for yy, mm, and dd, and four digits for yyyy. 01 3 90 is not acceptable (dd contains only 1 digit).\n\nIn the second form, the date is returned in the format set with the SET DATE TO command (default: mmddyyyy). If you wish a different format, use the DATE( function. See also MONTHS( and SET DATE TO.\n\nExamples:\n\n```1>? DAYS('04 06 90','04 29 90')\n23.00\n1>? DAYS('01/01/88','01 23 90')\n753.00\n1>? DAYS('01/01/90','01 23 88')\n-708.00\n1>? DAYS('01/01/91','01 02 91')\n1.00\n1>? DAYS('01/02/91','01 01 91')\n-1.00\n1>? DAYS('04 03 90',30)\n050290\n1>? DAYS('02 03 90',30)\n030590\n1>? DAYS('02 03 90',-3)\n010490\n1>? DAYS('020390',-30)\n010490\n1>monthday='0203'\n1>offset=30\n1>? DAYS(monthday+'90',offset+1)\n030690\n```\n\nNote that the MONTH( and DATE( functions always count the days in leap years:\n\n```\n1>? DAYS('02/28/88','03 01 88') ;leap year\n2.00\n1>? DAYS('02/28/90','03 01 90') ;not a leap year\n1.00\n```\n\nThus, DAYS(2005-03-18,-365) may not give the same result as MONTHS(2005-03-18,-12)\n\nDAYS( and DATE( may be combined to form complex expressions. 
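\n\nBefore the expressions below, here is one more hedged illustration of combining the two: a due date 30 days after an invoice date, spelled out for a letter. This is an untested sketch; the variable names are assumptions, and only the DAYS( and DATE( forms documented here are used (output is not shown because it depends on the SET DATE TO setting):\n\n```\n* Untested sketch: spell out a due date 30 days past the invoice date\ninvdate='04 06 90'\nduedate=DAYS(invdate,30) ;a date string in the format set by SET DATE TO\n? 'Payment is due on '+DATE(3,duedate) ;format 3 spells it out as Month dd, yyyy\n```\n\n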
For instance, the end of the month closest to today in the form set in the SET DATE TO command:\n\n`DATE(7,DAYS(DATE(2),-15))`\n\nThe end of NEXT month:\n\n`DATE(5,DAYS(DATE(2),30))`\n\nSee DATE( and MONTHS( functions, and SET DATE TO command.\n\n## DBF(\n\n``` DBF(type[,<filenum>)\n\ntype the information required from the data file header```\n\nOption:\n\n``` <filenum> the data file number; default is the currently\nselected data file```\n\nType: character/numeric\n\nEach data file has a file head which contains information about its structure, most of which is displayed with the LIST STRUCTURE command.\n\nUsing the DBF(, DBFX( and FLD( functions provides users access to this information in a programmable form suitable for display and use in expressions.\n\nThe type can be given in either of two forms, a name or number (numeric expression) as follows:\n\n```TYPE EXPLANATION RESULT\n1 or (T)ype file type (\"0\", \"1\", \"2\" or \"3\") string\n2 or (N)ame data file name string\n3 or (F)ields number of fields in structure integer\n4 or (R)ecords number of records in file integer\n5 or (I)ndexes number of indexes currently open integer\n6 or (M)aster index number of master index integer\n7 or (Q)ualified fully-qualified file name from\nSharkBase FILES structure if\none exists; otherwise from DOS string\n8 or (E)xpanded fully-qualified file name from DOS string```\n\nShortcut: When specifying type by name, only the first character is required.\n\nExamples:\n\n```1>? DBF(type),DBF(n),DBF(fields),DBF(recs),DBF(indexes),DBF(master)\n0 CUSTOMER.DBF 9.00 4.00 2.00 1.00\n1>? DBF(q)\nC:\\SHARK\\DBF\\CUSTOMER.DBF\n\n1>SET INDEX TO NAME\n1>APPE BLANK\n1>GO TOP <--- takes you to blank (NEWLY APPENDED) record of indexed table\n1>GO BOTT <--- takes you to last record before new record appended to table\n1>GOTO DBF(R) <--- takes you to the last (NEWLY APPENDED) record in table```\n\nSince appended record is blank, it won't show as the last (BOTTOM) record in an indexed table. 
DBF(R) will take you to the last record in the table.\n\n## DBFX(\n\nGives additional information about an open data file; an extension to the DBF( function.\n\n``` DBFX(type[,<filenum>])\n\ntype one of four types of information as listed below```\n\nOption:\n\n``` <filenum> the data file number; default is the currently\nselected data file```\n\nType: logical\n\nThe DBF( and FLD( functions provide information contained in the data file headers.\n\nShark provides this extended function to give four additional types of information in a programmable form suitable for display and use in expressions.\n\nThe type can be given in either of two forms, a name or number (numeric expression) as follows:\n\n```TYPE EXPLANATION\n1 or Filter TRUE if a FILTER is in effect\n2 or Limit TRUE if a LIMIT is in effect\n3 or Relation TRUE if file is related to another\n4 or Write TRUE if current record has changes to be written to disk```\n\nShortcut: When specifying type by name, only the first character is required.\n\nIn a program, the programmer has control over whether filters, limits and relations are established, so the first three options are primarily helpful in Conversational SharkBase, in program debugging and in displaying the current environment for the aid of end-users.\n\nThe fourth option, however, is extremely helpful in programming, especially on networks, for it allows the programmer not only to check whether the user actually wants to save changes made to the record, but to avoid locking and changing records when the user was only looking.\n\nA second valuable use of this feature is to time-stamp changes and leave a record of who made them.\n\nExamples:\n\n```1>USE customer\n1>SET FILTER TO state='CA'\n1>? DBFX(Filter) ;equivalent to DBFX(1)\nT\n1>? DBFX(R) ;equivalent to DBFX(3)\nF```\n\nIn a program segment (starting with the READ command):\n\n```READ\nIF DBFX(w)\n@ 22,0 say CEN('Record changed. Save the Changes (Y/N)?',80)\nCURSOR 23,39\nIF !(CHR(INKEY()))='Y'\nREPLACE changedby WITH DATE(1)+user+TIME(1)\nFLUSH\nENDIF\nENDIF```\n\nThe same approach is used in programming for SHARE files on a network, but the actual technique is beyond the scope of this section due to the many different ways networking data integrity is implemented by various operating systems. See specific network sections of your SharkBase documentation for more on this topic.\n\n## DELETED(\n\nDetermines whether a record is deleted.\n\n` DELETED(<filenum>)`\n\nOption:\n\n` <filenum> the number of the data file to be checked`\n\nType: logical\n\nIn a specified data file, the current record pointer points at a record. If this record has been marked for deletion (in BROWSE or EDIT, or with the DELETE command), then DELETED( gives the value T; otherwise, it is false.\n\nThis is a more general form of the * function, which operates the same way as DELETED( but, because it allows no parameter, works only with the currently selected data file.\n\nExamples:\n```1>? DELETED(4)\nT\n```\n\n## DESCEND(\n\nThe main use of DESCEND( in Shark is to create indexes that are in reverse, or descending, order.\n\nRemember that an index arranges its keys in order of the value of the characters making up the keys. \"A\" comes before \"B\" because the value 65 is smaller than 66.\n\nIn C Language terms, Shark reverses all bits in string; equivalent to NOT of string. 
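\n\nA quick way to see the complement at the prompt is with the BIT( function described earlier. This is a hedged, untested sketch; the expected values follow from the left-to-right bit numbering given under BIT( and from every bit being flipped:\n\n```\n1>? BIT('A',2) ;bit 2 of 'A' (01000001) is set\nT\n1>? BIT(DESCEND('A'),2) ;after DESCEND( every bit is flipped, so it reads clear\nF\n```\n\n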
Shark uses this function to reverse the order of an indexed file.\n\n`DESCEND(string)`\n``` string the string to be converted; every \"1\" bit is returned\nas \"0\", and every \"0\" bit is returned as \"1\"```\n\nType: character\n\nEach character in the ASCII character set is identified by an eight-bit binary number from 0 to 255 inclusive. Each bit may be either 0 or 1; for example, the letter A has a decimal value of 65 and a binary value of 01000001, while the letter B has a decimal value of 66 and a binary value of 01000010.\n\nThe DESCEND( function returns a \"mirror image\" of the input string . . . i.e., one in which evey bit set to 1 is reset to 0, and every 0 is set to 1.\n\nThe result is familiar to programmers who use C and other low-level languages, who would use the NOT operator to obtain the same result.\n\nThe main use of DESCEND( in Shark is to create indexes that are in reverse, or descending, order.\n\nBut suppose the index expression, instead of being on the key expression itself is on the NOT value of it, as in:\n\n`INDEX ON DESCEND(NAME) TO REVERSE`\n\nNow you have an index called REVERSE.NDX in which B comes before A.\n\nThis function is especially useful in creating reports. For example, suppose you want to report transactions for your customers in reverse date order (latest transactions first). Your index command could be:\n\n`INDEX ON CUST+DESCEND(DATE) TO CUSTREPT`\n\nNote: if you create an index using the DESCEND( function, you will have to use the same function to modify any find strings if you need to use the index to FIND, SEEK, etc.\n\nExample in a report form:\n\n```TITLE - Transactions by Customer, with Dates in Descending Order\nFILE - trans\nINDEX - cust+descend(date) to custrept\nFILE - cust index cust1\nRELATION - custnum to 2\nFIELDS - date(6,date),desc,amount\nPICTURE - ,,9999999.99\nSUBTOTAL - custn\nMESSAGE - 'Customer: '+pic(custnum,'xxx-x-xx -- ')+name#2```\n\nNote: This report form includes a number of features worth noting, including the fact that it creates its own index, sets up a relation to a second file, includes underlined headings and subtotals, and includes one line split into two. In report forms, the comma (,) is a continuation character when it is the last character on a line. See REPORT.\n\n## DIR(\n\nGet file information from disk directory.\n\n``` DIR(filespec) (referred to as \"Form 1\")\nDIR() (referred to as \"Form 2\") ```\n\nOption:\n\n`filespec refers to a string or string expression containing a file name (or skeleton using ? and/or * wildcards), with optional drive and/or path specification (enclosed in single quotes)`\n\nType: character\n\nSearches for specific files specified by the drive path, subdirectory and/or filename and provides specific information on files found.\n\nIf an argument is given as shown in Form 1, it must be a string or string expression naming a file, with * and ? wildcards optional). Returns the first file name found matching filespec. If no match was found, blank is returned.\n\nIf no argument is given as shown in Form 2, the previous filespec is used to find the next matching file. If no more matching file names are found, blank is returned.\n\nDIR( differs from DIRF( in that DIR( requires the user to specify the pathname unless the search is to be confined to the current directory. DIRF( (see below) is usually preferable since it searches the directory as which a current FILES structure points for the file name specified. See DIRF(.\n\nExamples:\n\n`? 
DIR('c:\\path\\*.bak')`\n\nlocates and displays the name of the first file with extension BAK in the subdirectory \\PATH, while\n\n`? DIR('*.bak')`\n\nlocates and displays the name of the first file with extension BAK in the current directory.\n\nIf a file is found, entering DIR() with no parameters will locate the next file meeting the filespec. This will continue until the response is a blank, indicating there are no more files meeting the filespec.\n\nOnce a file is identified with DIR(, the DIRX( function may be used to obtain additional information from the DOS directory in a form suitable for use in a program.\n\n## DIRF(\n\nGet file information from disk directory.\n\n``` DIRF(filespec) (referred to as \"Form 1\")\nDIRF() (referred to as \"Form 2\") ```\n\nOption:\n\n`filespec a string or string expression containing a file name (or skeleton using ? and/or * wildcards), with optional drive and/or path specification (enclosed in single quotes)`\n\nType: character\n\nSearches for specific files specified by the filename and the path supplied from the FILES structure (see FILES...ENDFILES), and provides specific information on files found.\n\nIf an argument is given (Form 1), it must be a string or string expression naming a file (with * and ? wildcards optional). Returns the first file name found matching filespec. If no match was found, blank is returned. If a disk name is supplied, the path supplied by the FILES structure is ignored.\n\nIf no argument is given (Form 2), the previous filespec is used to find the next matching file. If no more matching file names are found, blank is returned.\n\nDIR( differs from DIRF( in that DIR( requires the user to specify the pathname unless the search is to be confined to the current directory. See DIR(.\n\nExamples, assuming the following FILES structure is in effect:\n\n```FILES\n*.prg,c:\\shark\\prg\n*.frm,c:\\shark\\prg\n*.bak,c:\\shark\\prg\n*.db*,c:\\shark\\data\n*.ntx,c:\\shark\\ntx\n*.cpl,c:\\shark\\cpl\nENDFILES\nDIRF('*.prg')```\n\nlocates the first file with extension PRG in the subdirectory C:\\SHARK\\PRG. NB: this request only LOCATES the file information and doesn't display any result! Read further for instructions on how to DISPLAY needed information.\n\n`DIRF('*.bak')`\n\nlocates the first file with extension BAK in the subdirectory C:\\SHARK\\PRG.\n\nNOTE that the above instruction only locates the file and doesn't display any result. To display the found file, you must type:\n\n`? DIRF('*.bak')`\n\n`DIRF('*.dbt')`\n\nlocates the first file with extension DBT in the subdirectory C:\\SHARK\\DATA.\n\n`DIRF('*.txt')`\n\nlocates the first file with extension TXT in the current directory, since there was no matching filename pattern in the FILES structure.\n\nIf a file is found, entering DIRF() with no parameters will locate the next file meeting the filespec. 
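\n\nIn a program, a loop like this untested sketch walks the whole list (the variable name is an assumption, and the blank test follows the idiom used elsewhere in these examples):\n\n```\n* List every PRG file found through the FILES structure\nfname=DIRF('*.prg')\nDO WHILE fname>' '\n? fname\nfname=DIRF() ;no argument: get the next match\nENDDO\n```\n\n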
This will continue until the response is a blank, indicating there are no more files meeting the filespec.\n\nOnce a file is identified with DIRF(, the DIRX( function may be used to obtain additional information from the DOS directory in a form suitable for use in a program.\n\n## DIRX(\n\nObtain additional information about file located with the DIR( or DIRF( function.\n\n``` DIRX(type)\n\ntype the name or number of the information required about a file\n```\n\nType: character/numeric\n\nOnce a file is identified with DIR( or DIRF( additional information can be obtained from the DOS directory in a form suitable for use in a program.\n\nThe type can be given in either of two forms, a name or number (numeric expression) as follows:\n\n```Type Explanation Result\n1 or Name last file name found with DIR( string\n2 or Size size in bytes of last name found with DIR( integer\n3 or Attribute DOS file attribute as follows integer\n1 - directory\n2 - system\n3 - hidden\n5 - normal\n4 or Time time file created or last updated string\n5 or Date date file created or last updated string\n6 directory\n7 sub-directory\n```\n\nShortcut: When specifying type by name, only the first character is required.\n\nThese functions have many uses. Use them to write a program that backs up recently modified files, a program that lists files so the user can pick one, etc.\n\nExamples:\n\n```DIRX(n) returns the filename (e.g. \"Name\")\nDIRX(a) returns 4 if the file is read only (e.g. \"Attribute\")\n```\n\n## EOF(\n\nGives the end-of-file flag for a specified data file.\n\n```\nEOF(<filenum>)```\n\nOption:\n\n` <filenum> the number of the data file to be checked`\n\nType: logical\n\nFor the data file number specified, if the current record pointer is on the last record and a SKIP is issued, EOF( returns T (true); otherwise it is F (false). Since SKIP n is treated as n SKIP commands, EOF( returns true if SKIP n goes past the last record.\n\nAlso, if a LOCATE or CONTINUE command is unsuccessful, or if NEAREST does not find an index key equal to or greater than the FIND string, EOF( returns T.\n\nIf no <filenum> is specified, the current file is assumed.\n\nAn alternate form of the function - EOF - works only on the currently selected data file, since it takes no parameter. See EOF.\n\nExamples:\n\n```1>USE employee\n1>GO 4\n1>SKIP 2\n1>? #\n6.00\n1>? EOF()\nF\n1>SELECT 2\n2>SKIP\n2>? EOF()\nT\n2>GO BOTTOM\n2>SKIP\n2>? #\n6.00\n2>SELECT 1\n1>? EOF(2)\nT\n```\n\n## EOF\n\nGives the end-of-file flag for the currently selected data file.\n\n`EOF`\n\nType: logical\n\nIf the current record pointer is on the last record of the file in use and a SKIP is issued, EOF returns T (true); otherwise it is F (false). Since SKIP n is treated as n SKIP commands, EOF returns true if SKIP n goes past the last record. Also, if a LOCATE or CONTINUE command is unsuccessful, or if NEAREST does not find an index key equal to or greater than the FIND string, EOF returns T.\n\nExamples:\n\n```1>USE employee\n1>GO 4\n1>SKIP 2\n1>? #\n6.00\n1>? EOF\nF\n1>SKIP\n1>? EOF\nT\n1>GO 4\n1>SKIP 3\n1>? #\n6.00\n1>? 
EOF\nT\n```\n\n## EXP(\n\ne to the power <num exp>\n\n## FIELD(\n\nGet the number of the Get Table entry corresponding to a variable or field name.\n\n```\nFIELD(<name>) ```\n<name> = the name of field or variable in a Get Table\n\nType: numeric\n\nWhile in full-screen editing mode (with READ, BROWSE, EDIT, etc.), each input variable and field is put into a Get Table that can be controlled with an ON FIELD structure.\n\nFIELD( returns the number from 1 to 64 of any editing field on screens created with @ GET and TEXT macros. This function is usually used on an ON FIELD structure to redirect the sequence of data entry.\n\nSee READ and ON FIELD in the Command Reference section.\n\nExample in a program:\n\n```\nON FIELD\nFIELD qty\nIF qty<0\n@ 22,0 say CEN('Quantity cannot be negative. Press any key',80)\ncc=INKEY()\nERASE 22,22\n:FIELD=FIELD(qty)\nENDIF\nENDON\n```\n\n## FILE(\n\nVerifies whether a file exists.\n\n``` FILE(<str exp>)\n\n<str exp> = a file name ```\n\nType: logical\n\nThis function looks up the file whose name is given by <str exp>; if the file is found, the function returns T, otherwise it returns F.\n\nIf no extension is given in the file name, DBF is assumed (a data file is looked for).\n\nExamples:\n\n```\n1>? FILE('employee')\nT\n1>? FILE(mfile)\nF\n1>? FILE('a:'+mfile)\nT\n```\n\n## FLD(\n\nGet information about a field in a data file.\n\n```\nFLD(type,<fieldnum> [,<filenum>])\n\ntype one of the four attributes of a field\n<fieldnum> = the number of the field to be checked```\n\nOption:\n\n` <filenum> the number of the data file to be checked`\n\nType: character/numeric\n\nEach field in a data file has four attributes as shown with the LIST STRUCTURE command: name, type, width and (for numeric variables) number of decimal places. The FLD() function is often used in conjunction with the DBF() function.\n\nThese attributes can be retrieved in a form suitable for use in a program with the FLD() function.\n\nThe type can be given in either of two forms, a name or number (numeric expression) as follows:\n\n```Type Explanation Result\n1 or Name string containing field name string\n2 or Type string containing field type string\n3 or Width number containing width of field integer\n4 or Decimals number of decimal places in field integer\n```\n\nShortcut: When specifying type by name, only the first character is required.\n\nExample in a program, displaying all four attributes of each field in the file in use:\n\n```REPEAT DBF(fields) TIMES VARYING fldnum\nREPEAT 4 TIMES VARYING type\n?? FLD(type,fldnum)\nENDREPEAT\n?\nENDREPEAT\n```\n\n## FLOOR(\n\nfloor integer: the integer equal to or just below a given number\n\nRelated functions: CEIL( = the integer equal to or just above a given number; FLOOR( = the integer equal to or just below a given number; INT( = the integer part of a mixed number after discarding the fractional part; MOD( = the balance after truncating a mixed number.\n\n## GET(\n\nGets a string from a DOS file.\n\n``` GET(str var,<width num exp>[,filenum])\n\nstr var stores the string\n<width num exp> = the width of the string requested (must be in range 1 to 254)```\n\nOption:\n\n` filenum the DOS file number (between 1 and 4)`\n\nType: logical\n\nThis function imports a string of <width num exp> characters from a DOS file opened with the ROPEN( function; the character number pointer is normally positioned with the SEEK( function.\n\nIf successful in getting all the bytes requested, GET( returns T (true) and sets str var to the string imported from the file. 
If str var does not exist, GET( will create it.\n\nIf the function is unsuccessful, it returns F (false). This will be the result if the GET( function tries to get data beyond the end of the file. Note, however, that even if GET( returns F, one of more characters may still have been imported from the file; it is wise to check the value and width of str var to ensure part of a file is not lost.\n\nIf filenum is not given, filenum=1 is assumed.\n\nGET( READ(, IN(, and WRAP( are the only functions that change the contents of the memory variable used as an argument.\n\nExample in a program:\n\n```IF ROPEN('test',3)\nDO WHILE GET(string,80,3)\n? string\nENDDO\nENDIF\nok=CLOSE(3)\n```\n\n## IFF(\n\nAllows inline IF...THEN logic in expressions.\n\n``` IFF(cond,<exptrue>,<expfalse>)\n\ncond a logical expression\n<exptrue> the expression to be returned if cond is TRUE\n<expfalse> the expression to be returned if cond is FALSE```\n\nType: Type: character/numeric/logical\n\nThis function returns <exptrue> if cond is true, <expfalse> otherwise. The type of the value returned is the same as the expression selected by the condition. IFF( is very useful in the FIELDS line of reports or in commands such as SUM, AVERAGE, REPLACE, or LIST.\n\nExamples:\n\n```1>? IFF(married,'Married','Single ')\n1>SUM IFF(quant>500, quant*price, 0),IFF(state='NY',1,0)\n```\n\nThe first command prints \"Married\" or \"Single\" according to the value of a logical field named MARRIED. The second command will return the sum of all quantities for transactions where quantity is greater than 500, and a count of all records where STATE='NY', thus combining two separate commands (SUM FOR and COUNT FOR) into one.\n\nCaution: do not use expressions of different types or widths in reports, since this may cause the REPORT command to fail.\n\n## IN(\n\nInputs a single character from a sequential file.\n\n``` IN(str var[,<filenum>])\n\nstr var stores the character```\n\nOption:\n\n` <filenum> the DOS file number (between 1 and 4)`\n\nType: logical\n\nThis function reads the next character of the DOS file (opened with the ROPEN() function into the string variable str var; if str var does not exist, it will be created. str var cannot be a matrix variable.\n\nIf filenum is not given, filenum=1 is assumed. IN( returns T if successful, F otherwise.\n\nThis function is especially useful to communicate over the standard COM1, COM2 devices, for conversion of WordStar or other non-standard files to standard ASCII files, to encrypt/decrypt a file through a translation table.\n\nIN(, GET(, READ(, and WRAP( are the only functions that change the contents of the memory variable used as an argument.\n\nSee the functions OUT(, ROPEN(, WOPEN(, SEEK(, SSEEK(, and CLOSE(.\n\n## IFKEY(\n\nTests if a character is waiting in the keyboard buffer.\n\n` IFKEY()`\n\nType: logical\n\nIt is often useful to test whether a key has been pressed on the keyboard without waiting indefinitely if a key is not pressed.\n\nThe IFKEY( function returns T (true) if a keystroke is waiting in the keyboard buffer, F (false) if not. The keyboard buffer is not affected.\n\nExample in a program, to create a timing loop that ends as soon as any key is pressed:\n\n```start=VAL(TIME(seconds))\nDO WHILE VAL(TIME(seconds))-start<3\nIF IFKEY()\nBREAK\nENDIF\nENDDO\n```\n\n## INKEY(\n\nEvery key on a keyboard has a numeric value. INKEY() waits and gets the numeric value of a key-press.\n\n`INKEY()`\n\nType: numeric\n\nThis function suspends program execution until a key is pressed. 
It returns a number identifying the key. Nothing is displayed on the screen. Any key can be read (except Alt, Ctrl, and shift which merely affect the characters produced by other keys) including all function keys, editing keys, and alternate keys. (Depending on your computer, function keys may be pre-programmed in a variety of ways. It's even possible that F11 and F12 may not recognized by your computer's BIOS program, and may be ignored by Shark. Although it's possible, it's more likely that your PC will recognize these keys.)\n\nStandard keys are identified with their ASCII number. Other keys return values between 256 and 511.\n\nExamples:\n\n``` Key INKEY()\nCtrl-C 3\nA 65\nAlt-A 285\n<F1> 315\nShift-<F1> 340\nCtrl-<F1> 350\nAlt-<F1> 360\nF12 390\n```\n\nTo find out the number identifying a key, give the command:\n\n```1>? INKEY()\n```\n\nThen (1) press <ENTER>, then (2) the key. The key's character number will be displayed.\n\nUsing INKEY() the user can program his own EDIT, set up cursor controlled menus, and so on.\n\nA simple example showing the many uses of INKEY():\n\nINKEY() makes a more attractive screen than the WAIT command. INKEY() can also be used to make choices:\n\n```DO WHILE t\nCLS\n@ 10,25 SAY \"Hello! Press Any Key\"\nWAIT=INKEY()\nDO CASE\nCLS\n@ 3,30 say \"Some Announcements:\"\nBOX 2,15,4,65\n@ 7,27 say \"Select from the following:\"\n@ 10,20 say \"<--- <R>eturn to Start or <Q>uit --->\"\nCOLOR 112,10,25,10,27\nCOLOR 112,10,50,10,52\nCURSOR 24,79\n:KEY=0\nWAIT=INKEY()\n* if <R> or <r>, <R>eturn to opening menu\nCASE :KEY=114 .or. :KEY=82\nLOOP\n* if <Q> or <q>\nCASE :KEY=81 .or. :KEY=113\nCLS\nCANCEL\nOTHERWISE\nCLS 22,22\nRING\n@ 15,20 SAY CEN(\"PLEASE TRY AGAIN!\",40)\nWAIT=INKEY()\nCLS\n@ 10,30 \"Thank you!\"\nENDCASE\nENDDO\n\n```\n\n## INSERT(\n\nOverwrites a string at a given position with another string.\n\n``` INSERT(str expover,<str exp>,<num exp>)\n\nstr expover the string expression to overwrite\n<str exp> the string expression to overwrite with\n<num exp> the position```\n\nType: character\n\nThis function takes the string in str expover and overwrites the string with <str exp> starting at position <num exp>.\n\nExamples:\n\n```1>line=' '\n1>customer='John Smith'\n1>ponumber='32109'\n1>amount='910.56'\n1>line=INSERT(line,customer,1)\n1>? line\nJohn Smith\n1>line=INSERT(line,ponumber,15)\n1>? line\nJohn Smith 32109\n1>line=INSERT(line,amount,25)\n1>? line\nJohn Smith 32109 910.56\n1>line=' c '\n1>newline=INSERT(line,customer,@('c',line))\n1>? newline\nJohn Smith\n```\n\nNote: The last example shows the use of INSERT( with \"templates\". The line variable is the template. The character \"c\" in it designates the place where the customer has to be inserted. Such templates are useful in report generators or for creating screen displays.\n\n## INT(\n\nGives the integer part of a given number (the fractional part is discarded); useful for currency \"rounding\", for example.\n\nRelated functions: CEIL( = the integer equal to or just above a given number; FLOOR( = the integer equal to or just below a given number; INT( = the integer part of a mixed number after discarding the fractional part; MOD( = the balance after truncating a mixed number.\n\nExample:\n\n```1> ? 
int(3.14159)\n1> 3.00\n```\n\n## LEFT(\n\nGets the left part of a string.\n\n``` LEFT(<str exp>, <num exp>)\n\n<str exp> the string from which the new string is formed\n<num exp> the number of characters to place in the new string\n```\n\nType: character\n\nThis function takes the first <num exp> characters from the string <str exp>. It is equivalent to, but more efficient than:\n\n```1>\\$(<str exp>, 1,<num exp>).\n```\n\nIf <num exp> is greater than the width of <str exp>, this function returns all of <str exp>.\n\nWherever an expression calls for a substring starting at the beginning, use LEFT( instead of \\$( or SUBSTR(.\n\nExample:\n\n```1>a='David Bark'\n1>? LEFT(a,5)\nDavid\n1>? LEFT(a,50)\nDavid Bark\n```\n\n## LEN(\n\nGets the width (length) of a string.\n\n``` LEN(<str exp>)\n\n<str exp> is the string being measured```\n\nType: numeric\n\nThis function returns the width (including trailing blanks) of the string <str exp>.\n\nExamples:\n\n```1>name='David Barberr'\n1>? LEN(name)\n13.00\n1>? LEN(name+' is a nice boy')\n27.00\n```\n\nNote that the width of a string is at least 1!\n\n## LOC(\n\nLOC( Gets the current byte position in a file opened with ROPEN( or WOPEN(.\n\nLOC(<filenum>)\n\nOption:\n\n<filenum> is the number of the sequential or random file, 1 to 4 (default 1)\n\nType: number\n\nWhenever a file is opened with the ROPEN( or WOPEN( function, Shark maintains a pointer at a current position, which is where any PUT( or GET( function would take effect. The position pointer is set with the SEEK( and SSEEK( functions, and reset every time the IN(, OUT, READ(, WRITE(, PUT(, and GET( function is used.\n\nIf filenum is not given, filenum=1 is assumed.\n\nA common use of LOC( is to get the current position before a SEEK( so that the pointer can be reset to the original position after some operation.\n\n## LOG(\n\nLOG( <num exp>)\n\ngives the natural logarithm of <num exp>\n\nType: number\n\nExample:\n\n```1>? log(100)\n1> 4.60\n```\n\n## LOG10(\n\ngives the base 10 logarithm of <num exp>\n\nType: number\n\nExample:\n\n```1>? log10(100)\n1> 2.00\n```\n\n## LOWER(\n\nConverts the string <str exp> to lower case.\n\n` LOWER( <str exp>)`\n<str exp> is the text to be converted to lower case\n\nType: character\n\nAll upper-case letters in the <str exp> are converted into lower case by the LOWER( function. See also the !( and UPPER( functions.\n\nExamples:\n\n```1>a='Aa12b'\n1>? LOWER(a)\naa12b\n1>? LOWER('David!')\ndavid!\n```\n\nNote that only the upper-case letters, A-Z, are changed (to a-z). No other characters are affected.\n\n## LTRIM(\n\nTrims blanks from the left-hand side of a string:\n\nLTRIM(<str exp> ) <str exp> the string to be trimmed\n\nType: character\n\nThis function gets rid of the blanks on the left of a string. See also TRIM( which removes the blanks on the right side of a string.\n\nExamples:\n\n1>a=' David ' 1>? a David 1>? LEN(a) 14.00 1>? LTRIM(a)+' is trimmed on the left' David is trimmed on the left 1>? LEN(LTRIM(a)) 9.00 1>blank=' ' 1>? LEN(LTRIM(blank)) 1.00\n\nNote: LTRIM(blank) is a single blank.\n\n```MASK(<operation>,<string>,<mask>) Manipulates bits in a string based on comparison between 2 strings\n<operation> a number representing one of the three masking operations supported by SharkBase: AND, OR, and XOR\n<string> the string to be modified by the operation\n<mask> the string used to modify the bits in string1 ```\n\nType: character\n\nEach character in the ASCII character set is identified by an eight-bit binary number from 0 to 255 inclusive. 
Each bit may be either 0 or 1; for example, the letter A has a decimal value of 65 and a binary value of 01000001, while the letter B has a decimal value of 66 and a binary value of 01000010. Although SharkBase has functions to set individual bits to 1 or 0 (SET( and RESET( respectively), MASK( can be used to change any number of bits at once.\n\nThe three MASK( operations are used to return a string that is the result of bitwise operations between two strings, comparing each bit in one string with the bit in the same position in the other string. If the two bits are both 1 (on or true), AND and OR both produce a 1 in that position, while XOR (exclusive OR) returns a 0. If one is a 1 and the other a 0, OR and XOR both produce a 1 and AND produces a 0. Finally, if both are 0, all three produce a 0.\n\nThere is one other operator of this type, known as the bitwise NOT, but it is not implemented through the MASK( function. Since the primary use of the bitwise NOT is to create indexes in decending order, this operation is implemented through the DESCEND( function. See DESCEND(.\n\nThe use of AND, OR and XOR is the same as in C and other low-level languages that permit access to data at the bit level, and will rarely be used by most SharkBase programmers. Those who require these operations already know how to use them and need no additional instructions for SharkBase.\n\nExample in a program:\n\nGiven any screen color (See COLOR and SET COLOR), determine the reverse that SharkBase would use in highlighting input windows during READ operations (useful in using the same color to highlight messages and other text on the screen):\n\n```CLS\nDO WHILE t\nINPUT 'Color.... ' to in\nlow=mod(in,16) ;separate the low-order and high-order bits\nhi=int(in/16)\nout=low*16+hi ;the rest of these operations are too complex to explain...just be assured it works\nnew=int(in/16)+16*mod(in,16)\n:color=in\n? 'Main Color..',in\n:color=rev\n? 'Reverse.....',rev\nENDDO\n```\n\n## MAX(\n\nGiven any two expressions of the same type, MAX( returns the higher value. It must be remembered that string comparisons are based on the ASCII value of the characters in the two strings. Comparing two logical expressions has no meaning.\n\nYou may find that a comparison between a numeric expression and a numeric field results in an error. This can be avoided by ensuring both arguments are an expression, accomplished most easily by adding 0 to a field.\n\nExamples:\n\n```1>? MAX(123,amount+0) ;adding 0 ensures handling as an expression\n5241.34\n1>? MAX('hello','goodbye')\nhello```\n```MAX(<exp1>,<exp2>) Compare two expressions of any type and return the larger.\n<exp1> any expression\n<exp2> any expression of the same type as <exp1> ```\n\nType: Type: character/numeric/logical\n\n```\n\nchoices the number of choices offered by the menu\nhotkeys one character for each possible option, beginning\nwith zero (normally the exit option)\nwidth the width of the menu lightbar\n```\n\nOption:\n\n`seconds exit MENU( after this many seconds; function returns value of 65`\n\nType: numeric\n\nThe MENU( function pauses program execution and superimposes a movable lightbar (reverse-video line) over a menu of selections previously written to the screen, usually with TEXT.\n\nThe menu lightbar is moved up and down with the up arrow and dn arrow keys. If you press down arrow while on the bottom, the lightbar cycles automatically to the top. 
Similarly, pressing up arrow while on the top cycles to the bottom.\n\nThe user can select any item by moving the lightbar over it and pressing Enter, or entering its line number as a one-digit number. In either case, the line number selected is returned by the function, and the key pressed stored in the system variable :KEY. Both the function value and :KEY can be tested in a subsequent DO CASE structure to determine the program's next actions. The inkey() function displays the :KEY value:\n\n```CLS\nDO WHILE t\n@ 10,0 SAY \"To display a key code, press any key . . .\"\n? inkey()\nENDDO\n```\n\nIf the user presses 0 or <Home> , MENU( returns zero, although the first line covered by the lightbar is 1. Options over 9 can be accessed only by the lightbar.\n\nAlternately, you may choose to use the <hotkeys> option for this function's first argument. Instead of the number of choices, a string may be supplied. The length of the string determines the number of choices and the letters in the string math the numbers of the choice. For example, if if you supplied a string literal or variable with the value \"Qmdn\" for <hotkeys>, pressing \"Q\" or \"q\" would return zero, \"M\" or \"m\" would return 1, \"D\" or \"d\" would return 2, and \"N\" or \"n\" would return 3. <Hotkeys> may be caps or lower case; MENU( is not case-sensitive.\n\nIf any cursor key except , and <Home> is pressed, MENU( returns the number of the line highlighted by the lightbar, and :KEY contains the key number that would be returned by the INKEY( function. (If SET FUNCTION OFF, all function keys have the same effect as these cursor keys.)\n\nWhile the MENU( function is active, all other typewriter keys are ignored.\n\nNote: If an ON KEY structure is in effect, it is ignored while MENU( is waiting for input.\n\nExamples:\n\n1. This is a program which shows specifically the values returned by Menu( when any key is pressed:\n\n```CLS\n@ 19,46 say 'Returns'\n@ 19,64 say ':KEY'\nDO WHILE t\nCURSOR 5,20\n@ 20,50 say var\n@ 20,65 say :KEY\nENDDO\n```\n\n2. In a simple real-life program:\n\n```CLS\nTEXT\n\n PREPARE SALES TOTALS\n CALCULATE TAX SUMMARY FOR A TAX PERIOD\n PRINT OUT SALES TRANSACTIONS & MARGINS\n REVIEW DEPOSITS JOURNAL\n\nENDTEXT\nCHOICE=0\nCURSOR 8,12 ; positions cursor at row 8, column 12\nCHOICE=MENU(4,1); 1-wide lightbar moves over 4 menu rows, saves value to choice\n*\nDO CASE\n*** *** *** *** ***\nCASE CHOICE = 0\n... etc\nCASE CHOICE = 1\n... etc\n```\n\n3. Another real-life program:\n\n```ERASE\nWINDOW 1,2,23,77 double ;draw frame around screen\n@ 1,3 say DATE(full)\n@ 3,3 say CEN(:company,74)\nWINDOW 8,25,22,75 blank ;use WINDOW to position TEXT\nWINDOW\nCURSOR 10,23\nIF ans=0 .or. :key=335\nENDIF\n@ ans+9,23 say CHR(149) ;• character as bullet\nDO CASE\nCASE ans=1\n... etc.\n```\n\n4. Part of the same program using both hotkeys and timeout:\n\n```WINDOW 8,25,22,75 blank ;use WINDOW to position TEXT\nWINDOW\nCURSOR 10,23\nhotkeys='Qeprsmwgf' ;hotkeys can be a string variable\n* ; this allows 8 options plus zero (quit)\nIF ans=0 .or. :key=335 .or. ans=65 ;return of 65 means program timed-out\nENDIF\n@ ans+9,23 say CHR(149) ;• character as bullet\nDO CASE\nCASE ans=1\n... etc.\n```\n\n## MIN(\n\nCompare two expressions of any type and return the smaller.\n\n```\nMIN(exp1,exp2)\n\nexp1 any character or numeric expression\nexp2 any expression of the same type as exp1 ```\n\nType: character/numeric\n\nGiven any two expressions of the same type, MIN( returns the lower value. 
It must be remembered that string comparisons are based on the ASCII value of the characters in the two strings. (Comparing two logical expressions has no meaning.)\n\nYou may find that a comparison between a numeric expression and a numeric field results in an error. This can be avoided by ensuring both arguments are an expression, accomplished most easily by adding 0 to a field.\n\nExamples:\n\n```\n1>? MIN(123,amount+0) ;adding 0 ensures handling as an expression\n123.00\n1>? MIN('Hello','Goodbye')\nGoodbye\n```\n\n## MOD(\n\nThe MODULO function returns the remainder of one number divided by another.\n\n`MOD(<num exp1>,<num exp2>)`\n\nFor example:\n\n```1>? MOD(5,2) gives the remainder of 5 divided by 2, e.g.\n1.000000\n1>? MOD(-3.14, .7)\n-0.340000\n```\n``` MOD(<num exp1>,<num exp2>) <num exp1> modulo <num exp1>: returns 0 if <num exp2> is 0; returns the value num with the same sign\nas <num exp1>, less than <num exp2>, satisfying <num exp1>=i*<num exp2>+num for some integer i.\n\n1>? MOD(5,2) this is the remainder of 5 divided by 2\n1.000000\n1>? MOD(-3.14, .7)\n-0.340000\n```\n\nRelated functions: CEIL( = the integer equal to or just above a given number; FLOOR( = the integer equal to or just below a given number; INT( = the integer part of a mixed number after discarding the fractional part; MOD( = the balance after truncating a mixed number.\n\n## MONTHS(\n\nComputes dates and date differences in months.\n\n```\nMONTHS(<date1>,<date2>) or\nMONTHS(<date1>,<num exp>)\n\n<date1> a string expression containing a valid date\n<date2> a string expression containing a valid date\n<num exp> a numeric expression such as a number of months\n```\n\nWhen <date2> is specified, MONTHS( returns number of months between the two dates. When <num exp> is specified, MONTHS( returns date that many months away\n\nMONTHS( computes the difference between the two dates in months, or computes a date a given number of months before or after a specified date. Fractional parts of months are discarded.\n\nIf a computed date is after the last date of the month, the date will be adjusted to the last day of the month. For example, MONTHS('013190',1) results in 022890.\n\nExamples:\n\n```1>? MONTHS('04 06 90','04 29 90')\n0.00\n1>? MONTHS('01/01/90','02/01/90')\n1.00\n1>? MONTHS('02/01/90','01/01/90')\n-1.00\n1>? MONTHS('01/01/90','01/01/92')\n24.00\n1>? MONTHS('02/01/90',10)\n120190\n1>? MONTHS('01/01/90',-6)\n070189\n```\n\nNote that the MONTH( and DATE( functions always count the days in leap years:\n\nThus, DAYS(2005-03-18,-365) may not give the same result as MONTHS(2005-03-18,-12)\n\n## NDX(\n\nGet information on index files in use.\n\n``` NDX(type [,indexnum] [,<filenum>])\n\ntype the name or number of the information required```\n\nOptions:\n\n``` the number of the index being checked (1 to 7);\ndefault is thge master index\n<filenum> the number of the data file to be checked```\n\nType: character/logical\n\nNDX( is used to primarily in programs to get the information on the current environment in a form suitable for use in expressions.\n\nThe type can be given in either of two forms, a name or number (numeric expression) as follows:\n\n```Type Explanation Result\n1 or Name name of index file string\n2 or Key key on which index was created string\n3 or DBF_Name name of data file on which index\nwas created string\n4 or Filter TRUE if filter or FOR clause was\nin effect when index was created logical```\n\nShortcut: When specifying type by name, only the first character is required.\n\nExamples:\n\n```1>? 
NDX(n),NDX(key),NDX(dbf),NDX(filter)\nCUST1.NDX CUSTNUM CUSTOMER.DBF F\n```\n\n## NTX(\n\n```This is an obsolete function identical to NDX(, which refers to the\ndiscontinued Clipper .ntx file type. The older Clipper NTX indexes are\nno longer used in Shark.\n```\n\n## NUMTOC(\n\n```Convert a decimal number to a hexadecimal string.\n\nNUMTOC(type,number)\n\ntype the length of the string to be created\nnumber the number to be converted\n\nType: character\n\nA general conversion function for converting decimal numbers into\nhexadecimal values. Input can be any number, and the returned string length\ncan be up to eight characters as follows:\n\nThe type can be given in either of two forms, a name or number (numeric\nexpression) as follows:\nType Range of Number String Length Returned\n1 integer 0 to 255 1 byte\n2 integer -32768 to 32767 2 bytes\n4 integer +/- 2 billion 4 bytes\n8 a floating point number 8 bytes\nShortcut: When specifying type by name, only the first character is required.\n\nTypes 1, 2 and 4 return hexadecimal integers. Any fractional parts are ignored.\n\nWhen type is 1, this function is equivalent to CHR(. Values outside\nthe range of 0 to 155 return the modulus of 256 for type 1.\n\nThe CTONUM(, RANK( and ASC( functions convert strings into numbers.\n\nDo not confuse these function with STR( and VAL(, which convert decimal\nnumbers into their string representations, and vice versa.\n\nExamples:\n1>? NUMTOC(1,97)\na\n1>? NUMTOC(2,25185)\nab\n1>? NUMTOC(4,6513249)\nabc\n```\n\n## OUT(\n\n```\nOutputs a single character to a sequential file.\n\nOUT(str var [,<filenum>])\n\nstr var contains the character\n\nOption:\n\n<filenum> the DOS file number (between 1 and 4)\n\nType: logical\n\nThis function outputs the character in str var to the sequential file\n(opened with the WOPEN( function).\n\nIf filenum is not given, filenum=1 is assumed. OUT( returns T if successful, F otherwise.\n\nThis function is especially useful to communicate over the standard COM1,\nCOM2 devices, for conversion of Word Star or other non-standard files to\nstandard ASCII files, to encrypt/decrypt a file through a translation table.\n\nSee the functions IN(, ROPEN(, WOPEN(, CLOSE(, SEEK(, SSEEK(, READ(, and WRITE(.\n```\n\n## PIC(\n\n```\nFormats the PICTURE of a number or string.\n\nPIC(exp,format)\n\nexp is the number or string to be formatted\nformat the format clause\n\nType: character\n\nThis function returns the exp formatted with the format clause\nformat. See the command @ for the description of the format clauses.\nPIC( is especially useful in preparing numeric values for printing.\n\nPIC( always returns a string, even when a number or numeric expression\nis being formatted.\n\nExamples:\n1>number=1123.89\n1>format='9,999.99'\n1>? PIC(number,'9,999.99')\n1,123.89\n1>? PIC(number,format)\n1,123.89\n1>format='9999'\n1>? PIC(number,format)\n1123\n1>format='\\$\\$\\$,\\$\\$\\$.99'\n1>? PIC(number,format)\n\\$1,123.89\n1>format='\\$\\$\\$,\\$\\$\\$.999'\n1>? PIC(number,format)\n\\$1,123.890\n1>string='abcd'\n1>format='xX9!'\n1>? PIC(string,'xx9!')\nabcD\n1>? PIC(string,format)\nabcD\n1>format='X-X-X-X'\n1>? PIC(string,format)\na-b-c-d\n1>SET ZERO ON\n1> ? 0,\"*\",str(0,5,2),\"*\",pic(0,\"99.99\"),\"*\",\"0\"\n0.00 * 0.00 * 0.00 * 0\n1>SET ZERO OFF\n1> ? 0,\"*\",str(0,5,2),\"*\",pic(0,\"99.99\"),\"*\",\"0\"\n* 0.00 * * 0\n\n```\n\n## POW(\n\n```<num exp1> to the power <num exp2>\nExample:\n\n1>:PICTURE='999.999999'\n1>? 
POW(2,4)\n16.00000```\n\n## PRINTER(\n\n```\nThis is an MS-DOS function to test whether a printer is ready to print.\n\nPRINTER(printernum)\n\nOption:\n\nprinternum the number of the LPT port (1 or 2)\n\nType: logical\n\nWhenever a program has to print, it needs a printer turned on and\non-line. When it is unsuccessful in printing, Shark intercepts the customary\nDOS error (the infamous \"Abort, Retry, Ignore?\") and ends execution.\n\nThe PRINTER( function gives programmers a way to ensure the printer is\ncorrectly set up before sending output to the printer. This makes it possible\nto suspend execution under program control, prompt for corrective action, or\neven SPOOL the output to a disk file instead of the printer.\n\nExamples in programs:\n\nDO WHILE .NOT. PRINTER()\nWINDOW 10,10,15,69 DOUBLE\n@ 12,10 SAY CEN('Turn on printer and press any key . . .',60)\nRING\nCURSOR 13,39\ncc=INKEY()\nWINDOW\nENDDO\nIF .NOT. PRINTER(2) ;test LPT2\nSPOOL printfil\nENDIF\nIF PRINTER()\nSET PRINT ON\nENDIF\n```\n\n## PUT(\n\n```\nPuts a string into a DOS file.\n\nPUT(<str exp>[,filenum])\n\n<str exp> the string to overwrite with\n\nOption:\n\n<filenum> the DOS file number (between 1 and 4)\n\nType: logical\n\nA DOS file was opened with the WOPEN( function; the character number\npointer was normally positioned with the SEEK( function. This function\noverwrites the file from the character chosen by the character number pointer\nwith the string <str exp>.\n\nIf filenum is not given, filenum=1 is assumed. PUT( returns T if successful, F otherwise.\n\nExamples:\n1>byte=CHR(13)\n1>ok=WOPEN('test',3)\n1>ok=SEEK(5221)\n1>ok=PUT(byte,3)\n```\n\n## RAND(\n\n```\nGives a random number in the range 0<=n<1.\n\nRAND()\n\nOption:\n\nseed a number used to initiate the random series\n\nType: numeric\n\nA series of successive calls to the RAND( function will return a uniform\ndistribution of random numbers.\n\nThe first time RAND( is called, <seed> (any numeric expression) may be\nspecified. All subsequent calls should be without the seed. If no initial\nseed is provided, a random seed is chosen by the program.\n\nRAND( always returns a number equal to or greater than 0 and less than 1.\nIf you need a random series of integers between zero and 5000, use\n5000*RAND().\n\nNote: if you provide the initial seed, every execution of RAND( ) will\nreturn the same series of numbers.\n```\n\n## RANK(\n\n```\nConverts a character to its ASCII number.\n\nRANK(<str exp>)\n\n<str exp> the first character of this string is converted\n\nType: numeric\n\nThe characters in the character set used by the computer are numbered\nfrom 0 to 255. For the first character of the string <str exp>, RANK( returns\nthis number. See also the CTONUM( and NUMTOC( functions.\n\nExamples:\n\n1>? RANK('x')\n120.00\n1>? RANK('xyz')\n120.00\n\nNote that only the first character of the string matters.\n```\n\n## READ(\n\n```\nReads a line of a sequential file.\n\nREAD(str var[,<filenum>])\n\nstr var stores the line read in\n\nOption:\n\n<filenum> the DOS file number (between 1 and 4)\n\nType: logical\n\nThis function reads the next line of the sequential file (opened with the\nROPEN( function) into the string variable str var. If str var does not\nexist, it will be created; str var cannot be a matrix variable.\n\nA line is terminated by the carriage return character (ASCII 13). Since\nthe line is read into a string variable, it cannot be longer than 254\ncharacters.\n\nIf filenum is not given, filenum=1 is assumed. 
READ( returns T if\nsuccessful, F otherwise.\n\nIn Shark programs, READ( normally appears in an IF or DO WHILE command.\n\nREAD(, IN(, GET(, and WRAP( are the only functions that change the\ncontents of the memory variable used as an argument.\n\nSee the functions WRITE(, ROPEN(, WOPEN(, CLOSE(, IN(, OUT(, and SSEEK(.\n\nExamples:\n\n1. In Conversational Shark:\n\n1>ok=ROPEN('a:label.prg')\n1>? line\n\n2. Two programs to print a text file, TEST (in the second version it is\nassumed that TEST has no more than 20 lines):\nSET WIDTH TO 80\nSET PRINT ON\nIF ROPEN('test')\n? line\nENDDO\nok=CLOSE()\nENDIF\nDIM CHAR 80 matrix\nSET WIDTH TO 80\nSET PRINT ON\nIF ROPEN('test',1)\nREPEAT 20 times VARYING num\nmatrix[num]=input,\nELSE\nBREAK\nENDIF\nENDREPEAT\nIF CLOSE(1)\n? matrix\nENDIF\nENDIF\n```\n\n## RECNO(\n\n```\nGets the current record number in any open data file.\n\nRECNO(<filenum>)\n\nOption:\n\n<filenum> the number of any open data file; default is the selected data file\n\nType: numeric\n\nThis function returns the record number of the current record of any\nspecified data file; if no <filenum> is given as in recno(), returns the record number the\nselected file. Note that ? RECNO() displays the current record number in the\nform specified by the system variable :PICTURE (see Section 2.7).\n\nShark also has a more limited form of this function, #, which applies\nonly to the selected data file.\n\nExamples:\n1>USE employee\n1>USE#2 customer\n1>? RECNO(1)\n1.00\n1>GO BOTTOM\n1>? RECNO(1)\n6.00\n1>GO TOP\n1>? RECNO()\n1.00\n1>SKIP#2 2\n1>? RECNO(2)\n3.00\n```\n\n## REMLIB(\n\nRemoves a library entry.\n\n`REMLIB(volume)`\n\nvolume the number of the library entry to be removed.\n\nType: logical\n\nThis function deletes a library entry. The function accepts the library volume number you wish to delete as its argument and returns T (true) if the delete operation was successful, F (false) if not.\n\nOnce a library entry (volume) is deleted, its space in the library is made available for new text.\n\nLibraries are created with the SET LIBRARY TO command.\n\nExample:\n\n```1>? REMLIB(50)\nT\n```\n\n## REPLACE(\n\nReplaces, in a string expression, all occurrences of a string with another string.\n\n` REPLACE(<str exp>,<str exp1>,<str exp2>) `\n\nReplace in the string expression <str exp> all occurrences of the string <str exp1> with the string <str exp2>\n\nType: character\n\nThis function looks up the first occurrence of . This process continues as long as <str exp1> occurs in <str exp>.\n\nExamples:\n\n1. A field contains a number as right justified characters, padded on the left with blanks. The following REPLACE( changes these numbers to right justified numbers padded on the left with zeros.\n\n```1>number=' 123'\n1>number=REPLACE(number,' ','0')\n1>? number\n00000123\n```\n\n2. In writing checks, dollar amounts may be left padded with dollar signs:\n\n```1>number=' 123.11'\n1>number=REPLACE(number,' ','\\$')\n1>? number\n\\$\\$\\$\\$\\$123.11```\n\n3. Renaming a variable in a program line. The variable OLDN is renamed FIRSTNUMB.\n\n```1>line='newn=oldn+oldn+(oldn/3)'\n1>line=REPLACE(line,'oldn','firstnumb')\n1>? 
line\nnewn=firstnumb+firstnumb+(firstnumb/3)\n```\n\n## RESET(\n\nSets a bit in a string to 0.\n\n``` RESET(<str exp>,bit position)\n\n<str exp> the string or string expression on which the\nfunction is to act\nbit position the number of the bit, numbered from the left\nstarting at 1, which is to be set to zero ```\n\nType: logical\n\nA bit is any of the eight binary digits in a character's ASCII number representation. Each bit can have only one of two possible values, 0 and 1.\n\nThe SET( and RESET( functions are used to manipulate the bits within a string or string expression. SET( makes a bit 1, and RESET( makes a bit 0. The BIT( function tests the value of a specific bit.\n\nAmong the chief uses for these functions is compression of logical (true/false) data by using just one bit for each data item instead of an entire byte for a logical field or two bytes for a logical variable.\n\nSee the BIT( function for programming examples.\n\nExamples:\n\n```1>str='PS'\n1>? BIT(str,15)\nT\n1>str=RESET(str,15)\n1>? str,BIT(str,15)\nPQ F\n```\n\n## RIGHT(\n\nGets the right-hand part of a string.\n\n` RIGHT(<str exp>, <num exp>) `\n\n<str exp> the string from which the new string is formed <num exp> the number of characters to place in the new string\n\nType: character\n\nThis function returns the last (that is, the rightmost) <num exp> characters from the string <str exp>.\n\nIf <num exp> is greater than the width of <str exp>, this function returns all of <str exp>.\n\nExample:\n\n```1>a='David Bark'\n1>? RIGHT(a,5)\nBark\n1>? RIGHT(a,50)\nDavid Bark\n```\n\n## ROPEN(\n\nOpens a DOS file for reading.\n\n``` ROPEN(<str exp> [, <filenum> ])\n\n<str exp> the file name```\n\nOption:\n\n`<filenum> the DOS file number (between 1 and 4) `\n\nType: logical\n\nThis function opens <str exp>, a DOS file (in particular, a sequential file or input device, such as COM1), for reading only. The current position pointer (see the SEEK( function) is set to the beginning of the file.\n\nIf filenum is not given, filenum=1 is assumed. If no file extension is given, the extension TXT is used.\n\nROPEN( returns T if successful, F otherwise.\n\nSee the functions WOPEN(, CLOSE(, READ(, WRITE(, IN(, OUT(, GET(, PUT(,SEEK(, and SSEEK(.\n\nExamples:\n\n```1>? ROPEN('a:label.prg')\nT```\n\nIn an Shark program, ROPEN( normally appears in an IF command:\n\n```IF ROPEN('file',2)\n? data\nENDDO\nok=CLOSE(2)\nENDIF\n```\n\n## ROW(\n\nGets print row position.\n\n` ROW() `\n\nType: numeric\n\nIn an MS-DOS system, this function gives the current row (line) position of the cursor; if the printer is on, it returns the column position of the printer head. See the commands SET PRINT ON and SET FORMAT TO PRINT, and the function COL(.\n\nExample:\n\n`@ ROW()+1,COL()+3 SAY 'Hello'`\n\nprints 'Hello' starting on the next line three characters to the right of the end of the last printing.\n\n## SEEK(\n\nGoes to a given character number in a plain text file.\n\n` SEEK(<num exp> [,filenum]) `\n\n<num exp> the character number\n\nOption:\n\n` <filenum> a Shark file number (between 1 and 4)`\n\nType: logical\n\nThis function repositions the character number pointer in the text file (opened with the ROPEN( or WOPEN( function) to the value given by <num exp>.\n\nIf filenum is not given, filenum=1 is assumed. If no file extension is given, the extension TXT is used.\n\nIf SEEK( is successful, it returns T (true); otherwise F (false). 
In a Shark program, SEEK( normally occurs in an IF or DO WHILE command.\n\nOnce the character pointer is properly positioned, use the GET( and PUT( functions to manipulate the characters.\n\nRelated functions: SSEEK(, ROPEN(, WOPEN(, CLOSE(, GET(, and PUT(.\n\nExample:\n\n```1>ok=ROPEN('a:label.prg',4)\n1>ok=SEEK(522,4)\n```\n\n## SET(\n\nSets a bit in a string to 1.\n\n```\nSET(<str exp>,bit position)\n\n<str exp> the string or string expression on which the\nfunction is to act\nbit position the number of the bit, numbered from the left\nstarting at 1, which is to be set to 1 ```\n\nType: logical\n\nA bit is any of the eight binary digits in a character's ASCII number representation. Each bit can have only one of two possible values, 0 and 1.\n\nThe SET( and RESET( functions are used to manipulate the bits within a string or string expression. SET( makes a bit 1, and RESET( makes a bit 0. The BIT( function tests the value of a specific bit.\n\nAmong the chief uses for these functions is compression of logical (true/false) data by using just one bit for each data item instead of an entire byte for a logical field or two bytes for a logical variable.\n\nSee the BIT( function for programming examples.\n\nExamples:\n\n```1>str='PQ'\n1>? BIT(str,15)\nF\n1>str=RESET(str,15)\n1>? str,BIT(str,15)\nPS T\n```\n\n## SIN(\n\nsine of <num exp> in radians\n\n## SINH(\n\nhyperbolic sine of <num exp>\n\n## SQRT(\n\nsquare root of <num exp>\n\n## SPACE(\n\nGets the amount of space left in the data space. SPACE()\n\nType: numeric\n\nThis function returns the available memory in the 64K data space (see Appendix A).\n\nExample:\n\n```1>? SPACE()\n27155.00\n```\n\nAs a general rule, when SPACE is down to less than 500.00, there is not a lot of room left for code & variables!\n\n## SSEEK(\n\nGoes to a given line number in a sequential file.\n\n```\nSSEEK(<num exp> [,<filenum>)\n\n<num exp> the line number ```\n\nOption:\n\n```\n<filenum> the DOS file number (between 1 and 4)```\n\nType: logical\n\nThis function repositions the line number pointer in the sequential file (opened with the ROPEN( function) to the value given by <num exp>.\n\nIf filenum is not given, filenum=1 is assumed. If no file extension is given, the extension TXT is used.\n\nIf filenum is not given, filenum=1 is assumed. If no file extension is given, the extension TXT is used.\n\nSee the functions SEEK(, ROPEN(, WOPEN(, CLOSE(, READ(, IN(, and OUT(.\n\nExample:\n\n```1>ok=ROPEN('a:label.prg',4)\n1>line=''\n1>ok=SSEEK(5,4)\n1>? ok\nT\n1>? line\nGOTO top\n1>ok=SSEEK(900,4)\n1>? ok\nF\n```\n\n## STR(\n\nType: character\n\nConverts a number to a string.\n\n```\nSTR(<num expression>,<width num expression>[,<decimals num expression<])\n\n<num expression> the number to be converted\n<width num expression> the width of the string\n```\n\nExample:\n\n```1> ? STR(1234,4)\n1> 1234\n1> ? STR(1234,12)\n1> 1234\n\n```\n\nOption:\n\n```\n<decimals num expression> the number of decimals\n```\n\nThis function gets a number by evaluating the <num expression>, and converts it into a string of given width. Optionally, the number of decimals can be specified (the default is 0). See also PIC(.\n\nExamples:\n\n```1>x=123.456\n1>? STR(x,8)\n123\n1>? LEN(STR(x,8))\n8.00\n1>? STR(x,10)\n123\n1>? 
STR(x,10,1); NOTE 'x' is the num expression; '10' is the width of the num expression; and '1' is the number of decimals\n123.4\n```\n\nWhen combined with the VAL( function, STR( is a convenient way of rounding decimal numbers with a given precision.\n\nFor example:\n\n```1>:PICTURE='9999.99999'\n1>a=29.95748\n1>? a\n29.95748\n1>? VAL(STR(a+.005,10,2))\n29.96000\n1>? VAL(STR(a+.00005,10,4))\n29.95750\n```\n\n## SUBSTR(\n\nType: character\n\nGets a substring of a string.\n\n```\nSUBSTR(<str exp>, <start num exp>, <width num exp>)\n\n<str exp> the string from which the new string is formed\n<start num exp> the position from which the new string is taken\n<width num exp> the number of characters to place in the new string\n```\n\nThis function, a synonym for the \\$( function, takes the string in <str exp> from position <start num exp> (fractions are thrown away); the number of characters taken is <width num exp>. (In both numeric expressions, the fractions are disregarded).\n\nExamples:\n\n```1>name='David Barberr'\n1>? SUBSTR(name, 7,3)\nBar\n1>? SUBSTR(name, 7,12)\nBarberr\n1>? LEN(SUBSTR(name,7,12))\n7.00\n```\n\nNote that SUBSTR(name,7,12) is of width 7, not 12; there are only 7 letters left in name from position 7.\n\n```1>s=3\n1>t=1\n1>? SUBSTR(name+name,(s+t)/2,1.9)\na\n```\n\nNote that 1.9 was taken as 1.\n\n## TAN(\n\ntangent of <num exp> in radians\n\n## TANH(\n\nhyperbolic tangent of <num exp>\n\n## TEST(\n\nTests a string whether it is a valid expression.\n\n```TEST(<str exp>)\n\n<str exp> the string to be tested\n\nType: logical```\n\nThis function tests the string in <str exp> as to whether it is a valid expression; in particular, all variables must be defined and must be of the proper type. It returns T if it is, F otherwise. If the test is successful, TYPE( can be used to find the type of the expression.\n\nIn a Shark program, use TEST( to find out whether a selection criteria typed in by the user is correct. Or use it to ensure that a variable exists in a subroutine that may be called from several programs.\n\nExamples:\n\n```1>? TEST('check=0')\nF false because check is not defined\n1>check=0\n1>? TEST('check=0')\nT\n1>? TEST('check=0.or.check=1')\nT\n1>? TEST('check=\"A\".or.check=1')\nF false because \"A\" is of character\ntype\n1>? TEST('num')\nF\n1>num=5\n1>? TEST('num')\nT\n1>? TEST('num+') false because the second operand\nis missing\nF\nIF .NOT. TEST('check')\ncheck=0\nENDIF\n```\n\n## TIME(\n\nGets the system time.\n\n```\nTIME(type)```\n\nOption:\n\n```\ntype one of three types of information requested, as listed below ```\n\nType: character\n\nThis function returns a string containing the current system time, and changes the format of the system variable :TIME to this format. (See the system variable :TIME in Section 2. Recall that :TIME is initialized when Shark starts up, but can be reinitialized with a :TIME= command.)\n\nThe type can be given in either of two forms, a name or number (numeric expression) as follows:\n\n```Type Explanation Example\n1 or HMS Hours,minutes,seconds,hundredths 17:26:36.07\n2 or AMPM Hours,minutes with AM/PM 5:36 pm\n3 or Seconds Seconds since midnight 62804.80```\n\nShortcut: When specifying type by name, only the first character is required. The default parameter is 1, the time in the 24-hour format hh:mm:ss.hh.\n\nExamples:\n\n```1>? :TIME\n15:45:59\n1>? TIME(2)\n3:46 pm\n1>:TIME\n3:46 pm\nstart=VAL(TIME(seconds))\nprogram lines\nfinish=VAL(TIME(seconds))\n? 
'Program execution took',start-finish,'seconds.'```\n\nThis displays how long the running of <program lines> took, in seconds.\n\n## TRIM(\n\nTrims blanks from the right-hand side of a string.\n\n```\nTRIM(<str exp>)\n\n<str exp> the string to be trimmed ```\n\nType: character\n\nShark stores strings in fields padded on the right with blanks. In actual use, these blanks may get in the way. TRIM( gets rid of the blanks on the right of a string. TRIM( can be used in the key of an index. See also LTRIM( .\n\nExamples:\n\n```1>a='Vancouver '\n1>b='BC'\n1>? a+','+b\nVancouver ,BC\n1>? LEN(a)\n14.00\n1>? trim(a)+','+b\nVancouver,BC\n1>? LEN(TRIM(a))\n9.00\n1>blank=' '\n1>? LEN(TRIM(blank))\n1.00```\n\nNote: TRIM(blank) is a single blank.\n\n## TYPE(\n\nGets the type of an expression.\n\n` TYPE(exp)`\nexp = any expression\n\nType: character\n\nFor an expression, exp, this function returns the type of the expression as a one character string:\n\n```C for character\nL for logical\nN for numeric\nM for memo\nF for float ;a type used by other xBase languages; treated as N internally\nD for date ;a type used by other xBase languages; treated as C internally\nU Undefined ;created by GLOBAL or VARIABLES command, but not yet given\na value or type```\n\nTo test whether a string is a valid expression, use the TEST( function.\n\nExamples:\n\n```1>a='name'\n1>? TYPE(a)\nC\n1>? TYPE(a+a)\nC\n1>n=12\n1>? TYPE(a+n)\n1. Invalid variable type found when executing an expression.\n1>? TYPE(a<a)\nL\n1>? TYPE(a<a .OR. n<5)\nL\n1>? TYPE(note)\nM\n1>? TYPE(date)\nD\n1>TYPE(n+5/10)\nN\n```\n\n## UPPER(\n\n```\nConverts a string to upper case.\n\nUPPER(<str exp>)\n\n<str exp> the text to be converted to upper case\n\nType: character\n\nAll lower-case letters in the <str exp> are converted into upper case by\nthe UPPER( function, which is a synonym for the !( function. See also the\nLOWER( function.\n\nExamples:\n\n1>a='Aa12b'\n1>? UPPER(a)\nAA12B\n1>? UPPER('David!')\nDAVID!\nNote that only the lower-case letters are changed.\n```\n\n## VAL(\n\n```\nConverts a string to its numeric value.\n\nVAL(<str exp>)\n\n<str exp> the string to be evaluated\n\nType: numeric\n\nThis function takes the string <str exp> which is a number, and returns\nit as a number. If the whole string cannot be interpreted as a number, it\ntakes as much of the front part as it can.\n\nExamples:\n\n1>a='123.23'\n1>? VAL(a)\n123.23\n1>? VAL(123.23)\n1. Invalid variable type found when executing an expression.\n1>? VAL('a12')\n0.00\n1>? VAL('12a')\n12.00\n1>? VAL(DATE(2))\nyields the current month as a number.\n```\n\n## WOPEN(\n\n```\nOpens a DOS file for writing.\n\nWOPEN(<str exp>[,<filenum>])\n\n<str exp> the file name\n\nOption:\n\n<filenum> the DOS file number (between 1 and 4)\n\nType: logical\n\nThis function opens the DOS file, and in particular, the sequential file\n<str exp> (or output device, such as COM1:) for writing; if the file does not\nexist, it will be created. If filenum is not given, filenum=1 is assumed. If\nno file extension is given, the extension TXT is used. It returns T if\nsuccessful, F otherwise.\n\nSee the functions ROPEN(, CLOSE(, GET(, PUT(, SEEK(, SSEEK(, WRITE(,\n\nExample:\n1>ok=WOPEN('a:label.prg')\n1>? ok\nT\n\nIn an Shark program, WOPEN( normally appears in an IF command:\n\nIF WOPEN('file',2)\n\n2. Two programs to print a text file, TEST (in the second version it is\nassumed that TEST has no more than 20 lines):\n\nSET WIDTH TO 80\nSET PRINT ON\nIF ROPEN('test')\n? 
line\nENDDO\nok=CLOSE()\nENDIF\n```\n\n## WRAP(\n\n```\nWraps text for output.\n\nWRAP(str var, <num exp>)\n\n<str exp> the string to be wrapped\n<num exp> the line width\n\nType: character\n\nThis function returns one line of text with word wrapping from the string\nin str var; str var now contains what is left of the string. In other\nwords, the string returned will contain as many words as will fit in a line\n(the line width is given by <num exp>). If the whole contents of str var is\none line, str var becomes the blank string (of length 1)\n\nWRAP(, IN(, GET(, and READ( are the only functions that change the\ncontents of the memory variable used as an argument.\n\nExamples:\n1>text='This text line is going to be wrapped in a printed line of width 30.'\n1>? text\nThis text line is going to be wrapped in a printed line of width 30.\n1>temp=text\n1>WRAP(temp,30)\nThis text line is going to be\n1>? temp\nwrapped in a printed line of width 30.\n1>WRAP(temp,30)\nwrapped in a printed line of\n1>? temp\nwidth 30.\n1>WRAP(temp,30)\nwidth 30.\n1>? temp\n1>? LEN(temp)\n1.00\n```\n\n## WRITE(\n\n```\nWrites a new line into the sequential file.\n\nWRITE(str var[,<filenum>])\n\nstr var the line to be written\n\nOption:\n\n<filenum> the DOS file number (between 1 and 4)\n\nType: logical\n\nThis function writes (appends) the contents of the string variable\nstr var to the end of a sequential file opened with the WOPEN( function.\nIf filenum is not given, filenum=1 is assumed. WRITE( returns T if\nsuccessful, F otherwise.\nSee the functions ROPEN(, WOPEN(, READ(, IN(, OUT(, GET(, PUT(, and CLOSE(.\n\nExample:\n\n1>ok=WOPEN('customer.frm',2)\n1>ok=WRITE('FIELDS - cust,orderno,amount',2)\n1>ok=CLOSE(2)\ncreates the CUSTOMER.FRM report form file:\nFIELDS - cust,orderno,amount\nNow the command REPORT CUSTOMER will run the report. This example\nillustrates how a program can be written that creates a report form file.\nIn an Shark program, WRITE( normally appears in an IF command:\nIF WRITE('file',2)\n\n```"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7461294,"math_prob":0.89724463,"size":90035,"snap":"2023-40-2023-50","text_gpt3_token_len":24623,"char_repetition_ratio":0.168031,"word_repetition_ratio":0.26417378,"special_character_ratio":0.28135726,"punctuation_ratio":0.14196177,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9548307,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-05T15:57:12Z\",\"WARC-Record-ID\":\"<urn:uuid:428071d9-6140-473b-9c5e-296289164f24>\",\"Content-Length\":\"137328\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:75753a43-7c60-44c7-bb8a-74235fb84d3a>\",\"WARC-Concurrent-To\":\"<urn:uuid:17e56925-1832-43d4-8a69-36d9ea19c8b5>\",\"WARC-IP-Address\":\"23.253.205.12\",\"WARC-Target-URI\":\"https://www.sharkbase.ca/shark-functions.html\",\"WARC-Payload-Digest\":\"sha1:4WNR232WUATJOJOYNPOC7EUVU2V3STRT\",\"WARC-Block-Digest\":\"sha1:B5GOVCJBY6YKIJ65WH7MNTRZ2VGD5HO4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100551.2_warc_CC-MAIN-20231205140836-20231205170836-00124.warc.gz\"}"} |
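The MASK( entry in the row above describes bitwise AND, OR and XOR between two strings, and its example derives a reverse-video colour by swapping the low and high nibbles of a colour byte. A minimal Python sketch of the same arithmetic may help check the logic — it is not SharkBase code, and the function and variable names here are purely illustrative:

```python
# Bitwise behaviour described in the MASK( entry, applied byte by byte.
def mask(op, s, m):
    ops = {"AND": lambda a, b: a & b, "OR": lambda a, b: a | b, "XOR": lambda a, b: a ^ b}
    return bytes(ops[op](a, b) for a, b in zip(s, m))

print(mask("AND", b"PS", b"PQ"))  # b'PQ' -- a bit survives only where both inputs have a 1

# Reverse-video colour from the MASK( example: swap the low and high nibbles of the attribute byte.
def reverse_colour(colour):
    low = colour % 16    # low-order 4 bits
    high = colour // 16  # high-order 4 bits
    return low * 16 + high

print(reverse_colour(7))    # 112  (0x07 -> 0x70)
print(reverse_colour(112))  # 7    (0x70 -> 0x07)
```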
https://answers.everydaycalculation.com/subtract-fractions/50-35-minus-30-10 | [
"Solutions by everydaycalculation.com\n\n## Subtract 30/10 from 50/35\n\n1st number: 1 15/35, 2nd number: 3 0/10\n\n50/35 - 30/10 is -11/7.\n\n#### Steps for subtracting fractions\n\n1. Find the least common denominator or LCM of the two denominators:\nLCM of 35 and 10 is 70\n2. For the 1st fraction, since 35 × 2 = 70,\n50/35 = 50 × 2/35 × 2 = 100/70\n3. Likewise, for the 2nd fraction, since 10 × 7 = 70,\n30/10 = 30 × 7/10 × 7 = 210/70\n4. Subtract the two fractions:\n100/70 - 210/70 = 100 - 210/70 = -110/70\n5. After reducing the fraction, the answer is -11/7\n\n#### Subtract Fractions Calculator\n\n-\n\nUse fraction calculator with our all-in-one calculator app: Download for Android, Download for iOS"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6325994,"math_prob":0.9959787,"size":303,"snap":"2019-35-2019-39","text_gpt3_token_len":116,"char_repetition_ratio":0.18060201,"word_repetition_ratio":0.0,"special_character_ratio":0.46864685,"punctuation_ratio":0.12857144,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99940753,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-15T07:46:36Z\",\"WARC-Record-ID\":\"<urn:uuid:6f896ef7-2e50-438c-ac8d-c0ada81358c8>\",\"Content-Length\":\"7709\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:30cbc126-46a5-4c4b-b6a8-b316c649fde5>\",\"WARC-Concurrent-To\":\"<urn:uuid:f3e4a518-2eb1-4ae0-afab-f0314ffa937f>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/subtract-fractions/50-35-minus-30-10\",\"WARC-Payload-Digest\":\"sha1:T7OKU6YPX5EZJVCY5HBRC3F3RNYR73CI\",\"WARC-Block-Digest\":\"sha1:6TGPZSNYJ7GLPJR4HFGSZPHEQJRHYPCT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514570830.42_warc_CC-MAIN-20190915072355-20190915094355-00095.warc.gz\"}"} |
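A quick cross-check of the subtraction worked out in the row above, using Python's standard `fractions` module (this snippet is not part of the original page):

```python
from fractions import Fraction
from math import lcm

a, b = Fraction(50, 35), Fraction(30, 10)

# Same steps as the worked solution: common denominator, subtract, reduce.
common = lcm(35, 10)  # 70
diff = Fraction(50 * (common // 35), common) - Fraction(30 * (common // 10), common)
print(diff)    # -11/7
print(a - b)   # -11/7, matching the page's answer
```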
https://discuss.codechef.com/t/codechef-is-broken/75632 | [
"",
null,
"# CodeChef is broken\n\nJust try to run this code in the Problem: HS08TEST\n\nIt gives a runtime error of NZEC but when I submitted it. CodeChef accepted the code. CodeChef is broken. This wasted my 2 days and now I won’t be using this platform.\n\nimport java.util.;\nimport java.lang.\n;\nimport java.io.;\nimport java.text.\n;\n\n/* Name of the class has to be “Main” only if the class is public. */\nclass Codechef\n{\npublic static void main (String[] args) throws java.lang.Exception\n{\nint amount;\ndouble balance;\nScanner scanner = new Scanner(System.in);\namount = scanner.nextInt();\nbalance = scanner.nextDouble();\nDecimalFormat df = new DecimalFormat(\"###.00\");\nif((amount>0 && amount<=2000) && (balance>=0 && balance<=2000)){\nif(amount%5==0 && amount<=balance && (amount+0.5<=balance)){\nbalance = balance-amount;\nbalance = balance-0.50;\nSystem.out.println(df.format(balance));\n}\nelse{\nSystem.out.println(df.format(balance));\n}\n}else{\nSystem.out.println(df.format(balance));\n}\n\n``````}\n``````\n\n}\n\nRunning without input it does give random error sometimes, and that is not a problem ,you have to give custom input before you run on codechef ide.\n\n1 Like\n\nbhai, if you run this code on codechef ide then you have to give custom inputs, otherwise it will give you NZEC, i also wasted so many days figuring this out.\nso, whenever you run any code, and if it requires any input(like amount and balance is required in your code) then do give custom input then run, else directly try to submit.\n\nprovide custom input before running the program."
] | [
null,
"https://s3.amazonaws.com/discourseproduction/original/3X/7/f/7ffd6e5e45912aba9f6a1a33447d6baae049de81.svg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.54297686,"math_prob":0.8135703,"size":968,"snap":"2020-34-2020-40","text_gpt3_token_len":237,"char_repetition_ratio":0.1493776,"word_repetition_ratio":0.0,"special_character_ratio":0.30061984,"punctuation_ratio":0.23232323,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9766128,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-30T16:32:52Z\",\"WARC-Record-ID\":\"<urn:uuid:8da62f43-1098-4b6f-9069-a1decd569044>\",\"Content-Length\":\"18614\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d359378d-bca6-4e69-8909-783bdcc062e4>\",\"WARC-Concurrent-To\":\"<urn:uuid:acef7c43-5ae5-4b53-89ca-0420e3d6692f>\",\"WARC-IP-Address\":\"18.213.158.143\",\"WARC-Target-URI\":\"https://discuss.codechef.com/t/codechef-is-broken/75632\",\"WARC-Payload-Digest\":\"sha1:LKKOGDONRLOHZXTDD66RZRZ6RUGKNP2P\",\"WARC-Block-Digest\":\"sha1:PGCTYNT3VGGAASTV3ZFB7FSVWXUT4X2A\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600402127075.68_warc_CC-MAIN-20200930141310-20200930171310-00396.warc.gz\"}"} |
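For readers following the thread above, here is a minimal Python sketch of the same withdrawal logic — an illustration, not the poster's code; it assumes the input format the Java version reads (a withdrawal amount and an account balance on one line) — and it fails with a runtime error in an online IDE in exactly the same way when no input is supplied:

```python
# Reads "amount balance" from standard input; with no input, the read itself
# raises EOFError and the process exits non-zero -- the same reason the Java
# version shows NZEC when run in the IDE without custom input.
amount_str, balance_str = input().split()
amount, balance = int(amount_str), float(balance_str)

if amount % 5 == 0 and amount + 0.50 <= balance:
    balance -= amount + 0.50  # successful withdrawal plus the 0.50 bank charge
print(f"{balance:.2f}")
```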
https://www.helpteaching.com/questions/115558/what-fraction-is-equal-to-3x5x4-x2 | [
"##### Question Info\n\nThis question is public and is used in 192 tests or worksheets.\n\nType: Multiple-Choice\nCategory: Fractions and Ratios\nScore: 9\nAuthor: jlasbell\n\nView all questions by jlasbell.\n\n# Fractions and Ratios Question\n\nView this question.\n\nAdd this question to a group or test by clicking the appropriate button below.\n\n## Grade 7 Fractions and Ratios\n\nWhat fraction is equal to $((3x)/5)/(x/4 +x/2) ?$\n1. $x^2/5$\n2. $(9x^2)/20$\n3. $4/5$\n4. $9/5$\nYou need to have at least 5 reputation to vote a question down. Learn How To Earn Badges."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7413716,"math_prob":0.68358326,"size":527,"snap":"2019-51-2020-05","text_gpt3_token_len":150,"char_repetition_ratio":0.17590822,"word_repetition_ratio":0.33766234,"special_character_ratio":0.26565465,"punctuation_ratio":0.115384616,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97613543,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-10T22:41:31Z\",\"WARC-Record-ID\":\"<urn:uuid:67b8e8ef-e20b-4914-8c69-af40cac79a89>\",\"Content-Length\":\"16941\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2bf7d9aa-cf9b-4c02-b77b-26c7083a15c1>\",\"WARC-Concurrent-To\":\"<urn:uuid:65dcbe09-1fbc-49d0-8706-83aa0f5e9547>\",\"WARC-IP-Address\":\"52.6.170.220\",\"WARC-Target-URI\":\"https://www.helpteaching.com/questions/115558/what-fraction-is-equal-to-3x5x4-x2\",\"WARC-Payload-Digest\":\"sha1:ESHF6DUO2MLQDLJ435HGUPIDNQZPU4IA\",\"WARC-Block-Digest\":\"sha1:SP3EJLRZ2URO35CGVEVLBQ2AD6N2MYFR\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540529006.88_warc_CC-MAIN-20191210205200-20191210233200-00502.warc.gz\"}"} |
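A short check of the algebra in the question above (not part of the original page), using SymPy to simplify the compound fraction:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
expr = (3 * x / 5) / (x / 4 + x / 2)
print(sp.simplify(expr))  # 4/5 -- the x cancels, so the value is constant
```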
https://www.gradesaver.com/textbooks/math/algebra/elementary-linear-algebra-7th-edition/chapter-4-vector-spaces-4-3-subspaces-of-vector-spaces-4-3-exercises-page-167/5 | [
"## Elementary Linear Algebra 7th Edition\n\nSince the sum of continuous functions is continuous and the multiplication of a continuous function by a scalar is continuous, then $W$ the set of all functions that are continuous on $[-1,1]$ is a vector subspace of the set of all functions that are integrable on $[-1,1]$."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9308023,"math_prob":0.99851465,"size":328,"snap":"2019-51-2020-05","text_gpt3_token_len":80,"char_repetition_ratio":0.19444445,"word_repetition_ratio":0.14545454,"special_character_ratio":0.24085365,"punctuation_ratio":0.07575758,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9830679,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-12T22:03:59Z\",\"WARC-Record-ID\":\"<urn:uuid:ff38646a-4738-47e2-9aad-51bd32468e42>\",\"Content-Length\":\"68308\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cfa03205-a948-4f4a-b73e-83c48a1f7b3d>\",\"WARC-Concurrent-To\":\"<urn:uuid:f64fc12a-2ec8-4a32-a164-b5d234841617>\",\"WARC-IP-Address\":\"52.73.200.166\",\"WARC-Target-URI\":\"https://www.gradesaver.com/textbooks/math/algebra/elementary-linear-algebra-7th-edition/chapter-4-vector-spaces-4-3-subspaces-of-vector-spaces-4-3-exercises-page-167/5\",\"WARC-Payload-Digest\":\"sha1:VNRBX6PELJZAEDLMNXBA7TVD2CUFQM3R\",\"WARC-Block-Digest\":\"sha1:5DGO6LPZNI2DPVQBWST5SRBKVASHDUJC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540547165.98_warc_CC-MAIN-20191212205036-20191212233036-00453.warc.gz\"}"} |
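The subspace test being invoked in the row above can be written out explicitly; this is the standard statement, added here only for reference:

```latex
% Standard subspace test, spelled out for W (continuous) inside V (integrable) on [-1,1]:
W=\{\,f:[-1,1]\to\mathbb{R}\ \mid\ f\ \text{continuous}\,\}\subseteq
V=\{\,f:[-1,1]\to\mathbb{R}\ \mid\ f\ \text{integrable}\,\},
\qquad 0\in W,
\qquad f,g\in W\ \Rightarrow\ f+g\in W,
\qquad c\in\mathbb{R},\ f\in W\ \Rightarrow\ cf\in W .
```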
https://electricalbaba.com/significance-of-burden-in-potential-transformer/ | [
"# Significance of Burden in Potential Transformer\n\nBefore going into the impact of Burden on the performance of a Potential transformer, we will discuss about Burden. The Burden of an Instrument Transformer is the rated Volt-Ampere loading which is permissible without errors exceeding the limits for a particular class of Instrument Transformer.\n\nBasically, there are two Classes of an Instrument Transformer, one is Protection Class and another one is Metering Class. Protection Class (PS) is one which is used for the protection scheme where as Metering Class is used for the purpose of Metering.\n\nNow, we will discuss about errors in a Potential Transformer. There are two types of error in a Potential Transformer, Ratio Error and Phase Angle Error.\n\n### Ratio Error:\n\nTransformation Ratio of a Potential Transformer varies with the operating condition and therefore the voltage induced in the secondary of a Potential Transformer will also vary. The error in secondary voltage of a Potential Transformer is defined as\n\n% Ration Error = (Kn-R)×100/R\n\nWhere Kn = Rated primary voltage / Rated secondary voltage\n\nR = Primary Voltage / Secondary Voltage\n\n### Phase Angle Error:\n\nIn an ideal Potential Transformer, there should not be any phase difference between the primary voltage and the secondary voltage reversed. But in a practical Potential Transformer there exists a phase difference between them.\n\nThe important thing to note that, Ratio error is of importance when we use PT for Protection purpose but when we use PT for Metering then both Ration and Phase Angle Error are important to consider.\n\nNow we are on the stage to discuss the impact of Burden on the performance of a Potential Transformer.\n\n### Impact of Burden on the Performance of PT:\n\nThe effects of Burden on the performance of a Potential Transformer are as follows:\n\nIf we increase the Burden, the secondary as well as primary current will increase but as the primary voltage will remain constant (because primary of a PT is connected to the line), the secondary voltage will decrease. Therefore, the voltage ratio will increase. Thus we see that increasing the Burden, increases the Error in the Ratio. Also, as the power factor of the Burden reduces the transformation ratio of Potential Transformer increases.\n\nPT accuracy performance changes linearly with burden and can be plotted as\n\nHere RCF is Ratio Correction Factor, which is defined as defined as the factor that when multiplied by the potential transformer output will yield the correct result.\n\nMathematically,\n\nRCF = Transformation Ratio/Nominal Ratio =R / Kn\n\nThe transformer nameplates show a “marked ratio” usually an even number, such as 20 to 1. The actual ratio of primary to secondary quantity may be slightly higher or lower than the marked value by an amount 1 called ratio error. For example, if the actual ratio is 20.2 to 1, then the RCF is 1.01 and the ratio error is 1%.\n\nRCF = 20.2/20 = 1.01\n\nSo, % Ratio Error = (RCF-1)×100 = 1%\n\n### 4 thoughts on “Significance of Burden in Potential Transformer”\n\n1.",
null,
"Thanks so much! The term \"burden\" is not that common in latam, so this was very handy!\n\n2.",
null,
"RCF IS Kn/R, I THINK, INSTEAD OF R/Kn\n\n3.",
null,
"4.",
null,
""
] | [
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2050%2050'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2050%2050'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2050%2050'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2050%2050'%3E%3C/svg%3E",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9037011,"math_prob":0.97085047,"size":3087,"snap":"2023-14-2023-23","text_gpt3_token_len":660,"char_repetition_ratio":0.17061304,"word_repetition_ratio":0.07321773,"special_character_ratio":0.20861678,"punctuation_ratio":0.08900524,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99319094,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-01T23:16:28Z\",\"WARC-Record-ID\":\"<urn:uuid:955fb7ab-283a-420b-b184-75044e93cafc>\",\"Content-Length\":\"68813\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:865fd9d4-9a52-41f7-baca-28c9e189f72f>\",\"WARC-Concurrent-To\":\"<urn:uuid:834db7a7-058d-40cb-8bc3-98ebccdcf302>\",\"WARC-IP-Address\":\"104.21.74.120\",\"WARC-Target-URI\":\"https://electricalbaba.com/significance-of-burden-in-potential-transformer/\",\"WARC-Payload-Digest\":\"sha1:4MJWUG63RQCK76OLKHLYHDCQ26VR6O5Z\",\"WARC-Block-Digest\":\"sha1:33NJ2YCHOL3DDQNSJ3CQ44GKRLANOS2L\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224648209.30_warc_CC-MAIN-20230601211701-20230602001701-00480.warc.gz\"}"} |
https://mafiadoc.com/measuring-market-power-in-the-greek-food-and-agecon-search_5ba3dd1b097c4749358b46ba.html | [
"## Measuring market power in the Greek food and ... - AgEcon Search\n\nAug 30, 2011 - Abstract: This paper measures the degree of market power of the Greek food and beverages manufacturing industry over the period ...\n\nMeasuring market power in the Greek food and beverages manufacturing industry Anthony N. Rezitis 1 Maria A. Kalantzi 2 1\n\nAssociate Professor, Department of Business Administration of Food and Agricultural Enterprises, University Western Greece, G. Seferi Str. 2, 30 100 Agrinio, Greece, E-mail: [email protected] 2 PhD Candidate, Department of Business Administration of Food and Agricultural Enterprises, University Western Greece, G. Seferi Str. 2, 30 100 Agrinio, Greece, E-mail: [email protected]\n\nPaper prepared for presentation at the EAAE 2011 Congress Change and Uncertainty Challenges for Agriculture, Food and Natural Resources August 30 to September 2, 2011 ETH Zurich, Zurich, Switzerland\n\nCopyright 2011 by [Rezitis Anthony and Kalantzi Maria]. All rights reserved. Readers may make verbatim copies of this document for non-commercial purposes by any means, provided that this copyright notice appears on all such copies. Abstract: This paper measures the degree of market power of the Greek food and beverages manufacturing industry over the period 1983–2007 at the three-digit SIC level. The present study also estimates the “deadweight” loss and the reduction of consumers’ income due to the possible existence of market power in the Greek food and beverages manufacturing industry. Based on Bresnahan’s (1989) conjectural variation model, three different approaches are used to investigate competitive conditions of the Greek food and beverages manufacturing industry. The first approach assesses the extent of market power of the whole industry over the period 1983–2007; the second approach tests the degree of market power in each one of the nine sectors of the industry over the whole period, i.e. 1983–2007; and the third one estimates the extent of market power for the whole Greek food and beverages manufacturing industry for specific sub-periods of the period 1983–2007. The methodology of Dickson and Yu (1989) is adopted to measure the welfare losses. The empirical results indicate the presence of some degree of market power in the whole Greek food and beverages manufacturing industry as well as in each one sector of the industry during the period 1983– 2007 and, as a result, the existence of welfare losses. In addition, the empirical findings support the presence of some degree of market power for each sub-period of the period 1983– 2007 in the whole Greek food and beverages manufacturing industry and the existence of welfare losses. Keywords: Conjectural variation, Greek food and beverages manufacturing industry, Market power; Welfare losses JEL classification: D43, D60, L66, Q10\n\n1\n\n2\n\nvariation model of competition, a manufacturing industry is considered in which firms face a demand function p ≡ p (Y , z ) , with Y = ∑ yi , and i = 1,..., n where p is the output price, yi i\n\nrepresents the quantity supplied by firm i , n is the number of firms and z is a vector of exogenous factors affecting the demand curve. Thus, the profit maximization problem of firm i is given as: (1) max π i = p (Y , z ) ⋅ yi − C ( yi , wi )\n\nwhere C ( yi , wi ) is the cost function of firm i and wi is a vector of input prices of firm i. The first order condition of the profit maximization problem (1) is given as: ∂π i ∂C ∂p ∂Y = p+ ⋅ yi − i = 0 (2) ∂yi ∂Y ∂yi ∂yi Rearranging Eq. 
(2), the following expression is obtained: ⎛ ∂p Y ⎞ ⎛ ∂Y yi ⎞ p − MCi θ (3) = −⎜ ⋅ ⎟⋅⎜ ⋅ ⎟=− i p h ⎝ ∂Y p ⎠ ⎝ ∂yi Y ⎠\n\nwhere MCi is the marginal cost of firm i, h ≡ ( ∂Y ∂p ) (Y p ) < 0 is the price elasticity of output demand and θ i ≡ ( ∂Y ∂yi ) (Y yi ) is the conjectural variation elasticity of firm i which represents the reaction of total industry to the change of output of firm i and it is a measure of competition. When θi takes the value of zero for all firms, then the industry is under conditions of perfect competition, while the value of one indicates a monopolistic market. Values of θi between zero and one support the presence of Cournot oligopoly in the food and beverages manufacturing industry. According to Bresnahan (1989), multiplying equation (3) by yi Ci , summing over i and rearranging, the following expression, i.e. the supply function, is obtained: f ⎞ ⎛ (4) S y ⎜1 + ⎟ = MC h⎠ ⎝ where S y is the ratio of aggregate revenue to total cost, MC is the industry-level (weighted) f p − MC 1 . According to Cowling and Waterson (1976), the average = h p degree of competition parameter f ( with 0 ≤ f ≤ 1) measures the average deviation of\n\nmarginal cost and −\n\nfirms’ behavior from the monopolistic case and, if properly identified in the estimation process, expresses the true degree of market power exerted by firms, with f = 1 indicating monopolistic market power, 0 < f < 1 Cournot oligopoly, and f = 0 perfect competition. Furthermore, a specified cost function, i.e. C ( yi , wi ) and a cost share equation for labor are added to the empirical system in order to improve the precision of the estimates. Following the methodology of Dickson and Yu (1989), the industry demand curve is h represented by Y = 1 p , where h is the absolute value of the demand elasticity, h. In\n\n( )\n\nε\n\naddition, the weighted industry marginal cost curve MC is presented by Y = MC , where ε is the inverse of the weighted industry marginal cost elasticity. Based on the Lerner\n\n1\n\nL = − f h , where\n\nL is the Lerner index which is the relative mark-up or the price-cost margin. 3\n\n(\n\nindex, L = po − MC\n\n)\n\npo = f h , the oligopoly price ( po ) and the oligopoly output (Yo ) are\n\ngiven as:\n\n⎛ h ⎞ 1ε po = ⎜⎜ ⎟⎟ Yo h f − ⎝ ⎠\n\n(5) hε\n\n⎛ h − f ⎞ h +ε h (6) Yo = 1 po = ⎜⎜ ⎟⎟ h ⎝ ⎠ The net loss of welfare (deadweight loss or Harberger loss) due to the existence of oligopoly is depicted by: 1\n\n1 h WL = ∫ ⎡(1 Y ) − Y 1 ε ⎤dY ⎣ ⎦ Y H\n\n(7)\n\no\n\nThe reduction of consumers’ welfare due to the transfer of income from consumers to the monopolist in the case of an oligopoly (Tullock loss) is presented by: 1 h WLT = WLH + ⎡(1 Yo ) − Yo1 ε ⎤ Yo (8) ⎣ ⎦ 3. Model Formulation and Data Variables The translog specification for the total cost function with two inputs, one output and symmetry and linear homogeneity restrictions imposed is given below: ⎛ C ln ⎜ t ⎜W ⎝ k ,t\n\n⎞ 1 ⎟⎟ = a 0 + a y ln Yt + a yy ln Yt 2 ⎠\n\n⎛W 1 g ll ln ⎜ l ,t ⎜W 2 ⎝ k ,t\n\n(\n\n)\n\n2\n\n⎛W + g ly ln Yt ln ⎜ l ,t ⎜W ⎝ k ,t\n\n2\n\n⎞ ⎛ Wl ,t ⎟⎟ + xt Τ + xty Τ ln Yt + xtl Τ ln ⎜⎜ ⎠ ⎝ W k ,t\n\n⎞ ⎛ Wl ,t ⎟⎟ + a l ln ⎜⎜ ⎠ ⎝ W k ,t\n\n⎞ ⎟⎟ + ⎠\n\n(9)\n\n⎞ ⎟⎟ ⎠\n\nwhere Ct represents the industry-level cost, Wl,t, Wk,t correspond to the two input prices, i.e. labor and capital respectively; Yt corresponds to the output quantity of industry; T is the time trend. The convention for the translog function is followed by letting variables with upper bars, i.e. 
input prices and output, be normalized by their means to avoid possible multicolinearity. The demand function is denoted by: ⎛ p ⎞ ⎛ ⎞ It ln Yt = a + h ln ⎜ t × 100 ⎟ + z159 ln ⎜ × 100 ⎟ + ⎝ bt ⎠ ⎝ bt × P O Pt ⎠\n\n⎛ ⎛ ⎞⎞ It z s ⎜⎜ D S s × ln ⎜ × 100 ⎟ ⎟⎟ s = 151 ⎝ bt × P O Pt ⎠⎠ ⎝ 158\n\n(10)\n\nwhere h is the industry demand elasticity, pt is the output price, bt is a price deflator, I t is the Gross National Product, POPt is the population of Greece, z159 is the income demand elasticity of sector 159 where sector 159 is the manufacture of beverages as referred to Table 1 and DSs (s=151,…,158) is a dummy variable, which is set to one for the s sector and zero otherwise in order to account for possible differences in the income demand elasticity among the sectors of the food and beverages manufacturing industry. Note that the s sectors are defined in Table 1. Furthermore, zs (s=151,…,158) refers to the change in the income demand elasticity of s sector (s=151,…,158) with respect to the sector 159, i.e. the manufacture of beverages. Applying Shephard’s Lemma to the cost function (9), the cost share equation for labor is given as: ⎛W ⎞ Sl ,t = al + gly ln Y t + gll ln ⎜ l ,t ⎟ + xtlT (11) ⎜W ⎟ ⎝ k ,t ⎠ 4\n\nand the cost share equation for capital is the following: ⎛W ⎞ S k ,t = ak + g ky ln Yt + g kl ln ⎜ l ,t ⎟ + xtk T (12) ⎜W ⎟ ⎝ k ,t ⎠ Note that since the sum of the dependent variables over the two cost share equations (11) and (12) always equals 1 then only one factor share equation is linearly independent. As a result, it is necessary to omit one equation (equation for capital) to avoid singularity of the estimated covariance matrix. Three different approaches are used in the present study to investigate competitive conditions in the Greek food and beverages manufacturing industry. The first approach is based on the estimated system of equations (9), (10), (11) and the following supply function (13) where a unique estimate of the degree of market power ( f15 ) is obtained for the whole industry over the period 1983–2007:\n\n⎛W f ⎞ ⎛ SY ,t ⎜1 + 15 ⎟ = a y + a yy ln Yt + gly ln ⎜ l ,t ⎜W h ⎠ ⎝ ⎝ k ,t where SY ,t\n\n⎞ (13) ⎟⎟ + xtyT ⎠ is the ratio of the industry’s aggregate revenue to total cost and f15 is the\n\nconjectural variation elasticity for the whole food and beverages manufacturing industry. The second approach investigates two related estimated equation systems. The first estimated system includes the equations (9), (10), (11) and the following supply function (14) where the f parameter is allowed to change among the sectors of the food and beverages manufacturing industry while it is unchanged over time: 158 ⎛W ⎞ f f SY ,t + 159 SY ,t + ∑ s ( DS s × SY ,t ) = a y + a yy ln Yt + gly ln ⎜ l ,t ⎟ + xtyT (14) ⎜W ⎟ h s =151 h ⎝ k ,t ⎠ where f159 is the conjectural variation elasticity of sector 159, i.e. the manufacture of beverages and f s ( s = 151,...,158 ) refers to the change in the conjectural variation elasticity of s sector ( s = 151,...,158 ) with respect to sector 159. In addition, DS s\n\n( s = 151,...,158)\n\nis a\n\ndummy variable, which is set to one for the s sector and zero otherwise (Table 1). The empirical results of the aforementioned estimated system, provided in the next section, indicate that all of the sectors have the same conjectural variation elasticity, f, except that of sector 157, i.e. manufacture of prepared animal feeds. 
Thus, the supply function (14) is respecified and a slightly modified system (second estimated system) is estimated under the second approach. More analytically, the second estimated system of the second approach contains the equations (9), (10), (11) and the following supply function (15) where the estimate of the degree of market power ( f16 ) is the same for all sectors of the food and beverages manufacturing industry except that of sector 157 (manufacture of prepared animal feeds ): ⎛W ⎞ f f SY ,t + 16 SY ,t + 157 ( DS157 × SY ,t ) = a y + a yy ln Yt + gly ln ⎜ l ,t ⎟ + xtyT (15) ⎜W ⎟ h h k t , ⎝ ⎠ where f16 refers to the conjectural variation elasticity which is the same across all sectors of the Greek food and beverages manufacturing industry except that of sector 157, f157 is the change in the conjectural variation elasticity of the sector 157 with respect to the other sectors of manufacturing, i.e. f16 , and DS157 is the dummy variable corresponding to sector 157.\n\n5\n\nThe third approach is based on the estimated system of equations (9), (10), (11) and the following supply function (16) which allows f to change for each sub-period of the period 1983–2007 but to remain the same among sectors: 8 ⎛W ⎞ f f SY ,t + 1 SY ,t + ∑ t ( DTt × SY ,t ) = a y + a yy ln Yt + gly ln ⎜ l ,t ⎟ + xtyT (16) ⎜W ⎟ h t =2 h k t , ⎝ ⎠ where f1 refers to the conjectural variation elasticity of the sub-period t=1, i.e. the years 1983–1985, and DTt ( t = 2,...,8 ) is a dummy variable, which is set to one for the sub-period t\n\n( t = 2,...,8 ) refers to the change of the conjectural ( t = 2,...,8 ) with respect to the sub-period t = 1 , i.e. 1983–\n\nand zero otherwise.2 Note also that ft variation elasticity of sub-period t\n\n1985. The sample comprised annual data for the period 1983–2007 for nine sectors at the three-digit SIC level of the Greek food and beverages manufacturing industry, i.e. SIC: 151159, based on the Statistical Nomenclature of Economic Activity of 2003 (STAKOD_2003). The data used in the estimation was obtained by the Annual National Industrial Survey of the Hellenic Statistical Authority (EL.STAT.) The nine sectors of the Greek food and beverages manufacturing industry are presented in Table 1. 4. Empirical Results The empirical results of the three different approaches used to measure the degree of market power in the Greek food and beverages manufacturing industry are presented in Table 2. The equation systems of the three different approaches are estimated using the non-linear three-stage least square (NL3SLS) estimation technique. The econometric analysis is conducted using Shazam 9.0 software. The empirical findings of the three different approaches are plausible and consistent with economic theory in terms of the signs and the magnitudes of the coefficients and indicate that the translog cost function satisfies the restrictions of monotonicity and concavity at the sample mean (Table 2).3 According to the first approach, the empirical results indicate that all the estimated coefficients of the translog cost function are statistically significant at any conventional level of significance (Table 2, First Approach). Furthermore, the results support that the cost shares of labor ( al = 0.7779 ) and capital ( ak = 0.2222 ) are statistically\n\nsignificant at any conventional level of significance. These labor and capital share estimates are similar to the corresponding mean labor and capital cost shares calculated from the data, i.e. 0.7071 and 0.2929 respectively. 
The results of the supply function reveal that the conjectural variation elasticity ( f15 = 0.7104 ) is statistically significant at any conventional level of significance and its value is ranged between zero and one. These results imply the presence of some degree of market power in the Greek food and beverages manufacturing industry over the period 1983–2007. The estimated results of the demand function indicate that first, the price elasticity of output demand ( h = -0.7130 ) is statistically significant at any conventional level of significance; second, the income demand elasticity of sector 159 2\n\nIt is noted that the sub-period 2 (t=2) corresponds to the period 1986–1988, the sub-period 3 (t=3) to the period 1989–1991, the sub-period 4 (t=4) to the period 1992–1994, the sub-period 5 (t=5) to the period 1995– 1997, the sub-period 6 (t=6) to the period 1998–2000, the sub-period 7 (t=7) to the period 2001–2003 and the sub-period 8 (t=8 ) to the period 2004–2007. 3 The monotonicity restriction at the sample mean implies that al>0 and ak>0, while the concavity restriction implies that the Hessian matrix is negative semidefinite, i.e. all Allen-Uzawa own-partial elasticities of substitution are negative at the sample mean.\n\n6\n\n( z159 = 0.3969 ) ,\n\ni.e. the manufacture of beverages, is statistically significant; and third the\n\nchange of the income demand elasticity of each one of the sectors of the food and beverages manufacturing industry, zs ( s = 151,...,158 ) , with respect to sector 159, i.e. the manufacture of beverages, is statistically significant at any conventional level of significance. Furthermore, the results regarding scale economies imply the presence of increasing returns to scale. Finally, relative to the whole Greek food and beverages manufacturing industry, the Harberger loss is about €96.53 million in terms of 2007 value added (or 2.74% of the 2007 value added) whereas the Tullock loss is about €176.51 million in terms of 2007 value added (or 5.01% of the 2007 value added) (Table 3). The empirical findings of the first estimated system of the second approach show that most of the estimated parameters of the translog cost function are statistically significant at any conventional level of significance (Table 2, Second Approach, First Estimated System). In addition, the results support that the cost shares of labor ( al = 0.7706 ) and capital\n\n( ak = 0.2294 ) are statistically significant at any conventional level of significance. Moreover, the results of the supply function show that only sector 157, i.e. the manufacture of prepared animal feeds, has a statistically different degree of market power from sector 159, i.e. the manufacture of beverages, since only the change in the conjectural variation elasticity of sector 157 ( f157 ) with respect to sector 159 is statistically significant. These results are reinforced by the fact that the Wald test does not reject the null hypothesis which supports that all sectors, except sector 157, have the same degree of market power as that of sector 159.4 The estimated results of the demand function indicate that first, the price elasticity of output demand ( h = -0.6782 ) is statistically significant at any conventional level of significance; second, the income demand elasticity of sector 159\n\n( z159 = 0.4029 ) ,\n\ni.e. 
the\n\nmanufacture of beverages, is statistically significant; and third the change in the income demand elasticity of each one of the sectors of the food and beverages manufacturing industry, zs ( s = 151,...,158 ) , with respect to sector 159, i.e. the manufacture of beverages, is statistically significant at any conventional level of significance. Furthermore, the results regarding scale economies imply the presence of increasing returns to scale. According to the empirical results of the second estimated system of the second approach, all the estimated coefficients of the translog cost function are statistically significant at any conventional level of significance (Table 2, Second Approach, Second Estimated System). Furthermore, the results support that the cost shares of labor ( al = 0.7782 ) and capital\n\n( ak = 0.2218)\n\nare statistically significant at any conventional level of\n\nsignificance. Moreover, the results of the supply function reveal that first, the conjectural variation elasticity of all the sectors except sector 157, i.e. f16 , is statistically significant at any conventional level of significance; and second, the change of the conjectural variation elasticity of sector 157 ( f157 = −0.0004 ) with respect to the conjectural variation elasticity of all the other sectors significance.\n\nNote\n\n( f16 = 0.7714 ) that\n\nthe\n\nis statistically significant at any conventional level of\n\nconjectural\n\nvariation\n\nelasticity\n\nof\n\nsector\n\n157\n\nis\n\n4\n\nWald test: the null hypothesis is f 151 = f 152 = f 153 = f 154 = f 155 = f 156 = f 158 = 0 whereas the alternative is that at least one of the aforementioned sectors is different from zero. The Wald test used follows\n\n( ) distribution with 7 degrees of freedom. The t-statistic and the p-value are 4.30 and 0.7452\n\nthe chi-squared χ\n\n2\n\nrespectively.\n\n7\n\n0.7710 ( f16 + f157 = 0.7710 ) . In addition, both conjectural variation elasticities of all sectors of the industry except sector 157 ( f16 = 0.7714 ) and that of sector 157 ( f16 + f157 = 0.7710 ) , i.e. sector 157 the manufacture of prepared animal feeds, are ranged between zero and one which indicates the presence of some degree of market power for each sector of the Greek food and beverages manufacturing industry over the period 1983–2007. The empirical results of the demand equation reveal that, first, the price elasticity of output demand ( h = -0.7742 ) is statistically significant at any conventional level of significance; second, the income demand elasticity of sector 159\n\n( z159 = 0.3701) ,\n\ni.e. the\n\nmanufacture of beverages, is statistically significant; and third the change in the income demand elasticity of each one of the sectors of the food and beverages manufacturing industry, zs ( s = 151,...,158 ) , with respect to sector 159, i.e. the manufacture of beverages, is statistically significant at any conventional level of significance. Furthermore, the results regarding scale economies imply the presence of increasing returns to scale. Finally, the Harberger loss is about €1.47 million in terms of 2007 value added (or 2.43% of the 2007 value added) for sector 157, i.e. the manufacture of prepared animal feeds, and €87.26 million (or 2.52% of the 2007 value added) for all other sectors, i.e. 
sectors 151–156 & 158–159, whereas the Tullock loss is about €2.58 million in terms of 2007 value added (or 4.25% of the 2007 value added) for sector 157 and €151.31 million (or 4.37% of the 2007 value added) for all other sectors of the Greek food and beverages manufacturing industry, i.e. sectors 151–156 & 158–159, (Table 3). The empirical findings of the third approach (Table 2, Third Approach) show that all the estimated coefficients of the translog cost function are statistically significant at any conventional level of significance. In addition, the cost shares of labor ( al = 0.7817 ) and capital ( ak = 0.2183) are statistically significant at any conventional level of significance. Furthermore, the empirical findings of the supply function indicate that first, the conjectural variation elasticity ( f1 ) of the sub-period t=1, i.e. years 1983–1985, is statistically significant at any conventional level of significance and second, the change of the conjectural variation elasticity of each sub-period ( f t , t = 2,...,8 ) with respect to the conjectural variation elasticity of the sub-period t = 1 , f1 , is statistically significant.5 Moreover, the empirical findings indicate that the conjectural variation elasticity of each sub-period is ranged between zero and one implying the presence of some degree of market power for each sub-period of the period 1983–2007 for the whole Greek food and beverages manufacturing industry. Figure 1 depicts the degree of market power for the whole Greek food and beverages manufacturing industry for each sub-period of the period 1983–2007. According to Figure 1, the sub-period t=1, i.e. the years 1983–1985, presents the highest conjectural variation elasticity ( f1 = 0.6612 ) and as a result the highest degree of market power whereas the sub-period t=8, i.e. the years 2004– 2007, shows the lowest conjectural variation elasticity ( f1 + f8 = 0.6596 ) and as a result the 5\n\nMore specific, the conjectural variation approach of the sub-period 1983–1985 is 0.6612 (f1=0.6612), sub-period 1986–1988 is 0.6609 (f1+f2=0.6609), of the sub-period 1989–1991 is 0.6605 (f1+f3=0.6605), sub-period 1992–1994 is 0.6603 (f1+f4=0.6603), of the sub-period 1995–1997 is 0.6600 (f1+f5=0.6600), sub-period 1998–2000 is 0.6598 (f1+f6=0.6598), of the sub-period 2001–2003 is 0.6600 (f1+f7=0.6600), sub-period 2004–2007 is 0.6596 (f1+f8=0.6596).\n\nof of of of\n\nthe the the the\n\n8\n\nlowest degree of market power. In general, Figure 1 shows that the degree of market power gradually decreases during the sub-periods under consideration except that of the sub-period t=7, i.e. the years 2001–2003, where an increase in the degree of market power occurred ( f1 + f 7 = 0.6600 ) . In particular, while the conjectural variation elasticity gradually decreases and takes the value of 0.6598 ( f1 + f 6 = 0.6598 ) for the sub-period t=6, i.e. the years 1998–2000, it becomes slightly higher and takes the value of 0.6600 ( f1 + f 7 = 0.6600 ) for the sub-period t=7, i.e. the years 2001–2003. Two important events took place which resulted in the gradual decrease in the degree of market power in the Greek food and beverages manufacturing industry during the period 1983–2007 except that of the period 2001–2003, i.e. the sub-period t=1,…, 8 except that of t=7. The first event is the deregulation of international markets and the gradual abolition of protectionism since the mid-1980s which led to imports being gradually increased. 
The second event is the introduction of research, development and innovation in the Greek food industry through different Developmental Laws and Operational Programmes towards the 1990s which resulted in the increase in the competitiveness of some problematic and smallscale firms in the Greek food and beverages manufacturing industry. However, the degree of market power in the Greek food and beverages manufacturing industry slightly increased during the period 2001–2003, i.e. the sub-period t=7, probably due to the launch of the euro in 2000. The launch of the euro led some small-scale firms to exit the Greek food market since they could not operate in the Single European Market and the European Monetary Union. The empirical findings of the demand equation imply that first, the price elasticity of output demand ( h = -0.6627 ) is statistically significant at any conventional level of significance; second, the income demand elasticity of sector 159\n\n( z159 = 0.3350 ) ,\n\ni.e. the\n\nmanufacture of beverages, is statistically significant, and third the change in the income demand elasticity of each one of the sectors of the food and beverages manufacturing industry, zs ( s = 151,...,158 ) , with respect to sector 159, i.e. the manufacture of beverages, is statistically significant at any conventional level of significance. Furthermore, the results regarding scale economies imply the presence of increasing returns to scale. Finally, the Harberger loss is ranged between 2.71% for the sub-period t=8, i.e. the years 2004–2007, which equals €338.53 million in terms of value added and 3.40% for the sub-period t=1, i.e. the years 1983–1985, which is equal to €27.60 million in terms of value added whereas the Tullock loss is ranged between 5.29% for the sub-period t=8, i.e. the years 2004–2007, which is equal to €660.82 million in terms of value added and 6.34% for the sub–period t=1, i.e. the years 1983–1985, which is equal to €51.47 million in terms of value added (Table 3). 5. Conclusions The objective of this paper has been to measure the degree of market power in the Greek food and beverages manufacturing industry over the period 1983–2007. For this purpose, three different approaches of the Bresnahan’s conjectural variation method are used. The first approach assesses the degree of market power of the whole Greek food and beverages manufacturing industry over the period 1983–2007. The second approach tests the degree of market power in each one of the nine sectors of the industry over the period under consideration, i.e. 1983–2007, and the third one estimates the extent of market power for the whole Greek food and beverages manufacturing industry for specific sub-periods of the period 1983–2007. The empirical results of the first approach imply the presence of imperfect competition in the whole Greek food and beverages manufacturing industry for the period 1983–2007. The\n\n9\n\nempirical findings of the second approach suggest the presence of a non-competitive market structure in each one of the nine sectors of the Greek food and beverages manufacturing industry for the period 1983–2007, with only sector 157, i.e. the manufacture of prepared animal feeds, showing a different degree of market power from all other sectors of the industry, i.e. sectors 151–156 & 157–158. Finally, according to the empirical findings of the third approach, each sub-period of the period 1983–2007, i.e. 
the sub-period t=1,…,8, appears to operate in conditions of imperfect competition in the whole Greek food and beverages manufacturing industry, with the sub-period t=1, i.e. the years 1983–1985, showing the highest degree of market power whereas the sub-period t=8, i.e. the years 2004–2007, the lowest degree of market power. Two important events took place which resulted in the gradual decrease of the degree of market power in the Greek food industry during the period 1983–2007 except that for the period 2001–2003. The first is the deregulation of international markets and the gradual abolition of protectionism since the mid-1980s and the second is the introduction of research, development and innovation in the Greek food market through different Developmental Laws and Operational Programmes towards the 1990s. However, the degree of market power in the Greek food and beverages manufacturing industry was slightly increased during the period 2001–2003 probably due to the launch of the euro in 2000. Furthermore, the present study estimates the net welfare loss (deadweight loss or Harberger loss) and the reduction of consumers’ welfare due to the transfer of income from consumers to the monopolist (Tullock loss) in the case of imperfect competition. According to the first approach, the empirical results indicate that, for the whole Greek food and beverages manufacturing industry, the Harberger loss is about 2.74% whereas the Tullock loss is about 5.01%. The findings, regarding the second approach, imply that the Harberger and the Tullock losses are 2.43% and 4.25% respectively for sector 157 whereas for all other sectors of the food and beverages manufacturing industry, the Harberger and the Tullock losses are 2.52% and 4.37% respectively. Finally, the empirical results of the third approach reveal that the Harberger loss is ranged between 2.71% for the sub-period 2004–2007 and 3.40% for the sub-period 1983–1985 while the Tullock loss is between 5.29% for the sub-period 2004–2007 and 6.34% for the sub-period 1983–1985. References Anders, M.S. (2008), “Imperfect competition in German food retailing: Evidence from State level Data”, International Atlantic Economic Society, (36), 441-454. Bhuyan, S., and Lopez, R.A. (1997), “Oligopoly Power in the Food and Tobacco Industries”, American Journal of Agricultural Economics, (79), 1035-1043. Bourlakis, C. A. (1992b), “Profits and market power in Greek Manufacturing Industries”, Ph.D. Thesis, School of Economic and Social Studies, University of East Anglia. Bresnahan, T.F.(1982),“The oligopoly solution concept is identified”, Economics Letters, (10), 87-92. Bresnahan, T.F. (1989), “Empirical studies of industries with market power.” Handbook of Industrial Organization, (2), R. Schmalensee and R. D. Willig, ed., 1011-1157. Cowling, K. and Waterson M. (1976), “Price-cost margins and market structure”, Economica, (43), 267-274. Dickson, V.A. and Yu, W. (1989),“Welfare losses in Canadian manufacturing under alternative oligopoly regimes”,International Journal of Industrial Organization,(7), 257-267. Harberger, A.C., (1954), “Monopoly and Resource Allocation”, American Economic Review, (2), 77–87. Lau, L. (1982), “On identifying the degree of competitiveness from industry price and output data”, Economic Letters, (10), 93-99.\n\n10\n\nLopez, A.R., Aziza, M.A. and Liron-Espana, C. (2002), “Market power and/or efficiency: A structural approach”, Review of Industrial Organization, (20), 115-126. 
Peterson, E.B and Connor, J.M, (1995), “A comparison of oligopoly welfare loss estimates for U.S. food manufacturing”, American Journal of Agricultural Economics, (77), 300-308. Tullock, G. (1967), “The welfare cost of tariffs, monopolies, and theft”, Western Economic Journal, (5), 224-232. Table 1. Classification of sectors SIC\n\nSector description\n\n151 152 153 154 155 156 157 158 159\n\nProduction, processing and preserving of meat and meat products Processing and preserving of fish and fish products Processing and preserving of fruits and vegetables Manufacture of vegetable and animal oils and fats Manufacture of dairy products Manufacture of grain milk products, starches and starch products Manufacture of prepared animal feeds Manufacture of other food products Manufacture of beverages\n\nTable 2. Empirical results of the conjectural variation model of Greek food and beverages manufacturing industry over the period 1983-2007 Coef. Cost a0 ay ayy gly al akα gll gklα gkyα xt xty xtl Supply f15 f16 f151 f152 f153 f154 f155 f156 f157 f158 f159 f1 f2 f3 f4\n\nFirst Approach Estimated System: Equations (9), (10), (11), (13)\n\nSecond Approach First Estimated System: Second Estimated System: Equations (9), (10), (11), Equations (9), (10), (14) (11), (15)\n\nThird Approach Estimated System: Equations (9), (10), (11), (16)\n\n18.5820***(435.310) 0.6812***(22.505) 0.1030***(8.720) 0.0635***(12.216) 0.7779***(51.072) 0.2222***(14.585) 0.0755***(16.165) -0.0755***(-16.165) -0.0635***(-12.216) -0.0211***(-7.832) 0.0048***(2.947) -0.0052***(-4.633)\n\n18.5790***(402.810) 0.6588***(15.071) 0.0284 (0.525) 0.0599***(11.381) 0.7706***(48.463) 0.2294***(14.430) 0.0759***(16.233) -0.0759***(-16.233) -0.0599***(-11.381) -0.0186***(-5.911) 0.0041**(2.478) -0.0049***(-4.351)\n\n18.5850***435.53) 0.6791*** (22.513) 0.0905*** (7.174) 0.0645*** (12.565) 0.7782*** (50.960) 0.2218*** (14.527) 0.0757*** (16.181) -0.0757*** (-16.181) -0.0645*** (-12.565) -0.0209*** (-7.737) 0.0048*** (2.975) -0.0052*** (-4.642)\n\n18.4200*** (347.57) 0.4287*** (5.898) 0.1041*** (8.840) 0.0450*** (6.969) 0.7817*** (52.457) 0.2183*** (14.651) 0.0755*** (16.753) -0.0755*** (-16.753) -0.0450*** (-6.969) -0.0098*** (-2.890) 0.0233*** (4.770) -0.0066*** (-6.304)\n\n0.7104 ***(3.106) ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─\n\n─ ─ -0.0005 (-1.474) -0.0007 (-1.265) -0.0002 (-0.584) -0.0002 (-0.699) -0.0001 (-0.606) -0.0003 (-1.037) -0.0008*(-1.693) -0.0001 (-0.997) 0.6760*** (3.158) ─ ─ ─ ─\n\n─ 0.7714*** (3.440) ─ ─ ─ ─ ─ ─ -0.0004** (2.130) ─ ─ ─ ─ ─ ─\n\n─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ 0.6612*** (2.830) -0.0003* (-1.884) -0.0007** (-2.444) -0.0009** (-2.415)\n\n11\n\nf5 f6 f7 f8 Demand a h z 151 z152 z153 z154 z155 z156 z157 z158 z159 Scale Economies (SCE) b\n\na\n\n─ ─ ─ ─\n\n─ ─ ─ ─\n\n─ ─ ─ ─\n\n-0.0012** -0.0014** -0.0012** -0.0016**\n\n19.6640***(7.820) -0.7130***(-3.106) -0.1715***(-14.612) -0.2941***(-24.530) -0.0924***(-7.968) -0.2141***(-18.443) -0.0759***(-6.663) -0.1564***(-13.536) -0.2298***(-19.516) -0.0373***(-3.276) 0.3969**(2.005)\n\n19.4330***(8.186) -0.6782***(-3.157) -0.1710***(-13.841) -0.2883***(-23.547) -0.0822***(-6.681) -0.2157***(-17.595) -0.0748***(-6.201) -0.1552***(-12.736) -0.2351***(-19.293) -0.0354***(-2.954) 0.4029**(2.102)\n\n20.1900*** (8.133) -0.7742*** (-3.440) -0.1695*** (-14.603) -0.2915*** (-24.368) -0.0924*** (-8.098) -0.212*** (-18.432) -0.0758*** (-6.774) -0.1553*** (-13.624) -0.2364 *** (-19.820) -0.0378*** (-3.386) 0.3701* (1.881)\n\n19.9770*** (7.794) -0.6627*** (-2.830) -0.1717*** (-14.608) -0.2878*** (-23.912) 
-0.0839*** (-7.209) -0.2196*** (-18.920) -0.0770*** (-6.722) -0.1529*** (-13.200) -0.2282*** (-19.353) -0.0366*** (-3.202) 0.3350* (1.670)\n\n0.2568***(12.463)\n\n0.2876***(8.687)\n\n0.2583*** (12.582)\n\n0.2583*** (12.582)\n\n(-2.421) (-2.391) (-2.216) (-2.273)\n\nak=1-al, gkl = -gll, gky = -gly. The corresponding values in parentheses are “t-ratios” obtained by applying the Wald test\n\n( (\n\nstatistic, b SCE = 1 − ∂ ln Ct Wk ,t\n\n)\n\n)\n\n∂ ln Y = 1 − ( aY + xty *13) at the point of approximation. The corresponding values\n\nin parentheses are “t-ratios” obtained by applying the Wald test statistic. *** indicates 1% significance levels, ** indicates 5% significance levels, * indicates 10% significance levels.\n\nTable 3. Estimated Harberger and Tullock losses in the Greek food and beverages manufacturing industry over the period 1983-2007 Approaches\n\nSectors/ Time periods\n\nFirst Approach\n\nHarberger loss a (WLH)\n\nTullock loss a (WLT)\n\n2.74\n\n5.01\n\n151-159\n\nSecond Approach (2ndEstimated System) Third Approach\n\na\n\nHarberger loss c\n\nTullock loss c\n\nHarberger loss d\n\nTullock loss d\n\n41972.69\n\n3523.14\n\n96.53\n\n176.51\n\n151-156& 158-159\n\n2.52\n\n4.37\n\n40819.40\n\n3462.55\n\n87.26\n\n151.31\n\n157\n\n2.43\n\n4.25\n\n1153.29\n\n60.59\n\n1.47\n\n2.58\n\n1983-1985 1986-1988 1989-1991 1992-1994 1995-1997 1998-2000 2001-2003 2004-2007\n\n3.40 3.22 3.00 2.93 2.84 2.78 2.83 2.71\n\n6.34 6.07 5.73 5.63 5.49 5.39 5.48 5.29\n\n811.86 1416.70 2535.86 4624.16 5451.29 6733.66 8176.80 12491.87\n\n─ ─ ─ ─ ─ ─ ─ ─\n\n─ ─ ─ ─ ─ ─ ─ ─\n\n─ ─ ─ ─ ─ ─ ─ ─\n\n27.60 45.62 76.08 135.49 154.82 187.2 231.4 338.53\n\n51.47 85.99 145.31 260.34 299.28 362.94 448.09 660.82\n\nThe estimated Harberger and Tullock losses are ratios, b The value added is in million Euros, c The Harberger and Tullock losses are in terms of 2007 value added and in million Euros, d The Harberger and Tullock losses are in terms of value added and in million Euros.\n\nD E G R E EO FM A R K E TP O W E R\n\nFigure 1. Degree of market power for the whole Greek food and beverages manufacturing industry for specific sub-periods of the period 1983-2007 0.6615 0.6610 0.6605 0.6600 0.6595 0.6590 0.6585 19831985\n\n19861988\n\n19891991\n\n19921994\n\n19951997\n\n19982000\n\n20012003\n\n20042007\n\nSUB-PERIODS\n\n12"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8650327,"math_prob":0.9419974,"size":40318,"snap":"2020-10-2020-16","text_gpt3_token_len":11361,"char_repetition_ratio":0.18854493,"word_repetition_ratio":0.2983524,"special_character_ratio":0.3219902,"punctuation_ratio":0.14734773,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9557162,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-04-01T15:23:00Z\",\"WARC-Record-ID\":\"<urn:uuid:b9412052-66ef-4a79-bc8e-60328aba7df0>\",\"Content-Length\":\"101726\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:52109dc5-cd52-43d5-996d-ccf4ebf5da3a>\",\"WARC-Concurrent-To\":\"<urn:uuid:a84e66e1-8e82-4b97-afe7-0132965857cb>\",\"WARC-IP-Address\":\"104.27.163.127\",\"WARC-Target-URI\":\"https://mafiadoc.com/measuring-market-power-in-the-greek-food-and-agecon-search_5ba3dd1b097c4749358b46ba.html\",\"WARC-Payload-Digest\":\"sha1:UJL6SCJEEFJZF5F2J7IBDDT22BUQT3OL\",\"WARC-Block-Digest\":\"sha1:2P2ZPDUIQJGHGBD2B6VYYBPZLPA7WNVN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585370505731.37_warc_CC-MAIN-20200401130837-20200401160837-00053.warc.gz\"}"} |
https://food-le.com/numberle | [
"Rating:\nNumberle\n5\n\n# Numberle\n\nNumberle is a math puzzle game inspired by Wordle, which became popular in early 2022. The main objective of the Numberle game is to correctly estimate the mathematical equation in six attempts. As you add your own equations, colored suggestions will appear to indicate how close you are to solving the puzzle; if all of the rows are highlighted in green, you have won! This game is ideal for brain training while also having fun with your buddies.\n\n## HOW TO PLAY\n\nSimply enter any valid equation to find hints to begin the game. You will have a total of six attempts to estimate the target equation. You can use numbers (0-9) and arithmetic signs (+ - * / =) to calculate.\n\nDetermine the numbers and signs in the equation.\n\nIf any numbers or arithmetic signs appear in the target equation but are in the wrong position, they will be marked in brown. If there are numbers or signs in the equation and they are in the correct place, they will be highlighted in green. The gray tint indicates that certain numbers or signs do not appear in the equation.\n\nMake an attempt to answer the target equation.\n\nTo win the game, you must correctly guess the equation (all spots are green). You may quickly post your results on social media at the end of the game, as well as copy the URL and challenge your friends!\n\nPUZZLE WORD brain skill logic quiz braining seach\nComment (0)",
null,
"Be the first to comment",
null,
""
] | [
null,
"https://food-le.com/themes/foodle/resources/images/icons/empty-comment.png",
null,
"https://food-le.com/game-tracking-views.ajax",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.93964535,"math_prob":0.9817204,"size":1284,"snap":"2022-40-2023-06","text_gpt3_token_len":267,"char_repetition_ratio":0.15,"word_repetition_ratio":0.0,"special_character_ratio":0.211838,"punctuation_ratio":0.087649405,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9703908,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,8,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-27T13:57:01Z\",\"WARC-Record-ID\":\"<urn:uuid:540e0abd-de5a-43e9-b406-2fba2cec7720>\",\"Content-Length\":\"80618\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7329e4aa-a1c4-441a-bf06-154eebef9124>\",\"WARC-Concurrent-To\":\"<urn:uuid:7456b1ed-bccd-4077-bb80-d43cb676a5f6>\",\"WARC-IP-Address\":\"104.21.62.86\",\"WARC-Target-URI\":\"https://food-le.com/numberle\",\"WARC-Payload-Digest\":\"sha1:FNWPU2J77LNTRR2NUF3FNSHXZRKNDQK6\",\"WARC-Block-Digest\":\"sha1:WTB2PME7BWA2EZV25PRNDMQYZL457XKZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030335034.61_warc_CC-MAIN-20220927131111-20220927161111-00294.warc.gz\"}"} |
https://cmc.deusto.eus/category/publications/umberto-biccari/ | [
"## Control and Numerical approximation of Fractional Diffusion Equations\n\nUmberto Biccari, Mahamadi Warma, Enrique Zuazua. Control and Numerical approximation of Fractional Diffusion Equations (2022) Handb. Numer. Anal. Elsevier. ISSN:1570-8659, DOI: https://doi.org/10.1016/bs.hna.2021.12.001 Abstract. The aim of this work is to give…\n\n## Internal control for a non-local Schrödinger equation involving the fractional Laplace operator\n\nU. Biccari Internal control for a non-local Schrödinger equation involving the fractional Laplace operator (2021) Abstract: We analyze the interior controllability problem for a nonlocal Schr\\\"odinger equation involving the fractional…\n\n## A stochastic approach to the synchronization of coupled oscillators\n\nUmberto Biccari, Enrique Zuazua. A stochastic approach to the synchronization of coupled oscillators. Frontiers in Energy Research, section Smart Grids. Front. Energy Res. Vol. 8 (2020). DOI: 10.3389/fenrg.2020.00115 Abstract. This paper…\n\n## Controllability of the one-dimensional fractional heat equation under positivity constraints\n\nU. Biccari, M. Warna, E. Zuazua Internal observability for coupled systems of linear partial differential equations. Commun. Pure Appl. Anal., Vol 19. No. 4 (2019), pp. 1949-1978. DOI: 10.3934/cppaa.2020086\n\n## Null-controllability properties of a fractional wave equation with a memory term\n\nU. Biccari, M. Warma Null-controllability properties of a fractional wave equation with a memory term. Evol. Eq. Control The., Vol. 9, No. 2 (2020), pp. 399-430 Abstract: We study the…\n\n## Propagation of one and two-dimensional discrete waves under finite difference approximation\n\nU. Biccari, A. Marica, E. Zuazua Propagation of one and two/dimensional discrete waves under finite difference approximation, Found. Comput. Math., Vol. 20 (2020), pp. 1401-1438. DOI: 10.1007/s10208-020-09445-0 Abstract: We analyze…\n\n## Null controllability of a nonlocal heat equation with an additive integral kernel\n\nU. Biccari, V. Hernández-Santamaría Null controllability of a nonlocal heat equation with an additive integral kernel. SIAM J. Control Optim., Vol. 57, No. 4 (2019), pp. 2924-2938, DOI: 10.1137/18M1218431 Abstract:…\n\n## Null-controllability properties of the wave equation with a second order memory term\n\nU. Biccari, S. Micu Null-controllability properties of the wave equation with a second order memory termJ DIFFER EQUATIONS, Vol. 267, No. 2 (2019), pp. 1376-1422 doi.org/10.1016/j.jde.2019.02.009 Abstract: We study the…\n\n## Dynamics and control for multi-agent networked systems: a finite difference approach\n\nU. Biccari, D. Ko, E. Zuazua Dynamics and control for multi-agent networked systems: a finite difference approach. Math. Models Methods Appl. Sci., Vol. 29, No. 4 (2019), pp. 755–790. DOI:…\n\n## Boundary controllability for a one-dimensional heat equation with a singular inverse-square potential\n\nU. Biccari Boundary controllability for a one-dimensional heat equation with a singular inverse-square potential, Math. Control Relat. F., Vol. 9, No. 1 (2019), pp. 191-219, DOI: 10.3934/mcrf.2019011 Abstract: We analyse…\n\n## The Poisson equation from non-local to local\n\nU. Biccari, V. Hernández-Santamaría The Poisson equation from non-local to local, Electronic Journal of Differential Equations, Vol. 2018 (2018), No. 145, pp. 1-13. DOI: arXiv:1801.09470 Abstract: We analyze the limit…\n\n## Controllability of a one-dimensional fractional heat equation: theoretical and numerical aspects\n\nU. Biccari, V. 
Hernández-Santamaría Controllability of a one-dimensional fractional heat equation: theoretical and numerical aspects, IMA J. Math. Control Inf., Vol. 36, No. 4 (2019), pp. 1199-1235. DOI: 10.1093/imamci/dny025 Abstract:…\n\n## Local regularity for fractional heat equations\n\nU. Biccari, M. Warma, E. Zuazua Local regularity for fractional heat equations<, Recent Advances in PDEs: Analysis, Numerics and Control, SEMA SIMAI Springer Series, Vol. 17 (2018). DOI: 10.1007/978-3-319-97613-6 Abstract:…\n\n## Local elliptic regularity for the Dirichlet fractional Laplacian\n\nU. Biccari, M. Warma, E. Zuazua Local elliptic regularity for the Dirichlet fractional Laplacian Advanced Nonlinear Studies, Vol. 17, Nr. 2 (2017), pp. 387-409. DOI: 10.1515/ans-2017-0014 Abstract: We analyze the…\n\n## Addendum: Local elliptic regularity for the Dirichlet fractional Laplacian\n\nU. Biccari, M. Warma, E. Zuazua Addendum: Local elliptic regularity for the Dirichlet fractional Laplacian Adv. Nonlinear Stud., Vol. 17, Nr. 4 (2017), pp. 837-839. DOI: 10.1515/ans-2017-6020 Abstract: In ,…\n\n## Null controllability for a heat equation with a singular inverse-square potential involving the distance to the boundary function\n\nU. Biccari, E. Zuazua Null controllability for a heat equation with a singular inverse-square potential involving the distance to the boundary function J. Differential Equations, Vol. 261, No. 5 (2016),…"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6929025,"math_prob":0.6277076,"size":6148,"snap":"2022-05-2022-21","text_gpt3_token_len":1762,"char_repetition_ratio":0.15332031,"word_repetition_ratio":0.044902913,"special_character_ratio":0.27553675,"punctuation_ratio":0.25040388,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9700815,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-27T03:00:50Z\",\"WARC-Record-ID\":\"<urn:uuid:8c0e98fe-b497-44b9-a2c3-04689a9e3d3c>\",\"Content-Length\":\"123492\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:78ebaddc-b4a5-4983-a815-16b06602aff7>\",\"WARC-Concurrent-To\":\"<urn:uuid:ddffc9e6-d92d-4f55-b7e3-24cadf3fa093>\",\"WARC-IP-Address\":\"151.80.161.28\",\"WARC-Target-URI\":\"https://cmc.deusto.eus/category/publications/umberto-biccari/\",\"WARC-Payload-Digest\":\"sha1:76G6DMMXAT44O4VDZAWELSQZA75G7DYF\",\"WARC-Block-Digest\":\"sha1:CYS47BLMYBA4RW6J3QAYSUUCKMXFB7AH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662631064.64_warc_CC-MAIN-20220527015812-20220527045812-00139.warc.gz\"}"} |
https://homework.cpm.org/category/CON_FOUND/textbook/a2c/chapter/9/lesson/9.3.2/problem/9-157 | [
"",
null,
"",
null,
"### Home > A2C > Chapter 9 > Lesson 9.3.2 > Problem9-157\n\n9-157.",
null,
"Remember the annual compound interest formula.\nA = P(1 + r)t\n\nFirst substitute the values into the equation.\n25000 = 15000(1.08)t\n\nDivide both sides by 15000.\n\n$\\frac{5}{3}=(1.08)^{\\textit{t}}$\n\nRemember the rules of logarithms and take the log of both sides.\n\n$\\text{log}\\left(\\frac{5}{3}\\right)=\\textit{t}(\\text{log}1.08)$\n\nSolve for t.\n\nThe account will be worth 25000 in between 6 and 7 years.\n\nt = 6.64"
] | [
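The worked solution in the record above is easy to verify numerically. The following is a minimal sketch, not part of the scraped page; it simply redoes the arithmetic for 25000 = 15000(1.08)^t with Python's math module.

```python
# Numeric check of the compound-interest steps above: solve 25000 = 15000 * 1.08**t.
import math

P, A, r = 15_000, 25_000, 0.08
t = math.log(A / P) / math.log(1 + r)      # log(5/3) / log(1.08)
print(f"t = {t:.2f} years")                # ~6.64
print(f"balance after 6 years: {P * (1 + r) ** 6:,.2f}")   # just under 25 000
print(f"balance after 7 years: {P * (1 + r) ** 7:,.2f}")   # just over 25 000
```

The two balance lines confirm the stated conclusion that the account crosses 25,000 between the sixth and seventh year.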
null,
"https://homework.cpm.org/dist/7d633b3a30200de4995665c02bdda1b8.png",
null,
"data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAfQAAABDCAYAAABqbvfzAAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAAyRpVFh0WE1MOmNvbS5hZG9iZS54bXAAAAAAADw/eHBhY2tldCBiZWdpbj0i77u/IiBpZD0iVzVNME1wQ2VoaUh6cmVTek5UY3prYzlkIj8+IDx4OnhtcG1ldGEgeG1sbnM6eD0iYWRvYmU6bnM6bWV0YS8iIHg6eG1wdGs9IkFkb2JlIFhNUCBDb3JlIDUuMC1jMDYxIDY0LjE0MDk0OSwgMjAxMC8xMi8wNy0xMDo1NzowMSAgICAgICAgIj4gPHJkZjpSREYgeG1sbnM6cmRmPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5LzAyLzIyLXJkZi1zeW50YXgtbnMjIj4gPHJkZjpEZXNjcmlwdGlvbiByZGY6YWJvdXQ9IiIgeG1sbnM6eG1wPSJodHRwOi8vbnMuYWRvYmUuY29tL3hhcC8xLjAvIiB4bWxuczp4bXBNTT0iaHR0cDovL25zLmFkb2JlLmNvbS94YXAvMS4wL21tLyIgeG1sbnM6c3RSZWY9Imh0dHA6Ly9ucy5hZG9iZS5jb20veGFwLzEuMC9zVHlwZS9SZXNvdXJjZVJlZiMiIHhtcDpDcmVhdG9yVG9vbD0iQWRvYmUgUGhvdG9zaG9wIENTNS4xIE1hY2ludG9zaCIgeG1wTU06SW5zdGFuY2VJRD0ieG1wLmlpZDo5QzA0RUVFMzVFNDExMUU1QkFCNEYxREYyQTk4OEM5NCIgeG1wTU06RG9jdW1lbnRJRD0ieG1wLmRpZDo5QzA0RUVFNDVFNDExMUU1QkFCNEYxREYyQTk4OEM5NCI+IDx4bXBNTTpEZXJpdmVkRnJvbSBzdFJlZjppbnN0YW5jZUlEPSJ4bXAuaWlkOjlDMDRFRUUxNUU0MTExRTVCQUI0RjFERjJBOTg4Qzk0IiBzdFJlZjpkb2N1bWVudElEPSJ4bXAuZGlkOjlDMDRFRUUyNUU0MTExRTVCQUI0RjFERjJBOTg4Qzk0Ii8+IDwvcmRmOkRlc2NyaXB0aW9uPiA8L3JkZjpSREY+IDwveDp4bXBtZXRhPiA8P3hwYWNrZXQgZW5kPSJyIj8+RSTQtAAAG9JJREFUeNrsXQmYXEW1Pj09PVtmJjsBDGFXiCKKIBJ2REEQQdaARBBiFFRAnrIoyhqCgLwnEfEpPMAgggsGJG7w2MMuiuwkJDGQINmTycxklu62/r5/0ZWaur3M9GQCc/7vO1/fvrfuvXXr1q3/nFOnqhLZbFYUCoVCoVC8u1GlRaBQKBQKhRK6QqFQKBQKJXSFQqFQKBRK6AqFQqFQKJTQFQqFQqFQQlcoFAqFQqGErlAoFAqFonKoLveE2jM+uTHk+zNGjjZyj5EXqJhgQH3KyClGOo1MNbK2vzOSTWakbmWTjHp+69y2QqFQKBQW85+avvES+kaCKUaOMHK8kcWS9zQkjYzj9l1Gnuj3nCSykuxIaa1VKBQKxbvLQt9I0Gjk30YehtPA2d9tZJGRPYxs0++EnjCaRFe1NC4emSN2hUKhUCiU0MtDjZE3jRwXODaRhP5hI7f1ZyayVRmpWdMoqbb63LZCoVAoFAOFd2tQHHzcWxppChwbxt89+zsTWWOV161okkQ6oTVJoVAoFErovQA8C6OMjA0csy74nSXfn155GA6vXlcj9cuHqnWuUCgUCiX0XqDByOiIUnNu9ThCh/W+T79Z54bEa1c1SnVbjdnW/nOFQqFQKKGXi/cbeR+3Px44PtrZPrw/M1K/vDlSKxQKhUKhUEIvG/tK1IcO7CE9KXVn/v7ZyAFGNqm4dY6hautqpGZNg7rbFQqFQqGE3sv8gtDXOeTt9pMPN/Ixh9CNCS2HVJzQq7JSu3qIJDtTaqErFAqFQgm9FwBZY/z520ZWS9Sfvrdz/AjHeke6RyWaOa6iwJBzuNsTyuYKhUKhUELvFdAn/rREQ9NeN/KkkaN4bAQJ/x7+hy/8RhL+DpVk86p0taRadOy5QqFQKJTQe4NtSNog8aESzdf+RyOfolX+ZSMPSDRbHIBhbXcaaTcyuVKZQP95am2dVHelctsKhUKhUAxGQoeP+hoj1xu5yciFZZwLUv6NRIuwWMKeLdGscRdLFN3+O8lHuY800mbkdiOnSn7CmT4Sukj9imZJZHShOoVCoVAMXkLH/bBc2ywj5xg5wcjnSjgP4803owU+kvsQ8PaskYeMnGbkCu6vd44D15LMT6yIRmLUiZq19WqdKxQKhWJQE/q2Eo0hR7/3GCMLJFoGddciefymkR/zfyN/U7TO20niNhjOTizTwN9/GPmrkfMcsu+ddV6VkVR7nVS31mn/uUKhUCgGNaGDyP9l5F6J3OMdRr5n5FwjH4w55wwjrxj5G/+787dfQwsd/eZf5b46z1IHLqUicVLfzHOR6vYaqepOas1RKBQKxaAldIwXR7/3XIn6wVskcp+D4NEHfomRXbxzDpJorPkPnX2WsDHm/FEeQ/Db13j9as9CF6bDuPSLJLygS4xFns1Z4lYy1encdK+JjA5XUygUCsXgJfQvGblDIrc7VkI71sh2Rg418gKtdFjrdknUCUYmSdTX3u1c533O9uP8vZrKAYLfugKEDpwvkZv/nFIzjGj2mtUNuRnhILWrhkhVV1LXPlcoFArFRocNtR76YUbeMrKElvqJJGlMDvNFWta3GDmGFjf2wa89xchSI0NoqeM6n3KuO4q//5Ro7fPvS34WOZ/Q0ZeO6PoLmPblYpke8crmhtRr1198pSohmaT2nysUCoVi8BH6hySa8AWBaacbSUvUdw7vAJjyK0a+bmSakVVGWiVykSPgDUPVOmlZg/zv4q+d3rXOuQ/c9kdKNFY9ROjAd5nmBiN7SX4IXBCIZI/c7vlkiYS62xUKxYbH/KemayEoCqI/Xe4YKnYKyXO8kZslmhBmUyM/kshNjpXTrpNoARUExX2e5yVI7BCYwwh8m0kLf0vnHm7g22u00LMFCH0l8zSBaRUKhUKhUAvdA4aLoX97FxL19iTVZ0nMcHnDHf5Vh4hB1KOYbpGRtRJN07o/rfKmInm8yMhEEjWC69p4D1x/SMw5mF3uKp77dyN3azVQKBQKhRJ6HqMlH8X+iJHlsn4wW7kAIY+k9b41lYQPkPDx20zLf3zM+bDkEdmO/vUXjbxqZB6tfATGITjvVxK53v+uVUGhUCgUg4rQs15AWCL9jtf+TUrkMM86vyGgfzr3E9sn3WrObzWJFprtZ5z9uOHmRnYzcqCR/WJIHX3wB1GEOYGSgWC4xySKuMc1fm9kHyMLtTooFAqFYtAQet2yJvJxQjLVGelsbn9nnDb25Qg+QzLPRPSbSaZzc59Ho72iKPFkR7VUmbSZmgJGfO787DtR5bx+xlEefk/ixopqCKA7TOJd7Ql6EPaW/JKrrUyPceyH0HpXKBQKheK9T+gjX9jCsZWz0l3XJV2N
",
null,
"https://s3-us-west-2.amazonaws.com/c3po-media-dev/files/169fa5e0-259f-11e9-ab6b-d7741872f579/A2C_9-157 Question_original.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8159961,"math_prob":0.9972924,"size":362,"snap":"2021-31-2021-39","text_gpt3_token_len":115,"char_repetition_ratio":0.108938545,"word_repetition_ratio":0.0,"special_character_ratio":0.37292817,"punctuation_ratio":0.13414635,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9995981,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-25T06:00:04Z\",\"WARC-Record-ID\":\"<urn:uuid:08cf1389-6f44-42b9-a69e-e706b48dceeb>\",\"Content-Length\":\"39981\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2211cbaa-29df-41b2-8bce-5cc12791179f>\",\"WARC-Concurrent-To\":\"<urn:uuid:5fb8c600-1217-4629-a35c-194de92136ed>\",\"WARC-IP-Address\":\"104.26.6.16\",\"WARC-Target-URI\":\"https://homework.cpm.org/category/CON_FOUND/textbook/a2c/chapter/9/lesson/9.3.2/problem/9-157\",\"WARC-Payload-Digest\":\"sha1:Q2VTDWAO7MTNDVFARDPVQ5BG7GTX5PVF\",\"WARC-Block-Digest\":\"sha1:OG7JJRTWZL5RBSU7G44TQEX57DR6QQXL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057598.98_warc_CC-MAIN-20210925052020-20210925082020-00293.warc.gz\"}"} |
https://de.scribd.com/document/347016535/Repeat-Paper-01-Edexcel-Physics | [
"Sie sind auf Seite 1von 6\n\n# FORCES and MOTION\n\n## a. Speed, Velocity and Acceleration\n\na. Speed/ average speed is defined as the distance moved per time, and hence,\nthe equation for speed is :\n\nQ1 1. If S for speed, d for distance travelled and t for time, then show that S=d/t.\n\n## 4. Show that 3.6km/h = m/s.\n\nQ2 Calculate the average speed of a car that travels 2500m in 500 seconds.\n\nQ3 Calculate the average speed of a bus that travels 150km in 2.5 hours.\n\nQ4 How far, in meters, will a train travel in 500 seconds at an average speed of 25\nm/s?\n\n10000 m/s?\n\nkm/h?\n\n30 000 km/h?\n\n## Q8 How long, in seconds, should a person take to travel 2000m at an average\n\nspeed of 4m/s?\n\nQ9 How long, in seconds, should a rocket take to travel 1 000 000m at an average\nspeed of 20 000m/s?\n\n## Sanjaya Perera(BSc(Hons).Maths(Sp)) - 0773 440 468 Page 1\n\nFORCES and MOTION\nRevision Paper 01- Edexcel IGCSE in Physics\n\n## b. Velocity is a physical vector quantity; both magnitude and direction are\n\nneeded to define it.\nVelocity is defined as a measure of the distance an object travels in a stated\ndirection (displacement) in a given length of time, and hence, the equation for\nvelocity is:\n( )\n=\n\nQ10 1. If V for velocity, d for displacement and t for time, then show that V=d/t.\n\n## c. Acceleration is a physical vector quantity; both magnitude and direction are\n\nneeded to define it.\nAcceleration is defined as the rate at which objects change their velocity, and\nhence, the equation for acceleration is:\n\n=\n\nQ11 1. If a for acceleration, u for initial velocity, v for final velocity and t for time\ntaken for change velocity, then show that a=(v-u)/t.\n\n## 3. What is meant by deceleration?\n\nQ12 A car is travelling at 40m/s. It accelerates steadily for 10seconds until velocity\nbecomes 70m/s. What is its acceleration?\n\n## Sanjaya Perera(BSc(Hons).Maths(Sp)) - 0773 440 468 Page 2\n\nFORCES and MOTION\nRevision Paper 01- Edexcel IGCSE in Physics\n\nQ13 Calculate the acceleration of a rocket the moves from rest (0 m/s) to 1000 m/s\nin 20 seconds.\n\nQ14 By how much does the velocity of a cyclist change in 2 seconds when\naccelerating at 5 m/s2?\n\nQ15 What is the final velocity of a rocket after 30 seconds if it accelerates at 50m/s2\nfrom 100 m/s?\n\nQ16 What is the initial velocity of a bus if after 4 seconds of accelerating at 3m/s2 it\nreaches 18 m/s?\n\n## Q17 An object strikes the ground travelling at 20m/s. It is brought to rest in\n\n0.05seconds. What is its acceleration? Explain the negative sign of your\n\n*********************************************************************\n\nb. Distance-time graphs\na. Gradient of the distance (Y-axis) Vs time (X-axis) graph denotes speed.\ni. A straight line sloping upwards means it has a steady speed.\nii. A horizontal line means the object is stopped.\niii. A steeper gradient means a higher speed.\niv. A curved line means the speed is changing.\nv. 
A straight line sloping downwards means it has a steady speed\nand a steady velocity in the negative direction.\n\nQ1 Using given details, sketch a distance-time graph to show the motion of a car\nbetween two junctions (A to C).\n\n## Start from junction A at t=t0\n\nMoving quickly from A to B and reached to B at t=t1\nStop for a moment and start its journey again from B at t=t2\nMoving slowly from B to junction C and end its journey at t=t3\n\nQ2 Show that the average speed of the above car for its whole journey is\n\n.\n( )\n\n## Sanjaya Perera(BSc(Hons).Maths(Sp)) - 0773 440 468 Page 3\n\nFORCES and MOTION\nRevision Paper 01- Edexcel IGCSE in Physics\n\n## Q3 Following distance-time graph shows a car travelling along a road, starting\n\nfrom the city O to the city G.\nDistance Vs Time\n120\n\n100 E\nDistance/km\n\n80\n\n60 D\nC\n40\n\n20\nA B\n0\nF\nO G\n0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20\nTime/h\n\nI. Discuss the travelling behavior of the car from OA, AB, BC, CD,\nDE, EF and FG respectively.\n(Hint: uniform speed, not moving, speed changing, accelerating, positive direction, negative\ndirection, deceleration)\nII. What is the total time that the car is stopped during its travel?\nIII. Compare the speed difference between OA and DE.\nIV. Find the speed of the car between EF.\nV. Find the average speed of the whole journey.\nVI. What can you say about the city O (starting location) and city G (ending\nlocation)?\n\n*********************************************************************\n\nc. Velocity-time graphs\na. Gradient of the velocity (Y-axis) Vs time (X-axis) graph denotes acceleration.\ni. A straight line sloping upwards means it has a uniform\nacceleration.\nii. A horizontal line means the object is moving uniform velocity.\niii. The area under the graph is equal to the travelled distance.\niv. A straight line sloping downwards means it has a uniform\nacceleration in the negative direction/ deceleration.\n\n## Sanjaya Perera(BSc(Hons).Maths(Sp)) - 0773 440 468 Page 4\n\nFORCES and MOTION\nRevision Paper 01- Edexcel IGCSE in Physics\n\n## Q1 Using following details, sketch a velocity-time graph to show the motion of a\n\ncar between two junctions (A to C).\n\n## Start from junction A with the initial speed U at t=0.\n\nVelocity of the car increases until becomes V, within time t and reaches to\nthe junction B.\nThen the car moves with the uniform velocity V during time T and reaches\nto Junction C.\nI. Show that the acceleration (a) of the car within first time t is given by\n( )\n= .\nII. Show that the distance (S) travelled with uniform velocity (V) by the\ncar within time T is given by = .\n\n## Q2 Using following details, sketch a velocity-time graph to show the motion of a\n\nbike rider on a straight road.\n\nVelocity Vs Time\n120\n\n100 F\nVelocity (m/s)\n\n80\n\n60 D\nE\n40\n\n20 C\nB\nG H\n0\nA\n0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20\nTime (s)\n\nI. Discuss the travelling behavior of the rider from AB, BC, CD, DE,\nEF, FG and GH respectively.\nII. What is the total time that the rider is stopped during his ride?\nIII. Compare the acceleration difference between AB and EF.\nIV. Find the deceleration of the bike after apply the brake between FG.\nV. How far he rides the bike after apply the brake (FG).\n\n## Sanjaya Perera(BSc(Hons).Maths(Sp)) - 0773 440 468 Page 5\n\nFORCES and MOTION\nRevision Paper 01- Edexcel IGCSE in Physics\n\nVI. How far he rides the bike under uniform velocity {(BC) &(DE)}.\n\n## Q3 A car is travelling at 20m/s. 
It accelerates uniformly at 3m/s2 for 5 seconds.\n\nI. Draw a velocity-time graph for the car during the period that it is\naccelerating. Include numerical detail on the axes of your graph.\nII. Calculate the distance which car travels while it is accelerating.\n\n## Q4 Plot a velocity-time graph using the data in the following table.\n\nv(m/s) 0.0 2.5 5.0 7.5 10.0 10.0 10.0 10.0 10.0 10.0\nt(s) 0.0 1.0 2.0 3.0 4.0 5.0 6.0 7.0 8.0 9.0\nI. The acceleration during the first 4 seconds.\nII. The distance travelled in the last 6 seconds of the motion shown.\n\n## Q5 Consider the following velocity-time graph;\n\n12\n\n10\nvelocity (m/s)\n\n0\n0 10 20 30 40 50 60\ntime (s)\n\n## I. Find the acceleration and deceleration.\n\nII. Find the total distance travelled by the person shown in the graph.\n\n*********************************************************************"
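The questions above all reduce to three relations: average speed = distance / time, acceleration a = (v - u) / t, and, for the velocity-time graphs, distance as the area under the graph. A minimal Python sketch of these formulas can be used to check answers; it is not part of the original worksheet, and the sample values are taken from Q12 earlier in the sheet and from Q3 of the velocity-time section.

```python
def average_speed(distance_m, time_s):
    """Average speed in m/s: distance moved per unit time."""
    return distance_m / time_s

def acceleration(u, v, t):
    """Acceleration in m/s^2 from initial velocity u, final velocity v over time t."""
    return (v - u) / t

def distance_from_vt_segment(u, v, t):
    """Area under a straight-line velocity-time segment (a trapezium)."""
    return 0.5 * (u + v) * t

# Q12: 40 m/s to 70 m/s in 10 s -> 3 m/s^2
print(acceleration(40, 70, 10))

# Q3 (velocity-time graphs): 20 m/s, accelerating at 3 m/s^2 for 5 s
v_final = 20 + 3 * 5                                       # 35 m/s
print(v_final, distance_from_vt_segment(20, v_final, 5))   # 35, 137.5 m
```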
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8443581,"math_prob":0.91106945,"size":4224,"snap":"2020-10-2020-16","text_gpt3_token_len":1197,"char_repetition_ratio":0.1521327,"word_repetition_ratio":0.16836734,"special_character_ratio":0.30871212,"punctuation_ratio":0.13786009,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99110883,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-04-05T12:34:23Z\",\"WARC-Record-ID\":\"<urn:uuid:3f828d9c-94db-4443-a781-cd5ab7dfb38f>\",\"Content-Length\":\"395461\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:631951e7-134f-4bce-9aa0-28b63a76b5f0>\",\"WARC-Concurrent-To\":\"<urn:uuid:d3032ccb-8381-4aeb-a534-7551c36657b3>\",\"WARC-IP-Address\":\"151.101.250.152\",\"WARC-Target-URI\":\"https://de.scribd.com/document/347016535/Repeat-Paper-01-Edexcel-Physics\",\"WARC-Payload-Digest\":\"sha1:ISX2OCQPOVCRHQDUOOU3L6SOWX3SNDI7\",\"WARC-Block-Digest\":\"sha1:GLA2M53QZY5FMM3XJLAWT7CEJUSQKX74\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585371604800.52_warc_CC-MAIN-20200405115129-20200405145629-00052.warc.gz\"}"} |
https://www.proofwiki.org/wiki/Coprimality_Criterion | [
"# Coprimality Criterion\n\n## Theorem\n\nIn the words of Euclid:\n\nTwo unequal numbers being set out, and the less being continually subtracted in turn from the greater, if the number which is left never measures the one before it until an unit is left, the original numbers will be prime to one another.\n\n## Proof\n\nLet the less of two unequal numbers $AB, CD$ be continually subtracted from the greater, such that the number which is left over never measure the one before it till a unit is left.\n\nWe need to show that $AB$ and $CD$ are coprime.",
null,
"Suppose $AB, CD$ are not coprime.\n\nThen some natural number $E > 1$ will divide them both.\n\nLet some multiple of $CD$ be subtracted from $AB$ such that the remainder $AF$ is less than $CD$.\n\nThen let some multiple of $AF$ be subtracted from $CD$ such that the remainder $CG$ is less than $AF$.\n\nThen let some multiple of $CG$ be subtracted from $FA$ such that the remainder $AH$ is a unit.\n\nSince, then, $E$ divides $CD$, and $CD$ divides $BF$, then $E$ also divides $BF$.\n\nBut $E$ also divides $BA$.\n\nTherefore $E$ also divides $AF$.\n\nBut $AF$ divides $DG$.\n\nTherefore $E$ also divides $DG$.\n\nBut $E$ also divides the whole $DC$.\n\nTherefore $E$ also divides the remainder $GC$.\n\nBut $CG$ divides $FH$.\n\nTherefore $E$ also divides $FH$.\n\nBut $E$ also divides the whole $FA$.\n\nTherefore $E$ also divides the remainder, that is, the unit $AH$.\n\nBut $E > 1$ so this is impossible.\n\nTherefore, from Book $\\text{VII}$ Definition $12$: Relatively Prime, $AB$ and $CD$ are relatively prime.\n\n$\\blacksquare$\n\n## Historical Note\n\nThis proof is Proposition $1$ of Book $\\text{VII}$ of Euclid's The Elements."
] | [
null,
"https://www.proofwiki.org/w/images/thumb/7/71/Euclid-VII-1.png/200px-Euclid-VII-1.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.84478545,"math_prob":0.9999846,"size":1940,"snap":"2023-40-2023-50","text_gpt3_token_len":557,"char_repetition_ratio":0.14514463,"word_repetition_ratio":0.030674847,"special_character_ratio":0.29072165,"punctuation_ratio":0.17424242,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000098,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-02T13:54:04Z\",\"WARC-Record-ID\":\"<urn:uuid:60f6310c-e73b-4cff-9598-5dc2a3cb2dcd>\",\"Content-Length\":\"44424\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9853c677-7f34-402c-ba43-c1e154ed901b>\",\"WARC-Concurrent-To\":\"<urn:uuid:921bea71-d9ae-46ca-87ef-80497dd156d2>\",\"WARC-IP-Address\":\"104.21.84.229\",\"WARC-Target-URI\":\"https://www.proofwiki.org/wiki/Coprimality_Criterion\",\"WARC-Payload-Digest\":\"sha1:IRFHRFSX3QUGOCX2Y6ZCEKIK3SGWMPIP\",\"WARC-Block-Digest\":\"sha1:IM4EWI7FAOXU4BHCLC2KFPYMW5263GLB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233511000.99_warc_CC-MAIN-20231002132844-20231002162844-00480.warc.gz\"}"} |
https://zxi.mytechroad.com/blog/searching/leetcode-529-minesweeper/ | [
"Let’s play the minesweeper game (Wikipediaonline game)!\n\nYou are given a 2D char matrix representing the game board. ‘M’ represents an unrevealed mine, ‘E’ represents an unrevealed empty square, ‘B’ represents a revealed blank square that has no adjacent (above, below, left, right, and all 4 diagonals) mines, digit (‘1’ to ‘8’) represents how many mines are adjacent to this revealed square, and finally ‘X’ represents a revealed mine.\n\nNow given the next click position (row and column indices) among all the unrevealed squares (‘M’ or ‘E’), return the board after revealing this position according to the following rules:\n\n1. If a mine (‘M’) is revealed, then the game is over – change it to ‘X’.\n2. If an empty square (‘E’) with no adjacent mines is revealed, then change it to revealed blank (‘B’) and all of its adjacent unrevealed squares should be revealed recursively.\n3. If an empty square (‘E’) with at least one adjacent mine is revealed, then change it to a digit (‘1’ to ‘8’) representing the number of adjacent mines.\n4. Return the board when no more squares will be revealed.\n\nExample 1:\n\nInput:\n\n[['E', 'E', 'E', 'E', 'E'],\n['E', 'E', 'M', 'E', 'E'],\n['E', 'E', 'E', 'E', 'E'],\n['E', 'E', 'E', 'E', 'E']]\n\nClick : [3,0]\n\nOutput:\n\n[['B', '1', 'E', '1', 'B'],\n['B', '1', 'M', '1', 'B'],\n['B', '1', '1', '1', 'B'],\n['B', 'B', 'B', 'B', 'B']]\n\nExplanation:\n\n\n\nExample 2:\n\nInput:\n\n[['B', '1', 'E', '1', 'B'],\n['B', '1', 'M', '1', 'B'],\n['B', '1', '1', '1', 'B'],\n['B', 'B', 'B', 'B', 'B']]\n\nClick : [1,2]\n\nOutput:\n\n[['B', '1', 'E', '1', 'B'],\n['B', '1', 'X', '1', 'B'],\n['B', '1', '1', '1', 'B'],\n['B', 'B', 'B', 'B', 'B']]\n\nExplanation:\n\n\n\nNote:\n\n1. The range of the input matrix’s height and width is [1,50].\n2. The click position will only be an unrevealed square (‘M’ or ‘E’), which also means the input board contains at least one clickable square.\n3. The input board won’t be a stage when game is over (some mines have been revealed).\n4. For simplicity, not mentioned rules should be ignored in this problem. For example, you don’t need to reveal all the unrevealed mines when the game is over, consider any cases that you will win the game or flag any squares.\n\n## Solution: DFS\n\nTime complexity: O(m*n)\nSpace complexity: O(m* n)\n\n## Python3\n\nIf you like my articles / videos, donations are welcome.\n\nBuy anything from Amazon to support our website",
null,
"## Be First to Comment\n\nMission News Theme by Compete Themes."
] | [
null,
"https://zxi.mytechroad.com/blog/wp-content/uploads/2018/11/wechat_pay_opt.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.67993504,"math_prob":0.87936187,"size":3974,"snap":"2020-34-2020-40","text_gpt3_token_len":1483,"char_repetition_ratio":0.16775818,"word_repetition_ratio":0.31113955,"special_character_ratio":0.44061399,"punctuation_ratio":0.2014768,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97306895,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-12T12:02:33Z\",\"WARC-Record-ID\":\"<urn:uuid:b0b3b18e-2591-426e-a0f3-7faa859c9626>\",\"Content-Length\":\"96395\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:62d2d56a-c5ba-4e9f-9813-53bde6677193>\",\"WARC-Concurrent-To\":\"<urn:uuid:5231b382-ae20-4bd5-b343-fc1b2eaa5ed4>\",\"WARC-IP-Address\":\"107.180.12.40\",\"WARC-Target-URI\":\"https://zxi.mytechroad.com/blog/searching/leetcode-529-minesweeper/\",\"WARC-Payload-Digest\":\"sha1:SBYDETTKLYVH5236ZDXSEV6RQQVUQLHM\",\"WARC-Block-Digest\":\"sha1:WZPY3QRAWAFXFIHGIODUMME3ZG4FL2DZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439738892.21_warc_CC-MAIN-20200812112531-20200812142531-00543.warc.gz\"}"} |
https://codeforces.com/blog/entry/47402 | [
"### ifsmirnov's blog\n\nBy ifsmirnov, history, 5 years ago,",
null,
"It is known (well, as long as Tarjan's proof is correct, and it is believed to be) that DSU with path compression and rank heuristics works in O(α(n)), where α(n) is an inverse Ackermann function. However, there is a rumor in the community that if you do not use the rank heuristic than the DSU runs as fast, if not faster. For years it was a controversial topic. I remember some holywars on CF with ones arguing for a random heuristic and others claiming they have a counter-test where DSU runs in",
null,
"independently of rand() calls.\n\nRecently I was looking through the proceedings of SODA (a conference on Computer Science) and found a paper by Tarjan et al. The abstract states:\n\nRecent experiments suggest that in practice, a naïve linking method works just as well if not better than linking by rank, in spite of being theoretically inferior. How can this be? We prove that randomized linking is asymptotically as efficient as linking by rank. This result provides theory that matches the experiments, which implicitly do randomized linking as a result of the way the input instances are generated.\n\nThis paper is relatively old (2014), though I haven't yet heard of it and decided to share with you.\n\nA whole paper can be found here.",
null,
"dsu,",
null,
"Comments (24)\n » In practice it is about twice slower, isn't it?\n• » » On average testcase work (time of usual implementation with rand) = (time of rand) + (time of usual implementation without rand).On the worst testcase: (time of usual implementation with rand) = (time of rand) + (time of usual implementation without rand) / 2.In practice path-compression without rank-heuristic works in constant time.\n » 5 years ago, # | ← Rev. 2 → What about that version? What is its complexity? int Find(int v) { if (rep[v] == v) { return v; } return rep[v] = Find(rep[v]); } void Union(int a, int b) { rep[Find(a)] = Find(b); } \n• » » Amortized O(log n) per operation.\n• » » » 5 years ago, # ^ | ← Rev. 2 → How to prove that?EDIT: Providing both proof and worst case test would be nice.\n• » » » » 5 years ago, # ^ | ← Rev. 3 → Well, I know the proof that it's",
null,
":http://cs.stackexchange.com/questions/50294/why-is-the-path-compression-no-rank-for-disjoint-sets-o-log-n-amortized-fo/50318#50318EDIT: This paper by Tarjan and van Leeuwen gives pictures with worst-case examples, (see pages 262-264), and a slightly different proof of the fact that it is O(log n): http://bioinfo.ict.ac.cn/~dbu/AlgorithmCourses/Lectures/Union-Find-Tarjan.pdf\n• » » I think, you skiped a return before rep[v] = Find(rep[v])\n• » » » 5 years ago, # ^ | ← Rev. 2 → Thanks, it seems even the simplest possible F&U version cannot be written without a bug ; p. At least that's the kind of bug compiler would tell me about :).\n• » » 5 years ago, # ^ | ← Rev. 2 → Can anyone show me an example of m (union/find) operations that traverses more than 2* min (n, m) edges?\n• » » » When I do a \"find\" or \"union\" operation, it requires a constant time * (number of edges in the path to the root). If I divide these edges in two groups: immediate edge (the first edge in the path) and intermediate edges (the remaining edges). I can easily see that each edge will be an intermediate edge at most once, because we have the path compression. So, the number of intermediate edges will be at most n (number of union/find calls). And obviously, the number of immediate edges will be at most to n.The conclusion is that the total time required for the union/find operations will be constant * (2 * n). Therefore, each operation union/find operation costs O(1) amortized.**P.S: ** Someone corrects me, if I'm wrong.\n• » » » » I saw my mistake, the statement that \"each edge will be an intermediate edge at most once\" is false.\n » You may also find this blog post on Codeforces interesting where natsukagami was wondering why a simple link heuristic was faster than link-by-size. In the comments I mentioned that he was using link-by-index.\n » Maybe the sence is: in random case it works in average with same asymptotic, but with rank heuristic in worth case ?\n• » » Yes, you're right. In case of randomized algorithms the average case is analysed almost always. But this is not the point -- the rumor was that random linking may work on θ(logn) on average, which was disproved by Tarjan.\n » 5 years ago, # | ← Rev. 2 → We prove that randomized linking is asymptotically as efficient as linking by rank. After short discussion ifsmirnov agreed that the post misleads readers.Let's read the given paper attentively.We analyze randomized linking and randomized early linking, which are linking by index and early linking by index with the elements ordered uniformly at random.Authors prove that joinIndex works in O(α(n)). void init() { for (int i = 0; i < n; i++) index[i] = rand(); } void joinIndex( int a, int b ) { if (index[a] > index[b]) swap(a, b); p[a] = b; } Usual join, which uses random in following way: void join( int a, int b ) { if (rand() & 1) swap(a, b); p[a] = b; } reduces depth only in two times. So with path-compression it works in",
null,
"in the worst cast. And without path-compression it works in Θ(n) in the worst case.\n• » » I still haven't heard of any counter-random test. I even actually believed in power of join with random and stopped writing with rank.\n• » » » One such test would be: 1 2 1 3 1 4 ... 1 N Let's prove that random linking runs in expected O(N) time without path compression and O(log N) with it. First, realize that, in reality, find's run time is directly proportional to the length of the longest path, which will always be part of 1's connected component in such a case. Knowing the longest path always increases by at most 1 every union, we only care about the expected number of longest path increases, or, by linearity of expectation, the probability of an increase happening on one particular iteration. This probability is exactly",
null,
", thus the expected number of increases is",
null,
".If we want to include path compression, the complexity drops down to O(log N) even without random linking. The hardest part would be proving it to be Ω (log N). Though harder in theory, the fact that random linking provides linear expected heights while rank heuristics provide logarithmic expected heights makes for enough intuitive evidence of this fact (this goes by the same lines as Burunduk1).\n• » » » » 5 years ago, # ^ | ← Rev. 2 → One such test... Let's prove that random linking runs in expected O(log N) with path compression. Do you mean that DSU with path compression and random linking will work O(NlogN) in total on this test?\n• » » » » » Probably not on this test alone. The main point of the test is to show random linking won't decrease height asymptotically, so random linking is no better than father[a] = b type linking, which in itself runs in O(log N).\n• » » » » You haven't proved anything. Burunduk1 already showed me this test couple of years ago, but I think it's still n alpha(n). I want test with nlogn.\n• » » » » » 5 years ago, # ^ | ← Rev. 2 → I think that here is something similar to O(nlogn):\n• » » » » » » It does. I changed your code a little bit to compare my ways of implementation and the new \"indexes\" one.Here's the result:RANDOM: res[1.0e+04] = 5.872 res[1.0e+05] = 6.667 res[1.0e+06] = 7.450 res[1.0e+07] = 8.227 RANKS: res[1.0e+04] = 2.691 res[1.0e+05] = 2.692 res[1.0e+06] = 2.691 res[1.0e+07] = 2.691 INDEXES: res[1.0e+04] = 3.791 res[1.0e+05] = 3.838 res[1.0e+06] = 3.998 res[1.0e+07] = 3.825 That's the code: https://pastebin.com/mFCP6511Now it's obvious: random is NOT as good as ranks or indexes. I hope the question is closed.\n• » » » » You've given right Ω(n) test for case \"without path compression\".But, I agree with -XraY-, test against \"path compression\" is more complicated. Because even simpler question \"how to make path compression be amortized ω(1)?\" is not so easy.The test for path compression: for n times while (1) new_root = join(old_root, new vertex) if (depth of the tree increased) break get(the most deep vertex) // path compression If there is no random (join makes just p[a] = b) \"while\" loop will do exactly one iteration. With random expected number of iterations is 2.The time of i-th get is",
null,
". To prove it we have to draw some pictures, after 2k calls of \"get\" our tree will be Tk, k-th binomial tree (Tk = Tk - 1 + Tk - 1, add the second root as child of the first one).\n » For future reference, I posted a potential proof of an $\\Omega\\left(n \\frac{\\log n}{\\log \\log n}\\right)$ lower bound for path compression with linking by flipping a fair coin, if we allow the queries to be interactive (i.e. depend on the result of earlier find queries), in this thread."
] | [
null,
"https://codeforces.org/s/48661/images/flags/24/gb.png",
null,
"https://espresso.codeforces.com/ec5a72f1e561968c3d322e9d6ba7e4272b48cac4.png",
null,
"https://codeforces.org/s/48661/images/blog/tags.png",
null,
"https://codeforces.org/s/48661/images/icons/comments-48x48.png",
null,
"https://espresso.codeforces.com/9040a33098f83986b0de64475c66584fbfdf0e22.png",
null,
"https://espresso.codeforces.com/ec5a72f1e561968c3d322e9d6ba7e4272b48cac4.png",
null,
"https://espresso.codeforces.com/eb946338365d9781f7d2e9ec692c26702d0ae3a7.png",
null,
"https://espresso.codeforces.com/cbf25ae6a8f2f0effa5ee0e4c69cea0c3a56d49d.png",
null,
"https://espresso.codeforces.com/d02be19ac76158f10ae4fce91d8da4aac016fdc9.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9221336,"math_prob":0.8966759,"size":7588,"snap":"2021-43-2021-49","text_gpt3_token_len":1892,"char_repetition_ratio":0.119066454,"word_repetition_ratio":0.030134814,"special_character_ratio":0.24657354,"punctuation_ratio":0.118971065,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.982796,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18],"im_url_duplicate_count":[null,8,null,null,null,7,null,null,null,null,null,null,null,null,null,null,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-04T02:35:38Z\",\"WARC-Record-ID\":\"<urn:uuid:9704a263-5093-478c-b3af-41bc1949abad>\",\"Content-Length\":\"171439\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fc02e3e1-9e7f-43bf-978c-ec2ba6ad25a8>\",\"WARC-Concurrent-To\":\"<urn:uuid:d9e82fe3-1bc8-4a87-bdf7-540525d23e50>\",\"WARC-IP-Address\":\"213.248.110.126\",\"WARC-Target-URI\":\"https://codeforces.com/blog/entry/47402\",\"WARC-Payload-Digest\":\"sha1:GZQEXBGOG3ZKOVT23DY7R6XCYGR7SOKM\",\"WARC-Block-Digest\":\"sha1:7CCZVK75ZK4PABAFEWK3GBD7GVILF4V3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964362923.11_warc_CC-MAIN-20211204003045-20211204033045-00064.warc.gz\"}"} |
https://metanumbers.com/2735 | [
"## 2735\n\n2,735 (two thousand seven hundred thirty-five) is an odd four-digits composite number following 2734 and preceding 2736. In scientific notation, it is written as 2.735 × 103. The sum of its digits is 17. It has a total of 2 prime factors and 4 positive divisors. There are 2,184 positive integers (up to 2735) that are relatively prime to 2735.\n\n## Basic properties\n\n• Is Prime? No\n• Number parity Odd\n• Number length 4\n• Sum of Digits 17\n• Digital Root 8\n\n## Name\n\nShort name 2 thousand 735 two thousand seven hundred thirty-five\n\n## Notation\n\nScientific notation 2.735 × 103 2.735 × 103\n\n## Prime Factorization of 2735\n\nPrime Factorization 5 × 547\n\nComposite number\nDistinct Factors Total Factors Radical ω(n) 2 Total number of distinct prime factors Ω(n) 2 Total number of prime factors rad(n) 2735 Product of the distinct prime numbers λ(n) 1 Returns the parity of Ω(n), such that λ(n) = (-1)Ω(n) μ(n) 1 Returns: 1, if n has an even number of prime factors (and is square free) −1, if n has an odd number of prime factors (and is square free) 0, if n has a squared prime factor Λ(n) 0 Returns log(p) if n is a power pk of any prime p (for any k >= 1), else returns 0\n\nThe prime factorization of 2,735 is 5 × 547. Since it has a total of 2 prime factors, 2,735 is a composite number.\n\n## Divisors of 2735\n\n1, 5, 547, 2735\n\n4 divisors\n\n Even divisors 0 4 2 2\nTotal Divisors Sum of Divisors Aliquot Sum τ(n) 4 Total number of the positive divisors of n σ(n) 3288 Sum of all the positive divisors of n s(n) 553 Sum of the proper positive divisors of n A(n) 822 Returns the sum of divisors (σ(n)) divided by the total number of divisors (τ(n)) G(n) 52.2972 Returns the nth root of the product of n divisors H(n) 3.32725 Returns the total number of divisors (τ(n)) divided by the sum of the reciprocal of each divisors\n\nThe number 2,735 can be divided by 4 positive divisors (out of which 0 are even, and 4 are odd). The sum of these divisors (counting 2,735) is 3,288, the average is 822.\n\n## Other Arithmetic Functions (n = 2735)\n\n1 φ(n) n\nEuler Totient Carmichael Lambda Prime Pi φ(n) 2184 Total number of positive integers not greater than n that are coprime to n λ(n) 1092 Smallest positive number such that aλ(n) ≡ 1 (mod n) for all a coprime to n π(n) ≈ 400 Total number of primes less than or equal to n r2(n) 0 The number of ways n can be represented as the sum of 2 squares\n\nThere are 2,184 positive integers (less than 2,735) that are coprime with 2,735. 
And there are approximately 400 prime numbers less than or equal to 2,735.\n\n## Divisibility of 2735\n\n m n mod m 2 3 4 5 6 7 8 9 1 2 3 0 5 5 7 8\n\nThe number 2,735 is divisible by 5.\n\n## Classification of 2735\n\n• Arithmetic\n• Semiprime\n• Deficient\n\n• Polite\n\n• Square Free\n\n### Other numbers\n\n• LucasCarmichael\n\n## Base conversion (2735)\n\nBase System Value\n2 Binary 101010101111\n3 Ternary 10202022\n4 Quaternary 222233\n5 Quinary 41420\n6 Senary 20355\n8 Octal 5257\n10 Decimal 2735\n12 Duodecimal 16bb\n20 Vigesimal 6gf\n36 Base36 23z\n\n## Basic calculations (n = 2735)\n\n### Multiplication\n\nn×i\n n×2 5470 8205 10940 13675\n\n### Division\n\nni\n n⁄2 1367.5 911.666 683.75 547\n\n### Exponentiation\n\nni\n n2 7480225 20458415375 55953766050625 153033550148459375\n\n### Nth Root\n\ni√n\n 2√n 52.2972 13.9847 7.23168 4.86846\n\n## 2735 as geometric shapes\n\n### Circle\n\n Diameter 5470 17184.5 2.34998e+07\n\n### Sphere\n\n Volume 8.5696e+10 9.39993e+07 17184.5\n\n### Square\n\nLength = n\n Perimeter 10940 7.48022e+06 3867.87\n\n### Cube\n\nLength = n\n Surface area 4.48814e+07 2.04584e+10 4737.16\n\n### Equilateral Triangle\n\nLength = n\n Perimeter 8205 3.23903e+06 2368.58\n\n### Triangular Pyramid\n\nLength = n\n Surface area 1.29561e+07 2.41105e+09 2233.12\n\n## Cryptographic Hash Functions\n\nmd5 1d49780520898fe37f0cd6b41c5311bf 874662c69f110a90da1066f0235e8643c245726b 93f606bd517c88c296b17fe207ea50ce5019e0daa47ab2d5999ec48ed9b3cf41 af6f2c840d302d0ebb7ffe7114e53eea2d766960f1269cb89cb06851f9a7721da8dc77eaaf20e7eeb08e1a104b0d1afd56b862ba99c9f4572b04e1a0e8083aee a97b8821a94474269dee1448d9ed0ab0df0621fc"
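The headline figures above (factorization 5 × 547, divisors 1, 5, 547, 2735, σ(n) = 3288, φ(n) = 2184, and the base conversions) can be re-derived with a few lines of Python. This check is an addition, not part of the original page:

```python
def prime_factors(n):
    f, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            f[d] = f.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def totient(n):
    phi = n
    for p in prime_factors(n):
        phi -= phi // p
    return phi

def to_base(n, b, digits="0123456789abcdefghijklmnopqrstuvwxyz"):
    out = ""
    while n:
        n, r = divmod(n, b)
        out = digits[r] + out
    return out or "0"

n = 2735
print(prime_factors(n))                                # {5: 1, 547: 1}
d = divisors(n)
print(d, sum(d))                                       # [1, 5, 547, 2735] 3288
print(totient(n))                                      # 2184
print(to_base(n, 2), to_base(n, 12), to_base(n, 36))   # 101010101111 16bb 23z
```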
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.61897004,"math_prob":0.9775698,"size":4419,"snap":"2021-04-2021-17","text_gpt3_token_len":1569,"char_repetition_ratio":0.11891279,"word_repetition_ratio":0.02827381,"special_character_ratio":0.43742928,"punctuation_ratio":0.07431552,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9958025,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-26T03:37:04Z\",\"WARC-Record-ID\":\"<urn:uuid:b46ad208-7f3f-41af-abc2-ae2cd52eb9a7>\",\"Content-Length\":\"48077\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fa1ed410-678c-4a0a-8c97-be74abe2b84b>\",\"WARC-Concurrent-To\":\"<urn:uuid:2a34f00f-b3cc-40c1-a62e-29371da5bd84>\",\"WARC-IP-Address\":\"46.105.53.190\",\"WARC-Target-URI\":\"https://metanumbers.com/2735\",\"WARC-Payload-Digest\":\"sha1:PY5U4HK75HJUBWKQOJOSNN27J7JQOSJ4\",\"WARC-Block-Digest\":\"sha1:G7OK3NS64DTTB6LWV4V6Y6JFVJFFTJEC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610704795033.65_warc_CC-MAIN-20210126011645-20210126041645-00366.warc.gz\"}"} |
https://joomla.stackexchange.com/questions/9138/output-repeatable-field-based-on-user-defined-order | [
"# Output repeatable field based on user defined order\n\nI have a very basic repeatable field where I am allowing the user to enter some field along with an input where they can enter the 'Display Order' [1,2,3,4,5]. But I am struggling with how I could apply any sorting function on the data that is stored for the repeatable field. A sample output is the following:\n\n``````stdClass Object\n(\n[order] => Array\n(\n => 1\n => 3\n => 2\n)\n[url] => Array\n(\n => /first\n => /third\n => /second\n)\n)\n``````\n\nHow would I go about so that when the module content is displayed in the front end; it would be based on the order defined by user [1,2,3] and not as it is currently [1,3,2].\n\nHere is a sample function that can give you ordered list based on your order array.\n\n``````public function getSortedArray(\\$myObj, \\$ordering)\n{\n// first build an array with keys as your order, values as data (url)\n\\$myArray = array();\nforeach(\\$myObj->order as \\$i=>\\$order)\n{\n\\$myArray[\\$order] = \\$myObj->url[\\$i];\n}\n\n// now you can get the data as per the user requested order\n\\$return = array();\nforeach(\\$ordering as \\$order)\n{\n\\$return[] = \\$myArray[\\$order];\n}\n\nreturn \\$return;\n}\n``````"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9381503,"math_prob":0.9401961,"size":615,"snap":"2019-51-2020-05","text_gpt3_token_len":183,"char_repetition_ratio":0.10638298,"word_repetition_ratio":0.016666668,"special_character_ratio":0.33333334,"punctuation_ratio":0.099236645,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9793343,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-24T20:31:55Z\",\"WARC-Record-ID\":\"<urn:uuid:9efd37a3-666a-4541-861d-0dfeef0687b6>\",\"Content-Length\":\"127988\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5917c3ae-be3b-483e-b253-1207a40344fb>\",\"WARC-Concurrent-To\":\"<urn:uuid:696a763d-f4c9-4294-9d1a-0c2ecf9d5ece>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://joomla.stackexchange.com/questions/9138/output-repeatable-field-based-on-user-defined-order\",\"WARC-Payload-Digest\":\"sha1:BDGWSHDQYLADZAQQMKMORULORTMNNFH3\",\"WARC-Block-Digest\":\"sha1:DO46D7SUPJTOZA5CXBFV2VFB7HKZDQNH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250625097.75_warc_CC-MAIN-20200124191133-20200124220133-00100.warc.gz\"}"} |
http://blog.lambdaconcept.com/doku.php?id=migen:migen_install | [
"",
null,
"Wiki\n\nSite Tools\n\nmigen:migen_install\n\nHow to install?\n\nYou'll need python3 in order to use migen.\n\n```git clone --recursive https://github.com/m-labs/migen.git\ncd migen\n./setup.py install```\n\nCheck if it works\n\nFor the moment, all we are interested in is to see if your migen is correctly installed. We will get into the explanation of the code later. Copy or download this example file\n\nmyexample.py\n```from migen import *\n\n# Our simple counter, which increments at every cycle.\nclass Counter(Module):\ndef __init__(self):\nself.count = Signal(4)\n\n# At each cycle, increase the value of the count signal.\n# We do it with convertible/synthesizable FHDL code.\nself.sync += self.count.eq(self.count + 1)\n\n# Simply read the count signal and print it.\n# The output is:\n# Count: 0\n# Count: 1\n# Count: 2\n# ...\ndef counter_test(dut):\nfor i in range(20):\nprint((yield dut.count)) # read and print\nyield # next clock cycle\n# simulation ends with this generator\n\nif __name__ == \"__main__\":\ndut = Counter()\nrun_simulation(dut, counter_test(dut), vcd_name=\"basic1.vcd\")```\n\nNow just run it. You should obtain a counter that goes from 0 to 15 and loop again.\n\n`python3 myexample.py`\n\nHere is an example of what you could obtain.\n\nFinally, Notice that you have a basic.vcd file that has been created. You can open it with gtkwave.",
null,
""
] | [
null,
"http://blog.lambdaconcept.com/lib/exe/fetch.php",
null,
"http://blog.lambdaconcept.com/lib/exe/indexer.php",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.81983894,"math_prob":0.4130805,"size":1202,"snap":"2019-26-2019-30","text_gpt3_token_len":304,"char_repetition_ratio":0.10183639,"word_repetition_ratio":0.0,"special_character_ratio":0.27121463,"punctuation_ratio":0.17670682,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9516397,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,8,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-26T07:56:15Z\",\"WARC-Record-ID\":\"<urn:uuid:796c45cb-ced6-4009-a2d6-58b21c6196cd>\",\"Content-Length\":\"15234\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9f3b4410-4bf9-485c-a1ce-57786043d4ee>\",\"WARC-Concurrent-To\":\"<urn:uuid:65518c64-7311-4c7a-a16b-f8cb333bd355>\",\"WARC-IP-Address\":\"163.172.95.192\",\"WARC-Target-URI\":\"http://blog.lambdaconcept.com/doku.php?id=migen:migen_install\",\"WARC-Payload-Digest\":\"sha1:C4PNJN4HFZXDRTVB4HWDXKAZSKL32XLR\",\"WARC-Block-Digest\":\"sha1:ON7ZI7BPS57W3EOESLU3HPEDZZRAUFHV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560628000231.40_warc_CC-MAIN-20190626073946-20190626095946-00546.warc.gz\"}"} |
https://www.investopedia.com/terms/e/envelope.asp | [
"## What Is an Envelope?\n\nEnvelopes are technical indicators that are typically plotted over a price chart with upper and lower bounds. The most common example of an envelope is a moving average envelope, which is created using two moving averages that define upper and lower price range levels. Envelopes are commonly used to help traders and investors identify extreme overbought and oversold conditions as well as trading ranges.\n\n### Key Takeaways\n\n• An envelope, in technical analysis, refers to trend lines plotted both above and below the current price.\n• An envelope's upper and lower bands are typically generated by a simple moving average and a pre-determined distance above and below the moving average—but can be created using any number of other techniques.\n• Many traders react to a sell signal when price reaches or crosses the upper band and a buy signal when price reaches or crosses the lower band of an envelope channel.\n\n## How Envelopes Work\n\nTraders can interpret envelopes in many different ways, but most use them to define trading ranges. When the price reaches the upper bound, the security is considered overbought, and a sell signal is generated. Conversely, when the price reaches the lower bound, the security is considered oversold, and a buy signal is generated. These strategies are based on mean reversion principles.\n\nThe upper and lower bounds are typically defined such that the price tends to stay within the upper and lower thresholds during normal conditions. For a volatile security, traders may use higher percentages when creating the envelope to avoid whipsaw trading signals. Meanwhile, less volatile securities may necessitate lower percentages to create a sufficient number of trading signals.\n\nEnvelopes are commonly used in conjunction with other forms of technical analysis to enhance the odds of success. For example, traders may identify potential opportunities when the price moves outside of the envelope and then look at chart patterns or volume metrics to identify when a tipping point is about to occur. After all, securities can trade at overbought or oversold conditions for a prolonged period of time.\n\n## Example of an Envelope\n\nMoving average envelopes are the most common type of envelope indicator. Using either a simple or exponential moving average, an envelope is created by defining a fixed percentage to create upper and lower bounds.\n\nLet's take a look at a five percent simple moving average envelope for the S&P 500 SPDR (SPY):\n\nThe calculations for this envelope are:\n\n\\begin{aligned} &\\text{Upper Bound} = \\text{SMA}_{50} + \\text{SMA}_{50}*0.05\\\\ &\\text{Lower Bound} = \\text{SMA}_{50} - \\text{SMA}_{50}*0.05\\\\ &\\text{Midpoint} = \\text{SMA}_{50}\\\\ \\\\ \\textbf{where:}&\\\\ &\\text{SMA}_{50}=\\text{50-day Simple Moving Average} \\\\ \\end{aligned}\n\nTraders may have taken a short position in the exchange-traded fund when the price moved beyond the upper range and a long position when the price moved below the lower range. In these cases, the trader would have benefited from the reversion to the mean over the following periods. Traders may set stop-loss points at a fixed percentage beyond the upper and lower bounds, while take-profit points are often set at the midpoint line."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9093048,"math_prob":0.9553952,"size":3125,"snap":"2019-51-2020-05","text_gpt3_token_len":624,"char_repetition_ratio":0.139058,"word_repetition_ratio":0.016260162,"special_character_ratio":0.18688,"punctuation_ratio":0.082585275,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97026926,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-27T08:35:37Z\",\"WARC-Record-ID\":\"<urn:uuid:ec2095ed-2a2d-403d-9fe6-9393c95aed15>\",\"Content-Length\":\"111430\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f5c81f15-7fe4-4aad-86cb-ca256f45e86b>\",\"WARC-Concurrent-To\":\"<urn:uuid:deceea03-6d7f-4e2a-abde-77253b1836c2>\",\"WARC-IP-Address\":\"151.101.250.114\",\"WARC-Target-URI\":\"https://www.investopedia.com/terms/e/envelope.asp\",\"WARC-Payload-Digest\":\"sha1:7EKM4PDAUJCFPF7LUFVELP7IG7AQUWY2\",\"WARC-Block-Digest\":\"sha1:7TOS54F2FZHXZLU2HZRB7BGHE6RJJYHN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579251696046.73_warc_CC-MAIN-20200127081933-20200127111933-00170.warc.gz\"}"} |
https://answers.everydaycalculation.com/gcf/120-1260 | [
"Solutions by everydaycalculation.com\n\n## What is the GCF of 120 and 1260?\n\nThe gcf of 120 and 1260 is 60.\n\n#### Steps to find GCF\n\n1. Find the prime factorization of 120\n120 = 2 × 2 × 2 × 3 × 5\n2. Find the prime factorization of 1260\n1260 = 2 × 2 × 3 × 3 × 5 × 7\n3. To find the gcf, multiply all the prime factors common to both numbers:\n\nTherefore, GCF = 2 × 2 × 3 × 5\n4. GCF = 60\n\nMathStep (Works offline)",
null,
"Download our mobile app and learn how to find GCF of upto four numbers in your own time:"
] | [
null,
"https://answers.everydaycalculation.com/mathstep-app-icon.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7713359,"math_prob":0.99886656,"size":603,"snap":"2019-51-2020-05","text_gpt3_token_len":198,"char_repetition_ratio":0.13355593,"word_repetition_ratio":0.0,"special_character_ratio":0.43283582,"punctuation_ratio":0.07826087,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9966435,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-13T03:43:07Z\",\"WARC-Record-ID\":\"<urn:uuid:f4a984d8-1015-4d8b-b91f-6bdb3fc6ebbb>\",\"Content-Length\":\"6113\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4375d6e2-ea5b-4938-8553-e6228ab25119>\",\"WARC-Concurrent-To\":\"<urn:uuid:866ef93e-cca5-424a-a60a-878ef74ff2c6>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/gcf/120-1260\",\"WARC-Payload-Digest\":\"sha1:7XJVIJZA5QV3V3F27K7DZS2LHTBD76ER\",\"WARC-Block-Digest\":\"sha1:ZYRTKKUVHIBEGPHEOFOGGCOQ4J3GTPJ7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540548537.21_warc_CC-MAIN-20191213020114-20191213044114-00379.warc.gz\"}"} |
https://solvedlib.com/excel-use-simplex-method-and-exel-to-solve-the,164320 | [
"# Excel Use Simplex method and Exel To solve the following LPPs. Maximize Maximize P-3x + x2...\n\n###### Question:\n\nExcel",
null,
"Use Simplex method and Exel To solve the following LPPs. Maximize Maximize P-3x + x2 subject to the constraints x1 + x2 = 2 2x) + 3x2 s 12 3x + = 12 x 20 x220 P = 5x1 + 7x2 subject to the constraints 2xy + 3x2 = 12 3x + x2 = 12 x 20 *2 2 0 Maximize Maximize P = 2x2 + 4x2 + x3 subject to the constraints -*1 + 2x2 + 3x3 5 6 -*, + 4x2 + 5x3 5 *1 + 5x2 + 7x3 = 7 X, 20 X220X320 P = 6x + 3x2 + 2x3 subject to the constraints 2x1 + 2x2 + 3x3 = 30 2x1 + 2x2 + x3 = 12 x 20 x 20 x 20 Maximize P = 4x1 + 2x2 + 5x3 subject to the constraints *1 + 3x2 + 2x3 = 30 2xy + x2 + 3x3 = 12 X 20 X220 x 20\n\n#### Similar Solved Questions\n\n##### Bauer Brewing Co. has the following data, dollars in thousands. If it follows the residual dividend...\nBauer Brewing Co. has the following data, dollars in thousands. If it follows the residual dividend model, what will its dividend payout ratio be? Capital budget $15,000 % Debt 55% Net income (NI)$9,000 25.00% 30.00% 45.00% 60.00% 75.00%...\n##### If a (ction s(e) poinion Of kttibn me Gexavavue %w time 9ivet me velbury mhot iC acths\"&}. For, at \"MC gic \"porinion hna () Vlt) , and (b}me veiouny when t=0 te5 andte? s()eigt2-ide+4 km (89 - Foot If QObielt iS rdoopea Feet apove me \"gw93 itJ posieoz C46 € tirg \"engte Imne z by Slt) = t iS me in fecondc Since itwas drppea a . Whak i$ik velogiry delond anttk being drpped 2 b . When willithik me goowd? 6. Whok i$ ikS Vlloaky wpon inpact? Thc cevenve in dolars 9ene\nif a (ction s(e) poinion Of kttibn me Gexavavue %w time 9ivet me velbury mhot iC acths\"&}. For, at \"MC gic \"porinion hna () Vlt) , and (b}me veiouny when t=0 te5 andte? s()eigt2-ide+4 km (89 - Foot If QObielt iS rdoopea Feet apove me \"gw93 itJ posieoz C46 € tirg \"e...\nConsider the function f defined on [0, 1] such that f (0) = 1,f(1) 3,f(1/5) = f(3/5) and f (2/5) f(4/5) Suppose that thc approximation of [ = J: f(x)dx by composite Trapezoidal rule with $subintervals is [.8, then thc value ofc is: O -0.5 0 0.5 0... 1 answer ##### Evaluate the triple integrals over the indicated region. Be alert for simplifications and auspicious orders of iteration.$\\iiint_{R}\\left(x^{2}+y^{2}\\right) d V,$over the cube$0 \\leq x, y, z \\leq 1$Evaluate the triple integrals over the indicated region. Be alert for simplifications and auspicious orders of iteration.$\\iiint_{R}\\left(x^{2}+y^{2}\\right) d V,$over the cube$0 \\leq x, y, z \\leq 1$... 1 answer ##### Hi, I need help with this question. Can you please explain in details thank you! Case... Hi, I need help with this question. Can you please explain in details thank you! Case Study 4: Uncoordinated Felix, a 54-year-old masonry contractor, has been complaining for the past three months about increasing arm weakness and that it is becoming more difficult to pick up and use his tools. He ... 5 answers ##### What do we mean when we say that bonding is on a continuum? What do we mean when we say that bonding is on a continuum?... 1 answer ##### 1. Predict the product(s) (or no reaction) and reagent(s). (31 pts; no partial point) j. РСС... 1. Predict the product(s) (or no reaction) and reagent(s). (31 pts; no partial point) j. РСС Н k. H2SO4, H20 Н I. 1. SOCI2, EtgN ОН 2. NaOAc m. MsCl LDA ОН pyridine... 1 answer ##### One of$\\sin \\theta, \\cos \\theta,$and$\\tan \\theta$is given. Find the other two if$\\theta$lies in the specified interval. $\\cos \\theta=\\frac{1}{3}, \\quad \\theta \\text { in }\\left[-\\frac{\\pi}{2}, 0\\right]$ One of$\\sin \\theta, \\cos \\theta,$and$\\tan \\theta$is given. 
Find the other two if$\\theta$lies in the specified interval. $\\cos \\theta=\\frac{1}{3}, \\quad \\theta \\text { in }\\left[-\\frac{\\pi}{2}, 0\\right]$... 5 answers ##### How Many cenuel Lomf 25 Ine Nokeue CH;CHEHYOH? \"hat uedar?Crar the Levis suciUIta molecuk CHCKEHO:_ Cairin: % > 8eomedyEch ceniniTh: molaculss BH+ NO and FEIs uc consikred mizht b; mort rcactive Men \"nomal\" moksuk_Telalively (eictive. 3urrssl 0 reason why cach How Many cenuel Lomf 25 Ine Nokeue CH;CHEHYOH? \"hat uedar? Crar the Levis suciUI ta molecuk CHCKEHO:_ Cairin: % > 8eomedy Ech cenini Th: molaculss BH+ NO and FEIs uc consikred mizht b; mort rcactive Men \"nomal\" moksuk_ Telalively (eictive. 3urrssl 0 reason why cach... 1 answer ##### The equations frwdx = Jut()dy and fr\"W8x=**?W-S4 & F\"() dx, where y = r \"(a) for... The equations frwdx = Jut()dy and fr\"W8x=**?W-S4 & F\"() dx, where y = r \"(a) for the first formula and tº 1 is integrable, give the two different formulas for the integral of cos shown below Can both integrations be correct? Explain ſcos * 'xdx => 'x-sin (cos\u0003... 4 answers ##### Tlue fuutction lus Lt1 Mru 24t the surival Muuctlon Sulz) for McFLi; MMAck Wat Fipllta| limiting ;g0' _ Verily that Sulz) satlslics tle first ITGeMIL~ ~urt /alMumc [ Ioii_ Wat pdi = Mut Muue Fimulam Titikl6 a5M Seel Qaph tlu ~urvival ftict M MIc] [email protected] Iil D2\" CluuCilculahe I0l5tu= Cluu Mn WaTu, uul (tut Gu ~-ertt'~uns l fuuuton fr _ lile age 50, Inchling the range; Wuat pdf of the future Iilethue Fiuttrhik6 Ille ge 5O: Colculate tlc prolwbility thut lifc ug0 50 ill di: betxrnl Tlue fuutction lus Lt1 Mru 24t the surival Muuctlon Sulz) for McFLi; MMAck Wat Fipllta| limiting ;g0' _ Verily that Sulz) satlslics tle first ITGeMIL~ ~urt /alMumc [ Ioii_ Wat pdi = Mut Muue Fimulam Titikl6 a5M Seel Qaph tlu ~urvival ftict M MIc] [email protected] Iil D2\" Cluu Cilculahe I0l5tu= Cluu Mn... 1 answer ##### During its first year of operations, Spring Garden Plans earned net credit sales of$377,000. Industry...\nDuring its first year of operations, Spring Garden Plans earned net credit sales of $377,000. Industry experience suggests that bad debts will amount to 1% of net credit sales. At December 31, 2018, accounts receivable total$35,000. The company uses the allowance method to account for uncollectible...\n##### A triangle has sides A, B, and C. The angle between sides A and B is (pi)/3. If side C has a length of 1 and the angle between sides B and C is ( pi)/8, what are the lengths of sides A and B?\nA triangle has sides A, B, and C. The angle between sides A and B is (pi)/3. If side C has a length of 1 and the angle between sides B and C is ( pi)/8, what are the lengths of sides A and B?...\n##### Consider the differential equation_ty\" 2=1+7,y(3) =1(a) Solve the initial value problem_The solution is y(x)4mI%0 <) sinlogcy(b) Determine atleast approximately where the solution is valid_ (Round your answer to two decimal places.)The solution is valid in the interval\nConsider the differential equation_ ty\" 2=1+7,y(3) =1 (a) Solve the initial value problem_ The solution is y(x) 4mI%0 <) sin log cy (b) Determine atleast approximately where the solution is valid_ (Round your answer to two decimal places.) 
The solution is valid in the interval...\n##### 15,2 AUow; r~dwtt Gv & Mt j\" = Cs Mdtnifs YAcKd n An ec 0e7pn 125) 0z Ma 0z6 EmLy6r I < 6llsn; \"J proeuat C noGi ve MaJnf pyoduct Uam & Yeaekt mecksnism plz { seo0+JHoAst fv Mc sho w aUl posstLl py YAch'n Name TzAclan| Tn& Ilow1 m< thsnism pleaj+) prdutts C Ne Hso\n15,2 AUow; r~dwtt Gv & Mt j\" = Cs Mdtnifs YAcKd n An ec 0e7pn 125) 0z Ma 0z6 EmLy 6r I < 6llsn; \"J proeuat C no Gi ve MaJnf pyoduct Uam & Yeaekt mecksnism plz { se o 0+J Ho Ast fv Mc sho w aUl posstLl py YAch'n Name TzAclan| Tn& Ilow1 m< thsnism pleaj+) prdutts C N...\n##### Cetarmiie tte prcpontion buainee ehudarts honeve The business cologa computirg cuntarwen the proporlon excneds 30r6, tran the Iab wlll gcala back a propaend panonal computers (PC 5) at bome. studente Wure renceni Gamhprdand have PC aattona argement of Its fecilities_ Suppose 250 businesa Find tha rejoction reglon for Ihis test using -Select one: Rejact Hc If z < -645. if 2 > ,845. Reject MI 7 05 0r 2 < Aeject Ho I= 645. Reject Ho\ncetarmiie tte prcpontion buainee ehudarts honeve The business cologa computirg cuntarwen the proporlon excneds 30r6, tran the Iab wlll gcala back a propaend panonal computers (PC 5) at bome. studente Wure renceni Gamhprdand have PC aattona argement of Its fecilities_ Suppose 250 businesa Find tha re...\n##### A consumer agency wants to estimate the proportion of all drivers who wear seat belts while...\nA consumer agency wants to estimate the proportion of all drivers who wear seat belts while driving. A preliminary study of 600 people has shown that 436 of drivers wear seat belts while driving. How large should the sample size be so that the 99% confidence interval for the population proportion ha...\n##### Hi could someone please help me with these questions. Also, could you write the steps clearly...\nHi could someone please help me with these questions. Also, could you write the steps clearly so I understand how you got the answers. Thanks in advance! FORMULAS W Weight Finding magnitude of diagonal Resultant Finding direction of diagonal resultant mass x g Resultant Square root (opposite? + ...\n##### How do you integrate (ln x) ^ 2 / x ^ 2?\nHow do you integrate (ln x) ^ 2 / x ^ 2#?..."
] | [
null,
"https://i.imgur.com/9ZOxbBt.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7523341,"math_prob":0.98029256,"size":17082,"snap":"2022-40-2023-06","text_gpt3_token_len":5502,"char_repetition_ratio":0.09772807,"word_repetition_ratio":0.5387517,"special_character_ratio":0.30318463,"punctuation_ratio":0.14477652,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9796478,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-07T07:19:18Z\",\"WARC-Record-ID\":\"<urn:uuid:25d5d469-60d5-4bac-9e8d-4eff78ac92f0>\",\"Content-Length\":\"85420\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:be326b15-f2e7-4ac6-a7b0-50330b84735e>\",\"WARC-Concurrent-To\":\"<urn:uuid:177610b9-b866-42e6-b76a-c85d7959fda5>\",\"WARC-IP-Address\":\"172.67.132.66\",\"WARC-Target-URI\":\"https://solvedlib.com/excel-use-simplex-method-and-exel-to-solve-the,164320\",\"WARC-Payload-Digest\":\"sha1:PRTBNPHENKXLNBB6FR3DZGRABXLDO25J\",\"WARC-Block-Digest\":\"sha1:JQR73EPA2Q4NB7I3LOFSWXKHWDDKJCAM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337971.74_warc_CC-MAIN-20221007045521-20221007075521-00381.warc.gz\"}"} |
https://da.overleaf.com/latex/templates/template-for-proofs-in-discrete-and-argumentative-mathematics/rpvyqnyywhwk | [
"# Template for proofs in Discrete and Argumentative Mathematics\n\nAuthor\nstanley\nAbstractThis is the template for DAM (discrete and argumentative mathematics). We prove theorem $2.1$ using the method of proof by way of contradiction. This theorem states that for any set $A$, that in fact the empty set is a subset of $A$, that is $\\emptyset \\subset A$."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8128966,"math_prob":0.95970196,"size":385,"snap":"2021-21-2021-25","text_gpt3_token_len":93,"char_repetition_ratio":0.104986876,"word_repetition_ratio":0.0,"special_character_ratio":0.22337662,"punctuation_ratio":0.09589041,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97973245,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-18T14:32:42Z\",\"WARC-Record-ID\":\"<urn:uuid:fbd91681-229f-45e6-bed2-cfe21c40b279>\",\"Content-Length\":\"25471\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cbc21749-7879-43ce-8d3a-cd9baa6c9908>\",\"WARC-Concurrent-To\":\"<urn:uuid:0ef1d43d-d797-4179-975d-a91901a665e9>\",\"WARC-IP-Address\":\"34.120.186.93\",\"WARC-Target-URI\":\"https://da.overleaf.com/latex/templates/template-for-proofs-in-discrete-and-argumentative-mathematics/rpvyqnyywhwk\",\"WARC-Payload-Digest\":\"sha1:IZQHLZKSHXZVS6K4S3YKFXQPWYK4MPHR\",\"WARC-Block-Digest\":\"sha1:ORL5JXL7ZMWZHW747XQIK5IQ7MHGU6FZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243989637.86_warc_CC-MAIN-20210518125638-20210518155638-00262.warc.gz\"}"} |
https://www.handakafunda.com/cat-2019/quantitative-aptitude-algebra-logarithm-let-x-and-y-be-positive-real-numbers-such-that/ | [
"# Quantitative Aptitude – Algebra – Logarithm – Let x and y be positive real numbers such that\n\n## Let x and y be positive real numbers such that – Video\n\nQ. Let x and y be positive real numbers such that log(base 5) (x + y) + log(base 5) (x − y) = 3, and log(base 2)y − log(base 2)x = 1 − log(base 2)3. Then xy equals?\n1. 150\n2. 100\n3. 25\n4. 250\n\nSolution: Given, log(base5) (x + y) + log(base5) (x − y) = 3\nOr log(base5) (x + y)*(x-y) =3\nOr x^2 –y^2 = 5^3 = 125————-1)\nlog(base2) y − log(base2) x = 1 − log(base2) 3= log(base2) 2 – log(base2) 3\nlog(base2) y/x = log(base2) 2/3\ny/x = 2/3\ny = 2x/3\nfrom eq 1) x^2 – (2x/3)^2 = 125\nx^2 – (4x^2/9) = 125\n5x^2 = 125*9 or x^2 = 225\nx = 15\ny= 2x/3 = 30/3 = 10\nxy = 15*10 =150\n\n## Crack CAT with Unacademy!\n\nUse referral code HANDA to get 10% off.\n\n• Daily Live Classes\n• Live Tests and Quizzes\n• Structured Courses\n• Personalized Coaching"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6769178,"math_prob":0.99987495,"size":1435,"snap":"2022-40-2023-06","text_gpt3_token_len":528,"char_repetition_ratio":0.13766597,"word_repetition_ratio":0.13475177,"special_character_ratio":0.3728223,"punctuation_ratio":0.04452055,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998485,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-04T06:08:49Z\",\"WARC-Record-ID\":\"<urn:uuid:f426a5de-3960-4e03-b4b4-a1c53734c94d>\",\"Content-Length\":\"37288\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b27dbef0-fdf0-486c-bb9d-4a141e55f4b8>\",\"WARC-Concurrent-To\":\"<urn:uuid:d3daf5dd-3f2d-4c9a-976d-41b842194a35>\",\"WARC-IP-Address\":\"172.67.145.19\",\"WARC-Target-URI\":\"https://www.handakafunda.com/cat-2019/quantitative-aptitude-algebra-logarithm-let-x-and-y-be-positive-real-numbers-such-that/\",\"WARC-Payload-Digest\":\"sha1:O4HQCSQ77ZSITUJYCY7NTX3VQMKM66GB\",\"WARC-Block-Digest\":\"sha1:WENWSD2H25UTEGVYHTPOIGKN6QPF3J6Y\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337480.10_warc_CC-MAIN-20221004054641-20221004084641-00554.warc.gz\"}"} |
https://community.intel.com/t5/Intel-Fortran-Compiler/Use-Fortran-dll-in-VC/td-p/1093497 | [
"Community\ncancel\nShowing results for\nDid you mean:\nHighlighted\nBeginner\n33 Views\n\n## Use Fortran dll in VC++\n\nHi! I created a Fortran code with 3 function. This is my Fortran code:\n\n!DEC\\$ ATTRIBUTES DLLEXPORT :: CIRCLE_AREA\n!DEC\\$ ATTRIBUTES ALIAS : \"Circle_Area\" :: CIRCLE_AREA\n\nimplicit none\nreal, parameter :: PI = 3.14159\nreturn\n\nend function\n\ninteger function sum(a)\n!DEC\\$ ATTRIBUTES DLLEXPORT :: SUM\n\nimplicit none\ninteger :: a(10)\ninteger i\nsum=0\ndo i=1,10\nsum=sum+a(i)\nend do\nreturn\n\nend function\n\nsubroutine MakeLower(string)\n!DEC\\$ ATTRIBUTES DLLEXPORT :: MAKELOWER\n\nimplicit none\ncharacter(len=*) :: string\ninteger :: len, i, code\nlen = len_trim(string)\ndo i=1,len\ncode = ichar(string(i:i))\nif ( code >= ichar('a') .and. code <= ichar('z') ) then\nstring(i:i) = char(code-32)\nend if\nend do\nreturn\n\nend subroutine\n\nI use this Fortran code to create dll (Dll1). Then I wrote a c program to call function circle_area() in Fortran code.\n\n#include \"stdafx.h\"\n#include \"stdlib.h\"\nextern \"C\" {float circle_area(float *a); }\n\nint _tmain(int argc, _TCHAR* argv[])\n{\nfloat b = 3.;\ncircle_area(&b);\nsystem(\"pause\");\nreturn 0;\n}\n\nBefore I ran my c code, I included my dll in Visual Studio 2013. First I added my Dll1.lib directory in VC++ directories-> Library Directories, in Linker -> General -> Additional Library Directories added Dll1.lib directory again, in Linker -> Input -> Additional Dependencies added Dll1.lib. Last, I put Dll1.dll in c project folder that it will create .exe together.\n\nAfter running my c code, I got these messages :\n\nerror LNK2019: unresolved external symbol '_circle_area ' referenced in function '_wmain'\n\nerror LNK1120: 1 unresolved external symbol\n\nIt seems that I need something setup. Can someone help me?\n\nTags (1)\n\nAccepted Solutions\nHighlighted\nHonored Contributor I\n33 Views\n\nYou may also want to keep in mind the standard option:\n\n```module m\n\nuse, intrinsic :: iso_c_binding, only : c_float\n\nimplicit none\n\nprivate\n\npublic :: circle_area\n\nreal(c_float), parameter :: PI = 3.14159 !.. Is this precise enough for your needs?\n! pi = acos(-1.0_xx) is one way to get most precision\n! for a selected real kind of xx\n\ncontains\n\nfunction circle_area(radius) result( Area ) bind(C, name=\"Circle_Area\")\n!DEC\\$ ATTRIBUTES DLLEXPORT :: CIRCLE_AREA\n\n!.. Function result\nreal(c_float) :: Area\n\nreturn\n\nend function circle_area\n\nend module m\n```\n```#include <iostream>\nusing namespace std;\n\nextern \"C\" {\n\n// Prototype for the Fortran Function\nfloat Circle_Area( float );\n\n}\n\nint main()\n{\n\nfloat r = 2.0;\n\ncout << \"Area of circle with radius of 2.0 = \" << Circle_Area(r) << endl;\n\nreturn 0;\n\n}\n```\n\n8 Replies\nHighlighted\nEmployee\n33 Views\n\nAdd \"DECORATE,\" before ALIAS. You told Fortran that the external name was just \"Circle_Area\" but C, on 32-bit WIndows, adds a leading underscore (as does Fortran for normal use.) DECORATE tells Fortran to add the appropriate platform-specific name decoration.\n\nYou don't put dll.lib in \"Additional library directories\". Instead you put the directory where dll.lib is put. 
Note also that you will need to copy the DLL itself to the executable directory.\n\nHighlighted\nHonored Contributor I\n34 Views\n\nYou may also want to keep in mind the standard option:\n\n```module m\n\nuse, intrinsic :: iso_c_binding, only : c_float\n\nimplicit none\n\nprivate\n\npublic :: circle_area\n\nreal(c_float), parameter :: PI = 3.14159 !.. Is this precise enough for your needs?\n! pi = acos(-1.0_xx) is one way to get most precision\n! for a selected real kind of xx\n\ncontains\n\nfunction circle_area(radius) result( Area ) bind(C, name=\"Circle_Area\")\n!DEC\\$ ATTRIBUTES DLLEXPORT :: CIRCLE_AREA\n\n!.. Function result\nreal(c_float) :: Area\n\nreturn\n\nend function circle_area\n\nend module m\n```\n```#include <iostream>\nusing namespace std;\n\nextern \"C\" {\n\n// Prototype for the Fortran Function\nfloat Circle_Area( float );\n\n}\n\nint main()\n{\n\nfloat r = 2.0;\n\ncout << \"Area of circle with radius of 2.0 = \" << Circle_Area(r) << endl;\n\nreturn 0;\n\n}\n```\n\nHighlighted\nBeginner\n33 Views\n\nSteve Lionel (Intel) wrote:\n\nAdd \"DECORATE,\" before ALIAS. You told Fortran that the external name was just \"Circle_Area\" but C, on 32-bit WIndows, adds a leading underscore (as does Fortran for normal use.) DECORATE tells Fortran to add the appropriate platform-specific name decoration.\n\nYou don't put dll.lib in \"Additional library directories\". Instead you put the directory where dll.lib is put. Note also that you will need to copy the DLL itself to the executable directory.\n\n\"You don't put dll.lib in \"Additional library directories\".\", I don't understand! Don't I add Dll1.lib directory in \"Additional library directories\"?\n\nDoes \"copy the DLL\" means copy the Dll1.dll only?\n\nHighlighted\nEmployee\n33 Views\n\nYou put the directory containing dll1.lib there, not \"dll1.lib\".\n\nYes, copy the .dll only.\n\nHighlighted\nBeginner\n33 Views\n\nI am so happy that my program works now! But I have a big Fortran function called in C++.\n\nSo I need to adjust Fortran function simply.\n\nAnd is there any document or what I can read about using C++ call Fortran dll ?\n\n```#include \"stdafx.h\"\n#include <iostream>\nusing namespace std;\nextern \"C\" {float CIRCLE_AREA(float *a ); }\n\nint _tmain(int argc, _TCHAR* argv[])\n{\n\nfloat r = 2.;\ncout << \"Area of circle with radius of 2.0 = \" << CIRCLE_AREA(&r) << endl;\nsystem(\"pause\");\nreturn 0;\n\n}\n```\n\nHighlighted\nBlack Belt\n33 Views\n\nThere is a full chapter, \"Mixed Language Programming\", in the Intel Fortran Compiler Reference.\n\nYou may not find much about calling Fortran directly in C++. The Fortran standard specifies interoperability with C, not with C++. However, the extern \"C\" {...} construct of C++ takes care of this.\n\n[Corrected by adding quotes, as stated in #8]\n\nHighlighted\nEmployee\n33 Views\n\nThat's:\n\n`extern \"C\"`\n\nYou need the quotes.\n\nHighlighted\nBeginner\n33 Views\n\nOkay! I see. Thanks!"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.69783163,"math_prob":0.5364762,"size":5540,"snap":"2020-45-2020-50","text_gpt3_token_len":1436,"char_repetition_ratio":0.11867774,"word_repetition_ratio":0.4790487,"special_character_ratio":0.27599278,"punctuation_ratio":0.19487649,"nsfw_num_words":2,"has_unicode_error":false,"math_prob_llama3":0.95642316,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-12-03T23:24:51Z\",\"WARC-Record-ID\":\"<urn:uuid:999a4f88-b405-4be4-970e-cc2c1ffa8b9d>\",\"Content-Length\":\"326622\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e670ac87-7963-4a5c-92f9-30dc387d7902>\",\"WARC-Concurrent-To\":\"<urn:uuid:2a4f0277-8b67-4ead-8219-e3d3ca525d7e>\",\"WARC-IP-Address\":\"13.32.207.11\",\"WARC-Target-URI\":\"https://community.intel.com/t5/Intel-Fortran-Compiler/Use-Fortran-dll-in-VC/td-p/1093497\",\"WARC-Payload-Digest\":\"sha1:K6MBOEXQGNZLYX4S7GHCBNUMYJ62PVJJ\",\"WARC-Block-Digest\":\"sha1:3A7HAGHTHSRGZSRWFXPUSEM55YX2LBHW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141732835.81_warc_CC-MAIN-20201203220448-20201204010448-00492.warc.gz\"}"} |
https://ejpam.com/index.php/ejpam/article/view/6 | [
"# The analytic hierarchy and analytic network measurement processes: Applications to decisions under Risk\n\n## Authors\n\n• Thomas L. Saaty\n\n## Keywords:\n\ndecision, intangibles, judgments, pairwise comparisons, priorities, synthesis\n\n## Abstract\n\nMathematics applications largely depend on scientific practice. In science measurement depends on the use of scales, most frequently ratio scales. A ratio scale there is applied to measure various physical attributes and assumes a zero and an arbitrary unit used uniformly throughout an application. Different ratio scales are combined by means of formulas. The formulas apply within structures involving variables and their relations under natural law. The meaning and use of the outcome is then interpreted according to the judgment of an expert as to how well it meets understanding and experience or satisfies laws of nature that are always there. Science derives results objectively, but interprets their significance is subjectively. In decision making, there are no set laws to characterize structures in which relations are predetermined for every decision. Understanding is needed to structure a problem and then also to use judgments to represent importance and preference quantitatively so that a best outcome can be derived by combining and trading off different factors or attributes. From numerical representations of judgments, priority scales are derived and synthesized according to given rules of composition. In decision making the priority scales can only be derived objectively after subjective judgments are made. The process is the opposite of what we do in science. This paper summarizes a mathematical theory of measurement in decision making and applies it to real-life examples of complex decisions."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8985278,"math_prob":0.8285067,"size":2275,"snap":"2022-40-2023-06","text_gpt3_token_len":430,"char_repetition_ratio":0.09951563,"word_repetition_ratio":0.06962025,"special_character_ratio":0.18373626,"punctuation_ratio":0.125,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9646995,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-04T21:05:00Z\",\"WARC-Record-ID\":\"<urn:uuid:5789fce8-bbec-4fa4-8096-06dcfd7a2196>\",\"Content-Length\":\"33197\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9b564177-2ad6-4266-86ba-19b6a0c13cb9>\",\"WARC-Concurrent-To\":\"<urn:uuid:c92842eb-ceb6-4049-b7ba-9a34a2ba3fb5>\",\"WARC-IP-Address\":\"167.86.118.250\",\"WARC-Target-URI\":\"https://ejpam.com/index.php/ejpam/article/view/6\",\"WARC-Payload-Digest\":\"sha1:4AC2OWFEKC6FLM3RXYXXC224RWRFIYQY\",\"WARC-Block-Digest\":\"sha1:TYI5767TIIEMLB43SX37FOY4TI73IY2C\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337524.47_warc_CC-MAIN-20221004184523-20221004214523-00133.warc.gz\"}"} |
https://www.geeksforgeeks.org/area-of-a-regular-pentagram/ | [
"# Area of a Regular Pentagram\n\nGiven a Pentagram and it’s inner side length(d). The task is find out area of Pentagram. The Pentagram is a five-pointed star that is formed by drawing a continuous line in five straight segments.",
null,
"Examples:\n\nInput: d = 5\nOutput: Area = 139.187\nArea of regular pentagram = 139.187\nInput: d = 7\nOutput: Area = 272.807\n\n## Recommended: Please try your approach on {IDE} first, before moving on to the solution.\n\nIdea is to use Golden Ratio between a/b, b/c, and c/d which equals approximately 1.618\nInner side length d is given so\nc = 1.618 * d\nb = 1.618 * c\na = 1.618 * b\n\nAB, BC and CD are equals(both side of regular pentagram)\nSo AB = BC = CD = c and BD is given by d.\n\nArea of pentgram = Area of Pentagon BDFHJ + 5 * (Area of triangle BCD)\nArea of Pentagon BDFHJ = (d^2 * 5)/ (4* tan 36)\nArea of triangle BCD = [s(s-d)(s-c)(s-c)]^(1/2) {Heron’s Formula}\nwhere\ns = (d + c + c)/2\n\nBelow is the implementation of the above approach:\n\n## C++\n\n `// C++ implementation of the approach ` `#include ` `#define PI 3.14159 ` `using` `namespace` `std; ` ` ` `// Function to return the area of triangle BCD ` `double` `areaOfTriangle(``float` `d) ` `{ ` ` ``// Using Golden ratio ` ` ``float` `c = 1.618 * d; ` ` ``float` `s = (d + c + c) / 2; ` ` ` ` ``// Calculate area of triangle BCD ` ` ``double` `area = ``sqrt``(s * (s - c) * ` ` ``(s - c) * (s - d)); ` ` ` ` ``// Return area of all 5 trianlge are same ` ` ``return` `5 * area; ` `} ` ` ` `// Function to return the area of regular pentagon ` `double` `areaOfRegPentagon(``float` `d) ` `{ ` ` ``// Calculate the area of regular ` ` ``// pentagon using above formula ` ` ``double` `cal = 4 * ``tan``(PI / 5); ` ` ``double` `area = (5 * d * d) / cal; ` ` ` ` ``// Return area of regular pentagon ` ` ``return` `area; ` `} ` ` ` `// Function to return the area of pentagram ` `double` `areaOfPentagram(``float` `d) ` `{ ` ` ``// Area of a pentagram is equal to the ` ` ``// area of regular pentagon and five times ` ` ``// the area of Triangle ` ` ``return` `areaOfRegPentagon(d) + ` ` ``areaOfTriangle(d); ` `} ` ` ` `// Driver code ` `int` `main() ` `{ ` ` ``float` `d = 5; ` ` ``cout << areaOfPentagram(d) << endl; ` ` ` ` ``return` `0; ` `} `\n\n## Java\n\n `// Java implemenation of above approach ` `public` `class` `GFG ` `{ ` ` ` ` ``static` `double` `PI = ``3.14159``; ` ` ` ` ``// Function to return the area of triangle BCD ` ` ``static` `double` `areaOfTriangle(``float` `d) ` ` ``{ ` ` ``// Using Golden ratio ` ` ``float` `c = (``float``) (``1.618` `* d); ` ` ``float` `s = (d + c + c) / ``2``; ` ` ` ` ``// Calculate area of triangle BCD ` ` ``double` `area = Math.sqrt(s * (s - c) ` ` ``* (s - c) * (s - d)); ` ` ` ` ``// Return area of all 5 trianlge are same ` ` ``return` `5` `* area; ` ` ``} ` ` ` ` ``// Function to return the area of regular pentagon ` ` ``static` `double` `areaOfRegPentagon(``float` `d) ` ` ``{ ` ` ``// Calculate the area of regular ` ` ``// pentagon using above formula ` ` ``double` `cal = ``4` `* Math.tan(PI / ``5``); ` ` ``double` `area = (``5` `* d * d) / cal; ` ` ` ` ``// Return area of regular pentagon ` ` ``return` `area; ` ` ``} ` ` ` ` ``// Function to return the area of pentagram ` ` ``static` `double` `areaOfPentagram(``float` `d) ` ` ``{ ` ` ``// Area of a pentagram is equal to the ` ` ``// area of regular pentagon and five times ` ` ``// the area of Triangle ` ` ``return` `areaOfRegPentagon(d) ` ` ``+ areaOfTriangle(d); ` ` ``} ` ` ` ` ``// Driver code ` ` ``public` `static` `void` `main(String[] args) ` ` ``{ ` ` ``float` `d = ``5``; ` ` ``System.out.println(areaOfPentagram(d)); ` ` ``} ` `} ` ` ` `// This code has been contributed by 29AjayKumar `\n\n## Python\n\n `# Python3 implementation of the 
approach ` ` ` `import` `math ` ` ` `PI ``=` `3.14159` ` ` `# Function to return the area of triangle BCD ` `def` `areaOfTriangle(d) : ` ` ` ` ``# Using Golden ratio ` ` ``c ``=` `1.618` `*` `d ` ` ``s ``=` `(d ``+` `c ``+` `c) ``/` `2` ` ` ` ``# Calculate area of triangle BCD ` ` ``area ``=` `math.sqrt(s ``*` `(s ``-` `c) ``*` ` ``(s ``-` `c) ``*` `(s ``-` `d)) ` ` ` ` ``# Return area of all 5 triangles are the same ` ` ``return` `5` `*` `area ` ` ` ` ` `# Function to return the area of regular pentagon ` `def` `areaOfRegPentagon(d) : ` ` ` ` ``global` `PI ` ` ``# Calculate the area of regular ` ` ``# pentagon using above formula ` ` ``cal ``=` `4` `*` `math.tan(PI ``/` `5``) ` ` ``area ``=` `(``5` `*` `d ``*` `d) ``/` `cal ` ` ` ` ``# Return area of regular pentagon ` ` ``return` `area ` ` ` ` ` `# Function to return the area of pentagram ` `def` `areaOfPentagram(d) : ` ` ` ` ``# Area of a pentagram is equal to the ` ` ``# area of regular pentagon and five times ` ` ``# the area of Triangle ` ` ``return` `areaOfRegPentagon(d) ``+` `areaOfTriangle(d) ` ` ` ` ` `# Driver code ` ` ` `d ``=` `5` `print``(areaOfPentagram(d)) ` ` ` ` ` `# This code is contributed by ihritik `\n\n## C#\n\n `// C# implementation of the above approach ` `using` `System; ` ` ` `class` `GFG ` `{ ` ` ` ` ``static` `double` `PI = 3.14159; ` ` ` ` ``// Function to return the area of triangle BCD ` ` ``static` `double` `areaOfTriangle(``float` `d) ` ` ``{ ` ` ``// Using Golden ratio ` ` ``float` `c = (``float``) (1.618 * d); ` ` ``float` `s = (d + c + c) / 2; ` ` ` ` ``// Calculate area of triangle BCD ` ` ``double` `area = Math.Sqrt(s * (s - c) ` ` ``* (s - c) * (s - d)); ` ` ` ` ``// Return area of all 5 trianlge are same ` ` ``return` `5 * area; ` ` ``} ` ` ` ` ``// Function to return the area of regular pentagon ` ` ``static` `double` `areaOfRegPentagon(``float` `d) ` ` ``{ ` ` ``// Calculate the area of regular ` ` ``// pentagon using above formula ` ` ``double` `cal = 4 * Math.Tan(PI / 5); ` ` ``double` `area = (5 * d * d) / cal; ` ` ` ` ``// Return area of regular pentagon ` ` ``return` `area; ` ` ``} ` ` ` ` ``// Function to return the area of pentagram ` ` ``static` `double` `areaOfPentagram(``float` `d) ` ` ``{ ` ` ``// Area of a pentagram is equal to the ` ` ``// area of regular pentagon and five times ` ` ``// the area of Triangle ` ` ``return` `areaOfRegPentagon(d) ` ` ``+ areaOfTriangle(d); ` ` ``} ` ` ` ` ``// Driver code ` ` ``public` `static` `void` `Main() ` ` ``{ ` ` ``float` `d = 5; ` ` ``Console.WriteLine(areaOfPentagram(d)); ` ` ``} ` `} ` ` ` `// This code has been contributed by ihritik `\n\nOutput:\n\n```139.187\n```\n\nTime Complexity : O(1)\n\nMy Personal Notes arrow_drop_up",
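The Python listing above is hard to read because of formatting artefacts from the page extraction; a cleaned-up, runnable equivalent of the same approach is given below (illustrative rewrite, same formulas):

```python
# Pentagram area = area of the inner regular pentagon + 5 point triangles (golden ratio ~1.618).
import math

def area_of_triangles(d):
    c = 1.618 * d                                        # equal long sides of each point triangle
    s = (d + c + c) / 2                                  # semi-perimeter
    area = math.sqrt(s * (s - c) * (s - c) * (s - d))    # Heron's formula
    return 5 * area

def area_of_regular_pentagon(d):
    return (5 * d * d) / (4 * math.tan(math.pi / 5))

def area_of_pentagram(d):
    return area_of_regular_pentagon(d) + area_of_triangles(d)

print(round(area_of_pentagram(5), 3))   # ~139.19
print(round(area_of_pentagram(7), 3))   # ~272.81
```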
null,
"Check out this Author's contributed articles.\n\nIf you like GeeksforGeeks and would like to contribute, you can also write an article using contribute.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks.\n\nPlease Improve this article if you find anything incorrect by clicking on the \"Improve Article\" button below."
] | [
null,
"https://media.geeksforgeeks.org/wp-content/uploads/20190607094038/Program-to-calculate-area-of-regular-Pentagram-when-inner-side-length-is-given2-300x196.png",
null,
"https://media.geeksforgeeks.org/auth/profile/fp4yi0vhq9lf5q37q1pt",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6659008,"math_prob":0.97065395,"size":6234,"snap":"2020-10-2020-16","text_gpt3_token_len":1893,"char_repetition_ratio":0.20722312,"word_repetition_ratio":0.443038,"special_character_ratio":0.2994867,"punctuation_ratio":0.08064516,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998536,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,5,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-04-06T02:58:43Z\",\"WARC-Record-ID\":\"<urn:uuid:d721c8bb-eca9-4f20-bbc6-38fc6fde23da>\",\"Content-Length\":\"162377\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9d0e838e-cd12-41d3-b26d-e436c0543a19>\",\"WARC-Concurrent-To\":\"<urn:uuid:de4f56e2-7ebe-40f6-abc0-050e37ba29a8>\",\"WARC-IP-Address\":\"23.221.227.168\",\"WARC-Target-URI\":\"https://www.geeksforgeeks.org/area-of-a-regular-pentagram/\",\"WARC-Payload-Digest\":\"sha1:GKBEBUL5T2QWD6D7CGG7CAHVUWTZI6R4\",\"WARC-Block-Digest\":\"sha1:34TPICPFULIWHREORBAVTGJIX6BVHXAA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585371612531.68_warc_CC-MAIN-20200406004220-20200406034720-00288.warc.gz\"}"} |
https://www.cuesta-art.be/19/08/14/42128/mining_circulating_load_calculations.html | [
" mining circulating load calculations\n\n### mining circulating load calculations\n\n#### Recirculation Load I Formula In Crushing\n\nNo. 1 Crushing Plant ... vertical mill formula of power ... circulating load of dynamic ... circulating load calculation.ball mill recirculating load ... Read more recirculating load of a 2 stage crushing circuit - BINQ Mining\n\n#### circulating load calculation screen lkp - esic2017.eu\n\ncirculating load calculation screen screens the material passing from thethe circulation load is monitored by weighing the . Max. power is calculated at Ch= ~% of . circulating load pdf – Grinding Mill China. ... Circulating Load Formula Crusher - Mining Machinery.\n\n#### Lculating Circulating Load Ball Mill Mining - rcci.be\n\nMining How To Calculate Circulating Load Mining Machinery. Mining Circulating Load - ficci-petrotechretail. VRM and ball mill circulating load Mining Assume, for example, that we are to crush a material, . mining calculate ball mill circulating load pdf--Xinhai Mining .\n\n#### ball mill recirculating load calculation pdf – Grinding ...\n\nball mill calculation pdf - SBM mining equipments applied . Home > SBM designed>ball mill recirculating load calculation pdf. All data required by the calculation routine must be defined in each corresponding Chat online. » Learn More. Circulating Load In A Ball Mill. ball mill recirculating load calculation pdf. THE SIZING AND SELECTION OF ...\n\n#### mineral processing circulating load calculations\n\nmineral processing circulating load calculations; mineral processing circulating load calculations. ... Circulating Overshot,Releasing Overshot,Circulating Overshot from Mining Machinery Parts . Shipping: Less than Container Load (LCL) Service to US . casing and the other fishes from the outside during the drilling and workover process.\n\n#### circulating load formula in ball mill - agemo.be\n\nCirculating Load Calculation Formula. Here is a formula that allows you to calculate the circulating load ratio around a ball mill and hydrocylone as part of a ...how to calculate ball mill circulating load for iron ore. how to calculate ball mill circulating load for iron ore processing pdf. ... BINQ Mining · formula to calculate ball mill ...\n\n#### recirculating load in ball milling formula - BINQ Mining\n\nNov 23, 2012· circulating load calculation – OneMine Mining and Minerals Library … circulating load calculation … Classification effects in wet ball milling circuits – by R. E. McIvor Technical Papers, MINING ENGINEERING, Vol. 40, … »More detailed\n\n#### (PDF) Circulating load calculation in grinding circuits\n\nA problem for solving mass balances in mineral processing plants is the calculation of circulating load in closed circuits. A family of possible methods for the resolution of these calculations is ...\n\n#### circulating load definition of ore mill - dsignhaus.co.za\n\nPDF: Effect of circulating load and classification efficiency on HPGR ... PDF The ball mill is the most common ore grinding technology today, and probably more than 50% of the total world energy consumption for ore grinding is...\n\n#### The Influence Of The Circulating Load On Crushing Efficiency\n\nBall Mill Circulating Load Calculation - YouTube. Jan 5, 2014 ... TITLE: Influence Of ...ball mill circulating load calculation - crusher export ... zenith stone crusher machine,or crusher machine,or crushing machine,includes jaw crusher ... 
ball mill efficiency calculations - OneMine Mining and …\n\n#### Lecture 11: Material balance in mineral processing\n\nRe circulating load ratio L ; 4 4 7 4 4. 2.33. If the feed stream slurry contains 35% solids by volume and 40% of the water is recycled, calculate lids in hydro cyclon roducts. Density of solid L3.215 tons/m 7. an low+ volume of solid in overflow . concentration of so e p\n\n#### calculating load on a ball mill - hutaib.in\n\nCirculating Load Calculation Formula. 2018-7-27 Here is a formula that allows you to calculate the circulating load ratio around a ball mill and hydrocylone as part of a grinding circuit.\n\n#### calculating circulating load crushing circuits - parsana.in\n\ncalculating circulating load crushing circuits. of methods which can be used to solve the circulating load is the iterative ... algorithm to calculate the circulation load in closed circuits which allows the ..... 6 Tsakalakis K. Use of a simplified method to calculate closed crushing circuits.\n\n#### Circulating Load | Mill (Grinding) | Mathematical Optimization\n\nThe circulating load calculation in this case will take into account the hydrocyclone partition.74% of the hydrocyclone feed returns to the mill as circulating load. mar. being neces- REM: R. Results and discussion Table 1 summarizes the results of the proposed iterative method applied to …\n\n#### Circulating Load Calculation in Mineral Processing Closed ...\n\nA problem for solving mass balances in mineral processing plants is the calculation of circulating load in closed circuits. A family of possible methods to the resolution of this calculation is ...\n\n#### mine screen circulating load calculation - rcpl.org.in\n\nmining circulating load calculations Ball mill circulating load calculation pakistan crusher,sthis page is provide professional mining circulating load ca. Get more circulating load formula in. Theory calculation and testing of air injection parameters in.\n\n#### how to calculate circulating load - BINQ Mining\n\nCirculating Load Ratio – Metcom Technologies. Circulating Load Ratio. (after Davis). What is the circulating load ratio in your ball milling circuit? There is a rapid and easy way to calculate it from any set of … »More detailed\n\n#### Calculation Of Circulating Load On Quarry - wildcer.org.in\n\nCirculating Load Formula Crusher - Mining Machinery. Load formula crusher circulating load in crusher Circulating Load Calculation Screen Crusher USA About circulating load calculation screen related How To Find Ball Mill Circulating Load calculate ball mill circulating load pdf Quarry Crusher the formula for calculating circulating loads in the formula.\n\n#### calculating circulating load crusher - celebrationcakes.in\n\nhow to calculate crusher circulating load. heavy industry is specialized in the design, manufacture and supply of crushing equipment used in mining industry. Crusher Circulating Load Calculation - … the circulating load about the crusher was enhanced by an uncrushed. 10mm to 15mm size ... gravel By calculating the circulating load of the ...\n\n#### calculating circulating load crusher – Grinding Mill China\n\nhow to calculate sag mill ball charge – Gulin Mining. how to calculate sag mill ball charge »More detailed. ball mill circulating load calculation. » Learn More. how to calculate circulating load in grinding mill. how to calculate circulating load in grinding mill. ... Crusher Circulating Load Calculations,Stone Crushers,Grinding . 
» Learn ...\n\n#### load calculation forball mill circulating - Mineral ...\n\nFeb 18, 2018· load calculation forball mill circulating Natural Linseed Oil Beneficial for dust allergies, sweet itch, respiratory conditions & circulation . 13:09 Allback Linseed Oil Paint Solvent Free Oil Paint) Review · Nippon Flour Mills linseed oil Golden Flaxseed <186g> 3 ..\n\n#### Circulating Load Calculation Screen - mayukhportfolio.co.in\n\nCirculating Load Calculation Screen-India Crusher&Mill. circulating load calculation screen; sag mill load calculation - beltconveyers.net. Gulin Services, LLC. circulating load being forecast. ... This page is provide professional mining circulating load calculations information for you, we have livechat to answer you mining circulating load ...\n\n#### Circulating Load Calculation Formula\n\nCirculating Load Calculation Formula View Larger Image Here is a formula that allows you to calculate the circulating load ratio around a ball mill and hydrocylone as part of a grinding circuit.\n\n#### Hammermill Load Calculations- Mining Machinery\n\nRpm Calculations Jaw Crusher Dnapestcontrol InCapacity calculations Capacity D x RPM x L x S x Bulk putaran rpm hammermill pemecah mining circulating load calculations ; jaw crushers rpm ; espresso.Hammermill machinery in namibia hammer mill machine in namibia On hammer mill machine in namibia, there are a lot of calculations and Free Down Load For Stone.\n\n#### the formula for calculating circulating loads in mineral ...\n\nmining circulating load calculations - cesed.eu. Get Price Circulating Load Calculation Formula - Mineral. ... will end up sometimes with either re-circulating loads ... calculation in mineral processing closed ... Get More Info.\n\n#### crusher circulating load calculation - exactpoint.in\n\nmining circulating load calculations Pogo - InfoMine - Mining Intelligence and Technology . Mining information for the Pogo mine in the US presented by MineSite ... Know More. circulating load calculation India - XSM. circulating load calculation India from XSM. Shanghai XSM (circulating load calculation India) is professional manufacturer, the ...\n\n#### mining circulating load - pizzamanteca.com\n\nmining circulating load Nuclear Fuel Cycle Overview World … 2018-7-11 · The nuclear fuel cycle: industrial processes which involve the production of electricity from uranium in nuclear power reactors.\n\n#### circulating load calculation in ball mill\n\ncirculating load calculation in ball mill Alwayse Area Ball Transfer Unit Series Uk Products Spherical Roller . Alwayse Area Ball Transfer Unit Series Uk Products Spherical Roller Bearing When calculating loads, consider the possiblity of impact caused by incorrect levels. incorporating re-circulating ball principles, however the inverted position is still the machinery such as shearing ...\n\n#### Circulating load calculation in grinding circuits - SciELO\n\nABSTRACT. A problem for solving mass balances in mineral processing plants is the calculation of circulating load in closed circuits. 
A family of possible methods for the resolution of these calculations is the iterative method, consisting of a finite loop where in each iteration the initial solution is refined in order to approach the exact solution.\n\n#### mining circulating load calculations - fit45.eu\n\nmining circulating load calculations - mining circulating load calculations Circulating Load Calculation Formula Here is a formula that allows you to calculate the circulating load ratio around a …\n\n#### mineral processing ball mill circulating load calculation\n\nDec 12, 2015· mineral processing ball mill circulating load calculation; Act Model Premium & Lift Hm Bridge Plug - Tubing Set - Buy Frac . ... Professional Gold Mining Machine Elution And Electrowinning . Shipping: Less than Container Load (LCL) Service to US . 11 Fuzhou Road, Yantai China, which is a professionalEPC contractor for mineral ..."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.78493404,"math_prob":0.9857801,"size":10475,"snap":"2019-35-2019-39","text_gpt3_token_len":2085,"char_repetition_ratio":0.31716168,"word_repetition_ratio":0.15159236,"special_character_ratio":0.19446301,"punctuation_ratio":0.15332581,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9804495,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-21T16:33:25Z\",\"WARC-Record-ID\":\"<urn:uuid:920f6d3e-3338-4167-b131-09578ecaaa51>\",\"Content-Length\":\"27603\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:86d4e2e0-8818-49af-8880-91606334426b>\",\"WARC-Concurrent-To\":\"<urn:uuid:4f2ec1cc-3cd9-4beb-8960-0e0aed58b5ca>\",\"WARC-IP-Address\":\"104.18.56.141\",\"WARC-Target-URI\":\"https://www.cuesta-art.be/19/08/14/42128/mining_circulating_load_calculations.html\",\"WARC-Payload-Digest\":\"sha1:5RLXH46VUMFJ5LOXGCXDNWUFZJEQDHAA\",\"WARC-Block-Digest\":\"sha1:KM4OZFWPXTQOKA37XV5IVNQMIGDTCOFK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027316075.15_warc_CC-MAIN-20190821152344-20190821174344-00217.warc.gz\"}"} |
https://crypto.stackexchange.com/questions/tagged/xor | [
"# Questions tagged [xor]\n\nXOR, often written ⊕, is one of the basic operations on bits and bit-sequences. It is a building block of many cryptographic primitives (and some higher-level algorithms, like modes of operations).\n\n246 questions\nFilter by\nSorted by\nTagged with\n14 views\n\n### xor DECODING without a key [closed]\n\nHow do I find the binary key that decodes FA41086EF9153F into a readable word in English. So Far, I have tried several keys of different lengths and nothing makes sense .. any one has any hints?\n39 views\n\n### Need help finding keysize for Vigenere cipher\n\nI am doing a crypto challenge for breaking a Vigenere style repeating XOR encryption (https://cryptopals.com/sets/1/challenges/6). I have looked at similar questions asked here, mainly this one: ...\n167 views\n\n### Formula for bits of entropy per bit, when combining bits with XOR\n\nAssume that bits $A$ and $B$ each have .5 bits of entropy per bit. The two-bit result of the concatenation $A‖B$ has 1 bit of entropy total, and it retains the entropy density of .5 bits of entropy ...\n100 views\n\n### Using sha3 as a deterministic source of psuedo-random bytes for xor encryption\n\nThe following code generates a deterministic stream of pseudo-random bytes, using SHA3. ...\n97 views\n\n### Is it safe to XOR-combine hashes by rotating them first?\n\nI have a small number of hashes. I would like to combine them into a single hash. XORing the hashes ignores their order, which is important. Also, it could lead to a result of zero if there were an ...\n63 views\n\n### Using xor encryption in the following use case\n\nI use an encryption scheme based on a symmetric cipher, with the corresponding symmetric key encrypted with RSA/OAEP using the public RSA key of the recipient. I now want to use ECC crypto in ...\n109 views\n\n### Generating a XOR Hash function for a given group of numbers to maximise collisions\n\nGiven $x$, a vector of $m$ distinct bit vectors of bit length $N$, I want to use an XOR-hash $h$ such that all $x_i$ collide to the same value. I define an XOR hash $h$ which takes bit vector $q$ as ...\n56 views\n\n### Can i XOR an array of bytes with an equal length key securely\n\nIf i have say 64 bytes, can i XOR those bytes with another 64 random bytes, and have two new arrays, where both are needed to reconstruct the first? Intuetively i need some symmetric encryption like ...\n81 views\n\n### Assignment: XOR-cipher [duplicate]\n\nI am currently attending a IT-Security course at the Tel Aviv University and I stuck with the an assignment task. Please DO NOT post the solution if you know it. But I would be very grateful for any ...\n6k views\n\n### Can I encrypt a message by swapping bits in the text?\n\nI have tried out an encryption method, in which I swap bits in the text. The text length is N bit, then I generate several random number pairs in the range 0..N-1, as [n,k] pairs. After that I swap ...\n35 views\n\n### Approach used with XOR cipher in black box scenery\n\nWhat's the approach used in cryptanalysis to make an decrypter for a simple xor stream cipher in a Black Box scenery with only the output?\n211 views\n\n### XOR cipher with fixed key and known relation among plaintexts\n\nI have three messages, each known to be XOR-encoded, with the same key used for each message of this XOR cipher. 
Encoded message 1: $e_1\\,=\\,00100111010$ Encoded message 2: $e_2\\,=\\,01001110110$ ...\n130 views\n\n### Secret sharing scheme WITHOUT Shamir Secrete Sharing\n\nI am planning to use the following XOR scheme to divide a secret into only 2 shares (I do not want to use Shamir Secret Sharing for different reasons that are beyond the scope of this post). Here's ...\n78 views\n\n### Potential weaknesses when combining hash functions?\n\nFor consistency's sake, M is the message, H1 and H2 are separate hash functions. I've heard that concatenation or XORing hash outputs together do not provide improved security against preimage and ...\n97 views\n\n### Weakness in a CBC-like XOR cipher\n\nA simple symmetric encryption algorithm can be written as follows: Input message M and 64 bit key $K$ Divide M into 64 bit size blocks $B_1...B_n$ Get first block $B_1$ and perform bit-wise $\\oplus$ ...\n70 views\n\n### Boolean XOR function a valid way to verify the integrity of a message\n\nDoes the Boolean XOR function represent a valid way to verify the integrity of a message ?\n69 views\n\n### GMW Multiplication AND for 2 parties\n\nI am looking into the GMW protocol's evaluation for multiplication in 2 parties. I have referred to different materials on it but I didn't exactly understand how $a_i b_j + a_j b_i$ is calculated in a ..."
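The XOR secret-splitting idea asked about above (two equal-length shares, both required to reconstruct the secret) can be sketched in a few lines of Python; this is an illustration, not an answer posted on the site:

```python
# Split a secret into two shares: a uniformly random mask and the XOR of secret and mask.
import secrets

secret = b"attack at dawn" + bytes(50)      # padded to 64 bytes for the example
share1 = secrets.token_bytes(len(secret))   # random mask, kept by party 1
share2 = bytes(a ^ b for a, b in zip(secret, share1))   # kept by party 2

recovered = bytes(a ^ b for a, b in zip(share1, share2))
assert recovered == secret                  # XOR of both shares restores the secret
# Each share alone is uniformly distributed, so neither reveals anything about the secret.
```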
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.89233655,"math_prob":0.79671603,"size":13518,"snap":"2020-45-2020-50","text_gpt3_token_len":3328,"char_repetition_ratio":0.1358591,"word_repetition_ratio":0.007413509,"special_character_ratio":0.2506288,"punctuation_ratio":0.11938152,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.977737,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-22T01:29:32Z\",\"WARC-Record-ID\":\"<urn:uuid:66dbe0bd-8863-4ed8-b2a5-c70d58702b38>\",\"Content-Length\":\"247937\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:660f7358-26d7-4b47-9f6e-1fcd5af697c1>\",\"WARC-Concurrent-To\":\"<urn:uuid:76ce28d1-d39e-4a15-8cff-0169c7cadd70>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://crypto.stackexchange.com/questions/tagged/xor\",\"WARC-Payload-Digest\":\"sha1:QH6AZ4QBZXUSFPG4XPVWBI32NMBC3Y5D\",\"WARC-Block-Digest\":\"sha1:EWMPZZVHMIJJDHNCAQNP4VYTWGN47QCS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107878662.15_warc_CC-MAIN-20201021235030-20201022025030-00696.warc.gz\"}"} |
https://www.quantumcalculus.org/level-surfaces-and-lagrange/ | [
"# Level surfaces and Lagrange\n\nIn this recent paper the topic of level surfaces in a d-graph is studied. This is a core calculus topic and belongs to quantum calculus, as space is a discrete space which has properties shared by Euclidean space like that unit spheres are graph theoretical spheres (removing one vertex produces a contractible graph and recursively, every unit sphere is a sphere). One application is the visualization of Chladni surfaces as seen in the following video:\n\nThe definition of a level surface of a function f on the vertex set of the graph G is the following: consider the set of all complete subgraphs of G and connect two of these “points” if one is contained in the other. This is a larger graph G’ called Barycentric refinement of G. Now look at the subgraph of G’ which has as vertices all complete subgraphs of G on which the function f changes sign. The magic is that if G is a d-graph, then this level surface { f=c } is a (d-1) graph\nas long as the value c is not taken by any vertex of G.\n\n[metaslider id=53]\n\nIf we are given k functions, we can start with the first function, look at its level surface f1=c1, then extend the other functions to this new graph, then look at the level surface f2=c2 etc. For almost all values c1,…,ck, this produces a d-k graph. This is a discrete Sard result. However, there is a catch: the order of the functions matters (no surprise in a quantum setting) but the regularity is very helpful.\n\nCloser to classical calculus is to look at k functions f1,…,fk and now look at all simplices in G’ on which all functions simultaneously change sign. Now the set of values c1,…,ck for which we get a (d-k) graph has positive measure in general. We are however closer to classical calculus or algebraic geometry.\n\nHere is the simplest case: take a 2-graph (the discrete analog of a two dimensional surface) and two functions f,g. A Lagrange problem is to find the extrema of f under the constraint g=c. One can now do this in the same way as in classical\nmultivariable calculus: define the discrete gradient of f on a triangle (xyz) rooted at a vertex x as the two vector df = . Lets look at the level curve g=c. It consists of all edges in G on which f changes sign. If c is a value not taken by g, then g=c is a union of circular graphs. The reason is that on each triangle, either none or two of the edges have the property that g changes sign on them. Since every edge has two triangles as neighbors, this implies that every edge has two neighbors in G’. Similar, any level curve of f is a union of closed circular graphs.\nNow, if the gradients df and dg are not parallel, then the simultaneous zero locus { f=c, g=d } is a discrete set. Only points for which the Lagrange equations df = L dg, g=c hold are candidates for extrema for f.\n\nMore generally, in higher dimensions, the discrete algebraic set {f1=c1,….,fk=ck} is a d-k graph, if at every point, the discrete gradients satisfy a maximal rank condition.\n\nThis topic of doing calculus without limits is not only a game. It can be used to investigate completely unexplored problems like classifying the nature of level surfaces of the ground state of the Laplacian in a graph. The ground state is the eigenvector to the smallest nonzero eigenvalue. It has one connected nodal surface (both in the graph (Fiedler) as well as in the Riemannian manifold case (Courant)) by the Courant-Fiedler theorem. What is the topology of this surface if G is a d-sphere? 
This question can be explored now in a combinatorial setting. If the answer would be that the nodal surface is a d-1 sphere, then this would be a strong indication that the same holds also in the continuum."
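A small computational sketch can make the construction concrete. The Python snippet below (an illustration, not code from the paper) takes the octahedron, a 2-graph, and lists the edges on which f − c changes sign; it then joins two such edges when they lie in a common triangle, a simplification of the Barycentric-refinement picture in which the sign-changing triangles are themselves vertices of the level curve. The result is a closed 4-cycle, i.e. a 1-sphere:

```python
# Level curve {f = c} on the octahedron: crossing edges joined through shared triangles.
from itertools import combinations

triangles = [(0, 1, 2), (0, 2, 3), (0, 3, 4), (0, 4, 1),
             (5, 1, 2), (5, 2, 3), (5, 3, 4), (5, 4, 1)]
f = {v: float(v) for v in range(6)}     # a function on the vertices
c = 0.5                                 # a value not taken by f

def crosses(e):                         # does f - c change sign on edge e?
    a, b = e
    return (f[a] - c) * (f[b] - c) < 0

def crossing(t):                        # sign-changing edges of a triangle
    return [tuple(sorted(e)) for e in combinations(t, 2) if crosses(e)]

level_vertices = sorted({e for t in triangles for e in crossing(t)})
level_edges = sorted({tuple(sorted(p)) for t in triangles
                      for p in combinations(crossing(t), 2)})

print(level_vertices)   # the 4 crossing edges around vertex 0
print(level_edges)      # they close up into a 4-cycle (a 1-sphere)
```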
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9280456,"math_prob":0.9933775,"size":3197,"snap":"2020-45-2020-50","text_gpt3_token_len":746,"char_repetition_ratio":0.12715314,"word_repetition_ratio":0.0034904014,"special_character_ratio":0.22051923,"punctuation_ratio":0.08868501,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9989601,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-12-04T20:44:04Z\",\"WARC-Record-ID\":\"<urn:uuid:793e014b-c5a9-4de0-9bf6-bccc87bab59d>\",\"Content-Length\":\"46727\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a89ac3dc-e43f-4962-a6e4-0a1aacf07e2c>\",\"WARC-Concurrent-To\":\"<urn:uuid:ba7918b2-9c88-405f-83f7-e7d96c59df10>\",\"WARC-IP-Address\":\"208.113.163.198\",\"WARC-Target-URI\":\"https://www.quantumcalculus.org/level-surfaces-and-lagrange/\",\"WARC-Payload-Digest\":\"sha1:DWCQNP66WRQ2KCA3KFMG2QQ2PQ432G2S\",\"WARC-Block-Digest\":\"sha1:TMJNXG2W4GT63N2DNTL2FK6XS5X6I7PJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141743438.76_warc_CC-MAIN-20201204193220-20201204223220-00699.warc.gz\"}"} |
https://sungilfa.co.kr/eng/bbs/content.php?co_id=shaft_main | [
"## ProductsConnecting Shaft(Line Shaft)\n\nline shaft, distance coupling, connection coupling, longtype coupling, hollow shaft, hollow shaft connection coupling, lined shaft, lengthy coupling, lengthy shaft coupling, ISO 9001, ISO 14001, Korean made components, IT components, mechanical components, motion components,motion accessories, motion control, motion control components, semi conductor components, display machine components, small sized machine components, small mechanical components, machine components, factory automation part\n\nConnecting shaft is a compact solution which enables to transmit\nmotion accurately, especially when there is a longer distance\nbetween shafts of an application.\n\n• Why Connecting Shaft?\n• How to determine the proper length(L)\n• How to calculate permissible parallel misalignment\n• How to calculate Torsional Stiffness\n\n### Product Overview\n\n#### Why Connecting Shaft?\n\nHow to transmit motion when there is a longer distance between the shafts?\n\nIt is recommended to use a connecting shaft rather than a coupling + ground shaft combination.\n\n##### Combined set of coupling + Ground shaft\n• 3 different parts (2 Couplings, 1 ground shaft) are separately needed.\n• Bigger laboring required\n• \u0007Hard to keep the appropriate straightness\nbetween couplings each end and\na ground shaft\n##### Connecting Shaft\n• 1 Whole piece structure\n• Simple installation\n• Easy and handy maintenance\n• The hollow shaft with high stiffness\n\n#### How to determine the proper length(L)\n\n##### General Side-clamp Type",
null,
"L(Total Length)= Ls(distance between shafts)+ 2L1\n\n##### Side-clamp Hub Split Type",
null,
"L(Total Length)= Ls(distance between shafts)+ 2L3\n\nSide-clamp Hub Split type is commonly used for connecting shaft in regards to an easier maintenance.\n\n#### How to calculate permissible parallel misalignment",
null,
"${P}_{m}=\\left(L-2\\left({L}_{1}+{L}_{2}\\right)\\right)×\\mathrm{tan}\\frac{{A}_{m}}{2}$\n• Pm = Permissible parallel misalignment\n• L = Total length\n• Am = Permissible angular misalignment of connecting shaft\n(= 2 x coupling’s value)\n• The value calculated by the above formula is maximum permissible parallel misalignment in the allowable range of motion transmission, which means sleeves of SJCL and plate spring of SHDL may still get worn down even within the range of permissible parallel misalignment.\n• The Pm value shrinks by ½ when there are both angular and parallel misalignment at the same time.\n• It is recommended to use at the ⅓ value of Pm for longer lifespan, as well as keep the shafts located in line as straight as possible.\n$T{S}_{L}=\\frac{1}{2×\\frac{1}{T{S}_{c}}+\\frac{{L}_{pipe}}{T{S}_{s}}}$ (N·m/rad)\n• Lpipe = Length of Pipe [ ${L}_{pipe}=\\frac{L-2{L}_{4}}{1000}$ (m)]"
] | [
null,
"https://sungilfa.co.kr/eng/page/images/shaft-main_img3.png",
null,
"https://sungilfa.co.kr/eng/page/images/shaft-main_img4.png",
null,
"https://sungilfa.co.kr/eng/page/images/shaft-main_img5.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8670482,"math_prob":0.97650576,"size":554,"snap":"2021-04-2021-17","text_gpt3_token_len":116,"char_repetition_ratio":0.14363636,"word_repetition_ratio":0.0,"special_character_ratio":0.20036101,"punctuation_ratio":0.056818184,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96161973,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-17T05:33:03Z\",\"WARC-Record-ID\":\"<urn:uuid:f5952d8d-fe27-4fd7-8338-8d8359b9d829>\",\"Content-Length\":\"20966\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9bc7d6ba-5f3e-487b-b7bd-9cacf043b1fa>\",\"WARC-Concurrent-To\":\"<urn:uuid:e643eacf-9dd4-46d6-b041-558f9b55dae4>\",\"WARC-IP-Address\":\"211.43.203.11\",\"WARC-Target-URI\":\"https://sungilfa.co.kr/eng/bbs/content.php?co_id=shaft_main\",\"WARC-Payload-Digest\":\"sha1:ZCXRAA76HI7XIHP2HB4KSAZR7LKSMNHU\",\"WARC-Block-Digest\":\"sha1:H34CO5H2GBIN6RZDU4D6V4KLUVNNO3O7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703509973.34_warc_CC-MAIN-20210117051021-20210117081021-00769.warc.gz\"}"} |
https://nhess.copernicus.org/articles/20/1267/2020/ | [
"Nat. Hazards Earth Syst. Sci., 20, 1267–1285, 2020\nhttps://doi.org/10.5194/nhess-20-1267-2020\nNat. Hazards Earth Syst. Sci., 20, 1267–1285, 2020\nhttps://doi.org/10.5194/nhess-20-1267-2020\n\nResearch article 13 May 2020\n\nResearch article | 13 May 2020",
null,
"# Non-stationary extreme value analysis applied to seismic fragility assessment for nuclear safety analysis\n\nNon-stationary extreme value analysis applied to seismic fragility assessment for nuclear safety analysis\nJeremy Rohmer1, Pierre Gehl1, Marine Marcilhac-Fradin2, Yves Guigueno2, Nadia Rahni2, and Julien Clément2 Jeremy Rohmer et al.\n• 1BRGM, 3 av. C. Guillemin, 45060 Orléans CEDEX 2, France\n• 2IRSN, Radioprotection and Nuclear Safety Institute, BP 17, 92262 Fontenay-aux-Roses CEDEX, France\n\nCorrespondence: Jeremy Rohmer ([email protected])\n\nAbstract\n\nFragility curves (FCs) are key tools for seismic probabilistic safety assessments that are performed at the level of the nuclear power plant (NPP). These statistical methods relate the probabilistic seismic hazard loading at the given site to the required performance of the NPP safety functions. In the present study, we investigate how the tools of non-stationary extreme value analysis can be used to model in a flexible manner the tail behaviour of the engineering demand parameter as a function of the considered intensity measure. We focus the analysis on the dynamic response of an anchored steam line and of a supporting structure under seismic solicitations. The failure criterion is linked to the exceedance of the maximum equivalent stress at a given location of the steam line. A series of three-component ground-motion records (∼300) were applied at the base of the model to perform non-linear time history analyses. The set of numerical results was then used to derive a FC, which relates the failure probability to the variation in peak ground acceleration (PGA). The probabilistic model of the FC is selected via information criteria completed by diagnostics on the residuals, which support the choice of the generalised extreme value (GEV) distribution (instead of the widely used log-normal model). The GEV distribution is here non-stationary, and the relationships of the GEV parameters (location, scale and shape) are established with respect to PGA using smooth non-linear models. The procedure is data-driven, which avoids the introduction of any a priori assumption on the shape or form of these relationships. To account for the uncertainties in the mechanical and geometrical parameters of the structures (elastic stiffness, damping, pipeline thicknesses, etc.), the FC is further constructed by integrating these uncertain parameters. A penalisation procedure is proposed to set to zero the variables of little influence in the smooth non-linear models. This enables us to outline which of these parametric uncertainties have negligible influence on the failure probability as well as the nature of the influence (linear, non-linear, decreasing, increasing, etc.) with respect to each of the GEV parameters.\n\nShare\n1 Introduction\n\nA crucial step of any seismic probability risk assessment (PRA) is the vulnerability analysis of structures, systems and components (SSCs) with respect to the external loading induced by earthquakes. To this end, fragility curves (FCs), which relate the probability of an SSC to exceed a predefined damage state as a function of an intensity measure (IM) representing the hazard loading, are common tools. Formally, FC expresses the conditional probability with respect to the IM value (denoted “im”) and to the EDP (engineering demand parameter) obtained from the structural analysis (force, displacement, drift ratio, etc.) 
as follows:\n\n$\\begin{array}{}\\text{(1)}& {P}_{\\mathrm{f}}\\left(\\mathrm{im}\\right)=P\\left(\\mathrm{EDP}\\ge \\mathrm{th}\\mathrm{|}\\mathrm{IM}=\\mathrm{im}\\right),\\end{array}$\n\nwhere “th” is an acceptable demand threshold.\n\nFCs are applied to a large variety of different structures, like residential buildings (e.g. Gehl et al., 2013), nuclear power plants (Zentner et al., 2017), wind turbines (Quilligan et al., 2012), underground structures (Argyroudis and Pitilakis, 2012), etc. Their probabilistic nature makes them well suited for PRA applications, at the interface between probabilistic hazard assessments and event tree analyses, in order to estimate the occurrence rate of undesirable top events.\n\nDifferent procedures exist to derive FCs (see e.g. an overview by Zentner et al., 2017). In the present study, we focus on the analytical approach, which aims at deriving a parametric cumulative distribution function (CDF) from data collected from numerical structural analyses. A common assumption in the literature is that the logarithm of “im” is normally distributed (e.g. Ellingwood, 2001) as follows:\n\n$\\begin{array}{}\\text{(2)}& {P}_{\\mathrm{f}}\\left(\\mathrm{im}\\right)=\\mathrm{\\Phi }\\left(\\frac{\\mathrm{log}\\left(\\mathrm{im}\\right)-\\mathrm{log}\\left(\\mathit{\\alpha }\\right)}{\\mathit{\\beta }}\\right),\\end{array}$\n\nwhere Φ is the standard normal cumulative distribution function, α is the median and β is lognormal standard deviation. The parameters of the normal distribution are commonly estimated either by maximum likelihood estimation (see e.g. Shinozuka et al., 2000) or by fitting a linear probabilistic seismic demand model on the log scale (e.g. Banerjee and Shinozuka, 2008).\n\nThis procedure faces, however, limits in practice.\n\n• Limit (1). The assumption of normality may not always be valid in all situations, as discussed by Mai et al. (2017) and Zentner et al. (2017). This widely used assumption is especially difficult to justify when the considered EDP corresponds to the maximum value of the variable of interest (for instance maximum transient stress value), i.e. when the FC serves to model extreme values.\n\n• Limit (2). A second commonly used assumption is the homoscedasticity of the underlying probabilistic model, i.e. the variance term β is generally assumed to be constant over the domain of the IM.\n\n• Limit (3). The assumption of linearity regarding the relation between the median and IM may not always hold valid, as shown for instance by Wang et al. (2018) using artificial neural networks.\n\n• Limit (4). A large number of factors may affect the estimate of Pf in addition to IM; for instance epistemic uncertainties due to the identification and characterisation of some mechanical (elastic stiffness, damping ratio, etc.) and geometrical parameters of the considered structure.\n\nThe current study aims at going a step forward in the development of seismic FCs by improving the procedure regarding the aforementioned limits. To deal with limit (1), we propose relying on the tools of extreme value statistics (Coles, 2001) and more specifically on the generalised extreme value (GEV) distribution, which can model different extremes' behaviour.\n\nNote that the focus is on the extremes related to EDP, not on the forcing, i.e. the analysis does not model the extremes of IM as is done for current practices of probabilistic seismic hazard analysis (see e.g. Dutfoy, 2019). 
This means that no preliminary screening is applied, which implies that the FC derivation is conducted by considering both large and intermediate earthquakes, i.e. IM values that are small–moderate to large.

The use of GEV is examined using criteria for model selection like the Akaike or Bayesian information criteria (Akaike, 1998; Schwarz, 1978). Limits (2) and (3) are addressed using tools for distributional regression (e.g. Koenker et al., 2013) within the general framework of the Generalized Additive Model for Location, Scale and Shape (GAMLSS; e.g. Rigby and Stasinopoulos, 2005). GAMLSS is very flexible in the sense that the mathematical relation of the median and variance in Eq. (2) can be learnt from the data via non-linear smooth functions. GAMLSS can be applied to any parametric probabilistic model and here to the GEV model as a particular case. This enables us to fit a non-stationary GEV model, i.e. a GEV model for which the parameters vary as a function of some covariates (here corresponding to IM and U). The use of data-driven non-linear smooth functions avoids introducing a priori a parametric model (linear or polynomial) as many authors do (see an example for sea level extremes by Wong, 2018, and for temperature by Cheng et al., 2014).

Finally, accounting for the epistemic uncertainties in the FC derivation (limit (4)) can be conducted in different manners. A first option can rely on the incremental dynamic analysis (IDA), where the uncertain mechanical and geometrical parameters result in uncertain capacities (i.e. related to the threshold "th" in Eq. 1). The FC is then derived through convolution with the probabilistic distribution of the demand parameter; see Vamvatsikos and Cornell (2002). Depending on the complexity of the system (here for NPP), the adaptation of IDA to non-linear dynamic structural numerical simulations can be tedious (this is further discussed in Sect. 3.1). In the present study, we preferably opt for a second approach by viewing Pf as conditional on the vector of uncertain mechanical and geometrical factors U (in addition to IM), namely

$P_{\mathrm{f}}(\mathrm{im}, \mathbf{u}) = P(\mathrm{EDP} \ge \mathrm{th} \mid \mathrm{IM} = \mathrm{im}, \mathbf{U} = \mathbf{u}). \qquad (3)$

Dealing with Eq. (3) then raises the question of integrating a potentially large number of variables, which might hamper the stability and quality of the procedure for FC construction. This is handled with a penalisation procedure (Marra and Wood, 2011), which enables the analyst to screen the uncertainties of negligible influence.

The paper is organised as follows. Section 2 describes the statistical methods to derive non-stationary GEV-based seismic fragility curves. Then, in Sect. 3, we describe a test case related to the seismic fragility assessment for a steam line of a nuclear power plant. For this case, the derivation of FC is performed by considering the widely used IM in the domain of seismic engineering, namely peak ground acceleration (PGA). Finally, the proposed procedure is applied in Sect. 4 to two cases, without and with epistemic uncertainties, and the results are discussed in Sect. 5.

2 Statistical methods

In this section, we first describe the main steps of the proposed procedure for deriving the FC (Sect. 2.1). The subsequent sections provide technical details on the GEV probability model (Sect. 2.2), its non-stationary formulation and implementation (Sect. 
2.3) within the GAMLSS framework, and its combination with variable selection (Sect. 2.4).\n\n## 2.1 Overall procedure\n\nTo derive the seismic FC, the following overall procedure is proposed.\n\n• Step 1 consists of analysing the validity of using the GEV distribution with respect to alternative probabilistic models (like the normal distribution of Eq. 2 in particular).\n\n• Depending on the results of step 1, step 2 aims at fitting the non-stationary GEV model using the double-penalisation formulation described in Sect. 2.2 and 2.3.\n\n• Step 3 aims at producing some diagnostic information about the fitting procedure and results. The first diagnostic test uses the Q–Q plot of the model deviance residuals (conditional on the fitted model coefficients and scale parameter) formulated by Augustin et al. (2012). If the model distributional assumptions are met, then the Q–Q plot should be close to a straight line. The second diagnostic test relies on a transformation of the data to a Gumbel distributed random variable (e.g. Beirlant et al., 2004) and on an analysis of the corresponding Gumbel Q–Q and P–P plot.\n\n• Step 4 aims at analysing the partial effect of each input variable (i.e. the smooth non-linear term; see Eq. 4 in Sect. 2.3) to assess the influence of the different GEV parameters.\n\n• Step 5 aims at deriving the seismic FC by evaluating the failure probability ${P}_{\\mathrm{f}}\\left(\\mathrm{im},\\mathbit{u}\\right)=P\\left(\\mathrm{EDP}\\ge \\mathrm{th}\\mathrm{|}\\mathrm{IM}=\\mathrm{im},\\mathbit{U}=\\mathbit{u}\\right)$. The following procedure is conducted to account for the epistemic uncertainties:\n\n• Step 5.1 – the considered IM is fixed at a given value,\n\n• Step 5.2 – for the considered IM value, a large number (here chosen at n=1000) of U samples are randomly generated,\n\n• Step 5.3 – for each of the randomly generated U samples, the failure probability is estimated for the considered IM value,\n\nThe result of the procedure corresponds to a set of n FCs from which we can derive the median FC as well as the uncertainty bands based on the pointwise confidence intervals at different levels. These uncertainty bands thus reflect the impact of the epistemic uncertainty related to the mechanical and geometrical parameters. Due to the limited number of observations, the derived FC is associated to the uncertainty on the fitting of the probabilistic model (e.g. GEV or Gaussian) as well. To integrate this fitting uncertainty in the analysis, step 5 can be extended by randomly generating parameters of the considered probabilistic model at step 5.2 (by assuming that they follow a multivariate Gaussian distribution).\n\n## 2.2 Model selection\n\nSelecting the most appropriate probabilistic models is achieved by means of information criteria, as recommended in the domain of non-stationary extreme value analysis (e.g. Kim et al., 2017; Salas and Obeysekera, 2014) and more particularly recommended for choosing among various fragility models (e.g. Lallemant et al., 2015); see also an application of these criteria in the domain of nuclear safety by Zentner (2017). 
We focus on two information criteria, namely the Akaike and Bayesian information criteria (Akaike, 1998; Schwarz, 1978), respectively denoted AIC and BIC, whose formulation holds as follows:

$\mathrm{AIC} = -2\,l + 2k, \qquad \mathrm{BIC} = -2\,l + k\,\log(n), \qquad (4)$

where l is the maximised log likelihood of the considered probability model, k is the number of parameters and n is the size of the dataset used to fit the probabilistic model.

Though both criteria share similarities in their formulation, they provide different perspectives on model selection.

• AIC-based model selection considers a model to be a probabilistic attempt to approach the "infinitely complex data-generating truth – but only approaching not representing" (Höge et al., 2018: Table 2). This means that AIC-based analysis aims at addressing which model will best predict the next sample; i.e. it provides a measure of the predictive accuracy of the considered model (Aho et al., 2014: Table 2).

• The purpose of BIC-based analysis considers each model to be a "probabilistic attempt to truly represent the infinitely complex data-generating truth" (Höge et al., 2018: Table 2), assuming that the true model exists and is among the candidate models. This perspective is different from the one of AIC and focuses on an approximation of the marginal probability of the data (here lEDP – log-transformed EDP) given the model (Aho et al., 2014: Table 2) and gives insights on which model generated the data; i.e. it measures goodness of fit.

The advantage of testing both criteria is to account for both perspectives on model selection, predictive accuracy and goodness of fit, while enabling the penalisation of models that are too complex; the BIC generally penalises more strongly than the AIC. Since the constructed models use penalisation for the smoothness, we use the formulation provided by Wood et al. (2016: Sect. 5) to account for the smoothing parameter uncertainty.

However, selecting the most appropriate model may not be straightforward in all situations when two model candidates present close AIC and BIC values. For instance, Burnham and Anderson (2004) suggest an AIC difference (relative to the minimum value) of at least 10 to support the ranking between model candidates with confidence. If this criterion is not met, we propose complementing the analysis by the likelihood ratio test (LRT; e.g. Panagoulia et al., 2014: Sect. 2), which compares two hierarchically nested GEV formulations using $L = -2\,(l_{0} - l_{1})$, where $l_{0}$ is the maximised log likelihood of the simpler model M0 and $l_{1}$ is the one of the more complex model M1 (which has q additional parameters compared to M0 and contains M0 as a particular case). Under the simpler model, L follows a χ2 distribution with q degrees of freedom, which allows a p value of the test to be derived.

## 2.3 Non-stationary GEV distribution

The CDF of the GEV probability model holds as follows:

$P(\mathrm{EDP} \le \mathrm{edp}) = \exp\left(-\left(1 + \xi\,\frac{\mathrm{edp} - \mu}{\sigma}\right)^{-1/\xi}\right), \qquad (5)$

where "edp" is the variable of interest, and μ, σ and ξ are the GEV location, scale and shape parameters, respectively.
Depending on the value of the shape parameter, the GEV distribution presents an asymptotic horizontal behaviour for ξ<0 (i.e. the asymptotically bounded distribution, which corresponds to the Weibull distribution), unbounded behaviour when ξ>0 (i.e. high probability of occurrence of great values can be reached, which corresponds to the Fréchet distribution) and intermediate behaviour in the case of ξ=0 (Gumbel distribution).\n\nFigure 1a illustrates the behaviour of the GEV density distribution for μ=12.5, σ=0.25 and different ξ values: the higher ξ, the heavier the tail. Figure 1b and c further illustrate how changes in the other parameters (the location and the scale, respectively) affect the density distribution. The location primarily translates the whole density distribution, while the scale affects the tail and, to a lesser extent (for the considered case), the mode.",
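As an illustration, the behaviour displayed in Fig. 1 can be reproduced with a few lines of R (the language used for the statistical analyses of this study; see the code availability section) by differentiating the CDF of Eq. (5); the following sketch is illustrative only, with the parameter values quoted above and an arbitrary range of "edp" values.

```r
# Minimal sketch: GEV density implied by Eq. (5), reproducing the qualitative
# behaviour of Fig. 1a (heavier tail for larger xi).
dgev_eq5 <- function(y, mu, sigma, xi) {
  if (abs(xi) < 1e-8) {                     # Gumbel limit (xi = 0)
    z <- (y - mu) / sigma
    return(exp(-z - exp(-z)) / sigma)
  }
  t <- 1 + xi * (y - mu) / sigma
  ifelse(t > 0, t^(-1 / xi - 1) * exp(-t^(-1 / xi)) / sigma, 0)
}
y <- seq(11.5, 15, length.out = 500)
plot(y, dgev_eq5(y, mu = 12.5, sigma = 0.25, xi = 1), type = "l",
     xlab = "edp", ylab = "density")
lines(y, dgev_eq5(y, mu = 12.5, sigma = 0.25, xi = 0.5), lty = 2)
lines(y, dgev_eq5(y, mu = 12.5, sigma = 0.25, xi = 0),   lty = 3)  # Gumbel case
```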
"Figure 1Behaviour of the GEV density distributions depending on the changes in the parameter value: (a) ξ (with μ fixed at 12.5 and σ fixed at 0.25), (b) μ (with ξ fixed at 0.5 and σ fixed at 0.25) and (c) σ (with μ fixed at 12.5 and ξ fixed at 0.5).\n\nThe GEV distribution is assumed to be non-stationary in the sense that the GEV parameters θ=(μ, σ, ξ) vary as a function of x the vector of input variables, which include IM and the uncertain input variables U (as described in the Introduction). The fitting is performed within the general framework of the GAMLSS parameter (e.g. Rigby and Stasinopoulos, 2005). Since the scale parameter satisfies σ>0, we preferably work with its log transformation, which is denoted lσ. In the following, we assume that θ follows a semi-parametric additive formulation as follows:\n\n$\\begin{array}{}\\text{(6)}& {\\mathit{\\eta }}_{\\mathit{\\theta }}\\left(\\mathbit{x}\\right)=\\sum _{j=\\mathrm{1}}^{J}{f}_{j}\\left({x}_{j}\\right),\\end{array}$\n\nwhere J is the number of functional terms that is generally inferior to the number of input variables (see Sect. 2.3). fj(.) corresponds to a univariate smooth non-linear model, described as follows:\n\n$\\begin{array}{}\\text{(7)}& {f}_{j}\\left(x\\right)=\\sum _{b}{\\mathit{\\beta }}_{j\\mathrm{b}}{b}_{\\mathrm{b}}\\left(x\\right),\\end{array}$\n\nwith bb(.) being the thin-plate spline basis function (Wood, 2003) and βj the regression coefficients for the considered smooth function.\n\nThese functional terms (called partial effect) hold the information of each parameter's individual effect on the considered GEV parameter. The interest is to model the relationship between each GEV parameter and the input variables' flexibly. Alternative approaches would assume a priori functional relationships (like linear or of a polynomial form), which may not be valid.\n\nThe model estimation consists of evaluating the regression coefficients β (associated to the GEV parameters θ) by maximising the log likelihood l(.) of the GEV distribution. To avoid overfitting, the estimation is based on the penalised version of l(.) to control the roughness of the smooth functional terms (hence their complexity) as follows:\n\n$\\begin{array}{}\\text{(8)}& \\begin{array}{c}\\mathrm{argmax}\\\\ \\mathit{\\beta }\\end{array}\\left(l\\left(\\mathbit{\\beta }\\right)-\\frac{\\mathrm{1}}{\\mathrm{2}}\\sum _{j}{\\mathit{\\lambda }}_{j}{\\mathbit{\\beta }}^{T}{\\mathbf{S}}^{j}\\mathbit{\\beta }\\right),\\end{array}$\n\nwhere λj controls the extent of the penalisation (i.e. the trade-off between goodness of fit and smoothness), and Sj is a matrix of known coefficients (such that the terms in the summation measure the roughness of the smooth functions). Computational methods and implementation details are detailed in Wood et al. (2016) and references therein. In particular, the penalisation parameter is selected through minimisation of the generalised cross-validation score.\n\n## 2.4 Variable selection\n\nThe introduction of the penalisation coefficients in Eq. (8) has two effects: they can penalise how “wiggly” a given term is (i.e. it has a smoothing effect), and they can penalise the absolute size of the function (i.e. it has a shrinkage effect). The second effect is of high interest to screen out input variables of negligible influence. However, the penalty can only affect the components that have derivatives, i.e. the set of smooth non-linear functions called the “range space”. 
Completely smooth functions (including constant or linear functions), which belong to the "null space", are, however, not influenced by Eq. (8). For instance, for one-dimensional thin-plate regression splines, a linear term might be left in the model, even when the penalty value is very large (λ→∞); this means that the aforementioned procedure does not ensure that an input variable of negligible influence will be completely filtered out of the analysis (with the corresponding regression coefficients shrunk to zero). The consequence is that Eq. (8) does not usually remove a smooth term from the model altogether (Marra and Wood, 2011). To overcome this problem, a double-penalty procedure was proposed by Marra and Wood (2011) based on the idea that the space of a spline basis can be decomposed into the sum of two components, one associated with the functions in the null space and the other with the range space. See Appendix A for further implementation details. This double-penalty procedure is adopted in the following.

To exemplify how the procedure works, we apply it to the following synthetic case. Consider a non-stationary GEV distribution whose parameters are related to two covariates x1 and x2 as follows (Eq. 9):

$f_{\mu}(x) = x_{1}^{3} + 2\,x_{2}^{2} + 1, \qquad f_{l\sigma}(x) = x_{1}^{2}, \qquad f_{\xi}(x) = -0.1. \qquad (9)$

A total of 200 random samples are generated by drawing x1 and x2 from a uniform distribution on [0; 4] and [0; 2], respectively. Figure 2a provides the partial effects for the synthetic test case using the single-penalisation approach. The non-linear relationships are clearly identified for μ (Fig. 2a – i, ii) and for lσ (Fig. 2a – iii). However, the single-penalisation approach fails to identify properly the absence of influence of x2 on lσ and of both covariates on ξ (Fig. 2a – iv, v, vi), since the resulting partial effects still present a linear trend (though with small amplitude and large uncertainty bands). Figure 2b provides the partial effects using the double-penalisation approach. Clearly, this type of penalisation achieves a more satisfactory identification of the negligible influence of x2 on lσ and of both covariates on ξ (Fig. 2b – iv, v, vi) as well as of the non-linear partial effects for μ (Fig. 2b – i, ii) and for lσ (Fig. 2b – iii).",
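This synthetic experiment can be reproduced, for instance, with the R package mgcv (used for the statistical analyses of this study; see the code availability section), whose gevlss family fits the GEV location, scale and shape as additive smooth models and whose select = TRUE argument activates an extra null-space penalty in the spirit of the double penalisation of Marra and Wood (2011). The sketch below is illustrative only: the GEV samples are generated by inverse-CDF simulation, and the exact numerical settings are assumptions rather than those of the original study.

```r
# Minimal sketch, assuming a recent mgcv version that provides the gevlss family.
library(mgcv)
set.seed(1)
n    <- 200
x1   <- runif(n, 0, 4)
x2   <- runif(n, 0, 2)
mu   <- x1^3 + 2 * x2^2 + 1   # Eq. (9): location
lsig <- x1^2                  # Eq. (9): log scale (note the very wide range)
xi   <- -0.1                  # Eq. (9): shape
u    <- runif(n)
y    <- mu + exp(lsig) / xi * ((-log(u))^(-xi) - 1)  # inverse-CDF GEV sampling
dat_syn <- data.frame(y = y, x1 = x1, x2 = x2)

# One formula per GEV parameter (location, log scale, shape); select = TRUE adds
# the null-space penalty so that terms of negligible influence can be shrunk to zero.
fit_syn <- gam(list(y ~ s(x1) + s(x2),
                      ~ s(x1) + s(x2),
                      ~ s(x1) + s(x2)),
               family = gevlss, data = dat_syn, select = TRUE)
plot(fit_syn, pages = 1, scale = 0)  # partial effects, as in Fig. 2b
```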
"Figure 2Partial effect for the synthetic test case using the single-penalisation approach (a) and the double-penalisation approach (b).\n\n3 Application case\n\nThis section provides details on the test case on which the proposed statistical methods (Sect. 2) for the derivation of FCs are demonstrated. The numerical model of the main steam line of a nuclear reactor is described in Sect. 3.1. A set of ground-motion records (Sect. 3.2) is applied to assess the seismic fragility of this essential component of a nuclear power plant.\n\n## 3.1 Structural model\n\nThe 3-D model of a steam line and its supporting structure (i.e. the containment building; see schematic overview in Fig. 3a), previously assembled by Rahni et al. (2017) in the Cast3M finite-element software (Combescure et al., 1982), are introduced here as an application of the seismic fragility analysis of a complex engineered object. The containment building consists of a double-wall structure: the inner wall (reinforced pre-stressed concrete) and the outer wall (reinforced concrete) are modelled with multi-degree-of-freedom stick elements (see Fig. 3b). The steel steam line is modelled by means of beam elements, representing pipe segments and elbows, as well as several valves, supporting devices and stops at different elevations of the supporting structure.",
"Figure 3(a) Schematic overview of a nuclear power plant (adapted from https://www.iaea.org/resources/nucleus-information-resources, last access: 2 December 2019). The red rectangles indicate the main components represented in the structural model. (b) Stick model of the containment building. (c) Steam line beam model, originally built by Rahni et al. (2017). The red circle indicates the location of the vertical stop.\n\nThe objective of the fragility analysis is to check the integrity of the steam line: one of the failure criteria identified by Rahni et al. (2017) is the effort calculated at the location corresponding to a vertical stop along the steam line (Fig. 3c). Failure is assumed when the maximum transient effort exceeds the stop's design effort, i.e. EDP 775 kN (i.e. 13.56 on log scale). The model also accounts for epistemic uncertainties due to the identification of some mechanical and geometrical parameters, namely Young's modulus of the inner containment, the damping ratio of the structural walls and of the steam line, and the thickness of the steam line along various segments of the assembly. The variation range of the 10 selected parameters, constituting the vector U of uncertain factors (see Eq. 3), is detailed in Table 1. A uniform distribution is assumed for these parameters following the values provided by Rahni et al. (2017).\n\nTable 1Input parameters of the numerical model, according to Rahni et al. (2017).",
"## 3.2 Dynamic structural analyses\n\nA series of non-linear time history analyses are performed on the 3-D model by applying ground-motion records (i.e. acceleration time histories) at the base of the containment building in the form of a three-component loading. In the Cast3M software, the response of the building is first computed, and the resulting displacement time history along the structure is then applied to the steam line model in order to record the effort demands during the seismic loading. The non-linear dynamic analyses are performed on a high-performance computing cluster, enabling the launch of the multiple runs in parallel (e.g. a ground motion of a duration of 20 s is processed in around 3 or 4 h). Here, the main limit with respect to the number of ground-motion records is not necessarily related to the computation time cost but more related to the availability of natural ground motions that are able to fit the conditional spectra at the desired return periods (as detailed below). Another option would be the generation of synthetic ground motions, using for instance the stochastic simulation method by Boore (2003) or the non-stationary stochastic procedure by Pousse et al. (2006). It has been decided, however, to use only natural records in the present application in order to accurately represent the inherent variability in other ground-motion parameters such as duration.\n\nNatural ground-motion records are selected and scaled using the conditional spectrum method described by Lin et al. (2013). Thanks to the consideration of reference earthquake scenarios at various return periods, the scaling of a set of natural records is carried out to some extent while preserving the consistency of the associated response spectra. The steps of this procedure hold as follows.\n\n• Choice of a conditioning period. The spectral acceleration (SA) at ${T}^{*}=\\mathrm{0.38}$ s (fundamental mode of the structure) is selected as the ground-motion parameter upon which the records are conditioned and scaled.\n\n• Definition of seismic hazard levels. Six hazard levels are arbitrarily defined, and the associated annual probabilities of exceedance are quantified with the OpenQuake engine1, using the SHARE seismic source catalogue (Woessner et al., 2013), for an arbitrary site in southern Europe. The ground-motion prediction equation (GMPE) from Boore et al. (2014) is used to generate the ground motions, assuming soil conditions corresponding to ${V}_{\\mathrm{s},\\mathrm{30}}=\\mathrm{800}$ m s−1 at the considered site. Data associated with the mean hazard curve are summarised in Table 2.\n\n• Disaggregation of the seismic sources and identification of the reference earthquakes. The OpenQuake engine is used to perform a hazard disaggregation for each scaling level. A reference earthquake scenario may then be characterised through the variables [Mw; Rjb; ε] (i.e. magnitude, Joyner–Boore distance, error term of the ground-motion prediction equation), which are averaged from the disaggregation results (Bazzurro and Cornell, 1999). This disaggregation leads to the definition of a mean reference earthquake (MRE) for each scaling level.\n\n• Construction of the conditional spectra. For each scaling level, the conditional mean spectrum is built by applying the GMPE to the identified MRE. 
For each period Ti, it is defined as follows (Lin et al., 2013):

$\mu_{\ln \mathrm{SA}(T_{i}) \mid \ln \mathrm{SA}(T^{*})} = \mu_{\ln \mathrm{SA}}(M_{\mathrm{w}}, R_{jb}, T_{i}) + \rho_{T_{i},T^{*}} \cdot \epsilon(T^{*}) \cdot \sigma_{\ln \mathrm{SA}}(M_{\mathrm{w}}, T_{i}), \qquad (10)$

where $\mu_{\ln \mathrm{SA}}(M_{\mathrm{w}}, R_{jb}, T_{i})$ is the mean output of the GMPE for the MRE considered, $\rho_{T_{i},T^{*}}$ is the cross-correlation coefficient between SA(Ti) and SA(T*) (Baker and Jayaram, 2008), ε(T*) is the error term value at the target period $T^{*}=0.38$ s, and $\sigma_{\ln \mathrm{SA}}(M_{\mathrm{w}}, T_{i})$ is the standard deviation of the logarithm of SA(Ti), as provided by the GMPE. The associated conditional standard deviation is also evaluated, thanks to the following equation:

$\sigma_{\ln \mathrm{SA}(T_{i}) \mid \ln \mathrm{SA}(T^{*})} = \sigma_{\ln \mathrm{SA}}(M_{\mathrm{w}}, T_{i}) \cdot \sqrt{1 - \rho_{T_{i},T^{*}}^{2}}. \qquad (11)$

The conditional mean spectrum and its associated standard deviation are finally assembled in order to construct the conditional spectrum at each scaling level (a short code sketch of Eqs. 10 and 11 is given below). The conditional mean spectra may be compared with the uniform hazard spectra (UHS) that are estimated from the hazard curves at various periods. As stated in Lin et al. (2013), the SA value at the conditioning period corresponds to the UHS, which acts as an upper-bound envelope for the conditional mean spectrum.

• Selection and scaling of the ground-motion records. Ground-motion records that are compatible with the target conditional response spectrum are selected, using the algorithm by Jayaram et al. (2011): the distribution of the selected ground-motion spectra, once scaled with respect to the conditioning period, has to fit the median and standard deviation of the conditional spectrum that is built from Eqs. (10) and (11). The final selection from the PEER database (PEER, 2013) consists of 30 records for each of the six scaling levels (i.e. 180 ground-motion records in total).

Two distinct cases are considered for the derivation of FCs, depending on whether parametric uncertainties are included in the statistical model or not.

• Case no. 1 (without epistemic uncertainties). A first series of numerical simulations are performed by keeping the mechanical and geometrical parameters fixed at their best estimate values, i.e. the midpoint of the distribution intervals detailed in Table 1. The 180 ground-motion records are applied to the deterministic structural model, resulting in a database of 180 (IM, EDP) pairs, with PGA chosen as the IM.

• Case no. 2 (with epistemic uncertainties). A second series of numerical simulations are performed by accounting for parametric uncertainties. This is achieved by randomly varying the values of the mechanical and geometrical input parameters of the numerical model (Table 1) using the Latin hypercube sampling technique (McKay et al., 1979).
A total of 360 numerical simulations are performed (using the 180 ground-motion records).

Therefore, multiple ground motions are scaled at the same IM value, and statistics on the exceedance rate of a given EDP value may be extracted at each IM step, in a similar way to what is carried out in multiple-stripe analyses or incremental dynamic analyses (Baker, 2015; Vamvatsikos and Cornell, 2002) for the derivation of FC. In the present study, the conditional spectrum method leads to the selection and scaling of ground motions with respect to SA (0.38 s), which corresponds to the fundamental mode of the structure. For illustration purposes, Fig. 4 displays the damage probabilities at the six selected return periods, which may be associated with unique values of SA (0.38 s).",
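For reference, Eqs. (10) and (11) reduce to a few lines of code once the GMPE outputs, the cross-correlation coefficient and the error term at the conditioning period are available; the helper below is a generic sketch, and all of its inputs are assumed to be provided by the hazard computation.

```r
# Minimal sketch of Eqs. (10)-(11): conditional mean and standard deviation of
# ln SA(Ti) given ln SA(T*). mu_lnSA and sd_lnSA are the GMPE mean and standard
# deviation at period Ti for the mean reference earthquake, rho the cross-
# correlation between SA(Ti) and SA(T*), and eps_star the error term at T*.
cond_spectrum <- function(mu_lnSA, sd_lnSA, rho, eps_star) {
  list(mean = mu_lnSA + rho * eps_star * sd_lnSA,   # Eq. (10)
       sd   = sd_lnSA * sqrt(1 - rho^2))            # Eq. (11)
}
```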
"Figure 4(a) Damage probabilities directly extracted from the six scaling levels (or return periods). (b) Damage probabilities with respect to the six SA(T*) levels, and fitted lognormal cumulative distribution function.\n\nFrom Fig. 4, two main observations can be made: (i) the multiple-stripe analysis does not emphasise any difference between the models with and without parametric uncertainty, and (ii) the FC directly derived from the six probabilities does not provide a satisfying fit. However, the fragility analysis is here focused on the pipeline component (located along the structure), which appears to be more susceptible to PGA: therefore, PGA is chosen as IM in the present fragility analysis.\n\nFigure 5 provides the evolution of lEDP versus lPGA (log-transformed PGA) for both cases. We can note that only a few simulation runs (five for Case no. 1 and eight for Case no. 2) lead to the exceedance of the acceptable demand threshold. As shown in Fig. 4, there is a variability around the six scaling levels: for this reason, it is not feasible to represent probabilities at six levels of PGA. In this case, conventional approaches for FC derivation are the “regression on the IM-EDP cloud” (i.e. least-squares regression, as demonstrated by Cornell et al., 2002) or the use of generalised linear model regression or maximum likelihood estimation (Shinozuka et al., 2000).",
"Figure 5Evolution of lEDP (log-transformed EDP) as a function of lPGA (log-transformed PGA) for Case no. 1 (a) without parametric uncertainty and for Case no. 2 (b) with parametric uncertainty. The horizontal dashed line indicates the acceptable demand threshold.\n\n4 Applications\n\nIn this section, we apply the proposed procedure to both cases described in Sect. 3.2. Section 4.1 and 4.2 describes the application for deriving the FCs without (Case no. 1) and with epistemic uncertainty (Case no. 2), respectively. For each case, we first select the most appropriate probabilistic model, then analyse the partial effects and, finally, compare the derived FC with the one based on the commonly used assumption of normality. The analysis is here focused on the lPGA to derive the FC.\n\n## 4.1 Case no. 1: derivation of seismic FC without epistemic uncertainties\n\n### 4.1.1 Model selection and checking\n\nA series of different probabilistic models (Table 3) were fitted to the database of (IM, EDP) points described in Sect. 3.2 (Fig. 5a). Three different probabilistic models (normal, Tweedie, GEV) and two types of effects on the probabilistic model's parameters were tested (linear or non-linear). Note that the Tweedie distribution corresponds to a family of exponential distributions which takes as special cases the Poisson distribution and the gamma distribution (Tweedie, 1984).\n\nThe analysis of the AIC and BIC differences (relative to the minimum value; Fig. 6) here suggests that both models, GEVsmo3 and GEVsmo2, are valid (as indicated by the AIC and BIC differences close to zero). The differences between the criteria value are less than 10, and to help the ranking, we complement the analysis by evaluating the LRT p value, which reaches ∼18 %, hence suggesting that GEVsmo2 should be preferred (for illustration, the LRT p value for a stationary GEV model and the non-stationary GEVsmot2 model is here far less than 1 %). In addition, we also analyse the regression coefficients of GEVsmo3, which shows that the penalisation procedure imposes all coefficients of the shape parameters to be zero, which indicates that lPGA only acts on the location and scale parameters.",
"Figure 6Model selection criteria (AIC – a – and BIC – b – differences relative to the minimum value) for the different models described in Table 3 considering the derivation of a FC without epistemic uncertainty.\n\nThese results provide support in favour of GEVsmo2, i.e. a GEV distribution with a non-linear smooth term for the location and scale parameters only. The estimated shape parameter reaches here a constant value of 0.07 (±0.05), hence indicating a behaviour close to the Gumbel domain. This illustrates the flexibility of the proposed approach based on the GEV, which encompasses the Gumbel distribution as a particular case. We also note that the analysis of the AIC and BIC values would have favoured the selection of NOsmo2 if the GEV model had not been taken into account, i.e. a heteroscedastic log-normal FC.\n\nThe examination of the diagnostic plots (Fig. 7a) of the model deviance residuals (conditional on the fitted model coefficients and scale parameter) shows a clear improvement of the fitting, in particular for large theoretical quantiles above 1.5 (the dots better aligned along the first bisector in Fig. 7b). The Gumbel Q–Q and P–P plots (Fig. 7c and d) also indicate a satisfactory fitting of the GEV model.",
"Figure 7Diagnostic plots to check the validity of the considered model: (a) Q–Q plot for the deviance residuals for the NOsmo2 model, (b) Q–Q plot for the deviance residuals for the GEVsmo2 model without parametric uncertainty, (c) Q–Q plot on Gumbel scale and (d) P–P plot on Gumbel scale.\n\n### 4.1.2 Partial effects\n\nFigure 8a and b provides the evolution of the partial effects (as formally described in Sect. 2.3: Eqs. 6 and 7) with respect to the location and to the log-transformed scale parameter, respectively. We note that the assumption of the relationship between EDP and lPGA is non-linear (contrary to the widely used assumption). An increase in lPGA both induces an increase in μ and of lσ, hence resulting in a shift of the density (as illustrated in Fig. 1b), and an impact on the tail (as illustrated in Fig. 1c). We note that the fitting uncertainty (indicated by the ± 2 standard errors above and below the best estimate) remains small, and the aforementioned conclusions can be considered with confidence.",
"Figure 8Partial effect of (a) PGA on the GEV location parameter and (b) PGA on the log-transformed GEV scale parameter. The red-coloured bands are defined by 2 SE (standard errors) above and below the estimate.\n\n### 4.1.3 FC derivation\n\nUsing the Monte Carlo-based procedure described in Sect. 2.1, we evaluate the failure probability Pf (Eq. 1) to derive the corresponding GEV-based FC (Fig. 9a) with accounts for fitting uncertainties. The resulting FC is compared to the one based on the normal assumption (Fig. 9b). This shows that Pf would have been underestimated for moderate-to-large PGA from 10 to ∼25 m2 s−1 if the selection of the probability model had not been applied (i.e. if the widespread assumption of normality had been used); for instance at PGA = 20 m2 s−1, Pf is underestimated by ∼5 %. This is particularly noticeable for the range of PGA from 10 to 15 m2 s−1, where the GEV-based FC clearly indicates a non-zero probability value, whereas the Gaussian model indicates negligible probability values below 1 %. For very high PGA, both FC models approximately provide almost the same Pf value. These conclusions should, however, be analysed with respect to the fitting uncertainty, which has here a clear impact; for instance at PGA = 20 m2 s−1, the 90 % confidence interval has a width of 10 % (Fig. 9a), i.e. of the same order of magnitude than a PGA variation from 10 to 20 m2 s−1. We note also that the fitting uncertainty reaches the same magnitude between both models. This suggests that additional numerical simulation results are necessary to decrease this uncertainty source for both models.",
"Figure 9Fragility curve (relating the failure probability Pf to PGA) based on (a) the non-stationary model GEVsmo2 and (b) the Gaussian NOsmo2 model. The coloured bands reflect the uncertainty in the fitting.\n\n## 4.2 Case no. 2: derivation of seismic FC with epistemic uncertainties\n\n### 4.2.1 Model selection and checking\n\nIn this case, the FCs were derived by accounting not only for lPGA but also for 10 additional uncertain parameters (Table 1).\n\nThe AIC and BIC differences (relative to the minimum value; Fig. 10) for the different probabilistic models (described in Table 2) show that GEVsmo2 model should preferably be selected. Contrary to Case no. 1, the AIC and BIC differences are large enough to rank with confidence GEVsmo2 as the most appropriate model. This indicates that the location and scale parameters are non-linear smooth functions of IM and of the uncertain parameters. The estimated shape parameter reaches here a constant value of −0.24 (±0.06), hence indicating a Weibull tail behaviour. Similarly to the analysis without parametric uncertainties (Sect. 4.1), we note that the AIC and BIC values would have favoured the selection of NOsmo2 if the GEV model had not been taken into account.",
"Figure 10Model selection criteria (AIC – a – and BIC – b – differences relative to the minimum value) for the different models described in Table 3 considering the derivation of a FC with epistemic uncertainty.\n\nThe examination of the Q–Q plots (Fig. 11) of the model deviance residuals (conditional on the fitted model coefficients and scale parameter) shows an improvement of the fitting, in particular for large theoretical quantiles above 1.0 (the dots better aligned along the first bisector in Fig. 11b). The Gumbel Q–Q and P–P plot (Fig. 11c and d) also indicate a very satisfactory fitting of the GEV model.",
"Figure 11Diagnostic plots to check the validity of the considered model: (a) Q–Q plot for the deviance residuals for the NOsmo2 model, (b) Q–Q plot for the deviance residuals for the GEVsmo2 model with epistemic uncertainty, (c) Q–Q plot on Gumbel scale and (d) P–P plot on Gumbel scale.\n\n### 4.2.2 Partial effects\n\nFigure 12 provides the evolution of the partial effects with respect to the location parameter. Several observations can be made.\n\n• Figure 12a shows quasi-similar partial effect for lPGA in Case no. 1 (Fig. 8a).\n\n• Three out of the ten uncertain parameters were filtered out by the procedure of Sect. 2.4, namely two mechanical parameters (the damping ratio of reinforced pre-stressed concrete ξRPC and the damping ratio of the steam line ξSL) and one geometrical parameter (the pipe thickness of segment no. 2). As an illustration, Fig. 12e depicts the partial effect of a parameter, which was identified as of negligible influence: here, the partial effect of e2 is shrunk to zero.\n\n• Three thickness parameters (e1, e4, e5) present an increasing linear effect on μ (Fig. 12d, g and h).\n\n• Two parameters (Young's modulus of the inner containment EIC and the thickness e3) present a decreasing linear effect on μ (Fig. 12b and f).\n\n• The damping ratio of the reinforced concrete ξRC presents a non-linear effect, with a minimum value at around 0.0725 (Fig. 12c).\n\n• The thickness e6 presents a non-linear effect, with a maximum value at around 0.04 (Fig. 12i).",
"Figure 12Partial effect on the GEV location parameter. The red-coloured bands are defined by 2 SE (standard errors) above and below the estimate.\n\nFigure 13 provides the evolution of the partial effects with respect to the (log-transformed) scale parameter. We show here that a larger number of input parameters were filtered out by the selection procedure; i.e. only the thickness e5 is selected as well as the damping ratios of the concrete structures ξRPC and ξRC (related to the containment building). The partial effects are all non-linear, but with larger uncertainty than for the location parameter (compare the widths of the red-coloured uncertain bands in Figs. 12 and 13). In particular, the strong non-linear influence of ξRPC and ξRC may be due to the simplified coupling assumption between structural dynamic response and anchored steam line (i.e. the displacement time history at various points of the building is directly used as input for the response of the steam line). Identifying this problem is possible thanks to the analysis of the partial effects, though it should be recognised that this behaviour remains difficult to interpret and further investigations are here necessary. We also note that the partial effect for lPGA is quasi-similar to Fig. 8b in Case no. 1.",
"Figure 13Partial effect on the log-transformed GEV scale parameter. The red-coloured bands are defined by 2 SE (standard errors) above and below the estimate.\n\nTable 4 summarises the different types of influence identified in Figs. 12 and 13, i.e. linear, non-linear or absence of influence as well as the type of monotony when applicable.\n\nTable 4Influence of the geometrical and mechanical parameters on the GEV parameters, μ and lσ, of the GEVsmo2 model.",
"### 4.2.3 FC derivation\n\nBased on the results of Figs. 12 and 13, the FC is derived by accounting for the epistemic uncertainties by following the Monte Carlo procedure (step 5 described in Sect. 2.1) by including (or not) fitting uncertainty (Fig. 14a and b, respectively). We show that the GEV-based FC is less steep than the one for Case no. 1 (Fig. 9): this is mainly related to the value of the shape parameter (close to Gumbel regime for Case no. 1 without epistemic uncertainty and to Weibull regime for Case no. 2 with epistemic uncertainty). Figure 14a also outlines that the uncertainty related to the mechanical and geometrical parameters has a non-negligible influence, as shown by the width of the uncertainty bands: for PGA = 30 m2 s−1, the 90 % confidence interval has a width of ∼20 %. In addition, the inclusion of the fitting uncertainty (Fig. 14b) increases the width of the confidence interval, but it appears to mainly impact the 90 % confidence interval (compare the dark- and the light-coloured envelope in Fig. 14); for instance, compared to Fig. 14a, this uncertainty implies a +5 % (respectively −5 %) shift of the upper bound (respectively lower bound) of the 90 % confidence interval at PGA = 30 m2 s−1.",
"Figure 14Fragility curve (relating the failure probability Pf to PGA) considering epistemic uncertainties only (a, c) and fitting uncertainty as well (b, d). (a, b) GEV-based FC; (c, d) FC based on the normal assumption. The coloured bands are defined based on the pointwise confidence intervals derived from the set of FCs (see text for details).\n\nCompared to the widely used assumption of normality, Fig. 14c and d show that the failure probability reached with this model is larger than with the GEV-based FC; at PGA = 30 m2 s−1, the difference reaches ∼5 %. In practice, this means that a design based on the Gaussian model would have here been too conservative. Regarding the impact of the different sources of uncertainty, the epistemic uncertainty appears to influence the Gaussian model less than the GEV one. The impact of the fitting uncertainty is, however, quasi-equivalent for both models.",
"Figure 15FC considering different thickness e4: (a) −12.5 %, (b) −5 %, (c) +5 % and (d) +12.5 % of the original value. Uncertainty bands are provided by accounting for epistemic uncertainty only (dark blue) and by accounting for the fitting uncertainty as well (light blue).\n\nThe interest of incorporating the mechanical and geometrical parameters directly into the equation of the FC is the ability to study how the FC in Fig. 14 evolves as a function of the parametric uncertainties, hence identifying regions of the parameters' values leading to large failure probability. This is illustrated in Fig. 15, where the FC is modified depending on the value of the thickness e4, from −12.5 % (0.033 m) to +12.5 % (∼0.043 m), with respect to the median value of 0.038 m. Here larger e4 induces a steeper FC. This appears to be in agreement with the increasing effect of e4 as shown in Fig. 12g. Figure 15 also shows that the effect of e4 on Pf only becomes significant when the e4 variation is of a least ±5 %, compared to the fitting uncertainty (of the order of magnitude of ±2.5 %).\n\n5 Discussion and further work\n\nThe current study focused on the problem of seismic FC derivation for nuclear power plant safety analysis. We propose a procedure based on the non-stationary GEV distribution to model, in a flexible manner, the tail behaviour of the EDP as a function of the considered IM. The key ingredient is the use of non-linear smooth functional EDP–IM relationships (partial effects) that are learnt from the data (to overcome limits (2) and (3) as highlighted in the Introduction). This avoids the introduction of any a priori assumption to the shape or form of these relationships. In particular, the benefit is shown in Case no. 1 (without epistemic uncertainty), where the non-linear relation is clearly outlined for both μ and lσ. The interest of such data-driven non-parametric techniques has also been shown using alternatives techniques (like neural network – Wang et al., 2018 – or kernel smoothing – Mai et al., 2017). To bring these approaches to an operative level, an extensive comparison or benchmark exercise on real cases should be conducted in the future.\n\nThe second objective of the present study was to compare the GEV-based FC with the one based on the Gaussian assumption. We show that if a careful selection of the most appropriate model is not performed (limit (1) described in the Introduction), the failure probability would be either under- or overestimated for Case no. 1 (without epistemic uncertainty) and Case no. 2 (with epistemic uncertainty), respectively. This result brings an additional element against the uncritical use of the (log-)normal fragility curve (see discussions by Karamlou and Bocchini, 2015; Mai et al., 2017; Zentner et al., 2017, among others).\n\nThe third objective was to propose an approach to incorporate the mechanical and geometrical parameters in the FC derivation (using advanced penalisation procedures). The main motivation was to allow studying the evolution of the failure probability as a function of the considered covariate (as illustrated in Fig. 15). As indicated in the Introduction, an alternative approach would rely on the principles of the IDA method, the advantage being to capture the variability of the structural capacity and to get deeper insight into the structural behaviour. See an example for masonry buildings by Rota et al. (2010). 
However, the adaptation of this technique would impose additional developments to properly characterise collapse through the numerical model (see discussion by Zentner et al., 2017: Sect. 2.5). Section 3.1 also points out the difficulty in applying this approach in our case. Combining the idea underlying IDA and our statistical procedure is worth investigating in the future.\n\nThe benefit of the proposed approach is to provide information on the sensitivity to the epistemic uncertainties by both identifying the parameters of negligible influence (via the double-penalisation method) and using the derived partial effects. The latter hold information on the magnitude and nature of the influence (linear, non-linear, decreasing, increasing, etc.) for each GEV parameter (to overcome limit (4)). Additional developments should, however, be performed to derive the same levels of information for the FC (and not only for the parameters of the probabilistic model). In this view, Fig. 15 provides a first basis that can be improved by (1) analysing the role of each covariate from a physical viewpoint, as done for instance by Salas and Obeysekera (2014) to investigate the evolution of hydrological extremes over time (e.g. increasing, decreasing or abrupt shifts of hydrologic extremes), as some valuable lessons can also be drawn from this domain of application to define and communicate an evolving probability of failure (named return period in this domain), and (2) deriving a global indicator of sensitivity via variance-based global sensitivity analysis (see e.g. Borgonovo et al., 2013). The latter approach introduces promising perspectives to ease the fitting process by filtering out beforehand some negligible mechanical and geometrical parameters. It is also expected to improve the interpretability of the procedure by clarifying the respective role of the different sources of uncertainty, i.e. related to the mechanical and geometrical parameters and also to the fitting process, which appears to have a non-negligible impact in our study.\n\nThe treatment of this type of uncertainty can be improved with respect to two aspects: (1) it is expected to decrease by fitting the FC with a larger number of numerical simulation results. To relieve the computational burden (each numerical simulation has a computation time cost of several hours; see Sect. 3.2), replacing the mechanical simulator by surrogate models (like neural network – Wang et al., 2018 – or using model order reduction strategy – Bamer et al., 2017) can be envisaged, and (2) the modelling of such uncertainty can be done in a more flexible and realistic manner (compared to the Gaussian assumption made here) using Bayesian techniques within the framework of GAMLSS (Umlauf et al., 2018).\n\nFinally, from an earthquake engineering viewpoint, the proposed procedure has focused on a single IM (here PGA), but any other IMs could easily be incorporated, similarly to for the mechanical and geometrical parameters, to derive vector-based FC as done by Gehl et al. (2019) using the same structure. The proposed penalisation approach can be seen as a valuable option to solve a recurrent problem in this domain, namely the identification of most important IMs (see discussion by Gehl et al., 2013, and references therein).\n\nAppendix A: Double-penalisation procedure\n\nThis Appendix gives further details on the double-penalisation procedure used to select variables in the non-stationary GEV. 
Full details are described by Marra and Wood (2011).

Consider the smoothing penalty matrix Sj in Eq. (8) (associated with the jth smooth function in the semi-parametric additive formulation of Eq. 6). This matrix can be decomposed as

$\mathbf{S}_{j} = \mathbf{U}_{j} \boldsymbol{\Lambda}_{j} \mathbf{U}_{j}^{T}, \qquad (\mathrm{A1})$

where Uj is the eigenvector matrix associated with the jth smooth function, and Λj is the corresponding diagonal eigenvalue matrix. As explained in Sect. 2.4, the penalty as described in Eq. (8) can only affect the components that have derivatives, i.e. the set of smooth non-linear functions called the "range space". Completely smooth functions (including constant or linear functions), which belong to the "null space", are, however, not influenced. This problem implies that Λj contains zero eigenvalues, which makes the variable selection difficult for "nuisance" functions belonging to the null space, i.e. functions with negligible influence on the variable of interest. The idea of Marra and Wood (2011) is to introduce an extra penalty term which penalises only functions in the null space of the penalty to achieve a complete removal of the smooth component. Considering the decomposition (Eq. A1), an additional penalty can be defined as $\mathbf{S}_{j}^{*} = \mathbf{U}_{j}^{*}\mathbf{U}_{j}^{*T}$, where $\mathbf{U}_{j}^{*}$ is the matrix of eigenvectors corresponding to the zero eigenvalues of Λj. In practice, the penalty term in Eq. (8) becomes

$\lambda_{j} \boldsymbol{\beta}^{T}\mathbf{S}_{j}\boldsymbol{\beta} + \lambda_{j}^{*} \boldsymbol{\beta}^{T}\mathbf{S}_{j}^{*}\boldsymbol{\beta}, \qquad (\mathrm{A2})$

where two penalisation parameters (λj, λj*) are estimated, here by minimisation of the generalised cross-validation score.

Code and data availability

Code is available upon request to the first author. Statistical analysis was performed using the R package "mgcv" (available at https://cran.r-project.org/web/packages/mgcv/index.html, last access: 20 March 2020). See Wood (2017) for an overview. Numerical simulations were performed with the Cast3M simulator (Combescure et al., 1982).

Author contributions

JR designed the concept, with input from PG. MMF, YG, NR and JC designed the structural model and provided it to PG for adaptation and implementation for the described case. PG performed the dynamical analyses. JR undertook the statistical analyses and wrote the paper, with input from PG.

Competing interests

The authors declare that they have no conflict of interest.

Special issue statement

This article is part of the special issue "Advances in extreme value analysis and application to natural hazards". It is a result of the Advances in Extreme Value Analysis and application to Natural Hazard (EVAN), Paris, France, 17–19 September 2019.

Acknowledgements

We thank both reviewers for their constructive comments, which led to the improvement of the paper.

Financial support

This study has been carried out within the NARSIS project, which has received funding from the European Union's Horizon 2020 Euratom programme under grant agreement no.
755439.\n\nReview statement\n\nThis paper was edited by Yasser Hamdi and reviewed by two anonymous referees.\n\nReferences\n\nAho, K., Derryberry, D., and Peterson, T.: Model selection for ecologists: the worldviews of AIC and BIC, Ecology, 95, 631–636, 2014.\n\nAkaike, H.: Information theory and an extension of the maximum likelihood principle, in: Selected papers of hirotugu akaike, Springer, New York, NY, 199–213, 1998.\n\nArgyroudis, S. A. and Pitilakis, K. D.: Seismic fragility curves of shallow tunnels in alluvial deposits, Soil Dynam. Earthq. Eng., 35, 1–12, 2012.\n\nAugustin, N. H., Sauleau, E. A., and Wood, S. N.: On quantile quantile plots for generalized linear models, Comput. Stat. Data Anal., 56, 404–409, 2012.\n\nBaker, J. W.: Efficient analytical fragility function fitting using dynamic structural analysis, Earthq. Spectra, 31, 579–599, 2015.\n\nBaker, J. W. and Jayaram, N.: Correlation of spectral acceleration values from NGA ground motion models, Earthq. Spectra, 24, 299–317, 2008.\n\nBamer, F., Amiri, A. K., and Bucher, C.: A new model order reduction strategy adapted to nonlinear problems in earthquake engineering, Earthq. Eng. Struct. Dynam., 46, 537–559, 2017.\n\nBanerjee, S. and Shinozuka, M.: Mechanistic quantification of RC bridge damage states under earthquake through fragility analysis, Prob. Eng. Mech., 23, 12–22, 2008.\n\nBazzurro, P. and Cornell, C. A.: Disaggregation of seismic hazard, Bull. Seismol. Soc. Am., 89, 501–520, 1999.\n\nBeirlant, J., Goegebeur, Y., Segers, J., and Teugels, J.: Statistics of Extremes: Theory and Applications, Wiley & Sons, Chichester, UK, 2004.\n\nBoore, D. M.: Simulation of ground motion using the stochastic method, Pure Appl. Geophys., 160, 635–676, 2003.\n\nBoore, D. M., Stewart, J. P., Seyhan, E., and Atksinson, G. M.: Nga-west2 equations for predicting pga, pgv, and 5 % damped psa for shallow crustal earthquakes, Earthq. Spectra, 30, 1057–1085, 2014.\n\nBorgonovo, E., Zentner, I., Pellegri, A., Tarantola, S., and de Rocquigny, E.: On the importance of uncertain factors in seismic fragility assessment, Reliabil. Eng. Syst. Safe., 109, 66–76, 2013.\n\nBurnham, K. P. and Anderson, D. R.: Multimodel inference: understanding AIC and BIC in model selection, Sociolog. Meth. Res., 33, 261–304, 2004.\n\nCheng, L., AghaKouchak, A., Gilleland, E., and Katz, R. W.: Non-stationary extreme value analysis in a changing climate, Climatic Change, 127, 353–369, 2014.\n\nColes, S.: An Introduction to Statistical Modeling of Extreme Values, Springer, London, UK, 2001.\n\nCombescure, A., Hoffmann, A., and Pasquet, P.: The CASTEM finite element system, in: Finite Element Systems, Springer, Berlin, Heidelberg, 115–125, 1982.\n\nCornell, C. A., Jalayer, F., Hamburger, R. O., and Foutch, D. A.: Probabilistic basis for 2000 SAC federal emergency management agency steel moment frame guidelines, J. Struct. Eng., 128, 526–533, 2002.\n\nDutfoy, A.: Estimation of Tail Distribution of the Annual Maximum Earthquake Magnitude Using Extreme Value Theory, Pure Appl. Geophys., 176, 527–540, 2019.\n\nEllingwood, B. R.: Earthquake risk assessment of building structures, Reliab. Eng. Syst. Safe., 74, 251–262, 2001.\n\nGehl, P., Seyedi, D. M., and Douglas, J.: Vector-valued fragility functions for seismic risk evaluation, Bull. Earthq. 
Eng., 11, 365–384, 2013.\n\nGehl, P., Marcilhac-Fradin, M., Rohmer, J., Guigueno, Y., Rahni, N., and Clément, J.: Identifying Uncertainty Contributions to the Seismic Fragility Assessment of a Nuclear Reactor Steam Line, in: 7th International Conference on Computational Methods in Structural Dynamics and Earthquake Engineering, Crete, Greece, https://doi.org/10.7712/120119.7312.18915, 2019.\n\nHöge, M., Wöhling, T., and Nowak, W.: A primer for model selection: The decisive role of model complexity, Water Resour. Res., 54, 1688–1715, 2018.\n\nJayaram, N., Lin, T., and Baker, J. W.: A computationally efficient ground-motion selection algorithm for matching a target response spectrum mean and variance, Earthq. Spectra, 27, 797–815, 2011.\n\nKaramlou, A. and Bocchini, P.: Computation of bridge seismic fragility by large-scale simulation for probabilistic resilience analysis, Earthq. Eng. Struct. Dynam., 44, 1959–1978, 2015.\n\nKim, H., Kim, S., Shin, H., and Heo, J. H.: Appropriate model selection methods for nonstationary generalized extreme value models, J. Hydrol., 547, 557–574, 2017.\n\nKoenker, R., Leorato, S., and Peracchi, F.: Distributional vs. Quantile Regression, Technical Report 300, Centre for Economic and International Studies, University of Rome Tor Vergata, Rome, Italy, 2013.\n\nLallemant, D., Kiremidjian, A., and Burton, H.: Statistical procedures for developing earthquake damage fragility curves, Earthq. Eng. Struct. Dynam., 44, 1373–1389, 2015.\n\nLin, T., Haselton, C. B., and Baker, J. W.: Conditional spectrum-based ground motion selec-tion. Part I: hazard consistency for risk-based assessments, Earthq. Eng. Struct. Dynam., 42, 1847–1865, 2013.\n\nMai, C., Konakli, K., and Sudret, B.: Seismic fragility curves for structures using non-parametric representations, Front. Struct. Civ. Eng., 11, 169–186, 2017.\n\nMarra, G. and Wood, S. N.: Practical variable selection for generalized additive models, Comput. Stat. Data Anal., 55, 2372–2387, 2011.\n\nMcKay, M., Beckman, R., and Conover, W.: A comparison of three methods for selecting values of input variables in the analysis of output from a computer code, Technometrics, 21, 239–245, 1979.\n\nPanagoulia, D., Economou, P., and Caroni, C.: Stationary and nonstationary generalized extreme value modelling of extreme precipitation over a mountainous area under climate change, Environmetrics, 25, 29–43, 2014.\n\nPEER: PEER NGA-West2 Database, Pacific Earthquake Engineering Research Center, available at: https://ngawest2.berkeley.edu (last access: 2 December 2019), 2013.\n\nPousse, G., Bonilla, L. F., Cotton, F., and Margerin L.: Nonstationary stochastic simulation of strong ground motion time histories including natural variability: application to the K-net Japanese database, Bull. Seismol. Soc. Am., 96, 2103–2117, 2006.\n\nQuilligan, A., O'Connor, A., and Pakrashi, V.: Fragility analysis of steel and concrete wind turbine towers, Eng. Struct., 36, 270–282, 2012.\n\nRahni, N., Lancieri, M., Clement, C., Nahas, G., Clement, J., Vivan, L., Guigueno, Y., and Raimond, E.: An original approach to derived seismic fragility curves – Application to a PWR main steam line, in: Proceedings of the International Topical Meeting on Probabilistic Safety Assessment and Analysis (PSA2017), Pittsburgh, PA, 2017.\n\nRigby, R. A. and Stasinopoulos, D. M.: Generalized additive models for location, scale and shape, J. Roy. Stat. Soc. Ser. 
C, 54, 507–554, 2005.\n\nRota, M., Penna, A., and Magenes, G.: A methodology for deriving analytical fragility curves for masonry buildings based on stochastic nonlinear analyses, Eng. Struct., 32, 1312–1323, 2010.\n\nSalas, J. D. and Obeysekera, J.: Revisiting the concepts of return period and risk for nonstationary hydrologic extreme events, J. Hydrol. Eng., 19, 554–568, 2014.\n\nSchwarz, G.: Estimating the Dimension of a Model, Ann. Stat., 6, 461–464, 1978.\n\nShinozuka, M., Feng, M., Lee, J., and Naganuma, T.: Statistical analysis of fragility curves, J. Eng. Mech., 126, 1224–1231, 2000.\n\nTweedie, M. C. K.: An index which distinguishes between some important exponential families. Statistics: Applications and New Directions, in: Proceedings of the Indian Statistical Institute Golden Jubilee International Conference, edited by: Ghosh, J. K. and Roy, J., Indian Statistical Institute, Calcutta, 579–604, 1984.\n\nUmlauf, N., Klein, N., and Zeileis, A.: BAMLSS: bayesian additive models for location, scale, and shape (and beyond), J. Comput. Graph. Stat., 27, 612–627, 2018.\n\nVamvatsikos, D. and Cornell, C. A.: Incremental dynamic analysis, Earthq. Eng. Struct. Dynam., 31, 491–514, 2002.\n\nWang, Z., Pedroni, N., Zentner, I., and Zio, E.: Seismic fragility analysis with artificial neural networks: Application to nuclear power plant equipment, Eng. Struct., 162, 213–225, 2018.\n\nWoessner, J., Danciu, L., Kaestli, P., and Monelli, D.: Database of seismogenic zones, Mmax, earthquake activity rates, ground motion attenuation relations and associated logic trees, FP7 SHARE Deliverable Report D6.6, available at: http://www.share-eu.org/node/52.html (last access: 4 April 2020), 2013.\n\nWong, T. E.: An Integration and Assessment of Nonstationary Storm Surge Statistical Behavior by Bayesian Model Averaging, Adv. Stat. Climatol. Meteorol. Oceanogr., 4, 53–63, 2018.\n\nWood, S. N.: Thin-plate regression splines, J. Roy. Stat. Soc. B, 65, 95–114, 2003.\n\nWood, S. N.: Generalized Additive Models: An Introduction with R, 2nd Edn., Chapman and Hall/CRC, Boca Raton, Florida, 2017.\n\nWood, S. N., Pya, N., and Säfken, B.: Smoothing parameter and model selection for general smooth models, J. Am. Stat. Assoc., 111, 1548–1563, 2016.\n\nZentner, I.: A general framework for the estimation of analytical fragility functions based on multivariate probability distributions, Struct. Safe., 64, 54–61, 2017.\n\nZentner, I., Gündel, M., and Bonfils, N.: Fragility analysis methods: Review of existing approaches and application, Nucl. Eng. Design, 323, 245–258, 2017.\n\nhttps://www.globalquakemodel.org/ (last access: 2 December 2019)."
] | [
null,
"https://nhess.copernicus.org/articles/20/1267/2020/nhess-20-1267-2020-avatar-thumb150.png",
null,
"https://nhess.copernicus.org/articles/20/1267/2020/nhess-20-1267-2020-f01-thumb.png",
null,
"https://nhess.copernicus.org/articles/20/1267/2020/nhess-20-1267-2020-f02-thumb.png",
null,
"https://nhess.copernicus.org/articles/20/1267/2020/nhess-20-1267-2020-f03-thumb.png",
null,
"https://nhess.copernicus.org/articles/20/1267/2020/nhess-20-1267-2020-t01-thumb.png",
null,
"https://nhess.copernicus.org/articles/20/1267/2020/nhess-20-1267-2020-f04-thumb.png",
null,
"https://nhess.copernicus.org/articles/20/1267/2020/nhess-20-1267-2020-f05-thumb.png",
null,
"https://nhess.copernicus.org/articles/20/1267/2020/nhess-20-1267-2020-f06-thumb.png",
null,
"https://nhess.copernicus.org/articles/20/1267/2020/nhess-20-1267-2020-f07-thumb.png",
null,
"https://nhess.copernicus.org/articles/20/1267/2020/nhess-20-1267-2020-f08-thumb.png",
null,
"https://nhess.copernicus.org/articles/20/1267/2020/nhess-20-1267-2020-f09-thumb.png",
null,
"https://nhess.copernicus.org/articles/20/1267/2020/nhess-20-1267-2020-f10-thumb.png",
null,
"https://nhess.copernicus.org/articles/20/1267/2020/nhess-20-1267-2020-f11-thumb.png",
null,
"https://nhess.copernicus.org/articles/20/1267/2020/nhess-20-1267-2020-f12-thumb.png",
null,
"https://nhess.copernicus.org/articles/20/1267/2020/nhess-20-1267-2020-f13-thumb.png",
null,
"https://nhess.copernicus.org/articles/20/1267/2020/nhess-20-1267-2020-t04-thumb.png",
null,
"https://nhess.copernicus.org/articles/20/1267/2020/nhess-20-1267-2020-f14-thumb.png",
null,
"https://nhess.copernicus.org/articles/20/1267/2020/nhess-20-1267-2020-f15-thumb.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8779206,"math_prob":0.9539403,"size":59916,"snap":"2022-40-2023-06","text_gpt3_token_len":13915,"char_repetition_ratio":0.1528909,"word_repetition_ratio":0.045314327,"special_character_ratio":0.23109019,"punctuation_ratio":0.16966273,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99009514,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36],"im_url_duplicate_count":[null,null,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-31T21:22:21Z\",\"WARC-Record-ID\":\"<urn:uuid:7dbd3577-5e95-47cb-b9cd-59755fafb627>\",\"Content-Length\":\"248230\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:771177b4-b44b-44ee-bb80-6163d836cb85>\",\"WARC-Concurrent-To\":\"<urn:uuid:04401ce1-9dc7-4db3-8cfd-6a0ace90124d>\",\"WARC-IP-Address\":\"81.3.21.103\",\"WARC-Target-URI\":\"https://nhess.copernicus.org/articles/20/1267/2020/\",\"WARC-Payload-Digest\":\"sha1:HS6L3NP6OYFQZLIQGVL6HJ5IPLFMRMTY\",\"WARC-Block-Digest\":\"sha1:VAIXISZYNYLXRIZTEJO6VXESK3PKDJGC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499890.39_warc_CC-MAIN-20230131190543-20230131220543-00342.warc.gz\"}"} |
https://paperity.org/p/75922955/measurement-of-differential-and-integrated-fiducial-cross-sections-for-higgs-boson | [
"# Measurement of differential and integrated fiducial cross sections for Higgs boson production in the four-lepton decay channel in pp collisions at $$\\sqrt{s}=7$$ and 8 TeV\n\nJournal of High Energy Physics, Apr 2016\n\nIntegrated fiducial cross sections for the production of four leptons via the H → 4ℓ decays (ℓ = e, μ) are measured in pp collisions at $$\\sqrt{s}=7$$ and 8TeV. Measurements are performed with data corresponding to integrated luminosities of 5.1 fb−1 at 7TeV, and 19.7 fb−1 at 8 TeV, collected with the CMS experiment at the LHC. Differential cross sections are measured using the 8 TeV data, and are determined as functions of the transverse momentum and rapidity of the four-lepton system, accompanying jet multiplicity, transverse momentum of the leading jet, and difference in rapidity between the Higgs boson candidate and the leading jet. A measurement of the Z → 4ℓ cross section, and its ratio to the H → 4ℓ cross section is also performed. All cross sections are measured within a fiducial phase space defined by the requirements on lepton kinematics and event topology. The integrated H → 4ℓ fiducial cross section is measured to be 0. 56 − 0.44 + 0.67 (stat) − 0.06 + 0.21 (syst) fb at 7 TeV, and 1. 11 − 0.35 + 0.41 (stat) − 0.10 + 0.14 (syst) fb at 8 TeV. The measurements are found to be compatible with theoretical calculations based on the standard model.\n\nThis is a preview of a remote PDF: https://link.springer.com/content/pdf/10.1007%2FJHEP04%282016%29005.pdf\n\nV. Khachatryan, A. M. Sirunyan, A. Tumasyan, W. Adam. Measurement of differential and integrated fiducial cross sections for Higgs boson production in the four-lepton decay channel in pp collisions at $$\\sqrt{s}=7$$ and 8 TeV, Journal of High Energy Physics, 2016, 5, DOI: 10.1007/JHEP04(2016)005",
null,
""
] | [
null,
"https://paperity.org/static/img/logo/logo_on-white_m15-25.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.72320384,"math_prob":0.89540744,"size":101183,"snap":"2019-26-2019-30","text_gpt3_token_len":32410,"char_repetition_ratio":0.15201771,"word_repetition_ratio":0.09423798,"special_character_ratio":0.26760423,"punctuation_ratio":0.28923914,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97512954,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-16T00:02:32Z\",\"WARC-Record-ID\":\"<urn:uuid:c4abb456-e5f0-46ed-9c6f-303b616acb31>\",\"Content-Length\":\"124881\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7c673ce8-6735-47e8-8efe-0bf8d223bb42>\",\"WARC-Concurrent-To\":\"<urn:uuid:200ababa-b6f0-47f9-b271-7095d3171b79>\",\"WARC-IP-Address\":\"88.198.20.149\",\"WARC-Target-URI\":\"https://paperity.org/p/75922955/measurement-of-differential-and-integrated-fiducial-cross-sections-for-higgs-boson\",\"WARC-Payload-Digest\":\"sha1:7FVVKHFXCQFFV4PMBY37YLFRH73UNYGJ\",\"WARC-Block-Digest\":\"sha1:TFDJFUFAGICHZKQ7EQNITNYVRBKRMNBT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195524290.60_warc_CC-MAIN-20190715235156-20190716021156-00064.warc.gz\"}"} |
https://www.anevis-solutions.com/2016/calculate-volatility-correctly/ | [
"",
null,
"Volatility is a highly important component in many different investment strategies, but it is also a measure that is not commonly understood, especially when looking at the calculation in detail. In this article we wanted to review the two different approaches of calculating volatility commonly encountered in the market or when looking at different fund factsheets. Investment returns mostly are calculated via a discrete or continuous approach, which will yield different risk and return statistics, based upon each approach that we see. One of the more commonly used approaches in the market is with volatility calculated based on discrete returns, thus",
null,
"For this case, we will show that you have an inaccurate volatility output. We hope you will take some time and read through this article, to be better familiar with volatility calculations. Calculating volatility is not necessarily complex, but doing so without a full awareness of the underlying formulas and assumptions will run the risk of an inaccurate risk reporting for your investment strategy.\n\nVolatility Calculation – the correct way using continuous returns\n\nVolatility is used as a measure of dispersion in asset returns. Thus, it describes the risk attached to an observed financial instrument and is equivalent to the standard deviation calculation well known from statistics. To understand how to calculate volatility correctly and why the commonly used procedure using discrete returns is inaccurate we first need to clarify some basics.\n\nStatistical basics\n\nLet’s assume",
null,
"to be a one-dimensional discrete random variable taking values in",
null,
"with",
null,
"the probability density function and",
null,
"the distribution function.",
null,
"will describe the single-period continuous return of our financial asset and",
null,
"the potential values",
null,
"might realize. The information about the probability, that",
null,
"realizes",
null,
", is given by",
null,
". The Variance of",
null,
"is defined as the expected quadratic difference of the random variable’s realizations and the expected value of the random variable:",
null,
"As the expected value of a discrete variable is the sum of all realizations times the probabiliy of this realization we get",
null,
"with",
null,
"the expected value of the random variable",
null,
". The standard deviation is derived by taking the square root of the variance, thus",
null,
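The formula images of the original article are not preserved in this dump (the corresponding image entries are only lazy-load placeholders), so the definitions referenced above are restated here in LaTeX. This is a reconstruction from the surrounding text, with $p(x_i)$ the probability that $X$ realizes $x_i$:

```latex
\operatorname{Var}(X) = \mathbb{E}\left[(X - \mathbb{E}[X])^2\right]
                      = \sum_{i} p(x_i)\,\bigl(x_i - \mathbb{E}[X]\bigr)^2 ,
\qquad
\sigma(X) = \sqrt{\operatorname{Var}(X)} .
```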
"Application on a financial asset\n\nWhen evaluating financial assets we do not have the luxury of knowing the random variable",
null,
"representing how a single-period return is defined, thus we do not know anything about the potential values",
null,
"the variable",
null,
"might realize nor do we know what the probability of the realization of those values",
null,
"is. But we can look at the already realized returns we saw in the past. With this we are then able to estimate from those observations how",
null,
"behaves, e.g. by estimating statistics like the standard deviation of",
null,
".\n\nNow, let’s assume we look at a financial asset with prices",
null,
"at times",
null,
", thus",
null,
". We assume that the continuous returns",
null,
"all are realizations of a series of identically distributed random variables",
null,
", thus",
null,
"is the realization of",
null,
"",
null,
"is the realization of",
null,
"and so on. To characterize the distribution of",
null,
", which is the same as all the other distributions of",
null,
"as they are identically distributed, we can now look at the realizations",
null,
"The historical probability in this setting for each realization equals",
null,
"as we have",
null,
"continuous returns.\n\nCalculation of single-period volatility\n\nTo calculate the standard deviation we first need to calculate the expected value. As continuous returns are additive (proofed in our article about properties of linear, discrete and continuous returns) we can use the arithmetical average as an estimation for the expected value. So we calculate in a first step",
null,
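The "first step" formula is again only available as a missing image; given that the text uses the arithmetical average of the $T$ observed continuous returns $r_1, \dots, r_T$ as the estimator for the expected value, it is presumably:

```latex
\hat{\mu} = \bar{r} = \frac{1}{T}\sum_{t=1}^{T} r_t .
```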
"The variance of",
null,
"is now easily derived using the calculated expected value and the variance formula:",
null,
"with",
null,
"for all",
null,
"as the historical probability for each realization equals",
null,
"as written above, thus",
null,
"Using this we can calculate the standard deviation of the random variable",
null,
"or equivalentely the “volatility” of the single-period return by",
null,
"There is a lot of debate among statisticians if the above estimation for the variance should be used or if it should be amended by the Bessel’s correction factor",
null,
"for an unbiased estimator. The respective unbiased estimation for the variance would look like this:",
null,
"As this discussion would go beyond the scope of this article at this point we will leave it to the reader to decide what estimation measure to use.\n\nAggregating single-period volatility to multi-period volatility\n\nLet’s assume we calculated the volatility based on daily continuous returns, thus",
null,
"characterizes the daily volatility. To be able to annualize this volatility we use another assumption and the consequent property of the variance.\n\nGiven that the identically distributed random variables",
null,
"are also statistically independent of each other, the following holds:",
null,
"Given the additivity of continuous returns we know that a year’s return (let’s assume a year has 252 trading days) described by the random variable",
null,
"can be written as the sum of 252 random variables describing the daily returns,",
null,
". Thus we have for the variance of the yearly continuous return using that",
null,
", as the random variables describing the daily returns are identically distributed,",
null,
"And thus",
null,
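The concluding formulas are likewise missing images; from the stated independence and identical distribution of the daily returns, the intended result is presumably the usual square-root-of-time scaling:

```latex
\operatorname{Var}(R_{\text{year}})
  = \operatorname{Var}\Bigl(\sum_{t=1}^{252} R_t\Bigr)
  = 252\,\operatorname{Var}(R_1) ,
\qquad
\sigma_{\text{year}} = \sqrt{252}\;\sigma_{\text{day}} .
```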
"To summarize\n\nUnder the assumptions that the\n\n• single-period returns are identically distributed and\n\nthe single-period volatility can be calculated based on",
null,
"observed single-period returns using",
null,
"",
null,
"",
null,
"Single-period volatiliy can be aggregated under the additional assumption that the\n\n• single-period returns are independent\n\nto a multi-period volatility consisting of",
null,
"single-period time frames using",
null,
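To make the summarized recipe concrete, here is a minimal Python sketch of the calculation described above: continuous (log) returns, single-period volatility, and square-root-of-time aggregation. The function and variable names are assumptions for illustration, and the plain 1/T estimator is used by default, with the Bessel-corrected variant mentioned in the article available as an option.

```python
import math

def continuous_returns(prices):
    """Continuous (log) returns r_t = ln(P_t / P_{t-1}) from a price series."""
    return [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]

def single_period_volatility(returns, bessel=False):
    """Standard deviation of continuous returns.

    bessel=False uses the 1/T estimator from the article;
    bessel=True applies the T/(T-1) correction mentioned as an alternative.
    """
    t = len(returns)
    mean = sum(returns) / t          # arithmetic mean is meaningful for continuous returns
    denom = t - 1 if bessel else t
    variance = sum((r - mean) ** 2 for r in returns) / denom
    return math.sqrt(variance)

def aggregate_volatility(sigma_single, n_periods=252):
    """Scale a single-period volatility to n_periods (e.g. 252 trading days -> annual)."""
    return math.sqrt(n_periods) * sigma_single

# Example with made-up daily prices:
prices = [100.0, 101.2, 100.7, 102.3, 101.9]
daily_sigma = single_period_volatility(continuous_returns(prices))
print(daily_sigma, aggregate_volatility(daily_sigma))
```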
"Why calculating volatility using discrete returns is not meaningful\n\nOf course, all of the mathematical basics mentioned above are still true when we start working with discrete returns. The random variable",
null,
"now describes the single-period discrete return of our financial asset and not the continuous return.\n\nPitfall using discrete returns for calculating single-period volatility\n\nAs detailed above, the expected value of our random variable needs to be calculated based on the set of discrete returns. As shown in properties of linear, discrete and continuous returns, discrete returns are not additive but multiplicative. So using the arithmetical average as an estimation for the expected value is not appropriate, as applying arithmetic operations on geometric data like discrete returns would have no meaningful interpretation. An estimation for the expected value of discrete returns we could interpret financially would be to use the geometrical average:",
null,
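The geometrical-average formula itself is also only an image in the original page; the standard expression for discrete (simple) returns, which is presumably what is meant here, is:

```latex
\bar{r}_{\text{geo}} = \left(\prod_{t=1}^{T}\bigl(1 + r_t\bigr)\right)^{1/T} - 1 .
```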
"However, although we can interpret this, it underestimates the expected value of the discrete returns.\n\nSo when trying to calculate volatility using discrete returns you must choose between the lesser of two evils – either you take a poor estimation for the expected value (geometrical average) or you risk calculating something which can not be interpreted and thus is not meaningful (arithmetical average)."
] | [
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%201024%20576'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20217%2041'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2016%2012'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2086%2019'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2041%2018'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2045%2018'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2016%2012'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2086%2019'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2016%2012'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2016%2012'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2016%2011'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2041%2018'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2016%2012'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20220%2024'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20291%2040'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2040%2018'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2016%2012'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20152%2023'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2016%2012'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2086%2019'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2016%2012'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2041%2018'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2016%2012'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2016%2012'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2017%2015'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20120%2019'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20104%2016'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20322%2045'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2083%2016'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2076%2018'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2023%2016'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2076%2018'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2023%2015'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2023%2016'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2083%2016'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20232%2018'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2031%2018'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2014%2012'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20234%2056'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2023%2016'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20461%2056'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20187%2019'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%207%2012'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2031%2018'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20354%20122'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2023%2016'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20352%2069'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2081%2019'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20398%20122'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2043%2018'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20113%2016'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20236%2057'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2042%2015'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20136%2025'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20307%2018'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20357%20122'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20354%2023'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2014%2012'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20195%2068'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20116%2056'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20281%2045'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2016%208'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20185%2022'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2016%2012'%3E%3C/svg%3E",
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20338%2065'%3E%3C/svg%3E",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.90976065,"math_prob":0.9900983,"size":7565,"snap":"2021-43-2021-49","text_gpt3_token_len":1366,"char_repetition_ratio":0.15513821,"word_repetition_ratio":0.06297872,"special_character_ratio":0.17303371,"punctuation_ratio":0.065149136,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9992948,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-23T04:22:40Z\",\"WARC-Record-ID\":\"<urn:uuid:70e9f7b5-8a32-49bd-911e-d04d721b0bb9>\",\"Content-Length\":\"100297\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a39aeb47-d053-47e1-80fc-bef364f2df66>\",\"WARC-Concurrent-To\":\"<urn:uuid:9ca0763b-6534-409a-b6ca-47cd9507cc9c>\",\"WARC-IP-Address\":\"34.90.26.83\",\"WARC-Target-URI\":\"https://www.anevis-solutions.com/2016/calculate-volatility-correctly/\",\"WARC-Payload-Digest\":\"sha1:3RNVBLHC7MPSKKSRBO3BKDSRPYU3UTK3\",\"WARC-Block-Digest\":\"sha1:JA6H5DCK46KKCBRDAFBRZJNUNLTGOGS2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585561.4_warc_CC-MAIN-20211023033857-20211023063857-00333.warc.gz\"}"} |
https://rdrr.io/cran/transmem/man/transPlot.html | [
"# transPlot: Plots transport profiles of single run experiments In transmem: Treatment of Membrane-Transport Data\n\n## Description\n\nGiven the transport complete information of the interest species and, optionally, secondary and tertiary species, the function plots transport profiles including (if given) non-linear regression models that can be obtained using `transTrend`.\n\n## Usage\n\n ```1 2 3 4 5``` ```transPlot(trans, trend = NULL, secondary = NULL, tertiary = NULL, sec.trend = \"spline\", lin.secon = FALSE, span = 0.75, legend = FALSE, xlab = \"Time (h)\", ylab = expression(Phi), xlim = NULL, ylim = NULL, xbreaks = NULL, ybreaks = NULL, size = 2.8, bw = FALSE, srs = NULL, plot = TRUE) ```\n\n## Arguments\n\n `trans` Data frame with the complete transport information of interest species. Must be generated using `conc2frac`. This is the only non-optional parameter. `trend` Non-linear regression model of the main transport profile generated using `transTrend`. `secondary` Secondary species transport data frame (see `conc2frac`). `tertiary` Tertiaty species transport data frame (see `conc2frac`). `sec.trend` Type of trend line to be used for secondary and tertiary species data. Default is `'spline'` but `'linear'`, `'loess'` and `'logarithmic'` are also allowed. `lin.secon` Deprecated. Use `sec.trend = 'linear'` instead. `span` Amount of smoothing when `sec.tred = 'loess'`. Is a value between 0 and 1. Default is 0.75 `legend` Logical. If `FALSE`, the default, the legend is not included. `xlab` Label to be used for x axis. Text and expression allowed. `ylab` Label to be used for y axis. Text and expression allowed. `xlim` Numeric vector of limits for X-axis. `ylim` Numeric vector of limits for X-axis. `xbreaks` Numeric vector of x-axis breaks. `ybreaks` Numeric vector of x-axis breaks. `size` Size used for points in the plot. `bw` Logical, if `FALSE`, the default, a color version of the plot is given. If a black and white version is required, it must be set to `TRUE`. `srs` Deprecated. `plot` Logical. If `TRUE`, the default, the plot is printed in the current graphical device.\n\n## Details\n\nMost `transmem` graphical representations are made using the package `ggplot2` so the function returns a ggplot2 object that can be assigned to a variable for further modification.\n\nThis function has a version that uses replicated experiments and may be useful to illustrate repeateability. For more information see `transPlotWR`.\n\n## Value\n\nPlot of the transport profile considering all provided species.\n\n## Author(s)\n\nCristhian Paredes, [email protected]\n\nEduardo Rodriguez de San Miguel, [email protected]\n\n## References\n\nWickham H (2016). ggplot2: Elegant Graphics for Data Analysis. Springer-Verlag New York. ISBN 978-3-319-24277-4, https://ggplot2.tidyverse.org.\n\n## Examples\n\n ```1 2 3 4 5 6 7 8``` ``` data(seawaterLiNaK) trend <- transTrend(trans = seawaterLiNaK\\$Lithium.1, model = 'paredes') transPlot(trans = seawaterLiNaK\\$Lithium.1, trend = trend, secondary = seawaterLiNaK\\$Sodium.1, tertiary = seawaterLiNaK\\$Potassium.1) transPlot(trans = seawaterLiNaK\\$Lithium.1, trend = trend, secondary = seawaterLiNaK\\$Sodium.1, tertiary = seawaterLiNaK\\$Potassium.1, bw = TRUE) ```\n\ntransmem documentation built on July 1, 2020, 10:38 p.m."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.60203266,"math_prob":0.93177485,"size":2352,"snap":"2022-05-2022-21","text_gpt3_token_len":636,"char_repetition_ratio":0.122231685,"word_repetition_ratio":0.11898017,"special_character_ratio":0.23129252,"punctuation_ratio":0.14250614,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96208555,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-23T01:30:57Z\",\"WARC-Record-ID\":\"<urn:uuid:63576171-eef4-44f6-99a4-2e67b07b98bb>\",\"Content-Length\":\"48453\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:afcb58ba-3ea5-44c9-835c-2472ce7e3950>\",\"WARC-Concurrent-To\":\"<urn:uuid:d4ec9f36-2d3f-433a-93dc-8bb70a905842>\",\"WARC-IP-Address\":\"51.81.83.12\",\"WARC-Target-URI\":\"https://rdrr.io/cran/transmem/man/transPlot.html\",\"WARC-Payload-Digest\":\"sha1:5UOBAXSWTYMBIWSKKIOGPEMHLEHD4Q73\",\"WARC-Block-Digest\":\"sha1:JTRHZJCVQG2QTH4X6KMKMN4CFBTYSAMJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662552994.41_warc_CC-MAIN-20220523011006-20220523041006-00433.warc.gz\"}"} |
https://www.daniweb.com/programming/software-development/threads/365968/how-to-access-array-in-different-classes | [
"could someone plz help.. i'm stuck in one part where u have to access the array form the main class(random) ive juss posted the two classes i have doubt in.. the question goes soemthin like this:\n\nQ:Make a java class file for function that can do addition, then make a new java class file again for random numbers(which will allow your program to make random equations for addition).Make a new java class file for menu where the user decides what grade(from 1 to 6) they study in and if u type something wrong like a character or anything, the program shouldnt shut down but print out “invalid type again” and then the user decided how many equations they want to solve; 10, 20, 30 or 50 . Your math program should ask to answer the equations. In\n50.\n100\n100\n\nWhen the user types in the answer,your program should state wether the answer was correct or wrong.In the end, it should tell how many eqautions the user got correct and how many wrong.\n\n``````import java.util.Random;\n\npublic class Main {\npublic static void main(String[] args) {\n\nRandom generator= new Random();\nint num1 = generator.nextInt(20)+1;\nint num2 = generator.nextInt(20)+1;\n\n}\n}// this is the main class``````\n``````import java.util.Scanner;\npublic void add(int num1, int num2){\n\nScanner input1=new Scanner(System.in);\nint choice2=input1.nextInt();\n\nswitch(choice2){\n\ncase 10:\n\nint[] numbers=new int ;\n\nfor (int i=0;i<numbers.length; i++){\n\nnumbers[i]=generator.nextInt(10)+1;// this is the area of problem, the java gives error in recognizin 'generator.nextInt(10)+1' since random is in the other class.\n\nSystem.out.println(\" wat is\"+ num1 +\"*\"+num2);\n\nint guess=input1.nextInt();\n\n}\nelse{\n\n}\n\n}\nbreak;\n}\n}\n\nYou can pass the object you are trying to access to the method you are calling from aObject. E.g. change Main.java to\n\n``aObject.add(num1, num2, generator);``\n\n``public void add(int num1, int num2, Random generator)``\n\nyou may need to return it back if …\n\nits still showin an error (the nulltype error), sionceit doesnt recognizes Random in : public void add( int sum1,int sum2. Random generator);\nmoreover is 'Random' above suppose to be a built in keyword? =S\n\nI don't see the code posted for public void add(int sum1, int sum2, Random generator) but …\n\n## All 5 Replies",
null,
"You can pass the object you are trying to access to the method you are calling from aObject. E.g. change Main.java to\n\n``aObject.add(num1, num2, generator);``\n\n``public void add(int num1, int num2, Random generator)``\n\nyou may need to return it back if you need to keep the same copy throughout the program.\n\nits still showin an error (the nulltype error), sionceit doesnt recognizes Random in : public void add( int sum1,int sum2. Random generator);\nmoreover is 'Random' above suppose to be a built in keyword? =S\n\nits still showin an error (the nulltype error), sionceit doesnt recognizes Random in : public void add( int sum1,int sum2. Random generator);\nmoreover is 'Random' above suppose to be a built in keyword? =S\n\nI don't see the code posted for public void add(int sum1, int sum2, Random generator) but if I had to guess, your getting a null pointer because you are trying to use an object before it is initialised most likely.\n\nI don't see the code posted for public void add(int sum1, int sum2, Random generator) but if I had to guess, your getting a null pointer because you are trying to use an object before it is initialised most likely.\n\nheres the codes\n\n``````import java.util.Random;\npublic class Main {\n\n/**\n* @param args the command line arguments\n*/\npublic static void main(String[] args) {\n\nRandom generator= new Random();\nmObject.question();\n\nint num1 = generator.nextInt(20)+1;\nint num2 = generator.nextInt(20)+1;\n\n// TODO code application logic here\n}\n\n}\n\n*/\nimport java.util.Scanner;\npublic void add(int num1, int num2, Random generator ){\n\nScanner input1=new Scanner(System.in);\nint choice2=input1.nextInt();\n\nswitch(choice2){\n\ncase 10:\n\nint[] numbers=new int ;\n\nfor ( int i=0;i<numbers.length; i++){\n\nnumbers[i]=generator.nextInt(10)+1;\n\nSystem.out.println(\" wat is\"+ num1 +\"*\"+num2);\n\nint guess=input1.nextInt();\n}\nelse{\n\n}\n\nreturn(Random generator);\n}\nbreak;\n\n}\n}\n}\n``````\n\nBoomBoomF after looking over what you just posted for you addition method in both posts, I'm thinking maybe in general you should rethink about what you are doing and how that method should work. Anywho to getting it working as posted\n\n``return(Random generator);``\n\nIs not going to work. you would need to\n\n``return new Random();``\n\nThis will return a new Random object\n\nor\n\n``return generator;``\n\nThis will return the Random object that you initially passed in.\n\nAnother silly problem is that we are returning a Random object but our return type is void for the add method. Is there some reason for passing this Random object back out of the method? It already exists outside of the method when you originally created it and it hasn't been modified in anyway. Your whole for loop seems to hurt my head as well. You are putting random numbers inside of an array and then you keep asking the same multiplication problem over and over as many times as the length of the array.(num1 and num2 were provided from the outside and don't change at all inside the method). Then you exit the loop after the first iteration because you have a return statement at the end of the loop. Unless your just testing stuff out, this needs to be rethought. Good luck! :)\n\nBe a part of the DaniWeb community\n\nWe're a friendly, industry-focused community of developers, IT pros, digital marketers, and technology enthusiasts learning and sharing knowledge."
] | [
null,
"https://static.daniweb.com/connect/images/anonymous.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8213019,"math_prob":0.8171004,"size":2189,"snap":"2021-43-2021-49","text_gpt3_token_len":539,"char_repetition_ratio":0.11899313,"word_repetition_ratio":0.024096385,"special_character_ratio":0.27409777,"punctuation_ratio":0.15367483,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.961477,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-26T08:27:58Z\",\"WARC-Record-ID\":\"<urn:uuid:c0b8603a-9c12-4b2b-9737-7c6b0b5bd630>\",\"Content-Length\":\"80022\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0e616a97-35d4-4be6-ab44-2881268bbb25>\",\"WARC-Concurrent-To\":\"<urn:uuid:73267704-7a5d-4493-938f-7b7a56dfc6a6>\",\"WARC-IP-Address\":\"172.66.41.5\",\"WARC-Target-URI\":\"https://www.daniweb.com/programming/software-development/threads/365968/how-to-access-array-in-different-classes\",\"WARC-Payload-Digest\":\"sha1:E55ITSE27Y7TWTMMP4OYXEO6GRTUZTH6\",\"WARC-Block-Digest\":\"sha1:BBKZEWMYK4A7ORFISDY4SAH2CA2FRRMF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323587854.13_warc_CC-MAIN-20211026072759-20211026102759-00597.warc.gz\"}"} |
https://rpy2.github.io/doc/v3.2.x/html/robjects_convert.html | [
"# Mapping rpy2 objects to arbitrary python objects¶\n\n## Protocols¶\n\nAt the lower level (`rpy2.rinterface`), the rpy2 objects exposing R objects implement Python protocols to make them feel as natural to a Python programmer as possible. With them they can be passed as arguments to many non-rpy2 functions without the need for conversion.\n\nR vectors are mapped to Python objects implementing the methods `__getitem__()` / `__setitem__()` in the sequence protocol so elements can be accessed easily. They also implement the Python buffer protocol, allowing them be used in `numpy` functions without the need for data copying or conversion.\n\nR functions are mapped to Python objects implementing the `__call__()` so they can be called just as if they were functions.\n\nR environments are mapped to Python objects implementing `__getitem__()` / `__setitem__()` in the mapping protocol so elements can be accessed similarly to in a Python `dict`.\n\nNote\n\nThe rinterface level is quite close to R’s C API and modifying it may quickly results in segfaults.\n\n## Conversion¶\n\nIn its high-level interface `rpy2` is using a conversion system that has the task of convertion objects between the following 3 representations: - lower-level interface to R (`rpy2.rinterface` level), - higher-level interface to R (`rpy2.robjects` level) - other (no `rpy2`) representations\n\nFor example, if one wanted have all Python `tuple` turned into R character vectors (1D arrays of strings) as exposed by rpy2’s low-level interface the function would look like:\n\n```from rpy2.rinterface import StrSexpVector\n\ndef tuple_str(tpl):\nres = StrSexpVector(tpl)\nreturn res\n```\n\n### Converter objects¶\n\nThe class `rpy2.robjects.conversion.Converter` groups such conversion functions into one object.\n\nOur conversion function defined above can then be registered as follows:\n\n```from rpy2.robjects.conversion import Converter\nmy_converter = Converter('my converter')\nmy_converter.py2rpy.register(tuple, tuple_str)\n```\n\nConverter objects are additive, which can be an easy way to create simple combinations of conversion rules. For example, creating a converter that adds the rule above to the default conversion rules is written:\n\n```from rpy2.robjects import default_converter\ndefault_converter + my_converter\n```\n\n### Local conversion rules¶\n\nThe conversion rules can be customized globally (See section Customizing the conversion) or through the use of local converters as context managers. 
The latter is recommended when experimenting or wishing a specific behavior of the conversion system that is limited in time.\n\nWe can use this to example, if we want to change rpy2’s current refusal to handle sequences of unspecified type.\n\nThe following code is throwing an error that rpy2 does not know how to handle Python sequences.\n\n```x = (1, 2, 'c')\n\nfrom rpy2.robjects.packages import importr\nbase = importr('base')\n\n# error here:\nres = base.paste(x, collapse=\"-\")\n```\n\nThis can be changed by using our converter as an addition to the default conversion scheme:\n\n```from rpy2.robjects import default_converter\nfrom rpy2.robjects.conversion import Converter, localconverter\nwith localconverter(default_converter + my_converter) as cv:\nres = base.paste(x, collapse=\"-\")\n```\n\n### `rpy2py()`¶\n\nThe conversion is trying to turn an rpy2 object (either `rpy2.rinterface` or `rpy2.robjects` level, low or high level interface respectively) into a Python object (or an object that is more Python-like than the input object). This method is a generic as implemented in `functools.singledispatch()`.\n\nFor example the optional conversion scheme for `numpy` objects will return numpy arrays whenever possible.\n\nNote\n\nrobjects-level objects are also implicitly rinterface-level objects because of the inheritance relationship in their class definitions, but the reverse is not true. The robjects level is an higher level of abstraction, aiming at simplifying one’s use of R from Python (although at the possible cost of performances).\n\n### `py2rpy()`¶\n\nThe conversion is between (presumably) non-rpy2 objects and rpy2 objects. The result tend to be a lower-level interface object (`rpy2.rinterface`) because this conversion is often the step before an object is passed to R.\n\nThis method is a generic as implemented in `functools.singledispatch()` (with Python 2, `singledispatch.singledispatch()`).\n\n### Customizing the conversion¶\n\nAs an example, let’s assume that one want to return atomic values whenever an R numerical vector is of length one. This is only a matter of writing a new function rpy2py that handles this, as shown below:\n\n```import rpy2.robjects as robjects\nfrom rpy2.rinterface import SexpVector\n\[email protected](SexpVector)\ndef my_rpy2py(obj):\nif len(obj) == 1:\nobj = obj\nreturn obj\n```\n\nThen we can test it with:\n\n```>>> pi = robjects.r.pi\n>>> type(pi)\n<type 'float'>\n```\n\nAt the time of writing `singledispath()` does not provide a way to unregister. Removing the additional conversion rule without restarting Python is left as an exercise for the reader."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7825349,"math_prob":0.7843177,"size":4892,"snap":"2019-51-2020-05","text_gpt3_token_len":1134,"char_repetition_ratio":0.1698036,"word_repetition_ratio":0.0362117,"special_character_ratio":0.20645952,"punctuation_ratio":0.10332542,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9733935,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-22T01:48:31Z\",\"WARC-Record-ID\":\"<urn:uuid:4345b710-f7bc-4532-a55a-ad7994d4c398>\",\"Content-Length\":\"20678\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3f53c924-9c37-49c2-a079-ee7f81b0dd7f>\",\"WARC-Concurrent-To\":\"<urn:uuid:c47ec8cc-0aad-4866-b98a-0e09db43ccaf>\",\"WARC-IP-Address\":\"185.199.109.153\",\"WARC-Target-URI\":\"https://rpy2.github.io/doc/v3.2.x/html/robjects_convert.html\",\"WARC-Payload-Digest\":\"sha1:EK2KNKLIGOFVKQKOCXX2EEVG3SYTHO4N\",\"WARC-Block-Digest\":\"sha1:XBAHHRECBPZDI7J75N5EWN2WEBIB3NM7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250606269.37_warc_CC-MAIN-20200122012204-20200122041204-00007.warc.gz\"}"} |
https://package.frelm.org/repo/611/28.0.0/Exts.Result | [
"This is an alternative site for discovering Elm packages. You may be looking for the official Elm package site instead.\n\n# Exts.Result\n\nExtensions to the core `Result` library.\n\nmapBoth : (e -> f) -> (a -> b) -> Result e a -> Result f b\n\nApply functions to both sides of a `Result`, transforming the error and ok types.\n\nisOk : Result e a -> Bool\n\nBoolean checks for success/failure.\n\nisErr : Result e a -> Bool\nfromOk : Result e a -> Maybe a\n\nConvert a `Result` to a `Maybe`.\n\nfromErr : Result e a -> Maybe e\nmappend : Result e a -> Result e b -> Result e ( a, b )\n\nMonoidal append - join two Results together as though they were one.\n\neither : (e -> c) -> (a -> c) -> Result e a -> c\n\nCollapse a `Result` down to a single value of a single type.\n\nExample:\n\n`````` case result of\nErr err -> errorView err\nOk value -> okView value\n``````\n\n...is equivalent to:\n\n`````` either errorView okView result\n``````\n``````module Exts.Result exposing\n( mapBoth\n, isOk\n, isErr\n, fromOk\n, fromErr\n, mappend\n, either\n)\n\n{-| Extensions to the core `Result` library.\n\n@docs mapBoth\n@docs isOk\n@docs isErr\n@docs fromOk\n@docs fromErr\n@docs mappend\n@docs either\n\n-}\n\n{-| Apply functions to both sides of a `Result`, transforming the error and ok types.\n-}\nmapBoth : (e -> f) -> (a -> b) -> Result e a -> Result f b\nmapBoth f g r =\ncase r of\nOk x ->\nOk (g x)\n\nErr x ->\nErr (f x)\n\n{-| Boolean checks for success/failure.\n-}\nisOk : Result e a -> Bool\nisOk x =\ncase x of\nOk _ ->\nTrue\n\nErr _ ->\nFalse\n\n{-| -}\nisErr : Result e a -> Bool\nisErr =\nnot << isOk\n\n{-| Convert a `Result` to a `Maybe`.\n-}\nfromOk : Result e a -> Maybe a\nfromOk v =\ncase v of\nOk x ->\nJust x\n\nErr _ ->\nNothing\n\n{-| -}\nfromErr : Result e a -> Maybe e\nfromErr v =\ncase v of\nErr x ->\nJust x\n\nOk _ ->\nNothing\n\n{-| Monoidal append - join two Results together as though they were one.\n-}\nmappend : Result e a -> Result e b -> Result e ( a, b )\nmappend a b =\ncase ( a, b ) of\n( Err x, _ ) ->\nErr x\n\n( _, Err y ) ->\nErr y\n\n( Ok x, Ok y ) ->\nOk ( x, y )\n\n{-| Collapse a `Result` down to a single value of a single type.\n\nExample:\n\ncase result of\nErr err -> errorView err\nOk value -> okView value\n\n...is equivalent to:\n\neither errorView okView result\n\n-}\neither : (e -> c) -> (a -> c) -> Result e a -> c\neither f g r =\ncase r of\nErr x ->\nf x\n\nOk x ->\ng x\n```\n```"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.571812,"math_prob":0.86145884,"size":1798,"snap":"2021-04-2021-17","text_gpt3_token_len":558,"char_repetition_ratio":0.14492753,"word_repetition_ratio":0.24812031,"special_character_ratio":0.34149054,"punctuation_ratio":0.12154696,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9847983,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-22T16:09:59Z\",\"WARC-Record-ID\":\"<urn:uuid:55cfb0d4-3401-4efa-9b37-3fb34a2c14f8>\",\"Content-Length\":\"24488\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:29025ad7-694f-4595-a1ba-d7253c1a0e1a>\",\"WARC-Concurrent-To\":\"<urn:uuid:79056600-cf2a-4012-a2ab-c057b54aabe3>\",\"WARC-IP-Address\":\"54.85.157.136\",\"WARC-Target-URI\":\"https://package.frelm.org/repo/611/28.0.0/Exts.Result\",\"WARC-Payload-Digest\":\"sha1:QNMCPT4JIGFM3Z57EXG2ECZXF7MTEDTX\",\"WARC-Block-Digest\":\"sha1:IXKC37SVAO3UZCP6Q25LC4XKFI3IQJXY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703530835.37_warc_CC-MAIN-20210122144404-20210122174404-00115.warc.gz\"}"} |
https://hoven.in/ncert-phy-xi-ch-2/q17-ch2-phy.html | [
"# (solved)Question 2.17 of NCERT Class XI Physics Chapter 2\n\nOne mole of an ideal gas at standard temperature and pressure occupies 22.4 L (molar volume). What is the ratio of molar volume to the atomic volume of a mole of hydrogen ? (Take the size of hydrogen molecule to be about 1 Å). Why is this ratio so large ?\n(Rev. 03-Aug-2022)\n\n## Categories | About Hoven's Blog\n\n,\n\n### Question 2.17 NCERT Class XI Physics\n\nOne mole of an ideal gas at standard temperature and pressure occupies 22.4 L (molar volume). What is the ratio of molar volume to the atomic volume of a mole of hydrogen ? (Take the size of hydrogen molecule to be about 1 Å). Why is this ratio so large ?\n\n### Physical Concept and Assumptions\n\nAssumption 1: hydrogen is an ideal gas so we can use 22.4 liters as the molar volume.\n\nAssumption 2: each hydrogen molecule is a rigid sphere of diameter 1 Å angstorm.\n\nForces of attraction between two hydrogen atoms are small so they spread as a gas at STP [273.15 K and 100 kPa]. As a result there is a lot of empty space between two atoms of the gas.\n\nWhat we have to do: We have to compare the sum of volumes of all hydrogen atoms [i.e., if they stay together as a solid] to the volume of the container 22.4 L.\n\n### Video Explanation\n\nPlease watch this youtube video for a quick explanation of the solution:\n\n### Solution in Detail\n\n$\\displaystyle 1 \\text{ liter} = 10^{-3} \\:m^3$\n\n$\\displaystyle \\therefore \\text{molar volume} = 22.4 \\times 10^{-3}\\:m^3$\n\nVol. of 1 atom $\\displaystyle = \\frac 16 \\pi D^3 = \\frac 16 \\pi (10^{-10})^3$\n\nVol. of 1 mol atom $\\displaystyle = 6 \\times 10^{23} \\times \\frac 16 \\pi (10^{-10})^3$\n\n\\begin{aligned} \\frac{\\text{molar vol.}}{\\text{atomic vol}} &= \\frac{22.4 \\times 10^{-3}}{6 \\times 10^{23} \\times \\displaystyle \\frac 16 \\pi (10^{-10})^3}\\\\ \\\\ &\\approx 10^{4} \\text{(see video)} \\:\\underline{Ans} \\end{aligned}\n\nWhy is this ratio so large? Ans: because force of attraction between two hydrogen atoms is small so they spread out as a gas."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.78465205,"math_prob":0.99369794,"size":1786,"snap":"2022-27-2022-33","text_gpt3_token_len":523,"char_repetition_ratio":0.116161615,"word_repetition_ratio":0.020408163,"special_character_ratio":0.31410974,"punctuation_ratio":0.09641873,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9991493,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-09T14:49:41Z\",\"WARC-Record-ID\":\"<urn:uuid:aea15443-d04c-463b-9e40-f1313323b3dd>\",\"Content-Length\":\"27774\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e08ed63e-439f-49c0-b413-372613c116ca>\",\"WARC-Concurrent-To\":\"<urn:uuid:d40cf6b1-0929-4e9f-a735-8831cb90e1ab>\",\"WARC-IP-Address\":\"216.239.32.21\",\"WARC-Target-URI\":\"https://hoven.in/ncert-phy-xi-ch-2/q17-ch2-phy.html\",\"WARC-Payload-Digest\":\"sha1:K7YS2VCUV2DUCSBFOHHSLEQFAGYVK7CP\",\"WARC-Block-Digest\":\"sha1:DN3NHUJ52TFV5XSTIGZFUBMWWFP6A335\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882570977.50_warc_CC-MAIN-20220809124724-20220809154724-00546.warc.gz\"}"} |
https://www.pcibex.net/forums/reply/7951/ | [
"# Reply To: html layout/event times/selector vs. scale/ compatibility\n\nPennController for IBEX Forums Support html layout/event times/selector vs. scale/ compatibility Reply To: html layout/event times/selector vs. scale/ compatibility\n\n#7951\n\nHi,\n\nI’m not sure I’m following everything, but `and` does not mean “and do something else,” it is used to write a conjunctive test: reference\n\nThe commands `success` and `failure` accept subsequences of commands, separated by commas just like the main sequence of commands, eg:\n\n```.test.selected({ V: getImage(\"inappropriate\"), B: getImage(\"infelicitous\"), N: getImage(\"appropriate\") }[variable.Right_Key])\n.success(\ngetVar(\"exp_score\").set(s_exp=>s_exp+1)\n,\ngetVar(\"item_score\").set(sc_i=>sc_i+1)\n)```\n\nAnd yes, it is possible to have multiple `test` commands on the same element:\n\n```.test.selected({ V: getImage(\"inappropriate\"), B: getImage(\"infelicitous\"), N: getImage(\"appropriate\") }[variable.Right_Key])\n.success(\ngetVar(\"exp_score\").set(s_exp=>s_exp+1)\n,\ngetVar(\"item_score\").set(sc_i=>sc_i+1)\n)\n.test.selected(getImage(\"inappropriate\")).success( getVar(\"v_score\").set( v=> v+1 ) )\n.test.selected(getImage(\"infelicitous\")).success( getVar(\"b_score\").set( v=> v+1 ) )\n.test.selected(getImage(\"appropriate\")).success( getVar(\"n_score\").set( v=> v+1 ) )```\n\nNote that in this example I assume you have created the Var elements named v_score, b_score and n_score earlier\n\nFeel free to share a link to your experiment with me if you’d like more assistance, so I can see the code as is, in context\n\nJeremy"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6130835,"math_prob":0.4761361,"size":1455,"snap":"2023-40-2023-50","text_gpt3_token_len":382,"char_repetition_ratio":0.1474845,"word_repetition_ratio":0.13095239,"special_character_ratio":0.26872852,"punctuation_ratio":0.19101124,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9671459,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-01T13:20:16Z\",\"WARC-Record-ID\":\"<urn:uuid:9dd11724-3cc4-4585-ade2-696527075647>\",\"Content-Length\":\"66793\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b86e1bc4-51e4-4e43-9cf4-21d7ad9b5d52>\",\"WARC-Concurrent-To\":\"<urn:uuid:11a19204-46a3-41b1-b476-6b2d79051ce1>\",\"WARC-IP-Address\":\"45.33.65.35\",\"WARC-Target-URI\":\"https://www.pcibex.net/forums/reply/7951/\",\"WARC-Payload-Digest\":\"sha1:UUIQ4DYXXJJQYXQ6BEGEY6IBFP6BKQEF\",\"WARC-Block-Digest\":\"sha1:NYMJYLSLUKXJKZLVHA2YDPGAQGIS4TQU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100287.49_warc_CC-MAIN-20231201120231-20231201150231-00441.warc.gz\"}"} |
https://ask.sagemath.org/question/57703/how-can-i-manipulate-a-multiplicative-group-of-zmodn/ | [
"# how can I manipulate a multiplicative group of Zmod(n)\n\nHi,\n\nI want to create the multiplicative group $(\\mathbb{Z}/7\\mathbb{Z})^*=\\{1,2,3,4,5,6\\}$.\n\nI did these steps:\n\nsage: n = 7\nsage: Zn = Zmod(n)\nsage: G = Zn.unit_group()\nsage: list(G)\n[1, f, f^2, f^3, f^4, f^5]\n\n\nThen I want to create the subgroups generated by $2=f^2$ which is $\\{1,2,4\\}$.\n\nI did the following steps:\n\nsage: f = G.gen()\nsage: H = G.subgroup([f^2])\nsage: list(H)\n[1, f, f^2]\n\n\nMy problem is when I did\n\nsage: Zn(f)\n3\n\n\nBut here f needs to be 2. How can I solve this? Replace f by value?\n\nedit retag close merge delete\n\n1\n\nFor those interested, there is a parallel discussion on that topic on sage-devel : https://groups.google.com/g/sage-deve...\n\nSort by » oldest newest most voted\n\n### Going through the group of units\n\nThis implementation suffers from the defect that\n\n• elements of H have G as their parent\n• but they display in terms of a generator of H\n\nHere is a slightly convoluted way to work around this.\n\nDefine G and H:\n\nsage: n = 7\nsage: Zn = Zmod(n)\nsage: G = Zn.unit_group()\nsage: f = G.gen()\nsage: H = G.subgroup([f^2])\n\n\nList the elements of H (this displays using a different f):\n\nsage: Hlist = list(H)\nsage: Hlist\n[1, f, f^2]\n\n\nList the elements of H expressed in G:\n\nsage: HlistG = list(prod((a^b for a, b in zip(H.gens(), h.list())), G.one()) for h in H)\nsage: HlistG\n[1, f^2, f^4]\n\n\nGet their values in Zn:\n\nsage: HlistZn = [h.value() for h in HlistG]\nsage: HlistZn\n[1, 2, 4]\n\n\nI opened a ticket to make this happen more naturally:\n\nThe cyclic ring also has a method multiplicative_subgroups.\n\nThat method lists generating tuples for its multiplicative subgroups:\n\nsage: n = 7\nsage: Zn = Zmod(n)\nsage: Sub = Zn.multiplicative_subgroups()\nsage: Sub\n((3,), (2,), (6,), ())\n\n\nSadly they do not give a hold on the subgroups as such.\n\nsage: H = Sub\nsage: H\n(2,)\nsage: parent(H)\n<class 'tuple'>\n\n\nNeither do the generators for these subgroups have these subgroups as parents.\n\nInstead, they are simply elements of the initial cyclic ring.\n\nsage: h = H\nsage: parent(h)\nRing of integers modulo 7\n\nmore"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7845795,"math_prob":0.9410689,"size":1522,"snap":"2022-40-2023-06","text_gpt3_token_len":475,"char_repetition_ratio":0.1521739,"word_repetition_ratio":0.04597701,"special_character_ratio":0.29106438,"punctuation_ratio":0.19130434,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9967857,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-30T12:04:05Z\",\"WARC-Record-ID\":\"<urn:uuid:411f540c-dccf-4aab-b6ac-8f96e1032446>\",\"Content-Length\":\"59023\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3072e545-7a87-4a54-af35-48a04e9def40>\",\"WARC-Concurrent-To\":\"<urn:uuid:3d48d039-336b-4626-915b-2dacc35a58bd>\",\"WARC-IP-Address\":\"194.254.163.53\",\"WARC-Target-URI\":\"https://ask.sagemath.org/question/57703/how-can-i-manipulate-a-multiplicative-group-of-zmodn/\",\"WARC-Payload-Digest\":\"sha1:F36T6U6FEMBIYI24K5DPXGAB2MAOJGR3\",\"WARC-Block-Digest\":\"sha1:LLNBUPPIFDVKD6OIQXS23BBDP56UJL4K\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499816.79_warc_CC-MAIN-20230130101912-20230130131912-00251.warc.gz\"}"} |
https://wolferanchquakerdale.org/ | [
"Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch Wolfe Ranch"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.95329446,"math_prob":0.94276536,"size":2719,"snap":"2023-40-2023-50","text_gpt3_token_len":598,"char_repetition_ratio":0.08839779,"word_repetition_ratio":0.0,"special_character_ratio":0.22103715,"punctuation_ratio":0.10128913,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99287885,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-21T21:25:47Z\",\"WARC-Record-ID\":\"<urn:uuid:1c3f9cc9-cff8-48a6-afb2-17a6aafb98b7>\",\"Content-Length\":\"127599\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e708727a-3909-4666-8a56-4df4dc74c2c0>\",\"WARC-Concurrent-To\":\"<urn:uuid:9a9c27cd-1443-40b9-9972-fb3bc8ff5d42>\",\"WARC-IP-Address\":\"13.248.243.5\",\"WARC-Target-URI\":\"https://wolferanchquakerdale.org/\",\"WARC-Payload-Digest\":\"sha1:VDLQAXLDF3D4VG6YYKWALW2N62RNN3H3\",\"WARC-Block-Digest\":\"sha1:HKOIMAW657G2VUTZBKU3PIUAWRRXK4X3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506045.12_warc_CC-MAIN-20230921210007-20230922000007-00571.warc.gz\"}"} |
http://forum.m5stack.com/topic/410/lesson-20-grove-gpio-potentiometr-kingpong-game | [
"Lesson 20. GROVE GPIO + Potentiometr. KingPong Game\n\n• The purpose of this lesson\n\nHi! Today we will learn how to use a variable resistor to control a tennis racket in a Ping Pong game.",
null,
"Figure 1\n\nThis tutorial will teach you how to use the GROVE GPIO port to connect an input device and use this as an example of a game.\n\nShort help\n\nTable tennis (sometimes colloquially mistakenly used the name ping-pong) — Olympic sport, athletic game ball, which uses a racket and a table separated by a net. The game can take place between two opponents or two pairs of opponents. The task of the players is to use the rackets to send the ball to the opponent so that he could not return it back in accordance with the rules. p.s. In our case, the player will play with himself.\n\nMore on Wiki: https://en.wikipedia.org/wiki/Table_tennis\n\nList of components for the lesson\n\n• PC/MAC;\n• M5STACK FIRE;\n• USB-C cable from standard set;\n• variable resistor;\n• colored wires from the standard set (type plug-socket);\n• colored wires not from the standard set (socket type);\n• wire cutters;\n• shrinkage;\n• needle;\n• soldering iron and solder.\n\nLet's start!\n\nStep 1. Do adapter (if there is no factory)\n\nThe m5fire has GROVE GPIO connector, but unfortunately the standard wires will not work. You need an adapter with a plug, if you do not have it, now we will figure out how to fix it.\n\nThe first step is to use a needle to bend the latch on the outlet and remove the contact (Fig. 2).",
null,
"Figure 2\n\nGood. Now take the wire cutters and gently squeeze evenly around the contact. During this process, make a fitting by connecting to any of the four contacts from the black connector (PORT B M5FIRE) (Fig. 3). We need to compress the four pins on the four wires.",
null,
"Figure 3\n\nWhen all four wires are crimped, it is necessary to return the plastic insulators to two of them. Then connect all the wires and press the heat shrink (Fig. 4).",
null,
"Figure 4\n\nOnce the wires are connected to the device, it is not recommended to remove them unnecessarily (Fig. 5 - 6).",
null,
"Figure 5",
null,
"Figure 6\n\nStep 2. Preparation and connection of the variable resistor\n\nSolder two pairs of three wires to the variable resistor (Fig. 7). Right wire connect to +5 V, Central to 36, left to GND (Fig. 1).",
null,
"Figure 7\n\nStep 3. Sketch\n\nThe code of the game is very simple, let's look at an interesting point in this lesson:\n\nFind out the input voltage. Do not forget that the reference voltage is 3.3 V, and we supply 5 V. more than 3.3 V ADC does not understand:\n\n``````int voltage = analogRead (36) * 3400 / 4096;\n``````\n\nConvert to percentages:\n\n``````int percentage = voltage * 100 / 3400;\n``````\n\nCalculate the position of the racket:\n\n``````raket_position = map(percentage, 0, 100, 0, 10);\n``````\n\nEach time we will draw all ten rackets, but only the active will be red, and the remaining white:\n\n``````for (int i = 0; i < 10; i++)\n{\nif (i < 5)\n{\nx = 0;\ny = i * (raket_height + raket_margin);\n}\nelse\n{\nx = screen_width - raket_width;\ny = (9-i) * (raket_height + raket_margin);\n}\ncolor = (i == raket_position) ? RED : WHITE;\nM5.Lcd.fillRect(x, y, raket_width, raket_height, color);\nledBar(0, 0, 0, 12);\nledBar(255, 0, 0, 9 - raket_position);\nif (i == raket_position)\n{\nraket_x = x;\nraket_y = y;\n}\n}\n``````\n\nFinal step\n\nThat's all (Fig. 8). Can be played on a record :)",
null,
"Figure 8"
] | [
null,
"https://pp.userapi.com/c846216/v846216098/12afa3/YOpYgp6lHkY.jpg",
null,
"https://pp.userapi.com/c846216/v846216098/12af61/uDEy3pUy43k.jpg",
null,
"https://pp.userapi.com/c846216/v846216098/12af6b/VI3GavAJwv8.jpg",
null,
"https://pp.userapi.com/c846216/v846216098/12af7b/AeSC2nAZLqA.jpg",
null,
"https://pp.userapi.com/c846216/v846216098/12af85/9-aQPXDlIm4.jpg",
null,
"https://pp.userapi.com/c846216/v846216098/12af8f/6V6-0N37gCY.jpg",
null,
"https://pp.userapi.com/c846216/v846216098/12af99/1-70V_Gr_34.jpg",
null,
"https://pp.userapi.com/c831208/v831208688/1ee9b6/WI8pyLoR4uQ.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.80213356,"math_prob":0.96707034,"size":3347,"snap":"2019-26-2019-30","text_gpt3_token_len":928,"char_repetition_ratio":0.10679031,"word_repetition_ratio":0.003267974,"special_character_ratio":0.2811473,"punctuation_ratio":0.15759312,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9691369,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-16T21:13:11Z\",\"WARC-Record-ID\":\"<urn:uuid:151252ce-dd6c-41f6-8331-f7811a09f082>\",\"Content-Length\":\"38516\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7cf4dbde-e8b8-4534-9d4e-44a98a0a6662>\",\"WARC-Concurrent-To\":\"<urn:uuid:b78992f9-a5f9-4b6d-a4aa-6edf09f4307b>\",\"WARC-IP-Address\":\"112.74.83.120\",\"WARC-Target-URI\":\"http://forum.m5stack.com/topic/410/lesson-20-grove-gpio-potentiometr-kingpong-game\",\"WARC-Payload-Digest\":\"sha1:YA4A2KAXZMJ5FQWQY2LJSQUTKG56K5VR\",\"WARC-Block-Digest\":\"sha1:KEUGFAIQRC4PS4UPOYUZBVEU53LQVIMD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627998298.91_warc_CC-MAIN-20190616202813-20190616224813-00025.warc.gz\"}"} |
http://okmathframework.pbworks.com/w/page/112827220/3-N-3-1 | [
"• If you are citizen of an European Union member nation, you may not use this service unless you are at least 16 years old.\n\n• Dokkio Sidebar (from the makers of PBworks) is a Chrome extension that eliminates the need for endless browser tabs. You can search all your online stuff without any extra effort. And Sidebar was #1 on Product Hunt! Check out what people are saying by clicking here.\n\nView\n\n# 3-N-3-1\n\nlast edited by 4 years, 6 months ago\n\n3.N.3.1 Read and write fractions with words and symbols.\n\nIn a Nutshell\n\nStudents have had the opportunity to explore benchmark fractions (halves, thirds, and fourths) in previous years with a focus on equal portions when dividing a whole. Third graders now move to naming fractions.\n\n## Teacher Actions\n\n• Use models and mathematical representations to justify the name of a fraction.\n\n• Communicate mathematically with peers by naming the fraction using words and symbols.\n\n• Demonstrate mathematical reasoning by evaluating the accuracy of others solutions.\n\n• Pose purposeful questions to help students recall prior knowledge and justify their thinking. Questions may include: How do we read this fraction correctly? When might we read or write fractions in real life? How can we prove the name of this fraction?\n\n• Implement tasks that focus on communication. For example: One student orders a pizza with specific fractions of toppings while the other write down the order and uses a drawing/manipulatives to show the order.\n\n• Use mathematical representations to make connections when reading and writing fractions. For example: Rulers show inches broken down into fourths and halves.\n\n## Misconceptions\n\n• Fractions can be represented in multiple formats, such as written or pictorial form.\n\n• Fractions are observable in the real world.\n\n• A fraction can have the same value, but look distinctively different.\n\n• That the numerator and denominator of a fraction has its value as a whole number.\n\n• They pronounce fractions with their whole numeral name. For example: 3/4 is pronounced three four or three fours.\n\n• The numerator is the bottom number or the denominator is the top number.\n\n• The fraction name is always represented by the shaded portion.\n\n• The denominator is the amount of leftover pieces. For example: Mikey ate 3 out of the 8 pieces of pizza. What was the fraction of pizza Mikey ate? Students may answer 3/5 (three were eaten, 5 were not eaten).\n\nOKMath Framework Introduction"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9041435,"math_prob":0.74063206,"size":2142,"snap":"2022-27-2022-33","text_gpt3_token_len":444,"char_repetition_ratio":0.12768944,"word_repetition_ratio":0.0,"special_character_ratio":0.20354809,"punctuation_ratio":0.10638298,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9538367,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-15T05:59:59Z\",\"WARC-Record-ID\":\"<urn:uuid:acacdd09-5aca-4d85-844a-44d0fcf6bae1>\",\"Content-Length\":\"27215\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b1767067-e8e3-4e27-8f3d-d9ae2c03b38b>\",\"WARC-Concurrent-To\":\"<urn:uuid:5e60652c-412a-4487-b8db-0fb63e690e6d>\",\"WARC-IP-Address\":\"208.96.18.238\",\"WARC-Target-URI\":\"http://okmathframework.pbworks.com/w/page/112827220/3-N-3-1\",\"WARC-Payload-Digest\":\"sha1:I2T6MK2SWYPRY2ZOD32AS4VRA6AN3ZPY\",\"WARC-Block-Digest\":\"sha1:LXR5QOBCYRMK5S5SPPIYLIOWPMMJYBMH\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882572161.46_warc_CC-MAIN-20220815054743-20220815084743-00381.warc.gz\"}"} |
https://eduhawks.com/a-set-of-equations-is-given-below-equation-h-y-x-2-equation-j-y-3x-4-which-of-the-following-steps-can-be-used-to-find-the-solution-to-the-set-of-equations-4-points-a/ | [
"Breaking News\nHome / Assignment Help / A set of equations is given below: Equation H: y = –x + 2 Equation J: y = 3x – 4 Which of the following steps can be used to find the solution to the set of equations? (4 points) A.–x = 3x – 4 B.–x +2 = 3x C.–x + 2 = 3x – 4 D.–x + 1 = 3x + 2\n\n# A set of equations is given below: Equation H: y = –x + 2 Equation J: y = 3x – 4 Which of the following steps can be used to find the solution to the set of equations? (4 points) A.–x = 3x – 4 B.–x +2 = 3x C.–x + 2 = 3x – 4 D.–x + 1 = 3x + 2\n\nA set of equations is given below: Equation H: y = –x + 2 Equation J: y = 3x – 4 Which of the following steps can be used to find the solution to the set of equations? (4 points) A.–x = 3x – 4 B.–x +2 = 3x C.–x + 2 = 3x – 4 D.–x + 1 = 3x + 2",
null,
""
] | [
null,
"https://secure.gravatar.com/avatar/62f591b1388e6c2d487488579c5ceaa7",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8705177,"math_prob":1.0000093,"size":483,"snap":"2020-24-2020-29","text_gpt3_token_len":193,"char_repetition_ratio":0.1920668,"word_repetition_ratio":0.9672131,"special_character_ratio":0.436853,"punctuation_ratio":0.12903225,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000095,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-05-30T09:11:07Z\",\"WARC-Record-ID\":\"<urn:uuid:bb0df973-fed2-4701-8434-842441559613>\",\"Content-Length\":\"73778\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ca89033f-eda2-4e0d-bae8-927a866e29da>\",\"WARC-Concurrent-To\":\"<urn:uuid:61bff257-94bb-429c-9626-80c66fa96d51>\",\"WARC-IP-Address\":\"96.125.171.78\",\"WARC-Target-URI\":\"https://eduhawks.com/a-set-of-equations-is-given-below-equation-h-y-x-2-equation-j-y-3x-4-which-of-the-following-steps-can-be-used-to-find-the-solution-to-the-set-of-equations-4-points-a/\",\"WARC-Payload-Digest\":\"sha1:SLKQQVCMC2F3YQJETSICE3VPVF3NIMI6\",\"WARC-Block-Digest\":\"sha1:4QI5N6I4HEDQ6GCUHD6MAEVVF4TMWRWB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347407667.28_warc_CC-MAIN-20200530071741-20200530101741-00496.warc.gz\"}"} |
https://digitalblackboard.io/physics/high-school/mechanics/ | [
"# Newtonian Mechanics\n\n### Summary\n\nNewtonian mechanics, also known as classical mechanics, is a branch of physics that describes the motion of objects under the influence of forces. It was developed by Sir Isaac Newton in the 17th century and is based on his three laws of motion.\n\n1. The First Law of Motion states that an object at rest will stay at rest, and an object in motion will stay in motion with a constant velocity, unless acted upon by an external force.\n\n2. The Second Law of Motion states that the acceleration of an object is proportional to the net force acting on it and inversely proportional to its mass. This can be expressed as $\\bm{F} = m\\bm{a}$, where $\\bm{F}$ is force, $m$ is mass and $\\bm{a}$ is acceleration.\n\n3. The Third Law of Motion states that for every action, there is an equal and opposite reaction. This means that if two objects interact, they will apply equal and opposite forces on each other.\n\nNewton's laws of motion are used to describe the motion of objects in everyday life, as well as in complex systems such as planetary motion, the motion of objects in space, and the behavior of fluids. The principles of Newtonian mechanics also form the basis for many areas of physics, including classical mechanics, thermodynamics, and electromagnetism."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.97112906,"math_prob":0.97012156,"size":1273,"snap":"2023-14-2023-23","text_gpt3_token_len":273,"char_repetition_ratio":0.13475177,"word_repetition_ratio":0.023255814,"special_character_ratio":0.20974077,"punctuation_ratio":0.094262294,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99515647,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-30T02:55:52Z\",\"WARC-Record-ID\":\"<urn:uuid:7cc40c68-dc0d-4b2d-85d8-a0d7c5b1f2a9>\",\"Content-Length\":\"40166\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:78e959da-860d-463c-bcf3-2db3e76e1c20>\",\"WARC-Concurrent-To\":\"<urn:uuid:5a334fbb-649f-4dba-979e-447d97c21370>\",\"WARC-IP-Address\":\"13.228.247.11\",\"WARC-Target-URI\":\"https://digitalblackboard.io/physics/high-school/mechanics/\",\"WARC-Payload-Digest\":\"sha1:MJQUJT7V4BYDYAS2EPAXD42RK75PQHSR\",\"WARC-Block-Digest\":\"sha1:KNTFMPXIM3CUIHQODQLDZWLN6VQBI6IO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296949093.14_warc_CC-MAIN-20230330004340-20230330034340-00450.warc.gz\"}"} |
http://vip.arachnoid.com/TankProfiler/case_history.html | [
"Home | Mathematics | * Applied Mathematics | * Storage Tank Modeling | * TankCalc * TankFlow * TankProfiler Storage Container Mathematics TankProfiler Case History TankStepped TankStepped Java Source Listing Trapezoidal Storage Tanks Trapezoidal Storage Tanks: Java Listing User inquiries about TankCalc Volumes In Depth",
null,
"",
null,
"",
null,
"Share This Page\nTankProfiler Case History\n\nAnalyzing and modeling a real-world tank with an odd shape\n\nIntroduction | Description to Profile | Summary",
null,
"Figure 1: cryogenic storage tank profile\nIntroduction\n\nThis article describes a typical storage tank analysis case history, using the methods I created with TankProfiler, which has been written to go beyond what my earlier program TankCalc can meaningfully analyze.\n\nA few months ago I received an inquiry from a TankCalc user who needed to analyze a rather oddly-shaped tank, one that I quickly realized TankCalc wouldn't be able to model very accurately. The tank's overall profile is shown in Figure 1 above. The tank's end caps aren't elliptical, nor are they spherical, nor, for that matter, do they have any other classic geometric shape.\n\nDuring the exchange about this tank's shape, my correspondent revealed that the end caps consist of two separate sections with different radii, welded together. One radius is evident at the bend between the main end cap and the cylindrical section, the other radius is evident at the center of the end cap. I realized if I wanted to accurately analyze this tank, I would need to write a new analysis method, one that relied on an incremental profile rather than by choosing from the handful of classic geometries offered by TankCalc (that new method became TankProfiler). Then I would need to write a routine that used the tank's description to create a profile suitable for TankProfiler.\n\nDescription to Profile\n\nMy correspondent provided a drawing of his tank and this description: \"The end cap is not so straightforward to describe mathematically. The section of the end cap is composed of two circle sectors with different radii. The radius of each circular sector is calculated based on the tank's diameter. So we have the radius of the top circular sector R=0.8*D, and the small circular sector that unites the top with the cilinder [sic] (the rounded corner actually) has R=(1/6.5)*D\" [D = tank diameter]. He went on to say that this two-radius arrangement is common in cryogenic storage tank fabrication, with differences in the specified dimensions for different purposes.\n\nA small digression. As I began this correspondence, it occurred to me that there are many tanks in the field that don't have simple geometric shapes, so one of my assumptions about TankCalc (which requires tanks to have such shapes) was at least partly wrong. But given the existence of computers and powerful mathematical analysis tools, why should tank fabricators struggle to make their tanks agree with simple preconceptions about their shape? This cryogenic tank was obviously the logical outcome of an efficient manufacturing process, and it has a rather odd shape, but not one that's particularly difficult to analyze. This tank needs to be analyzed using a stepwise dimensional profile, not a pure mathematical model like that used in TankCalc, but a dimensional profile is relatively easy to create. My thinking now is that mathematical tank analysis should agree with efficient manufacturing methods, not force tank fabricators to use a shape that might cause the tank's cost to increase — in the modern era, mathematics is almost always cheaper than welding metal in a particular way.\n\nAfter some preliminary work using Sage (my current favorite tool for mathematical analysis), I realized I could produce a very accurate profile of the tank based only on the provided information. Here's a diagram of my analysis generated in Sage:",
null,
"Figure 2: Dimensional analysis created using Sage\n\nFor simplicity, Figure 2 shows one of two end caps. All the relevant variable names appear in Figure 2. Here are their definitions:\n\n• d = tank diameter.\n• r = tank radius = $\\frac{d}{2}$.\n• ra = the smaller of the two end cap radii, that defining the \"corner\" between the cylindrical section and the main end cap section (blue in Figure 2).\n• rb = the larger of the two end cap radii, that defining the center of the end cap (green in Figure 2).\n• adj, opp, hyp = three sides of the gray right triangle that's key to analyzing this tank.\n• φ = the angle between hyp and adj, also the angle between the tank's vertical centerline and the junction between the large and small radii, as seen from the triangle's lower extent.\n• θ = the angle between hyp and opp, also the angle subtending the extent of the small radius.\n\nHere are notes on the analysis and the equations that resulted:\n\n• As a preliminary step toward generating a dimensional profile, I saw that I needed to use the available information — the tank's overall dimensions and the two provided end cap radii — to create an accurate diagram of the tank.\n• After some thought I realized that an analysis based on a right triangle could be used to compute the unknowns.\n• Because of its location at the transition between the end cap and the cylindrical wall, I saw that the length of the triangle's opposite side (opp in Figure 2) was equal to the tank's radius minus ra:\n• (1) $opp = r-r_a$.\n\n• In the same way, the length of the triangle's hypotenuse (hyp in Figure 2) was equal to the greater radius minus the smaller radius:\n• (2) $hyp = r_b - r_a$\n\n• With opp and hyp defined, I can compute adj, the triangle's third side:\n• (3) $adj = \\sqrt{hyp^2 - opp^2}$\n\n• Now that we have three triangle sides, I can compute the two required angles φ and θ:\n\n• (4) $\\phi = \\tan^{-1}(\\frac{opp}{adj})$\n• (5) $\\theta = \\tan^{-1}(\\frac{adj}{opp})$ or $\\frac{\\pi}{2} - \\phi$\n\nWith all the required quantities computed as described above, one can then generate a dimensional profile that consists of end caps having the two differing radii, and the cylinder wall. Any suitable method can be used to generate a profile, I favor Python simply because it produces useful results very quickly. Here is a link to the Python script I used to create this tank's profile.\n\nThe generated profile is then submitted to TankProfiler, which will produce a table that correlates tank sensor height and partial content volume or the reverse, as well as additional information about the tank — its total volume, surface area and (if a wall thickness is provided) the volume of the tank itself. TankProfiler will also create a diagram like that shown in Figure 1.\n\nIt's important to say that this method isn't limited to the dimensions specified for this example tank — most dimensions will be accepted, with certain constraints:\n• rb must be larger than r.\n• ra must be smaller than rb\nSummary\nTanks like this, and others like it, are quite common, more common than I realized when I wrote TankCalc. As it turns out, TankCalc makes some simplifying assumptions about tank shapes that I'm discovering aren't realistic. The advantage of TankProfiler, coupled with an analysis like that above, is that it produces very accurate results for tanks that don't have easily defined shapes. 
The drawback is that TankProfiler's users need to know more than those who use TankCalc — they need to be comfortable running Python programs at the command line (no fancy user interface) and for a tank not analyzed by others, they should be able to perform a mathematical analysis like that above.\n\nObviously one can get reasonably accurate results from TankCalc for a tank like this that doesn't have a classical geometric shape. When this tank is submitted to TankCalc with the assumption that the end caps are elliptical, the volume errors are less than 10% for all but the very smallest partial volumes (i.e. when the tank is nearly empty). This method is intended for those who need the highest practical accuracy. If provided with a carefully analyzed, well-written profile as in the above example, TankProfiler produces very accurate results and accommodates the many tank shapes outside those that can be managed by TankCalc.\n\n Home | Mathematics | * Applied Mathematics | * Storage Tank Modeling | * TankCalc * TankFlow * TankProfiler Storage Container Mathematics TankProfiler Case History TankStepped TankStepped Java Source Listing Trapezoidal Storage Tanks Trapezoidal Storage Tanks: Java Listing User inquiries about TankCalc Volumes In Depth",
null,
"",
null,
"",
null,
"Share This Page"
] | [
null,
"http://vip.arachnoid.com/images/leftarrow.png",
null,
"http://vip.arachnoid.com/images/rightarrow.png",
null,
"http://vip.arachnoid.com/images/addthis16.gif",
null,
"http://vip.arachnoid.com/TankProfiler/graphics/sage_tank_diagram.png",
null,
"http://vip.arachnoid.com/TankProfiler/graphics/sage_tank_analysis.png",
null,
"http://vip.arachnoid.com/images/leftarrow.png",
null,
"http://vip.arachnoid.com/images/rightarrow.png",
null,
"http://vip.arachnoid.com/images/addthis16.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.93998456,"math_prob":0.87889624,"size":7854,"snap":"2020-34-2020-40","text_gpt3_token_len":1666,"char_repetition_ratio":0.1277707,"word_repetition_ratio":0.04197901,"special_character_ratio":0.2153043,"punctuation_ratio":0.08134642,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98036176,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],"im_url_duplicate_count":[null,null,null,null,null,null,null,1,null,1,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-26T12:30:21Z\",\"WARC-Record-ID\":\"<urn:uuid:c70d6c7e-bd77-4618-9e2d-137b23e9ae22>\",\"Content-Length\":\"17293\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9ab6d484-5dc4-4f9e-8c6a-4083f8b9e10e>\",\"WARC-Concurrent-To\":\"<urn:uuid:ce70e60a-2171-40f7-8c01-ff23f593840b>\",\"WARC-IP-Address\":\"142.11.206.210\",\"WARC-Target-URI\":\"http://vip.arachnoid.com/TankProfiler/case_history.html\",\"WARC-Payload-Digest\":\"sha1:E2IH4Z3NLW7TKX4PZJMSDDM3NGP7QRN4\",\"WARC-Block-Digest\":\"sha1:56MLVG3LSXG37ROAMB5CJOWLHGKSRZO6\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400241093.64_warc_CC-MAIN-20200926102645-20200926132645-00613.warc.gz\"}"} |
https://reproductive-health.org/John%20K-J%20Li%20-%20Dynamics%20of%20the%20Vascular%20System/80 | [
"Physical Concepts and Basic Fluid Mechanics\n67\n3.3.2\nBernoulli’s Equation and Narrowing Vessel\nLumen\nFor a vessel with a narrowed segment or stenosis, as shown in Fig.\n3.3.3,\nthe total volume flow through all segments must be the same, by the\nconservation\nof\nmass. The\nflow\nis\ngiven by the product\nof\nthe cross-\nsectional area and the flow velocity:\nQ\n=\nA,v,\n=\nA2v2\n=\nA3v3\n(3.3.15)\nFig. 3.3.3:\nA\ncylindrical vessel with\na\nnarrowed segment\nor\nstenosis.\nThe familiar kinetic energy equation\nis\ngiven as\n1\n2\nK.E.\n=\n-mv2\nand the corresponding potential energy is\nP.E.\n=\npQ\nThe total energy\nis\ntheir sum:\nW\n=\nP.E.+\nK.E.\nThis gives rise to\n(3.3.16)\n(3.3.17)\n(3.3.18)\n(3.3.1\n9)"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6900451,"math_prob":0.97768277,"size":670,"snap":"2021-31-2021-39","text_gpt3_token_len":243,"char_repetition_ratio":0.1036036,"word_repetition_ratio":0.03448276,"special_character_ratio":0.31791046,"punctuation_ratio":0.20930232,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.993334,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-08-04T03:19:06Z\",\"WARC-Record-ID\":\"<urn:uuid:7a494b53-1e26-4616-a60e-29219b03b67a>\",\"Content-Length\":\"23578\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f5ba5d30-8865-4911-ba6e-86bfc0d8d22a>\",\"WARC-Concurrent-To\":\"<urn:uuid:be0529c8-e54f-4357-a45d-6508388f1c9e>\",\"WARC-IP-Address\":\"104.21.73.202\",\"WARC-Target-URI\":\"https://reproductive-health.org/John%20K-J%20Li%20-%20Dynamics%20of%20the%20Vascular%20System/80\",\"WARC-Payload-Digest\":\"sha1:ULR6RS6RYIXODJJOGIP2LWTTVLIPGPFV\",\"WARC-Block-Digest\":\"sha1:WBWEDKZYB2RWSK2M7PP5FHXUMMRYFTMH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046154500.32_warc_CC-MAIN-20210804013942-20210804043942-00668.warc.gz\"}"} |
https://www.teachoo.com/1547/1090/Ex-4.1--2---Represent-in-form-of-quadratic-equations/category/Making-quadratic-equation/ | [
"",
null,
"",
null,
"1. Chapter 4 Class 10 Quadratic Equations (Term 2)\n2. Concept wise\n\nTranscript\n\nEx 4.1 ,2 Represent the following situations in the form of quadratic equations : (i) The area of a rectangular plot is 528 m2. The length of the plot (in metres) is one more than twice its breadth. We need to find the length and breadth of the plot. Given that Area = 528 m2 and Length is one more that twice its breadth Let the breadth be x So, length = 2x + 1 We know that Area of rectangle = Length breadth 528 = (2x + 1) x (2x + 1) x = 528 2x2 + x = 528 2x2 + x 528 = 0 It is the form of ax2 + bx + c = 0 Where a = 2, b = 1 , c = 528 Hence, it is a quadratic equation .",
null,
""
] | [
null,
"https://d1avenlh0i1xmr.cloudfront.net/185fec89-4205-47b0-897f-fc9b58460f8eslide9.jpg",
null,
"https://d1avenlh0i1xmr.cloudfront.net/3a09f951-3d43-4aea-b033-b66da601a003slide10.jpg",
null,
"https://delan5sxrj8jj.cloudfront.net/misc/Davneet+Singh.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9181201,"math_prob":0.9999777,"size":696,"snap":"2021-31-2021-39","text_gpt3_token_len":211,"char_repetition_ratio":0.13294798,"word_repetition_ratio":0.026666667,"special_character_ratio":0.34195402,"punctuation_ratio":0.088435374,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999602,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,3,null,3,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-18T14:31:21Z\",\"WARC-Record-ID\":\"<urn:uuid:a7dc21ea-ea55-4a69-bb0b-078c2b9d21a7>\",\"Content-Length\":\"45377\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:81310bbe-960c-4cba-be61-3f5e6ad96429>\",\"WARC-Concurrent-To\":\"<urn:uuid:2cc863fa-cc12-47b4-9a39-178194621aa5>\",\"WARC-IP-Address\":\"3.226.182.14\",\"WARC-Target-URI\":\"https://www.teachoo.com/1547/1090/Ex-4.1--2---Represent-in-form-of-quadratic-equations/category/Making-quadratic-equation/\",\"WARC-Payload-Digest\":\"sha1:5VM57TSJIW4QQS7X2RRLECUK5ADWBONH\",\"WARC-Block-Digest\":\"sha1:ZYKPV5UKHUCUUV5FJBXLSTVGRDZCZUHO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780056476.66_warc_CC-MAIN-20210918123546-20210918153546-00519.warc.gz\"}"} |
https://biggboss3.net/qa/quick-answer-how-do-i-calculate-current-liabilities.html | [
"",
null,
"# Quick Answer: How Do I Calculate Current Liabilities?\n\n## How do you calculate bank liabilities?\n\nLiabilities are what the bank owes to others.\n\nSpecifically, the bank owes any deposits made in the bank to those who have made them.\n\nThe net worth, or equity, of the bank is the total assets minus total liabilities.\n\nNet worth is included on the liabilities side to have the T account balance to zero..\n\n## Is Rent current liabilities?\n\nCurrent liabilities are debts payable within one year, while long-term liabilities are debts payable over a longer period. … Items like rent, deferred taxes, payroll, and pension obligations can also be listed under long-term liabilities.\n\n## What are total liabilities?\n\nTotal liabilities are the combined debts that an individual or company owes. They are generally broken down into three categories: short-term, long-term, and other liabilities. On the balance sheet, total liabilities plus equity must equal total assets.\n\n## How many types of current liabilities are there?\n\nThe difference between the three most recognised types of liabilities – current liabilities, non-current liabilities, and contingent liabilities is represented in the table below.\n\n## Is rent asset or liabilities?\n\nAccounting: Lease considered an asset (leased asset) and liability (lease payments). Payments are shown on the balance sheet. Tax: As owner, lessee claims depreciation expense, and interest expense.\n\n## Where is current liabilities on balance sheet?\n\nCurrent liabilities are listed on the balance sheet under the liabilities section and are paid from the revenue generated from the operating activities of a company.\n\n## Is short term debt the same as current liabilities?\n\nWhat Is Short-Term Debt? Short-term debt, also called current liabilities, is a firm’s financial obligations that are expected to be paid off within a year. It is listed under the current liabilities portion of the total liabilities section of a company’s balance sheet.\n\n## What accounts are current liabilities?\n\nExamples of current liabilities:Accounts payable. Accounts payables are expected to be paid off within a year’s time, or within one operating cycle (whichever is longer). … Interest payable.Income taxes payable.Bills payable.Bank account overdrafts.Accrued expenses.Short-term loans.\n\n## Where does salary go on balance sheet?\n\nSalaries do not appear directly on a balance sheet, because the balance sheet only covers the current assets, liabilities and owners equity of the company. Any salaries owed by not yet paid would appear as a current liability, but any future or projected salaries would not show up at all.\n\n## What are non current liabilities?\n\nNoncurrent liabilities, also known as long-term liabilities, are obligations listed on the balance sheet not due for more than a year. 
… Examples of noncurrent liabilities include long-term loans and lease obligations, bonds payable and deferred revenue.\n\n## What is the formula for current liabilities?\n\nMathematically, Current Liabilities Formula is represented as, Current Liabilities formula = Notes payable + Accounts payable + Accrued expenses + Unearned revenue + Current portion of long term debt + other short term debt.\n\n## What are current liabilities examples?\n\nExamples of current liabilities include accounts payable, short-term debt, dividends, and notes payable as well as income taxes owed.\n\n## How do you calculate net current liabilities?\n\nNet current liabilities refer to the current assets less current liabilities of an organisation. To have net current liabilities, the current liabilities must be larger than the current assets.\n\n## What are average current liabilities?\n\nThe simplest way to calculate your average current liabilities for a particular period is with the beginning-and-end method. Get the total value of current liabilities as recorded on the balance sheet for the beginning of the period. … The result is your average current liabilities.\n\n## How do you calculate current assets and current liabilities?\n\nThe current ratio formula goes as follows:Current Ratio = Current Assets divided by your Current Liabilities.Quick Ratio = (Current Assets minus Prepaid Expenses plus Inventory) divided by Current Liabilities.Net Working Capital = Current Assets minus your Current Liabilities.More items…•"
] | [
null,
"https://mc.yandex.ru/watch/68550892",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.93460435,"math_prob":0.8730898,"size":4826,"snap":"2021-21-2021-25","text_gpt3_token_len":895,"char_repetition_ratio":0.2754044,"word_repetition_ratio":0.11732606,"special_character_ratio":0.18545379,"punctuation_ratio":0.124105014,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9716405,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-16T12:54:27Z\",\"WARC-Record-ID\":\"<urn:uuid:f772f61a-6f0c-4707-a64a-6d19a2469b54>\",\"Content-Length\":\"34033\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9a26c624-d694-4f5a-91d6-5660f5639582>\",\"WARC-Concurrent-To\":\"<urn:uuid:8cca4657-09a3-4c13-b437-d125bef9bd96>\",\"WARC-IP-Address\":\"87.236.16.33\",\"WARC-Target-URI\":\"https://biggboss3.net/qa/quick-answer-how-do-i-calculate-current-liabilities.html\",\"WARC-Payload-Digest\":\"sha1:2Z6WB2HGJUHEBRBU7EBR7H2MPTVBYW6U\",\"WARC-Block-Digest\":\"sha1:5SPTVVI4OR6XUUHZA35SVHLGSRL3HVCN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243991269.57_warc_CC-MAIN-20210516105746-20210516135746-00049.warc.gz\"}"} |
https://www.freecodecamp.org/news/arrow-function-javascript-tutorial-how-to-declare-a-js-function-with-the-new-es6-syntax/ | [
"You’ve probably seen arrow functions written a few different ways.\n\n``````//example 1\nconst addTwo = (num) => {return num + 2;};\n\n//example 2\nconst addTwo = (num) => num + 2;\n\n//example 3\nconst addTwo = num => num + 2;\n\n//example 4\nconst addTwo = a => {\nconst newValue = a + 2;\nreturn newValue;\n};\n``````\n\nSome have parentheses around the parameters, while others don’t. Some use curly brackets and the `return` keyword, others don’t. One even spans multiple lines, while the others consist of a single line.\n\nInterestingly, when we invoke the above arrow functions with the same argument we get the same result.\n\n``````console.log(addTwo(2));\n//Result: 4\n``````\n\nHow do you know which arrow function syntax to use? That’s what this article will uncover: how to declare an arrow function.\n\n## A Major Difference\n\nArrow functions are another—more concise—way to write function expressions. However, they don’t have their own binding to the `this` keyword.\n\n``````//Function expression\nconst addNumbers = function(number1, number2) {\nreturn number1 + number2;\n};\n\n//Arrow function expression\nconst addNumbers = (number1, number2) => number1 + number2;\n``````\n\nWhen we invoke these functions with the same arguments we get the same result.\n\n``````console.log(addNumbers(1, 2));\n//Result: 3\n``````\n\nThere's an important syntactical difference to note: arrow functions use the arrow `=>` instead of the `function` keyword. There are other differences to be aware of when you write arrow functions, and that’s what we’ll explore next.\n\n## Parentheses\n\nSome arrow functions have parentheses around the parameters and others don't.\n\n``````//Example with parentheses\nconst addNums = (num1, num2) => num1 + num2;\n\n//Example without parentheses\nconst addTwo = num => num + 2;\n``````\n\nAs it turns out, the number of parameters an arrow function has determines whether or not we need to include parentheses.\n\nAn arrow function with zero parameters requires parentheses.\n\n``````const hello = () => \"hello\";\nconsole.log(hello());\n//Result: \"hello\"\n``````\n\nAn arrow function with one parameter does not require parentheses. In other words, parentheses are optional.\n\n``````const addTwo = num => num + 2;\n``````\n\nSo we can add parentheses to the above example and the arrow function still works.\n\n``````const addTwo = (num) => num + 2;\n//Result: 4\n``````\n\nAn arrow function with multiple parameters requires parentheses.\n\n``````const addNums = (num1, num2) => num1 + num2;\n//Result: 3\n``````\n\nArrow functions also support rest parameters and destructuring. Both features require parentheses.\n\nThis is an example of an arrow function with a rest parameter.\n\n``````const nums = (first, ...rest) => rest;\nconsole.log(nums(1, 2, 3, 4));\n//Result: [ 2, 3, 4 ]\n``````\n\nAnd here’s one that uses destructuring.\n\n``````const location = {\ncountry: \"Greece\",\ncity: \"Athens\"\n};\n\nconst travel = ({city}) => city;\n\nconsole.log(travel(location));\n//Result: \"Athens\"\n``````\n\nTo summarize: if there’s only one parameter—and you’re not using rest parameters or destructuring—then parentheses are optional. Otherwise, be sure to include them.\n\n## The Function Body\n\nNow that we’ve got the parentheses rules covered, let’s turn to the function body of an arrow function.\n\nAn arrow function body can either have a “concise body” or “block body”. 
The body type influences the syntax.\n\nFirst, the “concise body” syntax.\n\n``````const addTwo = a => a + 2;\n``````\n\nThe “concise body” syntax is just that: it’s concise! We don’t use the `return` keyword or curly brackets.\n\nIf you have a one-line arrow function (like the example above), then the value is implicitly returned. So you can omit the `return` keyword and the curly brackets.\n\nNow let’s look at “block body” syntax.\n\n``````const addTwo = a => {\nconst total = a + 2;\n}\n``````\n\nNotice that we use both curly brackets and the `return` keyword in the above example.\n\nYou normally see this syntax when the body of the function is more than one line. And that’s a key point: wrap the body of a multi-line arrow function in curly brackets and use the `return` keyword.\n\n### Objects and Arrow Functions\n\nThere’s one more syntax nuance to know about: wrap the function body in parentheses when you want to return an object literal expression.\n\n``````const f = () => ({\ncity:\"Boston\"\n})\nconsole.log(f().city)\n``````\n\nWithout the parentheses, we get an error.\n\n``````const f = () => {\ncity:\"Boston\"\n}\n//Result: error\n``````\n\nIf you find the arrow function syntax a bit confusing, you’re not alone. It takes some time to get familiar with it. But being aware of your options and requirements are steps in that direction."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6537338,"math_prob":0.9700558,"size":4500,"snap":"2021-31-2021-39","text_gpt3_token_len":1078,"char_repetition_ratio":0.17504448,"word_repetition_ratio":0.07036536,"special_character_ratio":0.25555557,"punctuation_ratio":0.15204678,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9800365,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-21T07:29:38Z\",\"WARC-Record-ID\":\"<urn:uuid:d7041ca4-2a7b-4d1e-ac3e-8f0770176b32>\",\"Content-Length\":\"63573\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:06590f79-5238-415e-87bd-d25ae0b4136b>\",\"WARC-Concurrent-To\":\"<urn:uuid:41cae997-0a5a-459b-888f-151d1781ee80>\",\"WARC-IP-Address\":\"172.67.70.149\",\"WARC-Target-URI\":\"https://www.freecodecamp.org/news/arrow-function-javascript-tutorial-how-to-declare-a-js-function-with-the-new-es6-syntax/\",\"WARC-Payload-Digest\":\"sha1:T2RWCJZBB6A5CZOIA27MERTWCKX5GE6G\",\"WARC-Block-Digest\":\"sha1:S2EXLOMMNBC5TWDRLOM62VMYXASROCEP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057199.49_warc_CC-MAIN-20210921070944-20210921100944-00432.warc.gz\"}"} |
https://www.thefreelibrary.com/A+Computationally+Efficient+Pipelined+Architecture+for+1D%2F2D+Lifting...-a0562766622 | [
"# A Computationally Efficient Pipelined Architecture for 1D/2D Lifting Based Forward and Inverse Discrete Wavelet Transform for CDF 5/3 Filter.\n\nI. INTRODUCTION\n\nThe wavelet analysis which expands the influence area continuously has been an important tool since it has been first introduced for the multiresolution signal decomposition . The wavelet transform is a very convenient and effective transform at almost for all kind of application fields including the JPEG compression algorithms of the signal and image processing studies [2-7]. The wavelet analysis is a versatile, efficient and helpful technique in biomedical applications [8-10], genomic researches [11-13], and even seismic signal processing [14-15], as well in the studies of the other research fields for the researchers. The power of the wavelet transform stems from providing the better localization in time-frequency domains.\n\nThe applications and the designs of the various studies are implemented by using digital systems which have limited throughput and memory capacity. Therefore, the efficient realization of the Discrete Wavelet Transform (DWT) which provides the sufficient information to analyze and synthesize the original signal is considerably crucial. The resulted coefficients from this computationally intensive signal transform should be calculated by using an effective digital architecture with satisfying the requirements and characteristics of the related application. So far, several architectures and VLSI (Very Large Scale Integration) systems are proposed for this purpose , .\n\nThe filter banks, which are composed of lowpass, and highpass filters essentially, are used for the wavelet transform, and these filter banks are implemented with different methods. These are convolutional based approaches including direct form, cascade, polyphase, lattice structure, and lifting based structure [3-4], [18-20]. Each of these convolutional based methods has some advantages and disadvantages depending on the application fields. However, a systematic and useful method for the biorthogonal wavelet transform has been proposed. Lifting method for wavelet transform is the spatial constructions of the wavelet coefficients by deriving the polyphase matrices [21-22]. This reversible factorization technique consisting of the lifting steps is preferred substantially and encountered in literature widely due to the its attractive properties such as being a transform of providing integer to integer mapping, in-place processing, and spatial instead of frequency , , [23-24].\n\nThe lifting method is a filter bank which is utilized to compute the DWT coefficients using biorthogonal wavelets. Some of the studies are based on the pipeline technique [25-26], some of them uses the folded architecture in which the use of the single filter output of multiple times for the n levels [27-28]. Some others utilize the flipping structure [29-30]. Each of these architectures has purposed some improvements considering some different performance criteria just as hardware complexity, memory requirements, precision, throughput and latency.\n\nIn this study, a simple and efficient architecture is proposed to realize the lifting based DWT method which is used in the various studies including today's important applications such as image processing and JPEG compression . 
So, the parallel processing elements, which are easily affordable today in terms of cost, are used in the structure of this 2D pipelined DWT architecture which is benefiting from the power of the parallel operation and increasing the system performance. Some efforts in this point of view are appeared in several studies . The proposed scalable architecture can be useful for the vector processors with an appropriate adaptation .\n\nAnother crucial issue is that the computation of the 2D image borders, in other words the decision about the method which is used for the 2D image border extension is affected the transform performance at the image borders significantly. There are some available studies regarding to this critical issue in the literature [34-35]. Proper one of the boundary extension methods, including symmetrical, zero padding, periodic extension, can be used for the computation of the boundaries of the signal by considering the limitations and the necessities of the related application. In this study, the symmetric boundary extension method is employed to have the optimum performance and not to increase the hardware complexity.\n\nAn architecture for the CDF 5/3 (Cohen-Daubechies-Feauveau) filter which is important for the lossless JPEG compression is proposed. The application of the shifting units is sufficient instead of the use of multipliers by taking the advantage of the dyadic nature of the 5/3 CDF filter.\n\nThe FPGA (Field Programmable Gate Arrays) boards are used to test and examine the designed architectures in literature frequently , . The proposed architecture has been formed using RTL (Register Transfer Level) design process. The RTL design process is used for the algorithmic level design describing the signal flows between the registers of the complex or large systems synchronized by a clock signal. So, the proposed design which is obtained by using the RTL design process has been verified by simulation and synthesized for the Xilinx Spartan 3e FPGA board to examine the results. Also, Verilog HDL (Hardware Description Language) has been used to describe of the designed architecture.\n\nFor the 5/3 CDF filter, reconstruction of the wavelet coefficients from the original signal is lossless. Because 5/3 CDF filter coefficients are dyadic rational values and so the integer to integer transform is obtained by using this feature . The implementations with finite precision should be constructed when the wavelet filters with irrational coefficients are used, and this situation leads to the data loss. However, the original samples are obtained even though the fractions of the transformed wavelet coefficients are rounded to the integers when the 5/3 CDF filter is used. The 5/3 CDF filter is known as lossless wavelet transform, and also the JPEG 2000 standard which is the compression standard using wavelet method supports the lifting based filtering mode. Also, this filter belongs to the two-matrix lifting factorization class.\n\nSo, in this study an efficient architecture is proposed for the widely used CDF 5/3 filter on account of the properties of the lossless wavelet transform. The realization of the inverse transform is also required to test the performance of the proposed architecture.\n\nThe rest of the paper is organized as follows. The background of the wavelet transform is summarized as looking from the viewpoint of lifting schema in the second section. 
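The point above about replacing multipliers with shift units for the dyadic 5/3 coefficients is easy to see in code. A minimal illustration of mine, not the paper's Verilog: multiplying by 0.5 and 0.25 becomes arithmetic right shifts, which is why the predict and update stages need only adders and shifters.

```python
# Dyadic coefficients of the 5/3 filter as shifts instead of multiplies.
def half(v):     # v * 0.5  ->  v >> 1 (arithmetic shift, i.e. floor division by 2)
    return v >> 1

def quarter(v):  # v * 0.25 ->  v >> 2
    return v >> 2

print(half(10), quarter(20))    # 5 5
print(half(-9))                 # -5 (Python's >> floors, like an arithmetic shift)
```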
The essential constructs of the proposed 2D DWT processors as well as the general system perspective are described in the third section. The detailed explanation of the proposed 1D/2D lifting based architecture is also given. After that the results and discussions are given, and finally conclusion section is given.\n\nII. WAVELET TRANSFORM BACKGROUND AND LIFTING SCHEMA\n\nDWT uses two sets of functions. These functions are the scaling functions relating with lowpass filter, and the wavelet functions relating with highpass filter.\n\nLifting schema is a convenient method to compute DWT using biorthogonal wavelets. Lifting method can be used if there is no appropriate one among the known wavelets for the application under consideration .\n\nThe information carried on a signal does not vary from sample to sample, randomly. There is a correlation between the adjacent samples. The even indexed samples can be predicted by using only the odd indexed samples of the data at hand, and vice versa. The accuracy of the prediction depends on the correlation between the adjacent samples and the suitability of the estimation method , .\n\nLifting factorization method, also called the second generation wavelets, is consisted of three steps. These steps can be summarized as follows;\n\nI) Split: The signal is separated into its even and odd polyphase components. This operation is also called the Lazy wavelet. The even indexed samples are the even polyphase components and the odd indexed samples are the odd polyphase components of the signal.\n\nII) Predict: The odd samples are predicted using the adjacent even samples at this step. The detail coefficients in other words highpass components are computed.\n\nIII) Update: The even samples are computed by means of the odd samples obtained at the previous step. Indeed, the approximation coefficients, also called lowpass components, are computed at this step .\n\nIn brief, the new odd samples are predicted using even samples and the new even samples are updated using the new odd samples. The z-transform of a FIR filter, which is a delay line essentially, can be represented by Laurent series.\n\n[mathematical expression not reproducible] (1)\n\nwhere [h.sub.k] is impulse response of a filter h which has the finite number of coefficients [h.sub.k], and the degree of that Laurent polynomial h is given as abs(h) =q-p.\n\nThe polyphase matrix used for the lifting schema is defined as:\n\n[mathematical expression not reproducible] (1)\n\nwhere [h.sub.e](z) defines even low pass coefficients, [h.sub.o](z) defines odd low pass coefficients, [g.sub.e](z) defines even indexed high pass coefficients and [g.sub.o](z) defines odd high pass coefficients.\n\nThe related lifting and scaling steps are derived from the biorthogonal wavelets to realize the lifting based DWT. The analysis filters of a specific wavelet class should be represented in the polyphase matrix form. The polyphase matrix of a wavelet transform filter is decomposed into the upper and lower triangle matrix, and the diagonal matrix. Thus, the lifting based architectures are derived.\n\nThe P(z) matrix having the determinant value of 1 is required to perform the wavelet transform. In other words, the diagonals of the polyphase matrix are 1, and the matrix is factorized as 2x2 upper and lower triangle matrix. 
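The two placeholder equations in the background section above correspond, in the usual notation of Daubechies and Sweldens, to the Laurent polynomial of a FIR filter and to its polyphase matrix with unit determinant. The forms below are the standard textbook ones and are my reconstruction, not a verbatim copy of the paper's typeset equations:

```latex
h(z) = \sum_{k=p}^{q} h_k \, z^{-k}, \qquad
P(z) = \begin{pmatrix} h_e(z) & g_e(z) \\ h_o(z) & g_o(z) \end{pmatrix},
\qquad \det P(z) = 1 .
```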
The upper triangle matrix defines the prediction step coefficients, and the lower triangle matrix defines the update step coefficients.\n\nHere (h, g) is a complementary filter pair, the Laurent polynomials are [s.sub.i](z) and [t.sub.i](z) for 1 <= i <= m, and K is a non-zero constant used to factorize the polyphase matrix.\n\n[mathematical expression not reproducible] (3)\n\nThe polyphase matrix can be factorized into lifting steps routinely. Each finite filter used for the wavelet transform is obtained by applying m lifting and dual lifting steps after the Lazy wavelet step. The last of these steps is the scaling step, in which a scaling matrix is derived whose elements are all 0 except for the diagonal ones. The dual polyphase matrix is given as,\n\n[mathematical expression not reproducible] (4)\n\nThe reverse operation of the lifting schema is obtained by alternating the signs of the Laurent polynomials [s.sub.i](z) and [t.sub.i](z) and running all operations in the backward direction. The lifting schema is a suitable alternative when the frequency based techniques are not practical.\n\nA. The 2D lifting schema based DWT\n\nThe CDF 5/3 filter is a reversible transform valid for the JPEG 2000 image compression algorithm and coding standard for lossless image compression. The filter coefficients are dyadic rationals which can map integer to integer. The highpass and lowpass coefficients produced by the transform are complements of each other, and the original input signal can be reconstructed by means of these two time series together. The coefficients of this lossless transform filter are given as follows:\n\nLowpass: (-1/8, 2/8, 6/8, 2/8, -1/8)\n\nHighpass: (-1/2, 1, -1/2)\n\nThe polyphase matrix and factorization of the filter in the z domain can be given in the following form:\n\n[mathematical expression not reproducible] (5)\n\nThe prediction and update steps of the CDF 5/3 filter are shown in the factorization given in Eq. (5). Accordingly, the new transform coefficients, i.e. the odd and even components corresponding to the highpass and lowpass components in the time domain, are given by the following equations,\n\ny(2i + 1) = x(2i + 1) - 0.5 × (x(2i) + x(2i + 2)) (6)\n\ny(2i) = x(2i) + 0.25 × (y(2i - 1) + y(2i + 1)) (7)\n\nwhere the x's are the original signal samples and the y's are the calculated signal samples.\n\nIII. GENERAL STRUCTURE OF THE PROPOSED SYSTEM\n\nAt first, the main system architecture is constructed. The word length has been chosen as 16-bit in accordance with the architecture in the system. The system is composed of the main datapath, control and memory units. The system design has been performed by using the RTL design process. The RTL design process is used for the algorithmic level design describing the signal flows between the registers of complex or large systems. The ASMD (Algorithm State Machine Diagram) summarizing all the processes relating to the design and processor is constructed. The processes are executed depending on this designed chart.\n\nThe signal input starts sample by sample, as soon as the processor becomes ready by the defined enable input, i.e. the start control signal, in the system. The number of signal sample points is N. The index of the first sample is 0 (even index). The even indexes denote the even samples and the odd indexes denote the odd samples. The CDF 5/3 filter needs 3 adjacent samples of the input signal due to its characteristics.
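To make the predict and update steps of equations (6) and (7) concrete, here is a plain Python sketch of one 1-D lifting level with the symmetric boundary mirroring the paper describes. It is my illustration, not the authors' Verilog datapath; the function names are mine and the signal length is assumed even.

```python
def lifting_53_forward(x):
    """One level of the CDF 5/3 lifting DWT (floating point, symmetric extension)."""
    assert len(x) % 2 == 0 and len(x) >= 2
    n = len(x) // 2
    even = [float(x[2*i]) for i in range(n)]
    odd = [float(x[2*i + 1]) for i in range(n)]
    # Predict: d[i] = x[2i+1] - 0.5*(x[2i] + x[2i+2]); mirror the last even sample.
    detail = [odd[i] - 0.5 * (even[i] + even[min(i + 1, n - 1)]) for i in range(n)]
    # Update: s[i] = x[2i] + 0.25*(d[i-1] + d[i]); mirror the first detail sample.
    approx = [even[i] + 0.25 * (detail[max(i - 1, 0)] + detail[i]) for i in range(n)]
    return approx, detail

def lifting_53_inverse(approx, detail):
    """Undo the lifting steps in reverse order with opposite signs."""
    n = len(approx)
    even = [approx[i] - 0.25 * (detail[max(i - 1, 0)] + detail[i]) for i in range(n)]
    odd = [detail[i] + 0.5 * (even[i] + even[min(i + 1, n - 1)]) for i in range(n)]
    x = []
    for e, o in zip(even, odd):
        x.extend([e, o])
    return x

signal = [3, 7, 1, 8, 2, 6, 9, 4]
lo, hi = lifting_53_forward(signal)
assert lifting_53_inverse(lo, hi) == [float(v) for v in signal]   # perfect reconstruction
```

The closing assert checks perfect reconstruction, which is the property the paper's inverse-transform test relies on.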
The first 3 samples are received cycle by cycle a total of 3 cycles, and the transform process starts immediately when these 3 samples acquired.\n\nInitially, the new odd signal sample shown by the index \"1\" is computed. After the calculation of the \"1\" indexed odd sample, at this very moment the \"0\" indexed even sample is calculated. The symmetric boundary extension is utilized in order to avoid the performance degradation at the signal boundaries in the case of 2D transform when the first new even sample is computed. In this case, a copy of the \"1\" indexed new odd sample which is hold by a register is used for the calculation of the \"0\" indexed new even sample. Indeed, the register value containing the new calculated odd sample is used twice, in that situation. The pipeline technique is utilized at the datapath unit to provide a performance increase during the calculations. Considering the lifting based method, the pipelined technique appears to be a natural choice so the lifting itself is quite favorable.\n\nThe three successive samples are required to calculate the value of a new sample point, whether the odd or even, when regarding receive of the signal points into the system or more specifically the memory unit (or buffer). Therefore, taking the signal samples as pairs (odd-even or 3 indexed-4 indexed samples) is a necessity due to the data dependency. However, the time delay is prevented by means of the pipelined technique and an even new sample is calculated in each clock cycle, and an odd new sample is calculated in the other cycle, and so on. The datapath takes the \"3\" indexed odd sample while the \"1\" indexed new odd sample is computed.\n\nThe \"4\" indexed even sample is taken while the \"0\" indexed new even sample is computed. Two very next neighboring samples are necessary as well as the sample which just now under computation because of the data dependency. The pipeline technique is quite useful at this point, so this technique has been utilized in the proposed architecture to gain performance.\n\nThe pipeline technique is realized by using the extra pipeline registers. So, the hardware utilization is enhanced by means of some additional pipeline registers (or buffer registers) providing that there is no hardware resource is idle. The accurate input data is processed and transferred to the proper hardware resource by virtue of the control signals. The datapath operations are triggered by the clock signal, and total number of clock cycles of process time for input signal defines the total latency.\n\nThe process continues until the samples are finished. The symmetric extension is employed at the signal boundary using the copy of the \"(N-2)\" indexed sample when the processor reaches the boundary. The process time schedule is shown in the Fig. 1. In the Fig. 1 solid lines; stands for the unchanged samples and dashed lines stands for the new calculated samples.\n\nA. Proposed 1D lifting based architecture\n\nThe construction of the 1-D DWT architecture is the first step to establish the 2D DWT architecture because the 2D architecture is based on the 1D structure. For this purpose, the 1D architecture has been formed primarily. The architecture has been constructed by RTL design methodology and the model of the architecture is given in the Fig. 2. The architecture has been composed of the datapath unit, control unit and the memory unit essentially. An address unit is also added to organize and direct the addresses of the memory unit. 
The datapath executes all required operations, and the timing issues which are occurring in the datapath unit are treated by the control signals generated by the control unit.\n\nThe pipeline technique provides the efficiency by reducing the total latency, and this technique is an effective choice for the proposed lifting based DWT. The registers hold the 16-bit fixed point numbers resulting from the 16-bit architecture. Realization of the DWT which utilize the FP (Floating Point) number format requires excessive hardware resource and moreover the use of the FP units is expensive solution. In addition, FP operations are slower as a consequence of the operation complexity. Low cost embedded microprocessors and microcontrollers do not have FPUs (Floating Point Units) generally; hence these systems do not support the FP (DSPs, FPGAs, etc.) operations.\n\nThese systems support the utilization of the fixed point format instead of using the FP number format. Especially the fixed point format is used for the FIR filters. Fixed point format provides the limited precision, but this format is more simple and faster to operate. Another advantage is that the fixed point systems require less hardware resources.\n\nA synthesizable HDL code should be constituted to realize the architecture after design step. The synthesizable structures specify that the digital logic elements inside the FPGA board can implement the system design which is under consideration. Actually, synthesize operation is the conversion of the HDL code into circuit or hardware elements. The RTL design is quite convenient tool for the special purpose processor design just like in this study. The designed operations are executed in the datapath with the help of the control unit at each clock cycle. The design has some predefined specific properties which are summarized by the ASMD (Algorithm State Machine Diagram) given in the Fig. 3. Each state of the processor and the datapath operations corresponding to these states are described on the ASMD, distinctly. The control mechanism of the processor and pipeline registers are shown on the chart.\n\nDatapath operations have been enumerated for convenience at the ASMD chart, and related operations are given from (1) to (11). The used decision control signals which are generated by datapath unit are given in Table I and the related datapath operations are given in Table II.\n\nIn given ASMD chart, the interactions between the datapath unit, control unit and memory unit are described. These operations are carried out at each processing unit simultaneously. Datapath operations could be summarized as follows. The image has been split into frames by address unit, and these frames are directed to corresponding processing units to have been filtered. When Start signal is asserted, datapath operations get starts, otherwise the control flow returns to \"S_ready\" state. \"load_first_samples\" control signal provides initialization of the sample counters. \"S_first_there\" state acquires the samples which are placed at the image boundaries. \"Flush\" signal flushes all the datapath operations and registers, if this signal is asserted upon necessity.\n\nAfter having first three samples, the lifting based operations starts. One of the required control signals is asserted at each cycle, and the required pipeline stages are performed. A new sample point is taken to related register and at the same time sample counter is incremented. 
In that clock cycle, the new odd sample value is also calculated, and all these operations fulfilled upon activation of \"Calculate1\" signal. This stage is corresponding to the Eq. (6) i.e. the highpass components of the signal.\n\nUsing the previously calculated new odd sample values new even sample value is calculated similar to the previous step. This stage is corresponding to the Eq. (7) i.e. the lowpass components of the input signal upon activation of \"Calculate2\". After having these boundary samples then all other signal points are calculated by the help of the \"Load1\" and \"Load2\" control signals, until the sample number counter reach at the related number of sample count, which is given by N. When the sample counter reach at the related number of sample point, N, and then the \"Load3\" signal is activated and the other boundary samples of the image are calculated. All operations are pipelined using required pipeline registers and control signals.\n\nAt each clock cycle and by the help of the related control signal, next sample is taken and also one of the new sample values is calculated, and the register transfers are carried out. Index counters which are datapath controls are related to the memory unit and these provide to have been taken the corresponding sample point.\n\nSome processing units which fulfill the transform are utilized in the designed structure. The model can be seen in the Fig. 2 for the designed processing unit. Each processing unit has its own unique id which indicates the corresponding unit number. The components which are used to constitute the system architecture are explained.\n\nDatapath unit: This is one of the main blocks of the system including the hardware resources which the operations are managed. Datapath performs the DWT with the help of the control unit signals. This unit includes pipeline registers which are required for the pipeline technique. In the datapath, the shifting units are utilized instead of the multiplier hardware. The shifting units are simple, efficient and faster considering the operation cost. The logic circuit schema of the operational datapath unit which is comprised of the simple datapath components is shown in the Fig. 4.\n\nControl unit: Control unit ensures that the scheduling and synchronization of all the datapath operations are accurate and also all the processes are executed without resulting any complexity. The control unit is a FSM (Finite State Machine) designed substantially. This FSM is designed taking into account the lifting based DWT which is fulfilled by the special purpose processor. The generated control signals are directed to the datapath and address unit to manage the transform. The control unit operates in a cooperative manner with datapath unit because this unit handles the events taken place in the datapath unit.\n\nMemory unit: This unit stores the signal samples to be transformed. The address of the input sample which is under consideration is defined by the index value generated by the address unit. The new sample values are exchanged with the old values because of the transform is in-place and the old sample values are discarded. This part of the system has been designed as an individual unit free from the datapath and control unit.\n\nThe used address range depends on the input signal to be processed. Indeed, the memory can be assumed as a kind of LUT (Look Up Table) proceeding from bottom to top and a relevant illustration is given in the Fig. 
5.\n\nThe results which are obtained from each processing unit are written to the same address ranges after processing the input samples. So, the memory requirements are reduced.\n\nAddress unit: Address unit has been added to the designed structure to provide that the progressing to 2D case from 1D is more easily. Address unit partitions the image into related number of image frames and transfers those frames to corresponding processing units to be processed in parallel. This unit is quite helpful for the interactions between the memory and datapath operations in case of multiprocessor system architecture. The address unit generates the addresses for the sample points which are taken from the memory unit. The related sample is taken from the generated address and directed to the datapath to be processed. This unit is also triggered with the clock signal and directed by the control signals from the control unit.\n\nThe address unit with a quite simple structure could be thought as a kind of index counter directed by the control signals, simply.\n\nM =lo[g.sub.2]N (8)\n\nWhere N is the number of sample points and M is the number of address bits. Basically, the registers are the data storage components and buffers.\n\nThe realization of the inverse transform is possible with some minor changes. It is enough only to change the shift amount values of the shifting elements in the datapath. So, the inverse transform could be performed by utilizing the same circuit.\n\nB. Proposed 2D lifting based architecture\n\nThe 2D architecture is composed of almost the same components with the 1D architecture. Solely, 2D signals which are images are processed by virtue of a few additional control signals and multi-processing units to operate in parallel. The 2D signals are considered as matrix and the elements of matrix lay out from bottom to top in the memory.\n\nTo perform the lifting based DWT while working with images beforehand the wavelet transform coefficients of all the rows of the matrix are computed after that operation the wavelet transform coefficients of all the columns of the matrix are computed. Therefore, the approximation coefficients and horizontal, vertical and diagonal coefficients of image are computed.\n\nThe 2D architecture design has been composed of almost the same components with the 1D architecture design. 2D signals which are generally images has been processed by using a few additional control signals and multiprocessing units which operate in parallel. The 2D signals are considered as matrix, and the matrix elements placed from lower addresses to higher addresses, in the memory.\n\nTo perform the lifting based DWT while working with images or 2D signals, at first step the wavelet transform coefficients of all the rows of the matrix are computed. After that operation the wavelet transform coefficients of all the columns of the matrix are computed. Thus, the approximation, horizontal, vertical and diagonal coefficients have been computed for 2D signal input.\n\nSo, the datapath and control unit have to operate in cooperation, and also that control unit has been used for all the other processing units to coordinate other operations. Decision signals which are obtained from one of the processing units are sufficient to direct the main control unit because all the other processing units fulfils the same processes by operating in parallel and synchronously. 
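As a behavioural illustration of this row-parallel, folded organisation (not the RTL design itself), the sketch below reuses the lifting_53_forward helper from the earlier sketch, applying it to every row and then to every column, with the rows split across a configurable number of workers that stand in for the processing units; n_pu and the block partitioning are illustrative assumptions, and even image dimensions are assumed.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def dwt2_53(image, n_pu=4):
    """One 2-D 5/3 DWT level: rows first, then columns (folded reuse of the 1-D pass)."""
    img = np.asarray(image, dtype=float)

    def one_pass(mat):
        out = np.empty_like(mat)
        def work(block):
            lo, hi = block
            for r in range(lo, hi):
                a, d = lifting_53_forward(mat[r])
                out[r] = np.concatenate([a, d])       # lowpass half | highpass half
        bounds = np.linspace(0, mat.shape[0], n_pu + 1, dtype=int)
        with ThreadPoolExecutor(max_workers=n_pu) as ex:
            list(ex.map(work, zip(bounds[:-1], bounds[1:])))
        return out

    tmp = one_pass(img)        # horizontal pass over all rows
    out = one_pass(tmp.T).T    # vertical pass: transpose, reuse the same machinery, transpose back
    # LL is out[:H//2, :W//2]; LH, HL and HH are the remaining quadrants.
    return out
```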
As well, a single address unit provides carrying out all the datapath operations for the lifting based wavelet transform.\n\nThe samples stored at the memory unit (MU) are shared by each processing unit by the help of the address unit whereby the operations are executed in parallel. Each processing unit could reach only its permitted address range to prevent a problem. The number of the processing unit (PU) is given by [L.sub.pu]. The rows with correct order and quantity have to be transferred to each processing unit. Because the number of the processing units, [L.sub.pu] are less than the row number of the 2D signals, in general. These operations are accomplished by the help of the control and address units. A scalable structure has been formed by adjusting the proper number of PUs. Today's easily affordable multiprocessor structures are similar to these units. Consider an image which is the size of [N.sub.row] x [N.sub.column], and the necessary memory unit which is the size of ([N.sub.row] x [N.sub.column])x1. Each individual image row is send to its dedicated PU so; the control passes to the next row after finishing the process of a previous row. PUs could fulfil their assigned task by reaching only their authorized MU ranges. Each PU has a right to reach only [N.sub.row]/[L.sub.pu] number of rows.\n\nThe second step begins when all the rows are finished and sent to the in-place memory. All the columns are sent to the memory after the processing in the second step. The same hardware architecture is used for this second step by using the first step output coefficients as the input of the second step. The architecture is called folded structure when a single architectural layer is used for more than one transform step in this manner , .\n\nThe lowpass and highpass coefficients are obtained after the first step. The transpose of the first step output, wavelet coefficients, are applied to the input of the same circuit to complete the 2D transform. The desired output values after the transform are approximation (LL), horizontal (LH), vertical (HL) and diagonal (HH) coefficients revealing the information about the image.\n\nBoundary issue: Another critical issue is the boundary extension method which affects the performance of the wavelet transform in case of 2D signal. The boundary extension problem corresponds to image samples which are placed at the edges. In this study, symmetric boundary extension method has been employed. The second sample and the (N-1)th sample have been used to calculate the transform coefficients at the frame boundaries, for the first and last samples. Actually, the relevant register content is used twice to overcome to this issue. The extension method is quite appropriate for the lifting schema. All the rows are processed in parallel and after finishing these steps the coefficient calculation of all the columns begins in order to avoid the control and computation overhead.\n\nIV. RESULTS AND DISCUSSIONS\n\nSome experiments have been performed to test the performance of proposed system architecture. Designed architecture is synthesized for the Xilinx Spartan 3e FPGA board for this purpose. FPGAs offer high performance, flexible and balanced solutions compared to other common digital systems. The FPGAs have possibility of high level parallelism as well. 
Computation time is one of the crucial criteria to measure the system performance; it is defined as the total number of the clock cycles between the input and output time instants of the first sample.\n\n1D signal data has been applied to the input of the designed architecture. The elapsed time which is the duration of having the first transformed sample after the first sample input takes 4 clock cycles. In general, the computation time is technology dependent. Total computation time is computed using T=N x [T.sub.CLK] where N denotes the number of clock cycles, [T.sub.CLK] is clock cycle time. The completion of the whole frame (for N point input signal) takes N+5 clock cycles beginning from the first sample input. The original form of the signal samples, the reconstructed signal with 256 points are shown in the Fig. 6. The superimposed form of the original and reconstructed signals is given also given in the Fig. 6. The obtained approximation coefficients and the detail coefficients are given in the Fig. 7. The approximation coefficients which are shown in the Fig. 7 are quite similar to the original signal and the detail coefficients reveal the fine details of the signal. The reconstruction error has not been obtained after the inverse transform.\n\nThe recovered signal sample values are assessed with an objective measure and RMSE (root mean square error) is obtained as zero for the signal points and shown in Fig. 8.\n\nDue to the characteristics of the CDF 5/3 filter this result has been obtained after recovering the image from lifting based wavelet transformation. A system which is designed to use for the lossless wavelet transformation and compression should provide that the recovered signal points are exactly the same as the original signal points, after inverse transformation. Here, this requirement is provided.\n\nTo change the shift amounts of the shifting units taking part in the datapath after reversing the sample point order of the transformed signal is quite effective. The shifting units are utilized to implement the multiplication and division operation when recovering the transformed signal to its original form. In the inverse transform, the sign of the scaling factors is changed, merge is exchanged with split, and the dataflow is reversed. The number of signal sample points, N, can be altered easily due to the proposed architecture which satisfies that system is scalable. The 2D architecture has been tested after the first step, and some images have been used for trials. Initially, the baboon image has been used for the 2D lifting based DWT.\n\nImage which has 256 x 256 sample points defines the memory requirement at the same time. The number of PUs is chosen as four for the baboon image.\n\nSome particular row ranges of image are assigned to each individual PU and the coordination of the operations are organized by means of the CU-AU pair. The load has been shared between each of the four PUs to reach the best performance. Predefined address ranges of memory are related to the corresponding PUs because the PUs can reach only the allowed memory space. The original image before wavelet transform is given in the Fig. 9. Approximation (LL), horizontal (LH), vertical (HL) and diagonal (HH) coefficients after transform are given in the Fig. 10.\n\nIn 2D (image) case, the total computation time for N x N signal is:\n\n[mathematical expression not reproducible] (9)\n\nwhere [L.sub.pu] denotes the number of processing units. The reconstructed image is shown in the Fig. 
11.\n\nWhen the obtained results are examined it is seen that the approximation coefficients are so similar to the original image. A horizontal, vertical and diagonal coefficient represents the related details of the image. After the first test the Lena image which is well known and frequently used in literature is chosen to test the constructed architecture. 512x512 sized grayscale image which is stored at the memory has been divided into equal sized rectangular frames. After that, equal number of frames has been transferred to each processing unit. The system component which has been designed as the address unit which adjusts the required range has been employed in order to prevent a breakdown of synchronization between memory unit and processing unit.\n\nThe number of PUs has been increased to eight using the scalability property of the architecture for the second image. The original image and the transform output are shown in the Fig. 12 and Fig. 13, respectively. The reverse transformation is realized by virtue of the obtained coefficients and the resultant image after the reconstruction process is given in the Fig. 14.\n\nThe original pixel values and the reconstructed image pixel values have been compared. For comparison purposes PSNR (Peak Signal to Noise Ratio) is frequently used criterion in compression performance. RMSE graphic is supplied in Figure 15 rather than PSNR which goes to infinity for zero loss. As expected 2D RMSE is zero for all pixels.\n\nV. CONCLUSION\n\nIn this study, an efficient multiprocessor pipelined system architecture for the lossless CDF 5/3 DWT filter is proposed. This structure has a simple datapath unit, and all the operations are shared onto designed processing units.\n\nGeneral system architecture has a hierarchical structure so that an effective scalability is provided. Thanks to scalability the number of PUs can be adapted where increasing number of PUs lead to decrease in computation time since they operate in parallel. Satisfying results have been obtained after conducting some test by using different size of images.\n\nThe usage of multiplier units has been avoided by using the characteristics of CDF 5/3 filter in favor of designed architecture when considering the system hardware. The control overhead and hardware complexity has been reduced by the simple construction of the datapath and control unit.\n\nThe symmetric boundary extension method has been used for the image edge points. So, the best system performance has been tried to achieve especially in 2D image case.\n\nThe same hardware components are used twice as folded architecture to use the hardware resources efficiently in 2D case. The same hardware resources have been used for the inverse transform but only changing the shift amounts of the shifting units. In addition, the pipeline technique has been utilized at the datapath either to avoid the time delays or to provide the efficient use of hardware resources.\n\nAs a consequence, the proposed architecture performs the operations of the widely used lossless CDF 5/3 filter with less hardware components, properly. It exhibits the better performance compared to the case without pipeline technique. The proposed architecture can be adapted efficiently for different applications which are based on wavelet analysis such as DWT based image compression, signal denoising, edge detection, speech recognition, biomedical image processing etc.\n\nREFERENCES\n\n S.G. 
Mallat, \"A Theory for Multiresolution Signal Decomposition: The Wavelet Representation\", IEEE Trans. Pattern Analysis Mach. Int., vol. 11, no.7, pp. 674-693, July 1989. doi: 10.1109/34.192463\n\n S.C.B. Lo, H. Li, M.T. Freedman, \"Optimization of Wavelet Decomposition for Image Compression and Feature Preservation\", IEEE Transactions on Medical Imaging, vol. 22, no. 9, September 2003. doi: 10.1109/TMI.2003.816953\n\n L. Cheng, D.L. Liang, Z.H. Zhang, \"Popular Biorthogonal Wavelet Filters via a Lifting Scheme and its Application in Image Compression\", IEE Proc.-Vis. Image Signal Process., vol. 150, no. 4, pp. 227-232, August 2003. doi: 10.1049/ip-vis:20030557\n\n T. Park, S. Jung, \"High Speed Lattice Based VLSI Architecture of 2D Discrete Wavelet Transform for Real-Time Video Signal Processing\", IEEE Transactions on Consumer Electronics, vol. 48, no. 4, pp. 1026-1032, November 2002. doi: 10.1109/TCE.2003.1196434\n\n T. Andre, M. Antonini, M. Barlaud, R.M. Gray,\"Entropy-Based Distortion Measure and Bit Allocation for Wavelet Image Compression\", IEEE Trans. on Image Processing, vol. 16, issue. 12, pp. 3058-3064, December 2007. doi: 10.1109/TIP. 2007. 909408\n\n G. Bhatnagar, Q.M.J. Wu, B. Raman, \"A New Fractional Random Wavelet transform for Fingerprint Security\", IEEE Trans. on Systems, Man, and Cybernetics-Part A: Systems and Humans, vol. 42, no. 1, pp. 262-275, January 2012. doi: 10.1109/TSMCA.2011. 2147307\n\n I. Ram, I. Cohen, M. Elad, \"Facial Image Compression Using Patch-Ordering-Based Adaptive Wavelet Transform\", IEEE Signal Processing Letters, vol. 21, no. 10, pp. 1270-1274, October 2014. doi: 10.1109/LSP.2014.2332276\n\n C.A. Garcia, A. Otero, X. Vila, D.G. Marquez, \"A New Algorithm for Wavelet-Based Heart Rate Variability Analysis\", Elsevier, Biomedical Signal Processing and Control, 8, pp. 542-550, 2013. doi.org/10.1016/j.bspc.2013.05.006\n\n E. Causevic, R.E. Morley, M.V. Wickerhauser, A.E. Jacquin, \"Fast Wavelet Estimation of Weak Biosignals\", IEEE Transactions on Biomedical Engineering, vol. 52, no. 6, pp. 1021-1032, June 2005. doi: 10.1109/TBME.2005.846722\n\n K.G. Oweiss, A. Mason, Y. Suhail, A.M. Kamboh, K.E. Thomson, \"A scalable Wavelet Transform VLSI Architecture for Real-Time Signal Processing in High-Density Intra-Cortical Implants\", IEEE Transactions on Circuits and Systems-I: Regular Papers, vol. 54, no. 6, pp. 1266-1278, June 2007. doi: 10.1109/TCSI.2007.897726\n\n J.A.T. Machado, A.C. Costa, M.D. Quelhas, \"Wavelet Analysis of Human DNA\", Elsevier Genomics 98, pp. 155-163, 2011. doi.org/10.1016/j.ygeno.2011.05.010\n\n S. Saini, L. Dewan, \"Application of Discrete Wavelet Transform for Analysis of Genomic Sequences of Mycobacterium Tuberculosis\", Springerplus. 5: 64; 2016. doi: 10.1186/s40064-016-1668-9\n\n T. Meng, A.T. Soliman, M. Shyu, Y. Yang, S. Chen, S.S. Iyengar, J.S. Yordy, P. Iyengar, \"Wavelet Analysis in Current Cancer Genome Research: A Survey\", IEEE/ACM Transactions on Computational Biology and Bioinformatics, vol. 10, no. 6, pp. 1442-1459, November/December 2013. doi: 10.1109/TCBB.2013.134\n\n I. Rodriguez, A. Manuel-Lazaro, A. Carlosena, A. Bermudez, J. Del Rio, S.S.Panahi, \"Signal Processing in Ocean Bottom Seismographs for Refraction Seismology\", IEEE Trans. Instr. Measurement, vol. 55, no. 2, pp. 652-658, April 2006. doi: 10.1109/TIM.2006.870107\n\n J. Ma, G. Plonka, H. Chauris, \"A New Sparse Representation of Seismic Data Using Adaptive Easy-Path Wavelet Transform\", IEEE Geoscience and Remote Sensing Letters, vol. 7, no. 
3, pp. 540-544, October 2010. doi: 10.1109/LGRS.2010.2041185\n\n C.P. Uzunoglu, \"Investigation of Degradative Signals on Outdoor Solid Insulators Using Continuous Wavelet Transform\", Journal of Electrical Engineering and Technology, vol. 11, no.3, pp. 683-689, 2016. doi.org/10.5370/JEET.2016.11.3.683\n\n I.S. Uzun, A. Amira, \"Framework for FPGA-Based Discrete Biorthogonal Wavelet Transforms Implementation\", IEE Proc.-Vis. Im. Sig. Proc., vol. 153, no. 6, Dec. 2006. doi: 10.1049/ip-vis:20045080\n\n J.T. Olkkonen, H. Olkkonen, \"Discrete Lattice Wavelet Transform\", IEEE Transactions on Circuits ans Systems-II: Express Briefs, vol. 54, no. 1, pp. 71-75, January 2007. doi: 10.1109/TCSII.2006.883097\n\n H.I. Shahadi, R. Jidin, W.H. Way, Y.A. Abbas, \"Efficient FPGA Architecture for Dual Mode Integer Haar Lifting Wavelet Transform Core\", Journal of Applied Sciences 14 (5): pp. 436-444, 2014. doi: 10.3923/jas.2014.436.444\n\n K. Andra, C. Chakrabarti, T. Acharya, \"A VLSI Architecture for Lifting-Based Forward and Inverse Wavelet Transform\", IEEE Transactions on Signal Processing, vol. 50, no. 4, pp. 966-977, April 2002. doi: 10.1109/78.992147\n\n W. Sweldens, \"The Lifting Scheme: A Custom-Design Construction of Biorthogonal Wavelets\", Applied and Comp. Harmonic Analysis 3, no. 0015, pp. 186-200, 1996. doi.org/10.1006/acha.1996. 0015\n\n I. Daubechies, W. Sweldens, \"Factoring Wavelet Transforms into Lifting Steps\", The Journal of Fourier Analysis and Applications, vol. 4, issue 3, pp. 247-269, 1998. doi.org/10.1007/BFb0011095\n\n W. Zhang, Z. Jiang, Z. Gao, Y. Liu, \"An Efficient VLSI Architecture for Lifting-Based Discrete Wavelet Transform\", IEEE Transactions on Circuits and Systems-II: Express Briefs, vol. 59, no. 3, pp. 158-162, March 2012. doi: 10.1109/TCSII.2012.2184369\n\n K.A. Kotteri, S. Barua, A.E. Bell, E. Carletta, \"A Comparison of Hardware Implementations of the Biorthogonal 9/7 DWT: Convolution Versus Lifting\", IEEE Transactions on Circuits and Systems-II: Express Briefs, vol. 52, no. 5, pp. 256-260, May 2005. doi: 10.1109/TCSII.2005.843496\n\n O. Fatemi, S. Bolouki, \"Pipeline, Memory-Efficient and Programmable Architecture for 2D Discrete Wavelet Transform Using Lifting Scheme\", IEE Proc.-Circuits Devices Syst., vol. 152, no. 6, pp. 703-708, December 2005. doi: 10.1049/ip-cds:20059055\n\n J. Song, I.C. Park, \"Pipelined Discrete Wavelet Transform Architecture Scanning Dual Lines\", IEEE Transactions on Circuits and Systems-II: Express Briefs, vol. 56, no. 12, pp. 916-9, December 2009. doi: 10.1109/TCSII.2009.2035257\n\n B.K. Mohany, P.K. Meher, \"Area-Delay-Power_Efficient architecture for Folded Two-Dimensional Discrete Wavelet Transform by Multiple Lifting Computation\", IET Image Process, vol. 8, iss. 6, pp. 345-353, 2014. doi: 10.1049/iet-ipr.2012.0661\n\n G. Shi, W. Liu, L. Zhang, F. Li, \"An Efficient Folded Architecture for Lifting-Based Discrete Wavelet Transform\", IEEE Transactions on Circuits and Systems-II: Express Briefs, vol. 56, no. 4, pp. 290-294, April 2009. doi: 10.1109/TCSII.2009.2015393\n\n C.T. Huang, P.C. Tseng, L.G. Chen, \"Flipping Structure: An Efficient VLSI Architecture for Lifting-Based Discrete Wavelet Transform\", IEEE Transactions on Signal Processing\", vol. 52, no. 4, pp. 1080-1089, April 2004. doi: 10.1109/TSP.2004.823509\n\n A. Darji, S. Agrawal, A. Oza, V. Sinha, A. Verma, S.N. Merchant, A.N. 
Chandorkar, \"Dual-Scan Parallel Flipping Architecture for a Lifting-Based 2-D Discrete Wavelet Transform\", IEEE Transactions on Circuits and Systems-II: Express Briefs, vol. 61, no. 6, pp. 433-437, June 2014. doi: 10.1109/TCSII.2014.2319975\n\n G. Dillen, B. Georis, J.D. Legat, O. Cantineau, \"Combine Line-Based Architecture for the 5-3 and 9-7 Wavelet Transform of JPEG2000\", IEEE Trans. on Circuits and Systems for Video Tech., vol. 13, no. 9, pp. 944-950, September 2003. doi: 10.1109/TCSVT. 2003.816518\n\n W.J. Laan, A.C. Jalba, J.B.T.M. Roerdink, \"Accelerating Wavelet Lifting on Graphics Hardware Using CUDA\", IEEE Transactions on Parallel and Distributed Systems, vol. 22, no. 1, pp. 132-146, January 2011. doi: 10.1109/TPDS.2010.143\n\n C.E. Kozyrakis, D.A. Patterson, \"Scalable Vector Processors for Embedded Systems\", IEEE Computer Society, vol. 23, issue no. 06, pp. 36-45, Nov./Dec. 2003. doi: 10.1109/MM.2003. 1261385\n\n K.C.B. Tan, T. Arslan, \"Low Power Embedded Extension Algorithm for Lifting-Based Discrete Wavelet Transform in JPEG2000\", IEEE Electronics Letters, vol. 37, Issue. 22, pp. 1328-1330, October 2001. doi: 10.1049/el:20010915\n\n W. Jiang, A. Ortega, \"Lifting Factorization-Based Discrete Wavelet Transform Architecture Design\", IEEE Trans. Circuits Syst. for Video Tech., vol. 11, no. 5, pp. 651-657, May 2001. doi: 10.1109/76.920194\n\n M. Dali, A. Guessoum, R.M. Gibson, A. Amira, N. Ramzan, \"Efficient FPGA Implementation of High-Throughput Mixed Radix Multipath Delay Commutator FFT Processor for MIMO-OFDM\", Advances in Electrical and Computer Engineering (AECE), vol. 17, no. 1, pp. 27-38, 2017. doi: 10.4316/AECE.2017.01005\n\n A.Y. Jean-Cuellar, L. Morales-Velazquez, R.J. Romero-Troncoso, R.A. Osornio-Rios, \"FPGA-Based Embedded System Architecture for Micro-Genetic Algorithms Applied to Parameters Optimization in Motion Control\", Advances in Electrical and Computer Eng. (AECE), vol. 15, no. 1, pp. 23-32, 2015. doi: 10.4316/AECE.2015. 01004\n\n A.R. Calderbank, I. Daubechies, W. Sweldens, B.L. Yeo, \"Wavelet Transforms That Map Integers to Integers\", Applied and Computational Harmonic Analysis 5, no. HA970238, pp. 332-369, 1998. doi.org/10.1006/acha.1997.0238\n\n W. Sweldens, \"The Lifting Scheme: A Construction of Second Generation Wavelets\", SIAM Journal on Math. Anal., vol. 29, issue 2, pp. 511-546, March 1998. doi: 10.1137/ S0036141095289051\n\n W.A. Pearlman, Wavelet Image Compression, Synthesis Lectures on Image, Video, and Multimedia Processing, Alan C. Bovik, Series Editor, Morgan & Claypool Publishers, 2013. doi: 10.2200/S00464ED1V01Y201212IVM013\n\nSerap CEKLI\n\nMaltepe University, Computer Engineering Department, Istanbul, Turkey\n\[email protected]\n\nDigital Object Identifier 10.4316/AECE.2018.02003\n```TABLE I. DECISION CONTROL SIGNALS GENERATED BY DATAPATH\n\nSignal Operation\nName\n\nc1 first there sample counter\nc2_is_eo even-odd counter, adjusts the even and odd samples\nsequentially\ncs sample counter, gets selected number of input samples\n\nTABLE II. 
DATAPATH OPERATIONS\n\nOperation Related Datapath Operation\nNumber\n\nc1<= 0\n1 c2<= 0\ncsample <= 0\neven_left <= datasamples\n2 c1<= c1 + 1\ncsample <= csample + 1\nodd_old <= datasamples\n3 c1<= c1 + 1\ncsample <= csample + 1\n4 even_right <= datasamples\nc1<= c1 + 1\nc1<= 0\n5 c2<= 0\ncsample <= 0\nodd_new <= odd_old + ((even_left + even_right) >> 1)\n6 odd_reg <= datasamples\nc1<= 0\ncsample <= csample + 1\neven_new <= even left + ((odd_new + odd_new) >> 2)\neven_left <= even right\n7 even_right <= datasamples\nodd_old <= oddnew\ncsample <= csample + 1\nodd_new <= odd_reg + ((even_left + even_right) >> 1)\n8 odd_temp <= datasamples\nc2<= c2 + 1\ncsample <= csample + 1\neven_new <= even_left + ((odd_old + odd_new) >> 2)\neven_left <= even_right\n9 even_right <= datasamples\nodd_old <= odd_new\nc2 <= 0\ncsample <= csample + 1\nodd_new <= odd_reg + ((even left + even left) >> 1)\n10 odd_reg <= datasamples\nc2<= c2 + 1\ncsample <= csample + 1\n11 even_new <= even_left + ((odd old + odd new) >> 2)\nc2<= 0\n```\nCOPYRIGHT 2018 Stefan cel Mare University of Suceava\nNo portion of this article can be reproduced without the express written permission from the copyright holder."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9037864,"math_prob":0.90643764,"size":48986,"snap":"2021-04-2021-17","text_gpt3_token_len":11381,"char_repetition_ratio":0.16820465,"word_repetition_ratio":0.0488677,"special_character_ratio":0.23496509,"punctuation_ratio":0.15499097,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9735351,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-20T04:53:32Z\",\"WARC-Record-ID\":\"<urn:uuid:a26b22db-f91b-4266-9d93-c1e40e2e67bc>\",\"Content-Length\":\"88771\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:72772df9-9711-4512-aa21-1245e2bdbbcb>\",\"WARC-Concurrent-To\":\"<urn:uuid:33c0a6d9-a4db-4336-a854-8bc64ca2de21>\",\"WARC-IP-Address\":\"45.35.33.117\",\"WARC-Target-URI\":\"https://www.thefreelibrary.com/A+Computationally+Efficient+Pipelined+Architecture+for+1D%2F2D+Lifting...-a0562766622\",\"WARC-Payload-Digest\":\"sha1:TW53AYRDBSRNEZJEIQ4Y4GY2WZGOAMYE\",\"WARC-Block-Digest\":\"sha1:O7HUTF2UGUMET7CK6ATHJDCUTXV7W3I4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703519883.54_warc_CC-MAIN-20210120023125-20210120053125-00024.warc.gz\"}"} |
https://www.dummies.com/article/academics-the-arts/study-skills-test-prep/act/act-geometry-test-triangle-trauma-268033/ | [
"##### ACT For Dummies: Book + 3 Practice Tests Online + Flashcards, 7th Edition",
null,
"Many of the geometry problems on the ACT require you to know a lot about triangles. Remember the facts and rules about triangles given here, and you’re on your way to acing geometry questions.\n\n## Classifying triangles\n\nTriangles are classified based on the measurements of their sides and angles. Here are the types of triangles you may need to know for the ACT:\n• Equilateral: A triangle with three equal sides and three equal angles.\n• Isosceles: A triangle with two equal sides and two equal angles. The angles opposite equal sides in an isosceles triangle are also equal.\n• Scalene: A triangle with no equal sides and no equal angles.",
null,
"Equilateral, isosceles, and scalene triangles.\n\n## Sizing up triangles\n\nWhen you’re figuring out ACT questions that deal with triangles, you need to know these rules about the measurements of their sides and angles:\n• In any triangle, the largest angle is opposite the longest side.",
null,
"The largest angle is opposite the longest side.\n• In any triangle, the sum of the lengths of two sides must be greater than the length of the third side.",
null,
"The sum of the lengths of two sides of a triangle is greater than the length of the third side.\n• In any type of triangle, the sum of the interior angles is 180 degrees.",
null,
"The sum of the interior angles of a triangle is 180 degrees.\n• The measure of an exterior angle of a triangle is equal to the sum of the two remote interior angles.",
null,
"The measure of an exterior angle is equal to the sum of the two remote interior angles.\n\n## Zeroing in on similar triangles\n\nSeveral ACT math questions require you to compare similar triangles. Similar triangles look alike but are different sizes. Here’s what you need to know about similar triangles:\n• Similar triangles have the same angle measures. If you can determine that two triangles contain angles that measure the same degrees, you know the triangles are similar.\n• The sides of similar triangles are in proportion. For example, if the heights of two similar triangles are in a ratio of 2:3, then the bases of those triangles are also in a ratio of 2:3.",
null,
"Similar triangles have proportionate sides.\n\nDon’t assume that triangles are similar on the ACT just because they look similar to you. The only way you know two triangles are similar is if the test tells you they are or you can determine that their angle measures are the same."
] | [
null,
"https://www.dummies.com/wp-content/uploads/act-for-dummies-7th-edition-cover-9781119612643-199x255.jpg",
null,
"https://www.dummies.com/wp-content/uploads/act-equilateral-triangles.jpg",
null,
"https://www.dummies.com/wp-content/uploads/act-largest-angle-triangle.jpg",
null,
"https://www.dummies.com/wp-content/uploads/act-sum-sides-triangle.jpg",
null,
"https://www.dummies.com/wp-content/uploads/act-interior-angles-sum.jpg",
null,
"https://www.dummies.com/wp-content/uploads/act-exterior-angles-measure.jpg",
null,
"https://www.dummies.com/wp-content/uploads/act-similar-triangles.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6456684,"math_prob":0.99361366,"size":26081,"snap":"2023-40-2023-50","text_gpt3_token_len":7017,"char_repetition_ratio":0.13532999,"word_repetition_ratio":0.7780679,"special_character_ratio":0.32161343,"punctuation_ratio":0.25463322,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9993347,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,2,null,2,null,2,null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-23T17:43:02Z\",\"WARC-Record-ID\":\"<urn:uuid:51db2619-c40f-4f06-9455-f590103c6778>\",\"Content-Length\":\"80394\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:08b98d0c-b790-4ed9-af37-320dce0387f7>\",\"WARC-Concurrent-To\":\"<urn:uuid:632396b0-57c9-4769-b304-78c395b1be5d>\",\"WARC-IP-Address\":\"172.64.151.92\",\"WARC-Target-URI\":\"https://www.dummies.com/article/academics-the-arts/study-skills-test-prep/act/act-geometry-test-triangle-trauma-268033/\",\"WARC-Payload-Digest\":\"sha1:P3F67TU5PHJM66FAT6PZP4Z4E5QHCEIX\",\"WARC-Block-Digest\":\"sha1:H5IA2V6VZRQ6G5DKZTYVRKIZ6M2YUUGN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506528.19_warc_CC-MAIN-20230923162848-20230923192848-00627.warc.gz\"}"} |
https://www.mathdoubts.com/cot-values/ | [
"# Cotangent values\n\nThe trigonometric function cotangent gives a value for every angle of a right triangle and each value is called the cotangent value. In trigonometry, there are many cot values but five cot values are used mostly and they are used to derive the remaining cot function values mathematically.\n\n## Table\n\nThe special values of cotangent function for some standard angles are given here in a tabular form with proofs. The cot chart is really helpful to us in mathematics and everyone who studies trigonometry should remember them.\n\nAngle $(\\theta)$ Cot value $(\\cot{\\theta})$\n$0^°$ $0$ $0^g$ $\\infty$ $\\infty$\n$30^°$ $\\dfrac{\\pi}{6}$ $33\\dfrac{1}{3}^g$ $\\sqrt{3}$ $1.7321$\n$45^°$ $\\dfrac{\\pi}{4}$ $50^g$ $1$ $1$\n$60^°$ $\\dfrac{\\pi}{3}$ $66\\dfrac{2}{3}^g$ $\\dfrac{1}{\\sqrt{3}}$ $0.5774$\n$90^°$ $\\dfrac{\\pi}{2}$ $100^g$ $0$ $0$\nLatest Math Topics\nEmail subscription\nMath Doubts is a best place to learn mathematics and from basics to advanced scientific level for students, teachers and researchers. Know more"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7988205,"math_prob":0.99997604,"size":1044,"snap":"2020-24-2020-29","text_gpt3_token_len":334,"char_repetition_ratio":0.13365385,"word_repetition_ratio":0.0,"special_character_ratio":0.34195402,"punctuation_ratio":0.044198897,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999603,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-13T15:38:54Z\",\"WARC-Record-ID\":\"<urn:uuid:48df3485-ee09-4403-ab3b-aaf1e1381694>\",\"Content-Length\":\"15071\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:00f2b498-5e30-44a5-b7ef-d8f86751ed3b>\",\"WARC-Concurrent-To\":\"<urn:uuid:445485a7-4ac4-4750-9e87-5006eb4ca149>\",\"WARC-IP-Address\":\"66.33.217.231\",\"WARC-Target-URI\":\"https://www.mathdoubts.com/cot-values/\",\"WARC-Payload-Digest\":\"sha1:CA6C3UXPJCWEJOE4SKZJRWK76CFBRXY5\",\"WARC-Block-Digest\":\"sha1:RKK432A6YI4ETWJDB5OZMPMQHBMUQMZV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593657145436.64_warc_CC-MAIN-20200713131310-20200713161310-00402.warc.gz\"}"} |
https://www.physicsforums.com/threads/magnetic-flux-vs-temperature-relationship-in-ferromagnets.923380/ | [
"# Magnetic Flux vs Temperature Relationship in Ferromagnets\n\nHi everyone,\n\nI have recently done an experiment testing the effect of temperature on magnetism through measurement of magnetic flux at a constant distance away from the measuring device. I used a range of 0-90 C for neodymium and bar magnets, and found a reasonably linear trend, with a similar slope to what I was expecting from its coefficient of residual induction.\n\nHowever, when I inserted linear trendlines for my data, they gave an unreasonably high curie point (when there was 0 flux density) for the bar magnets, indicating the trend could not have been linear in order for the flux density to be 0 at the Curie point. Even more perplexing was the fact that the relationship DID seem to be linear for the neodymium magnets!\n\nAfter I did some more research, I found that the temperature coefficient of residual induction was not necessarily linear, especially outside of the 0-100 C range. Does anyone have formulas which describes this change, or formulas for the relationship between magnetic flux density and temperature? I've researched for hours, but the best relationship I could come up with was the linear approximation from the temperature coefficient of residual induction.\n\n•",
null,
"astrocat98\n\nHomework Helper\nGold Member\nIn addition, I believe general formulas like this in regards to phase transitions are covered in the book by Nigel Goldenfeld on the Renormalization Group: https://www.amazon.com/dp/0201554097/?tag=pfamazon01-20 The renormalization group theory actually predicts such an equation with the exponent ## \\gamma=.32 ##, etc., with the 3-D Ising model. (see p.111 of the book=I have a copy of this book and it contains some very interesting calculations and info.)\n@vanhees71 and @blue_leaf77 . A very interesting topic that the OP posted. Might you have any additional inputs?\n\nLast edited:\nBy additional inputs, do you mean did I measure any other variables other than magnetic flux? Unfortunately, I didn't, and wish I had in hindsight. How did you find the equation that you posted above [M/Mo = ((Tc-T)/Tc)^0.36]? The picture of the graph doesn't even give a formula. Could you please send me the information on the renormalization group theory and the equation? I don't actually have the Nigel Goldenfeld book.\n\nWould I be able to calculate a value of y from my datasets, or should I just use the approximation of y = 0.32 given by the book? I can see how you can convert B to H by using M = Mr x Mo and substituting M into B = M x H, but how would I find values of B outside my measured temperature range (0-90 C)? Is this covered in the book as well?\n\nHomework Helper\nGold Member\n\nLast edited:\nThanks so much! This has really helped. One last thing - is M0 the magnetisation at T = 0, or is it something else entirely? Would I be able to find out what M0 is for my dataset?\n\nI have M = µ0 x Br\nso Br = M/µ0\n\nand B =",
"## \frac{B_r}{2}\left(\frac{L+X}{\sqrt{(L+X)^2+R^2}}-\frac{X}{\sqrt{X^2+R^2}}\right) ## for my cylindrical neodymium magnet, where L = length of magnet, R = radius of magnet and X = distance from pole, all in mm.\nso B = ## \frac{M}{2\mu_0}\left(\frac{L+X}{\sqrt{(L+X)^2+R^2}}-\frac{X}{\sqrt{X^2+R^2}}\right) ##\n\nFrom the relationship you gave me, M/M0 = ((Tc-T)/Tc)^0.36.\n\nI know what temperatures I measured the flux density at, so I can find what fraction of the Curie temperature that is and use the graph to find what M/M0 was for that temperature. How do I find M0 so I can substitute it into M/M0 = ((Tc-T)/Tc)^0.36, to find M, then substitute M into my flux formula? Could I find what M0 is using T/Tc = 0?\n\n#### Attachments\n\n•",
null,
"Homework Helper\nGold Member\nVery good. Yes, ## M_o ## is the magnetization of the permanent magnet at ## T=0 ## K, and I'm pretty sure, at least for best results, the geometry needs to be a long cylindrical shape. (and not spherical, etc.). Your data point at the lowest temperature you measured will be reasonably close to this value. ## \\\\ ## ( ## M_o ## for the material should also be available in a google. Let me see if it shows up... They give 1.6 T, but that is for a very high quality special compound ## Nd_2Fe_{14}B ##. If you got something in the neighborhood of 1.0 T, it is probably reasonably accurate...Note: For units, use ## B=\\mu_o H+M ##, so that ## B_r=M ##. That's the simplest wat to compute it. (Sometimes the equation ## B=\\mu_o H +\\mu_o M ## is also used=this makes ## M=B_r/\\mu_o ##. Your ## M=\\mu_o B_r ## is incorrect) . ## \\\\ ## Yes, you could also extrapolate back to ## T/T_C=0 ##. The one problem you will likely have is you didn't go to extremely high temperatures. Note: My computer was unable to read the .png data sets. I'm very curious to know though, what did you get for ## B_r=M ##? ## \\\\ ## One additional item you might find of interest, since you are on the subject of magnetism is a PhysicsForums Insights article that I recently authored. https://www.physicsforums.com/insights/permanent-magnets-ferromagnetism-magnetic-surface-currents/ I didn't go into a discussion of the quantum mechanical exchange effect in this article, but it might give you some introduction into permanent magnets.\n\nLast edited:\nI have been working through this Experiment with @Scoopadifuego and we are both very, very greatful for your help\n\n•",
null,
"Does anyone know of any values of magnetism (mT) at 0K for neodymium and Iron magnets?\n\nHomework Helper\nGold Member\nDoes anyone know of any values of magnetism (mT) at 0K for neodymium and Iron magnets?\nTry this \"link\": https://en.wikipedia.org/wiki/Remanence ## \\\\ ## I'm curious=what did you get when you computed ## B_r=M_o ## from your data?\n\nBy substituting Br = Mμ0 and M = M0 ((Tc - T)/Tc)^0.36 into B =",
null,
"(",
null,
"-",
null,
") and rearranging, I got M0 = 0.1233T, but it seems too low, as you mentioned above it should be somewhere around 1T. This would give Br = 1.55 x 10^-7 ((583.15-T)/583.15)^0.36, which doesn't make sense at all, as Br should be about 1T for neodymium magnets. Also, when I substitute the value of M0 into my flux formula, it gives B = 1.12 x 10^-10 (((583.15-T)/583.15)^0.36, which is thousands of times lower than what my data actually is.\n\nI've checked the calculations multiple times and can't see anything wrong, so I'm figuring I've messed up somewhere in constructing my formula.\n\nHomework Helper\nGold Member\nBy substituting Br = Mμ0 and M = M0 ((Tc - T)/Tc)^0.36 into B = View attachment 209841 ( View attachment 209842 - View attachment 209843 ) and rearranging, I got M0 = 0.1233T, but it seems too low, as you mentioned above it should be somewhere around 1T. This would give Br = 1.55 x 10^-7 ((583.15-T)/583.15)^0.36, which doesn't make sense at all, as Br should be about 1T for neodymium magnets. Also, when I substitute the value of M0 into my flux formula, it gives B = 1.12 x 10^-10 (((583.15-T)/583.15)^0.36, which is thousands of times lower than what my data actually is.\n\nI've checked the calculations multiple times and can't see anything wrong, so I'm figuring I've messed up somewhere in constructing my formula.\nWhat did you measure for ## B ## at a distance from one of the pole faces of the magnet? Also, what is the ## L ## for the magnet, and how far was your distance ## X ## from the pole endface?\n\n•",
null,
"Lussimio16\nB = 3.47 x 10^-4 T at 20 degrees C, so 293.15K at about 25mm away from the meter, where L = 39mm and R = 3mm.\n\nHomework Helper\nGold Member\nThe ## M_o=B_r=.12 ##T looks sort of ok. I think you got the arithmetic correct=I estimated it, but didn't compute it with a calculator. I presume in the data you gave me that ## X=25 ## mm. It might not be a very high quality magnet that you have. Meanwhile, the formula reads ## B_r=M_o ## without any ## \\mu_o ##, at least in the units I have been using. If you write ## .12=M_o (1-295/583)^{.36}=M_o (.8) ##(approximately), so that ## M_o=.15 ## T which could be reasonable for a low quality Neodynium magnet. It could perhaps also be due to errors in the measurement of the magnetic field ## B ##.\n\n@Scoopadifuego, the Neodymium magnet we used was pretty low quality in comparison to other Neodymium magnets so this is actually quite justifiable.\n\n•",
null,
"Homework Helper\nGold Member\nFrom all appearances, your experiment was pretty successful. I don't know how accurate your meter is that measured the magnetic field strength, but if it was accurate, then you might simply have a Neodynium magnet of only moderate strength. Your data collection and formulas look good. :)\n\nI'd just like to extend my appreciation for your help again. You have been fantastic ! Thank you for your guidance :) @CharlesLink\n\n•",
null,
"Homework Helper\nGold Member\n\nLast edited:\n@Charles Link Thank you so much for helping @zac_physics_student and me with this problem. The graphs turned out well, and I wouldn't have been able to do this without your help.\n\n#### Attachments\n\n•",
null,
"Homework Helper\nGold Member\nHomework Helper\nGold Member\n@zac_physics_student and @Scoopadifuego I decided to give the compass measurement idea a try that I mentioned in post #18. I went out and bought a compass and some (a stack of 15) strong disc magnets (they were inexpensive) from which I made a cylindrical bar magnet that was 2.75\" long and ## D= 11/16 ##\" in diameter. With the magnet at a distance of 14\" from the compass, the compass needle was pulled westward by 45 degrees, and likewise, upon turning the magnet around, the compass needle was pushed eastward by 45 degrees at the same distance. ## \\\\ ## The pole method made for simple calculations: (Alternatively, you could use the formula you did that is derived from the magnetic surface currents. A Taylor series expansion of the denominators with the square roots in your expression with ## L ## and/or ## X>>R ## would give the very same formula for ## H ## and ## B ## that I have here.) ## \\\\ ## ## \\sigma_m=M ## and ## A=\\pi D^2/4 ##. ## \\\\ ## Meanwhile ## H=\\frac{1}{4 \\pi \\mu_o} \\sigma_m A(\\frac{1}{s_1^2}-\\frac{1}{s_2^2} ) ## where ## s_1=14##\" and ##s_2=16.75 ##\". ## \\\\ ## Using ## B=\\mu_o H =.5 E-4 ## T,(.5 gauss=.5 E-4 T is the approximate strength of the Earth's magnetic field), gives the result that ## M=1.1 ## T.## \\\\ ## *********************************************************************************************************************************************************** ## \\\\ ## And one additional note: The Taylor series calculation is ## \\frac{L+X}{\\sqrt{(L+X)^2+R^2}}=\\frac{1}{\\sqrt{1+(\\frac{R}{L+X})^2}} \\approx 1-\\frac{1}{2}(\\frac{R}{L+X})^2 ## and similarly for the term without any ## L ##. The result is that this calculation for the on-axis ## B ## from the magnetic surface currents agrees with the on-axis ## B ## computed by the pole method in the approximation that the poles can be treated like point sources. ## \\\\ ## ******************************************************************************************************************** ## \\\\ ## Editing: Additional input: The value that I used for the Earth's magnetic field as .5 gauss in my location might be somewhat high. The official value they have on-line for the north horizontal component is only .2 gauss and the vertical component is .5 gauss, but only the horizontal component will affect the compass when placed horizontal. Thereby, the ## M ## of the magnet that I used, found by computation, would be approximately .4 T rather than 1.1 T.\n\nLast edited:\n•",
null,
"zac_physics_student and vanhees71\nLussimio16\nThe ## M_o=B_r=.12 ##T looks sort of ok. I think you got the arithmetic correct=I estimated it, but didn't compute it with a calculator. I presume in the data you gave me that ## X=25 ## mm. It might not be a very high quality magnet that you have. Meanwhile, the formula reads ## B_r=M_o ## without any ## \\mu_o ##, at least in the units I have been using. If you write ## .12=M_o (1-295/583)^{.36}=M_o (.8) ##(approximately), so that ## M_o=.15 ## T which could be reasonable for a low quality Neodynium magnet. It could perhaps also be due to errors in the measurement of the magnetic field ## B ##.\nWhat is Br?\nI am doing a similar experiment for an IB assessment, but it turned out to be more complicated than I thought... I measured the field strength of a magnet cooling down from 80C to room temp, but I am struggling with how to process the data for the lab report\n\nLast edited:\nHomework Helper\nGold Member\nWhat is Br?\nI am doing a similar experiment for an IB assessment, but it turned out to be more complicated than I thought... I measured the field strength of a magnet cooling down from 80C to room temp, but I am struggling with how to process the data for the lab report\nThis should give you a good introduction into the various parameters involved in the magnetic field from a magnetized cylinder.\n\nEdit: What does your data look like? Some detail might help in determining what you need to do to process it. With Curie temperatures ## T_C ## that can be above 1000 degrees C, a temperature of 80 degrees C may or may not give much change in the magnetic field from room temperature. See especially posts 2 and 3 above.\n\nLast edited:\n•",
null,
"vanhees71\nPerson",
null,
"This is data I took from an NdFeB magnet where temperature in K is on the x and B in millitesla is on the y. The data doesn’t seem to fit what you all are saying and I wanted to get y’all’s opinions.\n\n•",
null,
"Person\nIn this experiment the magnet is kept at a fixed distance away from the gaussmeter.\n\n•",
null,
"Homework Helper\nGold Member\nThe data looks good. Looks like the Curie temperature ## T_C ## is around 500K. The theoretical formula in post 2, ## M=M_o ((T_C-T)/T_C))^{\\gamma} ## , would have a more rapid decline at the Curie temperature (see the graphs of post 19), but the theoretical formula may not be exact.\n\nJust an additional comment: The mechanism for a permanent magnet is a rather complex one, and it shouldn't be too surprising if there are some differences between experiment and the formula that comes from group theory and critical phenomena.\n\nOne more item: a google gives ## T_C=583K ## for NdFeB magnets, so your data is certainly in the right ballpark.\n\nLast edited:\n•",
null,
"vanhees71 and Person\nHomework Helper\nGold Member\nIn response to a question from @IBphysics There are basically 2 different types of MKS units for the magnetization ## M ## in common use: The older textbooks like to use the formula ## B=\\mu_o H+M ##, while the newer textbooks seem to be switching to ## B=\\mu_o H+\\mu_o M ##. For the first formula, ## M ## has units of ## T ##, while the second one will have units of ## A/m ##.\n\nI prefer using ## T ##, but some others prefer the ## A/m ##.\n\nLast edited:\n•",
null,
"vanhees71"
] | [
null,
"data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
null,
"https://www.physicsforums.com/attachments/upload_2017-8-24_9-53-17-png.209620/",
null,
"https://www.physicsforums.com/attachments/upload_2017-8-24_9-53-17-png.209621/",
null,
"https://www.physicsforums.com/attachments/upload_2017-8-24_9-53-17-png.209619/",
null,
"https://www.physicsforums.com/attachments/upload_2017-8-24_9-56-8-png.209624/",
null,
"https://www.physicsforums.com/attachments/upload_2017-8-24_9-56-8-png.209623/",
null,
"https://www.physicsforums.com/attachments/upload_2017-8-24_9-56-8-png.209622/",
null,
"data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
null,
"data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
null,
"https://www.physicsforums.com/attachments/upload_2017-8-24_9-53-17-png-png.209841/",
null,
"https://www.physicsforums.com/attachments/upload_2017-8-24_9-53-17-png-png.209842/",
null,
"https://www.physicsforums.com/attachments/upload_2017-8-24_9-53-17-png-png.209843/",
null,
"data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
null,
"data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
null,
"data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
null,
"data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
null,
"data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
null,
"data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
null,
"https://www.physicsforums.com/attachments/1634407380979-jpeg.290791/",
null,
"data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
null,
"data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
null,
"data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
null,
"data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9680423,"math_prob":0.62187904,"size":2373,"snap":"2023-14-2023-23","text_gpt3_token_len":472,"char_repetition_ratio":0.12157028,"word_repetition_ratio":0.97686374,"special_character_ratio":0.19258323,"punctuation_ratio":0.0771028,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9688146,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46],"im_url_duplicate_count":[null,null,null,1,null,1,null,1,null,1,null,1,null,1,null,null,null,null,null,1,null,1,null,1,null,null,null,null,null,null,null,null,null,null,null,null,null,1,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-26T06:58:05Z\",\"WARC-Record-ID\":\"<urn:uuid:e2b076ac-3c24-485e-9f87-1fa7754c0a4b>\",\"Content-Length\":\"175275\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a2402937-cebe-4e2d-a253-2f28eb5b3b60>\",\"WARC-Concurrent-To\":\"<urn:uuid:ef64bf66-fc81-4a84-823f-75e41db2b59d>\",\"WARC-IP-Address\":\"104.26.15.132\",\"WARC-Target-URI\":\"https://www.physicsforums.com/threads/magnetic-flux-vs-temperature-relationship-in-ferromagnets.923380/\",\"WARC-Payload-Digest\":\"sha1:RWK5J3ME4GQSVEFPMP5ZQUTNHTOC4A4U\",\"WARC-Block-Digest\":\"sha1:BGGMUJSJVXQAY3T6JFIV2TK3W6ZZOMOP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296945433.92_warc_CC-MAIN-20230326044821-20230326074821-00058.warc.gz\"}"} |
https://onepetro.org/SPEGTS/proceedings-abstract/93GTS/All-93GTS/SPE-26179-MS/54889 | [
"Abstract\n\nThis paper presents an \"equivalent drawdown time\" for hydraulically fractured wells. This new equivalent time is derived from a general elliptical flow model. This new equivalent time is helpful in post-fracture pressure buildup test analysis for wells with finite-conductivity fractures including wellbore storage and fracture-face skin. Examples are provided.\n\nIntroduction\n\nIn 1980, Agarwal derived an \"equivalent drawdown time\" to account for producing time effects when drawdown type curves are used to analyze pressure buildup and other test data. Agarwal's equivalent drawdown time was derived for radial flow by assuming that the logarithmic approximation to the original solution is appropriate. In the rest of this paper, we call Agarwal's equivalent drawdown time radial equivalent time.\n\nSince the introduction of Agarwal's radial equivalent drawdown time, its importance has been widely recognized in well test analysis. Similarly, an equivalent drawdown time for linear flow problems, which we call linear equivalent time in the remainder of this paper, has been introduced.\n\nFor more than a decade, engineers have routinely used radial equivalent time or linear equivalent time to analyze well test data for hydraulically fractured wells due to the lack of a suitable equivalent drawdown time for such wells. Engineers find that radial and linear equivalent times work under some specific conditions and do not work under other conditions.\n\nFluid flow toward a well with a hydraulic fracture is an example of elliptical flow problems. In this paper, we present a method to determine \"equivalent drawdown time\" for pressure buildup test analysis for hydraulically fractured wells. We derived this method from a general elliptical flow model. We call this new equivalent drawdown time elliptical equivalent time. At \"early\" flow times, this elliptical equivalent drawdown time approaches the equivalent drawdown time derived for linear flow problems. At \"large\" flow times, this new equivalent drawdown time approaches Agarwal's equivalent drawdown time derived for radial flow problems.\n\nWe also provide examples to show how to use our elliptical equivalent time for hydraulically fractured wells.\n\nMODEL DESCRIPTION\n\nWe consider a hydraulic fracture in a single-layer, isotropic reservoir. The reservoir can be either finite or infinite. The fracture shape is elliptical. The fracture has a maximum width, bmax, at the wellbore and half-length, Lf. Fig. 1 is a schematic of the physical model.\n\nThe half focal-length of the elliptical fracture is L, which we treat as equal to the fracture half-length, Lf, for practical purposes. The fracture conductivity is generally finite; an infinite conductivity fracture is a limiting case of finite conductivity fractures.\n\nIf the reservoir is finite, we assume that the outer reservoir boundary is also elliptical and is \"confocal\" with the elliptical fracture.\n\nThe reservoir can be either homogeneous or elliptically composite. For an elliptically composite reservoir, each interface between two adjacent zones is assumed to be confocal with the elliptical fracture.\n\nP. 405^\n\nThis content is only available via PDF.\nYou can access this article if you purchase or spend a download."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9003572,"math_prob":0.9765677,"size":3092,"snap":"2021-21-2021-25","text_gpt3_token_len":605,"char_repetition_ratio":0.20045337,"word_repetition_ratio":0.034782607,"special_character_ratio":0.17626132,"punctuation_ratio":0.09596929,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9795652,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-15T06:50:53Z\",\"WARC-Record-ID\":\"<urn:uuid:2f535fd1-9672-4c46-b315-84fcbcaafc49>\",\"Content-Length\":\"73597\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:07a859a2-f576-40da-b21d-68043364f63d>\",\"WARC-Concurrent-To\":\"<urn:uuid:388efcd9-5b09-42a3-af64-8a5a7722079e>\",\"WARC-IP-Address\":\"52.224.196.54\",\"WARC-Target-URI\":\"https://onepetro.org/SPEGTS/proceedings-abstract/93GTS/All-93GTS/SPE-26179-MS/54889\",\"WARC-Payload-Digest\":\"sha1:WQDXBGFMM5CWBVMPT457EBKKHDGFLO2G\",\"WARC-Block-Digest\":\"sha1:BEBGTWXEZPPDPPCXXAVS4HZRPKRWR2RT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487617599.15_warc_CC-MAIN-20210615053457-20210615083457-00359.warc.gz\"}"} |
http://clay6.com/qa/10606/the-volume-of-a-cube-is-1-83-cm-3-find-the-volume-of-25-such-identical-cube | [
"",
null,
"# The volume of a cube is $1.83 \\;cm^3$. Find the volume of $25$ such identical cubes to correct number of significant figures.\n$(a)\\;45.9 \\quad (b)\\;45.75 \\quad (c)\\;46 \\quad (d)\\;45.8$"
] | [
null,
"http://clay6.com/images/down_arrow_square.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5145943,"math_prob":0.9997266,"size":734,"snap":"2020-34-2020-40","text_gpt3_token_len":223,"char_repetition_ratio":0.10958904,"word_repetition_ratio":0.034188036,"special_character_ratio":0.26975477,"punctuation_ratio":0.13986014,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9943169,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-25T00:23:12Z\",\"WARC-Record-ID\":\"<urn:uuid:4c919d08-a1ba-4aa5-ae32-5986ad95c143>\",\"Content-Length\":\"18400\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:18250188-134f-4b99-a685-a2c7059a66e3>\",\"WARC-Concurrent-To\":\"<urn:uuid:393836f3-91d4-4d98-812f-a4529317a076>\",\"WARC-IP-Address\":\"139.162.17.55\",\"WARC-Target-URI\":\"http://clay6.com/qa/10606/the-volume-of-a-cube-is-1-83-cm-3-find-the-volume-of-25-such-identical-cube\",\"WARC-Payload-Digest\":\"sha1:3Z4FYS56LZ4OPVV75WSVAXW34Q66JX4P\",\"WARC-Block-Digest\":\"sha1:BGHSQPFZZN23PUPZVTTRBD47NX2NVPGR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400221382.33_warc_CC-MAIN-20200924230319-20200925020319-00166.warc.gz\"}"} |
https://www.studiestoday.com/concept-linear-equations-two-variables-cbse-class-9-concepts-linear-equations-two-variables-1-213342 | [
"# CBSE Class 9 Mathematics Linear Equations In Two Variables Notes Set B\n\nCBSE Class 9 Concepts for Linear Equations in Two Variables (1). Learning the important concepts is very important for every student to get better marks in examinations. The concepts should be clear which will help in faster learning. The attached concepts made as per NCERT and CBSE pattern will help the student to understand the chapter and score better marks in the examinations.\n\nChapter 4:\n\nLinear Equations in Two Variables\n\nChapter Notes\n\nTop Definitions\n\n1. An equation of the form ax + by + c = 0, where a, b and c are real numbers, such that a and b are not both zero, is called a linear equation in two variables.\n\n2. A linear equation in two variables is represented geometrically by a straight line the points of which make up the collection of solutions of equation. This is called the graph of the linear equation.\n\nTop Concepts\n\n1. A linear equation in two variables has infinitely many solutions.\n\n2. The graph of every linear equation in two variables is a straight line.\n\n3. x = 0 is the equation of the y – axis and y = 0 is the equation of the x–axis.\n\n4. The graph of x = k is a straight line parallel to the y –axis.\n\n5. The graph of y = k is a straight line parallel to the x – axis.\n\n6. An equation of the type y = mx represents a line passing through the origin, where m is a real number.\n\n7. Every point on the line satisfies the equation of the line and every solution of the equation is a point on the line.\n\n8. The solution of a linear equation is not effected when:\n\n(i) The same number is added or subtracted from both the side of an equation.\n\n(ii) Multiplying or dividing both the sides of the equation by the same non zero number.",
null,
"",
null,
"## Tags:\n\nClick for more Linear Equations in two variables Study Material\n\n## Latest NCERT & CBSE News\n\nRead the latest news and announcements from NCERT and CBSE below. Important updates relating to your studies which will help you to keep yourself updated with latest happenings in school level education. Keep yourself updated with all latest news and also read articles from teachers which will help you to improve your studies, increase motivation level and promote faster learning\n\n### Training Programme on the Alternative Academic Calendar\n\nIn view of the extraordinary situation prevailing in the world due to COVID-19 pandemic, the CBSE Board had announced the ‘Revised Academic Curriculum for classes 9- 12 for the session 2020-21’. The reduced topics will not be part of the internal assessment or for the...\n\n### CBSE Reduced Syllabus Class 9 and 10\n\nThe prevailing health emergency in the country and at different parts of the world as well as the efforts to contain the spread of Covid-19 pandemic has resulted in loss of class room teaching due to closure of schools. Therefore the CBSE Board has decided to revise...\n\n### FAQs CBSE Board Exam Results Class 10 and 12\n\nFAQs CBSE Board Exam Results Class 10 and 12 Q.1. What does the term RT in the marksheet mean? Ans.The term RT means REPEAT IN THEORY. This is the term used from 2020 instead of FAIL IN THEORY(FT) Q.2. What does the term RP in the marksheet mean? Ans.The term RP means...\n\n### Board Cancelled Official CBSE Statement\n\nKeeping in view the requests received from various State Governments and the changed circumstances as on date, following has been decided- 1. Examinations for classes X and XII which were scheduled from 1st July to 15th, 2020 stand cancelled. 2. Assessment of the...\n\n### Re verification of Class 10 Board Exams Marks\n\nModalities and Schedule for Secondary School (Class X) Examinations, 2020 in the subjects whose examinations have been conducted by CBSE for the processes of (I) Verification of Marks (II) Obtaining Photocopy of the Evaluated Answer Book(s) (Ill) Re-evaluation of Marks...\n\n### Cogito and The Question Book A series on Thinking skills by CBSE\n\nThe Board has dedicated this academic session 2020-21 for ‘Competency Based Learning’. Skills connected to Critical & Creative thinking, Problem Solving, Collaboration and Communication are core to successful living in the 21st Century. To focus on the importance...\n\n×"
] | [
null,
"https://www.studiestoday.com/sites/default/files/images1/class_9_maths_concept_5.PNG",
null,
"https://www.studiestoday.com/sites/default/files/images1/class_9_maths_concept_5a.PNG",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.92917055,"math_prob":0.89002335,"size":1833,"snap":"2020-34-2020-40","text_gpt3_token_len":411,"char_repetition_ratio":0.17550574,"word_repetition_ratio":0.11014493,"special_character_ratio":0.22967812,"punctuation_ratio":0.09115282,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97161543,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-08T22:46:00Z\",\"WARC-Record-ID\":\"<urn:uuid:582fb386-5769-4f7a-a0ef-5d8b482c2240>\",\"Content-Length\":\"89901\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3de3e06f-683e-461c-9677-91cf7e4a6df9>\",\"WARC-Concurrent-To\":\"<urn:uuid:38ef5788-9bd3-47ff-ae4a-145c0f6e2d16>\",\"WARC-IP-Address\":\"192.124.249.102\",\"WARC-Target-URI\":\"https://www.studiestoday.com/concept-linear-equations-two-variables-cbse-class-9-concepts-linear-equations-two-variables-1-213342\",\"WARC-Payload-Digest\":\"sha1:ODCRKDYAV7YZVTH3AFUA3DW6Q4MLVIYZ\",\"WARC-Block-Digest\":\"sha1:Y6NPIRX3KRXSAY2N4HKV5O5VUCSP3JIP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439738366.27_warc_CC-MAIN-20200808224308-20200809014308-00372.warc.gz\"}"} |
https://legkovopros.ru/questions/262662/why-are-php-function-calls-so-expensive | [
"# Why are PHP function calls *so* expensive?\n\nA function call in PHP is expensive. Here is a small benchmark to test it:\n\n``````<?php\nconst RUNS = 1000000;\n\n// create test string\n\\$string = str_repeat('a', 1000);\n\\$maxChars = 500;\n\n// with function call\n\\$start = microtime(true);\nfor (\\$i = 0; \\$i < RUNS; ++\\$i) {\nstrlen(\\$string) <= \\$maxChars;\n}\necho 'with function call: ', microtime(true) - \\$start, \"\\n\";\n\n// without function call\n\\$start = microtime(true);\nfor (\\$i = 0; \\$i < RUNS; ++\\$i) {\n!isset(\\$string[\\$maxChars]);\n}\necho 'without function call: ', microtime(true) - \\$start;\n``````\n\nThis tests a functionally identical code using a function first (`strlen`) and then without using a function (`isset` isn't a function).\n\nI get the following output:\n\n``````with function call: 4.5108239650726\nwithout function call: 0.84017300605774\n``````\n\nAs you can see the implementation using a function call is more than five (5.38) times slower than the implementation not calling any function.\n\nI would like to know why a function call is so expensive. What's the main bottleneck? Is it the lookup in the hash table? Or what is so slow?\n\nI revisited this question, and decided to run the benchmark again, with XDebug completely disabled (not just profiling disabled). This showed, that my tests were fairly convoluted, this time, with 10000000 runs I got:\n\n``````with function call: 3.152988910675\nwithout function call: 1.4107749462128\n``````\n\nHere a function call only is approximately twice (2.23) as slow, so the difference is by far smaller.\n\nI just tested the above code on a PHP 5.4.0 snapshot and got the following results:\n\n``````with function call: 2.3795559406281\nwithout function call: 0.90840601921082\n``````\n\nHere the difference got slightly bigger again (2.62). (But on the over hand the execution time of both methods dropped quite significantly).\n\n35\nзадан Yay295 22 September 2019 в 07:51\nподелиться"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.74580157,"math_prob":0.9170598,"size":1737,"snap":"2021-04-2021-17","text_gpt3_token_len":454,"char_repetition_ratio":0.18003462,"word_repetition_ratio":0.07971015,"special_character_ratio":0.31606218,"punctuation_ratio":0.17455621,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97433907,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-27T13:30:06Z\",\"WARC-Record-ID\":\"<urn:uuid:907ed9fa-569b-4b20-b186-986db1a46bd3>\",\"Content-Length\":\"20431\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:53cb67aa-6056-4712-af2b-a6c1d52b31a9>\",\"WARC-Concurrent-To\":\"<urn:uuid:4f1d23d4-e750-43c8-8735-4b073c0ee0c6>\",\"WARC-IP-Address\":\"193.42.110.57\",\"WARC-Target-URI\":\"https://legkovopros.ru/questions/262662/why-are-php-function-calls-so-expensive\",\"WARC-Payload-Digest\":\"sha1:WPF4WT5XQEM64TWXETAGDHC66B3G546T\",\"WARC-Block-Digest\":\"sha1:P6UEAZ3U6O6CU72HXRR7TA6NSCK2PNC3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610704824728.92_warc_CC-MAIN-20210127121330-20210127151330-00042.warc.gz\"}"} |
https://fastcarpenter.com/longleaf-pine-wood/ | [
"# Longleaf pine wood properties and uses",
null,
"## Longleaf pine wood\n\n• Other names: A wood of the Southern pine genera.\n• Scientific name: Pinus palustris\n• Genera: Pinaceae\n• Places where these trees are found: Growing Longleaf pine stretches from the eastern area of North Carolina to Florida, which is at the south and to the west of the eastern region of Texas.\n\n### Physical properties of Longleaf pine wood\n\nColor of the wood: The Sapwood is yellowish-white, the heartwood is reddish-brown.\n\nStructure of wood: Similar to Shortleaf, Loblolly, and Slash pine.\n\nShrinkage: Moderately highly shrinkable. To reach the zero humidity state from the green condition, the radial, tangential, and total volumetric shrinkage is 5.1%, 7.5%, and 12.2%, respectively.\n\nWorkability: Easy and convenient.\n\nWeight, density, and specific gravity: The specific gravity is 0.54 (density 553 kg per cubic meter) based on the volume of the green condition and weight in the zero humidity. In 12% humidity, the specific gravity becomes 0.59 (density 604 kg per cubic meter) based on the volume and weight in the zero humidity state.0.54 (density 553 kg per cubic meter), in the 12% humidity state based on the volume and weight in the zero humidity state, the specific gravity becomes 0.59 (density 604 kg per cubic meter).\n\nMoisture at the green condition: 31% In the heartwood and 106% moisture is present in the sapwood.\n\nNatural durability: The heartwood is moderately sustainable, but the sapwood is not sustainable.\n\nSeasoning property: In the high temperature, in the fault state, at a moderate speed, this wood is dried.\n\nTreatability: The Sapwood is preservable in the pressure process, but the heartwood is not preservable.\n\n### Mechanical properties of Longleaf pine wood\n\nModulus of rupture: In green condition, the modulus of rupture is 58.62 Newton per square millimeter (8500 lb per square inch). In the standard of IEB, this value is 55.17 Newton per square millimeter.\n\nModulus of elasticity: In green condition, the modulus of elasticity is 0.0109 million Newton per square millimeter (1.59 million lb per square inch). In the 12% humidity state, the parameter becomes 0.0136 Newton per square millimeter (1.98 million lb per square inch.\n\nWork to maximum load: In green condition, the maximum load is 0.246 kg per cubic centimeter (8.9 lb per square inch). In the 12% humidity state, the parameter becomes 0.327 kg per cubic centimeter (11.8 lb per square inch).\n\nImpact bending: In green conditions, the impact bending is 88.9 centimeters (35 inches). In 12% humidity, this parameter becomes 86.36 centimeters (34 inches) in height.\n\nCompression parallel to grain-maximum crushing strength: In green condition maximum crushing strength of this wood is 29.79 Newton per square millimeter (4.320 lb per square inch), in the12% humidity state 58.41 Newton per square millimeter (8.470 lb per square inch).\n\nCompression perpendicular to grain- fiber stress at the proportional limit: In green condition, the fiber stress at the proportional limit is 3.31 Newton per square millimeter (480 lb per square inch), in 12% humidity, the parameter becomes 6.62 Newton per square millimeter (960 lb per square inch).\n\nShear parallel to grain-maximum shearing strength: In green condition, the maximum shearing strength is 7.17 Newton per square millimeter (1040 lb per square inch). 
In 12% humidity, the parameter becomes 10.41 Newton per square millimeter (1510 lb per square inch)\n\nTension perpendicular to grain-maximum tensile strength: In green condition, the maximum tensile strength is 2.28 Newton per square millimeter (330 lb per square inch), in 12% humidity, the parameter becomes 3.24 Newton per square millimeter (470 lb per square inch).\n\nSide hardness-Load perpendicular to grain: In green condition, the side hardness of the longleaf pine is 267.62 kg (590 lb). In 12% humidity, this parameter becomes 394.63 kg (870 lb).\n\n### Uses of Longleaf pine wood\n\nSimilar to the Slash pine.",
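The paired figures above are psi values converted to N/mm²; a quick check of that conversion (illustrative):

```python
PSI_TO_N_MM2 = 0.00689476
for psi in (8500, 4320, 1040, 1510):
    print(psi, "psi ->", round(psi * PSI_TO_N_MM2, 2), "N/mm^2")
# 8500 psi -> 58.61 N/mm^2, in line with the 58.62 quoted for the modulus of rupture
```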
null,
""
] | [
null,
"https://fastcarpenter.com/wp-content/uploads/2021/12/longleaf-pine.jpg",
null,
"https://secure.gravatar.com/avatar/d356606687eae9ae1bad3b0d68ff2ea8",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7847954,"math_prob":0.9922177,"size":3923,"snap":"2023-40-2023-50","text_gpt3_token_len":944,"char_repetition_ratio":0.17963766,"word_repetition_ratio":0.13754046,"special_character_ratio":0.25184807,"punctuation_ratio":0.15572716,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95010066,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,2,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-23T17:24:16Z\",\"WARC-Record-ID\":\"<urn:uuid:5e3d9216-cffb-4bfb-9db9-b4dafc89594f>\",\"Content-Length\":\"76242\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:04c82e17-78ef-4cf4-bfc8-af5d1c5b9f9e>\",\"WARC-Concurrent-To\":\"<urn:uuid:4aa7939d-1309-41e2-856d-f539b58b4661>\",\"WARC-IP-Address\":\"209.236.117.90\",\"WARC-Target-URI\":\"https://fastcarpenter.com/longleaf-pine-wood/\",\"WARC-Payload-Digest\":\"sha1:FHB6GPSUGJ244EJ2BJVGICYIR42NUO4N\",\"WARC-Block-Digest\":\"sha1:SK3ZDXSGS34GDVVOJC4QBIMZTZG3OJK5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506528.19_warc_CC-MAIN-20230923162848-20230923192848-00204.warc.gz\"}"} |
https://www.codevscolor.com/c-program-sort-array-using-pointer/ | [
"# C program to sort array in ascending or descending order using pointer",
null,
"# Introduction :\n\nIn this C programming tutorial, we will learn how to sort elements of an array in ascending or descending order using C pointer. The program will take the array inputs from the user and sort the array in ascending or descending order.\n\n## Pointer in C :\n\nA variable is called a pointer variable in C if it can hold the address of a variable. Each pointer variable has specific data type e.g. an integer pointer variable can hold the address of an integer variable, a character pointer variable can hold the address of a character variable etc.\n\n& is used to get the memory address of a variable. For example :\n\n``int num = 10;``\n\nFor this integer variable, &num will give us the address of this variable. We can create one pointer variable to hold this address like below :\n\n``int *p ;``\n\np can hold the address of an integer variable.\n\n## Array and pointer :\n\nWe can store the address of an array in a pointer variable. The name of the array is the same as the address of the first element. We can store the address of the first element and access the other elements using an index.\n\nFor example, if int *p is a pointer to an array, its ith element can be accessed by *(p + i).\n\nIn this tutorial, we are using array pointers.\n\n## C program :\n\n``````#include <stdio.h>\n\n//1\nvoid swap(int *firstNumber, int *secondNumber);\nvoid printArray(int *array, int size);\n\nint main()\n{\n//2\nint *array;\nint type;\nint i, j, size;\n\n//3\nprintf(\"Enter the size of the array : \");\nscanf(\"%d\", &size);\n\n//4\nfor (i = 0; i < size; i++)\n{\nprintf(\"Enter element %d : \", (i + 1));\nscanf(\"%d\", (array + i));\n}\n\n//5\nprintf(\"Initial array : \");\nprintArray(array, size);\n\n//6\nprintf(\"Enter 1 to sort in increasing order and 0 to sort in decreasing order : \");\nscanf(\"%d\", &type);\n\n//7\nfor (i = 0; i < size; i++)\n{\n//8\nfor (j = i + 1; j < size; j++)\n{\nif (type == 1)\n{\n//increasing order sorting\nif (*(array + j) < *(array + i))\n{\nswap((array + i), (array + j));\n}\n}\nelse\n{\n//decreasing order sorting\nif (*(array + j) > *(array + i))\n{\nswap((array + i), (array + j));\n}\n}\n}\n}\n\n//9\nprintf(\"Final array : \");\nprintArray(array, size);\nreturn 0;\n}\n\n//10\nvoid swap(int *firstNumber, int *secondNumber)\n{\nint temp = *firstNumber;\n*firstNumber = *secondNumber;\n*secondNumber = temp;\n}\n\n//11\nvoid printArray(int *array, int size)\n{\nint i;\nprintf(\"[ \");\nfor (i = 0; i < size; i++)\n{\nprintf(\"%d \", *(array + i));\n}\nprintf(\"]\\n\");\n}``````\n\n## Explanation :\n\nThe commented numbers in the above program denote the step numbers below :\n\n1. swap function is used to swap two numbers. It takes two integer pointers of two numbers and swaps them using the pointer. printArray is used to print an array. It takes one pointer to an array and the size of the array. Using the pointer, it prints its elements.\n2. Create one pointer to an array and few variables.\n3. Ask the user to enter the size of the array. Read the size and store it in the size variable.\n4. Run one for loop to read the contents of the array. Read each element one by one.\n5. Print the user entered array. We are using the printArray method to print the array elements.\n6. Ask the user to enter the type of sorting. It reads the value and stores it in type variable. 1 is for ascending order and 0 is for descending order.\n7. This is the main sorting part. For the first iteration of the loop, it is comparing the first element of the array to all other elements. 
Based on the sorting method, it finds out the largest or smallest element of the array. For the second iteration, it finds out the second largest or second smallest element etc. If you are getting confused with the loops, print all the values of i, j and other variables using printf. Try to analyze the printf logs and it will be clearer.\n8. The inner loop is for comparing the elements right to the ith element with the ith element itself. Based on the value of type, it compares the element and swaps them both using swap function.\n9. Finally, print the modified array to the user.\n10. swap function takes two integer pointers and swaps the values lies in the address defined by these integers.\n11. printArray function is used to print an array as mentioned above. It takes one pointer to an integer array and the length of the array.\n\n### Sample Output :\n\n``````Enter the size of the array : 5\nEnter element 1 : 5\nEnter element 2 : 4\nEnter element 3 : 3\nEnter element 4 : 2\nEnter element 5 : 1\nInitial array : [ 5 4 3 2 1 ]\nEnter 1 to sort in increasing order and 0 to sort in decreasing order : 1\nFinal array : [ 1 2 3 4 5 ]\n\nEnter the size of the array : 5\nEnter element 1 : 3\nEnter element 2 : 8\nEnter element 3 : 23\nEnter element 4 : 123\nEnter element 5 : 9\nInitial array : [ 3 8 23 123 9 ]\nEnter 1 to sort in increasing order and 0 to sort in decreasing order : 0\nFinal array : [ 123 23 9 8 3 ]``````\n\n## Conclusion :\n\nIn this C tutorial, we have learned how to sort the elements of an array in ascending or descending order using C pointer. Try to run the program and drop one comment below if you have any queries."
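One correction worth noting: in the listing above, `array` is declared as `int *array;` but never allocated, so `scanf("%d", (array + i))` writes through an uninitialized pointer. A minimal fix (a sketch, to be applied inside `main()` together with `#include <stdlib.h>` at the top):

```c
/* after reading the size, allocate storage for the array before filling it */
scanf("%d", &size);
array = (int *)malloc(size * sizeof(int));
if (array == NULL)
{
    printf("Memory allocation failed\n");
    return 1;
}
/* ... read, sort and print as before, then release the memory at the end: */
free(array);
```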
] | [
null,
"data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABQAAAALCAIAAADwazoUAAAACXBIWXMAAAsSAAALEgHS3X78AAACJUlEQVQoz2PQsJmi4TxDw2OGRuAiKf0EQQENIVEjIRED7EhIV0jEUJBPU9whTWXlIwZNuxlyOumi0jZiCm7C4qZCIvpwpYLC+kAE5YoaCvJriRmEKDXsUGjYojzphMqKh0DN0xUMCyVU/IXFzYSE9YAGwzULixmKiBkhNAtoi2r7yqXPUGzfq7rsgcqyewwaNpM1XWZpes+XVA8VFNQCKoLYycmjaWTio6HlwsWrBbUfqF9QR4BNXsI1R3UpWDPQ2bJaiSANQjoQN4uIGcopWMkpWOvquatpOMkrWsvIWYqIGwvya4obhiq37lOZfVV1yT2wZtvpisZl0ppRUlpRQqLGQEv4BHS9fZN8/JLtHcJd3GLiE0pc3WOBDhES1BFVc5eN65FJ7FOefAoUYCBnu87W8l2k6jFRWMJcWETfxi7UPzDN2TXa1j7UzSPWyzsxODTTwioQ5GwhXUEuFaAp8rXrVdc+g/h5ppQa2MMiwBAytLQOtnMId3SOBFpo7xhhbhno4BShp+8BjEJBXg0Jh3SVebdUlz9UWXYfpFnDZYaSTb2Esi84tA24ebW4eDR5+LR5+EAMbl5NTm4NIFsQGC4COqLqHlJ+5XIF8yDOnqJu36/uPl3ONB8cZobCoihITMJEQsZCXMpcTMoMFHPiJsBwlfQqVF31hEHDdpqkcjAonEUNMZMUMPyk5G0UNd3l1Jzl1V0kpC3kcuapzLmuMv+GyvIHABxvkzzFcssrAAAAAElFTkSuQmCC",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.72590244,"math_prob":0.98431,"size":4932,"snap":"2020-24-2020-29","text_gpt3_token_len":1266,"char_repetition_ratio":0.18303572,"word_repetition_ratio":0.14922279,"special_character_ratio":0.290146,"punctuation_ratio":0.1392157,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9851536,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-14T21:44:39Z\",\"WARC-Record-ID\":\"<urn:uuid:86dd3b4a-9937-4851-9881-1fe21430b2fc>\",\"Content-Length\":\"126857\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:31bc5dbf-d5e7-4444-ab5d-c7e11ff9df96>\",\"WARC-Concurrent-To\":\"<urn:uuid:325b854c-0951-428b-988a-7f08def82cb9>\",\"WARC-IP-Address\":\"35.175.60.16\",\"WARC-Target-URI\":\"https://www.codevscolor.com/c-program-sort-array-using-pointer/\",\"WARC-Payload-Digest\":\"sha1:HGZKBKAZMPCJFS5HJ4ZHEDPWPZHKFF54\",\"WARC-Block-Digest\":\"sha1:TDRDGW3BWPAFZUJVX6ESGOH24ARM67MZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593657151761.87_warc_CC-MAIN-20200714212401-20200715002401-00569.warc.gz\"}"} |
https://gitlab.mpcdf.mpg.de/ift/nifty/-/commit/f66235b7ffcc651927fd501a61d4c2c932ec2377 | [
"### Merge branch 'fixmpi_mr' into 'NIFTy_5'\n\n```Fix MPI on NIFTy_5\n\nSee merge request !350```\nparents ffc6059b 537fb1a6\nPipeline #61476 passed with stages\nin 22 minutes\n ... ... @@ -187,8 +187,8 @@ class NewtonCG(DescentMinimizer): e = QuadraticEnergy(0*energy.position, energy.metric, energy.gradient) p = None if self._napprox > 1: unscmet, sc = energy.unscaled_metric() p = makeOp(approximation2endo(unscmet, self._napprox)*sc).inverse met = energy.metric p = makeOp(approximation2endo(met, self._napprox)).inverse e, conv = ConjugateGradient(ic, nreset=self._nreset)(e, p) return -e.position ... ...\n ... ... @@ -18,9 +18,12 @@ from .. import utilities from ..linearization import Linearization from ..operators.energy_operators import StandardHamiltonian from ..operators.endomorphic_operator import EndomorphicOperator from .energy import Energy from mpi4py import MPI import numpy as np from ..probing import approximation2endo from ..sugar import makeOp, full from ..field import Field from ..multi_field import MultiField ... ... @@ -56,10 +59,83 @@ def allreduce_sum_field(fld): return MultiField(fld.domain, res) class KLMetric(EndomorphicOperator): def __init__(self, KL): self._KL = KL self._capability = self.TIMES | self.ADJOINT_TIMES self._domain = KL.position.domain def apply(self, x, mode): self._check_input(x, mode) return self._KL.apply_metric(x) def draw_sample(self, from_inverse=False, dtype=np.float64): return self._KL.metric_sample(from_inverse, dtype) class MetricGaussianKL_MPI(Energy): \"\"\"Provides the sampled Kullback-Leibler divergence between a distribution and a Metric Gaussian. A Metric Gaussian is used to approximate another probability distribution. It is a Gaussian distribution that uses the Fisher information metric of the other distribution at the location of its mean to approximate the variance. In order to infer the mean, a stochastic estimate of the Kullback-Leibler divergence is minimized. This estimate is obtained by sampling the Metric Gaussian at the current mean. During minimization these samples are kept constant; only the mean is updated. Due to the typically nonlinear structure of the true distribution these samples have to be updated eventually by intantiating `MetricGaussianKL` again. For the true probability distribution the standard parametrization is assumed. The samples of this class are distributed among MPI tasks. Parameters ---------- mean : Field Mean of the Gaussian probability distribution. hamiltonian : StandardHamiltonian Hamiltonian of the approximated probability distribution. n_samples : integer Number of samples used to stochastically estimate the KL. constants : list List of parameter keys that are kept constant during optimization. Default is no constants. point_estimates : list List of parameter keys for which no samples are drawn, but that are (possibly) optimized for, corresponding to point estimates of these. Default is to draw samples for the complete domain. mirror_samples : boolean Whether the negative of the drawn samples are also used, as they are equally legitimate samples. If true, the number of used samples doubles. Mirroring samples stabilizes the KL estimate as extreme sample variation is counterbalanced. Default is False. napprox : int Number of samples for computing preconditioner for sampling. No preconditioning is done by default. _samples : None Only a parameter for internal uses. Typically not to be set by users. seed_offset : int A parameter with which one can controll from which seed the samples are drawn. 
Per default, the seed is different for MPI tasks, but the same every time this class is initialized. Note ---- The two lists `constants` and `point_estimates` are independent from each other. It is possible to sample along domains which are kept constant during minimization and vice versa. See also -------- `Metric Gaussian Variational Inference`, Jakob Knollmüller, Torsten A. Enßlin, ``_ \"\"\" def __init__(self, mean, hamiltonian, n_samples, constants=[], point_estimates=[], mirror_samples=False, _samples=None, seed_offset=0): napprox=0, _samples=None, seed_offset=0): super(MetricGaussianKL_MPI, self).__init__(mean) if not isinstance(hamiltonian, StandardHamiltonian): ... ... @@ -82,6 +158,8 @@ class MetricGaussianKL_MPI(Energy): lo, hi = _shareRange(n_samples, ntask, rank) met = hamiltonian(Linearization.make_partial_var( mean, point_estimates, True)).metric if napprox > 1: met._approximation = makeOp(approximation2endo(met, napprox)) _samples = [] for i in range(lo, hi): if mirror_samples: ... ... @@ -142,8 +220,8 @@ class MetricGaussianKL_MPI(Energy): else: mymap = map(lambda v: self._hamiltonian(lin+v).metric, self._samples) self._metric = utilities.my_sum(mymap) self._metric = self._metric.scale(1./self._n_samples) self.unscaled_metric = utilities.my_sum(mymap) self._metric = self.unscaled_metric.scale(1./self._n_samples) def apply_metric(self, x): self._get_metric() ... ... @@ -151,12 +229,22 @@ class MetricGaussianKL_MPI(Energy): @property def metric(self): if ntask > 1: raise ValueError(\"not supported when MPI is active\") return self._metric return KLMetric(self) @property def samples(self): res = _comm.allgather(self._samples) res = [item for sublist in res for item in sublist] return res def unscaled_metric_sample(self, from_inverse=False, dtype=np.float64): if from_inverse: raise NotImplementedError() lin = self._lin.with_want_metric() samp = full(self._hamiltonian.domain, 0.) for v in self._samples: samp = samp + self._hamiltonian(lin+v).metric.draw_sample(from_inverse=False, dtype=dtype) return allreduce_sum_field(samp) def metric_sample(self, from_inverse=False, dtype=np.float64): return self.unscaled_metric_sample(from_inverse, dtype)/self._n_samples\nSupports Markdown\n0% or .\nYou are about to add 0 people to the discussion. Proceed with caution.\nFinish editing this message first!\nPlease register or to comment"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.56776834,"math_prob":0.9791674,"size":5926,"snap":"2022-40-2023-06","text_gpt3_token_len":1570,"char_repetition_ratio":0.13694698,"word_repetition_ratio":0.017216643,"special_character_ratio":0.2645967,"punctuation_ratio":0.2509542,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.991962,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-05T00:49:26Z\",\"WARC-Record-ID\":\"<urn:uuid:d98638e7-641f-4256-999d-c3a5b5b9d54d>\",\"Content-Length\":\"346859\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:40b2e7bc-026f-4c8f-b511-8058f1ed33b8>\",\"WARC-Concurrent-To\":\"<urn:uuid:7e21ebc5-7fb1-4ad3-9b42-2b92946e18b4>\",\"WARC-IP-Address\":\"130.183.206.201\",\"WARC-Target-URI\":\"https://gitlab.mpcdf.mpg.de/ift/nifty/-/commit/f66235b7ffcc651927fd501a61d4c2c932ec2377\",\"WARC-Payload-Digest\":\"sha1:EZWAWKXOB3SBW75NRRWM4N4WBNMQDHE6\",\"WARC-Block-Digest\":\"sha1:CE3JGBAIHVRT55G24P75WFOGMYGC6JCU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337529.69_warc_CC-MAIN-20221004215917-20221005005917-00400.warc.gz\"}"} |
https://bettymills.com/sharp-elw516tbsl-scientific-calculator-elw516tbsl.html | [
"",
null,
"Sharp Electronics EL-W516TBSL Scientific Calculator, 16-Digit LCD\n\nItem # SHR ELW516TBSL bySharp Electronics(Mfg. Part # ELW516TBSL, UPC # 074000019584)\n\n\\$33.58 Each\nOrder 5+ and save \\$0.67 Each\n\nThe EL-W516TB-SL model is perfect for students studying general math and science, pre-algebra, algebra, geometry, trigonometry, statistics, biology, and chemistry. This calculator performs 640 advanced scientific, math, and statistics functions. The EL-W516TB-SL utilizes a 4-line display featuring an exponent symbol. WriteView display allows users to view their input and results as they would appear in a textbook. The home key allows users to start fresh from any screen. This calculator has eight temporary memories, three definable memories, one independent memory, and one last answer memory. This calculator also allows equation editing & playback, 1 & 2 variable statistics, normal, stat, and drill modes, seven regression types, and N-BASE calculations: HEX, BIN, DEC, OCT, PEN. Other functions include: logarithms, reciprocals, powers, roots, factorials, trigs, and hyperbolic trigs. Includes a protective hard case and operates on twin-power (solar with battery backup).\n\n• Global Product Type : Calculators-Scientific\n• Power Source(s) : Battery; Solar\n• Display Notation : Numeric\n• Number of Display Digits : 16\n• Display Characters x Display Lines : 16 x 4\n• Memory : 1 Independent; 1 Last Answer; 3 Definable; 8 Temporary\n• Display Type(s) : LCD\n• Case : Hard\n• Display Angle : Fixed\n• Display Characters Height : 4 mm\n• Percent Key(s) : Yes\n• Fraction Calculations : Yes\n• Fraction/Decimal Conversions : Yes\n• Decimal Function : Yes\n• +/- Switch Key : Yes\n• Currency Exchange Function : No\n• Metric Conversion : Yes\n• Backspace Key : Yes\n• Double Zero Key : No\n• Amortization : No\n• Base Number Calculations : Yes\n• Bond Calculations : No\n• Cash Flow Calculations : No\n• Complex Number Calculations : Yes\n• Confidence Interval Calculating : No\n• Cost/Sell/Margin : No\n• Date Calculations : No\n• Depreciation Calculations : No\n• Display Window Resolution : 96 x 32 Dot Matrix Display\n• Entry Logic : WriteView\n• Equation Editor : Yes\n• Grand Total Key : No\n• Graphing Functions : No\n• Higher Mathematical Functions : Complex; Drill; Equation; List; Matrix; Normal; Statistics\n• Hyperbolic Functions : Yes\n• Hypothesis Testing : No\n• Interest Rate Conversion : No\n• Item Count Function : No\n• Levels of Parentheses : Unlimited\n• Linear Regression : Yes\n• Loan Calculation : No\n• Logical (Boolean) Operations : Yes\n• Markup/Down Key : No\n• Matrices : Yes\n• Percent Add-On/Discount : No\n• Polar-Rectangular Conversion : Yes\n• Probability (Random Number) : Yes\n• Programming Steps : No\n• Simultaneous Equations : Yes\n• Square Root Key : Yes\n• Tax Calculation : No\n• Time-Value-of-Money : No\n• Time/Date : No\n• Trig/Log Functions : Yes\n• Variable Regression : Yes\n• Variable Statistics : Yes\n• Size : 3.1 x 6.6 in\n• I/O Port : No\n• Replacement Batteries : LR44\n• Wall-Mountable : No\n• Quantity : 1 each"
] | [
null,
"https://cf1.bettymills.com/images/bmPrintHeader.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.60905117,"math_prob":0.8288307,"size":2967,"snap":"2019-43-2019-47","text_gpt3_token_len":772,"char_repetition_ratio":0.19068512,"word_repetition_ratio":0.003992016,"special_character_ratio":0.26390293,"punctuation_ratio":0.22030652,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98911965,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-22T02:01:58Z\",\"WARC-Record-ID\":\"<urn:uuid:236b6215-ca81-475c-a68a-928f7c60f606>\",\"Content-Length\":\"144649\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4d62762c-ca7b-4dbd-995a-12c4f15521dd>\",\"WARC-Concurrent-To\":\"<urn:uuid:7f7848c2-7b1d-4481-b3b5-28dbdd87bf1c>\",\"WARC-IP-Address\":\"34.213.104.251\",\"WARC-Target-URI\":\"https://bettymills.com/sharp-elw516tbsl-scientific-calculator-elw516tbsl.html\",\"WARC-Payload-Digest\":\"sha1:DLWCTHPAM6DJIRDRMKEB5G4HUJ6X2RX6\",\"WARC-Block-Digest\":\"sha1:5VH2TOLRM4RZFROPVZLAYT4DABQQF57L\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570987795403.76_warc_CC-MAIN-20191022004128-20191022031628-00419.warc.gz\"}"} |
https://www.lynda.com/Excel-tutorials/Variance/422098/459842-4.html | [
"",
null,
"",
null,
"",
null,
"# Variance\n\nVariance is the basis for many advanced statistical concepts. Given the heights of two groups of five children, Joe Schumueller provides detailed examples for calculating deviation to ultimately calculate Variance, the average of squared deviations. Greek notation is introduced with Varience being represnsented by sigma notation.\n\nResume Transcript Auto-Scroll\n3h 45m\n2,722,562\n##### Skills covered in this course",
null,
"",
null,
"",
null,
""
] | [
null,
"https://cdn.lynda.com/static/images/consumermigration/Lynda Icon_2x.png",
null,
"https://cdn.lynda.com/static/images/consumermigration/Arrow Right_icon_2x.png",
null,
"https://cdn.lynda.com/static/images/consumermigration/Linkedin Icon_2x.png",
null,
"https://cdn.lynda.com/static/images/consumermigration/Lynda Icon_2x.png",
null,
"https://cdn.lynda.com/static/images/consumermigration/Arrow Right_icon_2x.png",
null,
"https://cdn.lynda.com/static/images/consumermigration/Linkedin Icon_2x.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8241389,"math_prob":0.7815407,"size":5114,"snap":"2019-26-2019-30","text_gpt3_token_len":1308,"char_repetition_ratio":0.11878669,"word_repetition_ratio":0.014475271,"special_character_ratio":0.24384044,"punctuation_ratio":0.1024735,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.96046245,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-21T05:53:22Z\",\"WARC-Record-ID\":\"<urn:uuid:4843d7bf-bd00-4753-a5a9-e77978f57d28>\",\"Content-Length\":\"297272\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:deed83b6-0505-4413-88ff-84fb209ca20d>\",\"WARC-Concurrent-To\":\"<urn:uuid:809fabfd-4fec-4472-a521-2e6862de9baf>\",\"WARC-IP-Address\":\"8.39.42.106\",\"WARC-Target-URI\":\"https://www.lynda.com/Excel-tutorials/Variance/422098/459842-4.html\",\"WARC-Payload-Digest\":\"sha1:H46CPF6VNZSYCRDDAXRLVFC3FULONNG3\",\"WARC-Block-Digest\":\"sha1:NU7KVWBIC3CZYT52BP2KYAUMH5GVQVMP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195526888.75_warc_CC-MAIN-20190721040545-20190721062545-00297.warc.gz\"}"} |
https://wiki.kidzsearch.com/wiki/2_(number) | [
"kidzsearch.com > wiki\n\n# 2 (number)\n\n ← 1 2 3 →\nCardinaltwo\nOrdinal2nd (second / twoth)\nFactorizationprime\nGaussian integer factorization$(1 + i)(1 - i)$\nPrime1st\nDivisors1, 2\nRoman numeralII\nRoman numeral (unicode)Ⅱ, ⅱ\nGreek prefixdi-\nLatin prefixduo- bi-\nOld English prefixtwi-\nBinary102\nTernary23\nQuaternary24\nQuinary25\nSenary26\nOctal28\nDuodecimal212\nVigesimal220\nBase 36236\nGreek numeralβ'\nArabic٢\nUrdu",
null,
"Ge'ez\nBengali\nChinese numeral二,弍,贰,貳\nDevanāgarī\nTelugu\nTamil\nHebrewב (Bet)\nKhmer\nKorean이,둘\nThai\n\n2 (Two; ) is a number, numeral, and glyph. It is the number after 1 (one) and the number before 3 (three). In Roman numerals, it is II.\n\n## In mathematics\n\nTwo has many meanings in math. For example: $1 + 1 = 2$. An integer is even if half of it equals an integer. If the last digit of a number is even, then the number is even. This means that if you multiply 2 times anything, it will end in 0, 2, 4, 6, or 8.\n\nTwo is the smallest, first, and only even prime number. The next prime number is three. Two and three are the only prime numbers next to each other. The even numbers above two are not prime because they are divisible by 2.\n\nFractions with 2 in the bottom do not yield infinite.\n\nTwo is the framework of the binary system used in computers. The binary way is the simplest system of numbers in which natural numbers (0-9) can be written.\n\nTwo also has the unique property that 2+2 = 2·2 = 22 and 2! + 2 = 22.\n\nPowers of two are important to computer science.\n\nThe square root of two was the first known irrational number."
] | [
null,
"https://upload.wikimedia.org/wikipedia/commons/thumb/4/4c/Urdu_numeral_two.svg/6px-Urdu_numeral_two.svg.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7804615,"math_prob":0.99143153,"size":1784,"snap":"2021-31-2021-39","text_gpt3_token_len":570,"char_repetition_ratio":0.116292134,"word_repetition_ratio":0.0,"special_character_ratio":0.31221974,"punctuation_ratio":0.11080332,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9957163,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-30T00:52:46Z\",\"WARC-Record-ID\":\"<urn:uuid:a1dc8e7a-422f-4fb1-9784-df15930007f6>\",\"Content-Length\":\"35424\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:76008fff-e0a0-47fd-9a3b-841b5da2ee7c>\",\"WARC-Concurrent-To\":\"<urn:uuid:56ea8794-b8b2-4d70-b5c3-90ec0aa14859>\",\"WARC-IP-Address\":\"52.8.135.154\",\"WARC-Target-URI\":\"https://wiki.kidzsearch.com/wiki/2_(number)\",\"WARC-Payload-Digest\":\"sha1:KER2CW3GUSMRN4OOOHJBGMHGOWKV67AY\",\"WARC-Block-Digest\":\"sha1:CBBFRHZJV5VJUZRK7QEIU3YAJQ7QDN2R\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046153899.14_warc_CC-MAIN-20210729234313-20210730024313-00178.warc.gz\"}"} |
http://fredrikj.net/blog/2018/07/the-arb-matrix-revolutions/ | [
"# The Arb matrix revolutions\n\nJuly 3, 2018\n\nThe previous post on linear algebra in Arb ended with the cliffhanger that the improvements for multiplication and linear solving had yet to be adapted for computing determinants and for complex matrices. That is now done.\n\n## Complex matrix multiplication\n\nExcept for very small matrices, acb_mat_mul now performs a complex matrix multiplication by reordering the data and doing four real matrix multiplications via arb_mat_mul. This benefits from the faster real matrix multiplication algorithm described in the previous post.\n\nAs a benchmark of acb_mat_mul, we multiply the size-$n$ complex DFT matrix $$A_{j,k} = \\frac{\\omega^{jk}}{\\sqrt{n}}, \\quad \\omega = e^{-2\\pi i/n}$$ with a copy of itself. (The utility method acb_mat_dft that constructs this matrix has been added to Arb.)\n\nAt 53-bit precision, we observe the following speedup:\n\n$n$ Time, Arb 2.13 Time, Arb 2.14-git Speedup\n10 0.000228 0.000143 1.59\n30 0.00762 0.00175 4.35\n100 0.305 0.0416 7.33\n300 9.375 0.668 14.0\n1000 408.6 17.7 23.1\n\nThe new algorithm is not only faster, it is also more accurate. The following shows entry $(n/2,n/2)$ in the output matrix:\n\nn Arb 2.13 Arb 2.14-git\n==============================================================================\n10 [1.0000000000000 +/- 2.01e-15] [1.00000000000000 +/- 7.96e-16]\n30 [1.0000000000000 +/- 2.79e-15] [1.00000000000000 +/- 6.38e-16]\n100 [1.000000000000 +/- 1.43e-14] [1.00000000000000 +/- 6.11e-16]\n300 [1.000000000000 +/- 3.06e-14] [1.00000000000000 +/- 4.63e-16]\n1000 [1.00000000000 +/- 1.48e-13] [1.00000000000000 +/- 7.72e-16]\n\n\nThe performance improvements for different matrices and levels of precision are similar to those for real matrices, described in the previous post.\n\n## Complex solving and inverse\n\nLinear solving uses the same preconditioning algorithm as in the real case, and the new code for complex matrices likewise benefits from fast matrix multiplication through the use of block recursive LU factorization.\n\nAs a benchmark of acb_mat_inv and acb_mat_solve (the former just calls the latter to solve $AX = I$), we compute the inverse of the size-$n$ complex DFT matrix, starting at 53-bit precision and doubling the precision until the matrix is proved to be invertible.\n\n$n$ Time, Arb 2.13 Time, Arb 2.14-git Speedup\n10 0.0004 0.0006 0.67\n30 0.0104 0.0097 1.07\n100 0.896 0.22 4.07\n300 54.4 3.76 14.5\n1000 3996 83.8 47.7\n\nFor sizes 10, 30, 100, 300 and 1000, the old algorithm terminates at 53, 53, 212, 848, 1696 bits of precision. The new algorithm terminates at 53 bits every time for this matrix which is well-conditioned. The following shows the entry at position $(n/2,n/2)$ in the output matrix at 53-bit precision:\n\nn Arb 2.13 Arb 2.14-git\n===============================================================================================================\n10 [-0.3162277660 +/- 5.04e-11] + [+/- 3.35e-11]*I [-0.31622776601684 +/- 4.22e-15] + [+/- 2.47e-15]*I\n30 [+/- 10.7] + [+/- 10.5]*I [-0.18257418583506 +/- 8.22e-15] + [+/- 3.70e-15]*I\n100 [0.10000000000000 +/- 6.72e-15] + [+/- 6.63e-15]*I\n300 [0.0577350269190 +/- 4.91e-14] + [+/- 1.19e-14]*I\n1000 [0.0316227766017 +/- 4.34e-14] + [+/- 2.71e-14]*I\n\n\nThe new algorithm loses precision very slowly. 
Although the old algorithm succeeds at 53-bit precision for both $n = 10$ and $n = 30$, the accuracy with the new algorithm is far superior.\n\n## Real and complex determinants\n\nDeterminants of large matrices now use a preconditioning algorithm described in a 2010 paper by Siegfried Rump (a different preconditioning algorithm for determinants is given in the Hansen-Smith paper from 1967, but I tested this first and it does not seem to work well). The idea is simple: we first compute an approximate LU factorization $A \\approx PLU$, then compute approximate inverses $L' \\approx L^{-1}$ and $U' \\approx U^{-1}$, and build the matrix $B = L' P^{-1} A U'$ using ball arithmetic. $B$ is close to the identity matrix, so we can determine its determinant accurately using the Gershgorin circle theorem. The determinants of the triangular matrices $L'$ and $U'$ and the permutation matrix $P$ are of course trivial to compute, and this gives us the determinant of $A$.\n\nThere is actually a subtle issue with the algorithm as Rump describes it: Rump takes the product of the Gershgorin circles to enclose the determinant, but it does not follow from the Gershgorin circle theorem that this is correct since there is not a one-to-one correspondence between Gershgorin circles and eigenvalues (unless the circles are disjoint). I have been informed by private communication that a proof of correctness exists for real matrices (by a more elaborate argument), but the question for complex matrices is apparently an open theoretical problem! Regardless, this gap in the algorithm is easily fixed without such a proof: one just picks a single circle enclosing all the Gershgorin circles and then raises this to the $n$-th power. This gives bounds that are slightly more pessimistic but still acceptable.\n\nAs a benchmark of acb_mat_det, we compute the determinant of the size-$n$ complex DFT matrix, starting at 53-bit precision and doubling the precision until the determinant is proved to be nonzero.\n\n$n$ Time, Arb 2.13 Time, Arb 2.14-git Speedup\n10 0.000109 0.000109 1.00\n30 0.0033 0.0095 0.35\n100 0.429 0.254 1.69\n300 27.4 3.88 7.06\n1000 1712 94.5 18.1\n\nFor sizes 10, 30, 100, 300 and 1000, the old algorithm terminates at 53, 53, 212, 848, 1696 bits of precision. The new algorithm terminates at 53 bits every time for this matrix which is well-conditioned. The result at 53-bit precision is as follows:\n\nn Arb 2.13 Arb 2.14-git\n===============================================================================================================\n10 [-1.0000000000 +/- 5.68e-12] + [+/- 5.67e-12]*I [-1.0000000000 +/- 8.70e-12] + [+/- 8.69e-12]*I\n30 [1.0 +/- 7.08e-3] + [+/- 7.08e-3]*I [1.000000000 +/- 1.98e-11] + [+/- 1.98e-11]*I\n100 [+/- 9.43e-10] + [1.0000000 +/- 9.43e-10]*I\n300 [+/- 2.90e-8] + [1.000000 +/- 2.90e-8]*I\n1000 [+/- 1.73e-6] + [-1.0000 +/- 1.73e-6]*I\n\n\nDeterminants seem to lose precision slightly faster than the entries in the inverse matrix. Nevertheless, only 10 digits are lost at $n = 1000$.\n\nWith $n = 30$, the new algorithm is three times slower than the old algorithm but much more accurate. 
The old code is very slightly more accurate at $n = 10$ but this is essentially just numerical noise.\n\nFor completeness, we also give benchmark results for computing the determinant of the real DCT matrix using arb_mat_det:\n\n$n$ Time, Arb 2.13 Time, Arb 2.14-git Speedup\n10 0.000046 0.000048 0.96\n30 0.00105 0.0024 0.44\n100 0.124 0.052 2.38\n300 3.761 0.783 4.80\n1000 498 17.8 23.8\n\nThe 53-bit output:\n\nn Arb 2.13 Arb 2.14-git\n===============================================================================\n10 [-1.00000000000 +/- 7.98e-13] [-1.00000000000 +/- 7.98e-13]\n30 [-1.0000 +/- 1.99e-6] [-1.000000000 +/- 1.29e-11]\n100 [1.00000000 +/- 5.53e-10]\n300 [1.000000 +/- 1.65e-8]\n1000 [1.000000 +/- 5.24e-7]"
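"One way to sanity-check the DFT-matrix numbers above without Arb: the DFT matrix defined in this post is unitary, so its exact inverse is its conjugate transpose, every entry of the inverse has modulus 1/√n, and every determinant has modulus exactly 1 — consistent with the inverse entries 0.316..., 0.1, 0.0577... and the determinants ±1 and ±i printed above. A minimal NumPy sketch (ordinary floating point, no error bounds):

```python
import numpy as np

def dft_matrix(n):
    # A[j, k] = omega^(j*k) / sqrt(n) with omega = exp(-2*pi*i/n), as defined in the post.
    J, K = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return np.exp(-2j * np.pi * J * K / n) / np.sqrt(n)

for n in (10, 30, 100):
    A = dft_matrix(n)
    Ainv = np.linalg.inv(A)
    print(n,
          np.allclose(Ainv, A.conj().T),      # unitary: inverse equals the conjugate transpose
          round(abs(np.linalg.det(A)), 12),   # |det| = 1 for any unitary matrix
          Ainv[n // 2, n // 2])               # modulus 1/sqrt(n), cf. the tables above
```

This does not replace the certified enclosures Arb computes, but it makes it easy to see why tight radii around these particular values are plausible."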
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7694266,"math_prob":0.9935197,"size":7142,"snap":"2020-10-2020-16","text_gpt3_token_len":2216,"char_repetition_ratio":0.20622022,"word_repetition_ratio":0.11417697,"special_character_ratio":0.46261552,"punctuation_ratio":0.16223586,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9872305,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-18T01:22:30Z\",\"WARC-Record-ID\":\"<urn:uuid:fa58e98d-5de3-462f-a541-0f9782f81052>\",\"Content-Length\":\"11726\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6c4fe247-dde0-49f1-835a-0da6954188db>\",\"WARC-Concurrent-To\":\"<urn:uuid:f48faf9a-521b-4693-8d43-5d2c0726a48f>\",\"WARC-IP-Address\":\"207.38.94.35\",\"WARC-Target-URI\":\"http://fredrikj.net/blog/2018/07/the-arb-matrix-revolutions/\",\"WARC-Payload-Digest\":\"sha1:GSV27VIMZB2NLVJM2WBSCQ5TIEFBFBUJ\",\"WARC-Block-Digest\":\"sha1:OKFVN7FQZ6XRLG3VR4562HQI7ZUX7IOF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875143455.25_warc_CC-MAIN-20200217235417-20200218025417-00012.warc.gz\"}"} |
https://factoryarticles.info/tag/literal/ | [
"",
null,
"# Tag: literal\n\n#### SOLVING PHYSICS LITERAL EQUATIONS\n\nIn physics, literal equations are equations that contain more than one variable. These equations express one variable in terms of the other variables present. Literal equations are commonly used in…"
] | [
null,
"https://mc.yandex.ru/watch/92516837",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.87139994,"math_prob":0.99570465,"size":431,"snap":"2023-40-2023-50","text_gpt3_token_len":80,"char_repetition_ratio":0.21077283,"word_repetition_ratio":0.8666667,"special_character_ratio":0.16473317,"punctuation_ratio":0.08571429,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9844872,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-21T09:43:23Z\",\"WARC-Record-ID\":\"<urn:uuid:b0d9afd7-83a1-47e3-a28d-d37d7c1b5666>\",\"Content-Length\":\"61473\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7b706a71-bf14-415c-bc9b-cf48aca4589b>\",\"WARC-Concurrent-To\":\"<urn:uuid:e10045a8-9f1f-4ea1-8823-a0016a506cc6>\",\"WARC-IP-Address\":\"67.211.218.75\",\"WARC-Target-URI\":\"https://factoryarticles.info/tag/literal/\",\"WARC-Payload-Digest\":\"sha1:SYRAF7NGBMS66P3FSBA2RZXRJVRMV56J\",\"WARC-Block-Digest\":\"sha1:T32VB2PS3U3WTZVXK6CBFL5QIRU4FS6Q\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233505362.29_warc_CC-MAIN-20230921073711-20230921103711-00343.warc.gz\"}"} |
https://studylib.net/doc/10460248/harmonic-analysis-and-the-ft-fourier-transforms-component... | [
"# Harmonic Analysis and the FT Fourier Transforms components (cosines and sines )",
null,
"```Harmonic Analysis and the FT\n• All signals can be treated as a combination of periodic\ncomponents (cosines and sines )\n– Resulting signal is a sum of individual waveforms.\nsum = v + 1.2v + 1.5v\n4\n3\n2\n1\n0\n-1\n-2\n-3\n-4\n– Example applets\nhttp://www.chem.uoa.gr/Applets/Applet_Index2.htm\nFourier Transforms\n• The FT allows conversion between time domain and frequency\ndomain\n– t and 1/t are FT pairs (product is dimensionless)\nf ( ω) = ∫−+∞\n∞ f ( t )[cos( ωt ) − i sin( ωt )] dt\nf ( t ) = ∫−+∞\n∞ f ( ω)[sin( ωt ) − i cos( ωt )]dω\n1\nFourier Filtering\nPractical Considerations\n• Collecting continuous, infinite datsets is problematic!\n– Typically do just the opposite\n– Makes the math a little different\n∞\nf ( ω) = ∑ f ( t )[cos(ωt ) − i sin(ωt )]\n−∞\n• Apodization\n2\n```"
] | [
null,
"https://s2.studylib.net/store/data/010460248_1-06151c8b23c619fc4cc29488a4ea5b6c-768x994.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.67304033,"math_prob":0.98739016,"size":759,"snap":"2021-43-2021-49","text_gpt3_token_len":254,"char_repetition_ratio":0.10198676,"word_repetition_ratio":0.013513514,"special_character_ratio":0.33069828,"punctuation_ratio":0.0729927,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9959859,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-09T15:02:50Z\",\"WARC-Record-ID\":\"<urn:uuid:753e17fe-2baf-44d9-964e-559ab7e58385>\",\"Content-Length\":\"48831\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9b740920-aa14-4f8a-8ccd-a872bc3a99ef>\",\"WARC-Concurrent-To\":\"<urn:uuid:d56e015a-b0e1-4f07-bf35-eb9518fd3dd1>\",\"WARC-IP-Address\":\"172.67.175.240\",\"WARC-Target-URI\":\"https://studylib.net/doc/10460248/harmonic-analysis-and-the-ft-fourier-transforms-component...\",\"WARC-Payload-Digest\":\"sha1:LACKB5VLCLGMDOWVZUJWG3SVCV7ODI5H\",\"WARC-Block-Digest\":\"sha1:ZWPLORJQF7C3VVHZ7IYHZLEWVGDCREUJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964364169.99_warc_CC-MAIN-20211209122503-20211209152503-00539.warc.gz\"}"} |
https://kevinausman.net/tutorials/introduction-to-mathematica-an-extended-example-harmonic-oscillator/ | [
"# Introduction to Mathematica: An Extended Example: Harmonic Oscillator\n\nBy this point in the sequence of tutorials, we have developed enough skills with Mathematica™ to construct a relatively-full example that is a classic in quantum mechanics: the harmonic oscillator. In the development below, we will be assuming that we have already derived the wavefunctions and energy levels for this system (we can certainly derive them in Mathematica™, and I may add a tutorial on how to do that at a later point). Over the course of this tutorial, we will generate the figure shown to the right, showing the probability density functions and energy levels for the harmonic oscillator for levels v=0 to 10.\n\nLet’s start with the potential energy curve. We are also going to set up some specific conditions that we will use for making plots: the force constant will equal 1, the mass will equal 1, and reduced Planck’s constant will equal 1 (i.e., we are operating in atomic units). Notice the usage of Set, the definition of a function with a variable argument (in this case, the x coordinate), the replacement rules that allow for the application of conditions temporarily, a two-dimensional plot of a function implementing those conditions (with a bunch of aesthetic modifiers that you don’t need to worry about, but which will make our final result easier to visualize), and the fact that we can assign the plot itself to a variable for later reference.\n\nNow let’s define the wavefunctions. There are a couple of advanced techniques being used in that definition below, which I will explain, but hopefully you will be able to see how you can do the same thing without using those advanced techniques. First, let me point out the aspects of the code that we went over in earlier tutorials:\n\n• The use of Set to define the function. Since this particular function is not well-defined when the quantum number v is not an integer, you could argue that one should define this function using SetDelayed instead. The use of ConditionalExpression (described below), however, means that the function is only defined in cases where v is in the correct domain.\n• The use of Greek letters to make the code follow standard chemistry notation more closely.\n• The separation of one function parameter (the quantum number) as a subscript from the other function parameter (the x-coordinate) as being within the brackets.\n• The use of replacement rules to apply our specific conditions to the wavefunction (in the last line).\n\nThe advanced techniques that you can do without if you so choose are:\n\n• ConditionalExpression. This says that the part before the comma is the value of the function, but only under the conditions that are listed after the comma. As I mentioned above, you can leave this out and use DelayedSet instead.\n• Module. This is a way to define “local” variables for a set of statements. Here, α and y are only being defined within the Module. We assign values to them, and then use those values in the ConditionalExpresson to simplify the form of the function. Outside of the Module, those two local variables are not modified. This is a convenient way to ensure that you don’t litter the kernel with definitions that might interfere with later work you are doing.\n\nWe also need the energies. 
Notice, in the code below, the use of a Table to print out some test values to make sure the expression works, and the use of replacement rules to substitute in our conditions.\n\nWhat we would really like to do is to plot the energy levels on the potential energy curve. We have the y values (the energies), but we also need to know the x-values to limit the horizontal span of the lines. The x values are the places where the energy level are equal to the potential energy. Let’s see how to do that.\n\nIt is instructive to consider the first line above, and how it arose as I was coding this. Many students look at that kind of line and think, “Wow, there’s a lot going on at once… how do you put it all together right away?” And the answer is that you don’t. You do it stepwise, making sure that each part works first. Let’s walk through how I did it.\n\nFirst, I thought, “The basic thing I want to do is find the x values where the energies are equal. That sounds like a Solve statement. Let’s pick v=3 just to have something to work with:\n\nOk, that’s looking like a good start. But let’s simplify our life by substituting in for the constants. Copy-paste and modify:\n\nNotice the use of parentheses to ensure that the replacement rule “/.conditions” applies to both sides of the equation. Ok, this is good so far, but I don’t really want the result to be a replacement rule, so we should use the replacement rule:\n\nThat says, “give me x, where x has been replaced with these two values.” So let’s put that in conjunction with our earlier statement.\n\nWonderful. Now let’s generalize this so that we can do it at any quantum number, and we will write this as a function where the only argument is a subscript:\n\nAnd we’re done. Now in reality, I did not actually keep all of those lines. I started with the first one, checked it, and then made sequential modifications to it until it worked like I wanted. Like this:\n\nOk, so we now have the crossings. We want lines running from {leftCrossingx,energy} to {rightCrossingx,energy}. Let’s do see what we can do. In particular notice the use of the Table within the Graphics statement. As with many other multi-component lines, this one was built up by working from the inside out, similar to my extended example above.\n\nNow it would be nice to plot the wavefunctions on each line. Or actually, the wavefunction squared would probably be better. To do that, we need to offset each wavefunction vertically from zero by the magnitude of the energy level. And just from a graphics prettiness standpoint, we should scale them slightly. Fortunately, that is straightforward.\n\nNow we just combine this with the energy level diagram, and we are done.\n\nOur final result is a professional-looking plot that contains a huge amount of information."
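"The code in this tutorial is Mathematica (shown in the screenshots rather than in the text above). For readers following along without Mathematica, here is a rough Python/Matplotlib analogue of the final figure under the same conventions (k = m = ħ = 1, so ω = 1 and E_v = v + 1/2); this is my own sketch, not the author's code:

```python
import numpy as np
import matplotlib.pyplot as plt
from math import factorial, pi, sqrt
from numpy.polynomial.hermite import hermval

def psi(v, x):
    # Harmonic-oscillator eigenfunction for k = m = hbar = 1.
    c = np.zeros(v + 1)
    c[v] = 1.0                                   # selects the physicists' Hermite polynomial H_v
    norm = (1.0 / pi) ** 0.25 / sqrt(2.0 ** v * factorial(v))
    return norm * np.exp(-x ** 2 / 2.0) * hermval(x, c)

x = np.linspace(-6, 6, 800)
plt.plot(x, x ** 2 / 2.0, "k")                   # potential energy curve V(x) = x^2 / 2
for v in range(11):                              # levels v = 0 .. 10, as in the figure
    E = v + 0.5                                  # energy of level v
    xt = sqrt(2.0 * E)                           # classical turning points, where E = x^2 / 2
    plt.hlines(E, -xt, xt, colors="gray")        # energy level drawn between the crossings
    plt.plot(x, E + 0.6 * psi(v, x) ** 2)        # probability density offset to its level
plt.ylim(0, 12)
plt.show()
```

The 0.6 factor on the densities is an arbitrary aesthetic choice, playing the same role as the slight scaling mentioned for the Mathematica version."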
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.92647165,"math_prob":0.9472139,"size":5979,"snap":"2019-51-2020-05","text_gpt3_token_len":1262,"char_repetition_ratio":0.1283682,"word_repetition_ratio":0.0038240917,"special_character_ratio":0.20873056,"punctuation_ratio":0.10350584,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9913102,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-07T21:23:22Z\",\"WARC-Record-ID\":\"<urn:uuid:95b65cf4-b8f6-4079-9d7c-b9ce352c3c33>\",\"Content-Length\":\"78600\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a1b30c6a-93e5-46d6-bae3-0caa1515c06f>\",\"WARC-Concurrent-To\":\"<urn:uuid:f01b16ed-d1e7-4f54-ae27-4ead346a0967>\",\"WARC-IP-Address\":\"192.0.78.25\",\"WARC-Target-URI\":\"https://kevinausman.net/tutorials/introduction-to-mathematica-an-extended-example-harmonic-oscillator/\",\"WARC-Payload-Digest\":\"sha1:RRJPQ3JEDUC52ZTMWEKW5N6WGTSSAXLJ\",\"WARC-Block-Digest\":\"sha1:FDW62JS56XWMMDSWWWW64GYPUS7EXMPS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540502120.37_warc_CC-MAIN-20191207210620-20191207234620-00540.warc.gz\"}"} |
http://brandonlin.com/cubing/csp.html | [
"Cubeshape Parity\n\nCubeshape Parity, as the name suggests, involves fixing parity errors on the Square-1 during the cubeshape step. The main advantage to this method is that checking for parity occurs during inspection, which isn't factored into your final time. While the prospects of this method are groundbreaking, the method is still in development and is not currently being used in competitions by a lot of cubers, due to the amount of practice it takes to master this method.\n\nProps to Matthew Sheerin for inventing his technique. The method is based off of this Speedsolving thread.\n\nFirst, What is Parity?\n\nThe term \"parity\" is often thrown around incorrectly, and the correct phrase that should be used is \"parity error\". Parity refers to the oddness/evenness of the number of swaps required to solve a puzzle. (For example, an Adj swap has odd parity, since it requires one swap, and an H-Perm is of even parity since it requires 2.) Notice that when you make a slice turn of the puzzle misaligned by 1,0, you swap 2 pairs of corners and 2 pairs of edges, making 4 swaps. Therefore, keeping the puzzle in cubeshape will maintain the puzzle in even parity.\n\nStrange things happen when the Square-1 is not in the cubeshape, however. For example, setup the Square-1 with the moves /3,3/1,2/2,-2. Notice that by doing a slice move here, you swap 3 pairs of corners, resulting in 3 swaps and creating odd parity. Then, when you reverse the moves above, you will have a parity error, since you cannot return the puzzle to even parity by simply keeping it in cubeshape.\n\nColloquially, however, we simply use the word \"parity\" to refer to the fact that a puzzle has this parity error. In Cubeshape Parity (also known as CSP), we aim to fix parity errors during the cubeshape step.\n\nThe Basic Idea\n\nFor every case of cubeshape, we assign what we call a reference scheme. This is a known position of the Square-1 in a particular cubeshape that makes it easier for us to determine the parity of the Square-1. Because parity relies on the number of swaps, essentially we will be counting how many swaps it takes to restore the puzzle to this reference scheme. Depending on whether we determine that there is even or odd parity, we will perform cubeshape in a particular way that restores the cubeshape and fixes even parity.\n\nThe best way to learn CSP is to provide examples. Consider the case 80/Star, with the following reference scheme: (images are viewed from the top of the puzzle)",
null,
"Pretend that this is the \"solved\" position of the 80/Star case; given a scramble in this case we need to count how many pair swaps it takes to return the puzzle to the reference scheme. If you know how to solve a Rubik's Cube blindfolded, it's the same principle; we count the cycles for where which every piece needs to go, except this time we don't care about the location of the pieces, we only care about whether the number is even or odd.\n\nIf you don't know what I'm talking about, here's a short example. Scramble your cube with the following scramble (I'm assuming you use white top, red front):\n\n3,-1/1,-2/-1,-4/1,-2/2,0/\n-3,-3/3,0/-1,-5/-4,-2/6,0\n\nStarting with the YR edge, we find the place where it has to go in the reference scheme. In that place, there is a YB edge. We count \"1\" to represent we need one swap so far. Next, we see where the YB edge has to go, and we see that a WR edge is in its place. Then, we count \"2\" since now there are 2 swaps.\n\nNow, since white-red goes back to where we started, we will have to start with a new cycle. and count the remaining number of swaps. We start with the WB edge, since the WG edge is already in its place. This has to go where the WO edge is, so we count \"3\". This returns the white-orange piece to its original position, meaning we have to start a new cycle. The only cycle that remains is between the YG and YO edges, which means we count up to \"4\". Thus, the edges require a total of 4 swaps to solve to the reference scheme.\n\nWe now do the same for the corners, without resetting the numbering. Without explanation, using the reference scheme we have the following cycles\n\nWBO → YGO → WGR\n\nWOG → YGR\n\nYBR → WRB → YOB\n\nmeaning the corners require a total of 5 swaps. In total, we count 9 swaps, meaning the scramble has odd parity with respect to the reference scheme.\n\nNow, for each cubeshape we require two ways of doing cubeshape that will either preserve or switch parity. It so happens that for the cases with a star shape, the two ways differ by turning the star layer by 2 units (I refer to this as a \"star-shift\"). This changes parity because it performs an equivalent of 5 swaps on the corners. If we end up counting an even number of swaps, we perform the cubeshape algorithm normally without a star-shift. If we count an odd amount of swaps, then we perform a star-shift and solve cubeshape normally. In our case, we have odd parity, so we would do\n\n0,2/2,4/-2,-1/3,3/\n\nwhere the red part indicates the star-shift that changes parity.\n\nBE CAREFUL: for this particular case, we require the 8-group of edges to be in the back. If we solved cubeshape with it in the front, then our two ways of solving a parity error would be reversed, since this is equivalent to rotating the star layer by 6 units (3 swaps). Pay attention to these things when you learn cases.\n\nAnother Example Trace\n\nHere's another example trace, this time with the shape Scallop/Scallop. This has the reference scheme:",
null,
"We consider the scramble\n\n-2,0/-1,-4/0,-3/0,-3/4,-2/2,-4/-5,0/-3,-3\n/-5,-2/-4,0/4,0/-2,-2/4,6\n\nThe edges must cycle like so:\n\nWO → YR → YG → WG → YB → WB → YO → WR\n\nand the corners go as so:\n\nWOG → YOB → WGR\n\nYGO → YBR → WRB\n\nWe have 7 edge swaps and 4 corners swaps, resulting in 11 total swaps, which means we have odd parity. The odd parity algorithm for Scallop/Scallop is:\n\n-2,2/2,-2/1,2/-3,-3/\n\nThe blue represents what I call a \"slice-shift\", which switches parity because it performs 3 corner swaps. It is also very prevalent in many CSP algorithms.\n\nIn this case, it does not matter whether the scallops were in the front or back; they achieve the same thing. For some cases the orientation will matter, and for other cases the orientation will not; you will need to keep them in mind when you learn CSP cases.\n\nA Few Things to Note\n\n1. The star-shift and slice-shift will be very common in many algorithms, but that does not mean they are always in the odd parity algorithms; it depends on the specific case.\n\n2. It does not matter what color scheme you choose to use, as long as you keep it consistent between white and yellow. I simply use this one because I'm used to it and have been using it ever since. For edges, going clockwise,\n\nred, green, orange, blue\n\nusually starting with white on top and yellow on bottom. For corners, it is the same thing by considering the sticker to the right.\n\n3. For mirrors (when the layers are switched), you can either perform a z2 if the middle layer is flipped, or you can trace with white/yellow inverted.\n\nList of Cases\n\nI would list all the cases out here, but there are just too many of them. Luckily, thanks to the efforts of Tommy Szeliga and Rowe Hessler, they have created a Google Doc with all the cases you need to know for CSP. You can find this doc here.\n\nVideo Series\n\nAlongside with this doc, I am also creating a video series detailing all the CSP cases. You can find a link to the playlist here.\n\nConcluding Remarks\n\nCSP is very difficult to begin with, but once you get the hang of it you will start seeing massive improvements to your times. If you have any questions/needed clarifications about this guide, don't hesitate to contact me through my Facebook page or YouTube channel. Happy Squaning!"
] | [
null,
"http://brandonlin.com/img/csp/80star.png",
null,
"http://brandonlin.com/img/csp/scallop2.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.93796206,"math_prob":0.8977422,"size":7660,"snap":"2022-05-2022-21","text_gpt3_token_len":1888,"char_repetition_ratio":0.12826541,"word_repetition_ratio":0.0014503263,"special_character_ratio":0.23890339,"punctuation_ratio":0.11839709,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9542636,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-21T02:40:24Z\",\"WARC-Record-ID\":\"<urn:uuid:efc1162f-1488-48df-8584-cc7a4e7b675e>\",\"Content-Length\":\"11760\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:df54f467-cacf-478e-9ed5-331973e519e8>\",\"WARC-Concurrent-To\":\"<urn:uuid:0a17e68e-7a7e-4e4d-9b4d-8edbb33da3de>\",\"WARC-IP-Address\":\"45.55.187.66\",\"WARC-Target-URI\":\"http://brandonlin.com/cubing/csp.html\",\"WARC-Payload-Digest\":\"sha1:5NPKDYDJF35GYT2NJJIAIQH5FWV2FFVH\",\"WARC-Block-Digest\":\"sha1:WVK2RJFKJNSXKHQ3BSIQTQUZYNICAKQ2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320302715.38_warc_CC-MAIN-20220121010736-20220121040736-00662.warc.gz\"}"} |
https://www.proactiveconcepts.co.za/34819-kg/of/rebar/per/cubic/meter/of/concrete/foundation.html | [
" kg of rebar per cubic meter of concrete foundation\n\n# kg of rebar per cubic meter of concrete foundation\n\n•",
null,
"### Unit Weight Of Steel Bar\n\nA: Air density is the mass of air per unit of volume it occupies, and it is expressed in kilograms per cubic meter when using Metal Calculator - Steel Weight, Conversions, Carbon Use O''Neal Steel''s calculators to determine steel weight, conversions, carbon equivalency and PCM.\n\nGet Price\n•",
null,
"### Concrete Calculator\n\nHow does this concrete calculator work? This is a house tool that can help you estimate the cement requirements for your construction project. It uses a premixed cement density of 133 pounds per cubic foot (60 pounds/0.45 cubic feet, or 80 pounds/0.60 cubic feet) or 2,130kg/cubic meters to obtain the weight in the result.\n\nGet Price\n•",
null,
"### Rebar Per Cubic Yard Of Concrete\n\nSteel has a density of about 490 pounds per cubic foot, which is more than three times the density of concrete, so the generally accepted density of normally reinforced concrete is 150 pounds per cubic foot which results in a cubic yard weighing 4050 pounds, but this depends on the quantity and grade of the rebar .\n\nGet Price\n•",
null,
"### Weight of reinforcing steel per cubic yard of concrete\n\nWeight of reinforcing steel per cubic yard of concrete Products. As a leading global manufacturer of crushing, grinding and mining equipments, we offer advanced, reasonable solutions for any size-reduction requirements including, Weight of reinforcing steel per cubic yard of concrete, quarry, aggregate, and different kinds of minerals.\n\n### kg of rebar per cubic meter of concrete foundation ...\n\nIf you are reinforcing the concrete with rebar then it depends on what loads to structure element; but a (very) rough average of 120 KG per cubic meter. . How do you calculate total cost of 1 cubic meter reinforcement cement concrete of heavy foundation.\n\nGet Price\n•",
null,
"### Concrete Calculator\n\nHow does this concrete calculator work? This is a house tool that can help you estimate the cement requirements for your construction project. It uses a premixed cement density of 133 pounds per cubic foot (60 pounds/0.45 cubic feet, or 80 pounds/0.60 cubic feet) or 2,130kg/cubic meters to obtain the weight in the result.\n\nGet Price\n•",
null,
"### Volume Of 1 Bag Of Cement In m3 - Engineering Feed\n\nThis construction video tutorial will show you how to find out the numbers of cement bags in one cubic meter. It is known that the density of cement is 1440 kg/cum and Weight of 1 bag cement = 50 kg.\n\nGet Price\n•",
null,
"### Concrete Price - Order Concrete Mix Starting £65 Per Cubic ...\n\nRMS Concrete supply quality concrete mix in London,Es and Kent starting at £65 per cubic metre. Find out a concrete delivery facility in your area by RMS Concrete and order now!\n\nGet Price\n•",
null,
"### Concrete Calculator\n\nCalculate concrete volume of curbs & gutters. We calculate the volume in sections from the ground to the top of each section. The Curb and Gutter are each a section. Volume in Cubic Meters (m 3) = Volume in Cubic Feet (ft 3) x 0.0283.\n\nGet Price\n•",
null,
"### How to Calculate the Weight of Steel Bar? - Online Calculator\n\nFormula for Unit weight of steel = D 2 /162.28 Kg/m. Let''s take an example, If we want to calculate the unit weight of 8mm steel rod of 2-metre height, Weight of steel = 8 2 /162.28 = 0.3944 kg/m * 2m = 0.79 kg. So 1- meter of 8 mm steel weighs around 0.79kg.\n\nGet Price\n•",
null,
"### Corrosion Protection of Steel Rebar in Concrete using ...\n\nand 0.8 kg of chlorides per cubic meter of concrete. generally within 0.9 to 1.1 kg of chlorides per cubic meter of concrete11. available migrating corrosion inhibitors on steel rebar in three concrete densities. high density concrete impedes corrosive species from reaching the surface of the rebar.\n\nGet Price\n•",
null,
"### Formula (d^2/162) for calculation of per meter weight of ...\n\nFormula (d^2/162) for calculation of per meter weight of rebar derived To find : weight of steel per metre. Since the formula to be derived is d^2/162, assumed cross section is circular and the material is cylinder.\n\nGet Price\n•",
null,
"### kg/m³ - Kilogram Per Cubic Meter. Conversion Chart ...\n\nThis is a conversion chart for kilogram per cubic meter (Metric System). To switch the unit simply find the one you want on the page and click it. You can also go to the universal conversion page. 2: Enter the value you want to convert (kilogram per cubic meter). Then click the Convert Me button.\n\nGet Price\n•",
null,
"### Concrete Calculator - Cemen Tech\n\nCalculate and the answer is 0.08 cubic yards for one concrete tube Multiply 0.08 x 50 = 4 total cubic yards of concrete for 50 tubes Note that this calculation is the volume of your tubes only and does not account for any overflow or loss at the bottom of your tubes\n\nGet Price\n•",
null,
"### Rebar / Reinforced Steel wieghts and specifications from ...\n\nRebar, Mesh & Construction Supplies (Pty) Ltd t/a RMCS 571 Setter Road, Commercia Industrial MIDRAND, Gauteng Province, 1685 South Africa\n\nGet Price\n•",
null,
"### Rebar: end projection and scheduling by element - Autodesk ...\n\nThat is, the number of kg of reinforcement per cubic meter of concrete in an element. Volume I can schedule easily, rebar weight I can also do using unit wt x rebar volume. The problem is when I have several different elements I want densities on, in the same project.\n\nGet Price\n•",
null,
"### kg/m³ - Kilogram Per Cubic Meter. Conversion Chart ...\n\nThis is a conversion chart for kilogram per cubic meter (Metric System). To switch the unit simply find the one you want on the page and click it. You can also go to the universal conversion page. 2: Enter the value you want to convert (kilogram per cubic meter). Then click the Convert Me button.\n\nGet Price\n•",
null,
"### Weight of common engineering metals, in kilograms per ...\n\na guide to the weight of common engineering metals, expressed as kgs per cubic metre. ... square or rectangular piece of material, multiply the width, height and length to get the volume, then multiply by the \"Kg /Cu. Mtr\" figure for the appropriate material from the above table.\n\nGet Price\n•",
null,
"### CONCRETE GRADE: M5 = 1:4:8 M10= 1:3:6 M15= 1:2:4 M20= 1:1 ...\n\nFresh concrete density: 2430 Kg/ M 3 M25 ( 1 : 2.28 : 3.27) Cement : 340 Kg/ M 3 20 mm Jelly : 667 Kg/ M 3 12.5 mm Jelly : 445 Kg/ M 3 River sand : 775 Kg/ M 3 Total water : 185 Kg/ M 3 Admixture : 0.6% Fresh concrete density: 2414 Kg/ M 3 Note: sand 775 + 2% moisture, Water185 -20.5 = 164 Liters, Admixture = 0.5% is 100ml M30 ( 1 : 2 : 2.87)\n\nGet Price\n•",
null,
"### Estimating Costs for Concrete Formwork, Rebar, Labor, and ...\n\nMost concrete includes some type of reinforcement, such as rebar, wire mesh, plastic mesh, or fiber added to the concrete mix to increase strength and crack-resistance. Standard reinforcing materials can add approximately \\$0.18 cents per square foot.\n\nGet Price\n•",
null,
"### Weight Metres per Bar Dia Metre per Tonne (Kg)\n\nBar Dia Weight per Metre (Kg) Metres per Tonne 6mm 0.222 4504 8mm 0.395 2531 10mm 0.616 1623 12mm 0.888 1126 16mm 1.579 633 20mm 2.466 406\n\nGet Price\n•",
null,
"### Calculation of Concrete\n\n1 cubic meter of sand weighs 1200-1700 kg on average - 1500 kg. Gravel and crushed stone. According to various sources, the weight of 1 cubic meter ranges from 1200 to 2500 kg depending on the fraction (size). Heavier - more than fine. So pereschityvat price per ton of sand and gravel you will have your own. Or clarify the sellers.\n\nGet Price\n•",
null,
"### Calculation For Per Meter Weight Of Rebar - YouTube\n\nApr 24, 2017· Calculation For Per Meter Weight Of Rebar : https://youtu/bys_SzkE0B4 This video shows how to calculate per meter weight of rebar. Learn more at :\n\nGet Price\n•",
null,
"### Foundation - Building a house in Ireland\n\nThe concrete is usually poured at a level of at least 300 mm, so which approximately would be a third of the cubic meter, so divide the figure of 48 by 3 which would give you 16. This means that you will require approximately 16 cubic meters of ready mix for your external walls.\n\nGet Price\n•",
null,
"### Concrete calculator readymix - Cashbuild\n\nConcrete calculator readymix . With this Concrete Volume Calculator, just enter the dimensions of the area to be concreted and you will be provided with the cubic metres of ready mixed concrete required for your project. Please enter the dimensions in the white fields below. The calculations will be completed when you leave the last input field.\n\nGet Price\n•",
null,
"### How Much Concrete Do I Need? | Sakrete\n\nHow Much Concrete Do I Need? ... This means the \"density\" of concrete is about 145 lbs per cubic foot. Now that we have that information we can calculate the yield. Add up both the dry material in the bag (80 lbs) and the water it takes to mix it up (1 gallon which weighs 8.3 lbs) for a total weight of 88.3 lbs. ...\n\nGet Price\n•",
null,
"### Concrete Volume Calculator\n\nHoles & Columns. Calculate the volume of cubic yards needed to fill in a hole, column or round footing. Note: While different mixes have different density levels, these calculators presume a density of 133 pounds per cubic foot or 2,130kg/cubic meter.\n\n### How Much Concrete Do I Need? | Hunker\n\nMar 28, 2018· To calculate the volume of concrete you need in cubic yards, you would multiply the length (2 feet) by the width (2 feet) by the depth (4 inches). Divide the answer by 12 to get everything into feet; the answer is 1.3 cubic feet.\n\nGet Price\n•",
null,
"### Concrete - Volume Estimate\n\nEstimate required concrete volume per sq. ft. of slab Engineering ToolBox - Resources, Tools and Basic Information for Engineering and Design of Technical Applications! - the most efficient way to navigate the Engineering ToolBox!\n\nGet Price\n•",
null,
"### 2019 Rebar Prices | Cost to Install Steel Reinforcing Bars\n\nRebar Cost Per Ton . Purchasing rebar by the ton is much more common than purchasing it by the pound, but still uncommon in the residential construction business. This is because the most common rebar sizes, #3, #4 and #5, come in bundles of 266, 149.7 and 95.9 20-foot sticks per ton, respectively, which is more than most residential jobs require.\n\nGet Price\n•",
null,
"### On-Site concrete calculator. - Source4me\n\nOn-Site Concrete calculator. Use this calculator to determine how much sand, aggregate (gravel) and cement is required for mixing on site a given area of concrete (1:2:4 ratio). Please enter the dimensions in the white fields below and click calculate to display the results. See below for help on concrete.\n\nGet Price\n•",
null,
"### Metric calculator for concrete - cement, sand, gravel etc\n\nThese Concrete Calculators provide the required quantities of cement and all-in ballast or cement, sharp sand and gravel required to give a defined volume of finished concrete. Both of these concrete calculators make an allowance for the fact that material losses volume after being mixed to make concrete.\n\n### Concrete slab calculator: cost and materials | JustCalc\n\nFree online concrete slab calculator estimates amount of materials needed for a monolithic slab on grade and their total cost. . Particularly, amount of cement, ... (per 1 cubic meter) Rebar (per 1 ton) ... volume of the concrete needed for the foundation plate;\n\nGet Price"
] | [
null,
"https://www.proactiveconcepts.co.za/image/188.jpg",
null,
"https://www.proactiveconcepts.co.za/image/10.jpg",
null,
"https://www.proactiveconcepts.co.za/image/355.jpg",
null,
"https://www.proactiveconcepts.co.za/image/85.jpg",
null,
"https://www.proactiveconcepts.co.za/image/195.jpg",
null,
"https://www.proactiveconcepts.co.za/image/357.jpg",
null,
"https://www.proactiveconcepts.co.za/image/119.jpg",
null,
"https://www.proactiveconcepts.co.za/image/129.jpg",
null,
"https://www.proactiveconcepts.co.za/image/220.jpg",
null,
"https://www.proactiveconcepts.co.za/image/258.jpg",
null,
"https://www.proactiveconcepts.co.za/image/173.jpg",
null,
"https://www.proactiveconcepts.co.za/image/225.jpg",
null,
"https://www.proactiveconcepts.co.za/image/351.jpg",
null,
"https://www.proactiveconcepts.co.za/image/54.jpg",
null,
"https://www.proactiveconcepts.co.za/image/100.jpg",
null,
"https://www.proactiveconcepts.co.za/image/210.jpg",
null,
"https://www.proactiveconcepts.co.za/image/71.jpg",
null,
"https://www.proactiveconcepts.co.za/image/289.jpg",
null,
"https://www.proactiveconcepts.co.za/image/90.jpg",
null,
"https://www.proactiveconcepts.co.za/image/122.jpg",
null,
"https://www.proactiveconcepts.co.za/image/277.jpg",
null,
"https://www.proactiveconcepts.co.za/image/89.jpg",
null,
"https://www.proactiveconcepts.co.za/image/130.jpg",
null,
"https://www.proactiveconcepts.co.za/image/201.jpg",
null,
"https://www.proactiveconcepts.co.za/image/45.jpg",
null,
"https://www.proactiveconcepts.co.za/image/404.jpg",
null,
"https://www.proactiveconcepts.co.za/image/4.jpg",
null,
"https://www.proactiveconcepts.co.za/image/79.jpg",
null,
"https://www.proactiveconcepts.co.za/image/275.jpg",
null,
"https://www.proactiveconcepts.co.za/image/361.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.82230633,"math_prob":0.9767413,"size":10754,"snap":"2020-34-2020-40","text_gpt3_token_len":2584,"char_repetition_ratio":0.17572093,"word_repetition_ratio":0.15724286,"special_character_ratio":0.24893063,"punctuation_ratio":0.12943925,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9796788,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,2,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,2,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-27T03:29:40Z\",\"WARC-Record-ID\":\"<urn:uuid:a6a5e9b1-f457-457c-825c-2992ef7eef49>\",\"Content-Length\":\"32722\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7b616904-3185-4a54-adc6-9ca2afa477c1>\",\"WARC-Concurrent-To\":\"<urn:uuid:2fb4c0f9-8892-4ddb-b81c-839d48902b98>\",\"WARC-IP-Address\":\"172.67.200.198\",\"WARC-Target-URI\":\"https://www.proactiveconcepts.co.za/34819-kg/of/rebar/per/cubic/meter/of/concrete/foundation.html\",\"WARC-Payload-Digest\":\"sha1:J5UAYZPAII2CIS375J2PGDVYXKGL6EWG\",\"WARC-Block-Digest\":\"sha1:6WJ2AJ5GMACNBZSQUI6DXY35EE4TJY3Z\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400250241.72_warc_CC-MAIN-20200927023329-20200927053329-00568.warc.gz\"}"} |
http://www.stuffedsheets.com/products/math_products/alg-cf3.htm | [
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"Stuffed Sheets™ are the most thorough compilation of math I have ever seen in such a small and manageable format... more\n\nAlgebra Series - Concepts and Formulas\nAlgebraic Fractions, Rules of Exponents, Operations on Polynomials 1 (ALG-CF3)",
null,
"This sheet provides a comprehensive review of all of the most important topics, concepts and formulas spanning the algebraic concepts of: algebraic fractions, rules of exponents and operations on polynomials, explained in detail, with detailed illustrations. Unlike charts and notes, this Concepts and Formulas™ sheet anticipates and answers questions about how to use the concepts - how to solve problems with them, it doesn't just list facts and formulas.\n\nTopics Covered:\n\n• Addition properties of real numbers\n• Subtraction properties of real numbers\n• Multiplication properties of real numbers\n• Division properties of real numbers\n• Special cases of division with zero\n• Algebraic fractions\n• Means and Extremes\n• Prime factors\n• How to multiply algebraic fractions\n• How to divide algebraic fractions\n• How to add and subtract algebraic fractions\n• Complex algebraic fractions\n• How to simplify algebraic fractions\n• Polynomials\n• Simplified polynomials\n• Monomials\n• Binomials and Trinomials\n• Polynomials in standard form\n• Polynomials in x\n• Polynomials of degree n\n• How to subtract polynomials\n• The basic rules of exponents\n• How to multiply polynomials\n• How to calculate products of binomials by the F-O-I-L method\nThis e-book can only be read on the computer on which it was registered. Each purchase grants the license for use on only one computer.\n\nSystem requirements: Windows 95, 98, 98SE, 2000, NT, Me, XP or Vista; at least 256 MB of RAM (memory)\n\nWhat are E-Sheets?\n\nRelated Stuffed Sheets and InSights e-books (prerequisite material):\nBasic Mathematics Series:\nFundamentals (BMS-CF1)\nFractions (BMS-CF2)\nOperations on Fractions (BMS-CF3)\nDecimals (BMS-CF4)\nOperations on Decimals (BMS-CF5)\nPercentages (BMS-CF6)\nRatios, Proportions, Rates and Variations (BMS-CF7)\nThe Decimal System (BMS-EB1)\nPrime and Composite Numbers (BMS-EB2)\nFractions (BMS-EB4)\nPercentages (BMS-EB5)\nRatios, Proportions and Rates (BMS-EB6)\nAlgebra Series:\nAlgebraic Expressions, Order of Operations, Sets (ALG-CF1)\nSets of Numbers, Number Lines, Intervals and Absolute Value (ALG-CF2)\nOperations on Polynomials 2, Polynomial Factoring Techniques (ALG-CF4)\nRational Algebraic Expressions, Operations on Rational Algebraic Expressions (ALG-CF5)\nGraphing Fundamentals 1 - Relations and Functions, Function Tests (ALG-CF9)\nPolynomial Functions and Equations (ALG-CF24)\n E-Sheets™ \\$3.00 Download",
null,
""
] | [
null,
"http://www.stuffedsheets.com/images_global/spacer.gif",
null,
"http://www.stuffedsheets.com/images_global/math_products.jpg",
null,
"http://www.stuffedsheets.com/images_global/spacer.gif",
null,
"http://www.stuffedsheets.com/images_global/ordering_information.jpg",
null,
"http://www.stuffedsheets.com/images_global/spacer.gif",
null,
"http://www.stuffedsheets.com/images_global/subject_index-site_search.jpg",
null,
"http://www.stuffedsheets.com/images_global/spacer.gif",
null,
"http://www.stuffedsheets.com/images_global/pod_and_anicasts.jpg",
null,
"http://www.stuffedsheets.com/images_global/spacer.gif",
null,
"http://www.stuffedsheets.com/images_global/quick_solutions.jpg",
null,
"http://www.stuffedsheets.com/images_global/spacer.gif",
null,
"http://www.stuffedsheets.com/images_global/contact_us.jpg",
null,
"http://www.stuffedsheets.com/images_global/spacer.gif",
null,
"http://www.stuffedsheets.com/images_global/about_us.jpg",
null,
"http://www.stuffedsheets.com/images_global/spacer.gif",
null,
"http://www.stuffedsheets.com/images_global/what's_new!.jpg",
null,
"http://www.stuffedsheets.com/images_global/spacer.gif",
null,
"http://www.stuffedsheets.com/images_global/testimonials_r2_c2.gif",
null,
"http://www.stuffedsheets.com/images_global/adamtart.GIF",
null,
"http://www.stuffedsheets.com/images_home/ALG-CF3-E Cover image - small.gif",
null,
"http://www.tell-a-friend-wizard.com/button/tell_friend.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8230869,"math_prob":0.9320291,"size":2710,"snap":"2022-27-2022-33","text_gpt3_token_len":695,"char_repetition_ratio":0.1714708,"word_repetition_ratio":0.017994858,"special_character_ratio":0.21586716,"punctuation_ratio":0.09490741,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9957432,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,1,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-14T22:22:38Z\",\"WARC-Record-ID\":\"<urn:uuid:f5d11309-f193-431b-9735-1eb693aac38e>\",\"Content-Length\":\"34550\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:14b9be88-af22-4b8a-8698-86c3740fd9f2>\",\"WARC-Concurrent-To\":\"<urn:uuid:8b98faea-6582-4857-9dcc-88587625eba5>\",\"WARC-IP-Address\":\"207.150.212.54\",\"WARC-Target-URI\":\"http://www.stuffedsheets.com/products/math_products/alg-cf3.htm\",\"WARC-Payload-Digest\":\"sha1:ELF73LF3Z7S5T6PLBESUQFBPDXMIKZF2\",\"WARC-Block-Digest\":\"sha1:RGTLYZZ3Q3BX73RPOEVIGEED5MWH4BWJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882572077.62_warc_CC-MAIN-20220814204141-20220814234141-00144.warc.gz\"}"} |
https://cs.unibuc.ro/~lleustean/Seminar-Logic/seminar-logic.html | [
"",
null,
"# Logic Seminar\n\n### Organizers: Laurențiu Leuștean, Andrei Sipoș\n\nThe logic seminar features talks on\nmathematical logic,\nphilosophical logic and\nlogical aspects of computer science.\nThe seminars are online, using Google Meet as the underlying platform. All seminars, except where otherwise indicated, will be at 10:00.",
null,
"Past Seminars\n\n## Talks in 2020-2021\n\n### Thursday, June 3, 2021\n\nGabriel Ciobanu (Alexandru Ioan Cuza University of Iași and Romanian Academy (ICS))\nChoice principles and infinities for finitely supported structures\n\nAbstract: Finitely supported structures are related to permutation models of Zermelo-Fraenkel set theory with atoms (ZFA) and to the theory of nominal sets. They were originally introduced in 1930s by Fraenkel, Lindenbaum and Mostowski to prove the independence of the axiom of choice and the other axioms in ZFA. We use a set theory defined by the axioms of ZFA set theory extended with an additional axiom for finite support. The consistency (validity) of choice principles in various models of Zermelo-Fraenkel set theory (ZF) and of ZFA (including the symmetric models and the permutation models) was investigated deeply in the last century. The choice axiom was proved to be independent from the axioms of ZF and ZFA. In the new theory of finitely supported structures, we analyze the consistency of various choice principles (and equivalent/related results), as well as the consistency of results regarding cardinality and infinity. We prove the inconsistency of choice principles for finitely supported structures (i.e., their formulations with respect to the finite support requirement are not valid). Related to this inconsistency of choice principles, we present some pairwise non-equivalent definitions for the notion of infinity (difficult to imagine, we have seven forms of infinity for finitely supported structures). We compare these forms, and present examples of atomic sets satisfying a certain form of infinity, while they do not satisfy others. In particular, we focus on the notion of Dedekind infinity and on uniformly supported sets.\n\nReferences:\n A. Alexandru, G. Ciobanu, Foundations of Finitely Supported Structures: a set theoretical viewpoint, Springer, 2020.\n\n### Thursday, May 27, 2021\n\nCătălin Dima (University Paris-Est Créteil)\nRational Synthesis in the Commons with Careless and Careful Agents\n\nAbstract: Turn-based multi-agent games on graphs are games where the states are controlled by a single player who decides which edge to follow. Each player has a temporal objective that he tries to achieve, and one player is the designated controller', whose objective captures the desirable outcomes of the whole system. Cooperative rational synthesis is the problem of computing a Nash equilibrium (w.r.t. the individual temporal objectives) that satisfies the controller’s objective. In this presentation, we tackle this problem in the context where each action has a cost or a benefit on one shared common pool energy resource. The paper investigates the problem of synthesizing the controller such that there exists an individually rational behavior of all the agents that satisfies the controller's objective and does not deplete the resource. We consider two types of agents: careless and careful. Careless agents only care for their temporal objective, while careful agents also pay attention not to deplete the system's resource. 
We study the complexity of the problem of cooperative rational synthesis with parity or Büchi objectives, careful or careless agents, and costs encoded in binary or unary.\nBased on joint work with Rodica Condurache (Alexandru Ioan Cuza University of Iași), Youssouf Oualhadj (University Paris-Est Créteil) and Nicolas Troquard (Free University of Bozen-Bolzano).\n\n### Thursday, May 20, 2021\n\nFlorin Crăciun (Babeș-Bolyai University)\nTBA\n\n### Thursday, May 13, 2021\n\nBruno Dinis (University of Lisbon)\nFunctional interpretations for nonstandard arithmetic\n\nAbstract: In the past few years there has been a growing interest in trying to make explicit constructive aspects of nonstandard methods with the use of functional interpretations. Functional interpretations are maps of formulas from the language of one theory into the language of another theory, in such a way that provability is preserved. These interpretations typically replace logical relations by functional relations. Functional interpretations have many uses, such as relative consistency results, conservation results and extraction of computational content from proofs. After giving a short introduction to nonstandard analysis, I will present several recent functional interpretations in the context of nonstandard arithmetic as well as some results that come from these interpretations.\n\nReferences:\n B. van den Berg, E. Briseid, P. Safarik, A functional interpretation for nonstandard arithmetic. Ann. Pure Appl. Logic 163 (2012), no. 12, 1962-1994.\n B. Dinis, J. Gaspar, Intuitionistic nonstandard bounded modified realisability and functional interpretation. Ann. Pure Appl. Logic 169 (2018), no. 5, 392-412.\n B. Dinis, J. Gaspar, Hardwiring t-truth in functional interpretations. In preparation.\n B. Dinis, É. Miquey, Realizability with stateful computations for nonstandard analysis. 29th EACSL Annual Conference on Computer Science Logic (CSL 2021), 19:1-19:23.\n F. Ferreira, J. Gaspar, Nonstandardness and the bounded functional interpretation. Ann. Pure Appl. Logic 166 (2015), no. 6, 701-712.\n S. Sanders, The unreasonable effectiveness of nonstandard analysis. J. Logic Comput. 30 (2020), no. 1, 459-524.\n\n### Thursday, April 22, 2021\n\nGabriel Istrate (West University of Timișoara)\nKernelization, Proof Complexity and Social Choice\n\nAbstract: We display an application of the notions of kernelization and data reduction from parameterized complexity to proof complexity: Specifically, we show that the existence of data reduction rules for a parameterized problem having (a). a small-length reduction chain, and (b). small-size (extended) Frege proofs certifying the soundness of reduction steps implies the existence of subexponential size (extended) Frege proofs for propositional formalizations of the given problem. We apply our result to infer the existence of subexponential Frege and extended Frege proofs for a variety of problems. Improving earlier results of Aisenberg et al. (ICALP 2015), we show that propositional formulas expressing (a stronger form of) the Kneser-Lovasz Theorem have polynomial size Frege proofs for each constant value of the parameter $k$. Previously only quasipolynomial bounds were known (and only for the ordinary Kneser-Lovasz Theorem). 
Another notable application of our framework is to impossibility results in computational social choice: we show that, for any fixed number of agents, propositional translations of the Arrow and Gibbard-Satterthwaite impossibility theorems have subexponential size Frege proofs.\nThis is joint work with Cosmin Bonchiș and Adrian Crăciun.\n\n### Thursday, April 15, 2021\n\nDorel Lucanu (Alexandru Ioan Cuza University of Iași)\n(Co)Initial (Co)Algebra Semantics in Matching Logic\n\nAbstract: Matching logic is a unifying foundational logic for defining formal programming language semantics, which adopts a minimalist design with few primitive constructs that are enough to express all properties within a variety of logical systems, including FOL, separation logic, (dependent) type systems, modal mu-logic, and more. In this presentation, we will show how the initial algebra semantics and the final (coinitial) coalgebra semantics are captured as theories in Matching Logic. The presentation will be examples-based.\nThis is joint work with Xiaohong Chen and Grigore Roșu.\n\nReferences:\n X. Chen, D. Lucanu, G. Roșu. Matching logic explained. Journal of Logical and Algebraic Methods in Programming 120 (2021), 100638.\n X. Chen, D. Lucanu, G. Roșu. Initial algebra semantics in matching logic. UIUC Technical Report (2020).\n\n### Thursday, April 8, 2021\n\nGheorghe Ştefănescu (University of Bucharest)\nAdaptive Virtual Organisms: A Compositional Model for Complex Hardware-software Binding\n\nAbstract: The relation between a structure and the function it runs is of interest in many fields, including computer science, biology (organ vs. function) and psychology (body vs. mind). Our paper addresses this question with reference to computer science recent hardware and software advances, particularly in areas as Robotics, Self-Adaptive Systems, IoT, CPS, AI-Hardware, etc.\nAt the modelling, conceptual level our main contribution is the introduction of the concept of virtual organism\" (VO), to populate the intermediary level between reconfigurable hardware agents and intelligent, adaptive software agents. A virtual organism has a structure, resembling the hardware capabilities, and it runs low-level functions, implementing the software requirements. The model is compositional in space (allowing the virtual organisms to aggregate into larger organisms) and in time (allowing the virtual organisms to get composed functionalities).\nThe virtual organisms studied here are in 2D (two dimensions) and their structures are described by 2D patterns (adding time, we get a 3D model). By reconfiguration, an organism may change its structure to another structure in the same 2D pattern. We illustrate the VO concept with a few increasingly more complex VO’s dealing with flow management or a publisher-subscriber mechanism for handling services. We implemented a simulator for a VO, collecting flow over a tree-structure (TC-VO), and the quantitative results show reconfigurable structures are better suited than fixed structures in dynamically changing environments.\nFinally, we briefly show how Agapia - a structured parallel, interactive programming language where dataflow and control flow structures can be freely mixed - may be used for getting quick implementations for VO’s simulation.\n\nReferences:\n C. I. Păduraru, G. Ştefănescu. Adaptive virtual organisms: A compositional model for complex hardware-software binding. Fundamenta Informaticae 173, no. 
2-3, 139-176, 2020.\n\n### Thursday, April 1, 2021\n\nAlexandru Dragomir (University of Bucharest)\nAn Introduction to Protocols in Dynamic Epistemic Logic\n\nAbstract: Dynamic epistemic logics are useful in reasoning about knowledge and acts of learning, seen as epistemic actions. However, not all epistemic actions are allowed to be executed in an initial epistemic model, and this is where the concept of a protocol comes in: a protocol stipulates what epistemic actions are allowed to be performed in a model. The aim of my presentation is to introduce the audience to two accounts of protocols in DEL: Hoshi's , and Wang's .\n\nReferences:\n T. Hoshi. Epistemic dynamics and protocol information. PhD thesis, Stanford, CA, USA. AAI3364501 (2009).\n Y. Wang. Epistemic Modelling and Protocol Dynamics. PhD thesis, University of Amsterdam. (2010).\n\n### Thursday, March 25, 2021\n\nMircea Marin (West University of Timișoara)\nRegular matching problems for infinite trees\n\nAbstract: We study the matching problem $\\exists\\sigma:\\sigma(L)\\subseteq R$?\" where $L,R$ are regular tree languages over finite ranked alphabets $X$ and $\\Sigma$ respectively, and $\\sigma$ is a substitution such that $\\sigma(x)$ is a set of trees in $T(\\Sigma\\cup H)\\setminus H$ for all $x\\in X$. Here, $H$ denotes a set of holes which are used to define a concatenation of trees. Conway studied this problem in the special case for languages of finite words in his classical textbook Regular algebra and finite machines\" and showed that if $L$ and $R$ are regular, then the problem $\\exists \\sigma:\\forall x\\in X:\\sigma(x)\\neq\\emptyset\\wedge\\sigma(L)\\subseteq R$?\" is decidable. Moreover, there are only finitely many maximal solutions, which are regular and effectively computable. We extend Conway's results when $L,R$ are regular languages of finite and infinite trees, and language substitution is applied inside-out. We show that if $L\\subseteq T(X)$ and $R\\subseteq T(\\Sigma)$ are regular tree languages then the problem $\\exists\\sigma\\forall x\\in X:\\sigma(x)\\neq\\emptyset\\wedge\\sigma_{io}(L)\\subseteq R$?\" is decidable. Moreover, there are only finitely many maximal solutions, which are regular and effectively computable. The corresponding question for the outside-in extension $\\sigma_{oi}$ remains open, even in the restricted setting of finite trees. Our main result is the decidability of $\\exists\\sigma:\\sigma_{io}(L)\\subseteq R$?\" if $R$ is regular and $L$ belongs to a class of tree languages closed under intersection with regular sets. Such a special case pops up if $L$ is context-free.\nThis is joint work with Carlos Camino, Volker Diekert, Besik Dundua and Géraud Sénizergues.\n\nReferences:\n C. Camino, V. Diekert, B. Dundua, M. Marin, G. Sénizergues, Regular matching problems for infinite trees, arXiv:2004.09926 [cs.FL], 2021.\n\n### Thursday, March 18, 2021\n\nAndrei Sipoș (University of Bucharest)\nA proof mining case study on the unit interval\n\nAbstract: In 1991, Borwein and Borwein proved the following: if $L>0$, $f:[0,1] \\to [0,1]$ is $L$-Lipschitz, $(x_n)$, $(t_n) \\subseteq [0,1]$ are such that for all $n$, $x_{n+1}=(1-t_n)x_n +t_nf(x_n)$ and there is a $\\delta>0$ such that for all $n$, $t_n \\leq \\frac{2-\\delta}{L+1}$, then the sequence $(x_n)$ converges. The relevant fact here is that the main argument used in their proof is of a kind that hasn't been analyzed yet from the point of view of proof mining, and thus it may serve as an illustrative new case study. 
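As a quick numerical illustration of the convergence guaranteed by this theorem (with an arbitrarily chosen 2-Lipschitz map and a constant step sequence satisfying the hypothesis — not taken from the paper):

```python
# f(x) = |1 - 2x| is 2-Lipschitz and maps [0, 1] to [0, 1];
# the constant t_n below satisfies t_n <= (2 - delta) / (L + 1) with delta = 0.1.
L, delta = 2.0, 0.1
t = (2 - delta) / (L + 1)
f = lambda x: abs(1 - 2 * x)
x = 0.9
for n in range(40):
    x = (1 - t) * x + t * f(x)
print(x)   # the iterates settle near 1/3, a fixed point of f
```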
### Thursday, March 11, 2021

Mihai Prunescu (University of Bucharest)
Exponential Diophantine equations over ${\mathbb Q}$

Abstract: In a previous exposition we have seen that solvability over ${\mathbb Q}$ is undecidable for systems of exponential Diophantine equations. We now show that the solvability of individual exponential Diophantine equations is also undecidable, and that this happens as well for some narrower families of exponential Diophantine equations.

References:
[1] M. Prunescu, The exponential Diophantine problem for ${\mathbb Q}$. The Journal of Symbolic Logic, Volume 85, Issue 2, 671-672, 2020.

### Thursday, March 4, 2021

Unwinding of proofs

Abstract: The unwinding of proofs program dates back to Kreisel in the fifties and rests on the following broad question: "What more do we know if we have proved a theorem by restricted means than if we merely know that it is true?" This research program has since been dubbed proof mining; it has been greatly developed during the last two decades and has emerged as a new form of applied proof theory [1,2]. Through the use of proof-theoretic tools, the proof mining program is concerned with unveiling hidden finitary and combinatorial content from proofs that use infinitary noneffective principles. In this talk, we set out to give a brief introduction to the proof mining program, focusing on the following points:

• functional interpretations, in an introductory way;
• the bounded functional interpretation [3,4];
• a concrete translation example: the metric projection argument.

We finish with a brief discussion of some recent results [5,6].

References:
[1] U. Kohlenbach. Applied Proof Theory: Proof Interpretations and their Use in Mathematics. Springer, 2008.
[2] U. Kohlenbach. Proof-theoretic methods in nonlinear analysis. In M. Viana, B. Sirakov, P. Ney de Souza, editors, Proceedings of the International Congress of Mathematicians - Rio de Janeiro 2018, Vol. II: Invited lectures, pages 61-82. World Sci. Publ., 2019.
[3] F. Ferreira and P. Oliva. Bounded functional interpretation. Annals of Pure and Applied Logic, 135:73-112, 2005.
[4] F. Ferreira. Injecting uniformities into Peano arithmetic. Annals of Pure and Applied Logic, 157:122-129, 2009.
[5] B. Dinis and P. Pinto. Effective metastability for a method of alternating resolvents. arXiv:2101.12675 [math.FA], 2021.
[6] U. Kohlenbach and P. Pinto. Quantitative translations for viscosity approximation methods in hyperbolic spaces. arXiv:2102.03981 [math.FA], 2021.

### Thursday, February 25, 2021

Horațiu Cheval (University of Bucharest)
Formalizing Gödel's System $T$ in Lean

Abstract: In 1958, Gödel introduced his functional interpretation as a method of reducing the consistency of first-order arithmetic to that of a quantifier-free system of primitive recursive functionals of higher type. His work has since enabled other advances in proof theory, notably the program of proof mining. We give a brief overview of Gödel's system $T$ and then explore a formalization thereof as a deep embedding in the Lean proof assistant.

References:
[1] mathlib Community, The Lean Mathematical Library, 2019.
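To illustrate what a deep embedding can look like, here is a minimal sketch of ours in Lean 4 syntax, using a combinator-style presentation of $T$; the names (`Ty`, `Tm`, `Ty.denote`) are purely illustrative, and the development discussed in the talk may differ, e.g. in its treatment of variables or in targeting Lean 3.

```lean
-- Object-level types of system T: the base type of naturals and function types.
inductive Ty where
  | nat   : Ty
  | arrow : Ty → Ty → Ty

-- Closed terms, indexed by their object-level type, so that only well-typed
-- terms can be formed (an "intrinsically typed" deep embedding).
inductive Tm : Ty → Type where
  | zero : Tm Ty.nat
  | succ : Tm (Ty.arrow Ty.nat Ty.nat)
  | K {a b : Ty}   : Tm (Ty.arrow a (Ty.arrow b a))
  | S {a b c : Ty} : Tm (Ty.arrow (Ty.arrow a (Ty.arrow b c))
                          (Ty.arrow (Ty.arrow a b) (Ty.arrow a c)))
  | R {a : Ty}     : Tm (Ty.arrow a (Ty.arrow (Ty.arrow Ty.nat (Ty.arrow a a))
                          (Ty.arrow Ty.nat a)))        -- primitive recursion
  | app {a b : Ty} : Tm (Ty.arrow a b) → Tm a → Tm b

-- Interpreting object-level types as Lean types yields the usual full
-- set-theoretic model, in which terms can then be given a denotation.
def Ty.denote : Ty → Type
  | Ty.nat       => Nat
  | Ty.arrow a b => Ty.denote a → Ty.denote b
```

From here one can, for instance, define a reduction relation on `Tm` and prove its properties inside Lean, or interpret terms into `Ty.denote` and reason about them semantically.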
### Thursday, February 18, 2021

Horațiu Cheval (University of Bucharest)
An introduction to Lean

Abstract: Lean is a proof assistant and programming language developed by Leonardo de Moura at Microsoft Research starting from 2013, based on dependent types and the Calculus of Inductive Constructions. It offers typical functional programming features, as well as a rich and extensible tactic language for writing proofs. The standard library, mathlib, implements an ever-increasing number of definitions and theorems, from basic facts about real numbers to more advanced topics like algebraic geometry or category theory, providing a structured framework for formalizing mathematics. We present an introduction to Lean's underlying theory and its ways of writing proofs, and then look at how concrete mathematics is done in mathlib.

References:
[1] mathlib Community, The Lean Mathematical Library, 2019.
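For a flavour of what such proofs look like, here is a toy snippet of ours in Lean 4 syntax (the mathlib of reference [1] is built on Lean 3, where the surface syntax differs slightly):

```lean
-- A term-mode proof: the anonymous constructor ⟨_, _⟩ builds a proof of a conjunction.
example (p q : Prop) (hp : p) (hq : q) : p ∧ q :=
  ⟨hp, hq⟩

-- The same kind of statement proved in tactic mode.
example (p q : Prop) (hp : p) (hq : q) : q ∧ p := by
  constructor
  · exact hq
  · exact hp

-- A definitional equality on natural numbers, closed by reflexivity.
example (n : Nat) : n + 0 = n := rfl
```

In mathlib-based developments one would additionally draw on library lemmas and automation tactics such as `simp` or `ring` rather than spelling every proof out by hand.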
### Thursday, December 17, 2020

Formal verification of seL4

Abstract: This presentation introduces the seL4 system, a microkernel first designed in Haskell, whose C implementation now has a full formal proof of correctness. The focus is on the methods used to abstract the C implementation into Isabelle/HOL theorems: the syntactic translation, the memory model, and the abstraction levels added to refine the representation of the system. This involves the use of additional tools, such as a C-to-Isabelle parser and AutoCorres, which will be briefly described.

References:
[1] G. Heiser, The seL4 Microkernel. An introduction. White paper, The seL4 Foundation, 2020.

AutoCorres tutorial

Abstract: AutoCorres is a tool for abstracting C code into Isabelle/HOL (Isabelle instantiated with higher-order logic). This presentation introduces some basic proof rules, used in most Isabelle theorems, that are needed to follow some introductory exercises in using the AutoCorres tool on sample C programs. The case studies cover the following topics: proving the correctness of programs with loops, and proving statements about programs that manipulate heap memory.

References:
[1] D. Greenaway, Automated proof-producing abstraction of C code, PhD Thesis, University of New South Wales, Sydney, 2014.

### Thursday, December 3, 2020

Andrei Sipoș (University of Bucharest / IMAR)
A quantitative multi-parameter mean ergodic theorem

Abstract: In [1], Avigad, Gerhardy and Towsner proof-theoretically analyzed the classical textbook proof due to Riesz of the mean ergodic theorem for Hilbert spaces in order to extract uniform rates of metastability. In [2], Kohlenbach and Leuștean extracted rates from a simplified argument of Garrett Birkhoff that applies to uniformly convex Banach spaces. What we do here is to analyze another proof of Riesz, one inspired by Birkhoff's, which delineates more clearly the role played by uniform convexity. The corresponding argument has the advantage that it can be used to prove a more general mean ergodic theorem for a finite family of commuting linear contracting operators.

References:
[1] J. Avigad, P. Gerhardy, H. Towsner, Local stability of ergodic averages. Trans. Amer. Math. Soc. 362, no. 1, 261-288, 2010.
[2] U. Kohlenbach, L. Leuștean, A quantitative mean ergodic theorem for uniformly convex Banach spaces. Ergodic Theory Dynam. Systems 29, 1907-1915, 2009.
[3] A. Sipoș, A quantitative multi-parameter mean ergodic theorem, arXiv:2008.03932 [math.DS], 2020.

### Thursday, November 5, 2020

Andrei Sipoș (University of Bucharest / IMAR)