URL: stringlengths 15 to 1.68k
text_list: sequencelengths 1 to 199
image_list: sequencelengths 1 to 199
metadata: stringlengths 1.19k to 3.08k
https://optimization-online.org/tag/graph-theory/
[ "## Source Detection on Graphs\n\nSpreading processes on networks (graphs) have become ubiquitous in modern society with prominent examples such as infections, rumors, excitations, contaminations, or disturbances. Finding the source of such processes based on observations is important and difficult. We abstract the problem mathematically as an optimization problem on graphs. For the deterministic setting we make connections to the … Read more\n\n## Solving the n_1 × n_2 × n_3 Points Problem for n_3 < 6\n\nIn this paper, we show enhanced upper bounds of the nontrivial n_1 × n_2 × n_3 points problem for every n_1\n\n## A New Bilevel Optimization Approach for Computing Ramsey Numbers\n\nIn this article we address the problem of finding lower bounds for small Ramsey numbers \\$R(m,n)\\$ using circulant graphs. Our constructive approach is based on finding feasible colorings of circulant graphs using Integer Programming (IP) techniques. First we show how to model the problem as a Stackelberg game and, using the tools of bilevel optimization, … Read more\n\n## Solving the bandwidth coloring problem applying constraint and integer programming techniques\n\nIn this paper, constraint and integer programming formulations are applied to solve Bandwidth Coloring Problem (BCP) and Bandwidth Multicoloring Problem (BMCP). The problems are modeled using distance geometry (DG) approaches, which are then used to construct the constraint programming formulation. The integer programming formulation is based on a previous formulation for the related Minimum Span … Read more\n\n## Wavelength Assignment in Multi-Fiber WDM Networks by Generalized Edge Coloring\n\nIn this paper, we study wavelength assignment problems in multi-fiber WDM networks. We focus on the special case that all lightpaths have at most two links. This in particular holds in case the network topology is a star. As the links incident to a specific node in a meshed topology form a star subnetwork, results … Read more" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.93005115,"math_prob":0.9382836,"size":439,"snap":"2023-40-2023-50","text_gpt3_token_len":78,"char_repetition_ratio":0.11494253,"word_repetition_ratio":0.0,"special_character_ratio":0.16173121,"punctuation_ratio":0.10294118,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9758899,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-01T23:10:33Z\",\"WARC-Record-ID\":\"<urn:uuid:db368ec7-5c3c-4378-8b52-1ba582a0b32e>\",\"Content-Length\":\"92600\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5b46f497-653c-4427-ba6a-cf42cbd49728>\",\"WARC-Concurrent-To\":\"<urn:uuid:077a49d6-b6e4-4e54-b7fc-93aa20b22399>\",\"WARC-IP-Address\":\"128.104.153.102\",\"WARC-Target-URI\":\"https://optimization-online.org/tag/graph-theory/\",\"WARC-Payload-Digest\":\"sha1:AC6DMLNARFLTR5E4ACFRFACFH23BTMYR\",\"WARC-Block-Digest\":\"sha1:NDSQPAPHTPFG26553P6FDKASI6OITPX4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510941.58_warc_CC-MAIN-20231001205332-20231001235332-00364.warc.gz\"}"}
https://simondesenlisblogs.org/2019/10/15/year-5-dyson-methods-in-maths/
[ "# Year 5 Dyson – Methods in Maths\n\nToday in maths we were given a set of calculations and had to sort them by the methods we would use to solve them. For instance, a calculation such as 176-40 could be done mentally whereas a calculation such as 815-278 would require a formal written method. We discussed our ideas in small groups. How would you solve these calculations?", null, "", null, "", null, "" ]
[ null, "https://simondesenlisblogs.org/wp-content/uploads/2019/10/IMG_7389.jpg", null, "https://simondesenlisblogs.org/wp-content/uploads/2019/10/IMG_7391.jpg", null, "https://simondesenlisblogs.org/wp-content/uploads/2019/10/IMG_7392.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9878139,"math_prob":0.97595716,"size":369,"snap":"2021-04-2021-17","text_gpt3_token_len":81,"char_repetition_ratio":0.15068494,"word_repetition_ratio":0.0,"special_character_ratio":0.23306233,"punctuation_ratio":0.06849315,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9711855,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,4,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-16T02:23:49Z\",\"WARC-Record-ID\":\"<urn:uuid:e273ab5e-ad1b-4ca0-ace5-771d9fea75c1>\",\"Content-Length\":\"31361\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7cef1ddc-a85f-42be-96c1-09a718d9ab46>\",\"WARC-Concurrent-To\":\"<urn:uuid:9caa8ab4-aeca-4db4-ac6b-39b05ff4b632>\",\"WARC-IP-Address\":\"217.160.0.163\",\"WARC-Target-URI\":\"https://simondesenlisblogs.org/2019/10/15/year-5-dyson-methods-in-maths/\",\"WARC-Payload-Digest\":\"sha1:DOPGFVDS2ZWHYG57DP243IDTD6L7IDCT\",\"WARC-Block-Digest\":\"sha1:R4RTDAMJXTLNCAY3FIROHCV6ZKYAVMVY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703499999.6_warc_CC-MAIN-20210116014637-20210116044637-00276.warc.gz\"}"}
http://www.ncsa.illinois.edu/People/kindr/emtc/quaternions/
[ "# Quaternion C++ Class\n\nHere we have provided a C++ class for quaternionic algebra. You need these two files: quaternion.h & quaternion.c++. To be able to work with Euler angles, you also need to download Ken Shoemake's QuatTypes.h, EulerAngles.h, EulerAngles.c and define SHOEMAKE in your makefile. Here is an example of how to use it all together: (test.c++) and a makefile. For more on quaternions, read Prof. George Francis's introductory lecture.\n\n# Introduction to Quaternionic Algebra\n\nQuaternions are elements of the 4-dimensional space $\mathbb{H}$ formed by the real axis and 3 imaginary orthogonal axes $i$, $j$, and $k$ that obey Hamilton's rule $i^2 = j^2 = k^2 = ijk = -1$. They can be written in standard quaternionic form as $q = w + xi + yj + zk$ where $w, x, y, z \in \mathbb{R}$, or as a 4D vector $q = (w, \mathbf{v})$ where $w$ is called the scalar part and $\mathbf{v} = (x, y, z)$ is called the vector part. Quaternions possess the following properties:\n\nAddition: for $p, q, r \in \mathbb{H}$\n\n• closure: $p + q \in \mathbb{H}$\n• commutativity: $p + q = q + p$\n• associativity: $(p + q) + r = p + (q + r)$\n• identity: there exists $0 = (0, \mathbf{0})$ such that $q + 0 = q$\n• inverse: there exists $-q$ such that $q + (-q) = 0$\n• sum: $p + q = (w_p + w_q, \mathbf{v}_p + \mathbf{v}_q)$\n• difference: $p - q = (w_p - w_q, \mathbf{v}_p - \mathbf{v}_q)$\n\nMultiplication: for $p, q, r \in \mathbb{H}$ and $s \in \mathbb{R}$\n\n• closure: $pq \in \mathbb{H}$\n• non-commutativity: $pq \neq qp$\n• associativity: $(pq)r = p(qr)$\n• distributivity: $p(q + r) = pq + pr$ and $(p + q)r = pr + qr$\n• identity: there exists $1 = (1, \mathbf{0})$ such that $q1 = 1q = q$\n• inverse: if $q \neq 0$, then there exists $q^{-1}$ such that $qq^{-1} = q^{-1}q = 1$\n• product: $pq = (w_p w_q - \mathbf{v}_p \cdot \mathbf{v}_q, \; w_p \mathbf{v}_q + w_q \mathbf{v}_p + \mathbf{v}_p \times \mathbf{v}_q)$ where $\cdot$ denotes the vector dot product and $\times$ denotes the vector cross product\n• no zero divisors: if $pq = 0$, then either $p = 0$ or $q = 0$\n• division: division is defined via the inverse; from $pq = r$ it follows that $p = rq^{-1}$ and $q = p^{-1}r$\n• scale: $sq = (sw, s\mathbf{v})$\n\n$|q| = \sqrt{w^2 + x^2 + y^2 + z^2}$ is the magnitude of $q$ and $N(q) = |q|^2$ is its norm. If $|q| = 1$, the quaternion $q$ is referred to as a unit quaternion. For $q \neq 0$, $q/|q|$ is a unit quaternion. The inverse of $q$ is defined as $q^{-1} = q^*/|q|^2$ and the conjugate of $q$ is defined as $q^* = (w, -\mathbf{v})$. For any unit quaternion $q$ we have $q^{-1} = q^*$. Quaternions whose real part is zero are called pure quaternions.\n\nRotation of a 3D vector $\mathbf{v}$ by a unit quaternion $q$ is defined as $\mathbf{v}' = qpq^{-1}$ where $p = (0, \mathbf{v})$ is a pure quaternion built from $\mathbf{v}$ by adding a zero real part. Sequences of rotations can be conveniently represented as the quaternionic product. For example, if $\mathbf{v}$ is rotated by $q_1$ followed by $q_2$, the result is the same as $\mathbf{v}$ rotated by $q_2 q_1$.\n\nDocument created by Angela Bennett and Volodymyr Kindratenko" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.936174,"math_prob":0.9338712,"size":1477,"snap":"2021-31-2021-39","text_gpt3_token_len":345,"char_repetition_ratio":0.14799729,"word_repetition_ratio":0.0,"special_character_ratio":0.21936357,"punctuation_ratio":0.14915255,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9973268,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-19T11:14:47Z\",\"WARC-Record-ID\":\"<urn:uuid:c9314477-21df-49f0-b69c-61506ff96293>\",\"Content-Length\":\"47823\",\"Content-Type\":\"application/http; 
msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f3d6ba9e-c185-4569-a934-1d0d10ec28d0>\",\"WARC-Concurrent-To\":\"<urn:uuid:0772ddaa-fdd4-487b-a69b-36747534dd2b>\",\"WARC-IP-Address\":\"141.142.192.147\",\"WARC-Target-URI\":\"http://www.ncsa.illinois.edu/People/kindr/emtc/quaternions/\",\"WARC-Payload-Digest\":\"sha1:VQ5JVUIG2AIMY22XG2QLR4GGF72UA7Y6\",\"WARC-Block-Digest\":\"sha1:PTTAMSOEIZDBATFSEV3JKPL5P3CGOSYO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780056856.4_warc_CC-MAIN-20210919095911-20210919125911-00554.warc.gz\"}"}
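The rotation rule described in the scraped NCSA page above (v' = q p q⁻¹, with p the pure quaternion built from v) can be illustrated with a short runnable sketch. The function names here are my own, not those of the page's C++ class, and quaternions are plain (w, x, y, z) tuples:

```python
import math

def quat_mult(a, b):
    # Hamilton product of quaternions represented as (w, x, y, z) tuples
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def quat_conj(q):
    # Conjugate: negate the vector part
    w, x, y, z = q
    return (w, -x, -y, -z)

def rotate(v, q):
    # Rotate 3D vector v by unit quaternion q: v' = q * (0, v) * conj(q)
    # (for a unit quaternion the inverse equals the conjugate)
    p = (0.0,) + tuple(v)
    _, x, y, z = quat_mult(quat_mult(q, p), quat_conj(q))
    return (x, y, z)

# 90-degree rotation about the z-axis: q = (cos(t/2), 0, 0, sin(t/2))
t = math.pi / 2
q = (math.cos(t / 2), 0.0, 0.0, math.sin(t / 2))
print(rotate((1.0, 0.0, 0.0), q))  # x-axis maps (numerically) to the y-axis
```

Chaining two calls to `rotate` gives the same result as a single rotation by the product `quat_mult(q2, q1)`, matching the composition rule on the page.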
https://www.jstatsoft.org/article/view/v081i10
[ "# Half-Normal Plots and Overdispersed Models in R: The hnp Package\n\nRafael A Moral, John Hinde, Clarice G B Demétrio\n\n## Abstract\n\nCount and proportion data may present overdispersion, i.e., greater variability than expected by the Poisson and binomial models, respectively. Different extended generalized linear models that allow for overdispersion may be used to analyze this type of data, such as models that use a generalized variance function, random-effects models, zero-inflated models and compound distribution models. Assessing goodness-of-fit and verifying assumptions of these models is not an easy task and the use of half-normal plots with a simulated envelope is a possible solution for this problem. These plots are a useful indicator of goodness-of-fit that may be used with any generalized linear model and extensions. For GLIM users, functions that generated these plots were widely used, however, in the open-source software R, these functions were not yet available on the Comprehensive R Archive Network (CRAN). We describe a new package in R, hnp, that may be used to generate the half-normal plot with a simulated envelope for residuals from different types of models. The function hnp() can be used together with a range of different model fitting packages in R that extend the basic generalized linear model fitting in glm() and is written so that it is relatively easy to extend it to new model classes and different diagnostics. We illustrate its use on a range of examples, including continuous and discrete responses, and show how it can be used to inform model selection and diagnose overdispersion." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9155916,"math_prob":0.8699241,"size":1593,"snap":"2021-43-2021-49","text_gpt3_token_len":309,"char_repetition_ratio":0.120830715,"word_repetition_ratio":0.0,"special_character_ratio":0.1814187,"punctuation_ratio":0.0877193,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96984845,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-18T10:45:47Z\",\"WARC-Record-ID\":\"<urn:uuid:68aa8307-3542-4ade-b638-4c9133c32bb9>\",\"Content-Length\":\"41251\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f74e1af2-7a43-4a66-8d90-1dc41f01781c>\",\"WARC-Concurrent-To\":\"<urn:uuid:7bd2791c-7ca5-4a74-84e5-18116112db05>\",\"WARC-IP-Address\":\"138.232.16.156\",\"WARC-Target-URI\":\"https://www.jstatsoft.org/article/view/v081i10\",\"WARC-Payload-Digest\":\"sha1:XQK5PNSNU3V724M7GVTUVQBGWQP64BFD\",\"WARC-Block-Digest\":\"sha1:Q4LEAOOUIQLWTYFFOWRHO4XCQKA5XPVX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585201.94_warc_CC-MAIN-20211018093606-20211018123606-00540.warc.gz\"}"}
https://www.r-bloggers.com/2010/11/sweave-tutorial-2-batch-individual-personality-reports-using-r-sweave-and-latex/
[ "[This article was first published on Jeromy Anglim's Blog: Psychology and Statistics, and kindly contributed to R-bloggers].\n\nThis post documents an example of using Sweave to generate individualised personality reports based on responses to a personality test. Each report provides information on both the responses of the general sample and the responses of the specific respondent. All source code is provided, and selected aspects are discussed, including makefiles, use of \\Sexpr, figures, and LaTeX tables using Sweave.\n\n### Overview\n\nAll source code is available on GitHub:\n\nThree examples of compiled PDF reports can be viewed as follows: ID1, ID2, and ID4.\n\nThe resulting report is a simple proof-of-concept example.\n\n### makefile\n\noutputDir = .output\nbackupDir = .backup\n\ntest:\n\t-mkdir $(outputDir)\n\tRscript --verbose run1test.R\n\ntest5:\n\t-mkdir $(outputDir)\n\tRscript --verbose run5test.R\n\nrunall:\n\t-mkdir $(outputDir)\n\tRscript --verbose runAll.R\n\nclean:\n\t-rm $(outputDir)/*\n\nbackup:\n\t-mkdir $(backupDir)\n\tcp $(outputDir)/Report_Template_ID*.pdf --target-directory=$(backupDir)\n\n• outputDir stores the name of the folder used to store derived files (e.g., tex files, images, and compiled document PDFs)\n• backupDir stores the name of the folder where document PDFs are to be stored\n• test: is the default goal. Running make in the project directory will run run1test.R, which will build one report.\n• the --verbose option shows the progress of R when run as a script. It's useful for seeing progress and debugging.\n• test5: compiles five reports.\n• runall: compiles all reports. Further information on each of the Run... .R files can be obtained by inspecting these files. In general they source Run.R and specify which ids to run reports on.\n
• clean: removes all the files from the output directory (i.e., all the derived files)\n• backup: copies reports to the backup folder; i.e., it separates the finished documents from all the other derived files.\n• To run test5, runall, etc., type make test5 or make runall, etc.\n\n### main.R\n\nmain.R loads external functions and packages, imports data, imports metadata and processes the data.\n\n# Import Data\nipip <- read.delim(\"data/ipip.tsv\")\nipipmeta <- read.delim(\"meta/ipipmeta.tsv\")\nipipscales <- read.delim(\"meta/ipipscales.tsv\")\n\n• When importing the data, I have adopted the useful convention (which I observed from John Myles White's ProjectTemplate package) of naming objects and data file names the same. The file extension also clearly indicates the file format (i.e., tab-separated values).\n• I often have separate data and meta folders. Importing metadata often makes for more manageable code than hard-coding it into the R script.\n\nTest scores are calculated using the function score.items.\n\nipipstats <- psych::score.items(ipipmeta[, ipipscales$scale],\nipip[, ipipmeta[, \"variable\"]],\nmin = 1, max = 5)\n\n• The psych package has a number of useful functions for psychological research. score.items is particularly good. It enables the creation of means and totals for multiple scales. It handles item reversal. 
It also returns information related to the reliability of the scales.\n\n### Run.R\n\nsource(\"main.R\", echo = TRUE)\nid <- NULL\nexportReport <- function(x) {\nid <<- x\nfileStem <- \"Report_Template\"\nfile.copy(\"Report_Template.Rnw\",\npaste(\".output/\", fileStem, \"_ID\", id, \".Rnw\", sep =\"\"),\noverwrite = TRUE)\nfile.copy(\"Sweave.sty\", \".output/Sweave.sty\", overwrite = TRUE)\nsetwd(\".output\")\nSweave(paste(fileStem, \"_ID\", id, \".Rnw\", sep =\"\"))\ntools::texi2dvi(paste(fileStem, \"_ID\", id, \".tex\", sep =\"\"), pdf = TRUE)\nsetwd(\"..\")\n}\n\n• The above code provides the function to run Sweave on each individualised report\n• the code is a little bit messy, contains a few hacks, and is not especially robust.\n• the exportReport function takes an id value as an argument x. Note the use of the alternative assignment operator. (See ?assignOps)\n• The code is designed to keep derived files away from source files by copying files into the .output folder and even changing the working directory to that directory.\n• The code creates an individualised copy of the Rnw file; Runs Sweave on the report to produce a tex file, and then runs texi2dvi with pdf=TRUE to produce the final pdf.\n\n### Report_Template.Rnw\n\n• The Rnw file contains interspersed chunks of LaTeX and R code.\n• Because the Rnw file is called from within R, all the R objects and data processing code does not need to be called at the start of the Rnw file. This approach is one way of reducing the time it takes to run a set of Sweave reports all based on a common data source.\n• The \\Sexpr{} command is used to incorporate in-line text. (... sample of \\Sexpr{nrow(ipip)} students ...). 
In the example above, it prints the actual number of cases in the ipip data.frame (i.e., the sample size).\n\n#### Incorporating a figure using Sweave\n\n\\begin{figure}\n<<plot_scale_distributions, fig=true>>=\nplotScale <- function(ipipscale) {\nggplot(ipip, aes_string(x=ipipscale[\"scale\"])) +\nscale_x_continuous(limits=c(1, 5),\nname = ipipscale[\"name\"]) +\nscale_y_continuous(name = \"\", labels =\"\", breaks = 0) +\ngeom_density(fill=\"green\", alpha = .5) +\ngeom_vline(xintercept = ipip[ipip$id %in% id, ipipscale[\"scale\"]], size=1)\n}\nscaleplots <- apply(ipipscales, 1, function(X) plotScale(X))\narrange(scaleplots[[1]], scaleplots[[2]], scaleplots[[3]], scaleplots[[4]], scaleplots[[5]], ncol=3)\n@\n\\caption{Figures show distributions of scores of each personality factor in the norm sample. Higher scores mean greater levels of the factor. The black vertical line indicates your score.}\n\\end{figure}\n\n• The first R code chunk produces a figure using ggplot2.\n• The code above takes a while to run (perhaps around 10 seconds on my machine), but the resulting plot is more attractive than what I could easily get with base graphics.\n• <<plot_scale_distributions, fig=true>>= indicates the start of an R code chunk. fig=true lets Sweave know that it has to produce code to include a figure.\n• The R code chunk is substituted with \\includegraphics{Report_Template_ID10-plot_scale_distributions} in the tex file and the pdf and eps figures are created. Thus, if you want a float with captions and labels, you have to add them around the R code chunk.\n• the plotScale function is used to generate a ggplot2 figure of the distribution of scores on each personality scale along with a marking of the respondent's score on each scale.\n• The arrange function is used to lay out multiple ggplot2 figures on a single plot. 
The source code is in lib/vp.layout.R and was taken from a [post by Stephen Turner](http://gettinggeneticsdone.blogspot.com/2010/03/arrange-multiple-ggplot2-plots-in-same.html).\n\n#### Preparing a formatted table in R for LaTeX\n\n<<prepare_table>>=\nipiptable <- list()\nipiptable$colnames <- c(\"item\", \"scaleF\", \"text\", \"meanF\",\n\"sdF\", \"is1F\", \"is2F\", \"is3F\", \"is4F\", \"is5F\")\nipiptable$cells <- ipipsummary[, ipiptable$colnames]\nipiptable$cells$item <- paste(ipiptable$cells$item, \".\", sep=\"\")\n\n# assign actual responses to table\nipiptable$cells[, c(\"is1F\", \"is2F\", \"is3F\", \"is4F\", \"is5F\")] <- sapply(1:5,\nfunction(X) ifelse(as.numeric(ipip[ipip$id %in% id, ipipmeta$variable]) == X,\npaste(\"*\", ipiptable$cells[[paste(\"is\", X, \"F\", sep =\"\")]], sep =\"\"),\nipiptable$cells[[paste(\"is\", X, \"F\", sep =\"\")]]))\n\nipiptable$cellsF <- as.matrix(ipiptable$cells)\nipiptable$cellsF <- ipiptable$cellsF[order(ipiptable$cellsF[, \"scaleF\"]), ]\n\nipiptable$row1 <- c(\"\", \"Scale\", \"Item Text\", \"M\", \"SD\", \"VI\\\\%\", \"MI\\\\%\", \"N\\\\%\", \"MA\\\\%\", \"VA\\\\%\")\nipiptable$table <- rbind(ipiptable$row1, ipiptable$cellsF)\nipiptable$tex <- paste(apply(ipiptable$table, 1, function(X) paste(X, collapse = \" & \")),\n\"\\\\\\\\\")\nfor(i in c(41, 31, 21, 11, 1)) {\nipiptable$tex <- append(ipiptable$tex, \"\\\\midrule\", after=i)\n}\nipiptable$tex1 <- ipiptable$tex[c(1:34)]\nipiptable$tex2 <- ipiptable$tex[c(1, 35:56)]\n@\n\n• I often find it useful to split R code chunks for table preparation and table presentation. In general this allows any text that appears before the table to include \\Sexpr{} commands incorporating figures from the analyses which generate the table. In the present case, it was useful because the table was split over two pages.\n• The code shows some of the general logic I use for customised table creation. 
In hindsight I could probably refactor it into a function so that I don't have to always type ipiptable which would make things a little more concise\n• The general process of table creation involves: (a) extracting information on cells with cells often grouped into types which will receive common formatting treatment (b) formatting cells (e.g., rounding, decimals, and so on) (c) assembling the cells typically using a combination of the functions rbind and cbind(d) Inserting tex column and end of row separators with something like: paste(apply(x, 1, function(X) paste(X, collapse = \" & \")), \"\\\\\\\\\")where x is the matrix of table cells.\n\nipiptable$caption <- \"Response options were 1 = (V)ery (I)naccurate, 2 = (M)oderately (I)naccurate, 3 = (N)either Inaccurate nor Accurate, 4 = (M)oderately (A)ccurate 5 = (V)ery (A)ccurate. Thus, VI\\\\\\\\% indicates the percentage of the norm sample giving a response indicating that the item is a Very Inaccurate description of themselves. Your response is indicated with an asterisk (*).\" • This text was used in both tables. Thus, this text can then be called using \\Sexpr{ipiptable[[\"caption\"]]}. This follows the DRY principle (Don't Repeat Yourself). Thus, if the caption needs to be modified, it only needs to be modified in one place. #### Incorporating the tex formatted table using R Code chunks \\begin{table} \\begin{adjustwidth}{-1cm}{-1cm} \\caption{Table of results for (A)greeableness, (C)onscientiousness and (E)motional (S)tability items. \\Sexpr{ipiptable[[\"caption\"]]}} \\begin{center} \\begin{tabular}{rrp{4cm}rrrrrrr} \\toprule <<table_part1, results=tex>>= cat(ipiptable$tex1, sep=\"\\n\")\n@\n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n• The tables are then incorporated into the tex file.\n• The R code only generated some of the required tex for the table. 
Thus all the other desired elements such as the table environment and captions are written either side of the R code chunk.\n• the R code chunk uses the option results=tex in order to enter the output from the cat function verbatim into the resulting tex file.\n• cat(ipiptable\\$tex1, sep=\"\\n\") includes a vector of tex. With the newline separator simply making the resulting tex more readable." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.74427336,"math_prob":0.8113368,"size":10960,"snap":"2023-40-2023-50","text_gpt3_token_len":2871,"char_repetition_ratio":0.11391018,"word_repetition_ratio":0.028405422,"special_character_ratio":0.25072992,"punctuation_ratio":0.15592204,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9835053,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-01T02:15:08Z\",\"WARC-Record-ID\":\"<urn:uuid:6d82e0a4-b50c-46c5-82dd-daa67679a744>\",\"Content-Length\":\"108382\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:adb893c0-22db-40db-b23f-2be2dff591df>\",\"WARC-Concurrent-To\":\"<urn:uuid:87282ff7-928f-4efd-ae50-0ce50b956330>\",\"WARC-IP-Address\":\"172.67.211.236\",\"WARC-Target-URI\":\"https://www.r-bloggers.com/2010/11/sweave-tutorial-2-batch-individual-personality-reports-using-r-sweave-and-latex/\",\"WARC-Payload-Digest\":\"sha1:XF32TWIYZUZ7ISXH3XOHFXFZVK35ZCQR\",\"WARC-Block-Digest\":\"sha1:UASWKXY6EIUO2VYXWYH3XUH4KXAPV7LA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510734.55_warc_CC-MAIN-20231001005750-20231001035750-00429.warc.gz\"}"}
https://9lib.net/document/yev37ek7-sensitivity-analysis-hcm-delay-model-factorial-design-method.html
[ "# A sensitivity analysis of the HCM 2000 delay model with the factorial design method

## Full text

© TÜBİTAK

### A Sensitivity Analysis of the HCM 2000 Delay Model with the Factorial Design Method

Ali Payıdar AKGÜNGÖR, Osman YILDIZ, Abdulmuttalip DEMİREL
Kırıkkale University, Faculty of Engineering, Department of Civil Engineering, 71451 Kırıkkale, TURKEY
e-mail: [email protected]

Abstract

The sensitivity of the Highway Capacity Manual (HCM) 2000 delay model to its parameters was investigated with the factorial design method. The study results suggest that the arrival flow, the saturation flow, and the green signal time are the main parameters that significantly affect the average control delay estimated by the delay model. Additionally, the multi-parameter interactions of the arrival flow-saturation flow and the arrival flow-green signal time have major effects on the model-estimated average control delay. The study results also demonstrate that the analysis period and the cycle length do not seem to have major effects on the estimation of the average control delay. A further factorial analysis performed to investigate the effect of parameters on the uniform delay showed that the green signal time and the cycle length appeared to significantly affect the uniform delay.

Key words: Sensitivity analysis, Factorial design method, HCM 2000 delay model, Uniform delay, Incremental delay.

Introduction

Sensitivity analysis of a model can help determine relative effects of model parameters on model results. In other words, the purpose of sensitivity testing of a model is to investigate whether a slight perturbation of the parameter values will result in a significant perturbation of the model results, that is, the internal dynamics of the model. The most commonly used sensitivity method is the change one-factor-at-a-time approach.
The major weakness of this method is its inability to identify multiple factor interactions among the model parameters. As an alternative approach, the factorial design method developed by Box et al. (1978) has been successfully employed in various environmental sensitivity studies (Henderson-Sellers, 1992, 1993; Liang, 1994; Barros, 1996; Henderson-Sellers and Henderson-Sellers, 1996; Yildiz, 2001; among others). Unlike the standard change one-factor-at-a-time sensitivity approach, this method has the advantage of testing both the sensitivity of model results to changes in individual parameters and to interactions among a group of parameters.

The objective of this study is to utilize the factorial design method in the sensitivity analysis of a total delay model that estimates the difference between the actual travel time of a vehicle traversing a signalized intersection approach and the travel time of the same vehicle traversing the intersection without impedance at the desired free flow speed. The Highway Capacity Manual (HCM) 2000 (TRB, 2000) delay model, one of the most commonly used time-dependent delay models, was selected for the sensitivity study. Due to the complexity and the highly nonlinear behavior of the model, the standard change one-factor-at-a-time sensitivity method seems inadequate. Therefore, as a first attempt, the sensitivity analysis of the model was conducted with the factorial design method to identify both main parameter and multiple parameter effects of primary importance.

Control delay

Total delay, also called control or overall delay, is defined as the additional time that a driver has to spend at an intersection when compared to the time it takes to pass through the intersection without impedance at the free flow speed.
This additional time is the result of the traffic signals and the effect of other traffic, and it is expressed on a per vehicle basis.

In estimating delay at signalized intersections, stochastic steady-state and deterministic delay models are used for undersaturated and oversaturated conditions, respectively. Neither model, however, deals satisfactorily with variable traffic demands. Stochastic steady-state delay models are only applicable for undersaturated conditions and predict infinite delay when the arrival flow approaches the capacity. When demand exceeds the capacity, continuous overflow delay occurs. Deterministic delay models can estimate continuous oversaturated delay, but they do not deal adequately with the effect of randomness when the arrival flow is close to the capacity, and they fail for degrees of saturation between 1.0 and 1.1. Consequently, the stochastic steady-state models work well when the degree of saturation is less than 1.0, and the deterministic oversaturation models work well when the degree of saturation is considerably greater than 1.0. There exists a discontinuity at a degree of saturation of 1.0, for which the latter models predict zero delay while the former predict infinite delay.

Time-dependent delay models, therefore, fill the gap between these 2 models and give more realistic results in estimating the delay at signalized intersections. They are derived as a mix of the steady-state and the deterministic models by using the coordinate transformation technique described by Kimber and Hollis (1978, 1979). Here, the coordinate transformation is applied to the steady-state curve to make it asymptotic to the deterministic line.
Thus, time-dependent delay models predict the delay for both undersaturated and oversaturated conditions without any discontinuity at a degree of saturation of 1.0.

The HCM 2000 delay model

The HCM 2000 model, along with the Australian (Akcelik, 1981) and the Canadian (Teply, 1996) models, is a commonly used delay model for estimating delay at signalized intersections. General formulations of these models are similar to each other. In the HCM 2000 model, the expression of average control delay experienced by vehicles arriving in a specified time and flow period at traffic signals is given by Eq. (1):

d = d1 × (PF) + d2 + d3   (1)

in which d is the average control delay per vehicle (s/veh), d1 is the uniform delay term resulting from interruption of traffic flow by traffic signals at intersections, PF is the uniform delay progression adjustment factor, which accounts for effects of signal progression, d2 is the incremental delay term incorporating effects of random arrivals and oversaturated traffic conditions, and d3 is the initial queue delay term accounting for delay to all vehicles in the analysis period due to the initial queue at the start of the analysis period, taken as zero.

Uniform delay

The uniform delay term is based on deterministic queuing analysis and is predicted by the assumption that the number of vehicles arriving during each signal cycle is constant and equivalent to the average flow rate per cycle. Because of constant arrival rates, randomness in the arrivals is ignored and the discharge rate varies from zero to saturation flow according to the red and green time of the signal. The discharge rate equals the saturation flow rate only when a queue exists because of red time of the signal.
On the other hand, when there is no queue, the discharge rate is equal to the arrival flow rate due to undersaturated traffic conditions, and values of degree of saturation (X) beyond 1.0 are not used in the computation of d1. The uniform delay term is expressed by Eq. (2):

d1 = 0.5C(1 − g/C)² / [1 − min(1, X)(g/C)]   (2)

where d1 is uniform delay (s/veh), C is cycle time (s), g is green time (s), X is degree of saturation indicating the ratio of arrival flow (or demand) to capacity (i.e. v/c), and g/C is green ratio.

Incremental delay

The incremental delay term represents additional delay experienced by vehicles arriving during a specified flow period. Incremental delay results from both temporary and persistent oversaturation. Temporary oversaturation occurs during both undersaturated and oversaturated traffic conditions because of randomness in vehicle arrivals and temporary cycle failures. Thus, delay resulting from temporary oversaturation is called random overflow delay. The effect of the randomness in arrival flows is not important and can be neglected for low degrees of saturation because total arrivals are much less than the capacity. Conversely, for high degrees of saturation and especially when the arrival flow approaches the capacity, the effect of random variation in arrivals increases significantly.

Persistent oversaturation, on the other hand, only occurs during oversaturated traffic conditions because the arrival flow is always greater than the capacity; that is, vehicles cannot be discharged within the signal cycles. Delay resulting from persistent oversaturation is called continuous or deterministic overflow delay. The effect of the overflow delay in incremental delay increases as the duration of the analysis period (T) and the value of the degree of saturation (X) increase. The expression of the incremental delay term is given in Eq.
(3):

d2 = 900T [(X − 1) + √((X − 1)² + 8kIX / (cT))]   (3)

where d2 is the incremental delay to account for the effect of random and oversaturation queues, T is the duration of the analysis period in hours, k is the incremental delay factor, I is the upstream filtering or metering adjustment factor, and c is capacity given as a function of saturation flow (s) and green ratio (i.e. c = s(g/C)).

Factorial design method

A general factorial design method tests a fixed number of possible values for each of the model parameters with specific perturbations of values (usually 2 levels: upper and lower). Unlike the standard change-one-factor-at-a-time method, this method has the advantage of testing both the sensitivity to changes in individual parameters and to interactions between groups of parameters. The method tests a fixed number of possible values for each of the model parameters, and then identifies and ranks each parameter according to some pre-established measures of model sensitivity by running the model through all possible combinations of the parameters (Box et al., 1978). For example, if there are n parameters in the model for 2 perturbation levels, then there will be 2^n combinations of the model parameters. This is illustrated in the following 3-parameter (2³ factorial) design. Assume that the parameters are called A, B, and C, and the prediction variable is called PV. The corresponding design matrix for this example is shown in Table 1.

Table 1. Factorial design matrix for single parameters.

Run  A  B  C  PV
1    -  -  -  R1
2    +  -  -  R2
3    -  +  -  R3
4    +  +  -  R4
5    -  -  +  R5
6    +  -  +  R6
7    -  +  +  R7
8    +  +  +  R8

where + and - signs represent the 2 possible values of each parameter (upper and lower levels, respectively).
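Eqs. (1)–(3) above can be sketched numerically. The following is a minimal sketch, assuming PF = 1, d3 = 0, k = 0.5 and I = 1 (the values adopted later in this study); it is an illustration of the formulas, not the authors' implementation:

```python
import math

# Minimal sketch of the HCM 2000 average control delay, Eqs. (1)-(3),
# assuming PF = 1, d3 = 0, k = 0.5 and I = 1.0.
def hcm_control_delay(v, s, g, C, T, k=0.5, I=1.0):
    c = s * g / C                    # capacity (veh/h), c = s(g/C)
    X = v / c                        # degree of saturation
    # Uniform delay, Eq. (2); X is capped at 1.0 in this term only
    d1 = 0.5 * C * (1 - g / C) ** 2 / (1 - min(1.0, X) * (g / C))
    # Incremental delay, Eq. (3); valid for both X < 1 and X > 1
    d2 = 900 * T * ((X - 1) + math.sqrt((X - 1) ** 2 + 8 * k * I * X / (c * T)))
    return d1 + d2                   # average control delay (s/veh)

# Delay should grow with arrival flow, all else held equal
low = hcm_control_delay(v=250, s=2000, g=30, C=120, T=0.5)
high = hcm_control_delay(v=750, s=2000, g=30, C=120, T=0.5)
```

Note that the incremental term stays finite at X = 1, which is exactly the discontinuity the time-dependent formulation removes.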
Within the design matrix, the effects due to each parameter and the parameter interactions can be estimated as:

Ej = [ Σ(i=1..n) Sij Ri ] / Nj   (4)

in which Ej represents the effect of the jth factor (i.e. in the jth column), n is the total number of experimental runs (i.e. n = 8), Sij represents the sign in row i and column j, Ri represents the value of the prediction variable obtained from the ith experimental run, and Nj is the number of + signs in column j.

Using Eq. (4) and the above design matrix, the effects of parameter interactions on the model results can also be estimated based on the signs of the parameter interactions using the following rule: plus times minus gives a minus, and minus times minus or plus times plus gives a plus. The corresponding design matrix for parameter interactions is given in Table 2.

Table 2. Factorial design matrix for multi-parameter interactions.

Run  A·B  A·C  B·C  A·B·C
1    +    +    +    -
2    -    -    +    +
3    -    +    -    +
4    +    -    -    -
5    +    -    -    +
6    -    +    -    -
7    -    -    +    -
8    +    +    +    +

The degree of importance of the parameters and their interactions can be determined after all the Ej values are estimated from Eq. (4). One way of identifying and ranking the parameters with major effects, as suggested by Box et al. (1978), is to plot the effects on a normal probability scale. According to this method, any outliers from the straight line on the normal probability plot could be considered to affect the model results significantly, while other effects would lead to variability in model results consistent with the result of random variation about a fixed mean, assuming that higher order interactions are negligible in a manner similar to neglecting higher order terms in a Taylor series expansion (Box et al., 1978).
Another way of identifying the parameters with major effects on the model results, as suggested by Henderson-Sellers (1992, 1993), is to use an iterative method to find thresholds that are 2, 3, or 4 standard deviations from zero. Here, any effects greater than the estimated thresholds are considered to have significant effects on the model results.

Factorial design of the HCM 2000 delay model

The 2-level factorial design method was applied to the HCM 2000 delay model for the sensitivity analysis. Five model parameters with parameter index numbers from 1 to 5 (1: v, 2: s, 3: g, 4: C, and 5: T) were selected for this purpose (Table 3). Since the degree of saturation (X) and the capacity (c) are dependent parameters, they cannot be selected as individual parameters in the sensitivity analysis. The upper and lower levels of the selected model parameters given in Table 3 were chosen arbitrarily within their reasonable ranges. In this particular study, the progression adjustment factor (PF), the incremental delay calibration factor (k), and the upstream filtering adjustment factor (I) were taken as 1.0, 0.5, and 1.0, respectively.

Table 3. The selected model parameters for the sensitivity analysis.

Parameter Index No.  Parameter Name                   Symbol  Lower Level  Upper Level
1                    Arrival flow (veh/h)             v       250          750
2                    Saturation flow (veh/h)          s       1000         2000
3                    Green time (s)                   g       30           90
4                    Cycle time (s)                   C       120          180
5                    Duration of analysis period (h)  T       0.5          1.0

For the given number of parameters and perturbation levels, the design matrix for the main parameters is shown in Table 4.
The design matrix for the main model parameters.

Run  1  2  3  4  5
1    -  -  -  -  -
2    +  -  -  -  -
3    -  +  -  -  -
4    +  +  -  -  -
5    -  -  +  -  -
6    +  -  +  -  -
7    -  +  +  -  -
8    +  +  +  -  -
9    -  -  -  +  -
10   +  -  -  +  -
11   -  +  -  +  -
12   +  +  -  +  -
13   -  -  +  +  -
14   +  -  +  +  -
15   -  +  +  +  -
16   +  +  +  +  -
17   -  -  -  -  +
18   +  -  -  -  +
19   -  +  -  -  +
20   +  +  -  -  +
21   -  -  +  -  +
22   +  -  +  -  +
23   -  +  +  -  +
24   +  +  +  -  +
25   -  -  -  +  +
26   +  -  -  +  +
27   -  +  -  +  +
28   +  +  -  +  +
29   -  -  +  +  +
30   +  -  +  +  +
31   -  +  +  +  +
32   +  +  +  +  +

The corresponding computation matrix for the multiple parameter interactions was obtained using the design matrix (Table 5).

Table 5. The computation matrix for the multiple parameter interactions. [The sign matrix for the 26 interaction columns (12, 13, ..., 2345, 12345) over the 32 runs is not legible in this copy.]

Sensitivity Results and Discussion

A total of 32 runs were conducted for the given factorial design.
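The run generation and the effect estimate of Eq. (4), applied over a full 2-level design like Table 4, can be sketched as follows; factorial_effects is a hypothetical helper, not code from the paper:

```python
from itertools import product

# Sketch of a 2-level full factorial run and the main-effect estimate of
# Eq. (4): E_j = sum_i(S_ij * R_i) / N_j, where N_j is the number of '+'
# signs in column j (half of the 2**n runs). Hypothetical helper names.
def factorial_effects(model, n_params):
    runs = list(product([-1, +1], repeat=n_params))  # all 2**n sign rows
    results = [model(signs) for signs in runs]       # one model run per row
    n_plus = len(runs) // 2                          # '+' count per column
    return [sum(signs[j] * r for signs, r in zip(runs, results)) / n_plus
            for j in range(n_params)]
```

For a purely linear model the estimated effect of each factor is twice its coefficient (the difference between the '+' and '-' means); for example, factorial_effects(lambda s: 3*s[0] + s[1], 2) gives [6.0, 2.0]. Interaction effects would be obtained the same way using the sign products, as in Table 5.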
The results for average control delay and parameter effects for main and multiple parameter interactions are given in Table 6.

Table 6. Results of 32 runs and parameter effects.

Run  Average Control Delay (s/veh)  Parameter Index No.  Parameter Effect
1    204.2                          1                    1225.0
2    1385.2                         2                    -827.9
3    22.8                           3                    -1275.4
4    257.8                          4                    565.7
5    5.3                            5                    467.5
6    169.4                          12                   -627.1
7    2.4                            13                   -1030.1
8    4.2                            14                   382.6
9    832.1                          15                   406.9
10   3224.3                         23                   637.3
11   109.4                          24                   -280.2
12   963.2                          25                   -270.5
13   66.6                           34                   -380.0
14   797.6                          35                   -408.8
15   28.0                           45                   179.6
16   70.4                           123                  441.1
17   658.7                          124                  -130.3
18   5435.3                         125                  -210.0
19   22.9                           134                  -221.5
20   933.2                          135                  -348.2
21   5.4                            145                  127.4
22   620.0                          234                  122.4
23   2.4                            235                  211.8
24   4.2                            245                  -95.2
25   3082.9                         345                  -125.8
26   12,674.4                       1234                 -24.7
27   166.3                          1245                 -43.0
28   3663.3                         2345                 41.3
29   77.4                           1235                 151.4
30   3047.7                         1345                 -73.5
31   28.0                           12345                -10.8
32   103.2

In order to determine the main and multiple parameter interactions with major effects on the HCM 2000 delay model results, the parameter effects were plotted on a standard normal probability scale as suggested by Box et al. (1978). The outliers marked on Figure 1 are v, s, and g as main parameters, and v-s and v-g as 2-parameter interactions.

Figure 1. Parameter effects plotted on a normal probability scale. [Figure: parameter effects against quantiles of the standard normal; the marked outliers are v, s, g, v-s, and v-g.]

Using the iterative approach suggested by Henderson-Sellers (1993, 1996), the identified parameters were then classified into 2 categories: primary importance and secondary importance. More specifically, the importance of these parameters was ranked based on the absolute value of their effects at the 4-, 3-, and 2-standard-deviation (i.e. 4σ, 3σ, and 2σ) thresholds as shown in Table 7.

Table 7.
Importance of identified parameters based on thresholds of |4σ|, |3σ| and |2σ|.

Outliers  Primary Importance (|4σ|, |3σ|)  Secondary Importance (|2σ|)
v
s
g
v-s
v-g
[The marks assigning each outlier to a threshold column are not legible in this copy.]

Referring to the cumulative queuing polygon (Figure 2), the sensitivity results are consistent with the fact that the average delay per vehicle at signalized intersections is minimized when the arrival flow (v) is less than the capacity of the intersection (c). In this case, vehicles are mainly subjected to uniform delay and the amount of delay becomes equal to the effective red signal time or less. On the other hand, as the arrival flow exceeds the capacity, vehicles need to wait for a few signal cycles to be discharged and this causes an increase in the average delay per vehicle.

Figure 2. Cumulative queuing polygon. [Figure: cumulative arrival curve A(t) and departure curve D(t) over red and green intervals, with queue length Q(t), per-vehicle delay w_i, arrival rate v, and saturation flow S.]

In addition to the arrival flow, the saturation flow (s) is also a significant parameter of average delay. As queued vehicles at a signalized intersection discharge at a relatively higher rate, the effect of the queue will diminish and the average delay will decrease. On the other hand, as the arrival flow approaches the saturation flow, or vehicles discharge at a relatively lower rate, the average delay increases accordingly.

As is known, the capacity of a signalized intersection is linearly dependent upon the saturation flow as well as the allocation of the green time (g) in a signal cycle. Therefore, if the green time increases, the number of vehicles to be discharged also increases and, in turn, the average delay per vehicle decreases.

The results of the sensitivity analysis indicate that only the 2 parameter interactions v-s and v-g have significant effects on model results. Not surprisingly, this is due to their respective individual main parameter effects.

The study results also suggest that the remaining main parameters (i.e.
the cycle length [C] and the analysis period [T]) do not have major effects on the average delay as much as the arrival flow, the saturation flow, and the green time.

A further factorial analysis was performed to investigate the effect of parameters on the uniform delay. The results showed that the green time and the cycle length appeared to be significant parameters on the uniform delay.

Using the factorial design method, a sensitivity testing of the HCM 2000 delay model to its parameters was performed in this study. The evaluation of the sensitivity results shows that the arrival flow, the saturation flow, and the green time are the main parameters with significant effects on the average control delay. Additionally, v-s and v-g are multiple parameters having major effects on the average control delay.

References

Akcelik, R., "Traffic Signals: Capacity and Time Analysis", Australian Research Board, Research Report ARR No. 123, Nunawading, Australia, 1981.

Barros, A.P., "An Evaluation of Model Parameterizations of Sediment Pathways: A Case Study for the Tejo Estuary", Continental Shelf Research, 16(13), 1725-1749, 1996.

Box, G.E.P., Hunter, W.G. and Hunter, J.S., Statistics for Experimenters: An Introduction to Design, Data Analysis and Model Building, Wiley and Sons, 653 pp, 1978.

Canadian Capacity Guide for Signalized Intersections, Stan Teply (Ed.), Updated from the 1984 Edition, ITE District 7, 1996.

Henderson-Sellers, A., "Assessing the Sensitivity of a Land Surface Scheme to Parameters Used in Tropical Deforestation Experiments", Q. J. R. Meteorological Society, 118, 1101-1116, 1992.

Henderson-Sellers, A., "A Factorial Assessment of the Sensitivity of the BATS Land Surface Parameterization Scheme", American Meteorological Society, 6, 227-247, 1993.

Henderson-Sellers, B.
and Henderson-Sellers, A., "Sensitivity Evaluation of Environmental Models Using Fractional Factorial Experimentation", Ecological Modeling, 86, 291-295, 1996.

Kimber, R.M. and Hollis, E.M., "Peak Period Traffic Delay at Road Junctions and Other Bottlenecks", Traffic Engineering and Control, 19, 442-446, 1978.

Kimber, R.M. and Hollis, E.M., Traffic Queues and Delays at Road Junctions, Transportation Road Research Laboratory, TRRL Report 909, Berkshire, England, 1979.

Liang, Xu, A Two-Layer Variable Infiltration Capacity Land Surface Representation for General Circulation Models, Water Resources Series Technical Report No. 140, University of Washington, Department of Civil Engineering, Environmental Engineering and Science, Seattle, WA, USA, 1994.

Transportation Research Board, Highway Capacity Manual, TRB Special Report 209, National Research Council, Washington D.C., USA, 2000.

Yildiz, O., Assessment and Simulation of Hydrologic Extremes by a Physically Based Spatially Distributed Hydrologic Model, Ph.D. Thesis, The Pennsylvania State University, University Park, PA, 2001." ]
https://groups.google.com/g/sci.crypt.research/c/2m9EtK7cBc4
[ "# RSA 4096 public key operation on crypto block that natively supports 2048 bit only?

68 views

### crashedmind

Oct 17, 2009, 12:10:12 AM10/17/09
to

I have a general purpose 32-bit CPU that has an onboard hardware crypto block that supports low level crypto primitives like:
• Modulo exponentiation
• Montgomery Multiplication
• Montgomery modulo exponentiation
The onboard hardware crypto block supports maximum 2048 bit operations only e.g. it can support an RSA public key operation (signature verification) with 2048 bit public modulus 'n' and exponent 'e' of value 65537.

I want to know if it is possible to use the crypto h/w (with associated 2048 bit limit) to do a 4096 bit public key operation ('n' = 4096 bit, 'e' = 65537) and how.
I know all the input parameters in advance (n, e, signature) and can do whatever pre-computations are required.
I don't have the specification for the crypto block e.g. what model # is etc... but this is a general question as to what if any techniques can be used to solve this problem.

Apologies if this is a dumb question...

### mike clark

Oct 20, 2009, 4:49:04 AM10/20/09
to

On Oct 16, 10:10 pm, crashedmind <[email protected]> wrote:
> I have a general purpose 32-bit CPU that has an onboard hardware
> crypto block that supports low level crypto primitives like:
> • Modulo exponentiation
> • Montgomery Multiplication
> • Montgomery modulo exponentiation
> The onboard hardware crypto block supports maximum 2048 bit operations
> only e.g. it can support an RSA public key operation (signature
> verification) with 2048 bit public modulus 'n' and exponent 'e' of
> value 65537.
>
> I want to know if it is possible to use the crypto h/w (with
> associated 2048 bit limit) to do a 4096 bit public key operation ('n'
> = 4096 bit, 'e' = 65537) and how.
> I know all the input parameters in advance (n, e, signature) and can
> do whatever pre-computations are required.
> I don't have the specification for the crypto block e.g. what model #
> is etc... but this is a general question as to what if any techniques
> can be used to solve this problem.
>
> Apologies if this is a dumb question...

If you know the factorization of n, I would think you could use the Chinese Remainder Theorem to do your modular exponentiation using the factors of n. Then combine the 2 results to get the answer (mod n).

### Francois Grieu

Oct 20, 2009, 4:50:23 AM10/20/09
to

crashedmind wrote:

> I have a general purpose 32-bit CPU that has an onboard hardware
> crypto block that supports low level crypto primitives like:
> - Modulo exponentiation
> - Montgomery Multiplication
> - Montgomery modulo exponentiation
> The onboard hardware crypto block supports maximum 2048 bit operations
> only e.g. it can support an RSA public key operation (signature
> verification) with 2048 bit public modulus n and exponent e = 65537.
>
> I want to know if it is possible to use the crypto h/w (with
> associated 2048 bit limit) to do a 4096 bit public key operation
> (n of 4096 bit, e = 65537) and how.
> I know all the input parameters in advance (n, e, signature) and can
> do whatever pre-computations are required.
> I don't have the specification for the crypto block e.g. what model #
> is etc... but this is a general question as to what if any techniques
> can be used to solve this problem.

Assuming that when you wrote "Montgomery Multiplication" you meant "Montgomery modular multiplication", I see no way to make good use of the three primitives that you mention. However many crypto blocks do support additional primitives that can be put to good use.

One thing to consider: there may be no need to use the crypto block in the first place. In particular, there is no security-related reason to use the crypto block to perform the public-key operation for the purpose of integrity verification (that would be slightly different if the public-key operation was used for encryption). And the 32-bit CPU is likely fast enough for the RSA public-key operation: with n of 4096 bit and e = 65537, it is typically faster than the private-key operation is on the same hardware, by a factor of 80 to 1000 (depending on whether the private-key operation uses CRT and security countermeasures).
The fact that signature verification is fast with RSA is one of the excellent reasons to prefer RSA (or other schemes based on Integer Factorisation) to ECC in many applications.
For a view on the state of the art in this field, see
http://cr.yp.to/papers.html#rwsota

François Grieu" ]
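Mike Clark's CRT suggestion can be sketched concretely. Note that it only helps a party that knows the factorisation n = p·q (normally the private-key holder, not a verifier who holds just the public key (n, e)). The sketch below uses toy-sized primes, with Python's built-in pow standing in for the 2048-bit hardware modexp primitive:

```python
# Sketch of the CRT idea: assemble base**exp mod (p*q) from two half-size
# modular exponentiations mod p and mod q, recombined in Garner's form.
# Toy-sized numbers; pow() stands in for the hardware modexp primitive.
def modexp_via_crt(base, exp, p, q):
    rp = pow(base, exp, p)                 # half-size operation 1
    rq = pow(base, exp, q)                 # half-size operation 2
    h = ((rq - rp) * pow(p, -1, q)) % q    # CRT recombination step
    return rp + p * h                      # already reduced mod p*q

p, q, e = 10007, 10009, 65537              # stand-ins for 2048-bit primes
n = p * q
sig = 123456789 % n
assert modexp_via_crt(sig, e, p, q) == pow(sig, e, n)
```

The modular inverse via pow(p, -1, q) requires Python 3.8+; on older versions an extended-Euclid helper would be needed. The inverse of p mod q depends only on the key, so it can be precomputed once, matching the poster's note that pre-computations are acceptable.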
http://italingua.info/addition-fast-facts-worksheets/basic-addition-worksheets-kids-basic-addition-facts-worksheets-free-printable-doubles-worksheet-math-study-site-fact-addition-subtraction-basic-facts-worksheets-addition-fast-facts-worksheets/
[ "# Basic Addition Worksheets Kids Basic Addition Facts Worksheets Free Printable Doubles Worksheet Math Study Site Fact Addition Subtraction Basic Facts Worksheets Addition Fast Facts Worksheets", null, "basic addition worksheets kids basic addition facts worksheets free printable doubles worksheet math study site fact addition subtraction basic facts worksheets addition fast facts worksheets.\n\naddition subtraction basic facts worksheets fast content uploads and,free math coloring worksheets grade pages printable facts easy addition subtraction fast basic,addition subtraction fast facts worksheets basic kindergarten math fluency grade division practice,addition subtraction basic facts worksheets fast mixed operations math,free math worksheets and printouts addition subtraction fast facts basic, adding fractions different denominators worksheets basic facts a addition subtraction fast,multiplication fact worksheets grade timed math addition facts media subtraction basic fast ,fast facts math worksheets lovely addition for all download and subtraction basic, addition subtraction basic facts worksheets fast math multiplying 0 to by a multiplication worksheet multiply in your,addition subtraction basic facts worksheets fast what is an fact multiplication and division table ." ]
https://fr.mathworks.com/matlabcentral/cody/problems/51-find-the-two-most-distant-points/solutions/113745
Cody

# Problem 51. Find the two most distant points

Solution 113745

Submitted on 17 Jul 2012 by Cris Luengo
This solution is locked. To view this solution, you need to provide a solution of the same size or smaller.

### Test Suite

All four tests pass:

```matlab
% Test 1
p = [0 0; 1 0; 2 2; 0 1];
ix_correct = [1 3];
assert(isequal(mostDistant(p), ix_correct))

% Test 2
p = [0 0; 1 0; 2 2; 0 10];
ix_correct = [2 4];
assert(isequal(mostDistant(p), ix_correct))

% Test 3
p = [0 0; -1 50];
ix_correct = [1 2];
assert(isequal(mostDistant(p), ix_correct))

% Test 4
p = [5 5; 1 0; 2 2; 0 10; -100 20; 1000 400];
ix_correct = [5 6];
assert(isequal(mostDistant(p), ix_correct))
```
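The problem asks for the 1-based indices of the two points at maximum pairwise Euclidean distance. A minimal Python sketch of the brute-force approach (the name `most_distant` mirrors the `mostDistant` called by the MATLAB tests; squared distances avoid the square root):

```python
from itertools import combinations

def most_distant(points):
    """Return 1-based indices [i, j] of the two most distant points.

    points is a list of (x, y) pairs; O(n^2) over all pairs.
    """
    best, best_pair = -1.0, None
    for (i, p), (j, q) in combinations(enumerate(points), 2):
        d2 = (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
        if d2 > best:
            best, best_pair = d2, [i + 1, j + 1]  # 1-based, matching MATLAB
    return best_pair

# The four cases from the test suite above:
assert most_distant([(0, 0), (1, 0), (2, 2), (0, 1)]) == [1, 3]
assert most_distant([(0, 0), (1, 0), (2, 2), (0, 10)]) == [2, 4]
assert most_distant([(0, 0), (-1, 50)]) == [1, 2]
assert most_distant([(5, 5), (1, 0), (2, 2), (0, 10), (-100, 20), (1000, 400)]) == [5, 6]
```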
https://sbseminar.wordpress.com/2008/02/12/representations-of-reductive-groups-in-characteristic-p/
# Representations of reductive groups in characteristic p

I've been at a couple of interesting conferences lately and so I have a lot to talk about. I'll start by summarizing an excellent expository talk by Jonathan Brundan which he gave at an MSRI introductory workshop last week.

Let G be a reductive group over an algebraically closed field k of characteristic p. The topic of this post is the algebraic representations of G. In other words, we want to study algebraic maps $G \rightarrow GL(V)$ where V is a finite dimensional vector space over k. Over the years, a few people (Soroosh, Carl, Alex Ghitza) have asked me what I knew about this theory and I'm afraid that I always gave them very incomplete or inaccurate answers. Now that I've been to Brundan's talk I think that I understand what is going on much better and I'd like to summarize it. Of course there will be nothing "new" in this post — I think that all the theory was worked out 20 years ago.

First, let us consider a construction of the group G. We start with our usual reductive Lie algebra $\mathfrak{g}_\mathbb{C}$ over the complex numbers. Consider its universal enveloping algebra $U_\mathbb{C}$. It has a Kostant $\mathbb{Z}$-form, $U_\mathbb{Z}$, which is generated by $E_i^k/k!$ etc. Then tensor with k to get $U_k$ and dualize to get $k[G]$, the Hopf algebra of functions on G.

Now we can build Verma modules $M(\lambda)$ (i.e. modules over $U_k$) for $\lambda \in X(T)$ (the weight lattice), just as we do over $\mathbb{C}$. By general nonsense, $M(\lambda)$ has a unique irreducible quotient $L(\lambda)$.

Theorem: $L(\lambda)$ is finite-dimensional if and only if $\lambda$ is dominant.

The proof of this is quite interesting. The hard part is to show that $L(\lambda)$ is finite dimensional.
It suffices to find at least one finite-dimensional $U_k$ module with highest weight $\lambda$. One way to do this is to start with the usual irrep $V(\lambda)_\mathbb{C}$ over the complex numbers. Then we have a $\mathbb{Z}$-form by acting on the highest weight vector with $U_\mathbb{Z}$. Then we tensor with k to get $V(\lambda)$, which is a finite dimensional highest weight $U_k$ module, called the Weyl module.

So for any dominant $\lambda$ we have two representations of G, $V(\lambda)$ and $L(\lambda)$, with highest weight $\lambda$. In fact $V(\lambda)$ is universal among such representations, since we have the following Borel-Weil theorem.

Theorem: $V(\lambda) = \Gamma(G/B, \mathcal{O}(-\lambda))^{\star}$

Here, as over $\mathbb{C}$, the higher cohomology of these line bundles vanishes. Moreover, the character of $V(\lambda)$ is given by the Weyl character formula. The dimension of the weight spaces of $L(\lambda)$ will be smaller than that of $V(\lambda)$.

Sometimes $V(\lambda)$ and $L(\lambda)$ coincide. For example, $L((p^r -1)\rho) = V((p^r-1)\rho)$. These are called the rth Steinberg modules.

I don't know if the characters of $L(\lambda)$ are known in general (is there an expert out there who can answer this question?), but there is the following remarkable theorem which reduces their study to that of a finite number of characters.

Since our group $G$ is defined over $\mathbb{F}_p$ (it is defined over $\mathbb{Z}$), we have a Frobenius map $F: G \rightarrow G$ which is a group homomorphism. We can use this Frobenius map to twist representations. It has the result of multiplying all the weights by p, and leaving representations irreducible, so we have $L(\lambda)^F \cong L(p\lambda)$.
This has a remarkable generalization.

Theorem: Let $\lambda \in X(T)_+$ and suppose $\lambda = \lambda_0 + p \lambda_1 + \dots + p^k\lambda_k$. Then $L(\lambda) \cong L(\lambda_0) \otimes L(\lambda_1)^F \otimes \dots \otimes L(\lambda_k)^{F^k}$.

In particular, note that we can make such a decomposition of $\lambda$ with all $\lambda_i$ living in the region defined by the inequalities $0 \le \langle \lambda, \alpha^\vee \rangle \le p$. So to "know" all the irreducible representations, it is enough just to know those ones for $\lambda$ in this region.

What is remarkable about this theorem is that its proof involves considering the kernel of the Frobenius — a highly non-reductive group scheme which has just one k point.

Aside from thinking about these irreducibles, there is much more going on. The category is not semisimple, so it is not enough to understand just the irreducible objects. In fact there is a well-developed block theory which is related to the action of the affine Weyl group on $X(T)$ generated by reflections in the fundamental alcove.

## 20 thoughts on "Representations of reductive groups in characteristic p"

1. Dear Joel,

last year I posted two articles on the arXiv dealing with Lusztig's conjecture for rational representations of reductive groups over a field $k$ of positive characteristic. The main result is the construction of a functor from a category of "special" sheaves of $k$-vector spaces on the complex affine flag manifold associated to the (simply connected) group $G$ to representations of the Lie algebra of the Langlands dual group $G^L$ over $k$.

The "special" sheaves are constructed from the skyscraper sheaf on the point Iwahori-orbit using integration along the fibres for the projection to partial affine flag varieties.
If $k=\\mathbb C$, the decomposition theorem tells us that the special sheaves are direct sums of intersection cohomology complexes on affine Schubert\nvarieties. For the application to representation theory only finitely many special sheaves are needed and using a base change argument we can deduce that these are intersection cohomology complexes for almost all characteristics. This yields almost all instances of Lusztig’s conjecture.\n\nThe main idea of the construction of the functor is the following. The special sheaves on an affine flag manifold form a categorification of the affine Hecke algebra $H$, and the projective modules over the modular Lie algebra categorify its periodic module $M$. These categorifications are given by relating both sides to combinatorially defined categories. The first is a category of sheaves on an affine moment graph (which is by the way equivalent to the corresponding category of\nSoergel’s bimodules), the second is a category appearing in the work of Andersen-Jantzen-Soergel.\n\nNow letting $H$ act on the element $A_0\\in M$ corresponding to the fundamental alcove yields a map $H\\to M$. My functor simply categorifies this map using the\ncombinatorial categories above.\n\nOne of the advantages of this approach is that it directly relates sheaves of $k$-vector spaces to modular representations, without using the quantum group (i.e. the characteristic zero version). So it avoids the localization result of Kashiwara-Tanisaki and the Kazhdan-Lusztig equivalence. Moreover, it is, I believe, Koszul-dual to Lusztig’s program in the following sense. 
The above functor relates intersection cohomology complexes on the affine flag manifold associated to $G$ to projective representations of the Lie algebra of $G^L$, while Lusztig's program associates (in case $k=\mathbb{C}$) simple representations of the Lie algebra of $G$ to these sheaves.

However, the main problem remains: for a given group and a given field $k$ one still doesn't know if Lusztig's conjecture holds. In my paper "Multiplicity one results…" I determined the $p$-smooth locus of the affine moment graphs (by quite elementary, in particular non-topological arguments). This can be applied to Lusztig's conjecture and yields its multiplicity one case in the following sense.

Lusztig's conjecture can be translated into a conjecture about the Jordan-Hölder multiplicities of baby Verma modules: this number should be the same as the corresponding periodic polynomial evaluated at 1. From the multiplicity one result for moment graphs it follows that if the latter number is 1, then also the multiplicity is one. This works for ALL prime numbers above the Coxeter number and is, as far as I know, the only result that holds in this generality. Moreover, it is known that in general the Jordan-Hölder multiplicity is at least what Lusztig predicted. So the multiplicity one case provides, I believe, strong evidence towards the conjecture in general.

2. Joel,

The region you define is not the "fundamental alcove" but the "fundamental box" (think about A2). I believe the characters are known in the fundamental alcove (they are the same as in characteristic 0) but are not known in the fundamental box. As you point out, this would give character formulas for all simples. The Lusztig conjecture gives a conjecture for their characters (in terms of Kazhdan-Lusztig polynomials for the affine Weyl group) and this is known (by combining a few hundred pages of work due to Kashiwara-Tanisaki, Kazhdan-Lusztig and Andersen-Jantzen-Soergel) for "almost all p".
Recently Peter Fiebig has been able to give an alternative proof using moment graphs (which still uses Andersen-Jantzen-Soergel). Thus one knows that the Lusztig conjecture is "generically true"; however, in any fixed characteristic one knows nothing.

3. Carl Mautner says:

Hi Joel!

First off, a good reference for all this is Jantzen's `Representations of Algebraic Groups.'

The question about the characters is related to the non-semisimplicity of the category. The Lusztig conjecture predicts the characters of the simple representations L(V) for p greater than or equal to the Coxeter number h, in terms of values of the Kazhdan-Lusztig polynomials. The conjecture has a number of strong implications, including a knowledge of all the Ext-groups between simples.

I'm not quite sure about the current status of the conjecture. I think Andersen-Jantzen-Soergel proved it for p sufficiently large, meaning there exists an N such that for all p greater than N the conjecture holds. Of course this doesn't actually tell you whether or not the conjecture is true for any given p.

I have also heard that Bezrukavnikov has proven the conjecture for all p greater than the Coxeter number.

4. Indeed I went to a talk today by Lin on Lusztig's conjecture. It is quite interesting: Lusztig's conjecture tells you the character of L(\lambda) by telling you the multiplicities with which L(\lambda) occurs in V(\mu). Since you already know the characters of V(\mu), this determines the characters of L(\lambda).

At the end of his talk, Lin speculated on whether there could be a proof by geometric Satake. Indeed why not? You have an equivalence between G-reps and perverse sheaves on the affine Grassmannian and you want to prove that some multiplicities of G-reps match some multiplicities of perverse sheaves on the affine flag variety (values of affine KL polynomials). So why not?

Final question: did Bezrukavnikov prove this Lusztig conjecture or a different one?
With Arkhipov and Ginzburg, he proved the analogous Lusztig conjecture for quantum groups at a root of unity. (I think that you can go between the two Lusztig conjectures by the work of Andersen-Jantzen-Soergel.)

5. Somehow it never occurred to me that you could get Ext information from the Grassmannian when your sheaf coefficients are in positive characteristic. It seems quite bizarre. What perverse sheaves correspond to $L(\lambda)$ and $V(\lambda)$?

6. Carl says:

Scott-

The IC extension from the orbit Gr^\lambda corresponds to the simple L(\lambda) and the perverse shriek extension to V(\lambda) (or maybe its dual? ugh, I always mix them up…).

7. shriek extension sounds right to me. That's the one with a natural map to the IC sheaf.

8. On the other hand, the affine KL polynomials record Exts between perverse sheaves on the affine flag variety with coefficients in char 0 (I think that this is correct).

9. Outside of this geometric Satake situation, do you know if anyone studied perverse sheaves with coefficients in characteristic p?

It seems like it should have been studied by topologists since they are very interested in cohomology of spaces with coefficients in characteristic p and from there it is a short trip to constructible sheaves with coefficients in char p.

10. Joel-

Soergel and his crew have studied them on the finite flag variety G/B, in which they also have an interpretation in terms of the representation theory of the algebraic group G over a field of characteristic p.

As I have been working on closely related questions, I have become very familiar with some of the difficulties. In particular, (a) one doesn't have the usual notion of universal coefficients and (b) one doesn't have the theory of weights, so spectral sequences don't necessarily collapse and there is no decomposition theory.
The reason people usually transfer problems into the language of perverse sheaves is to be able to use Deligne's machinery of weights, e.g. in the proof of the Kazhdan-Lusztig conjectures.

11. Carl –

what do you mean "one doesn't have the usual notion of universal coefficients"? I have only recently started to think about these things!

Joel –

One should also mention Daniel Juteau's thesis, which defines a Springer correspondence "modulo l". The idea is to relate modular representations of Weyl groups to perverse sheaves on the nilpotent cone with positive characteristic coefficients. An example of one of his results is that knowing the decomposition numbers for the symmetric group in characteristic p is equivalent to working out the characters of all the equivariant intersection cohomology complexes on the nilpotent cone with coefficients in characteristic p.

A beautiful example of this already occurs for S_2: in this case the nilpotent cone is a quadric cone in affine 3-space and the intersection cohomology complex looks different in characteristic 2 to all other characteristics (all other characteristics think the cone is smooth!). Under Daniel's translation, this becomes a very complicated way of saying that the representation theory of S_2 is different in characteristic 2 to any other characteristic!

12. Geordie says:

Carl —

I just spoke to Peter and now I understand what you were saying with point a).

The point is that if you know the intersection cohomology of a variety with coefficients in Z, then you do not necessarily know it over a finite field, for example. The reduction of the intersection cohomology complex mod some prime isn't necessarily the intersection cohomology complex. (This is also what Daniel talks about a lot in his thesis.)

13. Peter – Thanks for letting us know about your interesting preprints. I took a look at them. I think that it is great how you are able to use the moment graph theory.
Do you know if there is any connection between your work and the geometric Satake correspondence which relates rational representations of G over k to sheaves of k vector spaces on the affine Grassmannian?

Geordie – For some reason, until now I had not seen your first comment. Thanks for drawing our attention to Peter's paper and also for correcting my mistake about the fundamental alcove vs. fundamental box (I also noticed the mistake and corrected it the day after the post).

14. Joel – Geordie's comment was in the spam filter. If he hadn't complained to me about it, it might never have been found.

15. Geordie-

Sorry, I didn't notice that the discussion had continued…

That is what I meant. I just came across Daniel Juteau's thesis and started looking over it. Looks really interesting.

I've been thinking about trying to do something with moment graphs along the lines of a paper of Braden-MacPherson, only with coefficients in positive characteristic. In Braden-MacPherson, they give a method for computing the stalks of the IC sheaf with char 0 coefficients from the moment graph. I was hoping that one could modify their procedure to compute the char p stalks in the affine Grassmannian, or at least some vanishing of the stalks, to give a proof of the linkage principle via geometric Satake. So far, I haven't made much progress and have found myself drifting towards just thinking about the integral IC stalks.

16. David Ben-Zvi says:

Hi,

I was under the impression (as Carl says) that Bezrukavnikov has a proof of the Lusztig conjecture for p>Coxeter number.
This is not quite stated in his ICM but I believe he has said so at talks as far\nback as 2002 and a strategy towards this is I think implicit in the ICM address (of course Peter’s result has the serious\n\n– describing modular representations of Lie algebras in terms\nof the derived category of coherent sheaves on the T^*G/B, the Springer resolution (joint with\nMirkovic and Rumynin)\n\n– a derived equivalence between coherent sheaves on\nT^* G/B and a quotient of the category of\nperverse sheaves on affine flags (with Arkhipov)\n\n– an enhanced version of the above, to an equivalence of\nthe SYMMETRIES of the above categories — ie coherent\nsheaves on Steinberg and perverse sheaves on flags — which\nare both categorified forms of the affine Hecke algebra\n\n– a precise understanding of how t-structures behave under\nall the above derived equivalences (with Mirkovic)\n\n– finally (and probably the hardest and least written-up part?)\nan understanding of Hodge structures (or mixed structures)\non all the relevant categories.\n\nI don’t know how exactly this is supposed to add up,\nbut various other conjectures of Lusztig (including\nthe one Joel mentions, with Arkhipov-Ginzburg,\nand various ones on character sheaves and two-sided cells\nand related structures, with Ostrik and Finkelberg)\nhave fallen on Roma’s way towards this.\n(As you can tell I’m a big fan! 
)\n\nOne thing I particularly like is that the Bezrukavnikov-Mirkovic-Rumynin picture gives a clear\nconceptual picture how&why the affine Hecke algebra\nand Springer theory appear in the modular representation\ntheory (though the appearance of the affine Weyl group is\nclear, I think this was somewhat mysterious, at least\naccording to Humphreys’ Bull AMS article — but maybe Peter\ncan correct me here, as elsewhere!)\n\nPeter: It would be extremely interesting to hear what points\nof contact you can see between your beautiful recent\npreprints and the geometry that I mention here, parallels\nand differences etc — it’s exciting that between the approaches\na deeper understanding seems to be emerging!\n\n(It’s also worth mentioning that Kremnizer has\nshown that the same geometry — in particular\nthe “exotic t-structure” on T^* flags — controls\nalso the representation theory of quantum groups at roots\nof unity, giving a clearer picture of the\nresults of Arkhipov-Bezrukavnikov-Ginzburg that Joel mentions,\nwhile Gaitsgory-Lurie are giving a clean understanding\nof the relation between quantum groups and loop groups,\nso that all legs of the amazing Lusztig triangle\nmodular-quantum-loop are close to becoming\nconceptually understood!)\n\n17. David – Thanks for a nice summary of Roman’s work. I am a bit confused though. I thought that the BMR work was about an analogue of BB localization in characteristic p. In particular, it deals with category O for the Lie algebra g.\n\nHow does it relate to the rational representations of the group? Does that category sit inside the category O?\n\nI thought that in characteristic p, there wasn’t such a strong connection between representations of the group and those of the Lie algebra — in particular whenever the group acts, then the divided powers (like E^k/k!) should also act which are not in the Lie algebra.\n\n18. 
David Ben-Zvi says:\n\nJoel,\n\nThe group representations correspond (as far as I know —\nplease correct me all the experts out there!) to “restricted”\nrepresentations of the Lie algebra. In the enveloping algebra\nUg we have a new center in characteristic p, the p-center,\nand we have to set it to act by zero to get representations\nthat integrate to the group. The BMR picture\ntreats all reps of the Lie algebra, in particular the\nrestricted ones. (I think the analogue\non the other side of the Langlands type correspondence\nis that the group corresponds to affine Grassmannian\nmod spherical subgroup while the Lie algebra\ncorresponds to Grassmannian mod Iwahori — or more\nprecisely to a category of the same size, the\nIwahori-Whittaker category).\n\n19. sorry for replying so late – I travelled to Aarhus last weekend.\n\nJoel – the geometric Satake equivalence is quite different from the relation explained in my paper. First of all, geometric Satake gives representations of the group, whereas I obtain representations of its Lie algebra. Moreover, geometric Satake gives all representations, whereas I only get projective representations in the principal block. Finally, geometric Satake uses perverse sheaves constructible along the G[[t]]-orbits, whereas the “special” sheaves that appear in my paper are constructible along the smaller Iwahori-orbits.\n\nRecently I discussed a possible relation between these two pictures with Geordie. The main idea (rather: the main speculation) is to consider certain representations of the group as acting on special sheaves by convolution, and on the modules of its Lie algebra by some fancy tensor product. The functor that appears in my paper should then intertwine these two actions.\n\nDavid – thank you for the nice overview on Bezrukavnikov’s picture. I have to admit that I cannot see yet the precise connection to the results in my paper. 
But I believe the following: There are always two possible "localizations" of representation theories on flag varieties. The first is the Beilinson-Bernstein picture, the other is the Andersen-Jantzen-Soergel picture. They are mutually Koszul-dual in the following sense: While the first typically relates simple perverse sheaves to simple representations, the latter relates simple perverse sheaves to projective representations. Moreover, the flag variety in the AJS-picture is also not associated to the Lie algebra in question, but to its Langlands dual. Taken together, both localizations yield a Koszul-self duality of the representation theoretic categories.

If I am not mistaken, Bezrukavnikov's theory is about a Beilinson-Bernstein localization in the modular case. My paper is on the side of the AJS-philosophy. This is why I hope that one can get a Koszul-duality for quantum groups (and for modular representations) from comparing the two pictures.
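A small arithmetic footnote to the post above: the decomposition $\lambda = \lambda_0 + p\lambda_1 + \dots + p^k\lambda_k$ in the Steinberg tensor product theorem is, coordinate by coordinate, just the base-p expansion with digits in $[0, p)$. A toy sketch of that expansion (the function name is mine, and this illustrates only the arithmetic, not the representation theory):

```python
def base_p_digits(n, p):
    """Base-p digits of n >= 0, least significant first (all digits in [0, p))."""
    digits = []
    while n:
        n, d = divmod(n, p)
        digits.append(d)
    return digits or [0]

# For SL_2 with p = 3 and highest weight 17 = 2 + 2*3 + 1*9, the theorem gives
# L(17) isomorphic to L(2) (x) L(2)^F (x) L(1)^{F^2}:
assert base_p_digits(17, 3) == [2, 2, 1]
```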
https://answers.everydaycalculation.com/simplify-fraction/240-2352
Solutions by everydaycalculation.com

## Reduce 240/2352 to lowest terms

The simplest form of 240/2352 is 5/49.

#### Steps to simplifying fractions

1. Find the GCD (or HCF) of the numerator and denominator:
   GCD of 240 and 2352 is 48.
2. Divide both the numerator and the denominator by the GCD:
   (240 ÷ 48) / (2352 ÷ 48)
3. Reduced fraction: 5/49.

Therefore, 240/2352 simplified to lowest terms is 5/49.

MathStep (Works offline)

Download our mobile app and learn to work with fractions in your own time.
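The steps above are mechanical, so they translate directly into code; a minimal Python sketch using the standard library's `gcd`:

```python
from math import gcd

def reduce_fraction(num, den):
    """Reduce num/den to lowest terms by dividing both by gcd(num, den)."""
    g = gcd(num, den)
    return num // g, den // g

assert gcd(240, 2352) == 48                   # step 1: the GCD
assert reduce_fraction(240, 2352) == (5, 49)  # steps 2-3: divide through
```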
http://blog.moondrop.no/2016/08/c-extensions-with-unity/
A while ago I started to get quite annoyed at having to create instances of a class just to modify the values of an existing object, instead of directly editing the object fields.

```csharp
float new_value = (Mathf.Sin( Time.time ) + 1) * 0.5f;

// Works
transform.position = new Vector3( transform.position.x, new_value, transform.position.z );
material.color = new Color( material.color.r, material.color.g, material.color.b, new_value );

// Does not work :(
// transform.position.y = new_value;
// material.color.a = new_value;
```

To me, it looks wrong to create a new instance when you just want to modify the object. The code gives the incorrect impression of intent, it's verbose, and it leaves unnecessary information in the code. Yes, I'm a code nazi.

Luckily, I stumbled upon an article about so-called "extension methods" in C# (I hadn't been using C# for too long at this point) and how you could use them in Unity! This opened up a whole new world of possibilities when it came to "modifying" Unity's API to become more readable and understandable. And the best part is that it's extremely easy!

Now, you might be reading this and thinking that using extension methods is obvious. Maybe it is, but I want to leave more traces of this in the context of Unity on the internet so future beginners can have a chance of finding it.

So here goes!

To make an extension (essentially extending a class) you simply create a new script and call it "ClassExtension" (i.e. ColorExtension, Vector2Extension, etc.). Then you make that class static.

```csharp
public static class ColorExtension {}
```

You then add methods that are `public static`. What's special about these methods is that the first parameter has to be a reference to the object that is modified, marked with the `this` keyword. When using an extension method, C# automatically passes in this reference, but it has to be explicitly written into the function as a parameter.

It's easier to show by example, so here's an extended Color method I use all the time:

```csharp
using UnityEngine;

public static class ColorExtension
{
    public static Color WithAlpha( this Color color_, float alpha_ )
    {
        return new Color( color_.r, color_.g, color_.b, alpha_ );
    }
}
```

To use it, simply call the method on a Color object:

```csharp
float theta = (Mathf.Sin( Time.time ) + 1) * 0.5f;
material.color = material.color.WithAlpha( theta );
```

And there you have it. Simple and powerful.

I haven't explored this as much as I should've, but there are many convenient extensions you can make if you use your imagination.

Here are some of my favorites that I use every day:

```csharp
using UnityEngine;

public static class GameObjectExtension
{
    public static T GetComponentOrAdd<T>( this GameObject go_ ) where T : Component
    {
        T component = go_.GetComponent<T>( );
        if( component == null )
        {
            component = go_.AddComponent<T>( );
        }
        return component;
    }

    public static T GetComponentOrDie<T>( this GameObject go_ ) where T : Component
    {
        T component = go_.GetComponent<T>( );
        if( component == null )
        {
            Debug.LogError( "Component " + typeof( T ) + " not found on GameObject." );
            Debug.Break( );
        }
        return component;
    }

    public static bool HasComponent<T>( this GameObject go_ ) where T : Component
    {
        return go_.GetComponent<T>( ) != null;
    }
}
```

```csharp
using UnityEngine;

public static class Vector2Extension
{
    public static Vector2 WithX( this Vector2 vector_, float x_ )
    {
        return new Vector2( x_, vector_.y );
    }

    public static Vector2 WithY( this Vector2 vector_, float y_ )
    {
        return new Vector2( vector_.x, y_ );
    }
}
```

```csharp
using UnityEngine;

public static class Vector3Extension
{
    public static Vector3 WithX( this Vector3 vector_, float x_ )
    {
        return new Vector3( x_, vector_.y, vector_.z );
    }

    public static Vector3 WithY( this Vector3 vector_, float y_ )
    {
        return new Vector3( vector_.x, y_, vector_.z );
    }

    public static Vector3 WithZ( this Vector3 vector_, float z_ )
    {
        return new Vector3( vector_.x, vector_.y, z_ );
    }

    public static Vector3 WithXY( this Vector3 vector_, float x_, float y_ )
    {
        return new Vector3( x_, y_, vector_.z );
    }

    public static Vector3 WithXZ( this Vector3 vector_, float x_, float z_ )
    {
        return new Vector3( x_, vector_.y, z_ );
    }

    public static Vector3 WithYZ( this Vector3 vector_, float y_, float z_ )
    {
        return new Vector3( vector_.x, y_, z_ );
    }
}
```

Examples of how I use them:

```csharp
// Procedural mesh, so we don't care if the
// object already has the component or not
MeshFilter mesh_filter = gameObject.GetComponentOrAdd<MeshFilter>( );

// Need this for the object to make sense,
// so throw an error if it's not there
ImportantComponent important = gameObject.GetComponentOrDie<ImportantComponent>( );

// I don't care about the x and z values
// and just want to change y
float new_value = (Mathf.Sin( Time.time ) + 1) * 0.5f;
transform.position = transform.position.WithY( new_value );
```

Do you think this is useful for your work? Feel free to comment with your own ideas!

UPDATE

I've been informed that this article doesn't have any pictures. I didn't find anything relevant so here's a cat:

WTF human!
https://www.frontiersin.org/articles/10.3389/fchem.2019.00213/full
# Frontiers in Chemistry

## Chemical and Process Engineering

## Review ARTICLE

Front. Chem., 09 April 2019 | https://doi.org/10.3389/fchem.2019.00213

# Giant Vesicles Encapsulating Aqueous Two-Phase Systems: From Phase Diagrams to Membrane Shape Transformations

Yonggang Liu1*, Reinhard Lipowsky2 and Rumiana Dimova2*

• 1State Key Laboratory of Polymer Physics and Chemistry, Changchun Institute of Applied Chemistry, Chinese Academy of Sciences, Changchun, China
• 2Department of Theory and Bio-Systems, Max Planck Institute of Colloids and Interfaces, Potsdam, Germany

In this review, we summarize recent studies on giant unilamellar vesicles enclosing aqueous polymer solutions of dextran and poly(ethylene glycol) (PEG), highlighting recent results from our groups. Phase separation occurs in these polymer solutions at concentrations above a critical value at room temperature. We introduce approaches used for constructing the phase diagram of such an aqueous two-phase system by titration, density, and gel permeation chromatography measurements of the coexisting phases. The ultralow interfacial tension of the resulting water-water interface is investigated over a broad concentration range close to the critical point. The scaling exponent of the interfacial tension further away from the critical point agrees well with mean field theory, but close to this point, the behavior disagrees with the Ising value of 1.26. The latter discrepancy arises from the molar mass fractionation of dextran between the coexisting phases. Upon encapsulation of the PEG–dextran system into giant vesicles followed by osmotic deflation, the vesicle membrane becomes completely or partially wetted by the aqueous phases, which is controlled by the phase behavior of the polymer mixture and the lipid composition.
Deflation leads to a reduction of the vesicle volume and generates excess membrane area, which can induce interesting transformations of the vesicle morphology, such as vesicle budding. More dramatically, the spontaneous formation of many membrane nanotubes protruding into the interior vesicle compartment reveals a substantial asymmetry and spontaneous curvature of the membrane segments in contact with the PEG-rich phase, arising from the asymmetric adsorption of polymer molecules onto the two leaflets of the bilayer. These membrane nanotubes explore the whole PEG-rich phase when the membrane is completely wetted, but adhere to the liquid-liquid interface when the membrane becomes partially wetted. Quantitative estimates of the spontaneous curvature are obtained by analyzing different aspects of the tubulated vesicles, which reflect the interplay between aqueous phase separation and spontaneous curvature. The underlying mechanism for the curvature generation is provided by the weak adsorption of PEG onto the lipid bilayers, with a small binding affinity of about 1.6 kBT per PEG chain. Our study builds a bridge between nanoscopic membrane shapes and membrane-polymer interactions.

## Introduction

Phase separation can occur when solutions of two different polymers, or of a polymer and a salt, are mixed above a certain concentration in water. These aqueous two-phase systems (ATPSs) provide a particularly mild environment with an extremely low interfacial tension on the order of 1–100 μN/m, which enables many applications of ATPSs in biotechnology and bioengineering (Walter et al., 1985; Albertsson, 1986). One such system of great interest is obtained by mixing aqueous solutions of dextran and polyethylene glycol (PEG). These solutions undergo phase separation above the critical concentration at a given temperature, yielding two coexisting phases in equilibrium, each containing predominantly one of the polymer species and water.
Phase separation in polymer solutions depends on the thermodynamic properties of the system, which are theoretically described by the Flory-Huggins theory (Flory, 1941, 1953; Huggins, 1941). When the entropy of mixing is not sufficient to compensate the enthalpy of demixing, the polymer solution undergoes phase separation.

Recently, renewed interest in PEG–dextran systems has arisen because of their potential biotechnological applications, as well as their suitability as a model system for mimicking the crowded environment in cells (Dimova and Lipowsky, 2012, 2017; Keating, 2012). The PEG–dextran ATPS was encapsulated into giant unilamellar vesicles (GUVs), cell-sized containers (Dimova et al., 2006; Walde et al., 2010; Dimova, 2012, 2019). The group of Keating initiated the study of aqueous phase separation in GUVs (Helfrich et al., 2002) and observed asymmetric protein microcompartmentation in these systems, which resembles the crowded environment of the cytosol (Long et al., 2005). The partitioning of biomolecules in an ATPS is influenced by the affinities of the molecules being separated for the coexisting phases or the liquid-liquid interface, as well as by the physico-chemical properties of the employed ATPS itself (Zaslavski, 1995), which requires a detailed and quantitative characterization of its phase behavior. During the last decade, these hybrid soft matter systems containing both membranes and polymers have been investigated experimentally (Li et al., 2008, 2011, 2012; Long et al., 2008; Kusumaatmaja et al., 2009; Andes-Koback and Keating, 2011; Liu et al., 2016) and theoretically (Lipowsky, 2013, 2014, 2018). A number of interesting phenomena, such as vesicle budding (Long et al., 2008; Li et al., 2012), wetting transitions (Li et al., 2008; Kusumaatmaja et al., 2009), division of vesicles (Andes-Koback and Keating, 2011), and formation of membrane nanotubes (Li et al., 2011; Liu et al., 2016), have been observed.
All these phenomena are governed by the interplay between polymer-membrane interactions and the fluid-elastic properties of the membrane (Lipowsky, 2013, 2014, 2018). Precise experimental studies of the aqueous phase separation and the resulting aqueous phases are challenging, but they are required to fully understand their role in the associated membrane transformations. This review focuses on precisely this topic, highlighting results from our groups that have been obtained over the past decade.

The text is organized as follows. We first discuss the phase diagram of the PEG–dextran system. More specifically, we introduce a density method for the measurement of the tie lines between the coexisting phases and compare it with a method based on gel permeation chromatography (GPC). We then compare the scaling exponent of the interfacial tension to the values obtained in mean field theory and in the Ising model, and correlate the deviation from the Ising value in the vicinity of the critical point with the molar mass fractionation of dextran between the coexisting phases. Afterwards, we focus on membrane-associated effects (such as wetting and morphological changes) in GUVs encapsulating an ATPS. The observed complete-to-partial wetting transition of the giant vesicle membrane by the PEG-rich phase is discussed by introducing a hidden material parameter, the intrinsic contact angle (Kusumaatmaja et al., 2009), which characterizes the affinity of the phases for the membrane but cannot be directly measured by optical microscopy. We then discuss the formation of membrane nanotubes resulting from the deflation of giant vesicles encapsulating aqueous mixtures of dextran and PEG. Theoretical analysis of the GUV shapes with nanotubes protruding into the interior of the vesicles revealed the presence of a negative spontaneous curvature (Li et al., 2011).
Depending on the properties of the aqueous phases and the vesicle membranes, three different tube patterns have been observed within vesicles of three distinct morphologies (Liu et al., 2016). Quantitative estimates of the spontaneous curvature are obtained by image analysis of the vesicle shapes for membranes of different lipid compositions with distinct fluid-elastic properties. The molecular mechanism underlying the observed curvature generation is provided by the weak adsorption of PEG molecules onto the membranes, according to theoretical considerations, control experiments with PEG solutions, and molecular dynamics simulations. Finally, we discuss possible future directions in the field.

## Phase Diagram of the PEG–Dextran Systems

At a given temperature, the phase diagram of an aqueous solution of dextran and PEG depends on their weight fractions wd and wp. The diagram includes the binodal (the boundary of the two-phase coexistence region), the critical point, and the tie lines, as illustrated in Figure 1. The phase diagram is divided by the binodal curve into a region of polymer concentrations that form two immiscible aqueous phases (above the binodal in Figure 1) and one homogeneous phase (at and below the binodal in Figure 1). A tie line connects two points on the binodal, which represent the final compositions of the polymer components in the coexisting phases. Also located on the binodal is the critical demixing point. Above this point but close to the binodal (see Figure 1), the compositions and volumes of both phases are nearly identical. Different methods have been proposed to construct the phase diagram of the PEG–dextran system (Hatti-Kaul, 2000). Below, we review some of them.

FIGURE 1

Figure 1. Schematic phase diagram for an aqueous two-phase system, in our case dextran (d) and PEG (p), obtained by plotting the weight fraction of PEG wp as a function of the weight fraction of dextran wd.
The phase diagram is divided into regions of two-phase coexistence (blue) and one homogeneous phase (white) by the binodal (solid curve). The critical point C, at which the volumes of the two coexisting phases become identical, is located infinitely close to and above the binodal. The mixture A, with the same polymer ratio as the critical point but located above the binodal, undergoes phase separation and forms two coexisting phases with compositions D and P in equilibrium, which are the dextran-rich and PEG-rich phases, respectively. Solutions with compositions lying on this tie line (dashed line) separate into coexisting phases with the same final compositions (D and P) but different volume fractions. The composition difference of the coexisting phases is characterized by the length of the tie line DP, which becomes shorter at lower polymer concentrations and converges to a single point called the critical demixing point (C).

### Binodal and Critical Point

The binodal determined by cloud-point titration is shown in Figure 2A for aqueous solutions of dextran (with weight-average molar mass Mw = 400–500 kg/mol) and PEG (with Mw = 8 kg/mol) (Liu et al., 2012). The aqueous mixture of dextran and PEG undergoes phase separation when the total polymer weight fraction exceeds a few percent. Titration experiments from the one-phase to the two-phase region, or the other way around, lead to the same phase boundary.

FIGURE 2

Figure 2. (A) Binodal of the aqueous solution of dextran (molar mass between 400 and 500 kg/mol) and PEG (molar mass 8 kg/mol) at 24 ± 0.5°C, obtained by titration from the one-phase to the two-phase region (solid circles) and vice versa (open circles). The "+" symbols are experimental points along the titration trajectory with wd/wp = 2.0 (dashed line). The intersection of such a trajectory with the binodal defines the polymer weight fraction wbi.
(B) Volume fraction ΦD of the dextran-rich phase as a function of the normalized distance from the binodal, w/wbi − 1, for polymer solutions of different weight ratios wd/wp between dextran and PEG, ranging from 0.60 to 2.00; see the lower inset with the color code. The upper inset shows the dependence of the volume fraction ΦD on the weight ratio wd/wp very close to the phase boundary at w/wbi = 1.02. For ΦD = 0.50 (dashed line), the polymer weight ratio wd/wp = 1.25 was found. Reprinted with permission from Liu et al. (2012). Copyright (2012) American Chemical Society.

The critical point of the system, at which the volumes of the coexisting phases are equal, can be estimated by gradually approaching the binodal via titration of the PEG–dextran mixture in the two-phase region with water. In this experiment, a series of mixtures of dextran and PEG solutions are prepared at certain weight ratios wd/wp, and the volume fractions of the coexisting phases are measured while bringing the system stepwise to the binodal. Using data obtained from titration trajectories with different values of wd/wp, one can find the weight ratio at which the two phases have equal volumes in the vicinity of the binodal; in this case, wd/wp = 1.25 was found, as shown in Figure 2B (Liu et al., 2012). Carefully studying solutions with this weight ratio close to the binodal provides an estimate of the polymer composition at the critical point, which is located at a total polymer weight fraction wcr = 0.0812 ± 0.0002. The critical concentration for phase separation of the studied PEG–dextran system is then given by ccr = ρcr wcr = 0.0829 ± 0.0002 g/mL, with ρcr being the solution mass density at the critical point.

It should be mentioned that the phase diagram of the PEG–dextran system depends on temperature (Helfrich et al., 2002); one can therefore use either temperature or concentration as the experimental control parameter for the phase state of the PEG–dextran system.
Additionally, new phase diagrams should be measured whenever new lots of polymer are used, because of batch-to-batch differences in the molar mass distributions of the polymers, even if they are obtained from the same manufacturer (Helfrich et al., 2002).

### Tie Line Determination

To assess the polymer concentrations and build the tie lines in an ATPS, one has to separate the phases and measure physical properties that are related to the polymer concentrations. For the PEG–dextran system, one normally measures the optical activity and the refractive index of the solutions, because dextran is optically active while PEG is not. The dextran concentrations in the coexisting phases are then obtained from the known specific rotation of dextran, while the PEG concentrations are determined after subtracting the contribution of dextran to the solution refractive index. As a simpler alternative, a gravimetric method has been employed for the tie line determination of an ATPS containing a PEG polymer and a salt, by forcing the end points of the tie line onto a separately determined binodal (Merchuk et al., 1998). However, for an ATPS containing polymers with large dispersities, the tie line end points deviate from the binodal, and the mismatch grows with increasing polymer dispersity. This makes the gravimetric method inapplicable to PEG–dextran systems, because the generally available dextran has a broad molar mass distribution. Below we show that the tie lines of an ATPS can be accurately determined by density and gel permeation chromatography measurements of the coexisting phases.

#### Density Method

This method for determining the tie lines of an ATPS is based on accurate density measurements of the coexisting phases (Liu et al., 2012). Here we assume that the specific volume of the aqueous polymer solution is the sum of the contributions from all components.
Then, the mass density ρ of the mixture is related to the specific volumes of the components via

$\frac{1}{\rho} = v_s w_s + v_d w_d + v_p w_p$    (1)

where ws = 1 − wd − wp, wd, and wp are the weight fractions of water, dextran, and PEG, respectively. Here the specific volumes of water, dextran, and PEG at 24°C are found to be vs = 1.00271 mL/g, vd = 0.62586 mL/g, and vp = 0.83494 mL/g, respectively (see inset of Figure 3A).

FIGURE 3

Figure 3. (A) Densities of the coexisting dextran-rich (open squares) and PEG-rich (solid circles) phases for polymer solutions with weight ratio wd/wp = 1.25 as functions of the total initial polymer weight fraction w. The dashed line is the calculated density of the polymer solution with wd/wp = 1.25. In the inset, the densities of pure dextran and pure PEG solutions and their mixtures with wd/wp = 1.25 in the one-phase region are plotted as functions of the total polymer weight fraction w. The lines are fits to Equation (1) with specific volumes vd = 0.62586 ± 0.00046 mL/g and vp = 0.83494 ± 0.00043 mL/g. (B) Tie lines in the PEG–dextran phase diagram at 24 ± 0.5°C. The solid circles show the data for the experimentally measured binodal. The compositions of the initial solutions (with weight ratio wd/wp = 1.25) for which the phase densities after phase separation were measured are indicated by "+" symbols. The end points of the respective tie lines consist of upward-pointing triangles indicating the compositions of the dextran-rich phases and downward-pointing triangles indicating the compositions of the PEG-rich phases. The solid lines represent two examples of isopycnic lines calculated following Equations (2) and (3) for the initial solution composition indicated with an encircled "+" symbol in the graph: (wd, wp) = (0.0700, 0.0560). The intersections of the isopycnic lines with the binodal yield the compositions of the two phases, also encircled. Reprinted with permission from Liu et al. (2012). Copyright (2012) American Chemical Society.

In Liu et al. 
(2012), we prepared PEG–dextran solutions in the concentration range wcr < w < 0.36 at the same weight ratio wd/wp as for the critical point. These solutions were kept at a constant temperature of 24°C for a few days to reach equilibrium, before the coexisting phases were separated and their densities accurately measured by a density meter. As expected, the top PEG-rich phase always has a lower density than the bottom dextran-rich phase, and the density difference between the coexisting phases vanishes at the critical point (Figure 3A). The normalized distance of the corresponding tie line from the critical point is taken to be the reduced concentration $\epsilon \equiv \frac{c}{c_{cr}}-1$, which lies in the range 0 < ε < 3.82.

The compositions of the dextran-rich (D) and PEG-rich (P) phases are then determined from their densities ρD and ρP, respectively. By rewriting Equation (1), the PEG weight fractions of the coexisting phases, $w_{\text{p}}^{\text{D}}$ and $w_{\text{p}}^{\text{P}}$, are related to the corresponding dextran weight fractions, $w_{\text{d}}^{\text{D}}$ and $w_{\text{d}}^{\text{P}}$, via (Liu et al., 2012):

$w_{\text{p}}^{\text{D}} = \frac{1/\rho_{\text{D}} - v_s - (v_d - v_s)\, w_{\text{d}}^{\text{D}}}{v_p - v_s}$    (2)

for the dextran-rich phase, and

$w_{\text{p}}^{\text{P}} = \frac{1/\rho_{\text{P}} - v_s - (v_d - v_s)\, w_{\text{d}}^{\text{P}}}{v_p - v_s}$    (3)

for the PEG-rich phase, respectively.

Equations (2) and (3) represent straight isopycnic lines in the wd-wp plane with a constant slope of −(vd − vs)/(vp − vs), and the intercepts of these lines reflect the different values of the phase densities ρD and ρP. The compositions of the coexisting dextran-rich and PEG-rich phases can then be estimated from the intersections of these isopycnic lines with the binodal established in section Binodal and Critical Point.

The accuracy of this density-based method in constructing the tie lines is demonstrated by the close proximity of the coordinates (wd, wp) for the starting mixtures to the corresponding tie lines, as shown in Figure 3B.
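As a numerical illustration of the isopycnic construction, the short Python sketch below evaluates Equations (2) and (3) using the measured specific volumes quoted above; the phase density used here is a made-up illustrative value, not a measurement from the paper:

```python
# Specific volumes (mL/g) at 24 °C, from the density fits in Figure 3A
v_s = 1.00271   # water
v_d = 0.62586   # dextran
v_p = 0.83494   # PEG

def w_p_isopycnic(rho: float, w_d: float) -> float:
    """PEG weight fraction on the isopycnic line of a phase with mass
    density rho (g/mL), i.e. Eq. (1) rewritten as Eqs. (2)/(3):
    1/rho = v_s + (v_d - v_s)*w_d + (v_p - v_s)*w_p, solved for w_p."""
    return (1.0 / rho - v_s - (v_d - v_s) * w_d) / (v_p - v_s)

# Hypothetical density of a dextran-rich phase (illustrative only)
rho_D = 1.05

# The line is straight: a finite-difference slope reproduces the
# analytic constant slope -(v_d - v_s)/(v_p - v_s)
slope = (w_p_isopycnic(rho_D, 0.10) - w_p_isopycnic(rho_D, 0.05)) / 0.05
print(slope, -(v_d - v_s) / (v_p - v_s))
```

Intersecting such a line with the measured binodal then yields the tie line end point for that phase.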
The tie lines determined by the density method are in excellent agreement with reported tie lines obtained with traditional methods for similar PEG–dextran systems at comparable temperatures. Below we show that the density method can be further validated by an independent method based on quantitative GPC measurements of the coexisting phases.

#### GPC Method

The density method is a relatively simple way to determine the tie lines of an ATPS. However, it relies on the assumption that the tie line end points coincide with the predetermined binodal, which is a good approximation for an ATPS with polymers of narrow dispersities. For PEG–dextran systems, whose two polymer species can be completely separated by GPC, the compositions of the coexisting phases can be directly quantified by GPC with a single concentration detector (Connemann et al., 1991; Zhao et al., 2016a,b). Below we give the details of this method.

To quantify the polymer concentrations within the two coexisting phases of an ATPS, the polymer solutions are typically diluted and their GPC chromatograms are recorded, with baseline separation of dextran from PEG on a differential refractive index (RI) detector (Zhao et al., 2016b). It can be seen from Figures 4A,B that, further away from the critical point, more dextran molecules accumulate in the dextran-rich phase, while more PEG molecules partition into the PEG-rich phase. At sufficient distance from the critical point, no dextran molecules are present in the PEG-rich phase, and PEG molecules are completely absent from the dextran-rich phase. The polymer compositions in the coexisting phases can be directly obtained from their peak areas, using the pre-established concentration dependences of the RI peak areas for dextran and PEG, respectively (Figure 4C). It is found that the tie line end points superpose onto the binodal curve, with the exception of the data for the PEG-rich phases in the vicinity of the critical point (Figure 4D).
This discrepancy is most probably due to molar mass fractionation of dextran between the coexisting phases (see section Molar Mass Fractionation).

FIGURE 4

Figure 4. GPC chromatograms of coexisting dextran-rich (A) and PEG-rich phases (B) at ε = 0.030 (black), 0.200 (red), 0.982 (green), and 2.087 (blue). The peak retention volumes of the native dextran and PEG are 16.06 mL (black dashed line) and 18.25 mL (red dashed line), respectively. (C) Dependence of the RI peak area ARI on the polymer concentration cinj of the solutions injected into the size-exclusion chromatography columns for dextran (squares) and PEG (circles). (D) The resulting phase diagram of the PEG–dextran–water system. In the phase diagram, the cloud point curve is shown as a solid curve. The compositions of the initial solutions for which size-exclusion chromatography measurements after phase separation were performed are indicated by black crosses. The end points of the respective tie lines (dashed lines) consist of red crosses indicating the compositions of the dextran-rich phases and green crosses indicating the compositions of the PEG-rich phases. The midpoints (blue circles) of the tie lines were extrapolated to the binodal to determine the critical point. Adapted with permission from Zhao et al. (2016b). Copyright (2016) Chem. J. Chinese Universities.

Interestingly, the tie lines established by the density and GPC methods agree well with each other. The density method requires density measurements of the phases together with a pre-established binodal, which makes it simple and convenient. The GPC method requires chromatography measurements of all phases and accurate calibration of the RI detector for both components. Although the GPC method is tedious and time-consuming, it does not depend on the binodal curve.
More importantly, the molar mass distribution and molar mass averages of each polymer species in the coexisting phases can be obtained by the GPC method (see section Molar Mass Fractionation). It should be noted, however, that the GPC method with RI as the concentration detector is only applicable to an ATPS whose components can be separated into two peaks by the GPC columns without polymer adsorption. Coupling to a laser light scattering detector gives additional information on the molar mass of the components. If the elution peaks of the two polymer components overlap, one must use two concentration detectors, for example an RI and an additional optical rotation detector, to quantify the compositions of the polymer mixtures (Edelman et al., 2003a,b).

### Molar Mass Fractionation

Since the dextran and PEG components in the coexisting phases are completely separated from each other by the GPC columns (Figures 4A,B), one can also determine the molar mass distributions of each component after calibrating the system either with narrow polymer standards or with a laser light scattering detector (Zhao et al., 2016a). We first take a look at the dextran component in the coexisting phases. Inspection of the GPC chromatograms (Figures 4A,B) indicates that the relative intensity of the elution peak of dextran in the PEG-rich phase is much lower than that in the coexisting dextran-rich phase. Additionally, the elution peak of dextran in the PEG-rich phase shifts toward higher retention volume, indicating a lower molar mass than in the dextran-rich phase. Further away from the critical point, the difference between the dextran elution peaks of the two coexisting phases increases. The evolution of the PEG elution peaks shows a different behavior.
Although the relative intensity of the PEG elution peak in the dextran-rich phase is lower than that in the PEG-rich phase, there is hardly any change in the retention volumes of the PEG components in the two coexisting phases. This indicates that the PEG components in the two coexisting phases have similar molar masses, albeit with less PEG distributed in the dextran-rich phase.

Quantitative values of the molar masses and polymer dispersities of dextran and PEG in the coexisting phases are obtained from the GPC measurements, and the results are shown in Figure 5. For the system explored here, it is found that the weight-average molar mass Mw of dextran in the dextran-rich phase is significantly larger than that in the coexisting PEG-rich phase (Figure 5A). As the polymer concentration increases, the Mw of dextran in the dextran-rich phase approaches the value of the original dextran with Mw = 380 kg/mol, while the Mw of dextran in the PEG-rich phase shows a continuous decrease, down to 82.2 kg/mol at ε = 0.73. This is because the dextran used had a broad molar mass distribution, characterized by a dispersity index Mw/Mn, the ratio between the weight-average molar mass Mw and the number-average molar mass Mn, of 2.19. With more of the high molar mass dextran partitioning into the dextran-rich phase, the molar mass of the dextran component in the PEG-rich phase decreases. This also leads to an Mw value larger than that of the native dextran in the dextran-rich phase, in agreement with the tiny shift of the dextran peak to lower retention volume (Figure 4A). However, when the initial polymer concentration is sufficiently high, no dextran is found in the PEG-rich phase, so that the Mw value of dextran in the dextran-rich phase equals that of the native dextran.
In contrast, PEG in the two coexisting phases has similar weight-average molar mass, close to the value of the original PEG with Mw = 8.45 kg/mol (Figure 5B), and this behavior is independent of the initial polymer concentration. This is due to the narrow dispersity of the employed PEG, with Mw/Mn = 1.11. In ATPSs with broad molar mass distributions for both polymers, molar mass fractionation of both components is observed (Edelman et al., 2003a,b).

FIGURE 5

Figure 5. Weight-average molar mass Mw of dextran (A) and PEG (B) in the dextran-rich (squares) and PEG-rich (circles) phases. The dashed lines indicate the average molar mass Mw = 380 kg/mol for dextran and Mw = 8.45 kg/mol for PEG, respectively. The values of Mw for dextran in the two phases, which were obtained from the calculated molar mass distribution of dextran, are shown as solid lines in (A). Reprinted from Zhao et al. (2016a), Copyright (2016), with permission from Elsevier.

With the compositions of the coexisting phases accurately measured, we can obtain the distribution coefficient fx(N) of each polymer component between the coexisting phases. According to the Flory-Huggins theory, the distribution coefficient, also called the degree of fractionation, is defined as (Flory, 1953):

$$f_x(N) = \frac{c_{x,\mathrm{poor}}(N)}{c_{x,\mathrm{rich}}(N)}$$

where $c_{x,\mathrm{poor}}(N)$ and $c_{x,\mathrm{rich}}(N)$ are the concentrations of component x (in our case, dextran or PEG) containing N monomers in the phases poor and rich in component x, respectively. Theory predicts an exponential decay of the degree of fractionation, $f_x(N) = \exp(-\sigma_x N)$, as a function of chain length N (Flory, 1953; Koningsveld et al., 2001), implying that the longer the x chain, the less it is distributed into the x-poor phase. The separation parameter σx represents the free energy change per monomer when a chain is transferred between the two liquid phases.

In Figure 6, the degree of fractionation fd(N) for dextran is plotted vs.
the chain length Nd at different distances ε from the critical point. An exponential dependence is observed over a certain range of chain lengths for all values of ε, although the data deviate slightly. However, the experimental results differ from the mean-field prediction in two distinct ways. First, the degree of fractionation starts to deviate from the exponential dependence at a certain chain length. Second, the value extrapolated to Nd = 0 does not reach the expected value of 1, implying that the mean-field theory is insufficient to quantitatively describe the degree of fractionation of dextran in ATPS. Such a discrepancy has been observed in previous experiments (Edelman et al., 2003a,b), as well as in computer simulations (van Heukelum et al., 2003). The data can be fitted by an empirical relation with two additional fitting parameters A and σd2, as shown in Figure 6.

FIGURE 6

Figure 6. Degree of fractionation fd(N) for dextran as a function of the degree of polymerization Nd of dextran for different values of the reduced polymer concentration $\epsilon \equiv c/c_{cr} - 1$, which is varied from 0.01 (top right) to 0.73 (bottom left). The lines are fits to the data by the empirical relation $f_d(N) = A\exp(-\sigma_d N - \sigma_{d2} N^{0.5})$. Reprinted from Zhao et al. (2016a), Copyright (2016), with permission from Elsevier.

For the current ATPS containing dextran chains with a large dispersity and near-monodisperse PEG chains, molar mass fractionation leads to a chain length dependent redistribution of dextran chains between the coexisting phases. High molar mass dextran chains are enriched in the dextran-rich phase and depleted from the PEG-rich phase, leading to a significant decrease of the Mw of dextran in the PEG-rich phase and a slight increase of that in the coexisting dextran-rich phase.
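A fit of the empirical relation from the Figure 6 caption, f_d(N) = A·exp(−σd·N − σd2·N^0.5), becomes linear in (log A, σd, σd2) after taking the logarithm, which is a robust way to fit strongly decaying data. A minimal sketch on synthetic data with hypothetical parameter values (not the measured fractionation data):

```python
import numpy as np

# Empirical fractionation relation (Figure 6): f_d(N) = A*exp(-s1*N - s2*sqrt(N)).
# Fitting log(f_d) keeps the problem linear in the parameters.
rng = np.random.default_rng(0)
N = np.linspace(50, 2000, 40)
A_true, s1_true, s2_true = 0.8, 2e-3, 5e-2     # illustrative values only
f = A_true * np.exp(-s1_true * N - s2_true * np.sqrt(N))
f *= np.exp(0.02 * rng.standard_normal(N.size))  # small multiplicative noise

# Design matrix for: log f = log A - s1*N - s2*sqrt(N)
X = np.column_stack([np.ones_like(N), -N, -np.sqrt(N)])
coef, *_ = np.linalg.lstsq(X, np.log(f), rcond=None)
A_fit, s1_fit, s2_fit = np.exp(coef[0]), coef[1], coef[2]
print(f"A = {A_fit:.2f}, sigma_d = {s1_fit:.2e}, sigma_d2 = {s2_fit:.2e}")
```

A direct nonlinear fit (e.g. `scipy.optimize.curve_fit`) gives equivalent results when supplied with reasonable starting values; the log-space fit simply avoids the need for them.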
This is the underlying origin of the mismatch between the binodal curve and the end points of the tie lines for the PEG-rich phase in the vicinity of the critical point. As a result, the compositions of the PEG-rich phase lie above the binodal (Figure 3B). It is also expected that the compositions of the corresponding dextran-rich phases lie slightly below the binodal, which is not observable in experiments. These results are in good agreement with previous studies of the effect of polymer dispersity on the tie lines (Kang and Sandler, 1988). Therefore, to obtain the tie line of such an ATPS by the density method close to the critical point (Liu et al., 2012), the composition of the dextran-rich phase can be determined from the intersection of the corresponding isopycnic line with the binodal. However, the composition of the PEG-rich phase must be estimated from the intersection of its isopycnic line with a straight line passing through the composition of the initial polymer solution and that of the dextran-rich phase.

## Interfacial Tension and Scaling Laws

The interfacial tension between the coexisting phases of ATPSs is on the order of 1–100 μN/m. It can be determined by measuring the equilibrium shape of the liquid-liquid interface under external forces, such as gravity or centrifugal force, as well as by methods based on the time evolution of the interface shape (Tromp, 2016).
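The relevant length scale for such shape-based measurements is the capillary length ℓ_c = √(Σ/(Δρ·g)), below which interfacial tension dominates over gravity. For the ultralow tensions of water-water interfaces this length is surprisingly large, as a quick estimate shows (the numbers below are illustrative, not measured values):

```python
import math

# Capillary length l_c = sqrt(Sigma / (delta_rho * g)): the length scale
# below which interfacial tension dominates gravity in shaping the interface.
def capillary_length(sigma, delta_rho, g=9.81):
    return math.sqrt(sigma / (delta_rho * g))

sigma = 1e-6       # N/m, ultralow water-water interfacial tension (assumed)
delta_rho = 10.0   # kg/m^3, density difference between the phases (assumed)
print(f"l_c = {capillary_length(sigma, delta_rho) * 1e6:.0f} micrometers")
# → l_c = 101 micrometers
```

For an air-water interface (Σ ≈ 72 mN/m, Δρ ≈ 1000 kg/m³) the same formula gives ℓ_c ≈ 2.7 mm; the thousand-fold smaller tension of the water-water interface shrinks ℓ_c to the 0.1 mm scale, which is why micrometer-sized droplets and drop-shape methods are well suited to these systems.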
Different techniques, such as the drop volume method (Mishima et al., 1998), drop retraction analysis (Ding et al., 2002), sessile and pendant drop shape analysis (Atefi et al., 2014), capillary length analysis (Vis et al., 2015), and the spinning drop method (Ryden and Albertsson, 1971; Liu et al., 2012), have been employed to measure the interfacial tension of water-water interfaces.

Here, we summarize some data for the interfacial tension Σpd between the coexisting liquid-liquid phases of the PEG–dextran system obtained with a spinning drop tensiometer (Liu et al., 2012) (see Figure 7A). In this broad concentration range, the interfacial tension increases by four orders of magnitude with increasing distance from the critical point, i.e., from 0.21 μN/m at reduced polymer concentration ε = 0.02 to 769 μN/m at ε = 3.82.

FIGURE 7

Figure 7. (A) Interfacial tension Σpd between coexisting dextran-rich and PEG-rich phases as a function of the reduced polymer concentration ε = c/ccr – 1. The solid lines are fits to the data with an exponent of 1.67 ± 0.10 for 0.02 < ε < 0.12 and 1.50 ± 0.01 for 0.2 < ε < 3. The dashed line shows the expected asymptotic behavior with μ = 1.26. (B) Composition difference Δc (solid circles) and density difference Δρ (open squares) of the coexisting phases as functions of the reduced polymer concentration ε. In the concentration range 0.02 < ε < 0.12, the fits to the data give for the scaling exponent β a value of 0.337 ± 0.018 as estimated from the density difference dependence or 0.351 ± 0.018 as estimated from the composition difference dependence, while in the range 0.2 < ε < 3 we obtain β = 0.491 ± 0.014 from the density difference dependence or β = 0.503 ± 0.018 from the composition difference dependence. Reprinted with permission from Liu et al. (2012).
Copyright (2012) American Chemical Society.

A quantitative description of the phase separation of polymer solutions in the vicinity of the critical point remains an interesting problem in polymer physics. The phase separation of a polymer solution is usually studied at temperatures T close to its critical demixing temperature Tc. Various physical quantities, including the susceptibility χ, the correlation length ξ, the order parameters, and the interfacial tension Σ, have been studied in detail for different polymer solutions (Sanchez, 1989; Widom, 1993). Several theoretical models, such as mean-field theory, the Ising model, and crossover theory, have been developed to explain the observed scaling behavior of these properties, depending on the proximity to the critical point as characterized by the reduced temperature τ = |1 – T/Tc|. A good example is provided by accurate light scattering measurements of polymer solutions close to Tc (Melnichenko et al., 1997; Anisimov et al., 2002), where the scaling exponent of the susceptibility χ ~ τ−γ changed from γ = 1.24 to 1, and that of the correlation length ξ ~ τ−ν decreased from ν = 0.63 to 1/2. The crossover from Ising to mean-field behavior occurs at a correlation length on the order of the chain size of the polymers. Measurements of the coexistence curves of polymer solutions also revealed such a crossover for the composition difference Δφ ~ τβ, with the exponent β changing from 0.326 to 1/2 (Dobashi et al., 1980). The crossover of the interfacial tension Σ ~ τμ in the exponent μ is theoretically predicted, but its experimental verification is still lacking.
Instead, scaling exponents μ ranging from 1.17 to 1.60 have been reported in the literature for a few polymer systems close to Tc (Shinozaki et al., 1982; Heinrich and Wolf, 1992; Widom, 1993), where the crossover from the Ising value 1.26 to the mean-field value 3/2 was not observed.

One can use a similar approach for the ATPS of dextran and PEG, by scaling analysis of the interfacial tension and the order parameters vs. the reduced concentration ε (Liu et al., 2012). The scaling exponent μ of the interfacial tension $\Sigma_{\mathrm{pd}} \sim \epsilon^{\mu}$ shows a crossover behavior depending on the proximity to the critical point (Figure 7A). Further away from the critical point, the obtained value μ = 1.50 ± 0.01 is in excellent agreement with mean-field theory. However, closer to this point, the increased value of 1.67 ± 0.10 deviates significantly from the Ising value 1.26. In contrast, as the critical point is approached, the scaling exponent β of the order parameters does exhibit the expected crossover from the mean-field value β = 1/2 to the Ising value β = 0.326 (see Figure 7B).

Such a crossover from mean-field to Ising behavior is further supported by the normalized coexistence curve of the studied PEG–dextran system, which has a shape similar to the prediction of the crossover theory based on a near-tricritical-point (theta point) Landau expansion renormalized by fluctuations (Anisimov et al., 2000). As shown in Figure 8, the reduced polymer concentration approaches the Ising limit with scaling exponent β = 0.326 in the vicinity of the critical point, where the correlation length of the concentration fluctuations is larger than the polymer molecular size (Liu et al., 2012). However, the data at the highest polymer concentrations approach the tricritical mean-field limit with β = 1, indicating that the polymer molecular size is larger than the correlation length.
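Scaling exponents such as μ and β are extracted as slopes of log-log fits over a chosen range of ε. A minimal sketch on synthetic data with an assumed exponent (not the measured tensions):

```python
import numpy as np

# Extract a scaling exponent mu from Sigma ~ eps^mu by a log-log linear fit.
# Synthetic data generated with an assumed mu = 1.5 (illustrative only).
rng = np.random.default_rng(1)
eps = np.logspace(np.log10(0.2), np.log10(3.0), 15)
sigma = 50e-6 * eps**1.5 * (1 + 0.03 * rng.standard_normal(eps.size))  # N/m

mu, log_prefactor = np.polyfit(np.log(eps), np.log(sigma), 1)
print(f"fitted exponent mu = {mu:.2f}")
```

In practice the fit range matters: the crossover discussed above means a single power law only holds within each regime, so the fit is repeated separately for the near-critical and far-from-critical windows of ε.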
Therefore, the crossover theory from critical Ising behavior to the tricriticality of polymer solutions can also be applied to the PEG–dextran system in our study.

FIGURE 8

Figure 8. Logarithmic scaling plots of phase-coexistence curves for (A) dextran and (B) PEG. The solid lines are guides to the eye with scaling exponents of 0.326 for the Ising model limit and 1 for the tricritical limit and a constant of 1 for the solvent-rich phase. Reprinted with permission from Liu et al. (2012). Copyright (2012) American Chemical Society.

The discrepancy in the scaling exponent of the interfacial tension close to the critical point might arise from the molar mass fractionation of dextran. As shown in Figure 5A, in the vicinity of the critical point, the Mw of dextran in the dextran-rich phase is larger than that of the original dextran, leading to a reduction of the interfacial tension. Further away from the critical point, the Mw of dextran in the dextran-rich phase is similar to that of the original dextran, and there is no influence on the interfacial tension. Therefore, the scaling exponent μ is unaffected in the mean-field region, but an increased value of 1.67 is observed in the Ising limit region.

## Wetting of Membranes by ATPS

The phase separation process and its consequences for membranes in contact with the two phases can be directly observed when aqueous polymer solutions of dextran and PEG in the one-phase region are encapsulated within GUVs (Li et al., 2011; Liu et al., 2016). In these studies, the polymer solutions undergo phase separation when the system is brought into the two-phase region via osmotic deflation, i.e., by exposing the vesicles to a hypertonic medium (note that the lipid membrane is permeable to water but not to the polymers, which become concentrated as water permeates out).
The deflation not only leads to a reduction of the vesicle volume and the formation of two immiscible aqueous phases within the vesicles, but also generates excess membrane area. This results in a variety of interesting changes in the vesicle morphology, such as vesicle budding (Long et al., 2008; Li et al., 2012), wetting transitions (Li et al., 2008; Kusumaatmaja et al., 2009), and complete budding of the vesicles (Andes-Koback and Keating, 2011), as schematically shown in Figure 9a. The overall vesicle shape can be observed from confocal microscopy cross sections of the vesicles, as well as from side-view phase-contrast images (Figures 9b–e) on a horizontally aligned microscope (Li et al., 2011; Liu et al., 2016).

FIGURE 9

Figure 9. Response of giant vesicles encapsulating ATPS when exposed to osmotic deflation. (a) Schematic illustration of the steps upon deflation: phase separation within the vesicle, wetting transition, vesicle budding, and fission of the enclosed phases into two membrane-wrapped droplets. (b–e) Side-view phase contrast images of a vesicle sitting on a glass substrate. The vesicle contains the PEG–dextran ATPS. After phase separation (b,c), the interior solution consists of two liquid droplets consisting of PEG-rich and dextran-rich phases, respectively. Further deflation of the vesicle causes the dextran-rich droplet to bud out as shown in (d,e). The numbers on the snapshots indicate the osmolarity ratio between the external medium and the initial internal polymer solution. In the sketch in (f), the three effective contact angles as observed with optical microscopy are indicated, as well as the two membrane tensions and the interfacial tension Σpd. The contact line is indicated by the circled dot. The intrinsic contact angle θin, which characterizes the wetting properties of the membrane by the PEG-rich phase at the nanometre scale, is sketched in (g).
Reproduced from Dimova and Lipowsky (2012) with permission from the Royal Society of Chemistry.

Depending on the phase state of the encapsulated ATPS and the interactions of the aqueous phases with the vesicle membrane, the liquid droplets may exhibit zero or non-zero contact angles, corresponding to complete or partial wetting, respectively. In the vicinity of the critical point, the membrane is completely wetted by the PEG-rich phase, while further away from this point, both phases partially wet the membrane. For GUVs encapsulating the PEG–dextran mixture, a complete-to-partial wetting transition has been observed for a number of lipid compositions (Li et al., 2008, 2011; Kusumaatmaja et al., 2009; Liu et al., 2016).

When the vesicle membrane becomes partially wetted by the aqueous phases, it is separated into two different segments: one in contact with the PEG-rich phase, and the other with the dextran-rich phase. These two membrane segments and the pd interface, i.e., the interface between the PEG-rich and dextran-rich phases, have spherical cap morphologies. From the geometry of the vesicle, three effective contact angles θd, θp, and θe can then be obtained, with θp + θd + θe = 2π, as shown in Figure 9f. The force balance of the tensions (of the two membrane segments and the pd interface) at the three-phase contact line implies that the three tensions form a triangle, which leads to the relations (Kusumaatmaja et al., 2009; Li et al., 2011; Lipowsky, 2018):

$$\frac{\hat{\Sigma}_{\mathrm{pe}}}{\sin\theta_d} = \frac{\hat{\Sigma}_{\mathrm{de}}}{\sin\theta_p} = \frac{\Sigma_{\mathrm{pd}}}{\sin\theta_e}$$

between these tensions and the effective contact angles. Here, $\hat{\Sigma}_{\mathrm{pe}}$ is the tension of the pe membrane segment in contact with the PEG-rich phase, and $\hat{\Sigma}_{\mathrm{de}}$ is the tension of the de membrane segment in contact with the dextran-rich phase; e denotes the external phase outside the vesicle.
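Because the three tensions close a triangle, the law of sines fixes both membrane tensions once Σpd and the contact angles are known. A minimal sketch with illustrative (not measured) numbers:

```python
import math

def membrane_tensions(sigma_pd, theta_d, theta_p, theta_e):
    """Membrane segment tensions from the tension triangle (law of sines)
    at the three-phase contact line. Angles in radians; they must satisfy
    theta_p + theta_d + theta_e = 2*pi."""
    assert abs(theta_p + theta_d + theta_e - 2 * math.pi) < 1e-9
    sigma_pe = sigma_pd * math.sin(theta_d) / math.sin(theta_e)
    sigma_de = sigma_pd * math.sin(theta_p) / math.sin(theta_e)
    return sigma_pe, sigma_de

# Illustrative values: Sigma_pd = 10 uN/m, contact angles summing to 360 deg.
theta_d, theta_p, theta_e = map(math.radians, (100, 150, 110))
pe, de = membrane_tensions(10e-6, theta_d, theta_p, theta_e)
print(f"Sigma_pe = {pe*1e6:.2f} uN/m, Sigma_de = {de*1e6:.2f} uN/m")
```

Since the effective contact angles can be read off directly from the microscopy images, this gives access to membrane tensions far below what micropipette aspiration can resolve.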
The membrane tensions can then be calculated from the interfacial tension Σpd, as measured in section Interfacial Tension and Scaling Laws, and the effective contact angles, which are obtained from the microscopy images.

When viewed with optical resolution, the shape of budded vesicles as in Figures 9d,e exhibits a kink at the three-phase contact line. However, the bending energy of such a membrane kink would become infinite if it persisted to smaller length scales (Kusumaatmaja et al., 2009). Therefore, the membrane in the vicinity of the contact line should be smoothly curved when viewed with super-resolution microscopy (Zhao et al., 2018), which reveals the existence of an intrinsic contact angle θin, as shown schematically in Figure 9g (Kusumaatmaja et al., 2009). If the two membrane segments have identical curvature-elastic properties, the force balance along the three-phase contact line gives (Lipowsky, 2018)

$$\cos\theta_{\mathrm{in}} = \frac{W_{\mathrm{pe}} - W_{\mathrm{de}}}{\Sigma_{\mathrm{pd}}}$$

Here Wde and Wpe are the adhesive strengths of the two membrane segments. Therefore, the intrinsic contact angle θin is related to three material parameters: the adhesive strengths Wde and Wpe and the interfacial tension of the liquid-liquid interface (Lipowsky, 2018). However, if the two membrane segments have the same elastic properties but different spontaneous curvatures, additional terms resulting from the different curvature-elastic properties emerge, and the truncated force balance relation as given by Equation (7) may lead to unreliable estimates for the intrinsic contact angle (Lipowsky, 2018).

## Formation of Membrane Nanotubes in Vesicles Encapsulating ATPS

Osmotic deflation of giant vesicles enclosing PEG–dextran solutions can lead to spectacular shape changes, as evidenced by the formation of many membrane nanotubes protruding into the interior of the vesicle.
Because the membrane is not pulled by an external force, the driving force for the spontaneous tubulation of the vesicles is provided by a membrane tension generated by a substantial spontaneous curvature, which is much larger than the curvature of the GUV membranes. The spontaneous curvature is the preferred curvature the membrane would adopt when left at rest. It can be modulated by various factors such as leaflet compositional asymmetry and the adsorption and depletion of molecular species and ions (Lipowsky, 2013; Bassereau et al., 2018). In the system considered here, the spontaneous curvature arising from the polymer-membrane interactions can be estimated using three different and independent methods of image analysis of the vesicle morphologies (Liu et al., 2016). A combination of experiment, theoretical analysis, and computer simulation reveals the molecular mechanism for the membrane spontaneous curvature generated in the system of giant vesicles encapsulating ATPS.

### Three Patterns of Flexible Nanotubes

Upon deflation of vesicles enclosing ATPS, three types of nanotube patterns have been observed, corresponding to three different vesicle shapes, as schematically shown in Figure 10 (Liu et al., 2016). These different morphologies can be observed upon osmotic deflation of the vesicles. In the example given in Figure 10, the membrane is composed of three lipid components: dioleoylphosphatidylcholine (DOPC), dipalmitoylphosphatidylcholine (DPPC), and cholesterol (Chol), which can exhibit different phase states depending on the exact composition. Bilayer phases such as the liquid-disordered (Ld) and liquid-ordered (Lo) phases and their coexistence can be directly observed as fluid domains in GUVs (Lipowsky and Dimova, 2003) using fluorescence microscopy (Dietrich et al., 2001).
In the case of GUVs enclosing ATPS (Liu et al., 2016), we employed two different membrane compositions, corresponding to an Ld membrane with lipid composition DOPC:DPPC:Chol = 64:15:21 (mole fractions) and an Lo one with lipid composition DOPC:DPPC:Chol = 13:44:43 (see Figures 11, 12), respectively. These two membranes are both in the single-phase region and have different elastic properties, with bending rigidities κLd = 0.82 × 10−19 J for the Ld membranes and κLo = 3.69 × 10−19 J for the Lo membranes (Heinrich et al., 2010).

FIGURE 10

Figure 10. Three nanotube patterns (VM-A, VM-B, and VM-C) corresponding to the distinct vesicle morphologies (VM) observed along the deflation path: Schematic views of horizontal xy-scans (top row) and of vertical xz-scans (bottom row) across the deflated vesicles. In all cases, the tubes are filled with external medium (white); the membrane is shown in red. For the VM-A morphology, the interior polymer solution is uniform (green), whereas it is phase-separated (blue and yellow) for the morphologies VM-B and VM-C, with complete and partial wetting, respectively, of the membrane by the PEG-rich aqueous phase (yellow). For the VM-B morphology, the nanotubes explore the whole PEG-rich (yellow) droplet but stay away from the dextran-rich one (blue). For the VM-C morphology, the nanotubes adhere to the interface between the two aqueous droplets, forming a thin and crowded layer over this interface. It is expected that in the VM-A and VM-B morphologies these nanotubes are necklace-like, consisting of a number of small spheres connected by narrow or closed membrane necks, while in the VM-C morphology cylindrical tubes with a uniform diameter coexist with the necklace-like ones. Reprinted with permission from Liu et al. (2016). Copyright (2016) American Chemical Society.

FIGURE 11

Figure 11.
Nanotube patterns within Ld-phase vesicles as observed for the VM-B and VM-C morphologies corresponding to complete and partial wetting of the membranes. (a) Disordered pattern corresponding to a confocal xy-scan of the VM-B morphology. Because the Ld membrane is completely wetted by the PEG-rich phase, the nanotubes explore the whole PEG-rich droplet but stay away from the dextran-rich phase located below the imaging plane. (b) A layer of densely packed tubes as visible in an xy-scan of the VM-C morphology. As a result of partial wetting, the nanotubes now adhere to the pd interface between the two aqueous droplets and form a thin layer in which crowding leads to short-range orientational order of the tubes. Note that the tube layer is only partially visible because the pd interface is curved into a spherical cap. In both (a,b), the diameter of the tubes is below the diffraction limit, but the tubes are theoretically predicted to have necklace-like and cylindrical shapes in panels (a,b), respectively. Reprinted with permission from Liu et al. (2016). Copyright (2016) American Chemical Society.

FIGURE 12

Figure 12. Necklace-cylinder tube coexistence for giant vesicles with Lo membranes: (a) confocal xz-scan; (b) confocal xy-scan corresponding to the dashed line in panel (a); (c) superposition of 6 confocal xy-scans located in the dotted rectangle in panel (a). This projection image reveals the coexistence of several long cylindrical tubes and several short necklace-like tubes. All scale bars are 10 μm. (d) Fluorescence intensity along the solid white line 1 in panel (b) perpendicular to the GUV contour and along the dotted and dashed white lines 2 and 3 in panel (c) across a cylindrical tube. The quantity Δx is the coordinate perpendicular to the GUV contour or membrane tube. The intensity profiles can be well fitted by Gaussian distributions with a half-peak width of 0.35 ± 0.05 μm.
The peak-to-peak separations for lines 2 and 3 lead to estimated tube diameters of 2Rcy = 0.58 and 0.54 μm, respectively. Reprinted with permission from Liu et al. (2016). Copyright (2016) American Chemical Society.

To obtain the observed morphologies, we prepare spherical vesicles that enclose a homogeneous solution of the PEG–dextran mixture. Deflation of these vesicles is then induced by gradually exchanging the exterior solution for a hypertonic one containing fixed concentrations of the two polymer components and an increasing amount of sucrose, up to 15.6 mM. In this low concentration regime, the effect of sucrose on the bending rigidity and spontaneous curvature of the membranes can be neglected (Döbereiner et al., 1999; Vitkova et al., 2006; Lipowsky, 2013; Dimova, 2014). For more details of the experimental procedure, the reader is referred to the original article (Liu et al., 2016). Upon small deflation, the interior polymer solution still remains a uniform aqueous phase with c < ccr (see VM-A morphology in Figure 10), but the area needed to enwrap the (reduced) volume of the vesicle is now in excess, which results in the formation of tubes (the excess area is stored in them). Subsequent deflation steps with c > ccr result in phase separation of the interior solution into two aqueous phases, a lighter PEG-rich and a heavier dextran-rich phase, both confined by the vesicle as liquid droplets. When the membrane is completely wetted by the PEG-rich droplet, the dewetted dextran-rich droplet is surrounded by the PEG-rich phase and has no contact with the membrane, which defines the VM-B morphology of the vesicles (Figure 10). The dextran-rich droplet sinks to the bottom of the vesicle because its density is always larger than that of the coexisting PEG-rich phase (Figure 3A). Upon further deflation, both aqueous phases are in contact with the membrane, indicating a partial-wetting state of the two aqueous phases.
This is defined as the VM-C morphology of the vesicles (Figure 10). The two membrane segments and the pd interface form non-zero contact angles (see Figure 9f). It is found that the complete-to-partial wetting transitions are located between different deflation steps for the Ld and Lo membranes, reflecting different wetting properties of the ATPS on these membranes.

Due to the different wetting properties of the aqueous phases on the membranes, different nanotube patterns formed in the VM-B and VM-C morphologies are observed by confocal microscopy (see Figure 11). For the complete wetting morphology VM-B, the nanotubes explore the interior of the whole PEG-rich droplet and undergo strong thermally excited undulations. The length of the individual nanotubes can be estimated from stacks of three-dimensional scans of the vesicles and is on the order of 20 μm for Ld vesicles. For the partial wetting morphology VM-C, the nanotubes adhere to the pd interface between the two liquid droplets, where one can immediately see the long tube segments in a single scan. The local adhesion of the nanotubes to the liquid-liquid interface is a reflection of the complete-to-partial wetting transition.

These nanotubes can be either necklace-like, consisting of a number of small spheres connected by narrow or closed membrane necks, or cylindrical, with a uniform diameter along the nanotube (Figure 10). Theoretical investigation of the nucleation and growth of the tubes indicated that these membrane nanotubes prefer a necklace-like shape at short lengths but a cylindrical one above a critical length, which can be understood by minimization of the membrane bending energy (Liu et al., 2016). The necklace-cylinder transformation occurs at a critical tube length of about three times the mother vesicle radius, and the tubes can reshape themselves via a series of intermediate unduloids (Liu et al., 2016).
For the partial wetting morphology VM-C, due to the additional contribution from the adhesion free energy of the tubes at the pd interface, the critical length for the necklace-cylinder transformation depends on the material parameters and can become as low as a few micrometers. Therefore, the Ld tubes in the VM-A and VM-B morphologies are predicted to be necklace-like, but a coexistence of necklace-like and cylindrical shapes is expected for Ld tubes in the VM-C morphology. In contrast, the tubes of the stiffer Lo membranes are so thick that their shapes can be directly observed in the confocal images. Necklace-like tubes are observed for all three morphologies of the Lo vesicles. Surprisingly, the confocal images in Figure 12 revealed the coexistence of several long cylindrical tubes and a few short necklace-like tubes at the pd interface. The length of these cylindrical tubes is above the critical length for the necklace-cylinder transformation.

### Spontaneous Curvatures of Vesicles Enclosing ATPS

Several approaches for deducing the membrane spontaneous curvature have been developed in Li et al. (2011), Lipowsky (2013, 2014), Liu et al. (2016), Bhatia et al. (2018), and Dasgupta et al. (2018), some of which have been reviewed in section Measuring the Membrane Spontaneous Curvature of Bassereau et al. (2018). Stable membrane nanotubes were first observed for vesicles encapsulating ATPS in Li et al. (2011), and the theoretical analysis of the corresponding GUV shapes revealed the presence of a negative spontaneous curvature of about −1/(240 nm). We then developed three different and independent methods to determine this curvature based on image analysis of tubulated vesicles made of both Ld and Lo membranes (Liu et al., 2016). As shown in Figure 13, all these methods led to consistent values of the spontaneous curvatures for both Ld and Lo vesicles of all three morphologies.

FIGURE 13

Figure 13.
Variation of the deduced spontaneous curvature of Ld (red) and Lo (green) membranes with polymer concentration, modulated by osmotic deflation of the vesicles. The vertical dashed lines correspond to the critical concentration ccr. The data were obtained by direct shape analysis of the nanotubes (green stars), area partitioning analysis as given by Equation (8) (open circles), and force balance analysis described by Equation (9) (open squares). The horizontal dotted line corresponds to the optical resolution limit of 1/(300 nm). Reprinted with permission from Liu et al. (2016). Copyright (2016) American Chemical Society.

The second method is based on the membrane area partitioning between the nanotubes and the mother vesicle. The shapes of the nanotubes of Ld vesicles cannot be resolved by confocal microscopy because the tube diameter is below the optical resolution. But we can calculate the spontaneous curvature via two measurable geometric quantities: the area A and the length L of all tubes. This approach is based on the fact that the excess area generated by deflation is stored in the nanotubes. Upon deflation, the vesicle's apparent area Aapp is less than the initial vesicle area A0; both areas can be obtained from the vesicle shape, and their difference (A0 − Aapp) is the missing area stored in the tubes. The total tube length L can be measured from 3D confocal scans of the vesicle. The spontaneous curvature of the membrane can then be estimated via (Liu et al., 2016):

$$m = -\,(2-\Lambda)\,\frac{\pi L}{A_0 - A_{\mathrm{app}}}$$

Here Λ is the fraction of the total tube length in cylindrical shape, while the rest is necklace-like.

For the short necklace-like tubes observed for all Lo vesicles in the VM-A and VM-B morphologies, Λ = 0 is obtained. For Lo tubes in the VM-C morphology, with a coexistence of cylindrical and necklace-like tubes, a non-zero Λ-value is observed. However, for the Ld tubes with thickness below the optical resolution, the fraction Λ cannot be estimated from the confocal images.
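The area-partitioning estimate follows from the tube geometry: necklace tubes (spheres of radius 1/|m|) store an area 2π/|m| per unit length, cylinders (radius 1/(2|m|)) store π/|m| per unit length, which combine to |m| = (2 − Λ)πL/(A0 − Aapp). A minimal numerical sketch under this assumed relation, with illustrative (not measured) input values:

```python
import math

def spontaneous_curvature_area(A0, A_app, L, Lambda=0.0):
    """Spontaneous curvature m (negative, tubes pointing inward) from the
    excess membrane area A0 - A_app stored in tubes of total length L, of
    which a fraction Lambda is cylindrical and 1 - Lambda is necklace-like."""
    return -(2 - Lambda) * math.pi * L / (A0 - A_app)

# Illustrative numbers: 10% of a 20 um vesicle's area stored in 250 um of
# necklace-like tubes (Lambda = 0).
A0 = 4 * math.pi * (20e-6) ** 2
m = spontaneous_curvature_area(A0, 0.9 * A0, 250e-6, Lambda=0.0)
print(f"m = {m * 1e-6:.1f} per micrometer")
# → m = -3.1 per micrometer
```

Because Λ only varies the prefactor between 1 and 2, even a completely unknown cylinder fraction changes |m| by at most a factor of two, which is why the method still brackets the curvature for the unresolvable Ld tubes.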
These flexible Ld tubes in the VM-A and VM-B morphologies are predicted to be necklace-like, which leads to Λ = 0. For Ld tubes in the VM-C morphology, a coexistence of cylindrical and necklace-like tubes is expected, but one cannot estimate the fraction Λ. In this case, we have to take all possible Λ-values into account. The spontaneous curvatures of the Ld membranes are then estimated using Equation (8) with Λ = 0 for the VM-A and VM-B morphologies and 0 ≤ Λ ≤ 1 for the VM-C morphology. The m-values obtained by area partitioning analysis for the Lo and Ld membranes are shown in Figure 13 as green and red circles, respectively. The accuracy of this method is ±15%, resulting mainly from the uncertainty of the measured tube length L. It should be noted that when the tubes are too crowded at the pd interface, as for the highest polymer concentrations of the VM-C morphology, it becomes rather difficult to estimate the total tube length, and this method is then not applicable.

For the VM-C morphologies, where the two membrane segments and the pd interface form non-zero contact angles due to partial wetting of the aqueous phases, the membrane spontaneous curvature can be estimated via a third method based on the force balance of the tensions at the three-phase contact line. Since the tubes always protrude into the PEG-rich phase and adhere to the liquid-liquid interface for the VM-C morphology, one can estimate the spontaneous curvature via (Lipowsky, 2013, 2014; Liu et al., 2016):

$$m = -\sqrt{\frac{\Sigma_{\mathrm{pd}}\,\sin\theta_d}{2\kappa\,\sin\theta_e}}$$

Here κ is the bending rigidity of the membrane. One can calculate the m-values for both Lo and Ld membranes, with the separately measured interfacial tension Σpd, the effective contact angles θd and θe, and the bending rigidities κLo and κLd (Heinrich et al., 2010). The obtained results are shown in Figure 13 as green and red squares, respectively.
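Assuming the force-balance relation |m| = √(Σpd·sin θd / (2κ·sin θe)), the curvature follows directly from measurable quantities. A minimal sketch; the bending rigidity is the κLd value quoted earlier, while the tension and angles are illustrative, not measured:

```python
import math

def spontaneous_curvature_force(sigma_pd, theta_d, theta_e, kappa):
    """Spontaneous curvature m (negative, tubes pointing inward) from the
    force balance at the contact line, assuming
    |m| = sqrt(Sigma_pd*sin(theta_d) / (2*kappa*sin(theta_e))).
    Angles in radians, SI units throughout."""
    return -math.sqrt(sigma_pd * math.sin(theta_d) / (2 * kappa * math.sin(theta_e)))

kappa_Ld = 0.82e-19   # J, Ld membrane (Heinrich et al., 2010)
sigma_pd = 1e-6       # N/m, illustrative tension near the critical point
theta_d, theta_e = math.radians(100), math.radians(110)  # illustrative angles
m = spontaneous_curvature_force(sigma_pd, theta_d, theta_e, kappa_Ld)
print(f"m = {m * 1e-6:.1f} per micrometer")
# → m = -2.5 per micrometer
```

Note the inverse dependence on √κ: for the same tension and angles, the stiffer Lo membrane yields a proportionally smaller |m|, consistent with the measured curvature and rigidity ratios discussed below.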
It is obvious that all three modes of image analysis led to consistent values for the spontaneous curvatures of these membranes.\n\nIt should be noted that the spontaneous curvatures of the two membranes were found to be almost constant, with mLd ≅ −8 μm⁻¹ and mLo ≅ −1.7 μm⁻¹, over the range of studied polymer concentrations. Their spontaneous curvature ratio mLd/mLo ≅ 4.7 is nearly identical to their bending rigidity ratio κLo/κLd ≅ 4.5. The observed inverse proportionality between the spontaneous curvature and the bending rigidity is in accord with the generation of these curvatures by adsorption, as shown in the next section.\n\n### Molecular Mechanism of Curvature Generation in Vesicles Enclosing ATPS\n\nBecause the formation of nanotubes in the GUVs was observed only in the presence of polymers, the spontaneous curvature of the vesicle membranes must be generated by the interactions between the membrane and the encapsulated polymers. Depending on whether their interaction with the membrane is effectively attractive or repulsive, polymer molecules form either adsorption or depletion layers on the membrane, which make the lipid bilayer bulge toward the solution with the higher concentration of an adsorbing polymer or the lower concentration of a depleted polymer.\n\nIn all three vesicle morphologies shown in Figure 10, the concentration of PEG in the interior solution is always larger than that in the exterior solution. In contrast, the interior dextran concentration is larger than the exterior one for the VM-A morphology but smaller for the VM-B and VM-C morphologies. At the same time, all deflation steps led to the formation of tubes protruding into the interior of the vesicles, i.e., to a negative spontaneous curvature. Therefore, the observations are consistent with the theoretical prediction (Breidenich et al., 2005) only if the spontaneous curvature is generated by adsorption of PEG onto the membrane. 
This conclusion was supported by control experiments with both Ld and Lo vesicles enclosing pure PEG solution without dextran. Deflation of these vesicles led to nanotubes protruding into the interior of the vesicles, where the PEG concentration is higher.\n\nTo further elucidate the conformations of the PEG chains adsorbed on the membranes and the role of PEG-membrane interactions in the curvature generation, we performed atomistic molecular dynamics simulations on the same hybrid lipid-polymer systems as in the experiments (Liu et al., 2016). Typical conformations of PEG molecules adsorbed onto the Ld and Lo membranes are shown in Figures 14A,B. The simulations indicate that the PEG chains are only weakly bound to the membranes, with long loops dangling between short adsorption segments. The PEG chain is often observed to bind to the membrane via hydrogen bonds formed between its two terminal OH groups and the head groups of the lipids. Less frequently, a few contacts form between the PEG backbone and the membranes. The affinity of the PEG molecules for the membranes is further quantified by the potentials of mean force shown in Figures 14C,D. These indicate that the studied PEG chains have the same binding affinity for the Ld and Lo membranes, with a relatively small binding energy of about 4 kJ/mol or 1.6 kBT per PEG molecule. This is consistent with the experimental result that the spontaneous curvature ratio mLd/mLo equals the bending rigidity ratio κLo/κLd.\n\nFIGURE 14\n\nFigure 14. Typical conformation and potential of mean force for adsorbed PEG molecules. (A,B) Simulation snapshots of a PEG molecule adsorbed onto the Ld and Lo bilayers. The color code for the lipids is blue for DOPC, orange for DPPC, and red for cholesterol. The PEG molecules consist of 180 monomers, corresponding to the average molecular weight used in the experiments. Each lipid membrane is immersed in about 27,000 water molecules (not shown). 
(C,D) Potential of mean force (PMF) for Ld and Lo membranes as a function of the separation z between the polymer's center-of-mass and the bilayer midplane. The potential wells are relatively broad, with a width of about 4 nm, because the polymer end groups can adsorb even for relatively large z-values. The binding free energy of a single PEG chain is about 4 kJ/mol or 1.6 kBT for both types of membranes. Reprinted with permission from Liu et al. (2016). Copyright (2016) American Chemical Society.\n\n## Conclusions\n\nIn summary, we discussed the model system of GUVs encapsulating ATPS, emphasizing aspects of both polymer physics and membrane biophysics and highlighting recent results from our groups.\n\nWe illustrated how the phase diagram for ATPS of dextran and PEG can be constructed by cloud titration, and presented methods based on density and GPC measurements of the coexisting phases. The ultralow interfacial tension between the coexisting phases was studied over a broad polymer concentration range above the critical point. The scaling exponent of the interfacial tension with the reduced polymer concentration was found to be 1.67 in the vicinity of the critical point, which disagrees with the value 1.26 expected for the Ising model. This discrepancy arises from the molar mass fractionation of dextran during phase separation.\n\nWhen these ATPS are encapsulated in giant vesicles, the membranes may be completely or partially wetted by the two aqueous phases, depending on the lipid and polymer composition. A complete-to-partial wetting transition of the ATPS is observed upon osmotic deflation of the vesicles. The associated volume reduction generates excess membrane area, which folds into many membrane nanotubes protruding into the interior vesicle compartment, revealing a substantial asymmetry and a negative spontaneous curvature of the membranes. Quantitative estimates of the spontaneous curvature have been obtained in Liu et al. 
(2016) by three different and independent methods of image analysis. The spontaneous curvature is generated by the weak adsorption of PEG onto the lipid membranes, with a binding affinity of about 1.6 kBT per PEG molecule for both liquid-ordered and liquid-disordered membranes, as revealed by molecular dynamics simulations.\n\nMembrane nanotubes are also observed in living cells, for example in the Golgi apparatus and the smooth endoplasmic reticulum. However, the underlying mechanism of tube formation in cells remains to be elucidated. Cellular membranes are often exposed to asymmetric aqueous environments containing a large amount of proteins, which play a central role in the tubulation process. The model system of GUVs encapsulating ATPS provides a controllable platform for understanding the remodeling of membranes in living cells. It would be interesting to include proteins in the GUV/ATPS system to mimic the cellular behavior more closely.\n\n## Author Contributions\n\nAll authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.\n\n## Funding\n\nThis work was funded by the Partner Group Program of the Max Planck Society and the Chinese Academy of Sciences, and the National Natural Science Foundation of China (21774125).\n\n## Conflict of Interest Statement\n\nThe authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.\n\nThe reviewer PS declared a past co-authorship with one of the authors RD to the handling editor.\n\n## References\n\nAlbertsson, P. Å. (1986). Partition of Cell Particles and Macromolecules: Separation and Purification of Biomolecules, Cell Organelles, Membranes, and Cells in Aqueous Polymer Two-Phase Systems and Their Use in Biochemical Analysis and Biotechnology. New York, NY: Wiley.\n\nAndes-Koback, M., and Keating, C. D. (2011). 
Complete budding and asymmetric division of primitive model cells to produce daughter vesicles with different interior and membrane compositions. J. Am. Chem. Soc. 133, 9545–9555. doi: 10.1021/ja202406v\n\nAnisimov, M. A., Agayan, V. A., and Gorodetskii, E. E. (2000). Scaling and crossover to tricriticality in polymer solutions. JETP Lett. 72, 578–582. doi: 10.1134/1.1348485\n\nAnisimov, M. A., Kostko, A. F., and Sengers, J. V. (2002). Competition of mesoscales and crossover to tricriticality in polymer solutions. Phys. Rev. E 65, 051805. doi: 10.1103/PhysRevE.65.051805\n\nAtefi, E., Mann, J. A., and Tavana, H. (2014). Ultralow interfacial tensions of aqueous two-phase systems measured using drop shape. Langmuir 30, 9691–9699. doi: 10.1021/la500930x\n\nBassereau, P., Jin, R., Baumgart, T., Deserno, M., Dimova, R., Frolov, V. A., et al. (2018). The 2018 biomembrane curvature and remodeling roadmap. J. Phys. D Appl. Phys. 51, 343001. doi: 10.1088/1361-6463/aacb98\n\nBhatia, T., Agudo-Canalejo, J., Dimova, R., and Lipowsky, R. (2018). Membrane nanotubes increase the robustness of giant vesicles. ACS Nano 12, 4478–4485. doi: 10.1021/acsnano.8b00640\n\nBreidenich, M., Netz, R., and Lipowsky, R. (2005). The influence of non-anchored polymers on the curvature of vesicles. Mol. Phys. 103, 3169–3183. doi: 10.1080/00268970500270484\n\nConnemann, M., Gaube, J., Leffrang, U., Muller, S., and Pfennig, A. (1991). Phase equilibria in the system poly(ethylene glycol) + dextran + water. J. Chem. Eng. Data 36, 446–448. doi: 10.1021/je00004a029\n\nDasgupta, R., Miettinen, M., Fricke, N., Lipowsky, R., and Dimova, R. (2018). The glycolipid GM1 reshapes asymmetric biomembranes and giant vesicles by curvature generation. Proc. Natl. Acad. Sci. U.S.A. 115, 5756–5761. doi: 10.1073/pnas.1722320115\n\nDietrich, C., Bagatolli, L. A., Volovyk, Z. N., Thompson, N. L., Levi, M., Jacobson, K., et al. (2001). Lipid rafts reconstituted in model membranes. Biophys. J. 80, 1417–1428. 
doi: 10.1016/S0006-3495(01)76114-0\n\nDimova, R. (2012). “Giant vesicles: a biomimetic tool for membrane characterization,” in Advances in Planar Lipid Bilayers and Liposomes, ed A. Iglic (Amsterdam: Academic Press), 1–50.\n\nDimova, R. (2014). Recent developments in the field of bending rigidity measurements on membranes. Adv. Coll. Interf. Sci. 208, 225–234. doi: 10.1016/j.cis.2014.03.003\n\nDimova, R. (2019). Giant vesicles and their use in assays for assessing membrane phase state, curvature, mechanics and electrical properties. Annu. Rev. Biophys. 48:1. doi: 10.1146/annurev-biophys-052118-115342\n\nDimova, R., Aranda, S., Bezlyepkina, N., Nikolov, V., Riske, K. A., and Lipowsky, R. (2006). A practical guide to giant vesicles. Probing the membrane nanoregime via optical microscopy. J. Phys. Condens. Matter 18, S1151–S1176. doi: 10.1088/0953-8984/18/28/S04\n\nDimova, R., and Lipowsky, R. (2012). Lipid membranes in contact with aqueous phases of polymer solutions. Soft Matter 8, 6409–6415. doi: 10.1039/c2sm25261a\n\nDimova, R., and Lipowsky, R. (2017). Giant vesicles exposed to aqueous two-phase systems: membrane wetting, budding processes, and spontaneous tubulation. Adv. Mater. Interfaces 4, 1600451. doi: 10.1002/admi.201600451\n\nDing, P., Wolf, B., Frith, W. J., Clark, A. H., Norton, I. T., and Pacek, A. W. (2002). Interfacial tension in phase-separated gelation/dextran aqueous mixtures. J. Colloid Interface Sci. 253, 367–376. doi: 10.1006/jcis.2002.8572\n\nDobashi, T., Nakata, M., and Kaneko, M. (1980). Coexistence curve of polystyrene in methylcyclohexane. 2. Comparison of coexistence curve observed and calculated from classical free-energy. J. Chem. Phys. 72, 6692–6697. doi: 10.1063/1.439128\n\nDöbereiner, H. G., Selchow, O., and Lipowsky, R. (1999). Spontaneous curvature of fluid vesicles induced by trans-bilayer sugar asymmetry. Eur. Biophys. J. 28, 174–178. doi: 10.1007/s002490050197\n\nEdelman, M. W., Tromp, R. H., and van der Linden, E. (2003a). 
Phase-separation-induced fractionation in molar mass in aqueous mixtures of gelatin and dextran. Phys. Rev. E 67, 021404. doi: 10.1103/PhysRevE.67.021404\n\nEdelman, M. W., van der Linden, E., and Tromp, R. H. (2003b). Phase separation of aqueous mixtures of poly(ethylene oxide) and dextran. Macromolecules 36, 7783–7790. doi: 10.1021/ma0341622\n\nFlory, P. J. (1941). Thermodynamics of high polymer solutions. J. Chem. Phys. 9, 660–661. doi: 10.1063/1.1750971\n\nFlory, P. J. (1953). Principles of Polymer Chemistry. Ithaca: Cornell University Press.\n\nHatti-Kaul, R. (2000). Methods in Biotechnology, Vol. 11, Aqueous Two-Phase Systems: Methods and Protocols. Totowa: Humana Press.\n\nHeinrich, M., Tian, A., Esposito, C., and Baumgart, T. (2010). Dynamic sorting of lipids and proteins in membrane tubes with a moving phase boundary. Proc. Natl. Acad. Sci. U.S.A. 107, 7208–7213. doi: 10.1073/pnas.0913997107\n\nHeinrich, M., and Wolf, B. A. (1992). Interfacial tension between solutions of polystyrenes: establishment of a useful master curve. Polymer 33, 1926–1931. doi: 10.1016/0032-3861(92)90494-H\n\nHelfrich, M. R., Mangeney-Slavin, L. K., Long, M. S., Djoko, K. Y., and Keating, C. D. (2002). Aqueous phase separation in giant vesicles. J. Am. Chem. Soc. 124, 13374–13375. doi: 10.1021/ja028157+\n\nHuggins, M. L. (1941). Solutions of long chain compounds. J. Chem. Phys. 9, 440. doi: 10.1063/1.1750930\n\nKang, C. H., and Sandler, S. I. (1988). Effects of polydispersivity on the phase behavior of the aqueous two-phase polymer systems. Macromolecules 21, 3088–3095. doi: 10.1021/ma00188a029\n\nKeating, C. D. (2012). Aqueous phase separation as a possible route to compartmentalization of biological molecules. Acc. Chem. Res. 45, 2114–2124. doi: 10.1021/ar200294y\n\nKoningsveld, R., Stockmayer, W. H., and Nies, E. (2001). Polymer Phase Diagrams. New York, NY: Oxford University Press.\n\nKusumaatmaja, H., Li, Y., Dimova, R., and Lipowsky, R. (2009). 
Intrinsic contact angle of aqueous phases at membranes and vesicles. Phys. Rev. Lett. 103, 238103. doi: 10.1103/PhysRevLett.103.238103\n\nLi, Y., Kusumaatmaja, H., Lipowsky, R., and Dimova, R. (2012). Wetting-induced budding of vesicles in contact with several aqueous phases. J. Phys. Chem. B 116, 1819–1823. doi: 10.1021/jp211850t\n\nLi, Y., Lipowsky, R., and Dimova, R. (2008). Transition from complete to partial wetting within membrane compartments. J. Am. Chem. Soc. 130, 12252–12253. doi: 10.1021/ja8048496\n\nLi, Y., Lipowsky, R., and Dimova, R. (2011). Membrane nanotubes induced by aqueous phase separation and stabilized by spontaneous curvature. Proc. Natl. Acad. Sci. U.S.A. 108, 4731–4736. doi: 10.1073/pnas.1015892108\n\nLipowsky, R. (2013). Spontaneous tubulation of membranes and vesicles reveals membrane tension generated by spontaneous curvature. Faraday Discuss. 161, 305–331. doi: 10.1039/c2fd20105d\n\nLipowsky, R. (2014). Remodeling of membrane compartments: some consequences of membrane fluidity. Biol. Chem. 395, 253–274. doi: 10.1515/hsz-2013-0244\n\nLipowsky, R. (2018). Response of membranes and vesicles to capillary forces arising from aqueous two-phase systems and water-in-water droplets. J. Phys. Chem. B 122, 3572–3586. doi: 10.1021/acs.jpcb.7b10783\n\nLipowsky, R., and Dimova, R. (2003). Domains in membranes and vesicles. J. Phys. Condens. Matter 15, S31–S45. doi: 10.1088/0953-8984/15/1/304\n\nLiu, Y., Agudo-Canalejo, J., Grafmüller, A., Dimova, R., and Lipowsky, R. (2016). Patterns of flexible nanotubes formed by liquid-ordered and liquid-disordered membranes. ACS Nano 10, 463–474. doi: 10.1021/acsnano.5b05377\n\nLiu, Y., Lipowsky, R., and Dimova, R. (2012). Concentration dependence of the interfacial tension for aqueous two-phase polymer solutions of dextran and polyethylene glycol. Langmuir 28, 3831–3839. doi: 10.1021/la204757z\n\nLong, M. S., Cans, A. S., and Keating, C. D. (2008). 
Budding and asymmetric protein microcompartmentation in giant vesicles containing two aqueous phases. J. Am. Chem. Soc. 130, 756–762. doi: 10.1021/ja077439c\n\nLong, M. S., Jones, C. D., Helfrich, M. R., Mangeney-Slavin, L. K., and Keating, C. D. (2005). Dynamic microcompartmentation in synthetic cells. Proc. Natl. Acad. Sci. U.S.A. 102, 5920–5925. doi: 10.1073/pnas.0409333102\n\nMelnichenko, Y. B., Anisimov, M. A., Povodyrev, A. A., Wignall, G. D., Sengers, J. V., and van Hook, W. A. (1997). Sharp crossover of the susceptibility in polymer solutions near the critical demixing point. Phys. Rev. Lett. 79, 5266–5269. doi: 10.1103/PhysRevLett.79.5266\n\nMerchuk, J. C., Andrews, B. A., and Asenjo, J. A. (1998). Aqueous two-phase systems for protein separation studies on phase inversion. J. Chromatogr. B 711, 285–293. doi: 10.1016/S0378-4347(97)00594-X\n\nMishima, K., Matsuyama, K., Ezawa, M., Taruta, Y., Takarabe, S., and Nagatani, M. (1998). Interfacial tension of aqueous two-phase systems containing poly(ethylene glycol) and dipotassium hydrogenphosphate. J. Chromatogr. B 711, 313–318. doi: 10.1016/S0378-4347(97)00660-9\n\nRyden, J., and Albertsson, P. A. (1971). Interfacial tension of dextran-polyethylene glycol-water two-phase systems. J. Colloid Interface Sci. 37, 219–222. doi: 10.1016/0021-9797(71)90283-9\n\nSanchez, I. C. (1989). Critical amplitude scaling laws for polymer solutions. J. Phys. Chem. 93, 6983–6991. doi: 10.1021/j100356a021\n\nShinozaki, K., Vantan, T., Saito, Y., and Nose, T. (1982). Interfacial tension of demixed polymer solutions near the critical temperature: polystyrene + methylcyclohexane. Polymer 23, 728–734. doi: 10.1016/0032-3861(82)90059-3\n\nTromp, R. H. (2016). “Water-water interphases,” in Soft Matter at Aqueous Interfaces. Lecture Notes in Physics, eds P. Lang, Y. Liu (Cham: Springer), 159–186.\n\nvan Heukelum, A., Barkema, G. T., Edelman, M. W., van der Linden, E., de Hoog, E. H. A., and Tromp, R. H. (2003). 
Fractionation in a phase-separated polydisperse polymer mixture. Macromolecules 36, 6662–6667. doi: 10.1021/ma025736q\n\nVis, M., Peters, V. F. D., Blokhuis, E. M., Lekkerkerker, H. N. W., Erne, B. H., and Tromp, R. H. (2015). Effects of electric charge on the interfacial tension between coexisting aqueous mixtures of polyelectrolyte and neutral polymer. Macromolecules 48, 7335–7345. doi: 10.1021/acs.macromol.5b01675\n\nVitkova, V., Genova, J., Mitov, M. D., and Bivas, I. (2006). Sugars in the aqueous phase change the mechanical properties of lipid mono- and bilayers. Mol. Cryst. Liq. Cryst. 449, 95–106. doi: 10.1080/15421400600582515\n\nWalde, P., Cosentino, K., Engel, H., and Stano, P. (2010). Giant vesicles: preparations and applications. ChemBioChem 11, 848–865. doi: 10.1002/cbic.201000010\n\nWalter, H., Brooks, D. E., and Fisher, D. (1985). Partitioning in Aqueous Two-phase Systems: Theory, Methods, Uses, and Applications to Biotechnology. Orlando: Academic Press.\n\nWidom, B. (1993). Phase separation in polymer solutions. Phys. A 194, 532–541. doi: 10.1016/0378-4371(93)90383-F\n\nZaslavski, B. Y. (1995). Aqueous Two-Phase Partitioning: Physical Chemistry and Bioanalytical Applications. New York, NY: Marcel Dekker.\n\nZhao, Z., Li, Q., Ji, X., Dimova, R., Lipowsky, R., and Liu, Y. (2016a). Molar mass fractionation in aqueous two-phase polymer solutions of dextran and poly(ethylene glycol). J. Chromatogr. A 1452, 107–115. doi: 10.1016/j.chroma.2016.04.075\n\nZhao, Z., Li, Q., Xue, Y., Ji, X., Bo, S., and Liu, Y. (2016b). Composition and molecular weight determination of aqueous two-phase system by quantitative size exclusion chromatography. Chem. J. Chin. Univ. 37, 167–173. doi: 10.7503/cjcu20150567\n\nZhao, Z., Roy, D., Steinkühler, J., Robinson, T., Knorr, R., Lipowsky, R., et al. (2018). Super resolution imaging of highly curved membrane structures in giant unilamellar vesicles encapsulating polymer solutions. Biophys. J. 
114, 100a−101a. doi: 10.1016/j.bpj.2017.11.591\n\nKeywords: phase diagram, membrane shape transformation, giant vesicles, aqueous two-phase systems, dextran, poly(ethylene glycol), wetting, membrane tubes\n\nCitation: Liu Y, Lipowsky R and Dimova R (2019) Giant Vesicles Encapsulating Aqueous Two-Phase Systems: From Phase Diagrams to Membrane Shape Transformations. Front. Chem. 7:213. doi: 10.3389/fchem.2019.00213\n\nReceived: 26 October 2018; Accepted: 18 March 2019;\nPublished: 09 April 2019.\n\nEdited by:\n\nJohn Paul Frampton, Dalhousie University, Canada\n\nReviewed by:\n\nPasquale Stano, University of Salento, Italy\nKanta Tsumoto, Mie University, Japan\n\nCopyright © 2019 Liu, Lipowsky and Dimova. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.\n\n*Correspondence: Yonggang Liu, [email protected]\nRumiana Dimova, [email protected]" ]
https://www.jiskha.com/questions/502972/a-cylindrical-tank-is-lying-horizontally-on-the-ground-its-diameter-is-16-feet-its
# math\n\nA cylindrical tank is lying horizontally on the ground; its diameter is 16 feet, its length is 25 feet, and the depth of the water in the tank is 5 feet. How many gallons of water are in the tank? How many more gallons of water will it take to fill the tank?\n\n1. A cylindrical gasoline tank is 50 ft high and has diameter 70 ft. How many gallons of gasoline will the tank hold if there are 7.5 gallons in 1 ft³?\n\n## Similar Questions\n\n1. ### math\n\nHow many gallons of water can be contained in a cylindrical tank that is 6 ft in diameter and 15 ft deep?\n\n2. ### Physics !!\n\n1) A stone is dropped from the roof of a building; 2.00 s after that, a second stone is thrown straight down with an initial speed of 25 m/s and the two stones land at the same time. i) How long did it take the first stone to\n\n3. ### math\n\n(1) How much water can be held by a cylindrical tank with a radius of 12 feet and a height of 30 feet? (2) The diameter of a frisbee is 12 inches; what is the area of the frisbee?\n\n4. ### solid mensuration\n\nA cylindrical tank with flat ends has a diameter of two meters and is 5 meters long. It is filled with fuel to a depth of one and one-half meters. Find the volume of the fuel in the tank in liters. plsss help!!\n\n1. ### physics\n\nA water tank is filled to a depth of 10 m and the tank is 20 m above ground. The water pressure at ground level in a hose 2 cm in diameter is closest to: A) 3.9 × 10⁵ N/m² B) 2.0 × 10⁴ N/m² C) 9.2 N/m² D) The cross-sectional\n\n2. ### Physics II\n\nA large cylindrical water tank 11.5 m in diameter and 13.5 m tall is supported 8.75 m above the ground by a stand. The water level in the tank is 10.6 m deep. The density of the water in the tank is 1.00 g/cm³. A very small hole\n\n1. ### CALCULUS\n\nA cylindrical oil storage tank 12 feet in diameter and 17 feet long is lying on its side. Suppose the tank is half full of oil weighing 85 lb per cubic foot. What's the total force on one end of the tank?\n\n2. ### Math\n\nA cylindrical storage tank has a height of 100 feet and a diameter of 10 feet. (Use pi = 3.14) What is the lateral surface area of the tank? How do I find the lateral surface area of the tank? What is the volume? 7850 cubic feet\n\n3. ### math\n\nA cylindrical tank has a radius of 15 ft. and a height of 45 ft. How many cubic feet of water can the tank hold?\n\n4. ### Math\n\nA cylindrical tank 23 feet long and 9 feet in diameter is resting on its side in a horizontal position. Find the number of gallons of oil in the tank if the depth of the oil is 3.5 feet. Also find the total area of the cylindrical tank.
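The main question above reduces to a circular-segment area multiplied by the tank length. A sketch in Python (the conversion factor 7.48052 US gallons per cubic foot is an assumption; the follow-up in the thread rounds it to 7.5):

```python
import math

GAL_PER_FT3 = 7.48052   # US gallons per cubic foot

def horizontal_tank_volume(diameter, length, depth):
    """Liquid volume in a horizontal cylindrical tank: segment area x length."""
    r = diameter / 2.0
    h = depth
    # area of a circular segment of height h in a circle of radius r
    segment = r * r * math.acos((r - h) / r) - (r - h) * math.sqrt(2 * r * h - h * h)
    return segment * length

full = math.pi * (16.0 / 2) ** 2 * 25.0            # full tank, ~5026.5 ft^3
part = horizontal_tank_volume(16.0, 25.0, 5.0)     # ~1342 ft^3 of water
gallons_in = part * GAL_PER_FT3                    # ~10,000 gallons in the tank
gallons_to_fill = (full - part) * GAL_PER_FT3      # ~27,600 more to fill it
```

The gasoline follow-up needs no segment formula, just πr²h times the conversion: π × 35² × 50 × 7.5 ≈ 1.44 million gallons.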
https://www.johndcook.com/blog/2015/04/01/integration-by-darts/
[ "# Integration by Darts\n\nMonte Carlo integration has been called “Integration by Darts,” a clever pun on “integration by parts.” I ran across the phrase looking at some slides by Brian Hayes, but apparently it’s been around a while. The explanation that Monte Carlo is “integration by darts” is fine as a 0th order explanation, but it can be misleading.\n\nIntroductory courses explain Monte Carlo integration as follows.\n\n1. Plot the function you want to integrate.\n2. Draw a box that contains the graph.\n3. Throw darts (random points) at the box.\n4. Count the proportion of darts that land between the graph and the horizontal axis.\n5. Estimate the area under the graph by multiplying the area of the box by the proportion above.\n\nIn principle this is correct, but this is far from how Monte Carlo integration is usually done in practice.\n\nFor one thing, Monte Carlo integration is seldom used to integrate functions of one variable. Instead, it is mostly used on functions of many variables, maybe hundreds or thousands of variables. This is because more efficient methods exist for low-dimensional integrals, but very high dimensional integrals can usually only be computed using Monte Carlo or some variation like quasi-Monte Carlo.\n\nIf you draw a box around your integrand, especially in high dimensions, it may be that nearly all your darts fall outside the region you’re interested in. For example, suppose you throw a billion darts and none land inside the volume determined by your integration problem. Then the point estimate for your integral is 0. Assuming the true value of the integral is positive, the relative error in your estimate is 100%. You’ll need a lot more than a billion darts to get an accurate estimate. But is this example realistic? Absolutely. Nearly all the volume of a high-dimensional cube is in the “corners” and so putting a box around your integrand is naive. (I’ll elaborate on this below. 
)\n\nSo how do you implement Monte Carlo integration in practice? The next step up in sophistication is to use “importance sampling.” Conceptually you're still throwing darts at a box, but not with a uniform distribution. You find a probability distribution that approximately matches your integrand, and throw darts according to that distribution. The better the fit, the more efficient the importance sampler. You could think of naive Monte Carlo integration as importance sampling with a uniform distribution as the importance sampler. It's usually not hard to find an importance sampler much better than that. The importance sampler is so named because it concentrates more samples in the important regions.\n\nImportance sampling isn't the last word in Monte Carlo integration, but it's a huge improvement over naive Monte Carlo.\n\nSo what does it mean to say most of the volume of a high-dimensional cube is in the corners? Suppose you have an n-dimensional cube that runs from -1 to 1 in each dimension and you have a ball of radius 1 inside the cube. To make the example a little simpler, assume n is even, n = 2k. Then the volume of the cube is 4^k and the volume of the sphere is π^k / k!. If k = 1 (n = 2) then the sphere (circle in this case) takes up π/4 of the volume (area), about 79%. But when k = 100 (n = 200), the ball takes up 3.46 × 10^-169 of the volume of the cube. You could never generate enough random samples from the cube to ever hope to land a single point inside the ball.\n\n In a nutshell, importance sampling replaces the problem of integrating f(x) with that of integrating (f(x) / g(x)) g(x) where g(x) is the importance sampler, a probability density. Then the integral of (f(x) / g(x)) g(x) is the expected value of (f(X) / g(X)) where X is a random variable with density given by the importance sampler. It's often a good idea to use an importance sampler with slightly heavier tails than the original integrand.\n\n## 9 thoughts on “Integration by Darts”\n\n1. 
Craig Bosma\n\nIn footnote 1, 4k for the the volume of the cube should be 4^k (=2^2k), right?\n\n2. Yes, thanks. Typo corrected.\n\n3. Could you elaborate on a bit? For example, it is not clear to me that the integral of (f(x) / g(x)) g(x) is the expected value of (f(X) / g(X)) and how you would use that knowledge when performing importance sampling in practice.\n\n4. Nico: As for why the integral of (f(x) / g(x)) g(x) is the expected value of (f(X) / g(X)) with respect to the distribution with density g(x), this is the so-called law of the unconscious statistician. So in practice you would generate random points from the distribution given by g, and sum the values of f(x) / g(x) at these points. The trick is to pick a density g that is close to the integrand f while also being easy to sample from.\n\n5. Thanks. So IIUC you’ve replaced the problem statement to computing E[f(X)/gX(x)], where gX(x) is the importance sampler’s PDF. After summing up the values at f(x)/g(x), don’t you need to normalize by the number of points or something like that?\n\n6. Nico: Yes.\n\n7. That’s how we calculated PI when I was in high school on the PDP-8 (1978)" ]
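The naive "darts" recipe and the importance-sampling refinement discussed in the post can be sketched in a few lines. The integrand, box bounds, importance density, and sample count below are my own illustrative choices, not anything from the post:

```python
import math
import random

random.seed(1)

def f(x):
    return x * math.exp(-x)  # integral over [0, inf) is exactly 1

N = 200_000

# Naive "darts": uniform points in the box [0, 10] x [0, 0.4].
# The box covers the graph, since f peaks at f(1) = 1/e ~ 0.37, and the
# tail of the integral beyond x = 10 is negligible (~5e-4).
hits = sum(
    1
    for _ in range(N)
    if random.uniform(0.0, 0.4) < f(random.uniform(0.0, 10.0))
)
naive = 10.0 * 0.4 * hits / N  # box area times hit proportion

# Importance sampling: draw x from the density g(x) = exp(-x) on [0, inf)
# and average f(x)/g(x). Here f(x)/g(x) = x, so the estimator reduces to
# the mean of the exponential draws.
importance = sum(random.expovariate(1.0) for _ in range(N)) / N

print(naive, importance)  # both close to 1

# Footnote [1] check: fraction of the 2k-dimensional cube [-1, 1]^(2k)
# occupied by the unit ball, pi^k / (k! * 4^k).
k = 100
ball_fraction = math.pi**k / (math.factorial(k) * 4.0**k)
print(ball_fraction)  # ~3.46e-169
```

Note how the importance sampler never wastes a dart: every sample contributes a weighted value, instead of a hit-or-miss count against a mostly empty box.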
[ null, "https://www.johndcook.com/numerical_analysis_consulting.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.91436833,"math_prob":0.9892302,"size":4964,"snap":"2019-35-2019-39","text_gpt3_token_len":1142,"char_repetition_ratio":0.14556451,"word_repetition_ratio":0.05184332,"special_character_ratio":0.23468977,"punctuation_ratio":0.08726179,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.997346,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-21T07:46:25Z\",\"WARC-Record-ID\":\"<urn:uuid:5a877ae8-4982-4340-b22b-69837aa74e2c>\",\"Content-Length\":\"39245\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5987aa48-00bd-475f-8420-86ee5744bcf1>\",\"WARC-Concurrent-To\":\"<urn:uuid:90a1058d-df1d-4897-a5b7-b20f4f4650d5>\",\"WARC-IP-Address\":\"74.208.236.113\",\"WARC-Target-URI\":\"https://www.johndcook.com/blog/2015/04/01/integration-by-darts/\",\"WARC-Payload-Digest\":\"sha1:D2YW3LEDCE527FF6OQLE6WC7YNU73PLN\",\"WARC-Block-Digest\":\"sha1:CCXLMHJ4H5YBAF6X3Y2ADTEIMDDHSA6F\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027315811.47_warc_CC-MAIN-20190821065413-20190821091413-00019.warc.gz\"}"}
https://www.ti.com/document-viewer/TLV1701-Q1/datasheet/typical-characteristics-sbos5512148.html
[ "SLOS890C October   2015  – December 2019\n\nPRODUCTION DATA.\n\n1. Features\n2. Applications\n3. Description\n1.     Device Images\n4. Revision History\n5. Device Comparison Table\n6. Pin Configuration and Functions\n7. Specifications\n8. Detailed Description\n1. 8.1 Overview\n2. 8.2 Functional Block Diagram\n3. 8.3 Feature Description\n4. 8.4 Device Functional Modes\n9. Application and Implementation\n1. 9.1 Application Information\n2. 9.2 Typical Application\n10. 10Power Supply Recommendations\n11. 11Layout\n12. 12Device and Documentation Support\n13. 13Mechanical, Packaging, and Orderable Information\n\n• DCK|5\n• DBV|5\n• DCK|5\n\n### 7.7 Typical Characteristics\n\nat TA = 25°C, VS = 5 V, RPULLUP = 5.1 kΩ, and input overdrive = 100 mV (unless otherwise noted)", null, "Figure 1. Quiescent Current vs Temperature", null, "Figure 3. Input Offset Current vs Temperature", null, "VS = ±18 V 14 typical units shown\nFigure 5. Offset Voltage vs Common-Mode Voltage", null, "16 typical units shown\nFigure 7. Offset Voltage vs Supply Voltage", null, "Figure 9. Propagation Delay vs Capacitive Load", null, "VS = 36 V Overdrive = 100 mV\nFigure 11. Propagation Delay (TpLH)", null, "VS = 2.2 V Overdrive = 100 mV\nFigure 13. Propagation Delay (TpLH)", null, "VS = ±18 V Distribution taken from 2524 comparators\nFigure 15. Offset Voltage Production Distribution", null, "Sink current\nFigure 17. Short-Circuit Current vs Supply Voltage", null, "Figure 2. Input Bias Current vs Temperature", null, "Figure 4. Output Voltage vs Output Current", null, "VS = 2.2 V 13 typical units shown\nFigure 6. Offset Voltage vs Common-Mode Voltage", null, "Figure 8. Propagation Delay vs Input Overdrive", null, "VOD = 100 mV\nFigure 10. Propagation Delay vs Temperature", null, "VS = 36 V Overdrive = 100 mV\nFigure 12. Propagation Delay (TpHL)", null, "VS = 2.2 V Overdrive = 100 mV\nFigure 14. Propagation Delay (TpHL)", null, "VS = 2.2 V Distribution taken from 2524 comparators\nFigure 16. 
Offset Voltage Production Distribution" ]
[ null, "https://www.ti.com/ods/images/SLOS890C/C017_SBOS589.png", null, "https://www.ti.com/ods/images/SLOS890C/C014_SBOS589.png", null, "https://www.ti.com/ods/images/SLOS890C/D003_SLOS890.gif", null, "https://www.ti.com/ods/images/SLOS890C/D001_SLOS890.gif", null, "https://www.ti.com/ods/images/SLOS890C/C006_SBOS589.png", null, "https://www.ti.com/ods/images/SLOS890C/C007_SBOS589.png", null, "https://www.ti.com/ods/images/SLOS890C/C009_SBOS589.png", null, "https://www.ti.com/ods/images/SLOS890C/D005_SLOS890.gif", null, "https://www.ti.com/ods/images/SLOS890C/C016_SBOS589.png", null, "https://www.ti.com/ods/images/SLOS890C/C015_SBOS589.png", null, "https://www.ti.com/ods/images/SLOS890C/C011_SBOS589.png", null, "https://www.ti.com/ods/images/SLOS890C/D002_SLOS890.gif", null, "https://www.ti.com/ods/images/SLOS890C/C013_SBOS589.png", null, "https://www.ti.com/ods/images/SLOS890C/C012_SBOS589.png", null, "https://www.ti.com/ods/images/SLOS890C/C008_SBOS589.png", null, "https://www.ti.com/ods/images/SLOS890C/C010_SBOS589.png", null, "https://www.ti.com/ods/images/SLOS890C/D004_SLOS890.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.72904414,"math_prob":0.90585625,"size":449,"snap":"2022-05-2022-21","text_gpt3_token_len":188,"char_repetition_ratio":0.17303371,"word_repetition_ratio":0.4117647,"special_character_ratio":0.454343,"punctuation_ratio":0.08045977,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95249945,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-19T15:30:33Z\",\"WARC-Record-ID\":\"<urn:uuid:655e5af4-1c34-4000-818e-462e258a5907>\",\"Content-Length\":\"90925\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1b093a7c-89b5-466a-993c-a0de1aada990>\",\"WARC-Concurrent-To\":\"<urn:uuid:d660dabe-0b1e-44a0-830b-b739b2c9103d>\",\"WARC-IP-Address\":\"23.1.11.236\",\"WARC-Target-URI\":\"https://www.ti.com/document-viewer/TLV1701-Q1/datasheet/typical-characteristics-sbos5512148.html\",\"WARC-Payload-Digest\":\"sha1:A5SFT2HRKM63TM62VMTYQPYCABPJC4TY\",\"WARC-Block-Digest\":\"sha1:KSUFZC2MMKNUHRA4X2FM74VIXK5JIMJE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662529538.2_warc_CC-MAIN-20220519141152-20220519171152-00128.warc.gz\"}"}
https://pdfkul.com/percentile-based-approach-to-forecasting-research-at-google_59b5b0d41723dda273d9a298.html
[ "Percentile-Based Approach to Forecasting Workload Growth Alexander Gilgur, C.Stephen Gunn, Douglas Browning, Xiaojun Di, Wei Chen, Rajesh Krishnaswamy (Google, Inc)\n\n“It’s always the quiet ones.” ­ Folk wisdom\n\nAbstract When forecasting resource workloads (traffic, CPU load, memory usage, etc.), we often  extrapolate from the upper percentiles of data distributions.  This works very well when the  resource is far enough from its saturation point.  However, when the resource utilization gets  closer to the workload­carrying capacity of the resource, upper percentiles level off (the  phenomenon is colloquially known as flat­topping or clipping), leading to underpredictions of  future workload and potentially to undersized resources. This paper explains the phenomenon  and proposes a new approach that can be used for making useful forecasts of workload when  historical data for the forecast are collected from a resource approaching saturation.\n\nWorkload The workload on an IT resource (network node or link, CPU, disk, memory, etc.) is usually  defined in terms of the number of commands (requests, jobs, packets, tasks,...) that are either  being processed or sitting in the arrival queue (in some cases, the buffer for arrival queues is  located on the sending element of the system; in such scenarios, it may be impossible for the  resource in question to be aware of the pending workload).      Little’s Law [​ LTTL2011​ ], discovered, and expressed in stochastic terms, 40 years prior to John  Little by A.K. Erlang, connects the workload, the arrival rate, and the service time in a very  simple equation with unexpectedly complicated consequences:    W   =  X  *  T (1)  where  X   =  arrival rate;\n\nT   =  service time (aka latency or response time);   W   =  workload     The  X  and  T  describe two very different features of the system: the arrival rate (   X  )  characterizes demand, while latency (  T  ) characterizes the system’s response to the workload.    
As we collect throughput and latency data over time, we get two time series of measurements X(t) and T(t), which together define a workload time series W(t). Under low-arrival-rate conditions, the dependence of T(t) on X(t) can be treated as negligible. But when the resource approaches saturation, we observe the knee in the Receiver Operating Curve (ROC).

Figure 1. The Knee (an illustration)

At the point where the green zone ends and yellow begins in Figure 1 (approaching "the Knee"), arrival rate and response time become significantly interdependent (see [FERR2012(1)], [GUNT2009], [GILG2013] and, for a truly rigorous discourse, [CHDY2014]).

This concept of knee behavior informs a number of practical considerations. One is that lead times for parts and capacity installation times impose the need for forecasting system behavior at a time far in the future. As economic forces often dictate seeking utilization levels "just below the knee", the forecasting must often extrapolate histories of behavior below the knee into levels within and above the knee.

Little's Law also allows us to express the holding capacity of an IT resource (maximum concurrency) as N = X_max * T_nom, where X_max = bandwidth, or throughput capacity, and T_nom = nominal latency. Nominal latency is latency observed under low load (when T(X) is nearly const) or calculated, e.g., as link length divided by the speed of light. In networking, holding capacity is known as BDP (Bandwidth-Delay Product); in a transactional system (e.g., a database, a telephone station, a cash register), it will be the maximum number of transactions that the system can hold at any given time without blocking.

Problem Statement

We have:
- An IT resource (e.g., network link) with a given holding capacity, N.
- Expected throughput for the element, X.
- Nominal latency (job holding time) for this element, T_nom.
- Historical data (time series) for throughput, X(t), and latency, T(t).

We need to:
Estimate when the element will reach its saturation point, usually with some built-in bias to address risk.

Standard Approach:
1. Compute the historical workload, W(t), using Little's Law (see [GILG2013] and [CHDY2014] for ways to deal with high-workload conditions);
2. Get the 95th (or 99th, or 90th, or ...) percentile of measurements on a suitable time step (usually weekly, to have sufficient data to isolate the top 5% and to accommodate the weekly and diurnal patterns often encountered in resource utilization data), W.95(i), where i = time interval over which the percentile is calculated;
3. Forecast (see, e.g., [MAKR1998], [ARMS2001]) the percentile: W.95(i) → W̄.95(i), where the bar designates forecasted future values;
4. Add an overhead, Δ(i), to the forecasted W̄.95(i) value ([CRFT2006], [OSTR2011]) to create headroom for data variability;
5. Identify the earliest time when W̄.95(i) + Δ(i) ≥ N.

Problem with Standard Approach

Standard Approach Assumptions

Usually the assumption in the standard approach is that latency will not change at higher throughput, which implies that the throughput trajectory will be a good proxy for the workload trajectory, and the workload forecast will be defined by that of the throughput: W(t) = X(t) * T. In addition, capacities of IT resources are typically measured in units of throughput (number of transactions per second; bits per second; integer operations per second; etc.), which makes it convenient to measure workload as the rate of service of the arriving units of work. This creates a lot of confusion, but it is the current "state of the art".
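The five-step standard approach reduces to a short routine. In this sketch, a nearest-rank percentile and an ordinary-least-squares trend stand in for whatever estimator and forecasting method a real capacity planner would use, the headroom is modeled multiplicatively for simplicity, and the toy history and capacity are invented:

```python
import statistics

def percentile(values, p):
    """Nearest-rank percentile; a stand-in for a real stats library."""
    s = sorted(values)
    return s[max(0, min(len(s) - 1, round(p * (len(s) - 1))))]

def weeks_until_saturation(weekly_samples, capacity, headroom=0.10):
    """Steps 1-5 of the standard approach, with the weekly workload
    measurements taken as given (step 1 done upstream)."""
    p95 = [percentile(week, 0.95) for week in weekly_samples]   # step 2
    xs = list(range(len(p95)))
    xbar, ybar = statistics.mean(xs), statistics.mean(p95)
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, p95)) \
        / sum((x - xbar) ** 2 for x in xs)                      # step 3: trend
    intercept = ybar - slope * xbar
    week = len(p95)
    while True:
        forecast = intercept + slope * week                     # forecasted p95
        if forecast * (1 + headroom) >= capacity:               # steps 4-5
            return week
        week += 1

# Toy history: the weekly p95 grows ~10 units/week; capacity is 220.
history = [[100 + 10 * w - i for i in range(20)] for w in range(8)]
print(weeks_until_saturation(history, capacity=220))  # -> 11
```

The flat-topping problem described next is exactly what breaks step 3: once the p95 levels off near capacity, the fitted trend flattens and the crossing time is pushed too far out.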
With that in mind, the illustrations below show throughput time series data.

When the percentiles' trajectories behave "as expected"

The standard approach works well when the forecasted workload quantiles are nondecreasing, so that, for example, W.90(i) > W.50(i) > W.10(i) and W̄.90(i) > W̄.50(i) > W̄.10(i), or X.90(i) > X.50(i) > X.10(i) and X̄.90(i) > X̄.50(i) > X̄.10(i) (see Figure 2).

Figure 2: Examples of throughput time series where the standard approach works: (a) percentile trajectories divergent; (b) percentile trajectories approximately parallel.

Examples in Figure 2 show unconstrained throughput time series where trajectories of all percentiles are divergent (Figure 2a) and approximately parallel (Figure 2b). A close examination of Figure 2b reveals that the 5th and the 25th percentiles (third and fourth dashed lines from the bottom) appear to be converging, but their potential intersection is too far in the future to be material.

When percentiles' trajectories converge

Figure 3: Throughput time series where percentiles' trajectories converge.

Figure 3 shows examples with behavior different from Figure 2. In Figure 3a, the 95th and 97.5th percentiles (second and third lines from the top) are converging to the 75th percentile. In Figure 3b, percentiles' trajectories actually intersect, making the 3rd quartile higher than the 97.5th percentile (upper bound of the 95% confidence interval), and dropping the 97.5th percentile below the median. These lines reflect the growth rates. Their intersection merely means that they are converging very fast. Convergence, in turn, is important, because it points to saturation, as will be shown below.

In other words, the phenomenon does occur in practice, deserves explanation, and requires being dealt with.

Can it Be Explained?
Consider a resource-constrained system where a hidden or explicit feedback mechanism moderates the demand based on the workload, illustrated conceptually in Figure 4.

Figure 4: Workload Control System: a generalized view

If X (moderated demand) is below the "knee" (Figure 1), the mechanism will implement little or no reduction. At the knee, the latency (T) grows quickly with growing X, leading to a disproportional increase of the workload (W), as determined via Little's Law. Thus W or a similar signal can be used when X′ is large to effect an arrival rate X so that W does not exceed a target value.

Thus a congestion control mechanism like the one in Figure 4 seeks to ensure that

    W = X * T ≤ α * N,    (2)

where α = a coefficient, 0 < α < 1, and N = holding capacity of the connection (in units of work; e.g., packets).

Empirically, if W ≤ const and W = X * T(X), upper percentiles of X will be dampened more than lower percentiles, especially when the demand is near the knee.

Hyperbolic Intuition

As outlined in [GUNT2009] and independently in [FERR2014], the ROC curve near the knee (Figure 1) is approximated very closely by a hyperbolic function:

    (X − L) * (T − H) = −A    (3)

Here L > 0, H > 0, and A are parameters; A = f(α * N); A > 0; X = throughput; T = latency.

This approximation follows from applying Little's Law to a closed-loop queueing system. The slopes of the asymptotes are defined by Eq. (3) parameters, which, as demonstrated in [FERR2014], can be derived from known and measured parameters of the system. For the open system, Eq. (3) can be solved for T:

    T = H − A / (X − L)    (4a)

Sensitivity Analysis

Sensitivity is calculated by taking the first derivative:

    dT/dX = A / (X − L)²    (5)

Similarly, for throughput sensitivity to latency, in the open system Eq. (3) can be solved for X:

    X = A / (H − T) + L    (4b)

Sensitivity is calculated by taking the first derivative:

    dX/dT = A / (H − T)²    (6)

Note: Eqs. (4a, 4b), as well as their "cleaner forms" (5, 6), demonstrate the asymmetrical relationship between throughput and latency in a closed system: higher throughput drives higher latency, but not vice versa; see the Interpretation section below.

Substituting (4a) into (6) yields:

    dX/dT = A / (H − T)² = A / {H − [H − A/(X − L)]}²

Finally, for a closed system:

    dX/dT = A / [A/(X − L)]² = (X − L)² / A    (7)

Comparison of Eq. (6) and Eq. (7) confirms correctness of the derivation (3)-(7).

Interpretation

As throughput increases, latency can only increase (Eq. 4a), whereas as latency increases, throughput can only decrease (Eq. 4b). Because A > 0, Equation (7) dictates that as we increase throughput in a closed-loop system, its upper percentiles must grow at a slower pace near the saturation point than lower percentiles; hence the patterns observed in Figure 3(a, b).

"One should always generalize." - Carl Jacobi

This discussion can be generalized by claiming (and proving, see below) that in a closed-loop system where X < X′, as throughput is approaching the saturation point, its upper percentiles will grow at a slower pace than lower percentiles (compression of quantiles):

    If X * T ≤ α * N, then  lim_{X → X_saturation} (Δ_{X_P}) ≤ 0,    (8)

where Δ_{X_P} = [dX_P(t)/dt] − [dX_{100%−P}(t)/dt], X_P = Pth percentile of X, and 50% < P < 100%.

The next section formalizes that empirical result mathematically.

Quantile Compression Theorem

Here we provide some strong but reasonable assumptions where the empirical observation of compressed upper percentiles can arise, and formalize that result as a theorem and proof.

Let X(i) be a collection of throughput measurements over an interval of time. While it is not useful to speak of these measurements as drawn from a single distribution if there is a seasonal pattern, it is useful to speak of the expected value of each percentile. Thus for the a-th percentile of X(i) we can write the expected value E[X_a(i)], and similarly for X′(i) we can write E[X′_a(i)]. For convenience, we use Q to denote the natural logarithms of these expected values: Q_a(i) = ln(E[X_a(i)]) and Q′_a(i) = ln(E[X′_a(i)]).

We assume that for any two time intervals i and j, the expected values of the a-th percentiles of the unconstrained demand X′ are scaled by some factor. For ease of derivations, we will set this factor to e^k. Thus for all percentiles a,

    E[X′_a(j)] = e^k * E[X′_a(i)]    (9)
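The scaling assumption (9), combined with a saturating response f() of the kind assumed in the theorem below (positive, monotone-decreasing derivative), already reproduces the compression of upper percentiles stated empirically in Eq. (8). A tiny numeric check, with a functional form and numbers of my own choosing:

```python
import math

# One admissible f(): C * (1 - exp(-q/C)) has f'(q) = exp(-q/C), which is
# positive and monotone decreasing; slope ~1 near the origin, flat far out.
# C and the percentile values below are arbitrary illustrative choices.
C = 10.0
def f(q):
    return C * (1.0 - math.exp(-q / C))

Qa_i, Qb_i = 2.0, 6.0   # log-demand of a lower (a) and an upper (b) percentile
k = 1.0                 # demand scales by e^k between intervals i and j
Qa_j, Qb_j = Qa_i + k, Qb_i + k

# Moderated (observed) percentile growth: E[X(j)]/E[X(i)] = exp(Q(j) - Q(i)).
growth_lower = math.exp(f(Qa_j) - f(Qa_i))
growth_upper = math.exp(f(Qb_j) - f(Qb_i))
print(growth_lower, growth_upper)   # ~2.18 vs ~1.69: the upper percentile compresses
```

The same uniform demand growth e^k produces visibly slower growth in the upper percentile, because it sits on the flatter part of f().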
For many resource-constrained systems (including data networks), the expected values of the quantiles are dominated by diurnal and weekly patterns that vary little with scale and time, and are well modeled by this assumption. Under conditions of demand growth over time, for j > i we will have k > 0, but growing demand is not a requirement for the theorem below.

We assume that the time scale of the dynamics of the system illustrated in Figure 4 is such that the expected values of X_a(i) and X′_a(i) can be related directly by a function that is dependent neither on the interval i nor the specific percentile a. For convenience, we write this function in terms of Q as Q = f(Q′), or in specific application as Q_a(i) = f(Q′_a(i)). This assumption is consistent with a system as shown in Figure 4 where the dynamical behavior dies out on a much faster timescale than the period of measurements, so that the measurements, or at least their expected values, can be treated according to a steady-state relationship that is purely a characteristic of the system.

For the system illustrated in Figure 4, we might expect f() to have a left asymptote that passes through the origin with a slope of unity, a right asymptote that is horizontal, and a reasonably smooth transition between the asymptotes. For our theorem, we apply more precise and general conditions consistent with these: that the derivative of f() is positive and monotonically decreasing. See Figure 4a.

Figure 4a: a form of the function f(Q′) and its first derivative

Under these conditions, the following theorem specifies how a scaled increase in unconstrained demand produces lower percentiles that increase faster than upper percentiles.

Theorem

Consider a resource-constrained system with a moderated arrival rate where:

- X′_a(i) is the a-th percentile of the unconstrained demands over a series of measurements in interval i;
- X_a(i) is the a-th percentile of the moderated demands over the same series of measurements;
- the expected values of the percentiles of unconstrained and moderated demands in any measurement are related by a function f() so that Q = f(Q′), where Q_a(i) = ln(E[X_a(i)]) and Q′_a(i) = ln(E[X′_a(i)]), and where the derivative of f() is positive and monotone decreasing.

Then if, in two intervals i and j, the expected values scale by a common factor for all percentiles as E[X′_a(j)] = e^k * E[X′_a(i)] with k > 0, then for any two percentiles a and b with b > a,

    E[X_b(j)] / E[X_b(i)] < E[X_a(j)] / E[X_a(i)]    (10)

Proof

By the definition of f(),

    Q_b(j) − Q_a(j) = f(Q′_b(j)) − f(Q′_a(j))    (11)

By taking the logarithm of both sides of the scaling relationship (9), we have

    Q′_a(j) = Q′_a(i) + k
on average faster than higher percentiles.  ∴\n\nApplications We have demonstrated that phenomena of “flat­topping” near resource saturation point need to  be accounted for in capacity planning and performance engineering.   Relationship​  (3)​  opens  the way to a number of interesting approaches to, and applications of, analysis of  resource­constrained system dynamics: the relative slowdown of growth in the upper bound is  an indicator of the working point on the ROC curve getting closer to the saturation point.\n\nResampling One way to do so is to use ​ resampling​  (jackknife or bootstrap):   1. Generate the bundle of lines representing the trajectory of all quantiles  2. Rebuild the distribution for each timestamp  3. Sample from the new distribution and obtain the 95th percentile for each timestamp.  Downsides of using resampling here:  ● Resampling implementation is prohibitively slow and CPU­intensive.  ● Resampling hides underlying problems with the system’s dynamics.  ● Resampling does not explain the “why” of the phenomenon.  ● It introduces a resampling error due to approximation of the distribution at a future point.\n\nCongestion Detection Throughput (being proportional to task arrival rate) is not normally distributed.  In an  unconstrained system, it is generally right­skewed (bulk of the data is on the left, or lower, side  of the distribution, Figure 5a).\n\n10\n\n(a) Unconstrained\n\n(b) Constrained Figure 5: Throughput time series and distributions (the straight lines illustrate the use of linear interpolation to connect data samples) As a corollary of the statement above, in a constrained closed­loop system the data can  become bimodal and even right­skewed. (Figure 5b).\n\nSaturation Prediction The percentiles’ trajectories in Figure 3b point to a future saturation and possibly congestion.  This statement is a direct corollary from the Statement (3) above.   
It leads to a very simple  approach to congestion forecasting:  For the Throughput,  X (t) ​ ,  data:  1. Forecast the trajectories of two symmetrical far­away percentiles (e.g., first and third  quartiles, 10th and 90th percentiles, etc.).  Compute the distance between these two  lines at each timestamp,  D(t) ​ .  2. Forecast the  D(t) ​  and find  where  D(t)  =  0  ​ .  This is the saturation point as found by  these percentiles.  Following the same steps for multiple pairs of percentile lines will result in a distribution of the  congestion point prediction, leading to a measure of prediction interval.  In capacity planning,  this will give the analyst an idea of how urgent it is to add capacity to a resource, and how much  latitude there is.\n\nForecasting Growth If the growth rates of different percentiles are asymmetric ­ upper percentiles are unable to grow  as fast as lower bound due to capacity constraint (reaching saturation point) ­ how much  11\n\ncapacity do we need to add to enable upper percentiles it to grow as fast or faster than lower  percentiles? Because in capacity planning, we want to provision for the upper bound of the  throughput distribution, it is a very relevant question.    If the throughput is growing, we can use an earlier time, when it was not constrained, to  compute the skewness of the throughput distribution.  Skewness, being the third standardized  moment, is a property of the distribution that is distinct from the other moments (mean, variance,  and kurtosis).  It is fair to say that it is the property of the distribution itself and will be preserved  unless the system becomes constrained.      
An alternative measure of skewness is the Quartile (Bowley’s) form, which defines it using only  the three quartiles:\n\nquartile skewness  =\n\nQ3 + Q1 − 2 * Q2 Q3 − Q1\n\n(14)\n\nwhere  Q3 = U B = upper bound (p75)  ​ ;   Q2 = M = median ;    Q1 = LB = lower bound (p25)\n\nForecasting Method for the Higher Percentiles based on Lower Percentiles   1. For each time interval (hour, or day, or week), compute history­based skewness1:\n\nC   = median\n\n[\n\nUB(t) + LB(t) − 2 * M(t) UB(t) − LB(t)\n\n(15)\n\nwhere  C =  estimate of quartile skewness for the time series  ​ .​   It is natural to assume that the  measured skewness (14) will vary from one time interval to the next; if we treat quartile  skewness as stationary, we are dealing with a distribution of quartile skewness.  We further  assume that quartile skewness, or at least its median, can be treated as stationary.  Stationarity  will be lost during transition into and out of constrained state; however, such transitions tend to  happen undetectably fast in data spans typically used in forecasting (hours or days in transition  vs. months or years of historical data).   Figure 6 is an illustration of quartile skewness of  throughput for a network link over the course of 7 months.\n\nFigure 6. Daily Quartile Skewness for a typical resource   The median is used, rather than the mean, in order to reduce the influence of extreme values.  It  is computed over all historical­data intervals for which  U B,  LB,  M  have been computed.\n\n1\n\nWe had success with daily quartile skewness, but time interval choice depends on the data.\n\n12\n\n2. For each point  t  ​  of the forecast horizon, use quantile regression (see, e.g.,  [​ FERR2012(2)​ ] for using quantile regression in capacity planning) to compute    *(C + 1) U B(t)  =    2 * M(t) − LB(t)    1−C\n\n(16)\n\nwhere the bars designate future values:  ξ(t) is the forecasted value of  ξ at time  t .  An implementation of a forecasting algorithm based on ​ Eq. 
(16) ​ is shown in Figure 7.\n\nFigure 7: Inferring the forecast of the high percentiles of throughput distribution If the data were constrained in the historical time range used in forecasting, then the inferred  line will come out same or below the directly computed line.\n\nResults Figure 8 illustrates throughput data along with quantile­regression and inferred forecasts for  network connections.  The lines correspond to the first quartile (Q1), median (Q2), and third  quartile (Q3), as well as inferred Q3 and forecasted and inferred upper and lower outlier  boundaries (constructed using ​ Tukey’s IQR method​ ) for the three possible scenarios.\n\n13\n\n(a) Unconstrained resource: Inferred outlier boundary (first solid thick line from the top) stays below the computed outlier boundary (first dashed thin line from the top).\n\n(b): Slightly constrained resource: Inferred outlier boundary is close to the computed outlier boundary, but overtakes it at TS ~ 3000.\n\n14\n\n(c) Already congested Figure 8: Inferred and calculated upper bounds and their relative positioning to other percentiles’ trajectories For a congested resource (Figure 8c), we see that the distribution is completely skewed to the  left; the inferred outlier boundary is so steep that it is outside the frame of the picture; the  inferred Q3 projection is going significantly higher and steeper than the forecasted Q3  projection, and the median line catches up to the computed Q3 projection at TS ~ 3900.\n\nA Use Case Example Consider an enterprise having one or more ISP connections from their offices.  The IT group  needs to forecast the ISP requirement at least 6­12 months in advance to ensure on­time  delivery. The throughput X (t)  ​ leaving the enterprise’s interface is limited by the link’s bandwidth  (e.g., for an OC­3 ­­ 155 mbps ­­  link and packet sizes of 1000 bytes,  X max(t) ≤ 19375 pps  )​ .  
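For a link like this one, steps 1-2 of the method (Eqs. 15-16) come down to a few lines. The daily samples here are synthetic exponential draws standing in for real throughput measurements, and the forecast values for the median and lower quartile are invented:

```python
import random
import statistics

random.seed(7)

def quartiles(samples):
    s = sorted(samples)
    n = len(s)
    return s[n // 4], s[n // 2], s[(3 * n) // 4]   # crude p25 / median / p75

def quartile_skewness(samples):
    """Bowley's quartile skewness, Eq. (14)."""
    q1, q2, q3 = quartiles(samples)
    return (q3 + q1 - 2 * q2) / (q3 - q1)

# Step 1 (Eq. 15): median of the per-day quartile skewness over the history.
# 90 days of 96 fifteen-minute throughput measurements (pps) each.
days = [[random.expovariate(1 / 5000.0) for _ in range(96)] for _ in range(90)]
C = statistics.median([quartile_skewness(day) for day in days])

# Step 2 (Eq. 16): infer the future upper quartile from forecasts of the
# median and the lower quartile, assuming the skewness C is preserved.
M_fut, LB_fut = 9000.0, 5000.0
UB_fut = (2 * M_fut - LB_fut * (C + 1)) / (1 - C)
print(C, UB_fut)
```

If the inferred upper boundary, converted to bandwidth, exceeds the link's X_max anywhere in the forecast horizon, the link needs attention.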
X (t)   One can analyze the observed hourly or daily boxplots of  ​ for the past year and estimate  the quartile skewness using​  Eq (15).​  ​ Using methodology outlined in​  Figure 7, ​ one can then    nd use it to infer the upper boundary  obtain the forecast of the inferred 75th percentile of  X (t) ​ a forecast.  The latter can be converted to line bandwidth requirement of the ISP connection.  Note that if at any point in the forecast horizon the inferred upper boundary projection exceeds  19375 pps  , ​ then this connection would require urgent attention of the IT team.\n\nConclusion When we forecast demand for an IT resource based on the 95th percentile, the information  carried by the lower percentiles (95% of the data) remains unused, “the quiet ones”.      On the other hand, we have demonstrated and proved mathematically that when the resource is  already approaching its saturation point, the 95th­percentile approach can mislead capacity  planners to undersizing the demand. Consequently, we will always be keeping ourselves busy  upgrading capacity for such resources, which are often on critical path.      The method proposed in this paper allows detecting and predicting congestion and sizing  resource based on the trajectory of the bulk of the flow (the quartiles, and in particular the first  and second quartiles), which makes it possible to improve the efficiency of the capacity  planners’ and performance analysts’ work.\n\nAcknowledgments Authors express their sincere gratitude to Deepak Kakadia,  Matt Mathis, Andrew McGregor,  Harpreet Chadha, and Mahesh Kallahalla for making this paper possible and for reviewing it  prior to submission.\n\nReferences 1. [GUNT2009] Mind Your Knees and Queues ­ Gunther. MeasureIT, Issue 62, 2009  2. [FERR2012(1)] A Note on Knee Detection. Ferrandiz and Gilgur. Las Vegas : CMG 2012  3. 
[GILG2013] Little's Law Assumptions: "But I Still Wanna Use It!" The Goldilocks Solution to Sizing the System for Non-Steady-State Dynamics. Gilgur. MeasureIT, Issue 100, June 2013.
4. [CHDY2014] Back to the Future of IT Resource Performance Modeling and Capacity Planning. Choudhury. Proceedings of the 2014 3rd International Conference on Educational and Information Technology (ICEIT 2014), Toronto, Canada, 2014.
5. [CRFT2006] Utilization is Virtually Useless as a Metric! Cockcroft. International Conference of the Computer Measurement Group (CMG'06), Reno, NV, 2006.
6. [FERR2012(2)] Level of Service Based Capacity Planning. Ferrandiz and Gilgur. CMG '12 International Conference of the Computer Measurement Group, Las Vegas, NV, 2012.
7. [FERR2014] Capacity Planning for QoS. Ferrandiz and Gilgur. Journal of Computer Resource Management, Issue 135 (Winter 2014), pp. 15-24.
8. [OSTR2011] Minimizing System Lockup During Performance Spikes: Old and New Approaches to Resource Rationing. Ostermueller. Proceedings of the 37th International Conference of the Computer Measurement Group (CMG'11), Washington, DC.
9. [LTTL2011] Little's Law as Viewed on Its 50th Anniversary. Little. Operations Research, Vol. 59, No. 3, May-June 2011, pp. 536-549.
10. [MAKR1998] Forecasting: Methods and Applications. Makridakis, Wheelwright, Hyndman. Wiley, 1998.
11. [ARMS2001] Principles of Forecasting: A Handbook for Researchers and Practitioners. J. Scott Armstrong. Springer, 2001.

Percentile-Based Approach to Forecasting - Research at Google
" ]
https://www.physicsforums.com/threads/negative-integer-trig.399757/
[ "# Negative integer trig

I know that $$\sin^2 b = (\sin b)^2$$ and in general $$\sin^n b = (\sin b)^n$$ if n is a positive integer.

What if n is a negative integer? Would it be

$$\sin^{-1} b = (\sin b)^{-1} = \frac{1}{\sin b}$$

I don't think this is right, because properties of indices only work for numbers and NOT functions, but why does it work for case 1 above?

What if it's $$\sin^{-3} b$$?

How about if n is rational?

D H
Staff Emeritus

$x^{-n} = 1/x^n$ for all integers n. It doesn't really matter what x is. This x might just be some variable x, or it might be $\sin b$. Things get a bit trickier when n is not necessarily an integer. That $x^{-n} = 1/x^n$ still works so long as n is real and x is a positive real. Things get a bit more complex with negative real x and complex x and/or n.

However, and this is a big however, there is a bit of ambiguity regarding $\sin^{-1} b$. This might mean $1/\sin b$ or it might mean $\arcsin b$. To avoid this ambiguity, people write $1/\sin b$ or $(\sin b)^{-1}$ but never $\sin^{-1} b$ when they want to express $1/\sin b$.

thanks!
So $$\sin^{-1} b$$ can be taken as $\arcsin b$?

But these do not work for functions, right? That is, $$f^{-1}(x) \neq \frac{1}{f(x)}$$

Also, does it apply to logarithms?

$$\log^5(x) = (\log x)^5$$ ??

Last edited:

EDIT: See corrections in a later post.

I would say that $f^n(x) = (f(x))^n$ for $n \in \mathbb{N}$ applies to sin, cos, tan, cot, sinh, cosh, tanh and coth only. Though I can't ever remember seeing it, I'd understand the same for $n \in \mathbb{Z} - \{1\}$.

$f^{-1}$ is a function selected from the converse, $\breve{f}$, of $f$ (as a relation); this doesn't apply to anything outside the same set, and does apply to sin, cos, tan and cot. (I wouldn't like to be dogmatic about this use with the hyperbolic functions - I think you might have to go with the context in these cases.)

Otherwise I think for $n \in \mathbb{Z}$, $f^n(x)$ would generally mean $x$ for $n = 0$, $f(f^{n-1}(x))$ for $n > 0$, and $\breve{f}^{-n}(x)$ when $n < 0$, which would imply that $f$ is an invertible function in the last case.

So for example $\log^5(x)$ should mean $\log(\log(\log(\log(\log(x)))))$, but even here I think you would generally see $\log(\log(x))$ in preference to $\log^2(x)$.

Last edited:
D H
Staff Emeritus

You will see in many papers things like $\ln^2 x$ and $f^4(x)$. There is yet another ambiguity with the latter: does $f^4(x)$ mean $(f(x))^4$ or $d^4 f(x)/dx^4$? The latter is quite non-standard, but it is out there. More typical is $f^{(iv)}(x)$ to denote the fourth derivative and $f^{(n)}(x)$ to denote the nth derivative.

Bottom line:
• When you see something like $f^n(x)$ you had better look for a nomenclature section or read the text to decipher what the author wrote.
• Never use $f^{-1}(x)$ to denote the multiplicative inverse. That notation is almost always reserved for the inverse function.
• Never use $f^{n}(x)$ to denote the nth derivative. You are going to confuse your readers mightily.
• Take care and think twice when you use $f^{n}(x)$.
Ask yourself whether this usage might be confusing to your readers.

There is a mantra regarding computer programming that also applies to writing a technical paper: "Always code and comment as if the person who ends up maintaining your code will be a psychopath who knows where you live."

Unfortunately things are actually even more ambiguous.

With general functions, the same notation is used for the result of applying a function to an element of its domain and applying it to a subset of its domain (and presumably therefore to a subset of the power set of its domain, etc.).

So if $\alpha \in S$ and $\alpha \subset S$ and $f$ is a function with domain $S$, then $f(\alpha)$ could mean the value of $f$ for the argument $\alpha$, or $\{f(x) : x \in \alpha\}$, where $f(x)$ is here the value of $f$ for the argument $x$.

Worse, $f^{-1}$ can refer either to the inverse function of $f$ if it exists, or to a function $f^{-1} : \mathfrak{P}(\mathcal{R}) \rightarrow \mathfrak{P}(\mathcal{D}_f)$, where $\mathfrak{P}$ denotes the power set, $\mathcal{D}_f$ is the domain of $f$ and $\mathcal{R}$ is at least the range of $f$ (which may itself be ambiguous), such that $f^{-1} : \beta \subset \mathcal{R} \mapsto \{\gamma \in \mathcal{D}_f : f(\gamma) \in \beta\}$ (here $f(\gamma)$ is the value of $f$ for the argument $\gamma$).

No doubt the ambiguities inherent in the foregoing could be confabulated to arbitrary heights, so it's quite surprising that it works at all. In practice it causes little confusion.

$f^{-1}$ is a function selected from the converse, $\breve{f}$, of $f$ (as a relation); this doesn't apply to anything outside the same set, and does apply to sin, cos, tan and cot. (I wouldn't like to be dogmatic about this use with the hyperbolic functions - I think you might have to go with the context in these cases.)

Actually I think I managed to confuse myself here.
When $f$ is invertible, the usage $f^{-1}$ to mean the converse would be the normal use. In this case it would of course also be "a function selected from the converse", viz. all of it.

Since apart from cosh and sech the hyperbolic functions are essentially invertible anyway, I would guess that $\sinh^{-1}$ etc. would also refer to the inverse functions.

Not only that, I missed out sec, cosec, sech and csch from the list. For these also $\sec^2(x) = (\sec(x))^2$ etc.

So all in all a pretty good job.

By the way, I've never seen $f^{-1}(x)$ used to mean $1/f(x)$ for anything.

Mentallic
Homework Helper

In the case of trigonometry, for all n > 0 we can write $$\sin^n x = (\sin x)^n$$ where the LHS is understood as equalling the RHS. To cover n < 0 we use the fact that $$(\sin x)^{-1} = \csc x \neq \sin^{-1} x$$: for n > 0, $$(\sin x)^{-n} = ((\sin x)^{-1})^{n} = (\csc x)^n = \csc^n x$$

This avoids the ambiguity of the inverse sine function $$\sin^{-1} x$$ being confused with the reciprocal of sin.

thanks all! So in conclusion,

$\tan^{-1} x$, $f^{-1}$, $\log^{-1}(x)$ are all meant to be inverses most of the time.

I understood now!

Mentallic
Homework Helper

Yep! But I don't know where you would see $$\log^{-1} x$$ since it has another form entirely to express that, ex.

Yep!

Apart from:

(a) What Mentallic said about log applies to many other functions, so for named functions the $^{-1}$ notation is probably little used except for trigonometric and hyperbolic functions, and even here arcsin, arsinh etc.
seem to be preferred these days.

(b) When used with ad hoc function names, e.g. $f^{-1}$, it probably most often means something other than the inverse, viz. one of the ambiguous possibilities I mentioned in an earlier post.

So if f is defined by:

$a \mapsto 1$
$b \mapsto 2$
$c \mapsto \{1,2\}$

then $f^{-1}(\{1,2\})$ could mean variously:

(i) $c$ - the image of $\{1,2\}$ under the function inverse to $f$.
(ii) $\{c\}$ - the set of elements that map to the element $\{1,2\}$ under $f$.
(iii) $\{a,b\}$ - the set of elements that map to a member of the set of elements $\{1,2\}$ under $f$.

The meaning (i) is the one that you suggested would be meant most of the time, but had $f$ included $d \mapsto 1$ it would no longer have been invertible and that meaning disappears.

This you just have to live with, but as I said it doesn't cause too much confusion in practice.

thanks!" ]
https://www.programiz.com/cpp-programming/pointers-arrays
[ "# C++ Pointers and Arrays

In this tutorial, we will learn about the relation between arrays and pointers with the help of examples.

In C++, pointers are variables that hold addresses of other variables. A pointer can store not only the address of a single variable, but also the address of the cells of an array.

Consider this example:

``````int *ptr;
int arr[5];

// store the address of the first
// element of arr in ptr
ptr = arr;``````

Here, ptr is a pointer variable while arr is an `int` array. The code `ptr = arr;` stores the address of the first element of the array in the variable ptr.

Notice that we have used `arr` instead of `&arr[0]`. This is because both are the same. So, the code below is the same as the code above.

``````int *ptr;
int arr[5];
ptr = &arr[0];``````

The addresses for the rest of the array elements are given by `&arr[1]`, `&arr[2]`, `&arr[3]`, and `&arr[4]`.

## Point to Every Array Element

Suppose we need to point to the fourth element of the array using the same pointer ptr.

Here, if ptr points to the first element in the above example, then `ptr + 3` will point to the fourth element. For example,

``````int *ptr;
int arr[5];
ptr = arr;

ptr + 1 is equivalent to &arr[1];
ptr + 2 is equivalent to &arr[2];
ptr + 3 is equivalent to &arr[3];
ptr + 4 is equivalent to &arr[4];``````

Similarly, we can access the elements using the single pointer. For example,

``````// use dereference operator
*ptr is equivalent to arr[0];
*(ptr + 1) is equivalent to arr[1];
*(ptr + 2) is equivalent to arr[2];
*(ptr + 3) is equivalent to arr[3];
*(ptr + 4) is equivalent to arr[4];``````

Suppose if we have initialized `ptr = &arr[2];` then

``````ptr - 2 is equivalent to &arr[0];
ptr - 1 is equivalent to &arr[1];
ptr + 1 is equivalent to &arr[3];
ptr + 2 is equivalent to &arr[4];``````

Note: The address between ptr and ptr + 1 differs by 4 bytes. It is because ptr is a pointer to `int` data.
And, the size of an int is 4 bytes on a typical 64-bit operating system.

Similarly, if the pointer ptr is pointing to `char` type data, then the address between ptr and ptr + 1 differs by 1 byte. It is because the size of a character is 1 byte.

## Example 1: C++ Pointers and Arrays

``````// C++ Program to display the address of each element of an array

#include <iostream>
using namespace std;

int main()
{
float arr[3];

// declare pointer variable
float *ptr;

cout << "Displaying address using arrays: " << endl;

// use for loop to print addresses of all array elements
for (int i = 0; i < 3; ++i)
{
cout << "&arr[" << i << "] = " << &arr[i] << endl;
}

// ptr = &arr[0]
ptr = arr;

cout << "\nDisplaying address using pointers: " << endl;

// use for loop to print addresses of all array elements
// using pointer notation
for (int i = 0; i < 3; ++i)
{
cout << "ptr + " << i << " = " << ptr + i << endl;
}

return 0;
}``````

Output

```Displaying address using arrays:
&arr[0] = 0x61fef0
&arr[1] = 0x61fef4
&arr[2] = 0x61fef8

Displaying address using pointers:
ptr + 0 = 0x61fef0
ptr + 1 = 0x61fef4
ptr + 2 = 0x61fef8```

In the above program, we first simply printed the addresses of the array elements without using the pointer variable ptr.

Then, we used the pointer ptr to point to the address of arr[0], `ptr + 1` to point to the address of arr[1], and so on.

In most contexts, array names decay to pointers. In simple words, array names are converted to pointers. That's the reason why we can use pointers to access elements of arrays.

However, we should remember that pointers and arrays are not the same.

There are a few cases where array names don't decay to pointers.
To learn more, visit: When does an array name not decay into a pointer?

## Example 2: Array name used as pointer

``````// C++ Program to insert and display data entered by using pointer notation

#include <iostream>
using namespace std;

int main() {
float arr[5];

// Insert data using pointer notation
cout << "Enter 5 numbers: ";
for (int i = 0; i < 5; ++i) {

// store input number in arr[i]
cin >> *(arr + i);

}

// Display data using pointer notation
cout << "Displaying data: " << endl;
for (int i = 0; i < 5; ++i) {

// display value of arr[i]
cout << *(arr + i) << endl;

}

return 0;
}``````

Output

```Enter 5 numbers: 2.5
3.5
4.5
5
2
Displaying data:
2.5
3.5
4.5
5
2```

Here,

1. We first used the pointer notation to store the numbers entered by the user into the array arr.

``cin >> *(arr + i);``

This code is equivalent to the code below:

``cin >> arr[i];``

Notice that we haven't declared a separate pointer variable, but rather we are using the array name arr for the pointer notation.

As we already know, the array name arr points to the first element of the array. So, we can think of arr as acting like a pointer.

2. Similarly, we then used a `for` loop to display the values of arr using pointer notation.

``cout << *(arr + i) << endl;``

This code is equivalent to

``cout << arr[i] << endl;``" ]
http://jvestrada.com/ac-odyssey-wsa/wave-nature-of-electromagnetic-radiation-class-11-chemistry-c0b34b
[ "## Wave Nature of Electromagnetic Radiation (Class 11 Chemistry)

Electromagnetic radiation is a form of energy transmitted through space as waves of oscillating electric and magnetic fields, perpendicular to each other and to the direction of propagation. These waves can travel through a vacuum at a constant speed of 2.998 × 10^8 m/s, the speed of light, denoted by c. A wave is characterized by its wavelength (the distance between two successive crests or troughs), its frequency (the number of waves passing a point in one second), its velocity and its amplitude (the height of the crest or depth of the trough); since c = νλ, a wave of higher frequency has a shorter wavelength. Arranging all these radiations in order of increasing wavelength (decreasing frequency) gives the electromagnetic spectrum: gamma rays, X-rays, ultraviolet light, visible light, infrared light, microwaves and radio waves, with adjacent regions overlapping. X-rays have wavelengths of about 10^-8 to 10^-12 metre; analysis of X-ray images of the body is a valuable medical diagnostic tool.

The wave nature of light explains phenomena such as interference and diffraction, but it could not explain black body radiation or the photoelectric effect. Solids on heating emit radiations over a wide range of wavelengths, and as heating continues, radiations of higher and higher frequency are emitted. Planck proposed that energy is emitted or absorbed discontinuously, in quanta of energy E = hν, where h = 6.63 × 10^-34 J s is Planck's constant; if n is the number of quanta of a particular frequency, the total energy is E_t = nhν. In the photoelectric effect, when light of frequency above a threshold falls on a metal plate, electrons are ejected from it. Electromagnetic radiation therefore shows dual behavior: wave-like as well as particle-like.

Thomson pictured the atom as electrons embedded within a sphere of positive charge, known as the plum pudding model. Rutherford, on the basis of his scattering experiment, discovered the nucleus; the atomic number is the total number of protons in the nucleus. Anode rays travel in straight lines and possess mass many times the mass of an electron. According to Maxwell's electromagnetic theory, an accelerated charged particle emits electromagnetic radiation; an electron revolving in an orbit is under acceleration, so it should continuously radiate, the orbit should shrink, and the atom should be unstable.

Bohr proposed instead that electrons move only in certain fixed permissible orbits, in which they neither gain nor lose energy; energy is emitted or absorbed only when an electron jumps from a higher energy level to a lower one, or vice versa. The angular momentum of the electron is an integral multiple of h/2π, i.e., mvr = nh/2π, and the gaps between successive levels decrease: E2 − E1 > E3 − E2 > E4 − E3 > E5 − E4, etc. The Bohr model applies only to hydrogen-like species (H, He+, Li2+); it cannot explain the spectra of multi-electron atoms, the fine structure of spectral lines (doublets), the splitting of spectral lines in a magnetic field (Zeeman effect) or an electric field (Stark effect), the ability of atoms to form molecules by chemical bonds, or the electronic distribution of electrons around the nucleus.

de Broglie (1924) suggested that matter, like radiation, has dual nature: a particle of mass m moving with velocity v has an associated matter wave of wavelength λ = h/mv. This is consistent with the Heisenberg uncertainty principle. Schrödinger's equation, Ĥψ = Eψ, where Ĥ is the total energy (Hamiltonian) operator, treats the electron as a wave; ψ can be positive or negative, and plots of the probability of finding the electron are called probability diagrams. Nodes are of two types, and the number of peaks and nodes varies from orbital to orbital.

Each electron in an atom is identified by four quantum numbers. The principal quantum number n gives the shell, the energy of the shell, the average distance of the electron from the nucleus, and the number of subshells (s, p, d, f) in a main shell. The azimuthal quantum number l gives the shape of the orbital and the orbital angular momentum of the electron; the number of orbitals in a subshell is (2l + 1) and in a main energy level n². The magnetic quantum number m takes integral values from −l to +l, including zero. The spin quantum number s (Uhlenbeck and Goudsmit) indicates the direction of spinning of the electron, clockwise or anticlockwise; no two electrons can have identical values of all four quantum numbers. The maximum number of electrons is 2 in an s subshell, 6 in p, 10 in d and 14 in f. Half-filled and completely filled configurations are extra stable; electronic configurations are written either in the subshell notation (e.g., 1s², 2s², 2p⁶, 3s², 3p⁵) or in the box method, in which each orbital is denoted by a box and electrons by half-headed or full-headed arrows. In Slater's screening scheme, electrons in the same (ns, np) group contribute σ = 0.35 each, while for nd and nf electrons those in the same group contribute σ = 0.35 and all electrons in inner shells contribute σ = 1.00 each.

Related species: isotopes have the same atomic number but different mass numbers; isobars have the same mass number but different atomic numbers (e.g., 18Ar40 and 19K40); isotones have the same number of neutrons (e.g., 1H3 and 2He4); isodiaphers have the same neutron-proton difference (e.g., 19K39 and 9F19); isoelectronic species have the same number of electrons (e.g., Na+, Mg2+); isosteres have the same number of atoms and the same number of electrons (e.g., N2 and CO). Other subatomic particles include the positron (0+1e), predicted by Dirac (1930) and discovered by Anderson (1932); mesons, discovered by Yukawa (1935); and the antiproton (Segrè and Wiegand, 1955). Neutrons are neutral particles.
Is highly rated by Class 11 Chemistry study material and a smart plan! Maxwell in 1864 is unable to explain the concept of dual character role in the form energy... Of oscillation or its wavelength or electromagnetic radiation can be recorded cm in length matter have many topics one! Positive or negative but ‘ i ’: it tells about the energy shell... Chapter 9 transfer energy from one place to another place and f subshell is 10 and f is. It states, no two electrons in ( ns, np ) group contribute σ = each. The field of propagation of wave is obtained when a substance emits radiation after absorbing energy its. Emitted from the metal surface that when electrically charged particles when accelerated must emit electromagnetic radiations are arranged in to! Plotted between Ψ2 and distance from nucleus is called matter wave NCERT Class! Through space by the periodic oscillation of matter in a discharge tube which a. Visualiz… dual behavior of electromagnetic radiation is a flow of energy in which beam of light electromagnetic radiation interference. 3 × 10 10 cm/sec d ) Anti-proton it is the reciprocal the. Class 12 Notes: according to this course to another place idea about the energy of and! Produced between s 1 and s 2 and electromagnetic waves are supposed to have particle nature number. Means the wavelength … electromagnetic wave emitted by source travels 2 1 km arrive. Radiation comes from electronic motion Chemistry chapter 2 Structure of atom to form molecules chemical. Fields are produced and transmitted best suited for understanding the proof of it is beyond our scope of study we... Can represent waves mathematically, in a vacuum Fermi ( 1934 ) place in field.\nwave nature of electromagnetic radiation class 11 chemistry 2021" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.89081407,"math_prob":0.9373018,"size":15364,"snap":"2021-04-2021-17","text_gpt3_token_len":3307,"char_repetition_ratio":0.16217448,"word_repetition_ratio":0.07034976,"special_character_ratio":0.21862796,"punctuation_ratio":0.1314406,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95800495,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-23T00:50:47Z\",\"WARC-Record-ID\":\"<urn:uuid:52d8a38d-41d1-489b-a453-06f4f5c530bc>\",\"Content-Length\":\"27043\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ed24ed87-f8b1-4c8c-8db2-4cdacc26f085>\",\"WARC-Concurrent-To\":\"<urn:uuid:08b54bff-0d74-4fe3-98e8-dfb3d90dfbd7>\",\"WARC-IP-Address\":\"198.20.253.50\",\"WARC-Target-URI\":\"http://jvestrada.com/ac-odyssey-wsa/wave-nature-of-electromagnetic-radiation-class-11-chemistry-c0b34b\",\"WARC-Payload-Digest\":\"sha1:YLC2FI53RG6YOVC6RKUIN257OEM7NFEK\",\"WARC-Block-Digest\":\"sha1:7AN3IAA7PJDIBIC3E2EPWBB2ATWRI6TR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618039563095.86_warc_CC-MAIN-20210422221531-20210423011531-00323.warc.gz\"}"}
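The relation velocity = frequency × wavelength quoted in the Class 11 notes above can be illustrated with a short sketch (Python; the 500 nm wavelength is an illustrative value for green visible light, not a number taken from the notes):

```python
# Minimal sketch of c = frequency x wavelength for electromagnetic radiation.
c = 3.0e8            # speed of light in vacuum, m/s (the notes quote 3 x 10^10 cm/s)
wavelength = 500e-9  # 500 nm, an illustrative value for green visible light

frequency = c / wavelength  # nu = c / lambda
print(f"frequency = {frequency:.1e} Hz")  # frequency = 6.0e+14 Hz
```

The same one-line rearrangement gives wavelength from frequency, which is all the spectrum orderings in the notes rely on.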
https://fr.maplesoft.com/support/help/Maple/view.aspx?path=Formats/ODS
[ "", null, "ODS - Maple Help\n\nODS file format", null, "Description\n\n • ODS (OpenDocument Spreadsheet) is an XML-based spreadsheet file format used by OpenOffice.\n • The commands ImportMatrix and ExportMatrix can read and write to the ODS format.\n • The general-purpose commands Import and Export also support this format.\n • The default output from Import for this format is a DataSeries, the individual elements of which are DataFrames corresponding to worksheets within the ODS spreadsheet.", null, "Notes", null, "Examples\n\nImport an ODS spreadsheet listing the highest mountain peaks in the world.\n\n > $\\mathrm{Import}\\left(\"example/HighestPeaks.ods\",\\mathrm{base}=\\mathrm{datadir}\\right)$\n $\\left[\\begin{array}{cc}{\"Maple Data\"}& \\left[\\begin{array}{cccc}{}& {\\mathrm{Height \\left(m\\right)}}& {\\mathrm{Location}}& {\\mathrm{First ascent}}\\\\ {\\mathrm{Mount Everest}}& {8848}& {\"27°59\\text{'}17\"N 86°55\\text{'}31\"E\"}& {1953}\\\\ {\\mathrm{K2}}& {8611}& {\"35°52\\text{'}53\"N 76°30\\text{'}48\"E\"}& {1954}\\\\ {\\mathrm{Kangchenjunga}}& {8586}& {\"27°42\\text{'}12\"N 88°08\\text{'}51\"E\"}& {1955}\\\\ {\\mathrm{Lhotse}}& {8516}& {\"27°57\\text{'}42\"N 86°55\\text{'}59\"E\"}& {1956}\\\\ {\\mathrm{Makalu}}& {8485}& {\"27°53\\text{'}23\"N 87°5\\text{'}20\"E\"}& {1955}\\\\ {\\mathrm{Cho Oyu}}& {8188}& {\"28°05\\text{'}39\"N 86°39\\text{'}39\"E\"}& {1954}\\\\ {\\mathrm{Dhaulagiri I}}& {8167}& {\"28°41\\text{'}48\"N 83°29\\text{'}35\"E\"}& {1960}\\\\ {\\mathrm{Manaslu}}& {8163}& {\"28°33\\text{'}00\"N 84°33\\text{'}35\"E\"}& {1956}\\\\ {\\mathrm{Nanga Parbat}}& {8126}& {\"35°14\\text{'}14\"N 74°35\\text{'}21\"E\"}& {1953}\\\\ {\\mathrm{Annapurna I}}& {8091}& {\"28°35\\text{'}44\"N 83°49\\text{'}13\"E\"}& {1950}\\end{array}\\right]\\end{array}\\right]$ (1)\n\nImport the same data as above but returned as a table of Matrices.\n\n > $\\mathrm{Import}\\left(\"example/HighestPeaks.ods\",\\mathrm{base}=\\mathrm{datadir},\\mathrm{output}=\\mathrm{table}\\right)$\n 
${table}{}\\left(\\left[{\"Maple Data\"}{=}\\begin{array}{c}\\left[\\begin{array}{cccc}{\"Name\"}& {\"Height \\left(m\\right)\"}& {\"Location\"}& {\"First ascent\"}\\\\ {\"Mount Everest\"}& {8848}& {\"27°59\\text{'}17\"N 86°55\\text{'}31\"E\"}& {1953}\\\\ {\"K2\"}& {8611}& {\"35°52\\text{'}53\"N 76°30\\text{'}48\"E\"}& {1954}\\\\ {\"Kangchenjunga\"}& {8586}& {\"27°42\\text{'}12\"N 88°08\\text{'}51\"E\"}& {1955}\\\\ {\"Lhotse\"}& {8516}& {\"27°57\\text{'}42\"N 86°55\\text{'}59\"E\"}& {1956}\\\\ {\"Makalu\"}& {8485}& {\"27°53\\text{'}23\"N 87°5\\text{'}20\"E\"}& {1955}\\\\ {\"Cho Oyu\"}& {8188}& {\"28°05\\text{'}39\"N 86°39\\text{'}39\"E\"}& {1954}\\\\ {\"Dhaulagiri I\"}& {8167}& {\"28°41\\text{'}48\"N 83°29\\text{'}35\"E\"}& {1960}\\\\ {\"Manaslu\"}& {8163}& {\"28°33\\text{'}00\"N 84°33\\text{'}35\"E\"}& {1956}\\\\ {\"Nanga Parbat\"}& {8126}& {\"35°14\\text{'}14\"N 74°35\\text{'}21\"E\"}& {1953}\\\\ {⋮}& {⋮}& {⋮}& {⋮}\\end{array}\\right]\\\\ \\hfill {\\text{11 × 4 Matrix}}\\end{array}\\right]\\right)$ (2)\n\nExport a random matrix to an ODS spreadsheet in the home directory of the current user.\n\n > $M≔\\mathrm{LinearAlgebra}:-\\mathrm{RandomMatrix}\\left(100,2\\right):$\n > $\\mathrm{Export}\\left(\"example.ods\",M,\\mathrm{base}=\\mathrm{homedir}\\right)$\n ${24639}$ (3)", null, "Compatibility\n\n • With Maple 2016, the Import command applied to ODS files now produces DataSeries objects by default. To produce a table, use Import(...,output=table)." ]
[ null, "https://bat.bing.com/action/0", null, "https://fr.maplesoft.com/support/help/Maple/arrow_down.gif", null, "https://fr.maplesoft.com/support/help/Maple/arrow_down.gif", null, "https://fr.maplesoft.com/support/help/Maple/arrow_down.gif", null, "https://fr.maplesoft.com/support/help/Maple/arrow_down.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.536647,"math_prob":0.9787518,"size":2045,"snap":"2022-27-2022-33","text_gpt3_token_len":660,"char_repetition_ratio":0.1077903,"word_repetition_ratio":0.102564104,"special_character_ratio":0.35647923,"punctuation_ratio":0.108552635,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.996174,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-27T14:36:42Z\",\"WARC-Record-ID\":\"<urn:uuid:6b4fd275-ed3b-490e-af2a-7e23df1ad481>\",\"Content-Length\":\"173984\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6a9c2dbd-8599-400e-a5c8-a07f164afcf3>\",\"WARC-Concurrent-To\":\"<urn:uuid:31d2d13d-63e6-4800-8a70-3a8f350170ff>\",\"WARC-IP-Address\":\"199.71.183.28\",\"WARC-Target-URI\":\"https://fr.maplesoft.com/support/help/Maple/view.aspx?path=Formats/ODS\",\"WARC-Payload-Digest\":\"sha1:E64YTRB76HUBCATV3VYZMWOTQ7KT5JVM\",\"WARC-Block-Digest\":\"sha1:BY22VNDLNJ3DNR7IFHIZUEA4RMRHWQWS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103334753.21_warc_CC-MAIN-20220627134424-20220627164424-00424.warc.gz\"}"}
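The Maple help record above describes ODS as an XML-based spreadsheet format. As an illustration of that claim — not Maple code, and not a complete ODS writer (a real file also needs a META-INF/manifest.xml and the full office namespaces) — here is a stdlib-only sketch that zips a minimal, hypothetical content.xml and reads a cell back:

```python
# Sketch: an .ods file is a ZIP archive whose sheet data lives in content.xml.
# The content.xml below is a minimal, hypothetical fragment for illustration.
import io
import zipfile
import xml.etree.ElementTree as ET

TABLE_NS = "urn:oasis:names:tc:opendocument:xmlns:table:1.0"
TEXT_NS = "urn:oasis:names:tc:opendocument:xmlns:text:1.0"

content = f"""<?xml version="1.0" encoding="UTF-8"?>
<document-content xmlns:table="{TABLE_NS}" xmlns:text="{TEXT_NS}">
  <table:table table:name="Maple Data">
    <table:table-row>
      <table:table-cell><text:p>Mount Everest</text:p></table:table-cell>
      <table:table-cell><text:p>8848</text:p></table:table-cell>
    </table:table-row>
  </table:table>
</document-content>"""

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:  # build the zip in memory
    z.writestr("mimetype", "application/vnd.oasis.opendocument.spreadsheet")
    z.writestr("content.xml", content)

with zipfile.ZipFile(buf) as z:       # read it back, as any ODS consumer would
    root = ET.fromstring(z.read("content.xml"))
cells = [p.text for p in root.iter(f"{{{TEXT_NS}}}p")]
print(cells)  # ['Mount Everest', '8848']
```

The zipped-XML layout is why generic tools can inspect a spreadsheet that Maple's Export produced without knowing anything about Maple itself.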
https://robertedwardgrant.com/universal-mathematics-is-a-ratio-and-perception-based-programming-language/
[ "NEW DISCOVERY\nUniversal Mathematics is a Ratio and Perception/Based (‘Self-Similar’ at all Scales) Programming Language comprised of Circular Wave (with Inscribed Geometries) Intersections inherent to the Flower of Life matrix. The Flower of Life (and by extension, Metatron’s Cube) IS the Codex/Rosetta Stone to comprehending this Universal Language. Observations: 1. Only the CORRECT Factoring Right Triangle will produce a Perfect Square value for its Hypotenuse length (in the example above, 324^.5 = 18, NOT a fractional result). This means that only a perfect Prime Factorization will yield Perfect Square values for each of the three sides of a Right Triangle, whereas all other Right Triangle configurations can only produce a maximum of TWO such perfect square value sides.\n\nAlong these lines, ALL 1/(B) separations (lengths of side (A)) will produce Integer square root values for side (C), with each 1/(B) increment increasing the root value by an ascending ODD Number. (See the above Right Triangle with Side (A) = 1; Side (C) = 300^.5 (Root300); with the next Triangle: (A) = 2; (C) = 303^.5 (Root303), which is 3 greater than the prior Triangle (left).) This sequence continues infinitely. Finally, note that all 1/(B) separations of Side (A) length are marked by multiple Two Circle Intersections (Orange and Magenta circles).\n\nScience IRREVOCABLY PROVES the Architecture/ARCHITECT as well as the Principle of NON-RANDOMNESS.\n\n### One Comment\n\n•", null, "Antonia Rovayo says:\n\nExcellent conclusion expressed in the most simplified language that exists, such as mathematics. Everything you express needs an experimental practical demonstration. Can you read the article on Fibonacci ? Please, thanks\nhttps://medium.com/@antoniarovayo" ]
[ null, "https://secure.gravatar.com/avatar/30499fbf8bf1f9a1e63bdad92250ca85", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8406426,"math_prob":0.94134057,"size":1702,"snap":"2022-27-2022-33","text_gpt3_token_len":388,"char_repetition_ratio":0.09776207,"word_repetition_ratio":0.0,"special_character_ratio":0.21974148,"punctuation_ratio":0.10472973,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98556054,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-29T01:37:06Z\",\"WARC-Record-ID\":\"<urn:uuid:6c853407-2ed3-45e2-ac89-394b316621a7>\",\"Content-Length\":\"189390\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0f728508-d71b-41e2-b7f2-0fdfbcd3527a>\",\"WARC-Concurrent-To\":\"<urn:uuid:f9acbd45-0116-4ea1-b81e-2fe1fceb9a37>\",\"WARC-IP-Address\":\"141.193.213.21\",\"WARC-Target-URI\":\"https://robertedwardgrant.com/universal-mathematics-is-a-ratio-and-perception-based-programming-language/\",\"WARC-Payload-Digest\":\"sha1:UE3K2LIERBBETXH6HZUFC7NPI7ALQVOC\",\"WARC-Block-Digest\":\"sha1:4R43GXXHQH5Y5BQQ3VAJVY5V54GYQ6SE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103619185.32_warc_CC-MAIN-20220628233925-20220629023925-00674.warc.gz\"}"}
https://physics.stackexchange.com/questions/19515/could-gravity-hold-electron-charge-together
[ "# Could gravity hold electron charge together?\n\nCould the gravitational force be what holds the charge of the electron together? It seems to be the only obvious possibility; what other ideas have been proposed besides side-stepping the issue and assuming a \"point charge\"? How would this affect the electron \"self-energy\" problem? The question is related to the idea of geons.\n\n## 3 Answers\n\nThis was an idea Einstein had soon after developing GR, and it is developed in the classical unified field literature, with the starting point being Einstein's \"Do Gravitational Fields Play a Role in the Constitution of the Elementary Particles?\" This is one of the defining program papers of the unified field framework.\n\nThis idea is also discussed off and on within heuristic models of charges throughout the 1950s-80s. The essential points are that as you make the electron smaller, at some point, gravity will become dominant. The problem with such ideas is that they generally do not have a good idea of quantum gravity to make the microscopic model precise.\n\nAll these classically inspired ideas are obsoleted by string theory and subsumed into it. Within string theory, the fundamental objects are dual to black holes, so that their classical limit is identifiable as a recognizable extremally charged black hole of the classical limit supergravity theory. Aside from identifiable black holes, there is no other matter (arguably--- there is the question of whether orbifolds count as \"matter\"). So for example, for the M-theory, the objects are the extremally charged M2-branes which are the extremally charged black holes you can make using the 3 form gauge field, and their 5-brane magnetic duals (the magnetically extremally charged black holes). That's it for M theory. 
The brane-spectrum of a theory is the answer to the classical question \"what extremal black hole can I form?\"\n\nThe identification of black holes with matter is important, because the internal construction of strings is fully specified by the theory. So that the electron, if it is a string theory excitation, is an object whose internal structure is completely known, because you know the scattering off the electron at arbitrarily high energies. Further, in the strong scattering case, we can continuously link the electron to both neutral and extremally charged black holes, so that the theory is a full realization of Einstein's program.\n\nSince I believe string theory is the correct theory of everything, I don't think there is much point in investigating these types of ideas in a different direction. But some people who like loops disagree.\n\n• I think the idea of electrons as black holes or spiraling vortexes of some sort works much better than strings. This would explain the negative charge. Black holes reject material when too much comes in at once. So multiple black holes together could form a hydrostatic equilibrium where the excess material radiates away, keeping the black holes away from each other. These equilibria could be protons or neutrons. Electrons as spiraling black holes would hover around the nucleus, also in a hydrostatic equilibrium. That's one way gravity could play a part – Bill Alsept Nov 16 '16 at 22:33\n\nYes, it can. Here is a toy model using Newtonian gravity (below, /\\ denotes the cosmological constant Λ).\n\nV/c^2 = e^2/mc^2r - /\\r^2\n\ne^2/mc^2 = rc (classical electron radius)\n\nwith SSS metric\n\ng00 = 1 + 2V/c^2 = - 1/grr\n\ng/c^2 = -dV/dr = + rs/r + 2/\\r\n\nWe can get g = 0 with /\\ < 0 i.e.
AdS metric\n\nIn a vacuum where the w = -1 virtual electron positron pairs surrounding the bare charge have higher density than the w = -1 virtual photons, we can have /\\ < 0.\n\nThe equilibrium will also be stable looking at d^2V/dr^2.\n\nThis neglects spin, but we can model that with the centrifugal potential\n\nhttp://en.wikipedia.org/wiki/Effective_potential\n\nThe short answer... we do not know. (Where 'we' is humanity or physicists - take your pick.)\n\nA more interesting answer is... The electron size is known to be 10^-18 meters or smaller. If gravity were holding it together then it might be at the Schwarzschild radius.\n\nRs = 2GM/c^2\n\nso with values substituted it would be\n\n2 x 6.67300x10^-11 x 9.10938291x10^−31 / (3x10^8)^2 = 1.35x10^-57 meters\n\nHowever, this is less than the Planck length (about 1.6x10^-35 m). Therefore, if it is held by gravity then it would likely have a radius near the Planck length. Supersymmetry (SUSY), for example, has gravity increasing so that all forces become equal in strength at the Planck scale.\n\nIf you check out Leonard Susskind's lectures on ER=EPR and GR you will find that he thinks that physics is leading us towards the idea that elementary particles and black holes might be related. Black holes have only three properties: angular momentum (spin), mass, and charge. Sounds like an elementary particle.\n\nThis is a hint, not a theory. It is very early to say.\n\nEDIT: It would have been nice to know why this was downvoted, with the addition of a comment. There is nothing wrong with the physics. If it was downvoted because it does not answer the question, then that does not make sense, because there is no known answer to the question.\n\nBlack holes look like macroscopic elementary particles, but elementary particles do not look like black holes.
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9430379,"math_prob":0.9315975,"size":5103,"snap":"2019-35-2019-39","text_gpt3_token_len":1156,"char_repetition_ratio":0.11904295,"word_repetition_ratio":0.0023310024,"special_character_ratio":0.22535764,"punctuation_ratio":0.08902691,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9795969,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-24T10:06:56Z\",\"WARC-Record-ID\":\"<urn:uuid:e2829771-fafd-4de3-ab26-13a8fae73d44>\",\"Content-Length\":\"147201\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:30c104b1-ffae-48ca-a90c-8ee7df06c8f8>\",\"WARC-Concurrent-To\":\"<urn:uuid:a54b8f9f-acec-43e7-a692-fafe9b1119e5>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://physics.stackexchange.com/questions/19515/could-gravity-hold-electron-charge-together\",\"WARC-Payload-Digest\":\"sha1:AZE7IUCYFE5H5LIF7JJFQFQMH47U3X25\",\"WARC-Block-Digest\":\"sha1:Z7AQXNT3NYGGVXZDC4722SLTTNZ2DD4J\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027320156.86_warc_CC-MAIN-20190824084149-20190824110149-00259.warc.gz\"}"}
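The Schwarzschild-radius estimate in the third answer of the Q&A record above is easy to reproduce; this sketch uses the same constants the answer quotes:

```python
# Reproducing the back-of-envelope numbers from the answer: the Schwarzschild
# radius of an electron-mass object, compared with the Planck length.
G = 6.67300e-11       # gravitational constant, m^3 kg^-1 s^-2 (value used in the answer)
m_e = 9.10938291e-31  # electron mass, kg
c = 3.0e8             # speed of light, m/s (rounded, as in the answer)

r_s = 2 * G * m_e / c**2      # Rs = 2GM/c^2
planck_length = 1.616e-35     # m

print(f"Rs = {r_s:.2e} m")    # Rs = 1.35e-57 m
print(r_s < planck_length)    # True: roughly 22 orders of magnitude below
```

The point of the comparison is the one the answer makes: a gravitationally bound electron would sit far below the scale at which classical gravity can be trusted.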
http://currency7.com/SGD-to-CZK-exchange-rate-converter?amount=300
[ "# 300 Singapore Dollar (SGD) to Czech Koruna (CZK)\n\nThe currency calculator will convert the exchange rate of the Singapore dollar (SGD) to the Czech koruna (CZK).\n\n• Singapore dollar\nThe Singapore dollar (SGD) is the currency of Singapore. It is also customarily accepted in Brunei (alongside its official currency, the Brunei dollar, issued in 1967). The currency code is SGD and the currency symbol is S\\$. The Singapore dollar is subdivided into 100 cents (singular: cent; symbol: S¢). Frequently used Singapore dollar coins are in denominations of S\\$1, 5S¢, 10S¢, 20S¢, 50S¢. Frequently used Singapore dollar banknotes are in denominations of S\\$2, S\\$5, S\\$10, S\\$50, S\\$100, S\\$1000.\n• Czech koruna\nThe Czech koruna (CZK) is the currency of the Czech Republic. The currency code is CZK and the currency symbol is Kč. The Czech koruna is subdivided into 100 haléř (symbol: h). Frequently used Czech koruna coins are in denominations of 1 kč, 2 kč, 5 kč, 10 kč, 20 kč, 50 kč. Frequently used Czech koruna banknotes are in denominations of 100 kč, 200 kč, 500 kč, 1000 kč, 2000 kč, 5000 kč.\n• 1 SGD = 16.56 CZK\n• 5 SGD = 82.82 CZK\n• 10 SGD = 165.64 CZK\n• 20 SGD = 331.29 CZK\n• 25 SGD = 414.11 CZK\n• 50 SGD = 828.22 CZK\n• 100 SGD = 1,656.45 CZK\n• 200 SGD = 3,312.89 CZK\n• 250 SGD = 4,141.11 CZK\n• 500 SGD = 8,282.23 CZK\n• 1,000 SGD = 16,564.45 CZK\n• 2,000 SGD = 33,128.90 CZK\n• 2,500 SGD = 41,411.13 CZK\n• 5,000 SGD = 82,822.26 CZK\n• 10,000 SGD = 165,644.52 CZK\n• 10 CZK = 0.60 SGD\n• 50 CZK = 3.02 SGD\n• 100 CZK = 6.04 SGD\n• 250 CZK = 15.09 SGD\n• 500 CZK = 30.19 SGD\n• 1,000 CZK = 60.37 SGD\n• 2,000 CZK = 120.74 SGD\n• 2,500 CZK = 150.93 SGD\n• 5,000 CZK = 301.85 SGD\n• 10,000 CZK = 603.70 SGD\n• 20,000 CZK = 1,207.40 SGD\n• 50,000 CZK = 3,018.51 SGD\n• 100,000 CZK = 6,037.02 SGD\n• 250,000 CZK = 15,092.56 SGD\n• 500,000 CZK = 30,185.12 SGD\n\n## Popular SGD pairing\n\n` <a href=\"http://currency7.com/SGD-to-CZK-exchange-rate-converter?amount=300\">300 SGD in CZK</a> `" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6548188,"math_prob":0.99293107,"size":2990,"snap":"2023-14-2023-23","text_gpt3_token_len":1006,"char_repetition_ratio":0.2628935,"word_repetition_ratio":0.02268431,"special_character_ratio":0.37391305,"punctuation_ratio":0.15789473,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9557956,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-22T04:11:54Z\",\"WARC-Record-ID\":\"<urn:uuid:486cbd72-8e43-40dd-a534-f5a7627cb255>\",\"Content-Length\":\"29273\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4f2e96ff-4959-4fc2-ae71-3beecfea6b3c>\",\"WARC-Concurrent-To\":\"<urn:uuid:55a13654-b9a4-46d8-aff0-2b4ee6fbd094>\",\"WARC-IP-Address\":\"70.35.206.41\",\"WARC-Target-URI\":\"http://currency7.com/SGD-to-CZK-exchange-rate-converter?amount=300\",\"WARC-Payload-Digest\":\"sha1:OAYJXKKS27J246XAHSYU7R47E7CBUK7M\",\"WARC-Block-Digest\":\"sha1:R4VZSQLQ372KBP3ZG4V4FAIBKV4FSCJM\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296943749.68_warc_CC-MAIN-20230322020215-20230322050215-00535.warc.gz\"}"}
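A sketch of the arithmetic behind the conversion table above. The mid rate of 16.5645 CZK per SGD is an assumption inferred from the "100 SGD = 1,656.45 CZK" row; a real converter applies a dated market rate and its own rounding rules:

```python
# Hypothetical reconstruction of the SGD/CZK table from one implied mid rate.
RATE = 16.5645  # CZK per SGD, inferred from "100 SGD = 1,656.45 CZK" (assumption)

def sgd_to_czk(sgd):
    """Convert Singapore dollars to Czech koruna at the fixed mid rate."""
    return sgd * RATE

def czk_to_sgd(czk):
    """Convert Czech koruna to Singapore dollars at the fixed mid rate."""
    return czk / RATE

print(f"300 SGD = {sgd_to_czk(300):,.2f} CZK")  # 300 SGD = 4,969.35 CZK
print(f"10 CZK = {czk_to_sgd(10):.2f} SGD")     # 10 CZK = 0.60 SGD
```

Each row of the table is just this multiplication (or division) followed by rounding to two decimal places with thousands separators.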
https://brilliant.org/practice/filling-parenthesis-only-operator-search/?subtopic=puzzles&chapter=operator-search
[ "", null, "Logic\n\n# Filling in parenthesis (only) - Operator Search\n\nCan we add parentheses such that the expression is equal to 24?\n\n$1 + 2 + 3 \\times 4$\n\n$2 \\div 2 \\div 2 \\div 2$\n\nThe above expression can be parenthesized in various ways. Which of the following gives the largest value?\n\n$9 - 7 \\times 5 - 3$\n\nThe above expression can be parenthesized in various ways. Which of the following gives the smallest value?\n\n$3 \\div 4 - 5 \\times 6$\n\nThe above expression can be parenthesized in various ways. Which of the following gives an integer value?\n\nCan we add parentheses such that the expression is equal to 17?\n\n$9 + 8 \\times 7 - 6$" ]
[ null, "https://ds055uzetaobb.cloudfront.net/brioche/chapter/Operator%20Search%20Stolen%20-UylcmK-x3oBI6.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.84726924,"math_prob":0.99798506,"size":612,"snap":"2019-51-2020-05","text_gpt3_token_len":127,"char_repetition_ratio":0.13486843,"word_repetition_ratio":0.6185567,"special_character_ratio":0.20751634,"punctuation_ratio":0.18548387,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9962832,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-15T19:41:13Z\",\"WARC-Record-ID\":\"<urn:uuid:333a321d-5662-4658-8994-ff075a881dcb>\",\"Content-Length\":\"89671\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:19deba71-023b-4ee9-a2c4-25659de647c6>\",\"WARC-Concurrent-To\":\"<urn:uuid:498715e0-128c-40ac-9e03-e8f66e8fd3eb>\",\"WARC-IP-Address\":\"104.20.34.242\",\"WARC-Target-URI\":\"https://brilliant.org/practice/filling-parenthesis-only-operator-search/?subtopic=puzzles&chapter=operator-search\",\"WARC-Payload-Digest\":\"sha1:GRKMMTSKVL6FESX2NHQ5UZU3VBOWZBMW\",\"WARC-Block-Digest\":\"sha1:HV6B7U2LWXVHIWIFFBSTVNWTF7HHVXLA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575541309137.92_warc_CC-MAIN-20191215173718-20191215201718-00521.warc.gz\"}"}
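The puzzles in the Brilliant record above ask which values an expression can take under different parenthesizations. A brute-force sketch (mine, not from the source) enumerates every full parenthesization, using exact rational arithmetic so the division puzzles are not distorted by floating point:

```python
# Enumerate every value a fixed number/operator sequence can take when fully
# parenthesized, by choosing each operator in turn as the top-level split.
from fractions import Fraction
import operator

OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul, '/': operator.truediv}

def results(nums, ops):
    """All values of nums[0] ops[0] nums[1] ... under every parenthesization."""
    if len(nums) == 1:
        return {nums[0]}
    out = set()
    for i in range(len(ops)):  # ops[i] is applied last
        for a in results(nums[:i + 1], ops[:i]):
            for b in results(nums[i + 1:], ops[i + 1:]):
                try:
                    out.add(OPS[ops[i]](a, b))
                except ZeroDivisionError:
                    pass
    return out

# "Can we add parentheses so that 1 + 2 + 3 x 4 equals 24?"
vals = results([Fraction(n) for n in (1, 2, 3, 4)], ['+', '+', '*'])
print(Fraction(24) in vals)   # True, via (1 + 2 + 3) x 4

# "Which parenthesization of 2 / 2 / 2 / 2 gives the largest value?"
vals2 = results([Fraction(2)] * 4, ['/'] * 3)
print(max(vals2))             # 4, via 2 / ((2 / 2) / 2)
```

Splitting on the last-applied operator visits every binary parse tree of the expression, which is exactly the set of full parenthesizations the puzzles range over.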
http://www.softmath.com/math-book-answers/sum-of-cubes/how-to-solve-intervals.html
[ "English | Español\n\n# Try our Free Online Math Solver!", null, "Online Math Solver\n\n Depdendent Variable\n\n Number of equations to solve: 23456789\n Equ. #1:\n Equ. #2:\n\n Equ. #3:\n\n Equ. #4:\n\n Equ. #5:\n\n Equ. #6:\n\n Equ. #7:\n\n Equ. #8:\n\n Equ. #9:\n\n Solve for:\n\n Dependent Variable\n\n Number of inequalities to solve: 23456789\n Ineq. #1:\n Ineq. #2:\n\n Ineq. #3:\n\n Ineq. #4:\n\n Ineq. #5:\n\n Ineq. #6:\n\n Ineq. #7:\n\n Ineq. #8:\n\n Ineq. #9:\n\n Solve for:\n\n Please use this form if you would like to have this math solver on your website, free of charge. Name: Email: Your Website: Msg:\n\nWhat our customers say...\n\nThousands of users are using our software to conquer their algebra homework. Here are some of their experiences:\n\nThank to be very quick to answer my question, I will recommend you all over the world.\nJames Moore, MI\n\nI think it is great! I have showed the program to a couple of classmates during a study group and they loved it. We all wished we had it 4 months ago.\nTabitha Wright, MN\n\nThe Algebrator is my algebra doctor. Equations and inequalities were the two topics I used to struggle on but using the software wiped of my problems with the subject.\nJonathan McCue, OH\n\nAlthough I have always been good at math, I use the Algebrator to make sure my algebra homework is correct. I find the software to be very user friendly. Im sure I will be using it when I start college in about one year.\nMario Certa, CA\n\nSearch phrases used on 2014-02-12:\n\nStudents struggling with all kinds of algebra problems find out that our software is a life-saver. Here are the search phrases that today's searchers used to find our site. 
https://shenfun.readthedocs.io/en/latest/gettingstarted.html
# Getting started

## Basic usage

Shenfun consists of classes and functions whose purpose is to make it easier to implement PDEs with spectral methods in simple tensor product domains. The most important everyday tools are Basis, TensorProductSpace, TrialFunction, TestFunction, Function, Array, inner() and project().

A good place to get started is by creating a Basis(). There are six families of bases: Fourier, Chebyshev, Legendre, Laguerre, Hermite and Jacobi. All bases are defined on a one-dimensional domain, with their own basis functions and quadrature points. For example, we have the regular Chebyshev basis $$\{T_k\}_{k=0}^{N-1}$$, where $$T_k$$ is the $$k$$'th Chebyshev polynomial of the first kind. To create such a basis with 8 quadrature points (i.e., $$\{T_k\}_{k=0}^{7}$$) do:

from shenfun import Basis
N = 8
T = Basis(N, 'Chebyshev', bc=None)

Here bc=None is used to indicate that there are no boundary conditions associated with this basis. This is the default, so it could just as well have been left out. To create a regular Legendre basis (i.e., $$\{L_k\}_{k=0}^{N-1}$$, where $$L_k$$ is the $$k$$'th Legendre polynomial), just replace Chebyshev with Legendre above, and to create a Fourier basis, just use Fourier.

The basis $$T = \{T_k\}_{k=0}^{N-1}$$ has many useful methods associated with it, and we may experiment a little. A Function u using basis $$T$$ has the expansion

(1)$u(x) = \sum_{k=0}^{7} \hat{u}_k T_k(x)$

and an instance of this function (initialized with $$\{\hat{u}_k\}_{k=0}^7=0$$) is created in shenfun as:

u = Function(T)

Consider now for example the polynomial $$2x^2-1$$, which happens to be exactly equal to $$T_2(x)$$. 
We can create this polynomial using sympy:

import sympy as sp
x = sp.Symbol('x')
u = 2*x**2 - 1 # or simply u = sp.chebyshevt(2, x)

The sympy expression u can now be evaluated on the quadrature points of basis $$T$$:

xj = T.mesh()
ue = Array(T)
ue[:] = [u.subs(x, xx) for xx in xj]
print(xj)
[ 0.98078528 0.83146961 0.55557023 0.19509032 -0.19509032 -0.55557023
-0.83146961 -0.98078528]
print(ue)
[ 0.92387953 0.38268343 -0.38268343 -0.92387953 -0.92387953 -0.38268343
0.38268343 0.92387953]

We see that ue is an Array on the basis T, and not a Function. The Array and Function classes are both subclasses of Numpy's ndarray, and represent the two arrays associated with the spectral Galerkin function, like (1). The Function represents the entire spectral Galerkin function, with array values corresponding to the expansion coefficients $$\hat{u}$$. The Array represents the spectral Galerkin function evaluated on the quadrature mesh of the basis T, i.e., here $$u(x_i), \forall \, i \in 0, 1, \ldots, 7$$.

We now want to find the Function uh corresponding to Array ue. Considering (1), this corresponds to finding $$\hat{u}_k$$ when the left hand side $$u(x_j)$$ is known for all quadrature points $$x_j$$.

Since we already know that ue is equal to the second Chebyshev polynomial, we should get an array of expansion coefficients equal to $$\hat{u} = (0, 0, 1, 0, 0, 0, 0, 0)$$. 
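Before doing this with shenfun, we can verify the expectation with plain NumPy, carrying out the projection by Gauss–Chebyshev quadrature. This is only an illustrative sketch of the underlying math, not how shenfun computes its forward transforms internally:

```python
import numpy as np

# Gauss-Chebyshev quadrature: nodes x_j = cos(pi*(2j+1)/(2N)), weights pi/N
N = 8
j = np.arange(N)
xj = np.cos(np.pi * (2 * j + 1) / (2 * N))
wj = np.full(N, np.pi / N)

# Evaluate u(x) = 2x^2 - 1 = T_2(x) on the quadrature mesh
ue = 2 * xj**2 - 1

# V[j, k] = T_k(x_j), so u_tilde_k = (u, T_k)_w by quadrature
V = np.polynomial.chebyshev.chebvander(xj, N - 1)
u_tilde = V.T @ (wj * ue)

# Diagonal mass matrix: (T_0, T_0)_w = pi and (T_k, T_k)_w = pi/2 for k > 0
Bkk = np.full(N, np.pi / 2)
Bkk[0] = np.pi

u_hat = u_tilde / Bkk  # expansion coefficients: (0, 0, 1, 0, ..., 0) to machine precision
```

The quadrature is exact here because every integrand is a polynomial of degree at most 9, well within the accuracy of 8-point Gauss–Chebyshev quadrature.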
We can compute uh either by using project() or a forward transform:

uh = Function(T)
uh = T.forward(ue, uh)
# or
# uh = ue.forward(uh)
# or
# uh = project(ue, T)
print(uh)
[-1.38777878e-17 6.72002101e-17 1.00000000e+00 -1.95146303e-16
1.96261557e-17 1.15426347e-16 -1.11022302e-16 1.65163507e-16]

So we see that the projection works to machine precision.

The projection is mathematically: find $$u_h \in T$$, such that

$(u_h - u, v)_w = 0 \quad \forall v \in T,$

where $$v$$ is a test function, $$u_h$$ is a trial function and $$(\cdot, \cdot)_w$$ denotes the weighted inner product. Using now $$v=T_k$$ and $$u_h=\sum_{j=0}^7 \hat{u}_j T_j$$, we get

$\begin{split}(\sum_{j=0}^7 \hat{u}_j T_j, T_k)_w &= (u, T_k)_w, \\ \sum_{j=0}^7 (T_j, T_k)_w \hat{u}_j &= (u, T_k)_w,\end{split}$

for all $$k \in 0, 1, \ldots, 7$$. This can be rewritten in matrix form as

$B_{kj} \hat{u}_j = \tilde{u}_k,$

where $$B_{kj} = (T_j, T_k)_w$$, $$\tilde{u}_k = (u, T_k)_w$$ and summation is implied by the repeated $$j$$ indices. Since the Chebyshev polynomials are orthogonal, the mass matrix $$B_{kj}$$ is diagonal. We can assemble both $$B_{kj}$$ and $$\tilde{u}_k$$ with shenfun, and at the same time introduce the TestFunction and TrialFunction classes and the inner() function:

from shenfun import TestFunction, TrialFunction, inner
u = TrialFunction(T)
v = TestFunction(T)
B = inner(u, v)
u_tilde = inner(ue, v)
print(B)
{0: array([3.14159265, 1.57079633, 1.57079633, 1.57079633, 1.57079633,
1.57079633, 1.57079633, 1.57079633])}
print(u_tilde)
[-4.35983562e-17 1.05557843e-16 1.57079633e+00 -3.06535096e-16
3.08286933e-17 1.81311282e-16 -1.74393425e-16 2.59438230e-16]

The inner() function represents the (weighted) inner product and it expects one test function, and possibly one trial function. If, as here, it also contains a trial function, then a matrix is returned. 
If inner() contains one test, but no trial function, then an array is returned. Finally, if inner() contains no test nor trial function, but instead a number and an Array, like:

a = Array(T, val=1)
print(inner(1, a))
2.0

then inner() represents a non-weighted integral over the domain. Here it returns the length of the domain (2.0) since a is initialized to unity.

Note that the matrix $$B$$ assembled above is stored using shenfun's SpectralMatrix class, which is a subclass of Python's dictionary, where the keys are the diagonals and the values are the diagonal entries. The matrix $$B$$ is seen to have only one diagonal (the principal) $$\{B_{ii}\}_{i=0}^{7}$$.

With the matrix comes a solve method and we can solve for $$\hat{u}$$ through:

u_hat = Function(T)
u_hat = B.solve(u_tilde, u=u_hat)
print(u_hat)
[-1.38777878e-17 6.72002101e-17 1.00000000e+00 -1.95146303e-16
1.96261557e-17 1.15426347e-16 -1.11022302e-16 1.65163507e-16]

which obviously is exactly the same result as we found using project() or the T.forward function.

Note that Array is merely a subclass of Numpy's ndarray, whereas Function is a subclass of both Numpy's ndarray and the BasisFunction class. The latter is used as a base class for arguments to bilinear and linear forms, and is as such a base class also for TrialFunction and TestFunction. An instance of the Array class cannot be used in forms, except in regular inner products with a number or with a test function. 
To illustrate, let's create some forms, where all except the last one are ok:

T = Basis(12, 'Legendre')
u = TrialFunction(T)
v = TestFunction(T)
uf = Function(T)
ua = Array(T)
A = inner(v, u) # Mass matrix
c = inner(v, ua) # ok, a scalar product
d = inner(v, uf) # ok, a scalar product (slower than above)
e = inner(1, ua) # ok, non-weighted integral of ua over domain
df = Dx(uf, 0, 1) # ok
da = Dx(ua, 0, 1) # Not ok

AssertionError Traceback (most recent call last)
<ipython-input-14-3b957937279f> in <module>
----> 1 da = Dx(ua, 0, 1)

~/MySoftware/shenfun/shenfun/forms/operators.py in Dx(test, x, k)
82 Number of derivatives
83 """
---> 84 assert isinstance(test, (Expr, BasisFunction))
85
86 if isinstance(test, BasisFunction):

AssertionError:

So it is not possible to perform operations that involve differentiation (Dx represents a partial derivative) on an Array instance. This is because ua does not contain more information than its values and its function space. A BasisFunction instance, on the other hand, can be manipulated with operators like div() and grad(), creating instances of the Expr class, see Operators.

Note that any rules for efficient use of Numpy ndarrays, like vectorization, also apply to Function and Array instances.

## Operators

Operators act on any single instance of a BasisFunction, which can be a Function, TrialFunction or TestFunction. The implemented operators are div(), grad(), curl() and Dx().

Operators are used in variational forms assembled using inner() or project(), like:

A = inner(grad(u), grad(v))

which assembles a stiffness matrix A. Note that the two expressions fed to inner must have consistent rank. 
Here, for example, both grad(u) and grad(v) have rank 1, i.e., they are vectors.

## Multidimensional problems

As described in the introduction, a multidimensional problem is handled using tensor product spaces that have basis functions generated from taking the outer products of one-dimensional basis functions. We create tensor product spaces using the class TensorProductSpace:

from mpi4py import MPI
comm = MPI.COMM_WORLD
N, M = (12, 16)
C0 = Basis(N, 'L', bc=(0, 0), scaled=True)
K0 = Basis(M, 'F', dtype='d')
T = TensorProductSpace(comm, (C0, K0))

Associated with this is a Cartesian mesh $$[-1, 1] \times [0, 2\pi]$$. We use the classes Function, TrialFunction and TestFunction exactly as before:

u = TrialFunction(T)
v = TestFunction(T)

However, now a matrix like A = inner(grad(u), grad(v)) will be a tensor product matrix, or more correctly, the sum of two tensor product matrices. This can be seen if we look at the equations beyond the code. In this case we are using a composite Legendre basis for the first direction and Fourier exponentials for the second, and the tensor product basis function is

$\begin{split}v_{kl}(x, y) &= \frac{1}{\sqrt{4k+6}}(L_k(x) - L_{k+2}(x)) \exp(\imath l y), \\ &= \Psi_k(x) \phi_l(y),\end{split}$

where $$L_k$$ is the $$k$$'th Legendre polynomial, and $$\Psi_k = (L_k-L_{k+2})/\sqrt{4k+6}$$ and $$\phi_l = \exp(\imath l y)$$ are used for simplicity in later derivations. 
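As a quick sanity check, the composite functions $$\Psi_k$$ vanish at $$x=\pm 1$$, which is exactly how the homogeneous Dirichlet conditions are built into the basis. A small NumPy verification, independent of shenfun, using the identities $$L_k(1)=1$$ and $$L_k(-1)=(-1)^k$$:

```python
import numpy as np
from numpy.polynomial import legendre as leg

def psi_coeffs(k):
    """Legendre coefficients of Psi_k = (L_k - L_{k+2}) / sqrt(4k + 6)."""
    c = np.zeros(k + 3)
    c[k] = 1.0
    c[k + 2] = -1.0
    return c / np.sqrt(4 * k + 6)

# Since L_k(1) = 1 and L_k(-1) = (-1)^k, the difference L_k - L_{k+2}
# vanishes at both endpoints for every k
for k in range(10):
    c = psi_coeffs(k)
    assert abs(leg.legval(1.0, c)) < 1e-12
    assert abs(leg.legval(-1.0, c)) < 1e-12
```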
The trial function becomes\n\n$u(x, y) = \\sum_k \\sum_l \\hat{u}_{kl} v_{kl}$\n\nand the inner product is\n\n(2)$\\begin{split}(\\nabla u, \\nabla v)_w &= \\int_{-1}^{1} \\int_{0}^{2 \\pi} \\nabla u \\cdot \\nabla v dxdy, \\\\ &= \\int_{-1}^{1} \\int_{0}^{2 \\pi} \\frac{\\partial u}{\\partial x} \\frac{\\partial v}{\\partial x} + \\frac{\\partial u}{\\partial y}\\frac{\\partial v}{\\partial y} dxdy, \\\\ &= \\int_{-1}^{1} \\int_{0}^{2 \\pi} \\frac{\\partial u}{\\partial x} \\frac{\\partial v}{\\partial x} dxdy + \\int_{-1}^{1} \\int_{0}^{2 \\pi} \\frac{\\partial u}{\\partial y} \\frac{\\partial v}{\\partial y} dxdy,\\end{split}$\n\nshowing that it is the sum of two tensor product matrices. However, each one of these two terms contains the outer product of smaller matrices. To see this we need to insert for the trial and test functions (using $$v_{mn}$$ for test):\n\n$\\begin{split}\\int_{-1}^{1} \\int_{0}^{2 \\pi} \\frac{\\partial u}{\\partial x} \\frac{\\partial v}{\\partial x} dxdy &= \\int_{-1}^{1} \\int_{0}^{2 \\pi} \\frac{\\partial}{\\partial x} \\left( \\sum_k \\sum_l \\hat{u}_{kl} \\Psi_k(x) \\phi_l(y) \\right) \\frac{\\partial}{\\partial x} \\left( \\Psi_m(x) \\phi_n(y) \\right)dxdy, \\\\ &= \\sum_k \\sum_l \\underbrace{ \\int_{-1}^{1} \\frac{\\partial \\Psi_k(x)}{\\partial x} \\frac{\\partial \\Psi_m(x)}{\\partial x} dx}_{A_{mk}} \\underbrace{ \\int_{0}^{2 \\pi} \\phi_l(y) \\phi_{n}(y) dy}_{B_{nl}} \\, \\hat{u}_{kl},\\end{split}$\n\nwhere $$A \\in \\mathbb{R}^{N-2 \\times N-2}$$ and $$B \\in \\mathbb{R}^{M \\times M}$$. The tensor product matrix $$A_{mk} B_{nl}$$ (or in matrix notation $$A \\otimes B$$) is the first item of the two items in the list that is returned by inner(grad(u), grad(v)). 
The other item is of course the second term in the last line of (2):

$\begin{split}\int_{-1}^{1} \int_{0}^{2 \pi} \frac{\partial u}{\partial y} \frac{\partial v}{\partial y} dxdy &= \int_{-1}^{1} \int_{0}^{2 \pi} \frac{\partial}{\partial y} \left( \sum_k \sum_l \hat{u}_{kl} \Psi_k(x) \phi_l(y) \right) \frac{\partial}{\partial y} \left(\Psi_m(x) \phi_n(y) \right) dxdy \\ &= \sum_k \sum_l \underbrace{ \int_{-1}^{1} \Psi_k(x) \Psi_m(x) dx}_{C_{mk}} \underbrace{ \int_{0}^{2 \pi} \frac{\partial \phi_l(y)}{\partial y} \frac{\partial \phi_{n}(y)}{\partial y} dy}_{D_{nl}} \, \hat{u}_{kl}\end{split}$

The tensor product matrices $$A_{mk} B_{nl}$$ and $$C_{mk}D_{nl}$$ are both instances of the TPMatrix class. Together they lead to linear algebra systems like:

(3)$(A_{mk}B_{nl} + C_{mk}D_{nl}) \hat{u}_{kl} = \tilde{f}_{mn},$

where

$\tilde{f}_{mn} = (v_{mn}, f)_w,$

for some right hand side $$f$$, see, e.g., (5). Note that an alternative formulation here is

$A \hat{u} B^T + C \hat{u} D^T = \tilde{f}$

where $$\hat{u}$$ and $$\tilde{f}$$ are treated as regular matrices ($$\hat{u} \in \mathbb{R}^{N-2 \times M}$$ and $$\tilde{f} \in \mathbb{R}^{N-2 \times M}$$). This formulation is utilized to derive efficient solvers for tensor product bases in multiple dimensions using the matrix decomposition method in [She94] and [She95].

Note that in our case the equation system (3) can be greatly simplified since three of the submatrices ($$A_{mk}, B_{nl}$$ and $$D_{nl}$$) are diagonal. 
Even more, two of them equal the identity matrix

$\begin{split}A_{mk} &= \delta_{mk}, \\ B_{nl} &= \delta_{nl},\end{split}$

whereas the last one can be written in terms of the identity (no summation on repeating indices)

$D_{nl} = -nl\delta_{nl}.$

Inserting for this in (3) and simplifying by requiring that $$l=n$$ in the second step, we get

(4)$\begin{split}(\delta_{mk}\delta_{nl} - ln C_{mk}\delta_{nl}) \hat{u}_{kl} &= \tilde{f}_{mn}, \\ (\delta_{mk} - l^2 C_{mk}) \hat{u}_{kl} &= \tilde{f}_{ml}.\end{split}$

Now if we keep $$l$$ fixed, this latter equation is simply a regular linear algebra problem to solve for $$\hat{u}_{kl}$$, for all $$k$$. Of course, this solve needs to be carried out for all $$l$$.

Note that there is a generic solver available for the system (3) in SolverGeneric2NP that makes no assumptions on diagonality. However, this solver will, naturally, be quite a bit slower than a tailored solver that takes advantage of diagonality. For the Poisson equation such solvers are available for both Legendre and Chebyshev bases, see the extended demo Demo - 3D Poisson's equation or the demo programs dirichlet_poisson2D.py and dirichlet_poisson3D.py.

## Coupled problems

With shenfun it is possible to solve equations coupled and implicitly, using the MixedTensorProductSpace class for multidimensional problems and MixedBasis for one-dimensional problems. As an example, let's consider a mixed formulation of the Poisson equation. 
The Poisson equation is, as usual, given as

(5)$\nabla^2 u(\boldsymbol{x}) = f(\boldsymbol{x}), \quad \text{for} \quad \boldsymbol{x} \in \Omega,$

but now we recast the problem into a mixed formulation

$\begin{split}\sigma(\boldsymbol{x})- \nabla u (\boldsymbol{x})&= 0, \quad \text{for} \quad \boldsymbol{x} \in \Omega, \\ \nabla \cdot \sigma (\boldsymbol{x})&= f(\boldsymbol{x}), \quad \text{for} \quad \boldsymbol{x} \in \Omega,\end{split}$

where we solve for the vector $$\sigma$$ and the scalar $$u$$ simultaneously. The domain $$\Omega$$ is taken as a multidimensional Cartesian product $$\Omega=[-1, 1] \times [0, 2\pi]$$, but the code is more or less identical for a 3D problem. For boundary conditions we use Dirichlet in the $$x$$-direction and periodicity in the $$y$$-direction:

$\begin{split}u(\pm 1, y) &= 0, \\ u(x, 2\pi) &= u(x, 0).\end{split}$

Note that there is no boundary condition on $$\sigma$$, only on $$u$$. For this reason we choose a Dirichlet basis $$SD$$ for $$u$$ and a regular Legendre or Chebyshev basis $$ST$$ for $$\sigma$$. With $$K0$$ representing the function space in the periodic direction, we get the relevant 2D tensor product spaces as $$TD = SD \otimes K0$$ and $$TT = ST \otimes K0$$.
Since $$\\sigma$$ is a vector we use a VectorTensorProductSpace $$VT = TT \\times TT$$ and finally a MixedTensorProductSpace $$Q = VT \\times TD$$ for the coupled and implicit treatment of $$(\\sigma, u)$$:\n\nN, M = (16, 24)\nfamily = 'Legendre'\nSD = Basis(N, family, bc=(0, 0))\nST = Basis(N, family)\nK0 = Basis(N, 'Fourier', dtype='d')\nTD = TensorProductSpace(comm, (SD, K0), axes=(0, 1))\nTT = TensorProductSpace(comm, (ST, K0), axes=(0, 1))\nVT = VectorTensorProductSpace(TT)\nQ = MixedTensorProductSpace([VT, TD])\n\n\nIn variational form the problem reads: find $$(\\sigma, u) \\in Q$$ such that\n\n(6)$\\begin{split}(\\sigma, \\tau)_w - (\\nabla u, \\tau)_w &= 0, \\quad \\forall \\tau \\in VT, \\\\ (\\nabla \\cdot \\sigma, v)_w &= (f, v)_w \\quad \\forall v \\in TD\\end{split}$\n\nTo implement this we use code that is very similar to regular, uncoupled problems. We create test and trialfunction:\n\ngu = TrialFunction(Q)\ntv = TestFunction(Q)\nsigma, u = gu\ntau, v = tv\n\n\nand use these to assemble all blocks of the variational form (6):\n\n# Assemble equations\nA00 = inner(sigma, tau)\nif family.lower() == 'legendre':\nA01 = inner(u, div(tau))\nelse:\nA10 = inner(div(sigma), v)\n\n\nNote that we here can use integration by parts for Legendre, since the weight function is a constant, and as such get the term $$(-\\nabla u, \\tau)_w = (u, \\nabla \\cdot \\tau)_w$$ (boundary term is zero due to homogeneous Dirichlet boundary conditions).\n\nWe collect all assembled terms in a BlockMatrix:\n\nH = BlockMatrix(A00+A01+A10)\n\n\nThis block matrix H is then simply (for Legendre)\n\n(7)$\\begin{split}\\begin{bmatrix} (\\sigma, \\tau)_w & (u, \\nabla \\cdot \\tau)_w \\\\ (\\nabla \\cdot \\sigma, v)_w & 0 \\end{bmatrix}\\end{split}$\n\nNote that each item in (7) is a collection of instances of the TPMatrix class, and for similar reasons as given around (4), we get also here one regular block matrix for each Fourier wavenumber. 
The sparsity pattern is the same for all matrices except for wavenumber 0. The (highly sparse) sparsity pattern for the block matrix $$H$$ with wavenumber $$\ne 0$$ is shown in the image below.

A complete demo for the coupled problem discussed here can be found in MixedPoisson.py and a 3D version is in MixedPoisson3D.py.

## Integrators

The integrators module contains some integrator classes that can be used to integrate a solution forward in time. However, for now these integrators are only implemented for purely Fourier tensor product spaces. There are currently 3 different integrator classes.

See, e.g., H. Montanelli and N. Bootland, "Solving periodic semilinear PDEs in 1D, 2D and 3D with exponential integrators", https://arxiv.org/pdf/1604.08900.pdf

Integrators are set up to solve equations like

(8)$\frac{\partial u}{\partial t} = L u + N(u)$

where $$u$$ is the solution, $$L$$ is a linear operator and $$N(u)$$ is the nonlinear part of the right hand side.

To illustrate, we consider the time-dependent 1-dimensional Korteweg-de Vries equation

$\frac{\partial u}{\partial t} + \frac{\partial ^3 u}{\partial x^3} + u \frac{\partial u}{\partial x} = 0$

which can also be written as

$\frac{\partial u}{\partial t} + \frac{\partial ^3 u}{\partial x^3} + \frac{1}{2}\frac{\partial u^2}{\partial x} = 0.$

We neglect boundary issues and choose a periodic domain $$[0, 2\pi]$$ with Fourier exponentials as test functions. The initial condition is chosen as

(9)$u(x, t=0) = 3 A^2/\cosh(0.5 A (x-\pi+2))^2 + 3B^2/\cosh(0.5B(x-\pi+1))^2$

where $$A$$ and $$B$$ are constants.
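The initial condition (9) is a pair of sech-squared soliton profiles. It can be evaluated with plain numpy to get a feel for its shape (using the same A and B values as chosen further below in this demo):

```python
import numpy as np

A, B = 25.0, 16.0
x = np.linspace(0, 2*np.pi, 256, endpoint=False)

# Two sech^2 humps centered near x = pi - 2 and x = pi - 1
u0 = (3*A**2/np.cosh(0.5*A*(x - np.pi + 2))**2
      + 3*B**2/np.cosh(0.5*B*(x - np.pi + 1))**2)

# The taller hump has amplitude close to 3*A**2 = 1875 (slightly less on the
# discrete grid, since no grid point sits exactly at the peak)
print(u0.max())
```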
For discretization in space we use the basis $$V_N = \text{span}\{\exp(\imath k x)\}_{k=0}^{N}$$ and formulate the variational problem: find $$u \in V_N$$ such that

$\frac{\partial }{\partial t} \Big(u, v \Big) = -\Big(\frac{\partial^3 u }{\partial x^3}, v \Big) - \Big(\frac{1}{2}\frac{\partial u^2}{\partial x}, v\Big), \quad \forall v \in V_N.$

We see that the first term on the right hand side is linear in $$u$$, whereas the second term is nonlinear. To implement this problem in shenfun we start by creating the necessary basis and test and trial functions

import numpy as np
from shenfun import *

N = 256
T = Basis(N, 'F', dtype='d')
u = TrialFunction(T)
v = TestFunction(T)
u_ = Array(T)
u_hat = Function(T)

We then create two functions representing the linear and nonlinear parts of (8):

def LinearRHS(**params):
    return -inner(Dx(u, 0, 3), v)

k = T.wavenumbers(scaled=True, eliminate_highest_freq=True)
def NonlinearRHS(u, u_hat, rhs, **params):
    rhs.fill(0)
    u_[:] = T.backward(u_hat, u_)
    rhs = T.forward(-0.5*u_**2, rhs)
    return rhs*1j*k  # return inner(grad(-0.5*Up**2), v)

Note that we differentiate in NonlinearRHS by using the wavenumbers k directly.
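Differentiating by multiplying with the imaginary unit times the wavenumbers is the standard Fourier-space trick; it can be checked in plain numpy (independent of shenfun):

```python
import numpy as np

N = 64
x = np.linspace(0, 2*np.pi, N, endpoint=False)
u = np.sin(3*x)

# For a 2*pi-periodic domain sampled at N points, the real-FFT wavenumbers
# are the integers 0, 1, ..., N/2
k = np.fft.rfftfreq(N, d=1.0/N)

# d/dx in physical space == multiplication by 1j*k in spectral space
du = np.fft.irfft(1j*k*np.fft.rfft(u))

assert np.allclose(du, 3*np.cos(3*x))
```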
The alternative notation, given in the commented-out text, is slightly slower, but the results are the same.

The solution vector u_ also needs to be initialized according to (9)

A = 25.
B = 16.
x = T.points_and_weights()[0]
u_[:] = 3*A**2/np.cosh(0.5*A*(x-np.pi+2))**2 + 3*B**2/np.cosh(0.5*B*(x-np.pi+1))**2
u_hat = T.forward(u_, u_hat)

Finally we create an instance of the ETDRK4 solver, and integrate forward with a given timestep

dt = 0.01/N**2
end_time = 0.006
integrator = ETDRK4(T, L=LinearRHS, N=NonlinearRHS)
integrator.setup(dt)
u_hat = integrator.solve(u_, u_hat, dt, (0, end_time))

The solution is two waves travelling through each other, seemingly undisturbed.

## MPI

Shenfun makes use of the Message Passing Interface (MPI) to solve problems on distributed memory architectures. OpenMP can also be enabled for the FFTs.

Data arrays in Shenfun are distributed using a new and completely generic method that allows any index of a multidimensional array to be distributed. To illustrate, let's consider a TensorProductSpace of three dimensions, such that the arrays living in this space will be 3-dimensional. We create two spaces that are identical, except for the MPI decomposition, and we use 4 CPUs (mpirun -np 4 python mpitest.py, if we store the code in this section as mpitest.py):

from shenfun import *
from mpi4py import MPI
from mpi4py_fft import generate_xdmf
comm = MPI.COMM_WORLD
N = (20, 40, 60)
K0 = Basis(N[0], 'F', dtype='D', domain=(0, 1))
K1 = Basis(N[1], 'F', dtype='D', domain=(0, 2))
K2 = Basis(N[2], 'F', dtype='d', domain=(0, 3))
T0 = TensorProductSpace(comm, (K0, K1, K2), axes=(0, 1, 2), slab=True)
T1 = TensorProductSpace(comm, (K0, K1, K2), axes=(1, 0, 2), slab=True)

Here the keyword slab determines that only one index set of the 3-dimensional arrays living in T0 or T1 should be distributed. The default is to use two, which corresponds to a so-called pencil decomposition.
The axes keyword determines the order in which the transforms are conducted, starting from the last to the first in the given tuple. Note that T0 now will give arrays in real physical space that are distributed in the first index, whereas T1 will give arrays that are distributed in the second. This is because 0 and 1 are the first items in the tuples given to axes.

We can now create some Arrays on these spaces:

u0 = Array(T0, val=comm.Get_rank())
u1 = Array(T1, val=comm.Get_rank())

such that u0 and u1 have values corresponding to the rank of their processor in the COMM_WORLD group (the group of all CPUs).

Note that both the TensorProductSpaces have functions with expansion

(10)$u(x, y, z) = \sum_{n=-N_2/2}^{N_2/2-1}\sum_{m=-N_1/2}^{N_1/2-1}\sum_{l=-N_0/2}^{N_0/2-1} \hat{u}_{l,m,n} e^{\imath (lx + my + nz)},$

where $$u(x, y, z)$$ is the continuous solution in real physical space, and $$\hat{u}$$ are the spectral expansion coefficients. If we evaluate expansion (10) on the real physical mesh, then we get

(11)$u(x_i, y_j, z_k) = \sum_{n=-N_2/2}^{N_2/2-1}\sum_{m=-N_1/2}^{N_1/2-1}\sum_{l=-N_0/2}^{N_0/2-1} \hat{u}_{l,m,n} e^{\imath (lx_i + my_j + nz_k)}.$

The function $$u(x_i, y_j, z_k)$$ corresponds to the arrays u0, u1, whereas we have not yet computed the array $$\hat{u}$$. We could get $$\hat{u}$$ as:

u0_hat = Function(T0)
u0_hat = T0.forward(u0, u0_hat)

Now, u0 and u1 have been created on the same mesh, which is a structured mesh of shape $$(20, 40, 60)$$. However, since they have different MPI decompositions, the values used to fill them on creation will differ. We can visualize the arrays in Paraview using some postprocessing tools, to be further described in Sec. Post processing:

u0.write('myfile.h5', 'u0', 0, domain=T0.mesh())
u1.write('myfile.h5', 'u1', 0, domain=T1.mesh())
if comm.Get_rank() == 0:
    generate_xdmf('myfile.h5')

And when the generated myfile.xdmf is opened in Paraview, we can see the different distributions.
The function u0 is shown first, and we see that it has different values along the short first dimension. The second figure is evidently distributed along the second dimension. Both arrays are non-distributed in the third and final dimension, which is fortunate, because this axis will be the first to be transformed in, e.g., u0_hat = T0.forward(u0, u0_hat).

We can now decide to distribute not just one, but the first two axes using a pencil decomposition instead. This is achieved simply by dropping the slab keyword:

T2 = TensorProductSpace(comm, (K0, K1, K2), axes=(0, 1, 2))
u2 = Array(T2, val=comm.Get_rank())
u2.write('pencilfile.h5', 'u2', 0)
if comm.Get_rank() == 0:
    generate_xdmf('pencilfile.h5')

Running again with 4 CPUs, the array u2 will now be distributed over the first two axes.

The local slices into the global array may be obtained through:

>>> print(comm.Get_rank(), T2.local_slice(False))
0 [slice(0, 10, None), slice(0, 20, None), slice(0, 60, None)]
1 [slice(0, 10, None), slice(20, 40, None), slice(0, 60, None)]
2 [slice(10, 20, None), slice(0, 20, None), slice(0, 60, None)]
3 [slice(10, 20, None), slice(20, 40, None), slice(0, 60, None)]

In spectral space the distribution will be different. This is because the discrete Fourier transforms are performed one axis at a time, and for this to happen the data arrays need to be realigned such that an entire axis is available to each processor. Naturally, for the array in the pencil example (see image), we can only perform an FFT over the third and longest axis, because only this axis is locally available to all processors. To do the other directions, the data array must be realigned, and this is done internally by the TensorProductSpace class.
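The local slices listed above can be reproduced without MPI. The helper below is a simplified stand-in for the decomposition logic (the real computation happens inside mpi4py-fft; the function name here is made up for illustration). A 2x2 pencil decomposition is just a slab split applied to two different axes:

```python
import numpy as np

def slab_slices(shape, axis, nprocs):
    """Split `shape` into contiguous slabs along `axis`, one per process."""
    n = shape[axis]
    counts = [n // nprocs + (1 if r < n % nprocs else 0) for r in range(nprocs)]
    starts = np.cumsum([0] + counts[:-1])
    out = []
    for r in range(nprocs):
        s = [slice(0, d) for d in shape]
        s[axis] = slice(int(starts[r]), int(starts[r]) + counts[r])
        out.append(tuple(s))
    return out

# 2x2 pencil over the first two axes of a (20, 40, 60) mesh on 4 processes
rows = slab_slices((20, 40, 60), 0, 2)
cols = slab_slices((20, 40, 60), 1, 2)
for r in range(4):
    i, j = divmod(r, 2)
    print(r, [rows[i][0], cols[j][1], slice(0, 60)])
```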
The shape of the data structure in spectral space, that is the shape of $$\hat{u}$$, can be obtained as:

>>> print(comm.Get_rank(), T2.local_slice(True))
0 [slice(0, 20, None), slice(0, 20, None), slice(0, 16, None)]
1 [slice(0, 20, None), slice(0, 20, None), slice(16, 31, None)]
2 [slice(0, 20, None), slice(20, 40, None), slice(0, 16, None)]
3 [slice(0, 20, None), slice(20, 40, None), slice(16, 31, None)]

Evidently, the spectral space is distributed in the last two axes, whereas the first axis is locally available to all processors. The data array is then said to be aligned in the first dimension.

# Post processing

MPI is great because it means that you can run Shenfun on pretty much as many CPUs as you can get your hands on. However, MPI makes it more challenging to do visualization, in particular with Python and Matplotlib. For this reason there is a utilities module with helper classes for dumping data arrays to HDF5 or NetCDF.

Most of the IO has already been implemented in mpi4py-fft. The classes HDF5File and NCFile are used exactly as they are implemented in mpi4py-fft. As a common interface we provide ShenfunFile(), which returns an instance of either HDF5File or NCFile, depending on the choice of backend.

For example, to create an HDF5 writer for a 3D TensorProductSpace with Fourier bases in all directions:

from shenfun import *
from mpi4py import MPI
N = (24, 25, 26)
K0 = Basis(N[0], 'F', dtype='D')
K1 = Basis(N[1], 'F', dtype='D')
K2 = Basis(N[2], 'F', dtype='d')
T = TensorProductSpace(MPI.COMM_WORLD, (K0, K1, K2))
fl = ShenfunFile('myh5file', T, backend='hdf5', mode='w')

The file instance fl will now have two methods that can be used to either write data arrays to file, or read them back again:

• fl.write

• fl.read

With the HDF5 backend we can write both arrays from physical space (Array), as well as from spectral space (Function).
However, the NetCDF4 backend cannot handle complex data arrays, and as such it can only be used for real physical data arrays.

In addition to storing complete data arrays, we can also store any slices of the arrays. To illustrate, this is how to store three snapshots of the u array, along with some global 2D and 1D slices:

u = Array(T)
u[:] = np.random.random(u.shape)
d = {'u': [u, (u, np.s_[4, :, :]), (u, np.s_[4, 4, :])]}
fl.write(0, d)
u[:] = 2
fl.write(1, d)

The ShenfunFile may also be used for the MixedTensorProductSpace, or VectorTensorProductSpace, which are collections of the scalar TensorProductSpace. We can create a MixedTensorProductSpace consisting of two TensorProductSpaces, and an accompanying writer class, as:

TT = MixedTensorProductSpace([T, T])
fl_m = ShenfunFile('mixed', TT, backend='hdf5', mode='w')

Let's now consider a transient problem where we step a solution forward in time. We create a solution array from the Array class, and update the array inside a while loop:

TT = VectorTensorProductSpace(T)
fl_m = ShenfunFile('mixed', TT, backend='hdf5', mode='w')
u = Array(TT)
tstep = 0
du = {'uv': (u,
             (u, [4, slice(None), slice(None)]),
             (u, [slice(None), 10, 10]))}
while tstep < 3:
    fl_m.write(tstep, du, forward_output=False)
    tstep += 1

Note that on each time step the arrays u, (u, [4, slice(None), slice(None)]) and (u, [slice(None), 10, 10]) are vectors, and as such of global shape (3, 24, 25, 26), (3, 25, 26) and (3, 24), respectively. However, they are stored in the hdf5 file under their spatial dimensions 3D, 2D and 1D, respectively.

Note that the slices in the above dictionaries are global views of the global arrays, that may or may not be distributed over any number of processors.
Also note that these routines work with any number of CPUs, and the number of CPUs does not need to be the same when storing or retrieving the data.

After running the above, the different arrays will be found in groups stored in myh5file.h5 with a directory tree structure as:

myh5file.h5/
└─ u/
   ├─ 1D/
   |  └─ 4_4_slice/
   |     ├─ 0
   |     └─ 1
   ├─ 2D/
   |  └─ 4_slice_slice/
   |     ├─ 0
   |     └─ 1
   ├─ 3D/
   |  ├─ 0
   |  └─ 1
   └─ mesh/
      ├─ x0
      ├─ x1
      └─ x2

Likewise, the mixed.h5 file will at the end of the loop look like:

mixed.h5/
└─ uv/
   ├─ 1D/
   |  └─ slice_10_10/
   |     ├─ 0
   |     ├─ 1
   |     └─ 2
   ├─ 2D/
   |  └─ 4_slice_slice/
   |     ├─ 0
   |     ├─ 1
   |     └─ 2
   ├─ 3D/
   |  ├─ 0
   |  ├─ 1
   |  └─ 2
   └─ mesh/
      ├─ x0
      ├─ x1
      └─ x2

Note that the mesh is stored as well as the results. The three mesh arrays are all 1D arrays, representing the domain for each basis in the TensorProductSpace.

With NetCDF4 the layout is somewhat different. For mixed above, if we were using backend netcdf instead of hdf5, we would get a datafile where ncdump -h mixed.nc would result in:

netcdf mixed {
dimensions:
        time = UNLIMITED ; // (3 currently)
        i = 3 ;
        x = 24 ;
        y = 25 ;
        z = 26 ;
variables:
        double time(time) ;
        double i(i) ;
        double x(x) ;
        double y(y) ;
        double z(z) ;
        double uv(time, i, x, y, z) ;
        double uv_4_slice_slice(time, i, y, z) ;
        double uv_slice_10_10(time, i, x) ;
}

Note that it is also possible to store vector arrays as scalars. For NetCDF4 this is necessary for direct visualization using Visit. To store vectors as scalars, simply use:

fl_m.write(tstep, du, forward_output=False, as_scalar=True)

## ParaView

The stored datafiles can be visualized in ParaView. However, ParaView cannot understand the content of these HDF5 files without a little bit of help. We have to explain that these data files contain structured arrays of such and such shape. The way to do this is through the simple XML descriptor XDMF.
To this end there is a function imported from mpi4py-fft called generate_xdmf that can be called with any one of the generated hdf5 files:

generate_xdmf('myh5file.h5')
generate_xdmf('mixed.h5')

This results in some light xdmf files being generated for the 2D and 3D arrays in the hdf5 file:

• myh5file.xdmf

• myh5file_4_slice_slice.xdmf

• mixed.xdmf

• mixed_4_slice_slice.xdmf

These xdmf files can be opened and inspected by ParaView. Note that 1D arrays are not wrapped, and neither are 4D.

An annoying feature of Paraview is that it views a three-dimensional array of shape $$(N_0, N_1, N_2)$$ as transposed compared to shenfun. That is, for Paraview the last axis represents the $$x$$-axis, whereas shenfun (like most others) considers the first axis to be the $$x$$-axis. So when opening a three-dimensional array in Paraview one needs to be aware of this, especially when plotting vectors. Assume that we are working with a Navier-Stokes solver and have a three-dimensional VectorTensorProductSpace to represent the fluid velocity:

from mpi4py import MPI
from shenfun import *

comm = MPI.COMM_WORLD
N = (32, 64, 128)
V0 = Basis(N[0], 'F', dtype='D')
V1 = Basis(N[1], 'F', dtype='D')
V2 = Basis(N[2], 'F', dtype='d')
T = TensorProductSpace(comm, (V0, V1, V2))
TV = VectorTensorProductSpace(T)
U = Array(TV)
U[0] = 0
U[1] = 1
U[2] = 2

To store the resulting Array U we can create an instance of the HDF5File class, and store using the keyword as_scalar=True:

hdf5file = ShenfunFile("NS", TV, backend='hdf5', mode='w')
...
hdf5file.write(0, {'u': [U]}, as_scalar=True)
hdf5file.write(1, {'u': [U]}, as_scalar=True)

Alternatively, one may store the arrays directly as:

U.write('U.h5', 'u', 0, domain=T.mesh(), as_scalar=True)
U.write('U.h5', 'u', 1, domain=T.mesh(), as_scalar=True)

Generate an xdmf file through:

generate_xdmf('NS.h5')

and open the generated NS.xdmf file in Paraview.
You will then see three scalar arrays u0, u1, u2, each one of shape (32, 64, 128), for the vector components in what Paraview considers the $$z$$, $$y$$ and $$x$$ directions, respectively. Other than the swapped coordinate axes there is no difference. But be careful if creating vectors in Paraview with the Calculator. The vector should be created as:

u0*kHat+u1*jHat+u2*iHat
http://matchcomcustomerservice.com/difraccion-de-fresnel-34/
## Difracción de Fresnel

(Fresnel) Diffraction is observed close to the diffracting object. Compare with Fraunhofer diffraction. Named after Augustin-Jean Fresnel.

Fresnel and Fraunhofer diffraction, Universitat de Barcelona, GID Òptica Física i Fotònica.

A laser beam diffracted using a lens and a square-shaped aperture. Photo taken in the optics laboratory of the Faculty of Sciences, UNAM.

Kirchhoff's integral theorem, sometimes referred to as the Fresnel-Kirchhoff integral theorem, uses Green's identities to derive the solution to the homogeneous wave equation at an arbitrary point P in terms of the values of the solution of the wave equation and its first-order derivative at all points on an arbitrary surface which encloses P. It can be seen that most of the light is in the central disk. The dimensions of the central band are related to the dimensions of the slit by the same relationship as for a single slit, so that the larger dimension in the diffracted image corresponds to the smaller dimension in the slit.

This can be justified by making the assumption that the source starts to radiate at a particular time, and then by making R large enough, so that when the disturbance at P is being considered, no contributions from A3 will have arrived there. This is not the case, and this is one of the approximations used in deriving the equation. This effect is known as interference. A detailed mathematical treatment of Fraunhofer diffraction is given in the Fraunhofer diffraction equation.
Most of the diffracted light falls between the first minima. Analytical solutions are not possible for most configurations, but the Fresnel diffraction equation and the Fraunhofer diffraction equation, which are approximations of Kirchhoff's formula for the near field and far field, can be applied to a very wide range of optical systems.

Furtak, Optics; 2nd ed. These assumptions are sometimes referred to as Kirchhoff's boundary conditions.

When the distance between the aperture and the plane of observation on which the diffracted pattern is observed is large enough so that the optical path lengths from the edges of the aperture to a point of observation differ by much less than the wavelength of the light, then the propagation paths for individual wavelets from every point on the aperture to the point of observation can be treated as parallel.

For example, when a slit of width 0. It gives an expression for the wave disturbance when a monochromatic spherical wave passes through an opening in an opaque screen. We can find the angle at which a first minimum is obtained in the diffracted light by the following reasoning.

## Fraunhofer diffraction

In optics, the Fraunhofer diffraction equation is used to model the diffraction of waves when the diffraction pattern is viewed at a long distance from the diffracting object, and also when it is viewed at the focal plane of an imaging lens. The complex amplitude of the disturbance at a distance r is given by. Close examination of the double-slit diffraction pattern below shows that there are very fine horizontal diffraction fringes above and below the main spot, as well as the more obvious horizontal fringes.

The Fraunhofer diffraction pattern is shown in the image together with a plot of the intensity vs. angle.
Consider a monochromatic point source at P0, which illuminates an aperture in a screen.

Fraunhofer diffraction occurs when: The diffraction pattern given by a circular aperture is shown in the figure on the right. This allows one to make two further approximations. If a lens is located in front of the diffracting aperture, each plane wave is brought to a focus at a different point in the focal plane, with the point of focus being proportional to the x- and y-direction cosines, so that the variation in intensity as a function of direction is mapped into a positional variation in intensity.

The difference in phase between the two waves is determined by the difference in the distance travelled by the two waves.

The fringes extend to infinity in the y direction since the slit and illumination also extend to infinity. The output profile of a single-mode laser beam may have a Gaussian intensity profile, and the diffraction equation can be used to show that it maintains that profile however far away it is from the source.

Generally, a two-dimensional integral over complex variables has to be solved, and in many cases an analytic solution is not available. The size of the central band at a distance z is given by.

If the width of the slits is small enough (less than the wavelength of the light), the slits diffract the light into cylindrical waves. If all the terms in f(x', y') can be neglected except for the terms in x' and y', we have the Fraunhofer diffraction equation.

The angle subtended by this disk, known as the Airy disk, is.
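The standard far-field conditions can be evaluated numerically: the first minimum of a single slit of width a lies at sin(theta) = lambda/a, and the first dark ring of the Airy pattern of a circular aperture of diameter d lies at sin(theta) = 1.22*lambda/d. The wavelength and aperture sizes below are illustrative, not taken from the text:

```python
import math

wavelength = 0.6e-6    # 0.6 micrometre light (illustrative)
slit_width = 0.5e-3    # 0.5 mm slit (illustrative)
aperture = 0.5e-3      # 0.5 mm circular aperture (illustrative)

# First minimum of single-slit Fraunhofer diffraction: sin(theta) = lambda / a
theta_slit = math.asin(wavelength / slit_width)

# First dark ring of the Airy pattern: sin(theta) = 1.22 * lambda / d
theta_airy = math.asin(1.22 * wavelength / aperture)

print(f"slit first minimum: {math.degrees(theta_slit):.4f} deg")
print(f"Airy first minimum: {math.degrees(theta_airy):.4f} deg")
```

Both angles are tiny, which is why diffraction of visible light is hard to notice at everyday scales.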
Geometrical and Physical Optics.

The form of the function is plotted on the right (above, for a tablet), and it can be seen that, unlike the diffraction patterns produced by rectangular or circular apertures, it has no secondary rings. This is mainly because the wavelength of light is much smaller than the dimensions of any obstacles encountered.

The area A1 above is replaced by a wavefront from P0, which almost fills the aperture, and a portion of a cone with a vertex at P0, which is labeled A4 in the diagram. In spite of the various approximations that were made in arriving at the formula, it is adequate to describe the majority of problems in instrumental optics.

Antennas for All Applications.
So, if the focal length of the lens fresnrl sufficiently large such that differences between electric field orientations for wavelets can be ignored at the focus, then the lens practically makes the Fraunhofer diffraction pattern on its focal plan." ]
https://docs.wialon.com/en/hosting/cms/units/sensors/validation?rdr=true
# Validation of Sensors

Validation determines the dependence of the main sensor on a validator and makes it possible to combine their values to get one final value. You can configure validation by selecting a validator and a validation type in the sensor properties.

A validator is a validation sensor that can change the value of the main sensor. The validator is selected from the list of available sensors that were created earlier for the same unit.

## Types of Validation

A validation type is a logical or mathematical operation in which a validator can influence the final value of the main sensor. There are 12 types of validation, each of which is described below.

• Logical AND
The type of validation in which the logical AND operation (conjunction) is applied to the values of the main and validation sensors. In this operation, the final value of the sensor is either 1 or 0. If the values of both sensors are not 0, the value of the main sensor is 1. If the value of at least one sensor is equal to 0, the final value is 0.

• Logical OR
The type of validation in which the logical OR operation (disjunction) is applied to the values of the main and validation sensors. In this operation, the final value of the sensor is also either 1 or 0. If the value of at least one sensor is equal to 1, the value of the main sensor is 1. If both values are 0, the final value is 0.

• Not-null check
The type of validation in which the value of the main sensor is unchanged, provided that the validation sensor is not zero.
If the validation sensor is zero, a dash is displayed in the value of the main sensor.\n• Math AND\nThe type of validation in which the mathematical AND operation is applied to the values of the main and validation sensors. It is a bitwise logical AND operation, that is, the two values are converted to their binary equivalents, and then the logical AND operation is applied to the corresponding bits.\n• Math OR\nThe type of validation in which the mathematical OR operation is applied to the values of the main and validation sensors. It is a bitwise logical OR operation, that is, the two values are converted to their binary equivalents, and then the logical OR operation is applied to the corresponding bits.\n• Sum up\nThe type of validation in which the values of the validation and main sensors are summed up.\n• Subtract validator from sensor\nThe type of validation in which the value of the validation sensor is subtracted from the value of the main sensor.\n• Subtract sensor from validator\nThe type of validation in which the value of the main sensor is subtracted from the value of the validation sensor.\n• Multiply\nThe type of validation in which the value of the validation sensor is multiplied by the value of the main sensor.\n• Divide sensor by validator\nThe type of validation in which the value of the main sensor is divided by the value of the validation sensor.\n• Divide validator by sensor\nThe type of validation in which the value of the validation sensor is divided by the value of the main sensor.\n• Replace sensor with validator in case of error\nThe type of validation in which the value of the validation sensor is displayed in case the value of the main sensor has not been determined.", null, "The validation chain can consist of any number of sensors. That is, the first sensor can be a validator for the second one and depend on the third one.", null, "" ]
[ null, "https://docs.wialon.com/en/hosting/_media/cms/units/sensors/en_validation.png", null, "https://docs.wialon.com/en/hosting/lib/images/smileys/icon_exclaim.gif", null, "https://docs.wialon.com/en/hosting/lib/exe/indexer.php", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9102817,"math_prob":0.9664478,"size":1082,"snap":"2021-31-2021-39","text_gpt3_token_len":216,"char_repetition_ratio":0.1734694,"word_repetition_ratio":0.011049724,"special_character_ratio":0.19593346,"punctuation_ratio":0.10377359,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9893001,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,2,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-25T21:12:37Z\",\"WARC-Record-ID\":\"<urn:uuid:6e752596-17bf-43b6-9f56-e138609a04c8>\",\"Content-Length\":\"50736\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:087b9992-c78e-4c14-839b-ce6627f597a3>\",\"WARC-Concurrent-To\":\"<urn:uuid:0b8f9797-4d95-4667-b883-5d335bfaa65e>\",\"WARC-IP-Address\":\"193.193.165.141\",\"WARC-Target-URI\":\"https://docs.wialon.com/en/hosting/cms/units/sensors/validation?rdr=true\",\"WARC-Payload-Digest\":\"sha1:MD3G6RVMH2ZTUMCKSEDUNEIK2FSSLRSU\",\"WARC-Block-Digest\":\"sha1:GIS72AK5GW5ND2TU2HHNSZANPKRWS53Y\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057775.50_warc_CC-MAIN-20210925202717-20210925232717-00039.warc.gz\"}"}
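The twelve validation types above map onto simple operations. The following Python function is an illustrative sketch, not Wialon's actual implementation; the type names are made up for this example, and `None` stands in for the dash shown in the interface:

```python
def validate(main, validator, vtype):
    """Combine a main sensor reading with a validator reading (hypothetical names)."""
    if vtype == "logical_and":        # 1 if both non-zero, else 0
        return 1 if (main != 0 and validator != 0) else 0
    if vtype == "logical_or":         # 1 if either non-zero, else 0
        return 1 if (main != 0 or validator != 0) else 0
    if vtype == "not_null_check":     # keep main only while validator != 0
        return main if validator != 0 else None   # None ~ the dash in the UI
    if vtype == "math_and":           # bitwise AND on the binary equivalents
        return int(main) & int(validator)
    if vtype == "math_or":            # bitwise OR on the binary equivalents
        return int(main) | int(validator)
    if vtype == "sum":
        return main + validator
    if vtype == "subtract_validator":
        return main - validator
    if vtype == "subtract_sensor":
        return validator - main
    if vtype == "multiply":
        return main * validator
    if vtype == "divide_by_validator":
        return main / validator
    if vtype == "divide_by_sensor":
        return validator / main
    if vtype == "replace_on_error":   # fall back to validator if main is missing
        return validator if main is None else main
    raise ValueError(f"unknown validation type: {vtype}")
```

Chaining, as described above, would amount to feeding the result of one `validate` call in as the `main` value of the next.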
https://math.stackexchange.com/questions/1242590/find-a-set-a-such-that-a-notin-mathrmrngf
[ "# Find a set $A$ such that $A \\notin \\mathrm{Rng}{f}$\n\nLet $f:\\mathbb N\\to \\mathbb P(\\mathbb N)$ be given by the following expression: $$f(n)=\\{m\\in\\mathbb N\\mid 3m-10>n\\}$$ Find a set $A$ such that $A \\notin \\mathrm{Rng}{f}$.\n\nI came up with $A=\\{n\\in\\mathbb N\\mid n\\notin f(n)\\}$, but I don't believe that it works because $6\\in f(6)$.\n\n• Where's the contradiction? Does that mean $A=f(6)$? No. It means that $6\\notin A$ and therefore that $A\\neq f(6)$. – Thomas Andrews Apr 19 '15 at 22:50\n• Of course, you could just take $A=\\{1\\}$. Since $1\\notin f(n)$ for any $n$, this would mean that $A\\cap f(n)=\\emptyset$ for all $n$. Or you could take $A$ any finite set, since $f(n)$ is always infinite. Or you could take $A$ to be the set of all primes, since if $m\\in f(n)$ then $m+1\\in f(n)$, so $f(n)$ contains all numbers larger than its least element. – Thomas Andrews Apr 19 '15 at 22:54\n\nThe set $A=\\{n\\in\\mathbb N\\mid n\\notin f(n)\\}$ should work for every function $f$; for instance, for your function it's clear that $n\\in f(n)$ if and only if $3n-10> n$, which means that: $$A=\\{0,1,2,3,4,5\\} \\tag 1$$ Now assume that there exists some $x$ such that $f(x)=A$; then $A$ is infinite, because $f(x)$ is infinite for every $x$, and this contradicts $(1)$.\nNote that the set $A$ always works without any restriction on $f$, and it's a part of Cantor's argument that $\\mathbb N$ and $\\mathbb P(\\mathbb N)$ don't have the same cardinality." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.769455,"math_prob":0.99998105,"size":281,"snap":"2019-51-2020-05","text_gpt3_token_len":114,"char_repetition_ratio":0.16606498,"word_repetition_ratio":0.0,"special_character_ratio":0.37010676,"punctuation_ratio":0.07462686,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000061,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-17T19:35:50Z\",\"WARC-Record-ID\":\"<urn:uuid:bf4d3108-6a77-4d41-a889-b328ed04af1b>\",\"Content-Length\":\"139984\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9177681c-bbb6-4c37-82ac-bdb6751d05a5>\",\"WARC-Concurrent-To\":\"<urn:uuid:97d3fbc0-bdd6-4b41-ad23-25f03a8189b3>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/1242590/find-a-set-a-such-that-a-notin-mathrmrngf\",\"WARC-Payload-Digest\":\"sha1:IDQWWY25MF3GHPJIOJXDTNSV2K6AAOYX\",\"WARC-Block-Digest\":\"sha1:YWHCXIKVNSLLIAOIRSLKQXMXXHJUDWJG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250590107.3_warc_CC-MAIN-20200117180950-20200117204950-00555.warc.gz\"}"}
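The accepted answer above can be checked numerically. This sketch truncates the natural numbers to a finite range, which is safe here because membership of n in f(n) only depends on the inequality 3n − 10 > n, i.e. n > 5:

```python
def f(n, limit=100):
    # finite slice of f(n) = {m in N | 3m - 10 > n}
    return {m for m in range(limit) if 3 * m - 10 > n}

# the diagonal set A = {n in N | n not in f(n)}
A = {n for n in range(100) if n not in f(n)}
print(A)  # {0, 1, 2, 3, 4, 5}, matching equation (1) in the answer
```

And the asker's worry is visible too: 6 ∈ f(6) holds (3·6 − 10 = 8 > 6), but that only shows 6 ∉ A, not A = f(6).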
https://docs.ksqldb.io/en/0.10.1-ksqldb/developer-guide/ksqldb-reference/aggregate-functions/
[ "# Aggregation functions\n\n## `AVG`¶\n\n `1` ``````AVG(col1) ``````\n\nStream, Table\n\nReturn the average value for a given column.\n\n## `COLLECT_LIST`¶\n\n `1` ``````COLLECT_LIST(col1) ``````\n\nStream, Table\n\nReturn an array containing all the values of `col1` from each input row (for the specified grouping and time window, if any). Currently only works for simple types (not Map, Array, or Struct). This version limits the size of the result Array to a maximum of 1000 entries and any values beyond this limit are silently ignored. When using with a window type of `session`, it can sometimes happen that two session windows get merged together into one when a late-arriving record with a timestamp between the two windows is processed. In this case the 1000 record limit is calculated by first considering all the records from the first window, then the late-arriving record, then the records from the second window in the order they were originally processed.\n\n## `COLLECT_SET`¶\n\n `1` ``````COLLECT_SET(col1) ``````\n\nStream\n\nReturn an array containing the distinct values of `col1` from each input row (for the specified grouping and time window, if any). Currently only works for simple types (not Map, Array, or Struct). This version limits the size of the result Array to a maximum of 1000 entries and any values beyond this limit are silently ignored. When using with a window type of `session`, it can sometimes happen that two session windows get merged together into one when a late-arriving record with a timestamp between the two windows is processed. In this case the 1000 record limit is calculated by first considering all the records from the first window, then the late-arriving record, then the records from the second window in the order they were originally processed.\n\n## `COUNT`¶\n\n `1` ``````COUNT(col1) ``````\n `1` ``````COUNT(*) ``````\n\nStream, Table\n\nCount the number of rows. 
When `col1` is specified, the count returned will be the number of rows where `col1` is non-null. When `*` is specified, the count returned will be the total number of rows.\n\n## `COUNT_DISTINCT`¶\n\n `1` ``````COUNT_DISTINCT(col1) ``````\n\nStream, Table\n\nReturns the approximate number of unique values of `col1` in a group. The function implementation uses HyperLogLog to estimate cardinalities of 10^9 with a typical standard error of 2%.\n\n## `EARLIEST_BY_OFFSET`¶\n\n `1` ``````EARLIEST_BY_OFFSET(col1) ``````\n\nStream\n\nReturn the earliest value for a given column. Earliest here is defined as the value in the partition with the lowest offset. Rows that have `col1` set to null are ignored.\n\n## `HISTOGRAM`¶\n\n `1` ``````HISTOGRAM(col1) ``````\n\nStream, Table\n\nReturn a map containing the distinct String values of `col1` mapped to the number of times each one occurs for the given window. This version limits the number of distinct values which can be counted to 1000, beyond which any additional entries are ignored. When using with a window type of `session`, it can sometimes happen that two session windows get merged together into one when a late-arriving record with a timestamp between the two windows is processed. In this case the 1000 record limit is calculated by first considering all the records from the first window, then the late-arriving record, then the records from the second window in the order they were originally processed.\n\n## `LATEST_BY_OFFSET`¶\n\n `1` ``````LATEST_BY_OFFSET(col1) ``````\n\nStream\n\nReturn the latest value for a given column. Latest here is defined as the value in the partition with the greatest offset. Rows that have `col1` set to null are ignored.\n\n## `MAX`¶\n\n `1` ``````MAX(col1) ``````\n\nStream\n\nReturn the maximum value for a given column and window. Rows that have `col1` set to null are ignored.\n\n## `MIN`¶\n\n `1` ``````MIN(col1) ``````\n\nStream\n\nReturn the minimum value for a given column and window. 
Rows that have `col1` set to null are ignored.\n\n## `SUM`¶\n\n `1` ``````SUM(col1) ``````\n\nStream, Table\n\nSums the column values. Rows that have `col1` set to null are ignored.\n\n## `TOPK`¶\n\n `1` ``````TOPK(col1, k) ``````\n\nStream\n\nReturn the Top K values for the given column and window. Rows that have `col1` set to null are ignored.\n\n## `TOPKDISTINCT`¶\n\n `1` ``````TOPKDISTINCT(col1, k) ``````\n\nStream\n\nReturn the distinct Top K values for the given column and window. Rows that have `col1` set to null are ignored.\n\nLast update: 2020-05-20" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8430563,"math_prob":0.9299244,"size":3773,"snap":"2020-45-2020-50","text_gpt3_token_len":899,"char_repetition_ratio":0.13425311,"word_repetition_ratio":0.6231003,"special_character_ratio":0.21176782,"punctuation_ratio":0.081575245,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95619273,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-12-04T17:08:29Z\",\"WARC-Record-ID\":\"<urn:uuid:c4c54a7c-bddb-4ed9-be7c-d9f560f2b90e>\",\"Content-Length\":\"67713\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:205ca0a0-aa4c-4f1c-ab45-eabe81cbd638>\",\"WARC-Concurrent-To\":\"<urn:uuid:44e474d5-ecec-4f74-b36e-a9f7233b3aa2>\",\"WARC-IP-Address\":\"54.189.132.29\",\"WARC-Target-URI\":\"https://docs.ksqldb.io/en/0.10.1-ksqldb/developer-guide/ksqldb-reference/aggregate-functions/\",\"WARC-Payload-Digest\":\"sha1:DS4WNAITLHPEZRG5ROGVIUBQD627WZNE\",\"WARC-Block-Digest\":\"sha1:5AR7VZI4P4XQTISLTHOARB447L42P6LQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141740670.93_warc_CC-MAIN-20201204162500-20201204192500-00276.warc.gz\"}"}
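The 1000-entry cap described for `COLLECT_LIST` and `COLLECT_SET` can be sketched in Python. This is a rough analogue for illustration only, not ksqlDB's implementation:

```python
LIMIT = 1000  # cap documented above; values beyond it are silently ignored

def collect_list(values, limit=LIMIT):
    out = []
    for v in values:
        if len(out) < limit:
            out.append(v)          # values past the cap are dropped silently
    return out

def collect_set(values, limit=LIMIT):
    out, seen = [], set()
    for v in values:
        if v not in seen and len(out) < limit:
            seen.add(v)
            out.append(v)          # only distinct values count toward the cap
    return out

print(len(collect_list(range(1500))))   # 1000
print(collect_set([1, 2, 2, 3, 1]))     # [1, 2, 3]
```

The session-window caveat in the docs then amounts to the order in which `values` is iterated: first window, late-arriving record, second window.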
https://gaurish4math.wordpress.com/tag/equivalent-norm/
[ "# In the praise of norm\n\nStandard\n\nIf you have spent some time with undergraduate mathematics, you would have probably heard the word “norm”. This term is encountered in various branches of mathematics, like (as per Wikipedia):\n\nBut, it seems to occur only in abstract algebra. Although the definition of this term is always algebraic, it has a topological interpretation when we are working with vector spaces.  It secretly connects a vector space to a topological space where we can study differentiation (metric space), by satisfying the conditions of a metric.  This point of view, along with an inner product structure, is explored when we study functional analysis.\n\nSome facts to remember:\n\n1. Every vector space has a norm. [Proof]\n2. Every vector space has an inner product (assuming “Axiom of Choice”). [Proof]\n3. An inner product naturally induces an associated norm, thus an inner product space is also a normed vector space.  [Proof]\n4. All norms are equivalent in finite dimensional vector spaces. [Proof]\n5. Every normed vector space is a metric space (and NOT vice versa). [Proof]\n6. In general, a vector space is NOT the same as a metric space. [Proof]" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9408952,"math_prob":0.97794694,"size":1114,"snap":"2020-24-2020-29","text_gpt3_token_len":231,"char_repetition_ratio":0.16036037,"word_repetition_ratio":0.0,"special_character_ratio":0.21095152,"punctuation_ratio":0.105,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9879973,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-06-05T07:09:28Z\",\"WARC-Record-ID\":\"<urn:uuid:6da7589b-7454-4b6b-963b-eff8d694d248>\",\"Content-Length\":\"49656\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:16900f0e-8465-4de8-8294-90b6054ba16e>\",\"WARC-Concurrent-To\":\"<urn:uuid:ec48fe85-6fc7-4664-bdb0-3bc0e783d0ec>\",\"WARC-IP-Address\":\"192.0.78.13\",\"WARC-Target-URI\":\"https://gaurish4math.wordpress.com/tag/equivalent-norm/\",\"WARC-Payload-Digest\":\"sha1:KYKRJ6N4D42FUEGVP5OKU453AWZIIXP5\",\"WARC-Block-Digest\":\"sha1:5NKX5UVEAKA73B35VCHXE4SVYFZ6KISO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590348493151.92_warc_CC-MAIN-20200605045722-20200605075722-00529.warc.gz\"}"}
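Facts 3 and 5 in the list above can be illustrated numerically: an inner product induces a norm via ‖x‖ = √⟨x, x⟩, and any norm induces a metric via d(x, y) = ‖x − y‖. A minimal spot-check of the metric axioms on random vectors:

```python
import math
import random

def inner(x, y):
    # standard inner product on R^n
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    # norm induced by the inner product (fact 3)
    return math.sqrt(inner(x, x))

def dist(x, y):
    # metric induced by the norm (fact 5)
    return norm([a - b for a, b in zip(x, y)])

random.seed(0)
x, y, z = ([random.uniform(-1, 1) for _ in range(3)] for _ in range(3))
assert dist(x, z) <= dist(x, y) + dist(y, z) + 1e-12   # triangle inequality
assert abs(dist(x, y) - dist(y, x)) < 1e-12            # symmetry
assert dist(x, x) == 0                                  # identity
```

This is of course a numerical illustration, not a proof; the linked [Proof] entries carry the actual arguments.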
https://discuss.pytorch.org/t/how-can-i-add-a-variable-in-a-network-definition-that-gets-tuned-during-training/33235
[ "# How can I add a variable in a network definition that gets tuned during training?\n\nIs it possible to have a variable inside the network definition that is trainable and gets trained during training?\nTo give a very simplistic example, suppose I want to specify the momentum for batch-normalization or the epsilon to be trained in the network. Can I simply do:\n\n``````self.batch_mom1 = torch.tensor(0, dtype=torch.float32, device='cuda:0', requires_grad=True)\nself.batch_mom2 = torch.tensor(0, dtype=torch.float32, device='cuda:0', requires_grad=True)\nself.batch_mom3 = torch.tensor(0, dtype=torch.float32, device='cuda:0', requires_grad=True)\n``````\n``````model = nn.Sequential(\nnn.Conv2d(3, 66, kernel_size=[3, 3], stride=(1, 1), padding=(1, 1)),\nnn.BatchNorm2d(66, eps=1e-05, momentum=self.batch_mom1.item(), affine=True),\nnn.ReLU(inplace=True),\n\nnn.Conv2d(66, 128, kernel_size=[3, 3], stride=(1, 1), padding=(1, 1)),\nnn.BatchNorm2d(128, eps=1e-05, momentum=self.batch_mom2.item(), affine=True),\nnn.ReLU(inplace=True),\n\nnn.Conv2d(128, 192, kernel_size=[3, 3], stride=(1, 1), padding=(1, 1)),\nnn.BatchNorm2d(192, eps=1e-05, momentum=self.batch_mom3.item(), affine=True),\nnn.ReLU(inplace=True)\n...\n\n``````\n\ninside my graph and expect the variable to be tuned? since it is set as `requires_grad=True`!\nIf not, what is the correct way of doing such things? Should I create a whole new layer for that?\n\nNo, you can't have it inside your graph and expect it to be tuned.\n\nIf you followed the first PyTorch tutorials at https://pytorch.org/tutorials/ , you must’ve learned the concept of an `optimizer`, and how it does the optimization step (which will tune the weights). You have to give these variables to an optimizer, to tune them.\n\nThe default model parameters are given to it with the `model.parameters()` call. 
I suggest you do the tutorial again to get a better understanding.\n\n1 Like\n\nI had the impression that, knowing the dynamic nature of graphs in PyTorch, adding a variable to the graph would automatically include it in the parameter list and thus in the optimization process!\nIt seems my understanding is flawed here.\n\n• adding a variable to the graph would automatically compute its gradients.\n• giving the variable to the optimizer would invoke the update rule (for SGD it is: `x = x - learning_rate * x.grad`), which tunes the variable\n2 Likes\n\nThanks a lot, I really appreciate it; however, I need to write a very simple example for myself so that I can fully get how everything works.\nThere is an example in the docs that shows how it is done in a manual fashion, that is, writing the optimization steps manually (calculating the gradients and then updating the variables respectively).\nHowever, I should still be able to set some parameters in my model, right? So that when I send my `model.parameters()` to an optimizer, it gets optimized accordingly.\nSo basically I should be able to create or define a new `Parameter` in my module first and then add it to my module, by registering it as a module attribute, something like this:\n\n``````myvar = torch.tensor(0, dtype=torch.float32, requires_grad=True)\nmyparam = torch.nn.Parameter(myvar)\nmymodel.register_parameter(param_name, myparam)\n``````\n\nI assume this should add my new parameters to the module list of parameters. Is this assumption correct? Since, according to the documentation for Parameter:\n\nA kind of Tensor that is to be considered a module parameter.\n\nParameters are Tensor subclasses, that have a very special property when used with Modules\nwhen they’re assigned as Module attributes they are automatically added to the list of its parameters, and will appear e.g. in parameters() iterator. Assigning a Tensor doesn’t have such effect. 
This is because one might want to cache some temporary state, like last hidden state of the RNN, in the model.\nIf there was no such class as Parameter, these temporaries would get registered too.\n\nUpdate :" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6251943,"math_prob":0.9145598,"size":1282,"snap":"2022-05-2022-21","text_gpt3_token_len":367,"char_repetition_ratio":0.1259781,"word_repetition_ratio":0.06382979,"special_character_ratio":0.32059282,"punctuation_ratio":0.28903654,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99368626,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-23T12:13:57Z\",\"WARC-Record-ID\":\"<urn:uuid:78b3a897-296d-42a6-8581-54424b2b2614>\",\"Content-Length\":\"26160\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:095845ce-5d93-4eb5-9fcc-7d7dca3270d8>\",\"WARC-Concurrent-To\":\"<urn:uuid:c86ac65f-e245-47bd-9d4b-73a8423cd030>\",\"WARC-IP-Address\":\"159.203.145.104\",\"WARC-Target-URI\":\"https://discuss.pytorch.org/t/how-can-i-add-a-variable-in-a-network-definition-that-gets-tuned-during-training/33235\",\"WARC-Payload-Digest\":\"sha1:WGJDER2UJBWLKWERJMVF66YUNFHCETLD\",\"WARC-Block-Digest\":\"sha1:ZYMWC6YKCXNH5M7COSSHLV7J2PPDU7RC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662558015.52_warc_CC-MAIN-20220523101705-20220523131705-00381.warc.gz\"}"}
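The registration mechanism quoted from the PyTorch docs in the thread above can be mimicked in plain Python. This is a framework-free analogy for intuition, not PyTorch source: a `Parameter`-like subclass assigned as a module attribute is picked up automatically, while a plain value is not:

```python
class Parameter(float):
    """Stand-in for torch.nn.Parameter: a value an optimizer may tune."""

class Module:
    """Toy analogue of torch.nn.Module's attribute-based registration."""
    def __init__(self):
        self._parameters = {}

    def __setattr__(self, name, value):
        # Parameter instances are auto-registered; plain values are not.
        if isinstance(value, Parameter):
            self.__dict__.setdefault("_parameters", {})[name] = value
        object.__setattr__(self, name, value)

    def parameters(self):
        # what you would hand to an optimizer
        return list(self._parameters.values())

m = Module()
m.batch_mom1 = Parameter(0.1)   # auto-registered, visible to parameters()
m.scratch = 3.0                 # plain value: cached state, NOT registered
print(m.parameters())           # [0.1]
```

This also illustrates the doc's closing point: if plain tensors were registered too, temporary state (like a cached RNN hidden state) would wrongly end up in the optimizer.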
https://www.nagwa.com/en/worksheets/670157682798/
[ "# Worksheet: Multiplying Two-Digit Numbers: The Column Method\n\nIn this worksheet, we will practice using the standard algorithm to multiply a two-digit number by another two-digit number and regrouping when necessary.\n\nQ1:\n\nA primary school has 64 classes with 58 students in each class. Calculate the total number of students. Round your answer to the nearest hundred.\n\nQ2:\n\nThe standard algorithm is a quick way to solve multiplication calculations. Which student has correctly started the standard algorithm for .\n\n• A", null, "• B", null, "• C", null, "• D", null, "• E", null, "Q3:\n\nWrite the missing digits to complete the standard algorithm.", null, "• A", null, "• B", null, "• C", null, "• D", null, "• E", null, "Q4:\n\nAn area model is useful for solving more complex calculations such as .", null, "Use an area model to solve .\n\nQ5:\n\nA jaguar sleeps for about 77 hours in a week. How many hours of sleep would a jaguar get in 12 weeks?\n\nQ6:\n\nA car travels 95 miles each day. How far does it travel in 25 days?\n\nQ7:\n\nA lion can eat 18 pounds of meat a day. How many pounds of meat can a lion eat in 12 days?\n\nQ8:\n\nLiam saves \\$15 each week. How much does he save in 12 weeks?\n\nQ9:\n\nCalculate the following.\n\nQ10:\n\nMultiply the following.\n\nQ11:\n\nElephants can drink about 50 gallons of water per day. How much would an elephant drink in 15 days, if it drinks that amount each day?" ]
[ null, "https://images.nagwa.com/figures/624162086470/5.svg", null, "https://images.nagwa.com/figures/624162086470/1.svg", null, "https://images.nagwa.com/figures/624162086470/2.svg", null, "https://images.nagwa.com/figures/624162086470/3.svg", null, "https://images.nagwa.com/figures/624162086470/4.svg", null, "https://images.nagwa.com/figures/285143405756/1.svg", null, "https://images.nagwa.com/figures/285143405756/2.svg", null, "https://images.nagwa.com/figures/285143405756/4.svg", null, "https://images.nagwa.com/figures/285143405756/3.svg", null, "https://images.nagwa.com/figures/285143405756/6.svg", null, "https://images.nagwa.com/figures/285143405756/5.svg", null, "https://images.nagwa.com/figures/680185979786/1.svg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.93463993,"math_prob":0.9832148,"size":749,"snap":"2019-51-2020-05","text_gpt3_token_len":186,"char_repetition_ratio":0.095302016,"word_repetition_ratio":0.0,"special_character_ratio":0.25500667,"punctuation_ratio":0.14634146,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99873084,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24],"im_url_duplicate_count":[null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-18T06:13:27Z\",\"WARC-Record-ID\":\"<urn:uuid:694513e6-5b1c-45a8-a153-1fed4a75d184>\",\"Content-Length\":\"36852\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6a8351ff-3e4b-4863-8946-e4b0ef49fce4>\",\"WARC-Concurrent-To\":\"<urn:uuid:c40078bf-5668-473e-b3b1-7773a594a92c>\",\"WARC-IP-Address\":\"34.204.113.110\",\"WARC-Target-URI\":\"https://www.nagwa.com/en/worksheets/670157682798/\",\"WARC-Payload-Digest\":\"sha1:PDBXZVPP3UGJWWWWHDHI66HSSOZOKFNH\",\"WARC-Block-Digest\":\"sha1:PGDPAOPXBN3Z7FBQ2CMMLGQTIKGQIMLW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250592261.1_warc_CC-MAIN-20200118052321-20200118080321-00412.warc.gz\"}"}
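The area model mentioned in Q4 can be written out in code: split each two-digit factor into tens and ones, multiply the four parts, and add the partial products. The factors 27 × 48 are an illustrative choice, since the worksheet's own figures are images:

```python
def area_model(a, b):
    """Partial products of two two-digit numbers, area-model style."""
    a_tens, a_ones = (a // 10) * 10, a % 10
    b_tens, b_ones = (b // 10) * 10, b % 10
    parts = [a_tens * b_tens, a_tens * b_ones,
             a_ones * b_tens, a_ones * b_ones]
    return parts, sum(parts)

parts, total = area_model(27, 48)
print(parts, total)   # [800, 160, 280, 56] 1296
```

The same decomposition underlies the column method practiced in the rest of the worksheet; the standard algorithm just organizes these partial products vertically.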
https://www.scienceforums.net/topic/83070-accomplishing-the-same-task-through-a-different-set-of-rules/#comment-804647
[ "# Accomplishing the same task through a different set of rules\n\n## Recommended Posts\n\nI was thinking about the quote \"Nothing is impossible\" and it brought up the idea of carrying out a specific task in the same way, but with a different set of restrictions or rules applied to the task that needs to be done.\n\nYes, certain things are impossible because of the rules of physics and the rules of mathematics, but somehow we find a way to accomplish that task another way. I find it interesting how this can be done even within a different set of rules and restrictions. So, it got me thinking that these tasks being done in a different set of restrictions can be generalized by a function: if you have found the process of completing a task within one set of restrictions, then you can find the process of completing a task with any set of restrictions.\n\nWhat does anyone think about this idea? Is it something to think about? I put this in the mathematics section because it is taking an algorithm and defining it for all sets of rules.\n\nEdited by Unity+\n##### Share on other sites\n\nI think it is mistaken. You can't do brain surgery with a hammer and an anvil.", null, "", null, "##### Share on other sites\n\nI think it is mistaken. You can't do brain surgery with a hammer and an anvil.", null, "", null, "It is a hypothetical.", null, "You theoretically COULD, but the likelihood of succeeding with such tools would be 1/10^n.\n\nEDIT: In some ways, I think this may be related to the P vs. NP problem because that problem is dealing with the question of whether a problem that currently lies in the NP spectrum of complexity can also exist in the P part of the spectrum.\n\nOne speculative idea that could be presented is the idea of the reflective property of distribution dealing with algorithms. An algorithm that lies in the complex part of the spectrum also lies in the reflective side of the simplicity spectrum. 
Though, this is merely speculative.\n\nEdited by Unity+\n##### Share on other sites\n\nIt is a hypothetical.", null, "You theoretically COULD, but the likelihood of succeeding with such tools would be 1/10^n.\n\nThen describe your hypothesis for hammer/anvil brain surgery. Simply saying anything is possible is baseless. Yes, many problems have multiple solutions, but it does not follow that all problems have multiple solutions or that all problems have even one solution.\n\n##### Share on other sites\n\nThen describe your hypothesis for hammer/anvil brain surgery.\n\nI looked up the surgery process, and one way it can be done with a hammer and anvil is using the anvil to break open the upper part of the skull and then use the sharp end of the hammer to cut open the tissue that remains above the brain. Then, using the hammer, complete the surgery needed to be done.\n\nRemember, those solutions assume a set of particular axioms.\n\nEdited by Unity+\n##### Share on other sites\n\nI looked up the surgery process, and one way it can be done with a hammer and anvil is using the anvil to break open the upper part of the skull and then use the sharp end of the hammer to cut open the tissue that remains above the brain. Then, using the hammer, complete the surgery needed to be done.\n\nRemember, those solutions assume a set of particular axioms.\n\nYeah I guess that would work...if killing the patient is OK.", null, "##### Share on other sites\n\nYeah I guess that would work...if killing the patient is OK.", null, "You didn't say the patient had to be alive.", null, "But, all jokes aside, my point is all of this is merely hypothetical. 
Same applies to the paper clip idea when finding what can be done with a paper clip.\n\nI think there would have to be some set of circumstances the idea would have to apply to this.\n\nEDIT: Had to fix some grammatical errors.\n\nEdited by Unity+\n##### Share on other sites\n\nYou didn't say the patient had to be alive.", null, "But, all jokes aside, my point is all of this is merely hypothetical. Same applies to the paper clip idea when finding what can be done with a paper clip.\n\nI think there would have to be some set of circumstances the idea would have to apply to.\n\nYou mean the paper clip weight problem from that other thread?\n\nAnyway, I agree this is all speculative, but not hypothetical. If you give/specify a set of circumstances then we could examine them, but the no-holds-barred layout you present just gets us nowhere fast. If by no other basis, Gödel's incompleteness theorem does not allow a generalized function as you prescribe.\n\nMaybe if you give me some concrete examples that you have in mind I can apply the hammer & anvil to them.\n\n##### Share on other sites\n\nYou mean the paper clip weight problem from that other thread?\n\nI refer to the common IQ test (I think IQ?) where the task is to name as many things that a paper clip can be and do.\n\nAnyway, I agree this is all speculative, but not hypothetical. If you give/specify a set of circumstances then we could examine them, but the no-holds-barred layout you present just gets us nowhere fast. If by no other basis, Gödel's incompleteness theorem does not allow a generalized function as you prescribe.\n\nSo then the function would only be generalized with limitation. Let us assume that B is a theory which contains an axiom A and we are trying to prove A using B. This is where this is contradictory because you are trying to prove something that is unprovable and therefore you assume an axiom to be a theorem within B. 
However, we assume that A will always be an axiom rather than a theorem because we would also assume that there is no simpler rule under alleged axiom A. Therefore, wouldn't that have to be taken into consideration?\n\nMaybe if you give me some concrete examples that you have in mind I can apply the hammer & anvil to them.\n\nYou mean provide examples of tasks to complete with those tools?\n\nEdited by Unity+\n##### Share on other sites\n\nI refer to the common IQ test(I think IQ?) where the task is to name as many things that a paper clip can be and do.\n\nNever heard of that one. If I took it they better have a time limit if they expect me to stop though.\n\nSo then the function would only be generalized with limitation. Let us assume that B is a theory which contains an axiom A and we are trying to prove A using B.\n\nThat would be a logical fallacy. (affirming the consequent?)\n\nThis is where this is contradictory because you are trying to prove something that is unprovable and therefore you assume an axiom to be a theorem within B. However, we assume that A will always be an axiom rather than a theorem because we would also assume that there is no simpler rule under alleged axiom A. Therefore, wouldn't that have to be taken into consideration?\n\n\"Alleged axiom\" is not meaningful since an axiom is self-evident and does not require proof. While Gödel's theorem does restrict completeness in a [single] internally consistent system, it does not mean that some theorem so excluded from one system can't have a solution/proof/explanation is some other system with different axioms. This still does not mean everything is possible in some -as in at least one- system.\n\nEdited by Acme\n##### Share on other sites\n\nNever heard of that one. If I took it they better have a time limit if they expect me to stop though.\n\nThat would be a logical fallacy. 
(affirming the consequent?)\n\n\"Alleged axiom\" is not meaningful since an axiom is self-evident and does not require proof. While Gödel's theorem does restrict completeness in a [single] internally consistent system, it does not mean that some theorem so excluded from one system can't have a solution/proof/explanation in some other system with different axioms. This still does not mean everything is possible in some -as in at least one- system.\n\nEdited by Acme\n##### Share on other sites\n\nI see your points. Time to scrap this idea.", null, "EDIT: Alleged wasn't a descriptor of me saying it isn't an axiom, but stating that there may be some lower system to prove the axiom that we don't know of (speculatively). However, this is already proven false.\n\nEdited by Unity+\n##### Share on other sites\n\nI see your points. Time to scrap this idea.", null, "Ideas are like rabbits. You get a couple and learn how to handle them, and pretty soon you have a dozen. ~ John Steinbeck\n\n##### Share on other sites\n\nI think the likelihood of finding a solution to a problem decreases when the first solution to the problem is found;\n\nvery logical if only one solution is possible, but still applies when a hundred different solutions are possible.\n\n(because we're much better in copying ideas instead of coming up with new ones)\n\n##### Share on other sites\n\nWhen I read the first post, my first reaction was of using a different set of rules for problem solving. By \"set of restriction\" you could mean to solve a problem geometrically rather than, say, algebraically. 
But then, if you can do a problem one way, there might be more solutions but never infinite. What I mean is that 'any set of restrictions' can be created in infinite ways. It would be a question of ability to find another way.\n\nWell, the idea also goes along the philosophy of \"with such simple rules, there are complex results. With complex rules, there are simplistic results.\"\n\nI think the likelyhood of finding a solution to a problem decreases when the first solution to the problem is found;\n\nvery logical if only one solution is possible, but still applies when a hundred different solutions are possible.\n\n(because we're much better in copying ideas instead of coming up with new ones)\n\nIn fact, this idea has brought me to open this topic again to discussion. Though my original idea was flawed, I think there are some corrections that can be made to it. Though this is generally true, it is not always true.\n\nGödel's incompleteness theorem does not allow a generalized function as you prescribe.\n\nCould you give more detail onto why this is true? I want to have a list of ideas and restrictions that could lead to another idea I have.\n\nOne flaw that existed within the idea was assuming that we are dealing with an axiom rather than a theorem or set of processes. Instead, the focus should be on determining what set of axioms that process or algorithm rests in first, if that makes sense.\n\nI think the likelyhood of finding a solution to a problem decreases when the first solution to the problem is found;\n\nThat is more a human problem rather than a problem with the amount of ideas that exist for a solution. 
When a process has been found for finding an idea, we generally stick to processes that lie within the range of the original process found because we find that processes that lie in range of the original will work best.\n\nEdited by Unity+\n##### Share on other sites\n\n...\n\nGödel's incompleteness theorem does not allow a generalized function as you prescribe.Gödel's incompleteness theorem does not allow a generalized function as you prescribe.\n\nCould you give more detail onto why this is true? I want to have a list of ideas and restrictions that could lead to another idea I have.\n\n...\n\nUhmmmm ... I ... ehrrrrr ... the uhhh ... Sorry; brain fart day. I got nuthin'.", null, "##### Share on other sites\n\nUhmmmm ... I ... ehrrrrr ... the uhhh ... Sorry; brain fart day. I got nuthin'.", null, "Well, post when you have got something.\n\n##### Share on other sites\n\nWell, post when you have got something.\n\nOK. In the OP you said:\n\n...\n\nYes, certain things are impossible because of the rules of physics and the rules of mathematics, but some how [somehow] we find a way to accomplish that task another way. ...\n\nGödel aside, what task did you have in mind there?\n\n##### Share on other sites\n\nOK. In the OP you said:Gödel aside, what task did you have in mind there?\n\nWell, one task I would have in mind would be finding the prime components of a number that is made up of two large prime numbers or other problems related to the P v NP problem.\n\n##### Share on other sites\n\nYes, certain things are impossible because of the rules of physics and the rules of mathematics, but some how [somehow] we find a way to accomplish that task another way.\n\n...\n\nWell, one task I would have in mind would be finding the prime components of a number that is made up of two large prime numbers or other problems related to the P v NP problem.\n\nThat strikes me as a contradiction in terms. 
Can you give some example of a math problem solved without mathematics?\n\n##### Share on other sites\n\nThat strikes me as a contradiction in terms. Can you give some example of a math problem solved without mathematics?\n\nEDIT: Why would it be a contradiction in terms?\n\nThe whole point is to find a possible algorithms given a particular algorithm that solves a particular task.\n\nEdited by Unity+\n##### Share on other sites\n\nEDIT: Why would it be a contradiction in terms?\n\nBecause you said:\n\nYes, certain things are impossible because of the rules of physics and the rules of mathematics, but somehow we find a way to accomplish that task another way. ...\n\nSo that is saying in effect we can find a way to do math or physics without math or physics rules. Whether you meant that or not, that is what your words communicate and it is contradictory.\n\nThe whole point is to find a possible algorithms given a particular algorithm that solves a particular task.\n\nSo since the general P vs NP problem remains unsolved* then you/we can only take each specific algorithm problem on its own merits. Even then, while a problem might be verified but not solved [yet] does not say anything about whether a solution is possible or impossible. Any approach however must still follow whatever rules the problem necessitates. No magic bullets.\n\nThere are numerous specific examples in the full article I quote from below.\n\n*\n\nThe P versus NP problem is a major unsolved problem in computer science. Informally, it asks whether every problem whose solution can be quickly verified by a computer can also be quickly solved by a computer. ...\n\n##### Share on other sites\n\nThe P versus NP problem is a major unsolved problem in computer science. Informally, it asks whether every problem whose solution can be quickly verified by a computer can also be quickly solved by a computer. 
...\n\nBut what I have been trying to get at is taking algorithms that are already possible to do and use the function at discussion to find another algorithm that is more efficient.\n\n##### Share on other sites\n\nBut what I have been trying to get at is taking algorithms that are already possible to do and use the function at discussion to find another algorithm that is more efficient.\n\nPossibly. It just all depends on the specifics. Not only the specific what's but the specific who's. Take Fermat's last theorem for example. Wiles' proof drew on areas of math that are relatively recent and certainly not around in Fermat's time and even when those areas were extant, no one else but Wiles put them all together to form a proof. This does not mean other proofs aren't possible by other mathematical means or that Fermat had a proof or not.\n\n##### Share on other sites\n\nPossibly. It just all depends on the specifics. Not only the specific what's but the specific who's. Take Fermat's last theorem for example. Wiles' proof drew on areas of math that are relatively recent and certainly not around in Fermat's time and even when those areas were extant, no one else but Wiles put them all together to form a proof. This does not mean other proofs aren't possible by other mathematical means or that Fermat had a proof or not.\n\nIt also goes along to proving that .9999... is equal to 1.\n\nWe know that 1/3 equals .333.. and 3*1/3= 1\n\nThere is also other proofs of this, such as 1/9 = .1111... and 1/9 * 9 = 9/9 and .999.....\n\nTherefore, a similarity between those two proofs is 1/3^n * 3^n = 1.\n\nEDIT: Breaking this down, 1/n^s * n^s = 1.\n\nEdited by Unity+\n\n## Create an account\n\nRegister a new account" ]
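The closing arithmetic in the thread (3 · 1/3 = 1, 9 · 1/9 = 1, and the generalization 1/n^s · n^s = 1) holds exactly in rational arithmetic, which a few lines of Python can check. This is only an illustration of the identities in the posts, not a proof that 0.999… = 1:

```python
from fractions import Fraction

# 3 * (1/3) and 9 * (1/9) are exactly 1 in rational arithmetic
assert 3 * Fraction(1, 3) == 1
assert 9 * Fraction(1, 9) == 1

# The generalization from the last post: (1/n**s) * n**s == 1
for n in range(2, 10):
    for s in range(1, 5):
        assert Fraction(1, n ** s) * n ** s == 1

print("all identities hold")
```

The point the `Fraction` type makes explicit is that the forum proofs rely on exact fractions, while the decimal expansions (.333..., .999...) are only their representations.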
http://hanyu.utumao.com/zuci/xian5.html
[ "# 现组词大全_用现字组词_现怎么组词_开头|中间|结尾带现字的词语\n\n2012月\n\n• 现的拼音及解释\n• 现组词:现字在开头的词语\n• 现组词:现字在中间的词语\n• 现组词:现字在结尾的词语\n• ## 现的拼音及解释\n\n现(現)xiàn(ㄒ一ㄢˋ)\n\n1、显露:出现。表现。发现。体现。现身说法。\n\n2、实有的,当时就有的:现金。现款。现货。现实(a.客观存在的事物;b.合于客观情况的)。\n\n3、目前,当时:现时。现在。现今。现场。现代。现买现卖。\n\n• 现年\n\n• 现金支付\n\n• 现代都市农业\n\n• 现代意识\n\n• 现影\n\n• 现代设计\n\n• 现生\n\n• 现刻\n\n• 现金支票\n\n• 现钱\n\n• 现化\n\n• 现代性\n\n• 现而今\n\n• 现代海洋科学\n\n• 现代戏\n\n• 现代美术\n\n• 现实\n\n• 现代主义\n\n• 现业\n\n• 现代中式\n\n• 现代科学技术\n\n• 现代\n\n• 现实治疗模式\n\n• 现疾说法\n\n• 现货交易\n\n• 现代服饰\n\n• 现代汉语词典\n\n• 现洋\n\n• 现况\n\n• 现象学\n\n• 现代史\n\n• 现代文\n\n• 现代通信技术\n\n• 现局\n\n• 现如今\n\n• 现货\n\n• 现钟弗打\n\n• 现代逻辑\n\n• 现在\n\n• 现代文学\n\n• 现场会\n\n• 现状\n\n• 现路子\n\n• 现代装饰\n\n• 现出\n\n• 现代五项运动\n\n• 现代汉语\n\n• 现在进行时\n\n• 现露\n\n• 现货市场\n\n• 现代冬季两项\n\n• 现实主义者\n\n• 现实主义\n\n• 现挂\n\n• 现金\n\n• 现前\n\n• 现大洋\n\n• 现世\n\n• 现快\n\n• 现代新儒学\n\n• 现弄\n\n• 现代信息技术\n\n• 现代派诗歌\n\n• 现境\n\n• 现萨\n\n• 现眼\n\n• 现世生苗\n\n• 现代艺术\n\n• 现象\n\n• 现代派\n\n• 现成饭\n\n• 现打不赊\n\n• 现实美\n\n• 现代教育技术\n\n• 现形\n\n• 现时报\n\n• 现银\n\n• 现下\n\n• 现代主义建筑\n\n• 现场\n\n• 现代散文\n\n• 现代大学教育\n\n• 现行\n\n• 现款\n\n• 现示\n\n• 现身\n\n• 现房\n\n• 现实感\n\n• 现代咨询学\n\n• 现钟不打\n\n• 现验\n\n• 现时\n\n• 现代社会\n\n• 现代女性\n\n• 现代诗\n\n• 现行价格\n\n• 现役\n\n• 现钞\n\n• 现存\n\n• 现代医院\n\n• 现实主义文学\n\n## 现组词:现字在中间的词语\n\n• 社会主义现实主义\n\n• 一月普现一切水\n\n• 来登夫罗斯脱氏现象\n\n• 农业现代化\n\n• 圣女现象\n\n• 表现力\n\n• 一家不成,两家现在\n\n• 打嘴现世\n\n• 接受现实\n\n• 变现能力\n\n• 表现功能\n\n• 发现学习教学法\n\n• 虹吸现象\n\n• 具象表现绘画\n\n• 干涉现象\n\n• 发现式学习\n\n• 厄尔尼诺现象\n\n• 本质与现象\n\n• 发现学习论\n\n• 朴金野现象\n\n• 潜能性与实现性\n\n• 随机现象\n\n• 返祖现象\n\n• 贴现率\n\n• 重现江湖\n\n• 安於现状\n\n• 文言现象\n\n• 社会现象\n\n• 可变现净值\n\n• 虚拟现实\n\n• 发现者\n\n• 舌尖现象\n\n• 不怕县官,只怕现管\n\n• 表现性目标\n\n• 实现人生价值\n\n• 犯罪现场\n\n• 毛细现象\n\n• 拉尼娜现象\n\n• 吃现成\n\n• 发现式教学法\n\n• 反圣婴现象\n\n• 显现\n\n• 表现主义\n\n• 表现型\n\n• 发现权\n\n• 怪现象\n\n• 发现学习\n\n• 文化现象\n\n• 具现化\n\n• 似动现象\n\n• 公告现值\n\n• 二十年目睹之怪现状\n\n• 典型表现评量\n\n• 收付实现制\n\n• 活现世\n\n• 透过现象看本质\n\n• 超现实主义\n\n• 反厄尔尼诺现象\n\n• 四个现代化\n\n• 贴现窗口\n\n• 佛现鸟\n\n• 丁铎尔现象\n\n• 行为表现标准\n\n• 后现代理论\n\n• 出现\n\n• 大气现象\n\n• 照相现实主义\n\n• 批判现实主义\n\n• 表现手法\n\n• 复冰现象\n\n• 折现率\n\n• 英国教师素质管理制度的发展与现况\n\n• 
自然现象\n\n• 无现金社会\n\n• 相克现象\n\n• 美国教师素质管理制度的现况\n\n• 自锁现象\n\n• 虚拟现实技术\n\n• 净现值\n\n• 吃现成饭\n\n• 教育现代化\n\n• 丢人现眼\n\n• 安于现状\n\n## 现组词:现字在结尾的词语\n\n• 地理大发现\n\n• 返现\n\n• 毕现\n\n• 体现\n\n• 重现\n\n• 活神活现\n\n• 实现\n\n• 复现\n\n• 变现\n\n• 映现\n\n• 乍现\n\n• 呈现\n\n• 优昙一现\n\n• 显现\n\n• 突现\n\n• 图穷匕现\n\n• 展现\n\n• 贴现\n\n• 付现\n\n• 活眼活现\n\n• 知识变现\n\n• 再现\n\n• 瑕瑜互现\n\n• 权现\n\n• 佛现\n\n• 爱现\n\n• 投现\n\n• 折现\n\n• 表现\n\n• 闪现\n\n• 涌现\n\n• 隐现\n\n• 自我实现\n\n• 示现\n\n• 提现\n\n• 兑现\n\n• 应现\n\n• 起现\n\n• 活现\n\n• 自我表现\n\n• 清现\n\n• 浮现\n\n• 神气活现\n\n• 曙光乍现\n\n• 真龙活现\n\n• 诈现\n\n• 凸现\n\n• 活灵活现\n\n• 发现\n\n• 活形活现\n\n• 踊现\n\n• 出现\n\n• 透现\n\n• 忽隐忽现\n\n• 活龙活现\n\n• 交货付现\n\n• 生龙活现\n\n• 若隐若现\n\n• 良心发现\n\n• 昙花一现\n\n• 时隐时现" ]
https://pubmed.ncbi.nlm.nih.gov/16624967/?dopt=Abstract
[ "# Variable selection for propensity score models\n\nAm J Epidemiol. 2006 Jun 15;163(12):1149-56. doi: 10.1093/aje/kwj149. Epub 2006 Apr 19.\n\n## Abstract\n\nDespite the growing popularity of propensity score (PS) methods in epidemiology, relatively little has been written in the epidemiologic literature about the problem of variable selection for PS models. The authors present the results of two simulation studies designed to help epidemiologists gain insight into the variable selection problem in a PS analysis. The simulation studies illustrate how the choice of variables that are included in a PS model can affect the bias, variance, and mean squared error of an estimated exposure effect. The results suggest that variables that are unrelated to the exposure but related to the outcome should always be included in a PS model. The inclusion of these variables will decrease the variance of an estimated exposure effect without increasing bias. In contrast, including variables that are related to the exposure but not to the outcome will increase the variance of the estimated exposure effect without decreasing bias. In very small studies, the inclusion of variables that are strongly related to the exposure but only weakly related to the outcome can be detrimental to an estimate in a mean squared error sense. The addition of these variables removes only a small amount of bias but can increase the variance of the estimated exposure effect. These simulation studies and other analytical results suggest that standard model-building tools designed to create good predictive models of the exposure will not always lead to optimal PS models, particularly in small studies.\n\n## Publication types\n\n• Research Support, N.I.H., Extramural\n\n## MeSH terms\n\n• Confounding Factors, Epidemiologic*\n• Effect Modifier, Epidemiologic\n• Epidemiologic Methods*\n• Humans\n• Models, Statistical\n• Monte Carlo Method\n• Regression Analysis" ]
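The abstract's main bias-variance pattern can be reproduced with a small Monte Carlo sketch. This is a hypothetical setup, not the authors' actual simulation design: `z_out` affects only the outcome, `z_exp` affects only the exposure, and the true exposure effect is 1.0, so there is no confounding and both estimators are unbiased. Weighting by a propensity score fitted on the exposure-only covariate should inflate the variance of the estimated effect relative to one fitted on the outcome-only covariate:

```python
import math
import random
from statistics import mean, pvariance

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(x, a, steps=100, lr=2.0):
    """Fit P(A=1 | x) = sigmoid(b0 + b1*x) by gradient ascent on the mean log-likelihood."""
    b0 = b1 = 0.0
    n = float(len(x))
    for _ in range(steps):
        g0 = g1 = 0.0
        for xi, ai in zip(x, a):
            r = ai - sigmoid(b0 + b1 * xi)
            g0 += r
            g1 += r * xi
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

def iptw(y, a, ps):
    """Stabilized (Hajek) inverse-probability-weighted difference in means."""
    m1 = sum(ai * yi / p for ai, yi, p in zip(a, y, ps)) / sum(ai / p for ai, p in zip(a, ps))
    m0 = sum((1 - ai) * yi / (1 - p) for ai, yi, p in zip(a, y, ps)) / \
         sum((1 - ai) / (1 - p) for ai, p in zip(a, ps))
    return m1 - m0

def simulate(reps=100, n=200, seed=7):
    """True exposure effect is 1.0; there is no confounding in this setup."""
    rng = random.Random(seed)
    estimates = {"outcome_only": [], "exposure_only": []}
    for _ in range(reps):
        z_out = [rng.gauss(0.0, 1.0) for _ in range(n)]  # related to outcome only
        z_exp = [rng.gauss(0.0, 1.0) for _ in range(n)]  # related to exposure only
        a = [1 if rng.random() < sigmoid(1.5 * z) else 0 for z in z_exp]
        y = [1.0 * ai + 1.5 * z + rng.gauss(0.0, 1.0) for ai, z in zip(a, z_out)]
        for covar, key in ((z_out, "outcome_only"), (z_exp, "exposure_only")):
            b0, b1 = fit_logistic(covar, a)
            ps = [min(max(sigmoid(b0 + b1 * c), 0.02), 0.98) for c in covar]
            estimates[key].append(iptw(y, a, ps))
    return estimates

results = simulate()
for key, vals in results.items():
    print(f"PS model with {key} covariate: mean={mean(vals):.3f}  variance={pvariance(vals):.4f}")
```

Both means land near the true effect of 1.0, while the exposure-only model shows a visibly larger variance across replications, matching the abstract's conclusion that standard predictive model-building for the exposure is not necessarily optimal for a propensity score model.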
https://www.ijert.org/evaluation-of-deep-drawing-force-in-sheet-metal-forming
[ "# Evaluation of Deep Drawing Force in Sheet Metal Forming", "V.S. Vedpathak, Mechanical Department (Automotive Engineering), Sinhgad College of Engineering, Pune, Maharashtra\n\nDr. A.P. Tadamalle, Mechanical Department, Sinhgad College of Engineering, Pune, Maharashtra\n\nProf. D.H. Burande, Mechanical Department, NBN Sinhgad School of Engineering, Pune, Maharashtra\n\nAbstract — Sheet metal forming is widely used in the automotive and aerospace sectors. This paper presents an analysis of the sheet metal forming process, mainly deep drawing. Ansys simulation is used to predict the forming behaviour of the product. The main purpose of this project is to predict the force required to perform the deep drawing operation, and to establish the machine capacity required to fulfil the production requirement. From theoretical values the dimensions of die and punch are obtained. Using these dimensions the CAD model is generated in CATIA. This assembly is then converted into igs format and imported into Ansys. The force required to develop the part, the deformation, and defects like tearing and wrinkles can be obtained through simulation. With this method it is easy to find the required force in fewer steps than the conventional trial-and-error method, which saves time, money, and material.\n\nKeywords — Ansys simulation, deep drawing, part development and simulation, punch force, use of force probe in Ansys.\n\n1. INTRODUCTION\n\nDeep drawing is a process of forming a sheet metal component through die and punch action. It is a near-net-shape forming process which produces the component close to the required tolerance dimensions with good surface finish, and it also provides good strain hardening and higher strength.
On the basis of geometry, volume and material, sheet metal forming operations can be divided into categories like stamping, deep drawing, and superplastic forming. Among these, stamping and deep drawing are the most frequently used. If the depth of the formed cup is more than its diameter, the forming process is known as deep drawing. Products like automotive components, aircraft parts, cans, cylindrical cups, and submarine hulls are manufactured using the deep drawing procedure. The important variables in deep drawing are the properties of the sheet material, the surface finish of punch and die, lubrication, blank holding force (BHF), clearance between punch and die, the number of drawing stages, the coefficient of friction between blank and rigid parts, etc. Punch force plays an important role in drawing the blank into the desired shape, and it is very difficult to obtain the exact force required on the punch to draw the blank to the desired depth. The sheet metal industry faces a number of challenges during development of the die, punch, and other parameters of the required part. These operations are complicated and call for high competency in part design, material selection and process factors. If the process parameters are not correct, defects like wrinkling, tearing, fracture, thinning, and spring back may occur. These defects can be minimised by using correct process parameters like punch force, BHF, clearance between die and punch, number of drawing stages, etc. This paper shows an FEA method to predict the force required for the drawing operation using Ansys simulation. It also shows the significance of the joint (force) probe in Ansys and its importance in the development of forming operations.\n\nShishir Anwekar discussed the process of developing die and punch, stated that it is a complicated process needing highly skilled workers to produce an error-free assembly, and used the simulation technique to overcome the trial-and-error procedure.
Shambhuraje Jagatap discussed the effect of temperature on the die, blank, and blank holder in the process, and stated that as temperature increases, contact pressure decreases. Hakim S. Sultan performed the work using LUSAS simulation software and concluded that increasing the die radius decreases the load needed to deform the same pattern while increasing the maximum stress concentration in the curve region. Akshay Chaudhari based his paper on the calculation of the different parameters used in the deep drawing process, such as blank diameter, draw ratio, draw force, clearance, and machine tonnage. Mathews Kaonga discussed forming processes such as punching, blanking and drawing, and also stated the reason behind the variation between calculated and simulated values and the significance of the constants involved. S. Schneider provides the use of the Ramberg-Osgood equation to find the tangent modulus of a material. The above papers provide the basic required data but do not show the actual procedure to find the punch force required to develop a defect-free component.\n\n2. PROCEDURE FOR PART DEVELOPMENT\n\nDie development for a component is a very costly and time-consuming process when conventional methods are used, so technologies like CAE are used during the design of actual components to detect problem areas in forming. This technology saves work, time and cost. During the development of the deep drawing component (an emergency cup), theoretical calculations are first carried out for cup height 35 mm, cup diameter 125 mm, and sheet thickness 1.2 mm to obtain parameters such as blank diameter, clearance between die and punch, blank holding force, and drawing force. From this data the CAD model is generated using 3D CAD software (CATIA), and from these 3D parts the deep drawing process assembly is generated. This assembly is imported into Ansys in igs format; after applying material properties and boundary conditions, the simulation is carried out.
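The theoretical calculation step for the 125 mm × 35 mm × 1.2 mm cup can be collected into a short script. This is a sketch using standard deep drawing formulas; the clearance factor `k` and the drawing-force constant `C` below are assumed typical values, not taken from the paper, so the printed force differs from the paper's tabulated 26 ton (the Results section itself attributes such differences to the choice of the constant C):

```python
import math

def blank_diameter(d, h):
    """Blank diameter for a flat-bottomed cup: D = sqrt(d^2 + 4*d*h)."""
    return math.sqrt(d * d + 4.0 * d * h)

def number_of_draws(h, dp):
    """Stage count from the h/d ratio (see Table I)."""
    ratio = h / dp
    if ratio < 0.75:
        return 1
    if ratio < 1.5:
        return 2
    if ratio < 3.0:
        return 3
    return 4

def clearance(t, k=0.07):
    """Radial die-punch clearance c = t + k*sqrt(10*t); k is an assumed factor."""
    return t + k * math.sqrt(10.0 * t)

def draw_force(dp, t, S, D, C=0.7):
    """Empirical drawing force p = pi*dp*t*S*((D/dp) - C), in newtons for mm/MPa inputs.

    C is an empirical constant depending on die angle, friction and lubrication
    (assumed 0.7 here).
    """
    return math.pi * dp * t * S * (D / dp - C)

# Cup data from the paper: punch diameter 125 mm, cup height 35 mm,
# sheet thickness 1.2 mm, material strength 350 MPa
dp, h, t, S = 125.0, 35.0, 1.2, 350.0
D = blank_diameter(dp, h)
p = draw_force(dp, t, S, D)
bhf = 0.3 * p           # blank holding force = 30% of draw force
press = p + bhf         # press tonnage
print(f"D = {D:.0f} mm, draws = {number_of_draws(h, dp)}, "
      f"p = {p / 9806.65:.1f} t, press = {press / 9806.65:.1f} t")
```

The blank diameter (182 mm) and the single-stage classification reproduce the paper's values directly; the force terms scale with whatever constants are chosen for `k` and `C`.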
As a result of the simulation we finally get the required punch force using the force probe tool.\n\nA. Theoretical Calculation\n\nImportant deep drawing process parameters are calculated using standard formulas as follows.\n\nBlank diameter: D = (d^2 + 4dh)^0.5 ——(1)\n\nDraw ratio = h/dp ——(2)\n\nTable I — The relationship between (h/d) ratio and number of draws\n\n| Condition | Number of draws |\n| --- | --- |\n| h/dp < 0.75 | 1 |\n| 0.75 < h/dp < 1.5 | 2 |\n| 1.5 < h/dp < 3.0 | 3 |\n| 3.0 < h/dp < 4.5 | 4 |\n\nEquation (2) and Table I indicate that the drawing process requires only one stage, i.e. this is a single-stage deep drawing process.\n\nPunch and die clearance: C = t + k(10t)^0.5 ——(3)\n\nDrawing force (empirical relation): p = π · dp · t · S · ((D0/dp) − C) ——(4)\n\nFOS is taken as 1.5.\n\nBlank holding force is approximately 20% to 30% of the draw force: B.H.F. = 30% of draw force ——(5)\n\nPress tonnage: P = p + B.H.F. ——(6)\n\nTable II — Design calculation values\n\n| Equation No. | Parameter | Value |\n| --- | --- | --- |\n| 1 | D | 182 mm |\n| 2 | h/dp | 0.3549 |\n| 3 | C | 2.04 mm |\n| 4 | p | 26 ton |\n| 5 | B.H.F. | 8 ton |\n| 6 | P | 34 ton |\n\nB. CAD Model and Deep Drawing Assembly\n\nThe solid model of the emergency cup is modeled using the 3D modeling software CATIA (generative sheet metal design workbench) based on the drawing provided by the company.\n\nFig. 1 CAD model in CATIA\n\nThis model is then imported into SolidWorks, which has the advantage of generating the punch and die assembly using its user-friendly mold tools. The final assembly is exported in igs format to carry out the simulation.\n\nFig. 2 Die & punch assembly in SolidWorks\n\nC. Blank Material Property\n\nFor the deep drawing process a low carbon material is required to minimize defect generation. Generally, EDD (Extra Deep Draw) material is used; available grades include CR1, CR2, CR3, CR4, and CR5. As per the requirements and suggestions of the company (Priya Autocomponents Pvt. Ltd.), CR4 material is used for the deep drawing process. CR4 is a cold rolled steel with high elongation and low carbon content, well suited to cold forming. The chemical and mechanical properties of CR4 are shown in the following tables.\n\nTable III — Mechanical properties of material\n\n| Designation | Name | Yield stress Re (MPa, max) | Tensile strength Rm (MPa) | Elongation % A min (Lo = 80 mm) | Elongation % A min (Lo = 50 mm) | Hardness HR (30T) |\n| --- | --- | --- | --- | --- | --- | --- |\n| CR4 | Extra Deep Drawing Aluminum Killed | 210 | 350 | 36 | 37 | 50 |\n\nThe tangent modulus is the slope of the stress-strain curve. Below the proportional limit the tangent modulus is equivalent to Young's modulus; above the proportional limit it changes with strain and is accurately calculated from actual test data. The Ramberg-Osgood equation relates Young's modulus to the tangent modulus and is one method for obtaining the tangent modulus. The tangent modulus plays an important role in describing the behavior of materials stressed beyond the elastic region: when a material is plastically deformed, there is no longer a linear relationship between stress and strain as there is for elastic deformations.
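A common form of the Ramberg–Osgood relation is ε = σ/E + 0.002·(σ/σ_y)^n, and differentiating it gives the tangent modulus directly. The sketch below assumes that form with E = 200 GPa, σ_y = 210 MPa, and n = 5; the paper's own equation (7) and its constants may use different conventions, so the printed values are illustrative rather than a reproduction of the paper's figure:

```python
def tangent_modulus(sigma, E=200e9, sigma_y=210e6, n=5):
    """Tangent modulus E_t = dsigma/deps from the Ramberg-Osgood strain relation
    eps = sigma/E + 0.002*(sigma/sigma_y)**n (assumed form).

    deps/dsigma = 1/E + (0.002*n/sigma_y)*(sigma/sigma_y)**(n-1); E_t is its reciprocal.
    """
    deps_dsigma = 1.0 / E + (0.002 * n / sigma_y) * (sigma / sigma_y) ** (n - 1)
    return 1.0 / deps_dsigma

# The tangent modulus equals E at zero stress and drops sharply past yield
for s_mpa in (100, 210, 300):
    print(f"sigma = {s_mpa} MPa -> E_t = {tangent_modulus(s_mpa * 1e6) / 1e9:.1f} GPa")
```

Whatever the exact constants, the qualitative behaviour is the one the text describes: the modulus "softens" rapidly once yielding begins.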
The tangent modulus determines the "softening" or "hardening" of a material that generally occurs when it begins to yield. The tangent modulus is calculated from the Ramberg-Osgood equation using the constant n = 5 and modulus of elasticity E = 200 GPa:\n\n——(7)\n\nThe tangent modulus for CR4 material computed using equation (7) is 2685061000 Pa ≈ 2.69 GPa.\n\n2. Finite Element Analysis\n\nThe igs file generated from SolidWorks is imported into Ansys. CR4 material properties such as density, Young's modulus, Poisson's ratio, bulk modulus, shear modulus, tangent modulus, yield strength, and ultimate strength are missing from the Ansys material library, so it is necessary to create custom material properties for CR4 in the library. The following table shows the material properties assigned to the blank.\n\nTable IV — Material properties of CR4\n\nMeshing is done using Ansys Mechanical APDL with element sizes of 2 mm for the blank and 3 mm for the other body parts; for these element sizes the node and element counts are 98538 and 21557 respectively. Contact between punch and blank and between blank holder and blank is frictional; the die is stationary while the punch and blank holder are translational in motion. The punch travels along the positive Y axis, with its displacement set as 35 mm from contact with the blank. A transient simulation is carried out for better convergence of the problem for deflection, stresses, and joint probe force.\n\nThe results are as follows:\n\n1. Total deformation is 33.9 mm and the thickness of the bottom circle is around 9 mm, which shows the total deformation is within the acceptable value.\n\nFig. 3 Total deformation of cup\n\n2. Equivalent elastic strain is 0.009. This value is the ratio of the change in dimension to the original dimension; from the results we can say the wall is not too thin.\n\nFig. 4 Equivalent elastic strain of cup\n\n3. Fig. 5 shows the von Mises stress generated in the component. Maximum stresses occur at the curve or neck of the cup.
This can lead to tearing of the component at the neck, but in this simulation no tearing evidence was found at the neck while checking the animated simulation video.\n\nFig. 5 Equivalent stress of cup\n\n4. Shear stress indicates fracture or failure in the material. If the shear stress is large, tearing or surface distortion defects are generated in the product. In this case the shear stress is within the limit, but the surface at the edge of the neck region shows some chance of tearing.\n\nFig. 6 Shear stress of cup\n\n5. Fig. 7 shows the load required for the deformation of the blank. The load results are calculated from the reaction values with respect to the punch target elements; the load is reported along each axis, and the resultant load/force is obtained from these components.\n\nFig. 7 Local coordinate system of punch probe\n\nTable V shows the reaction force generated on the X, Y, and Z axes of the punch. From these values the resultant is calculated, which gives a force of 30.37 ton along the positive Y axis.\n\nTable V — Punch joint probe result\n\n| Definition | Joint Probe |\n| --- | --- |\n| Result Type | Total Force |\n| X Axis | 2.9783e+005 N |\n| Y Axis | -47.583 N |\n| Z Axis | 763.65 N |\n| Total | 2.9783e+005 N |\n\n6. In the deep drawing process, due to stretching, the sheet thickness varies from region to region. The thinning parameter mostly depends on the depth of drawing: as the depth of drawing increases, thinning increases, i.e. the wall thickness reduces. These results are calculated with the help of Altair Inspire Form software.\n\nFig. 8 Thickness of cup at various regions\n\n3.
RESULT AND DISCUSSION\n\nThe table below shows the strain, stress, and resultant punch force generated at particular deformation values.\n\nTable VI — Result table\n\n| Total deformation (mm) | Equivalent elastic strain ×10^-3 | Equivalent stress (MPa) | Shear elastic strain ×10^-3 | Shear stress (MPa) | Total force (ton) |\n| --- | --- | --- | --- | --- | --- |\n| 0.1258 | 1.52 | 206.64 | 1.08 | 85.852 | 0.976 |\n| 12.492 | 6.49 | 1328.2 | 6.8 | 539.9 | 19.257 |\n| 20.310 | 7.78 | 1581.6 | 7.94 | 636.99 | 23.942 |\n| 31.238 | 9.52 | 1884.5 | 8.13 | 645.51 | 28.522 |\n| 33.989 | 9.59 | 1943.4 | 8.52 | 676.78 | 31.02 |\n| 36.270 | 9.89 | 2086.2 | 8.88 | 704.68 | 31.437 |\n\nThe graph below shows that the von Mises stress and shear stress increase with deformation, up to the point where fracture occurs in the wall surface. The behaviour of the graph is non-linear.\n\nFig. 9 Stress analysis with change in deformation\n\nThis graph shows the relationship between change in deformation and change in strain: as deformation increases, strain also increases. The behaviour of this graph is non-linear.\n\nFig. 10 Strain analysis with change in deformation\n\nThe main finding of this paper is the punch force required in a forming operation like deep drawing, calculated using the Ansys force probe tool. The graph below shows that as deformation increases, the punch force also increases; it increases up to the tearing of the component wall and then decreases. The next graph shows the force values in tons for specific deformations.\n\nFig. 11 a) Required force as deformation increases\n\nFig. 11 b) Force values for specific deformation\n\nThe joint probe table and chart show the force required to draw the blank into the desired shape, i.e. the required punch force is 2.978 × 10^5 N, which is 30.37 ton. This punch force is calculated by the Ansys transient structural solver; for the full body the required punch force is 31 ton.\n\nTaking a factor of safety of 1.5, Draw Force (P) = 46 ton\n\nPress Tonnage = Draw Force + (B.H.F.)
= 54 ton          ...(8)

From the theoretical calculation the required force is 34 ton, while the ANSYS simulation gives 54 ton. In actual production the required machine force is in the range of 54 to 55 ton. The maximum force is therefore very different in magnitude from the theoretical deep drawing force of 34 ton calculated from the equation. This difference can be attributed to the average value of the constant C used in equation (4), which depends on the die angle, friction and lubrication.

Problem areas such as wrinkling, tearing and thinning are among the biggest challenges in industry nowadays, but these problems can be minimized and sorted out in the initial design stages by using simulation techniques. In this simulation no evidence of wrinkling was found. In the case of tearing, as shown in Fig. 6, the shear stress is maximum at the neck of the cup, but it indicates only the initial stage of tearing, not an actual tear at the neck. Fig. 8 shows the thickness of the cup, which is 1.2 mm at maximum (the original sheet thickness) and 0.7 mm at minimum. From these values the thinning is approximately 41 percent.

4. EXPERIMENTAL VALIDATION

Fig. 12 Experimental setup

The die and punch are mounted on a Digvijay power press of 60 ton capacity with the help of the guide pin and alignment tool on the work bed. The experimental setup is prepared as shown in the figure above. Before conducting the actual experiment, the machine setting parameters are verified; the surface finish and alignment of the punch and die are checked, the edges are inspected, and proper oil application on the surface of the component is ensured. In the actual validation the machine is set to different tonnages, starting from 40 ton and increasing in steps of 5 ton up to 60 ton. Five components are manufactured for every force setting of the punch.
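The force arithmetic above can be collected into a short illustrative sketch. It is not part of the paper's method, only a check of the quoted numbers; the g = 9.81 m/s² constant used for the N-to-ton conversion is an assumption, and the 8 ton blank-holding force is inferred from the 54 − 46 ton difference rather than restated in the text.

```python
import math

# Axis reaction forces from the punch joint probe (Table V), in newtons
fx, fy, fz = 2.9783e5, -47.583, 763.65

# Resultant punch force = vector magnitude of the axis components
resultant_n = math.sqrt(fx**2 + fy**2 + fz**2)
# Convert newtons to metric ton-force (assumes g = 9.81 m/s^2)
resultant_ton = resultant_n / (1000 * 9.81)

# Press-tonnage build-up of equation (8):
draw_force = 46           # 31 ton full-body punch force x FOS 1.5 = 46.5, taken as 46 ton
bhf = 54 - draw_force     # blank-holding force implied by Press Tonnage = Draw Force + B.H.F.
press_tonnage = draw_force + bhf

print(round(resultant_ton, 2), press_tonnage)  # ≈ 30.36 ton resultant, 54 ton press tonnage
```

The ≈ 30.36 ton resultant agrees with the 30.37 ton figure read from the joint probe chart, and the 54 ton total matches equation (8).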
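The thinning percentages quoted in this paper follow the standard definition (t₀ − t_min)/t₀ × 100 with t₀ the original 1.2 mm sheet thickness; a minimal sketch reproducing both the simulated and the measured values:

```python
# Percentage thinning from original thickness t0 and minimum wall thickness t_min (mm)
def thinning_pct(t0_mm, t_min_mm):
    return (t0_mm - t_min_mm) / t0_mm * 100

sim_thinning = thinning_pct(1.2, 0.7)  # simulated minimum thickness (Altair Inspire Form)
exp_thinning = thinning_pct(1.2, 0.8)  # measured minimum thickness (height-gauge validation)

print(round(sim_thinning, 1), round(exp_thinning, 1))  # 41.7 33.3
```

These match the "approximately 41 percent" simulated and "approximately 33 percent" experimental thinning reported here.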
The deformation of the cup is measured using a height gauge; from the five values at each setting, the most repeated deformation value is taken, as given in Table VII below.

As the force increases, a greater deformation of the cup base is obtained. The process requires an exact drawing depth of 35 mm; the deformations shown in Table VII are the actual values obtained from the experimental work. The wrinkle defect is not observed in any iteration, and all parts are also safe from the tearing problem. The simulation showed signs of tearing at the neck region, but experimentally not a single part tore at the neck or at the bottom region. The maximum thickness of the cup is 1.2 mm and the minimum thickness is 0.8 mm, at the wall region. The maximum percentage thinning is approximately 33 percent, which is acceptable.

Table VII Actual Deformation as Force Increases

 Force (ton) | Deformation (mm)
 40          | 12.58
 45          | 20.26
 50          | 31.15
 55          | 34.31
 60          | 36.80

5. CONCLUSION

The maximum force in the simulated deep drawing process (Equation 8) is calculated as 54 ton, and this force is validated by conducting the experimental process at a 55 ton force (shown in Table VII). The experimental and computational values of the force match well with each other. From the results it can also be concluded that the computational method used in this paper is well suited for deep drawing.

From the experimental results it is observed that the manufactured part is wrinkle-free; hence equation (5) provides the correct blank holding force for defect-free production of deep-drawn parts. The tearing defect is also absent throughout the experimentation. The maximum thinning of the cup is 33 percent, i.e. a wall thickness of 0.8 mm, which is acceptable. Therefore a press machine of up to 60 ton capacity can be used for the required process.

ACKNOWLEDGMENT

It is a precious moment to acknowledge all the people who helped me in my final project work. Firstly, I am very thankful to my guide Prof. D.
H. Burande for his valuable time and support in the completion of the project work. I am very thankful to Ms. Priya Gogawale from Priya Autocomponents Pvt. Ltd., Chakan, Pune, for giving me the opportunity to do this project and for believing that I am capable of doing it successfully. I am also very thankful to my coordinator Dr. A. P. Tadamalle for guiding me from time to time.
# Transform

Transforms represent the position, rotation, and scale of objects in the game. They are immutable, but new Transforms can be created when you want to change an object's Transform.

## Constructors

| Constructor Name | Return Type | Description | Tags |
| --- | --- | --- | --- |
| `Transform.New()` | `Transform` | Constructs a new identity Transform. | None |
| `Transform.New(Quaternion rotation, Vector3 position, Vector3 scale)` | `Transform` | Constructs a new Transform with a Quaternion. | None |
| `Transform.New(Rotation rotation, Vector3 position, Vector3 scale)` | `Transform` | Constructs a new Transform with a Rotation. | None |
| `Transform.New(Vector3 x_axis, Vector3 y_axis, Vector3 z_axis, Vector3 translation)` | `Transform` | Constructs a new Transform from a Matrix. | None |
| `Transform.New(Transform transform)` | `Transform` | Copies the given Transform. | None |

## Constants

| Constant Name | Return Type | Description | Tags |
| --- | --- | --- | --- |
| `Transform.IDENTITY` | `Transform` | Constant identity Transform. | None |

## Functions

| Function Name | Return Type | Description | Tags |
| --- | --- | --- | --- |
| `GetPosition()` | `Vector3` | Returns a copy of the position component of the Transform. | None |
| `SetPosition(Vector3)` | `None` | Sets the position component of the Transform. | None |
| `GetRotation()` | `Rotation` | Returns a copy of the Rotation component of the Transform. | None |
| `SetRotation(Rotation)` | `None` | Sets the rotation component of the Transform. | None |
| `GetQuaternion()` | `Quaternion` | Returns a quaternion-based representation of the Rotation. | None |
| `SetQuaternion(Quaternion)` | `None` | Sets the quaternion-based representation of the Rotation. | None |
| `GetScale()` | `Vector3` | Returns a copy of the scale component of the Transform. | None |
| `SetScale(Vector3)` | `None` | Sets the scale component of the Transform. | None |
| `GetForwardVector()` | `Vector3` | Forward vector of the Transform. | None |
| `GetRightVector()` | `Vector3` | Right vector of the Transform. | None |
| `GetUpVector()` | `Vector3` | Up vector of the Transform. | None |
| `GetInverse()` | `Transform` | Inverse of the Transform. | None |
| `TransformPosition(Vector3 position)` | `Vector3` | Applies the Transform to the given position in 3D space. | None |
| `TransformDirection(Vector3 direction)` | `Vector3` | Applies the Transform to the given directional Vector3. This will rotate and scale the Vector3, but does not apply the Transform's position. | None |

## Operators

| Operator Name | Return Type | Description | Tags |
| --- | --- | --- | --- |
| `Transform * Transform` | `Transform` | Returns a new Transform composing the left and right Transforms. | None |
| `Transform * Quaternion` | `Transform` | Returns a new Transform composing the left Transform then the right side rotation. | None |

## Examples

Example using:

### `New`

In this example we show that getting an object's rotation, position, and scale individually is the same as getting the object's entire transform.

```lua
local OBJ = script.parent

local rot = OBJ:GetRotation()
local pos = OBJ:GetPosition()
local sca = OBJ:GetScale()
local newTransform = Transform.New(rot, pos, sca)

if newTransform == OBJ:GetTransform() then
    print("Equal")
else
    print("Not equal")
end
```

Example using:

### `SetPosition`

In this example, we move an object up by 2 meters by modifying its transform. The script is placed as a child of the object and networking is enabled on the object. The same can be done for rotation and scale. Notice that, when manipulating Core Objects directly, their "world" position is most often changed, whereas changes to a transform's properties are always in local space.

```lua
local OBJ = script.parent

-- Grab a copy of the transform
local t = OBJ:GetTransform()

-- Change its position up by 2 meters
local pos = t:GetPosition()
pos = pos + Vector3.New(0, 0, 200)
t:SetPosition(pos)

-- Apply the transform back to the object
OBJ:SetTransform(t)
```

Example using:

### `Transform * Quaternion`

While many operations can be achieved by changing an object's position or rotation directly, some algorithms are best achieved with `Transforms` (matrices). In this example, we build an elaborate 3D spiral structure by spawning a series of cubes. This takes advantage of the composable nature of transforms, which can accumulate an indefinite series of position/rotation/scale, all while being highly efficient on the CPU.

```lua
-- Template of bottom-aligned cube that will be spawned
local TEMPLATE = script:GetCustomProperty("CubeBottomAligned")

-- Build the math structures only once
local T_OFFSET = Transform.New(Quaternion.IDENTITY, Vector3.UP * 100, Vector3.ONE)
local T_ARC = Transform.New(Rotation.New(5, 7, 0), Vector3.ZERO, Vector3.ONE)
local Q_TIGHTEN = Quaternion.New(Rotation.New(0.03, 0.06, -0.01))

-- Initial transform composition that will be applied to itself, over and over
local composition = T_ARC * T_OFFSET

t = script:GetTransform()

function SpawnOne()
    -- Slight rotation to tighten the spiral
    composition = composition * Q_TIGHTEN
    -- Iterate the composition
    t = composition * t
    -- Spawn the cube with the given transform
    local obj = World.SpawnAsset(TEMPLATE, {transform = t})
end

-- Loop over time. Spawn a cube every 30ms
function Tick()
    Task.Wait(0.03)
    SpawnOne()
end
```
### Exercise 2.4 [Pages 59 - 60] Balbharati solutions for Mathematics and Statistics 1 (Commerce) 12th Standard HSC Maharashtra State Board Chapter 2 Matrices

Exercise 2.4 | Q 1.1 | Page 59

#### QUESTION

Find A^T, if A = $\begin{bmatrix}1&3\\-4&5\end{bmatrix}$

#### SOLUTION

A = $\begin{bmatrix}1&3\\-4&5\end{bmatrix}$

∴ A^T = $\begin{bmatrix}1&-4\\3&5\end{bmatrix}$.

Exercise 2.4 | Q 1.2 | Page 59

#### QUESTION

Find A^T, if A = $\begin{bmatrix}2&-6&1\\-4&0&5\end{bmatrix}$

#### SOLUTION

A = $\begin{bmatrix}2&-6&1\\-4&0&5\end{bmatrix}$

∴ A^T = $\begin{bmatrix}2&-4\\-6&0\\1&5\end{bmatrix}$.

Exercise 2.4 | Q 2 | Page 59

#### QUESTION

If A = $[a_{ij}]_{3\times 3}$, where $a_{ij} = 2(i - j)$, find A and A^T.
State whether A and A^T both are symmetric or skew-symmetric matrices?

#### SOLUTION

A = $[a_{ij}]_{3\times 3} = \begin{bmatrix}a_{11}&a_{12}&a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\end{bmatrix}$

Given, $a_{ij} = 2(i - j)$
∴ $a_{11}$ = 2(1 - 1) = 0,  $a_{12}$ = 2(1 - 2) = -2,
$a_{13}$ = 2(1 - 3) = -4,  $a_{21}$ = 2(2 - 1) = 2,
$a_{22}$ = 2(2 - 2) = 0,  $a_{23}$ = 2(2 - 3) = -2,
$a_{31}$ = 2(3 - 1) = 4,  $a_{32}$ = 2(3 - 2) = 2,
$a_{33}$ = 2(3 - 3) = 0

∴ A = $\begin{bmatrix}0&-2&-4\\2&0&-2\\4&2&0\end{bmatrix}$

∴ A^T = $\begin{bmatrix}0&2&4\\-2&0&-2\\-4&-2&0\end{bmatrix}$

= $-\begin{bmatrix}0&-2&-4\\2&0&-2\\4&2&0\end{bmatrix} = -\text{A}$

∴ A^T = -A and A = -A^T
∴ A and A^T both are skew-symmetric matrices.

Exercise 2.4 | Q 3 | Page 59

#### QUESTION

If A = $\begin{bmatrix}5&-3\\4&-3\\-2&1\end{bmatrix}$, prove that (A^T)^T = A.

#### SOLUTION

A = $\begin{bmatrix}5&-3\\4&-3\\-2&1\end{bmatrix}$

∴ A^T = $\begin{bmatrix}5&4&-2\\-3&-3&1\end{bmatrix}$

∴ (A^T)^T = $\begin{bmatrix}5&-3\\4&-3\\-2&1\end{bmatrix}$ = A.

Exercise 2.4 | Q 4 | Page 59

#### QUESTION

If A = $\begin{bmatrix}1&2&-5\\2&-3&4\\-5&4&9\end{bmatrix}$, prove that A^T = A.

#### SOLUTION

A = $\begin{bmatrix}1&2&-5\\2&-3&4\\-5&4&9\end{bmatrix}$

∴ A^T = $\begin{bmatrix}1&2&-5\\2&-3&4\\-5&4&9\end{bmatrix}$ = A.

Exercise 2.4 | Q 5.1 | Page 59

#### QUESTION

If A = $\begin{bmatrix}2&-3\\5&-4\\-6&1\end{bmatrix}$, B = $\begin{bmatrix}2&1\\4&-1\\-3&3\end{bmatrix}$, C = $\begin{bmatrix}1&2\\-1&4\\-2&3\end{bmatrix}$, then show that (A + B)^T = A^T + B^T.

#### SOLUTION

A + B = $\begin{bmatrix}2&-3\\5&-4\\-6&1\end{bmatrix}+\begin{bmatrix}2&1\\4&-1\\-3&3\end{bmatrix}$

= $\begin{bmatrix}2+2&-3+1\\5+4&-4-1\\-6-3&1+3\end{bmatrix}$

= $\begin{bmatrix}4&-2\\9&-5\\-9&4\end{bmatrix}$

∴ (A + B)^T = $\begin{bmatrix}4&9&-9\\-2&-5&4\end{bmatrix}$        ...(i)

Now, A^T = $\begin{bmatrix}2&5&-6\\-3&-4&1\end{bmatrix}$ and B^T = $\begin{bmatrix}2&4&-3\\1&-1&3\end{bmatrix}$

∴ A^T + B^T = $\begin{bmatrix}2&5&-6\\-3&-4&1\end{bmatrix}+\begin{bmatrix}2&4&-3\\1&-1&3\end{bmatrix}$

= $\begin{bmatrix}2+2&5+4&-6-3\\-3+1&-4-1&1+3\end{bmatrix}$

= $\begin{bmatrix}4&9&-9\\-2&-5&4\end{bmatrix}$         ...(ii)

From (i) and (ii), we get
(A + B)^T = A^T + B^T.

Exercise 2.4 | Q 5.2 | Page 59

#### QUESTION

If A = $\begin{bmatrix}2&-3\\5&-4\\-6&1\end{bmatrix}$, B = $\begin{bmatrix}2&1\\4&-1\\-3&3\end{bmatrix}$, C = $\begin{bmatrix}1&2\\-1&4\\-2&3\end{bmatrix}$, then show that (A - C)^T = A^T - C^T.

#### SOLUTION

A - C = $\begin{bmatrix}2&-3\\5&-4\\-6&1\end{bmatrix}-\begin{bmatrix}1&2\\-1&4\\-2&3\end{bmatrix}$

= $\begin{bmatrix}2-1&-3-2\\5+1&-4-4\\-6+2&1-3\end{bmatrix}$

= $\begin{bmatrix}1&-5\\6&-8\\-4&-2\end{bmatrix}$

∴ (A - C)^T = $\begin{bmatrix}1&6&-4\\-5&-8&-2\end{bmatrix}$              ...(i)

Now, A^T = $\begin{bmatrix}2&5&-6\\-3&-4&1\end{bmatrix}$ and C^T = $\begin{bmatrix}1&-1&-2\\2&4&3\end{bmatrix}$

∴ A^T - C^T = $\begin{bmatrix}2&5&-6\\-3&-4&1\end{bmatrix}-\begin{bmatrix}1&-1&-2\\2&4&3\end{bmatrix}$

= $\begin{bmatrix}2-1&5+1&-6+2\\-3-2&-4-4&1-3\end{bmatrix}$

= $\begin{bmatrix}1&6&-4\\-5&-8&-2\end{bmatrix}$          ...(ii)

From (i) and (ii), we get
(A - C)^T = A^T - C^T.

Exercise 2.4 | Q 6 | Page 59

#### QUESTION

If A = $\begin{bmatrix}5&4\\-2&3\end{bmatrix}$ and B = $\begin{bmatrix}-1&3\\4&-1\end{bmatrix}$, then find C^T, such that 3A - 2B + C = I, where I is the unit matrix of order 2.

#### SOLUTION

3A - 2B + C = I
∴ C = I + 2B - 3A

∴ C = $\begin{bmatrix}1&0\\0&1\end{bmatrix}+2\begin{bmatrix}-1&3\\4&-1\end{bmatrix}-3\begin{bmatrix}5&4\\-2&3\end{bmatrix}$

= $\begin{bmatrix}1&0\\0&1\end{bmatrix}+\begin{bmatrix}-2&6\\8&-2\end{bmatrix}-\begin{bmatrix}15&12\\-6&9\end{bmatrix}$

= $\begin{bmatrix}1-2-15&0+6-12\\0+8+6&1-2-9\end{bmatrix}$

∴ C = $\begin{bmatrix}-16&-6\\14&-10\end{bmatrix}$

∴ C^T = $\begin{bmatrix}-16&14\\-6&-10\end{bmatrix}$.

Exercise 2.4 | Q 7.1 | Page 59

#### QUESTION

If A = $\begin{bmatrix}7&3&0\\0&4&-2\end{bmatrix}$, B = $\begin{bmatrix}0&-2&3\\2&1&-4\end{bmatrix}$, then find A^T + 4B^T.

#### SOLUTION

A = $\begin{bmatrix}7&3&0\\0&4&-2\end{bmatrix}$ and B = $\begin{bmatrix}0&-2&3\\2&1&-4\end{bmatrix}$

∴ A^T = $\begin{bmatrix}7&0\\3&4\\0&-2\end{bmatrix}$ and B^T = $\begin{bmatrix}0&2\\-2&1\\3&-4\end{bmatrix}$

A^T + 4B^T = $\begin{bmatrix}7&0\\3&4\\0&-2\end{bmatrix}+4\begin{bmatrix}0&2\\-2&1\\3&-4\end{bmatrix}$

= $\begin{bmatrix}7&0\\3&4\\0&-2\end{bmatrix}+\begin{bmatrix}0&8\\-8&4\\12&-16\end{bmatrix}$

= $\begin{bmatrix}7+0&0+8\\3-8&4+4\\0+12&-2-16\end{bmatrix}$

= $\begin{bmatrix}7&8\\-5&8\\12&-18\end{bmatrix}$.

Exercise 2.4 | Q 7.2 | Page 59

#### QUESTION

If A = $\begin{bmatrix}7&3&0\\0&4&-2\end{bmatrix}$, B = $\begin{bmatrix}0&-2&3\\2&1&-4\end{bmatrix}$, then find 5A^T - 5B^T.

#### SOLUTION

A = $\begin{bmatrix}7&3&0\\0&4&-2\end{bmatrix}$ and B = $\begin{bmatrix}0&-2&3\\2&1&-4\end{bmatrix}$

∴ A^T = $\begin{bmatrix}7&0\\3&4\\0&-2\end{bmatrix}$ and B^T = $\begin{bmatrix}0&2\\-2&1\\3&-4\end{bmatrix}$

5A^T - 5B^T = 5(A^T - B^T)

= $5\left(\begin{bmatrix}7&0\\3&4\\0&-2\end{bmatrix}-\begin{bmatrix}0&2\\-2&1\\3&-4\end{bmatrix}\right)$

= $5\begin{bmatrix}7-0&0-2\\3+2&4-1\\0-3&-2+4\end{bmatrix}$

= $5\begin{bmatrix}7&-2\\5&3\\-3&2\end{bmatrix}$

= $\begin{bmatrix}35&-10\\25&15\\-15&10\end{bmatrix}$.

Exercise 2.4 | Q 8 | Page 59

#### QUESTION

If A = $\begin{bmatrix}1&0&1\\3&1&2\end{bmatrix}$, B = $\begin{bmatrix}2&1&-4\\3&5&-2\end{bmatrix}$, C = $\begin{bmatrix}0&2&3\\-1&-1&0\end{bmatrix}$, verify that (A + 2B + 3C)^T = A^T + 2B^T + 3C^T.

#### SOLUTION

A + 2B + 3C

= $\begin{bmatrix}1&0&1\\3&1&2\end{bmatrix}+2\begin{bmatrix}2&1&-4\\3&5&-2\end{bmatrix}+3\begin{bmatrix}0&2&3\\-1&-1&0\end{bmatrix}$

= $\begin{bmatrix}1&0&1\\3&1&2\end{bmatrix}+\begin{bmatrix}4&2&-8\\6&10&-4\end{bmatrix}+\begin{bmatrix}0&6&9\\-3&-3&0\end{bmatrix}$

= $\begin{bmatrix}1+4+0&0+2+6&1-8+9\\3+6-3&1+10-3&2-4+0\end{bmatrix}$

∴ A + 2B + 3C = $\begin{bmatrix}5&8&2\\6&8&-2\end{bmatrix}$

∴ (A + 2B + 3C)^T = $\begin{bmatrix}5&6\\8&8\\2&-2\end{bmatrix}$   ...(i)

Now, A^T = $\begin{bmatrix}1&3\\0&1\\1&2\end{bmatrix}$, B^T = $\begin{bmatrix}2&3\\1&5\\-4&-2\end{bmatrix}$ and C^T = $\begin{bmatrix}0&-1\\2&-1\\3&0\end{bmatrix}$

∴ A^T + 2B^T + 3C^T

= $\begin{bmatrix}1&3\\0&1\\1&2\end{bmatrix}+2\begin{bmatrix}2&3\\1&5\\-4&-2\end{bmatrix}+3\begin{bmatrix}0&-1\\2&-1\\3&0\end{bmatrix}$

= $\begin{bmatrix}1&3\\0&1\\1&2\end{bmatrix}+\begin{bmatrix}4&6\\2&10\\-8&-4\end{bmatrix}+\begin{bmatrix}0&-3\\6&-3\\9&0\end{bmatrix}$

= $\begin{bmatrix}1+4+0&3+6-3\\0+2+6&1+10-3\\1-8+9&2-4+0\end{bmatrix}$

∴ A^T + 2B^T + 3C^T = $\begin{bmatrix}5&6\\8&8\\2&-2\end{bmatrix}$     ...(ii)

From (i) and (ii), we get
(A + 2B + 3C)^T = A^T + 2B^T + 3C^T.

Exercise 2.4 | Q 9 | Page 59

#### QUESTION

If A = $\begin{bmatrix}-1&2&1\\-3&2&-3\end{bmatrix}$ and B = $\begin{bmatrix}2&1\\-3&2\\-1&3\end{bmatrix}$, prove that (A + B^T)^T = A^T + B.

#### SOLUTION

A = $\begin{bmatrix}-1&2&1\\-3&2&-3\end{bmatrix}$ and B = $\begin{bmatrix}2&1\\-3&2\\-1&3\end{bmatrix}$

∴ A^T = $\begin{bmatrix}-1&-3\\2&2\\1&-3\end{bmatrix}$ and B^T = $\begin{bmatrix}2&-3&-1\\1&2&3\end{bmatrix}$

∴ A + B^T = $\begin{bmatrix}-1&2&1\\-3&2&-3\end{bmatrix}+\begin{bmatrix}2&-3&-1\\1&2&3\end{bmatrix}$

= $\begin{bmatrix}-1+2&2-3&1-1\\-3+1&2+2&-3+3\end{bmatrix}$

= $\begin{bmatrix}1&-1&0\\-2&4&0\end{bmatrix}$

∴ (A + B^T)^T = $\begin{bmatrix}1&-2\\-1&4\\0&0\end{bmatrix}$       ...(i)

Now, A^T + B = $\begin{bmatrix}-1&-3\\2&2\\1&-3\end{bmatrix}+\begin{bmatrix}2&1\\-3&2\\-1&3\end{bmatrix}$

= $\begin{bmatrix}-1+2&-3+1\\2-3&2+2\\1-1&-3+3\end{bmatrix}$

= $\begin{bmatrix}1&-2\\-1&4\\0&0\end{bmatrix}$                 ...(ii)

From (i) and (ii), we get
(A + B^T)^T = A^T + B.

Exercise 2.4 | Q 10.1 | Page 59

#### QUESTION

Prove that A + A^T is a symmetric and A - A^T is a skew symmetric matrix, where A = $\begin{bmatrix}1&2&4\\3&2&1\\-2&-3&2\end{bmatrix}$

#### SOLUTION

A = $\begin{bmatrix}1&2&4\\3&2&1\\-2&-3&2\end{bmatrix}$

∴ A^T = $\begin{bmatrix}1&3&-2\\2&2&-3\\4&1&2\end{bmatrix}$

∴ A + A^T = $\begin{bmatrix}1&2&4\\3&2&1\\-2&-3&2\end{bmatrix}+\begin{bmatrix}1&3&-2\\2&2&-3\\4&1&2\end{bmatrix}$

= $\begin{bmatrix}1+1&2+3&4-2\\3+2&2+2&1-3\\-2+4&-3+1&2+2\end{bmatrix}$

∴ A + A^T = $\begin{bmatrix}2&5&2\\5&4&-2\\2&-2&4\end{bmatrix}$

∴ (A + A^T)^T = $\begin{bmatrix}2&5&2\\5&4&-2\\2&-2&4\end{bmatrix}$

∴ (A + A^T)^T = A + A^T, i.e., A + A^T = (A + A^T)^T
∴ A + A^T is a symmetric matrix.

A - A^T = $\begin{bmatrix}1&2&4\\3&2&1\\-2&-3&2\end{bmatrix}-\begin{bmatrix}1&3&-2\\2&2&-3\\4&1&2\end{bmatrix}$

= $\begin{bmatrix}1-1&2-3&4+2\\3-2&2-2&1+3\\-2-4&-3-1&2-2\end{bmatrix}$

∴ A - A^T = $\begin{bmatrix}0&-1&6\\1&0&4\\-6&-4&0\end{bmatrix}$

∴ (A - A^T)^T = $\begin{bmatrix}0&1&-6\\-1&0&-4\\6&4&0\end{bmatrix}$

= $-\begin{bmatrix}0&-1&6\\1&0&4\\-6&-4&0\end{bmatrix}$

∴ (A - A^T)^T = -(A - A^T), i.e., A - A^T = -(A - A^T)^T
∴ A - A^T is a skew symmetric matrix.

Exercise 2.4 | Q 10.2 | Page 59

#### QUESTION

Prove that A + A^T is a symmetric and A - A^T is
a skew symmetric matrix, where A = $\\left[\\begin{array}{ccc}5& 2& -4\\\\ 3& -7& 2\\\\ 4& -5& -3\\end{array}\\right]$\n\n#### SOLUTION\n\nA = $\\left[\\begin{array}{ccc}5& 2& -4\\\\ 3& -7& 2\\\\ 4& -5& -3\\end{array}\\right]$\n\n∴ AT = $\\left[\\begin{array}{ccc}5& 3& 4\\\\ 2& -7& -5\\\\ -4& 2& -3\\end{array}\\right]$\n\n∴ A + AT = $\\left[\\begin{array}{ccc}5& 2& -4\\\\ 3& -7& 2\\\\ 4& -5& -3\\end{array}\\right]+\\left[\\begin{array}{ccc}5& 3& 4\\\\ 2& -7& -5\\\\ -4& 2& -3\\end{array}\\right]$\n\n= $\\left[\\begin{array}{ccc}5+5& 2+3& -4+4\\\\ 3+2& -7-7& 2-5\\\\ 4-4& -5+2& -3-3\\end{array}\\right]$\n\n∴ A + AT = $\\left[\\begin{array}{ccc}10& 5& 0\\\\ 5& -14& -3\\\\ 0& -3& -6\\end{array}\\right]$\n\n∴  (A + AT)T = A + AT i.e., A + AT = (A + AT)T\n∴ A + AT = is a symmetric matrix.\n\nA – AT = $\\left[\\begin{array}{ccc}5& 2& -4\\\\ 3& -7& 2\\\\ 4& -5& -3\\end{array}\\right]-\\left[\\begin{array}{ccc}5& 3& 4\\\\ 2& -7& -5\\\\ -4& 2& -3\\end{array}\\right]$\n\n= $\\left[\\begin{array}{ccc}5-5& 2-3& -4-4\\\\ 3-2& -7+7& 2+5\\\\ 4+4& -5-2& -3+3\\end{array}\\right]$\n\n= $\\left[\\begin{array}{ccc}0& -1& -8\\\\ 1& 0& 7\\\\ 8& -7& 0\\end{array}\\right]$\n\n∴ (A – AT)T = $\\left[\\begin{array}{ccc}0& 1& 8\\\\ -1& 0& -7\\\\ -8& 7& 0\\end{array}\\right]$\n\n= $\\left[\\begin{array}{ccc}0& -1& -8\\\\ 1& 0& 7\\\\ 8& -7& 0\\end{array}\\right]$\n\n∴ (A – AT)T = – (A – AT)\ni.e., A – AT = – (A – AT)T\n∴ A – AT  is a skew symmetric matrix.\n\nExercise 2.4 | Q 11.1 | Page 59\n\n#### QUESTION\n\nExpress each of the following matrix as the sum of a symmetric and a skew symmetric matrix $\\left[\\begin{array}{cc}4& -2\\\\ 3& -5\\end{array}\\right]$.\n\n#### SOLUTION\n\nA square matrix A can be expressed as the sum of a symmetric and a skew-symmetric matrix as\n\nA = $\\frac{1}{2}\\left(\\text{A}+{\\text{A}}^{\\text{T}}\\right)+\\frac{1}{2}\\left(\\text{A}-{\\text{A}}^{\\text{T}}\\right)$\n\nLet A = $\\left[\\begin{array}{cc}4& -2\\\\ 3& -5\\end{array}\\right]$\n\n∴ AT = 
$\\left[\\begin{array}{cc}4& 3\\\\ -2& -5\\end{array}\\right]$\n\n∴ A + AT = $\\left[\\begin{array}{cc}4& -2\\\\ 3& -5\\end{array}\\right]+\\left[\\begin{array}{cc}4& 3\\\\ -2& -5\\end{array}\\right]$\n\n= $\\left[\\begin{array}{cc}4+4& -2+3\\\\ 3-2& -5-5\\end{array}\\right]$\n\n= $\\left[\\begin{array}{cc}8& 1\\\\ 1& -10\\end{array}\\right]$\n\nAlso, A – AT = $\\left[\\begin{array}{cc}4& -2\\\\ 3& -5\\end{array}\\right]-\\left[\\begin{array}{cc}4& 3\\\\ -2& -5\\end{array}\\right]$\n\n= $\\left[\\begin{array}{cc}4-4& -2-3\\\\ 3+2& -5+5\\end{array}\\right]$\n\n= $\\left[\\begin{array}{cc}0& -5\\\\ 5& 0\\end{array}\\right]$\n\nLet P = $\\frac{1}{2}\\left(\\text{A}+{\\text{A}}^{\\text{T}}\\right)$\n\n= $\\frac{1}{2}\\left[\\begin{array}{cc}8& 1\\\\ 1& -10\\end{array}\\right]$\n\n= $\\left[\\begin{array}{cc}4& \\frac{1}{2}\\\\ \\frac{1}{2}& -5\\end{array}\\right]$\nand\nQ = $\\frac{1}{2}\\left(\\text{A}-{\\text{A}}^{\\text{T}}\\right)$\n\n= $\\frac{1}{2}\\left[\\begin{array}{cc}0& -5\\\\ 5& 0\\end{array}\\right]$\n\n= $\\left[\\begin{array}{cc}0& -\\frac{5}{2}\\\\ \\frac{5}{2}& 0\\end{array}\\right]$\n\n∴ P is a symmetric matrix         ...[∵ aij = aij]\n\nand Q is a skew-symmetric matrix.   
...[∵ aij = – aij]\n∴ A = P + Q\n\n∴ A = $\\left[\\begin{array}{cc}4& \\frac{1}{2}\\\\ \\frac{1}{2}& -5\\end{array}\\right]+\\left[\\begin{array}{cc}0& -\\frac{5}{2}\\\\ \\frac{5}{2}& 0\\end{array}\\right]$.\n\nExercise 2.4 | Q 11.2 | Page 59\n\n#### QUESTION\n\nExpress each of the following matrix as the sum of a symmetric and a skew symmetric matrix $\\left[\\begin{array}{ccc}3& 3& -1\\\\ -2& -2& 1\\\\ -4& -5& 2\\end{array}\\right]$.\n\n#### SOLUTION\n\nA square matrix A can be expressed as the sum of a symmetric and a skew-symmetric matrix as\n\nA = $\\frac{1}{2}\\left(\\text{A}+{\\text{A}}^{\\text{T}}\\right)+\\frac{1}{2}\\left(\\text{A}-{\\text{A}}^{\\text{T}}\\right)$\n\nLet A = $\\left[\\begin{array}{ccc}3& 3& -1\\\\ -2& -2& 1\\\\ -4& -5& 2\\end{array}\\right]$\n\n∴ AT = $\\left[\\begin{array}{ccc}3& -2& -4\\\\ 3& -2& -5\\\\ -1& 1& 2\\end{array}\\right]$\n\n∴ A + AT = $\\left[\\begin{array}{ccc}3& 3& -1\\\\ -2& -2& 1\\\\ -4& -5& 2\\end{array}\\right]+\\left[\\begin{array}{ccc}3& -2& -4\\\\ 3& -2& -5\\\\ -1& 1& 2\\end{array}\\right]$\n\n= $\\left[\\begin{array}{ccc}3+3& 3-2& -1-4\\\\ -2+3& -2-2& 1-5\\\\ -4-1& -5+1& 2+2\\end{array}\\right]$\n\n= $\\left[\\begin{array}{ccc}6& 1& -5\\\\ 1& -4& -4\\\\ -5& -4& 4\\end{array}\\right]$\n\nAlso, A – AT = $\\left[\\begin{array}{ccc}3& 3& -1\\\\ -2& -2& 1\\\\ -4& -5& 2\\end{array}\\right]-\\left[\\begin{array}{ccc}3& -2& -4\\\\ 3& -2& -5\\\\ -1& 1& 2\\end{array}\\right]$\n\n= $\\left[\\begin{array}{ccc}3-3& 3+2& -1+4\\\\ -2-3& -2+2& 1+5\\\\ -4+1& -5-1& 2-2\\end{array}\\right]$\n\n= $\\left[\\begin{array}{ccc}0& 5& 3\\\\ -5& 0& 6\\\\ -3& -6& 0\\end{array}\\right]$\n\nLet P = $\\frac{1}{2}\\left(\\text{A}+{\\text{A}}^{\\text{T}}\\right)$\n\n= $\\frac{1}{2}\\left[\\begin{array}{ccc}6& 1& -5\\\\ 1& -4& -4\\\\ -5& -4& 4\\end{array}\\right]$\n\nand Q = $\\frac{1}{2}\\left(\\text{A}-{\\text{A}}^{\\text{T}}\\right)$\n\n= $\\frac{1}{2}\\left[\\begin{array}{ccc}0& 5& 3\\\\ -5& 0& 6\\\\ -3& -6& 0\\end{array}\\right]$\n\n∴ P is a 
symmetric matrix          ...[∵ aij = aij]\n\nand Q is a skew symmetric matrix.  ...[∵ aij = –  aij]\n∴ A = P + Q\n\n∴ A = $\\frac{1}{2}\\left[\\begin{array}{ccc}6& 1& -5\\\\ 1& -4& -4\\\\ -5& -4& 4\\end{array}\\right]+\\frac{1}{2}\\left[\\begin{array}{ccc}0& 5& 3\\\\ -5& 0& 6\\\\ -3& -6& 0\\end{array}\\right]$.\n\nExercise 2.4 | Q 12.1 | Page 60\n\n#### QUESTION\n\nIf A = $\\left[\\begin{array}{cc}2& -1\\\\ 3& -2\\\\ 4& 1\\end{array}\\right]\\text{and B}=\\left[\\begin{array}{ccc}0& 3& -4\\\\ 2& -1& 1\\end{array}\\right]$, verify that (AB)T = BTAT.\n\n#### SOLUTION\n\nA = $\\left[\\begin{array}{cc}2& -1\\\\ 3& -2\\\\ 4& 1\\end{array}\\right]\\text{and B}=\\left[\\begin{array}{ccc}0& 3& -4\\\\ 2& -1& 1\\end{array}\\right]$\n\n∴ AT = $\\left[\\begin{array}{ccc}2& 3& 4\\\\ -1& -2& 1\\end{array}\\right]{\\text{and B}}^{\\text{T}}=\\left[\\begin{array}{cc}0& 2\\\\ 3& -1\\\\ -4& 1\\end{array}\\right]$\n\nAB = $\\left[\\begin{array}{cc}2& -1\\\\ 3& -2\\\\ 4& 1\\end{array}\\right]\\left[\\begin{array}{ccc}0& 3& -4\\\\ 2& -1& 1\\end{array}\\right]$\n\n= $\\left[\\begin{array}{ccc}0-2& 6+1& -8-1\\\\ 0-4& 9+2& -12-2\\\\ 0+2& 12-1& -16+1\\end{array}\\right]$\n\n= $\\left[\\begin{array}{ccc}-2& 7& -9\\\\ -4& 11& -14\\\\ 2& 11& -15\\end{array}\\right]$\n\n∴ (AB)T = $\\left[\\begin{array}{ccc}-2& -4& 2\\\\ 7& 11& 11\\\\ -9& -14& -15\\end{array}\\right]$     ...(i)\n\nBTAT = $\\left[\\begin{array}{cc}0& 2\\\\ 3& -1\\\\ -4& 1\\end{array}\\right]\\left[\\begin{array}{ccc}2& 3& 4\\\\ -1& -2& 1\\end{array}\\right]$\n\n= $\\left[\\begin{array}{ccc}0-2& 0-4& 0+2\\\\ 6+1& 9+2& 12-1\\\\ -8-1& -12-2& -16+1\\end{array}\\right]$\n\n= $\\left[\\begin{array}{ccc}-2& -4& 2\\\\ 7& 11& 11\\\\ -9& -14& -15\\end{array}\\right]$          ...(ii)\nFrom (i) and (ii), we get\n(AB)T = BTAT.\n\nExercise 2.4 | Q 12.2 | Page 60\n\n#### QUESTION\n\nIf A = $\\left[\\begin{array}{cc}2& -1\\\\ 3& -2\\\\ 4& 1\\end{array}\\right]\\text{and B}=\\left[\\begin{array}{ccc}0& 3& -4\\\\ 2& -1& 1\\end{array}\\right]$, 
verify that (BA)T = ATBT.\n\n#### SOLUTION\n\nBA = $\\left[\\begin{array}{ccc}0& 3& -4\\\\ 2& -1& 1\\end{array}\\right]\\left[\\begin{array}{cc}2& -1\\\\ 3& -2\\\\ 4& 1\\end{array}\\right]$\n\n= $\\left[\\begin{array}{cc}0+9-16& 0-6-4\\\\ 4-3+4& -2+2+1\\end{array}\\right]$\n\n∴ BA = $\\left[\\begin{array}{cc}-7& -10\\\\ 5& 1\\end{array}\\right]$\n\n∴ (BA)T = $\\left[\\begin{array}{cc}-7& 5\\\\ -10& 1\\end{array}\\right]$          ...(i)\n\nATBT = $\\left[\\begin{array}{ccc}2& 3& 4\\\\ -1& -2& 1\\end{array}\\right]\\left[\\begin{array}{cc}0& 2\\\\ 3& -1\\\\ -4& 1\\end{array}\\right]$\n\n= $\\left[\\begin{array}{cc}0+9-16& 4-3+4\\\\ 0-6-4& -2+2+1\\end{array}\\right]$\n\n= $\\left[\\begin{array}{cc}-7& 5\\\\ -10& 1\\end{array}\\right]$                ...(ii)\nFrom (i) and (ii)\n(BA)T = ATBT." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7894798,"math_prob":0.9998666,"size":1745,"snap":"2021-43-2021-49","text_gpt3_token_len":678,"char_repetition_ratio":0.25789776,"word_repetition_ratio":0.36426914,"special_character_ratio":0.43610317,"punctuation_ratio":0.15919283,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9990807,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-21T23:41:43Z\",\"WARC-Record-ID\":\"<urn:uuid:e4983dde-0de0-4e5c-9146-f9d7ac26fd5f>\",\"Content-Length\":\"615951\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b6c82115-fdf6-4946-9259-f281891ce2e4>\",\"WARC-Concurrent-To\":\"<urn:uuid:199f7755-ed4f-4b46-b76b-6c9653689a27>\",\"WARC-IP-Address\":\"172.217.13.83\",\"WARC-Target-URI\":\"https://www.omtexclasses.com/2021/02/exercise-24-pages-59-60-balbharati.html\",\"WARC-Payload-Digest\":\"sha1:RUGNJ6G5BWWNH64QPBYB624X2JPXIYOV\",\"WARC-Block-Digest\":\"sha1:EFUN5XUUSMIF3H7VLUAU5VCI7QPJY2ME\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585449.31_warc_CC-MAIN-20211021230549-20211022020549-00027.warc.gz\"}"}
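Both verifications in the record above are instances of the reversal law for transposes, (AB)ᵀ = BᵀAᵀ. As a quick cross-check, a minimal stdlib-only Python sketch using the same matrices A and B from the exercise (the helper functions are my own, not from the page):

```python
# Check the reversal law (AB)^T = B^T A^T for the matrices in the worked example.

def transpose(m):
    """Transpose a matrix given as a list of rows."""
    return [list(col) for col in zip(*m)]

def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    bt = transpose(b)
    return [[sum(x * y for x, y in zip(row, col)) for col in bt] for row in a]

A = [[2, -1], [3, -2], [4, 1]]   # 3x2, as in the exercise
B = [[0, 3, -4], [2, -1, 1]]     # 2x3, as in the exercise

lhs = transpose(matmul(A, B))                  # (AB)^T
rhs = matmul(transpose(B), transpose(A))       # B^T A^T
print(lhs == rhs)  # → True
print(lhs)         # → [[-2, -4, 2], [7, 11, 11], [-9, -14, -15]]
```

The printed matrix matches result (i) of the worked solution.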
https://ijmc.kashanu.ac.ir/?_action=export&rf=summon&issue=860
[ "2020-08-12T08:38:23Z https://ijmc.kashanu.ac.ir/?_action=export&rf=summon&issue=860\n2011-12-01 10.22052\nIranian Journal of Mathematical Chemistry Iranian J. Math. Chem. 2228-6489 2228-6489 2011 2 2 A Survey on Omega Polynomial of Some Nano Structures M. Ghorbani Omega polynomial Sadhana polynomial Fullerene Nanotube 2011 12 01 1 65 https://ijmc.kashanu.ac.ir/article_5136_c65ae1dfbe42970a246d14f5019602b5.pdf\n2011-12-01 10.22052\nIranian Journal of Mathematical Chemistry Iranian J. Math. Chem. 2228-6489 2228-6489 2011 2 2 Remarks on Distance-Balanced Graphs M. TAVAKOLI H. YOUSEFI-AZARI Distance-balanced graphs are introduced as graphs in which every edge uv has the following property: the number of vertices closer to u than to v is equal to the number of vertices closer to v than to u. Basic properties of these graphs are obtained. In this paper, we study the conditions under which some graph operations produce a distance-balanced graph. Distance-balanced graphs Graph operation 2011 12 01 67 71 https://ijmc.kashanu.ac.ir/article_5176_4ff1f772bd82877d5ad994685fccba27.pdf\n2011-12-01 10.22052\nIranian Journal of Mathematical Chemistry Iranian J. Math. Chem. 2228-6489 2228-6489 2011 2 2 Computing the First and Third Zagreb Polynomials of Cartesian Product of Graphs A. ASTANEH-ASL GH. FATH-TABAR Let G be a graph. The first Zagreb polynomial M1(G, x) and the third Zagreb polynomial M3(G, x) of the graph G are defined as M1(G, x) = ∑_{uv∈E(G)} x^(d(u)+d(v)) and M3(G, x) = ∑_{uv∈E(G)} x^|d(u)−d(v)|. In this paper, we compute the first and third Zagreb polynomials of Cartesian product of two graphs and a type of dendrimers. Zagreb polynomial Zagreb index Graph 2011 12 01 73 78\n2011-12-01 10.22052\nIranian Journal of Mathematical Chemistry Iranian J. Math. Chem. 2228-6489 2228-6489 2011 2 2 Wiener Index of a New Type of Nanostar Dendrimer Z. SADRI IRANI A. KARBASIOUN Let G be a molecular graph. 
The Wiener index of G is defined as the summation of all distances between vertices of G. In this paper, an exact formula for the Wiener index of a new type of nanostar dendrimer is given. Nanostar dendrimer Molecular Graph Wiener index 2011 12 01 79 85 https://ijmc.kashanu.ac.ir/article_5215_7a81c85f695366a2e5f4ab1b29d769cf.pdf\n2011-12-01 10.22052\nIranian Journal of Mathematical Chemistry Iranian J. Math. Chem. 2228-6489 2228-6489 2011 2 2 PI, Szeged and Revised Szeged Indices of IPR Fullerenes A. MOTTAGHI Z. MEHRANIAN In this paper PI, Szeged and revised Szeged indices of an infinite family of IPR fullerenes with exactly 60+12n carbon atoms are computed. A GAP program is also presented that is useful for our calculations. IPR fullerene Szeged index Revised Szeged index PI index 2011 12 01 87 99 https://ijmc.kashanu.ac.ir/article_5216_10b5e3c03aaf7c3c4ca5df6cd7061d00.pdf\n2011-12-01 10.22052\nIranian Journal of Mathematical Chemistry Iranian J. Math. Chem. 2228-6489 2228-6489 2011 2 2 A Note on the First Geometric-Arithmetic Index of Hexagonal Systems and Phenylenes Z. YARAHMADI The first geometric-arithmetic index was introduced in the chemical theory as the summation of 2√(du·dv)/(du + dv) over all edges of the graph, where du stands for the degree of the vertex u. In this paper we give the expressions for computing the first geometric-arithmetic index of hexagonal systems and phenylenes and present a new method for describing hexagonal systems by corresponding a simple graph to each hexagonal system. Geometric-arithmetic index Hexagonal system Phenylenes 2011 12 01 101 108 https://ijmc.kashanu.ac.ir/article_5217_cf559f5adb428efd20c49b760ad4806e.pdf\n2011-12-01 10.22052\nIranian Journal of Mathematical Chemistry Iranian J. Math. Chem. 2228-6489 2228-6489 2011 2 2 Two Types of Geometric–Arithmetic Index of V–phenylenic Nanotube S. MORADI S. BABARAHIM M. GHORBANI The concept of geometric-arithmetic indices was introduced in the chemical graph theory. 
These indices are defined by the general formula GA(G) = ∑_{uv∈E(G)} 2√(Qu·Qv)/(Qu + Qv), where Qu is some quantity that in a unique manner can be associated with the vertex u of graph G. In this paper the exact formulas for two types of geometric-arithmetic index of V-phenylenic nanotube are given. GA Index V–phenylenic nanotube 2011 12 01 109 117 https://ijmc.kashanu.ac.ir/article_5218_e20ba02b0ff4677db54b056651f64a57.pdf" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7468474,"math_prob":0.547733,"size":4189,"snap":"2020-34-2020-40","text_gpt3_token_len":1334,"char_repetition_ratio":0.12090801,"word_repetition_ratio":0.12420382,"special_character_ratio":0.29744568,"punctuation_ratio":0.12702367,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9894308,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-12T04:09:46Z\",\"WARC-Record-ID\":\"<urn:uuid:654f4c62-dfb6-435f-bb72-4f80aac95565>\",\"Content-Length\":\"20792\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5e547baf-06dd-4235-87c5-f1a2f7596fed>\",\"WARC-Concurrent-To\":\"<urn:uuid:c59203c8-54b5-43ae-bac1-47715be1ae94>\",\"WARC-IP-Address\":\"185.113.115.90\",\"WARC-Target-URI\":\"https://ijmc.kashanu.ac.ir/?_action=export&rf=summon&issue=860\",\"WARC-Payload-Digest\":\"sha1:R2L43IRGJD4T7VDIFFLOMJ5ANBS7AYUB\",\"WARC-Block-Digest\":\"sha1:LT7EWTRABRZ4XVWHDEC5CZIOZI4A2ZTG\",\"WARC-Identified-Payload-Type\":\"application/xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439738864.9_warc_CC-MAIN-20200812024530-20200812054530-00509.warc.gz\"}"}
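The Wiener index appearing in the abstracts above is the sum of shortest-path distances over all vertex pairs. For intuition, a stdlib-only sketch that computes it by breadth-first search; the example path graph is my own illustration, not one of the dendrimer or nanotube structures studied in the issue:

```python
from collections import deque

def wiener_index(adj):
    """Sum of shortest-path distances over all unordered vertex pairs.

    adj maps each vertex to a list of its neighbours; the graph is
    assumed undirected, unweighted and connected.
    """
    total = 0
    for source in adj:
        # BFS from source yields shortest path lengths in an unweighted graph.
        dist = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
    return total // 2  # each unordered pair was counted twice

# Path graph P4: 0 - 1 - 2 - 3
path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(wiener_index(path4))  # → 10  (distances 1+2+3+1+2+1)
```

The closed-form expressions derived in the papers replace this brute-force sum for their specific molecular graph families.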
https://www.facenicobaggio.it/2021/05/21/math-is-usually-a-subject-that-may-be-generally-not-taught-within-the-united-states-of-america-inside-the-usa-most-students-are-told-to-find-out-fundamental-mathematical-capabilities-just-before-they/
[ "# Math has become one of the most important subjects for students to understand, since it teaches so many things.\n\nA student who cannot solve a problem based on a Math lesson will not be a good student. Math is an abstract science that connects real life with mathematics. Mathematicians use numbers, objects, real-world objects and abstract ideas to solve problems. In other words, mathematics is the language of numbers. Number theory is a part of advanced mathematics that is highly abstract. Number theory is used by most mathematicians, including those who specialize in applied mathematics.\n\nDiscrete mathematics deals with sets of objects that can be combined discretely, without knowing how they were assembled. For example, the set of all natural numbers is a countable set. Discrete mathematics is used to prove theorems in arithmetic, building on axioms and axiomatics. In algebra, number theory and algebraic equations form the basis for the complicated mathematical objects we know as algebra.\n\nAlgebraic equations are formulations that solve an equation system, such as a system of linear equations. An example of a well-known algebraic equation is the elliptic equation, which is at the root of many complicated problems in geometry. Other subjects in geometry, typically described by algebraic equations, include the plane and the segment, the polygon and the ball. Mathematics refers to all topics in mathematics used in the sciences and industries.\n\nSome of the most typical applications of applied mathematics in the USA include healthcare, aerospace, automobiles, construction, electronics and mathematics. 
The key objective of applied mathematics is to test the skills and understanding of an individual in several areas and to prove their capabilities in these areas. Many high schools in the United States require students to participate in various applied mathematics classes, which are commonly offered all year round. Math tutors in the United States are very important, as demand for mathematical tutors increases. A student who does not study Math can fail in almost everything, from college to the economy, because it is the basis of almost everything we do in our daily life. Math tuition can help students succeed at school and in life. It teaches you the valuable lessons that allow you to fulfill your dreams." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9549603,"math_prob":0.91841465,"size":3040,"snap":"2021-31-2021-39","text_gpt3_token_len":560,"char_repetition_ratio":0.1337286,"word_repetition_ratio":0.00877193,"special_character_ratio":0.175,"punctuation_ratio":0.095684804,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99232394,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-22T01:50:31Z\",\"WARC-Record-ID\":\"<urn:uuid:e328ff2b-0454-49c6-befe-c27988646fb9>\",\"Content-Length\":\"162729\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f96a6851-15a4-41c6-acd4-e896bf01ac0c>\",\"WARC-Concurrent-To\":\"<urn:uuid:d9ec39e7-e0a1-4476-9d7a-1a824bac26fb>\",\"WARC-IP-Address\":\"162.55.91.45\",\"WARC-Target-URI\":\"https://www.facenicobaggio.it/2021/05/21/math-is-usually-a-subject-that-may-be-generally-not-taught-within-the-united-states-of-america-inside-the-usa-most-students-are-told-to-find-out-fundamental-mathematical-capabilities-just-before-they/\",\"WARC-Payload-Digest\":\"sha1:I5C2GGE3ERSKSKSPOELFEILEMCISFZIX\",\"WARC-Block-Digest\":\"sha1:FOQXKBI7SPYVDCB7HNRIY3VGX47IFJNC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057303.94_warc_CC-MAIN-20210922011746-20210922041746-00161.warc.gz\"}"}
https://www.answers.com/Q/What_number_1_less_than_1000
[ "Math and Arithmetic\n\n# What number is 1 less than 1000?", null, "1000 - 1 = 999\n\n## Related Questions", null, "The largest prime number less than 1000 is 997. The largest known prime number (with no upper limit) is currently 2^57,885,161 - 1", null, "", null, "It is infinite because for example -1000 is less than -1", null, "", null, "1944 M = 1000 CM = 900 (100 less than 1000) XL = 40 (10 less than 50) IV = 4 (1 less than 5)", null, "Obviously less. 1000 milliliters make 1 liter so 6 milliliters will be 6/1000 of a liter which is less than 1 liter.", null, "There are 31 numbers less than 1,000 which have an odd number of factors. They are: 1,4,9,16,25,36,49,64,81,100,121,144,169,196,225,256,289,324,361,400,441,484,529,576,625,676,729,784,841,900,961.", null, "The reciprocal is when the number is between 0 and 1 or less than 1", null, "", null, "A mixed number has a part that's greater than 1. As it's less than 1, it's a fraction; 4/125 is its simplest form.", null, "", null, "1st odd number . . . 1 (1 less than double 1)\n2nd odd number . . . 3 (1 less than double 2)\n3rd odd number . . . 5 (1 less than double 3)\n4th odd number . . . 7 (1 less than double 4)\n5th odd number . . . 9 (1 less than double 5)\nAre you seeing a pattern yet?\n6th odd number . . . 11 (1 less than double 6)\n7th odd number . . . 13 (1 less than double 7)\n...\nQth odd number . . . (1 less than double Q)", null, "", null, "1 kilogram is 1000 grams 1 gram is 1000 milligrams 1 milligram is 1000 micrograms 1 microgram is 1000 nanograms", null, "It is if the number is more than ' 1 '. 
If the number is less than ' 1 ', then it's smaller than its own square root.", null, "", null, "1 is more (greater). .001 is 1000 times less than 1. .001 is 1/1000.", null, "", null, "1/(any number greater than 1) is a fraction less than 1.", null, "", null, "You cannot, because it is less than 1. It is, however, 5/1000, or 1/200 in lowest form, or 0.5%", null, "M = 1000, CM (100 less than 1000) = 900, XC (10 less than 100) = 90 and IX (1 less than 10) = 9. So the complete numeral MCMXCIX represents 1999.", null, "A millimeter is less than a meter. There are 1000 millimeters in 1 meter.", null, "Copyright © 2020 Multiply Media, LLC. All Rights Reserved. The material on this site can not be reproduced, distributed, transmitted, cached or otherwise used, except with prior written permission of Multiply." ]
[ null, "https://img.answers.com/answ/image/upload/q_auto,f_auto,dpr_2.0,w_40,h_40/v1573840443/avatars/default.png", null, "https://img.answers.com/answ/image/upload/q_auto,f_auto,w_20,h_20/Icons/IAB_fullsize_icons/IAB5.png", null, "https://img.answers.com/answ/image/upload/q_auto,f_auto,w_20,h_20/Icons/IAB_fullsize_icons/IAB5.png", null, "https://img.answers.com/answ/image/upload/q_auto,f_auto,w_20,h_20/Icons/IAB_fullsize_icons/IAB5.png", null, "https://img.answers.com/answ/image/upload/q_auto,f_auto,w_20,h_20/Icons/IAB_fullsize_icons/IAB5.png", null, "https://img.answers.com/answ/image/upload/q_auto,f_auto,w_20,h_20/Icons/IAB_fullsize_icons/IAB5.png", null, "https://img.answers.com/answ/image/upload/q_auto,f_auto,w_20,h_20/Icons/IAB_fullsize_icons/IAB5.png", null, "https://img.answers.com/answ/image/upload/q_auto,f_auto,w_20,h_20/Icons/IAB_fullsize_icons/IAB5.png", null, "https://img.answers.com/answ/image/upload/q_auto,f_auto,w_20,h_20/Icons/IAB_fullsize_icons/IAB5.png", null, "https://img.answers.com/answ/image/upload/q_auto,f_auto,w_20,h_20/Icons/IAB_fullsize_icons/IAB5.png", null, "https://img.answers.com/answ/image/upload/q_auto,f_auto,w_20,h_20/Icons/IAB_fullsize_icons/IAB5.png", null, "https://img.answers.com/answ/image/upload/q_auto,f_auto,w_20,h_20/Icons/IAB_fullsize_icons/IAB5.png", null, "https://img.answers.com/answ/image/upload/q_auto,f_auto,w_20,h_20/Icons/IAB_fullsize_icons/IAB5.png", null, "https://img.answers.com/answ/image/upload/q_auto,f_auto,w_20,h_20/Icons/IAB_fullsize_icons/IAB5.png", null, "https://img.answers.com/answ/image/upload/q_auto,f_auto,w_20,h_20/Icons/IAB_fullsize_icons/IAB5.png", null, "https://img.answers.com/answ/image/upload/q_auto,f_auto,w_20,h_20/Icons/IAB_fullsize_icons/IAB5.png", null, "https://img.answers.com/answ/image/upload/q_auto,f_auto,w_20,h_20/Icons/IAB_fullsize_icons/IAB5.png", null, "https://img.answers.com/answ/image/upload/q_auto,f_auto,w_20,h_20/Icons/IAB_fullsize_icons/IAB19.png", null, 
"https://img.answers.com/answ/image/upload/q_auto,f_auto,w_20,h_20/Icons/IAB_fullsize_icons/IAB5.png", null, "https://img.answers.com/answ/image/upload/q_auto,f_auto,w_20,h_20/Icons/IAB_fullsize_icons/IAB5.png", null, "https://img.answers.com/answ/image/upload/q_auto,f_auto,w_20,h_20/Icons/IAB_fullsize_icons/IAB5.png", null, "https://img.answers.com/answ/image/upload/q_auto,f_auto,w_20,h_20/Icons/IAB_fullsize_icons/IAB5.png", null, "https://img.answers.com/answ/image/upload/q_auto,f_auto,w_20,h_20/Icons/IAB_fullsize_icons/IAB5.png", null, "https://img.answers.com/answ/image/upload/q_auto,f_auto,w_20,h_20/Icons/IAB_fullsize_icons/IAB5.png", null, "https://img.answers.com/answ/image/upload/q_auto,f_auto,w_20,h_20/Icons/IAB_fullsize_icons/IAB15.png", null, "https://img.answers.com/answ/image/upload/q_auto,f_auto,w_20,h_20/Icons/IAB_fullsize_icons/IAB5.png", null, "https://img.answers.com/answ/image/upload/q_auto,f_auto,dpr_2.0/v1589555119/logos/Answers_throwback_logo.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9082096,"math_prob":0.99878234,"size":4326,"snap":"2020-45-2020-50","text_gpt3_token_len":1360,"char_repetition_ratio":0.2623785,"word_repetition_ratio":0.24436536,"special_character_ratio":0.33957466,"punctuation_ratio":0.15768462,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99766403,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-21T13:40:28Z\",\"WARC-Record-ID\":\"<urn:uuid:6f462d73-02c9-483f-8176-e9f66f97949e>\",\"Content-Length\":\"215289\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a4806025-4b7a-4581-9761-054a3afe4cfe>\",\"WARC-Concurrent-To\":\"<urn:uuid:b8c20826-8d6e-4357-b5c0-7a4f989f6139>\",\"WARC-IP-Address\":\"151.101.200.203\",\"WARC-Target-URI\":\"https://www.answers.com/Q/What_number_1_less_than_1000\",\"WARC-Payload-Digest\":\"sha1:OQPJKNOKMG5T5RLTIBRGIVVMJHZHAR5Y\",\"WARC-Block-Digest\":\"sha1:AQKJCN3OAYQWTLSG32CB6JN6MADIYQIC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107876500.43_warc_CC-MAIN-20201021122208-20201021152208-00508.warc.gz\"}"}
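Two of the facts quoted in the answers above are easy to check directly: the Q-th odd number is "1 less than double Q" (i.e. 2Q − 1), and the largest prime below 1000 is 997. A small stdlib-only sketch:

```python
# (1) The Q-th odd number is "1 less than double Q", i.e. 2*Q - 1.
# (2) The largest prime number less than 1000 is 997.

def is_prime(n):
    """Trial division; adequate for small n."""
    if n < 2:
        return False
    return all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

odds = [n for n in range(1, 20) if n % 2 == 1]
for q, value in enumerate(odds, start=1):
    assert value == 2 * q - 1   # matches the "1 less than double Q" pattern

largest_prime_below_1000 = max(n for n in range(2, 1000) if is_prime(n))
print(largest_prime_below_1000)  # → 997
```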
https://answers.everydaycalculation.com/simplify-fraction/26-3240
[ "# Answers\n\nSolutions by everydaycalculation.com\n\n## Reduce 26/3240 to lowest terms\n\nThe simplest form of 26/3240 is 13/1620.\n\n#### Steps to simplifying fractions\n\n1. Find the GCD (or HCF) of the numerator and denominator\nGCD of 26 and 3240 is 2\n2. Divide both the numerator and denominator by the GCD\n(26 ÷ 2)/(3240 ÷ 2)\n3. Reduced fraction: 13/1620\nTherefore, 26/3240 simplified to lowest terms is 13/1620.", null, "© everydaycalculation.com" ]
[ null, "https://answers.everydaycalculation.com/mathstep-app-icon.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.75787735,"math_prob":0.7602197,"size":351,"snap":"2021-21-2021-25","text_gpt3_token_len":113,"char_repetition_ratio":0.14409222,"word_repetition_ratio":0.0,"special_character_ratio":0.4131054,"punctuation_ratio":0.092307694,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9569304,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-13T07:24:46Z\",\"WARC-Record-ID\":\"<urn:uuid:3e74c9e9-80e6-4d98-94cc-4507c355a47c>\",\"Content-Length\":\"6449\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:276465d5-d012-4a29-8a5f-68144c1f3047>\",\"WARC-Concurrent-To\":\"<urn:uuid:2ae08fc0-1bac-468b-b414-ca1dfc6bed73>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/simplify-fraction/26-3240\",\"WARC-Payload-Digest\":\"sha1:PKRZ3M7LRPXSSPS6JYU7YPXE3KVLQOC2\",\"WARC-Block-Digest\":\"sha1:5B6WWDJPHNHXFGGTBV5HVX2ON4LP6ZG6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487607143.30_warc_CC-MAIN-20210613071347-20210613101347-00318.warc.gz\"}"}
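The two reduction steps above (find the GCD, then divide it out of numerator and denominator) map directly onto Python's standard library; a minimal sketch reproducing the 26/3240 example:

```python
from math import gcd

def reduce_fraction(num, den):
    """Reduce num/den to lowest terms by dividing out the GCD."""
    g = gcd(num, den)
    return num // g, den // g

print(gcd(26, 3240))              # → 2
print(reduce_fraction(26, 3240))  # → (13, 1620)
```

`fractions.Fraction(26, 3240)` performs the same reduction automatically.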
https://www.studyadda.com/sample-papers/neet-sample-test-paper-16_q14/205/276053
[ "• # question_answer When a plane electromagnetic wave travels in vacuum, the average electric energy density is given by (here ${{E}_{0}}$ is the amplitude of the electric field of the wave) A) $\\frac{1}{4}{{\\varepsilon }_{0}}E_{0}^{2}$ B) $\\frac{1}{2}{{\\varepsilon }_{0}}E_{0}^{2}$ C) $2{{\\varepsilon }_{0}}E_{0}^{2}$ D) $4{{\\varepsilon }_{0}}E_{0}^{2}$", null, "" ]
[ null, "https://www.studyadda.com/assets/frontend/images/msg-gif.GIF", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.565289,"math_prob":0.99985456,"size":303,"snap":"2020-34-2020-40","text_gpt3_token_len":129,"char_repetition_ratio":0.18729097,"word_repetition_ratio":0.0,"special_character_ratio":0.48184818,"punctuation_ratio":0.0625,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99962336,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-19T15:10:24Z\",\"WARC-Record-ID\":\"<urn:uuid:58c1be70-51e1-450e-a8a6-2478d7adda5e>\",\"Content-Length\":\"103265\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ffcce0fe-c578-465f-ba09-4b3a1d0a5993>\",\"WARC-Concurrent-To\":\"<urn:uuid:21e2b234-4f21-471f-853d-ef36465de12d>\",\"WARC-IP-Address\":\"151.106.35.148\",\"WARC-Target-URI\":\"https://www.studyadda.com/sample-papers/neet-sample-test-paper-16_q14/205/276053\",\"WARC-Payload-Digest\":\"sha1:LHYZYM2ETSXO2LHM4JLNJ2VKIVBNRWRK\",\"WARC-Block-Digest\":\"sha1:TC2RBBH64HAP7IQ36UPNRXVL5MADKSQ6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400192778.51_warc_CC-MAIN-20200919142021-20200919172021-00201.warc.gz\"}"}
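The correct option is A: for E(t) = E0 sin(ωt), the time average of sin² over a period is 1/2, so ⟨½ε0E²⟩ = ¼ε0E0². A quick numerical check of that average (the field amplitude chosen here is arbitrary):

```python
import math

eps0 = 8.8541878128e-12   # vacuum permittivity in F/m (CODATA 2018 value)
E0 = 100.0                # arbitrary field amplitude, V/m

# Average the instantaneous electric energy density (1/2)*eps0*E(t)^2
# over one full period of E(t) = E0*sin(...), sampled uniformly.
N = 100_000
avg = sum(0.5 * eps0 * (E0 * math.sin(2 * math.pi * k / N)) ** 2
          for k in range(N)) / N

print(math.isclose(avg, 0.25 * eps0 * E0 ** 2, rel_tol=1e-9))  # → True
```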
https://earth-planets-space.springeropen.com/articles/10.1186/s40623-016-0486-1
[ "# Recent geomagnetic secular variation from Swarm and ground observatories as estimated in the CHAOS-6 geomagnetic field model\n\n## Abstract\n\nWe use more than 2 years of magnetic data from the Swarm mission, and monthly means from 160 ground observatories as available in March 2016, to update the CHAOS time-dependent geomagnetic field model. The new model, CHAOS-6, provides information on time variations of the core-generated part of the Earth’s magnetic field between 1999.0 and 2016.5. We present details of the secular variation (SV) and secular acceleration (SA) from CHAOS-6 at Earth’s surface and downward continued to the core surface. At Earth’s surface, we find evidence for positive acceleration of the field intensity in 2015 over a broad area around longitude 90°E that is also seen at ground observatories such as Novosibirsk. At the core surface, we are able to map the SV up to at least degree 16. The radial field SA at the core surface in 2015 is found to be largest at low latitudes under the India–South-East Asia region, under the region of northern South America, and at high northern latitudes under Alaska and Siberia. Surprisingly, there is also evidence for significant SA in the central Pacific region, for example near Hawaii where radial field SA is observed on either side of a jerk in 2014. On the other hand, little SV or SA has occurred over the past 17 years in the southern polar region. Inverting for a quasi-geostrophic core flow that accounts for this SV, we obtain a prominent planetary-scale, anti-cyclonic, gyre centred on the Atlantic hemisphere. We also find oscillations of non-axisymmetric, azimuthal, jets at low latitudes, for example close to 40°W, that may be responsible for localized SA oscillations. 
In addition to scalar data from Ørsted, CHAMP, SAC-C and Swarm, and vector data from Ørsted, CHAMP and Swarm, CHAOS-6 benefits from the inclusion of along-track differences of scalar and vector field data from both CHAMP and the three Swarm satellites, as well as east–west differences between the lower pair of Swarm satellites, Alpha and Charlie. Moreover, ground observatory SV estimates are fit to a Huber-weighted rms level of 3.1 nT/year for the eastward components and 3.8 and 3.7 nT/year for the vertical and southward components. We also present an update of the CHAOS high-degree lithospheric field, making use of along-track differences of CHAMP scalar and vector field data to produce a new static field model that agrees well with the MF7 field model out to degree 110.\n\n## Introduction\n\nThe Earth’s intrinsic magnetic field is gradually changing as a result of motional induction and Ohmic dissipation processes taking place within its metallic core. This phenomenon, called geomagnetic secular variation (SV), has been well documented, but poorly understood, for centuries (e.g. Gellibrand 1635; Hansteen 1819). In 1980, MAGSAT provided the first truly global set of vector field observations. Combined with novel regularized inversion techniques, this enabled the structure of field at the core surface to be estimated for the first time with some confidence (Langel et al. 1980; Shure et al. 1982). Unfortunately, the MAGSAT mission lasted less than a year, so inferences concerning SV were limited. It has only been in the past decade, thanks to the Ørsted and CHAMP missions, that it has become possible to map the large-scale patterns of the SV directly at the core surface (Lesur et al. 2008; Olsen et al. 2006). A picture has emerged of gradual (decadal) variations in SV punctuated by localized pulses of secular acceleration (SA) on shorter interannual timescales (Chulliat et al. 2010; Olsen et al. 2014). 
SA pulses provide an unexpected new window into the dynamics of the core, and we are still in the early stages of their study. We presently lack detailed knowledge of their morphology and their time dependence, and our understanding is severely limited by the relatively short time window for which there has been global monitoring from space.\n\nA new opportunity for studying SV is today provided by the Swarm mission. Launched on 22 November 2013, it consists of three dedicated low-Earth-orbit satellites, each simultaneously measuring the near-Earth magnetic field. After more than 2 years of operation, Swarm data are starting to provide valuable new constraints on the time-varying SV. In this article, we present investigations of SV as observed by the Swarm satellites, and at ground observatories, in 2015 as part of a new time-dependent geomagnetic field model, called CHAOS-6, that also includes data from previous satellite missions (Ørsted, CHAMP, and SAC-C).\n\nCHAOS-6 is the latest generation of the CHAOS series of global geomagnetic field models developed by Olsen et al. (2006, 2009, 2010, 2014). Ten months of Swarm data (up to September 2014) were included in the previous version, CHAOS-5 (Finlay et al. 2015), a model that was primarily designed for producing candidate field models for IGRF-12. With more than 2 years of Swarm data now available, given there have been advances in the use of spatial differences (gradients) in field modelling (Kotsiaros et al. 2014; Olsen et al. 2015), and because a geomagnetic jerk happened in 2014 (Torta et al. 2015), there is now a clear need to update the CHAOS model series and particularly its time-dependent part, as CHAOS-6.\n\nThe CHAOS model series aims to estimate the internal geomagnetic field at the Earth’s surface with high resolution in space and time. It is derived primarily from magnetic satellite data, although ground-based activity indices and observatory monthly means are also used. 
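The abstract above quotes misfits as a "Huber-weighted rms" (e.g. 3.1 nT/year for observatory SV), a statistic that down-weights outlying residuals so they do not dominate the fit measure. The exact weighting scheme used in CHAOS-6 is not specified in this excerpt; the sketch below uses one common convention, weight = min(1, c·σ/|r|) with c = 1.5:

```python
import math

def huber_weighted_rms(residuals, sigma=1.0, c=1.5):
    """Root-mean-square of residuals under Huber weights.

    Residuals beyond c*sigma receive weight c*sigma/|r| < 1, so a single
    outlier no longer dominates the misfit statistic.
    """
    weights = [min(1.0, c * sigma / abs(r)) if r != 0 else 1.0
               for r in residuals]
    num = sum(w * r * r for w, r in zip(weights, residuals))
    den = sum(weights)
    return math.sqrt(num / den)

r = [0.5, -1.0, 0.8, 12.0]  # last value is an outlier
print(round(huber_weighted_rms(r), 2))                          # → 2.52
print(round(math.sqrt(sum(x * x for x in r) / len(r)), 2))      # → 6.04 (plain rms)
```

With well-behaved residuals the two statistics coincide; they diverge only when outliers are present.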
It includes a parameterization of the quiet-time, near-Earth magnetospheric field, but there is no explicit representation of the ionospheric field or fields due to magnetosphere–ionosphere coupling currents. A limitation of the CHAOS model series is that its validity is restricted to after 1999, when the Ørsted satellite was launched.\n\nOther models with continuous time dependence are available for studying core field variations on longer timescales, although these typically have lower resolution in both space and time. The gufm1 model (Jackson et al. 2000) is the definitive source for the historical field from 1590 to 1990. A more recent alternative spanning 1840 to 2010 is the COV-OBS model (Gillet et al. 2013) [see also Gillet et al. (2015a) for a version updated to 2015]. These models contain only a small amount of satellite data and are predominantly constrained by observatory data during the twentieth and twenty-first centuries. A rather different approach to modelling the recent field is provided by the comprehensive model series (Sabaka et al. 2004, 2015). The latest versions CM4 and CM5 cover 1960–2002 and 2000–2013, respectively, and involve simultaneous estimation of fields from a large number of sources including quiet-time ionospheric currents and magnetosphere–ionosphere coupling currents. This requires a much larger number of free parameters than is the case in the CHAOS models. A series of models of similar complexity to the CHAOS model, but derived only from CHAMP and ground observatory data, is the GRIMM series of models (Lesur et al. 2008, 2010, 2015). The latter study is particularly interesting because it proposed field models with time dependence controlled by co-estimated core flows.\n\nThe main purpose of this article is to present CHAOS-6, providing a reference for users regarding its construction.
In the “Data” section, we detail the input data, while the model parameterization and estimation scheme are described in the “Field modelling” section. Model results and related discussion are presented in the “Results and discussion” section, including the fit to Swarm and CHAMP satellite data as well as ground observatory SV in the sections “Fit to satellite data” and “Fit to secular variation estimates from ground observatories”, respectively. The field and SV at Earth’s surface are described in the sections “Power spectra of field, SV and SA at Earth’s surface” and “Time changes in magnetic intensity at Earth’s surface”. The lithospheric field part of CHAOS-6 is described in the section “CHAOS-6h and the high-degree lithospheric field”. The field, SV and SA at the core surface are described in the section “Secular variation and acceleration at Earth's core surface”. In the section “An interpretation based on quasi-geostrophic core flows”, we present for epoch 2015.0 a quasi-geostrophic flow derived from the CHAOS-6 time-dependent field and SV. A summary and perspectives are offered in the “Conclusions” section.\n\n## Data\n\nThe database of magnetic observations used to construct CHAOS-6 is essentially an extension of that used by Finlay et al. (2015) to construct the CHAOS-5 model in September 2014. The ground observatory vector field data have been updated as available in March 2016 (see the section “Ground observatory data”), while vector and scalar field data from the Swarm constellation up to 30 March 2016 have been included. The data selection criteria for satellite data at high latitudes have also been slightly altered compared with previous versions of the CHAOS model; further details are given in the “Satellite data” section.\n\nA major improvement compared with CHAOS-5 is the inclusion of field spatial differences (i.e.
approximate gradients) as data, along-track for both CHAMP and Swarm Alpha, Bravo and Charlie, and also east–west between Swarm’s lower satellite pair Alpha and Charlie. Along-track field differences approximate north–south gradients in non-polar regions (Kotsiaros et al. 2014; Sabaka et al. 2015), while the east–west differences provide information on the longitudinal gradient of the field (Olsen et al. 2015). We constructed along-track field differences from pairs of data points on the same satellite track, separated by 15 s. East–west difference data were found by searching Swarm L1b 1 Hz magnetic data for a datum from Swarm Charlie, with latitude closest to a selected Swarm Alpha datum, within a maximum time difference of 50 s. This procedure typically resulted in time shifts of about 10 s between the contributing Alpha and Charlie data.\n\n### Ground observatory data\n\nAnnual differences of revised observatory monthly means (Olsen et al. 2014) between January 1997 and December 2015 provide crucial constraints on the SV at fixed points on the Earth’s surface. Revised monthly means were derived from the hourly mean values of 160 observatories (for locations and IAGA codes see Fig. 1) that have been quality controlled, checking for trends, spikes and other errors (Macmillan and Olsen 2013). Quasi-definitive data (Peltier and Chulliat 2010; Clarke et al. 2013) were used when possible, for times when definitive data were not yet available; these quasi-definitive data are vital for determining up-to-date secular variation and for comparisons with the latest data from the Swarm mission. Starting from hourly mean values, estimates of the ionospheric (plus induced) field from the CM4 model (Sabaka et al. 2004) and the large-scale magnetospheric (plus induced) field from a preliminary CHAOS-type model, CHAOS-6pre, were subtracted. Then a Huber-weighted monthly mean, including all local times and all disturbance levels, was computed.
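For illustration, the robust monthly mean and the subsequent annual differencing might be sketched as below. This is our own minimal sketch, not the model code: the tuning constant c = 1.5 and the convergence settings are assumptions, and real revised monthly means are formed per field component from corrected hourly values.

```python
import numpy as np

def huber_mean(x, c=1.5, n_iter=20):
    """Iteratively reweighted (Huber) mean: values further than c robust
    standard deviations from the current estimate are downweighted as
    c*sigma/|residual|, limiting the influence of disturbed values."""
    x = np.asarray(x, dtype=float)
    mu = np.median(x)  # robust starting point
    for _ in range(n_iter):
        mad = np.median(np.abs(x - mu))
        sigma = 1.4826 * mad if mad > 0 else 1.0  # robust scale estimate
        r = np.maximum(np.abs(x - mu), 1e-12)
        w = np.minimum(1.0, c * sigma / r)        # Huber weights
        mu = np.sum(w * x) / np.sum(w)
    return mu

def annual_differences(monthly_means):
    """SV estimates in nT/year: monthly mean 12 months later minus the
    current monthly mean (a sketch; input is a 1-D series of means)."""
    m = np.asarray(monthly_means, dtype=float)
    return m[12:] - m[:-12]
```

Annual differencing in this way removes most of the annual periodicity left in the monthly means, which is one reason it is the standard way of forming observatory SV estimates.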
Taking annual differences, this procedure resulted in 23,466 vector field triples of SV estimates.\n\n### Satellite data\n\nThe basic Ørsted, CHAMP and SAC-C dataset, sampled at a rate of 1 datum per 60 s, is the same as that employed in the earlier CHAOS-4 (Olsen et al. 2014) and CHAOS-5 (Finlay et al. 2015) models. One difference compared with previous models is that CHAMP vector data were used only when attitude information from both star cameras was available.\n\nRegarding the Swarm data, we used the Swarm Level-1b data product Mag-L, taking the latest available baseline 0408/09 data files in March 2016, for more than 28 months from 26 November 2013 to 30 March 2016. During this time, Swarm Alpha and Charlie descended from 514 to 445 km altitude, and Swarm Bravo, after being pushed up to 531 km altitude, has descended to 503 km. The local time of the ascending node of the three Swarm satellites has passed through more than two-and-a-half 24-hour cycles, and Swarm Bravo is now separated from Swarm Alpha and Charlie by about 3 h of local time. The nominal 1 Hz data were sub-sampled at 60-s intervals unless no vector field magnetometer (VFM) or star tracker (STR) data were available. In addition, we rejected known disturbed days (for example when satellite manoeuvres occurred) and excluded gross outliers for which the vector field components were more than 500 nT (and the scalar field more than 100 nT) from the predictions of a preliminary field model, CHAOS-6pre. In contrast to the case for CHAOS-5, no rescaling of Swarm vector data was necessary to ensure compatibility with the scalar data (Lesur et al. 2015), since the L1b baseline 0408/09 data calibration includes a co-estimated sun-driven disturbance that reduces rms scalar differences between the ASM and VFM measurements to under 200 pT (Tøffner-Clausen et al. 2016).
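The east–west pairing of Swarm Alpha and Charlie data described earlier can be sketched as follows. This is an illustrative sketch only, assuming simple 1-D arrays of times (in seconds) and latitudes; actual Level-1b handling is more involved.

```python
import numpy as np

def pair_east_west(t_alpha, lat_alpha, t_charlie, lat_charlie, max_dt=50.0):
    """For each selected Swarm Alpha datum, find the Swarm Charlie datum
    closest in latitude within +/- max_dt seconds, returning index pairs
    (a sketch of the matching procedure described in the text)."""
    pairs = []
    for i, (ta, la) in enumerate(zip(t_alpha, lat_alpha)):
        # candidate Charlie samples inside the time window
        idx = np.flatnonzero(np.abs(t_charlie - ta) <= max_dt)
        if idx.size == 0:
            continue  # no Charlie datum close enough in time
        j = idx[np.argmin(np.abs(lat_charlie[idx] - la))]
        pairs.append((i, int(j)))
    return pairs
```

Because the two satellites fly side by side, minimizing the latitude difference (rather than the time difference) keeps the pairing geometrically east–west, which is why the typical time shift between paired data is about 10 s rather than zero.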
Since November 2014, calibration of the vector magnetometer on Swarm Charlie has been carried out using scalar field values mapped over from Swarm Alpha; this is necessary due to total failure of the absolute scalar magnetometers on Swarm Charlie.\n\nFollowing experience with the Swarm Initial Field Model (SIFM, see Olsen et al. 2015) and in preliminary experiments for CHAOS-6, we use different selection criteria for different classes of data:\n\nFor vector field data, we adopt the same quiet-time, dark, selection criteria that were used for earlier versions of the CHAOS model series, namely (1) sun at least $$10^{\\circ }$$ below the horizon, (2) the strength of the field due to the magnetospheric ring current, estimated using the RC index (Olsen et al. 2014), was required to change by at most 2 nT/h, and (3) it was required that the geomagnetic activity index $${\\text{Kp}}\\le 2^0$$ for quasi-dipole (Richmond 1995) latitudes equatorward of $$\\pm 55^{\\circ}$$.\n\nAs for earlier versions of the CHAOS model series, only scalar intensity data were used poleward of $$\\pm 55^{\\circ }$$ quasi-dipole latitude, and these were selected only when the merging electric field at the magnetopause (averaged over the preceding hour) $$E_{\\text {m}} \\le 0.8$$ mV/m. In CHAOS-6, $$E_{\\text {m}}$$ was calculated using 1-min values of the interplanetary magnetic field (IMF) and solar wind speed from OMNIWeb (http://omniweb.gsfc.nasa.gov), in contrast to earlier versions of the CHAOS model where 5-min mean values were used. In addition, a selection criterion that IMF $$B_Z >0$$ was introduced in CHAOS-6, motivated by a desire to avoid as far as possible disturbances related to the sub-storm auroral electrojet that are especially prominent when IMF $$B_Z <0$$. Scalar data were also used at lower latitudes when attitude data were not available.\n\nIn CHAOS-6, we also make use of along-track and east–west differences of scalar data.
As for the Swarm Initial Field model, SIFM (Olsen et al. 2015), scalar field differences were used at all latitudes and for all local times (including sunlit conditions, but excluding the dayside equatorial region, i.e. quasi-dipole latitudes within $$\\pm 10^{\\circ }$$), with slightly relaxed quiet-time criteria (RC index required to change by at most 3 nT/h and $${\\text{Kp}}\\le 3^0$$). The same selection criteria as for scalar data regarding $$E_{\\text {m}}$$ and IMF $$B_Z$$ were applied to scalar field differences at polar latitudes. Scalar data have the advantage of not being directly perturbed by the field-aligned currents that are a major contribution to the unmodelled external fields, particularly at polar latitudes. Olsen et al. (2015) found that including spatial differences of scalar data helped to improve the quality of both lithospheric field and secular variation models.\n\nAlong-track differences of vector data from the single satellite mission CHAMP and both along-track and east–west vector differences from the Swarm mission were also employed. For the vector field differences, we used the same selection criteria as for the vector data itself, i.e. only data from dark (sun at least $$10^{\\circ }$$ below the horizon), non-polar (equatorward of $$\\pm 55^{\\circ}$$ quasi-dipole latitude) regions were selected, with the RC index changing by at most 2 nT/h and $${\\text{Kp}}\\le 2^0$$.\n\nFor the low-degree part of CHAOS-6, called CHAOS-6l, 3 $$\\times$$ 920,871 vector data, 942,303 scalar data, 1,793,294 along-track scalar differences, 424,003 east–west scalar differences, 3 $$\\times$$ 403,382 along-track vector differences and 3 $$\\times$$ 92,842 east–west vector differences were used. The reason for the much larger number of scalar differences, compared with the number of scalar data, is that scalar differences were included for all local times (not just dark regions) and that their quiet-time selection criteria were less strict.
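As a concrete illustration, the quiet-time, dark-side criteria for vector data could be encoded as a simple boolean filter. The array names and the idea of pre-computed per-datum quantities are our assumptions; the thresholds are those quoted in the text.

```python
import numpy as np

def select_vector_data(sun_elev_deg, d_rc_dt, kp, qd_lat_deg):
    """Boolean mask implementing the quiet-time, dark-side selection
    criteria for vector data (a sketch; inputs are per-datum arrays of
    solar elevation, RC index change rate, Kp and quasi-dipole latitude)."""
    dark = sun_elev_deg <= -10.0           # sun at least 10 deg below horizon
    quiet_rc = np.abs(d_rc_dt) <= 2.0      # RC changes by at most 2 nT/h
    quiet_kp = kp <= 2.0                   # Kp <= 2o on the standard scale
    non_polar = np.abs(qd_lat_deg) <= 55.0 # equatorward of +/-55 deg QD lat
    return dark & quiet_rc & quiet_kp & non_polar
```

The relaxed criteria for scalar differences would simply use thresholds of 3 nT/h and Kp ≤ 3⁰ in the same pattern, without the darkness requirement.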
As in previous versions of the CHAOS model, all satellite data were also weighted proportional to $$\\sin \\theta$$, where $$\\theta$$ is geographic co-latitude, in order to simulate an equal-area distribution.\n\nAlthough iterative reweighting of the data is performed during the model estimation (to implement a robust measure of misfit based on a Huber distribution of errors, see the section “Field modelling”), we also employed an a priori error budget to account for the differences between the satellites. Regarding the scalar field, we assumed an a priori error estimate of 2.5 nT for Ørsted, CHAMP and SAC-C and 2.2 nT for Swarm. An isotropic pointing error estimate of 5 arc seconds was assumed for Swarm, 10 arc seconds for CHAMP (when both star cameras are available), and for Ørsted an anisotropic pointing error of 10 arc seconds together with 40 arc seconds after (60 arc seconds before) 22 January 2000. Note that these error estimates include the expected impact of unmodelled fields, which often dominate over instrumental errors.\n\n## Field modelling\n\n### Model parameterization\n\nThe model parameterization for CHAOS-6 follows closely that of CHAOS-5 and CHAOS-4. Since the focus of this article is the time-dependent internal field, we explicitly describe only this part of the model. See Olsen et al. (2014) for a more detailed account of the CHAOS field modelling scheme, including the external model.
The time-dependent internal field $$\\mathbf {B}^\\mathrm {int}(t)= - \\nabla V^\\mathrm {int}(t)$$ is represented in terms of the scalar potential\n\n$$V^\\mathrm {int} =a \\sum _{n=1}^{20}\\sum _{m=0}^{n}\\left[ g_{n}^{m}(t)\\cos m\\phi +h_{n}^{m}(t)\\sin m\\phi \\right] \\left( \\frac{a}{r}\\right) ^{n+1}P_{n}^{m}\\left( \\cos \\theta \\right)$$\n(1)\n\nwhere $$a=6371.2$$ km is a reference radius, $$\\left( r,\\theta ,\\phi \\right)$$ are geographic coordinates and $$P_{n}^{m}\\left( \\cos \\theta \\right)$$ are the Schmidt semi-normalized associated Legendre functions of degree n and order m. Note that we follow the usual geomagnetic convention and refer to $$\\mathbf {B}$$ as the magnetic field, though it is strictly the magnetic flux density. In vacuum, it is related to the magnetic field $$\\mathbf {H}$$ by $$\\mathbf {B}=\\mu _0 \\mathbf {H}$$ where $$\\mu _0$$ is the permeability of free space.\n\n$$\\left\\{ g_{n}^{m}(t),h_{n}^{m}(t)\\right\\}$$ are time-dependent Gauss coefficients that are further expanded in a basis of sixth-order B-splines (De Boor 2001) such that\n\n\\begin{aligned} g_{n}^{m}(t) = \\sum \\limits _{k=1}^K \\,\\, {^k} g_{n}^{m} B_k(t), \\end{aligned}\n(2)\n\nand similarly for $$h_{n}^{m}(t)$$, where $${^k}g_{n}^{m}$$ are the spline coefficients estimated for each Gauss coefficient, $$B_k$$ are the sixth-order B-spline basis functions, K is the number of basis functions, and we use a 6-month knot spacing with sixfold repeated knots at the endpoints $$t=1997.1$$ and $$t=2016.6$$.\n\nIn addition to a time-dependent internal field, we also estimate a static internal field above degree 20.
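The spline basis of Eq. (2) can be constructed as follows. This is an illustrative sketch assuming SciPy is available and that the spline is clamped with endpoint knots of full (order-6) multiplicity, the standard choice for such expansions; a quick partition-of-unity check verifies the construction.

```python
import numpy as np
from scipy.interpolate import BSpline

t_start, t_end = 1997.1, 2016.6
degree = 5  # polynomial degree 5, i.e. order-6 B-splines
# 38 interior knots at 6-month spacing between the endpoints
interior = t_start + 0.5 * np.arange(1, 39)
# clamp by repeating the endpoint knots with order-fold multiplicity
knots = np.r_[[t_start] * (degree + 1), interior, [t_end] * (degree + 1)]
K = len(knots) - degree - 1  # number of basis functions B_k

# with all spline coefficients set to 1, the basis functions sum to 1
# everywhere inside the model time span (partition of unity)
unity = BSpline(knots, np.ones(K), degree)
assert abs(unity(2005.3) - 1.0) < 1e-9
```

With this knot configuration each Gauss coefficient is carried by K = 44 spline coefficients, which is what gives the model its sub-annual temporal resolution while keeping the time dependence smooth.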
The low-degree part of the CHAOS-6 model that is the focus of this section, CHAOS-6l, was estimated using a maximum degree of 80 (in contrast, the high-degree part of the CHAOS-6 model, CHAOS-6h, was estimated using a maximum degree of 120—see the section “CHAOS-6h: estimation of the high-degree lithospheric field”).\n\nRegarding the external field, as in earlier CHAOS models, we use a representation of fields due to near-Earth magnetospheric sources, e.g. the magnetospheric ring current, in the solar magnetic (SM) coordinate system (up to $$n=2$$, with time dependence for $$n=1$$ parameterized by the external and induced parts of the RC index) and of fields due to remote magnetospheric sources, e.g. magnetotail and magnetopause currents, in geocentric solar magnetospheric (GSM) coordinates (also up to $$n=2$$, but restricted to order $$m=0$$). Additional offset parameters are included for the degree 1 SM terms, in bins of width 5 days for order $$m=0$$ and 30 days for order $$m=1$$.\n\nWe also co-estimate the Euler angles needed to describe the rotation between the vector magnetometer frame and the star imager frame. For Ørsted, this yields two sets of Euler angles (one for the period before 22 January 2000 when the onboard software of the star imager was updated and one for the period after that date), while for CHAMP and each Swarm satellite we solve for Euler angles in bins of 10 days.\n\n### Model estimation\n\nModel parameters were estimated using an iteratively reweighted least-squares algorithm making use of Huber weights. Regularization of temporal variations was also included.
Specifically, we minimized the cost function\n\n\\begin{aligned} \\mathbf {e}^{T} \\underline{\\underline{C}}^{-1}\\mathbf {e} + \\lambda _3 \\mathbf {m}^T \\underline{\\underline{\\Lambda }}_3 \\mathbf {m} + \\lambda _2 \\mathbf {m}^T \\underline{\\underline{\\Lambda }}_2 \\mathbf {m} \\end{aligned}\n(3)\n\nwhere $$\\mathbf {m}$$ is the model vector, $$\\underline{\\underline{C}}$$ is the data error covariance matrix which includes anisotropic errors due to attitude uncertainty (Holme and Bloxham 1996) and $$\\underline{\\underline{\\Lambda }}_3$$ and $$\\underline{\\underline{\\Lambda }}_2$$ are block diagonal regularization matrices penalizing the squared values of the third, respectively second, time derivatives of the radial field $$B_r$$ at the core surface. $$\\underline{\\underline{\\Lambda }}_3$$ involves integration over the full time span of the model, while $$\\underline{\\underline{\\Lambda }}_2$$ involves evaluating the second time derivative only at the model endpoints $$t=1997.1$$ and 2016.6. $$\\lambda _3$$ and $$\\lambda _2$$ determine the strength of the regularization applied to the model time dependence during the entire modelled interval and at the endpoints, respectively. We tested several values for these parameters and finally selected $$\\lambda _3=0.66\\,(\\hbox {nT/year}^3)^{-2}$$, $$\\lambda _2=100\\,(\\hbox {nT/year}^2)^{-2}$$ for the start time $$t=1997.1$$ and $$\\lambda _2=300\\,(\\hbox {nT/year}^2)^{-2}$$ for the end time $$t=2016.6$$. All time-dependent zonal terms were treated separately with $$\\lambda _3$$ set to a larger value of $$60\\,(\\hbox {nT/year}^3)^{-2}$$.\n\nThe vector of residuals $$\\mathbf {e}$$ comprises differences between data and model predictions\n\n\\begin{aligned} \\mathbf {e}=\\begin{bmatrix} \\mathbf {d}_{\\mathrm {obs}} \\\\ \\Delta \\mathbf {d}_{\\mathrm {obs}} \\end{bmatrix} - \\begin{bmatrix} \\mathbf {d}_{\\mathrm {mod}} \\\\ \\Delta \\mathbf {d}_{\\mathrm {mod}} \\end{bmatrix}.
\\end{aligned}\n(4)\n\nIt involves vector and scalar data, denoted by $$\\mathbf {d}_\\mathrm {obs}$$, and the associated model predictions $$\\mathbf {d}_\\mathrm {mod}=\\mathbf {G m}$$, where $$\\mathbf {G}$$ is the design matrix for the forward model. For scalar data, $$\\mathbf {G}$$ is the forward operator linearized around the present model. The data in CHAOS-6 also include along-track and east–west vector and scalar field differences, denoted by $$\\Delta \\mathbf {d}_\\mathrm {obs} = \\mathbf {d}_\\mathrm {obs} (\\mathbf {r}_2, t_2) - \\mathbf {d}_\\mathrm {obs} (\\mathbf {r}_1, t_1)$$. The associated model predictions are $$\\Delta \\mathbf {d}_\\mathrm {mod}=\\Delta \\mathbf { G \\,m} = [\\mathbf { G}(\\mathbf {r}_2, t_2) - \\mathbf { G} (\\mathbf {r}_1, t_1)] \\, \\mathbf {m}$$. Further details of the implementation of along-track and across-track differences in field modelling are described by Kotsiaros et al. (2014) and Olsen et al. (2015, 2016).\n\nIn deriving the CHAOS-6l time-dependent field model, we estimated 28,766 model parameters from 7,481,013 observations. The final model was obtained after 9 iterations.\n\n### CHAOS-6h: estimation of the high-degree lithospheric field\n\nThe final version of CHAOS-6 was obtained by taking the model coefficients from CHAOS-6l (as described above) and replacing the static field Gauss coefficients above degree $$n=24$$ with the static field coefficients from the CHAOS-6h model, truncated at degree $$n=110$$. CHAOS-6h is a new dedicated model of the high-degree lithospheric field. As for the CHAOS-4h model (Olsen et al. 2014), which provided the high-degree static field in both CHAOS-4 and CHAOS-5, it was derived using only low-altitude, solar minimum CHAMP data, from August 2008 to September 2010.\n\nIn addition to scalar and vector field data, CHAOS-6h makes use of along-track scalar and vector field differences from CHAMP.
The data selection criteria for the vector and scalar field data are the same as for CHAOS-6l. However, for CHAOS-6h, identical selection criteria are used for both scalar and vector field differences. Data are selected only if $${\\text{Kp}}\\le 3^0$$ and $$|{\\text {d}}D_{st}/{\\text {d}}t| \\le$$ 3 nT/h. Both night and dayside data are selected, excluding the dayside equatorial region (quasi-dipole latitudes within $$\\pm 10^{\\circ }$$). Regarding the CHAOS-6h model parameterization, a static field up to $$n = 120$$ was estimated, with a time-dependent internal field for $$n \\le 16$$ described by a third-order Taylor expansion (quadratic SV). The same bin lengths as in CHAOS-6l were used for the RC baseline correction terms and for the instrument alignment calibration parameters (Euler angles). As for CHAOS-4h (Olsen et al. 2014), we applied regularization above degree $$n=90$$ by minimizing the L2 norm of $$B_r$$ at Earth’s surface.\n\nIn all, 15,636 model parameters were estimated from 3,306,074 CHAMP observations when deriving CHAOS-6h.\n\n## Results and discussion\n\n### Fit to satellite data\n\nThe fit of the CHAOS-6l field model to scalar and vector satellite data is generally similar (within 0.15 nT) to the fits achieved by CHAOS-5. For example, the Huber-weighted rms misfit between CHAOS-6l and non-polar scalar field data is 2.12 nT for CHAMP in comparison with 2.20 nT, 2.18 nT and 2.19 nT, respectively, for Swarm Alpha, Bravo and Charlie. The misfit to the $$B_\\phi$$ vector component is 2.54 nT for CHAMP and 2.50, 2.47 and 2.50 nT, respectively, for Swarm Alpha, Bravo and Charlie.\n\nField difference data were not included in earlier CHAOS models; CHAOS-6 is the first field model to be derived using along-track spatial differences of vector data from both CHAMP and Swarm. Figure 2 presents histograms of residuals for the vector field differences.
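The Huber-weighted rms misfit statistic quoted throughout can be illustrated as below. This is our own sketch: the tuning constant c = 1.5 is a common choice for Huber weighting, not necessarily the value used for CHAOS-6, and residuals are assumed to be pre-scaled by the a priori error budget.

```python
import numpy as np

def huber_weighted_rms(residuals, sigma, c=1.5):
    """Huber-weighted rms of residuals with a priori error sigma.
    Residuals beyond c*sigma get weight c*sigma/|e|, limiting the
    influence of outliers on the misfit statistic (a sketch)."""
    e = np.asarray(residuals, dtype=float)
    w = np.minimum(1.0, c * sigma / np.maximum(np.abs(e), 1e-12))
    return np.sqrt(np.sum(w * e**2) / np.sum(w))
```

For example, with residuals [1.0, -1.2, 0.8, 50.0] nT and sigma = 1 nT, the plain rms is 25 nT while the Huber-weighted rms is about 5 nT: the single outlier no longer dominates the statistic.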
Comparing Swarm along-track and east–west differences, the along-track differences (involving measurements made on the same orbit by the same instrument) have Huber-weighted rms misfits of 0.27, 0.27 and 0.34 nT for the radial, north–south and east–west components, compared with 0.47, 0.50 and 0.57 nT for the east–west vector field differences between Swarm Alpha and Charlie. Despite involving measurements from different satellites, we conclude that the east–west vector field differences from Swarm are reliable and an internal field model is able to fit them to a weighted rms level of approximately 0.5 nT. Of course no east–west differences were possible with CHAMP, but we can compare the along-track differences. We find Huber-weighted rms misfits of 0.36, 0.36 and 0.40 nT for along-track differences of the radial, north–south and east–west vector field components from CHAMP. The along-track differences of Swarm vector data thus have generally smaller residuals than similar differences constructed with CHAMP data. This augurs well for the future of the Swarm mission as the satellites descend.\n\n### Fit to secular variation estimates from ground observatories\n\nThe highest quality records of geomagnetic secular variation and its time variability come from ground magnetic observatories, where absolute calibrations are routinely carried out. If we are to use CHAOS-6 (which is primarily determined by fitting satellite data) to study secular variation, it is essential that it also fits the available ground observatory data well. We find Huber-weighted rms misfits of CHAOS-6 to annual differences of ground observatory revised monthly means of 3.80, 3.65 and 3.07 nT/year, respectively, for the radial, north–south and east–west components.\n\nExamples of ground observatory secular variation time series, along with CHAOS-6 model predictions, are presented in Fig. 3.
The top row shows examples of some complete series spanning 1999 to 2016 from well-established observatories at Honolulu (HON) in the central Pacific, at Dourbes (DOU) in Europe and at Alice Springs (ASP) in Australia. We find that CHAOS-6 provides a good description of the time-dependent SV in all these locations. There are noticeable sub-decadal changes in the SV trends even in the central Pacific where SV is often considered to be less intense. As pointed out by Torta et al. (2015), a geomagnetic jerk (characterized by a ‘V’ shape in the SV as the SA changes sign) clearly occurred in 2014. This jerk event is generally well captured by the CHAOS-6 model.\n\nIn addition to presenting examples from well-established observatories, Fig. 3 also shows shorter time series at three recently established observatories in remote locations, at Gan in the southern Maldives, at King Edward Point (KEP) in South Georgia and at Tristan da Cunha (TDC) in the mid-Atlantic. Note that these plots are zoomed in compared to the previous ones and cover only the shorter time interval of 2010–2015. CHAOS-6 again satisfactorily fits the data from these newer observatories, even when sharp changes in SV are observed, for example in $${\\text {d}} B_\\phi /{\\text {d}}t$$ at Tristan da Cunha in 2014. Although the fit to KEP in Fig. 3 is visually less impressive than that at TDC, note that it is for the north–south field component, while the (typically quieter) east–west component is presented for TDC. The rms weighted residuals for $${\\text {d}}B_r/{\\text {d}}t, {\\text {d}}B_\\theta /{\\text {d}}t, {\\text {d}}B_\\phi /{\\text {d}}t$$ are, respectively, 1.2, 1.6, 0.9 nT/year for TDC and 1.65, 2.40, 2.09 nT/year for KEP.\n\n### Power spectra of field, SV and SA at Earth’s surface\n\nIn Fig. 4, we present the Lowes–Mauersberger spherical harmonic power spectra for the vector field, its first time derivative (SV) and its second time derivative (SA) at the Earth’s surface in 2015.
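For reference, the Lowes–Mauersberger spectrum used here is defined per degree n as R_n = (n+1)(a/r)^(2n+4) Σ_m (g_nm² + h_nm²). A minimal sketch (the dictionary input layout is our own illustrative choice):

```python
import numpy as np

def lowes_mauersberger(gh, radius_ratio=1.0):
    """Lowes-Mauersberger power spectrum per spherical harmonic degree:
    R_n = (n+1) * (a/r)^(2n+4) * sum_m (g_nm^2 + h_nm^2).
    gh[n] holds all degree-n Gauss coefficients (g and h together, nT);
    radius_ratio = a/r equals 1 at the Earth's surface."""
    return {n: (n + 1) * radius_ratio ** (2 * n + 4) * np.sum(np.square(coeffs))
            for n, coeffs in gh.items()}

# e.g. a pure axial dipole with g_1^0 = -29442 nT (the IGRF-12 value
# for epoch 2015) gives R_1 = 2 * g_1^0 squared, about 1.73e9 nT^2
R = lowes_mauersberger({1: [-29442.0]})
```

The spectrum measures the mean square field contributed by each degree over the sphere of radius r, which is why its slope versus degree is diagnostic of the source depth.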
The spectrum of the field itself decreases steadily until approximately degree 14, after which it begins to level out. The change from a negative (decreasing) slope to a positive (increasing) slope, which indicates that lithospheric sources are dominating, does not take place until degree 18. At the Earth’s surface, the spectrum of the SV also decreases with degree; the slope begins to level out at about degree 19, indicative of the noise floor being reached. In contrast to the field and the SV, the SA spectrum converges at the surface for CHAOS-6 in 2015, with essentially zero power remaining by its truncation degree 20. This is a consequence of the model regularization, which is stronger at higher degrees and which forces the SA towards zero at the model endpoints and minimizes time changes in the SA throughout. The low values of the SA spectrum at high degrees should thus not be taken as indicative of a detection limit for the SA, which would be related to the noise spectrum; the detection limit can only be properly assessed in unregularized inversions. The SA power spectrum shows weak peaks at degrees 3, 5, 7 and 9 in 2015. Given the surface spectra are well behaved and not diverging, the entire time-dependent part of the CHAOS-6 model (up to spherical harmonic degree 20) can legitimately be used to map and investigate time-dependent secular variation at the Earth’s surface.\n\n### Time changes in magnetic intensity at Earth’s surface\n\nIt is well known that the magnetic field intensity F at Earth’s surface is changing, with the South Atlantic Anomaly growing in size and moving westwards. With CHAOS-6, it is possible to map both trends and accelerations in the field intensity at Earth’s surface. In Fig. 5, we present maps of F, $${\\text {d}}F/{\\text {d}}t$$ and $${\\text {d}}^2 F/{\\text {d}}t^2$$ at Earth’s surface in 2015.
We calculate $${\\text {d}}F/{\\text {d}}t$$ in 2015 from F in 2015.5 minus F in 2014.5 and similarly $${\\text {d}}^2F/{\\text {d}}t^2$$ in 2015 from $${\\text {d}}F/{\\text {d}}t$$ in 2015.5 minus $${\\text {d}}F/{\\text {d}}t$$ in 2014.5.\n\nWe find that the field is presently strengthening in general in the eastern hemisphere and weakening in the western hemisphere. This is partly a consequence of the low-intensity South Atlantic Anomaly moving to the west, bringing lower field strengths with it, while stronger fields replace it in the east as it moves away. But in addition, between 1999 and 2016, the maxima of field intensity over North America have clearly decreased in amplitude, while the field intensity maximum over northern Asia has grown. A movie of the evolution of F is available at www.spacecenter.dk/files/magnetic-models/CHAOS-6.\n\nExamining the acceleration of the intensity ($${\\text {d}}^2 F/{\\text {d}}t^2$$), we find a strong positive acceleration is now taking place in the east, in a broad longitudinal sector from $$30^{\\circ }$$ to $$120^{\\circ }\\hbox {E}$$. There is also a notable patch of negative acceleration in field intensity around South-West Africa and a negative acceleration taking place close to Alaska and in the northern Pacific region. Considering a time series of such field intensity acceleration maps from 2000 to 2015 (also available as a movie at www.spacecenter.dk/files/magnetic-models/CHAOS-6), we find the intensity acceleration changes dramatically on sub-decadal timescales. For example, a series of prominent oscillations is observed west of southern Africa. The field, the SV and the SA downward continued to the core surface are presented later in the section “Secular variation and acceleration at Earth's core surface” and in Fig. 9.\n\nIn order to test the above inferences concerning field intensity changes made using the CHAOS-6 field model, in Fig.
6 we present time series of field intensity changes, based on annual differences of ground observatory revised monthly means (F is in this case calculated from the revised monthly mean values of $$B_r$$, $$B_\\theta$$, $$B_\\phi$$). A positive intensity acceleration in 2015 is clearly seen at Novosibirsk, Russia, at eastern longitudes in the northern hemisphere and is also evident although weaker at Niemegk, Germany, and at Learmonth, Australia. A relatively long-term negative acceleration is evident in the rate of field intensity decrease observed in Alaska. Overall, we are satisfied that CHAOS-6 adequately explains the observed trends and accelerations of the recent geomagnetic field intensity.\n\n### CHAOS-6h and the high-degree lithospheric field\n\nTurning to the higher degree static field in CHAOS-6 (from CHAOS-6h, see the section “CHAOS-6h: estimation of the high-degree lithospheric field”), Fig. 7 presents a map of the lithospheric part of the radial field (degrees 15–110) along with the power spectrum and degree correlation at the Earth’s surface in comparison with MF7 (Maus 2010) and CHAOS-4. CHAOS-6 agrees with MF7 much better than CHAOS-4, whose power spectrum begins to show deviations above degree 83, where its degree correlation also drops below 0.85. In contrast, the spectrum for CHAOS-6 remains close to that of MF7 up to degree 110, and only above this degree does its degree correlation fall below 0.85. We therefore consider the static field in CHAOS-6 to be reliable up to degree 110 and recommend its use to this degree.\n\nThe map in Fig. 7 shows the radial field plotted at the Earth’s surface considering degrees 16 to 110. The map displays well-localized anomalies, especially over the continents. Over the oceans, short-wavelength north–south linear features are visible, despite their relatively low amplitude. Differences relative to MF7 mostly concern these features.
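The degree correlation used above to quantify agreement between two field models is, per degree n, the normalized inner product of their degree-n Gauss coefficients. A minimal sketch (the dictionary layout is illustrative; both models must stack their coefficients in the same order):

```python
import numpy as np

def degree_correlation(c1, c2):
    """Per-degree correlation between two field models: for each degree n,
    rho_n = sum(g1*g2 + h1*h2) / sqrt(sum(g1^2+h1^2) * sum(g2^2+h2^2)).
    c1[n] and c2[n] are the degree-n Gauss coefficient arrays (sketch)."""
    rho = {}
    for n in c1:
        a = np.asarray(c1[n], dtype=float)
        b = np.asarray(c2[n], dtype=float)
        rho[n] = (a @ b) / np.sqrt((a @ a) * (b @ b))
    return rho
```

A value of 1 at degree n means the two models describe identical degree-n structure up to amplitude; the 0.85 threshold quoted in the text is a conventional rule of thumb for where models stop agreeing.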
Some differences, especially around auroral electrojet latitudes south of Australia, are possibly due to disturbed tracks, but it may also be that MF7 lacks some along-track power due to the filtering applied during its construction. It will be interesting to see how this part of the signal develops in future models constructed from Swarm data, as the lower pair of satellites descends and becomes better able to resolve the short-wavelength east–west field gradients.\n\nStatistics regarding the Huber-weighted mean and rms misfits of CHAOS-6h to the CHAMP field and field difference data used to construct it are presented in Table 1.\n\n### Secular variation and acceleration at Earth’s core surface\n\nIn order to study the origin of secular variation, it is necessary to downward continue the field to the outer edge of its source region in the core. We carry out the downward continuation, assuming that there are no current sources in the mantle on the timescale of observable secular variation, so the field continues to be described by a potential. The resulting spectra for the field, SV and SA at the core surface in 2015 are presented in Fig. 8.\n\nAbove degree 13, we see an upward trend in the field spectrum that we attribute to lithospheric sources. We therefore choose to present maps of the field at the core surface only to degree 13. The SV spectrum increases rapidly with degree at first, but levels out above degree 9. It starts to increase more rapidly again above degree 18; plotting maps of the SV at the core surface, we see this is associated with an increase in disorganized noise in the maps. We therefore believe the SV in CHAOS-6 is satisfactory out to at least degree 16, and possibly even as far as degree 18. Turning to the SA spectrum, in CHAOS-6 this converges at high degree at the core surface due to the applied regularization. In 2015 (relatively close to the model endpoint), regularization starts to dominate the solution already above degree 9.
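Under the source-free-mantle assumption, downward continuation of the spectrum amounts to rescaling each degree by a geometric factor. A minimal sketch (the core radius of 3485 km is the usual core–mantle boundary value, an assumption on our part):

```python
a = 6371.2   # reference radius, km
c = 3485.0   # assumed core-mantle boundary radius, km

def downward_continue_spectrum(R_surface):
    """Rescale a Lowes-Mauersberger spectrum from the Earth's surface to
    the core surface: R_n(c) = (a/c)^(2n+4) * R_n(a), valid only if the
    mantle carries no current sources so the potential-field description
    holds down to the core (a sketch)."""
    return {n: (a / c) ** (2 * n + 4) * Rn for n, Rn in R_surface.items()}
```

The amplification grows rapidly with degree (about a factor 37 at degree 1, but roughly 7 × 10⁷ at degree 13), which is why small-scale noise at the surface overwhelms the core field spectrum above degree 13 or so.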
We nonetheless choose to present the SA at the core surface also to degree 16, since some information on rapid field changes may be recovered up to this degree, particularly for epochs more distant from the model endpoints. Maps of the radial field to degree 13, as well as the radial SV and radial SA to degree 16, at the core surface in 2015 are presented in Fig. 9. Movies showing the time changes of such maps are available at www.spacecenter.dk/files/magnetic-models/CHAOS-6.

We find that regions of intense radial SV at the core surface occur close to the edges of patches of strong radial field that can be seen to drift when examining a sequence of maps (or a movie) of the radial field between 1999 and 2016. Intense SV in 2015 is observed to lie in a broad band equatorward of $$30^{\circ }$$ latitude between longitudes $$100^{\circ }\hbox {E}$$ and $$90^{\circ }\hbox {W}$$. There is also a well-localized negative–positive–negative series of three patches of radial SV visible under Alaska and Siberia; this appears to be a consequence of a very rapid westward movement of the intense high-latitude radial field patches. The SV is also generally large in the longitudinal sector from $$60^{\circ }$$ to $$120^{\circ }\hbox {E}$$, particularly in the northern hemisphere.

Regarding the radial field SA at the core surface in 2015, the most prominent features are a positive–negative pair under India–South-East Asia, a series of strong radial SA patches of alternating sign in the region under northern South America, and a positive–negative pair at high northern latitudes under Alaska–Siberia that is linked to the evolution of the high-latitude SV patches described above.

In both the radial SV and SA, there is a striking absence of structure in the southern polar region (see also the discussion in Holme et al. 2011; Olsen et al. 2014). Although the Pacific region shows lower amplitude radial SV (again see Holme et al. 2011; Olsen et al.
2014), we note that in 2015 there is strong radial SA in the central Pacific, consistent with the aftermath of the jerk observed in 2014 at Hawaii (see Fig. 3, top left). Although the flows driving SV may be weaker in this region, they nonetheless seem to undergo similar time variations.

Earlier versions of the CHAOS model (Finlay et al. 2015) as well as independent models based on CHAMP and DMSP data (Chulliat and Maus 2014; Chulliat et al. 2015) have demonstrated that the SA undergoes dramatic changes on sub-decadal timescales, notably exhibiting a series of pulses in amplitude. In Fig. 10, power spectra of the SA at the core surface for a number of epochs, and the L2 norm of the SA at the core surface (e.g. Finlay et al. 2015), calculated for different spherical harmonic truncation levels, are presented for CHAOS-6. The applied regularization forces the SA spectra to decay at high degree, and it begins to have an influence already between degrees 10 and 12, especially close to the model endpoints. We find peaks in the SA norm, indicating pulses of SA, for all the investigated truncation levels, at around 2006, 2009.5 and 2013. The exact time of the pulses depends on the chosen truncation level of the SA, which was usually set to degree 6, 8 or 9 in earlier studies. The relative sizes of the pulses also change with the chosen truncation level. As is also evident from the associated power spectra, the 2006 pulse displays more power at high degrees (10–15), while the 2013 pulse has relatively more power at the lower odd degrees 5, 7 and 9. Although each pulse has a different spectral signature, there is always enhanced power in the band of degrees from 5 to 7. Maps and movies of the radial SA at the core surface also show recurring oscillations at particular locations, for example under northern South America around $$40^{\circ }\hbox {W}$$ close to the equator.
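A truncated SA norm of this kind can be accumulated degree by degree from the second time derivatives of the Gauss coefficients, using the mean-square of the radial component over a sphere. The sketch below is our own, not the CHAOS processing code; the coefficient layout and the core radius value are assumptions:

```python
import numpy as np

def sa_rms_at_core(sa_coeffs, nmax, a=6371.2, c=3485.0):
    """RMS of the radial secular acceleration over the core surface,
    truncated at spherical harmonic degree nmax.

    sa_coeffs: list indexed from degree 1; entry n-1 holds the 2n+1 second
    time derivatives of the Schmidt semi-normalized Gauss coefficients.
    The factor (n+1)^2 / (2n+1) converts the coefficient power of degree n
    into the mean square of the radial component; (a/c)**(2n+4) continues
    it down to the core surface.
    """
    total = 0.0
    for n, cn in enumerate(sa_coeffs, start=1):
        if n > nmax:
            break
        total += ((n + 1) ** 2 / (2 * n + 1)
                  * (a / c) ** (2 * n + 4) * np.sum(np.asarray(cn) ** 2))
    return np.sqrt(total)

# The norm grows monotonically with the truncation degree, which is why
# the timing and relative size of SA pulses depend on the chosen nmax.
coeffs = [np.random.default_rng(0).standard_normal(2 * n + 1)
          for n in range(1, 10)]
assert sa_rms_at_core(coeffs, 9) >= sa_rms_at_core(coeffs, 6)
```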
High-amplitude SA is often present around longitude $$100^{\circ }\hbox {E}$$.

Present limitations in our ability to infer the high-degree SA in 2015 are also illustrated in Fig. 10. The power spectrum of the SA in 2015 drops rapidly above spherical harmonic degree 9. Looking at the SA norm versus time, we see that this is a consequence of the imposed model end constraints, which force the SA towards zero in 2016. The end constraints have less influence on the lower degrees (for example, see the SA norm truncated at $$n=6$$), but a longer time span of data is certainly required in order to better determine the high-degree ($$n>9$$) SA in 2015.

## An interpretation based on quasi-geostrophic core flows

One possible interpretation of the observed secular variation is in terms of rotation-dominated (or quasi-geostrophic) flows of liquid metal in the outer core. An estimate of the responsible flow may be obtained by inverting the magnetic induction equation evaluated at the surface of the core,

\begin{aligned} \frac{\partial B_r}{\partial t} = - \nabla_H \cdot \left( \mathbf{u} B_r \right), \end{aligned}
(5)

where $$\mathbf{u}$$ is the core surface flow, $$\nabla_H\cdot$$ is the horizontal divergence operator, and where we have neglected magnetic diffusion on the decadal and shorter timescales that are of interest here (see Finlay et al. (2016) for a discussion of the effects of diffusion on longer timescales).

Here, we present a quasi-geostrophic solution for $$\mathbf{u}$$ obtained using the inversion method of Gillet et al. (2015b), taking as input the CHAOS-6 internal field to degree 13 and its SV to degree 16, evaluated at 1-year intervals between 1999.0 and 2016.0.
We impose a columnar flow constraint at the core surface that follows from quasi-geostrophy and incompressibility in the outer core volume (Amit and Olson 2004)

\begin{aligned} \nabla_H\cdot \left( \mathbf{u}\cos^2\theta \right) = 0\,, \end{aligned}
(6)

and also force the flow to be equatorially symmetric, consistent with core motions that are to leading order axially invariant (Pais and Jault 2008), so that

\begin{aligned} u_{\phi}(\theta,\phi) = u_{\phi}(\pi-\theta,\phi) \quad \text{ and } \quad u_{\theta}(\theta,\phi) = -u_{\theta}(\pi-\theta,\phi). \end{aligned}
(7)

The core surface flow is expanded into toroidal and poloidal parts

\begin{aligned} \mathbf{u} = \nabla \times (T \mathbf{r}) + \nabla_H (rS)\,, \end{aligned}
(8)

where $$\mathbf{r}$$ is the position vector and T and S are toroidal and poloidal scalars that are further expanded using a Schmidt semi-normalized spherical harmonic basis, up to degree and order 28. We consider in (5) temporally correlated SV model errors arising from the interaction of the flow with temporally correlated, but unresolved, small-scale field from degrees 14 to 30. An iterative scheme is employed, updating at each step the flow model covariance matrix using information from an ensemble of solutions. CHAOS-6 does not provide covariance information for the input SV, so we adopt simple diagonal covariances for the SV observation errors. These are deduced from the errors provided by the COV-OBS.x1 field model (see Fig. 4 in Gillet et al. 2015a), with a fit to the SV uncertainties in 2010 extrapolated to degree 16. Further details of our flow inversion scheme may be found in Gillet et al. (2015b). The flow models presented here go beyond those presented by Gillet et al.
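The symmetry conditions in eq. (7) can be enforced numerically on a gridded flow by averaging each component with its mirror image about the equator. The short illustration below is our own sketch; it assumes a regular colatitude grid placed symmetrically about the equator, so that reversing the first array axis maps $$\theta$$ to its equatorial mirror point:

```python
import numpy as np

def equatorially_symmetrize(u_theta, u_phi):
    """Project gridded (u_theta, u_phi) onto the equatorially symmetric part.

    Arrays have shape (n_colat, n_lon); axis 0 runs over colatitudes placed
    symmetrically about the equator, so [::-1] mirrors a point to the
    opposite hemisphere. For an equatorially symmetric flow, u_phi is even
    and u_theta is odd under this mirror (cf. eq. (7)).
    """
    u_phi_s = 0.5 * (u_phi + u_phi[::-1, :])
    u_theta_s = 0.5 * (u_theta - u_theta[::-1, :])
    return u_theta_s, u_phi_s

rng = np.random.default_rng(1)
ut = rng.standard_normal((18, 36))
up = rng.standard_normal((18, 36))
uts, ups = equatorially_symmetrize(ut, up)
# The projected flow satisfies the symmetry conditions exactly:
assert np.allclose(ups, ups[::-1, :])
assert np.allclose(uts, -uts[::-1, :])
```

In practice the constraint is applied in spectral space during the inversion; this grid-space projection is just the simplest way to see what eq. (7) demands.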
(2015b) in using SV to higher degree, and in focusing on explaining rapid field variations during the past 17 years when high-quality satellite data have been available.

A map of the resulting quasi-geostrophic flow in 2015, truncated at degree 16, is presented in Fig. 11. Here the green lines follow imaginary tracers in the flow, with the thickness of the line indicating the strength of the flow. At degree 16, the kinetic energy of the ensemble average flow is greater than 50 % of the kinetic energy of any of the ensemble realizations (see Gillet et al. (2015b), section 4.1, for a discussion of how the ensemble can be used to characterize the reliability of the inferred flow). As shown in Fig. 11, the flow is dominated by an anticyclonic, planetary-scale, eccentric gyre consisting of equatorward flow around $$100^{\circ }\hbox {E}$$ that then meanders westward in a belt around 20$$^{\circ }$$–30$$^{\circ }$$ N and S of the equator, flows poleward again around $$90^{\circ }\hbox {W}$$, and closes with intense westward flow at high latitudes around $$65^{\circ }$$–$$75^{\circ }$$ N and S, close to the tangent cylinder that circumscribes the inner core. Broadly similar planetary gyres are found in many recent flow inversions (e.g. Amit and Pais 2013; Aubert 2015; Gillet et al. 2015b; Baerenzung et al. 2016). The planetary gyre obtained here is, by construction, equatorially symmetric. Using the high-resolution SV from CHAOS-6, we are able to obtain more detail regarding the small-scale structure of the gyre; the flow in Fig. 11 is presented to degree 16, while, for example, Gillet et al. (2015b) presented flows only to degree 14. We find the flow within the centre of the gyre is surprisingly quiescent, for example in the vicinity of the South Atlantic reverse flux patches.
This is despite these lying in the Atlantic hemisphere, which is typically considered to be an active region characterized by high-amplitude SV.

In agreement with the findings of Gillet et al. (2015b), we find a series of prominent non-axisymmetric azimuthal (i.e. east–west, or $$u_\phi$$) jets close to the equator. We find these jets undergo time-dependent oscillations at some locations, for example at $$40^{\circ }\hbox {W}$$ at the equator (see Fig. 12). At this location, strong oscillations of the radial field SA are seen at the core surface. We find that pulses of SA at a particular location correspond to times of large acceleration in $$u_\phi$$, occurring between maxima and minima of $$u_\phi$$; for example at $$40^{\circ }\hbox {W}$$ at the equator, SA extrema occurred in 2005.8, 2009 and 2013.5.

Fig. 11 shows that azimuthal flows at low latitude are dominated by their non-axisymmetric part; their amplitude is significantly larger than that of the axisymmetric motions that are often interpreted as torsional Alfvén waves (Gillet et al. 2010, 2015b) in the same sub-decadal period range. Time–longitude plots of $$u_\phi$$ at the equator do not show coherent propagation in longitude, but rather standing oscillatory features, with enhanced amplitude at particular locations. Interpretation of quasi-geostrophic flows at low latitudes requires pause for thought. Quasi-geostrophic models in a thin-shell ($$\beta$$-plane) geometry, as is relevant for the atmosphere and oceans, are known to break down at the equator. However, the outer core is a thick shell, and recent tests of the quasi-geostrophic approximation in this geometry (comparing inertial modes in quasi-geostrophic models against full 3D solutions) show encouraging agreement, even for equatorially confined modes (Canet et al. 2014; Labbé et al. 2015). Further work is needed to better understand the dynamics of the low-latitude non-axisymmetric jets.
For example: what drives such motions, and does the non-axisymmetric Lorentz force play an important role in producing the observed oscillations?

At this stage, it is important to recognize that other hypotheses are possible regarding the nature of the core flows. For example, a stratified layer may exist close to the core surface (a topic of ongoing debate, e.g. Buffett 2014; Buffett et al. 2016; Chulliat et al. 2015; Lesur et al. 2015), inhomogeneous boundary conditions may force departures from equatorial symmetry (e.g. Amit and Pais 2013), or large scales may for some reason dominate the flow (e.g. Bloxham 1988; Whaler and Beggan 2015). Nonetheless, the primary flow structures identified here, in particular equatorward flow in both the northern and southern hemispheres around $$100^{\circ }\hbox {E}$$ and time-dependent non-axisymmetric westward flow at low latitudes, are sufficient to reproduce the observed rapid field changes, within the uncertainties due to the unresolved small-scale field.

## Conclusions

In this article, we have presented the CHAOS-6 field model and used it to analyse recent patterns of geomagnetic secular variation. CHAOS-6 includes more than 2 years of Swarm data and the latest ground observatory magnetic measurements as available in March 2016, along with data from previous satellite missions, and it provides information on geomagnetic secular variation between 1999.0 and 2016.5. It is the first member of the CHAOS field model series to use spatial field differences as data, utilizing along-track differences from both the Swarm and CHAMP satellites and east–west differences between Swarm Alpha and Charlie.

At Earth’s surface, we find large-scale patterns of secular acceleration that change on short, sub-decadal, timescales. A geomagnetic jerk that occurred in 2014 is visible in Australia, in the central Pacific, as well as in Europe.
Transient accelerations are also seen in the strengthening and weakening of the field intensity; there has recently been a notable positive acceleration of the field intensity in the Asian longitude sector. CHAOS-6 captures the secular variation at the core surface up to at least spherical harmonic degree 16. Looking at the time derivative of this secular variation, the secular acceleration, we find that it has been dominated by a series of pulses, seen most clearly at low latitudes in the Atlantic sector and also at longitudes close to $$100^{\circ }\hbox {E}$$. Inverting the secular variation for a quasi-geostrophic core flow, we find the dominant time-averaged feature is a planetary-scale gyre that flows equatorward around $$100^{\circ }\hbox {E}$$, then westward at mid- to low latitudes, and then poleward around $$90^{\circ }\hbox {W}$$, closing with intense westward flow at high latitudes close to the tangent cylinder. Rapid fluctuations are evident in the eastern, equatorward limb of the gyre. In addition, the quasi-geostrophic flows show prominent oscillations of non-axisymmetric azimuthal jets at low latitudes that provide a possible explanation for the localized, oscillatory SA pulses observed in this region, for example near $$40^{\circ }\hbox {W}$$ under northern South America.

Longer time series of Swarm data are needed to test and extend the preliminary results reported here for the secular variation and secular acceleration in 2015. The relatively long timescales involved, even for rapid secular acceleration pulses, mean that long-term monitoring from space is essential if new hypotheses concerning the responsible core physics are to be properly tested. A lengthy Swarm mission, with the satellites gradually moving to lower altitudes, thus holds great promise.
As the constellation configuration evolves, and the local time separation between the upper satellite and the lower pair increases, there will also be exciting opportunities to study secular variation on even shorter timescales.

The CHAOS-6 model is available from: www.spacecenter.dk/files/magnetic-models/CHAOS-6.

## References

• Amit H, Olson P (2004) Helical core flow from geomagnetic secular variation. Phys Earth Planet Int 147(1):1–25

• Amit H, Pais MA (2013) Differences between tangential geostrophy and columnar flow. Geophys J Int 194:145–157

• Aubert J (2015) Geomagnetic forecasts driven by Earth’s core thermal wind dynamics. Geophys J Int 203:1738–1751

• Baerenzung J, Holschneider M, Lesur V (2016) The flow at the Earth’s core–mantle boundary under weak prior constraints. J Geophys Res Solid Earth 121(3):1343–1364. doi:10.1002/2015JB012464

• Bloxham J (1988) The determination of fluid flow at the core surface from geomagnetic observations. In: Vlaar NJ, Nolet G, Wortel MJR, Cloetingh SAPL (eds) Mathematical geophysics, a survey of recent developments in seismology and geodynamics. Reidel, Dordrecht, pp 189–208

• Buffett BA (2014) Geomagnetic fluctuations reveal stable stratification at the top of the Earth’s core. Nature 507:484–487

• Buffett BA, Knezek N, Holme R (2016) Evidence for MAC waves at the top of Earth’s core and implications for variations in length of day. Geophys J Int 204:1789–1800

• Canet E, Finlay CC, Fournier A (2014) Hydromagnetic quasi-geostrophic modes in rapidly rotating planetary cores. Phys Earth Planet Int 229:1–15

• Chulliat A, Maus S (2014) Geomagnetic secular acceleration, jerks, and a localized standing wave at the core surface from 2000 to 2010. J Geophys Res. doi:10.1002/2013JB010604

• Chulliat A, Thébault E, Hulot G (2010) Core field acceleration pulse as a common cause of the 2003 and 2007 geomagnetic jerks. Geophys Res Lett.
doi:10.1029/2009GL042019

• Chulliat A, Alken P, Maus S (2015) Fast equatorial waves propagating at the top of the Earth’s core. Geophys Res Lett 42:3321–3329

• Clarke E, Baillie O, Reay S (2013) A method for the real time production of quasi-definitive magnetic observatory data. Earth Planets Space 65:1363–1374

• De Boor C (2001) A practical guide to splines. Springer-Verlag, New York

• Finlay CC, Olsen N, Tøffner-Clausen L (2015) DTU candidate field models for IGRF-12 and the CHAOS-5 geomagnetic field model. Earth Planets Space. doi:10.1186/s40623-015-0274-3

• Finlay CC, Aubert J, Gillet N (2016) Gyre-driven decay of the Earth’s magnetic dipole. Nat Commun 7:10422. doi:10.1038/ncomms10422

• Gellibrand H (1635) A discourse mathematical on the variation of the magnetic needle. Together with its admirable diminution lately discovered. William Jones, London

• Gillet N, Jault D, Canet E, Fournier A (2010) Fast torsional waves and strong magnetic field within the Earth’s core. Nature 465:74–77. doi:10.1038/nature09010

• Gillet N, Jault D, Finlay CC, Olsen N (2013) Stochastic modeling of the Earth’s magnetic field: inversion for covariances over the observatory era. Geochem Geophys Geosyst. doi:10.1029/2012GC004355

• Gillet N, Barrois O, Finlay CC (2015a) Stochastic forecasting of the geomagnetic field from the COV-OBS.x1 geomagnetic field model, and candidate models for IGRF-12. Earth Planets Space. doi:10.1186/s40623-015-0225-z

• Gillet N, Jault D, Finlay CC (2015b) Planetary gyre and time-dependent midlatitude eddies at the Earth’s core surface. J Geophys Res 120:3991–4013

• Hansteen C (1819) Untersuchungen über den Magnetismus der Erde. Lehmann and Gröndahl, Christiania

• Holme R, Bloxham J (1996) The treatment of attitude errors in satellite geomagnetic data. Phys Earth Planet Int 98:221–233

• Holme R, Olsen N, Bairstow F (2011) Mapping geomagnetic secular variation at the core–mantle boundary. Geophys J Int 186:521–528.
doi:10.1111/j.1365-246X.2011.05066.x

• Jackson A, Jonkers ART, Walker MR (2000) Four centuries of geomagnetic secular variation from historical records. Philos Trans R Soc Lond A 358:957–990

• Kotsiaros S, Finlay CC, Olsen N (2014) Use of along-track magnetic field differences in lithospheric field modelling. Geophys J Int 200:878–887

• Labbé F, Jault D, Gillet N (2015) On magnetostrophic inertia-less waves in quasi-geostrophic models of planetary cores. Geophys Astrophys Fluid Dyn 109:587–610

• Langel RA, Mead GD, Lancaster ER, Estes RH, Fabiano EB (1980) Initial geomagnetic field model from Magsat vector data. Geophys Res Lett 7:793–796

• Lesur V, Wardinski I, Rother M, Mandea M (2008) GRIMM: the GFZ reference internal magnetic model based on vector satellite and observatory data. Geophys J Int 173:382–394

• Lesur V, Wardinski I, Hamoudi M, Rother M (2010) The second generation of the GFZ reference internal magnetic model: GRIMM-2. Earth Planets Space 62:765–773. doi:10.5047/eps.2010.07.007

• Lesur V, Whaler K, Wardinski I (2015) Are geomagnetic data consistent with stably stratified flow at the core–mantle boundary? Geophys J Int 201:929–946

• Lesur V, Rother M, Wardinski I, Schachtschneider R, Hamoudi M, Chambodut A (2015) Parent magnetic field models for the IGRF-12 GFZ-candidates. Earth Planets Space 67:87. doi:10.1186/s40623-015-0239-6

• Macmillan S, Olsen N (2013) Observatory data and the Swarm mission. Earth Planets Space 65:1355–1362

• Maus S (2010) Magnetic field model MF7. www.geomag.us/models/MF7.html

• Olsen N, Lühr H, Sabaka TJ, Mandea M, Rother M, Tøffner-Clausen L, Choi S (2006) CHAOS—a model of Earth’s magnetic field derived from CHAMP, Ørsted, and SAC-C magnetic satellite data. Geophys J Int 166:67–75

• Olsen N, Mandea M, Sabaka TJ, Tøffner-Clausen L (2009) CHAOS-2—a geomagnetic field model derived from one decade of continuous satellite data.
Geophys J Int 179(3):1477–1487

• Olsen N, Mandea M, Sabaka TJ, Tøffner-Clausen L (2010) The CHAOS-3 geomagnetic field model and candidates for the 11th generation of IGRF. Earth Planets Space 62:719–727

• Olsen N, Lühr H, Finlay CC, Tøffner-Clausen L (2014) The CHAOS-4 geomagnetic field model. Geophys J Int 197:815–827

• Olsen N et al (2015) The Swarm initial field model for the 2014 geomagnetic field. Geophys Res Lett 42:1092–1098

• Olsen N, Finlay CC, Kotsiaros S, Tøffner-Clausen L (2016) A model of Earth’s magnetic field derived from two years of Swarm data. Earth Planets Space. doi:10.1186/s40623-016-0488-z

• Pais MA, Jault D (2008) Quasi-geostrophic flows responsible for the secular variation of the Earth’s magnetic field. Geophys J Int 173:421–443

• Peltier A, Chulliat A (2010) On the feasibility of promptly producing quasi-definitive magnetic observatory data. Earth Planets Space 62:e5–e8. doi:10.5047/eps.2010.02.002

• Richmond AD (1995) Ionospheric electrodynamics using magnetic Apex coordinates. J Geomagn Geoelectr 47:191–212

• Sabaka TJ, Olsen N, Purucker ME (2004) Extending comprehensive models of the Earth’s magnetic field with Ørsted and CHAMP data. Geophys J Int 159:521–547

• Sabaka TJ, Olsen N, Tyler R, Kuvshinov A (2015) CM5, a pre-Swarm comprehensive magnetic field model derived from over 12 years of CHAMP, Ørsted, SAC-C and observatory data. Geophys J Int 200:1596–1626. doi:10.1093/gji/ggu493

• Shure L, Parker RL, Backus GE (1982) Harmonic splines for geomagnetic modelling. Phys Earth Planet Int 28:215–229. doi:10.1016/0031-9201(82)90003-6

• Tøffner-Clausen L, Lesur V, Brauer P, Olsen N, Finlay CC (2016) In-flight scalar calibration and characterisation of the Swarm magnetometry package. Earth Planets Space (in press)

• Torta JM, Pavón-Carrasco FJ, Marsal S, Finlay CC (2015) Evidence for a new geomagnetic jerk in 2014. Geophys Res Lett 42(19):7933–7940.
doi:10.1002/2015GL065501

• Whaler KA, Beggan CD (2015) Derivation and use of core surface flows for forecasting secular variation. J Geophys Res 120:1400–1414

## Authors’ contributions

CCF derived and analysed the CHAOS-6l model, drafted the manuscript and processed the ground observatory data to produce the revised monthly means and the RC index. NiO developed the CHAOS field modelling software, participated in the design of the study and derived and analysed the CHAOS-6h model. SK developed the scheme for using field differences in geomagnetic field modelling and participated in the design of the study. NG derived the quasi-geostrophic core flow models. LTC prepared and processed the Swarm data. All authors read and approved the final manuscript.

### Acknowledgements

We wish to thank ESA for providing access to the Swarm L1b data. The staff of the geomagnetic observatories and INTERMAGNET are thanked for supplying high-quality observatory data, and BGS are thanked for providing us with checked and corrected observatory hourly mean values. The support of the CHAMP mission by the German Aerospace Center (DLR) and the Federal Ministry of Education and Research is gratefully acknowledged. The Ørsted Project was made possible by extensive support from the Danish Government, NASA, ESA, CNES, DARA and the Thomas B. Thriges Foundation. CCF acknowledges support from the Research Council of Norway through the Petromaks programme, by ConocoPhillips and Lundin Norway, and by the Technical University of Denmark. NG acknowledges support from the French Agence Nationale de la Recherche (Grant ANR-2011-BS56-011) and the French Centre National d’Etudes Spatiales (CNES) for the study of Earth’s core dynamics in the context of the Swarm mission of ESA; ISTerre is part of Labex OSUG@2020 (ANR10 LABX56).
Some numerical computations were performed at the Froggy platform of the CIMENT infrastructure (https://ciment.ujf-grenoble.fr) supported by the Rhône-Alpes region (Grant CPER07_13 CIRA), the OSUG@2020 labex (reference ANR10 LABX56) and the Equip@Meso project (reference ANR-10-EQPX-29-01). Nathanaël Schaeffer is thanked for assistance in producing the core flow map, Fig. 11. Two anonymous reviewers are thanked for their comments that helped to improve the clarity of the manuscript.

### Competing interests

The authors declare that they have no competing interests.

## Author information

### Corresponding author

Correspondence to Christopher C. Finlay.
[ "Trigonometry in astronomy. Astronomy and Trigonometry Essay Example 2019-01-17\n\nTrigonometry in astronomy Rating: 4,2/10 462 reviews\n\nHistory of Trigonometry Outline", null, "The second book of the Sphaerica describes the application of spherical geometry to astronomical phenomena and is of little mathematical interest. Moreover since the Babylonian position system for fractions was so obviously superior to the Egyptians unit fractions and the Greek common fractions, it was natural for Ptolemy to subdivide his degrees into sixty , each of these latter into sixty , and so on. This page was last updated on August 2, 2016. The following report will provide insights on how astronomers use trigonometry and how it has provided them, as well as the rest of the world, with essential information about our universe. Today, a gnomon is the vertical rod or similar device that makes the shadow on a sundial. And so, problems in trigonometry have required new developments in synthetic geometry. He listed the six distinct cases of a right-angled triangle in spherical trigonometry, and in his On the Sector Figure, he stated the law of sines for plane and spherical triangles, discovered the for spherical triangles, and provided proofs for both these laws.\n\nNext\n\nTrigonometry", null, "We shall see that trigonometry is exactly the ingredient that makes such geometric models—both ancient and Copernican—quantitatively useful. Did the data in the Indian table come directly from the Babylonians, or via the Greeks? Crowe, Theories of the World from Antiquity to the Copernican Revolution, 2 nd revised ed. According to ad 23—79 , Hipparchus created a that assigned names to each star along with his measurements of their positions. Most of the formulas, used in related astronomical computation, are trigonometrical. 
Mnemonics\n\nStudents often use mnemonics to remember facts and relationships in trigonometry.\n\nHipparchus\n\nTrigonometry can also be used to measure the distance to underground water systems: the ancient Sinhalese in Sri Lanka, when constructing reservoirs in the kingdom, used trigonometry to calculate the gradient of the water flow. The method of measuring distances in space is called trigonometric parallax. It was the Swiss mathematician Leonhard Euler (1707–83), though, who fully incorporated complex numbers into trigonometry. The modern value for the length of the year is well known: roughly 365.25 days. Trigonometry is further used to determine how an object falls or at what angle a gun is fired. This is how the Greeks started, with the Pythagorean theorem.\n\nHow is astronomy impacted by trigonometry? (Intermediate)\n\nBased on one interpretation of the Plimpton 322 tablet (c. 1900 BC), the Babylonians may already have possessed a table of trigonometric ratios. The laws of sines and cosines are useful in all branches of geometry, since every polygon may be described as a finite combination of triangles. As an illustration of this method, we show the computations for the planet Mars, using modern trigonometry and notation. Hipparchus communicated with observers at Alexandria in Egypt, who provided him with some times of equinoxes, and probably also with astronomers at Babylon. (See also The Historical Development of the Calculus.)\n\nReal life applications of trigonometry\n\nLater, Richard of Wallingford wrote an important work, the Quadripartitum, on the fundamentals of trigonometry needed for the solution of problems of spherical astronomy. According to Ptolemy, Hipparchus was aware that the movements of the planets were too complex to be accounted for by the same simple models, but he did not attempt to devise a satisfactory planetary theory. In other words, the quantity the Egyptians found for the seked is the cotangent of the angle between the base of the pyramid and its face. From the swift movements of Mercury to the lumbering journey of Saturn, we see the same general pattern with important individual differences. It is clear that in astronomy Ptolemy made use of the catalog of star positions bequeathed by Hipparchus, but whether or not Ptolemy's trigonometric tables were derived in large part from those of his distinguished predecessor cannot be determined. You could just as well do a degree in physics somewhere and then do astronomy in grad school.\n\nHistory of Trigonometry\n\nHe established the angle addition identities, for example sin(a + b), and discovered the sine formula for spherical geometry. Also in the late tenth and early eleventh centuries, the Egyptian astronomer Ibn Yunus performed many careful trigonometric calculations and demonstrated the angle addition formula. The Persian mathematician Omar Khayyam (1048–1131) combined trigonometry and approximation theory to provide methods of solving algebraic equations by geometrical means. It remained, however, for Ptolemy (AD 127–145) to finish fashioning a fully predictive lunar model. In fact, you have probably observed parallax when travelling in a car. First we must initialize the model. Nonetheless, we will use the basic model to derive an approximate quantitative description of the motions of the planets.\n\nReal life applications of trigonometry\n\nNeither the tables of Hipparchus nor those of Ptolemy have survived to the present day, although descriptions by other ancient authors leave little doubt that they once existed. (See Science and Civilization in China: Volume 3, Mathematics and the Sciences of the Heavens and the Earth.) Winnipeg, in Canada, has longitude 97°W, latitude 50°N. Some of Hipparchus' advances in astronomy include the calculation of the mean lunar month, estimates of the sizes and distances of the Sun and Moon, variants on the epicyclic and eccentric models of planetary motion, a catalog of 850 stars (longitude and latitude relative to the ecliptic), and the discovery of the precession of the equinoxes together with a measurement of its rate.\n\nThe History of Trigonometry\n\nThe sine, versine and cosine had been developed in the context of astronomical problems, whereas the tangent and cotangent were developed from the study of shadows of the gnomon. So you would apply to, say, the University of Toronto for astronomy. Our common system of angle measure may stem from this correspondence. Ancient people observed these changes and noticed a very important fact: while constellations do move around in the sky, they never change their size or shape. At the time there were seven recognized planets: Mercury, Venus, Mars, Jupiter, Saturn, the Moon, and the Sun. Kennedy points out that, while it was possible in pre-Islamic mathematics to compute the magnitudes of a spherical figure in principle, by use of the table of chords and Menelaus' theorem, the application of the theorem to spherical problems was very difficult in practice. In addition to the periods of the planets, ancient astronomers observed two other gross features of their motions: the lengths and durations of their retrograde arcs.\n\nHistory of Trigonometry Outline\n\nFor instance, the technique of triangulation is used in astronomy to measure the distance to nearby stars, in geography to measure distances between landmarks, and in satellite navigation systems. But the trigonometrical version is different: in the modern model the Earth moves in a circle, bringing the vertex of the angle with it; in the ancient model the Earth and the vertex remain stationary. The three main trigonometric functions are sine, cosine, and tangent. It is not known just when the systematic use of the 360° circle came into mathematics, but it seems to be due largely to Hipparchus in connection with his table of chords. The horizontal distance from the centre of the astrolabe to the edge of the square was marked with twelve equal divisions." ]
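To make the trigonometric parallax idea mentioned above concrete: a star's distance in parsecs is simply the reciprocal of its parallax angle in arcseconds. A minimal sketch (the Proxima Centauri figure is a well-known approximate value, not taken from the text; function names are ours):

```python
import math

def parallax_distance_parsecs(parallax_arcsec: float) -> float:
    """Distance from annual parallax: d [parsec] = 1 / p [arcsec]."""
    if parallax_arcsec <= 0:
        raise ValueError("parallax must be positive")
    return 1.0 / parallax_arcsec

def triangulated_distance(baseline: float, shift_angle_rad: float) -> float:
    """Generic triangulation: distance = baseline / tan(observed angular shift)."""
    return baseline / math.tan(shift_angle_rad)

# Proxima Centauri's measured parallax is about 0.768 arcseconds:
print(parallax_distance_parsecs(0.768))  # ≈ 1.30 parsecs
```

The same right-triangle relation underlies surveying and satellite navigation: a known baseline plus a measured angle yields a distance.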
https://zhtluo.com/misc/a-tale-of-obfuscation-punctured-programs-and-leader-election.html
[ "# A Tale of Obfuscation, Punctured Programs and Leader Election\n\nNov 6, 2022\n\nCredit: Boneh, Dan, et al. \"Single secret leader election.\" Proceedings of the 2nd ACM Conference on Advances in Financial Technologies. 2020.\n\nIt is claimed that indistinguishability obfuscation (iO) is the panacea for all cryptographic problems. While I do not know if that is the case, I have recently been studying Single Secret Leader Election (SSLE) in blockchain projects, and the paper I read gives a beautiful construction via iO that I will introduce below.\n\n## Oracle and Obfuscation\n\nObfuscation is an interesting topic that has many practical applications. Meanwhile, many theoretical works make use of hypothetical oracles, i.e. black-box machines that take an input and give the corresponding output. Sometimes the reason we use an oracle is that we want to hide some secret inside the oracle's black box, so that an adversary interacting with it cannot learn the secret. Consider this hypothetical encryption scheme that encrypts a message $$m$$ with randomness $$r$$, using a symmetric key $$k$$:\n\n\\begin{align} \\text{Enc}_k(m,r)&= (\\text{PRG}(r), \\text{PRF}(k,\\text{PRG}(r))\\oplus m) \\text{,} \\\\ \\text{Dec}_k(p,c)&= \\text{PRF}(k,p)\\oplus c \\text{.} \\end{align}\n\nNote: PRG is a pseudorandom generator with an expansion factor of 2; PRF is a pseudorandom function family.\n\nWe observe that anyone using $$k$$ for encryption can also use $$k$$ for decryption. But what if we want others to be able to encrypt messages with $$k$$ for us to read, yet be unable to decrypt other people's messages to us under the same $$k$$?\n\nA trivial idea is to give everyone an encryption oracle $$\\text{Enc}_k(m,r)$$ without telling them anything about our key $$k$$. They can use the oracle to encrypt messages but they cannot use the oracle to decrypt them. This is sound on paper but not practical because black-box oracles do not exist in real life. 
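The scheme above is easy to play with. Here is a minimal Python sketch, instantiating the PRF with HMAC-SHA256 and the PRG with SHA-512 (illustrative stand-ins, not choices made in the post):

```python
import hashlib
import hmac
import os

def prg(r: bytes) -> bytes:
    """Toy length-doubling PRG: a 32-byte seed to a 64-byte string."""
    return hashlib.sha512(r).digest()

def prf(k: bytes, x: bytes) -> bytes:
    """PRF instantiated with HMAC-SHA256 (32-byte output)."""
    return hmac.new(k, x, hashlib.sha256).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def enc(k: bytes, m: bytes, r: bytes) -> tuple:
    """Enc_k(m, r) = (PRG(r), PRF(k, PRG(r)) xor m), for 32-byte messages."""
    p = prg(r)
    return p, xor(prf(k, p), m)

def dec(k: bytes, p: bytes, c: bytes) -> bytes:
    """Dec_k(p, c) = PRF(k, p) xor c."""
    return xor(prf(k, p), c)

key, rand = os.urandom(32), os.urandom(32)
msg = b"attack at dawn".ljust(32)
p, c = enc(key, msg, rand)
assert dec(key, p, c) == msg
```

As the post notes, whoever holds `key` can both encrypt and decrypt; the rest of the post is about handing out the encryption direction only.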
However, an interesting observation is that if we can obfuscate the program of an oracle so that it leaks no information, we can then give the obfuscated program to anyone. They can run the program themselves, thus avoiding the need to implement an oracle, but will not be able to extract any useful information from the obfuscated program.\n\nAs a warm-up exercise, consider this hypothetical obfuscation scheme on a circuit (i.e. a program with a fixed input length that terminates) $$\\lambda$$ with $$n$$ bits of input:\n\n• Precompute $$\\lambda$$ on all $$2^n$$ inputs. Then build this circuit:\n• If the input is $$0^n$$, output $$\\lambda(0^n)$$.\n• If the input is $$0^{n-1}|1$$, output $$\\lambda(0^{n-1}|1)$$.\n• ...\n• If the input is $$1^{n-1}|0$$, output $$\\lambda(1^{n-1}|0)$$.\n• If the input is $$1^n$$, output $$\\lambda(1^n)$$.\n\nThis obfuscation works because the obfuscated circuit gives no information other than the output of the original circuit, which anyone who queries the oracle can learn anyway. However, it is also very inefficient because it uses an exponential number of gates (i.e. runs in exponential time). In cryptography it is generally bad to give our adversary exponential power, so we want to restrict our obfuscation to efficient ones. Is there an efficient obfuscation that hides all information?\n\n### The Impossibility Result\n\nUnfortunately, some theory guys constructed an unobfuscatable function: i.e. knowing any efficient implementation of it leaks some information that cannot be learned by querying an oracle. To briefly summarize, assume that there exists an efficient obfuscator $$\\mathcal{O}$$ that transforms a circuit $$A$$ into another obfuscated circuit with size at most $$\\text{poly}(|A|)$$. Define\n\n$[f]= \\begin{cases} 1 & \\text{if statement } f \\text{ is true; } \\\\ 0 & \\text{otherwise. 
} \\ \end{cases}$\n\nConsider these three circuits:\n\n\\begin{align} C_a(x)&=[x=a] \\text{,} \\\\ D_a(A)&=[A(a)=1] \\text{,} \\\\ Z(x)&=0 \\text{.} \\end{align}\n\nAssume we sample $$a$$ uniformly at random from $$\\{0,1\\}^n$$ for some $$n$$. Then an adversary $$A^{X,D_a}$$ with oracle access to $$(X, D_a)$$ cannot distinguish between the oracles $$(C_a, D_a)$$ and $$(Z, D_a)$$ except with negligible probability.\n\nHowever, if the adversary has access to the obfuscated circuits $$(X', D'_a) = (\\mathcal{O}(X), \\mathcal{O}(D_a))$$, then they can run $$D'_a(X')$$ themselves and distinguish between $$(C_a, D_a)$$ and $$(Z, D_a)$$ with overwhelming probability.\n\nTherefore, the circuit tuple $$(C_a, D_a)$$ with input sizes $$(n, \\text{poly}(|C_a|))$$ is unobfuscatable: knowing any efficient implementation of it leaks some information that cannot be learned by querying an oracle.\n\n### Indistinguishability Obfuscation\n\nFortunately, the impossibility result only proves that we cannot hide all information about the circuit. On the other hand, in our encryption scheme we only want to hide some information, namely our key $$k$$: we do not really care if our adversary knows about the encryption scheme itself (it is common knowledge anyway). Therefore, we now present the definition of an obfuscation scheme that actually exists and can do what we want: indistinguishability obfuscation (iO):\n\n• Completeness: For any boolean circuit $$C$$ of input length $$n$$ and input $$x\\in \\{0,1\\}^{n}$$, we have $\\Pr[C'(x)=C(x):C'\\leftarrow {\\mathcal {iO}}(C)]=1 \\text{.}$ This is to say that the obfuscation does not change the output of a circuit.\n• Indistinguishability: For every pair of circuits $$C_{0},C_{1}$$ of the same size $$k$$ that implement the same functionality (i.e. 
has the same output for every input), the distributions $$\\{{\\mathcal {iO}}(C_{0})\\}$$ and $$\\{{\\mathcal {iO}}(C_{1})\\}$$ are computationally indistinguishable.\n\nIt is not very easy to appreciate the definition of indistinguishability; however, it actually implies that the obfuscation leaks the minimum amount of information possible to any efficient adversary, since we can substitute the program with anything that gives the same functionality, and the adversary will not be able to tell.\n\nThis may look a little like Voodoo magic, but we are going to provide a concrete example: we are going to prove that using $$\\mathcal{iO}(\\text{Enc}_k)$$ as the public key is IND-CPA secure in a public-key encryption scheme, with a tool known as a puncturable PRF.\n\n## Puncturable PRF and Punctured Program\n\n### Puncturable PRF\n\nA puncturable PRF is a PRF where any key $$k$$ can be punctured on an input string $$x$$ to get a punctured key $$k'=\\text{PUN}(k,x)$$ such that\n\n1. The puncturing does not affect evaluation of the PRF except at $$x$$, i.e. for all input strings $$y \\neq x$$, $$\\text{PRF}(k, y)=\\text{PRF}(k', y)$$;\n2. The punctured key gives no information about the evaluation at $$x$$, i.e. given only $$k'$$, $$\\text{PRF}(k,x)$$ is computationally indistinguishable from a random string.\n\nIt is very easy to build a puncturable PRF from the standard GGM construction. Recall that the GGM construction uses a length-doubling PRG $$G$$ to construct a PRF as a binary tree of seeds. To puncture, say, the point $$01$$, we need to remove all information on the path from $$k$$ to $$\\text{PRF}(k, 01)$$ while preserving the ability to evaluate all other points. We then notice that we can simply use the intermediate sibling nodes of the path as the punctured key: $$k' = (\\text{PRF}(k, 1), \\text{PRF}(k, 00))$$. 
This finishes the construction.\n\n### Punctured Program\n\nWe are now ready to show that an indistinguishability obfuscated version of our encryption scheme, $$\\mathcal{iO}(\\text{Enc}_k)$$, is IND-CPA secure as a public-key encryption scheme.\n\nTBD. See How to Use Indistinguishability Obfuscation: Deniable Encryption, and More p. 23 - 25." ]
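The GGM tree and its puncturing just described can be sketched directly in Python. SHA-512 plays the length-doubling PRG here (an illustrative choice, not part of the original post), and bit-strings index the tree:

```python
import hashlib

def G(seed: bytes) -> tuple:
    """Length-doubling PRG: derive the two 32-byte child seeds of a node."""
    out = hashlib.sha512(seed).digest()
    return out[:32], out[32:]

def prf(k: bytes, x: str) -> bytes:
    """GGM PRF: walk down the seed tree along the bits of x."""
    s = k
    for bit in x:
        s = G(s)[int(bit)]
    return s

def puncture(k: bytes, x: str) -> dict:
    """Punctured key: the sibling seed at every level of the path to x."""
    pk, s = {}, k
    for i, bit in enumerate(x):
        left, right = G(s)
        if bit == "0":
            pk[x[:i] + "1"] = right  # keep the subtree we are NOT entering
            s = left
        else:
            pk[x[:i] + "0"] = left
            s = right
    return pk

def prf_punctured(pk: dict, y: str) -> bytes:
    """Evaluate at any y != x by re-rooting at the stored sibling seed."""
    for prefix, seed in pk.items():
        if y.startswith(prefix):
            return prf(seed, y[len(prefix):])
    raise ValueError("cannot evaluate at the punctured point")

k = hashlib.sha256(b"root key").digest()
pk = puncture(k, "01")
assert prf_punctured(pk, "00") == prf(k, "00")
assert prf_punctured(pk, "11") == prf(k, "11")
```

For the punctured point `"01"` itself, no stored prefix matches, so the punctured key reveals nothing about that leaf — exactly property 2 above.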
https://docs-tda.giotto.ai/0.5.1/notebooks/topology_time_series.html
[ "# Topology of time series¶\n\nThis notebook explores how giotto-tda can be used to gain insights from time-varying data by using ideas from dynamical systems and persistent homology.\n\nIf you are looking at a static version of this notebook and would like to run its contents, head over to GitHub and download the source.\n\n## From time series to time delay embeddings¶\n\nThe first step in analysing the topology of time series is to construct a time delay embedding or Takens embedding, named after Floris Takens, who pioneered its use in the study of dynamical systems. A time delay embedding can be thought of as sliding a “window” of fixed size over a signal, with each window represented as a point in a (possibly) higher-dimensional space. A simple example is shown in the animation below, where pairs of points in a 1-dimensional signal are mapped to coordinates in a 2-dimensional embedding space.\n\nA 2-dimensional time delay embedding\n\nMore formally, given a time series $$f(t)$$, one can extract a sequence of vectors of the form $$f_i = [f(t_i), f(t_i + \\tau), f(t_i + 2 \\tau), \\ldots, f(t_i + (d-1) \\tau)] \\in \\mathbb{R}^{d}$$, where $$d$$ is the embedding dimension and $$\\tau$$ is the time delay. The quantity $$(d-1)\\tau$$ is known as the “window size” and the difference between $$t_{i+1}$$ and $$t_i$$ is called the stride. 
In other words, the time delay embedding of $$f$$ with parameters $$(d,\\tau)$$ is the function\n\n$\\begin{split}TD_{d,\\tau} f : \\mathbb{R} \\to \\mathbb{R}^{d}\\,, \\qquad t \\to \\begin{bmatrix} f(t) \\\\ f(t + \\tau) \\\\ f(t + 2\\tau) \\\\ \\vdots \\\\ f(t + (d-1)\\tau) \\end{bmatrix}\\end{split}$\n\nand the main idea we will explore in this notebook is that if $$f$$ has a non-trivial recurrent structure, then the image of $$TD_{d,\\tau}f$$ will have non-trivial topology for appropriate choices of $$(d, \\tau)$$.\n\n## A periodic example¶\n\nAs a warm-up, recall that a function is periodic with period $$T > 0$$ if $$f(t + T) = f(t)$$ for all $$t \\in \\mathbb{R}$$. For example, consider the function $$f(t) = \\cos(5 t)$$ which can be visualised as follows:\n\nimport numpy as np\nimport plotly.graph_objects as go\n\nx_periodic = np.linspace(0, 10, 1000)\ny_periodic = np.cos(5 * x_periodic)\n\nfig = go.Figure(data=go.Scatter(x=x_periodic, y=y_periodic))\nfig.update_layout(xaxis_title=\"Timestamp\", yaxis_title=\"Amplitude\")\nfig.show()" ]
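The sliding-window construction described above is itself only a few lines of NumPy. giotto-tda ships its own transformer for this; the helper below is just an illustrative re-implementation of the windowing, not the library's API:

```python
import numpy as np

def takens_embedding(signal, dimension, time_delay, stride=1):
    """Map a 1-d signal to points [f(t), f(t + tau), ..., f(t + (d-1) tau)]."""
    window = (dimension - 1) * time_delay
    starts = np.arange(0, len(signal) - window, stride)
    # Each row is one window, sampled every `time_delay` steps.
    return np.stack([signal[s : s + window + 1 : time_delay] for s in starts])

x = np.linspace(0, 10, 1000)
point_cloud = takens_embedding(np.cos(5 * x), dimension=2, time_delay=10)
print(point_cloud.shape)  # (990, 2): one 2-d point per window position
```

For the periodic signal $$\cos(5t)$$, the resulting 2-d point cloud traces out a closed loop — the non-trivial topology (a 1-dimensional hole) that persistent homology will detect.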
https://kundoc.com/pdf-integrated-guidance-and-control-strategy-for-homing-of-unmanned-underwater-vehic.html
[ "# Integrated guidance and control strategy for homing of unmanned underwater vehicles\n\nAvailable online at www.sciencedirect.com\n\nJournal of the Franklin Institute 356 (2019) 3831–3848 www.elsevier.com/locate/jfranklin\n\nZheping Yan, Man Wang∗, Jian Xu — College of Automation, Harbin Engineering University, Harbin, Heilongjiang 150001, China\n\nReceived 2 May 2018; received in revised form 14 November 2018; accepted 29 November 2018. Available online 13 March 2019.\n\nAbstract\n\nThis paper presents a novel integrated guidance and control strategy for homing of unmanned underwater vehicles (UUVs) in 5 degrees of freedom (5-DOF), where the vehicles are assumed to be underactuated at high speed and required to move towards the final docking path. During the initial homing stage, the guidance system is first designed by a geometrical analysis method to generate a feasible reference trajectory. Then, in the backstepping framework, the proposed trajectory tracking controller drives all the tracking errors in the closed-loop system to a small neighbourhood of zero, meaning that the vehicle's dynamics are consistent with the reference trajectory derived in the previous step. To demonstrate the effectiveness of the proposed guidance and control strategy, a complete stability analysis using Lyapunov's method is given in the paper, and simulation results for all initial conditions are presented and discussed. © 2019 Published by Elsevier Ltd on behalf of The Franklin Institute.\n\n1. 
Introduction\n\nThe topics of navigation, guidance and control of unmanned underwater vehicles (UUVs) have received considerable attention in the past decades due to theoretical challenges and significant applications in the field of ocean engineering. Therefore, a large number of control strategies have been developed to achieve practical tasks, for example, homing and docking [1–3], path following [4–6], and trajectory tracking [7–11]. Indeed, most of them rely on the assumptions that the vehicles are fully actuated or that the dynamics are 3-degree-of-freedom (DOF) horizontal models. (∗ Corresponding author. E-mail address: [email protected] (M. Wang).) In addition, in these existing works, how to generate a satisfactory reference trajectory is not described in detail. In this paper, we are interested in the problem usually denominated as homing in the literature, where the vehicle can autonomously approach the vicinity of the base station but without the aforementioned assumptions. During the homing and docking stages, the main challenge is how to design a guidance and control scheme to drive the UUV to approach the base along a particular path, with an appointed direction of arrival. The earlier works in the literature on this problem can be found in [12,13], which presented homing and docking controllers for an underactuated UUV based on an electromagnetic homing system and the ultra-short baseline (USBL) sensor, respectively. More recently, a time-differences-of-arrival-based homing strategy and a two-step control approach were proposed in [1,2], respectively. However, the main limitation of these methods is that the performance of the designed tracking controller may deteriorate due to possible sensor faults.
Fortunately, this drawback is well addressed in [14,15] by providing sensor fault detection and higher-order sliding mode observers for the system. On the other hand, the concept of a pure pursuit guidance algorithm has been reported in [16–18]. It should be pointed out that all the aforementioned references share a common strategy: the UUV's dynamics should be consistent with the reference guidance trajectory, which includes the time evolutions of the position, attitude, and velocities. Therefore, in practical applications, how to generate a suitable reference trajectory that does not rely on the sensors and vehicle models has become a significant challenge. To the best of the authors' knowledge, this is the first attempt to present a guidance system for homing tracking of an underactuated UUV by using a geometrical analysis method, without relying on the vehicle kinematics and dynamics. In addition, most UUVs are in fact underactuated, especially when they execute maneuvers. The available control inputs can be provided only by stern propellers, steering and diving rudders [7,20]. As a result, the absence of actuation in the sway and heave directions poses significant challenges on the control system design side in path following and trajectory tracking scenarios. This also motivates the authors' research investigation and attracts considerable attention from the control community. Among these interesting works, backstepping control has become one of the most commonly used methods to deal with highly nonlinear systems. For example, a planar backstepping controller has been developed for underactuated UUVs in [4,8]. To extend the applications to three-dimensional space, a one-step-ahead backstepping method and an observer backstepping technique have been investigated, respectively. 
However, these results assume that the reference trajectory satisfies a persistent excitation condition, which means that the UUV cannot track a straight-line trajectory, since in that case the reference yaw velocity converges to zero. Therefore, this significantly restricts the class of reference trajectories that can be used in practical applications, which also motivates our current research investigation. Furthermore, to provide robustness and adaptation of the vehicles, many works on the trajectory tracking control of underactuated UUVs with different techniques, such as sliding mode [24–26], neural networks [7,27,28], robust and adaptive control [29–31], and model-based control [23,32,33], are reported in the recent literature. In , a second-order sliding mode controller is proposed, which is effective in compensating for modelling uncertainties and rejecting unknown disturbances. In , a neuro-adaptive tracking controller is presented by combining input-output feedback linearization, neural network approximation and adaptive techniques. However, the parametric uncertainties and environmental disturbances in these papers are assumed to be constant and bounded. To enhance the applicability of control strategies in real cases, the nonlinear stochastic system for
The main contributions of the present work are concluded as follows: (1) in the initial homing stage, the reference guidance trajectory is derived by using geometrical method, which provides the commands of the position, attitude and velocity for the following works; (2) in the trajectory tracking, the UUV moving in three-dimensional space at high speed is assumed to be underactuated. The vehicle only has three available control inputs provided by stern propellers, steering and diving rudders but five degrees of freedom to be controlled. How to achieve a novel trajectory tracking controller for this underactuated UUV is presented with the aid of backstepping. The main difference between our approach and others is in the fact that in the proposed algorithm, the dynamical couplings and underactuated characteristic of the UUV are directly taken into account. In addition, a complete stability analysis based Lyapunov’s method is given. All the tracking errors in the closed-loop system are guaranteed to converge to a small neighbourhood of zero. In contrast to previous work, this paper considers more nonlinear and coupled behaviors of the vehicle such that the guidance and control laws are suitable for the three-dimensional case. As such, no further stability issues need to be addressed, and simulation results for all initial conditions are presented to show the satisfactory performance of the proposed controller. The rest of the paper is organized as follows: Section 2 describes the model of an underactuated UUV and the homing problem at hand. Section 3 presents the design of the guidance laws in detail, while in Section 4, the controller design and stability analysis for the trajectory tracking are presented. Simulation results and some concluding remarks are given in Sections 5 and 6, respectively.\n\n2. Problem statement Consider an unmanned underwater vehicle moving in three-dimensional space, which is expected to autonomously return and dock to its base station. 
As shown in Fig. 1, there exist two transponders equally spaced from the final docking position $$D(x_D, y_D, z_D)$$. One of the transponders is called the left transponder, whereas the other is the right transponder; their inertial positions are denoted as $$L(x_{L0}, y_{L0}, z_{L0})$$ and $$R(x_{R0}, y_{R0}, z_{R0})$$, respectively. To facilitate the docking, the transponders and the base station in the mission scenario are in the horizontal plane, i.e. $$z_D = z_{L0} = z_{R0}$$. All the homing and docking tasks are divided into two stages, similarly to the previous work. However, the task here is carried out in three-dimensional space and the vehicle is assumed to be underactuated in 5-DOF. In the initial homing stage, the vehicle is expected to drive at a certain cruise speed toward the reference path that is orthogonal to the straight line defined by the two transponders. The main goals are to shorten the distance to the docking position and to set up the final docking approach in the horizontal plane. Then, in the final docking approach, the vehicle at a much lower speed is assumed to be fully actuated, and the model uncertainties and environmental disturbances are essential to consider. The final docking is further work to be addressed; in this paper, how to design an integrated guidance and control strategy for the initial homing is the primary task.\n\nFig. 1. The homing and docking scenario of an UUV in three-dimensional space.\n\nFig. 2. Motion variables for an underactuated UUV.\n\nAs shown in Fig. 2, let $$\\eta = [x, y, z, \\theta, \\psi]^T$$ be the 5-DOF position and attitude of the UUV in the earth-fixed frame, and let $$v = [u, \\upsilon, w, q, r]^T$$ be the corresponding linear and angular velocities in the body-fixed frame. 
The dynamical model of an underactuated UUV in 5-DOF can be written as\n\n$\\begin{cases} \\dot{x} = u\\cos\\theta\\cos\\psi - \\upsilon\\sin\\psi + w\\sin\\theta\\cos\\psi \\\\ \\dot{y} = u\\cos\\theta\\sin\\psi + \\upsilon\\cos\\psi + w\\sin\\theta\\sin\\psi \\\\ \\dot{z} = -u\\sin\\theta + w\\cos\\theta \\\\ \\dot{\\theta} = q \\\\ \\dot{\\psi} = r/\\cos\\theta \\end{cases} \\qquad (1)$\n\n$\\begin{cases} \\dot{u} = \\frac{m_{22}}{m_{11}}\\upsilon r - \\frac{m_{33}}{m_{11}} w q - \\frac{d_{11}}{m_{11}} u + \\frac{\\tau_u}{m_{11}} \\\\ \\dot{\\upsilon} = -\\frac{m_{11}}{m_{22}} u r - \\frac{d_{22}}{m_{22}} \\upsilon \\\\ \\dot{w} = \\frac{m_{11}}{m_{33}} u q - \\frac{d_{33}}{m_{33}} w \\\\ \\dot{q} = \\frac{m_{33} - m_{11}}{m_{55}} u w - \\frac{d_{55}}{m_{55}} q - \\frac{\\rho g \\nabla\\, \\overline{GM_L} \\sin\\theta}{m_{55}} + \\frac{\\tau_q}{m_{55}} \\\\ \\dot{r} = \\frac{m_{11} - m_{22}}{m_{66}} u \\upsilon - \\frac{d_{66}}{m_{66}} r + \\frac{\\tau_r}{m_{66}} \\end{cases} \\qquad (2)$\n\nwhere $$m_{ii}$$ and $$d_{ii}$$, $$i = 1, 2, \\ldots, 6$$, are the mass and inertia parameters and the damping coefficients, respectively; $$\\tau_u$$, $$\\tau_q$$ and $$\\tau_r$$ are the control inputs, and the other symbols can be found in the literature.\n\nAssumption 1. The UUV is neutrally buoyant and symmetric about the xz-plane. The roll motion is passively stabilized through internal/external roll actuators or by gravity, and thus it can be neglected.\n\nRemark 1. Assumption 1 is a common assumption in the maneuvering control of slender-body UUVs. It can be found in many works, e.g. [7,20,27].\n\nAssumption 2. The pitch angle of the vehicle is assumed to satisfy $$|\\theta(t)| < \\pi/2$$, $$\\forall t \\geq 0$$.\n\nRemark 2. Notice that the last equation in Eq. (1) is not defined when the pitch angle is equal to $$\\pm\\pi/2$$. In practice, this is unlikely to happen due to the metacentric restoring forces. One way to avoid the singularity is to use a four-parameter description known as the quaternion. In this paper, the Euler angles are used because of their physical representation and computational efficiency.\n\nRemark 3. Owing to the absence of available actuators in the sway and heave directions, the second and third equations in Eq. (2) are not directly controlled. Thus, the UUV is an underactuated dynamical system. 3. 
Guidance system for trajectory generation\n\nIn the simplest form, open-loop guidance systems for UUVs are used to generate a reference trajectory for time-varying trajectory tracking or, alternatively, a path for time-invariant path following. This paper focuses on the initial homing problem, in which the vehicle is expected to move at a certain cruise speed towards the final docking path. Therefore, the first problem to be addressed is how to generate a feasible trajectory. As shown in Fig. 1, the straight line AB generated by the guidance system is the reference trajectory for the initial homing of an underactuated UUV. The following presents the design process of the guidance system using the geometrical method in detail.\n\n3.1. The construction of the straight line AB\n\nIn order to derive the guidance laws notice that, when the vehicle is moving along the final docking trajectory that is orthogonal to the straight line defined by the two transponders, the distance between the vehicle and each transponder is identical. In addition, the docking position and the two transponders are in the horizontal plane, i.e. $$z_D = z_{L0} = z_{R0}$$. Then, the function of the straight line AB can be derived from the following equations:\n\n$\\begin{cases} (x - x_{L0})^2 + (y - y_{L0})^2 + (z - z_{L0})^2 = (x - x_{R0})^2 + (y - y_{R0})^2 + (z - z_{R0})^2 \\\\ z = z_{L0} = z_{R0} \\end{cases} \\qquad (3)$\n\nIf $$y_{L0} \\neq y_{R0}$$, the function of the straight line AB is\n\n$y = kx + b_0 \\qquad (4)$\n\nwhere\n\n$k = -\\frac{x_{R0} - x_{L0}}{y_{R0} - y_{L0}}, \\qquad b_0 = \\frac{x_{R0}^2 + y_{R0}^2 - x_{L0}^2 - y_{L0}^2}{2(y_{R0} - y_{L0})} \\qquad (5)$\n\nIf $$y_{L0} = y_{R0}$$, one obtains\n\n$x = \\frac{x_{L0} + x_{R0}}{2} \\qquad (6)$\n\nRemark 4. It should be pointed out that the reference trajectory for initial homing is determined by the positions of the two transponders fixed in the mission scenario. Indeed, this idea is adopted here for the first time to solve the homing and tracking problems. 
In other existing literature, the reference trajectory is assumed to be known or is generated by a virtual vehicle that relies on the vehicle's kinematics and dynamics.

3.2. The definitions of the points A and B

In the following, the inertial positions of the points A and B are derived. Firstly, the initial position of the UUV is denoted by the point \(C(x_0, y_0, z_0)\), and A is the projection of C onto the straight line AB. The expression of A is derived for three scenarios.

If \(y_{L0} = y_{R0}\), the position of A is

\[
x_A = \frac{x_{L0} + x_{R0}}{2}, \qquad y_A = y_0, \qquad z_A = z_D
\tag{7}
\]

If \(k = 0\), that is \(x_{L0} = x_{R0}\), the result is

\[
x_A = x_0, \qquad y_A = \frac{y_{L0} + y_{R0}}{2}, \qquad z_A = z_D
\tag{8}
\]

Otherwise, the position of A is obtained from the system

\[
\begin{cases}
y = -\dfrac{x_{R0} - x_{L0}}{y_{R0} - y_{L0}}\,x + \dfrac{x_{R0}^2 + y_{R0}^2 - x_{L0}^2 - y_{L0}^2}{2(y_{R0} - y_{L0})}\\[2mm]
y = \dfrac{y_{R0} - y_{L0}}{x_{R0} - x_{L0}}\,(x - x_0) + y_0
\end{cases}
\tag{9}
\]

Based on Eq. (9), one obtains

\[
x_A = -\frac{k(b_0 - y_0) - x_0}{k^2 + 1}, \qquad
y_A = -\frac{1}{k}(x_A - x_0) + y_0, \qquad
z_A = z_D
\tag{10}
\]

Next, the position of the point B, located at a prescribed distance R from the docking base, is derived. If the straight line AB is expressed by Eq. (6), the position of B is

\[
x_B = \frac{x_{L0} + x_{R0}}{2}, \qquad y_B = y_{L0} - R \ \text{or} \ y_{L0} + R, \qquad z_B = z_D
\tag{11}
\]

Otherwise, the position of B satisfies

\[
\begin{cases}
y = kx + b_0\\
(x - x_D)^2 + (y - y_D)^2 = R^2
\end{cases}
\tag{12}
\]

which reduces to the quadratic equation

\[
ax^2 + bx + c = 0
\tag{13}
\]

where

\[
a = k^2 + 1, \qquad b = 2(kb_0 - ky_D - x_D), \qquad c = x_D^2 + (b_0 - y_D)^2 - R^2
\tag{14}
\]

It then yields

\[
x_B = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}, \qquad y_B = kx_B + b_0
\tag{15}
\]

Of the two roots, the one for which B is closer to the point A is selected.
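The case analysis of Eqs. (7)–(15) can be sketched as follows (illustrative Python, not from the paper; `zD` is the common transponder depth):

```python
import math

def point_A(x0, y0, xL, yL, xR, yR, zD):
    """Projection of the start position C = (x0, y0) onto line AB, Eqs. (7)-(10)."""
    if yL == yR:                      # Eq. (7): AB is the vertical line x = (xL+xR)/2
        return ((xL + xR) / 2.0, y0, zD)
    if xL == xR:                      # Eq. (8): k = 0, AB is the line y = (yL+yR)/2
        return (x0, (yL + yR) / 2.0, zD)
    k = -(xR - xL) / (yR - yL)        # Eq. (5)
    b0 = (xR**2 + yR**2 - xL**2 - yL**2) / (2.0 * (yR - yL))
    xA = -(k * (b0 - y0) - x0) / (k**2 + 1.0)   # Eq. (10)
    yA = -(xA - x0) / k + y0
    return (xA, yA, zD)

def point_B(k, b0, xD, yD, zD, R, A):
    """Intersection of AB with the circle of radius R about D, Eqs. (13)-(15).
    Of the two roots, the one closer to A is kept."""
    a = k**2 + 1.0
    b = 2.0 * (k * b0 - k * yD - xD)
    c = xD**2 + (b0 - yD)**2 - R**2
    disc = math.sqrt(b**2 - 4.0 * a * c)
    candidates = [(-b + s * disc) / (2.0 * a) for s in (1.0, -1.0)]
    points = [(x, k * x + b0, zD) for x in candidates]
    return min(points, key=lambda p: (p[0] - A[0])**2 + (p[1] - A[1])**2)
```

With the scenario-3 transponders of Table 1 the line AB is y = x, and the projection of the start point lands halfway between its coordinates, as the formulas predict.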
3.3. The generation of a reference trajectory

In the initial homing stage, the underactuated UUV is expected to move at a certain cruise speed along the reference trajectory. Let the desired speed be denoted by \(V_d\); the line segment AB has been derived in the above section, with A the starting point of the reference trajectory and B its end point. The reference guidance trajectory is then generated as

\[
\begin{cases}
x_r = x_A + V_d t\cos\varphi\\
y_r = y_A + V_d t\sin\varphi\\
z_r = z_D\\
\theta_r = 0\\
\psi_r = \varphi
\end{cases}
\tag{16}
\]

where

\[
\varphi =
\begin{cases}
\pi/2, & \text{if } y_{L0} = y_{R0}\\
\arctan(k), & \text{otherwise}
\end{cases}
\]

Notice that the reference trajectory generated by the guidance system is a time-varying straight line traversed at a constant speed. The next section presents a novel trajectory tracking algorithm for an underactuated UUV moving in three-dimensional space.

4. Controller design and stability analysis

4.1. Trajectory tracking controller design

This section details the trajectory tracking control design for an underactuated UUV in three-dimensional space. Due to the underactuated property of the UUV, the sway and heave velocities cannot be directly controlled by the available control inputs. Therefore, using the coupled dynamics, the angular velocities of the vehicle are designed to ensure that the sway and heave velocities converge to the desired values. Finally, the control force and torques are determined to achieve velocity tracking, which in turn guarantees that all the tracking errors converge to a small neighbourhood of zero.
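A hypothetical implementation of the trajectory generator in Eq. (16) is sketched below (not from the paper; the flag `vertical` encodes the \(y_{L0} = y_{R0}\) case, where \(\varphi = \pi/2\)):

```python
import math

def reference_trajectory(t, A, zD, Vd, k=None, vertical=False):
    """Reference guidance trajectory of Eq. (16), starting at point A.

    Returns (x_r, y_r, z_r, theta_r, psi_r) at time t.
    """
    phi = math.pi / 2.0 if vertical else math.atan(k)
    xr = A[0] + Vd * t * math.cos(phi)
    yr = A[1] + Vd * t * math.sin(phi)
    return (xr, yr, zD, 0.0, phi)   # theta_r = 0, psi_r = phi, constant depth zD
```

The output is a straight line traversed at constant speed \(V_d\) with constant reference attitude, which is what the tracking controller of Section 4 is designed to follow.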
To derive the control laws, first define the position and attitude error variables as

\[
\begin{bmatrix} x_e\\ y_e\\ z_e\\ \theta_e\\ \psi_e \end{bmatrix}
=
\begin{bmatrix}
\cos\theta\cos\psi & \cos\theta\sin\psi & -\sin\theta & 0 & 0\\
-\sin\psi & \cos\psi & 0 & 0 & 0\\
\sin\theta\cos\psi & \sin\theta\sin\psi & \cos\theta & 0 & 0\\
0 & 0 & 0 & 1 & 0\\
0 & 0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix} x - x_r\\ y - y_r\\ z - z_r\\ \theta - \theta_r\\ \psi - \psi_r \end{bmatrix}
\tag{17}
\]

Taking the time derivative of Eq. (17) yields

\[
\begin{cases}
\dot{x}_e = u - qz_e + ry_e - \dot{x}_r\cos\theta\cos\psi - \dot{y}_r\cos\theta\sin\psi + \dot{z}_r\sin\theta\\
\dot{y}_e = \upsilon - r(x_e + z_e\tan\theta) + \dot{x}_r\sin\psi - \dot{y}_r\cos\psi\\
\dot{z}_e = w + qx_e + ry_e\tan\theta - \dot{x}_r\sin\theta\cos\psi - \dot{y}_r\sin\theta\sin\psi - \dot{z}_r\cos\theta\\
\dot{\theta}_e = q - \dot{\theta}_r\\
\dot{\psi}_e = r/\cos\theta - \dot{\psi}_r
\end{cases}
\tag{18}
\]

Step 1. Consider the Lyapunov function candidate

\[
V_1 = \frac{1}{2}\left(x_e^2 + y_e^2 + z_e^2\right)
\tag{19}
\]

The time derivative of Eq. (19) along Eq. (18) is

\[
\begin{aligned}
\dot{V}_1 ={}& x_e\left(u - \dot{x}_r\cos\theta\cos\psi - \dot{y}_r\cos\theta\sin\psi + \dot{z}_r\sin\theta\right)\\
&+ y_e\left(\upsilon + \dot{x}_r\sin\psi - \dot{y}_r\cos\psi\right)\\
&+ z_e\left(w - \dot{x}_r\sin\theta\cos\psi - \dot{y}_r\sin\theta\sin\psi - \dot{z}_r\cos\theta\right)
\end{aligned}
\tag{20}
\]

Following the backstepping procedure, the traditional desired vehicle velocities are defined by

\[
\begin{cases}
u_d = \dot{x}_r\cos\theta\cos\psi + \dot{y}_r\cos\theta\sin\psi - \dot{z}_r\sin\theta - k_1x_e\\
\upsilon_d = -\dot{x}_r\sin\psi + \dot{y}_r\cos\psi - k_2y_e\\
w_d = \dot{x}_r\sin\theta\cos\psi + \dot{y}_r\sin\theta\sin\psi + \dot{z}_r\cos\theta - k_3z_e
\end{cases}
\tag{21}
\]

where \(k_i\), \(i = 1, 2, 3\), are positive constants. For the guidance trajectory generated in Eq. (16), one obtains

\[
\begin{cases}
u_d = V_d\cos\theta\cos\psi_e - k_1x_e\\
\upsilon_d = -V_d\sin\psi_e - k_2y_e\\
w_d = V_d\sin\theta\cos\psi_e - k_3z_e
\end{cases}
\tag{22}
\]

Clearly, if the linear velocities u, υ and w of the UUV coincide with \(u_d\), \(\upsilon_d\) and \(w_d\) respectively, then \(\dot{V}_1 = -k_1x_e^2 - k_2y_e^2 - k_3z_e^2 \leq 0\) and the position errors \(x_e\), \(y_e\) and \(z_e\) converge to zero.
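The rotation in Eq. (17) maps inertial position errors into a frame aligned with the vehicle. A small sketch of this transformation (assumed, not from the paper):

```python
import math

def tracking_errors(pose, ref):
    """Position and attitude errors of Eq. (17).

    pose and ref are (x, y, z, theta, psi); the position error is the inertial
    error rotated by the 3x3 block of the matrix in Eq. (17).
    """
    x, y, z, th, ps = pose
    xr, yr, zr, thr, psr = ref
    dx, dy, dz = x - xr, y - yr, z - zr
    xe = math.cos(th) * math.cos(ps) * dx + math.cos(th) * math.sin(ps) * dy - math.sin(th) * dz
    ye = -math.sin(ps) * dx + math.cos(ps) * dy
    ze = math.sin(th) * math.cos(ps) * dx + math.sin(th) * math.sin(ps) * dy + math.cos(th) * dz
    return (xe, ye, ze, th - thr, ps - psr)
```

With these errors and the current attitude, the desired velocity commands of Eq. (22) are one line each, e.g. `ud = Vd * math.cos(th) * math.cos(psie) - k1 * xe`.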
However, due to the underactuated property of the vehicle, the sway and heave velocities cannot be directly controlled in practice, so the above design cannot by itself achieve trajectory tracking for an underactuated UUV in three-dimensional space. Using the coupled dynamics, two virtual velocity variables α and β are therefore introduced; combined with the control of the yaw angle ψ and pitch angle θ, global tracking can be guaranteed. Firstly, the new velocity variables are defined by

\[
\alpha = V_d\sin\psi_e, \qquad \beta = V_d\sin\theta
\tag{23}
\]

Then the desired velocity commands are chosen as

\[
\begin{cases}
u_d = V_d\cos\theta\cos\psi_e - k_1x_e\\
\alpha_d = -\upsilon - k_2y_e\\
\beta_d = w - V_d\sin\theta(\cos\psi_e - 1) + k_3z_e
\end{cases}
\tag{24}
\]

To avoid the controller complexity induced in traditional backstepping by differentiating the desired velocity commands, the virtual control variables \(u_d\), \(\alpha_d\) and \(\beta_d\) are passed through the following first-order filters:

\[
\begin{cases}
k_u\dot{u}_c + u_c = u_d, & u_c(0) = u_d(0)\\
k_\alpha\dot{\alpha}_c + \alpha_c = \alpha_d, & \alpha_c(0) = \alpha_d(0)\\
k_\beta\dot{\beta}_c + \beta_c = \beta_d, & \beta_c(0) = \beta_d(0)
\end{cases}
\tag{25}
\]

where \(k_i\), \(i = u, \alpha, \beta\), are positive design parameters. The new velocity errors are introduced as

\[
\begin{cases}
u_e = u - u_c, & e_u = u_c - u_d\\
\alpha_e = \alpha - \alpha_c, & e_\alpha = \alpha_c - \alpha_d\\
\beta_e = \beta - \beta_c, & e_\beta = \beta_c - \beta_d
\end{cases}
\tag{26}
\]

Taking the time derivative of \(V_1\) in Eq. (19) now yields

\[
\dot{V}_1 = -k_1x_e^2 - k_2y_e^2 - k_3z_e^2 + x_e(u_e + e_u) + y_e(\alpha_e + e_\alpha) - z_e(\beta_e + e_\beta)
\tag{27}
\]

Remark 5. Notice that the sway and heave velocities of the UUV are not directly controlled. However, through the coupled dynamics they can be regulated by the steering and diving rudders. In particular, the virtual velocity variables α and β are defined by Eq. (23) so that the position tracking errors \(y_e\) and \(z_e\) are rendered uniformly ultimately bounded (UUB) by controlling the orientation angles θ and ψ. In addition, dynamic surface control (DSC) is introduced so that the virtual variables can be computed by first-order filters.
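The first-order filters of Eq. (25) are easy to realize numerically with a forward-Euler step. In this illustrative sketch (not from the paper) the filter constant 0.05 matches the values used later in Section 5, while the step size is an assumption:

```python
def filter_step(zc, zd, k_f, dt):
    """One forward-Euler step of the first-order filter k_f * zc_dot + zc = zd (Eq. (25))."""
    return zc + dt * (zd - zc) / k_f

# Driving the filter with a constant command: the filtered signal zc converges
# to the command, so the boundary layer error e = zc - zd decays to zero.
zc = 0.0
for _ in range(2000):
    zc = filter_step(zc, 1.0, 0.05, 0.001)
```

The filter output \(z_c\) (and, by differencing, \(\dot{z}_c\)) replaces the analytic derivative of the virtual command, which is the computational saving that DSC brings over plain backstepping.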
This not only simplifies the control structure with less computational effort but also benefits its practical implementation.

Step 2. Since α and β are virtual velocity variables, the following design aims at stabilizing the tracking errors \(\alpha_e\) and \(\beta_e\), as well as the attitude errors \(\theta_e\) and \(\psi_e\), by means of the available angular velocities q and r. The second Lyapunov function is chosen as

\[
V_2 = V_1 + \frac{1}{2}\theta_e^2 + \frac{1}{2}\psi_e^2 + \frac{1}{2}\alpha_e^2 + \frac{1}{2}\beta_e^2
\tag{28}
\]

Taking the time derivative of \(V_2\) along Eq. (23) gives

\[
\begin{aligned}
\dot{V}_2 ={}& -k_1x_e^2 - k_2y_e^2 - k_3z_e^2 + x_e(u_e + e_u) + y_e(\alpha_e + e_\alpha) - z_e(\beta_e + e_\beta)\\
&+ \theta_e q + \psi_e\frac{r}{\cos\theta} + \alpha_e\left(V_d\cos\psi_e\frac{r}{\cos\theta} - \dot{\alpha}_c\right) + \beta_e\left(V_d\cos\theta\, q - \dot{\beta}_c\right)
\end{aligned}
\tag{29}
\]

Then the desired angular velocities are designed as

\[
\begin{cases}
q_d = -k_4\left(\theta_e + V_d\beta_e\cos\theta\right)\\
r_d = -k_5\left(\psi_e + V_d\alpha_e\cos\psi_e\right)\cos\theta
\end{cases}
\tag{30}
\]

where \(k_4\) and \(k_5\) are positive constants to be chosen later. As in the previous step, the virtual control commands \(q_d\) and \(r_d\) are passed through first-order filters:

\[
\begin{cases}
k_q\dot{q}_c + q_c = q_d, & q_c(0) = q_d(0)\\
k_r\dot{r}_c + r_c = r_d, & r_c(0) = r_d(0)
\end{cases}
\tag{31}
\]

and the angular velocity errors are defined as

\[
\begin{cases}
q_e = q - q_c, & e_q = q_c - q_d\\
r_e = r - r_c, & e_r = r_c - r_d
\end{cases}
\tag{32}
\]

Then the time derivative of \(V_2\) in Eq. (29) along Eq. (32) yields

\[
\begin{aligned}
\dot{V}_2 ={}& -k_1x_e^2 - k_2y_e^2 - k_3z_e^2 - k_4\left(\theta_e + V_d\beta_e\cos\theta\right)^2 - k_5\left(\psi_e + V_d\alpha_e\cos\psi_e\right)^2\\
&+ x_e(u_e + e_u) + y_e(\alpha_e + e_\alpha) - z_e(\beta_e + e_\beta) + \left(\theta_e + V_d\beta_e\cos\theta\right)(q_e + e_q)\\
&+ \frac{\left(\psi_e + V_d\alpha_e\cos\psi_e\right)(r_e + e_r)}{\cos\theta} - \alpha_e\dot{\alpha}_c - \beta_e\dot{\beta}_c
\end{aligned}
\tag{33}
\]

Step 3. The kinematic controller has now been obtained. Next, using the desired velocity commands, the dynamic controller is derived in detail. Consider the Lyapunov function candidate

\[
V_3 = V_2 + \frac{1}{2}u_e^2 + \frac{1}{2}q_e^2 + \frac{1}{2}r_e^2
\tag{34}
\]

Combining with the UUV dynamic equations in Eq.
(2), one gets

\[
\begin{cases}
\dot{u}_e = \dfrac{m_{22}}{m_{11}}\upsilon r - \dfrac{m_{33}}{m_{11}}wq - \dfrac{d_{11}}{m_{11}}u + \dfrac{\tau_u}{m_{11}} - \dot{u}_c\\[2mm]
\dot{q}_e = \dfrac{m_{33}-m_{11}}{m_{55}}uw - \dfrac{d_{55}}{m_{55}}q - \dfrac{\rho g\nabla\,\overline{GM}_L\sin\theta}{m_{55}} + \dfrac{\tau_q}{m_{55}} - \dot{q}_c\\[2mm]
\dot{r}_e = \dfrac{m_{11}-m_{22}}{m_{66}}u\upsilon - \dfrac{d_{66}}{m_{66}}r + \dfrac{\tau_r}{m_{66}} - \dot{r}_c
\end{cases}
\tag{35}
\]

The dynamic controller is designed as

\[
\begin{cases}
\tau_u = m_{11}(\dot{u}_c - x_e) - m_{22}\upsilon r + m_{33}wq + d_{11}u - m_{11}k_6u_e\\[1mm]
\tau_q = m_{55}\left(\dot{q}_c - \theta_e - V_d\beta_e\cos\theta\right) - (m_{33} - m_{11})uw + d_{55}q + \rho g\nabla\,\overline{GM}_L\sin\theta - m_{55}k_7q_e\\[1mm]
\tau_r = m_{66}\left(\dot{r}_c - \dfrac{\psi_e + V_d\alpha_e\cos\psi_e}{\cos\theta}\right) - (m_{11} - m_{22})u\upsilon + d_{66}r - m_{66}k_8r_e
\end{cases}
\tag{36}
\]

where \(k_i\), \(i = 6, 7, 8\), are positive control gains. Taking the time derivative of \(V_3\) along Eqs. (35) and (36) yields

\[
\dot{V}_3 = -k_1x_e^2 - k_2y_e^2 - k_3z_e^2 - k_4\left(\theta_e + V_d\beta_e\cos\theta\right)^2 - k_5\left(\psi_e + V_d\alpha_e\cos\psi_e\right)^2 - k_6u_e^2 - k_7q_e^2 - k_8r_e^2 + \delta
\tag{37}
\]

where

\[
\delta = x_ee_u + y_ee_\alpha - z_ee_\beta + \alpha_e(y_e - \dot{\alpha}_c) - \beta_e(z_e + \dot{\beta}_c) + \left(\theta_e + V_d\beta_e\cos\theta\right)e_q + \frac{\left(\psi_e + V_d\alpha_e\cos\psi_e\right)e_r}{\cos\theta}
\tag{38}
\]

4.2. Stability analysis

The controller design for the trajectory tracking of an underactuated UUV in three-dimensional space has been completed in the above section. The main results are summarized in the following theorem.

Theorem 1. Consider an underactuated 5-DOF UUV moving in three-dimensional space, with the kinematics and dynamics given by Eqs. (1) and (2), operating in the mission scenario described in Section 2. Then, with the reference guidance trajectory of Eq. (16) and the control laws (24), (30) and (36), the vehicle can autonomously approach the vicinity of the base station along the final docking path. Moreover, if the control gains are chosen to satisfy the conditions in Eq. (44), all signals in the closed-loop system are bounded and the tracking errors converge to a small neighbourhood of zero.

Proof.
In Section 4.1 above, the first-order filters were introduced to avoid differentiating the desired velocity commands. This simplifies the controllers but also introduces the boundary layer errors \(e_\zeta\), \(\zeta = u, \alpha, \beta, q, r\). From the definition of the boundary layer errors, one obtains

\[
\dot{e}_\zeta = \dot{\zeta}_c - \dot{\zeta}_d = -\frac{e_\zeta}{k_\zeta} - \dot{\zeta}_d
\tag{39}
\]

where \(\zeta_d\) denotes the desired velocity command, which is a continuous closed-loop signal. Then, using Young's inequality,

\[
e_\zeta\dot{e}_\zeta = -\frac{e_\zeta^2}{k_\zeta} - e_\zeta\dot{\zeta}_d \leq -\left(\frac{1}{k_\zeta} - 1\right)e_\zeta^2 + \frac{1}{4}\dot{\zeta}_d^2
\tag{40}
\]

Therefore, the boundary layer errors are uniformly ultimately bounded provided the filter gain \(k_\zeta\) satisfies \(1/k_\zeta - 1 > 0\). Finally, the complete stability analysis for the overall system is presented. Choose the Lyapunov function candidate

\[
V_4 = V_3 + \frac{1}{2}e_u^2 + \frac{1}{2}e_\alpha^2 + \frac{1}{2}e_\beta^2 + \frac{1}{2}e_q^2 + \frac{1}{2}e_r^2
\tag{41}
\]

Taking the time derivative of \(V_4\) gives

\[
\begin{aligned}
\dot{V}_4 ={}& -k_1x_e^2 - k_2y_e^2 - k_3z_e^2 - k_4\left(\theta_e + V_d\beta_e\cos\theta\right)^2 - k_5\left(\psi_e + V_d\alpha_e\cos\psi_e\right)^2\\
&- k_6u_e^2 - k_7q_e^2 - k_8r_e^2 - \frac{e_u^2}{k_u} - \frac{e_\alpha^2}{k_\alpha} - \frac{e_\beta^2}{k_\beta} - \frac{e_q^2}{k_q} - \frac{e_r^2}{k_r} + \delta\\
&- e_u\dot{u}_d - e_\alpha\dot{\alpha}_d - e_\beta\dot{\beta}_d - e_q\dot{q}_d - e_r\dot{r}_d
\end{aligned}
\tag{42}
\]

Due to the actual constraints on the thrusters and rudders of the vehicle, there exists a positive constant \(\lambda_0\) such that \(|1/\cos\theta| \leq \lambda_0\). In addition, all the desired velocity commands are bounded closed-loop signals, which ensures the boundedness of their time derivatives. Therefore, the following inequalities hold:

\[
|\dot{\alpha}_c| \leq \lambda_1, \quad |\dot{\beta}_c| \leq \lambda_2, \quad |\dot{u}_d| \leq \lambda_3, \quad |\dot{q}_d| \leq \lambda_4, \quad |\dot{r}_d| \leq \lambda_5, \quad |\dot{\alpha}_d| \leq \lambda_6, \quad |\dot{\beta}_d| \leq \lambda_7
\tag{43}
\]

where \(\lambda_i\), \(i = 1, 2, \ldots, 7\), are positive constants.
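The boundary-layer dynamics of Eq. (39) can be checked numerically. In this illustrative simulation (not from the paper), \(k_\zeta = 0.05\) satisfies the condition \(1/k_\zeta - 1 > 0\), \(\dot{\zeta}_d\) is an assumed bounded signal with \(|\dot{\zeta}_d| \leq 1\), and the error settles to a magnitude on the order of \(k_\zeta\):

```python
import math

# Forward-Euler integration of Eq. (39): e_dot = -e/k_zeta - zeta_d_dot,
# with the assumed bounded driving signal zeta_d_dot(t) = sin(t).
k_zeta, dt, e = 0.05, 1e-3, 1.0
for i in range(10000):          # integrate for 10 s
    e += dt * (-e / k_zeta - math.sin(i * dt))
```

Despite starting from e(0) = 1, the fast pole at \(-1/k_\zeta\) drives the error into a small ultimate bound, which is the uniform ultimate boundedness asserted below Eq. (40).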
Using Young's inequality, i.e., \(ab \leq \frac{1}{2\mu}a^2 + \frac{\mu}{2}b^2\) with \(\mu > 0\), one obtains

\[
\begin{aligned}
&x_ee_u \leq \frac{1}{2\mu}e_u^2 + \frac{\mu}{2}x_e^2, \quad
y_ee_\alpha \leq \frac{1}{2\mu}e_\alpha^2 + \frac{\mu}{2}y_e^2, \quad
z_ee_\beta \leq \frac{1}{2\mu}e_\beta^2 + \frac{\mu}{2}z_e^2,\\
&y_e\alpha_e \leq \frac{1}{2\mu}\alpha_e^2 + \frac{\mu}{2}y_e^2, \quad
\alpha_e\dot{\alpha}_c \leq \frac{1}{2\mu}\alpha_e^2 + \frac{\mu}{2}\lambda_1^2, \quad
z_e\beta_e \leq \frac{1}{2\mu}\beta_e^2 + \frac{\mu}{2}z_e^2,\\
&\beta_e\dot{\beta}_c \leq \frac{1}{2\mu}\beta_e^2 + \frac{\mu}{2}\lambda_2^2, \quad
\theta_ee_q \leq \frac{1}{2\mu}e_q^2 + \frac{\mu}{2}\theta_e^2, \quad
V_d\beta_e\cos\theta\, e_q \leq \frac{V_d^2}{2\mu}e_q^2 + \frac{\mu}{2}\beta_e^2,\\
&\frac{\psi_ee_r}{\cos\theta} \leq \frac{\lambda_0^2}{2\mu}e_r^2 + \frac{\mu}{2}\psi_e^2, \quad
\frac{V_d\alpha_e\cos\psi_e\, e_r}{\cos\theta} \leq \frac{\lambda_0^2V_d^2}{2\mu}e_r^2 + \frac{\mu}{2}\alpha_e^2,\\
&2k_4\theta_eV_d\beta_e\cos\theta \leq k_4V_d\left(\frac{1}{\mu}\beta_e^2 + \mu\theta_e^2\right), \quad
\frac{2k_5\psi_eV_d\alpha_e\cos\psi_e}{\cos\theta} \leq \lambda_0k_5V_d\left(\frac{1}{\mu}\alpha_e^2 + \mu\psi_e^2\right),\\
&e_u\dot{u}_d \leq \frac{1}{2\mu}e_u^2 + \frac{\mu}{2}\lambda_3^2, \quad
e_q\dot{q}_d \leq \frac{1}{2\mu}e_q^2 + \frac{\mu}{2}\lambda_4^2, \quad
e_r\dot{r}_d \leq \frac{1}{2\mu}e_r^2 + \frac{\mu}{2}\lambda_5^2,\\
&e_\alpha\dot{\alpha}_d \leq \frac{1}{2\mu}e_\alpha^2 + \frac{\mu}{2}\lambda_6^2, \quad
e_\beta\dot{\beta}_d \leq \frac{1}{2\mu}e_\beta^2 + \frac{\mu}{2}\lambda_7^2
\end{aligned}
\]

Then the time derivative of \(V_4\) satisfies

\[
\begin{aligned}
\dot{V}_4 \leq{}& -\bar{k}_1x_e^2 - \bar{k}_2y_e^2 - \bar{k}_3z_e^2 - \bar{k}_{41}\theta_e^2 - \bar{k}_{42}\beta_e^2 - \bar{k}_{51}\psi_e^2 - \bar{k}_{52}\alpha_e^2\\
&- k_6u_e^2 - k_7q_e^2 - k_8r_e^2 - k_9e_u^2 - k_{10}e_q^2 - k_{11}e_r^2 - k_{12}e_\alpha^2 - k_{13}e_\beta^2 + \sigma
\end{aligned}
\tag{44}
\]

where

\[
\begin{aligned}
&\bar{k}_1 = k_1 - \frac{\mu}{2} > 0, \quad \bar{k}_2 = k_2 - \mu > 0, \quad \bar{k}_3 = k_3 - \mu > 0,\\
&\bar{k}_{41} = k_4 - \frac{\mu}{2} - k_4V_d\mu > 0, \quad
\bar{k}_{42} = k_4(V_d\cos\theta)^2 - \frac{1 + k_4V_d}{\mu} - \frac{\mu}{2} > 0,\\
&\bar{k}_{51} = k_5 - \frac{\mu}{2} - k_5V_d\lambda_0\mu > 0, \quad
\bar{k}_{52} = k_5\left(\frac{V_d\cos\psi_e}{\cos\theta}\right)^2 - \frac{1 + k_5\lambda_0V_d}{\mu} - \frac{\mu}{2} > 0,\\
&k_9 = \frac{1}{k_u} - \frac{1}{\mu} > 0, \quad
k_{10} = \frac{1}{k_q} - \frac{2 + V_d^2}{2\mu} > 0, \quad
k_{11} = \frac{1}{k_r} - \frac{(1 + \lambda_0^2)(1 + V_d^2)}{2\mu} > 0,\\
&k_{12} = \frac{1}{k_\alpha} - \frac{1}{\mu} > 0, \quad
k_{13} = \frac{1}{k_\beta} - \frac{1}{\mu} > 0, \quad
\sigma = \frac{\mu}{2}\sum_{i=1}^{7}\lambda_i^2
\end{aligned}
\]

Set \(\kappa = \min\{\bar{k}_i, k_j\}\); then from Eq. (44) it follows that

\[
\dot{V}_4 \leq -2\kappa V_4 + \sigma
\tag{45}
\]

Solving the inequality (45), one obtains

\[
V_4(t) \leq V_4(0)e^{-2\kappa t} + \frac{\sigma}{2\kappa}, \quad \forall t > 0.
\tag{46}
\]

This means that all the tracking errors of the closed-loop system exponentially converge to a small neighbourhood of the origin, i.e., they are uniformly ultimately bounded (UUB). This completes the stability analysis.

Remark 6.
The controller parameters \(\bar{k}_i\) and \(k_j\) may be tuned to adjust the convergence rate κ and the size of the ultimate bound \(\sigma/2\kappa\). According to the Lyapunov stability analysis presented above, larger gains \(\bar{k}_i\) and \(k_j\) increase κ and decrease the ultimate bound \(\sigma/2\kappa\), which leads to better tracking accuracy. However, excessively large gains may cause chattering of the control signals or even instability. The control parameters should therefore be chosen properly by trial and error, a process similar to gain tuning in PID control.

5. Simulation results

This section presents numerical simulations performed on an underactuated UUV in 5-DOF to illustrate the performance of the proposed integrated guidance and tracking control scheme. All simulations are carried out in MATLAB/Simulink. The parameters of the kinematics Eq. (1) and dynamics Eq. (2) of the vehicle are given by

m11 = 215 kg, m22 = 265 kg, m33 = 265 kg, m55 = 80 kg·m², m66 = 80 kg·m²,
d11 = 70 + 100|u| kg/s, d22 = 100 + 200|υ| kg/s, d33 = 100 + 200|w| kg/s,
d55 = 50 + 100|q| kg·m²/s, d66 = 50 + 100|r| kg·m²/s.

The reference trajectory is generated using the guidance system in Eq. (16), with desired surge velocity \(u_d = V_d = 2\) m/s and desired sway and heave velocities \(\upsilon_d = w_d = 0\). Initial position and attitude errors are assumed in the scenarios listed in Table 1, and all initial velocities of the vehicle are \((u(0), \upsilon(0), w(0), q(0), r(0)) = (0, 0, 0, 0, 0)\). When the distance between the vehicle and the docking position reaches R = 25 m, the initial homing stage ends and the final docking begins; the latter is left for future work. The reference trajectories of scenarios 1 and 2 are: (1) a straight line parallel to the XE axis, since the transponder positions satisfy \(x_{L0} = x_{R0}\); (2) a straight line parallel to
the YE axis, since \(y_{L0} = y_{R0}\). In the case of scenario 3, the reference trajectory is the line segment AB, whose slope k is nonzero.

Table 1. Initial positions and attitudes of the underactuated UUV and two transponders.

Scenario    UUV initial positions and attitudes    Left transponder    Right transponder
1           [−100, 1, 12, 0, 0]                    [0, 2, 10]          [0, −2, 10]
2           [−1, −115, 12, 0, π/4]                 [−2, 10, 10]        [2, 10, 10]
3           [−115, −116, 12, 0, 0]                 [2, 5, 10]          [5, 2, 10]

Fig. 3. The tracking results for homing of an underactuated UUV in scenario 1: (a) Reference and actual trajectories. (b) Actual velocities.

The controller is designed as in Eqs. (24), (30) and (36), with control parameters chosen as k1 = k2 = k3 = 0.3, k4 = k5 = 10, k6 = 1, k7 = k8 = 10, and ki = 0.05 for i = u, α, β, q, r. This tuning can easily be carried out in MATLAB by trial and error, a process similar to gain tuning in PID control. Moreover, it is shown below that satisfactory performance is maintained under the same control gains even with different initial conditions, which also indicates the robustness of the proposed algorithm. Fig. 3 shows the tracking results of the vehicle in mission scenario 1, where the positions of the two transponders are marked with red crosses and A, B and D are marked with blue and red points, respectively. It can be clearly seen that the underactuated UUV approaches the vicinity of the base station along the reference guidance trajectory AB. The linear and angular velocities of the vehicle are shown in Fig. 3(b): the surge velocity attains the desired value \(u_d = V_d = 2\) m/s, and all other velocities converge to zero.
The position and attitude tracking errors in Fig. 4(a) converge quickly to zero, and the control inputs are shown in Fig. 4(b). It is relevant to point out that Theorem 1 guarantees that all the tracking errors in the closed-loop system converge to a small neighbourhood of zero, but it does not prescribe convergence speeds. Hence, simulations must be carried out to tune the control gains so that the proposed controller delivers satisfactory performance, which is a common approach in nonlinear control problems.

Fig. 4. The tracking results of the trajectory tracking: (a) The position and attitude tracking errors. (b) The control inputs of the vehicle.

Fig. 5. The tracking results for homing of an underactuated UUV in scenario 2: (a) Reference and actual trajectories. (b) Actual velocities.

As shown in Fig. 5(a), the underactuated UUV moves in three-dimensional space towards the final docking path even with initial position and attitude errors, and the actual velocities of the vehicle are shown in Fig. 5(b). At the beginning the velocity response is very fast because of the large initial position errors. The tracking errors and control inputs of the vehicle can be seen in Fig. 6; the changes of the available control forces and torques confirm a short system response time. In addition, note that the reference guidance trajectories of scenarios 1 and 2 are parallel to the XE and YE axes, respectively. In Fig. 7, a general reference guidance trajectory for homing of an underactuated UUV is given, along which the vehicle moves at a certain cruise speed towards the base station. As observed in Fig. 8, the proposed controller again guarantees that all the tracking errors converge to a small neighbourhood of zero, and the bounded control inputs of the vehicle are shown in Fig. 8(b).
From all the above simulations, we can conclude that the proposed integrated guidance and control scheme is effective and solves the initial homing problem of an underactuated UUV in three-dimensional space.

Fig. 6. The tracking results of the trajectory tracking: (a) The position and attitude tracking errors. (b) The control inputs of the vehicle.

Fig. 7. The tracking results for homing of an underactuated UUV in scenario 3: (a) Reference and actual trajectories. (b) Actual velocities.

Fig. 8. The tracking results of the trajectory tracking: (a) The position and attitude tracking errors. (b) The control inputs of the vehicle.

6. Conclusion

In this paper, a novel integrated guidance and tracking control scheme is developed for homing of an underactuated UUV in 5-DOF, by which the vehicle can autonomously approach the vicinity of the base station along the final docking path. During the initial homing stage, the reference guidance trajectory is generated using a geometrical method that depends neither on the vehicle model nor on additional assumptions. For the trajectory tracking, a three-dimensional controller is then designed with the aid of backstepping, under which the vehicle's states all track the reference "state-space" trajectory, including the time evolutions of position, attitude and velocities. In addition, all the tracking errors of the closed-loop system are shown to be uniformly ultimately bounded (UUB), and simulations for several initial conditions are given to demonstrate the effectiveness of the control strategy. In future work, the final docking problem should be addressed, possibly taking environmental disturbances and model uncertainties into account.
Acknowledgment

This work was supported by the National Natural Science Foundation of China under Grants 51179038, 51105088, 51309067 and 51409055, and partly supported by the Fundamental Research Funds for the Central Universities under Grant HEUCFX160402.

References

[1] P. Batista, C. Silvestre, P. Oliveira, A time differences of arrival-based homing strategy for autonomous underwater vehicles, Int. J. Robust Nonlinear Control 20 (2010) 1758–1773.
[2] P. Batista, C. Silvestre, P. Oliveira, A two-step control approach for docking of autonomous underwater vehicles, Int. J. Robust Nonlinear Control 25 (2015) 1528–1547.
[3] D.W. George, R.R. Andre, C. Jason, et al., A concept for docking a UUV with a slowly moving submarine under waves, IEEE J. Ocean. Eng. 41 (2) (2016) 471–498.
[4] L. Lionel, J. Bruno, Robust nonlinear path-following control of an AUV, IEEE J. Ocean. Eng. 33 (2) (2008) 89–102.
[5] T.I. Fossen, K.Y. Pettersen, R. Galeazzi, Line-of-sight path following for Dubins paths with adaptive sideslip compensation of drift forces, IEEE Trans. Control Syst. Technol. 23 (2) (2015) 820–827.
[6] S. Chao, S. Yang, B. Brad, Integrated path planning and tracking control of an AUV: a unified receding horizon optimization approach, IEEE/ASME Trans. Mechatron. 22 (3) (2017) 1163–1173.
[7] S. Khoshnam, M.A. Mohammad, On the neuro-adaptive feedback linearising control of underactuated autonomous underwater vehicles in three-dimensional space, IET Control Theory Appl. 9 (8) (2015) 1264–1273.
[8] X. Jian, W. Man, Q. Lei, Dynamical sliding mode control for the trajectory tracking of underactuated unmanned underwater vehicles, Ocean Eng. 105 (2015) 54–63.
[9] L. Qiao, W.D. Zhang, Adaptive non-singular integral terminal sliding mode tracking control for autonomous underwater vehicles, IET Control Theory Appl. 11 (8) (2017) 1293–1306.
[10] D. Alejandro, G.R. Jose, P. Tristan, Trajectory tracking passivity-based control for marine vehicles subject to disturbances, J. Frankl. Inst. 354 (2017) 2167–2182.
[11] P. Herman, W. Adamski, Nonlinear trajectory tracking controller for a class of robotic vehicles, J. Frankl. Inst. 354 (2017) 5145–5161.
[12] M. Feezor, F. Sorrell, P. Blankinship, J. Bellingham, Autonomous underwater vehicle homing/docking via electromagnetic guidance, IEEE J. Ocean. Eng. 26 (4) (2001) 515–521.
[13] P. Batista, C. Silvestre, P. Oliveira, A sensor based controller for homing of underactuated AUVs, IEEE Trans. Robot. 25 (3) (2009) 701–716.
[14] S. Aouaouda, M. Chadli, P. Shi, H.R. Karimi, Discrete-time H−/H∞ sensor fault detection observer design for nonlinear systems with parameter uncertainty, Int. J. Robust Nonlinear Control 25 (2015) 339–361.
[15] S.K. Kommuri, M. Defoort, H.R. Karimi, K.C. Veluvolu, A robust observer-based sensor fault-tolerant control for PMSM in electric vehicles, IEEE Trans. Ind. Electron. 63 (12) (2016) 7671–7681.
[16] C. Yang, S. Peng, S. Fan, S. Zhang, P. Wang, Y. Chen, Study on docking guidance algorithm for hybrid underwater glider in currents, Ocean Eng. 125 (2016) 170–181.
[17] K. Jongkyoo, H. Joe, S. Yu, et al., Time-delay controller design for position control of autonomous underwater vehicle under disturbances, IEEE Trans. Ind. Electron. 63 (2) (2016) 1052–1060.
[18] R. Raja, S. Bidyadhar, NARMAX self-tuning controller for line-of-sight-based waypoint tracking for an autonomous underwater vehicle, IEEE Trans. Control Syst. Technol. 25 (4) (2017) 1529–1536.
[19] T.I. Fossen, Guidance and Control of Ocean Vehicles, Wiley, New York, 1994.
[20] K.D. Do, J. Pan, Control of Ships and Underwater Vehicles: Design for Underactuated and Nonlinear Marine Systems, Springer, London, 2009.
[21] A.P. Aguiar, J.P. Hespanha, Trajectory-tracking and path-following of underactuated autonomous vehicles with parametric modeling uncertainty, IEEE Trans. Autom. Control 52 (8) (2007) 1362–1379.
[22] K.D. Do, Global tracking control of underactuated ODINs in three-dimensional space, Int. J. Control 86 (2) (2013) 183–196.
[23] J.E. Refsnes, A.J. Sorensen, K.Y. Pettersen, Model-based output feedback control of slender-body underactuated AUVs: theory and experiments, IEEE Trans. Control Syst. Technol. 16 (5) (2008) 930–946.
[24] H. Joe, M. Kim, S.C. Yu, Second-order sliding mode controller for autonomous underwater vehicle in the presence of unknown disturbances, Nonlinear Dyn. 78 (2014) 183–196.
[25] E. Taha, Z. Mohamed, Y.T. Kamal, Terminal sliding mode control for the trajectory tracking of underactuated autonomous underwater vehicles, Ocean Eng. 129 (2017) 613–625.
[26] R. Cui, L. Chen, C. Yang, M. Chen, Extended state observer-based integral sliding mode control for an underwater robot with unknown disturbances and uncertain nonlinearities, IEEE Trans. Ind. Electron. 64 (8) (2017) 6785–6795.
[27] S. Khoshnam, D. Mehdi, Line-of-sight target tracking control of underactuated autonomous underwater vehicles, Ocean Eng. 133 (2017) 244–252.
[28] R. Cui, C. Yang, Y. Li, S. Sharma, Adaptive neural network control of AUVs with control input nonlinearities using reinforcement learning, IEEE Trans. Syst. Man Cybern. Syst. 47 (6) (2017) 1019–1029.
[29] K.D. Do, Control of fully actuated ocean vehicles under stochastic environmental loads in three dimensional space, Ocean Eng. 99 (2015) 34–43.
[30] H. Wang, X. Liu, K. Liu, H.R. Karimi, Approximation-based adaptive fuzzy tracking control for a class of nonstrict-feedback stochastic nonlinear time-delay systems, IEEE Trans. Fuzzy Syst. 23 (5) (2015) 1746–1760.
[31] B. Niu, H.R. Karimi, H. Wang, Y. Liu, Adaptive output-feedback controller design for switched nonlinear stochastic systems with a modified average dwell-time method, IEEE Trans. Syst. Man Cybern. Syst. 47 (7) (2017) 1371–1382.
[32] H. Osama, G.A. Sreenatha, S. Hyungbo, R. Tapabrata, Model-based adaptive control system for autonomous underwater vehicles, Ocean Eng. 127 (2016) 58–69.
[33] S. Pouria, A.R. Noei, A. Khosravi, Model reference adaptive PID control with anti-windup compensator for an autonomous underwater vehicle, Robot. Auton. Syst. 83 (2016) 87–93.
[34] S. Amit, N. Nikolaj, C. Monique, An almost global tracking control scheme for maneuverable autonomous vehicles and its discretization, IEEE Trans. Autom. Control 56 (2) (2011) 457–462.
[35] N. Wang, C. Qian, J.C. Sun, Y.C. Liu, Adaptive robust finite-time trajectory tracking control of fully actuated marine surface vehicles, IEEE Trans. Control Syst. Technol. 24 (4) (2016) 1454–1462.
[36] K.Y. Pettersen, O. Egeland, Time-varying exponential stabilization of the position and attitude of an underactuated autonomous underwater vehicle, IEEE Trans. Autom. Control 44 (1) (1999) 112–115.
https://www.sierrachart.com/index.php?page=doc/StudiesReference.php&ID=163&Name=Bill_Williams_MA
# Technical Studies Reference

### Bill Williams Moving Average

This study calculates and displays a Bill Williams Moving Average of the data specified by the Input Data Input.

Let $$X$$ be a random variable denoting the Input Data, and let $$X_t$$ be the value of the Input Data at Index $$t$$. Let the Input Length be denoted as $$n$$. Then we denote the Bill Williams Moving Average at Index $$t$$ for the given Inputs as $$BWMA_t(X,n)$$, and we compute it for $$t \geq 0$$ as follows.

For $$t = 0$$: $$BWMA_0(X,n) = X_0$$

For $$t > 0$$: $$\displaystyle{BWMA_t(X,n) = \left(1 - \frac{1}{n}\right)BWMA_{t - 1}(X,n) + \frac{1}{n}X_t}$$

Note: Depending on the setting of the Input Moving Average Type, the Bill Williams Moving Average in the above calculation could be replaced with a Smoothed Moving Average.

#### Inputs

- Input Data
- MovAvg Length
- Moving Average Type: This is a custom Input that can be set to either Bill Williams EMA (documented above) or Smoothed Moving Average.
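The recursion above can be sketched in a few lines of Python (illustrative only, not part of Sierra Chart):

```python
def bwma(xs, n):
    """Bill Williams Moving Average: y[0] = x[0];
    y[t] = (1 - 1/n) * y[t-1] + x[t] / n for t > 0."""
    out = []
    for x in xs:
        out.append(x if not out else (1.0 - 1.0 / n) * out[-1] + x / n)
    return out
```

For a constant input series the output is that same constant, and each new value moves the average 1/n of the way toward the latest data point.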
https://canadam.math.ca/2011/program/abs/gs.html
[ "", null, "CanaDAM 2011 University of Victoria, May 31 - June 3, 2011 www.cms.math.ca//2011", null, "Graph Searching\nOrg: Anthony Bonato (Ryerson University), Nancy Clarke (Acadia University) and Boting Yang (University of Regina)\n[PDF]\n\nCharacterizations of $k$-copwin graphs  [PDF]\n\nWe give two characterizations of the graphs on which $k$ cops have a winning strategy in the Cops and Robber game. These generalize the corresponding characterizations in the one cop case. In particular, we give a relational characterization of $k$-copwin graphs, for all finite $k$, and a vertex elimination order characterization of such graphs. Our results hold for variations of the game and some extend to infinite graphs. This is joint work with Gary MacGillivray.\n\nDANNY DYER, Memorial University of Newfoundland\nFast searching graphs with few searchers  [PDF]\n\nThe edge search number of a graph is defined as the minimum number of cops needed to catch a fast invisible robber that may rest on a graph's vertices or edges. The fast search number is the minimum number of cops needed to capture such a robber in as few moves as possible. Paralleling the development of the edge search number, we will discuss graphs that require at most 3 cops to guarantee capture.\n\nGEŇA HAHN, University of Montreal\nCops-and-robbers revisited  [PDF]\n\nWe suggest a generalization of the original Nowakowski-Quilliot-Winkler game that the players can play on distinct graphs. This allows for an easier introduction of constraints on the players' moves and so leads to a host of questions.\n\nGARY MACGILLIVRAY, University of Victoria\nA characterization of infinite cop-win graphs  [PDF]\n\nThere are essentially two characterizations of finite cop-win graphs. One is in terms of a relation defined on the vertex set, and the other is in terms of a vertex ordering. The first characterization holds unchanged for infinite graphs; the second does not. 
We show that, if the second characterization is reformulated in terms of a sequence of retractions that each fix all but one vertex, then it also holds unchanged for infinite graphs.\n\nRICHARD NOWAKOWSKI, Dalhousie University\nCops and Robber with different edge sets  [PDF]\n\nIn \"Cops and Robber\", the cops are assumed to obey the rules and the Robber is not. This suggests that they should play on different edge sets. I'll present what little is known, and that only covers certain situations, specifically: (a) the different edge sets are obtained from the edges of products; (b) complementary sets of edges. There are more questions than answers.\n\nHandling of online submissions has been provided by the CMS." ]
[ null, "https://canadam.math.ca/styles/standardwrap-1/printlogo.png", null, "https://canadam.math.ca/2011/styles/global-1/transparent.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9169102,"math_prob":0.84531796,"size":2564,"snap":"2020-24-2020-29","text_gpt3_token_len":568,"char_repetition_ratio":0.14023438,"word_repetition_ratio":0.01946472,"special_character_ratio":0.1973479,"punctuation_ratio":0.08444444,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9644235,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-11T20:10:20Z\",\"WARC-Record-ID\":\"<urn:uuid:bbbb5812-6120-40c3-8cd8-5df38efd5979>\",\"Content-Length\":\"11609\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8837c216-9460-4225-a955-e8850b1dd1d2>\",\"WARC-Concurrent-To\":\"<urn:uuid:5b00c30a-97c5-43be-a4bf-c976250e8aae>\",\"WARC-IP-Address\":\"137.122.61.199\",\"WARC-Target-URI\":\"https://canadam.math.ca/2011/program/abs/gs.html\",\"WARC-Payload-Digest\":\"sha1:QHR7RHVCPFCQDXLVSQNJFL6ANJERAKSZ\",\"WARC-Block-Digest\":\"sha1:H6MJ6IRQYKO4NKCVEGGRAMKUMJSW57C5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655937797.57_warc_CC-MAIN-20200711192914-20200711222914-00062.warc.gz\"}"}
https://community.plotly.com/t/labelling-xyz-axis-titles-on-scatter3d-plots/3175
[ "# Labelling xyz-axis titles on Scatter3d plots\n\nHello,\n\nI am just wondering if there is a way to label the xyz-axes on the following Scatter3d plot (https://plot.ly/python/3d-scatter-plots/).\n\n-Felix\n\n@Fxs7576 For a 3D plot the x-axis label is set in layout['scene']['xaxis'] (https://plot.ly/python/reference/#layout-scene-xaxis-title), and analogously for the y- and z-axes.\n\nJust to add to the point above, here is an example: https://plot.ly/python/3d-axes/#set-axes-title\n\nLike this (axis titles translated from the original Chinese: 图片大小 = image size, 下载时间 = download time):\n\n``````layout = go.Layout(\n    margin=dict(l=0, r=0, b=0, t=0),\n    scene=dict(\n        xaxis=dict(title='Image size (KB)'),\n        yaxis=dict(title='Download time (ms)'),\n        zaxis=dict(title='PV'),\n    ),\n)``````" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.67534626,"math_prob":0.850207,"size":718,"snap":"2023-40-2023-50","text_gpt3_token_len":217,"char_repetition_ratio":0.12605043,"word_repetition_ratio":0.3617021,"special_character_ratio":0.29665738,"punctuation_ratio":0.15822785,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9842065,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-09T15:22:12Z\",\"WARC-Record-ID\":\"<urn:uuid:82c635c3-cc9e-4b1a-b811-137cec29153e>\",\"Content-Length\":\"26895\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6d94de17-20b9-404c-8df6-a967e23f41cd>\",\"WARC-Concurrent-To\":\"<urn:uuid:aabddfa5-cde9-4771-895e-180d4b183f11>\",\"WARC-IP-Address\":\"184.105.99.75\",\"WARC-Target-URI\":\"https://community.plotly.com/t/labelling-xyz-axis-titles-on-scatter3d-plots/3175\",\"WARC-Payload-Digest\":\"sha1:4EHNA3TY42LLSZ3O7ULWPQZF64IEYOBQ\",\"WARC-Block-Digest\":\"sha1:53SS6PLPYCMUW6KGPDKAZDD7YLQM2HRB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100912.91_warc_CC-MAIN-20231209134916-20231209164916-00332.warc.gz\"}"}
https://la.mathworks.com/matlabcentral/cody/problems/189-sum-all-integers-from-1-to-2-n/solutions/1964150
[ "Cody\n\n# Problem 189. Sum all integers from 1 to 2^n\n\nSolution 1964150\n\nSubmitted on 7 Oct 2019 by Reece Taylor\nThis solution is locked. To view this solution, you need to provide a solution of the same size or smaller.\n\n### Test Suite\n\nTest Status Code Input and Output\n1   Pass\nx = 3; y_correct = 36; assert(isequal(sum_int(x),y_correct))\n\ny = 36\n\n2   Pass\nx = 7; y_correct = 8256; assert(isequal(sum_int(x),y_correct))\n\ny = 8256\n\n3   Pass\nx = 10; y_correct = 524800; assert(isequal(sum_int(x),y_correct))\n\ny = 524800\n\n4   Pass\nx = 11; y_correct = 2098176; assert(isequal(sum_int(x),y_correct))\n\ny = 2098176\n\n5   Pass\nx = 14; y_correct = 134225920; assert(isequal(sum_int(x),y_correct))\n\ny = 134225920\n\n6   Pass\nx = 17; y_correct = 8590000128; assert(isequal(sum_int(x),y_correct))\n\ny = 8.5900e+09" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5010184,"math_prob":0.9996443,"size":894,"snap":"2020-34-2020-40","text_gpt3_token_len":294,"char_repetition_ratio":0.18876405,"word_repetition_ratio":0.0,"special_character_ratio":0.40939596,"punctuation_ratio":0.13855422,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999411,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-07T00:26:20Z\",\"WARC-Record-ID\":\"<urn:uuid:12c433ee-1a9c-4270-9139-2ed8d3c2ab17>\",\"Content-Length\":\"74219\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ee94644a-4fa9-4c5d-8f86-a47b1f65ecfb>\",\"WARC-Concurrent-To\":\"<urn:uuid:00ebe5aa-d7fc-4132-9ef7-ecfc2ad95ed5>\",\"WARC-IP-Address\":\"23.66.56.59\",\"WARC-Target-URI\":\"https://la.mathworks.com/matlabcentral/cody/problems/189-sum-all-integers-from-1-to-2-n/solutions/1964150\",\"WARC-Payload-Digest\":\"sha1:YROBLRLJT44YCYLGTZL3TY4RPXN55XHI\",\"WARC-Block-Digest\":\"sha1:RRBWVCFH2ACIMKRIMWL4CSUUVHL6NWHL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439737050.56_warc_CC-MAIN-20200807000315-20200807030315-00314.warc.gz\"}"}
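The locked MATLAB solutions are hidden, but the test suite above pins down the closed form: the integers 1 through m sum to m(m+1)/2, here with m = 2^n. A Python sketch of that formula (the function name mirrors the Cody problem's `sum_int`):

```python
def sum_int(n):
    """Sum 1 + 2 + ... + 2^n using the closed form m*(m+1)/2 with m = 2**n."""
    m = 2 ** n
    return m * (m + 1) // 2  # integer division is exact: m*(m+1) is even
```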
https://setscholars.net/algorithm-in-c-decrease-key-and-delete-node-operations-on-a-fibonacci-heap/
[ "# Decrease Key and Delete Node Operations on a Fibonacci Heap\n\n#### In this tutorial, you will learn how the decrease-key and delete-node operations work. You will also find working examples of these operations on a Fibonacci heap in C.\n\nIn a Fibonacci heap, decrease-key and delete-node are important operations. These operations are discussed below.\n\n## Decreasing a Key\n\nIn the decrease-key operation, the value of a key is decreased to a lower value.\n\nThe following functions are used for decreasing a key.\n\n### Decrease-Key\n\n1. Select the node to be decreased, x, and change its value to the new value k.\n2. If the parent of x, y, is not null and the key of the parent is greater than k, then call `Cut(x)` and `Cascading-Cut(y)` subsequently.\n3. If the key of x is smaller than the key of min, then mark x as min.\n\n### Cut\n\n1. Remove x from its current position and add it to the root list.\n2. If x is marked, then mark it as false.\n\n### Cascading-Cut\n\n1. If the parent of y is not null, then follow the steps below.\n2. If y is unmarked, then mark y.\n3. Else, call `Cut(y)` and `Cascading-Cut(parent of y)`.\n\n## Decrease Key Example\n\nThe above operations can be understood in the examples below.\n\n### Example: Decreasing 46 to 15\n\n1. Decrease the value 46 to 15.\n2. Cut part: Since `24 ≠ nil` and `15 < its parent`, cut it and add it to the root list. Cascading-Cut part: mark 24.\n\n### Example: Decreasing 35 to 5\n\n1. Decrease the value 35 to 5.\n2. Cut part: Since `26 ≠ nil` and `5 < its parent`, cut it and add it to the root list.\n3. Cascading-Cut part: Since 26 is marked, the flow goes to `Cut` and `Cascading-Cut`.\nCut(26): Cut 26, add it to the root list, and mark it as false.\n\nSince 24 is also marked, again call `Cut(24)` and `Cascading-Cut(7)`. These operations result in the tree below.\n\n4. Since `5 < 7`, mark 5 as min.\n\n## Deleting a Node\n\nThis process makes use of the decrease-key and extract-min operations. 
The following steps are followed for deleting a node.\n\n1. Let k be the node to be deleted.\n2. Apply decrease-key operation to decrease the value of k to the lowest possible value (i.e. -∞).\n3. Apply extract-min operation to remove this node.\n\n## C Examples\n\n``````// Operations on a Fibonacci heap in C\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <stdbool.h>\n#include <math.h>\n\ntypedef struct _NODE\n{\nint key;\nint degree;\nstruct _NODE *left_sibling;\nstruct _NODE *right_sibling;\nstruct _NODE *parent;\nstruct _NODE *child;\nbool mark;\nbool visited;\n} NODE;\n\ntypedef struct fibanocci_heap\n{\nint n;\nNODE *min;\nint phi;\nint degree;\n} FIB_HEAP;\n\nFIB_HEAP *make_fib_heap();\nvoid insertion(FIB_HEAP *H, NODE *new, int val);\nNODE *extract_min(FIB_HEAP *H);\nvoid consolidate(FIB_HEAP *H);\nvoid fib_heap_link(FIB_HEAP *H, NODE *y, NODE *x);\nNODE *find_min_node(FIB_HEAP *H);\nvoid decrease_key(FIB_HEAP *H, NODE *node, int key);\nvoid cut(FIB_HEAP *H, NODE *node_to_be_decrease, NODE *parent_node);\nvoid Delete_Node(FIB_HEAP *H, int dec_key);\n\nFIB_HEAP *make_fib_heap(){\nFIB_HEAP *H;\nH = (FIB_HEAP *)malloc(sizeof(FIB_HEAP));\nH->n = 0;\nH->min = NULL;\nH->phi = 0;\nH->degree = 0;\nreturn H;\n}\nvoid new_print_heap(NODE *n){\nNODE *x;\nfor (x = n;; x = x->right_sibling)\n{\n\nif (x->child == NULL)\n{\nprintf(\"node with no child (%d) n\", x->key);\n}\nelse\n{\n\nprintf(\"NODE(%d) with child (%d)n\", x->key, x->child->key);\nnew_print_heap(x->child);\n}\nif (x->right_sibling == n)\n{\nbreak;\n}\n}\n}\n\nvoid insertion(FIB_HEAP *H, NODE *new, int val){\nnew = (NODE *)malloc(sizeof(NODE));\nnew->key = val;\nnew->degree = 0;\nnew->mark = false;\nnew->parent = NULL;\nnew->child = NULL;\nnew->visited = false;\nnew->left_sibling = new;\nnew->right_sibling = new;\nif (H->min == NULL)\n{\nH->min = new;\n}\nelse\n{\nH->min->left_sibling->right_sibling = new;\nnew->right_sibling = H->min;\nnew->left_sibling = H->min->left_sibling;\nH->min->left_sibling = 
new;\nif (new->key < H->min->key)\n{\nH->min = new;\n}\n}\n(H->n)++;\n}\n\nNODE *find_min_node(FIB_HEAP *H){\nif (H == NULL)\n{\nprintf(\" n Fibonacci heap not yet created n\");\nreturn NULL;\n}\nelse\nreturn H->min;\n}\n\nFIB_HEAP *unionHeap(FIB_HEAP *H1, FIB_HEAP *H2){\nFIB_HEAP *Hnew;\nHnew = make_fib_heap();\nHnew->min = H1->min;\n\nNODE *temp1, *temp2;\ntemp1 = Hnew->min->right_sibling;\ntemp2 = H2->min->left_sibling;\n\nHnew->min->right_sibling->left_sibling = H2->min->left_sibling;\nHnew->min->right_sibling = H2->min;\nH2->min->left_sibling = Hnew->min;\ntemp2->right_sibling = temp1;\n\nif ((H1->min == NULL) || (H2->min != NULL && H2->min->key < H1->min->key))\nHnew->min = H2->min;\nHnew->n = H1->n + H2->n;\nreturn Hnew;\n}\n\nint cal_degree(int n){\nint count = 0;\nwhile (n > 0)\n{\nn = n / 2;\ncount++;\n}\nreturn count;\n}\nvoid consolidate(FIB_HEAP *H){\nint degree, i, d;\ndegree = cal_degree(H->n);\nNODE *A[degree], *x, *y, *z;\nfor (i = 0; i <= degree; i++)\n{\nA[i] = NULL;\n}\nx = H->min;\ndo\n{\nd = x->degree;\nwhile (A[d] != NULL)\n{\ny = A[d];\nif (x->key > y->key)\n{\nNODE *exchange_help;\nexchange_help = x;\nx = y;\ny = exchange_help;\n}\nif (y == H->min)\nH->min = x;\nif (y->right_sibling == x)\nH->min = x;\nA[d] = NULL;\nd++;\n}\nA[d] = x;\nx = x->right_sibling;\n} while (x != H->min);\n\nH->min = NULL;\nfor (i = 0; i < degree; i++)\n{\nif (A[i] != NULL)\n{\nA[i]->left_sibling = A[i];\nA[i]->right_sibling = A[i];\nif (H->min == NULL)\n{\nH->min = A[i];\n}\nelse\n{\nH->min->left_sibling->right_sibling = A[i];\nA[i]->right_sibling = H->min;\nA[i]->left_sibling = H->min->left_sibling;\nH->min->left_sibling = A[i];\nif (A[i]->key < H->min->key)\n{\nH->min = A[i];\n}\n}\nif (H->min == NULL)\n{\nH->min = A[i];\n}\nelse if (A[i]->key < H->min->key)\n{\nH->min = A[i];\n}\n}\n}\n}\n\nvoid fib_heap_link(FIB_HEAP *H, NODE *y, NODE *x){\ny->right_sibling->left_sibling = y->left_sibling;\ny->left_sibling->right_sibling = y->right_sibling;\n\nif 
(x->right_sibling == x)\nH->min = x;\n\ny->left_sibling = y;\ny->right_sibling = y;\ny->parent = x;\n\nif (x->child == NULL)\n{\nx->child = y;\n}\ny->right_sibling = x->child;\ny->left_sibling = x->child->left_sibling;\nx->child->left_sibling->right_sibling = y;\nx->child->left_sibling = y;\nif ((y->key) < (x->child->key))\nx->child = y;\n\n(x->degree)++;\n}\nNODE *extract_min(FIB_HEAP *H){\n\nif (H->min == NULL)\nprintf(\"n The heap is empty\");\nelse\n{\nNODE *temp = H->min;\nNODE *pntr;\npntr = temp;\nNODE *x = NULL;\nif (temp->child != NULL)\n{\n\nx = temp->child;\ndo\n{\npntr = x->right_sibling;\n(H->min->left_sibling)->right_sibling = x;\nx->right_sibling = H->min;\nx->left_sibling = H->min->left_sibling;\nH->min->left_sibling = x;\nif (x->key < H->min->key)\nH->min = x;\nx->parent = NULL;\nx = pntr;\n} while (pntr != temp->child);\n}\n\n(temp->left_sibling)->right_sibling = temp->right_sibling;\n(temp->right_sibling)->left_sibling = temp->left_sibling;\nH->min = temp->right_sibling;\n\nif (temp == temp->right_sibling && temp->child == NULL)\nH->min = NULL;\nelse\n{\nH->min = temp->right_sibling;\nconsolidate(H);\n}\nH->n = H->n - 1;\nreturn temp;\n}\nreturn H->min;\n}\n\nvoid cut(FIB_HEAP *H, NODE *node_to_be_decrease, NODE *parent_node){\nNODE *temp_parent_check;\n\nif (node_to_be_decrease == node_to_be_decrease->right_sibling)\nparent_node->child = NULL;\n\nnode_to_be_decrease->left_sibling->right_sibling = node_to_be_decrease->right_sibling;\nnode_to_be_decrease->right_sibling->left_sibling = node_to_be_decrease->left_sibling;\nif (node_to_be_decrease == parent_node->child)\nparent_node->child = node_to_be_decrease->right_sibling;\n(parent_node->degree)--;\n\nnode_to_be_decrease->left_sibling = node_to_be_decrease;\nnode_to_be_decrease->right_sibling = node_to_be_decrease;\nH->min->left_sibling->right_sibling = node_to_be_decrease;\nnode_to_be_decrease->right_sibling = H->min;\nnode_to_be_decrease->left_sibling = 
H->min->left_sibling;\nH->min->left_sibling = node_to_be_decrease;\n\nnode_to_be_decrease->parent = NULL;\nnode_to_be_decrease->mark = false;\n}\n\nNODE *aux;\naux = parent_node->parent;\nif (aux != NULL)\n{\nif (parent_node->mark == false)\n{\nparent_node->mark = true;\n}\nelse\n{\ncut(H, parent_node, aux);\n}\n}\n}\n\nvoid decrease_key(FIB_HEAP *H, NODE *node_to_be_decrease, int new_key){\nNODE *parent_node;\nif (H == NULL)\n{\nprintf(\"n FIbonacci heap not created \");\nreturn;\n}\nif (node_to_be_decrease == NULL)\n{\nprintf(\"Node is not in the heap\");\n}\n\nelse\n{\nif (node_to_be_decrease->key < new_key)\n{\nprintf(\"n Invalid new key for decrease key operation n \");\n}\nelse\n{\nnode_to_be_decrease->key = new_key;\nparent_node = node_to_be_decrease->parent;\nif ((parent_node != NULL) && (node_to_be_decrease->key < parent_node->key))\n{\nprintf(\"n cut called\");\ncut(H, node_to_be_decrease, parent_node);\n}\nif (node_to_be_decrease->key < H->min->key)\n{\nH->min = node_to_be_decrease;\n}\n}\n}\n}\n\nvoid *find_node(FIB_HEAP *H, NODE *n, int key, int new_key){\nNODE *find_use = n;\nNODE *f = NULL;\nfind_use->visited = true;\nif (find_use->key == key)\n{\nfind_use->visited = false;\nf = find_use;\ndecrease_key(H, f, new_key);\n}\nif (find_use->child != NULL)\n{\nfind_node(H, find_use->child, key, new_key);\n}\nif ((find_use->right_sibling->visited != true))\n{\nfind_node(H, find_use->right_sibling, key, new_key);\n}\n\nfind_use->visited = false;\n}\n\nFIB_HEAP *insertion_procedure(){\nFIB_HEAP *temp;\nint no_of_nodes, ele, i;\nNODE *new_node;\ntemp = (FIB_HEAP *)malloc(sizeof(FIB_HEAP));\ntemp = NULL;\nif (temp == NULL)\n{\ntemp = make_fib_heap();\n}\nprintf(\" n enter number of nodes to be insert = \");\nscanf(\"%d\", &no_of_nodes);\nfor (i = 1; i <= no_of_nodes; i++)\n{\nprintf(\"n node %d and its key value = \", i);\nscanf(\"%d\", &ele);\ninsertion(temp, new_node, ele);\n}\nreturn temp;\n}\nvoid Delete_Node(FIB_HEAP *H, int dec_key){\nNODE *p = 
NULL;\nfind_node(H, H->min, dec_key, -5000);\np = extract_min(H);\nif (p != NULL)\nprintf(\"n Node deleted\");\nelse\nprintf(\"n Node not deleted:some error\");\n}\n\nint main(int argc, char **argv){\nNODE *new_node, *min_node, *extracted_min, *node_to_be_decrease, *find_use;\nFIB_HEAP *heap, *h1, *h2;\nint operation_no, new_key, dec_key, ele, i, no_of_nodes;\nheap = (FIB_HEAP *)malloc(sizeof(FIB_HEAP));\nheap = NULL;\nwhile (1)\n{\n\nprintf(\" n choose below operations n 1. Create Fibonacci heap n 2. Insert nodes into fibonacci heap n 3. Find min n 4. Union n 5. Extract min n 6. Decrease key n 7.Delete node n 8. print heap n 9. exit n enter operation_no = \");\nscanf(\"%d\", &operation_no);\n\nswitch (operation_no)\n{\ncase 1:\nheap = make_fib_heap();\nbreak;\n\ncase 2:\nif (heap == NULL)\n{\nheap = make_fib_heap();\n}\nprintf(\" enter number of nodes to be insert = \");\nscanf(\"%d\", &no_of_nodes);\nfor (i = 1; i <= no_of_nodes; i++)\n{\nprintf(\"n node %d and its key value = \", i);\nscanf(\"%d\", &ele);\ninsertion(heap, new_node, ele);\n}\nbreak;\n\ncase 3:\nmin_node = find_min_node(heap);\nif (min_node == NULL)\nprintf(\"No minimum value\");\nelse\nprintf(\"n min value = %d\", min_node->key);\nbreak;\n\ncase 4:\nif (heap == NULL)\n{\nprintf(\"n no FIbonacci heap is created please create fibonacci heap n \");\nbreak;\n}\nh1 = insertion_procedure();\nheap = unionHeap(heap, h1);\nprintf(\"Unified Heap:n\");\nnew_print_heap(heap->min);\nbreak;\n\ncase 5:\nif (heap == NULL)\nprintf(\"Fibonacci heap is empty\");\nelse\n{\nextracted_min = extract_min(heap);\nprintf(\"n min value = %d\", extracted_min->key);\nprintf(\"n Updated heap: n\");\nnew_print_heap(heap->min);\n}\nbreak;\n\ncase 6:\nif (heap == NULL)\nprintf(\"Fibonacci heap is empty\");\nelse\n{\nprintf(\" n node to be decreased = \");\nscanf(\"%d\", &dec_key);\nprintf(\" n enter the new key = \");\nscanf(\"%d\", &new_key);\nfind_use = heap->min;\nfind_node(heap, find_use, dec_key, new_key);\nprintf(\"n Key 
decreased- Corresponding heap:n\");\nnew_print_heap(heap->min);\n}\nbreak;\ncase 7:\nif (heap == NULL)\nprintf(\"Fibonacci heap is empty\");\nelse\n{\nprintf(\" n Enter node key to be deleted = \");\nscanf(\"%d\", &dec_key);\nDelete_Node(heap, dec_key);\nprintf(\"n Node Deleted- Corresponding heap:n\");\nnew_print_heap(heap->min);\nbreak;\n}\ncase 8:\nnew_print_heap(heap->min);\nbreak;\n\ncase 9:\nfree(new_node);\nfree(heap);\nexit(0);\n\ndefault:\nprintf(\"Invalid choice \");\n}\n}\n}``````\n\n## Complexities\n\n| Operation | Complexity |\n| --- | --- |\n| Decrease Key | O(1) |\n| Delete Node | O(log n) |" ]
[ null, "https://d31ezp3r8jwmks.cloudfront.net/variants/nHd8hKzGSAxBUQQ23qnR1VgE/d2e337a4f6900f8d0798c596eb0607a8e0c2fbddb6a7ab7afcd60009c119d4c7", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6421243,"math_prob":0.94276017,"size":14013,"snap":"2022-40-2023-06","text_gpt3_token_len":4084,"char_repetition_ratio":0.1945892,"word_repetition_ratio":0.08714703,"special_character_ratio":0.32198673,"punctuation_ratio":0.18654673,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9926065,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-30T03:54:50Z\",\"WARC-Record-ID\":\"<urn:uuid:cc9af8d0-ce83-4538-bb99-74864b23fb75>\",\"Content-Length\":\"114223\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3b45f8fe-1a24-4522-bb29-f4a416a723f0>\",\"WARC-Concurrent-To\":\"<urn:uuid:d7f0d31d-965f-4044-b89d-918af2f50eb9>\",\"WARC-IP-Address\":\"162.159.135.42\",\"WARC-Target-URI\":\"https://setscholars.net/algorithm-in-c-decrease-key-and-delete-node-operations-on-a-fibonacci-heap/\",\"WARC-Payload-Digest\":\"sha1:FEMONMP4ROINRX7ASZI6KM3YESB3OBIQ\",\"WARC-Block-Digest\":\"sha1:5WXGOYPF6EFTRX5CLNBDMFLMFHY7A3OB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499801.40_warc_CC-MAIN-20230130034805-20230130064805-00682.warc.gz\"}"}
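The Cut / Cascading-Cut logic at the heart of decrease-key can be sketched compactly. The Python below is a minimal illustration, not a full Fibonacci heap: the root list is a plain list, and consolidation, extract-min, and degree bookkeeping are omitted. The tree in the test mirrors the article's worked example (7 above 24 above {46, 35}), simplified by hanging 35 directly under 24:

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.parent = None
        self.children = []
        self.mark = False


class FibHeapSketch:
    def __init__(self):
        self.roots = []  # root list (a plain list instead of a circular one)

    def insert(self, key):
        node = Node(key)
        self.roots.append(node)
        return node

    def find_min(self):
        return min(self.roots, key=lambda n: n.key)

    def _cut(self, x):
        # Cut: detach x from its parent, unmark it, add it to the root list.
        parent = x.parent
        parent.children.remove(x)
        x.parent = None
        x.mark = False
        self.roots.append(x)
        return parent

    def _cascading_cut(self, y):
        # Cascading-Cut: mark an unmarked parent; cut a marked one and recurse
        # upward (written iteratively here).
        while y.parent is not None:
            if not y.mark:
                y.mark = True
                return
            y = self._cut(y)

    def decrease_key(self, x, new_key):
        if new_key > x.key:
            raise ValueError("new key must not exceed the current key")
        x.key = new_key
        parent = x.parent
        if parent is not None and x.key < parent.key:
            self._cut(x)
            self._cascading_cut(parent)
```

Delete-node then follows the article's recipe: decrease the key to -infinity and extract the minimum.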
https://www.education.com/activity/article/mathematical-card-trick/
[ "# A Mathematical Card Trick\n\n### What You Need:\n\n• Deck of playing cards\n• Pencil and scratch paper (for computation)\n\n### What You Do:\n\n1. Find someone to trick.\n2. Ask that person to pick a card from the deck and keep it secret.\n3. Have them double the face value of the card (aces = 1, jacks = 11, queens = 12, and kings = 13).\n4. Have them add 3 to the result.\n5. Ask them to multiply this by 5.\n6. Have them add 1 if the card is a club, 2 if it is a diamond, 3 if it is a heart, and 4 if it is a spade.\n7. Ask them to tell you their number.\n8. To predict the card, subtract 15 from the final total. The right digit of the answer represents the suit of the card (1 = club, 2 = diamond, 3 = heart, 4 = spade). The left digit or digits give the number value of the card. For example, if the result is 83, the card is the 8 of hearts. If the result is 134, the card is the king of spades.\n\nCan you figure out how this trick works?" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8823688,"math_prob":0.9825431,"size":4490,"snap":"2019-43-2019-47","text_gpt3_token_len":1004,"char_repetition_ratio":0.19326794,"word_repetition_ratio":0.022008253,"special_character_ratio":0.20133631,"punctuation_ratio":0.104218364,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9764639,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-15T16:55:34Z\",\"WARC-Record-ID\":\"<urn:uuid:6333655a-b5d2-4f54-81a9-54567b12dd95>\",\"Content-Length\":\"289276\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d620fd43-52f3-4665-8d7c-44b0885f007c>\",\"WARC-Concurrent-To\":\"<urn:uuid:cc8e10a6-7433-47ab-b038-f75c9295816b>\",\"WARC-IP-Address\":\"34.232.181.202\",\"WARC-Target-URI\":\"https://www.education.com/activity/article/mathematical-card-trick/\",\"WARC-Payload-Digest\":\"sha1:YVYJXZMW4LKC4BYOIUMBRMNAKVM23ETC\",\"WARC-Block-Digest\":\"sha1:ELZO43ZGJAWPMQJ2OWRFSWGJBRMJLH4C\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496668682.16_warc_CC-MAIN-20191115144109-20191115172109-00450.warc.gz\"}"}
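Why the trick works: the spectator computes (2v + 3) × 5 + s = 10v + s + 15 for face value v and suit code s, so subtracting 15 leaves the value in the tens place and the suit in the ones place. A quick Python check (the helper names are mine, not from the article):

```python
def spectator_total(value, suit):
    """The spectator's arithmetic: double, add 3, multiply by 5, add the suit
    code (1 = club, 2 = diamond, 3 = heart, 4 = spade)."""
    return (2 * value + 3) * 5 + suit


def predict_card(total):
    """The magician's step: subtract 15, then read off value and suit.
    Works because total - 15 == 10 * value + suit."""
    n = total - 15
    return n // 10, n % 10
```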
https://qiskit.org/documentation/stubs/qiskit_aer.noise.thermal_relaxation_error.html
[ "# qiskit_aer.noise.thermal_relaxation_error\n\nthermal_relaxation_error(t1, t2, time, excited_state_population=0)\n\nReturn a single-qubit thermal relaxation quantum error channel.\n\nParameters\n• t1 (double) – the $$T_1$$ relaxation time constant.\n\n• t2 (double) – the $$T_2$$ relaxation time constant.\n\n• time (double) – the gate time for relaxation error.\n\n• excited_state_population (double) – the population of $$|1\\rangle$$ state at equilibrium (default: 0).\n\nReturns\n\na quantum error object for a noise model.\n\nReturn type\n\nQuantumError\n\nRaises\n\nNoiseError – If noise parameters are invalid.\n\n• For parameters to be valid $$T_1$$ and $$T_2$$ must satisfy $$T_2 \\le 2 T_1$$.\n• If $$T_2 \\le T_1$$ the error can be expressed as a mixed reset and unitary error channel.\n• If $$T_1 < T_2 \\le 2 T_1$$ the error must be expressed as a general non-unitary Kraus error channel." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5346468,"math_prob":0.98254436,"size":877,"snap":"2022-40-2023-06","text_gpt3_token_len":247,"char_repetition_ratio":0.15005727,"word_repetition_ratio":0.0,"special_character_ratio":0.2896237,"punctuation_ratio":0.120567374,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9857659,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-30T12:12:13Z\",\"WARC-Record-ID\":\"<urn:uuid:ddf08b51-4742-4956-ba07-eeb9bffbaff9>\",\"Content-Length\":\"18422\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:03b0ef67-881f-4ea3-9999-43a54935d828>\",\"WARC-Concurrent-To\":\"<urn:uuid:4851fb8e-2f6f-44f8-a048-829ae88f71ea>\",\"WARC-IP-Address\":\"104.18.10.227\",\"WARC-Target-URI\":\"https://qiskit.org/documentation/stubs/qiskit_aer.noise.thermal_relaxation_error.html\",\"WARC-Payload-Digest\":\"sha1:3EVA2CU2QC744S5MGLPTMURKLVBLFFBI\",\"WARC-Block-Digest\":\"sha1:7HC7FHWNSRPGLLH3LLYNU2CW5EICYF4I\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499816.79_warc_CC-MAIN-20230130101912-20230130131912-00136.warc.gz\"}"}
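The parameter constraints in the notes can be checked without Qiskit. The helper below is a hypothetical illustration of that classification, not part of the qiskit_aer API:

```python
def relaxation_channel_kind(t1, t2):
    """Classify the channel implied by the T1/T2 conditions documented above."""
    if t2 > 2 * t1:
        raise ValueError("invalid parameters: T2 must satisfy T2 <= 2*T1")
    if t2 <= t1:
        return "mixed reset and unitary error channel"
    # T1 < T2 <= 2*T1
    return "general non-unitary Kraus error channel"
```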
https://answers.everydaycalculation.com/gcf/630-360
[ "Solutions by everydaycalculation.com\n\n## What is the GCF of 630 and 360?\n\nThe GCF of 630 and 360 is 90.\n\n#### Steps to find the GCF\n\n1. Find the prime factorization of 630:\n630 = 2 × 3 × 3 × 5 × 7\n2. Find the prime factorization of 360:\n360 = 2 × 2 × 2 × 3 × 3 × 5\n3. To find the GCF, multiply all the prime factors common to both numbers:\n\nTherefore, GCF = 2 × 3 × 3 × 5\n4. GCF = 90\n\nMathStep (Works offline)", null, "Download our mobile app and learn how to find the GCF of up to four numbers in your own time:" ]
[ null, "https://answers.everydaycalculation.com/mathstep-app-icon.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.76950717,"math_prob":0.999416,"size":606,"snap":"2020-24-2020-29","text_gpt3_token_len":199,"char_repetition_ratio":0.1295681,"word_repetition_ratio":0.0,"special_character_ratio":0.43564355,"punctuation_ratio":0.07826087,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9966435,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-06T13:37:11Z\",\"WARC-Record-ID\":\"<urn:uuid:91827aec-1914-4f6e-9e85-fb321c8d138b>\",\"Content-Length\":\"6102\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:88384d16-2ad0-4b77-8753-7b321072886a>\",\"WARC-Concurrent-To\":\"<urn:uuid:faa49f8e-5621-453a-b5e1-593680d8f102>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/gcf/630-360\",\"WARC-Payload-Digest\":\"sha1:J7YNADOUBHYJM4JHHD7BKKSVTWUVEMHJ\",\"WARC-Block-Digest\":\"sha1:L4SAE2OJW77LMPYV4UNQSHTVNZWLQICY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655880616.1_warc_CC-MAIN-20200706104839-20200706134839-00065.warc.gz\"}"}
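The same answer can be computed with Euclid's algorithm, which avoids factoring entirely. A quick Python check (a hypothetical helper, equivalent to the prime-factor method above):

```python
def gcf(a, b):
    """Greatest common factor via Euclid's algorithm: repeatedly replace
    (a, b) with (b, a mod b) until the remainder is zero."""
    while b:
        a, b = b, a % b
    return a
```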
http://www.ens-lyon.fr/DI/en/apprentissage/
[ "Machine Learning\n\nCourse offered in the second semester of the M1.\n\nThis course gives a general introduction to Machine Learning, from algorithms to theoretical aspects in\nStatistical Learning Theory.\n\nTopics covered:\n• General introduction to Machine Learning: learning settings, curse of dimensionality, overfitting/underfitting, etc.\n• Overview of Supervised Learning Theory: True risk versus empirical risk, loss functions, regularization,\nbias/variance trade-off, complexity measures, generalization bounds.\n• Linear/Logistic/Polynomial Regression: batch/stochastic gradient descent, closed-form solution.\n• Sparsity in Convex Optimization.\n• Support Vector Machines: large margin, primal problem, dual problem, kernelization, etc.\n• Neural Networks, Deep Learning.\n• Theory of boosting: Ensemble methods, Adaboost, theoretical guarantees.\n• Non-parametric Methods (K-Nearest-Neighbors)\n• Metric Learning\n• Optimal Transport\n\nTeaching methods: Lectures and Lab sessions.\nForm(s) of Assessment: written exam (50%) and project (50%)\n\nReferences:\n– Statistical Learning Theory, V. Vapnik, Wiley, 1998\n– Machine Learning, Tom Mitchell, MacGraw Hill, 1997\n– Pattern Recognition and Machine Learning, M. Bishop, 2013\n– Convex Optimization, Stephen Boyd & Lieven Vandenberghe, Cambridge University Press, 2012.\n– On-line Machine Learning courses: https://www.coursera.org/\n\nExpected prior knowledge: basic mathematics and statistics – convex optimization" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7292739,"math_prob":0.71949416,"size":1466,"snap":"2023-14-2023-23","text_gpt3_token_len":318,"char_repetition_ratio":0.13406293,"word_repetition_ratio":0.0,"special_character_ratio":0.20600273,"punctuation_ratio":0.23206751,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9640153,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-06T20:49:28Z\",\"WARC-Record-ID\":\"<urn:uuid:e0419b25-6cee-427d-ab88-2e1caff7d5d7>\",\"Content-Length\":\"48189\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:47fcc628-a370-45a0-974b-bcaeb69fd2be>\",\"WARC-Concurrent-To\":\"<urn:uuid:3c2e1c94-fe4b-4268-9274-0b9a9aadbfdb>\",\"WARC-IP-Address\":\"140.77.168.21\",\"WARC-Target-URI\":\"http://www.ens-lyon.fr/DI/en/apprentissage/\",\"WARC-Payload-Digest\":\"sha1:GSK2PT33REYFZOV43CRCH7KXZGF2YMPC\",\"WARC-Block-Digest\":\"sha1:X4R4MNKELCFKCDQM4YPBLDN6BSGUE3AZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224653071.58_warc_CC-MAIN-20230606182640-20230606212640-00071.warc.gz\"}"}
https://cstheory.stackexchange.com/questions/4167/instance-of-fpt-reductions-that-is-not-a-polynomial-time-reduction
[ "# Instance of FPT-reductions that is not a polynomial-time reduction\n\nIn parameterized complexity people use fixed-parameter-tractable (FPT) reductions to prove W[t]-hardness. Theoretically an FPT-reduction is not a polynomial-time reduction, since it can run exponentially in the parameter k. But in practice all the FPT-reductions I've seen are p-time reductions, which means W[t]-hardness proofs almost always imply NP-completeness proofs.\n\nI wonder if someone can give me an FPT-reduction that indeed runs exponentially in the parameter $k$. Thanks.\n\nAn early example is the W-hardness proof for Tournament Dominating Set (Theorem 4.1 in [1]). The reduction is from Dominating Set and it constructs a tournament with $O(2^k n)$ vertices, where $n$ is the number of vertices of the dominating set instance and $k$ is the parameter.\n\n[1]: Rodney G. Downey and Michael R. Fellows. Parameterized computational feasibility. In P. Clote and J.B. Remmel, editors, Proceedings of Feasible Mathematics II, pages 219-244. Birkhauser, 1995.\n\n• A (maybe different) proof of the same statement can also be found in the book \"Parameterized Complexity Theory\" from J. Flum and M. Grohe, Theorem 7.17. Jan 7, 2011 at 19:01\n\nThe following paper contains reductions for various parameterizations of Closest Substring where the running time depends exponentially or double exponentially on the parameter (and this dependence seems to be unavoidable).\n\nD. Marx. Closest substring problems with small distances. SIAM Journal on Computing, 38(4):1382-1410, 2008.\n\nAs a complement to the other answers, the following Proposition shows that the corresponding notions of reducibility are incomparable:\n\nProposition [2, Prop. 2.8]. There are parameterized problems $(Q,k)$ and $(Q',k')$ such that $(Q,k) <^{\\mathrm{fpt}} (Q',k')$ and $Q' <^{\\mathrm{ptime}}\\ Q$.\n\nHere, $<^{\\mathrm{fpt}}$ stands for fpt-reduction and $<^{\\mathrm{ptime}}$ stands for polynomial-time reduction.\n\n[2]: J. Flum, M. Grohe. Parameterized Complexity Theory. Springer (2006)\n\nProbably this is not an intended answer, but how about (a derandomized variant of) color-coding for the k-path problem? http://en.wikipedia.org/wiki/Color-coding\n\nThere, one transforms an instance of the k-path problem to instances of the colorful k-path problem by an fpt-reduction with super-polynomial dependency on k. (One creates multiple instances, but they can be seen as one big instance.) Since the colorful k-path problem can be solved in fpt time by dynamic programming, we can conclude the k-path problem belongs to FPT.\n\nAnother example of such a reduction is the hardness proof for VC-dimension. See \"Parameterized learning complexity\" by Downey, Evans and Fellows." ]
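The dynamic program alluded to in the color-coding answer above can be sketched concretely. This is our own illustration (the graph encoding and names are made up, not from the answer): given a graph whose vertices already carry a k-coloring, a DP over color subsets decides in O(2^k · k · |E|) time whether some path uses every color exactly once, i.e. the colorful k-path problem is fpt in k.

```python
def colorful_path_exists(adj, color, k):
    """Decide whether the graph has a 'colorful' path using each of the
    k colors exactly once.  adj: dict vertex -> iterable of neighbours;
    color: dict vertex -> int in [0, k).  Runs in O(2^k * k * |E|) time,
    fixed-parameter tractable in k once the coloring is fixed."""
    # reach[v] = color subsets S (bitmasks) such that some colorful path
    # ending at v uses exactly the colors in S.
    reach = {v: {1 << color[v]} for v in adj}
    for _ in range(k - 1):                  # a colorful k-path has k - 1 edges
        new = {v: set(s) for v, s in reach.items()}
        for u in adj:
            for v in adj[u]:
                bit = 1 << color[v]
                for S in reach[u]:
                    if not S & bit:         # extend only if v's color is unused
                        new[v].add(S | bit)
        reach = new
    full = (1 << k) - 1
    return any(full in s for s in reach.values())
```

Distinct colors force distinct vertices, so no explicit simple-path bookkeeping is needed; color-coding then repeats this check over enough random colorings to hit a colorful copy of some k-path with high probability.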
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8410974,"math_prob":0.9677383,"size":478,"snap":"2022-27-2022-33","text_gpt3_token_len":109,"char_repetition_ratio":0.15822785,"word_repetition_ratio":0.0,"special_character_ratio":0.19037656,"punctuation_ratio":0.08235294,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9977307,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-28T16:06:17Z\",\"WARC-Record-ID\":\"<urn:uuid:af59d631-8338-43f1-ba1e-27f47bb7e0c8>\",\"Content-Length\":\"264314\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f1a24792-ddd4-4e35-8426-e1beac7fe72c>\",\"WARC-Concurrent-To\":\"<urn:uuid:bf4cf527-cc81-444a-b4c1-5d254a083596>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://cstheory.stackexchange.com/questions/4167/instance-of-fpt-reductions-that-is-not-a-polynomial-time-reduction\",\"WARC-Payload-Digest\":\"sha1:UOEQ2AY4LY3YQC5FHON2A6RYJEJBVRV6\",\"WARC-Block-Digest\":\"sha1:Z2YGQD2C2KR2TR7I7B7CU32CQIGAVYF6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103556871.29_warc_CC-MAIN-20220628142305-20220628172305-00274.warc.gz\"}"}
http://softmath.com/algebra-help/9th-grade-math-problems.html
[ "English | Español\n\nTry our Free Online Math Solver!", null, "Online Math Solver\n\nWhat our customers say...\n\nThousands of users are using our software to conquer their algebra homework. Here are some of their experiences:\n\nYou've been extremely patient and helpful. I'm a \"late bloomer\" in the college scene, and attempting math classes online are quite challenging to say the least! Thank you!\nA.R., Arkansas\n\nMy parents are really happy. I brought home my first A in math yesterday and I know I couldnt have done it without the Algebrator.\nDebra Ratto, CO\n\nMy son has used Algebrator through his high-school, and it seems he will be taking it to college as well (thanks for the free update, by the way). I really like the fact that I can depend on your company to constantly improve the software, rather than just making the sale and forgetting about the customers.\nB.M., Vermont\n\nSearch phrases used on 2009-09-15:\n\nStudents struggling with all kinds of algebra problems find out that our software is a life-saver. Here are the search phrases that today's searchers used to find our site. 
Can you find yours among them?\n\n• Multiply Radicals Online\n• online 9th grade math quiz\n• teachers algebra test\n• saxon math 6 5 answer key\n• simplest radical form calculator\n• pool algera\n• where can i find kumon tests online\n• 7th grade pre-algebra online\n• simplify algebraic expressions online\n• tx*30 calculator online\n• lessons in kumon\n• exponential reprensentation of radical\n• algebra 2 solving inequalities powerpoint\n• QUARDRATIC EQUATIONs word problems\n• simplifying radicals calculator\n• solving multivariable equations\n• algebra solver\n• kumon algebra\n• fraction simplifier\n• algebra 2 prentice hall help\n• math papers for 3rd grade\n• examples of radical numbers\n• year 10 algebra test\n• simplifying square roots worksheet\n• savings formula\n• logarithms powerpoint\n• 10th maths formulas tamil\n• 3rd power inequalities\n• solve algebraic equations online\n• Solving Radical Expressions\n• printable saxon math worksheets\n• how to simplify the algebra expression\n• simplifying radicals chart\n• firstinmath\n• algebra formula chart graph\n• Online Inequalities Calculator\n• simultaneous ODE\n• Solve My Algebra for Me\n• glencoe geometry texas cheat\n• free printable paper\n• grade 8 math ontario\n• matrix solver online\n• sum of cubes math\n• year 7 maths worksheets\n• factoring cubic equations\n• scaled math problems\n• algebraic expresion\n• 3rd power in a quadratic function\n• maths programme equation\n• can ti 89 do algebra\n• 8th grade math worksheets\n• Math riddle. 
Cubes root\n• LCM GCF in Math elementary worksheets\n• fill iln answer on graphing and transforming functions\n• trinomial factorer\n• GCF and LCM Worksheets\n• integration step solver\n• gcm lcf\n• solve algebra online\n• Pre-Algebra Readiness Test\n• EXCEL VBA NUMERICAL INTERPOLATION\n• free math formula charts\n• examples of math investigatory project\n• grade percentage calculator\n• fermats little thereom multiplicative inverse\n• what is the best math trivia\n• algebra trivia questions\n• elementary algebra worksheets\n• laplace transform solver\n• 8th grade geometry worksheets\n• solving binomial equations\n• ez grader scale\n• year 8 maths problems online\n• math answers for algebra 1\n• solve matrix, 6th grade\n• online laplace calculator\n• algebra inequality calculator\n• grade 11 math\n• iq math worksheet\n• quick maths problems\n• solving inequalities powerpoint\n• how to cheat and get all a's in 7th grade\n• solving quadratic equation by square root property online\n• 8th grade algebra test check yourself\n• show work calculator\n• printable worksheet in math for 8th grader\n• online integral calculator\n• 5th grade algebra\n• free 8th grade taks math work sheets\n• using algebra in life\n• Math worksheets for associative properties\n• understanding integers worksheets\n• 5th grade math practice 1.4\n• rational expressions solver\n• prime and composite worksheets\n Prev Next" ]
[ null, "http://softmath.com/images/video-pages/solver-top.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.84507704,"math_prob":0.9466276,"size":3893,"snap":"2019-43-2019-47","text_gpt3_token_len":956,"char_repetition_ratio":0.14862433,"word_repetition_ratio":0.0,"special_character_ratio":0.21628565,"punctuation_ratio":0.051839463,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9979063,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-17T23:16:02Z\",\"WARC-Record-ID\":\"<urn:uuid:d040c37f-c6e6-4570-8b25-7af54b7c9c9c>\",\"Content-Length\":\"88895\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ced4db12-a721-49a9-88a0-7b1c23d5afed>\",\"WARC-Concurrent-To\":\"<urn:uuid:33fe52b0-4cb2-40d8-ad0e-0ca7fd23a44f>\",\"WARC-IP-Address\":\"52.43.142.96\",\"WARC-Target-URI\":\"http://softmath.com/algebra-help/9th-grade-math-problems.html\",\"WARC-Payload-Digest\":\"sha1:B5AIMZQSQYS3UY2IYQ5A3QJWOHXED4PY\",\"WARC-Block-Digest\":\"sha1:JN2CMI7VSX7BY4DUASEW6EV7FTK3UPKY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986677230.18_warc_CC-MAIN-20191017222820-20191018010320-00054.warc.gz\"}"}
https://javapapers.com/java/java-numeric-promotion/
[ "# Java Numeric Promotion\n\nThis Java article discusses the numeric promotion that happens when using operators. Similar to the last Java puzzle on floating point precision, this article will also raise some eyebrows. Last week a regular reader of Javapapers, Palani Kumar, wrote to me and asked a question,\n\nCan you guess the output of the following Java program?\n\n```public class NumericPromotion {\npublic static void main(String... args){\n//part 1\nfinal byte a = 1;\nfinal byte b = 2;\nbyte c = a + b;\nSystem.out.println(c);\n\n//part 2\nbyte i = 1;\nbyte j = 2;\nbyte k = i + j;\nSystem.out.println(k);\n}\n}\n```", null, "## Decoding the Puzzle\n\nThere are two parts to it. Absolutely no problem with the first part. Two Java final byte variables are created. Then they are added into a byte variable. This addition statement gets processed at compile time and since both its operands are final variables, the ‘byte c’ is instantiated and the constant value ‘3’ is assigned.\n\nSo what we have here is a narrowing conversion, and it is possible if all the operands are constant values in an expression. As per the Java Language Specification (JLS),\n\nIn addition, if the expression is a constant expression (§15.28) of type byte, short, char, or int: – A narrowing primitive conversion may be used if the type of the variable is byte, short, or char, and the value of the constant expression is representable in the type of the variable\n\n## Problem Area\n\nNow to part 2. Here we have a problem! We get the following error on compilation of the above Java program.\n\n```NumericPromotion.java:10: error: possible loss of precision\nbyte k = i + j;\n^\nrequired: byte\nfound: int\n1 error\n```\n\nThe statement the Java compiler complains about is a normal arithmetic statement. We are adding two byte variables and assigning the result to another byte variable. What's wrong with this? The error says, “required: byte and found: int”. How come it found ‘int’? 
We have never declared anything as int in this program.\n\n## Numeric Promotion by Operator\n\nHere is what happens. There is no addition operator defined for byte. The addition operator used here promotes its operands to ‘int’ automatically. Once promoted, it becomes an addition of two int values and so the result is an int value. Now the issue is that this resulting int value cannot be assigned to a byte variable k.\n\nNarrowing conversion is not done automatically by the Java runtime. We need to explicitly cast the value to byte. If you want to know about narrowing conversion and cast in Java, please go read this previous tutorial. It is a comprehensive tutorial and I seriously recommend it to you as I am sure you will find something new there. As per the JLS,\n\nWhen an operator applies binary numeric promotion to a pair of operands, each of which must denote a value that is convertible to a numeric type, the following rules apply, in order, using widening conversion (§5.1.2) to convert operands as necessary:\n\n• If any of the operands is of a reference type, unboxing conversion\n(§5.1.8) is performed. Then:\n• If either operand is of type double, the other is converted to double.\n• Otherwise, if either operand is of type float, the other is converted to\nfloat.\n• Otherwise, if either operand is of type long, the other is converted to\nlong.\n• Otherwise, both operands are converted to type int.\n\n## Comments on \"Java Numeric Promotion\"\n\n1. kciejek says:\n\nHi Joe,\nso second part would be?\n\nbyte i = 1;\nbyte j = 2;\nbyte k = (byte)(i + j);\nSystem.out.println(k);\n\nThanks for great blog.\n\n2. M.S.priya says:\n\nit was nice……thank u\n\n3. Joe says:\n\nYes that’s right kciejek. Thanks.\n\n4. Pratik says:\n\nbefore reading thru this tutorial I just thought that it’s just the way it is, but I didn’t know about the “final bytes”\n\nThanks Joe\n\n5. Joe says:\n\nWelcom Pratik.\n\n6. 
Ram says:\n\nGood learning … Thanks joe\n\nComments are closed for \"Java Numeric Promotion\"." ]
[ null, "https://javapapers.com/wp-content/uploads/2014/07/Promotion.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.87103885,"math_prob":0.8817707,"size":3719,"snap":"2022-40-2023-06","text_gpt3_token_len":848,"char_repetition_ratio":0.1192463,"word_repetition_ratio":0.048854962,"special_character_ratio":0.2328583,"punctuation_ratio":0.14267015,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97075135,"pos_list":[0,1,2],"im_url_duplicate_count":[null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-06T22:33:15Z\",\"WARC-Record-ID\":\"<urn:uuid:570e9074-7eea-47b0-a6a6-6446771d2548>\",\"Content-Length\":\"24594\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1ddf7686-7833-4b99-af9f-3c6160607c94>\",\"WARC-Concurrent-To\":\"<urn:uuid:fb16790c-484a-4082-9609-33a1efd012fa>\",\"WARC-IP-Address\":\"208.97.169.231\",\"WARC-Target-URI\":\"https://javapapers.com/java/java-numeric-promotion/\",\"WARC-Payload-Digest\":\"sha1:F7P6YPGZGEGUYEDTMDE7MHAJZRO4ENPL\",\"WARC-Block-Digest\":\"sha1:NULAXZZOJ7VREWL4KS3IHW2OLN5TCNXM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500365.52_warc_CC-MAIN-20230206212647-20230207002647-00253.warc.gz\"}"}
https://dsp.stackexchange.com/questions/34597/meaning-of-hop-size-in-filter-bank-interpretation-of-short-term-fourier-transfor
[ "# Meaning of Hop-Size in Filter Bank Interpretation of Short Term Fourier Transform\n\nThe Short Term Fourier Transform (STFT) is used to obtain the time-varying spectrum from a signal. There are two ways to understand it - the overlap-add interpretation, and the filter bank interpretation. The overlap-add method is where the signal is cut up into windowed segments in time and the DFT of each segment is taken. The window is advanced forward by R samples in time, where R is called the hop-size.\n\nThe filter bank interpretation is where the window is seen as a low-pass filter that is shifted in frequency to each analysis frequency. The hop-size here is interpreted as a down-sampling in frequency.\n\nI am unable to understand this interpretation and would like to know how R is interpreted as down-sampling of the signal in frequency. This would also clarify the constraints on choosing a hop-size R, to prevent aliasing in frequency (as having a hop-size R means down-sampling in frequency).\n\nYour message was apparently unfinished ('When reading about the DCT...'). I thus will be starting from it.\n\nThe JPEG compression format is an example of hopping. A DCT-filter bank (Discrete Cosine Transform) is applied on the whole image, and subsampled by $8\\times 8$ (in both directions). Another interpretation is via a rectangular window, implemented in a fake overlap-add version, as the overlap is zero ($8\\times 8$ blocks are almost disjoint). The DCT has a very simple interpretation, as it consists of an infinite continuous cosine basis, discretized and windowed with a rectangular window.\n\nThe traditional Short Term Fourier Transform (STFT) is an improvement, with the use of an overlapping window. 
In the continuous setting, an STFT with a window $h$ can be inverted with a lot of dual windows $g$ (under very mild assumptions on $h$ and $g$).\n\nGoing to discrete implementations, certain choices of overlap, hop and window can lead to easy inversion. Inversion, or synthesis, is the turning point: you can \"analyse\" any signal with any window, or hop. But not all designs allow synthesis or inversion, although it should be permitted more easily than in the critical and orthogonal framework, because of the potential redundancy.\n\nThere are instances of windows allowing discrete, non-redundant inversion, like the Modified DCT, the LOT (lapped orthogonal transform), the MLT (modulated lapped transform), and a handful of others (GenLOT, GULLOT). They are used for instance in the mp3 (mpeg-1 layer 3) style of audio compression.\n\nIn the above case, the hop is equal to the number of channels, similarly to the case of the JPEG DCT: $8$-channel DCT, and a jump of $8$ pixels in each direction. Allowing only one window for all the channels is somewhat restrictive. The most efficient framework is that of oversampled filter banks:", null, "There is no direct concept of a global window for all $M$-band filters, $M\\ge N$. Instead, some windowing is embedded in each one. In the above picture, the $N$ corresponds to your $R$. My understanding is that the downsampling takes place in the time-domain (hopping), but it results in a shift in frequency (and aliasing), as filter $H_i$ outputs are sent to the base-band.\n\nThen, given a set of analysis filters $H_i$ (including some overlapping), and if $M$ is greater than $N$, you can very often cancel all the aliasing induced by the $N$ downsampling with an infinity of inverse filter banks. One issue is to find ONE suitable inverse.\n\nThis is discussed for instance in Optimization of Synthesis Oversampled Complex Filter Banks, J. 
Gauthier et al., IEEE Transactions on Signal Processing, 2009:\n\nAn important issue with oversampled FIR analysis filter banks (FBs) is to determine inverse synthesis FBs, when they exist. Given any complex oversampled FIR analysis FB, we first provide an algorithm to determine whether there exists an inverse FIR synthesis system. We also provide a method to ensure the Hermitian symmetry property on the synthesis side, which is serviceable to processing real-valued signals. As an invertible analysis scheme corresponds to a redundant decomposition, there is no unique inverse FB. Given a particular solution, we parameterize the whole family of inverses through a null space projection. The resulting reduced parameter set simplifies design procedures, since the perfect reconstruction constrained optimization problem is recast as an unconstrained optimization problem. The design of optimized synthesis FBs based on time or frequency localization criteria is then investigated, using a simple yet efficient gradient algorithm.\n\nI have not seen it in a book but it seems to me that if the STFT definition is:$$X(\\tau, \\omega)=\\int_{-\\infty}^{\\infty}x(t)w(t-\\tau)e^{-i\\omega t}dt$$ Then we can define the filter as $$h(t)=w(t)e^{-i\\omega t}$$ where $$w(t)$$ can be interpreted as a low-pass filter and the exponent as frequency shifting method (see Shift Theorem). Next we define $$g(t)=h(-t)$$ where both have the same frequency content with a phase shift between them (see the conjugation property of the Fourier Transform). 
Thus $$X(\\tau, \\omega)=e^{-i\\omega \\tau}\\int_{-\\infty}^{\\infty}x(t)w(t-\\tau)e^{-i\\omega (t - \\tau)}dt=e^{-i\\omega \\tau}\\int_{-\\infty}^{\\infty}x(t)g(\\tau-t)dt$$\n\nand here we got the convolution of the signal with a bandpass function $$g(t)$$, with some phase shift (which we might correct or neglect) $$X(\\tau, \\omega)=e^{-i\\omega \\tau}(x*g)(\\tau)$$\n\nSince we are implementing the Discrete STFT, $$\\tau$$ resolution is given by the hop, $$R$$." ]
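The frame and filter-bank views discussed in this thread can be checked against each other numerically. The sketch below is our own (the Hann window, the sizes and the bin index are arbitrary choices): hopping the windowed DFT frames by R samples produces exactly the same numbers as correlating the signal with the modulated window, a band-pass filtering, and then keeping every R-th output sample.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
w = np.hanning(16)            # analysis window (lowpass prototype)
N, R, k = 16, 4, 3            # window/DFT length, hop size, frequency bin

# Frame (overlap) view: X[m] = sum_n x[m*R + n] * w[n] * exp(-2j*pi*k*n/N)
mod = w * np.exp(-2j * np.pi * k * np.arange(N) / N)   # modulated window
frames = np.array([np.sum(x[m * R : m * R + N] * mod)
                   for m in range((len(x) - N) // R + 1)])

# Filter-bank view: convolving with the time-reversed modulated window is
# correlation with it; keep the 'valid' outputs, then downsample by R.
full = np.convolve(x, mod[::-1])[N - 1 : len(x)]
bank = full[::R]
```

This uses the convention where the modulation stays attached to the window, so the two computations agree term by term; other conventions differ only by the e^{-iωτ} phase factor derived in the last answer.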
[ null, "https://i.stack.imgur.com/CkzAQ.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9020366,"math_prob":0.9874592,"size":3606,"snap":"2022-05-2022-21","text_gpt3_token_len":789,"char_repetition_ratio":0.098556355,"word_repetition_ratio":0.0,"special_character_ratio":0.2046589,"punctuation_ratio":0.1168437,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9957587,"pos_list":[0,1,2],"im_url_duplicate_count":[null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-21T21:31:16Z\",\"WARC-Record-ID\":\"<urn:uuid:f0fbf054-84d1-47cc-9f23-7101aefcb024>\",\"Content-Length\":\"237247\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2e50858e-eec1-4cbb-81f6-0fe6115f8ec6>\",\"WARC-Concurrent-To\":\"<urn:uuid:0f8463a2-cb61-4fa7-adc9-475d8926f8dd>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://dsp.stackexchange.com/questions/34597/meaning-of-hop-size-in-filter-bank-interpretation-of-short-term-fourier-transfor\",\"WARC-Payload-Digest\":\"sha1:GCASVCEIJOA47BOPVGBXSLGYSY7CP4DO\",\"WARC-Block-Digest\":\"sha1:TDBD66SCZOBB2WJW4L4BZYDD35STXYIQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662541747.38_warc_CC-MAIN-20220521205757-20220521235757-00712.warc.gz\"}"}
http://michaelnielsen.org/polymath1/index.php?title=Equal-slices_measure
[ "# Equal-slices measure\n\n## Definition\n\nThe equal-slices measure $\\mu(A)$ of a set $A \\subset [3]^n$ is defined by the formula\n\n$\\mu(A) := \\sum_{(a,b,c) \\in \\Delta_n} \\frac{|A \\cap \\Gamma_{a,b,c}|}{|\\Gamma_{a,b,c}|}$\n\nwhere $\\Delta_n := \\{ (a,b,c) \\in {\\Bbb Z}_+^3: a+b+c=n\\}$ is the triangular grid and $\\Gamma_{a,b,c} \\subset [3]^n$ is the set of strings with exactly a 1s, b 2s, and c 3s. Thus every slice $\\Gamma_{a,b,c}$ has measure 1 (hence the name equal-slices), and the entire cube has measure $\\frac{(n+1)(n+2)}{2}$. Dividing the equal-slices measure by $\\frac{(n+1)(n+2)}{2}$ gives the equal-slices density.\n\nExample: in $[3]^2$, the diagonal points 11, 22, 33 each have equal-slices measure 1 (and equal-slices density 1/6), whereas the other six off-diagonal points have equal-slices measure 1/2 (and equal-slices density 1/12). The total equal-slices measure of $[3]^2$ is 6 (and the equal-slices density is of course 1).\n\nThere is a probabilistic interpretation of the equal-slices density of a set $A$. Randomly shuffle the $n$ indices, and then randomly pick $(a,b,c) \\in \\Delta_n$. The probability that the string $1^a 2^b 3^c$ lies in the shuffled version of $A$ is the equal-slices density of $A$.\n\nThe LYM inequality asserts that any line-free subset of $[2]^n$ has equal-slices measure at most 1. The analogue of this for k=3 is the hyper-optimistic conjecture.\n\nThe DHJ(3) conjecture,\n\nDHJ(3), Version 1. For every $\\delta \\gt 0$ there exists n such that every subset $A \\subset [3]^n$ of density at least $\\delta$ contains a combinatorial line.\n\nis equivalent to an equal-slices version:\n\nDHJ(3), Version 3. For every $\\delta \\gt 0$ there exists n such that every subset $A \\subset [3]^n$ of equal-slices density at least $\\delta$ contains a combinatorial line.\n\n## Version 3 implies Version 1\n\nSuppose that A has density $\\geq \\delta$ in the usual sense. 
Let m be such that every subset of $[3]^m$ of equal-slices density $\\geq \\delta/2$ contains a combinatorial line. Now randomly embed $[3]^m$ into $[3]^n$ by choosing m variable coordinates and fixing the rest. We may suppose that every point in A has $n/3+O(\\sqrt{n})$ of each coordinate value and that $m \\ll \\sqrt{n}$. Therefore, changing coordinates hardly changes the density of a slice. It follows that each point of A is in approximately the same number of these random subspaces. Therefore, by averaging, there is a random subspace inside which A has equal-slices density at least $\\delta/2$, and we are done. (We could think of it Terry’s way: as we move the random subspace around, what we effectively have is a bunch of random variables, each with mean approximately $\\delta$, so by linearity of expectation we’ll get equal-slices density at least $\\delta/2$ at some point, whatever the measure is.)\n\n## Version 1 implies Version 3\n\nThis implication follows from passing between measures. Roughly speaking:\n\nSuppose that A has density $\\geq \\delta$ in the equal-slices sense. By the first moment method, this means that A has density $\\gg \\delta$ on $\\gg \\delta$ of the slices.\n\nLet m be a medium integer (much bigger than $1/\\delta$, much less than n).\n\nPick (a, b, c) at random that add up to n-m. 
By the first moment method, we see that with probability $\\gg \\delta$, A will have density $\\gg \\delta$ on $\\gg \\delta$ of the slices $\\Gamma_{a',b',c'}$ with $a' = a + m/3 + O(\\sqrt{m})$, $b' = b + m/3 + O(\\sqrt{m})$, $c' = c + m/3 + O(\\sqrt{m})$.\n\nThis implies that A has expected density $\\gg \\delta^2$ on a random m-dimensional subspace generated by a 1s, b 2s, c 3s, and m independent wildcards.\n\nApplying Version 1 to that random subspace we obtain the claim.\n\n## Motivation for equal-slices measure\n\n#### Analogies with Sperner's theorem\n\nOne indication that the equal-slices measure is a natural measure to consider is that it plays an important role (implicitly or explicitly) in the standard proofs of Sperner's theorem. Recall that Sperner's theorem is the statement that the largest size of any subset $\\mathcal{A}$ of $[2]^n$ that does not contain two sets A and B with $A\\subset B$ is $\\binom n{\\lfloor n/2\\rfloor}$, the size of the middle layer (or one of the two middle layers if n is odd). Now the equal-slices density on $[2]^n$ can be interpreted as follows: choose a random integer m between 0 and n (chosen uniformly) and a random permutation $\\pi$ of $[n]$, and then take the set $\\{\\pi(1),\\dots,\\pi(m)\\}.$ The probability that this set belongs to $\\mathcal{A}$ is the equal-slices density of $\\mathcal{A}.$\n\nNow we claim that any set $\\mathcal{A}$ of equal-slices density greater than $1/(n+1)$ contains a pair (A,B) with $A\\subset B.$ This is a considerable strengthening of Sperner's theorem, since the largest set with equal-slices measure 1 trivially has size equal to the size of the largest slice. 
And the proof follows almost instantly from the definition of equal-slices density, since if the probability that a random initial segment of a random permutation of $[n]$ lies in $\\mathcal{A}$ is greater than $1/(n+1)$ then there must be some permutation of $[n]$ with at least two initial segments belonging to $\\mathcal{A}.$\n\nThus, Sperner's theorem follows from a combination of the definition of equal-slices measure/density, a simple averaging argument, and the trivial observation that if you have a subset of the set $\\{0,1,\\dots,n\\}$ of density greater than $1/(n+1)$ then you can find a configuration inside that subset of the form $x,x+d$ with $d\\ne 0.$ This last statement can be thought of as \"the one-dimensional corners theorem\".\n\nTherefore, it is not completely unreasonable to hope that if we choose a subset of $[3]^n$ that has positive equal-slices density, then it contains a combinatorial line. This would be the two-dimensional analogue of the strong form of Sperner's theorem given by the above proof.\n\n#### The distribution of a random point in a random combinatorial line\n\nLet us choose a random combinatorial line in $[3]^n$ uniformly from all combinatorial lines. Since each element of $[n]$ has an equal chance of being fixed at 1, fixed at 2, fixed at 3, or a wildcard, there are $4^n$ combinatorial lines. And since the number of each type of element is binomially distributed, and the binomial distribution is strongly concentrated about its mean, if you choose a random combinatorial line, then with high probability there will be roughly $n/4$ of each type of coordinate. Therefore, with high probability, a random point on a random combinatorial line takes one of the three values roughly $n/2$ times and the other two roughly $n/4$ times each.\n\nNow let us choose a random point in $[3]^n.$ Then a similar argument shows that with high probability it will take each value roughly $n/3$ times. 
Therefore, if we let $\\mathcal{A}$ be the set of all sequences with roughly equal numbers of 1s, 2s and 3s, we find that it has density very close to 1, but it contains only a tiny proportion of the combinatorial lines.\n\nThis indicates that the uniform measure on the whole of $[3]^n$ is not completely appropriate for analytic arguments. Does equal-slices density do any better?\n\nOne cannot answer this question without first deciding what one means by a random combinatorial line. The most natural definition seems to be this. First choose a random quadruple of non-negative integers $(a,b,c,r)$ such that $a+b+c+r=n,$ then choose four disjoint sets $A,B,C,D$ of cardinality $a,b,c,r,$ and finally form the combinatorial line $(A\\cup D,B,C),(A,B\\cup D,C),(A,B,C\\cup D)$.\n\nIf we do this, then the marginal distribution of the point $(A\\cup D,B,C)$ is clearly not given by the equal-slices density (since the expected size of $A\\cup D$ is more like $n/2$ than $n/3$). However, the discrepancy between the marginal distribution and the equal-slices density is far less than it is when we do the comparable calculation with the uniform measure. It turns into a mild discrepancy of a kind that occurs with the corners problem as well: it is irritating but not a disaster.\n\n#### Is there a Varnavides theorem?\n\nAs we have seen, the fact that points in typical combinatorial lines are not typical points means that we can find dense subsets of $[3]^n$ that do not contain a positive proportion of all possible combinatorial lines. 
This is worrying if one wants to find an analytic approach to the theorem, because analytic approaches tend to make use not of the hypothesis that a set contains no configurations of the desired kind, but merely that the number of configurations it contains differs by a constant proportion from the expected number of those configurations in a random set.\n\nThe equal-slices measure does not obviously have this defect, though nobody has yet checked whether a stronger statement of the above kind really does hold.\n\n## A useful equivalent definition\n\nAnother way of defining the equal-slices measure of a set $A\\subset^n$ is to choose, uniformly at random, a triple (p,q,r) of non-negative real numbers such that p+q+r=1, to define independent random variables $X_i$ to equal 1 with probability p, 2 with probability q and 3 with probability r, to define $\\mu_{p,q,r}(A)$ to be the probability that $(X_1,\\dots,X_n)\\in A,$ and finally to average over (p,q,r). In other words, we average the measure of A over all trinomial distributions on $^n.$\n\nTo see that this gives the same measure, it is enough to check that it is the same on atoms. So let x be a sequence with a 1s, b 2s and c 3s. The probability that we choose this very sequence in the $\\mu_{p,q,r}$ distribution is $p^aq^br^c.$ For fixed r the average of this quantity over p is $r^c\\int_ 0^{1-r}p^a(1-r-p)^b\\,dp.$ At the time of writing I don't see an easy way of completing the proof and don't feel like a huge calculation. So let me say instead that even if the definition is not equivalent, that's not a problem---it is an interesting and potentially useful definition and it is certainly similar in spirit to equal-slices measure.\n\nI've just looked up the Wikipedia article on beta functions which strongly suggests that the above calculation will work out as one hopes. 
It would be good to tidy up this section by actually doing it.\n\nNote added by Ryan: I would like to confirm that this is true; it should be a consequence of the \"type 1 Dirichlet integral\".\n\n## Another useful equivalent definition\n\nAnother equivalent way to draw from the equal-slices distribution is as follows. Start with a string of $n$ \"dots\" $\\bullet$. Next, place a \"bar\" $\\mid$ randomly in one of the $n+1$ \"slots\" between (and to the left and right of) the dots. Next, place a second bar randomly in one of the $n+2$ slots formed by the string of $n$ dots and one bar. (At this point we have determined the \"slice\".) Next, fill in all the dots to the left of the leftmost bar with $1$'s; fill in all the dots between the two bars with $3$'s (not $2$'s!); and, fill in all the dots to the right of the rightmost bar with $2$'s. Delete the bars. Finally, randomly permute the resulting string of $1$'s, $2$'s, and $3$'s.\n\nWith this viewpoint, it may be easier to understand the joint distribution of the 1-set and the 2-set of a string drawn from equal-slices. Specifically, it is one that is useful for proving density-Sperner's theorem.\n\nFact: Let $z$ be a string drawn from the equal-slices distribution, in the manner described above. Let $x \\in ^n$ be the string that would have been formed had we filled in all the dots to the left of the first bar with $1$'s and all the dots to its right with $2$'s. Similarly, let $y \\in ^n$ be the string that would have been formed had we filled in all the dots to the left of the second bar with $1$'s and all the dots to its right with $2$'s. 
Then the following should be easy to verify:\n\n(i) $x$ and $y$ are both distributed according to the equal-slices distribution on $^n$ (but not independently);\n\n(ii) $x, y, z$ form a combinatorial line in $^n$; in particular, $x$ and $y$ are \"comparable\" in $^n$, i.e., either $x \\leq y$ or $x \\geq y$;\n\n(iii) $\\Pr[\\text{line is degenerate}] = \\Pr[x = y] = 2/(n+2)$.\n\nFrom these facts we can derive the density version of Sperner's Theorem:\n\nTheorem: Suppose $A \\subseteq ^n$ has equal-slices density $\\delta$. Then according to the above distribution on $(x,y) \\in ^n \\times ^n$, we get a nondegenerate combinatorial line in $A$ with probability at least $\\delta^2 - \\frac{2}{n+2}$.\n\nProof: Imagining the permutation $\\pi$ and the bars being picked in the opposite order, we have\n\n$\\Pr[x \\in A, y \\in A] = \\mathbf{E}_{\\pi} \\Pr_{\\text{bar1, bar2}}[x \\in A, y \\in A]$.\n\nImagine $\\pi$ is fixed and note that the two bars are chosen independently. So the random variables $x \\mid \\pi$ and $y \\mid \\pi$ are iid, and therefore the above is\n\n$\\mathbf{E}_{\\pi}\\left[\\Pr_{\\text{bar1}}[x \\in A]^2\\right] \\geq \\mathbf{E}_{\\pi}\\left[\\Pr_{\\text{bar1}}[x \\in A]\\right]^2 = \\delta^2,$\n\nsince $x$ is distributed as equal-slices. The result now follows using (iii) above." ]
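The Fact above can be checked by brute force for small $n$. The following Python sketch (mine, not part of the original article; the function names are ad hoc) enumerates all $(n+1)(n+2)$ placements of the two bars, confirming that every slice $(a,b,c)$ arises from exactly two placements (so the slice really is uniformly distributed, as equal-slices requires) and that $\Pr[x=y]=2/(n+2)$. It also verifies exactly, in rational arithmetic, the beta-function identity $\int_0^1 p^a(1-p)^b\,dp = a!\,b!/(a+b+1)!$ that is needed to tidy up the calculation in the previous section.

```python
import math
from fractions import Fraction
from collections import Counter

def slice_distribution(n):
    """Enumerate all placements of two bars among n dots.

    bar1 occupies one of the n+1 slots around the dots; bar2 is then
    inserted into one of the n+2 slots of the resulting (n+1)-symbol
    string.  The slice is (a, b, c) = (#dots left of the leftmost bar,
    #dots right of the rightmost bar, #dots between the bars).
    Returns the slice counts and the exact probability that x == y."""
    counts = Counter()
    degenerate = 0
    for bar1 in range(n + 1):
        for slot in range(n + 2):
            i2 = slot                               # final index of bar2
            i1 = bar1 + (1 if bar1 >= slot else 0)  # final index of bar1
            lo, hi = min(i1, i2), max(i1, i2)
            a = lo                # dots (-> 1s) before the leftmost bar
            c = hi - lo - 1       # dots (-> 3s, the wildcards) in between
            b = (n + 1) - hi      # dots (-> 2s) after the rightmost bar
            counts[(a, b, c)] += 1
            if c == 0:            # empty wildcard set, i.e. x == y
                degenerate += 1
    return counts, Fraction(degenerate, (n + 1) * (n + 2))

def beta_exact(a, b):
    """Exact value of the integral of p^a (1-p)^b over [0,1],
    computed by binomial expansion in rational arithmetic."""
    return sum(Fraction(math.comb(b, k) * (-1) ** k, a + k + 1)
               for k in range(b + 1))
```

For $n=5$ the enumeration finds all $\binom{7}{2}=21$ slices, each exactly twice, and `beta_exact(a, b)` agrees with $a!\,b!/(a+b+1)!$; iterating the same identity once more over $r$ makes the averaged trinomial probability come out to $2\,a!\,b!\,c!/(n+2)!$, which is exactly the equal-slices weight of an atom.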
https://www.centralstate.edu/academics/cse/mcs.php?num=46
# Mathematics Course Descriptions

MTH 1750. College Algebra (I, II; 3)

Topics include functions, rational expressions, systems of linear equations, the Factor and Remainder Theorems, operations on functions, radical equations, inequalities, matrices, variation, exponential and logarithmic functions, sequences, series, and the binomial theorem. Equivalent to TAG TMM001.

MTH 2001. Probability and Statistics I (I; 3)

Topics include measures of central tendency, measures of dispersion, probability models, conditional probability, combinations, distributions, estimation, and hypothesis testing. Prerequisite: MTH 1750.

MTH 2002. Probability and Statistics II (II; 3)

Topics include testing populations, means, proportions, variances, contingency tables, regression, ANOVA, computer applications, and non-parametric statistics. Prerequisite: MTH 2001.

MTH 2500. Precalculus (I, II; 4)

This is an accelerated course in college algebra and trigonometry. Topics include linear, quadratic, polynomial, rational, radical, root, piecewise, exponential, logarithmic, trigonometric, and inverse trigonometric functions; graphs and transformations; equations and inequalities; systems of equations; sequences and series; vectors and applications. Prerequisite: Placement exam.

MTH 2501. Trigonometry (I, II; 3)

Topics include conic sections, exponential and logarithmic functions, trigonometric functions, inverse trigonometric functions, identities and equations, lines, polar coordinates, vectors in the plane, application problems, and complex numbers. Prerequisite: MTH 1750 or placement exam. Equivalent to TAG TMM003.

MTH 2502. Calculus I (I, II; 4)

Topics include limits of functions, infinite limits, the derivative and techniques of differentiation, implicit differentiation, higher derivatives, graphing, maxima and minima, plane curves, motion, antiderivatives, indefinite and definite integrals, and the Fundamental Theorem of Calculus. Prerequisite: MTH 2501 or MTH 2500. Equivalent to TAG TMM005.

MTH 2503. Calculus II (I, II; 5)

Topics include the Fundamental Theorem of Calculus, the definite integral, and techniques and applications of integration; evaluation of improper integrals, indeterminate forms, graphs of polar equations, area in polar coordinates, and parametric equations; differentiation and integration of power series, Taylor and Maclaurin series; and calculation and application of the dot and cross products of vectors. Prerequisite: MTH 2502. Equivalent to TAG OMT006.

MTH 2540. Foundations in Mathematics (I, II; 3)

This course is an introduction to mathematical proof, symbolic logic, induction, set theory, relations, functions, countability, and selected topics in number theory. Prerequisite: MTH 2502.

MTH 3000. Geometry for Teachers (II; 3)

Topics include definitions, axioms, plane figures, triangle theorems, similar triangles, areas, computation of areas, solids, volumes, computation of volumes, and the history of geometry. Prerequisite: MTH 1750.

MTH 3001. Linear Algebra (I; 3)

Topics include matrices, determinants, linear systems, vector spaces, linear transformations, eigenvalues, and eigenvectors. Prerequisite: MTH 2503. Equivalent to TAG OMT008.

MTH 3002. Calculus III (II; 4)

Topics include the theory of infinite series, analytic geometry of space, vectors in space, partial derivatives, and multiple integrals. Prerequisite: MTH 2503.

MTH 3110. Differential Equations and Discrete Dynamical Systems (I; 4)

First- and second-order linear and simultaneous equations, with descriptions of solution methodology; Laplace transforms, applications, and solution methodology for nonlinear differential equations and nonlinear difference equations. Prerequisite: MTH 2502. Equivalent to OMT009.

MTH 3310. Numerical Methods (II; 3)

This course is offered during odd years only. Solutions of equations, successive approximations, the Newton-Raphson method, roots of polynomials, error analysis and process graphs; simultaneous linear and non-linear equations, factorization methods, iterative methods for solving linear systems; description and solution of eigenvector problems; interpolation methods with and without spline functions; numerical solutions for ordinary and partial differential equations; and applications of Monte Carlo methods. Prerequisite: MTH 3001.

MTH 3430. Operations Research (I; 3)

This course is offered during odd years only. Topics include stochastic processes, linear programming, transportation problems, inventory control, and network theory. Prerequisite: MTH 3001.

MTH 3520. Abstract Algebra I (I; 3)

Topics include properties of integers, groups, subgroups, quotient groups, group actions, products, homomorphisms, isomorphisms, and finite abelian groups. Prerequisite: MTH 2540.

MTH 3521. Abstract Algebra II (II; 3)

Topics include rings, ideals, integral domains, fields, Euclidean domains, principal ideal domains, vector spaces, polynomial rings, and field extensions. Prerequisite: MTH 3520.

MTH 3530. Mathematical Writing and Research (II; 2)

Topics include the mathematical research process, technical writing, and communication in mathematics. Prerequisite: MTH 2540.

MTH 3610. Introduction to Discrete Structures (I; 3)

This course is offered in even years only. Topics include a review of set algebra including mappings and relations, elements of the theory of directed and undirected graphs, symbolic logic, and applications of these structures to various areas of computer science. Prerequisite: MTH 2540.

MTH 3620. Seminar (II; 2)

Topics include the nature of mathematics, topics from the history of mathematics, problem-solving techniques, mathematical induction, and others. Prerequisite: MTH 2503.

MTH 4030. History of Mathematics (I; 3)

The development of mathematics from ancient times to the twentieth century. Prerequisite: Junior standing.

MTH 4120. Introduction to Real Analysis (I; 3)

Topics include the system of real numbers, functions, sequences, limits, the theory of continuity, differentiation, Riemann integration, sequences of functions, and infinite series. Prerequisite: MTH 2540.

MTH 4600. Capstone: Selected Topics in Mathematics (II; 3)

This course is designed to meet the needs of advanced students as preparation for graduate study or employment in mathematics-related fields. Possible topics include, but are not limited to, topology, group theory, projective geometry, real analysis, probability and statistics, combinatorial analysis, and operations research.

MTH 4730. Functions of a Complex Variable (II; 3)

This course is offered during even years only. Topics include complex numbers, elementary functions, power series, analytic functions, integrals, residues, Cauchy's Theorem, and Morera's Theorem. Prerequisites: MTH 4120 and permission of the instructor.
https://shelah.logic.at/papers/857/
# Sh:857

- Kuhlmann, S., & Shelah, S. (2005). $\kappa$-bounded exponential-logarithmic power series fields. *Ann. Pure Appl. Logic*, 136(3), 284–296.
- Abstract: In [KKSh:601] it was shown that fields of generalized power series cannot admit an exponential function. In this paper, we construct fields of generalized power series with bounded support which do admit an exponential. We give a natural definition of an exponential, which makes these fields into models of real exponentiation. The method allows us to construct, for every regular uncountable cardinal $\kappa$, $2^{\kappa}$ pairwise non-isomorphic models of real exponentiation (each of cardinality $\kappa$) that are nevertheless all isomorphic as ordered fields. Indeed, the $2^{\kappa}$ exponentials constructed have pairwise distinct growth rates. This method relies on constructing lexicographic chains with many automorphisms.
- Version 2005-04-18_10 (14p); published version (13p)

Bib entry:

```
@article{Sh:857,
  author = {Kuhlmann, Salma and Shelah, Saharon},
  title = {{$\kappa$-bounded exponential-logarithmic power series fields}},
  journal = {Ann. Pure Appl. Logic},
  fjournal = {Annals of Pure and Applied Logic},
  volume = {136},
  number = {3},
  year = {2005},
  pages = {284--296},
  issn = {0168-0072},
  mrnumber = {2169687},
  mrclass = {03C60 (06A05)},
  doi = {10.1016/j.apal.2005.04.001},
  note = {\href{https://arxiv.org/abs/math/0512220}{arXiv: math/0512220}},
  arxiv_number = {math/0512220}
}
```
https://www.numberempire.com/9829
# Number 9829

Nine thousand eight hundred twenty-nine.

### Properties of the number 9829

| Property | Value |
| --- | --- |
| Factorization | 9829 (prime) |
| Divisors | 1, 9829 |
| Count of divisors | 2 |
| Sum of divisors | 9830 |
| Previous integer | 9828 |
| Next integer | 9830 |
| Is prime? | YES (1212th prime) |
| Previous prime | 9817 |
| Next prime | 9833 |
| 9829th prime | 102653 |
| Is a Fibonacci number? | NO |
| Is a Bell number? | NO |
| Is a Catalan number? | NO |
| Is a factorial? | NO |
| Is a regular (Hamming) number? | NO |
| Is a perfect number? | NO |
| Polygonal number (s < 11)? | NO |
| Binary | 10011001100101 |
| Octal | 23145 |
| Duodecimal | 5831 |
| Hexadecimal | 2665 |
| Square | 96609241 |
| Square root | 99.141313285633 |
| Natural logarithm | 9.1930924785666 |
| Decimal logarithm | 3.9925093350678 |
| Sine | 0.86412697224581 |
| Cosine | -0.50327385769309 |
| Tangent | -1.7170114422529 |

Number 9829 is pronounced "nine thousand eight hundred twenty-nine". It is a prime number; the prime before it is 9817 and the prime after it is 9833. It has 2 divisors, 1 and 9829, whose sum is 9830. It is not a Fibonacci, Bell, or Catalan number, not a regular (Hamming) number, and not a factorial of any number. It is a deficient number and therefore not a perfect number. Its binary numeral is 10011001100101, its octal numeral is 23145, its duodecimal value is 5831, and its hexadecimal representation is 2665. Its square is 96609241 and its square root is 99.141313285633. Its natural logarithm is 9.1930924785666 and its decimal logarithm is 3.9925093350678. Its sine is 0.86412697224581, its cosine is -0.50327385769309, and its tangent is -1.7170114422529.
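The arithmetic entries above are easy to re-derive mechanically. The following short Python check (mine, not part of the page) recomputes the main ones from scratch.

```python
import math

N = 9829

def is_prime(m):
    """Primality by trial division up to the square root."""
    if m < 2:
        return False
    return all(m % d for d in range(2, math.isqrt(m) + 1))

def to_base(m, base, digits="0123456789abcdef"):
    """String representation of m in the given base (base <= 16)."""
    out = ""
    while m:
        m, r = divmod(m, base)
        out = digits[r] + out
    return out or "0"

# Divisors, and the neighbouring primes, by direct search.
divisors = [d for d in range(1, N + 1) if N % d == 0]
prev_prime = next(m for m in range(N - 1, 1, -1) if is_prime(m))
next_prime = next(m for m in range(N + 1, 2 * N) if is_prime(m))
```

Running the checks confirms, among other things, that 9829 is prime with divisor sum 9830, that its neighbouring primes are 9817 and 9833, and that the base-2, 8, 12, and 16 representations in the table are correct.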
https://bookboon.com/fr/an-introduction-to-partial-differential-equations-ebook
# An introduction to partial differential equations

156 pages. Language: English.

Most descriptions of physical systems, as used in physics, engineering and, above all, in applied mathematics, are in terms of partial differential equations. This text, presented in three parts, introduces all the main mathematical ideas that are needed for the construction of solutions. The material covers all the elements that are encountered in any standard university study: first-order equations, including those that take very general forms, as well as the classification of second-order equations and the development of special solutions, e.g. travelling-wave and similarity solutions.

Contents:

1. Part I: First-order partial differential equations
2. List of examples
3. Preface
4. Introduction
    1. Types of equation
    2. Exercises 1
5. The quasi-linear equation
    1. Of surfaces and tangents
    2. The Cauchy (or initial value) problem
    3. The semi-linear and linear equations
    4. The quasi-linear equation in n independent variables
    5. Exercises 2
6. The general equation
    1. Geometry again
    2. The method of solution
    3. The general PDE with Cauchy data
    4. The complete integral and the singular solution
    5. Exercises 3
8. Part II: Partial differential equations: classification and canonical forms
9. List of equations
10. Preface
11. Introduction
    1. Types of equation
12. First-order equations
    1. The linear equation
    2. The Cauchy problem
    3. The quasi-linear equation
    4. Exercises 2
13. The wave equation
    1. Connection with first-order equations
    2. Initial data
    3. Exercises 3
14. The general semi-linear partial differential equation in two independent variables
    1. Transformation of variables
    2. Characteristic lines and the classification
    3. Canonical form
    4. Initial and boundary conditions
    5. Exercises 4
15. Three examples from fluid mechanics
    1. The Tricomi equation
    2. General compressible flow
    3. The shallow-water equations
    4. Appendix: The hodograph transformation
    5. Exercise 5
16. Riemann invariants and simple waves
    1. Shallow-water equations: Riemann invariants
    2. Shallow-water equations: simple waves
18. Part III: Partial differential equations: method of separation of variables and similarity & travelling-wave solutions
19. List of equations
20. Preface
21. Introduction
    1. The Laplacian and coordinate systems
    2. Overview of the methods
22. The method of separation of variables
    1. Introducing the method
    2. Two independent variables: other coordinate systems
    3. Linear equations in more than two independent variables
    4. Nonlinear equations
    5. Exercises 2
23. Travelling-wave solutions
    1. The classical, elementary partial differential equations
    2. Equations in higher dimensions
    3. Nonlinear equations
    4. Exercises 3
24. Similarity solutions
    1. Introducing the method
    2. Continuous (Lie) groups
    3. Similarity solutions of other equations
    4. More general solutions from similarity solutions
    5. Exercises 4
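As a small illustration of two of the solution techniques listed above (travelling-wave solutions and separation of variables), the following Python sketch (not from the book; the setup is mine) checks numerically, via central differences, that any profile u(x,t) = f(x - ct) satisfies the one-way wave (advection) equation u_t + c·u_x = 0, and that the separated solution u(x,t) = exp(-π²t)·sin(πx) satisfies the heat equation u_t = u_xx.

```python
import math

def advection_residual(f, c, x, t, h=1e-4):
    """Central-difference residual of u_t + c*u_x for the
    travelling-wave ansatz u(x, t) = f(x - c*t); should be ~0."""
    u = lambda x_, t_: f(x_ - c * t_)
    u_t = (u(x, t + h) - u(x, t - h)) / (2 * h)
    u_x = (u(x + h, t) - u(x - h, t)) / (2 * h)
    return u_t + c * u_x

def heat_residual(x, t, h=1e-4):
    """Central-difference residual of u_t - u_xx for the separated
    solution u(x, t) = exp(-pi^2 t) * sin(pi x); should be ~0."""
    u = lambda x_, t_: math.exp(-math.pi ** 2 * t_) * math.sin(math.pi * x_)
    u_t = (u(x, t + h) - u(x, t - h)) / (2 * h)
    u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h ** 2
    return u_t - u_xx
```

Both residuals are of order h², so they vanish to within discretization error at any sample point; this is the numerical counterpart of substituting the ansatz into the PDE and watching the terms cancel.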
http://hackage.haskell.org/package/active-0.1.0.2/docs/Data-Active.html
[ "active-0.1.0.2: Abstractions for animation\n\nMaintainer [email protected] None\n\nData.Active\n\nDescription\n\nInspired by the work of Kevin Matlage and Andy Gill (Every Animation Should Have a Beginning, a Middle, and an End, Trends in Functional Programming, 2010. http://ittc.ku.edu/csdl/fpg/node/46), this module defines a simple abstraction for working with time-varying values. A value of type `Active a` is either a constant value of type `a`, or a time-varying value of type `a` (i.e. a function from time to `a`) with specific start and end times. Since active values have start and end times, they can be aligned, sequenced, stretched, or reversed.\n\nIn a sense, this is sort of like a stripped-down version of functional reactive programming (FRP), without the reactivity.\n\nThe original motivating use for this library is to support making animations with the diagrams framework (http://projects.haskell.org/diagrams), but the hope is that it may find more general utility.\n\nThere are two basic ways to create an `Active` value. The first is to use `mkActive` to create one directly, by specifying a start and end time and a function of time. More indirectly, one can use the `Applicative` instance together with the unit interval `ui`, which takes on values from the unit interval from time 0 to time 1, or `interval`, which creates an active over an arbitrary interval.\n\nFor example, to create a value of type `Active Double` which represents one period of a sine wave starting at time 0 and ending at time 1, we could write\n\n``` mkActive 0 1 (\\t -> sin (fromTime t * tau))\n```\n\nor\n\n``` (sin . (*tau)) <\\$> ui\n```\n\n`pure` can also be used to create `Active` values which are constant and have no start or end time. 
For example,\n\n``` mod <\\$> (floor <\\$> interval 0 100) <*> pure 7\n```\n\ncycles repeatedly through the numbers 0-6.\n\nNote that the \"idiom bracket\" notation supported by the SHE preprocessor (http://personal.cis.strath.ac.uk/~conor/pub/she/, http://hackage.haskell.org/package/she) can make for somewhat more readable `Applicative` code. For example, the above example can be rewritten using SHE as\n\n``` {-# OPTIONS_GHC -F -pgmF she #-}\n\n... (| mod (| floor (interval 0 100) |) ~7 |)\n```\n\nThere are many functions for transforming and composing active values; see the documentation below for more details.\n\nSynopsis\n\n# Representing time\n\n## Time and duration\n\ndata Time Source\n\nAn abstract type for representing points in time. Note that literal numeric values may be used as `Time`s, thanks to the the `Num` and `Fractional` instances. `toTime` and `fromTime` are also provided for convenience in converting between `Time` and other numeric types.\n\ntoTime :: Real a => a -> TimeSource\n\nConvert any value of a `Real` type (including `Int`, `Integer`, `Rational`, `Float`, and `Double`) to a `Time`.\n\nfromTime :: Fractional a => Time -> aSource\n\nConvert a `Time` to a value of any `Fractional` type (such as `Rational`, `Float`, or `Double`).\n\ndata Duration Source\n\nAn abstract type representing elapsed time between two points in time. Note that durations can be negative. Literal numeric values may be used as `Duration`s thanks to the `Num` and `Fractional` instances. 
`toDuration` and `fromDuration` are also provided for convenience in converting between `Duration`s and other numeric types.\n\ntoDuration :: Real a => a -> DurationSource\n\nConvert any value of a `Real` type (including `Int`, `Integer`, `Rational`, `Float`, and `Double`) to a `Duration`.\n\nfromDuration :: Fractional a => Duration -> aSource\n\nConvert a `Duration` to any other `Fractional` type (such as `Rational`, `Float`, or `Double`).\n\n## Eras\n\ndata Era Source\n\nAn `Era` is a concrete span of time, that is, a pair of times representing the start and end of the era. `Era`s form a semigroup: the combination of two `Era`s is the smallest `Era` which contains both. They do not form a `Monoid`, since there is no `Era` which acts as the identity with respect to this combining operation.\n\n`Era` is abstract. To construct `Era` values, use `mkEra`; to deconstruct, use `start` and `end`.\n\nInstances\n\n Show Era Semigroup Era\n\nmkEra :: Time -> Time -> EraSource\n\nCreate an `Era` by specifying start and end `Time`s.\n\nGet the start `Time` of an `Era`.\n\nend :: Era -> TimeSource\n\nGet the end `Time` of an `Era`.\n\nCompute the `Duration` of an `Era`.\n\n# Dynamic values\n\ndata Dynamic a Source\n\nA `Dynamic a` can be thought of as an `a` value that changes over the course of a particular `Era`. It's envisioned that `Dynamic` will be mostly an internal implementation detail and that `Active` will be most commonly used. But you never know what uses people might find for things.\n\nConstructors\n\n Dynamic Fieldsera :: Era runDynamic :: Time -> a\n\nInstances\n\n Functor Dynamic Apply Dynamic `Dynamic` is an instance of `Apply` (i.e. `Applicative` without `pure`): a time-varying function is applied to a time-varying value pointwise; the era of the result is the combination of the function and value eras. 
Note, however, that `Dynamic` is not an instance of `Applicative` since there is no way to implement `pure`: the era would have to be empty, but there is no such thing as an empty era (that is, `Era` is not an instance of `Monoid`). Semigroup a => Semigroup (Dynamic a) `Dynamic a` is a `Semigroup` whenever `a` is: the eras are combined according to their semigroup structure, and the values of type `a` are combined pointwise. Note that `Dynamic a` cannot be an instance of `Monoid` since `Era` is not. Newtype (Active a) (MaybeApply Dynamic a)\n\nmkDynamic :: Time -> Time -> (Time -> a) -> Dynamic aSource\n\nCreate a `Dynamic` from a start time, an end time, and a time-varying value.\n\nonDynamic :: (Time -> Time -> (Time -> a) -> b) -> Dynamic a -> bSource\n\nFold for `Dynamic`.\n\nShift a `Dynamic` value by a certain duration.\n\n# Active values\n\nFor working with time-varying values, it is convenient to have an `Applicative` instance: `<*>` lets us apply time-varying functions to time-varying values; `pure` allows treating constants as time-varying values which do not vary. However, as explained in its documentation, `Dynamic` cannot be made an instance of `Applicative` since there is no way to implement `pure`. The problem is that all `Dynamic` values must have a finite start and end time. 
The solution is to adjoin a special constructor for pure/constant values with no start or end time, giving us `Active`.

data Active a

There are two types of `Active` values:

• An `Active` can simply be a `Dynamic`, that is, a time-varying value with start and end times.
• An `Active` value can also be a constant: a single value, constant across time, with no start and end times.

The addition of constant values enables `Monoid` and `Applicative` instances for `Active`.

Instances

• Functor Active
• Applicative Active
• Apply Active
• (Monoid a, Semigroup a) => Monoid (Active a)
• Semigroup a => Semigroup (Active a): Active values over a type with a `Semigroup` instance are also an instance of `Semigroup`. Two active values are combined pointwise; the resulting value is constant iff both inputs are.
• Newtype (Active a) (MaybeApply Dynamic a)

mkActive :: Time -> Time -> (Time -> a) -> Active a

Create a dynamic `Active` from a start time, an end time, and a time-varying value.

fromDynamic :: Dynamic a -> Active a

Create an `Active` value from a `Dynamic`.

isConstant :: Active a -> Bool

Test whether an `Active` value is constant.

isDynamic :: Active a -> Bool

Test whether an `Active` value is `Dynamic`.

onActive :: (a -> b) -> (Dynamic a -> b) -> Active a -> b

Fold for `Active`s. Process an `Active a`, given a function to apply if it is a pure (constant) value and a function to apply if it is a `Dynamic`.

modActive :: (a -> b) -> (Dynamic a -> Dynamic b) -> Active a -> Active b

Modify an `Active` value using a case analysis to see whether it is constant or dynamic.

runActive :: Active a -> Time -> a

Interpret an `Active` value as a function from time.

activeEra :: Active a -> Maybe Era

Get the `Era` of an `Active` value (or `Nothing` if it is a constant/pure value).

setEra :: Era -> Active a -> Active a

Set the era of an `Active` value.
Note that this will change a constant `Active` into a dynamic one which happens to have the same value at all times.

atTime :: Time -> Active a -> Active a

`atTime t a` is an active value with the same behavior as `a`, shifted so that it starts at time `t`. If `a` is constant it is returned unchanged.

activeStart :: Active a -> a

Get the value of an `Active a` at the beginning of its era.

activeEnd :: Active a -> a

Get the value of an `Active a` at the end of its era.

# Combinators

## Special active values

ui :: Fractional a => Active a

`ui` represents the unit interval, which takes on the value `t` at time `t`, and has as its era `[0,1]`. It is equivalent to `interval 0 1`, and can be visualized as follows (image: http://www.cis.upenn.edu/~byorgey/hosted/ui.png).

On the x-axis is time, and the value that `ui` takes on is on the y-axis. The shaded portion represents the era. Note that the value of `ui` (as with any active) is still defined outside its era, and this can make a difference when it is combined with other active values with different eras. Applying a function with `fmap` affects all values, both inside and outside the era. To manipulate values outside the era specifically, see `clamp` and `trim`.

To alter the values that `ui` takes on without altering its era, use its `Functor` and `Applicative` instances. For example, `(*2) <$> ui` varies from `0` to `2` over the era `[0,1]`. To alter the era, you can use `stretch` or `shift`.

interval :: Fractional a => Time -> Time -> Active a

`interval a b` is an active value starting at time `a`, ending at time `b`, and taking the value `t` at time `t`.

## Transforming active values

stretch :: Rational -> Active a -> Active a

`stretch s act` "stretches" the active `act` so that it takes `s` times as long (retaining the same start time).

stretchTo :: Duration -> Active a -> Active a

`stretchTo d` `stretch`es an `Active` so it has duration `d`.
Has no effect if (1) `d` is non-positive, (2) the `Active` value is constant, or (3) the `Active` value has zero duration.

during :: Active a -> Active a -> Active a

`a1 `during` a2` `stretch`es and `shift`s `a1` so that it has the same era as `a2`. Has no effect if either of `a1` or `a2` is constant.

shift :: Duration -> Active a -> Active a

`shift d act` shifts the start time of `act` by duration `d`. Has no effect on constant values.

backwards :: Active a -> Active a

Reverse an active value so the start of its era gets mapped to the end and vice versa. For example, `backwards ui` can be visualized as (image: http://www.cis.upenn.edu/~byorgey/hosted/backwards.png).

snapshot :: Time -> Active a -> Active a

Take a "snapshot" of an active value at a particular time, resulting in a constant value.

## Working with values outside the era

clamp :: Active a -> Active a

"Clamp" an active value so that it is constant before and after its era. Before the era, `clamp a` takes on the value of `a` at the start of the era. Likewise, after the era, `clamp a` takes on the value of `a` at the end of the era. `clamp` has no effect on constant values.

For example, `clamp ui` can be visualized as (image: http://www.cis.upenn.edu/~byorgey/hosted/clamp.png).

See also `clampBefore` and `clampAfter`, which clamp only before or after the era, respectively.

clampBefore :: Active a -> Active a

"Clamp" an active value so that it is constant before the start of its era. For example, `clampBefore ui` can be visualized as (image: http://www.cis.upenn.edu/~byorgey/hosted/clampBefore.png).

See the documentation of `clamp` for more information.

clampAfter :: Active a -> Active a

"Clamp" an active value so that it is constant after the end of its era. For example, `clampAfter ui` can be visualized as (image: http://www.cis.upenn.edu/~byorgey/hosted/clampAfter.png).

See the documentation of `clamp` for more information.

trim :: Monoid a => Active a -> Active a

"Trim" an active value so that it is empty outside its era.
`trim` has no effect on constant values.

For example, `trim ui` can be visualized as (image: http://www.cis.upenn.edu/~byorgey/hosted/trim.png).

Actually, `trim ui` is not well-typed, since it is not guaranteed that `ui`'s values will be monoidal (and usually they won't be)! But the above image still provides a good intuitive idea of what `trim` is doing. To make this precise we could consider something like `trim (First . Just <$> ui)`.

See also `trimBefore` and `trimAfter`, which trim only before or after the era, respectively.

trimBefore :: Monoid a => Active a -> Active a

"Trim" an active value so that it is empty before the start of its era. For example, `trimBefore ui` can be visualized as (image: http://www.cis.upenn.edu/~byorgey/hosted/trimBefore.png).

See the documentation of `trim` for more details.

trimAfter :: Monoid a => Active a -> Active a

"Trim" an active value so that it is empty after the end of its era. For example, `trimAfter ui` can be visualized as (image: http://www.cis.upenn.edu/~byorgey/hosted/trimAfter.png).

See the documentation of `trim` for more details.

## Composing active values

after :: Active a -> Active a -> Active a

`a1 `after` a2` produces an active that behaves like `a1` but is shifted to start at the end time of `a2`. If either `a1` or `a2` is constant, `a1` is returned unchanged.

(->>) :: Semigroup a => Active a -> Active a -> Active a

Sequence/overlay two `Active` values: shift the second to start immediately after the first (using `after`), then compose them (using `<>`).

(|>>) :: Active a -> Active a -> Active a

"Splice" two `Active` values together: shift the second to start immediately after the first (using `after`), and produce the value which acts like the first up to the common end/start point, then like the second after that. If both are constant, return the first.

movie :: [Active a] -> Active a

Splice together a list of active values using `|>>`.
The list must be nonempty.

# Discretization

discrete :: [a] -> Active a

Create an `Active` which takes on each value in the given list in turn during the time `[0,1]`, with each value getting an equal amount of time. In other words, `discrete` creates a "slide show" that starts at time 0 and ends at time 1. The first element is used prior to time 0, and the last element is used after time 1.

It is an error to call `discrete` on the empty list.

simulate :: Rational -> Active a -> [a]

`simulate r act` simulates the `Active` value `act`, returning a list of "snapshots" taken at regular intervals from the start time to the end time. The interval used is determined by the rate `r`, which denotes the "frame rate", that is, the number of snapshots per unit time.

If the `Active` value is constant (and thus has no start or end times), a list of length 1 is returned, containing the constant value.
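The core semantics documented above can be modelled compactly outside Haskell. The sketch below is a minimal Python model (names are mine, not the library's) of three behaviours described in this documentation: combining two eras yields the smallest era containing both, `clamp` freezes values outside the era, and `simulate` samples an active value at a given frame rate.

```python
from fractions import Fraction

def combine_eras(e1, e2):
    """Semigroup on eras: the smallest era containing both."""
    return (min(e1[0], e2[0]), max(e1[1], e2[1]))

def clamp(era, f):
    """Freeze the time-function f at the era boundaries, like clamp above."""
    lo, hi = era
    return lambda t: f(min(max(t, lo), hi))

def simulate(rate, era, f):
    """Snapshots at regular intervals from start to end, `rate` per unit time."""
    lo, hi = era
    n = int((hi - lo) * rate)
    return [f(lo + Fraction(k, rate)) for k in range(n + 1)]

era = combine_eras((0, 1), (Fraction(1, 2), 2))
print(era)                            # (0, 2): the hull of the two eras
ui = lambda t: t                      # model of the unit-interval active
print(simulate(4, (0, 1), ui))        # 5 samples: 0, 1/4, 1/2, 3/4, 1
print(clamp((0, 1), ui)(2))           # 1: clamped past the era end
```

This mirrors why `Era` is only a `Semigroup`: there is no interval that acts as an identity for the hull operation.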
https://socratic.org/questions/what-are-two-integers-that-have-a-sum-of-3-and-a-product-of-10
# What are two integers that have a sum of -3 and a product of -10?

$(-5, 2)$

Call the two integers $a$ and $b$. We are given that $a + b = -3$ and $ab = -10$. Checking the factor pairs of $-10$ against the required sum quickly yields the answers $-5$ and $2$.
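The trial-and-error step can also be made systematic: by Vieta's formulas, the two integers are the roots of a quadratic built from the given sum and product.

```latex
% a + b = -3 and ab = -10, so a and b are the roots of x^2 - (a+b)x + ab = 0:
x^{2} + 3x - 10 = 0
\;\Longrightarrow\; (x + 5)(x - 2) = 0
\;\Longrightarrow\; x = -5 \ \text{or}\ x = 2.
% Check: -5 + 2 = -3 and (-5)(2) = -10.
```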
https://homework.cpm.org/category/CC/textbook/ccg/chapter/11/lesson/11.1.3/problem/11-50
### Home > CCG > Chapter 11 > Lesson 11.1.3 > Problem 11-50

11-50.

A snack cracker company conducted a taste test for the three different types of crackers it makes. It surveyed $250$ people in each age group in the table below. Participants chose their favorite type of cracker. Use the results to answer the questions.

| Age | Cracker A | Cracker B | Cracker C |
| --- | --- | --- | --- |
| Under $20$ | $152$ | $54$ | $44$ |
| $20$ to $39$ | $107$ | $85$ | $58$ |
| $40$ to $59$ | $78$ | $101$ | $71$ |
| $60$ and over | $34$ | $68$ | $148$ |

1. Calculate the probability that a participant chose cracker A or was under $20$ years old. Show how you used the Addition Rule.

   When the question says 'or', you must subtract the probability of 'and' so the overlap is not counted twice.

   $\text{P}\left(\text{cracker A}\right)+\text{P}\left(\text{under }20\right)-\text{P}\left(\text{cracker A and under }20\right)$

   $46.9$%

2. What is the probability that a participant did not choose cracker A and was $20$ years old or older? Show how you used a complement to answer this problem.

   This event is the complement of the event in part (a), so subtract your answer from part (a) from $1$.

3. What is the probability that a participant was $20$ years old or older? Show how you used a complement to answer this problem.

   Use the same method as in part (b): subtract $\text{P}\left(\text{under }20\right)$ from $1$.

4. A randomly-selected participant says he is $15$ years old. What is the probability that he chose cracker A?

   Each age group only has $250$ participants.

   $\frac{\text{# of participants under } 20 \text{ who picked cracker A}}{\text{# of participants in age group}}$

   $60.8$%
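For reference, all four probabilities can be computed directly from the table (the variable names below are mine, not part of the problem):

```python
# Cracker A counts per age group, from the taste-test table.
# Each age group has 250 participants; 1000 participants overall.
cracker_a = {"under_20": 152, "20_39": 107, "40_59": 78, "60_plus": 34}

total = 1000
group_size = 250
a_total = sum(cracker_a.values())  # all participants who chose cracker A

# (a) Addition Rule: P(A or under 20) = P(A) + P(under 20) - P(A and under 20)
p_a_or_under20 = (a_total + group_size - cracker_a["under_20"]) / total

# (b) Complement: "not A and 20 or older" is the complement of the event in (a)
p_not_a_and_20plus = 1 - p_a_or_under20

# (c) Complement: P(20 or older) = 1 - P(under 20)
p_20plus = 1 - group_size / total

# (d) Condition on the "under 20" group of 250 participants
p_a_given_under20 = cracker_a["under_20"] / group_size

print(p_a_or_under20, p_not_a_and_20plus, p_20plus, p_a_given_under20)
```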
https://kmmiles.com/213-78-km-in-miles
kmmiles.com

# 213.78 km in miles

## Result

213.78 km equals 132.7574 miles

You can also convert 213.78 miles to km.

## Conversion formula

Multiply the amount of km by the conversion factor to get the result in miles:

213.78 km × 0.621 = 132.7574 mi

## How to convert 213.78 km to miles?

The conversion factor from km to miles is 0.621, which means that 1 km is equal to 0.621 miles:

1 km = 0.621 mi

To convert 213.78 km into miles we have to multiply 213.78 by the conversion factor in order to get the amount from km to miles. We can also form a proportion to calculate the result:

1 km → 0.621 mi

213.78 km → L(mi)

Solve the above proportion to obtain the length L in miles:

L(mi) = 213.78 km × 0.621 mi/km

L(mi) = 132.7574 mi

The final result is:

213.78 km → 132.7574 mi

We conclude that 213.78 km is equivalent to 132.7574 miles:

213.78 km = 132.7574 miles

## Result approximation

For practical purposes we can round our final result to an approximate numerical value. In this case two hundred thirteen point seven eight km is approximately one hundred thirty-two point seven five seven miles:

213.78 km ≅ 132.757 miles

## Conversion table

For quick reference purposes, below is the kilometers to miles conversion table:

| kilometers (km) | miles (mi) |
| --- | --- |
| 214.78 km | 133.37838 miles |
| 215.78 km | 133.99938 miles |
| 216.78 km | 134.62038 miles |
| 217.78 km | 135.24138 miles |
| 218.78 km | 135.86238 miles |
| 219.78 km | 136.48338 miles |
| 220.78 km | 137.10438 miles |
| 221.78 km | 137.72538 miles |
| 222.78 km | 138.34638 miles |
| 223.78 km | 138.96738 miles |

## Units definitions

The units involved in this conversion are kilometers and miles. This is how they are defined:

### Kilometers

The kilometer (symbol: km) is a unit of length in the metric system, equal to 1000 m (also written as 1E+3 m). It is commonly used officially for expressing distances between geographical places on land in most of the world.

### Miles

The mile is a widely used unit of length, most commonly equal to 5,280 feet (1,760 yards, or about 1,609 meters). The mile of 5,280 feet is called the land mile or statute mile to distinguish it from the nautical mile (1,852 meters, about 6,076.1 feet). Use of the mile as a unit of measurement is now largely confined to the United Kingdom, the United States, and Canada.
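The conversion above is a single multiplication; a minimal sketch using this page's rounded factor of 0.621 (the function names are mine):

```python
KM_TO_MILES = 0.621  # rounded conversion factor used on this page (1 km = 0.621 mi)

def km_to_miles(km: float) -> float:
    """Convert kilometers to miles using the page's factor."""
    return km * KM_TO_MILES

def miles_to_km(miles: float) -> float:
    """Inverse conversion."""
    return miles / KM_TO_MILES

print(round(km_to_miles(213.78), 4))  # 132.7574, matching the result above
```

Note that 0.621 is itself an approximation; the exact factor is 1/1.609344 ≈ 0.621371 mi per km.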
https://scirp.org/journal/paperinformation.aspx?paperid=71893
Theory of a Kaptiza-Dirac Interferometer with Cold Trapped Atoms

Abstract

We theoretically analyse a multi-modes atomic interferometer consisting of a sequence of Kapitza-Dirac (KD) pulses applied to cold atoms trapped in a harmonic trap. The pulses spatially split the atomic wave-functions, while the harmonic trap coherently recombines all modes by acting as a coherent spatial mirror. The phase shifts accumulated among different KD pulses are estimated by measuring the number of atoms in each output mode or by fitting the density profile. The sensitivity is rigorously calculated via the Fisher information and the Cramér-Rao lower bound. We predict, with typical experimental parameters, a temperature-independent sensitivity which, in the case of the measurement of the gravitational acceleration g, can significantly exceed the sensitivity of current atomic interferometers.

Share and Cite:

Cheng, R., He, T., Li, W. and Smerzi, A. (2016) Theory of a Kaptiza-Dirac Interferometer with Cold Trapped Atoms. Journal of Modern Physics, 7, 2043-2062. doi: 10.4236/jmp.2016.715180.

1. Introduction

The goal of interferometry is to estimate the unknown value of a phase shift. The phase shift can arise because of a difference in length between two interferometric arms, as in the first optical Michelson-Morley experiment probing the existence of the aether, or in the LIGO and VIRGO gravitational wave detectors. Phase shifts can also be the consequence of a supersonic airflow perturbing one optical path, as in the first Mach-Zehnder, or of inertial forces, as in the Sagnac interferometer. Interferometers are among the most exquisite measurement devices and since their first realisations have played a central role in pushing the frontier of science.

Over the last decade, matter-wave interferometers have progressively become very competitive for measuring electromagnetic or inertial forces.
In particular, atom interferometers have been exploited to obtain the most accurate estimate of the gravitational constant. The beam splitter and the mirror operations of an atom interferometer can typically be implemented in free space with a sequence of Bragg scatterings applied to a beam of cold atoms. Alternatively, the phase shifts can be estimated by measuring the Bloch frequency of cold atoms oscillating in vertically oriented optical lattices, which have been able to evaluate the gravitational acceleration g with accuracy up to […].

The sensitivity of light-pulse atom interferometry scales linearly with the space-time area enclosed by the interfering atoms. Large-momentum-transfer (LMT) beam splitters have been suggested and experimentally investigated, demonstrating up to […] splitting (where ℏk is the photon momentum). Relative to the 2-photon processes used in the current most sensitive light-pulse atom interferometers, LMT beam splitters in atomic fountains can provide a 44-fold increased phase shift sensitivity. Further increases of the momentum differences between the interferometer paths are limited by the cloud's transverse momentum width, since high-efficiency beam splitting and mirror processes require a narrow distribution.

As an alternative to atomic fountains, where the atoms follow ballistic trajectories, the interferometric operations can be implemented with trapped clouds. We have recently proposed a multi-mode interferometer with harmonically confined atoms where multiple beam-splitter and mirror operations are realized with Kapitza-Dirac (KD) pulses, namely the impulse application of an off-resonant standing optical wave. With KD pulses applied to atoms in a harmonic trap, it is possible to reach large spatial separations between the interferometric modes while avoiding, at the same time, the atom losses and defocusing occurring in Bragg processes (mostly due to the constraint of narrow momentum widths).
In that proposal, the role of the mirrors is played by the harmonic trap, which coherently drives and recombines a tunable number of spatially addressable atomic beams created by the KD pulses. The phase-estimation sensitivity increases linearly with the number of beams and with their spatial distance. The number of beams is proportional to the strength of the applied KD pulse, while their distance is proportional to the ratio between the harmonic-trap length and the wavelength of the optical wave. In this manuscript we discuss in detail the theory of the multimode KD interferometer introduced there.

2. Multimode Kapitza-Dirac Interferometer

The initial configuration of the interferometer is a cloud of cold atoms trapped in a harmonic potential. The interferometric sequence is realised in four steps, see Figure 1:

i) Beam splitter: a KD pulse is applied to the atomic cloud at the initial time. The pulse creates a number of spatially addressable atomic wave packets that evolve along different paths under the harmonic confinement.

Figure 1. (color online) Multimode Kapitza-Dirac interferometer. The first Kapitza-Dirac pulse creates several modes consisting of atomic wave packets evolving under the harmonic confinement and an external perturbing field. The n-th Kapitza-Dirac pulse mixes the modes, which are eventually detected in output at the measurement time.

ii) Phase shift: each spatial mode gains a phase shift with respect to its neighbouring modes due to the action of an external potential.

iii) Beam splitter: the harmonic trap coherently recombines the wave packets, and a second KD pulse is applied to again mix and separate the modes along different paths.

iv) Measurement: the phase shift is estimated by fitting the atomic density profile or by counting the number of atoms in each spatial mode at the measurement time.
The measurement can be done after a ballistic expansion, optimising the spatial separation of the modes and the signal-to-noise ratio of the atom counting.

The sequence i)-iii) can be iterated an arbitrary number of times n before the final measurement iv).

The plan of the paper is as follows. In Section 2 we present a detailed description of the multimode KD interferometer. As an application, we calculate in Section 3 the Fisher information and the Cramér-Rao lower-bound sensitivity of the interferometric measurement of the gravitational acceleration g. We predict sensitivities attainable in configurations realisable within the current state of the art, and in Section 4 we compare the performance of different atomic interferometers. In Section 5 we discuss two possible sources of noise, and we summarise the results in Section 6.

3. Dynamics

Let us first consider a single atom described by a wave packet confined in the harmonic trap. The time evolution of the state in the harmonic trap is given by

(1)

where the kernel is the quantum propagator

(2)

The KD beam splitter is realised by the impulsive application of a periodic potential, characterised by the strength of the pulse, the atomic recoil energy and the pulse duration. In the Raman-Nath limit, the duration of the pulse is short enough not to affect the atomic density but only to change the phase of the initial wave function as

(3)

where we have used the Bessel generating function. The Raman-Nath limit has been demonstrated experimentally. Equation (3) shows that the KD beam splitter creates copies of the initial state, each with a Bessel-function amplitude and an additional momentum kick.

After the application of the first KD pulse, the wave packets are coherently driven by the harmonic trap and recombined after a fixed evolution time set by the trap period.
At this time, the propagator in Equation (2) simplifies to

(4)

Furthermore, in the presence of an external field, each spatial mode created by the KD beam splitter gains during this time a phase shift with respect to its neighbouring modes. Right before the application of a second KD pulse, the wave function is

(5)

After iterating the sequence of KD pulses and phase-shift accumulations a number of times n, the wave function becomes

(6)

where the coefficients involve the integer part of n/2 and take different forms for odd and for even n.

After n iterations, a last KD pulse is applied, providing

(7)

The wave packets reach their maximum spatial separation after a further evolution in the harmonic trap.

Eventually, the wave function at the final time, right before the measurement, is

(8)

where

(9)

and

(10)

with an index that is 1 or 0 for odd or even n, respectively. In the limit of zero overlap between the various wave packets in Equation (8),

(11)

the density at the measurement time simply becomes

(12)

Equation (12) shows that multiple momentum modes are created by the n applications of the KD pulses. This can of course be helpful if only weak KD pulses can be implemented experimentally.

In the limit of a large number of independent interferometric measurements, the phase-estimation sensitivity saturates the Cramér-Rao lower bound

(13)

where N is the number of uncorrelated atoms and F denotes the Fisher information calculated from the particle density at the measurement time,

(14)

With Equation (12), Equation (14) becomes

(15)

(see Appendix). We finally obtain

(16)

For an even n, or an odd n with a suitable pulse strength, the phase-estimation uncertainty of our interferometer becomes

(17)

which can also be written as

(18)

in terms of the total number of modes.
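Since the equation images did not survive extraction, it is worth recording the standard forms behind Equations (13)-(14), which the text describes in words (here p(x|θ) is the normalised single-particle density at the measurement time, θ the phase to be estimated, and N the number of uncorrelated atoms):

```latex
\Delta\theta \;\ge\; \frac{1}{\sqrt{N\,F(\theta)}},
\qquad
F(\theta) \;=\; \int \mathrm{d}x\,
\frac{1}{p(x\,|\,\theta)}
\left(\frac{\partial p(x\,|\,\theta)}{\partial\theta}\right)^{\!2}.
```

The first relation is the Cramér-Rao lower bound saturated in the limit of many independent measurements; the second is the Fisher information of a position measurement, as used throughout the remainder of the paper.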
As expected on general grounds from the theory of multimode interferometry, the sensitivity scales linearly with the number of momentum modes that are significantly populated after the KD beam splitters. The populations of the higher diffraction orders vanish exponentially.

We remark here on the important condition of non-overlap of the wave packets corresponding to the different momentum modes at the time of measurement, Equation (11). A further interesting point is that Equation (18) is independent of the temperature of the atoms as long as their de Broglie wavelength remains larger than the spatial period of the periodic potential creating the Kapitza-Dirac pulse. We show this in the following sections by considering, as a specific application, the interferometric estimation of the gravitational acceleration.

4. Estimation of the Gravitational Acceleration g

We now apply the KD interferometer theory to the estimation of the gravitational acceleration g. The evolution of the initial state is governed by the combined action of the harmonic confinement, the gravitational field and the KD beam splitters. As explained in the previous section, the phase shift arises from the external gravitational field acting during the phase-accumulation period. We may engineer the Hamiltonian to switch gravity on or off after the first beam splitter by modifying the frequency of the harmonic trap. We finally generalise our results by considering an atomic gas in thermal equilibrium at a finite temperature.

To take into account the effect of the gravitational force on the dynamical evolution of the trapped-atom states, we include the linear gravitational field in the free propagator of Equation (2):

(19)
After the application of the first KD pulse, the states are coherently driven by the harmonic trap and the external gravitational field. At the recombination time, the spatial modes created by the KD pulse are recombined and the wave function becomes

(20)

since the quantum propagator in the presence of the gravity field reduces to

(21)

As expected, each spatial mode gains a phase shift with respect to its neighbouring modes due to the action of the external gravity field after the first KD pulse. A straightforward (if slightly tedious) calculation provides the wave function

(22)

where the coefficients depend on the parity of n, taking one form for odd n and another for even n.

The last KD pulse is applied to the wave function of Equation (22) to mix, and therefore spatially separate, the modes for the final density-profile measurement:

(23)

First, we consider the case without the gravity field. At the final time, the wave function is

(24)

with a coefficient defined by

(25)

which can be found by replacing the integrand as in Equation (10). Second, with the gravity field, the wave function evolving under the quantum propagator with gravity,

(26)

can be expressed as

(27)

Apart from the phase difference between Equations (24) and (27), a constant shift d in the centre position of each sub-wave-packet is induced by the gravity field. Under the no-overlap condition, Equation (11), which is satisfied when the width of the initial wave packet is much larger than the lattice spacing of the KD optical potential, the final density function becomes

(28)

from Equation (24), or

(29)

from Equation (27). Equations (12), (28) and (29) show that the information on the estimated values of the phase and of g is mainly (or entirely) contained in the weights of the modes, depending on the final evolution during the measurement period.
A small part of the information is contained in the centres of the sub-wave-packets for the half-gravity evolution, Equation (29).

We now consider an atomic gas at a finite temperature T. To gain simple insight into the physics of the problem, we model the system as a swarm of minimum-uncertainty Gaussian wave packets

(30)

where the initial wave-packet width equals the thermal de Broglie wavelength, while the initial average coordinates and momenta are distributed according to the Maxwell-Boltzmann distribution

(31)

Each wave packet evolves under the propagators calculated in the previous section:

(32)

and

(33)

where the coefficients again differ for odd and even n. Substituting into Equation (31), we find that the density distribution at the output of the interferometer is

(34)

with a normalisation constant and

(35)

(36)

In the relevant limit, only the terms with equal indices in Equation (34) are important, and the density profile at the final time reduces to a sum of weighted Gaussians:

(37)

or

(38)

Notice that the value of the gravitational acceleration g is contained only in the weights of the modes.

The requirement is that the sub-wave-packets in Equation (37) are spatially separated. Considering Equation (35), we have

(39)

which means

(40)

As expected, the spatial-separation condition for Equation (37) requires that the initial wave-packet width (the thermal de Broglie wavelength) be much larger than the lattice spacing of the KD potential. This is consistent with Equation (11).
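The separation condition just stated can be written compactly. With m the atomic mass and d the spatial period of the KD lattice (a symbol not shown in the extracted equations), the standard expression for the thermal de Broglie wavelength gives a crossover temperature below which the wave packets remain addressable:

```latex
\lambda_T \;=\; \frac{h}{\sqrt{2\pi m k_B T}} \;\gg\; d
\quad\Longrightarrow\quad
T \;\ll\; T^{*} \;=\; \frac{h^{2}}{2\pi m k_B d^{2}} .
```

This is the origin of the temperature independence of the sensitivity discussed below: for T well under T*, Equation (37) holds and the Fisher information does not depend on T.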
The important result is that, as long as this condition is satisfied, the sensitivity does not depend on the temperature.

Substituting the density function of Equation (37) at the measurement time into the Fisher information, Equation (14), we obtain

(41)

The Fisher information of our system depends on the temperature, the initial density profile, the interferometer transformation, and the choice of observable, which here is the spatial position of the atoms. In this case, the estimator can simply be a fit of the final density profile. The same results would, however, be obtained by choosing as observable the number of particles in each Gaussian spatial mode. Since the initial state is made of uncorrelated atoms, there is no need to measure correlations between the modes in order to saturate the Cramér-Rao lower bound, Equation (13), at the optimal value of the phase shift.

Before proceeding to discuss the finite-temperature case, we calculate the highest sensitivity of the unbiased estimation of the parameter g, which is guaranteed by the no-overlap condition.

In this limit, the Fisher information can be calculated analytically:

(42)

where the coefficient differs for even and odd n. Finally, the Cramér-Rao lower bound, Equation (13), becomes

(43)

Equation (43) can be rewritten as

(44)

If the gravity field is switched on during the last KD pulse, the density profile at the final time is described by Equation (38). In this case there is a further contribution to the Fisher information, Equation (42), from the shift of the centres of the sub-wave-packets, and we have

(45)

5. Sensitivity

We now estimate the expected sensitivity under realistic experimental conditions.
We consider 10^5 88Sr atoms trapped in a harmonic trap, and a Kapitza-Dirac periodic potential with a given lattice spacing and recoil energy, with the KD pulses applied for a short time.

With a suitable strength of the KD potential, a single pulse creates ~9 modes, which already provide a competitive sensitivity with a single measurement shot and a phase-accumulation time of 0.1 seconds. This sensitivity increases further, see Equation (43), after n pulses and longer phase-accumulation times. Under these conditions, the maximum length spanned by the 88Sr atoms also increases; see the black lines in Figure 3. In practice, the sensitivity is limited by the effective length of the harmonic confinement. With current technologies using magnetic traps, the largest spatial separation L could be pushed up to a few millimetres.

Since the thermal de Broglie wavelength decreases with increasing temperature, the no-overlap condition, Equation (11), breaks down above a crossover temperature. In Figure 2 we plot the normalised sensitivity as a function of the temperature. A temperature-independent sensitivity is found for various numbers of KD pulses. Once the temperature is increased past the crossover value, the sensitivity is drastically reduced; see Figure 2. Below the crossover, the wave packets are spatially addressable (dark and blue lines in Figure 3). Above it, the distinguishability of the wave packets decreases (red lines in Figure 3) and the uncertainty of the phase estimation increases.

Figure 2. (color online) Normalised phase-estimation sensitivity as a function of the temperature for even and odd n.

Figure 3. (color online) Density profiles of the output wave function of Figure 2.
The dark, blue and red lines show temperatures below, equal to, and above the crossover temperature, respectively.

As a comparison with current atom interferometers, we calculate the sensitivity obtained from a simple interference pattern observed after a free expansion of an initial atomic cloud, relevant, for instance, when measuring the gravitational acceleration g using Bloch oscillations. The momentum distribution can be expressed as

(46)

where A is a normalisation factor, j denotes the lattice site, and the phase difference between neighbouring lattice sites appears in the exponent. Because of the finite size of the initial cold atomic cloud, only a finite number of terms contribute to the sum in Equation (46). We therefore have

(47)

where the cutoff is the maximum number of lattice sites occupied by the initial atomic gas. In Equation (46), each site has a Gaussian momentum distribution. Therefore we obtain

(48)

Considering typical experimental situations, we arrive at

(49)

where the relevant time scale is the interaction time of neighbouring cold atoms under the gravity-like force, which can be approximated by the tunnelling time. With the Cramér-Rao lower bound, Equation (13), we have

(50)

Therefore, the sensitivity is

(51)

Comparing Equation (51) with the sensitivity for a single Kapitza-Dirac pulse, our scheme can reach a sensitivity more than three orders of magnitude better than that obtained from an interference pattern. The reason is that the KD pulses can create several wave packets spanning a distance that can be considerably larger than the typical distances between the wave packets created in far-field expansion measurements. In this case, the theoretical gain provided by Equation (43) can be of order 10^3 with only one KD pulse and typical values of the experimental parameters.
A further advantage is that such high-sensitivity interferometry can be realised with a compact experimental setup.

6. Noise and Decoherence

We now consider the effects of noise and imperfections on the sensitivity of the interferometer. We mainly consider two kinds of perturbations that may arise in an experimental realisation of the interferometer: the effect of anharmonicity, described by a position-dependent random perturbation, and the effect of a shift in position between different sequences of the KD pulses.

The effect of anharmonicity is investigated by numerically simulating the interferometric sequence with the potential

(52)

where a position-dependent random perturbation of given strength is added to the harmonic trap. We take the harmonic-oscillator length as the length unit and the inverse of the trap frequency as the time unit. The strength of the external gravity-like potential is described by a dimensionless parameter. To simplify the simulation, in the following we consider only a single KD pulse.

Starting from the ground state of the harmonic trap, the time-dependent wave functions are found by an operator-splitting method. Using groups of random numbers, we generate densities at the measurement time. The average density is then used to calculate the Fisher information for a given perturbation strength, using the finite-difference derivative

(53)

Due to the perturbation potential, the sub-wave-packets are driven back to their initial position with incoherent phases, and the total density profile can be dramatically distorted. It is interesting to note that the KD pulses still perform quite well, and completely spatially separated wave packets can be found at the measurement time; see Figure 4. With increasing perturbation strength, the visibility of the wave packets decreases compared with the ideal case (black line, no perturbation).
This clearly impacts the sensitivity, which can be quantified by calculating the Fisher information. The results are presented in Figure 5. Generally speaking, a strong perturbation of the harmonic potential decreases the Fisher information dramatically, see Figure 5(b), while for weak perturbations it is still possible to obtain a sensitivity comparable with the ideal case.

A shift of the optical lattice with respect to the harmonic trap is a further possible reason for a decreased sensitivity. Assuming an off-centre shift between two consecutive KD pulses, the wave function after the second KD pulse is

(54)

where we have used the properties of the Bessel generating function:

(55)

Figure 4. (color online) Density profiles around each momentum component at the measurement time. The black line is for the pure harmonic trap; the green line is the average density over ten realisations of the random perturbation; the blue and pink lines correspond to stronger perturbations.

Figure 5. (color online) The average Fisher information for the two cases (a) and (b).

Equation (55) shows that the effect of the off-centre shift is only to add a phase shift to each sub-wave-packet. Therefore, the no-overlap condition, Equation (11), is not modified by the off-centre shift. In this case, the final density profile is

(56)

Equation (56) shows that the centre shifts can induce a fluctuation around the estimated value of d. If the off-centre shifts come from external noise, they do not play a crucial role in the estimate, and they have only a small effect on the Fisher information:

(57)

7. Conclusion

During the last few decades, matter-wave interferometry has been successfully extended to the domain of atoms and molecules. Most current interferometric protocols for the measurement of gravity or inertial forces are based on the manipulation of free-falling atoms in Mach-Zehnder-like configurations.
Here we propose an atomic multimode interferometer with atoms trapped in a harmonic potential, where the multiple beam-splitter operations are implemented with Kapitza-Dirac pulses. The mirror operations are performed by the harmonic trap, which coherently drives a tunable number of spatially addressable atomic beams. All interferometric processes, including splitting, phase accumulation and reflection, are performed and completed within the harmonic trap; therefore, all trapped atoms contribute to the sensitivity. We have applied our scheme to the estimation of the gravitational acceleration and estimate, with realistic experimental parameters, a sensitivity of 10^-9, significantly exceeding the sensitivity of current interferometric protocols.

Acknowledgements

This work is supported by the National Science Foundation of China (No. 11374197 and No. 11504215) and by PCSIRT (No. IRT13076).

Appendix

Appendix A

To obtain Equation (15), we have used a Bessel-function identity, with which we have

(58)

where a further identity,

(59)

has been used to obtain

(60)

Appendix B

For Equation (45), using Equations (41) and (29), we obtain

(61)

With the initial state

(62)

we get

(63)

where the coefficient differs for even and odd n. So

(64)

The second step uses the no-overlap condition of Equation (11).

Conflicts of Interest

The authors declare no conflicts of interest.

References

Abbott, B.P., et al.
(2016) Physical Review Letters, 116, Article ID: 241103. http://dx.doi.org/10.1103/PhysRevLett.116.241103 Zehnder, L. (1891) Zt. Instrumentenkd, 11, 275. Culshaw, B. (2006) Measurement Science and Technology, 17, R1. http://dx.doi.org/10.1088/0957-0233/17/1/R01 Cronin, A.D., Schmiedmayer, J. and Pritchard, D.E. (2009) Reviews of Modern Physics, 81, 1051. http://dx.doi.org/10.1103/RevModPhys.81.1051 Fixler, J.B., Foster, G.T., McGuirk, J.M. and Kasevich, M.A. (2007) Science, 315, 74-77. http://dx.doi.org/10.1126/science.1135459 Rosi, G., Sorrentino, F., Cacciapuoti, L., Prevedelli, M. and Tino, G.M. (2014) Nature (London), 510, 518-521. http://dx.doi.org/10.1038/nature13433 Biedermann, G.W., Wu, X., Deslauriers, L., Roy, S., Mahadeswaraswamy, C. and Kasevich, M.A. (2015) Physical Review A, 91, Article ID: 033629. http://dx.doi.org/10.1103/PhysRevA.91.033629 Zhou, L., Long, S.T., Tang, B., Chen, X., Gao, F., Peng, W.C., Duan, W.T., Zhong, J.Q., Xiong, Z.Y., Wang, J., Zhang, Y.Z. and Zhan, M.S. (2015) Physical Review Letters, 115, Article ID: 013004. http://dx.doi.org/10.1103/PhysRevLett.115.013004 Chaibi, W., Geiger, R., Canuel, B., Bertoldi, A., Landragin, A. and Bouyer, P. (2016) Physical Review D, 93, Article ID: 021101(R). http://dx.doi.org/10.1103/PhysRevD.93.021101 Keller, C., Schmiedmayer, J., Zeilinger, A., Nonn, T., Dürr, S. and Rempe, G. (1999) Applied Physics B, 69, 303-309. http://dx.doi.org/10.1007/s003400050810 Roati, G., de Mirandes, E., Ferlaino, F., Ott, H., Modugno, G. and Inguscio, M. (2004) Physical Review Letters, 92, Article ID: 230402. http://dx.doi.org/10.1103/PhysRevLett.92.230402 Ferrari, G., Poli, N., Sorrentino, F. and Tino, G.M. (2006) Physical Review Letters, 97, Article ID: 060402. http://dx.doi.org/10.1103/PhysRevLett.97.060402 Poli, N., Wang, F.-Y., Tarallo, M.G., Alberti, A., Prevedelli, M. and Tino, G.M. (2011) Physical Review Letters, 106, Article ID: 038501. 
http://dx.doi.org/10.1103/PhysRevLett.106.038501 Tarallo, M.G., Alberti, A., Poli, N., Chiofalo, M.L., Wang, F.-Y. and Tino, G.M. (2012) Physical Review A, 86, Article ID: 033615. http://dx.doi.org/10.1103/PhysRevA.86.033615 Denschlag, J.H., et al. (2002) Journal of Physics B: Atomic, Molecular and Optical Physics, 35, 3095. http://dx.doi.org/10.1088/0953-4075/35/14/307 Müller, H., Chiow, S.-W., Long, Q., Herrmann, S. and Chu, S. (2008) Physical Review Letters, 100, Article ID: 180405. http://dx.doi.org/10.1103/PhysRevLett.100.180405 Cladé, P., Guellati-Khélifa, S., Nez, F. and Biraben, F. (2009) Physical Review Letters, 102, Article ID: 240402. http://dx.doi.org/10.1103/PhysRevLett.102.240402 Müller, H., Chiow, S.-W., Herrmann, S. and Chu, S. (2009) Physical Review Letters, 102, Article ID: 240403. http://dx.doi.org/10.1103/PhysRevLett.102.240403 Szigeti, S.S., Debs, J.E., Hope, J.J., Robins, N.P. and Close, J.D. (2012) New Journal of Physics, 14, Article ID: 023009. http://dx.doi.org/10.1088/1367-2630/14/2/023009 Garcia, O., Deissler, B., Hughes, K.J., Reeves, J.M. and Sackett, C.A. (2006) Physical Review A, 74, Article ID: 031601(R). http://dx.doi.org/10.1103/PhysRevA.74.031601 Sapiro, R.E., Zhang, R. and Raithel, G. (2009) Physical Review A, 79, Article ID: 043630. http://dx.doi.org/10.1103/PhysRevA.79.043630 Close, J. and Robins, N. (2012) Physics, 5, 26. http://dx.doi.org/10.1103/Physics.5.26 Li, W.D., He, T.C. and Smerzi, A. (2014) Physical Review Letters, 113, Article ID: 023003. http://dx.doi.org/10.1103/PhysRevLett.113.023003 Pezzé, L. and Smerzi, A. (2014) Atom Interferometry. Proceedings of the International School of Physics “Enrico Fermi”, Course 188, Societa’ Italiana di Fisica, Bologna and IOS Press, Amsterdam. Holstein, B.R. (1992) Topics in Advanced Quantum Mechanics. Addison-Wesley, Reading, MA. Edwards, M., Benton, B., Heward, J. and Clark, C.W. (2010) Physical Review A, 82, Article ID: 063613. 
http://dx.doi.org/10.1103/PhysRevA.82.063613 Moharam, M.G. and Young, L. (1978) Applied Optics, 17, 1757-1759. http://dx.doi.org/10.1364/AO.17.001757 Abramowitz, M. and Stegun, I.A. (1970) Handbook of Mathematical Functions: With Formulas, Graphs and Mathematical Tables. National Bureau of Standards, Washington DC. Deng, L., Hagley, E., Denschlag, J., Simsarian, J., Edwards, M., Clark, C., Helmerson, K., Rolston, S. and Phillips, W. (1999) Physical Review Letters, 83, 5407. http://dx.doi.org/10.1103/PhysRevLett.83.5407 Hyllus, P., Laskowski, W., Krischek, R., Schwemmer, C., Wieczorek, W., Weinfurter, H., Pezze, L. and Smerzi, A. (2012) Physical Review A, 85, Article ID: 022321. http://dx.doi.org/10.1103/PhysRevA.85.022321 Chwedenczuk, J., Piazza, F. and Smerzi, A. (2013) Physical Review A, 87, Article ID: 033607. http://dx.doi.org/10.1103/PhysRevA.87.033607 Pedri, P., Pitaevskii, L., Stringari, S., et al. (2001) Physical Review Letters, 87, Article ID: 220401. http://dx.doi.org/10.1103/PhysRevLett.87.220401 Bandrauk, A.D. and Shen, H. (1994) Journal of Physics A: Mathematical and General, 27, 7147. http://dx.doi.org/10.1088/0305-4470/27/21/030", null, "", null, "", null, "", null, "", null, "[email protected]", null, "+86 18163351462(WhatsApp)", null, "1655362766", null, "", null, "Paper Publishing WeChat", null, "" ]
[ null, "https://html.scirp.org/file/7-7502942x2.png", null, "https://html.scirp.org/file/7-7502942x3.png", null, "https://html.scirp.org/file/7-7502942x4.png", null, "https://html.scirp.org/file/7-7502942x5.png", null, "https://html.scirp.org/file/7-7502942x6.png", null, "https://html.scirp.org/file/7-7502942x8.png", null, "https://html.scirp.org/file/7-7502942x9.png", null, "https://html.scirp.org/file/7-7502942x10.png", null, "https://html.scirp.org/file/7-7502942x11.png", null, "https://html.scirp.org/file/7-7502942x12.png", null, "https://html.scirp.org/file/7-7502942x13.png", null, "https://html.scirp.org/file/7-7502942x14.png", null, "https://html.scirp.org/file/7-7502942x15.png", null, "https://html.scirp.org/file/7-7502942x16.png", null, "https://html.scirp.org/file/7-7502942x17.png", null, "https://html.scirp.org/file/7-7502942x18.png", null, "https://html.scirp.org/file/7-7502942x19.png", null, "https://scirp.org/images/Twitter.svg", null, "https://scirp.org/images/fb.svg", null, "https://scirp.org/images/in.svg", null, "https://scirp.org/images/weibo.svg", null, "https://scirp.org/images/emailsrp.png", null, "https://scirp.org/images/whatsapplogo.jpg", null, "https://scirp.org/Images/qq25.jpg", null, "https://scirp.org/images/weixinlogo.jpg", null, "https://scirp.org/images/weixinsrp120.jpg", null, "https://scirp.org/Images/ccby.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.89493275,"math_prob":0.9255478,"size":24099,"snap":"2023-14-2023-23","text_gpt3_token_len":5042,"char_repetition_ratio":0.15563396,"word_repetition_ratio":0.018341513,"special_character_ratio":0.21050666,"punctuation_ratio":0.079915635,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.981232,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54],"im_url_duplicate_count":[null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-05-31T00:44:21Z\",\"WARC-Record-ID\":\"<urn:uuid:bd6a47c6-dcde-4dd3-8bc8-470c060a2778>\",\"Content-Length\":\"156018\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b9efe4e8-9200-48f1-9e0c-e7cfadbf5f43>\",\"WARC-Concurrent-To\":\"<urn:uuid:79301fc8-f189-4f06-accd-9d690f39c4db>\",\"WARC-IP-Address\":\"107.191.112.46\",\"WARC-Target-URI\":\"https://scirp.org/journal/paperinformation.aspx?paperid=71893\",\"WARC-Payload-Digest\":\"sha1:XD2LPP4RVJ46XGTFTBYUSSAAPX35ZOXZ\",\"WARC-Block-Digest\":\"sha1:3NTYNCKZIDCW5G4FHMEPR5HTNMKNGRIN\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224646181.29_warc_CC-MAIN-20230530230622-20230531020622-00385.warc.gz\"}"}
https://adrian-e-rosales.medium.com/codeviant-25-linked-list-surgery-711df3d64164
[ "# CoDEVIANT #25 — Linked List Surgery\n\nSo today, we’re going to put a linked list under a scalpel. Our job is to make sure we don’t fuck up the entire list while trying to get one little node out of there the correct way.\n\nHow I think I’ll approach this:\n\nAlthought it may end up more like this:\n\nOur mission is to write a function that:\n\n• Takes an integer k\n• Removes the kth node from the end of the Linked List\n\nSPECIAL DIRECTIONS:\n\n• If the head of the linked list turns out to be the one we’re supposed to remove, we simply update its value and next-pointer.\n\nA quick word on linked lists. It’s an alternative datastructure to an array. Arrays are groups of bytes of data that are contiguously located in memory. Each element of the array is has its start positioned immediately after the end of the preceding element. There is no daylight between where one element ends and the next element starts.\n\nConversely, a linked list is made of nodes. Nodes have two parts:\n\n• the byte(s) of memory that holds the node’s value, and\n• the byte(s) of memory pointing to the address of the chunk that is the start of the next node.\n\nIn the crudely photoshopped image above, you can pretend the chess-board is the working memory of a computer. The portion in the white squares of the black-outlined groupings are the value-portions of the linked-list node. The darksquares adjacent to the value-portions are the pointers. 
Their value is a reference to the address of where the value part of the next node starts.\n\nSo here is the answer I came up with:\n\n`let counter = 1\n\nclass LinkedList {\n  constructor(value) {\n    this.value = value;\n    this.next = null;\n  }\n}\n\nfunction removeKthNodeFromEnd(head, k) {\n  counter = 1\n  processNode(head, 1)\n  let calculatedNumber = counter - k\n  if (calculatedNumber != 0) {\n    clipTreeLinks(head, calculatedNumber)\n  } else {\n    head.value = head.next.value\n    head.next = head.next.next\n  }\n}\n\nfunction processNode(node, number) {\n  if (node.next != null) {\n    counter = number + 1\n    processNode(node.next, counter)\n  }\n}\n\nfunction clipTreeLinks(node, number) {\n  let clipCounter = 0\n  while (clipCounter < number) {\n    if (node.next != null) {\n      if (clipCounter == number - 1) {\n        node.next = node.next.next\n      }\n      clipCounter++\n    }\n    node = node.next\n  }\n}`\n\nThere’s a lot going on here, so as always we are going to break it down step by step.\n\n`let counter = 1\n\nclass LinkedList {\n  constructor(value) {\n    this.value = value;\n    this.next = null;\n  }\n}`\n\nI create a globally scoped variable; it is editable and able to be referenced from any scope.\n\n`let counter = 1`\n\nThen I create the LinkedList class. With the value argument used to instantiate an object based off the LinkedList class, the instance’s value (this.value) becomes whatever we pass into the constructor for value. The instance’s next property is initialized to null upon instantiation.\n\n`function removeKthNodeFromEnd(head, k) {\n  counter = 1\n  processNode(head, 1)\n  let calculatedNumber = counter - k\n  if (calculatedNumber != 0) {\n    clipTreeLinks(head, calculatedNumber)\n  } else {\n    head.value = head.next.value\n    head.next = head.next.next\n  }\n}`\n\nNext, we create the method removeKthNodeFromEnd. Here’s what goes on line by line:\n\n• We set counter, our global variable, to 1 (in case it happens to not be 1 at the moment)\n• We call the method processNode with the arguments of head and 1.
It should be mentioned at this time that the code of processNode will impact what ‘counter’ is.\n• Then we declare a variable called calculatedNumber that will be the difference between what counter is at that point and k.\n• If calculatedNumber is not zero, which also means that we would not be trying to remove the head of the linked-list, then we call clipTreeLinks and pass in the head and the calculatedNumber.\n• If calculatedNumber IS zero, however, we make the head’s value equal the following node’s value\n• Also, if calculatedNumber IS zero, we make the head’s next value equal the following node’s next value.\n\nSo now we just need to know what’s going on in the processNode and clipTreeLinks methods. Let’s start with processNode!\n\n`function processNode(node, number) {\n  if (node.next != null) {\n    counter = number + 1\n    processNode(node.next, counter)\n  }\n}`\n\nBehold processNode. It takes the argument of a linked-list node (meaning it could be the head or some other part of the list), and a number.\n\n• First, we look at the passed-in node’s next value. If it is NOT null, then we make the global variable counter equal the passed-in number plus 1. Afterwards, we call processNode on the current node’s next value and pass in our newly updated counter as the number-argument.\n• So this will go on over and over until we reach the tail of the linked-list. The tail is the node at the end of the linked list where the next property is null, at which point we enter the implicit else portion of the if-statement where nothing happens.\n\nThe result of this sets up the global variable counter so that in the removeKthNodeFromEnd method, we can calculate the value of the variable calculatedNumber.
We do this by subtracting k from counter.\n\n`let calculatedNumber = counter - k`\n\n`if (calculatedNumber != 0) {\n  clipTreeLinks(head, calculatedNumber)\n} else {\n  head.value = head.next.value\n  head.next = head.next.next\n}`\n\nIn most cases, however, we will call the method clipTreeLinks, passing in the head and the calculatedNumber variable, which will be anything but zero.\n\n`function clipTreeLinks(node, number) {\n  let clipCounter = 0\n  while (clipCounter < number) {\n    if (node.next != null) {\n      if (clipCounter == number - 1) {\n        node.next = node.next.next\n      }\n      clipCounter++\n    }\n    node = node.next\n  }\n}`\n• First we say that clipCounter is zero.\n• We say that while clipCounter is less than the number argument that was passed in, if node.next exists, then in MOST cases we will simply increment clipCounter by one and say the node is now the following node.\n• However, if clipCounter equals the number minus 1, THEN we say that node.next will be the subsequent node’s next.\n\nBasically how this works is that number is the 1-indexed length of the linked-list (1…2…3…4…etc). To remove a node from a singly-linked list, we need to change what the previous node’s .next value is. By doing this, we effectively cut a node out of the chain of pointers indicating the address of the next node. So it makes sense that we’re looking for the integer before number. So once we find that place in the list, we say that the next value of the node before the one we want to remove will now (instead of being node.next [leading to the one we want to get rid of]) be node.next.next.\n\n`if (clipCounter == number - 1) {\n  node.next = node.next.next\n}`\n\nAnd that’s literally it! We can sew the patient up and go about our day.\n\n— — — — — — — — — — -\n\nSo what’s a better way of solving this?
Apparently it’s this:\n\n`function removeKthNodeFromEnd(head, k) {\n  let counter = 1\n  let first = head\n  let second = head\n  while (counter <= k) {\n    second = second.next\n    counter += 1\n  }\n  if (second == null) {\n    // this means the value we wanted to remove from a 10-length list is the 10th,\n    // which means the first pointer (head) is at the node that needs updating\n    head.value = head.next.value\n    head.next = head.next.next\n    return\n  }\n  while (second.next != null) {\n    second = second.next\n    first = first.next\n  }\n  // once second goes far enough to reach the tail,\n  // we know that first.next needs to be first.next.next\n  first.next = first.next.next\n}`\n\nSo first we create a counter variable in the scope of the removeKthNodeFromEnd function, then we create two variables that equal the head argument: first and second.\n\nThe idea with this solution is that we’re going to send out second before first. We will continuously make second equal second.next and increment counter by 1. We do this as long as counter is less than or equal to k.\n\n`while (counter <= k) {\n  second = second.next\n  counter += 1\n}`\n\nIf second equals null after all of that, then it means that for a 10-length list the integer k is 10, so our first doesn’t need to move at all. We just have to update head’s values.\n\n`if (second == null) {\n  head.value = head.next.value\n  head.next = head.next.next\n  return\n}`\n\nHowever, if second.next does not equal null, then we say that second equals second.next and first equals first.next. By making our while-loop with the counter be inclusive of k, we set up our calculation to not have any “off by one” errors, so that once second.next indeed equals null, first is in the position of the node before the one we want to get rid of. And in that case all we do is say that first.next equals first.next.next and that’s all there is!\n\n`first.next = first.next.next`", null, "Here’s a crude image of k being 3, where we send ‘second’ out 3 spaces.
And then once more so that second.next equals null, at which point first.next needs to become first.next.next." ]
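The two-pointer approach above translates almost line-for-line into other languages. Here is a quick sketch in Python (the names `Node` and `remove_kth_from_end` are mine, not from the original post): we send `second` out k nodes ahead, then walk both pointers until `second` reaches the tail, and overwrite the head in place when the head itself is the target.

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

def remove_kth_from_end(head, k):
    # send `second` k nodes ahead of `first`
    first = second = head
    for _ in range(k):
        second = second.next
    if second is None:
        # the head is the k-th node from the end: overwrite it in place
        head.value = head.next.value
        head.next = head.next.next
        return
    # advance both pointers until `second` reaches the tail;
    # `first` then sits just before the node to remove
    while second.next is not None:
        second = second.next
        first = first.next
    first.next = first.next.next

# build 0 -> 1 -> 2 -> 3 -> 4 and remove the 2nd node from the end (value 3)
head = Node(0)
cur = head
for v in range(1, 5):
    cur.next = Node(v)
    cur = cur.next
remove_kth_from_end(head, 2)
values = []
cur = head
while cur:
    values.append(cur.value)
    cur = cur.next
print(values)  # [0, 1, 2, 4]
```

Because the advance loop runs exactly k times, there is no "off by one" bookkeeping: when `second` sits on the tail, `first` is exactly one node before the one being removed.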
https://followtutorials.com/2019/06/python-program-to-create-a-class-and-compute-the-area-and-the-perimeter-of-the-circle.html
[ "# Python Program to Create a Class and Compute the Area and the Perimeter of the Circle\n\nIn this example, we will write a Python program to find the area and perimeter of a circle using a class and objects.\n\n## Python Program to Create a Class and Compute the Area and the Perimeter of the Circle\n\nimport math\n\nclass Circle:\n\n    def __init__(self, radius):\n        self.radius = radius\n\n    def area(self):\n        return math.pi * (self.radius ** 2)\n\n    def perimeter(self):\n        return 2 * math.pi * self.radius\n\nr = int(input(\"Enter radius of circle: \"))\nobj = Circle(r)\nprint(\"Area of circle:\", round(obj.area(), 2))\nprint(\"Perimeter of circle:\", round(obj.perimeter(), 2))\n\nThe output of the above program is:-", null, "" ]
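The output screenshot did not survive extraction, so here is what a run might look like. The radius 5 is chosen purely for illustration, and the `input()` call is replaced by a constant so the snippet is self-contained (note the constructor storing `radius` is implied by the `Circle(r)` call and the use of `self.radius`):

```python
import math

class Circle:
    def __init__(self, radius):
        self.radius = radius

    def area(self):
        return math.pi * (self.radius ** 2)

    def perimeter(self):
        return 2 * math.pi * self.radius

obj = Circle(5)  # stands in for int(input("Enter radius of circle: "))
print("Area of circle:", round(obj.area(), 2))        # Area of circle: 78.54
print("Perimeter of circle:", round(obj.perimeter(), 2))  # Perimeter of circle: 31.42
```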
https://www.educationquizzes.com/11-plus/exam-illustrations-maths/addition-subtraction-multiplying-and-dividing-negative-numbers/
[ "# Addition, Subtraction, Multiplying and Dividing Negative Numbers\n\n## Addition and Subtraction of Negative Numbers\n\nNegative numbers are awkward but need not be too much of a nuisance. Most children spot the concept through the use of a thermometer which drops to sub-zero levels. However, manipulating the numbers can be a bit confusing. The key is to remember that adding and subtracting are opposites of each other and that if you 'subtract' a negative number you are saying 'minus a minus' which is the equivalent of adding.\n\nMost numbers we deal with are positive. A sum such as '3 + 5' really means 'positive 3 add positive 5' or '+3 + +5'.\n\nWhen you use a minus in any calculation you need to think logically. '-3 + 5' means that we should start at -3 on our number line and add five to it. It takes us to positive 2.\n\nHowever, '-3 - 5' will take us from -3 on the number line and head 5 further into the negatives. The answer is minus 8. ( -8 )\n\nIf you try adding a negative number you must take it away as the '+' sign is just saying 'put together with...' or '...altogether'.\n\n3 + (-5) means that we have to put 3 together with the -5. That leaves -2 altogether.\n\nSimilarly, if we subtract a negative number, we are doing the OPPOSITE of subtracting and therefore add it.\n\n3 - (-5) = 3 + 5 = 8\n\nRemember, 'minus a minus is a plus'.\n\n## Multiplying and Dividing Negative Numbers\n\nWe have seen how adding and subtracting negative numbers can be logical but difficult to understand at first. Multiplying and dividing is equally logical but again, you need to be aware which way the answer will be expressed - will it be negative or positive?\n\nIf I were to take 5 and multiply it by 6 I would have 30. 
However, if I multiply 5 by -6 the answer becomes -30.\n\nIn effect you are saying '5 lots of -6' which will be -30.\n\nWhen you multiply a positive by a negative, the answer is negative.\n\nIf BOTH numbers become negative, you are going to create a positive number. You are effectively doing the opposite of the opposite and therefore end up back where you started. Think of the 'double negative' in speech - 'I am not not going' means I am going.\n\nIf I were to multiply -4 by -10 then I would get the answer positive 40.\n\nWhen you multiply a negative by a negative, you get a positive.\n\nDivision is exactly the same. If you divide a negative number by a positive one, or a positive number by a negative one, the answer will be negative.\n\n-8 ÷ 2 = -4 and 8 ÷ -2 = -4\n\nWhen we divide a negative number by another negative number, we are going to end up with a positive number.\n\n-8 ÷ -2 = 4\n\nEffectively we are saying 'how many lots of -2 fit into -8?' and the answer will be 4." ]
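The sign rules above are easy to check against any language's integer arithmetic; here is a small Python sanity check using the article's own examples:

```python
# subtracting a negative is the same as adding ("minus a minus is a plus")
assert 3 - (-5) == 3 + 5 == 8
# adding a negative is the same as subtracting
assert 3 + (-5) == 3 - 5 == -2
# positive times/divided by negative is negative
assert 5 * -6 == -30
assert 8 / -2 == -4
# negative times/divided by negative is positive
assert -4 * -10 == 40
assert -8 / -2 == 4
print("all sign rules hold")
```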
https://codeforces.com/blog/entry/5592
[ "### YuukaKazami's blog\n\nBy YuukaKazami, 8 years ago,", null, "### 236A - Boy or Girl\n\nIt is a very simple problem: just count how many distinct chars there are in the input and write the correct answer.\n\n### 236B - Easy Number Challenge\n\nFirst of all we make a table of size a*b*c to store every number's d value, then just brute force through every triple to calculate the answer.\n\n### 235A - LCM Challenge\n\nIt's a simple problem, but many competitors use some wrong guess and fail.\n\nFirst of all we should check if n<=3, and for n=1,2,3 we should output 1, 2, 6 respectively. If n is odd, the answer is obviously n(n-1)(n-2).\n\nIf n is even, we can always achieve (n-1)(n-2)(n-3), so the numbers of the optimal triple can't be very small compared to n. So just iterate over every triple in [n-50,n] and update the answer.\n\n### 235B - Let's Play Osu!\n\nLet us take a deep look at how the score is calculated: an n-long 'O' block contributes n^2 to the answer.\n\nLet us reformat this problem a bit and consider the following problem.\n\nFor each pair of 'O' with no 'X' between them, add 2 to the score.\n\nFor each 'O', add 1 to the score.\n\nWe can see that these two problems are exactly the same.\n\nProof:\n\nAn n-long 'O' block contains C(n,2) pairs of 'O' and n single 'O's, and\n\n2*C(n,2) + n = n^2.\n\nSo consider each event(i,j), which means s[i] and s[j] are 'O' and there's no 'X' between them.\n\nIf event(i,j) happens, it adds 2 to the score.\n\nSo we only need to sum up the probabilities of all events and multiply by 2.\n\nThen our task becomes how to calculate the sum over all event(i,j).\n\nWe can see the probability of event(i,j) is simply p_i * p_{i+1} * ... * p_j.\n\nThen we denote by dp(j) the sum of the probabilities of all event(i,j) with i<j,\n\nso dp(0)=0 and dp(j)=(dp(j-1)+p_{j-1})*p_j\n\n### 235C - Cyclical Quest\n\nThis problem can be solved by many suffix structures.\n\nI think suffix automaton is the best way to solve it because it is simple and clear.\n\nSo let us build a suffix automaton of the input string S.\n\nAnd consider the query string x.\n\nLet
 us build a string t which is x concatenated with x, dropping the last char,\n\nso every consecutive substring of t with length |x| is a rotation of x.\n\nLet us read the string t with the suffix automaton we've built, and at every step take the first char out and add a new char, increasing the answer by the number of occurrences in S of the current length-|x| substring of t (which is a rotation of x).\n\nAnd one more thing: we should consider the repetend (smallest period) of x as well, so equal rotations are not counted twice.\n\nCheck my solution here: 2403375\n\nCheck this if you are not familiar with suffix automata: e-maxx's blog\n\n### 235D - Graph Game\n\nFirst of all, let us consider the tree case.\n\nLet us call the event \"when we select A as the deleting point, B is connected to A\" Event(A,B).\n\nAny Event(A,B) that happens adds 1 to totalCost.\n\nSo we can simply calculate the probability of every Event, and add them up.\n\nLet us consider how to calculate Event(A,B)'s probability.\n\nAssume there are n vertices on the path between A and B; the probability is simply 1 / n.\n\nLet us try to prove it using induction.\n\nFirst, consider a connected sub-graph of the tree containing both A and B. If the sub-graph has only n vertices (i.e. exactly the path), the event happens only if we select vertex A first, so the probability is 1 / n.\n\nOtherwise, assume it has x vertices. There are two cases for the first selected vertex: it is on the path between A and B, or it is not.\n\nIn the first case, Event(A,B) happens only if the selected vertex is A itself (if we select another path vertex before A, Event(A,B) can never happen), which contributes probability 1 / x.\n\nIn the second case, the sub-graph containing A and B becomes smaller, so by induction this case contributes ((x-n)/x)*(1/n) = (x-n)/(xn).\n\nAdding them up gives 1/x + (x-n)/(xn) = n/(xn) + (x-n)/(xn) = 1/n, which proves the statement.\n\nThen we can solve the tree case by simply adding up the inverse of every path's length in the tree.\n\nFor the original graph, there are at most 2 paths between A and B.\n\nIf there's only one path, everything is the same as the tree case.\n\nOtherwise, the path between A and B must pass through the cycle in the
graph.\n\nLet us examine this case. You can see that there are 2 types of vertices:\n\nVertices on the path from A to the cycle or from B to the cycle: they must not be selected before A, because once one of them is selected, A and B lose connectivity. Let us call their count X.\n\nVertices on the cycle: there are two paths from A to B, each containing a path along the cycle; let us call the counts of cycle vertices on these two paths Y and Z.\n\nSo there are two possibilities: the X and Y vertices are still free when A is selected, or the X and Z vertices are still free when A is selected.\n\nAnd we should subtract the case that the X, Y and Z vertices are all free when A is selected, because it is double-counted.\n\nSo the probability is 1 / (X + Y + 1) + 1 / (X + Z + 1) - 1 / (X + Y + Z + 1).\n\nCheck Petr's solution for the details: 2401228\n\nAnd my C++ implementation: 2403938\n\n### 235E - Number Challenge\n\nLet us handle the primes one at a time, keeping track of the upper limits for a, b, c.\n\nIf we fix the powers of 2 in i, j, k to be 2^x, 2^y, 2^z, then their upper limits become a / 2^x, b / 2^y, c / 2^z, and the power of 2 in their product is just x+y+z.\n\nLet us denote by dp(a, b, c, p) the answer to the original problem where i, j, k have upper limits a, b, c and may only use prime factors not less than p.\n\nLet the next prime be q; then we can try to fix the power of p in i, j, k and get the new upper limits.\n\nSo the transition is: dp(a, b, c, p) = sum over x, y, z of dp(a / p^x, b / p^y, c / p^z, q)·(x + y + z + 1)\n\nCheck my code here: 2404223\n\nAlso you can check rng_58's solution here: http://codeforces.ru/blog/entry/5600\n\nIf you have any problems, you can ask here :)", null, "Tutorial of Codeforces Round #146 (Div. 1)", null, "Tutorial of Codeforces Round #146 (Div. 2)", null, "Comments (91)\n » actually i m not able to get anything from your java code in cyclical quest Div-2 E .
someone please explain it with the help of his c++ code Thank You,\n• » » Oh...I think you should learn suffix automaton first if you want to understand it..I'll post some good materials about suffix automaton in the tutorials...after you learned suffix automaton, it's easy to solve this problem.\n• » » » very very thank you..:)\n » 8 years ago, # | ← Rev. 4 →   I have better sol for Div 1.A only like this readln( n ); if n = 1 then Result( 1 ); if n = 2 then Result( 2 ); if n = 3 then Result( 6 ); if odd( n ) then res := n * ( n - 1 ) * ( n - 2 ) else begin if n mod 3 = 0 then res := ( n - 1 ) * ( n - 2 ) * ( n - 3 ) else res := n * ( n - 1 ) * ( n - 3 ); end; Result( res ); Div 1.B, I think Ray030123 has better solution :)\n• » » yeah this is the perfect sol. for it..\n• » » » Yes it is :)\n• » » Oh, thank you for translating my Python code here. * P.S. It's ray0(4)0123\n• » » » yes, sorry :\">\n• » » Please, can you tell why it's true?\n• » » » If n mod 2 = 1 => res = n * ( n — 1 ) * ( n — 2 )Of course it is, right ???if n mod 2 = 0 { We can see that n and ( n — 2 ) have GCD is 2 => res = n * ( n — 1 ) * ( n — 2 )/2 It's is small, so we think about n * ( n — 1 ) * ( n — 3 ) But if n mod 3 = 0 we can see that the result is smaller :( So we think about ( n — 1 ) * ( n — 2 ) * ( n — 3 ) You should think more careful and then you will know why :D }\n• » » » » thank you\n• » » » » 5 years ago, # ^ | ← Rev. 2 →   thank you !!\n• » » » » Thank you! :)\n• » » » » I dont think that ur saying if n mod 3 = 0 we can see result will be smaller is right, i think then n and n — 3 will no longer be co-prime in that case\n• » » » » thank you! It really helps me :)\n• » » 10 months ago, # ^ | ← Rev. 2 →   You can make it more easier n = int(input()) if n==1 or n==2: print(n) exit() if n&1: print((n)*(n-1)*(n-2)) else: print(max(n*(n-1)*(n-3), (n-1)*(n-2)*(n-3)))\n• » » » it is wrong. 
As if n%3==0 then n*(n-1)*(n-3) wont be an answer because lcm of n and n-3 wont be n*(n-3).\n• » » Faaaar better than what is given in the official solution. I was not able to understand that 50 but here it's all understandable.\n » How do you solve C using a different suffix structure? Say a suffix array.\n• » » Let us build a suffix array of s and concatenate those t-string made from x to its back.then we can find each rotation's occur time in string s.\n• » » » Could you give more detailed explanation of this solution, please?How do we use suffix array to solve this problem?\n• » » » » Same question.\n• » » » » Firstly we should concatenate original string S and all the query strings (each doubled without last symbol), put different symbols between strings (e.g. 300, 301,..., it would be array of ints, not chars), then make a suffix array on that concatenated string. We will have 2 arrays — sorted suffixes of all the strings and lengths of common prefixes between them. Then we should build segment tree on the array of lengths to be able to quickly find minimum number on the any segment. Then iterate on the array of suffixes. For each suffix of query string we firstly should determine weather we use it (i.e. it is not equal to another shift of this query). We can determine it by getting the minimum number on the array of lengths between past occurrence of this query and current one. If we use it we should determine the number of suffixes of S that have common prefix with it not less than length of this query. We can calculate it by using binary search and segment tree (complexity O(log^2(n))) or by using treap (complexity O(log(n)))\n• » » » » » Sounds nice. Did you try it though? I just coded it up, and can't get it to work in time. I construct the 3-million suffix array in 3 seconds and the rest of my code is O(n log n).\n• » » » » » » I have read your solution,It seem 3-million suffix array is too slow. 
well, There is a way using only 1-million suffix array construction.You just construct the suffix-array of the string S. Then for each query x,binary-search the position in SA in O(logn+|x|),then you need add a char at the front to x and delete a char at end of it.if we know the position of x in SA, then cx's position(means add a char at the front of x) can be calculated in O(log n) as well, when comparing cx to a suffix,you first compare the first char, then compare x to the suffix' next suffix(which we already know).\n• » » » » » » » Yeah, this back-stepping method (x -> cx) is what I've been missing during the contest. I'll try to code it up later. Thank you!\n » My Div1A problem got accepted but I have no idea about the proof of the solution... can anyone provide me some??.........\n » In 235B, what do you mean by: So if they are i, j , .... \n• » » I means if the two 'O' in the pair have indexes i and J.\n » in div 2 problem C i am not able to understand why 50,,,,, why to check for triplet having greatest lcm in the range [n-50,n]\n• » » me, too. i've checked only triplets with n..n-5 at most and it get AC. Well, thanks to the author for good problemset (esp. for me, learning Math), but i expected to see some more strict manual with good explanation, why it goes well. \"good\" brute force is not the answer, you know...\n• » » 8 years ago, # ^ | ← Rev. 2 →   50 is the upperbound, it is a aproximation solution. You can use 100 too.\n• » » I used [n — 3, n] and it got AC.\n » in Problem \"Lets play Osu\", i am not able to understand what to do from the editorial... if u are writing the editorial make sure that u explain well and not 2 line explanation,,, learn something from Topcoder.com on how to write good editorials,,, being red doesnt mean that everyone is as experienced as u ..... we need good guidance.\n• » » So we had some sort of common problem (all wars come from it, i assure you) — \"mis-\", \"not-\" and \"not full-\" \"-understanding\". 
Please dont get me wrong.\n• » » Oh...I'll try to make it more clear today...But your handle say you're a red coder though :)\n• » » » in India programming is not as good as in Russia or china or japan... but i assure u if i get good guidance i would definitely do better... can u give me ur Email id or Facebook Link so that i can contact u when having some trouble.. :)\n• » » » » 8 years ago, # ^ | ← Rev. 2 →   http://ask.fm/WJMZBMRanyway...I don't think born in India is an excuse, Actually I learned programming all from internet resource in English.As you are an India, I bet your English is better than mine.Just work harder bro.\n » Can you provide an English paper about suffix automata ?? Thank you . Chinese paper is also OK for me :)\n• » »\n• » » » I can't understand suffix tree, it's so hard for me....\n » 8 years ago, # | ← Rev. 2 →   Shouldn't the", null, "in solution for 235B be a", null, "?\n• » » sorry,fixed\n » What happens after 235D — Graph Game is extended to a normal graph?\n• » » I think then it is totally unsolvable.To the least you can use a O(2^n) dp instead.But It can be solved in cactus, and the first version of this problem is on cactus, I think that is too hard so I simplify a bit.\n » orz))))\n » In Problem \"Let's Play Osu!\":\"It can be interpreted as, 2*number of pair of 'O' in this 'O' block + number of 'O' in this 'O' block.\"What was the motivation for this? Or is it just experience? :)\n• » » Well, some thoughts just came out...It is hard to give a exact motivation...\n• » » If you are looking for proof for the statement, then n^2 = (1+1+1....n times)^2 = 1^2 + 1^2 +..n times + 2 * (all combinations of 1s...nC2 times) However, I can still not see an O(n) approach using the explanation. 
I wish if someone can show me an implemented solution using the above insight.I solved the problem using the observation, that if an O is added at the end and we know the expected value(let say, X) of Os at the end then the score would increase by (X+1)^2 -X^2 = 2*X + 1\n » http://www.codeforces.com/contest/235/submission/2408731http://www.codeforces.com/contest/235/submission/2408690These are the link of my submission for Div1 problem E. The first one is Accepted and the second one is getting TLE. But the only difference of the solutions is that I called from last prime to the first prime in the first and in the second one was the reverse(From the first prime to the last).... the rest solutions are the same... but I have no Idea why it is happening so.... Am I missing something obvious?\n » If you have time, please comment part of your code. I solved some simple tasks using Suffix Automaton (those described in e-max blog) but it seems that you are augmenting the data structure. I can't understand the meaning of \"nxt\" nor \"repetend\".sorry for troubles and thank you.\n• » » It's just a kmp-algorithm to get the repetend of the query x_i\n• » » » thank you for your answer. i broke my mind for hours until i came up with a solution. after that, i went back to your code and could understand almost everything.\n » 8 years ago, # | ← Rev. 3 →   I tried that link on suffix automatons. I translated it with Google Translate. I could not understand what this means:-------- Consider any non-empty substring t line s . Then called the set of endings endpos (t) set of all positions in the line s In which the end of the lines with t .We will call two substrings t_1 and t_2endpos Equivalent if their sets of endings are the same: endpos (t_1) = endpos (t_2) . Thus, all non-empty substrings s can be divided into several classes according to their equivalence sets endpos . --------I get it that \"line s\" means \"string s\". I am confused as to what is meant exactly by endpos(t). 
Does it mean all suffixes which end with 't'? And what is the significance of two substrings being endpos equivalent?Thanks.\n » Please could anyone shed more light on the Let's Play Osu problem...I still haven't gotten the main idea...Thank You!!!\n• » » Same problem with me.... Please someone explain deeply how to solve Let's Play Osu....\n• » » » I am very busy this week(I'm a high-school student and have to go to school)...well,I'll do it tonight...\n• » » » » thats ok :)\n• » » » » pls explain one thing , in Lets Play Osu problem we have to consider each pair of O's so why u said consider only those pair of O's having no X between them....,\n » I tried a lot but did not understand the editorial of 235E Number Challenge. Somebody please explain in detail.\n• » » the same question.\n » Excellent idea on problem D in Div2. Is there any background? I don't think I can come up with this idea if I didn't see ti before.\n » WJMZBMR, your English is not very good. I'm hardly understanding what you meant in the post.\n• » » Dude, despite his english he has written a decent editorial and clarified everyone's doubts in the comments. You cannot expect everyone to know good English. It's just another language!\n• » » » You are right, dude. I want too much.\n » your contest has nice problem !\n » 6 years ago, # | ← Rev. 2 →   if in the problem A wanted to find the ressult with 4 distinct number what would be the result????????\n• » » Check this, it is for general case k numbers.http://ideone.com/DJiFpV\n• » » » I dont understand how to handle the sizes of the DP in div 1 E. I see the numbers get smaller fast so sizes dont need to be 2000 for each but i have no idea how to code it. I dont know Java and each other submission i have checked was using the other solution. Can you help a bit?\n• » » » »\n• » » » » » Thanks! Map is a cool thing i guess B)\n• » » » » » 10311089 Is Hasmap necessary? I couldnt make thıs code any faster? 
I only fail the 2000 2000 2000 input :(

• » »
Can you tell me about the proof of the LCM Challenge problem?

»
I can't understand the Div1 B idea. Could someone explain it to me please?

• » »
Someone?

• » » »
ok u.u

»
Does the A question have a solution that is not O(N^2)?

• » »
Question A can be done in O(n).

• » » »
Can you explain to me how to do it?

• » » » »
Use a set or an array to keep track of whether you have seen a character before.

»
The editorial includes almost no explanation at all for the Div2 E / Div1 C problem, except for saying that a suffix automaton may be used and that we should construct a string $t$ that is $(x + x)$ without its last char. Here is my approach for the problem using a suffix automaton. Note: the terminology used is in reference to this article on suffix automata.

We start by building the suffix automaton (SA) on $s$. Let $t$ be the string mentioned above for the query string $x$. As noted, a substring of length $|x|$ of $t$ is a rotation of $x$. First, we precompute a $dp$ table using the SA: $dp[i]$ = the number of terminal nodes reachable from the $i$-th node. This tells us how many suffixes can be attached to the current substring when we are at node $i$, which gives the number of times the current substring occurs as a substring of $s$. This is because a substring is just a prefix of some suffix, and $dp[i]$ tells us exactly how many suffixes can be attached to the current substring so that it becomes a prefix of a suffix of $s$.

Now, let's keep 2 pointers (indices) $i$ and $j$ in $t$: $j$ points to the next char in $t$ that needs to be concatenated to the current string, and $i$ says that we are considering the substring of $t$ starting at position $i$. To get all cyclic shifts, we iterate $i$ from $0$ to $(m-1)$. When we are at $i$, we have to extend our substring to include the char $t[j]$ and drop the char $t[i-1]$.

To drop the $(i-1)$st char, note the $minlen$ of the current node, using the fact that $minlen(cur) = len(link(cur)) + 1$. If $minlen(cur) \le len$, the substring $t[i..j-1]$ is found at the $cur$ node, and we do not need to move to any other node. If however $minlen(cur) > len$ (more precisely, $minlen(cur) == len + 1$), the substring $t[i..j-1]$ cannot be found at the current node, so we follow the suffix link to the node where it is found and set that as the new $cur$ node. After dropping $t[i-1]$, the string we have corresponds to $t[i..j-1]$, so its length is $len = (j - i)$.

Now we try to extend the current string to length $m = |x|$ (if possible). That is, while $(j - i) < m$ and there is a transition from state $cur$ on $t[j]$, we make the transition and set $cur$ to the node at the other end of the transition. After that, we check whether we have found a string of length $m$. If $(j - i) < m$, the string of length $m$ starting at $t[i]$ is not present in the SA (i.e. not in the string $s$), so we continue the procedure by incrementing $i$ to consider the next substring. If however $(j - i) == m$, we have found the string starting at $t[i]$ in $s$, so we add $dp[cur]$ to the answer.

There is one technical detail missing in the above explanation. Consider the case when a substring occurs more than once as a cyclic shift. For example, if $x =$ "$aa$", then $t[0..1] = t[1..2] =$ "$aa$", so the substring "$aa$" would get counted twice; however, we need every substring exactly once in the answer. To avoid this, note that for two $m$-length strings to be identical, we must be at the same state in the SA when considering them. This follows from the properties of the SA, since a state can correspond to at most one string of length $m$. So we may keep a set of visited states, and if the $cur$ state has already been visited, we do not add $dp[cur]$ to the answer, avoiding overcounting. Here is my solution using this idea: 51809558

»
18 months ago, # | ← Rev. 2 →
I developed the following intuition for Div1 A. Our final aim is to reduce the number of common factors between numbers that are as large as allowed, so as to get the largest LCM.

1) If n <= 3, we have no choice but to output 1, 2 (1 * 2), or 6 (1 * 2 * 3) respectively.

2) Now n can be either even or odd. We know that two consecutive odd numbers never have a common factor other than 1. So, in case n is odd, we can output n * (n - 1) * (n - 2). Clearly, n and n - 2 don't share any common factor here, nor does n - 1 share one with either of them (note also that n - 1 is even while the others are odd).

Now when n is even: if we proceed as in the odd case and output n * (n - 1) * (n - 2), we may be wrong, because n and n - 2 share the common factor 2. So should we output n * (n - 1) * (n - 3) instead? NO. What if n is divisible by 3 too? Then we would be including n and n - 3, which both share 3 as a common factor, and we would end up with a wrong answer! So there are two sub-cases:

2.1) n is divisible by 3. In that case we output (n - 1) * (n - 2) * (n - 3). Clearly n - 1 and n - 3 are odd and don't share any common factor, nor does n - 2, which is even.

2.2) n is not divisible by 3. In this case we can safely output n * (n - 1) * (n - 3)! I hope this helps.

»
In 235A LCM Challenge, (n-1)*(n-2)*(n-3) doesn't work when the input is 10...

• » »
That's because n*(n-1)*(n-3) gives a better solution when n is even but not divisible by 3.

»
In 235A, I managed to solve the problem as follows in O(1): If n = 1, ans = 1. If n = 2, ans = 2. If n is odd, ans = n * (n - 1) * (n - 2). If n is even and divisible by 3, ans = (n - 1) * (n - 2) * (n - 3).
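The walk described above can be sketched end-to-end. This is a from-scratch sketch, not the linked submission: `build_sam` is a textbook suffix-automaton construction, `cnt[v]` plays the role of the $dp$ table (it is the endpos size, i.e. the occurrence count of the strings in state $v$), and the query slides a window of length $m$ over $t = x + x$ minus the last char, deduplicating by visited state exactly as the comment suggests:

```python
def build_sam(s):
    # Textbook suffix automaton: each state has len, suffix link, transitions.
    sa = [{"len": 0, "link": -1, "next": {}, "clone": False}]
    last = 0
    for ch in s:
        cur = len(sa)
        sa.append({"len": sa[last]["len"] + 1, "link": -1, "next": {}, "clone": False})
        p = last
        while p != -1 and ch not in sa[p]["next"]:
            sa[p]["next"][ch] = cur
            p = sa[p]["link"]
        if p == -1:
            sa[cur]["link"] = 0
        else:
            q = sa[p]["next"][ch]
            if sa[p]["len"] + 1 == sa[q]["len"]:
                sa[cur]["link"] = q
            else:
                clone = len(sa)
                sa.append({"len": sa[p]["len"] + 1, "link": sa[q]["link"],
                           "next": dict(sa[q]["next"]), "clone": True})
                while p != -1 and sa[p]["next"].get(ch) == q:
                    sa[p]["next"][ch] = clone
                    p = sa[p]["link"]
                sa[q]["link"] = clone
                sa[cur]["link"] = clone
        last = cur
    return sa

def count_cyclic_occurrences(s, x):
    """Total occurrences in s of all *distinct* cyclic shifts of x."""
    sa = build_sam(s)
    # cnt[v] = endpos size of state v (occurrence count); clones start at 0,
    # then counts propagate up the suffix links in decreasing-len order.
    cnt = [0] * len(sa)
    for v in range(1, len(sa)):
        if not sa[v]["clone"]:
            cnt[v] = 1
    for v in sorted(range(1, len(sa)), key=lambda v: -sa[v]["len"]):
        cnt[sa[v]["link"]] += cnt[v]
    m = len(x)
    t = x + x[:-1]          # length-m windows of t are exactly the cyclic shifts
    v, l = 0, 0             # current state and current matched length
    seen = set()
    res = 0
    for ch in t:
        while v and ch not in sa[v]["next"]:
            v = sa[v]["link"]          # drop characters from the front
            l = sa[v]["len"]
        if ch in sa[v]["next"]:
            v = sa[v]["next"][ch]
            l += 1
        else:
            v, l = 0, 0                # ch does not occur in s at all
        if l > m:                      # keep only the last m matched characters
            while sa[sa[v]["link"]]["len"] >= m:
                v = sa[v]["link"]
            l = m
        if l == m and v not in seen:   # one state holds at most one length-m
            seen.add(v)                # string, so this deduplicates equal shifts
            res += cnt[v]
    return res
```

On the Codeforces 235C sample, `count_cyclic_occurrences("baabaabaaa", "baa")` returns 7 ("baa" occurs 3 times, "aab" 2 times, "aba" 2 times).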
If n is even but not divisible by 3, ans = n * (n — 1) * (n — 3). Here is my submission: 73660151\n• » » valdaarhun How this works when n is even?\n• » » » 5 months ago, # ^ | ← Rev. 2 →   If n is even, then you definitely can't do n*(n-1)*(n-2) because n and n-2 won't be coprime.So you can either do n*(n-1)*(n-3) or (n-1)*(n-2)*(n-3).Now, in both the above expressions, the n-1 and n-3 terms are common. So it would make sense to use the first expression, i.e. n*(n-1)*(n-3). But there is a catch. If n is a multiple of 3 then you can write n as 3a. But then n and n-3 will not be coprime anymore. In that case you would have to use the second expression, i.e. (n-1)*(n-2)*(n-3)\n• » » » » Thanks buddy :)\n » Please, anyone, provide me the code for 236B\n• » »\n » https://codeforces.com/contest/236/problem/Ahow to do this question without using set." ]
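The O(1) case analysis for 235A discussed above (small n; n odd; n even and divisible by 3; n even otherwise) translates directly into a short function. A sketch in Python for illustration (the linked submissions use other languages):

```python
def max_lcm_of_three(n):
    """Largest LCM of three (not necessarily distinct) positive integers <= n,
    via the parity / divisibility-by-3 case analysis from the thread."""
    if n <= 2:
        return n                          # lcm(1,1,1) = 1, lcm(1,1,2) = 2
    if n % 2 == 1:
        return n * (n - 1) * (n - 2)      # three consecutive, n odd: pairwise coprime
    if n % 3 != 0:
        return n * (n - 1) * (n - 3)      # n even: skip n-2 (shares the factor 2 with n)
    return (n - 1) * (n - 2) * (n - 3)    # n divisible by 6: drop n entirely
```

For instance, `max_lcm_of_three(9)` gives 9*8*7 = 504, while `max_lcm_of_three(10)` gives 10*9*7 = 630, the counterexample to (n-1)*(n-2)*(n-3) mentioned above.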
http://kleine.mat.uniroma3.it/mp_arc/e/93-194.latex
[ "\\catcode\\@=11 {\\count255=\\time\\divide\\count255 by 60 \\xdef\\hourmin{\\number\\count255} \\multiply\\count255 by-60\\advance\\count255 by\\time \\xdef\\hourmin{\\hourmin:\\ifnum\\count255<10 0\\fi\\the\\count255}} \\def\\ps@draft{\\let\\@mkboth\\@gobbletwo \\def\\@oddhead{} \\def\\@oddfoot {\\hbox to 7 cm{$Draft\\ version:\\ \\draftdate$ \\hfil} \\hskip -7cm\\hfil\\rm\\thepage \\hfil} \\def\\@evenhead{}\\let\\@evenfoot\\@oddfoot} \\catcode\\@=12 \\def\\draftdate{\\number\\month/\\number\\day/\\number\\year\\ \\ \\ \\hourmin } \\def\\draft{\\pagestyle{draft}\\thispagestyle{draft}} %\\documentstyle[12pt,ams]{article} \\documentstyle[12pt]{article} \\setlength{\\textheight}{8.9in} \\setlength{\\textwidth}{6.2in} \\topmargin= -0.7cm \\hoffset -1.5cm \\raggedbottom \\renewcommand{\\baselinestretch}{1.0} \\newcommand{\\BE}{\\begin{eqnarray}} \\newcommand{\\EN}{\\end{eqnarray}} \\newcommand{\\be}{\\begin{equation}} \\newcommand{\\en}{\\end{equation}} \\newcommand{\\non}{\\nonumber} \\newcommand{\\no}{\\noindent} \\newcommand{\\vs}{\\vspace} \\newcommand{\\hs}{\\hspace} \\newcommand{\\e}{\\'{e}} \\newcommand{\\D}{\\dagger} \\newcommand{\\ef}{\\{e}} \\newcommand{\\bC}{{\\bf C}} \\newcommand{\\p}{\\partial} \\newcommand{\\Bbb}{\\bf} \\newcommand{\\ha}{{1\\over 2}} \\newcommand{\\un}{\\underline} \\pagestyle{plain} \\input mssymb.tex \\begin{document} \\title{Stable Non-Gaussian Diffusive Profiles} \\author{J.Bricmont\\thanks {Supported by EC grant SC1-CT91-0695}\\\\UCL, Physique Th\\'eorique, Louvain-la-Neuve, Belgium\\and A.Kupiainen\\thanks{Supported by NSF grant DMS-8903041} \\\\Helsinki University, Mathematics Department,\\\\ Helsinki, Finland} \\date{} \\maketitle \\begin{abstract} We prove two stability results for the scale invariant solutions of the nonlinear heat equation $\\partial_t u=\\Delta u - |u|^{p-1}u$ with $10$, provided $1 \\gamma_p$, while, for $\\gamma= \\gamma_p$, it decays at infinity as \\be f_{\\gamma_p}(x) \\sim |x|^{\\frac{2}{p-1}-n} e^{- 
\\frac{x^2}{4}}. \\label{8} \\en We prove here that these solutions are stable in two senses: first, there exists a ball in a Banach space of initial data such that the corresponding solutions tend, in the appropriate norm, as $t\\rightarrow\\infty$, to (\\ref{3}). Secondly, any initial data satisfying a suitable positivity condition will give rise to a solution again tending to (\\ref{3}). More precisely, let $q>{2\\over p-1}$ and consider the Banach space $B$ of $L^\\infty$ functions $h$ equipped with the norm (with some abuse of notation!) \\be \\| h \\|_\\infty = {\\rm ess}\\sup_\\xi |h(\\xi) (1+|\\xi|^q)|. \\label{19} \\en We consider the initial data (taken at time 1 for later convenience) \\be u(x,1)=f_\\gamma(x)+h(x) \\label{89} \\en with $h\\in B$. We prove the \\vs{3mm} \\no {\\bf Theorem } {\\it Let $1 < p < 1 + \\frac{2}{n}$. There exist $\\varepsilon > 0, C < \\infty$ and $\\mu > 0$ such that, if the initial data $u(x,1)$ of} (\\ref{4}) {\\it is given by} (\\ref{89}) {\\it with $h\\in B$ and satisfies either $$\\|h\\|_\\infty \\leq \\varepsilon$$ or $$h(x)\\geq 0$$ (a.e.) then,} (\\ref{4}) {\\it has a unique classical solution and, for all $t$, $$\\| t^{\\frac{1}{p-1}} u(\\cdot t^{\\frac{1}{2}},t) - f_\\gamma(\\cdot) \\|_\\infty \\leq C t^{- \\mu}\\| h\\|_\\infty$$} \\vs{3mm} \\section{Proof} %\\setcounter{equation}{0} \\medskip Before going to the proof of the Theorem, we will briefly discuss the scale invariant solutions (\\ref{3}). These are given by $f_\\gamma(x) = \\phi_\\gamma(|x|)$ and $\\phi_\\gamma$ solves the ordinary differential equation \\be \\phi^{''} + \\left( \\frac{n-1}{\\eta} + \\frac{\\eta}{2} \\right) \\phi^{'} + \\frac{\\phi}{p-1} - \\phi^{p} = 0 \\label{6} \\en for $\\eta = |x| \\in [0, \\infty[$. The theory of positive solutions of (\\ref{6}) has been developped in \\cite{Br,Ga,KP1}. 
The main result is that, for any $p>1$, there exists smooth, everywhere positive solutions, $\\phi_\\gamma$, of (\\ref{6}) with $\\phi_\\gamma^{'}(0)=0$ and $\\phi_\\gamma(0) =\\gamma$ for $\\gamma$ larger than a certain critical value $\\gamma_p$ (but not too large). Actually, for $p < 1 + \\frac{2}{n}, \\gamma_p > 0$ while $\\gamma_p = 0$ for $p \\geq 1 + \\frac{2}{n}$. The decay at infinity of these solutions is given in (\\ref{7}, \\ref{8}). The existence of a critical $\\gamma_p$ can be understood intuitively by viewing (\\ref{6}) as Newton's equation for a particle of mass one, whose position\" as a function of time\" is $\\phi(\\eta)$. The potential is then $U(\\phi) = \\frac{\\phi^{2}}{2(p-1)} - \\frac{\\phi^{p+1}}{p+1}$ and the friction term\" $\\left( \\frac{n-1}{\\eta} + \\frac{\\eta}{2} \\right) \\phi^{'}$ depends on the time\" $\\eta$. Hence, if $\\phi_\\gamma^{'} (0) = 0$ and $\\phi_\\gamma(0)=\\gamma$ is large enough, the time it takes to approach zero is long and, by then, the friction term has become sufficiently strong to prevent overshooting\". However, as $p$ increases, the potential becomes flatter and one therefore expects $\\gamma_p$ to decrease with $p$. Given the initial data (\\ref{89}), it is convenient to rewrite (\\ref{4}) in terms of the variables $\\xi = xt^{- \\frac{1}{2}}$ and $\\tau = \\log t$; so, define $v(\\xi, \\tau)$ by: \\be u(x,t) = t^{- \\frac{1}{p-1}} (f_\\gamma (xt^{- \\frac{1}{2}}) + v (xt^{- \\frac{1}{2}}, \\log t)) \\label{90} \\en where now \\be v(\\xi,0) = h(\\xi). 
\\label{10} \\en Then, (\\ref{4}) is equivalent to the equation \\be \\partial_\\tau v = {\\cal L} v - \\left(|f_\\gamma+v|^{p-1}(f_\\gamma +v) - f_\\gamma^p - pf_\\gamma^{p-1} v \\right) \\equiv {\\cal L} v + N(v) \\label{123} \\en where we used the fact that (\\ref{3}) solves (\\ref{4}) and gathered the linear terms in $${\\cal L} = {\\cal L}_0 + V_\\gamma,$$ with \\be {\\cal L}_0=\\Delta + \\frac{\\xi}{2} \\cdot \\nabla + \\frac{1}{p-1}, \\label{11} \\en and \\be V_\\gamma(\\xi)=-pf_\\gamma^{p-1}(\\xi). \\label{124} \\en To prove that the solution $t^{- \\frac{1}{p-1}} (f_\\gamma (xt^{- \\frac{1}{2}}) )$ is stable means to find a class of initial data $v(\\xi,0)$ such that the corresponding solution of (\\ref{123}) goes to zero as $\\tau \\rightarrow \\infty$, in a suitable norm. The Theorem of Section 1 reads now in terms of $v$ as \\vs{3mm} \\no{\\bf Proposition 1.} {\\it With the assumptions of the Theorem,} (\\ref{123}), {\\it with initial data (11) ,has a unique classical solution and} $$\\|v(\\cdot,\\tau)\\|_\\infty\\leq Ce^{-\\mu\\tau}\\|h\\|_\\infty$$ \\vs{3mm} The main input in the proof is the following estimate on the semigroup $e^{\\tau{\\cal L}}$: \\vs{3mm} \\no{\\bf Proposition 2.} {\\it The operator $e^{\\tau{\\cal L}}$ is a bounded operator in the Banach space $B$, and its norm satisfies $$\\|e^{\\tau{\\cal L}} \\|\\leq Ce^{-\\mu\\tau}$$ for some $\\mu>0$, $C<\\infty$.} \\vs{3mm} There are two important ingredients in the proof of Proposition 2. The first is the fact that $e^{\\tau{\\cal L}}$ is a contraction in a suitable Hilbert space of rapidly decreasing functions. To see this, note first that ${\\cal L}_0$ is conjugate to the Schr\\\"odinger operator \\be e^{\\frac{\\xi^2}{8}} {\\cal L}_0 e^{- \\frac{\\xi^2}{8}} = \\Delta - \\frac{\\xi^2}{16} - \\frac{n}{4} + \\frac{1}{p-1} \\label{12} \\en i.e. the harmonic oscillator. 
Thus ${\\cal L}_0$ is self-adjoint on its domain ${\\cal D}({\\cal L}_0) \\subset L^2 ({\\bf R}^n, d\\mu)$, where $$d\\mu(\\xi)=e^{\\frac{\\xi^2}{4}}d\\xi.$$ ${\\cal L}_0$ has a pure point spectrum $\\{{1\\over p-1}-{n\\over 2} -{m\\over 2}\\;|\\; m=0,1,\\dots\\}$ and the largest eigenvalue ${1\\over p-1}-{n\\over 2}$ is {\\it positive} if $11+{2\\over n}$ case). Remarkably, it is possible to prove that ${\\cal L}< -E <0$ without a detailed study of the function $f_\\gamma$, but only using equation (\\ref{6}). We have the \\vs{3mm} \\no{\\bf Lemma 1.} {\\it The operator $e^{\\tau{\\cal L}}$ is a bounded operator in the Hilbert space $L^2 ({\\bf R}^n, d\\mu)$ and its norm satisfies $$\\|e^{\\tau{\\cal L}} \\|\\leq e^{-E\\tau}$$ for some $E>0$.} \\vs{3mm} \\no{\\bf Proof.} Since $V_\\gamma$ is bounded, $\\cal L$ is self-adjoint and, as for ${\\cal L}_0$, its resolvent is compact and, therefore, its spectrum is pure point. Let $-E_\\gamma$ be the largest eigenvalue. First note that $-E_\\gamma\\leq -E_{\\gamma_p}$. Indeed, this holds since $V_\\gamma\\leq V_{\\gamma_p}$, because $f_{\\gamma}\\geq f_{\\gamma_p}$, which in turn follows from the fact that $\\gamma_p$ is the smallest allowed value of $\\phi_\\gamma(0)=\\gamma$ in (\\ref{6}), and that two solutions of (\\ref{6}), both with initial conditions $\\phi_\\gamma'(0)=0$, will not cross. Hence it suffices to prove the claim for $\\gamma=\\gamma_p$. Let us write $E \\equiv E_{\\gamma_p}$. Next we note that, by the Feynman-Kac formula \\cite{Si}, $e^{\\tau{\\cal L}}$ has a strictly positive kernel; indeed, since $-C 0$, and any $n\\in \\Bbb N$, \\be |\\nabla_\\xi^n f_{\\gamma_p} (\\xi) | \\leq C(\\delta, n) e^{- (\\frac{1}{4} - \\delta) \\xi^2}. 
\\label{49} \\en This follows easily from (\\ref{8}) and the differential equation (\\ref{6}), and implies that $f_{\\gamma_p} \\in {\\cal D} ({\\cal L}_0)= {\\cal D} ({\\cal L})$.\\hfill $\\Box$ Notice that functions in $L^2 ({\\bf R}^n, d\\mu)$ have essentially a Gaussian decay at infinity, which is much faster than what is allowed in our Banach space $B$, see (\\ref{19}). This brings us to the other crucial ingredient in the proof of Proposition 2, which is that $e^{\\tau{\\cal L}_0}$ contracts functions in $B$ {\\it pointwise} for $\\xi$ large. This follows from the explicit formula (Mehler's formula \\cite{Si}): \\be (e^{\\tau{\\cal L}_0}) (\\xi,\\xi') = (4 \\pi (1-e^{-\\tau}))^{-\\frac{n}{2}} e^{\\tau \\left( \\frac{1}{p-1}- \\frac{n}{2}\\right)} \\exp \\left(- \\frac{|\\xi-e^{-\\tau/2}\\xi'|^2}{4(1-e^{-\\tau})} \\right) \\label{69} \\en Hence, if a function $v$ satisfies \\be |v(\\xi)| \\leq C(1+|\\xi|^q)^{-1}, \\label{13} \\en for some constant $C$, we have \\be |(e^{\\tau {\\cal L}_0} v) (\\xi)| \\leq C' e^{\\frac{\\tau}{p-1}} (1+|\\xi|^q e^{ \\frac{\\tau q}{2}})^{-1} \\label{14} \\en for $|\\xi|$ large enough (of order $\\sqrt \\tau$) and another constant $C'$. Hence, the operator $e^{\\tau {\\cal L}_0}$ contracts, for large $|\\xi|$ and large $\\tau$, any function that decays as in (\\ref{13}) with $q>\\frac{2}{p-1}$. By (\\ref{fk}), we see that ${\\cal L}$ behaves similarly. The idea of the proof of Proposition 2 is the following. For $|\\xi|$ small (\\ref{14}) seems to expand by $e^{\\tau\\over p-1}$: the potential $V$ is important in this region and we want to use the information we obtained in the Hilbert space, Lemma 1 (recall that these functions have rapid decay, so that this bound should be used to capture the contraction only in the small $\\xi$ region). For large $\\xi$ , we shall use (20). 
This small-large $\\xi$ interplay is however slightly subtle, and we need to resort to an inductive argument to control the large $\\tau$ behaviour in Proposition 2 (this is actually just the Renormalization Group idea applied to the linear problem). \\vs{3mm} \\no{\\bf Proof of Proposition 2}. It is convenient to introduce the characteristic functions $$\\chi_s = \\chi (|\\xi| \\leq \\rho)$$ $$\\chi_\\ell = \\chi (|\\xi| > \\rho)$$ where $\\rho$ will be chosen suitably below. The properties of ${\\cal L}$ that we need are summarized in the following \\vs{3mm} \\no {\\bf Lemma 2}. {\\it There exist constants $C<\\infty$, $E>0$, and $\\delta>0$, such that \\begin{enumerate} \\item[i)] For $g \\in B$, \\be \\| e^{\\tau \\cal L} g \\|_\\infty \\leq C e^{\\frac{\\tau}{p-1}} \\| g \\|_\\infty. \\label{33} \\en \\item[ii)] For $g \\in L^2 (\\Bbb R^n,d \\mu)$, \\be \\| e^{\\cal L}g \\|_\\infty \\leq C \\|g\\|_2, \\label{35} \\en where $\\| \\cdot\\|_2$ is the norm in $L^2 ({\\Bbb R}^n, d\\mu)$. \\item[iii)] For $g$ such that $\\chi_s g \\in L^2 (\\Bbb R^n, d \\mu)$, \\be \\| \\chi_\\ell e^{\\rho \\cal L} \\chi_s g \\|_\\infty \\leq e^{- \\frac{\\rho^2}{5}} \\| \\chi_s g \\|_2, \\label{36} \\en for $\\rho$ large enough. \\item[iv)] For $g \\in B$, \\be \\| \\chi_\\ell e^{\\rho \\cal L} g \\|_\\infty \\leq e^{- \\delta \\rho} \\| g \\|_\\infty, \\label{37} \\en for $\\rho$ large enough. \\end{enumerate} } \\vs{3mm} Let $\\|g\\|_\\infty =1$. Given Lemma 2, we set $\\tau_n=n\\rho$, and prove inductively that there exists $\\alpha > 0$ such that $v(\\tau_n)=e^{\\tau_n{\\cal L}}g$ satisfies, for $\\rho$ large, \\be \\| \\chi_s v (\\tau_n) \\|_2 + \\| \\chi_s v(\\tau_n)\\|_\\infty \\leq e^{\\frac{\\rho^2}{6}} e^{- \\alpha n}, \\label{39} \\en and \\be \\| \\chi_\\ell v (\\tau_n) \\|_\\infty \\leq e^{- \\alpha n}. \\label{40} \\en Proposition 2 follows from (\\ref{39},\\ref{40}), by taking $\\mu=\\frac{\\alpha}{\\rho}$ (for times not of the form $\\tau=n\\rho$, use (\\ref{33})). 
The bounds (\\ref{39},\\ref{40}) hold for $n=0$, for $\\rho$ large enough, using $\\|g\\|_\\infty =1$ and the obvious inequality \\be \\| \\chi_s g \\|_2 \\leq \\rho^{\\frac{1}{2}} e^{\\frac{\\rho^2}{8}} \\| g\\|_\\infty. \\label{41} \\en So, let us assume (\\ref{39}, \\ref{40}) for some $n \\geq 0$ and prove it for $n+1$. Let $v = v(\\tau_n)$ and write $$v = \\chi_s v + \\chi_\\ell v \\equiv v_s + v_\\ell.$$ We have from Lemma 1 and (25) \\be \\| e^{\\rho {\\cal L}} v_s \\|_2 \\leq e^{- \\rho E} e^{\\frac{\\rho^2}{6}} e^{- \\alpha n} \\label{42} \\en and, combining (\\ref{35}) and Lemma 1 \\be \\| e^{\\rho \\cal L} v_s \\|_\\infty \\leq C \\| e^{(\\rho-1)\\cal L} v_s \\|_2 \\leq C e^{- (\\rho-1) E} e^{\\frac{\\rho^2}{6}} e^{-\\alpha n}. \\label{43} \\en Finally, from (\\ref{33}) and (\\ref{40}), we have \\be \\| e^{\\rho \\cal L} v_\\ell \\|_\\infty \\leq C e^{\\frac{\\rho}{p-1}} e^{-\\alpha n} \\label{44} \\en and, from this and (\\ref{41}), we get \\be \\| \\chi_s e^{\\rho \\cal L} v_\\ell \\|_2 \\leq C \\rho^{\\frac{1}{2}} e^{\\frac{\\rho^2}{8}} e^{\\frac{\\rho}{p-1}} e^{- \\alpha n}. \\label{45} \\en Combining (\\ref{42}-\\ref{45}), one gets (\\ref{39}), with $n$ replaced by $n+1$ for $\\rho$ large enough and $\\alpha$ small. On the other hand, (\\ref{40}), with $n$ replaced by $n+1$, follows immediately from (\\ref{39},\\ref{40}) and (\\ref{36},\\ref{37}), taking $\\alpha<\\delta\\rho$.\\hfill $\\Box$ \\vs{3mm} We are left with the \\vs{3mm} \\no {\\bf Proof of Lemma 2}. Part (i) follows immediately from (\\ref{fk}) and (\\ref{14}). For (ii), we use Schwartz' inequality and the bound \\be \\sup_\\xi \\int | e^{\\cal L} (\\xi , \\xi') |^2 d \\xi' < \\infty, \\label{47} \\en which follows from (\\ref{fk}) and (\\ref{69}). 
For (iii) proceed as in (ii) by using Schwartz' inequality, but replace (\\ref{47}) by \\be \\sup_{|\\xi| > \\rho} \\left( \\int | e^{\\rho{\\cal L}} (\\xi,\\xi')|^2 \\chi (|\\xi'| \\leq \\rho) d \\xi' \\right)^{\\frac{1}{2}} \\leq e^{- \\frac{\\rho^2}{5}} \\label{48} \\en which again follows from (\\ref{fk}) and (\\ref{69}) (we can replace $\\frac{1}{5}$ in (\\ref{48}) by $\\frac{1}{4} - \\delta$ for any $\\delta >0$, if $\\rho$ is large enough). Finally, (iv) follows from (\\ref{14}) and $q>\\frac{2}{p-1}$. \\hfill $\\Box$ \\vs{3mm} \\no{\\bf Proof of Proposition 1}. The proof is straightforward, given Proposition 2. We consider the integral equation corresponding to (\\ref{123}): \\be v(\\tau) = e^{\\tau{\\cal L}} h + \\int^\\tau_{0} ds e^{(\\tau-s){\\cal L}} N (v(s))\\equiv {\\cal S}(v,\\tau) \\label{30} \\en with $v(\\tau) \\equiv v(\\cdot,\\tau)$. First, let $\\|h\\|_\\infty\\leq\\epsilon$; (\\ref{30}) is solved by the contraction mapping principle in the Banach space ${\\cal B}$ of functions $v=v(\\xi,\\tau)$, where $v(\\cdot,\\tau) \\in B$ for $\\tau \\in [0,\\infty)$, and where the norm is $$\\| v \\| = \\sup_{\\tau \\in [0,\\infty)} \\|v (\\cdot,\\tau) \\|_\\infty e^{\\tau\\mu}$$ We show that $\\cal S$ defined by (\\ref{30}) maps the ball ${\\cal B}_0 = \\{ v \\in {\\cal B} | \\; \\| v \\| \\leq C \\varepsilon \\}$ into itself, for a suitable constant $C$, and is a contraction there . This follows, since we get, using (\\ref{123}), $$|N(v)| \\leq C'|v|^{\\tilde p}$$ where $\\tilde p = \\min (2,p) >1$, and $C'$ is a constant; therefore \\be \\|N(v(s))\\|_\\infty\\leq C' \\|v(s)\\|_\\infty^{\\tilde p} \\leq C' \\epsilon^{\\tilde p}e^{-{\\tilde p}s\\mu} \\label{38} \\en and so, Proposition 2 gives the claim for $\\varepsilon$ small. The proof that ${\\cal S}$ is a contraction is similar. 
Finally, from the Feynman-Kac formula and the smoothness and boundedness of $V$ one deduces that $e^{\\tau{\\cal L}}(\\xi,\\xi')=e^{\\tau{\\cal L}_0}(\\xi,\\xi') K(\\xi,\\xi')$ where $K$ is smooth and bounded. The regularity of the kernel of $e^{\\tau{\\cal L}}$ (for short times, it behaves like the heat kernel), implies that the solution of (34) is actually the unique classical solution of (\\ref{123}) (for details of such arguments, see \\cite{bk3}). \\vs{3mm} Let now $h\\geq 0$. It is standard that equation (\\ref{4}) has unique solution for all times, with positive initial data such as ours \\cite{F}. So, equation (\\ref{123}) also has a unique solution for all times. Moreover, by a comparison inequality ($v=0$ is a solution of (\\ref{123})), we have $v(\\xi, \\tau)\\geq 0$ for all times. Thus, in (\\ref{30}), the second term is negative ((\\ref{123}) shows that $N(v) \\leq 0$, and (\\ref{fk}, 18) show that the kernel of $e^{\\tau{\\cal L}}$ is positive). Therefore, we have the pointwise inequality $$v(\\xi, \\tau)\\leq (e^{\\tau{\\cal L}}v(0))(\\xi)$$ and Proposition 2 proves the claim.\\hfill $\\Box$ \\section{ Extensions and concluding remarks} %\\setcounter{equation}{0} 1. In Theorems 1 and 2, we could use a norm defined by \\be \\| g \\|_\\infty = {\\rm ess}\\sup_\\xi |g(\\xi) w(\\xi)|. \\en where $w$ is any bounded positive function decaying at infinity faster than $|\\xi|^{-\\frac{2}{p-1}}$. Indeed, the norm (7) was used only in the proof of (20). However, we would not necessarily get exponential decay of $v$ as a function of $\\tau$. 2. In \\cite{Ga} and also in \\cite{KP2}, results similar to ours were obtained on the stability of the self-similar solutions for $p<1+ \\frac{2}{n}$ and $\\gamma=\\gamma_p$. However, our results apply to a ball in a Banach space (and to any $\\gamma$) while in \\cite{Ga} the initial data is assumed to satisfy a pointwise inequality (but not to be small). 
This is similar to the $h\\geq 0$ case in our theorem, but with another inequality. For the first part of our Theorem, $u(x,1)$, given by (8), does not even have to be positive. On the other hand, very general results on the stability of the self-similar solutions, decaying as in (5), were obtained in \\cite{KP1}, for $p>1+ \\frac{2}{n}$. Basically, it is shown there that any positive initial data decaying at infinity like $|\\xi|^{- \\frac{2}{p-1}}$ will give rise to a solution whose asymptotic behaviour in time is given by $f_\\gamma(\\xi)$. 3. With little extra work, the small $\\| v \\|$ part of the Theorem generalizes to more general nonlinearities, e.g. equations of the form \\be \\partial_t u = \\Delta u - |u|^{p-1} u + F (u, \\nabla u) \\label{4a} \\en whereby we need to add to (\\ref{123}) the term \\be \\tilde F_\\tau (v, \\nabla v) = e^{\\frac{p \\tau}{p-1}} F \\left(e^{- \\frac{\\tau}{p-1}} (f_\\gamma +v), e^{- \\frac{p+1}{2(p-1)} \\tau} \\nabla (f_\\gamma +v)\\right). \\en In order to define \"small\" initial data, we introduce the Banach space of $C^1$ functions with the norm: \\be \\| h \\| = \\| h \\|_\\infty + \\max_{1 \\leq i \\leq n} \\| \\partial_i h \\|_\\infty \\label{20} \\en where $\\| h \\|_\\infty$ is defined in (7). We assume that $F$ in (\\ref{4a}) is $C^1$ and satisfies: \\be |F(a,b)| + |a \\partial_a F (a,b)] + \\max_{1 \\leq i \\leq n} | b_i \\partial_{b_i} F (a,b)| \\leq \\lambda |a|^{q_1} \\left( \\max_{1 \\leq i \\leq n} |b_i| \\right)^{q_2} \\label{21} \\en for $|a|, |b_i| \\leq 1$, where \\be q_1 + \\frac{p+1}{2} q_2 > p \\label{22} \\en and $\\lambda$ is taken small. Using (40, 41), and the fact that $f_\\gamma$, $\\partial_i f_\\gamma$, belong to the space $B$, one gets $$\\|\\tilde F_s (v, \\nabla v)\\|_\\infty \\leq C\\lambda e^{-\\delta s}$$ for some $\\delta >0$, which is like the RHS of (35) for $\\lambda$ small. The last two terms in the LHS of (40) are used to prove that the $\\tilde F$ term defines a contraction. 
The only extra difficulty is to show that the solution of the fixed point problem (corresponding to (34)) is a classical solution. However, using the regularity of the kernel of $e^{\\tau{\\cal L}}$ discussed in the proof of Proposition 1, one shows first that $v$ is $C^{2-\\alpha}$ for any $\\alpha>0$ (its derivative is H\\\"older continuous of exponent $1-\\alpha$); then, one uses that information, the integral equation and the regularity of the kernel to show that $v$ is actually $C^2$. See \\cite{bk3} for details. 4. Finally we want to end with some comments on the RG picture behind these results. In \\cite{BKL} the RG map ${\\cal R}_L$, for $L>1$ was defined in a suitable Banach space of initial data $f(x)=u(x,1)$ and a suitable space of nonlinearities $F$ (taken to be holomorphic functions in \\cite{BKL}). ${\\cal R}_L$ consisted simply of solving (\\ref{2}) up to the finite time $L^2$ and performing a scale transformation on the solution and on $F$: \\be {\\cal R}_L(f,F)=(f_L,F_L) \\en where $f_L(x)=L^n u(Lx,L^2)$ and $F_L(u,v,w)=L^{2+n}F(L^{-n}u, L^{-n-1}v,L^{-n-2}w)$. This scaling assures the semigroup property ${\\cal R}_{L^k}={\\cal R}_L^k$ on a common domain. The limit (\\ref{1}) was then shown to follow from \\be {\\cal R}_{L}^k (f,F)\\rightarrow (Af^*,0) \\label{limit} \\en as $k\\rightarrow\\infty$, where $(Af^*,0)$ is a one parameter family of Gaussian ($f^*(x)=e^{-{x^2\\over 4}}$) fixed points for ${\\cal R}_L$. Universality, i.e. independence on $f$ and $F$ was then explained in terms of a dynamical systems picture: if $(f,F)$ lie on the stable manifold of the line of fixed points, all the corresponding equations and data have the same asymptotics. The Theorem of Section 1 is a statement on the stability of a family of {\\it non-Gaussian} fixed points of the RG (which are, moreover, non perturbative, i.e. we do not use any `$\\varepsilon$-expansion\"). 
Equation (1) is invariant under the scaling $u \\rightarrow u_L$ with $$u_L (x,t) = L^{\\frac{2}{p-1}} u (Lx,L^2t)$$ which suggests setting $$f_L (x) = L^{\\frac{2}{p-1}} u(Lx,L^2)$$ and a correspondig definition of $F_L$. Then ${\\cal R}_{L}(f_\\gamma,F^*)=(f_\\gamma,F^*)$ for $F^*(u)=-u^p$, and our Theorem constructs the stable manifold of this fixed point (in the $f$ variable; for $F$, see the previous remark). The reason one needs an iterative approach to the limit (\\ref{limit}), i.e. one controls the iteration of ${\\cal R}_L$ rather than ${\\cal R}_L^k$ directly, is the existence of the neutral direction: $A$ is a nontrivial function of the data and of the equation. Here there is no neutral direction (this is the content of Lemma 1), and no iteration is needed (although we needed to resort to one in the proof of Proposition 2! But that was for a different purpose, namely the analysis of the linear operator $e^{\\tau{\\cal L}}$). Combining our results with those of \\cite{BKL}, we obtain the following $p$-dependence of the asymptotics of the solution of $$\\partial_t u = \\partial^2 u - u^p,$$ for a suitable class of initial data. For $p > 1 + \\frac{2}{n}$, $$u(\\cdot t^{1/2},t) \\simeq At^{- \\frac{n}{2}} f^* (\\cdot)$$ where $f^*(\\xi)=e^{-\\xi^2/4}$ is independent of $p$; the prefactor $A$ depends on $p$ and on the initial data. We have $$\\int |u(x,t)|dx= {\\cal O}(1)$$ as $t \\rightarrow \\infty.$ For $p = 1 + \\frac{2}{n}$, $$u(\\cdot t^{1/2},t) \\simeq (A t \\log t)^{- \\frac{1}{2}} f^*(\\cdot),$$ i.e. it is as before, but with a logarithmic correction and $$\\int |u(x,t)|dx = {\\cal O} ((\\log t)^{- 1/2}).$$ For \\$1" ]
http://ned.ipac.caltech.edu/level5/March03/Teerikorpi/Teerikorpi2_2.html
[ "Annu. Rev. Astron. Astrophys. 1997. 35: 101-136 Copyright © 1997 by Annual Reviews. All rights reserved

2.2. The Classical Malmquist Bias

In his work, \"A study of the stars of spectral type A,\" Malmquist (1920) investigated how to derive the luminosity function of stars from their proper motions, provided that it is gaussian and one knows the distribution of apparent magnitudes up to some limiting magnitude. This led Malmquist to investigate the question of what is the average value (and other moments) of the quantity R, or the reduced distance, as earlier introduced by Charlier:

[equation image not reproduced] (2)

Malmquist made three assumptions: 1. There is no absorption in space. 2. The frequency function of the absolute magnitudes is gaussian (Mo, σ). 3. This function is the same at all distances. The third assumption is the principle of uniformity as implied in Lundmark's (1946) definition of distance indicators.

Using the fundamental equation of stellar statistics, Malmquist derived <R^n> and showed that it may be expressed in terms of the luminosity function constants Mo and σ and the distribution a(m) of apparent magnitudes, connected with the stellar space density law. Especially interesting for Malmquist was the case n = -1, or the mean value of the \"reduced parallax,\" that appears in the analysis of proper motions. However, for distance determination, the case n = 1 is relevant because it allows one to calculate, from the mean value of the reduced distance, the average value of the distance <r> for the stars that have their apparent magnitude in the range m ± 1/2 dm or their distance modulus µ = m - Mo in the range µ ± 1/2 dµ. The result is, written here directly in terms of the distance modulus distribution N(µ) instead of a(m),

[equation image not reproduced] (3)

where b = 0.2 · ln 10. This equation is encountered in connection with the general Malmquist correction in Section 6. 
Naturally, in Malmquist's paper one also finds his formula for the mean value of M for a given apparent magnitude m:

[equation image not reproduced] (4)

The term including the distribution of apparent magnitudes (or distance moduli in Equation 3) reduces to a simple form when one assumes that the space density distribution d(r) ∝ r^(-α):

[equation image not reproduced] (5)

With α = 0, one finally obtains the celebrated Malmquist's formula valid for a uniform space distribution, <M>m = Mo - 1.382 σ²:

(6)

Hubble (1936) used Malmquist's formula (Equation 6) when, from the brightest stars of field galaxies (and from the magnitudes of those galaxies), he derived the value of the Hubble constant. Hubble derived from a local calibrator sample the average (volume-limited) absolute photographic magnitude and its dispersion for the brightest stars. As \"the large-scale distribution of nebulae and, consequently, of brightest stars is approximately uniform,\" he derived the expected value for the mean absolute magnitude of the brightest stars <Ms> for a fixed apparent magnitude. His field galaxies were selected, Hubble maintained, on the basis of the apparent magnitudes of the brightest stars, which justified the calculation and use of <Ms>. In the end, he compared the mean apparent magnitudes of the brightest stars in the sample galaxies with <Ms>, calculated the average distance <r>, and derived the value of the Hubble constant (526 km/s/Mpc). Hence, it is important to recognize that this old value of Ho, canonical for some time, already includes an attempt to correct for the Malmquist bias. Also, it illustrates the role of assumptions in this type of correction (what is the space density law, what is the mode of selection of the sample, etc)." ]
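Equation (6), the classical Malmquist correction <M>m = Mo − 1.382σ² (where 1.382 = 0.6 ln 10), can be checked numerically. The sketch below is our illustration, not from the review itself: for a Gaussian luminosity function and a uniform space density, the conditional distribution of M at fixed apparent magnitude m is proportional to φ(M)·10^(0.6(m−M)), and the m-dependent factor cancels in the normalized mean, so m drops out entirely.

```python
import math

def malmquist_mean_M(M0, sigma, n=20001, width=10.0):
    """Mean absolute magnitude at fixed apparent magnitude m, for a
    Gaussian luminosity function phi(M) ~ N(M0, sigma) and a uniform
    space density of sources.

    At fixed m, P(M | m) is proportional to phi(M) * 10**(0.6*(m - M));
    the factor 10**(0.6*m) cancels in the normalized mean.
    """
    lo, hi = M0 - width * sigma, M0 + width * sigma
    dM = (hi - lo) / (n - 1)
    num = den = 0.0
    for i in range(n):
        M = lo + i * dM
        phi = math.exp(-0.5 * ((M - M0) / sigma) ** 2)   # Gaussian LF (unnormalized)
        w = phi * 10.0 ** (-0.6 * M)                     # volume-element selection weight
        num += M * w
        den += w
    return num / den

M0, sigma = 0.0, 0.5
mean_M = malmquist_mean_M(M0, sigma)
predicted = M0 - 0.6 * math.log(10) * sigma ** 2   # Malmquist's formula (Eq. 6)
print(round(mean_M, 4), round(predicted, 4))  # both ≈ -0.3454 for these parameters
```

The agreement is exact for a Gaussian luminosity function, since an exponential tilt of a Gaussian shifts its mean by −σ² times the tilt rate.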
https://koreauniv.pure.elsevier.com/en/publications/lower-functions-for-processes-with-stationary-independent-increme
[ "# Lower functions for processes with stationary independent increments

Research output: Contribution to journal › Article

10 Citations (Scopus)

### Abstract

Let {Xt} be a R1-valued process with stationary independent increments and {Mathematical expression}. In this paper we find a sufficient condition for there to exist a nonnegative and nondecreasing function h(t) such that lim inf At/h(t) = C a.s. as t → 0 and t → ∞, for some positive finite constant C, when h(t) takes a particular form. Two analytic conditions are also considered as applications.

Author: In-Suk Wee
Original language: English
Pages: 551-566 (16 pages)
Journal: Probability Theory and Related Fields, Vol. 77, No. 4
ISSN: 0178-8051
Publisher: Springer New York
DOI: https://doi.org/10.1007/BF00959617
Published: 1988 Dec 1

### ASJC Scopus subject areas

• Mathematics(all)
• Analysis
• Statistics and Probability" ]
http://www.bromwellforge.com/kfk2du/30pck.php?id=dcf9dd-defective-eigenvalue-3x3
[ "Let A be an n×n matrix and let λ be an eigenvalue of A.

(a) The algebraic multiplicity, m, of λ is the multiplicity of λ as a root of the characteristic polynomial det(A − λI) = 0; the eigenvalues of A are exactly the roots of this equation.

(b) The geometric multiplicity, mg, of λ is dim null(A − λI), i.e. the number of linearly independent eigenvectors associated with λ. An eigenvector is a nonzero vector whose direction is unchanged by the associated linear transformation; every eigenvector spans a one-dimensional subspace of the eigenspace, which is the union of the zero vector and the set of all eigenvectors corresponding to the eigenvalue.

An eigenvalue is said to be defective when its geometric multiplicity is strictly less than its algebraic multiplicity, and a matrix with a defective eigenvalue is itself called defective (and is therefore not diagonalizable). Whether a matrix with a repeated eigenvalue can be diagonalised depends on its eigenvectors; any n×n matrix with n distinct eigenvalues is non-defective and hence diagonalizable. In particular, a real matrix with distinct complex eigenvalues is diagonalizable over the complex numbers.

Example: the matrix A = [[1, 1], [0, 1]] is defective.

1. Its only eigenvalue is λ = 1, with algebraic multiplicity 2.
2. A − I = [[0, 1], [0, 0]] has a one-dimensional null space spanned by the vector (1, 0).
3. So there is a single independent eigenvector v = (1, 0), and the geometric multiplicity is 1 < 2.
4. We could use a generalized eigenvector u with (A − I)u = v to complete a basis; one may take u = (0, 1).
5. Notice that (A − I)u = v and (A − I)²u = 0.

This is exactly the situation met when solving linear systems of ODEs with a repeated eigenvalue (improper node): a second-order system needs two independent solutions, and when only one eigenvector exists, the second linearly independent solution is built from the generalized eigenvector. For instance, for one repeated eigenvalue λ = 4 the two equations of the eigenvector system are multiples of each other, so we can set x = t and get y = 3t: a single one-parameter family of eigenvectors.

A repeated eigenvalue need not be defective: a matrix with characteristic polynomial P(λ) = (λ + 2)² has the single eigenvalue λ = −2 with algebraic multiplicity 2, and it is diagonalizable precisely when null(A + 2I) is two-dimensional. In general, the sum of the algebraic multiplicities of all eigenvalues equals the degree of the characteristic polynomial. Two further standard facts: for an n×n matrix with eigenvalues λ1, …, λn (counted with multiplicity), det(A) is the product of the eigenvalues and tr(A) is their sum." ]
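A small numerical illustration (ours; the matrix and helper names are hypothetical, and plain Gaussian elimination stands in for a library routine): the geometric multiplicity of λ is n − rank(A − λI), so a defective eigenvalue shows up as a null space smaller than its algebraic multiplicity.

```python
def rank(mat, tol=1e-12):
    """Rank of a small real matrix via Gaussian elimination with partial pivoting."""
    m = [row[:] for row in mat]
    rows, cols = len(m), len(m[0])
    r = 0
    for c in range(cols):
        pivot = max(range(r, rows), key=lambda i: abs(m[i][c]), default=None)
        if pivot is None or abs(m[pivot][c]) < tol:
            continue                      # no usable pivot in this column
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(r + 1, rows):
            f = m[i][c] / m[r][c]
            for j in range(c, cols):
                m[i][j] -= f * m[r][j]
        r += 1
    return r

def geometric_multiplicity(A, lam):
    """dim null(A - lam*I) = n - rank(A - lam*I)."""
    n = len(A)
    B = [[A[i][j] - (lam if i == j else 0.0) for j in range(n)] for i in range(n)]
    return n - rank(B)

# Upper-triangular, so the eigenvalues are the diagonal entries: 2, 2, 3.
# lambda = 2 has algebraic multiplicity 2 ...
A = [[2.0, 1.0, 0.0],
     [0.0, 2.0, 0.0],
     [0.0, 0.0, 3.0]]
print(geometric_multiplicity(A, 2.0))  # ... but only one independent eigenvector: defective
print(geometric_multiplicity(A, 3.0))  # simple eigenvalue: multiplicities agree
```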
https://speedyguy17.info/wiki/index.php?title=Compare_Two_Integers
[ "# Compare Two Integers

## Introduction

You are given two very long integers a, b (leading zeroes are allowed). You should check which number, a or b, is greater, or determine that they are equal.

The input size is very large, so don't read symbols one by one; read a whole line or token instead.

As input/output can reach a huge size, it is recommended to use fast input/output methods: for example, prefer scanf/printf over cin/cout in C++, and BufferedReader/PrintWriter over Scanner/System.out in Java. In Python 2, use the function raw_input() instead of input().

## Solutions

### Idea

This problem cannot be solved simply by calling "parseInt" and compareTo to return the greater integer, because the numbers can be too big for Java to process as an Integer. So I went about solving the problem in the simplest way I could think of: comparing the lengths of the two integers. Of course, this is done after both integers are processed to remove any leading zeroes, so that the length test does not count leading zeroes as significant digits. If one (zero-stripped) number is longer than the other, it is automatically larger in value. If the two integers are the same length, I iterate through each digit and find the first point of difference. 
Because I start at the largest digit, the first point of difference will yield which integer is larger, without having to consider any subsequent digits (they all carry smaller place value than the one currently being compared).

### Runtime

Worst case runtime is O(n), where n is the length of the longer integer.

### Code

#### Solution - Java

```
import java.util.Scanner;

/**
 * @author Created by harrisonsuh
 */
public class CompareTwoIntegers {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        String a = in.next();
        String b = in.next();
        // remove any leading zeroes from first int
        for (int i = 0; i < a.length(); i++) {
            if (a.charAt(i) != '0' || i + 1 == a.length()) {
                a = a.substring(i);
                break;
            }
        }
        // remove any leading zeroes from second int
        for (int i = 0; i < b.length(); i++) {
            if (b.charAt(i) != '0' || i + 1 == b.length()) {
                b = b.substring(i);
                break;
            }
        }
        // compare lengths of the two integers
        if (a.length() < b.length()) {
            System.out.println('<');
        } else if (a.length() > b.length()) {
            System.out.println('>');
        } else {
            // same length: scan digits and report the first point of difference
            for (int i = 0; i < a.length(); i++) {
                if (a.charAt(i) > b.charAt(i)) {
                    System.out.println('>');
                    return;
                }
                if (a.charAt(i) < b.charAt(i)) {
                    System.out.println('<');
                    return;
                }
            }
            // all digits are equal
            System.out.println('=');
        }
    }
}
```" ]
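For comparison, the same strip-zeroes, compare-lengths, then scan-digits algorithm is compact in Python (an illustrative re-implementation, not part of the wiki solution; it relies on the fact that for equal-length digit strings, lexicographic order agrees with numeric order):

```python
def compare_big(a: str, b: str) -> str:
    """Return '<', '>' or '=' for two non-negative integers given as strings."""
    # strip leading zeroes, but keep at least one digit
    a = a.lstrip('0') or '0'
    b = b.lstrip('0') or '0'
    if len(a) != len(b):                 # longer zero-stripped number wins
        return '<' if len(a) < len(b) else '>'
    if a == b:
        return '='
    return '<' if a < b else '>'         # same length: lexicographic == numeric

print(compare_big('00123', '9'))   # '>'
print(compare_big('0', '0000'))    # '='
```

Python's built-in integers are arbitrary precision, so `int(a)` and a direct comparison would also work; the string version just avoids the conversion cost on very long inputs.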
# PHP Function to get MP3 duration

Is there any PHP function that will give me the MP3 duration? I looked at the ID3 functions, but I don't see anything there for duration. Apart from that, ID3 is a kind of tag which will not be present in all MP3 files, so relying on it would not make sense.

This should work for you, notice the getduration function: http://www.zedwood.com/article/127/php-calculate-duration-of-mp3

• OK, works with PHP5 and not PHP4 so not sure you could get much more 'modern' Aug 21, 2012 at 11:04
• works fine, but sometimes it returns: Severity: Warning --> fseek(): stream does not support seeking Nov 24, 2016 at 2:29

Install getid3, but if you only need duration, you can delete all but these modules:

• module.audio.mp3.php
• module.tag.id3v1.php
• module.tag.apetag.php
• module.tag.id3v2.php

Access the duration with code like this:

```
$getID3 = new getID3;
$ThisFileInfo = $getID3->analyze($pathName);
$len = @$ThisFileInfo['playtime_string']; // playtime in minutes:seconds, formatted string
```

Get it at Sourceforge

• There's even a Drupal module for this drupal.org/project/getid3 and the documentation on getid3.sourceforge.net is great! Jan 15, 2015 at 17:24
• WordPress (4.7.5) includes version 1.9.9-20141121 of getID3 in core, #fyi. Jul 10, 2017 at 1:18

I spent a lot of time on this; without getID3 (http://getid3.sourceforge.net/) it is not possible to get the duration of an audio file.

1) Download getID3:

```
https://github.com/JamesHeinrich/getID3/archive/master.zip
```

2) Try this below code:

```
<?php
include("getid3/getid3.php");
$filename = 'bcd4ecc6bf521da9b9a2d8b9616d1505.wav';
$getID3 = new getID3;
$file = $getID3->analyze($filename);
$playtime_seconds = $file['playtime_seconds'];
echo gmdate("H:i:s", $playtime_seconds);
?>
```

You can get the duration of an mp3 or many other audio/video files by using ffmpeg.

1. Install ffmpeg on your server.
2. Make sure that PHP's shell_exec is not restricted in your PHP configuration.

```
// Discriminate only the audio/video files you want
if (preg_match('/[^?#]+\.(?:wma|mp3|wav|mp4)/', strtolower($file))) {
    $filepath = /* your file path */;
    // execute ffmpeg from the shell and grab the duration from its output
    $result = shell_exec("ffmpeg -i ".$filepath.' 2>&1 | grep -o \'Duration: [0-9:.]*\'');
    $duration = str_replace('Duration: ', '', $result); // 00:05:03.25

    // get the duration in milliseconds
    $timeArr = preg_split('/:/', str_replace('s', '', $duration));
    $t = $this->_times[$file] = (isset($timeArr[2])
        ? $timeArr[2]*1 + $timeArr[1]*60 + $timeArr[0]*60*60
        : $timeArr[1] + $timeArr[0]*60) * 1000;
}
```

• Seems a good thinking style for those unseekable remote URLs. Jun 2, 2018 at 14:30
• This is a good answer for me because it doesn't need to load whole file to get the duration. And can be used for remote file. Jul 22, 2018 at 19:56

```
<?php
class MP3File
{
    protected $filename;
    public function __construct($filename)
    {
        $this->filename = $filename;
    }

    public static function formatTime($duration) //as hh:mm:ss
    {
        //return sprintf("%d:%02d", $duration/60, $duration%60);
        $hours = floor($duration / 3600);
        $minutes = floor(($duration - ($hours * 3600)) / 60);
        $seconds = $duration - ($hours * 3600) - ($minutes * 60);
        return sprintf("%02d:%02d:%02d", $hours, $minutes, $seconds);
    }

    //Read first mp3 frame only... use for CBR constant bit rate MP3s
    public function getDurationEstimate()
    {
        return $this->getDuration($use_cbr_estimate=true);
    }

    //Read entire file, frame by frame... ie: Variable Bit Rate (VBR)
    public function getDuration($use_cbr_estimate=false)
    {
        $fd = fopen($this->filename, "rb");

        $duration = 0;
        $block = fread($fd, 100);
        $offset = $this->skipID3v2Tag($block);
        fseek($fd, $offset, SEEK_SET);
        while (!feof($fd))
        {
            $block = fread($fd, 10);
            if (strlen($block) < 10) { break; }
            //looking for 1111 1111 111 (frame synchronization bits)
            else if ($block[0] == "\xff" && (ord($block[1]) & 0xe0))
            {
                $info = self::parseFrameHeader(substr($block, 0, 4));
                if (empty($info['Framesize'])) { return $duration; } //some corrupt mp3 files
                fseek($fd, $info['Framesize'] - 10, SEEK_CUR);
                $duration += ($info['Samples'] / $info['Sampling Rate']);
            }
            else if (substr($block, 0, 3) == 'TAG')
            {
                fseek($fd, 128 - 10, SEEK_CUR); //skip over id3v1 tag size
            }
            else
            {
                fseek($fd, -9, SEEK_CUR);
            }
            if ($use_cbr_estimate && !empty($info))
            {
                return $this->estimateDuration($info['Bitrate'], $offset);
            }
        }
        return round($duration);
    }

    private function estimateDuration($bitrate, $offset)
    {
        $kbps = ($bitrate * 1000) / 8;
        $datasize = filesize($this->filename) - $offset;
        return round($datasize / $kbps);
    }

    private function skipID3v2Tag(&$block)
    {
        if (substr($block, 0, 3) == "ID3")
        {
            $id3v2_major_version = ord($block[3]);
            $id3v2_minor_version = ord($block[4]);
            $id3v2_flags = ord($block[5]);
            $flag_unsynchronisation = $id3v2_flags & 0x80 ? 1 : 0;
            $flag_extended_header   = $id3v2_flags & 0x40 ? 1 : 0;
            $flag_experimental_ind  = $id3v2_flags & 0x20 ? 1 : 0;
            $flag_footer_present    = $id3v2_flags & 0x10 ? 1 : 0;
            $z0 = ord($block[6]);
            $z1 = ord($block[7]);
            $z2 = ord($block[8]);
            $z3 = ord($block[9]);
            if ((($z0 & 0x80) == 0) && (($z1 & 0x80) == 0) && (($z2 & 0x80) == 0) && (($z3 & 0x80) == 0))
            {
                $header_size = 10;
                $tag_size = (($z0 & 0x7f) * 2097152) + (($z1 & 0x7f) * 16384) + (($z2 & 0x7f) * 128) + ($z3 & 0x7f);
                $footer_size = $flag_footer_present ? 10 : 0;
                return $header_size + $tag_size + $footer_size; //bytes to skip
            }
        }
        return 0;
    }

    public static function parseFrameHeader($fourbytes)
    {
        static $versions = array(
            0x0=>'2.5', 0x1=>'x', 0x2=>'2', 0x3=>'1', // x=>'reserved'
        );
        static $layers = array(
            0x0=>'x', 0x1=>'3', 0x2=>'2', 0x3=>'1', // x=>'reserved'
        );
        static $bitrates = array(
            'V1L1'=>array(0,32,64,96,128,160,192,224,256,288,320,352,384,416,448),
            'V1L2'=>array(0,32,48,56, 64, 80, 96,112,128,160,192,224,256,320,384),
            'V1L3'=>array(0,32,40,48, 56, 64, 80, 96,112,128,160,192,224,256,320),
            'V2L1'=>array(0,32,48,56, 64, 80, 96,112,128,144,160,176,192,224,256),
            'V2L2'=>array(0, 8,16,24, 32, 40, 48, 56, 64, 80, 96,112,128,144,160),
            'V2L3'=>array(0, 8,16,24, 32, 40, 48, 56, 64, 80, 96,112,128,144,160),
        );
        static $sample_rates = array(
            '1'   => array(44100,48000,32000),
            '2'   => array(22050,24000,16000),
            '2.5' => array(11025,12000, 8000),
        );
        static $samples = array(
            1 => array(1 => 384, 2 => 1152, 3 => 1152,), //MPEGv1,     Layers 1,2,3
            2 => array(1 => 384, 2 => 1152, 3 =>  576,), //MPEGv2/2.5, Layers 1,2,3
        );
        //$b0 = ord($fourbytes[0]); //will always be 0xff
        $b1 = ord($fourbytes[1]);
        $b2 = ord($fourbytes[2]);
        $b3 = ord($fourbytes[3]);

        $version_bits = ($b1 & 0x18) >> 3;
        $version = $versions[$version_bits];
        $simple_version = ($version == '2.5' ? 2 : $version);

        $layer_bits = ($b1 & 0x06) >> 1;
        $layer = $layers[$layer_bits];

        $protection_bit = ($b1 & 0x01);
        $bitrate_key = sprintf('V%dL%d', $simple_version, $layer);
        $bitrate_idx = ($b2 & 0xf0) >> 4;
        $bitrate = isset($bitrates[$bitrate_key][$bitrate_idx]) ? $bitrates[$bitrate_key][$bitrate_idx] : 0;

        $sample_rate_idx = ($b2 & 0x0c) >> 2; //0xc => b1100
        $sample_rate = isset($sample_rates[$version][$sample_rate_idx]) ? $sample_rates[$version][$sample_rate_idx] : 0;
        $padding_bit = ($b2 & 0x02) >> 1;
        $private_bit = ($b2 & 0x01);
        $channel_mode_bits = ($b3 & 0xc0) >> 6;
        $mode_extension_bits = ($b3 & 0x30) >> 4;
        $copyright_bit = ($b3 & 0x08) >> 3;
        $original_bit = ($b3 & 0x04) >> 2;
        $emphasis = ($b3 & 0x03);

        $info = array();
        $info['Version'] = $version; //MPEGVersion
        $info['Layer'] = $layer;
        //$info['Protection Bit'] = $protection_bit; //0=> protected by 2 byte CRC, 1=>not protected
        $info['Bitrate'] = $bitrate;
        $info['Sampling Rate'] = $sample_rate;
        $info['Framesize'] = self::framesize($layer, $bitrate, $sample_rate, $padding_bit);
        $info['Samples'] = $samples[$simple_version][$layer];
        return $info;
    }

    private static function framesize($layer, $bitrate, $sample_rate, $padding_bit)
    {
        if ($layer == 1)
            return intval(((12 * $bitrate * 1000 / $sample_rate) + $padding_bit) * 4);
        else //layer 2, 3
            return intval(((144 * $bitrate * 1000) / $sample_rate) + $padding_bit);
    }
}
?>
<?php
$mp3file = new MP3File("mymusicfile.mp3");
$duration1 = $mp3file->getDurationEstimate(); //(faster) for CBR only
$duration2 = $mp3file->getDuration(); //(slower) for VBR (or CBR)
echo "duration: $duration1 seconds"."\n";
?>
```

There is no native php function to do this.

Depending on your server environment, you may use a tool such as MP3Info.

```
$length = shell_exec('mp3info -p "%S" sample.mp3'); // total time in seconds
```

The MP3 length is not stored anywhere (in the "plain" MP3 format), since MP3 is designed to be "split" into frames and those frames will remain playable.

http://mpgedit.org/mpgedit/mpeg_format/mpeghdr.htm

If you have no ID tag on which to rely, what you would need to do (there are tools and PHP classes that do this) is to read the whole MP3 file and sum the durations of each frame.

```
$getID3 = new getID3;
$ThisFileInfo = $getID3->analyze($pathName);

// playtime in minutes:seconds, formatted string
$len = @$ThisFileInfo['playtime_string'];

// don't get playtime_string, but get playtime_seconds
$len = @$ThisFileInfo['playtime_seconds']*1000; //*1000 to calculate milliseconds
```

I hope this helps you.

Finally, I developed a solution with my own calculations. This solution works best for mp3 and WAV file formats; however, minor precision variations are expected. The solution is in PHP. I took a little clue from the WAV format.

```
function calculateFileSize($file) {

    $ratio = 16000; //bytespersec

    if (!$file) {
        exit("Verify file name and its path");
    }

    $file_size = filesize($file);

    if (!$file_size)
        exit("Verify file, something wrong with your file");

    $duration = ($file_size / $ratio);
    $minutes = floor($duration / 60);
    $seconds = $duration - ($minutes * 60);
    $seconds = round($seconds);
    echo "$minutes:$seconds minutes";
}

$file = 'apple-classic.mp3'; //Enter File Name mp3/wav
calculateFileSize($file);
```

As earlier, I provided a solution for both mp3 and WAV files. Now this solution is specifically for WAV files only, with more precision but longer evaluation time than the earlier solution.

```
function calculateWavDuration($file) {

    $fp = fopen($file, 'r');

    if (fread($fp, 4) == "RIFF") {

        fseek($fp, 20);
        $raw_header = fread($fp, 16);
        $header = unpack('vtype/vchannels/Vsamplerate/Vbytespersec/valignment/vbits', $raw_header);
        $pos = ftell($fp);

        while (fread($fp, 4) != "data" && !feof($fp)) {
            $pos++;
            fseek($fp, $pos);
        }

        $raw_header = fread($fp, 4);
        $data = unpack('Vdatasize', $raw_header);
        $sec = $data['datasize'] / $header['bytespersec'];
        $minutes = intval(($sec / 60) % 60);
        $seconds = intval($sec % 60);
        echo "$minutes:$seconds minutes";
    }
}

$file = '1.wav'; //Enter File wav
calculateWavDuration($file);
```

If you have FFMpeg installed, getting the duration is quite simple with FFProbe:

```
$filepath = 'example.mp3';
$ffprobe = \FFMpeg\FFProbe::create();
$duration = $ffprobe->format($filepath)->get('duration');

echo gmdate('H:i:s', $duration);
```
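The CBR estimate used by `estimateDuration` in the class above is plain arithmetic: audio bytes divided by bytes per second. A quick Python sketch of the same calculation (the function name and the example numbers are ours, chosen for illustration):

```python
def estimate_cbr_duration(file_size_bytes: int, tag_offset: int, bitrate_kbps: int) -> int:
    """Estimate the duration of a constant-bitrate MP3 in whole seconds."""
    # bytes of actual audio data, excluding the leading ID3v2 tag
    data_size = file_size_bytes - tag_offset
    # a bitrate of N kbps means N*1000 bits, i.e. N*1000/8 bytes, per second
    bytes_per_second = bitrate_kbps * 1000 / 8
    return round(data_size / bytes_per_second)

# e.g. a 4,000,000-byte file with a 2,048-byte ID3v2 tag at 128 kbps
print(estimate_cbr_duration(4_000_000, 2048, 128))  # 250 seconds
```

This is only accurate for constant-bitrate files; for VBR, summing per-frame durations (as `getDuration` does) is required.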
# Modern Origins of Modal Logic

First published Tue Nov 16, 2010; substantive revision Mon May 8, 2017

Modal logic can be viewed broadly as the logic of different sorts of modalities, or modes of truth: alethic (“necessarily”), epistemic (“it is known that”), deontic (“it ought to be the case that”), or temporal (“it has been the case that”) among others. Common logical features of these operators justify the common label. In the strict sense however, the term “modal logic” is reserved for the logic of the alethic modalities, as opposed for example to temporal or deontic logic. From a merely technical point of view, any logic with non-truth-functional operators, including first-order logic, can be regarded as a modal logic: in this perspective the quantifiers too can be regarded as modal operators (as in Montague 1960). Nonetheless, we follow the traditional understanding of modal logics as not including full-fledged first-order logic. In this perspective it is the modal operators that can be regarded as restricted quantifiers, ranging over special entities like possible worlds or temporal instants. Arthur Prior was one of the first philosophers/logicians to emphasize that the modal system S5 can be translated into a fragment of first-order logic, which he called “the uniform monadic first-order predicate calculus” (Prior and Fine 1977: 56). Monadic, since no relations between worlds need to be stated for S5; and uniform as only one variable is needed to quantify over worlds (instants) when bound, and to refer to the privileged state (the actual world or the present time) when free (see Prior and Fine 1977). Concerning the technical question of which model-theoretic features characterize modal logics understood as well-behaved fragments of first-order logic, see Blackburn and van Benthem’s “Modal Logic: A Semantic Perspective” (2007a).

The scope of this entry is the recent historical development of modal logic, strictly understood as the logic of necessity and possibility, and particularly the historical development of systems of modal logic, both syntactically and semantically, from C.I. Lewis’s pioneering work starting in 1912, with the first systems devised in 1918, to S. Kripke’s work in the early 1960s. In that short span of time of less than fifty years, modal logic flourished both philosophically and mathematically. Mathematically, different modal systems were developed and advances in algebra helped to foster the model theory for such systems. This culminated in the development of a formal semantics that extended to modal logic the successful first-order model theoretic techniques, thereby affording completeness and decidability results for many, but not all, systems. Philosophically, the availability of different systems and the adoption of the possible worlds model-theoretic semantics were naturally accompanied by reflections on the nature of possibility and necessity, on distinct sorts of necessities, on the role of the formal semantics, and on the nature of the possible worlds, to mention just a few. In particular, the availability of different systems brings to the fore the philosophical question of which modal logic is the correct one, under some intended interpretation of the modal operators, e.g., as logical or metaphysical necessity. Questions concerning the interpretability of modal logic, especially quantified modal logic, were insistently raised by Quine. All such questions are not pursued in this entry, which is mostly devoted to the formal development of the subject.

Modal logic is a rich and complex subject matter.
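As an illustration of Prior's point (ours, not part of the original entry), the translation of S5 into the uniform monadic calculus can be sketched as follows: each propositional variable $$p$$ becomes a monadic predicate $$P(w)$$, and — since S5 needs no accessibility relation — the modal operators become quantifiers over the single world variable:

```latex
% Sketch of the "uniform monadic" translation for S5:
% one world variable w, one monadic predicate per propositional variable.
\begin{align*}
  ST_w(p) &= P(w)\\
  ST_w(\neg\varphi) &= \neg\, ST_w(\varphi)\\
  ST_w(\varphi \wedge \psi) &= ST_w(\varphi) \wedge ST_w(\psi)\\
  ST_w(\Diamond\varphi) &= \exists w\, ST_w(\varphi)
    && \text{(no accessibility relation needed for S5)}\\
  ST_w(\Box\varphi) &= \forall w\, ST_w(\varphi)
\end{align*}
```

The variable $$w$$ is reused throughout (hence "uniform"), and when it occurs free it refers to the privileged actual world.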
This entry does not present a complete survey of all the systems developed and of all the model theoretic results proved in the lapse of time under consideration. It does however offer a meaningful survey of the main systems and aims to be useful to those looking for an historical outline of the subject matter that, even if not all-inclusive, delineates the most interesting model theoretic results and indicates further lines of exploration. Bull and Segerberg’s (1984: 3) useful division of the original sources of modal logic into three distinct traditions—syntactic, algebraic, and model theoretic—is adopted. For other less influential traditions, see Bull and Segerberg (1984: 16). See also Lindström and Segerberg’s “Modal Logic and Philosophy” (2007). The main focus of this entry is on propositional modal logic, while only some particular aspects of the semantics of quantified modal logic are discussed. For a more detailed treatment of quantified modal logic, consult the SEP entry on modal logic. Concerning the entry’s notation, notice that $$\Rightarrow$$ is adopted in place of Lewis’s fishhook for strict implication, and $$\Leftrightarrow$$ for strict equivalence.

## 1. The Syntactic Tradition

In a 1912 pioneering article in Mind “Implication and the Algebra of Logic” C.I. Lewis started to voice his concerns on the so-called “paradoxes of material implication”. Lewis points out that in Russell and Whitehead’s Principia Mathematica we find two “startling theorems: (1) a false proposition implies any proposition, and (2) a true proposition is implied by any proposition” (1912: 522). In symbols:

$\tag{1} \neg p \rightarrow(p \rightarrow q)$

and

$\tag{2} p \rightarrow(q \rightarrow p)$

Lewis has no objection to these theorems in and of themselves:

> In themselves, they are neither mysterious sayings, nor great discoveries, nor gross absurdities. They exhibit only, in sharp outline, the meaning of “implies” which has been incorporated into the algebra. (1912: 522)

However, the theorems are inadequate vis-à-vis the intended meaning of “implication” and our actual modes of inference that the intended meaning tries to capture. So Lewis has in mind an intended meaning for the conditional connective $$\rightarrow$$ or $$\supset$$, and that is the meaning of the English word “implies”. The meaning of “implies” is that “of ordinary inference and proof” (1912: 531) according to which a proposition implies another proposition if the second can be logically deduced from the first. Given such an interpretation, (1) and (2) ought not to be theorems, and propositional logic may be regarded as unsound vis-à-vis the reading of $$\rightarrow$$ as logical implication. Consider (2) for example: from the sheer truth of a proposition $$p$$ it does not (logically) follow that $$p$$ follows logically from any proposition whatsoever. Additionally, given the intended, strict reading of $$\rightarrow$$ as logical implication and the equivalence of $$(\neg p\rightarrow q)$$ and $$(p\vee q)$$, Lewis infers that disjunction too must be given a new intensional sense, according to which $$(p\vee q)$$ holds just in case if $$p$$ were not the case it would have to be the case that $$q$$.

Considerations of this sort, based on the distinction between extensional and intensional readings of the connectives, were not original to Lewis. Already in 1880 Hugh MacColl in the first of a series of eight papers on Symbolical Reasoning published in Mind claimed that $$(p\rightarrow q)$$ and $$(\neg p\vee q)$$ are not equivalent: $$(\neg p\vee q)$$ follows from $$(p\rightarrow q)$$, but not vice versa (MacColl 1880: 54). This is the case because MacColl interprets $$\vee$$ as regular extensional disjunction, and $$\rightarrow$$ as intensional implication, but then from the falsity of $$p$$ or the truth of $$q$$ it does not follow that $$p$$ without $$q$$ is logically impossible. In the second paper of the series, MacColl distinguishes between certainties, possibilities and variable statements, and introduces Greek letters as indices to classify propositions. So $$\alpha^{\varepsilon}$$ expresses that $$\alpha$$ is a certainty, $$\alpha^{\eta}$$ that $$\alpha$$ is an impossibility, and $$\alpha^{\theta}$$ that $$\alpha$$ is a variable, i.e., neither a certainty nor an impossibility (MacColl 1897: 496–7). Using this threefold classification of statements, MacColl proceeds to distinguish between causal and general implication. A causal implication holds between statements $$\alpha$$ and $$\beta$$ if whenever $$\alpha$$ is true $$\beta$$ is true, and $$\beta$$ is not a certainty. A general implication holds between $$\alpha$$ and $$\beta$$ whenever $$\alpha$$ and not-$$\beta$$ is impossible, thus in particular whenever $$\alpha$$ is an impossibility or $$\beta$$ a certainty (1897: 498). The use of indices opened the door to the iteration of modalities, and the beginning of the third paper of the series (MacColl 1900: 75–6) is devoted to clarifying the meaning of statements with iterated indices, including $$\tau$$ for truth and $$\iota$$ for negation. So for example $$A^{\eta \iota \varepsilon}$$ is read as “It is certain that it is false that A is impossible” (note that the indices are read from right to left). Interestingly, Bertrand Russell’s 1906 review of MacColl’s book Symbolic Logic and its Applications (1906) reveals that Russell did not understand the modal idea of the variability of a proposition, hence wrongly attributed to MacColl a confusion between sentences and propositions which allowed the attribution of variability only to sentences whose meaning, hence truth value, was not fixed. Similarly, certainty and impossibility are for Russell material properties of propositional functions (true of everything or of nothing) and not modal properties of propositions.
It might be said that MacColl’s work came too early and fell on deaf ears. In fact, Rescher reports on Russell’s declared difficulty in understanding MacColl’s symbolism and, more importantly, argues that Russell’s view of logic had a negative impact on the development of modal logic (“Bertrand Russell and Modal Logic” in Rescher 1974: 85–96). Despite MacColl’s earlier work, Lewis can be regarded as the father of the syntactic tradition, not only because of his influence on later logicians, but especially because of his introduction of various systems containing the new intensional connectives.

### 1.1 The Lewis Systems

In “The Calculus of Strict Implication” (1914) Lewis suggests two possible alternatives to the extensional system of Whitehead and Russell’s Principia Mathematica. One way of introducing a system of strict implication consists in eliminating from the system those theorems that, like (1) and (2) above, are true only for material implication but not for strict implication, thereby obtaining a sound system for both material and strict implication, but in neither case complete. The second, more fruitful alternative consists in introducing a new system of strict implication, still modeled on the Whitehead and Russell system of material implication, that will contain (all or a part of) extensional propositional logic as a proper part, but aspiring to completeness for at least strict implication. This second option is further developed in A Survey of Symbolic Logic (1918). There Lewis introduces a first system meant to capture the ordinary, strict sense of implication, guided by the idea that:

> Unless “implies” has some “proper” meaning, there is no criterion of validity, no possibility even of arguing the question whether there is one or not. And yet the question What is the “proper” meaning of “implies”? remains peculiarly difficult. (1918: 325)

The 1918 system takes as primitive the notion of impossibility $$(\neg \Diamond)$$, defines the operator of strict implication in its terms, and still employs an operator of intensional disjunction. However, Post will prove that this system leads to the collapse of necessity to truth—alternatively, of impossibility to falsity—since from one of its theorems $$((p\Rightarrow q)\Leftrightarrow(\neg \Diamond q\Rightarrow \neg \Diamond p))$$ it can be proved that $$(\neg p\Leftrightarrow \neg \Diamond p)$$. In 1920, “Strict Implication—An Emendation”, Lewis fixes the system substituting for the old axiom the weaker one: $$((p\Rightarrow q)\Rightarrow(\neg \Diamond q\Rightarrow \neg \Diamond p))$$. Finally, in Appendix II of the Lewis and Langford volume Symbolic Logic (1932: 492–502), “The Structure of the System of Strict Implication”, the 1918 system is given a new axiomatic base.

In the 1932 Appendix C.I. Lewis introduces five different systems. The modal primitive symbol is now the operator of possibility $$\Diamond$$, strict implication $$(p\Rightarrow q)$$ is defined as $$\neg \Diamond(p\wedge \neg q)$$, and $$\vee$$ is ordinary extensional disjunction. The necessity operator $$\Box$$ can also be introduced and defined in the usual way as $$\neg \Diamond \neg$$, though Lewis does not do so.

Where $$p, q$$, and $$r$$ are propositional variables, System S1 has the following axioms:

Axioms for S1

$\begin{align} \tag{B1} (p\wedge q)&\Rightarrow(q\wedge p) \\ \tag{B2} (p\wedge q)&\Rightarrow p \\ \tag{B3} p&\Rightarrow(p\wedge p) \\ \tag{B4} ((p\wedge q)\wedge r)&\Rightarrow(p\wedge(q\wedge r)) \\ \tag{B5} p&\Rightarrow \neg \neg p \\ \tag{B6} ((p\Rightarrow q)\wedge(q\Rightarrow r)) & \Rightarrow(p\Rightarrow r) \\ \tag{B7} (p\wedge(p\Rightarrow q)) & \Rightarrow q \end{align}$

Axiom B5 was proved redundant by McKinsey (1934), and can thereby be ignored.

The rules are (1932: 125–6):

Rules for S1

- Uniform Substitution: A valid formula remains valid if a formula is uniformly substituted in it for a propositional variable.
- Substitution of Strict Equivalents: Either of two strictly equivalent formulas can be substituted for one another.
- Adjunction: If $$\Phi$$ and $$\Psi$$ have been inferred, then $$\Phi \wedge \Psi$$ may be inferred.
- Strict Inference: If $$\Phi$$ and $$\Phi \Rightarrow \Psi$$ have been inferred, then $$\Psi$$ may be inferred.

System S2 is obtained from System S1 by adding what Lewis calls “the consistency postulate”, since it obviously holds for $$\Diamond$$ interpreted as consistency:

$\tag{B8} \Diamond(p\wedge q)\Rightarrow \Diamond p$

System S3 is obtained from system S1 by adding the axiom:

$\tag{A8} ((p\Rightarrow q)\Rightarrow(\neg \Diamond q\Rightarrow \neg \Diamond p))$

System S3 corresponds to the 1918 system of A Survey, which Lewis originally considered the correct system for strict implication. By 1932, Lewis has come to prefer system S2.
The reason, as reported in Lewis 1932: 496, is that both Wajsberg and Parry derived in system S3—in its 1918 axiomatization—the following theorem:

$(p\Rightarrow q)\Rightarrow((q\Rightarrow r)\Rightarrow(p\Rightarrow r)),$

which according to Lewis ought not to be regarded as a valid principle of deduction. In 1932 Lewis is not sure that the questionable theorem is not derivable in S2. Should it be, he would then adjudicate S1 as the proper system for strict implication. However, Parry (1934) will later prove that neither A8 nor

$(p\Rightarrow q)\Rightarrow((q\Rightarrow r)\Rightarrow(p\Rightarrow r))$

can be derived in S2.

A further existence axiom can be added to all these systems:

$\tag{B9} (\exists p, q)(\neg(p\Rightarrow q)\wedge \neg(p\Rightarrow \neg q))$

The addition of B9 makes it impossible to interpret $$\Rightarrow$$ as material implication, since in the case of material implication it can be proved that for any propositions $$p$$ and $$q$$, $$((p\rightarrow q)\vee(p\rightarrow \neg q))$$ (1932: 179). From B9 Lewis proceeds to deduce the existence of at least four logically distinct propositions: one true and necessary, one true but not necessary, one false and impossible, one false but not impossible (1932: 184–9).

Following Becker (1930), Lewis considers three more axioms:

$\begin{align} \tag{C10} \neg \Diamond \neg p &\Rightarrow \neg \Diamond \neg \neg \Diamond \neg p\\ \tag{C11} \Diamond p & \Rightarrow \neg \Diamond \neg \Diamond p\\ \tag{C12} p&\Rightarrow \neg \Diamond \neg \Diamond p \end{align}$

System S4 adds axiom C10 to the basis of S1. System S5 adds axiom C11, or alternatively C10 and C12, to the basis of S1. Lewis concludes Appendix II by noting that the study of logic is best served by focusing on systems weaker than S5 and not exclusively on S5.

Paradoxes of strict implication similar to those of material implication arise too. Given that strict implication $$(p\Rightarrow q)$$ is defined as $$\neg \Diamond(p\wedge \neg q)$$, it follows that an impossible proposition implies anything, and that a necessary proposition is implied by anything. Lewis argues that this is as it ought to be. Since impossibility is taken to be logical impossibility, i.e., ultimately a contradiction, Lewis argues that from an impossible proposition like $$(p\wedge \neg p)$$, both $$p$$ and $$\neg p$$ follow. From $$p$$ we can derive $$(p\vee q)$$, for any proposition $$q$$. From $$\neg p$$ and $$(p\vee q)$$, we can derive $$q$$ (1932: 250). The argument is controversial since one might think that the principle $$(p\Rightarrow(p\vee q))$$ should not be a theorem of a system aiming to express ordinary implication (see, e.g., Nelson 1930: 447). Whatever the merits of this argument, those who disagreed with Lewis started to develop a logic of entailment based on the assumption that entailment requires more than Lewis’s strict implication. See, for example, Nelson 1930, Strawson 1948, and Bennett 1954. See also the SEP entry on relevance logic.

Notice that it was Lewis’s search for a system apt to express strict implication that made Quine reject modal systems as based on a use-mention confusion insofar as such systems were formulated to express at the object level proof-theoretic or semantic notions like consistency, implication, derivability and theoremhood (in fact, whenever $$p\rightarrow q$$ is a propositional theorem, system S1, and so all the other stronger Lewis systems too, can prove $$p\Rightarrow q$$ (Parry 1939: 143)).

### 1.2 Other Systems and Alternative Axiomatizations of the Lewis Systems

Gödel in “An Interpretation of the Intuitionistic Propositional Calculus” (1933) is the first to propose an alternative axiomatization of the Lewis system S4 that separates the propositional basis of the system from the modal axioms and rules.
Gödel adds the following rules and axioms to the propositional calculus.\n\n\\begin{align*} \\tag{Necessitation} \\textrm{If } \\mvdash \\alpha &\\textrm{ then } \\mvdash \\Box \\alpha, \\\\ \\tag{Axiom K} \\mvdash \\Box(p\\rightarrow q)&\\rightarrow(\\Box p\\rightarrow \\Box q), \\\\ \\tag{Axiom T} \\mvdash \\Box p&\\rightarrow p\\textrm{, and} \\\\ \\tag{Axiom 4} \\mvdash \\Box p&\\rightarrow \\Box \\Box p.\\\\ \\end{align*}\n\nInitially, Gödel employs an operator $$B$$ of provability to translate Heyting’s primitive intuitionistic connectives, and then observes that if we replace $$B$$ with an operator of necessity we obtain the system S4. Gödel also claims that a formula $$\\Box p\\vee \\Box q$$ is not provable in S4 unless either $$\\Box p$$ or $$\\Box q$$ is provable, analogously to intuitionistic disjunction. Gödel’s claim will be proved algebraically by McKinsey and Tarski (1948). Gödel’s short note is important for starting the fruitful practice of axiomatizing modal systems separating the propositional calculus from the strictly modal part, but also for connecting intuitionistic and modal logic.\n\nFeys (1937) is the first to propose system T by subtracting axiom 4 from Gödel’s system S4 (see also Feys 1965: 123–124). In An Essay in Modal Logic (1951) von Wright discusses alethic, epistemic, and deontic modalities, and introduces system M, which Sobociński (1953) will prove to be equivalent to Feys’ system T. Von Wright (1951: 84–90) proves that system M contains Lewis’s S2, which contains S1—where system S is said to contain system S′ if all the formulas provable in S′ can be proved in S too. System S3, an extension of S2, is not contained in M. Nor is M contained in S3. Von Wright finds S3 of little independent interest, and sees no reason to adopt S3 instead of the stronger S4. 
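The separation of axioms T and 4 can be seen concretely in the relational model theory discussed in section 3 below. The following Python sketch (a modern illustration using Kripke-style models; none of this machinery appears in Gödel, Feys, or von Wright, and the formula encoding is invented for the example) evaluates formulas at worlds of a small model and exhibits a countermodel to axiom T on an irreflexive frame and to axiom 4 on a non-transitive frame.

```python
def holds(formula, world, R, V):
    """Evaluate a modal formula at a world of a Kripke model.
    R is the accessibility relation (set of pairs), V the set of
    worlds where the single atom 'p' is true.
    Formulas: 'p' | ('not', A) | ('imp', A, B) | ('box', A)."""
    if formula == 'p':
        return world in V
    op = formula[0]
    if op == 'not':
        return not holds(formula[1], world, R, V)
    if op == 'imp':
        return (not holds(formula[1], world, R, V)) or holds(formula[2], world, R, V)
    if op == 'box':
        # Box A holds at w iff A holds at every world accessible from w.
        return all(holds(formula[1], v, R, V) for (u, v) in R if u == world)

# Axiom T (box p -> p) fails at world 0 of an irreflexive frame:
# box p holds at 0 (its only successor, 1, verifies p) but p fails at 0.
R = {(0, 1), (1, 1)}
V = {1}
axiom_T = ('imp', ('box', 'p'), 'p')
print(holds(axiom_T, 0, R, V))    # False

# Axiom 4 (box p -> box box p) fails at world 0 of a non-transitive frame.
R2 = {(0, 1), (1, 2)}
V2 = {1}
axiom_4 = ('imp', ('box', 'p'), ('box', ('box', 'p')))
print(holds(axiom_4, 0, R2, V2))  # False
```

Reflexive frames validate T and transitive frames validate 4, which is why the two axioms genuinely separate the systems.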
In general, the Lewis systems are numbered in order of strength, with S1 the weakest and S5 the strongest, weaker systems being contained in the stronger ones.\n\nLemmon (1957) also follows Gödel in axiomatizing modal systems on a propositional calculus base, and presents an alternative axiomatization of the Lewis systems. Where PC is the propositional calculus base, PC may be characterized as the following three rules (1957: 177):\n\nA characterization of propositional calculus PC\n\n• PCa If $$\alpha$$ is a tautology, then $$\mvdash \alpha$$\n• PCb Substitution for propositional variables\n• PCc Material detachment/Modus Ponens: if $$\mvdash \alpha$$ and $$\mvdash \alpha \rightarrow \beta$$, then $$\mvdash \beta$$\n\nFurther rules in Lemmon’s system are:\n\n• (a) If $$\mvdash \alpha$$ then $$\mvdash \Box \alpha$$ (Necessitation)\n• (a′) If $$\alpha$$ is a tautology or an axiom, then $$\mvdash \Box \alpha$$\n• (b) If $$\mvdash \Box(\alpha \rightarrow \beta)$$ then $$\mvdash \Box(\Box \alpha \rightarrow \Box \beta)$$\n• (b′) Substitutability of strict equivalents.\n\nFurther axioms in Lemmon’s system are:\n\n\begin{align} \tag{1} \Box(p \rightarrow q)&\rightarrow \Box(\Box p\rightarrow \Box q) \\ \tag{1′} \Box(p\rightarrow q)&\rightarrow(\Box p\rightarrow \Box q) &\textrm{(Axiom K)} \\ \tag{2} \Box p&\rightarrow p &\textrm{(Axiom T)} \\ \tag{3} (\Box(p\rightarrow q)\wedge \Box(q\rightarrow r))&\rightarrow \Box(p\rightarrow r)\\ \end{align}\n\nUsing the above rules and axioms Lemmon defines four systems. System P1, which is proved equivalent to the Lewis system S1, employs the propositional basis (PC), rules (a′)—necessitation of tautologies and axioms—and (b′), and axioms (2) and (3). System P2, equivalent to S2, employs (PC), rules (a′) and (b), and axioms (2) and (1′). System P3, equivalent to S3, employs (PC), rule (a′), and axioms (2) and (1).
System P4, equivalent to S4, employs (PC), rule (a), and axioms (2) and (1). In Lemmon’s axiomatization it is easy to see that S3 and von Wright’s system M (Feys’ T) are not included in each other, given M’s stronger rule of necessitation and S3’s stronger axiom (1) in place of (1′) = K. In general, Lemmon’s axiomatization makes more perspicuous the logical distinctions between the different Lewis systems.\n\nLemmon considers also some systems weaker than S1. Of particular interest is system S0.5 which weakens S1 by replacing rule (a′) with the weaker rule (a″):\n\n• (a″) If $$\\alpha$$ is a tautology, then $$\\mvdash \\Box \\alpha$$.\n\nLemmon interprets system S0.5 as a formalized metalogic of the propositional calculus, where $$\\Box \\alpha$$ is interpreted as “$$\\alpha$$ is a tautology”.\n\nWe call “normal” the systems that include PC, axiom K and the rule of necessitation. System K is the smallest normal system. System T adds axiom T to system K. System B (the Brouwersche system) adds axiom B\n\n$\\mvdash p\\Rightarrow \\Box \\Diamond p \\quad\\textrm{(equivalent to Becker’s C12)}$\n\nto system T. S4 adds axiom 4 (equivalent to Becker’s C10) to system T. S5 adds axioms B and 4, or alternatively axiom E\n\n$\\mvdash \\Diamond p\\Rightarrow \\Box \\Diamond p \\quad \\textrm{(equivalent to Becker’s C11)}$\n\nto system T. Lewis’s systems S1, S2, and S3 are non-normal given that they do not contain the rule of Necessitation. For the relationship between these (and other) systems, and the conditions on frames that the axioms impose, consult the SEP entry on modal logic.\n\nOnly a few of the many extensions of the Lewis systems that have been discussed in the literature are mentioned here. Alban (1943) introduced system S6 by adding to S2 the axiom $$\\mvdash \\Diamond \\Diamond p$$. 
Halldén (1950) calls S7 the system that adds the axiom $$\mvdash \Diamond \Diamond p$$ to S3, and S8 the system that extends S3 with the addition of the axiom $$\mvdash \neg \Diamond \neg \Diamond \Diamond p$$. While the addition of an axiom of universal possibility $$\mvdash \Diamond p$$ would be inconsistent with all the Lewis systems, since they all contain theorems of the form $$\mvdash \Box p$$, systems S6, S7 and S8 are consistent. Instead, the addition of either of these axioms to S4, and so also to S5, results in an inconsistent system, given that in S4 $$\mvdash \Diamond \Diamond p\Rightarrow \Diamond p$$. Halldén also proved that a formula is a theorem of S3 if and only if it is a theorem of both S4 and S7 (1950: 231–232), thus S4 and S7 are two alternative extensions of S3.\n\n## 2. The Matrix Method and Some Algebraic Results\n\nIn “Philosophical Remarks on Many-Valued Systems of Propositional Logic” (1930; Łukasiewicz 1920 is a preliminary Polish version of the main ideas of this paper), Łukasiewicz says:\n\nWhen I recognized the incompatibility of the traditional theorems on modal propositions in 1920, I was occupied with establishing the system of the ordinary “two-valued” propositional calculus by means of the matrix method. I satisfied myself at the time that all theses of the ordinary propositional calculus could be proved on the assumption that their propositional variables could assume only two values, “0” or “the false”, and “1” or “the true”. (1970: 164)\n\nThis passage illustrates well how Łukasiewicz was thinking of logic in the early twenties. First, he was thinking in algebraic terms, rather than syntactically, concerning himself not so much with the construction of new systems, but with the evaluation of the systems relative to sets of values.
Secondly, he was introducing three-valued matrices to make logical space for the notion of propositions (eminently about future contingents) that are neither true nor false, and that receive the new indeterminate value ½. Ironically, later work employing his original matrix method will show that the hope of treating modal logic as a three-valued system cannot be realized. See also the SEP entry on many-valued logic.\n\nA matrix for a propositional logic L is given by (i) a set K of elements, the truth-values, (ii) a non-empty subset $$D\\subseteq K$$ of designated truth-values, and (iii) operations on the set K, that is functions from $$n$$-tuples of truth-values to truth-values, that correspond to the connectives of L. A matrix satisfies a formula A under an assignment $$\\sigma$$ of elements of K to the variables of A if the value of A under $$\\sigma$$ is a member of D, that is, a designated value. A matrix satisfies a formula if it satisfies it under every assignment $$\\sigma$$. A matrix for a modal logic M extends a matrix for a propositional logic by adding a unary function that corresponds to the connective $$\\Diamond$$.\n\nMatrices are typically used to show the independence of the axioms of a system as well as their consistency. The consistency of two formulas A and B is established by a matrix that, under an assignment $$\\sigma$$, assigns to both formulas designated values. The independence of formula B from formula A is established by a matrix that (i) preserves the validity of the rules of the system and that (ii) under an interpretation $$\\sigma$$ assigns to A but not to B a designated value. Parry (1939) uses the matrix method to show that the number of modalities of Lewis’s systems S3 and S4 is finite. A modality is a modal function of one variable that contains only the operators $$\\neg$$ and $$\\Diamond$$. The degree of a modality is given by the number of $$\\Diamond$$ operators contained. A proper modality is of degree higher than zero. 
Proper modalities can be of four different forms:\n\n\begin{align} \tag{1} \neg \ldots \Diamond p\\ \tag{2} \Diamond \ldots \Diamond p\\ \tag{3} \neg \ldots \Diamond \neg p\\ \tag{4} \Diamond \ldots \neg p.\\ \end{align}\n\nThe improper modalities are $$p$$ and $$\neg p$$ (1939: 144). Parry proves that S3 has 42 distinct modalities, and that S4 has 14 distinct modalities. It was already known that system S5 has only 6 distinct modalities since it reduces all modalities to modalities of degree zero or one. Parry introduces system S4.5 by adding to S4 the following axiom:\n\n$\mvdash \neg \Diamond \neg \Diamond \neg \Diamond p\Rightarrow \neg \Diamond p.$\n\nThe system reduces the number of modalities of S4 from 14 to 12 (or 10 proper ones). The addition of the same axiom to Lewis’s system S3 results in a system with 26 distinct modalities. Moreover, if we add\n\n$\mvdash \neg \Diamond \neg \Diamond \Diamond p\Rightarrow \neg \Diamond \neg \Diamond p$\n\nto S3 we obtain a distinct system, also intermediate between S3 and S4, with 26 modalities. Therefore the number of modalities does not uniquely determine a system. Systems S1 and S2, as well as T and B, have an infinite number of modalities (Burgess 2009, chapter 3 on Modal Logic, discusses the additional systems S4.2 and S4.3 and explains well the reduction of modalities in different systems).\n\nA characteristic matrix for a system L is a matrix that satisfies all and only the theorems of L. A matrix is finite if its set K of truth-values is finite. A finite characteristic matrix yields a decision procedure, where a system is decidable if every formula of the system that is not a theorem is falsified by some finite matrix (this is the finite model property). Yet Dugundji (1940) shows that none of S1–S5 has a finite characteristic matrix. Hence, none of these systems can be viewed as an $$n$$-valued logic for a finite $$n$$.
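The matrix method can be made concrete with a small Python sketch (a modern illustration, not anything in Parry or Dugundji): Łukasiewicz’s three-valued matrix with truth-values {0, ½, 1}, designated set D = {1}, the Łukasiewicz operations for negation, implication and conjunction, and the definition of possibility that Łukasiewicz attributes to Tarski, $$\Diamond p := \neg p \rightarrow p$$. A matrix satisfies a formula when the formula receives a designated value under every assignment; the sketch shows that $$\neg\Diamond(p \wedge \neg p)$$, a theorem of all the Lewis systems, is not satisfied by this matrix (it takes the undesignated value 0 at p = ½), one concrete symptom of why a three-valued treatment of modality falls short.

```python
from fractions import Fraction
from itertools import product

# Lukasiewicz's three-valued matrix: K = {0, 1/2, 1}, designated D = {1}.
HALF = Fraction(1, 2)
K = [Fraction(0), HALF, Fraction(1)]
D = {Fraction(1)}

neg = lambda a: 1 - a
imp = lambda a, b: min(Fraction(1), 1 - a + b)   # Lukasiewicz implication
conj = lambda a, b: min(a, b)
poss = lambda a: imp(neg(a), a)                  # <>p  :=  ~p -> p

def satisfied(formula, variables):
    """A matrix satisfies a formula iff its value is designated
    under every assignment of truth-values to its variables."""
    return all(formula(*vals) in D
               for vals in product(K, repeat=len(variables)))

# p -> p is satisfied by the matrix...
print(satisfied(lambda p: imp(p, p), 'p'))                   # True
# ...but ~<>(p & ~p), a Lewis-system theorem, is not:
print(satisfied(lambda p: neg(poss(conj(p, neg(p)))), 'p'))  # False
# At p = 1/2 the formula takes the undesignated value 0:
print(neg(poss(conj(HALF, neg(HALF)))))                      # 0
```

The same `satisfied` routine, run over a matrix that preserves a system’s rules, is how independence and consistency results of the kind described above are checked.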
Later, Scroggs (1951) will prove that every proper extension of S5 that preserves detachment for material implication and is closed under substitution has a finite characteristic matrix.\n\nDespite their lack of a finite characteristic matrix, McKinsey (1941) shows that systems S2 and S4 are decidable. To prove these results McKinsey introduces modal matrices $$(K, D, -, *, \\times)$$, with $$-$$, $$*$$, and $$\\times$$ corresponding to negation, possibility, and conjunction respectively. A matrix is normal if it satisfies the following conditions:\n\n1. if $$x \\in D$$ and $$(x\\Rightarrow y) \\in D$$ and $$y \\in K$$, then $$y \\in D$$,\n2. if $$x \\in D$$ and $$y \\in D$$, then $$x\\times y \\in D$$,\n3. if $$x \\in K$$ and $$y \\in K$$ and $$x\\Leftrightarrow y \\in D$$, then $$x = y$$.\n\nThese conditions correspond to Lewis’s rules of strict inference, adjunction and substitution of strict equivalents. The structure of McKinsey’s proof is as follows. The proof employs three steps. First, using an unpublished method of Lindenbaum explained to him by Tarski which holds for systems that have the rule of Substitution for propositional variables, McKinsey shows that there is an S2-characteristic matrix $$M = (K, D, -, *, \\times)$$ that does not satisfy condition (iii) and is therefore non-normal. M is a trivial matrix whose domain is the set of formulas of the system, whose designated elements are the theorems of the system, and whose operations are the connectives themselves. The trivial matrix M does not satisfy (iii) given that for some distinct formulas A and B, $$A\\Leftrightarrow B$$ is an S2-theorem. Second, McKinsey shows how to construct from M a normal, but still infinite, S2-characteristic matrix $$M_1 = (K_1, D_1, -_1, *^1, \\times_1 )$$, whose elements are equivalence classes of provably equivalent formulas of S2, i.e., of formulas A and B such that $$A\\Leftrightarrow B$$ is a theorem of S2, and whose operations are revised accordingly. 
For example, if $$E(A)$$ is the set of formulas provably equivalent to A and $$E(A)\in K_1$$, then $$-_1 E(A) = E(-A) = E(\neg A)$$. $$M_1$$ satisfies exactly the formulas satisfied by M without violating condition (iii), hence it is a characteristic normal matrix for S2 ($$M_1$$ is the Lindenbaum algebra for S2). Finally, it is shown that for every formula A that is not a theorem of S2 there is a finite and normal matrix (a sub-algebra of $$M_1$$) that falsifies it. A similar proof is given for S4.\n\nA matrix is a special kind of algebra. An algebra is a matrix without a set D of designated elements. Boolean algebras correspond to matrices for propositional logic. According to Bull and Segerberg (1984: 10) the generalization from matrices to algebras may have had the effect of encouraging the study of these structures independently of their connections to logic and modal systems. The set of designated elements D in fact facilitates a definition of validity with respect to which the theorems of a system can be evaluated. Without such a set the most obvious link to logic is severed. A second generalization to classes of algebras, rather than merely to individual algebras, was also crucial to the mathematical development of the subject matter. Tarski is the towering figure in such development.\n\nJónsson and Tarski (1951 and 1952) introduce the general idea of Boolean algebras with operators, i.e., extensions of Boolean algebras by addition of operators that correspond to the modal connectives. They prove a general representation theorem for Boolean algebras with operators that extends Stone’s result for Boolean algebras (every Boolean algebra can be represented as a set algebra). This work of Jónsson and Tarski evolved from Tarski’s purely mathematical study of the algebra of relations and includes no reference to modal logic or even logic in general.
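The idea of a Boolean algebra with an operator can be seen in miniature (an illustrative Python sketch with invented data, not Jónsson and Tarski’s notation): take the power-set Boolean algebra of a small set of points equipped with a binary relation R, and define m(X) as the set of points that bear R to some member of X. One can then check mechanically that m is additive (it distributes over joins) and normal (it sends the zero element, the empty set, to itself), the two conditions Jónsson and Tarski impose on their operators.

```python
from itertools import chain, combinations

W = {0, 1, 2}                  # a small set of points ("worlds")
R = {(0, 1), (1, 2), (2, 2)}   # an arbitrary binary relation on W

def m(X):
    """Possibility-style operator on the power-set algebra:
    m(X) = {w : w bears R to some member of X}."""
    return {w for w in W if any((w, v) in R for v in X)}

def subsets(s):
    """All elements of the power-set Boolean algebra of s."""
    s = list(s)
    return [set(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

# Normality: m sends the zero element to itself.
assert m(set()) == set()
# Additivity: m distributes over joins for every pair of elements.
assert all(m(X | Y) == m(X) | m(Y)
           for X in subsets(W) for Y in subsets(W))
print("m is a normal, additive operator on the power set of W")
```

Read in the other direction, recovering a relation R from such an operator is the representation idea that connects these algebras to the relational models of section 3.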
Jónsson and Tarski’s theorem is a (more general) algebraic analog of Kripke’s later semantic completeness results, yet this was not realized for some time. Not only was Tarski unaware of the connection, but it appears that both Kripke and Lemmon had not read the Jónsson and Tarski papers at the time in which they did their modal work in the late fifties and sixties, and Kripke claims to have reached the same result independently.\n\nLemmon (1966a and 1966b) adapts the algebraic methods of McKinsey to prove decidability results and representation theorems for various modal systems including T (though apparently in ignorance of Jónsson and Tarski’s work). In particular, he extends McKinsey’s method by introducing a new technique for constructing finite algebras of subsets of a Kripke model structure (discussed in the next section of this entry). Lemmon (1966b: 191) attributes to Dana Scott the main result of his second 1966 paper. This is a general representation theorem proving that algebras for modal systems can be represented as algebras based on the power set of the set K in the corresponding Kripke’s structures. As a consequence, algebraic completeness translates into Kripke’s model theoretic completeness. So, Lemmon elucidates very clearly the connection between Kripke’s models whose elements are worlds and the corresponding algebras whose elements are sets of worlds that can be thought of as propositions, thereby showing that the algebraic and model theoretic results are deeply connected. Kripke (1963a) is already explicit on this connection. In The Lemmon Notes (1977), written in collaboration with Dana Scott and edited by Segerberg, the 1966 technique is transformed into a purely model theoretic method which yields completeness and decidability results for many systems of modal logic in as general a form as possible (1977: 29).\n\nSee also the SEP entry on the algebra of logic tradition. 
For a basic introduction to the algebra of modal logic, consult Hughes and Cresswell 1968, Chapter 17 on “Boolean Algebra and Modal Logic”. For a more comprehensive treatment, see chapter 5 of Blackburn, de Rijke, and Venema 2001. See also Goldblatt 2003.\n\n## 3. The Model Theoretic Tradition\n\n### 3.1 Carnap\n\nIn the early 1940s the recognition of the semantical nature of the notion of logical truth led Rudolf Carnap to an informal explication of this notion in terms of Leibnizian possible worlds. At the same time, he recognized that the many syntactical advances in modal logic from 1918 on were still not accompanied by adequate semantic considerations. One notable exception was Gödel’s interpretation of necessity as provability and the resulting preference for S4. Carnap instead thought of necessity as logical truth or analyticity. Considerations on the properties of logically true sentences led him to think of S5 as the right system to formalize this ‘informal’ notion. Carnap’s work in the early forties would then be focused on (1) defining a formal semantic notion of L-truth apt to represent the informal semantic notions of logical truth, necessity, and analyticity, that is, truth in virtue of meaning alone (initially, he drew no distinction between these notions, but clearly thought of analyticity as the leading idea); and (2) providing a formal semantics for quantified S5 in terms of the formal notion of L-truth with the aim of obtaining soundness and completeness results, that is, prove that all the theorems of quantified S5 are L-true, and that all the L-truths (expressible in the language of the system) are theorems of the system.\n\nThe idea of quantified modal systems occurred to Ruth Barcan too. In “A Functional Calculus of First Order Based on Strict Implication” (1946a) she added quantification to Lewis’s propositional system S2; Carnap (1946) added it to S5. 
Though some specific semantic points about quantified modal logic will be considered, this entry is not focused on the development of quantified modal logic, but rather on the emergence of the model theoretic formal semantics for modal logic, propositional or quantified. For a more extensive treatment of quantified modal logic, consult the SEP entry on modal logic.\n\nIn “Modalities and Quantification” (1946) and in Meaning and Necessity (1947), Carnap interprets the object language operator of necessity as expressing at the object level the semantic notion of logical truth:\n\n[T]he guiding idea in our constructions of systems of modal logic is this: a proposition $$p$$ is logically necessary if and only if a sentence expressing $$p$$ is logically true. That is to say, the modal concept of the logical necessity of a proposition and the semantical concept of the logical truth or analyticity of a sentence correspond to each other. (1946: 34)\n\nCarnap introduces the apparatus of state-descriptions to define the formal semantic notion of L-truth. This formal notion is then to be used to provide a formal semantics for S5.\n\nA state-description for a language L is a class of sentences of L such that, for every atomic sentence $$p$$ of L, either $$p$$ or $$\\neg p$$, but not both, is contained in the class. An atomic sentence holds in a state-description R if and only if it belongs to R. A sentence $$\\neg A$$ (where A need not be atomic) holds in R if and only if A does not hold in R; $$(A\\wedge B)$$ holds in R if and only if both A and B hold in R, and so on for the other connectives in the usual inductive way; $$(\\forall x)Fx$$ holds in R if and only if all the substitution instances of $$Fx$$ hold in R. The range of a sentence is the class of state-descriptions in which it holds. Carnap’s notion of validity or L-truth is a maximal notion, i.e., Carnap defines a sentence to be valid or L-true if and only if it holds in all state-descriptions. 
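For the propositional case, Carnap’s apparatus can be sketched in a few lines of Python (a minimal illustration over a toy language with two atoms; the encoding is invented, not Carnap’s own formalism): a state-description is identified with the set of atomic sentences it contains, holding-in-a-state-description is defined inductively, the range of a sentence is the class of state-descriptions in which it holds, and L-truth is holding in all state-descriptions.

```python
from itertools import chain, combinations

atoms = ['p', 'q']

# A state-description is identified with the set of atoms it contains:
# for each atom, either it or its negation holds, never both.
state_descriptions = [set(c) for c in chain.from_iterable(
    combinations(atoms, r) for r in range(len(atoms) + 1))]

def holds(A, R):
    """A formula is an atom, or ('not', A), ('and', A, B), ('or', A, B)."""
    if isinstance(A, str):
        return A in R
    op = A[0]
    if op == 'not':
        return not holds(A[1], R)
    if op == 'and':
        return holds(A[1], R) and holds(A[2], R)
    if op == 'or':
        return holds(A[1], R) or holds(A[2], R)

def rng(A):
    """The range of A: the class of state-descriptions in which it holds."""
    return [R for R in state_descriptions if holds(A, R)]

def L_true(A):
    """L-truth: holding in all state-descriptions."""
    return len(rng(A)) == len(state_descriptions)

print(L_true(('or', 'p', ('not', 'p'))))   # True: holds in every state-description
print(L_true('p'))                         # False: 'p' holds only in some
```

Note that the definitions use only holding-in-a-state-description, never truth, exactly as in the paragraph above.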
In later work Carnap adopts models in place of state-descriptions. Models are assignments of values to the primitive non-logical constants of the language. In Carnap’s case predicate constants are the only primitive constants to which the models assign values, since individual constants are given a fixed pre-model interpretation and value assignments to variables are done independently of the models (1963a).\n\nIt is important to notice that the definition of L-truth does not employ the notion of truth, but rather only that of holding-in-a-state-description. Truth is introduced later as what holds in the real state description. To be an adequate formal representation of analyticity, L-truth must respect the basic idea behind analyticity: truth in virtue of meaning alone. In fact, the L-truths of a system S are such that the semantic rules of S suffice to establish their truth. Informally, state-descriptions represent something like Leibnizian possible worlds or Wittgensteinian possible states of affairs and the range of state-descriptions for a certain language is supposed to exhaust the range of alternative possibilities describable in that language.\n\nConcerning modal sentences, Carnap adopts the following conventions (we use $$\Box$$ in place of Carnap’s operator N for logical necessity). Let S be a system:\n\n1. A sentence $$\Box A$$ is true in S if and only if $$A$$ is L-true in S (so a sentence $$\Box A$$ is true in S if and only if $$A$$ holds in all the state descriptions of S);\n2. A sentence $$\Box A$$ is L-true in S if and only if $$\Box A$$ is true in S (so all state-descriptions agree in their evaluation of modal sentences).\n\nFrom which it follows that:\n\n3.
$$\\Box A$$ is L-true in S if and only if $$A$$ is L-true in S.\n\nCarnap’s conventions hold also if we substitute “truth in a state description of S” for “truth in S”.\n\nCarnap assumes a fixed domain of quantification for his quantified system, the functional calculus with identity FC, and consequently for the modal functional calculus with identity MFC, a quantified form of S5. The language of FC contains denumerably many individual constants, the universe of discourse contains denumerably many individuals, each constant is assigned an individual of the domain, and no two constants are assigned the same individual. This makes sentences like $$a = a$$ L-true, and sentences like $$a = b$$ L-false (1946: 49). Concerning MFC, the Barcan formula and its converse are both L-true, that is,\n\n$\\mvDash(\\forall x)\\Box Fx\\leftrightarrow \\Box(\\forall x)Fx.$\n\nThis result is guaranteed by the assumption of a fixed domain of quantification. Carnap proves that MFC is sound, that is, all its theorems are L-true, and raises the question of completeness for both FC and MFC. Gödel proved completeness for the first order predicate calculus with identity, but the notion of validity employed was truth in every non-empty domain of quantification, including finite domains. Carnap instead adopts one unique denumerable domain of quantification. The adoption of a fixed denumerable domain of individuals generates some additional validities already at the pre-modal level which jeopardize completeness, for example “There are at least two individuals”, $$(\\exists x)(\\exists y)(x\\ne y)$$, turns out to be valid (1946: 53).\n\nA consequence of the definitions of state-descriptions for a language and L-truth is that each atomic sentence and its negation turn out to be true at some, but not all, state-descriptions. Hence, if $$p$$ is atomic both $$\\Diamond p$$ and $$\\Diamond \\neg p$$ are L-true. 
Hence, Lewis’s rule of Uniform Substitution fails (if $$p\wedge \neg p$$ is substituted for $$p$$ in $$\Diamond p$$ we derive $$\Diamond(p\wedge \neg p)$$, which is L-false, not L-true). This is noticed by Makinson (1966a), who argues that what must be done is to reinstate substitutivity and revise Carnap’s naïve notion of validity (as logical necessity) in favor of a schematic Quinean notion (“A logical truth … is definable as a sentence from which we get only truths when we substitute sentences for its simple sentences” Quine 1970: 50) that will not make sentences like $$\Diamond p$$ valid. Nonetheless, Carnap proves the soundness and completeness of propositional S5, which he calls “MPC” for modal propositional calculus, following Wajsberg. The proof, however, effectively employs a schematic notion of validity.\n\nIt has been proved that Carnap’s notion of maximal validity makes completeness impossible for quantified S5, i.e., that there are L-truths that are not theorems of Carnap’s MFC. Let $$A$$ be a non-modal sentence of MFC. By convention (1), $$\Box A$$ is true in MFC if and only if $$A$$ is L-true in MFC. But $$A$$ is also a sentence of FC, thus if L-true in MFC it is also L-true in FC, since the state descriptions (models) of modal functional logic are the same as those of functional logic (1946: 54). This means that the state descriptions hold the triple role of (i) first-order models of FC thereby defining first-order validity, (ii) worlds for MFC thereby defining truth for $$\Box A$$ sentences of MFC, and (iii) models of MFC thereby defining validity for MFC. The core of the incompleteness argument consists in the fact that the non-validity of a first-order sentence $$A$$ can be represented in the modal language, as $$\neg \Box A$$, but all models agree on the valuation of modal sentences, making $$\neg \Box A$$ valid.
Roughly, and setting aside complications created by the fact that Carnap’s semantics has only denumerable domains, if $$A$$ is a first-order non-valid sentence of FC, $$A$$ is true in some but not all the models or state-descriptions. Given Carnap’s conventions, it follows that $$\\neg \\Box A$$ is true in MFC. But then $$\\neg \\Box A$$ is L-true in MFC, i.e., in MFC $$\\mvDash \\neg \\Box A$$. Given that the non-valid first-order sentences are not recursively enumerable, neither are the validities for the modal system MFC. But the class of theorems of MFC is recursively enumerable. Hence, MFC is incomplete vis-à-vis Carnap’s maximal validity. Cocchiarella (1975b) attributes the result to Richard Montague and Donald Kalish. See also Lindström 2001: 209 and Kaplan 1986: 275–276.\n\n### 3.2 Kripke’s Possible Worlds Semantics\n\nCarnap’s semantics is indeed a precursor of Possible Worlds Semantics (PWS). Yet some crucial ingredients are still missing. First, the maximal notion of validity must be replaced by a new universal notion. Second, state-descriptions must make space for possible worlds understood as indices or points of evaluation. Last, a relation of accessibility between worlds needs to be introduced. Though Kripke is by no means the only logician in the fifties and early sixties to work on these ideas, it is in Kripke’s version of PWS that all these innovations are present. Kanger (1957), Montague (1960, but originally presented in 1955), Hintikka (1961), and Prior (1957) were all thinking of a relation between worlds, and Hintikka (1961) like Kripke (1959a) adopted a new notion of validity that required truth in all arbitrary sets of worlds. But Kripke was the only one to characterize the worlds as simple points of evaluation (in 1963a). 
Other logicians were still thinking of the worlds fundamentally as models of first-order logic, though perhaps Prior in his development of temporal logic was also moving towards a more abstract characterization of instants of time. Kripke’s more abstract characterization of the worlds is crucial in the provision of a link between the model theoretic semantics and the algebra of modal logic. Kripke saw very clearly this connection between the algebra and the semantics, and this made it possible for him to obtain model theoretic completeness and decidability results for various modal systems in a systematic way. Goldblatt (2003: section 4.8) argues convincingly that Kripke’s adoption of points of evaluation in the model structures is a particularly crucial innovation. Such a generalization opens the door to different future developments of the model theory and makes it possible to provide model theories for intensional logics in general. For these reasons, in this entry we devote more attention to Kripke’s version of PWS. For a more comprehensive treatment of the initial development of PWS, including the late fifties work on S5 of the French logician Bayart, the reader is referred to Goldblatt 2003. On the differences between Kanger’s semantics and standard PWS semantics, see Lindström 1996 and 1998.\n\nKripke’s 1959a “A Completeness Theorem in Modal Logic” contains a model theoretic completeness result for a quantified version of S5 with identity. In Kripke’s semantic treatment of quantified S5, which he calls S5*$$^=$$, an assignment of values to a formula $$A$$ in a domain of individuals $$D$$ assigns a member of $$D$$ to each free individual variable of $$A$$, a truth value $$T$$ or $$F$$ to each propositional variable of $$A$$, and a set of ordered $$n$$-tuples of members of $$D$$ to each $$n$$-place predicate variable of $$A$$ (the language for the system contains no non-logical constants). 
Kripke defines a model over a non-empty domain $$D$$ of individuals as an ordered pair $$(G, K)$$, such that $$G\in K$$, $$K$$ is an arbitrary subset of assignments of values to the formulas of S5*$$^=$$, and all $$H\in K$$ agree on the assignments to individual variables. For each $$H\in K$$, the value that $$H$$ assigns to a formula $$B$$ is defined inductively. Propositional variables are assigned $$T$$ or $$F$$ by hypothesis. If $$B$$ is $$P(x_1, \ldots, x_n)$$, $$B$$ is assigned $$T$$ if and only if the $$n$$-tuple of elements assigned to $$x_1$$, …, $$x_n$$ belongs to the set of $$n$$-tuples of individuals that $$H$$ assigns to $$P$$. $$H$$ assigns $$T$$ to $$\neg B$$ if and only if it assigns $$F$$ to $$B$$. $$H$$ assigns $$T$$ to $$B\wedge C$$ if and only if it assigns $$T$$ to $$B$$ and to $$C$$. If $$B$$ is $$x = y$$ it is assigned $$T$$ if and only if $$x$$ and $$y$$ are assigned the same value in $$D$$. If $$B$$ is $$(\forall x)Fx$$ it is assigned $$T$$ if and only if $$Fx$$ is assigned $$T$$ for every assignment to $$x$$. $$\Box B$$ is assigned $$T$$ if and only if $$B$$ is assigned $$T$$ by every $$H\in K$$.\n\nThe most important thing to be noticed in the 1959 model theory is the definition of validity. A formula $$A$$ is said to be valid in a model $$(G, K)$$ in $$D$$ if and only if it is assigned $$T$$ in $$G$$, to be valid in a domain $$D$$ if and only if it is valid in every model in $$D$$, and to be universally valid if and only if it is valid in every non-empty domain. Kripke says:\n\nIn trying to construct a definition of universal logical validity, it seems plausible to assume not only that the universe of discourse may contain an arbitrary number of elements and that predicates may be assigned any given interpretations in the actual world, but also that any combination of possible worlds may be associated with the real world with respect to some group of predicates.
In other words, it is plausible to assume that no further restrictions need be placed on $$D, G$$, and $$K$$, except the standard one that $$D$$ be non-empty. This assumption leads directly to our definition of universal validity. (1959a: 3)\n\nThis new universal notion of validity is much more general than Carnap’s maximal validity. The elements $$H$$ of $$K$$ still correspond to first-order models, like Carnap’s state-descriptions, and in each Kripke model the elements $$H$$ of $$K$$ are assigned the same domain $$D$$ of individuals and the individual variables have a fixed cross-model assignment. So far the only significant divergence from Carnap is that different Kripke models can have domains of different cardinality. This by itself is sufficient to reintroduce completeness for the non-modal part of the system. But the most significant development, and the one that makes it possible to prove completeness for the modal system, is the definition of validity not as truth in all worlds of a maximal structure of worlds, but as truth across all the subsets of the maximal structure. The consideration of arbitrary subsets of possible worlds makes it possible for Kripke’s model theory to disconnect validity from necessity. While necessities are relative to a model, hence to a set of worlds, validities must hold across all such sets. This permits the reintroduction of the rule of Uniform Substitution. To see this intuitively in a simple case, consider an atomic sentence $$p$$. The classical truth-table for $$p$$ contains two rows, one where $$p$$ is true and one where $$p$$ is false. Each row is like a possible world, or an element $$H$$ of $$K$$. If we only consider this complete truth table, we are only considering maximal models that contain two worlds (it makes no difference which world is actual). By the definition of truth for a formula $$\\Box B$$, $$\\Box p$$ is false in all the worlds of the maximal model, and $$\\Diamond p$$ is true in all of them. 
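The S5 clauses for $$\\Box$$ and $$\\Diamond$$ at work here are easy to check mechanically. The following sketch is our own illustration, not Kripke’s notation: a world is represented as a dictionary of truth-values, and a model is simply a list of worlds (the choice of actual world plays no role in the modal clauses, which quantify over all of $$K$$).

```python
# A world is an assignment of truth-values to the propositional variables;
# an S5 model is a set of worlds K.

def box(p, K):
    """[]p is true iff p is assigned T at every world H in K."""
    return all(H[p] for H in K)

def diamond(p, K):
    """<>p is true iff p is assigned T at some world H in K."""
    return any(H[p] for H in K)

# The maximal model for a single variable p: both rows of the truth-table.
K_max = [{"p": True}, {"p": False}]
print(box("p", K_max))       # False: p fails at the second world
print(diamond("p", K_max))   # True: p holds at the first world

# Kripke's definition of validity also admits the one-world models:
print(box("p", [{"p": True}]))       # True
print(diamond("p", [{"p": False}]))  # False
```

On the maximal model $$\\Diamond p$$ comes out true at every world, while the one-world model where $$p$$ is false refutes it; this is precisely the role the non-maximal models play in the argument that follows.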
If validity is truth in all worlds of this maximal model, like for Carnap, it follows that $$\\mvDash \\Diamond p$$, but in S5 $$\\nmvdash\\Diamond p$$. If instead we define validity as Kripke does, we have to consider also the non-maximal models that contain only one world, that is, incomplete truth-tables that cancel some rows. Hence, there are two more models to be taken into consideration: one which contains only one world $$H=G$$ where $$p$$ is true, hence so is $$\\Box p$$, and one which contains only one world $$H=G$$ where $$p$$ is false and so is $$\\Box p$$ as well as $$\\Diamond p$$. Thanks to this last model, $$\\nmvDash \\Diamond p$$. Notice that the crucial innovation is the definition of validity as truth across all subsets of worlds, not just the maximal subset. The additional fact that validity in a model is defined as truth at the actual world of the model—as opposed to truth at all worlds of the model—though revealing of the fact that Kripke did not link the notion of necessity to the notion of validity, is irrelevant to this technical result.\n\nKripke’s completeness proof makes use of Beth’s method of semantic tableaux. A semantic tableau is used to test whether a formula $$B$$ is a semantic consequence of some formulas $$A_1, \\ldots, A_n$$. The tableau assumes that the formulas $$A_1, \\ldots, A_n$$ are true and $$B$$ is false and is built according to rules that follow the definitions of the logical connectives. For example, if a formula $$\\neg A$$ is on the left column of the tableau (where true formulas are listed), $$A$$ will be put on the right column (where false formulas are listed). To deal with modal formulas, sets of tableaux must be considered, since if $$\\Box A$$ is on the right column of a tableau, a new auxiliary tableau must be introduced with $$A$$ on its right column. A main tableau and its auxiliary tableaux form a set of tableaux. 
If a formula $$A\\wedge B$$ is on the right column of the main tableau, the set of tableaux splits into two new sets of tableaux: one whose main tableau lists $$A$$ on its right column and one whose main tableau lists $$B$$ on the right column. So we have to consider alternative sets of tableaux. A semantic tableau is closed if and only if all its alternative sets are closed. A set of tableaux is closed if it contains a tableau (main or auxiliary) that reaches a contradiction in the form of (i) one and the same formula $$A$$ appearing in both its columns or (ii) an identity formula of the form $$a = a$$ in its right side (this is an oversimplification of the definition of a closed tableau, but not harmful for our purposes). Oversimplifying once more, the structure of Kripke’s completeness proof consists of proving that a semantic tableau used to test whether a formula $$B$$ is a semantic consequence of formulas $$A_1, \\ldots, A_n$$ is closed if and only if (i) in S5*$$^=$$ $$A_1, \\ldots, A_n\\vdash B$$ and (ii) $$A_1, \\ldots, A_n\\vDash B$$. This last result is achieved by showing how to build models from semantic tableaux. 
As a consequence of (i) and (ii) we have soundness and completeness for S5*$$^=$$, that is: $$A_1, \\ldots, A_n\\vdash B$$ if and only if $$A_1, \\ldots, A_n\\vDash B$$.\n\nThe 1959 paper also contains a proof of the modal counterpart of the Löwenheim-Skolem theorem for first-order logic, according to which if a formula is satisfiable in a non-empty domain then it is satisfiable, and hence valid (true in $$G$$), in a model $$(G, K)$$ in a domain $$D$$, where both $$K$$ and $$D$$ are either finite or denumerable; and if a formula is valid in every finite or denumerable domain it is valid in every domain.\n\nKripke’s 1962 “The Undecidability of Monadic Modal Quantification Theory” develops a parallel between first-order logic with one dyadic predicate and first-order monadic modal logic with just two predicate letters, to prove that this fragment of first-order modal logic is already undecidable.\n\nOf great importance is the paper “Semantical Analysis of Modal Logic I” (Kripke 1963a) where normal systems are treated. It is here that Kripke fully develops the analogy with the algebraic results of Jónsson and Tarski and proves completeness and decidability for propositional systems T, S4, S5, and B (the Brouwersche system), which is here introduced. Kripke claims to have derived on his own the main theorem of “Boolean Algebras with Operators” by an algebraic analog of his own semantical methods (69, fn. 2). It is in this paper that two crucial generalizations of the model theory are introduced. The first is the new understanding of the elements $$H$$ of $$K$$ as simple indices, not assignments of values. Once this change is introduced, the models have to be supplemented by an auxiliary function $$\\Phi$$ needed to assign values to the propositional variables relative to worlds. 
Hence, while in the 1959 model theory\n\nthere can be no two worlds in which the same truth-value is assigned to each atomic formula [which] turns out to be convenient perhaps for S5, but it is rather inconvenient when we treat normal MPC’s in general (1963a: 69)\n\nwe can now have world duplicates. What is most important about the detachment of the elements of $$K$$ from the evaluation function is that it opens the door to the general consideration of modal frames, sets of worlds plus a binary relation between them, and the correspondence of such frames to modal systems. So, the second new element of the paper, the introduction of a relation $$R$$ between the elements of $$K$$, naturally accompanies the first. Let it be emphasized once again that the idea of a relation between worlds is not new to Kripke. For example, it is already present as an alternativeness relation in Montague 1960, Hintikka 1961, and Prior 1962, where the idea is attributed to Peter Geach.\n\nIn 1963a Kripke “asks various questions concerning the relation $$R$$” (1963a: 70). First, he shows that every satisfiable formula has a connected model, i.e., a model based on a model structure $$(G, K, R)$$ where for all $$H\\in K$$, $$G\\mathrel{R*}H$$, where $$R*$$ is the ancestral relation corresponding to $$R$$. Hence, only connected models need to be considered. Then, Kripke shows the nowadays well-known results that axiom 4 corresponds to the transitivity of the relation $$R$$, that axiom $$B$$ corresponds to symmetry, and that the characteristic axiom of S5 added to system T corresponds to $$R$$ being an equivalence relation. Using the method of tableaux, completeness for the modal propositional systems T, S4, S5, and B vis-à-vis the appropriate class of models (reflexive structures for T) is proved. The decidability of these systems, including the more complex case of S4, is also proved. 
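The correspondence between axiom 4 and transitivity can be verified by brute force on small finite frames. The sketch below is our own illustration (worlds are numbered $$0,\\ldots,n-1$$ and the accessibility relation is a set of pairs); it checks whether $$\\Box p\\rightarrow \\Box\\Box p$$ holds at every world under every valuation of $$p$$:

```python
from itertools import product

def box_at(w, R, val):
    """[]p is true at w iff p is true at every world v accessible from w."""
    return all(val[v] for v in range(len(val)) if (w, v) in R)

def axiom_4_holds(R, n):
    """Check []p -> [][]p at every world, under every valuation of p on n worlds."""
    for bits in product([False, True], repeat=n):
        val = list(bits)
        box_p = [box_at(w, R, val) for w in range(n)]
        box_box_p = [box_at(w, R, box_p) for w in range(n)]
        if any(box_p[w] and not box_box_p[w] for w in range(n)):
            return False
    return True

# A transitive frame validates axiom 4 ...
R_transitive = {(0, 0), (0, 1), (0, 2), (1, 1), (1, 2), (2, 2)}
print(axiom_4_holds(R_transitive, 3))   # True

# ... while a frame that breaks transitivity (0 R 1 and 1 R 2, but not 0 R 2)
# admits a refuting valuation.
R_broken = {(0, 1), (1, 2)}
print(axiom_4_holds(R_broken, 3))       # False
```

An exhaustive check of this kind only exercises particular finite frames, of course; the general correspondence results are proved, not enumerated, in Kripke 1963a.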
(For a more detailed treatment of frames, consult the SEP entry on modal logic.)\n\nIn the 1965 paper “Semantical Analysis of Modal Logic II”, Kripke extends the model theory to treat non-normal modal systems, including Lewis’s S2 and S3. Though these systems are considered somewhat unnatural, their model theory is deemed elegant. Completeness and decidability results are proved vis-à-vis the appropriate class of structures, including the completeness of S2 and S3, and the decidability of S3. To achieve these results, the model theory is extended by the introduction of a new element $$N\\subseteq K$$ in the model structures $$(G, K, R, N)$$. $$N$$ is the subset of normal worlds, i.e., worlds $$H$$ such that $$H\\mathrel{R}H$$. Another interesting aspect of the non-normal systems is that in the model theoretic results that pertain to them, $$G$$ (the actual world) plays an essential role; in particular, in the S2 and S3 model structures the actual world has to be normal. Instead, the rule of necessitation that applies to normal systems makes the choice of $$G$$ model theoretically irrelevant.\n\nThe great success of the Kripkean model theory notwithstanding, it is worth emphasizing that not all modal logics are complete. For incompleteness results see Makinson 1969, for a system weaker than S4; and Fine 1974, S. Thomason 1974, Goldblatt 1975, and van Benthem 1978, for systems between S4 and S5. Some modal formulas impose conditions on frames that cannot be expressed in a first-order language; thus even propositional modal logic is fundamentally second-order in nature. Insofar as the notion of validity on a frame abstracts from the interpretation function, it implicitly involves a higher-order quantification over propositions. 
On the correspondence between frame validity and second-order logic and on the model-theoretic criteria that distinguish the modal sentences that are first-order expressible from those that are essentially second-order, see Blackburn and van Benthem’s “Modal Logic: A Semantic Perspective” (2007a).\n\nIn 1963b, “Semantical Considerations on Modal Logic”, Kripke introduces a new generalization to the models of quantified modal systems. In 1959 a model was defined in a domain $$D$$. As a result, all worlds in one model had the same cardinality. In 1963b models are not given in a domain, hence worlds in the same model can be assigned different domains by a function $$\\Psi$$ that assigns domains to the elements $$H$$ of $$K$$. Given the variability of domains across worlds, Kripke can now construct counter-examples both to the Barcan Formula\n\n$(\\forall x)\\Box Fx\\rightarrow \\Box(\\forall x)Fx$\n\nand its converse\n\n$\\Box(\\forall x)Fx\\rightarrow(\\forall x)\\Box Fx.$\n\nThe Barcan formula can be falsified in structures with growing domains. Consider, for example, a model with two worlds: $$G$$ and one other possible world $$H$$ extending it. The domain of $$G$$ is $$\\{a\\}$$ and $$Fa$$ is true in $$G$$. The domain of $$H$$ is the set $$\\{a, b\\}$$ and $$Fa$$, but not $$Fb$$, is true in $$H$$. In this model $$(\\forall x)\\Box Fx$$ but not $$\\Box(\\forall x)Fx$$ is true in $$G$$. To disprove the converse of the Barcan formula we need models with decreasing domains. Consider, for example, a model with two worlds $$G$$ and $$H$$, where the domain of $$G$$ is $$\\{a, b\\}$$ and the domain of $$H$$ is $$\\{a\\}$$, with $$Fa$$ and $$Fb$$ true in $$G$$, $$Fa$$ true in $$H$$, but $$Fb$$ false in $$H$$. This model requires that we assign a truth-value to the formula $$Fb$$ in the world $$H$$ where the individual $$b$$ does not exist (is not in the domain of $$H$$). 
Kripke points out that from a model theoretical point of view this is just a technical choice.\n\nKripke reconstructs a proof of the converse Barcan formula in quantified T and shows that the proof goes through only by allowing the necessitation of a sentence containing a free variable. But if free variables are instead to be considered as universally bound, this step is illicit. Necessitating directly an open formula, without first closing it, amounts to assuming what is to be proved. Prior 1956 contains a proof of the Barcan formula\n\n$\\Diamond(\\exists x)Fx\\rightarrow(\\exists x)\\Diamond Fx.$\n\nKripke does not discuss the details of Prior’s proof. Prior’s proof for the Barcan formula adopts Łukasiewicz’s rules for the introduction of the existential quantifier. The second of these rules states that if $$\\mvdash A\\rightarrow B$$ then $$\\mvdash A\\rightarrow(\\exists x)B$$. Prior uses the rule to derive\n\n$\\mvdash \\Diamond Fx\\rightarrow(\\exists x)\\Diamond Fx$\n\nfrom\n\n$\\mvdash \\Diamond Fx\\rightarrow \\Diamond Fx.$\n\nThis seems to us to be the ‘illegitimate’ step in the proof, since\n\n$\\Diamond Fx\\rightarrow(\\exists x)\\Diamond Fx$\n\ndoes not hold in a model with two worlds $$G$$ and $$H$$, where the domain of $$G$$ is $$\\{a\\}$$ and the domain of $$H$$ is $$\\{a, b\\}$$, and where $$Fa$$ is false in both $$G$$ and $$H$$, but $$Fb$$ is true in $$H$$. In this model $$\\Diamond Fx$$ is true but $$(\\exists x)\\Diamond Fx$$ is false in $$G$$. In this counter-model $$\\Diamond Fx$$ is made true in $$G$$ by the individual $$b$$ that is not in the domain of $$G$$. In general, the rule that if $$\\mvdash A\\rightarrow B$$ then $$\\mvdash A\\rightarrow(\\exists x)B$$ does not preserve validity if we allow that $$Fx$$ may be made true at a world by an individual that does not exist there. 
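The countermodel just described can be checked directly. In the sketch below (our own encoding, not Kripke’s notation: a world is a pair of a domain and the extension of $$F$$, and the assignment to $$x$$ is held fixed across worlds), $$\\Diamond Fx$$ comes out true at $$G$$ under the assignment of $$b$$ to $$x$$, while $$(\\exists x)\\Diamond Fx$$, whose quantifier ranges only over the domain of $$G$$, comes out false:

```python
# Worlds are pairs (domain, extension of F); K is the set of worlds.
G = ({"a"}, set())           # domain {a}; F holds of nothing at G
H = ({"a", "b"}, {"b"})      # domain {a, b}; F holds of b at H
K = [G, H]

def diamond_F(d):
    """<>Fx under the assignment of individual d to x: F holds of d at some world."""
    return any(d in extension for (_, extension) in K)

def exists_diamond_F(world):
    """(Ex)<>Fx at a world: the quantifier ranges over that world's domain only."""
    domain, _ = world
    return any(diamond_F(d) for d in domain)

print(diamond_F("b"))        # True: Fb is true at H
print(exists_diamond_F(G))   # False: only a exists at G, and <>Fa fails everywhere
```

The antecedent is made true at $$G$$ by an individual outside $$G$$’s domain, which is exactly why the quantifier-introduction rule fails to preserve validity here.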
We conclude that the rule is to be rejected to preserve the soundness of S5 relative to this model theoretic assumption.\n\n## Bibliography\n\nPlease note that the distinction in the bibliography between introductory texts, primary, and secondary literature is partially artificial.\n\n### Introductory Texts\n\n• Blackburn, Patrick, Maarten de Rijke, and Yde Venema, 2001, Modal Logic, Cambridge: Cambridge University Press. doi:10.1017/CBO9781107050884\n• Chellas, Brian F., 1980, Modal Logic: An Introduction, Cambridge: Cambridge University Press.\n• Fitting, M. and Richard L. Mendelsohn, 1998, First-Order Modal Logic, Dordrecht: Kluwer Academic Publishers.\n• Garson, James W., 2013, Modal Logic for Philosophers, Cambridge: Cambridge University Press.\n• Hughes, G.E. and M.J. Cresswell, 1968, An Introduction to Modal Logic, London: Methuen.\n• –––, 1984, A Companion to Modal Logic, London: Methuen.\n• –––, 1996, A New Introduction to Modal Logic, London: Routledge.\n\n### Primary Literature\n\n• Alban, M.J., 1943, “Independence of the Primitive Symbols of Lewis’s Calculi of Propositions”, Journal of Symbolic Logic, 8(1): 25–26. doi:10.2307/2267978\n• Anderson, Alan Ross, 1957, “Independent Axiom Schemata for Von Wright’s M”, Journal of Symbolic Logic, 22(3): 241–244. doi:10.2307/2963591\n• Barcan (Marcus), Ruth C., 1946a, “A Functional Calculus of First Order Based on Strict Implication”, Journal of Symbolic Logic, 11(1): 1–16. doi:10.2307/2269159\n• –––, 1946b, “The Deduction Theorem in a Functional Calculus of First Order Based on Strict Implication”, Journal of Symbolic Logic, 11(4): 115–118. doi:10.2307/2268309\n• –––, 1947, “The Identity of Individuals in a Strict Functional Calculus of Second Order”, Journal of Symbolic Logic, 12(1): 12–15. 
doi:10.2307/2267171\n• Bayart, Arnould, 1958, “Correction de la Logique Modale du Premier et du Second Ordre S5”, Logique et Analyse, 1: 28–45.\n• –––, 1959, “Quasi-Adéquation de la Logique Modale du Second Ordre S5 et Adéquation de la Logique Modale du Premier Ordre S5”, Logique et Analyse, 2: 99–121.\n• Becker, Oskar, 1930, “Zur Logik der Modalitäten”, Jahrbuch für Philosophie und Phänomenologische Forschung, 11: 497–548.\n• Bennett, Jonathan, 1954, “Meaning and Implication”, Mind, 63(252): 451–463.\n• Bernays, Paul, 1926, “Axiomatische Untersuchung des Aussagenkalküls der Principia Mathematica”, Mathematische Zeitschrift, 25: 305–320.\n• –––, 1948, “Review of Rudolf Carnap’s ‘Modalities and Quantification’ (1946)”, Journal of Symbolic Logic, 13(4): 218–219. doi:10.2307/2267149\n• –––, 1950, “Review of Rudolf Carnap’s Meaning and Necessity”, Journal of Symbolic Logic, 14(4): 237–241. doi:10.2307/2269233\n• Bull, R.A., 1964, “A Note on the Modal Calculi S4.2 and S4.3”, Zeitschrift für Mathematische Logik und Grundlagen der Mathematik, 10(4): 53–55. doi:10.1002/malq.19640100403\n• –––, 1965, “An Algebraic Study of Diodorean Modal Systems”, Journal of Symbolic Logic, 30(1): 58–64. doi:10.2307/2270582\n• –––, 1966, “That All Normal Extensions of S4.3 Have the Finite Model Property”, Zeitschrift für Mathematische Logik und Grundlagen der Mathematik, 12: 341–344. doi:10.1002/malq.19660120129\n• –––, 1968, “An Algebraic Study of Tense Logics with Linear Time”, Journal of Symbolic Logic, 33(1): 27–38. doi:10.2307/2270049\n• Carnap, Rudolf, 1946, “Modalities and Quantification”, Journal of Symbolic Logic, 11(2): 33–64. 
doi:10.2307/2268610\n• –––, 1947, Meaning and Necessity, Chicago: University of Chicago Press, 2nd edition with supplements, 1956.\n• –––, 1963a, “My Conception of the Logic of Modalities”, in Schilpp 1963: 889–900.\n• –––, 1963b, “My Conception of Semantics”, in Schilpp 1963: 900–905.\n• Dugundji, James, 1940, “Note on a Property of Matrices for Lewis and Langford’s Calculi of Propositions”, Journal of Symbolic Logic, 5(4): 150–151. doi:10.2307/2268175\n• Dummett, M.A.E. and E.J. Lemmon, 1959, “Modal Logics between S4 and S5”, Zeitschrift für Mathematische Logik und Grundlagen der Mathematik, 5(5): 250–264. doi:10.1002/malq.19590051405\n• Feys, Robert, 1937, “Les Logiques Nouvelles des Modalités”, Revue Néoscolastique de Philosophie, 40(56): 517–553.\n• –––, 1963, “Carnap on Modalities”, in Schilpp 1963: 283–297.\n• –––, 1965, Modal Logics, in Collection de Logique Mathématique (Volume 4), J. Dopp (ed.), Louvain: E. Nauwelaerts.\n• Fine, Kit, 1974, “An Incomplete Logic Containing S4”, Theoria, 40(1): 23–29. doi:10.1111/j.1755-2567.1974.tb00076.x\n• Gödel, K., 1933, “Eine Interpretation des Intuitionistischen Aussagenkalküls”, Ergebnisse eines Mathematischen Kolloquiums, 4: 39–40. English translation “An Interpretation of the Intuitionistic Propositional Calculus”, with an introductory note by A.S. Troelstra, in Kurt Gödel. Collected Works, Vol. 1: Publications 1929–1936, S. Feferman, J.W. Dawson, S.C. Kleene, G.H. Moore, R.M. Solovay, and J. van Heijenoort (eds.), Oxford: Oxford University Press, 1986, pp. 296–303.\n• Goldblatt, R.I., 1975, “First-order Definability in Modal Logic”, Journal of Symbolic Logic, 40(1): 35–40. doi:10.2307/2272267\n• Halldén, Sören, 1948, “A Note Concerning the Paradoxes of Strict Implication and Lewis’s System S1”, Journal of Symbolic Logic, 13(1): 138–139. doi:10.2307/2267814\n• –––, 1950, “Results Concerning the Decision Problem of Lewis’s Calculi S3 and S6”, Journal of Symbolic Logic, 14(4): 230–236. 
doi:10.2307/2269232\n• –––, 1951, “On the Semantic Non-Completeness of Certain Lewis Calculi”, Journal of Symbolic Logic, 16(2): 127–129. doi:10.2307/2266686\n• Hintikka, Jaakko, 1961, “Modalities and Quantification”, Theoria, 27(3): 119–28. Expanded version in Hintikka 1969: 57–70. doi:10.1111/j.1755-2567.1961.tb00020.x\n• –––, 1963, “The Modes of Modality”, Acta Philosophica Fennica, 16: 65–81. Reprinted in Hintikka 1969: 71–86.\n• –––, 1969, Models for Modalities, Dordrecht: D. Reidel.\n• Jónsson, Bjarni and Alfred Tarski, 1951, “Boolean Algebras with Operators. Part I”, American Journal of Mathematics, 73(4): 891–939. doi:10.2307/2372123\n• –––, 1952, “Boolean Algebras with Operators. Part II”, American Journal of Mathematics, 74(1): 127–162. doi:10.2307/2372074\n• Kanger, Stig, 1957, Provability in Logic, (Acta Universitatis Stockholmiensis, Stockholm Studies in Philosophy, Vol. 1), Stockholm: Almqvist and Wiksell.\n• Kripke, Saul A., 1959a, “A Completeness Theorem in Modal Logic”, Journal of Symbolic Logic, 24(1): 1–14. doi:10.2307/2964568\n• –––, 1959b, “Semantical Analysis of Modal Logic” (abstract from the Twenty-Fourth Annual Meeting of the Association for Symbolic Logic), Journal of Symbolic Logic, 24(4): 323–324. doi:10.1017/S0022481200123321\n• –––, 1962, “The Undecidability of Monadic Modal Quantification Theory”, Zeitschrift für Mathematische Logik und Grundlagen der Mathematik, 8(2): 113–116. doi:10.1002/malq.19620080204\n• –––, 1963a, “Semantical Analysis of Modal Logic I. Normal Modal Propositional Calculi”, Zeitschrift für Mathematische Logik und Grundlagen der Mathematik, 9(5–6): 67–96. doi:10.1002/malq.19630090502\n• –––, 1963b, “Semantical Considerations on Modal Logic”, Acta Philosophica Fennica, 16: 83–94.\n• –––, 1965, “Semantical Analysis of Modal Logic II. Non-normal Modal Propositional Calculi”, in Symposium on the Theory of Models, J.W. Addison, L. Henkin, and A. Tarski (eds.), Amsterdam: North-Holland, pp. 
206–220.\n• –––, 1967a, “Review of E.J. Lemmon ‘Algebraic Semantics for Modal Logics I’ (1966a)”, Mathematical Reviews, 34: 1021–1022.\n• –––, 1967b, “Review of E.J. Lemmon ‘Algebraic Semantics for Modal Logics II’ (1966b)”, Mathematical Reviews, 34: 1022.\n• Lemmon, E.J., 1957, “New Foundations for Lewis Modal Systems”, Journal of Symbolic Logic, 22(2): 176–186. doi:10.2307/2964179\n• –––, 1966a, “Algebraic Semantics for Modal Logics I”, Journal of Symbolic Logic, 31(1): 46–65. doi:10.2307/2270619\n• –––, 1966b, “Algebraic Semantics for Modal Logics II”, Journal of Symbolic Logic, 31(2): 191–218. doi:10.2307/2269810\n• Lemmon, E.J. (with Dana Scott), 1977, The “Lemmon Notes”. An Introduction to Modal Logic (American Philosophical Quarterly Monograph Series, vol. 11), K. Segerberg (ed.), Oxford: Basil Blackwell.\n• Lewis, C.I., 1912, “Implication and the Algebra of Logic”, Mind, 21(84): 522–531. doi:10.1093/mind/XXI.84.522\n• –––, 1914, “The Calculus of Strict Implication”, Mind, 23(1): 240–247. doi:10.1093/mind/XXIII.1.240\n• –––, 1918, A Survey of Symbolic Logic, Berkeley: University of California Press.\n• –––, 1920, “Strict Implication—An Emendation”, Journal of Philosophy, Psychology and Scientific Methods, 17(11): 300–302. doi:10.2307/2940598\n• Lewis, C.I. and C.H. Langford, 1932, Symbolic Logic, London: Century. 2nd edition 1959, New York: Dover.\n• Łukasiewicz, Jan, 1920, “O Logice Trójwartościowej”, Ruch Filozoficzny, 5: 170–171.\n• –––, 1930, “Philosophische Bemerkungen zu Mehrwertigen Systemen des Aussagenkalküls”, Comptes Rendus des Séances de la Société des Sciences et des Lettres de Varsovie, 23: 51–77. Translated and reprinted in Łukasiewicz 1970: 153–178.\n• –––, 1970, Selected Works, L. Borkowski (ed.), Amsterdam: North-Holland.\n• Łukasiewicz, Jan and Alfred Tarski, 1931, “Investigations into the Sentential Calculus”, in Alfred Tarski, 1956, Logic, Semantics, Metamathematics, Oxford: Clarendon Press, pp. 
38–59.\n• MacColl, Hugh, 1880, “Symbolical Reasoning”, Mind, 5(17): 45–60. doi:10.1093/mind/os-V.17.45\n• –––, 1897, “Symbolical Reasoning (II)”, Mind, 6(4): 493–510. doi:10.1093/mind/VI.4.493\n• –––, 1900, “Symbolical Reasoning (III)”, Mind, 9(36): 75–84. doi:10.1093/mind/IX.36.75\n• –––, 1906, Symbolic Logic and its Applications, London: Longmans, Green, and Co.\n• Makinson, David C., 1966a, “How Meaningful are Modal Operators?”, Australasian Journal of Philosophy, 44(3): 331–337. doi:10.1080/00048406612341161\n• –––, 1966b, “On Some Completeness Theorems in Modal Logic”, Zeitschrift für Mathematische Logik und Grundlagen der Mathematik, 12: 379–384. doi:10.1002/malq.19660120131\n• –––, 1969, “A Normal Modal Calculus Between T and S4 Without the Finite Model Property”, Journal of Symbolic Logic, 34(1): 35–38. doi:10.2307/2270978\n• McKinsey, J.C.C., 1934, “A Reduction in Number of the Postulates for C.I. Lewis’ System of Strict Implication”, Bulletin of the American Mathematical Society (New Series), 40(6): 425–427. doi:10.1090/S0002-9904-1934-05881-6\n• –––, 1941, “A Solution of the Decision Problem for the Lewis Systems S2 and S4, with an Application to Topology”, Journal of Symbolic Logic, 6(4): 117–134. doi:10.2307/2267105\n• –––, 1944, “On the Number of Complete Extensions of the Lewis Systems of Sentential Calculus”, Journal of Symbolic Logic, 9(2): 42–45. doi:10.2307/2268020\n• –––, 1945, “On the Syntactical Construction of Systems of Modal Logic”, Journal of Symbolic Logic, 10(3): 83–94. doi:10.2307/2267027\n• McKinsey, J.C.C. and Alfred Tarski, 1944, “The Algebra of Topology”, Annals of Mathematics, 45(1): 141–191. doi:10.2307/1969080\n• –––, 1946, “On Closed Elements in Closure Algebras”, Annals of Mathematics, 47(1): 122–162. doi:10.2307/1969038\n• –––, 1948, “Some Theorems about the Sentential Calculi of Lewis and Heyting”, Journal of Symbolic Logic, 13(1): 1–15. 
doi:10.2307/2268135\n• Montague, Richard, 1960, “Logical Necessity, Physical Necessity, Ethics, and Quantifiers”, Inquiry, 3(1–4): 259–269. doi:10.1080/00201746008601312\n• Nelson, Everett J., 1930, “Intensional Relations”, Mind, 39(156): 440–453. doi:10.1093/mind/XXXIX.156.440\n• Parry, William Tuthill, 1934, “The Postulates for ‘Strict Implication’”, Mind, 43(169): 78–80. doi:10.1093/mind/XLIII.169.78\n• –––, 1939, “Modalities in the Survey System of Strict Implication”, Journal of Symbolic Logic, 4(4): 137–154. doi:10.2307/2268714\n• Prior, Arthur N., 1955, Formal Logic, Oxford: Clarendon Press.\n• –––, 1956, “Modality and Quantification in S5”, Journal of Symbolic Logic, 21(1): 60–62. doi:10.2307/2268488\n• –––, 1957, Time and Modality, Oxford: Clarendon Press.\n• –––, 1962, “Possible Worlds”, Philosophical Quarterly, 12(46): 36–43. doi:10.2307/2216837\n• Prior, Arthur N. and Kit Fine, 1977, Worlds, Times and Selves, Amherst, MA: University of Massachusetts Press.\n• Quine, W.V., 1947a, “The Problem of Interpreting Modal Logic”, Journal of Symbolic Logic, 12(2): 43–48. doi:10.2307/2267247\n• –––, 1947b, “Review of The Deduction Theorem in a Functional Calculus of First Order Based on Strict Implication by Ruth C. Barcan (1946b)”, Journal of Symbolic Logic, 12(3): 95–96. doi:10.2307/2267230\n• –––, 1970, Philosophy of Logic, Cambridge, MA: Harvard University Press.\n• Russell, Bertrand, 1906, “Review of Hugh MacColl Symbolic Logic and Its Applications (1906)”, Mind, 15(58): 255–260. doi:10.1093/mind/XV.58.255\n• Schilpp, Paul Arthur (ed.), 1963, The Philosophy of Rudolf Carnap (The Library of Living Philosophers: Volume 11), La Salle: Open Court.\n• Scroggs, Schiller Joe, 1951, “Extensions of the Lewis System S5”, Journal of Symbolic Logic, 16(2): 112–120. doi:10.2307/2266683\n• Segerberg, Krister, 1968, “Decidability of S4.1”, Theoria, 34(1): 7–20. 
doi:10.1111/j.1755-2567.1968.tb00335.x\n• –––, 1971, An Essay in Classical Modal Logic, 3 volumes, (Filosofiska Studier, Vol. 13), Uppsala: Uppsala Universitet.\n• Simons, Leo, 1953, “New Axiomatizations of S3 and S4”, Journal of Symbolic Logic, 18(4): 309–316. doi:10.2307/2266554\n• Sobociński, Boleslaw, 1953, “Note on a Modal System of Feys-von Wright”, Journal of Computing Systems, 1(3): 171–178.\n• –––, 1962, “A Contribution to the Axiomatization of Lewis’ System S5”, Notre Dame Journal of Formal Logic, 3(1): 51–60. doi:10.1305/ndjfl/1093957059\n• Strawson, P.F., 1948, “Necessary Propositions and Entailment-Statements”, Mind, 57(226): 184–200. doi:10.1093/mind/LVII.226.184\n• Thomason, Richmond H., 1973, “Philosophy and Formal Semantics”, in Truth, Syntax and Modality, Hugues Leblanc (ed.), Amsterdam: North-Holland, pp. 294–307.\n• Thomason, Steven K., 1973, “A New Representation of S5”, Notre Dame Journal of Formal Logic, 14(2): 281–284. doi:10.1305/ndjfl/1093890907\n• –––, 1974, “An Incompleteness Theorem in Modal Logic”, Theoria, 40(1): 30–34. doi:10.1111/j.1755-2567.1974.tb00077.x\n• van Benthem, Johan, 1978, “Two Simple Incomplete Modal Logics”, Theoria, 44: 25–37. doi:10.1111/j.1755-2567.1978.tb00830.x\n• –––, 1984, “Possible Worlds Semantics: A Research Program that Cannot Fail?”, Studia Logica, 43: 379–393.\n• von Wright, G.H., 1951, An Essay in Modal Logic (Studies in Logic and the Foundations of Mathematics: Volume V), L.E.J. Brouwer, E.W. Beth, and A. Heyting (eds.), Amsterdam: North-Holland.\n• Whitehead, Alfred North and Bertrand Russell, 1910, Principia Mathematica (Volume I), Cambridge: Cambridge University Press.\n\n### Secondary Literature\n\n• Ballarin, Roberta, 2005, “Validity and Necessity”, Journal of Philosophical Logic, 34(3): 275–303. doi:10.1007/s10992-004-7800-2\n• Belnap, Nuel D., Jr., 1981, “Modal and Relevance Logics: 1977”, in Modern Logic: A Survey, Evandro Agazzi (ed.), Dordrecht: D. Reidel, pp. 131–151. 
doi:10.1007/978-94-009-9056-2_8\n• Blackburn, Patrick and Johan van Benthem, 2007a, “Modal Logic: A Semantic Perspective”, in Blackburn, van Benthem, and Wolter 2007b: chapter 1.\n• Blackburn, Patrick, Johan van Benthem, and Frank Wolter, (eds.), 2007b, Handbook of Modal Logic (Studies in Logic and Practical Reasoning: Volume 3), Amsterdam: Elsevier.\n• Bull, Robert and Krister Segerberg, 1984, “Basic Modal Logic”, in Extensions of Classical Logic (Handbook of Philosophical Logic: Volume 2), D.M. Gabbay and F. Guenthner (eds.), Dordrecht: Kluwer, pp. 1–88. doi:10.1007/978-94-009-6259-0_1\n• Burgess, John P., 2009, Philosophical Logic, Princeton: Princeton University Press.\n• Cocchiarella, Nino B., 1975a, “Logical Atomism, Nominalism, and Modal Logic”, Synthese, 31(1): 23–62. doi:10.1007/BF00869470\n• –––, 1975b, “On the Primary and Secondary Semantics of Logical Necessity”, Journal of Philosophical Logic, 4(1): 13–27. doi:10.1007/BF00263118\n• Copeland, B. Jack, 2002, “The Genesis of Possible Worlds Semantics”, Journal of Philosophical Logic, 31(2): 99–137. doi:10.1023/A:1015273407895\n• Curley, E.M., 1975, “The Development of Lewis’ Theory of Strict Implication”, Notre Dame Journal of Formal Logic, 16(4): 517–527. doi:10.1305/ndjfl/1093891890\n• Goldblatt, Robert, 2003, “Mathematical Modal Logic: A View of its Evolution”, in Logic and the Modalities in the Twentieth Century (Handbook of the History of Logic: Volume 7), D.M. Gabbay and J. Woods (eds.), Amsterdam: Elsevier, pp. 1–98. [2003, Journal of Applied Logic, 1(5–6): 309–392. doi:10.1016/S1570-8683(03)00008-9]\n• Kaplan, David, 1966, “Review of Saul A. Kripke, Semantical Analysis of Modal Logic I. Normal Modal Propositional Calculi (1963a)”, Journal of Symbolic Logic, 31(1): 120–122. doi:10.2307/2270649\n• –––, 1986, “Opacity”, in Lewis Edwin Hahn and Paul Arthur Schilpp (eds.), The Philosophy of W.V. Quine (The Library of Living Philosophers, Volume 18), La Salle: Open Court, pp. 
229–289.\n• Lindström, Sten, 1996, “Modality Without Worlds: Kanger’s Early Semantics for Modal Logic”, in Odds and Ends. Philosophical Essays Dedicated to Wlodek Rabinowicz on the Occasion of his Fiftieth Birthday, S. Lindström, R. Sliwinski, and J. Österberg (eds.), Uppsala, Sweden, pp. 266–284.\n• –––, 1998, “An Exposition and Development of Kanger’s Early Semantics for Modal Logic”, in The New Theory of Reference: Kripke, Marcus, and its Origins, P.W. Humphreys, and J.H. Fetzer (eds.), Dordrecht: Kluwer, pp. 203–233.\n• –––, 2001, “Quine’s Interpretation Problem and the Early Development of Possible Worlds Semantics”, Uppsala Philosophical Studies, 50: 187–213.\n• Lindström, Sten and Krister Segerberg, 2007, “Modal Logic and Philosophy”, in Blackburn, van Benthem, and Wolter 2007b: chapter 1.\n• Linsky, Leonard, (ed.), 1971, Reference and Modality, Oxford: Oxford University Press.\n• Löb, M.H., 1966, “Extensional Interpretations of Modal Logic”, Journal of Symbolic Logic, 31(1): 23–45. doi:10.2307/2270618\n• Rahman, Shahid and Juan Redmond, 2007, Hugh MacColl: An Overview of his Logical Work with Anthology, London: College Publications.\n• Rescher, Nicholas, 1974, Studies in Modality, Oxford: Basil Blackwell.\n• Zakharyaschev, Michael, Krister Segerberg, Maarten de Rijke, and Heinrich Wansing, 2001, “The Origins of Modern Modal Logic”, in Advances in Modal Logic 2, M. Zakharyaschev, K. Segerberg, M. de Rijke, and H. Wansing (eds.), Stanford: CSLI Publications, pp. 11–38.
[ null, "https://plato.stanford.edu/symbols/sepman-icon.jpg", null, "https://plato.stanford.edu/symbols/sepman-icon.jpg", null, "https://plato.stanford.edu/symbols/inpho.png", null, "https://plato.stanford.edu/symbols/pp.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.839973,"math_prob":0.986675,"size":82798,"snap":"2019-51-2020-05","text_gpt3_token_len":22103,"char_repetition_ratio":0.14608547,"word_repetition_ratio":0.041565366,"special_character_ratio":0.27715644,"punctuation_ratio":0.14822939,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99557424,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-26T05:11:36Z\",\"WARC-Record-ID\":\"<urn:uuid:029e7fc5-ce27-4f20-b7b2-c68e9448266f>\",\"Content-Length\":\"112674\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7712ea95-0718-43ca-8998-11f72a42ddf4>\",\"WARC-Concurrent-To\":\"<urn:uuid:2d728c34-3f0d-409a-9bab-168796e4afa4>\",\"WARC-IP-Address\":\"171.67.193.20\",\"WARC-Target-URI\":\"https://plato.stanford.edu/entries/logic-modal-origins/\",\"WARC-Payload-Digest\":\"sha1:JKF3KJZSEAY5CR3XUYOJAIIBWJFQM3SV\",\"WARC-Block-Digest\":\"sha1:AT3RTPERG3BTXBEGX6M7JAKDG6KFUQJB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579251687725.76_warc_CC-MAIN-20200126043644-20200126073644-00064.warc.gz\"}"}
https://vidyasheela.com/post/hyperparameters-vs-parameters
[ "# Hyperparameters vs. Parameters\n\nby keshav", null, "## What are Hyperparameters?\n\nA hyperparameter is an entity of a learning algorithm, usually (but not always) having a finite numerical value. That value affects the way the algorithm works. Hyperparameters are not learned by the algorithm itself from data but we set them. They have to be set by the data analyst before running the algorithm.\n\nFor example, the value of ‘K’ in K-NN algorithm, number of hidden neurons in the hidden layer of the neural network, filter or kernel size in the Convolutional neural network, etc." ]
[ null, "https://vidyasheela.com/post/web-contents/img/post_img/18/hyperparameters vs parameters.PNG", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.88226104,"math_prob":0.95602477,"size":888,"snap":"2023-40-2023-50","text_gpt3_token_len":181,"char_repetition_ratio":0.15723982,"word_repetition_ratio":0.0,"special_character_ratio":0.19594595,"punctuation_ratio":0.1017964,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9503476,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-11-28T23:00:40Z\",\"WARC-Record-ID\":\"<urn:uuid:6a0b619e-d659-4bf2-9932-6a0c20ffcf5d>\",\"Content-Length\":\"32674\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a173dc15-14e3-4aff-b568-8de0ac6a9fa3>\",\"WARC-Concurrent-To\":\"<urn:uuid:251a9385-6f5d-4dde-92c1-0997a3ce5053>\",\"WARC-IP-Address\":\"50.16.223.119\",\"WARC-Target-URI\":\"https://vidyasheela.com/post/hyperparameters-vs-parameters\",\"WARC-Payload-Digest\":\"sha1:LMZE4K4WQ2AH6MBHKZN225ZO6NSSU2K7\",\"WARC-Block-Digest\":\"sha1:FUYHINNYPVQGDQ54EHAY6F4KC6ALWGMD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100016.39_warc_CC-MAIN-20231128214805-20231129004805-00385.warc.gz\"}"}
https://www.numbersaplenty.com/15552
[ "Search a number\nBaseRepresentation\nbin11110011000000\n3210100000\n43303000\n5444202\n6200000\n763225\noct36300\n923300\n1015552\n1110759\n129000\n137104\n14594c\n15491c\nhex3cc0\n\n15552 has 42 divisors (see below), whose sum is σ = 46228. Its totient is φ = 5184.\n\nThe previous prime is 15551. The next prime is 15559. The reversal of 15552 is 25551.\n\nIt is a powerful number, because all its prime factors have an exponent greater than 1 and also an Achilles number because it is not a perfect power.\n\nIt is a Jordan-Polya number, since it can be written as (3!)5 ⋅ 2!.\n\nIt is a Harshad number since it is a multiple of its sum of digits (18).\n\nIts product of digits (250) is a multiple of the sum of its prime divisors (5).\n\nIt is a nialpdrome in base 6 and base 12.\n\nIt is a zygodrome in base 2.\n\nIt is not an unprimeable number, because it can be changed into a prime (15551) by changing a digit.\n\nIt is a polite number, since it can be written in 5 ways as a sum of consecutive naturals, for example, 5183 + 5184 + 5185.\n\n15552 is a Friedman number, since it can be written as 2*(5+1^5)^5, using all its digits and the basic arithmetic operations.\n\n215552 is an apocalyptic number.\n\n15552 is a gapful number since it is divisible by the number (12) formed by its first and last digit.\n\nIt is an amenable number.\n\nIt is a practical number, because each smaller number is the sum of distinct divisors of 15552, and also a Zumkeller number, because its divisors can be partitioned in two sets with the same sum (23114).\n\n15552 is an abundant number, since it is smaller than the sum of its proper divisors (30676).\n\nIt is a pseudoperfect number, because it is the sum of a subset of its proper divisors.\n\n15552 is an frugal number, since it uses more digits than its factorization.\n\n15552 is an evil number, because the sum of its binary digits is even.\n\nThe sum of its prime factors is 27 (or 5 counting only the distinct ones).\n\nThe product of its 
digits is 250, while the sum is 18.\n\nThe square root of 15552 is about 124.7076581450. The cubic root of 15552 is about 24.9610058766.\n\nMultiplying 15552 by its sum of digits (18), we get a 7-th power (279936 = 67).\n\nSubtracting 15552 from its reverse (25551), we obtain a palindrome (9999).\n\nThe spelling of 15552 in words is \"fifteen thousand, five hundred fifty-two\"." ]
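Most of the divisor-based claims above can be checked by brute force; a quick sketch (the variable names are mine):

```python
from math import gcd

# Check several stated facts about 15552 = 2^6 * 3^5 by brute force.
n = 15552
divs = [d for d in range(1, n + 1) if n % d == 0]
phi = sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

assert len(divs) == 42            # 42 divisors
assert sum(divs) == 46228         # sigma(15552) = 46228
assert phi == 5184                # totient
assert sum(divs) - n == 30676     # proper divisors sum to 30676 -> abundant
assert n * 18 == 6 ** 7           # 15552 * digit sum = 279936 = 6^7
assert 25551 - n == 9999          # reverse minus 15552 is a palindrome
print("all checks pass")
```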
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.93676275,"math_prob":0.9973237,"size":2213,"snap":"2021-31-2021-39","text_gpt3_token_len":615,"char_repetition_ratio":0.17655048,"word_repetition_ratio":0.024213076,"special_character_ratio":0.32941708,"punctuation_ratio":0.1197479,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99791,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-18T10:46:54Z\",\"WARC-Record-ID\":\"<urn:uuid:3b2b7abe-a9b4-440b-ba73-58c3e9208ab8>\",\"Content-Length\":\"10648\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5abb9c1f-dd22-4f75-9566-6689ce002fa3>\",\"WARC-Concurrent-To\":\"<urn:uuid:11b44861-8970-4e96-90e1-5128c8ff2040>\",\"WARC-IP-Address\":\"62.149.142.170\",\"WARC-Target-URI\":\"https://www.numbersaplenty.com/15552\",\"WARC-Payload-Digest\":\"sha1:K56G24SBEMNSXPCQH6SKDDAU7KWHCPFH\",\"WARC-Block-Digest\":\"sha1:IWFHMGBHVYOBCGFUYCY7H4G2BHCZV2L5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780056392.79_warc_CC-MAIN-20210918093220-20210918123220-00362.warc.gz\"}"}
https://theuvocorp.com/a-stage-extraction-process-is-depicted-in-figure-in-such-4/
[ "# A Stage Extraction Process Is Depicted In Figure In Such\n\nA stage extraction process is depicted in Figure. In such systems, a stream containing a weight fraction Yin of a chemical enters from the left at a mass flow rate of F1. Simultaneously, a solvent carrying a weight fraction Xin of the same chemical enters from the right at a flow rate of F2. Thus, for stage i, a mass balance can be represented as\n\nF1Yi-1 + F2Xi+1 = F1Yi + F2Xi                                (P a)\n\nAt each stage, an equilibrium is assumed to be established between Yi and Xi as in\n\nK = Xi/Yi                                                           (P b)", null, "", null, "Posted in Uncategorized" ]
[ null, "https://www.solutioninn.com/images3/45-M-N-A-N-L-A(103).png", null, "https://blueribbonwriters.com/wp-content/uploads/2020/01/order-supreme-essay.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92509544,"math_prob":0.95637465,"size":483,"snap":"2021-43-2021-49","text_gpt3_token_len":129,"char_repetition_ratio":0.10020877,"word_repetition_ratio":0.0,"special_character_ratio":0.2505176,"punctuation_ratio":0.078431375,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97928685,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,1,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-07T23:58:10Z\",\"WARC-Record-ID\":\"<urn:uuid:e83d61e2-3ced-4244-9b77-6cb6c7f335d4>\",\"Content-Length\":\"34426\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:220f328b-a0bf-49c6-8213-44af2af570f1>\",\"WARC-Concurrent-To\":\"<urn:uuid:51eded79-fb98-4e9b-97ca-b8a73184e305>\",\"WARC-IP-Address\":\"192.64.113.86\",\"WARC-Target-URI\":\"https://theuvocorp.com/a-stage-extraction-process-is-depicted-in-figure-in-such-4/\",\"WARC-Payload-Digest\":\"sha1:FHVANA7IW4PPR7EEEPPIVH452BQVJI6N\",\"WARC-Block-Digest\":\"sha1:PSAZWGWMVBKD2XRU7Z5FJ4URUGVECMUA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363420.81_warc_CC-MAIN-20211207232140-20211208022140-00094.warc.gz\"}"}
https://mareknarozniak.com/2020/07/13/satisfiability/
[ "In mathematical logic satisfiability of an expression indicates if there exists an assignment of values, or simply, an input for which such expression yields a particular result. As I like to think about it, it is a logic generalization of the common concept of solving an equation. Determining if a formula is satisfiable is a decidable problem but computationally hard. In this article we are going to focus on boolean satisfiability, which to my knowledge, is the most studied problem in computer science. It is not a purely theoretical concept, it has industrial application and it has potential to inspire you for more projects and perhaps help you solve problems in projects you are already working on. We already did demonstrate solving graph problems finds use in level design in games, same could be true for boolean satisfiability or simply $\\text{SAT}$ problem.\n\nAn instance of such problem is a boolean formula expressions written in conjunctive normal form or a clausal form meaning it is a conjunction of clauses and each clause is a disjunction of literals.\n\n$\\Phi(x_1 \\dots x_N) = \\bigwedge\\limits_{\\mathfrak{C}_k \\in \\mathfrak{C}(\\Phi)} ( \\bigvee\\limits_{l \\in \\mathfrak{C}_k} l ) \\tag{SAT}$ $\\forall \\mathfrak{C}_k \\in \\mathfrak{C}(\\Phi), \\forall l \\in \\mathfrak{C}_k, l \\in \\{x_n \\vert 1 \\leq n \\leq N \\} \\cup \\{ \\neg x_n \\vert 1 \\leq n \\leq N\\}$\n\nA conjunction is logical AND operation denoted above as $\\wedge$ and disjunction is logical OR operation denoted as $\\vee$. Parentheses indicate clauses which consist of literals where a literal is a symbol pointing a variable or negation of variable. For the CNF expression $\\Phi$ I picked a set-theoretical description which denotes the set of clauses $\\mathfrak{C}(\\Phi)$ in which clauses are sets of literals $\\mathfrak{C}_k \\in \\mathfrak{C}(\\Phi)$. It is allowed to have more literals than variables.\n\nThe easiest way to get started is by playing with small examples. 
For instance, a $\\Phi$ expression is an instance of $\\text{3SAT}$ problem when $\\vert \\mathfrak{C}_k \\vert = 3, \\forall \\mathfrak{C}_k \\in \\mathfrak{C}(\\Phi)$. Such formula has exactly three literals per clause.\n\n\\begin{aligned} \\Phi_1 &= (x_1 \\vee \\neg x_2 \\vee x_3) \\wedge (x_1 \\vee x_2 \\vee \\neg x_3) \\wedge (\\neg x_1 \\vee \\neg x_2 \\vee x_3) \\end{aligned}\n\nLet us computationally solve, personally I use the MergeSat solver. First we write $\\Phi_1$ in .cnf format which is accepted by the solver. Each line ends with 0, negative numbers are negations, each line is a clause except for first line which indicates the CNF form with number of variables and number of clauses.\n\np cnf 3 3\n1 -2 3 0\n1 2 -3 0\n-1 -2 3 0\n\n\nThe CNF formula $\\Phi_1$ is satisfiable, output from Mergesat indicating this as well as assignment of values to variables $x_1 = 1, x_2 = 0, x_3 = 0$ can be found in the MergeSat output. Of course this does not have to be the only solution that satisfies $\\Phi_1$. Now, for contrast, let us consider $\\Phi_2$ which will involve two variables and four clauses.\n\n\\begin{aligned} \\Phi_2 &= (x_1 \\vee x_2) \\wedge (x_1 \\vee \\neg x_2) \\wedge (\\neg x_1 \\vee x_2) \\wedge (\\neg x_1 \\vee \\neg x_2) \\end{aligned}\np cnf 2 4\n1 2 0\n1 -2 0\n-1 2 0\n-1 -2 0\n\n\nUnlike in case of the previous form, the $\\Phi_2$ is not satisfiable which is indicated by the MergeSat output. It can also be deduced intuitively by looking at clauses of $\\Phi_2$, they list all the possible assignments of values for two boolean variables making it not possible to satisfy a clause without violating another. 
It is also worth noting that the $\text{2SAT}$ problem, a special case of $\text{SAT}$, can be solved efficiently in polynomial (in fact linear) time (Krom, 1967; Aspvall et al., 1979; Even et al., 1976).\n\nFor now let us omit the CNF form and define the circuit satisfiability problem in a similar way: a boolean formula $\Phi_\text{circuit}(x_1 \dots x_N)$ is a word of a formal context-free grammar whose words resemble syntax trees involving the boolean operators $\wedge$, $\vee$ and $\neg$, with literals as terminal symbols. A formal language is defined by a quadruple $(V, N, P, S)$ where $N$ is a finite set of non-terminal symbols, $V$ is a finite set of terminal symbols, $P$ is a set of production rules and finally $S$ is the start symbol.\n\n\begin{aligned} G &= (V, N, P, S) \tag{CSAT grammar} \\ N &= \{\mathfrak{S}\} \\ V &= \{x_n \vert 1 \leq n \leq N \} \cup \{ \neg, \vee, \wedge, (, )\}\\ P &= \{\mathfrak{S} \rightarrow (\mathfrak{S} \wedge \mathfrak{S}), \mathfrak{S} \rightarrow (\mathfrak{S} \vee \mathfrak{S}), \mathfrak{S} \rightarrow \neg \mathfrak{S} \} \cup \{ \mathfrak{S} \rightarrow x_n \vert 1 \leq n \leq N \} \\ S &= \{\mathfrak{S}\} \end{aligned}\n\nIt is worth noting that $\text{CSAT}$ is a generalization of $\text{SAT}$, meaning that every possible $\Phi(x_1 \dots x_N)$ is a word of the $\text{CSAT grammar}$; this is because every boolean formula in conjunctive normal form forms a valid syntax tree described by the $\text{CSAT grammar}$.\n\n\begin{aligned} \Phi_3 &= ( x_1 \wedge x_2) \\ \Phi_4 &= ( x_1 \wedge \neg x_1) \end{aligned}\n\nThe circuit $\Phi_3$ is satisfied by the assignment of values to variables $x_1 = x_2 = 1$. 
In the case of $\\Phi_4$ which involves only a single boolean variable $x_1$ and none of its values satisfies the circuit.\n\nThe $\\text{SAT}$ problem might be the most studied problem so far, the number of available solvers and heuristics is vast and it makes it attractive to formalize our formal satisfaction problems as $\\text{SAT}$ formulas and take advantage of decades of research and use those solvers to process them efficiently. What about the $\\text{CSAT}$? We made a formal argument that every $\\text{SAT}$ is $\\text{CSAT}$ but what about the other way around?\n\nWe could apply DeMorgan’s Law and the distributive property to simplify formula generated by the $\\text{CSAT grammar}$ and eventually rewriting it into CNF form. This unforuntately could lead to exponential growth of formula size. This is where the discovery of Tseytin Transformation (Tseitin, 1983) which is an efficient way of converting disjunctive normal form DNF into conjuctive normal form CNF. We can imagine a sequence of algebraic transformations such as variables substitutions in order to translate an arbitrary $\\Phi_\\text{circuit}(x_1 \\dots x_N)$ into $\\Phi(x_1 \\dots x_N)$. Problem with that is the number of involved variables would grow quadratically and this is an NP-Complete problem, every extra variable is increasing the number of required steps to solve by an order of magnitude. The advantage of Tseytin Transformation is that CNF formula grows linearly compared to input $\\text{CSAT}$ instance expression. 
It consists of assigning an output variable to each component of the circuit and rewriting each gate according to the following gate sub-expressions.\n\n\begin{aligned} A \cdot B = C &\equiv (\neg A \vee \neg B \vee C) \wedge (A \vee \neg C) \wedge (B \vee \neg C) \\ A + B = C &\equiv (A \vee B \vee \neg C) \wedge (\neg A \vee C) \wedge (\neg B \vee C) \\ \neg A = C &\equiv (\neg A \vee \neg C) \wedge (A \vee C) \tag{Tseytin} \end{aligned}\n\nHere $A \cdot B = C$, $A + B = C$ and $\neg A = C$ are the classical $\text{AND}$, $\text{OR}$ and $\text{NOT}$ gates with $C$ as the single output. I will not provide a rigorous proof for those sub-expressions, but rather some intuition: if we think of satisfying all the clauses, then each gate sub-expression becomes a logical description of its gate, consisting of the relevant cases (the clauses) of how the inputs $A, B$ affect the output $C$. Let us analyse the clauses of the $\text{AND}$ gate. The clause $(\neg A \vee \neg B \vee C)$ indicates that if both $A$ and $B$ are true, then the only way to satisfy this clause is to assign $C$ to true as well, which already seems to implement $\text{AND}$ pretty well. The remaining case to ensure is forcing $C$ to false when at least one of the inputs is false, and this is done by the clauses $(A \vee \neg C) \wedge (B \vee \neg C)$.\n\nLet us examine an example of a $\text{CSAT}$ circuit converted into a $\text{SAT}$ problem using the Tseytin transformation. The example circuit consists of three inputs $x_1, x_2, x_3$, a single output $y$ and seven logic gates. The circuit takes the following form.\n\nNote that one of the gates has two outputs. 
The formula corresponding to this circuit can be written in a predicate form recognizable by the $\text{CSAT grammar}$; it is no longer in CNF.\n\n\begin{aligned} \Phi_5 &= ( ( ( \neg x_1 \wedge x_2 ) \vee ( x_1 \wedge \neg x_2 ) ) \vee ( \neg x_2 \wedge x_3 ) ) \end{aligned}\n\nApplying the Tseytin transformation to this circuit leads to an expression with more variables but in conjunctive normal form. It requires applying the gate sub-expressions to each gate of the circuit, leading us to the following CNF forms.\n\n\begin{aligned} \Phi_\text{N1} &= (z_1 \vee x_1) \wedge (\neg z_1 \vee \neg x_1) \\ \Phi_\text{N2} &= (z_2 \vee x_2) \wedge (\neg z_2 \vee \neg x_2) \wedge (z_3 \vee x_2) \wedge (\neg z_3 \vee \neg x_2) \\ \Phi_\text{A1} &= (\neg z_1 \vee \neg x_2 \vee z_4) \wedge (z_1 \vee \neg z_4) \wedge (x_2 \vee \neg z_4) \\ \Phi_\text{A2} &= (\neg x_1 \vee \neg z_2 \vee z_5) \wedge (x_1 \vee \neg z_5) \wedge (z_2 \vee \neg z_5) \\ \Phi_\text{A3} &= (\neg z_3 \vee \neg x_3 \vee z_6) \wedge (z_3 \vee \neg z_6) \wedge (x_3 \vee \neg z_6) \\ \Phi_\text{R1} &= (z_4 \vee z_5 \vee \neg z_7) \wedge (\neg z_4 \vee z_7) \wedge (\neg z_5 \vee z_7) \\ \Phi_\text{R2} &= (z_7 \vee z_6 \vee \neg y) \wedge (\neg z_7 \vee y) \wedge (\neg z_6 \vee y) \end{aligned}\n\nEach of $\Phi_\text{N1}, \Phi_\text{N2} \dots \Phi_\text{R2}$ is an individual CNF form corresponding to a particular gate of the circuit. The penalty for such convenient notation is in the extra variables $z_i$ that have to be introduced. 
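Each gate sub-expression can be verified mechanically: its clauses are all satisfied exactly when the output variable equals the gate applied to the inputs. A small sketch of that check (function names are mine):

```python
from itertools import product

# Each function evaluates the clause set of one Tseytin gate template.
def and_clauses(a, b, c):
    return (not a or not b or c) and (a or not c) and (b or not c)

def or_clauses(a, b, c):
    return (a or b or not c) and (not a or c) and (not b or c)

def not_clauses(a, c):
    return (not a or not c) and (a or c)

# The clauses hold for (a, b, c) exactly when c equals the gate output on (a, b).
for a, b, c in product([False, True], repeat=3):
    assert and_clauses(a, b, c) == (c == (a and b))
    assert or_clauses(a, b, c) == (c == (a or b))
for a, c in product([False, True], repeat=2):
    assert not_clauses(a, c) == (c == (not a))
print("Tseytin templates check out")
```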
We can assemble those forms into a single CNF formula equisatisfiable with $\Phi_5$; the cnf file is also provided.\n\n\begin{aligned} \Phi_5 &= \Phi_\text{N1} \wedge \Phi_\text{N2} \wedge \Phi_\text{A1} \wedge \Phi_\text{A2} \wedge \Phi_\text{A3} \wedge \Phi_\text{R1} \wedge \Phi_\text{R2} \wedge (y) \end{aligned}\n\nThis circuit outputs true if given one of the following five inputs $(x_1, x_2, x_3) \in \{(0, 0, 1), (0, 1, 0), (1, 0, 0), (1, 0, 1), (0, 1, 1)\}$. We can enumerate all the solutions by repeatedly appending a blocking clause that is violated by the previously found solution, until the formula becomes unsatisfiable. For instance, if we append the clauses $(x_1 \vee x_2 \vee \neg x_3) \wedge (x_1 \vee \neg x_2 \vee x_3) \wedge (\neg x_1 \vee x_2 \vee x_3)$ we will get the following form.\n\n\begin{aligned} \Phi_5^\prime &= \Phi_\text{N1} \wedge \Phi_\text{N2} \wedge \Phi_\text{A1} \wedge \Phi_\text{A2} \wedge \Phi_\text{A3} \wedge \Phi_\text{R1} \wedge \Phi_\text{R2} \wedge (y) \wedge (x_1 \vee x_2 \vee \neg x_3) \wedge (x_1 \vee \neg x_2 \vee x_3) \wedge (\neg x_1 \vee x_2 \vee x_3) \end{aligned}\n\nThis is satisfiable only by the solutions $(x_1, x_2, x_3) \in \{(1, 0, 1), (0, 1, 1)\}$, as the assignments $\{(0, 0, 1), (0, 1, 0), (1, 0, 0)\}$ have been excluded by the appended clauses. Following this logic, the formula below includes blocking clauses for all the solutions and is unsatisfiable. 
The formula file is also provided.\n\n\begin{aligned} \Phi_5^{\prime\prime} = &\Phi_\text{N1} \wedge \Phi_\text{N2} \wedge \Phi_\text{A1} \wedge \Phi_\text{A2} \wedge \Phi_\text{A3} \wedge \Phi_\text{R1} \wedge \Phi_\text{R2} \wedge (y) \\ &\wedge (x_1 \vee x_2 \vee \neg x_3) \wedge (x_1 \vee \neg x_2 \vee x_3) \wedge (\neg x_1 \vee x_2 \vee x_3) \wedge (\neg x_1 \vee x_2 \vee \neg x_3) \wedge (x_1 \vee \neg x_2 \vee \neg x_3) \end{aligned}\n\nLet $\Phi(x_1 \dots x_N)$ be an arbitrary $\text{SAT}$ instance, and suppose we want to construct from it an instance of the $(\text{MIS})$ maximum independent set problem as defined in Vertex Covers and Independent Sets in game level design. We know both of those problems are NP-Complete, thus by (Karp, 1972) there must exist a polynomial reduction which transforms one instance into the other in a number of steps polynomial in the size of the data. As a brief reminder, the target $(\text{MIS})$ instance takes the form of a graph $G = (E(G), V(G))$ where $E(G)$ is the set of edges of $G$ and $V(G)$ is the set of vertices of $G$. We construct $G$ from $\Phi(x_1 \dots x_N)$ according to the following definition.\n\n\begin{aligned} G &= (E, V) \\ E &= \{\{l_\alpha, l_\beta\} \vert (\exists \mathfrak{C}_k \in \mathfrak{C}(\Phi), \{l_\alpha, l_\beta\} \subseteq \mathfrak{C}_k) \vee (\exists \mathfrak{C}_k, \mathfrak{C}_{k^\prime} \in \mathfrak{C}(\Phi), k \neq k^\prime, l_\alpha \in \mathfrak{C}_k, l_\beta \in \mathfrak{C}_{k^\prime}, l_\alpha = \neg l_\beta)\} \\ V &= \{l_{km} \vert l_{km} \in \mathfrak{C}_k, \forall \mathfrak{C}_k \in \mathfrak{C}(\Phi) \} \end{aligned}\n\nIf the maximum independent set of the graph described above has size equal to the number of clauses of the CNF formula $\Phi(x_1 \dots x_N)$, then $\Phi$ is satisfiable, i.e. $\vert \mathfrak{C}(\Phi) \vert = V_\text{MIS}(G)$. 
The reason is that all the literals inside a clause are coupled together, making it impossible to select more than one vertex per clause without violating the independent set constraint, while the conflict links forbid selecting a variable together with its negation. The lowest number of violations is zero, and this corresponds to exactly one consistently chosen literal per clause.\n\nThe process illustrated in the figure above consists of assigning a graph vertex to each literal, then connecting the vertices corresponding to literals of the same clause with an edge - this creates the clause clusters. Then we join with an edge all pairs of vertices corresponding to a variable and its negation; those we call conflict links. The graph obtained from the method above can be re-arranged for better aesthetics.\n\nIf you read the Vertex Covers and Independent Sets in game level design article, you know that the maximum independent set problem is semantically equivalent to the minimum vertex cover problem, thus this reduction can be treated as a reduction to either of those problems. Moreover, if you checked the using Quantum Adiabatic Optimization article, you know that such a problem could be solved using a quantum adiabatic processor, which already reveals one possible option for solving $\text{SAT}$ problems using a quantum machine.\n\nWe can also connect this problem to our constraint programming post. A $\text{CSAT}$ instance is essentially a predicate and could implement a constraint. Thus an instance of $\text{CSP}$ could be used to design a boolean circuit which is satisfied when the $\text{CSP}$ instance is satisfied; then via the Tseytin transformation we can convert it to a CNF formula and even reduce it to a $\text{MIS}$ instance. 
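The reduction is easy to implement and to check by brute force on small formulas. A sketch of my own (helper names are mine; the brute-force MIS search is only viable for tiny graphs):

```python
from itertools import combinations

def sat_to_mis(clauses):
    """One vertex per literal occurrence; edges inside each clause cluster,
    plus conflict links between complementary literals of different clauses."""
    verts = [(k, l) for k, clause in enumerate(clauses) for l in clause]
    edges = set()
    for u, v in combinations(verts, 2):
        same_clause = u[0] == v[0]
        conflict = u[0] != v[0] and u[1] == -v[1]
        if same_clause or conflict:
            edges.add((u, v))
    return verts, edges

def max_independent_set_size(verts, edges):
    # Brute force: try subsets from largest to smallest.
    for r in range(len(verts), 0, -1):
        for sub in combinations(verts, r):
            s = set(sub)
            if not any(u in s and v in s for u, v in edges):
                return r
    return 0

phi1 = [[1, -2, 3], [1, 2, -3], [-1, -2, 3]]   # satisfiable
phi2 = [[1, 2], [1, -2], [-1, 2], [-1, -2]]    # unsatisfiable

v1, e1 = sat_to_mis(phi1)
v2, e2 = sat_to_mis(phi2)
print(max_independent_set_size(v1, e1) == len(phi1))   # True: MIS size = clause count
print(max_independent_set_size(v2, e2) == len(phi2))   # False: MIS smaller than clause count
```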
As long as the domains of the variables of your $\text{CSP}$ are boolean, this should be a relatively straightforward process; but once your constraints involve integer domains, your circuits would start to look like computer processor circuits, involving arithmetic-logic components and growing beyond what is currently tractable.\n\nAnother interesting observation: since $\text{SAT}$ and $\text{CSAT}$ instances are described by the context-free $\text{CSAT grammar}$, determining whether some $\Phi$ is an instance of boolean satisfiability can be done efficiently by checking whether $\Phi$ is a word described by the $\text{CSAT grammar}$. Yet if we introduced a subtle change in the definition and claimed that the grammar describes only those formulas $\Phi^\prime$ which are satisfiable, suddenly checking whether $\Phi$ belongs to this language would become a computationally hard problem. My conclusion from this is quite direct and philosophical - that finding an example of a hard problem is equivalent to solving a hard problem.\n\nThis blog post barely scratches the surface of decades of research on the concept of satisfiability and logic programming, but it does provide the necessary definitions and gives us foundations to proceed to many other interesting concepts. One possible direction I consider at the moment of writing this post is hashing functions, cryptography and blockchains; another is computational complexity theory; and yet another is formalizing satisfiability of circuits from a category-theoretical perspective. If you have any preferences regarding the direction this blog goes, or if you have any questions or find errors or typos, please do not hesitate to let me know via Twitter or GitHub. All the .cnf files from this post are available in this public gist repository.\n\n1. Krom, M. R. (1967). The Decision Problem for a Class of First-Order Formulas in Which all Disjunctions are Binary. 
Zeitschrift Für Mathematische Logik Und Grundlagen Der Mathematik, 13(1-2), 15–20. https://doi.org/10.1002/malq.19670130104\n2. Aspvall, B., Plass, M. F., & Tarjan, R. E. (1979). A linear-time algorithm for testing the truth of certain quantified boolean formulas. Information Processing Letters, 8(3), 121–123. https://doi.org/10.1016/0020-0190(79)90002-4\n3. Even, S., Itai, A., & Shamir, A. (1976). On the Complexity of Timetable and Multicommodity Flow Problems. SIAM Journal on Computing, 5(4), 691–703. https://doi.org/10.1137/0205048\n4. Tseitin, G. S. (1983). On the Complexity of Derivation in Propositional Calculus. In J. H. Siekmann & G. Wrightson (Eds.), Automation of Reasoning: 2: Classical Papers on Computational Logic 1967–1970 (pp. 466–483). Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-642-81955-1_28\n5. Karp, R. (1972). Reducibility among combinatorial problems. In R. Miller & J. Thatcher (Eds.), Complexity of Computer Computations (pp. 85–103). Plenum Press." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9143773,"math_prob":0.99853176,"size":12172,"snap":"2020-34-2020-40","text_gpt3_token_len":2570,"char_repetition_ratio":0.12228797,"word_repetition_ratio":0.0069755856,"special_character_ratio":0.20621097,"punctuation_ratio":0.09559804,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99981135,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-25T21:57:47Z\",\"WARC-Record-ID\":\"<urn:uuid:c1d38e87-a151-4813-b564-b74f94827391>\",\"Content-Length\":\"324407\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:563bfded-4914-4fbc-bacf-4175e66aaa10>\",\"WARC-Concurrent-To\":\"<urn:uuid:71580dd5-c329-4cf0-b10e-c17b3a4705de>\",\"WARC-IP-Address\":\"185.199.109.153\",\"WARC-Target-URI\":\"https://mareknarozniak.com/2020/07/13/satisfiability/\",\"WARC-Payload-Digest\":\"sha1:5QUXCYGWWKFTICEVQCLK55CFJAC7C4O2\",\"WARC-Block-Digest\":\"sha1:4YLY2ALYKLGXOWNM5JNH57JJU2E3X4NO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400228998.45_warc_CC-MAIN-20200925213517-20200926003517-00289.warc.gz\"}"}
https://axiomsofchoice.org/restricted_image
[ "## Restricted image\n\n### Set\n\n context $f:X\\to Y$ context $S\\subseteq X$\n postulate $f(S)\\equiv \\mathrm{im}(f|_S)$\n\n### Discussion\n\nThe notation $f(S)$ is overloading $f$, as $S$ is not actually in the domain $X$ of $f$.\n\nWikipedia: Image\n\n### Parents\n\n#### Context", null, "" ]
[ null, "https://axiomsofchoice.org/lib/exe/indexer.php", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6437528,"math_prob":0.99744713,"size":443,"snap":"2023-40-2023-50","text_gpt3_token_len":156,"char_repetition_ratio":0.11845102,"word_repetition_ratio":0.6956522,"special_character_ratio":0.34311512,"punctuation_ratio":0.09195402,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99535567,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-08T18:22:37Z\",\"WARC-Record-ID\":\"<urn:uuid:305d96e2-6fc9-433c-8fac-14f09ee8712d>\",\"Content-Length\":\"10106\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a4ea5af5-9e9c-4c8c-a24a-7e0d74d61c18>\",\"WARC-Concurrent-To\":\"<urn:uuid:2365534a-9da9-4f4d-97c5-63d4f98053d3>\",\"WARC-IP-Address\":\"144.76.237.132\",\"WARC-Target-URI\":\"https://axiomsofchoice.org/restricted_image\",\"WARC-Payload-Digest\":\"sha1:YYRSL6AVOPJNET7EUJ6GKJI2OKW4RCC6\",\"WARC-Block-Digest\":\"sha1:LCZTBBLCKFHLXTDQDXEKDS6SGM7XSLCL\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100769.54_warc_CC-MAIN-20231208180539-20231208210539-00641.warc.gz\"}"}
https://www.thenursingmasters.com/28-marks-totalassignment-14-compton-de-broglie-and/
[ "# 28 marks total assignment 14 compton, de broglie, and\n\n28\nMarks Total ASSIGNMENT 14\n\nCompton, de Broglie, and Wave-Particle Duality\n\n(1)\n\nThere are similarities and differences between the Photoelectric Effect and Compton Scattering. Complete each of the six partial statements below using the following guide; all you need to provide for an answer is PE, CS, BOTH, or NEITHER.\n\n• PE if the statement only applies to the Photoelectric Effect\n• CS if the statement only applies to Compton Scattering\n• BOTH if the statement only applies to both the Photoelectric Effect and Compton Scattering\n• NEITHER if the statement applies to Neither the Photoelectric Effect or Compton Scattering\n\na. Energy is conserved in _____.\n\n(1)\n\nb. Photons are observed before and after the interaction in _____.\n\n(1)\n\nc. Electrons are observed as the result of the experiment in _____.\n\n(1)\n\nd. Angles are measured in the experiment in _____.\n\n(1)\n\ne. Photons with very low energies such as 5.0 to 10.0 eV is observed in _____.\n\n(1)\n\nf. Ionization occurs in _____.\n\n(2) 2.\n\nWhat quantity measured in the Compton effect experiment show the wave-particle duality of light?\n\n(5) 3.\n\nAn X-ray with a frequency of 3.74 × 1020 Hz is incident on a thin piece of metal. The lower frequency X-ray on the other side is observed deflected at 48o. What is the frequency of the deflected X-ray?\n\n(5) 4.\n\nA scientist changes the frequency of an incident X-ray to 4.50 × 1019 Hz and measures the deflected X-ray frequency of 4.32 × 1019 Hz. What was the angle of deflection?\n\n(2) 5.\n\nCan the equation E = pc be applied to particles? Why or why not?\n\n(3) 6. A stationary hydrogen atom with a mass of 1.67 × 10-27 kg absorbs a photon of light with 10.2 eV. 
What is the velocity of the hydrogen atom after absorbing the photon in a perfectly inelastic collision?\n\n(2) 7.\nDescribe the results of performing Young’s experiment with x-rays and then high speed electrons.\n\n(2) 8.\nHow do the results of performing Young’s experiment with x-rays and then high speed electrons support the wave-particle model?\n\n(1) 9.\nAll of the following quantities can be measured or calculated for light waves and subatomic particles except _____.\n\nA. momentum\nB. velocity\nC. frequency\nD. energy" ]
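Question 3 above can be sketched numerically with the Compton shift formula λ' − λ = (h / m_e c)(1 − cos θ). The Python below is an illustration, not the assignment's official solution; the constants are standard CODATA values:

```python
import math

H = 6.62607015e-34      # Planck constant, J s
M_E = 9.1093837015e-31  # electron rest mass, kg
C = 2.99792458e8        # speed of light, m/s

def compton_scattered_frequency(f_in, theta_deg):
    """Frequency of a photon after Compton scattering through theta_deg,
    using lambda' - lambda = (h / m_e c) * (1 - cos theta)."""
    lam_in = C / f_in
    shift = (H / (M_E * C)) * (1 - math.cos(math.radians(theta_deg)))
    return C / (lam_in + shift)

# Question 3: 3.74e20 Hz incident, deflected through 48 degrees
print(f"{compton_scattered_frequency(3.74e20, 48.0):.3e} Hz")  # ~1.87e20 Hz
```

Question 4 is the inverse problem: compute 1 − cos θ = (c/f_out − c/f_in) · m_e c / h from the two stated frequencies and take the arccosine, which lands near 28°.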
https://www.colorhexa.com/0372e8
[ "# #0372e8 Color Information\n\nIn a RGB color space, hex #0372e8 is composed of 1.2% red, 44.7% green and 91% blue. Whereas in a CMYK color space, it is composed of 98.7% cyan, 50.9% magenta, 0% yellow and 9% black. It has a hue angle of 210.9 degrees, a saturation of 97.4% and a lightness of 46.1%. #0372e8 color hex could be obtained by blending #06e4ff with #0000d1. Closest websafe color is: #0066ff.\n\n• R 1\n• G 45\n• B 91\nRGB color chart\n• C 99\n• M 51\n• Y 0\n• K 9\nCMYK color chart\n\n#0372e8 color description : Vivid blue.\n\n# #0372e8 Color Conversion\n\nThe hexadecimal color #0372e8 has RGB values of R:3, G:114, B:232 and CMYK values of C:0.99, M:0.51, Y:0, K:0.09. Its decimal value is 226024.\n\nHex triplet RGB Decimal 0372e8 `#0372e8` 3, 114, 232 `rgb(3,114,232)` 1.2, 44.7, 91 `rgb(1.2%,44.7%,91%)` 99, 51, 0, 9 210.9°, 97.4, 46.1 `hsl(210.9,97.4%,46.1%)` 210.9°, 98.7, 91 0066ff `#0066ff`\nCIE-LAB 49.348, 18.75, -66.821 20.617, 17.878, 78.704 0.176, 0.153, 17.878 49.348, 69.402, 285.674 49.348, -26.128, -103.795 42.283, 13.042, -80.762 00000011, 01110010, 11101000\n\n# Color Schemes with #0372e8\n\n• #0372e8\n``#0372e8` `rgb(3,114,232)``\n• #e87903\n``#e87903` `rgb(232,121,3)``\nComplementary Color\n• #03e5e8\n``#03e5e8` `rgb(3,229,232)``\n• #0372e8\n``#0372e8` `rgb(3,114,232)``\n• #0603e8\n``#0603e8` `rgb(6,3,232)``\nAnalogous Color\n• #e5e803\n``#e5e803` `rgb(229,232,3)``\n• #0372e8\n``#0372e8` `rgb(3,114,232)``\n• #e80703\n``#e80703` `rgb(232,7,3)``\nSplit Complementary Color\n• #72e803\n``#72e803` `rgb(114,232,3)``\n• #0372e8\n``#0372e8` `rgb(3,114,232)``\n• #e80372\n``#e80372` `rgb(232,3,114)``\n• #03e879\n``#03e879` `rgb(3,232,121)``\n• #0372e8\n``#0372e8` `rgb(3,114,232)``\n• #e80372\n``#e80372` `rgb(232,3,114)``\n• #e87903\n``#e87903` `rgb(232,121,3)``\n• #024d9c\n``#024d9c` `rgb(2,77,156)``\n• #0259b6\n``#0259b6` `rgb(2,89,182)``\n• #0366cf\n``#0366cf` `rgb(3,102,207)``\n• #0372e8\n``#0372e8` `rgb(3,114,232)``\n• #097ffc\n``#097ffc` 
`rgb(9,127,252)``\n• #228cfc\n``#228cfc` `rgb(34,140,252)``\n• #3b99fc\n``#3b99fc` `rgb(59,153,252)``\nMonochromatic Color\n\n# Alternatives to #0372e8\n\nBelow, you can see some colors close to #0372e8. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #03abe8\n``#03abe8` `rgb(3,171,232)``\n• #0398e8\n``#0398e8` `rgb(3,152,232)``\n• #0385e8\n``#0385e8` `rgb(3,133,232)``\n• #0372e8\n``#0372e8` `rgb(3,114,232)``\n• #035fe8\n``#035fe8` `rgb(3,95,232)``\n• #034ce8\n``#034ce8` `rgb(3,76,232)``\n• #0339e8\n``#0339e8` `rgb(3,57,232)``\nSimilar Colors\n\n# #0372e8 Preview\n\nThis text has a font color of #0372e8.\n\n``<span style=\"color:#0372e8;\">Text here</span>``\n#0372e8 background color\n\nThis paragraph has a background color of #0372e8.\n\n``<p style=\"background-color:#0372e8;\">Content here</p>``\n#0372e8 border color\n\nThis element has a border color of #0372e8.\n\n``<div style=\"border:1px solid #0372e8;\">Content here</div>``\nCSS codes\n``.text {color:#0372e8;}``\n``.background {background-color:#0372e8;}``\n``.border {border:1px solid #0372e8;}``\n\n# Shades and Tints of #0372e8\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #000913 is the darkest color, while #ffffff is the lightest one.\n\n• #000913\n``#000913` `rgb(0,9,19)``\n• #001326\n``#001326` `rgb(0,19,38)``\n• #011c3a\n``#011c3a` `rgb(1,28,58)``\n• #01264d\n``#01264d` `rgb(1,38,77)``\n• #012f60\n``#012f60` `rgb(1,47,96)``\n• #013974\n``#013974` `rgb(1,57,116)``\n• #024287\n``#024287` `rgb(2,66,135)``\n• #024c9b\n``#024c9b` `rgb(2,76,155)``\n• #0255ae\n``#0255ae` `rgb(2,85,174)``\n• #025fc1\n``#025fc1` `rgb(2,95,193)``\n• #0368d5\n``#0368d5` `rgb(3,104,213)``\n• #0372e8\n``#0372e8` `rgb(3,114,232)``\n• #037cfb\n``#037cfb` `rgb(3,124,251)``\n• #1686fc\n``#1686fc` `rgb(22,134,252)``\n• #2a90fc\n``#2a90fc` `rgb(42,144,252)``\n• #3d9afc\n``#3d9afc` `rgb(61,154,252)``\n• #50a4fd\n``#50a4fd` `rgb(80,164,253)``\n• #64aefd\n``#64aefd` `rgb(100,174,253)``\n• #77b8fd\n``#77b8fd` `rgb(119,184,253)``\n• #8ac2fd\n``#8ac2fd` `rgb(138,194,253)``\n• #9eccfe\n``#9eccfe` `rgb(158,204,254)``\n• #b1d6fe\n``#b1d6fe` `rgb(177,214,254)``\n• #c5e1fe\n``#c5e1fe` `rgb(197,225,254)``\n• #d8ebfe\n``#d8ebfe` `rgb(216,235,254)``\n• #ebf5ff\n``#ebf5ff` `rgb(235,245,255)``\n• #ffffff\n``#ffffff` `rgb(255,255,255)``\nTint Color Variation\n\n# Tones of #0372e8\n\nA tone is produced by adding gray to any pure hue. 
In this case, #6f757c is the less saturated color, while #0372e8 is the most saturated one.\n\n• #6f757c\n``#6f757c` `rgb(111,117,124)``\n• #667585\n``#667585` `rgb(102,117,133)``\n• #5d758e\n``#5d758e` `rgb(93,117,142)``\n• #547497\n``#547497` `rgb(84,116,151)``\n• #4b74a0\n``#4b74a0` `rgb(75,116,160)``\n• #4274a9\n``#4274a9` `rgb(66,116,169)``\n• #3974b2\n``#3974b2` `rgb(57,116,178)``\n• #3073bb\n``#3073bb` `rgb(48,115,187)``\n• #2773c4\n``#2773c4` `rgb(39,115,196)``\n• #1e73cd\n``#1e73cd` `rgb(30,115,205)``\n• #1573d6\n``#1573d6` `rgb(21,115,214)``\n• #0c72df\n``#0c72df` `rgb(12,114,223)``\n• #0372e8\n``#0372e8` `rgb(3,114,232)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #0372e8 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
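The conversion figures quoted above are straightforward to reproduce in code. A minimal Python sketch (the helper names are illustrative, not from this site or any standard library), rounding to whole percent as the tables do:

```python
def hex_to_rgb(hex_color):
    """'#0372e8' -> (3, 114, 232)."""
    h = hex_color.lstrip('#')
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def rgb_to_cmyk(r, g, b):
    """8-bit RGB -> CMYK fractions in [0, 1]."""
    rp, gp, bp = r / 255, g / 255, b / 255
    k = 1 - max(rp, gp, bp)
    if k == 1:                      # pure black
        return 0.0, 0.0, 0.0, 1.0
    c = (1 - rp - k) / (1 - k)
    m = (1 - gp - k) / (1 - k)
    y = (1 - bp - k) / (1 - k)
    return c, m, y, k

rgb = hex_to_rgb('#0372e8')
print(rgb)  # (3, 114, 232)
print([round(x * 100) for x in rgb_to_cmyk(*rgb)])  # [99, 51, 0, 9]
```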
https://bizfluent.com/info-8584095-disadvantages-earned-value-project.html
[ "Earned Value Analysis (EVA) is a favorite yet controversial tool for project management that provides an objective measurement of project performance in terms of its scope (tasks), schedule (time) and budget (cost). Supporters claim EVA measures how much of the time and money budgeted for a project is \"earned.\" Its detractors say EVA can misrepresent true project status in either schedule or expenditures.\n\n## Earned Value Analysis\n\nEVA uses the planned schedule and budget along with what has actually occurred to develop three values that indicate the relative health of a project. These values are: Planned Value (PV), which is the budgeted cost of tasks that should be complete; Earned Value (EV), which is the total budgeted costs of complete tasks; and Actual Cost (AC), which is the total expenditures to-date.\n\nExample: The project budget is \\$100,000. Sixty percent of the tasks should be complete, so PV is \\$60.000. Only 50 percent of the tasks are actually complete, making EV \\$50,000. AC is \\$65,000.\n\n## EVA Variances\n\nEVA calculates two variances: cost variance (CV) = EV - AC, and schedule variance (SV) = EV - PV.\n\nUsing the values in Section 1, CV is minus \\$15,000. It has cost \\$65,000 to complete \\$50,000 of planned work. SV is minus \\$10,000. The project is behind schedule by \\$10,000 worth of work.\n\n## EVA Indexes\n\nTwo indexes indicate the performance of the project. Cost performance index (CPI) = EV/AC. Schedule performance index (SPI) = EV/PV. Using the data in Sections 1 and 2, CPI is 0.77 and SPI is 0.83.\n\nIf the indexes are equal to one, the project is on schedule/on budget; less than one, the project is behind schedule/over budget; and greater than one, the project is ahead of schedule/under budget.\n\n## CPI Issues\n\nOnce a project is over/under budget, CPI remains essentially the same for the remainder of the project, unless EV or AC changes significantly. CPI is dependent on AC for accuracy. 
If AC does not include all appropriate costs and payments, CPI can be unreliable." ]
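The variances and indexes above reduce to four one-line formulas. A small Python sketch (the function name is illustrative, not from any standard library) reproduces the article's running example:

```python
def earned_value_metrics(pv, ev, ac):
    """Standard EVA figures from planned value, earned value, actual cost."""
    return {
        'CV': ev - ac,   # cost variance: negative means over budget
        'SV': ev - pv,   # schedule variance: negative means behind schedule
        'CPI': ev / ac,  # cost performance index: < 1 means over budget
        'SPI': ev / pv,  # schedule performance index: < 1 means behind
    }

# The article's example: $100,000 budget, 60% planned, 50% done, $65,000 spent
m = earned_value_metrics(pv=60_000, ev=50_000, ac=65_000)
print(m['CV'], m['SV'])                        # -15000 -10000
print(round(m['CPI'], 2), round(m['SPI'], 2))  # 0.77 0.83
```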
https://encyclopedia2.thefreedictionary.com/coordinate+systems
[ "# coordinate systems\n\nAlso found in: Dictionary, Thesaurus.\n\n## coordinate systems\n\n[kō′ȯrd·ən·ət ‚sis·təmz]\n(mathematics)\nA rule for designating each point in space by a set of numbers.\nReferences in periodicals archive ?\nMatrices of transformation through which the vectors are projected in reference coordinate system are: - for body 1:\nIn this situation, there were carried out: (i) original calculations of the considered indices for representative operational states of the test PS when SE is performed in the rectangular and polar coordinate systems and in PS there is and there is no PhS, (ii) original statistical analyses of the calculated indices, (iii) original discussion on the causes of observed regularities.\nAccording to the DGTD equations and the parameter transformation theory in orthogonal curvilinear coordinate systems, the processes of parameter transformation for 2-D UPML between the coordinate systems of elliptical and Cartesian are presented, and expressions of transition matrix are derived.\nCaption: Figure 2: Coordinate systems of seeker and missile body.\nThe passive rigid bodies and probe can create the corresponding coordinate system by the position of four reflecting balls attached to them.\nCaption: Figure 3: Three coordinate systems used for modelling the blended wing body MAV .\nTheir position in 3D space is determined by a direction vector with its origin in a global coordinate system, and its orientation is determined by a unit vector and the angle of rotation around this unit vector.\nThe (X, Y, Z) coordinate system is an orthonormal coordinate system that is fixed relative to the surface on which the gimbal is mounted.\nis the spatial vector of the network voltage in the coordinate system [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] are orts of directions by the axes a, b, c of the coordinate system;\nStructure binary images do not differ only according to their coordinate systems, but also according to image size 
and voxel size (resolution).\nAny event point in the dark regions of space that does not violate the Lorentz factor impact of the gravitational force between the two coordinate systems (K) and (K\") can be considered to be on the constant curvature.\nTransformation between Coordinate Systems. When the transformation from the panoramic coordinate system to the PTZ coordinate system is obtained, we can determine the mapping relationship between each of pixels in the image captured by the fish-eye camera and image captured by the PTZ camera.\n\nSite: Follow: Share:\nOpen / Close" ]
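None of the excerpts above show a worked transformation, so as a generic, hedged illustration (not drawn from any of the cited papers): a point's coordinates in a second 2-D frame, rotated by an angle θ relative to the first, follow from the usual rotation formulas.

```python
import math

def rotate_to_frame(x, y, theta):
    """Coordinates of the point (x, y) as seen from a frame rotated
    by theta radians relative to the original frame."""
    return (x * math.cos(theta) + y * math.sin(theta),
            -x * math.sin(theta) + y * math.cos(theta))

# A point on the original x-axis, seen from a frame rotated 90 degrees,
# lies on that frame's negative y-axis.
xp, yp = rotate_to_frame(1.0, 0.0, math.pi / 2)
print(round(xp, 6), round(yp, 6))  # 0.0 -1.0
```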
https://www.convert-measurement-units.com/convert+Gallons+per+mile+Imperial+to+Gallons+per+mile+US.php
[ " Convert Gallons per mile (Imperial) to GPM (Gallons per mile (Imperial) to Gallons per mile (US))\n\n## Gallons per mile (Imperial) into Gallons per mile (US)\n\nnumbers in scientific notation\n\nhttps://www.convert-measurement-units.com/convert+Gallons+per+mile+Imperial+to+Gallons+per+mile+US.php\n\n## How many Gallons per mile (US) make 1 Gallons per mile (Imperial)?\n\n1 Gallons per mile (Imperial) = 1.201 105 442 176 9 Gallons per mile (US) [GPM] - Measurement calculator that can be used to convert Gallons per mile (Imperial) to Gallons per mile (US), among others.\n\n# Convert Gallons per mile (Imperial) to Gallons per mile (US) (Gallons per mile (Imperial) to GPM):\n\n1. Choose the right category from the selection list, in this case 'Fuel consumption'.\n2. Next enter the value you want to convert. The basic operations of arithmetic: addition (+), subtraction (-), multiplication (*, x), division (/, :, ÷), exponent (^), brackets and π (pi) are all permitted at this point.\n3. From the selection list, choose the unit that corresponds to the value you want to convert, in this case 'Gallons per mile (Imperial)'.\n4. Finally choose the unit you want the value to be converted to, in this case 'Gallons per mile (US) [GPM]'.\n5. Then, when the result appears, there is still the possibility of rounding it to a specific number of decimal places, whenever it makes sense to do so.\n\nWith this calculator, it is possible to enter the value to be converted together with the original measurement unit; for example, '801 Gallons per mile (Imperial)'. In so doing, either the full name of the unit or its abbreviation can be used. Then, the calculator determines the category of the measurement unit of measure that is to be converted, in this case 'Fuel consumption'. After that, it converts the entered value into all of the appropriate units known to it. In the resulting list, you will be sure also to find the conversion you originally sought. 
Alternatively, the value to be converted can be entered as follows: '40 Gallons per mile (Imperial) to GPM' or '16 Gallons per mile (Imperial) into GPM' or '32 Gallons per mile (Imperial) -> Gallons per mile (US)' or '31 Gallons per mile (Imperial) = GPM' or '64 Gallons per mile (Imperial) to Gallons per mile (US)' or '40 Gallons per mile (Imperial) into Gallons per mile (US)'. For this alternative, the calculator also figures out immediately into which unit the original value is specifically to be converted. Regardless which of these possibilities one uses, it saves one the cumbersome search for the appropriate listing in long selection lists with myriad categories and countless supported units. All of that is taken over for us by the calculator and it gets the job done in a fraction of a second.\n\nFurthermore, the calculator makes it possible to use mathematical expressions. As a result, not only can numbers be reckoned with one another, such as, for example, '(60 * 80) Gallons per mile (Imperial)'. But different units of measurement can also be coupled with one another directly in the conversion. That could, for example, look like this: '801 Gallons per mile (Imperial) + 2403 Gallons per mile (US)' or '28mm x 51cm x 4dm = ? cm^3'. The units of measure combined in this way naturally have to fit together and make sense in the combination in question.\n\nIf a check mark has been placed next to 'Numbers in scientific notation', the answer will appear as an exponential. For example, 5.056 790 077 44×1022. For this form of presentation, the number will be segmented into an exponent, here 22, and the actual number, here 5.056 790 077 44. For devices on which the possibilities for displaying numbers are limited, such as for example, pocket calculators, one also finds the way of writing numbers as 5.056 790 077 44E+22. In particular, this makes very large and very small numbers easier to read. 
If a check mark has not been placed at this spot, then the result is given in the customary way of writing numbers. For the above example, it would then look like this: 50 567 900 774 400 000 000 000. Independent of the presentation of the results, the maximum precision of this calculator is 14 places. That should be precise enough for most applications." ]
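For an offline sanity check, the conversion can be reproduced from the statutory litre definitions of the two gallons. Note this is a sketch, and the computed ratio (about 1.20095) is close to, but not exactly, the factor quoted above, which may reflect a different gallon definition or rounding in the site's tables:

```python
# Statutory litre definitions of the two gallons
LITRES_PER_IMPERIAL_GALLON = 4.54609
LITRES_PER_US_GALLON = 3.785411784

def imperial_gpm_to_us_gpm(value):
    """Gallons per mile (Imperial) -> gallons per mile (US).
    The imperial gallon is the larger unit, so the US figure is bigger."""
    return value * LITRES_PER_IMPERIAL_GALLON / LITRES_PER_US_GALLON

print(round(imperial_gpm_to_us_gpm(1.0), 5))  # 1.20095
```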
https://www.scribd.com/document/79459333/MC-Graph-Theory
[ "You are on page 1of 15\n\n# Introduction to Graph Theory\n\n## The Knigsberg Bridge Problem o\n\nThe city of Knigsberg was located on the Pregel river in Prussia. The river dio vided the city into four separate landmasses, including the island of Kneiphopf. These four regions were linked by seven bridges as shown in the diagram. Residents of the city wondered if it were possible to leave home, cross each of the seven bridges exactly once, and return home. The Swiss mathematician Leonhard Euler (1707-1783) thought about this problem and the method he used to solve it is considered by many to be the birth of graph theory.\nx e1\n1\n\nX W\n\ne2 w e5 e4 e3\n\ne6 y e7 z\n\nY Z\n7\n\nExercise 1.1. See if you can nd a round trip through the city crossing each bridge exactly once, or try to explain why such a trip is not possible. The key to Eulers solution was in a very simple abstraction of the puzzle. Let us redraw our diagram of the city of Knigsberg by representing each of the o land masses as a vertex and representing each bridge as an edge connecting the vertices corresponding to the land masses. We now have a graph that encodes the necessary information. The problem reduces to nding a closed walk in the graph which traverses each edge exactly once, this is called an Eulerian circuit. Does such a circuit exist?\n\nFundamental Denitions\n\nWe will make the ideas of graphs and circuits from the Knigsberg Bridge o problem more precise by providing rigorous mathematical denitions. A graph G is a triple consisting of a vertex set V (G), an edge set E(G), and a relation that associates with each edge, two vertices called its endpoints (not necessarily distinct). Graphically, we represent a graph by drawing a point for each vertex and representing each edge by a curve joining its endpoints. For our purposes all graphs will be nite graphs, i.e. graphs for which V (G) and E(G) are nite sets, unless specically stated otherwise. 
Note that in our definition, we do not exclude the possibility that the two endpoints of an edge are the same vertex. This is called a loop, for obvious reasons. Also, we may have multiple edges, which is when more than one edge shares the same set of endpoints, i.e. the edges of the graph are not uniquely determined by their endpoints. A simple graph is a graph having no loops or multiple edges. In this case, each edge e in E(G) can be specified by its endpoints u, v in V(G). Sometimes we write e = uv. When two vertices u, v in V(G) are endpoints of an edge, we say u and v are adjacent.

A path is a simple graph whose vertices can be ordered so that two vertices are adjacent if and only if they are consecutive in the ordering. A path which begins at vertex u and ends at vertex v is called a u,v-path. A cycle is a simple graph whose vertices can be cyclically ordered so that two vertices are adjacent if and only if they are consecutive in the cyclic ordering. We usually think of paths and cycles as subgraphs within some larger graph. A subgraph H of a graph G is a graph such that V(H) ⊆ V(G) and E(H) ⊆ E(G), satisfying the property that for every e ∈ E(H), where e has endpoints u, v ∈ V(G) in the graph G, then u, v ∈ V(H) and e has endpoints u, v in H, i.e. the edge relation in H is the same as in G.

A graph G is connected if for every u, v ∈ V(G) there exists a u,v-path in G. Otherwise G is called disconnected. The maximal connected subgraphs of G are called its components.

A walk is a list v0, e1, v1, ..., ek, vk of vertices and edges such that for 1 ≤ i ≤ k, the edge ei has endpoints v(i-1) and vi. A trail is a walk with no repeated edge. A u,v-walk or u,v-trail has first vertex u and last vertex v. When the first and last vertex of a walk or trail are the same, we say that they are closed. A closed trail is called a circuit. With this new terminology, we can consider paths and cycles not just as subgraphs, but also as ordered lists of vertices and edges.
From this point of view, a path is a trail with no repeated vertex, and a cycle is a closed trail (circuit) with no repeated vertex other than the first vertex equals the last vertex. Alternatively, we could consider the subgraph traced out by a walk or trail.

[Figure: nested classes of walks. Walks contain trails (no edge is repeated); trails contain paths (no vertex is repeated) and circuits (closed trails); circuits contain cycles.]

An Eulerian trail is a trail in the graph which contains all of the edges of the graph. An Eulerian circuit is a circuit in the graph which contains all of the edges of the graph. A graph is Eulerian if it has an Eulerian circuit. The degree of a vertex v in a graph G, denoted deg v, is the number of edges in G which have v as an endpoint.

## Exercises

Consider the following collection of graphs:

[Figure: eight graphs labeled (a) through (h).]

1. Which graphs are simple?
2. Suppose that for any graph, we decide to add a loop to one of the vertices. Does this affect whether or not the graph is Eulerian?
3. Which graphs are connected?
4. Which graphs are Eulerian? Trace out an Eulerian circuit or explain why an Eulerian circuit is not possible.
5. Are there any graphs above that are not Eulerian, but have an Eulerian trail?
6. Give necessary conditions for a graph to be Eulerian.
7. Give necessary conditions for a graph to have an Eulerian trail.
What if the front door is closed?\n\nFront Door\n\nEuler's House\nPiano Rm Dining Rm Garage Kitchen Family Rm Master Bd Hall Living Rm Study\n\nConservatory Yard\n\n## Characterization of Eulerian Circuits\n\nWe have seen that there are two obvious necessary conditions for a graph to be Eulerian: the graph must have at most one nontrivial component, and every vertex in the graph must have even degree. Now a more interesting question is, are these conditions sucient? That is, does every connected graph with vertices of even degree have an Eulerian circuit? This is the more dicult question which Euler was able to prove in the armative. Theorem 1. A graph G is Eulerian if and only if it has at most one nontrivial component and its vertices all have even degree. There are at least three dierent approaches to the proof of this theorem. We will use a constructive proof that provides the most insight to the problem. There is also a nonconstructive proof using maximality, and a proof that implements an algorithm. We will need the following result. Lemma 2. If every vertex of a graph G has degree at least 2, then G contains a cycle. Proof. Let P be a maximal path in G. Maximal means that the path P cannot be extended to form a larger path. Why does such a path exist? Now let u be an endpoint of P . Since P is maximal (cannot be extended), every vertex adjacent to u must already be in P . Since u has degree at least two, there is an edge e extending from u to some other vertex v in P , where e is not in P . The edge e together with the section of P from u to v completes a cycle in G. Proof of theorem. We have already seen that if G is Eulerian, then G has at most one nontrivial component and all of the vertices of G have even degree. We just need to prove the converse. Suppose G has at most one nontrivial component and that all of the vertices of G have even degree. We will use induction on the number of edges n. 
Basis step: When n = 0, a circuit consisting of just one vertex is an Eulerian circuit.

Induction step: When n > 0, each vertex in the nontrivial component of G has degree at least 2. Why? By the lemma, there is a cycle C in the nontrivial component of G. Let G' be the graph obtained from G by deleting E(C). Note that G' is a subgraph of G which also has the property that all of its vertices have even degree. Why? Note also that G' may have several nontrivial components. Each of these components of G' must have fewer than n edges. Why? By the induction hypothesis, each of these components has an Eulerian circuit. To construct an Eulerian circuit for G, we traverse the cycle C, but when a component of G' is entered for the first time (why must every component intersect C?), we detour along an Eulerian circuit of that component. This circuit ends at the vertex where we began the detour. When we complete the traversal of C, we have completed the Eulerian circuit of G.

Proposition 3 (Degree-Sum Formula). If G is a graph, then

    Σ_{v ∈ V(G)} deg v = 2 #E(G),

where #E(G) is the number of edges in G.

Proof. This is simply a matter of counting each edge twice. The details are left as an exercise.

This formula is extremely useful in many applications where the number of vertices and number of edges are involved in calculations. For example, we will learn later about the graph invariants of Euler characteristic and genus; the degree-sum formula often allows us to prove inequalities bounding the values of these invariants. A fun corollary of the degree-sum formula is the following statement, also known as the handshaking lemma.

Corollary 4. In any graph, the number of vertices of odd degree is even. Or equivalently, the number of people in the universe who have shaken hands with an odd number of people is even.

Proof. Try to solve this one yourself.
Hint: Split the sum on the left-hand side of the degree-sum formula into two pieces: one over vertices of even degree and one over vertices of odd degree.

## Important Graphs

There are two special types of graphs which play a central role in graph theory: the complete graphs and the complete bipartite graphs. A complete graph is a simple graph whose vertices are pairwise adjacent. The complete graph with n vertices is denoted Kn.

[Figure: the complete graphs K1, K2, K3, K4, K5.]

Before we can talk about complete bipartite graphs, we must understand bipartite graphs. An independent set in a graph is a set of vertices that are pairwise nonadjacent. A graph G is bipartite if V(G) is the union of two disjoint (possibly empty) independent sets, called partite sets of G. Similarly, a graph is k-partite if V(G) is the union of k disjoint independent sets.

[Figure: three different bipartite graphs, and a 3-partite graph.]

A complete bipartite graph is a simple bipartite graph such that two vertices are adjacent if and only if they are in different partite sets. The complete bipartite graph with partite sets of size m and n is denoted Km,n.

[Figure: the complete bipartite graphs K1,1, K2,2, K2,3, K3,3.]

## Exercises

1. Determine the values of m and n such that Km,n is Eulerian.
2. Prove or disprove:
   (a) Every Eulerian bipartite graph has an even number of edges.
   (b) Every Eulerian simple graph with an even number of vertices has an even number of edges. What if we also assume that the graph has only one component?
3. When is a cycle a bipartite graph?
4. Oh no! Baby Euler has gotten into the handpaints. His favorite colors are blue and yellow. Baby Euler wants to paint each room in the house (including the hall) either blue or yellow such that every time he walks from one room to an adjacent room, the color changes. Is this possible?
5.
If we consider the graph corresponding to Euler's house, the previous problem is equivalent to assigning the color blue or yellow to each vertex of the graph so that no two vertices of the same color are adjacent. This is called a 2-coloring of the graph. What is the relationship between 2-coloring vertices of a graph and bipartite graphs?

## k-partite and k-colorable

A k-coloring of a graph G is a labeling of the vertices f : V(G) → S, where S is some set such that |S| = k. Normally we think of the set S as a collection of k different colors, say S = {red, blue, green, etc.}, or more abstractly as the positive integers S = {1, 2, . . . , k}. The labels are called colors. A k-coloring is proper if adjacent vertices are different colors. A graph is k-colorable if it has a proper k-coloring. The chromatic number χ(G) is the least positive integer k such that G is k-colorable.

You should notice that a graph is k-colorable if and only if it is k-partite. In other words, k-colorable and k-partite mean the same thing. You should convince yourself of this by determining the k different partite sets of a k-colorable graph and conversely determining a k-coloring of a k-partite graph. In general it is not easy to determine the chromatic number of a graph or even if a graph is k-colorable for a given k.

Exercises

1. If a graph G is k-partite, what do we know about χ(G)?

2. Show that χ(G) = 1 if and only if G is totally disconnected, i.e. all of the components of G contain only 1 vertex.

3. For a finite graph G, is χ(G) also finite? Find an upper bound, or find a finite graph G which cannot be colored by finitely many colors.

4. Determine the chromatic number for each of the following graphs:

(a), (b) (graphs shown in the original figures)

(c) Any cycle of odd length (the length of a cycle is the number of edges in the cycle).

(d) Any cycle of even length.

(f), (g) (graphs shown in the original figures)

(j) This graph is called the hypercube, or 4-dimensional cube.

(k) This is an example of an infinite graph.
If the vertices of the graph are the integer coordinates, then this is also an example of a unit distance graph, since two vertices are adjacent if and only if they are distance one apart.

5. Suppose that the graph G is bipartite, i.e. 2-colorable. Is it possible for G to contain a cycle of odd length?

## 10 Characterization of Bipartite Graphs

We have just seen that if G is bipartite, then G contains no cycles of odd length. Equivalently, this means that if G does have a cycle of odd length, then G is not bipartite, hence not 2-colorable (this is the contrapositive of the previous statement). You should look back at the problem of coloring the rooms in Euler's house to determine if there is such an odd cycle. The obvious question now is whether or not the converse of the above statement is true. That is, if G contains no cycles of odd length, does it hold that G is bipartite? The answer is yes.

Theorem 5. A graph G is bipartite if and only if G contains no cycles of odd length.

Proof. You should have already proved the forward direction in the exercises, so we will prove the other direction. Suppose that G contains no cycles of odd length. We might as well assume that G contains only one component, since if it has more than one, we can form a bipartition of the graph from the bipartitions of each of its components. Thus assume G has one component. Pick a vertex u ∈ V(G). For each v ∈ V(G), let f(v) be the minimum length of a u,v-path. Since G is connected, f(v) is defined for each v ∈ V(G). Let X = {v ∈ V(G) | f(v) is even} and Y = {v ∈ V(G) | f(v) is odd}. We wish to show that X and Y are independent sets of vertices. Indeed, if there are adjacent vertices v, v′ both in X (or both in Y), then the closed walk consisting of the shortest u,v-path, plus the edge from v to v′, plus the reverse of the shortest u,v′-path, is a closed walk of odd length.
It can be shown by induction that every closed walk of odd length contains an odd cycle, but this contradicts our hypothesis that G contains no cycles of odd length. Therefore no two vertices in X (or Y) are adjacent, i.e. X and Y are independent, so that G is bipartite.

Exercise 10.1. Show that every closed walk of odd length contains a cycle of odd length. Hint: Use induction on the length l of the closed walk. If the closed walk has no repeated vertex, then it is a cycle of odd length. If it does have a repeated vertex v, then break the closed walk into two shorter walks.

## 11 Upper and Lower Bounds for χ(G)

We have already seen an upper bound for χ(G) in the exercises, namely

χ(G) ≤ #V(G).

For the particular case of the complete graph we have χ(Kn) = #V(Kn) = n, so this is the best possible upper bound for the chromatic number in terms of the size of the vertex set. However, we may derive other upper bounds using other structural information about the graph. As an example, we will show that

χ(G) ≤ Δ(G) + 1,

where the number Δ(G) is the maximum degree of all the vertices of G. Begin with an ordering v1, v2, . . . , vn of all of the vertices of G. The greedy coloring of G colors the vertices in order v1, v2, . . . , vn and assigns to each vi the lowest-indexed color which has not already been assigned to any of the previous vertices in the ordering that are adjacent to vi. Note that in any vertex ordering of G, each vertex vi has at most Δ(G) vertices which are adjacent to vi and have appeared earlier in the ordering. Thus as we color each vertex of G, we never need more than Δ(G) + 1 colors. It follows that χ(G) ≤ Δ(G) + 1.

To give a useful lower bound, we define a set of vertices called a clique, which is complementary to the notion of an independent set defined earlier. A clique in a graph is a set of pairwise adjacent vertices. The clique number of a graph G, denoted ω(G), is the maximum size of a clique in G.
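The greedy coloring procedure described above is easy to sketch in code; the graph encoding and the particular vertex ordering below are illustrative assumptions, not part of the argument:

```python
# A sketch of greedy coloring: process vertices in a fixed order, giving each
# the smallest color index not already used by its previously colored
# neighbors. Since a vertex has at most Delta(G) neighbors, at most
# Delta(G) + 1 colors are ever needed, which is the bound in the text.

def greedy_coloring(adj, order):
    """adj: dict mapping vertex -> list of neighbors; order: list of vertices."""
    color = {}
    for v in order:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# Example: the 5-cycle C5, which has maximum degree 2 and chromatic number 3.
c5 = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
coloring = greedy_coloring(c5, [0, 1, 2, 3, 4])
max_degree = max(len(nbrs) for nbrs in c5.values())
assert len(set(coloring.values())) <= max_degree + 1  # at most Delta + 1 colors
```

Note that the number of colors the greedy procedure uses depends on the vertex ordering; the Δ(G) + 1 bound holds for every ordering, but some orderings do much better than others.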
In effect, a clique corresponds to a subgraph (whose vertices are the vertices of the clique) that is itself a complete graph. Thus if ω(G) = n, then there is a clique of size n corresponding to a subgraph of G that is equivalent to Kn. Since it will require at least n colors to color the vertices in this clique, we have that

χ(G) ≥ ω(G).

## 12 Unit Distance Graphs: An Open Problem

A unit distance graph is quite simply a graph whose vertices are points in the plane (or more generally any Euclidean space), with an edge between two vertices u and v if and only if the distance between u and v is 1. Consider the unit distance graph whose set of vertices is the entire plane. Let us denote this graph by P. This is definitely not a finite graph, as there are uncountably many vertices, and for each vertex v there are uncountably many edges having v as an endpoint! In particular, for a given vertex v which corresponds to a point (x, y) ∈ R² in the plane, the vertices which are adjacent to v are those which correspond to the points lying on the circle of radius 1 centered at (x, y).

Exercise 12.1. What is χ(P)? This number is also known as the chromatic number of the plane.

This question can be restated more simply as follows: How many colors are needed so that if each point in the plane is assigned one of the colors, no two points which are exactly distance 1 apart will be assigned the same color? This problem has been open since 1956, and it is known that the answer is either 4, 5, 6 or 7 (apparently it is not very difficult to prove that these are the only possibilities). I was able to show rather easily that the answer is 3, 4, 5, 6, or 7, but I did not spend enough time working on the problem to determine how difficult it would be to eliminate 3 as a possibility. I encourage you to work on this yourself to get a feel for the subtleties of the problem. I don't think you will need much help eliminating 1 or 2.
To prove that χ(P) ≤ 9, try tiling the plane with 1/2 × 1/2 squares and coloring the squares in a clever pattern. To show χ(P) ≤ 7, use a similar technique of tiling the plane into colored shapes. To eliminate 3 you will probably need to expand on the type of argument used to eliminate 2, but you're on your own here. Eliminate any of the remaining numbers and you can publish your results. To learn more about this and related open problems in graph theory, visit http://dimacs.rutgers.edu/hochberg/undopen/graphtheory/graphtheory.html.

## 13 The Four Color Theorem and Planar Graphs

Arguably the most famous theorem in the field of graph theory is the Four Color Theorem. For an excellent history and explanation of the problem, see the article in Wikipedia at http://en.wikipedia.org/wiki/Four_color_theorem. Briefly, this theorem states that 4 colors are sufficient to color regions in the plane so that no two regions which border each other have the same color. It is trivial to verify that 3 colors is not sufficient, and the proof that 5 colors is sufficient is not difficult. That 4 colors is indeed sufficient to color any subdivision of the plane proved to be an extremely difficult problem that was finally solved in 1976 with the aid of a computer. This computer-aided proof has proved to be quite unsatisfying to many mathematicians.

The four color theorem can be stated quite simply in terms of graph theory. Just as with the Königsberg bridge problem, or the exercise about Euler's house, we abstract by representing the important information with a graph. Each region in the plane is represented as a vertex; two vertices are adjacent if and only if their corresponding regions border each other; and coloring the regions corresponds to a proper coloring of the vertices of the graph. You should notice that all possible graphs formed from such planar regions share an important property, namely they can be drawn in the plane without having to cross edges.
This motivates the definition of planar graphs. A graph is planar if it can be drawn in the plane without crossings. (Examples of planar and nonplanar graphs.)

Theorem 6. (Four color theorem, originally stated by P.J. Heawood 1890) For any planar graph G, we have χ(G) ≤ 4.

Proof. K. Appel and W. Haken, 1976.

## 14 Exercises

1. Determine if the following graphs are planar or nonplanar: (a) K4 (b) K5 (c) K2,3 (d) K3,3 (e), (f), (g) (graphs shown in the original figures).

2. Find an example of a planar graph that is not 3-colorable.

3. Does the four color theorem imply that you need at most 4 colors to color a political map of the world so that each country is assigned a color and no two adjacent countries have the same color? Explain why or provide a counterexample. (This is a bit of a trick question.)

4. Consider any planar graph G. Draw this graph in the plane so that there are no crossings. We refer to the regions of the plane bounded by the edges of the graph as faces, and denote the set of faces of G by F(G). Compute X(G) = #V(G) − #E(G) + #F(G) for each of the graphs (number of vertices minus the number of edges plus the number of faces). This is called the Euler characteristic of the graph. What trend do you notice?

5. (Kuratowski's theorem) Kuratowski proved that a finite graph is planar if and only if it contains no subgraph that is isomorphic to or is a subdivision of K5 or K3,3. In this sense, K5 and K3,3 are the basic building blocks of nonplanar graphs. Consider the graph from Exercise 1(f) above. Can you find a subgraph of this graph which looks like K5 or K3,3? What about the graph of the hypercube shown in Section 9?

## 15 The Genus of a Graph

The ability to draw a graph in the plane without crossings is equivalent to being able to draw a graph on a sphere without crossings.
For example, if you can draw a graph on a sphere, simply puncture the sphere in the middle of one of the faces formed by the edges of the graph and then stretch out this hole until you can lay the sphere flat onto the plane. The result will be a drawing of the graph in the plane with no crossings. Conversely, if you can draw a graph in the plane without crossings, take the outer face (the face containing ∞) and reverse the process above by in essence wrapping the plane around the sphere (the point at ∞ corresponds to the punctured hole in the sphere).

The sphere is what we call a surface of genus zero. The genus of the surface tells you how many doughnut holes are in the surface. Thus a sphere has genus zero, a torus has genus one, a two-holed torus has genus 2, and so on. There are nonplanar graphs (which hence cannot be drawn on a sphere without crossings) that can nevertheless be drawn on a torus without crossings. The graph K5 is such a graph. Similarly, there are graphs which cannot be drawn on a torus but can be drawn on a two-holed torus. The minimum genus of a surface upon which a graph can be drawn without crossing edges is called the genus of the graph and is denoted γ(G). It can be shown that any finite graph can be drawn without crossings on a surface of large enough genus. Therefore the genus of a finite graph is well-defined. The genus of most important graphs has been calculated. For example,

γ(Kn) = ⌈(n − 3)(n − 4)/12⌉

and

γ(Km,n) = ⌈(m − 2)(n − 2)/4⌉,

where ⌈·⌉ denotes the ceiling function (these calculations can be found in Harary's book on graph theory). So for example, γ(K4) = 0, γ(K5) = 1, γ(K7) = 1 and γ(K8) = 2. Thus K4 is the largest complete graph which can be drawn on the sphere and K7 is the largest complete graph which can be drawn on the torus.

Exercise 15.1. The accompanying figure shows how to draw K5 on the torus without crossing edges. Try to draw K6 or K7 on the torus.

(Figure: K5 on the torus.)
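The ceiling formulas above are easy to evaluate mechanically. The short script below simply computes them and reproduces the examples from the text (it does not, of course, construct the actual drawings):

```python
from math import ceil

# Genus formulas for complete and complete bipartite graphs,
# as stated in the text (see Harary's book for the derivations).
def genus_complete(n):
    return ceil((n - 3) * (n - 4) / 12)

def genus_complete_bipartite(m, n):
    return ceil((m - 2) * (n - 2) / 4)

# Examples from the text.
assert genus_complete(4) == 0   # K4 draws on the sphere
assert genus_complete(5) == 1   # K5 needs the torus
assert genus_complete(7) == 1   # K7 also fits on the torus
assert genus_complete(8) == 2   # K8 needs a two-holed torus
```

As a further check, the formula gives γ(K3,3) = ⌈1/4⌉ = 1, consistent with K3,3 being nonplanar but drawable on the torus.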