URL
stringlengths 15
1.68k
| text_list
sequencelengths 1
199
| image_list
sequencelengths 1
199
| metadata
stringlengths 1.19k
3.08k
|
---|---|---|---|
https://nl.mathworks.com/matlabcentral/cody/problems/1054-what-s-your-bmi/solutions/1636849 | [
"Cody\n\n# Problem 1054. What's Your BMI?\n\nSolution 1636849\n\nSubmitted on 2 Oct 2018 by Suraj Gurav\nThis solution is locked. To view this solution, you need to provide a solution of the same size or smaller.\n\n### Test Suite\n\nTest Status Code Input and Output\n1 Pass\nh= 1; m= 1; y_correct = 703.06957964; assert(isequal(BMI(h,m),y_correct))\n\nans = []\n\n2 Pass\nh= 70; m= 90; y_correct = 12.91352289134694; assert(isequal(BMI(h,m),y_correct))\n\nans = []"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6247871,"math_prob":0.95250875,"size":532,"snap":"2019-51-2020-05","text_gpt3_token_len":161,"char_repetition_ratio":0.14015152,"word_repetition_ratio":0.024096385,"special_character_ratio":0.35902256,"punctuation_ratio":0.14150943,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9853797,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-08T21:19:14Z\",\"WARC-Record-ID\":\"<urn:uuid:f241392c-a376-40c5-841a-947cb044eed8>\",\"Content-Length\":\"71911\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a62a95ff-8812-40ed-986c-27435339b76f>\",\"WARC-Concurrent-To\":\"<urn:uuid:02f5c36b-a366-41a3-86f0-f1cdd3a48734>\",\"WARC-IP-Address\":\"23.50.228.199\",\"WARC-Target-URI\":\"https://nl.mathworks.com/matlabcentral/cody/problems/1054-what-s-your-bmi/solutions/1636849\",\"WARC-Payload-Digest\":\"sha1:P3PVDVFQ6HVJNUOJWBSCY5X7ZWUP5T3U\",\"WARC-Block-Digest\":\"sha1:NGAA2XQ7O5BPQJWIIG5BMLI6F2YJ2DLV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540514893.41_warc_CC-MAIN-20191208202454-20191208230454-00271.warc.gz\"}"} |
https://whatisconvert.com/585-imperial-gallons-in-liters | [
"## Convert 585 Imperial Gallons to Liters\n\nTo calculate 585 Imperial Gallons to the corresponding value in Liters, multiply the quantity in Imperial Gallons by 4.54609 (conversion factor). In this case we should multiply 585 Imperial Gallons by 4.54609 to get the equivalent result in Liters:\n\n585 Imperial Gallons x 4.54609 = 2659.46265 Liters\n\n585 Imperial Gallons is equivalent to 2659.46265 Liters.\n\n## How to convert from Imperial Gallons to Liters\n\nThe conversion factor from Imperial Gallons to Liters is 4.54609. To find out how many Imperial Gallons in Liters, multiply by the conversion factor or use the Volume converter above. Five hundred eighty-five Imperial Gallons is equivalent to two thousand six hundred fifty-nine point four six three Liters.\n\n## Definition of Imperial Gallon\n\nThe imperial (UK) gallon, now defined as exactly 4.54609 litres (about 277.42 cubic inches), is used in some Commonwealth countries and was originally based on the volume of 10 pounds (approximately 4.54 kg) of water at 62 °F (17 °C). The imperial fluid ounce is defined as 1⁄160 of an imperial gallon; there are four quarts in a gallon, two pints in a quart, and 20 Imperial fluid ounces in an imperial pint.\n\n## Definition of Liter\n\nThe liter (also written \"litre\"; SI symbol L or l) is a non-SI metric system unit of volume. It is equal to 1 cubic decimeter (dm3), 1,000 cubic centimeters (cm3) or 1/1,000 cubic meter. The mass of one liter liquid water is almost exactly one kilogram. A liter is defined as a special name for a cubic decimeter or 10 centimeters × 10 centimeters × 10 centimeters, thus, 1 L ≡ 1 dm3 ≡ 1000 cm3.\n\n## Using the Imperial Gallons to Liters converter you can get answers to questions like the following:\n\n• How many Liters are in 585 Imperial Gallons?\n• 585 Imperial Gallons is equal to how many Liters?\n• How to convert 585 Imperial Gallons to Liters?\n• How many is 585 Imperial Gallons in Liters?\n• What is 585 Imperial Gallons in Liters?\n• How much is 585 Imperial Gallons in Liters?\n• How many L are in 585 uk gal?\n• 585 uk gal is equal to how many L?\n• How to convert 585 uk gal to L?\n• How many is 585 uk gal in L?\n• What is 585 uk gal in L?\n• How much is 585 uk gal in L?"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8419184,"math_prob":0.9592188,"size":2196,"snap":"2023-14-2023-23","text_gpt3_token_len":584,"char_repetition_ratio":0.21578467,"word_repetition_ratio":0.07848101,"special_character_ratio":0.28688523,"punctuation_ratio":0.10561798,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.994896,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-01T19:33:28Z\",\"WARC-Record-ID\":\"<urn:uuid:e415a5bf-5aed-402f-aec9-7027417709ee>\",\"Content-Length\":\"30372\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fbf607f1-ea85-418d-a4d0-9c74e2963ad5>\",\"WARC-Concurrent-To\":\"<urn:uuid:919ec687-4bbf-44e7-a6ea-fd0020046301>\",\"WARC-IP-Address\":\"104.21.13.210\",\"WARC-Target-URI\":\"https://whatisconvert.com/585-imperial-gallons-in-liters\",\"WARC-Payload-Digest\":\"sha1:XBLB3UOCPN5CKSP25QN5UAC2ERBROKVQ\",\"WARC-Block-Digest\":\"sha1:KYUP4JGQPY6FIOST4IXUW5GM5C7SZKBR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224648000.54_warc_CC-MAIN-20230601175345-20230601205345-00244.warc.gz\"}"} |
https://proofwiki.org/wiki/Definition:Convex_Real_Function/Definition_1/Strictly | [
"# Definition:Convex Real Function/Definition 1/Strictly\n\n## Definition\n\nLet $f$ be a real function which is defined on a real interval $I$.\n\n$f$ is strictly convex on $I$ if and only if:\n\n$\\forall x, y \\in I, x \\ne y: \\forall \\alpha, \\beta \\in \\R_{>0}, \\alpha + \\beta = 1: \\map f {\\alpha x + \\beta y} < \\alpha \\map f x + \\beta \\map f y$",
null,
"The geometric interpretation is that any point on the chord drawn on the graph of any convex function always lies above the graph.\n\n## Also defined as\n\nBy setting $\\alpha = t$ and $\\beta = 1 - t$, this can also be written as:\n\n$\\forall x, y \\in I, x \\ne y: \\forall t \\in \\openint 0 1 : \\map f {t x + \\paren {1 - t} y} < t \\map f x + \\paren {1 - t} \\map f y$"
] | [
null,
"https://proofwiki.org/w/images/thumb/e/ef/ConvexFunction1.png/700px-ConvexFunction1.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.58952695,"math_prob":0.99997187,"size":717,"snap":"2021-31-2021-39","text_gpt3_token_len":236,"char_repetition_ratio":0.13464236,"word_repetition_ratio":0.093959734,"special_character_ratio":0.35006973,"punctuation_ratio":0.10191083,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999423,"pos_list":[0,1,2],"im_url_duplicate_count":[null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-08-05T12:32:53Z\",\"WARC-Record-ID\":\"<urn:uuid:e80be7a3-1db2-4a81-bc47-d1a653d41741>\",\"Content-Length\":\"34019\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5a14d502-a775-4c9e-ab81-65fbae3f4b29>\",\"WARC-Concurrent-To\":\"<urn:uuid:77d2a717-5fa6-43af-9088-42808b5f4378>\",\"WARC-IP-Address\":\"172.67.198.93\",\"WARC-Target-URI\":\"https://proofwiki.org/wiki/Definition:Convex_Real_Function/Definition_1/Strictly\",\"WARC-Payload-Digest\":\"sha1:SMMETM3AYGP2DTGDPWL4APA4NOKBDXM3\",\"WARC-Block-Digest\":\"sha1:UDHXX4PVA4TNMCOXDIR37ZIRFZKFI6AG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046155529.97_warc_CC-MAIN-20210805095314-20210805125314-00047.warc.gz\"}"} |
https://gourocklawnsllp.com/minimum-jumps-to-traverse-all-integers-in-range-1-n-such-that-integer-i-can-jump-i-steps/ | [
"Tuesday, 19 Oct 2021\n\n# Minimum jumps to traverse all integers in range [1, N] such that integer i can jump i steps\n\nGiven an integer N, the task is to find the minimum steps to visit all integers in the range [1, N] by selecting any integer and jump i steps at every ith jump.Note: It is possible to revisit an integer more than once. Examples:Input: N = 6Output: 3Explanation: One of the following way is: First start at first number and visit the integers {1, 2, 4}.Now start at 2nd number and visit the integers as {2, 3, 5}Now start at the last number and visit it.Therefor, in total of 3 steps one can visit all the numbers in the range [1, 6]. And also it is the minimum number of steps needed.Input: N = 4Output: 2Naive Approach: The given problem can be solved based on the following observations: In each step the sizes of jumps increases therefore some numbers remains unvisited in a step.Starting from the first number and performing the jumps it can be observed that the maximum size of the jump is the total number of steps needed to visit every number. As in one move, one cannot visit each number between two unvisited numbers.Follow the below steps to solve the problem:Initialize two variables, say count = 1 and res = 0.Traverse over the range [1, N] and increment i by count and update res as res =max(res, count) and increment count by 1.After completing the above steps print the res.Below is the implementation of the above approach:C++#include using namespace std; int minSteps(int N){ int count = 1, res = 0; for (int i = 1; i"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8771824,"math_prob":0.9942948,"size":1526,"snap":"2021-43-2021-49","text_gpt3_token_len":373,"char_repetition_ratio":0.14914586,"word_repetition_ratio":0.0073260074,"special_character_ratio":0.24180865,"punctuation_ratio":0.12538226,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99333704,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-19T21:09:47Z\",\"WARC-Record-ID\":\"<urn:uuid:146c8420-fd13-4f37-9c42-4f3569a72377>\",\"Content-Length\":\"42625\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8b1232c0-ee52-4cf7-a740-44ae1b059649>\",\"WARC-Concurrent-To\":\"<urn:uuid:3471566d-c90d-4d66-beb8-f154b7b7a92d>\",\"WARC-IP-Address\":\"104.21.14.17\",\"WARC-Target-URI\":\"https://gourocklawnsllp.com/minimum-jumps-to-traverse-all-integers-in-range-1-n-such-that-integer-i-can-jump-i-steps/\",\"WARC-Payload-Digest\":\"sha1:C7LL3UXWUCV3UGNULOY2FIQN3CUZUKYM\",\"WARC-Block-Digest\":\"sha1:WR27WYOWXWOSQ3ZYKA2XBZGSQ5LPKW3N\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585281.35_warc_CC-MAIN-20211019202148-20211019232148-00563.warc.gz\"}"} |
https://de.scribd.com/document/66806060/c-Notes-Complet | [
"Sie sind auf Seite 1von 144\n\n# C\n\n## #include <stdio.h> Function <printf Scanf\n\nC++\n#include <iostream.h> cout. in are objects cout << Hello; Insertion operator Or Insertors Cout << value = <<a; No need of format specefiers In C++ in >>a; Extraction operator Or Extractor In >>a >>b; Cascading of extractor; cout << Hello \\n User; Or Cout << Hello <<end <<User; in C++ default return type is an integer\n\n## In C, default return type Is void\n\nHISTORY OF C++\nYear 1982 Developed Bjarne stroustrap Lab Bell Labs Company At & T C is procedure oriented language Easy & fast programming in C. Logics can be easily developed Developed. C is object oriented language C++ closely models sear world problems\n\n## CLASSES AND OBJECTS\n\nIn structure of C :- only data can be member of structure and not functions All member of structure of are public by default In class of C++ data + functions accessing those data are member of class and All member of class are private by default Class stud { Int roll; Char grade; Float par; Public: Void get( ); Void show( ); }; Void stud : : get( ) { Cout << enter roll. Grade and per; Cin>>roll>> grade >> per; } Void stud : : show { Cout <<roll << << grade << <<per<< end1; } Void main( ) { Stud s; S. get ( ); s. show( ); } get show roll grade Per Function are never seplicated there is only one copy of function no matter now many objects are created only once memory is allocated to functions for all objects where as multiple copies of data are created for multiple objects.\n\n: : Scope resolution operator helps compiler to identify functions of which class if two classes have the same name. Q. 1 wap to add two numbers give by user Class add { Int a, b, c Public : void get( ); Void sum( ); Void show( ); }; Void add : : get ( ) { Cout << Enter no; Cin >> a >>b; } Void add : : sum( ) { C= a+b; } Void add : : show( ) { Cout << Numbers are = << a << << b; Cout << sum = <<c; } Void main( ) { Add obj; Obj. get( ); Obj.sum( ); Obj. show( ); Getch ( ); } C++ (Terminology) 1. objects 2. data members 3. member function 4. function call OOPs (Terminology) instances properties & attributes methods & behaviors message passing\n\n## BASIC PRINCIPLES OF OOP\n\nENCAPSULATION Word has been derived from a word capsule which means multiple medicines packed in one single unit. Similarly in a software there are two major unit data & functions acting on that data since functions and data are related entities it is advisable to score them within a single unit. Thus according to oops Encapsulation means building or wrapping up of data members and f n acting on those data members with in a single unit. Since a class allows us to hold data & functions within it are say that it supports the principle of encapsulation. POLYMORPHISM The word polymorphism is derived from combination of two words poly mcarning multiple and morph means the ability to have multiple forms. In other words if an entity can acquire multiple forms in different situation we say that its behaviors is polymorphic for eg in c++ it is possible for programmer to redife operator + in such a way that it can be used to add two integers as well as at the same time it can add two abject or two strings. So it a programmer define + to behave in above mentioned member we say that + behaves polymorphically . In C++, polymorphism is implemented in two way:(1). Compile time polymorphism :- function overloading and operator Overloading (ii). RunTime polymorphism:- Virtual functions, pure virtual functions Ads tract classes (iii). Inheritance :- to inherit means to acquire properties and fea of an Existing entity into a newly created entity Like a child acquire properties of his or has Parent, similary when designing software if a programmer wishes then he can acquire the fratures (data and member function) of an existing class in his own class with the help of inheritance. The class which gets inherited is known as bass class and the class. Thus by inheriting the members of base class it becomes possible to access them through the objects of derive class. The major advantage offered by principle of Inheritance is rcusability and reliability\n\nCREATING PARAMETERIZED FUNCTIONS WITHIN CLASSES Class Emp { Int age; Char name; Float salary; Public: Void set (int, char *, float); Void show( ) }; Void Emp: : set (int I, char *j, float K) { Age =I; Strcpy (name, j); Salary =k; } Void Emp : : show( ) { Cout <<age << <<name << salary; } ASSIGNMENT Wap to create a class c/a string. The class must contain a character array of size 20. class should have following fucnitons: (1). Getstring( ) which accept string as parameter + stores it in array str. (2). Show string( ) which display the string stored in str[]. (3). Reversestring ( ) which reverses the string stored in str[]. (4). Add string( ) which accepts string as parameter and if possible Concatenates in str. Write an object oriented program to calculate the factorial of a given by user. Provide separated function for initialization, inputting, calculating and display. Class Fact {\n\nInt I, n; Public: Void init( ); Void getno( ); Void calculate( ); Void display( ); }; Void fact : : init( ) { F=1; } Void fact : : getno( ) { Cout << Enter a number; Cin >> n; } Void fact : : calculate( ) { Int I; For (i=1; i<n; i++) F=f*:; } Void fact : : display( ) { Cout << Number = << n; Cout << factorial= <<f; } Void main( ) { Fact obj ; Obj. init( ); Obj.getno( ); Obj.get calculate( ); Obj. display( ); }\n\nCONTRUCTOR\nConstructor :- are special member f n of a class with the following properties\n\n1. They have the same name as that of the class 2. They dont have any return type not even void 3. They are automatically called as soon as the object of class is created i.e. their calling is implicit 4. They cant be declared as static 5. They cant be declared as virtual Any class which does not contain any constructor then compiler from itself supplier a constructor but it is hidden. For programmer these constructors are default Class fact {\n\n} Constructors are automatically called even when not declared, at that time default constructors are called. Default contractors are destroyed as soon as we declare constructor Example :Class fact { Int I, b; Public: Fact( ) { F=1; } Void getno( ); Void calculate( ); Void display( ); }; Void fact : : getno( ) { cout << enter a no; an >> n; }\n\nVoid fact : : calculate( ) { For (i=1; i<=n; i++) F=f*I; } Void fact : : display ( ) { Cout << no= <<n<<end1; Cout << factorial= <<f; } Void main( ) { Fact obj; Obj.getno( ); Obj.calculate( ); Obj.display( 0; }\n\nPARAMETERIZED CONSTRUCTOR\nClass Emp { Int age; Char name; Flaot sal; Public: Emp (int, char *. Float); Void show( ); }; Emp : : Emp (int I, char *j, float k) { Age =I; Strcpy (name, j); Sal=k; } Void Emp : : show( ) { Cout << age << << name<< <<sal; }\n\n1. No of Arguments 2. Type of Argument 3. Order of argument Int show ( ) Not allowed (return type differs) Void show( ) Compiler does not consider return type, other wise constructor can never be overloaded as they have no returns type. Function overloading allows some function name in same scope but there should be some different in functions. Void vol (int); Void vol (int, int, int ); Void main ( ) { Int choice; Cout << select a figure; Cout << (1) cube \\n (2) cuboid; cin >> choice ( ); Swith (choice) { Case 1: Int s; Cout << enter side of cube; Cin >> s; Vol (s); Break; Case 2: Int l.\\, b, h; Cout << Enter l, band h of alboid; Cin >> l >> b >> h; Vol (l,b, h); Breck; Default: Cout < wrong choice; Void show (int, int, int) void show (int) void show (double) void show (int, double) Void show (double, int)\n\n} Void vol (int s) { Cout << value of cube = << s*s*s; } Void vol (int l, int b, int h) { Cout << value of alboid= << l*b*h; } ADVANTAGES OF FUNCTION OVERLODING 1. Overload of remembering the name is transformed from programmer to compiler 2. it develops symmetry & incrrases the readability of program CONSTRUCTURE OVERLOADING Class Box { Int l, b,h; Public: Box( ); //constructor for user defined Box Box (int); //constructor for albe }; Box : : Box( ) { Cout << enter l, b and h of box; Cin >> l >> b>> h; } Box : : Box(int s) { L=b=h=s; } Box : : Box (int I, int j, int k) { L=I; B=j; H=k; }\n\nVoid BOX : : show( ) { Cout << l << <<b<< << h; } Void main ( ) { Box B1; Box B2 (5,7,11); Box B2 (10); B1. show( ); B2. show ( ); B3. show ( ); }\n\nCOPY CONSTRUCTOR\nIt is a special constructor of a class which accepts the reference of the object of its our class as a parameter. It is called by c++ compiler in there situations 1. when ever programmer created an object and at the same time passes another object of the same class as a parameter 2. whenever a f n accepts an object as a parameter by value. 3. whenever a f n returns an object by value. Reference variable:Syntax:- <data type> & <sef-name> = <var-name>; Void main( ) { Int a = 10; Int *p; P=&a; Cout <<*p <<end1; } Void main ( ) { Int a=10; Int &p=a; prowbacks of pointer 1. occupy 2 bytes of memory 2. will be initialized by garbage 3. necessary to initialize before their use 4. very caseful in using indirection operator Advantage of Reference variable 1. we can have n reference variables of one variable\n\n## Cout <<a<< <<p; Cout << &a << <<&p;\n\n2. both variables get interlocked on each Other 3. does not require any memory space it Only reuse the memory of nay variable\n\nReference variable is a technology provided by c++ which allow a programmer to create anew variable which stores (not holds) the memory location of an existing variable. In other words, reference variable is a technique using which programmer can allot multiple mames to same memory location. POINTER TO REMEMBER 1. int &p = a; 2. int 7p =; multiple declarations for the variable p. 3. In case of array, always use pointer. Reference variable can not work with array 4. we can not make array of reference variable int &p C C pass by value pass by reference void swap (int, int) void swap (int*, int*) void main( ) void main( ) cout << Enter2 number; cout << Enter2no; cin >>a>>b; cin >> a >> b; swap (a,b); swap (&a, &b) cout << a << <<b; cout <<a<< <<b; } } Void swap(Int p,intq) void swap (int*p, int*q) } } Int temp; int temp; Temp =p; temp=*p; P=q; *p=*q; Q=temp; *q= temp; } } C++ pass by reference void swap (int7,int&) void main( ) { int a, b; cout << enter2no) cin>>a>>b; swap (a,b); cout <<a<< <<b <<end1; } void wsap(int&pint&q) { int temp; temp=p; P=q; Q=temp; }\n\nNote :- By call it is not possible to call whether it is call by value or call by reference Q. WAP to use a function call maximum which accepts an integer array of size pure as an argument & retirns the largest & smallest element of that array to main. Without changing the arcginal position of element of ht array. Maximum (int a[], int &, int&) Void main( ) { Int a, I, large, small; For (i=0; i<5; i++) { Cout << enter elements of array; Cin >>a[i]; } Maximum (a, large, small); Cout << maximum element= <<large; Cout << smallert element= <<small; } Void maximum (int a, int &max, int & main) { Int f; Max = a; Min=a; For 9j=1; j<5; i++) { If (* (a+j) >max) Max=* (a+j); Else if (* (a+j) <min) Min = *(a+j); } } Class Box { Int l, b, h; Public: Box( );\n\nBox (int); Box (int, int, int); Box (Box &); Void show( ); }; Box : : Box( ) { Cout << enter l, b and h of Box; Cin >> l >> b >> h; } Box : : Box (int S) { L=b=h=s; } Box : : Box (Box &p) { L=p.l; B B=p.b; b1s l, b,h. H=p.h; } Box : : Box (int I, int j, int k) { L=I; B=j; H=k; } Void Box : : show( ) { Cout << L << << b << << h; } Void main ( ) B2. show( ) { Box B1; B3. show( ); Box B2 (10); B4. show( ); Box B2 (5,7,11); } Box B4 (B1); B1. show( 0; Box B2 (10) Box B4 (B1) Box B2 = 10; Box B4 = B;\n\nCall for copy constructor Box B4; B4 = B1 No call for copy constructor use of assignment operator other destination object is already made.\n\nDEFUALT ARGUMENTS\nVoid printline (char =*, int=1) Void main ( ) { Printline ( ); // printline (*, 1) Printline (#); // printline ( #, 1) Printline (!, 10) // printline (1, 10); } Void printline (char ch, int n) { For (int i=1 i<=n; i++) Cout <<ch; } Note;- Default arguments must always be trailing arguments. Printline(50); printline (ASCII of 50, 1); Class stud { Int age; Char grade; Float per; Public: Stud (int, char, float); // stud (int=o, char= ; float=0.0); Void get( ); Void show( ); }; Stud : : stud (int I, char j, float k) { Age =I; Char j; Per k;\n\n} Void std : : get( 0 { Cout << Enter age, grade & per ; Cin >> age >> grade >> per; } Void stud : : show( 0 { Stud main( ) { Stud t ( 15, A, 75); Stud p; p. get( ); t. show( ); p. show( 0; } Note:- In class at same time it is not possible to have default argument constructor and default constructor.\n\nDESTRUCTOR\nAre special member f n of class which have same name as that of the class but preceded with the symbol of field ( ). They are automatically called whenever an object goes out of scope & is about to be destroyed When an object is created first function which is automatically called is constructor & when object ends its life a last function is called to free occupied memory is destructor Class Emp { Int age; Char name ; Float sal; Public: Emp( ); Emp( ); Void show( 0; };\n\nEmp : : emp( ) { Cout << Enter age, name & sal; Cin >> age >> name >> sal; } Void emp : : show( ) Note { A class by default has 3 built in Cout <<age << name <<sal; fucntion } 1. Constructor Emp : : Emp( ) 2. copy constructor { 3. Destructor Cout << Object destroyed; } Note:- Destructor are always called in Void main( ) reverse order. { Emp e, f,; E. show( ); f. show( ); } Create a class c/o student having 3 data members (i) for storing Roll no. (ii) for storing name (iii) for storing marks in subjects The member for storing name should be char *, member for storing marks should be int *; Create constructor of class which prompts user to enter length of name, allocates sufficient memory & accepts the name from user. The same constructor asks the user how many subject he want to enter, againallocate sufficient memory for that & accepts the marks given by user. Finally provide appropriate member f n which display % & grade of student. At the end define the destructor in proper manner to deallocate memory allocated by constructor. Class student { Int roll non; Char * name, grade; Int *marks; Float per;\n\nPublic: Student ( ); Void get( 0; Void calculate( ); Void show( 0 Student ( ) } Student : : student ( ) { Cout << how many letters; Cin >>n; Name = (int *) malloc (n+1) * size of (char)); If (p= = null) Exit(1); Get( ); } Void student : : get( ) { Cout << enter roll no; Cin >> roll ; Cout << enter name; Cin >> name; For (i=0; i<n; i++) { Cout << Enter marks; Cin >> * (q+i); } } Void student : : calculate ( ) { Per=0; For (int i=0; i<n; i++) Per+= * (q+i) Per / = n; If (per >=75) Grade = A; Else if (per >=60) Grade = B;\n\nElse grad = f; } Student : : student ( ) { Free (Q); Free(p); } Void student : : show( ) { } Void main( ) { Student s; s. calculate( ); s. show( ); Note In C++, cin dont accepts the space Void main( ) { Char str; Cout << enter name; Cin >> str; } Cin.getline (str, 80); //Enter Key /9. Prototype of get line ( ) Void get line (char *, int ); Member of istream header file COMPARISION BETWEEN CONSTRUCTOR & DESTRUCTOR CONSTRUCTOR 1. are special member f n of a class having same name as that of the class. 2. constructors are automatically called as soon as object of class is created i.e. their calling is implicit 3. constructors can be parameterized. 4. since constructors accepts parameters they can be overloaded & thus a class can have multiple constructors.\n\n5. constructors are called in the orders in which objects are created. 6. constructors can not be inherited. 7. constructors can not be declared as virtual. DESTRUCTOR\n1. are special member fn of class having same name as that of the class but\n\n2. 3.\n4.\n\n5. 6. 7.\n\npreceded with the symbol of tilde a destructor is also automatically called but whenever an object is about to be destructor or goes out of scope thus these calling is implicit. we can not pass parameters to a destructor. as they dont accept parameters we can not overload then, thus a class can not have multiple destructor calling of destructor is always done in reverse or des of the creation of objects. inheriting of destructor is also not possible we can have virtual destructor in a class.\n\nINLINE FUCNTIONS\nInline functions Are those fn whose call is replaced by their body during compilation. Declaring a fn as inline has two major advantages.\n1. compiler does not has to leave the calling fn as it finds definition of fn\n\nbeing called their it self. 2. The overhead of maintaing the stack in belureen function call is seduced Thus declaring a function as inline increases the execution Speed, s\\reuces execution time & thus enhances the overall efficiency of program. But two ptr must be considered before declaring a fn as inline\n1. The definition of inline function must appear before its call i.e. if a non\n\nmember fn is to be made inline then its declaration & definition must appear about main as a single unit. 2. the body of inline fn must be short & small 3. it should not contain any complex keywords or keyword or statement like for, if, while, dowhile do,\n\nif any of the aboul sules are violated then compiler ignores the keyword inline an treats the fn as offline or normal one. Moreover a class can have two kinds of inline functions 1.Implicit Inline :- fn efined within the body of class 2. Explicit Inline :- fn declared within the class but defined outside the class preceded with the keyword inline. Thus at the end we can say, that declaring a fn as line is a request made by programmer which the later may agrel or deny. Class Emp { Char name ; Int age; Float sal; Public : Void get( ) Implicit { Inline cout << enter age, name and sal; Cin >> age >> name >> sal; } Void show( ); }; Explicit Inline inline void Emp : : show( ) { Cout << age << <<name << <<sal; }\n\nVoid main( ) { Emp E; Int I; For (i=0; i<5; i++) E[i]. get( ); For (i=0; i<5; i++) E[i] show( )(; }\n\nSTORAGE CLASSES\nStorage class decides the following 1. Default value 2. life (persistence) 3. scope (accessibility) storage default 1.auto garbage (automatic) 2. static 3. register 4. global Zero garbage zero li ti scope limited to limited to declaration block their Declaration Block throughout The program same as auto same as auto throughout throughout the program The program Static void display ( ); void main ( ) { display( ); display ( ); display( ); } void display( 0 { static int a; cout <<a<<end1; a++; } o/p o 1 2\n\nAuto Void display ( ); Void main( ) { Display( ); Display ( ); Display ( ); } Void display( ) { Cout <<a<<end1; A++; } o/p 3 garbage values will be generated\n\nSTATIC DATA\nStatic data members with in the class Class data { Int a ; Static int b; }; a int Data : : b; d1 D2 static variable dont wait for creation of object, before object creation memory is allocated for then of class 1. A static data member has a single copy shared amongst each object of that class. On other hand if a class has non static data then every object of that class has its own copy of non static data 2. static data members arrive in memory even before objects of the class get created. Because of their feature it becomes necessary to redefine them outside the class so that they can be given space in RAM without the help of object. 3. since they are not related to any particular object of the class & have a single copy in the entire class, a static data member never contributes in the size of object. In other words size of object is always calculated by summing up sizes of non static members. WAP to create a class c/o student having two data members roll & count. The member roll should keep the roll no. allocated to every student object while the members count should keep track of total no. of student objects currently in the memory. Finally provide appropriate members fn to initialize & display the values of Roll no. & count. Class student { Int roll; Static int count; Public: Student (int i) {\n\n0 a\n\nRoll=I; ++ count ; } Void show( ) { Cout << Roll no= <<roll<<end1; Cout << total objects alive = <<count; } Student ( ) { Count; } }; Int student : : count; Void main( ) { Student S=10; Student P=20; Student Q=30; S. show( ); P. show( ); Q. show( ); { Student X = 40; Student Y=50; X. show( ); Y. show( ); } }\n\nSTATIC FUCNITON\n<obj-name>. <function-name> Syntax <class-name> : : <function-name> DRAWBACKS OF STATIC FUNCTIONS\n1. static f n can never non static data numbers. But reverse is true.\n\n## 2. constructor & Destructor can never be made or declared as static.\n\nClass student { Int roll; Static int count; Public : Student (int i) { Roll =I; ++arunt; } Static void show( ) { Count<<Roll.X Count << total objects alive = <<count; } Student ( ) { Count; } }; Int student : : count; Void main( ) { Student S(10). P(20), Q(30); Student : : show( ); { Student X(40), Y(50); Student : : show( ); } Student : : show( ); } Student : : show ( ); Getch( ); } Program :- Create a class c/a employa having data members for storing age, name & id. Also provide another member which stores the next id which will be allocated to next incoming object. Provide a member f n to initialize\n\nthis variable with its initial value & also provide appropriate constructor & member f n for initialized all other members of class. Class Emp { Int age; Char name ; Int id; Static int n-id; Public: Static void init( ) { Id=n-id; } Emp( ) { Cout << enter age, name; Cin >> age >> name; Id=n-id; ++n-id; } Void show( ) { Cout <<age<<name<<id; } Solution :Class Emp { Int hae; Char name; Int id; Static int n-id; Public: Employce (int, char *); Static void init id( ); Void show( ); Static void get next id( );\n\nEmployce( ); }; Employa :: int next-id; Employce : : employce (int I, char * j) { Age =I; Strcpy (name if); Id = next=id; } Void employce : : init id( ) { Next-id=1; } Void employce :: show( ) { Cout <<age<< <name << << id<<end1; } Void employce : : get next id( ) { Cout << next object wil be given id= <<next id; } Employce : : employce( ) { Next id; } Void main( ) { Employce : : init id( ); Employce : : get next id( ); { Employce { (25, Rahul); Employce f (30, vipin); e. show( ); f. show( ); empoloyce : : get next id( ); } Emloyce : : get next id( ); }\n\nTHIS POINTER\nClass Emp { Int age; Char name; Float sal; Public : Void get( ) { Cout << enter age, name & sal: Cin >> age>> name>>sal; Cout << Add of calling object = this; } Void show( ) { Cout <<age<<name<<sal; Cout << Add of calling object= <<this; } } Void main( ) { Emp E, f; e.get( ); f.get( ); e.show( ); f.show( ); }\n1. Every member f n of class has this pointer.\n\n2. No need to declare & initialize this pointer. Compiler initialize it with base address of calling object.\n\nThis Pointer\n\nThis is a special pointer available in every member foundation of a class except static member f n of a class. Whenever the object of a class puts a call to any non static member f n of that class, the this pointer available in that member f n implicitly starts pointing to the address of calling object. Thus we can say that a this pointer always points or referr to the aurrent object. By default class has 3 this pointer. Constructor copy constructor Destructor Accessing Data Member Using this Class Box { Int l, b, h; Public: Box( ); Box (int, int, int( ); Box (Box &); Void show( ); }; Box : : Box( ) { Cout << Enter l, b, and h; Cin >> l >> b >> h; } Box : : Box (int I, int j, int k) { L=I; thisl=I; B=j; thisb=j; H=k; thish=k; } Box : ; Box (Box &p) { L=p.l; *this=p; B=p.b; H=p.h }\n\nVoid Box : : show( ) { Cout << l << <<b<< <<h; Cout << this l << thisb << this h; } Void main( ) { Box B1; Box B2 (5, 7, 11); Box B3 = B1; B1. show( ); B2. show( ); B3. show( ); }\n\nLIMITATIONS OF THIS\nThis pointer always points to calling object thus it can not incremented or decremented. It is a constant pointer. Box = *q; Q = this+1 valid Q=++this notvalid USING THE CONST KEYWORD 1. 2. 3. 4. Const variable pointer parameters C data members of class 5. member f n of a class\n\nC++\n\nConst Variables Void main ( ) { Const float pi=3.14 Const variable are initialized at the pt of declaration menns they are read only variables & their values can not be manipulated.\n\n## Cout <<pi++ X not valid.\n\nConst Pointer Const int *p => pis a pointer a const int void main( ) { Int b=50; Int a =10; Const int *p; or int const *p P= &a; *p=20 P=*b; } This pointer comes in this category Int * const P P is a const pointer to an integer means P cant be incremented or decremented. Selin they are initialized at the time of declaration. Void main( ) { Int b=50; Int a=10; Int * const P=&a; *p=20; P=&b Const parameter Int strlen (const char * p) { Int I; For (i=0; *(p+i); i++) *p= A X // value of parameter can not be changed } Box (const Box &p) // prototype of default copy constructor { L=p.l ++;\n\nB=p.b H=p.h } Const data members of class Class circle { Int rad; Const flaot pi X; // const var are initialized at the point of declaration thus this is not valid statement Circle (int r) : pie (3.14), rad(r) Initialiser Const Member Function:If we want that a f n do not change value of class member than we make them const. Class circle { Int r; Float a; Void get (int x) { R=x; } Void cal-area ( ) { A=r * r * r * 3.14; } Void show ( ) const { Count << Rad= <<r++ << Area= <<a; } If const is there then value of r will not be changed other wise at last it will be incremented.\n\nINITIALISES\nInialises It is a special syntax allotted by C++ used to initialize there thing . 1. const data members with in the class. 2. reference variables 3. calling parameterized constructor of the base class from derived class. They are always written in front of the constructors Orders of initializes must be same as that of the order of data members Class Data { Int x, y; Public: Data (int I, int j): x(i), y(j); { } Data (int I, int j): x(i+j), y(x+i); { } } Ist case Data D (s,10) X=5 Y=10 Data D(7,14) X=21 Y=28 PASSING OBEJCTS USING POINTER Class Box { Int l, b, h; Public: Void get( ) {\n\n// same as previous } Int compare (Box *) }; Int Box : : compare (Box *p) { Int x, y; X= l*b*h; Y=P l* pb* ph; If (x= = y) Return (0); Else if (x>y) Return (1); Else Return (+) } Void main( ) { Box B1, B2; B1. get( ); B2. get( ); B1. show( ); B2. show( ); Int ans ; And = B1. compare (&B2);\n\nif (and= =0) cout << equal; else if (ans >=1) cout << B1 is greater; else cout << B2 is greater }\n\nPassing Objects by Reference Using Reference Variable Class Box { Int l, b, h; Public: Void get( ) { // same as previous; } Void show( ) { // same as previous; }\n\nInt compare (Box &); }; Int Box : : compare (const Box &P) { Int x, y; X = l*b*h; Y = p.l*p.b*p.h; If (x==y) Return (0); Else if (x > y) Return (1); Else Return (-1); } Void main( ) { Box B1, B2; B1.get( ); B2.get( ); B1.show( ); B2.show( ); Int and; Ans=B1. compare (B2); If (ans= =0) Cout << equal; else if (ans >1) cout << B1 is greater; else cout << B2 is greater; }\n\nFRIEND FUNCTIONS\nA friend function is a special function which despite of not being member f n of class has full access to the private, protected members of the class.\n\n## ATTRIBUTES OF FRIEND FUNCTIONS\n\n1. If a fn is to be made friend of a class then it has to be declared\n\nwithin the body of the class preceded with the keyword friend.\n\n## 2. whenever a friend f n is defined neither, the name of class nor scope\n\nresolution operator appears in its definition. Moreour the keyword friend also does not appear. 3. whenever a friend fn is called, neither the name of the object nor dot operator appears toward its left. It may however accept the object as a parameter whose members it wants to access. 4. It does not matter in which diction of class we declare a friend function as we can always access it from any portion of program 5. If a friend fn want to manipulate the values of data members of an object it veds the reference of object to be passed as parameter Example:- // Demonstrating the use of friend function Class student { Int roll; Char grade; Float per; Public: Void get( ); Friend void show (student); }; Void student : : get( ) { Cout << enter roll. Grade & per; Cin >> roll>> grade>>per; } Void show (student P) { Cout << p.roll<< << P.grade<< <<P.per; } Void main( ) { Student S; S. get( ); Show(S); } Demonstrating the use of friend function\n\nClass student { Int roll; Char grade; Float per; Public: Friend void get (student &); Void show( ); }; Void get (student & P) { Cout << enter roll, grade & per; Cin >> P.roll >> P.grade >> P.per; } Void student : : show( ) { Cout << roll << grade << per; } Void main( ) { Student S; Void get(S); s. show( ); } Class student { Int roll; Char grade; Float per; Public: Friend void get (student *P); Void show( ); }; Void get (student *q) { Cout << enter roll, grade & per; Cin >>q roll >> q grade >> q per; } Void student : : show ( )\n\n{ Cout << roll << grade << per; } Void main( ) { Student s; Get (&s0; s.show( ); } A FUNCTION BEING FRIEND OF TWO CLASSES Class Beta; // forward declaration Class Alpha { Int x; Public: Void get( ) { Cout << enter x=; Cin >> x; } Friend void compare (Alpha, Beta); }; Class Beta { Int y; Public: Void set( ) { Cout << enter y=; Cin >>y; } Friend void compare (Alpha, Beta); } Void compare (Alpha A, Beta B) { If (A, X > B. Y) Cout << greater= << A, X;\n\nElse if (B.Y> A.X) Cout << Greater= << B.Y; Else Cout << equal; } Void main( ) { Alpha Obj1; Beta Obj2; Obj1. get( ); Obj2. set( ); Compare (obj1, obj2); }\nADDING TWO OBJECTS OF A CLASS USING MEMEBR FUCNTIONS\n\nClass Distance { Int feet; Int in cher; Public: Void get( ) { Cout << enter feet & inches; Cin >> feet >> inches; } Distance add (Distance &) Void show( ) { Cout << feet= << feet<< inches= <<inches; } }; Distance Distance : : add (Distance &P) { Distance temp; Temp. feet = feet +P. feet; Temp.inches= inches + P.inches; If (temp. inches>=12) { temp. feer += temp. inches/12;\n\ntemp. inches % =12 } Return (temp); } Void main( ) { Distance D1, D2, D3; D1. get( ); D2, get( ); D3 = D1. add (d2); D3. show( ); } Class Distance { Int feet, inches; Public: Void get( 0 { // same as previous } Void add (distance &, distance &); Void show( ) { // same as previous; } }; Void distance : : add (distance &d1, distance &d2) { Feet = d1. feet + d2, feet; Inches = d1. inches + d2, inches If (inches >12) { Feet = feet + inches /12; Inches % = 12; } } Void main( ) { Distance d1, d2, d3; d1.get( );, d2.get( );\n\n## D3, add (d1, d2); D3. show( ); }\n\nADDING TWO AF CLASS UISNG PRIED FUNCTION Class distance { Int feet; Int inches; Public: Void get( ) { Void << enter feet & inches; Cin >>feet >> inches; } Void show( ) { //same } Friend distance add (distance, distance ): }; Distance add (Distance P, Distance Q) { Distance temp; Temp. feet = P.feet +Q.ffet; Temp. inches= P.inches + Q.inches; If (temp. inches>=12) { Temp.feet += temp. inches/12; Temp. inches % = 12; } Return (temp); } Void main( ) { Distance D1, D2, D3; D1.get( );\n\n## D2.get( ); D3 = add (D1, D2) D3.show( ); }\n\n{ Count = c } Void operator ++( ); Void show( ) { Cout << count <<end1; } }; Void counter : : operator ++ ( ) { ++ count; } Void main( ) { Counter a = 10; // Single parameterize constractor a.show( ); ++ a; // a. operator + +( ); a. show( ); } Class counter { Int count; Public: Counter( ) { Count = 0; } Counter (int c) { Count = c; } Counter operator ++ ( ); Void show( ) { Cout << count; } }; Counter &\n\nOr Counter counter : : operator ++ ( ) { Counter temp; ++ count; or Temp. count = count; ++ count Return (temp); return (* this); } Void main( ) { Counter c1 = 10, c2; C1. show( ); C2 = ++ c1; C1. show( ); C2. show ( ); } Overloading Of Post Increment Operator Using Member Function Of this class Class counter { Int count; Public: Counter( ) { Count=0; } Counter (int C) { Count = c; } Counter operator ++ (int); Void show( ) { Cout << count; } }; Counter counter : : operator ++ (int) {\n\nCounter temp; Temp. count = count++; Return (temp); } Void main( ) { Coun c1 = 10, c2; C2 = c1++; C1. show( ); C2. show( ); } Overloading Unary Operator As Friend Of The Class Class counter { Int count; Public: Counter( 0 { Count=0; } Counter (int i) { Count = I; } Void show( ) { Cout << count << end1; } Friend void operator ++ (counter &); }; Void operator ++ (counter &C) { ++ c.count Return (C); } Void main ( ) Or counter temp; temp. count = c.count++; return (temp);\n\n{ Counter C1 = 10, C2; C1. show ( ); C2. = ++(); C1. show ( ); C2. show( ); } Note:when unary operators are overloading using member function then dont accept any argument but when they are overloaded using friend fn then they have one argument of type object (class). Class counter { Int count; Public: Friend counter operator ++ (counter &); Counter { Count =0; } Count (int i) { Count = I; { Void show( ) { Cout << count << end1; } }; Void main( ) counter & operator ++(counter &c) { { Counter c1 =10, c2; counter temp; C1. show( 0; temp. count = C, count; C2= ++ C1; C1. show( ); return (temp); C2. show( ); } } Overloading Binary Operators As Member Functions Of The Class\n\nD3 = D1 + D2; D3 = D1. operator + (D2); Class Distance { Int feet, inches; Public: Void get( ) { Count << enter feet and inches; Cin >> feet >> inches; } Void show( ) { Cout << feet << inches; } Distance operator + (Distance); }; Distance Disttacne : : operator + (Distance P) { Distance t; t.feet = feet + P. feet t. inches = inches + P. inches; if (t.inches>= /12) { t.feet + = t.inches/12; t.inches % = 12; } Return (t); } Void main ( ) { Distance D1, D2, D3; D1, get( ); D2, get( ); D3 = D1+D2; D3. show( ); Getch( ); }\n\nOverloading Binary Operator Using Friend Function Class Distance { Int feet, inches; Public: Void get( ) { Cout << enter feet and inches: Cin >> feet>> inches; } Void show( ) { Cout << feet<< inches; } Friend Distance operator + (Distance, Distance); }; Distance Operator + (Distance p, Distance Q) { Distance t; t.feet = P.feet +Q.feet; t.inches = P.inches+ Q.inches; if (t. feet>=12) { t.feet = t.feet +t.inches/12; t.inches % = 12; } Return (t); } Void main ( ) { Distance D1, D2, D3; D1.get( ); D2.get( ); D3=D1+d2; D3.show( ); } Assignment:D2 = D1+n\n\nD3 = n + D1 Class Distance { Int feet, inches; Public: Void get( ) { // same as previous } Void show( ) { // same as previous } Distance Distance : : operator + (int n) { Distance temp; Temp feet = feet + n; Temp. inches = inches + n; If (temp. inches > = 12) { Temp. feet + = temp. inches / 12; Temp. inches % = 12; } Return (temp); } Distance operator + (int P, Distance Q) { Distance temp; Temp.feet = P+Q.feet; Temp. inches= P+Q. inches; If (temp. inches> = 12) { Temp.feet = temp. feet + temp. inches/12; Temp. inches % = 12; } Return (temp); } Void main( ) {\n\nDistance D1, D2, D3; Int n; Cout << enter an integer; Cin >> n; D1. get ( ); I D2 = D1+n; // can be done using member fn & friend fn D2. show( ); Cout << Enter an integer; Cin >>n; D3 = n+D1; // not possible through member fn D3. show( );\n\nII }\n\nNote:II can not be done using member function becoz n is an integer and only an object can call function not integer. So I call can be made using member function as well as friend function but II can be done only friend function Assignment :D1+=D2 using member fn & friend fn\n\nOverloading Relational Operators If (D1= = D2) Compiler if (D1.operator = = (D2) ) Class Distance { Int feet, inches; Public: Void get( ) { Cout << enter feet & inches; Cin >> feet >> inches; } Void show( ) { Cout << feet << << inches; }\n\nInt operator = = (distance D); }; Int Distance : : operator = = (Distance D) { Int x, y; X= feet &12 + inches; Y = D. feet *12 + D.inches; If (x= = y) Return (1); Else Return (0); } Void main( 0 { Distance D1, D2; D1. get( ); D2. get( ); D1. show( ); D2.show( ); If (D1= = D2) Cout << equal; Else Cout << not equal; } Assignment:Modify the above program so that now your cosle prents either of there message i. Object are equal. ii. Dl is greater. iii. D2 is greater Use minimum possible operator Overloading Binary Operator On String Class string { Char str; Public:\n\nString ( ) { Str = \\0; } String (char *P); { Strcpy (str, P); } Void show( ) { Cout << str; } String operator + (string S) }; String string : : operator + (string S) { String t; Int I, j; For (i=0; str[i]; i++) t.str[i] = str[i]; for (j=0; s.str[j]; i++, j++) t.str[i] = s.str[j]; return (t); } Void main( ) { String s1= Hello; String s2 = How are you ?; String s3; S3 = s1 +s2; S1. show( ), s2. show( ); }\n\ns3. show( );\n\nAssignment :WAP which compares two string objects & checks which one is greater. Operators Which Can Not Be Overloaded\n1>\n\n:: (scope resolution operator) it already works on classes we overload only those operators which works on primitine and not on non prin 3> ?: (conditional Operator) It requires there arguments overloading of operators atmost can take 2 argument 4> >* Pointer to member operator 5> Size of operator\n2>\n\n## DYNAMIC MEMORY ALLOCATION (DMA)\n\nMALLOC 1. It is a function 2. Necessary to include the header file 3. It takes no, of bytes as parameter 4. brings memory from local heap 5. Maximum memory that malloc can allocate = 64KB 6. Malloc retirens the address of Allocate block in from of (void*) Thus its return value has to be type Casted accordingly 7. when malloc fails it returns null 8. Malloc only creates a new block But not call the constructor of the Class for initialized the data member Of that block Such NEW It is a operator header file not required It takes no. of elements Required as parameter bring has no such memory store New has no such memory limitation New automatically converts according to the datatype given thus no need to explicitly type cast its return value. when new fails it returns zero on the other hand operator new can allocate memory as well as call the class for initializing the members of block created. Constructors are called dynamic Constructors. Syntax for new;(i) (ii) new <data type> new <data-type> [int];\n\nAllocations Of New :1. int * p; P= new int (10); Cout << *P; Delete P; new allows programmer to initialize the memory allocated but malloc does not provide nay such feature\n\n2000 P 2. int &P; P=new int ; Delete [ ] P; 3000 P Syntax for deletes .. 3000 2000\n\n10 2002\n\nHere value 10 denotes that programmer is interested in creating memory for elements\n\n1. Delete <ptr-name>; // deletes one block of data type 2. Delete [ ] <ptr-name>; // deletes whole allocated memory Eg:- float *P; Delete P; // four bytes free\n\nDelete is a request to OS it does not work immediately where as new creates memory immediately WAP which creates a dynamic array of user defined size Accept values from user in that array & then find largest element in that array along with its position. Finally it should display the largest element & delete the array. Void main ( ) {\n\nInt n; Cout << How many integers; Cin >> n; Int *P; P+ new int [n]; If (P= = 0) { Cout << Insufficient memory; Getch( ); Exit(1); } For (int i=0; i<n; i++) { Cout << enter element; Cin >> * (p=i); } Int max = *p; Int pos = 0; For (i=1; i<n; i++) { If (* (p+i) > max) { Max = * (p+i); Pos = I; } } Cout << largest element= << max; Cout << Position = << pos; Getch( ); Delete [ ] P; } WAP to create a class c/a string having a character pointer P. provide constructor in the class which accept int as parameter and allocates dynamic array of characters of special size. Now provide following member fn in the class. 1. get string which prompts users to enter string in dynamic array. 2. show string which display the string 3. Reverse string which accept an object as parameter and copies the\n\nReverse of string stored in calling object in object Passed as parameter. Class string { Char *p; Public : String (int); String ( ); Void show str( ); Void reverse str ( str &); Void get str( ); }; String : : string (int size) { P = new char [size +1]; If (P= =0) Exit (1); N = size +1; } Void string : : get str( ) { Cout << enter string; Cin. Getline (P,n); } Void string : : reverse str (string &s) { Int I, j; For (i=0, j=strlen (p)-1; j>=0; j--, i++) { s.p[i] = p[j]; } S. P [i] = \\0; } Void string : : show str( ) { Cout << P << end1; }\n\nString : : string ( ) { Delete [ ]P; Cout << Array destroyed; } Void main ( ) { Int n; Cout << How may character; Cin >> n String S1 =n; String S2 =n; S1. getstr ( ); S1. reverse str(S2); S1. show str( ); S2. show str( ); }\n\nDYNAMIC OBJECTS\nAllocating memory for class with the help of new. Class Emp { Int age; Char name; Float sal; Public: Void get( ) { Cout << enter age, name & sal; Cin >> age >> name >> sal; } Void show( ) { Cout << age << << name<< <<sal; } }; Void main( ) {\n\nEmp *P; P=new emp; If (p= =0) { Cout << Memory Insufficient; Exit (1); } p get( ); p show( ); getch ( ); delete p; }\n\n## ARRAY OF DYNAMIC OBJECTS\n\nModify the previous code so that now your program creates an array of a objects where n is given by user. Then it should accept values for each object and finally display them. Class Emp { Int age; Char anem; Float sal; Void get( ) { Cout << enter are, name and sal; Cin >> age>> name>> sal; } Void show( ) { Cout <<age<< name<<sal<<end1; } };\n\nVoid main( ) { Emp *p; Int I, n; Cout << how many employees?; Cin >> n; New Emp [n]; If ( p = =0) { Cout << error in crating objects; Exit (1); } For (i=0; i<n; i++) (p+i) get( ); or p[i]. get( ); For (i=0; i<n; i++) (p+i) show( ); or p[i]. show( ); Getch( ); Delete[ ]p; } Enhance the above program so that after accepting records of n employees. Your program prompts the user to enter another name & display the record of that employee only. If employee name is not available then program should display the message record not found. Class Emp { Int age; Char name ; Float sal; Public: Void get( ) { Cout << enter age, name and sal; Cin >> age>> name>> sal; } Void show( ) { Cout << age<< name << sal; }\n\nInt compare (char *); }; Void main ( ) { Emp *p; Int n; Cout << how many employees?; Cin >> n; P= new emp [n]; If (p = =0) { Cout << Insufficient Memory; Exit (1); } For (i=0; i<n; i++0 (p+i) get( ); or p[i]. get( ); Cout << enter name to search; Char str ; Cin. I gnore( ); Cin. Get line (str, 20); For (i=0; i<n; i++0 { If ( (p+i) compare (str) = = 0) { (p+i) show ( ); Break; } } If (I = = n) Cout << Record not found; Getch ( ); Delete [ ] p; } Int compare (char *p) { Return (strcmp ( name.p) ); } Assignment\n\nWAP which creates an array of n objects. Accept values from user in then & display them Now sort the records on the basic of name in ascending order & again display the sorted record Class age; { Int age; Char name ; Float sal; Public: Void get( ) { Cout enter age, name and sal; Cin >> age >> name >> sal; } Void show( ) { Cout << age << name << sal; } Void sort (int); }; Void Emp : : sort (int r) { Int j, I, t; Char temp ; For (i=0; i<r; i++) { For (j=i+1; j<r; j++) { If (str cmp( (this +i) name, (this +j) name)>0) { Strcpy (temp, (this +i) name); Strcpy ((this +i) name, (this+jname); Strcpy ((this+j)naem, temp); T=(this+i) age; (this +i) age = t; T= (this+i) sal; (this +i)sal (this+j)sal; or Emp F; *(this+i) =* (this+j); *(this+j) = F;\n\n## (this+j) sal=t; } //closing of it } // inner for } /outer for } // closing of function\n\nDYNAMIC COSNTRUCTORS\nClass Emp { Int age; Char age; Char name ; Float sal; Public; Emp( ) { Cout << enter age, name and sal; Cin >> age >> name >> sal; } Emp (int I, char *j , float k) { Age = I; Strcpy (name, j); Sal = k; } Void show( 0 { Cout << age << name << sal; } Emp( ) { Cout << Object destroyed; } }; Void main ( ) { Emp *P, *q; P=new Emp;\n\nQ=new Emp (30, vineet, 200000); p show( ); q show( ); delete q; delete p; } Note:- The above Program will not call the destructor of the class even at the termination. This is because in C++ memory which is allocated using new can only be deal located using delete and calling of destructor is only made when memory gets deallocated. Since in above code, call for delete is not present so memory block still remains in RAM & might be collected in future through garbage collection. Thus if memory is freed for a dynamic object it can only be done through delete operator. Terms Used For Dynamic Objects Live Object :- Those object which are initialized using constructor are constructor are c/o live objects. Partially live objects\n\nINHERITANCE\n1. Single Inheritance A Base class 2. Multi level Inheritance A indirect base class of C\n\nDerived class\n\nB c\n\nbase class of C\n\n3. Multiple Inheritance A B\n\n4. Hierarchial Inheritance A\n\n## 5. Hybrid Inheritance or Multipath Inheritance A B C\n\nD Syntax for Inheritance:Class <class name>: public / private / protected < clas-name> Derived class Mode of Inheritance Class Box { Int l, b, h; Public: Void get( 0 { Cout << enter l, band; Cin >> l>>b>>h; } Void show( ) { Cout <<l<< << b<< << h; Base class\n\n} }; Class carton : public Box { Char type; Public: Void set( ) { Cout << enter material name; Cin. Get line (type, 20); } Void display ( ) { Cout << material = << type; } }; Void main( ) { Carton obj; Obj. get( ); Obj. set( ); Obj. show( ); Obj. display( ); } Accessability Rules When Mode Of Inheritance Is Public When a base class is inheritance in public mode then :1. All the public members of base class becomes public members of derived class i.e. they can be accessed through the function of derived class as well as by objects of derived class. 2. All the protected members of base class becomes protected members of derive class & thus can only be accessed through functions of derived class but not by the object (main) of derived class. 3. All private members of base remain private to their own class & thus can neither be accessed through functions of derive class nor by object of derive class. EARLY BINDING:-\n\nIt is a prours which is executed during compilation in which compiler binary the fn body with the fn call i.e. even before program starts executing decision regarding the fn call and fn body to be executed are taken by the compiler since this happens before execution, we can say it is early binding. Early binding is always an action until & unless keyword virtual is used infront of the fn return type. It is done on the basis of there criterias:(i) (ii) (iii) function name or calling object type or function parameter type in other words early binding is always in action in normal fn calls, overloaded fn calls and overloaded operators The major benefit of early binding is speed of execution i.e. function calls which are bound using early binding get executed at a fastest speed as compared to late binding fn calls. OVERRIING The function overriding is a term which is used when a derive class contains a function with the same prototype as its bass class. In other words, fn provided by base class has same prototype in derive class but in different body. Overriding Scope must be different By the classes related by Inheritance Prototype of functions Must be same . Overloading always in same class means at single level prototype must be different\n\nWhen Mode Of Inheritance Is Protected When base class is inheritance in protected mode then:-\n\n1. All public members of base class becomes protected member of derive class i.e. they can be accessed only through function of derived class but not by object of derive class. 2. All protected members of base class become protected members of derive class i.e. they two can only be accessed through functions of derived class but not by object of derive class. 3. All private members of base class remains private to their own class & thus neither accessible through the object nor through functions of derive class. Class Num { Protected: Int a, b; Public: Void get( ) { Cout << enter two numbers; Cin >> a >> b; } Void show( ) { Cout << Numbers are =; Cout <<a << << b; } }; Class Add Num : : protected Num { Protected : Int c; Public: Void set( ) { Get( ); } Void add( ) { C=a+b; }\n\nVoid display( ) { Show( ); Cout << sum= <<c; } }; Void main ( ) { Add Num obj; Obj. set( ); Obj/./ add( ); Obj. display( ); }\n\n## PRIVATE MODE OF INHERITANCE\n\nWhen a base class is inheritance in private mode then:1. All the public members of base class become private members of\n\nderive class i.e. they can be only accessed through fn of derive class but not by the objects of derive class. 2. All the protected members of base become private members of derive class i.e. they two can only be accessed through the function of derive class but not by objects of derive class. 3. All private members of base class remain private to their own class & thus can neither be accessed by fn nor by objects of derives class. Class Num { Protected : Int a, b; Public: Void get( ) { Cout << enter a and b; Cin >> a >> b; } Void show( ) { Cout << a = <<a<<end1;\n\nCout << b= << b<< end1; } }; Class Add Num: Private Num { Protected : Int c; Public: Void set( ) { Get( ); } Void add( ) { C=a+b; } Void display ( ) { Show( ); Cout << same = <<c; } }; Void main( ) { Add Num obj; Obj.set( ); Obj. add( ); Obj. display( ); } Note:- At single level Inheritance private & protected inheritance seems to be similar.\n\nMULTILEVEL INHERITANCE\nClass Num { Protected: Int a, b;\n\nPublic: Void get( ) { Cout << enter a nad b: Cin >> a >> b; } Void show( ) { Cout << a= <<a<<end1; Cout << b= <<b<<end1; } }; Class Add-Num : public Num { Protected : Int c; Public : Void set( ) { Get( ); } Void display( ) { Show( ); Cout << sum= <<c; } Void add( ) { C=a+b; } }; Class Diff Num : public Add Num { Int d; Public: Void accept( ) { Set( ); }\n\nVoid diff( ) { D= a b; } Void print( ) { Display ( ); Cout << Difference= <<d; } }; Void main( ) { Diff Num obj; Obj. accept( ); Obj. add( ); Obj. diff( ); Obj. print( ); } Program Class counter { Protected: Int count; Public: Void init (int a) { Count =a; } Void operator ++( ) { Count ++; } Void show( ) { Cout << count= <<count; } }; Class Dec Counter : public counter\n\n{ Public: Void operator - -( ) { - - count; } }; Void main( ) { Dec counter D; D.int(10); D.show( ); ++D; D.show( ); - - D; D. show( ); } Assignment :Create a class c/a array which contains an integer array of size 10. The class should have two member functions called get arr and showarr, which should accept values in the array & display its values respectively. Now create a derive class of array c/a sortarr, the class should accept a string as parameter if string contains asc then sorting should be done in ascending order & if it contains desc sorting should be done in descending order. Finally create the function main which should contain menu drive interface for the user having 5 options:(i) (ii) (iii) (iv) (v) input display sort in ascending sort in descending quit\n\n## Solution #include <iostream.h> #include <stdlib.h> Class Array { Protected:\n\nInt a; Public: Void get( ); Void display( ); }; Void Array : : get( ) { Int I; Cout << enter array elements; For (i=0; i<5; i++) Cin >>a[i]; } Void Array : : display ( ) { For (int i=0; i<5; i++) Cout << \\n elements are = << a[i]; } Class sortarr : public Array { Public: Void ascsort( ); Void descsort( ); }; Void sortarr : : arcsort ( ) { Int I, j, t; For (i=0; i<5; i++) { For (j=0; j<4; j++) { If (a [j] > a [j+1]) { T = a [j]; A [j] = a [j+1]; A [j+1] = j; } } } }\n\nVoid sortarr : : descsort( ) { Int I, j, t; For (i=0; i<5; i++) { For (j=0; j<4; j++) { If (a[j] <a [j+1]) { T=a[j]; A[j]=a[j+1] A[j+1] = t; } } } } Void main( ) { Clrscr( ); Sortarr sr; Int ch=0; Do { Cout << \\n \\t Enter (1) for Input data; Cout << \\n \\t Enter (2) for Display Data; Cout << \\n \\t Enter (3) sort in Ascending order; Cout << \\n \\t Enter (4) sort in Descending order; Cout << \\n \\t Enter (5) for quit; Cout << enter choice; Cin >> ch; Clrscr( ); Switch (ch) { Case (1): Sr.get( ); Break; Case (2): Sr.display( ); Break;\n\nCase (3): Case (4): Case (5): Exit(1) } } while (ch !=5) Getch( ); } Sr.ascsort( ); Break; sr.descsort( ); Break;\n\nMULTIPLE INHERITANCE\nClass Base 1 { Protected: Int a; Public: Void get( ) { Cout << enter a =; Cin >>a; } Void show( ) { Cout <<a << end1; } }; Class Base2 { Protected: Int b; Public: Void set( ) { Cout << enter b=; Cin >> b; } Void display( )\n\n{ Cout <<b << end1; } }; Class drv : public base1, public base2 { Int c; Public : Void accept ( ) { Get( ); Set( ); } Void add ( ) { C = a+d; } Void print( ) { Show( ); Display( ); Cout << sem = <<c; } }; Void main( ) { Drv obj; Obj. accept( ); Obj. add( 0; Obj. print( ); } Program :Base & derive having the function with same name & arguments. Class Base1 { Protected:\n\nInt a; Public: Void get( ) { Cout << enter a=; Cin >>a; } Void show( ) { Cout << a << end1; } }; Class Base 2 { Protected: Int b; Public: Void set( 0 { Cout << enter b=; Cin >> b; } Void show( ) { Cout <<b<<end1; } }; Class drv : public Base1, public Base2 { Int c; Public: Void accept( ) { Get( ); Set( ); } Void add( ) { C= a+b;\n\n} Void print( ) { Will show show( ); Error ambiguity error } }; Void main( ) { Drv obj; Obj. accept( ); Obj. add( ); Obj. print( ); }\n\n## base1: : show( ); Base2: : show( );\n\nRole Of Constructor & Destructor In Inheritance As a basic rule in inheritance, it a base class contains a constructor and destructor as well as derive class also contains constructor & destructor, then when object of derive class is created, constructor of base class gets executed first followed by constructor of derive class and the destructor is called in reverse order i.e. the destructor of derive class is called first followed by the destructor of base class. Thus we can say constructor are always called in the order of inheritance and destructor are called in reverse order. Class Base { Public : Base ( ) { Cout << In bases constructor <<end1; } Base( 0 { Cout << In bases destructor <<end1; } };\n\nClass drv: public name { Public: Drv( ) { Cout << In derives const <<end1; } Drv( ) { Cout << In derives destructor <<end1: } }; Void main( ) { { Drv obj; } Getch( ); } Class Base { Protected a, b; Public: Base (int I, int j) { A = I; B = j; } Void show( ) { Cout << a << << b; } }; Class drv : public base { Int c; Public: Drv ( ): base (10, 20)\n\n{ C=a+b; } Void show( ) { Base : : show( ); Cout << sum = << c; } }; Void main( ) { Drv obj1 Drv obj2 Obj1. show( ); Obj2. show( ); } Constructor Calling In Multiple Inheritance Base 1 Int a; Base1 (int); Void show( ); base2 int b; base2 (int); void display rv nt c;\n\nClass Base1 { Protected: Int a; Public: Base (int i) { A = I; } Void show( ) { Cout << a; } };\n\nClass Base2 { Protected : Int a; Public: Base2 (int j) { B = j; } Void display( ) { Cout << b; } }; Class drv: public base1, public base2 { Protected : Int c; Public: Drv (int p, int q) : base 1 (p), base2(q) { C=a+b; } Void print ( ) { Show ( ); Display( ); Cout << their sum = << c; } }; Void main( ) { Drv obj1 (10, 20); Drv obj2 (20, 70); Obj1. print ( ); Obj2. print( ); } Note :-\n\nIf the constructor of base class is parameterized then (i) (ii) derive class must have constructor Base class constructor should be called in derives constructor.\n\nConstructor Calling In Multilevel Inheritance Class Base1 { Protected: Int a ; Public: Base 1(int i) { A=1; } Void show( ) { Cout << a << end1; } }; Class Base2 { Protected: Int b; Public: Base2 (int I, int j): Base1 (i) { B=j; } Void display( ) { Cout << b << end 1; } }; Class drv : public Base1, public Base2 { Protected: Int c; Public:\n\nDrv (int x, int y) : base2 (x, y) { C=a+b; } Void print ( ) { Show( ); Display ( ); Cout << their sum= << c; } }; Void main ( ) { Drv obj(10, 20); Drv obj (50, 60); Obj1. print( ); Obj2. print( ); } Note Constructors can not be inherited :Constructors are special member fn which are exclusively used for initialing private data members of class. Now since the private data of class is not passed on to its derive classes, so the functions which explicitly initialize then (constructor) are not inherited. Same is the case with destructor,\n\nHIERARCHIAL INHERITANCE\nClass num { Protected : Int a, b: Public : Num (int I, int j) { A = I; B = j; }\n\nVoid show( ) { Cout << a = << a; Cout << b= << b; } }; Class Add Num : public Num { Int c; Public: Add num (int I, intj) : Num (I, j) { C=a+b; { Void show( ) { Num : : show( ); Cout << sum = <<c; } }; Class Diff num : public Num { Int d; Public : Diff Num (int x, int y) : Num (x, y) { b= a b; } Void show( ) { Num : : show( ); Cout << Difference = << d; } }; Void main( ) { Add Num addobj (10, 20); DiffNUm diffobj (30, 70); Add obj. show( );\n\nDiffobj. Show( ); }\n\nHYBRID INHERITANCE\nClass Base { Public: Int a; }; Class drv1 : virtual public base { Public: Int b; }; Class drv2: virtual pubic base { Public: Int c; }; Class drv3 : public drv1, public drv2 { Public: Int d; }; Void main( ) { Drv obj; Obj. a =10; Obj. b = 20 Obj. c 30; Obj.d= obj a + obj.b + obj. c; Cout << sum = << obj,d; }\n\nGRATING ACCESS\nClass Data\n\n{ Private : Int a; Protected : Int b; Public : Int c; }; Class drv : protected Data { Public: Data : : C // bring C in public mode instead of protected }; Foreg:Class Bank { Public : Void deposit( ); Void withdraw( ); Void int-cal( ); }; Class saving acct: Private Bank { Public : Bank : : deposit; Bank : : with draw; };\n\nPOLYMORPHISM\nClass Base { Public: Void show( ) { Cout << In base & show; } };\n\nClass drv : public Base { Public : Void show( ) { Cout << In drvs show; } }; Void main( ) { Base b, *ptr; Drv d; Ptr = & b; Ptr show( ); Ptr = & d; Ptr show( ); } VIRTUAL FUNCTION & MULTILEVEL INHERITANCE Class Base { Public: Virtual void show( ) { Cout << In bases show; } }; Class drv1 : public Base { Public : Void show( ) { Cout << In drv1s show; } }; Class drv2: public drv1 { Public: Void show( )\n\n{ Cout << In drv2s show; } }; Void main( ) { Base * ptr, b; Drv1 d1; Drv2 d2; Ptr = & b; Ptr show( ); Ptr = & d1; Ptr show( ); Ptr = &d2; Ptr show( ); } Note :Top level class ptr can access member of lower level class. Virtual is mandaroty in base show as function is originally from base. Virtual functions are those functions for which no decision regarding the call and the definition to be executed is token by the compiler during compilation. In other words, if a fn is preceded with keyword virtual then if never become the past of early binding & compiler delays its binding until runtime. Thus all the decisions regarding the call and the body to be executed are taken at runtime. These decisions are not based on the type of caller (as was the case with early binding) but on the bases of contents of the caller. If the calling pointer is storing the address of base class object then bases version of virtual function will be executed & if it pointing to an object of derive class then derive version of virtual fn is executed. But to fully uses the potential of virtual function the derive class while giving its own body /definition for virtual fn must keep its prototype same as the base class i.e. derive should override the virtual function of base class if it wants to place its own definition of virtual function. This is because pointer of base class can access only those function of derive class which are overridden in the derive class but not those which are hidden or added by derive class.\n\nInternal Working Of Virtual Function Class A { Int a; Public: Void f1( ) { } Virtual void f2( ) obj1 VPTR a { } }; Class B : public A { int x; Public: Void f4( ); Void f2( ); obj2 VPTR }; VTABLE FOR A &A : : f2( ) &A : : f3( 0\n\n## VTABLE FOR B &B : : f2( ) &A : f3( )\n\nVoid main( ) { size of class A = 4 bytes A obj1; 2 bytes for variable a Ptr = & obj1; 2 bytes for VPTR Ptr f2( ); Ptr = & obj2; Ptr f3( ); Ptr f3( ) } This will not be executed since this is not virtual & compiler will go to Early binding table for base class A where there is no function f4.\n\nVTABLE:For every class in C++ which contains at least one virtual function a special look up table for storing addresses of there virtual function is created which is known as VTABLE or virtual Table. Thus in short VTABLE is a table containing addresses of virtual functions of the class as well as virtual function of base class if they are not overridden. This table is then consulted by the compiler when a call for virtual function is encountered during runtime. VVPTR:(Virtual Void Pointer) For every class which contains a virtual function special pointer is created by the compiler known as VVPTR, which is used for pointing to the virtual table. Thus every object of class containing virtual function has its size incremented by2. Now whenever compiler encounters call for virtual function through a pointer, it first refers to the address of object to which pointer is pointing from their it reads the address contained in VVPTR of the object the which is the address of VTABLE of class. Lastly within the VATBLE it executes the virtual function called. Thus virtual function enhances the size of code as reduces the execution speed but provides a flexible way of designing program which can respond to the changes which accur at run time. Class Data { Int a ; Data (int ); Public: Static Data get Objects( ); Void show( ); }; Data : : Data (int i) { A = I; } Data Data : : get object( ) { Data D (10); Return (D); } Void Data : : show( )\n\n{ Cout << a; } Void main( ) { Data D = Data. Get object( ); D. show( ); } Note When only one object of class is created then that type of class is c/a single to n class. Polymorphism Compile Time Polymorphism 1. Function Overloading 2. Operator Overloading 3. Early Binding Virtual Inheritance Run Time Polymorphism 1. Function Overriding 2. Virtual Function 3. Late Binding avoids multiple copies Polymorphism (converts parly binding to late binding) Class Figure { Protected: Int dim1, dim2; Public : Void get( ) { Cout << enter 1st & 2nd dimension; Cin >> dim1>> dim2; } }; Class Rectangle : public Figure { Public:\n\nVoid area( ) { Cout << area of Rectangle=; Cout << dim1 * dim2; } }; Class Triangle : public Figure { Public: Void area( ) { Cout << area = <<5 * dim1 *dim2; } }; Void main( ) { Figure *p; Figure F; P = &F; p get( 0; p area( ); rectangle R; p=&R; p get( ); p area( ); triangle T; p = & T; p get( ); p area( ); }\n\n## PURE VIRTAUL FUNCTIONS\n\nDef:- If a virtual functions declarations is equated with zero then it is known as pure virtual fn. In other words, it we do not want to provide any definition for a virtual fn wee can equate with zero then it will be known as\n\npure virtual fn. Thus by definition a pure virtual fn is one which has no body define with in its own class or base class.\n\nABSTRACT CLASS\n1. If a class contain at least one pure virtual fn than it is known as\n\nabstract Base class. 2. we can never create any object of abstract class but we can always create its pointers. This is because whenever an object of a class containing virtual function is created a VTABLE is setup by the complete, which stores the address of virtual function has no body it can not have memory address & thus it can not places in VTABLE. So to prevent any accidental calls to a non-existing pure virtual function compiler prohibits the creation of an object of an abstract class. 3. An class which extends an Abstract Base class must provide its own definition of pure virtual fn available in its base class other wise the class itself would be created as an abstract class & then it too can not be instantiated. Note:- Constructors can not be virtual since link between VPTR VTABLE is made by constructor Virtual Destructor:Class Base { Protected: Public: Int *p; base( ) { P=new int ; Cout << p constructed!; } base { Delete [ ] p; Cout << memory deallocated; }\n\nVirtual\n\n}; Class drv : public Base { Int *q; Public : Drv( ) { Q = new int ; Cout << q constructed; } Drv( ) { Delete [ ] q; Cout << memory deallocated for q; } }; Void main( ) { Base *ptr; Ptr = new drv; Delete ptr; } Destructor can not be declared as pure virtual. Since it is not possible to leave empty body of destructor since at the time of call of destructor same activity must be performed.\n\n## FILE HANDLING (STREAM I/O)\n\nFlow of data If flow of data is from program towards device then o/p stream eg. Cout . is used. If flow of data is from device to program then i/p stream eg cin is used. Monitor / Buffer Monitor\n\n## Input stream C++ Program Input stream\n\noutput stream\n\noutput stream\n\nHard-disk Classes Available In C++ For File Handling 1. Of Stream:2. It Stream:3. Fstream :-\n\nHard disk\n\nIf a class whose objects can be created for writing data to secondary memory of file. is a class whose objects can be created for reading data from file. whose object can read / write the data in a file.\n\nSteap Required For Writing Data In File Step 1 Create the object of Of stream class Eg:- Of stream obj; Step 2 Connect the object with file on your system. Step 3 write the data through the object in file. Step 4 Close the file. C++ Implement Of Above Steps Step 1 Step 2 Step 3 Step 4 (a) (a) (b) Of stream obj; obj. Open ( Data. Txt); obj << Hello; obj. put (H); obj.close ( );\n\n## static data File Opening Mode\n\n3 Of stream obj ( Data. Txt, ios : : app) Step For Creating A Program For Reading A File 1. Create the object of ifstream class. 2. Connect the object with the file on your system & check whether file is available or not. 3. Read the data through object from file. 4. class file. C++ IMPLEMENTATION 1. If stream obj 2. (a) Obj. Open ( Data.txt); Or If stream obj; Obj.open ( Data. Txt, ios : : in); Or If stream obj (Data.txt, ios: : in); Creating Object Of fstream Class 1. fstream obj; Obj.open ( Data.text, ios : : out | ios : : in); 2. Using Constructor Fstream obj ( Data.txt, ios : : out | ios : : in); WAP to create a file c/a message. text Accept a line of text from user & write it in file character by character #include <stdlib.h> #include <iostream.h> #include <fstream.h> #include <conio.h> Void main ( )\n\n{ Of stream out ( Message. Text); If (!out) { Cout << file can not be opened; Getch ( ); Exit(1); } Char str; Cout << enter a line of text; Cin.qetline (str, 80); Int i=0; While (str[i]) { Out.put (str[i]); I++; } Cout << file written successfully; Getch ( ); Out. Close( ); } 1 Note:isopen ( ) 0 1 Fail( ) 0 Que:- WAP to open the file created by the previous program. Read it on character by character basic & display its contents on the screen. Solution :Void main ( ) { If stream in ( message.txt); If (! in) { Cout << filoe can not be opened; Exit(1); Retruns I if not connected Returns 1 if conceted\n\n} Char ch; While (! In.enf( ) ) { Ch=in.get( ); Cout << ch; } Getch( ); In.close( ); } Que:- WAP to open a file c/a Data.txt. Accept a line of text from user & write it on file character by character basic and it in the same program read file and print its content on the screen . Void main( ) { Fstream Nisha ( Data, ios: : out | ios: : in); If (! Nisha) { Cout << error opening file; Exit(1); } Char str ; Cout << enter a line of text; Cin.get line (str, 80); Int i=0; Char ch; While (str [i]) { Ch=str[i]; Obj.put (ch); I++; } Obj.seekg (0); While (! Obj.eof( ) ) { Ch=obj.get( ); Cout << ch; }\n\nGetch( ); Obj. close( ); } Que Assume there is a file c/a Message. Txt containing certain line of txt. Create a file c/a Message2.txt, and copy the contents of messages.txt into it. Before coping the character must be converted upper to lower & vice versa & spaces must be skipper. Finally display the contents of Message2 txt. Void main( ) { If stream obj1 ( Messages.txt); Stream obj2 (Message2. txt, ios : : out | ios : : in); Char ch; If (! Obj1) { Cout << sourcl file cant be opened; Exit(1); } While (! Obj1, eof) { Ch = obj1. get( ); If (ch! = 32) { If (ch>=65 & & ch<=90) Ch=ch+32; Else if (ch > =97 && ch <=122) Ch=ch-32; } Obj2.put(ch); } Obj2.seekg(0); While ( ! obj2. eof( ) ) { Ch=obj2. get( ); Cout << ch; } Getch( ); Obj2.closs( );\n\nObj1. close( ); } READING AND WRITING STRINGS Void main ( ) { Fstream obj ( Data.txt, ios : : out | ios : : in); If !(!obj) { Cout << error; Exit (1); } Char text ; Cout << how many lines; Int n; Cin >> n; Cout << Enter << n<< lines each terminated by enter : <<end1; For (in i=1; i<=n; i++) { Cin.get line (text, 80); Obj << text << end1; } Obj. seekg (0); Cout << File written press nay key to read; Gecth( 0; While (!obj.eof( ) ) { Obj.getline (text,80); Cout << text << end1; } Gecth( ); Obj.close( ); } Void main( ) { Fstream obj ( Data.txt, ios : : out | ios : : in); If (!obj) { Cout << error;\n\nExit(1); } Char text ; Cout << enter lines and press enter on new line to stop; While (1) { Cin.getline (text, 80); If (strlen (text) = =0) Break; Obj << text << end1; } } FILE OPENING MODES 1. ios : : out (Default mode of ofstream) If file is not existing then it is created. Otherwise data gets Erased and pointer is placed at the beginning. If file is existing, pointer is placed a the beginning otherwise error is generated. 3. ios : : app It can not alter previous contents but can add new content at the End of file. 4. ios : : ate (ate stands for at the end) Allows updations as well as adding of new data. In this pointer can move forward and backward.\n\n2. ios : : in\n\n5. ios : : trunc Needed only in fstream. Used with ios : : out Truncate previous data & brings pointer to beginning. 6. ios : : nerplace Used with ios : : out if file is existing do not replace it otherwise create it. 7. ios : : nocreate\n\nUsed with ios : : out if file is existing overwrite it otherwise do not create it. 8. ios : : binary If we want to write data in binary form.\n\nBINARY I /O\nVoid write Of stream Member (char *, address of variable whose data is to be Written int ) no of bytes to be written\n\nInt a = 23091 ; Obj, write (( char *) &a, size of (int) ); Char b = x; Obj.seekg(0); Int c; Obj. read ( (char *) &c, size of (int ) ); Cout << c; Char d; Obj. read (&d, size of (char) ); Cout <<d; Int read (char *, int) Address of variable whose data is to be stored after reading From file On successfully reading from file it returns 1 on error it return 0. Reading and writing class objects Class Emp { Int age; Char name ; Float sal; Public: Void get( ) {\n\nCout << enter age, name and sal; Cin >> age >> name>> sal; } Void show( ) { Cout << age << } }; Void main ( ) { Emp E; Fstream obj ( Records.dat, ios : : out | ios : : in | ios : : trunc| ios : : Binary ); If (! Obj) { Cout << error in opening file; Exit (1); } E.get( ); Obj.write ((char *) &E, size of (Emp) ); Obj.seekg(0); Emp F; Obj.read ((char *) &F, size of (Emp)); F.show( ); Getch( ); Obj.close( ); } Reading And Writing Multiple Objects Void main ( ) { Emp E; Fstream obj ( Record.dat, ios: : out| ios : : in | ios : : trunc | ios : : Binary); If (! Obj) { Out << error in opening file; Exit (1);\n\n## <<name << sal <<end1;\n\n} Char choice; { e.get( ); obj.write ((char *) &E, size of (Emp)); cout << Any More (Y/N); cin.ignore( ); cin >> choice; } while ( ch = = y); Obj. seekg (0); While (opj.read (char *) &E, size of (emp)) E.show( ); getch( ); obj.close( ); } Typical Feature Of Read:Note :If in any ane program, we want to read file successively to eof we must clear the flag Obj. seekg (0); Obj. clear( );\n\nQ. WAP to write multiple records of type emp in a file. Accept a name from user & display the record of the employee by searching it with in the file and display the appropriate message Void main( ) { Int flat = 0 Emp E; Fstream obj1 ( Ekta, txt, ios : : out | ios : : in | ios : : trunc | Ios : ; binary); If (! Obj) { Cout << error; Exit(1); } Char ch;\n\nDo { E.get( ); Obj.write ( ( char *) &E, size of (Emp)); Cout << Ant more (y/n); Cin . ignore ( ); } while (ch = = Y); Char name ; Cout << enter name to search; Cin >> name; Obj. seekg(0); While (obj.read (char *) &E, size of (emp)) { If ( ! ( E = = name) ) { E. show( ); Flag =1; Break; } } If ( ! flag) or if (flag = =0) { Cout << Record not found; Obj1. close( ); } Int operator = = (char * ptr) { Return (strcmp (name, ptr) ); } } RANDOM I/O Q. Assume there is a file c/a Record.dat which contains several records of type emp. WAP to open this file & read the last record. Void main ( ) {\n\nIfstream in ( Records.dal, ios : : in | ios : : binary); If ( ! in) { Cout << error; Exit (1); } In.seekg (-1 * size of (emp), ios : : end); Emp E; In. read ( ( char *) &E, size of (emp)); E.show( ); In.close( ); Getch( ); } Note:Prototype of seekg( ) Void seekg (int, int) No, of position of movement Bytes to ios : : beg Move ios : : cur Ios : : end WAP to accept a name from user. Search the record of that employee in Records.dat & Add a new record at its position by accepting it from user Void main( ) { Char temp; Emp E; Fstream in ( Records.dat, ios : : in | ios : : ate| ios : : binary); Cout << enter name to update; Cin >> temp; In.seekg(0); While (in.read ( (char *) &E, size of (emp) ) { If ( ( E = =temp) = =0) { E.get( ); In.seekg (-1 * size of (Emp), ios : : cur); In.write ( ( char *) &E, size of (emp));\n\nBreack; } } In.clear( ); In.seekg(0); While (in.read ( ( char *) &E, size of (Emp) ) E.show( ); Int operator = =(char *n) { Return (strcmp (temp, ptr) ); } } Assignment :1. WAP to delete record given by user from file. 2. WAP to update just the salary of employee whose name given by user.\n\nTEMPLATES\nTemplates are a technique provide by C++ using which a programmer can define a single function in which last to be carried out is mentioned but the datatype is not mentioned. During runtime by looking at the call of the function & its parameter type the compiler generates specific version of that function to act according to parameter passed. Thus in other words, we can say templates are generic functions which at the time of creation are only told what to do & during run time they become aware on what type of argument it has to be done. Templates are of two type (i) Function Template (ii) Class Template Syntax Function Template\n\nTemplate <class <type-name> > <return type> < function-name> (< type-name> <arg-name>) Example Template <class T> Void display ( T n) { Cout << n << end1; } Void main ( ) { Int a =10; Char b = x; Float c= 11.5; Display (a); Display (b); Display (c); } Write a function template c/a swap which accepts two parameters & swaps then. Parameters passed can be two integer, two floats two class. Finally two swapped values must be displayed in the function main Template <class T> Void swap ( T &a, T &b) { T temp; Temp = a; A= b; B = temp; } Void main ( ) { Int a, b; Cout << enter two integers; Cin >> a >> b; Swap (a, b); Cout << a= <<a<< b= <<b<<end1; Cout << enter two character;\n\nChar p, q; Cin >> p >>q; Swap (p, q); Cout << p= <<p<<q = <<q <<end1; Float x, y; Cout << enter two float numbers; Cin >> x>>y; Swap (x, y); Cout << x= <<x<< y= <<y << end1; } Q. write a function template which accepts two values as an argument & return maximum amongst then. Template <class T> T greater ( T &P, T 7q) { If ( p > q) Return (p); Else Return (q); } Void main ( ) { Int a, b; Cout << enter two integers:; Cin >> a >> b; Cout << maximum = << greater (a, b); Cout << Enter two charr: Char x, y; Cin >> x >>y; Cout << maximum = << greater (x, y); } Write a function template c/a greatest which accepts an array as an argument & returns the largest element of that array. The array passed can be of different type and different size. Template <class T>\n\nT greatest ( T *a, int n) { Int I; T max = *a; For (i=1; i<n ; i++) { If ( * (a+i) > max) Max = * (a+i); } Return (max); } Void main ( ) { Int arr[ ] = {7, 11, 2, 3, 4, 8}; Float brr[ ] = { 10.4, 11.3, 6.5, 8.2}; Cout << max int = << max (arr, 6); Cout max float= << max (brr, 4); }\n\nCLASS TEMPLATE\nTemplate <class T> Class Data { T a; T b; T c; Public: Void get( ) { Cin >> a >> b; } Void add ( ) { C = a+b; } Void show( )\n\n{ Cout << values are = << a << end1; Cout << b; Cout << sum= <<c; } }; Void main ( ) { Data <int> obj1; Data ,double> bj2; Cout << enter two int; Obj1.add( ); Cout << enter two doubles; Obj1, get( ); Obj2. add( ); Obj1. show( ); Obj2. show( ); } Class Template With Different Generic Type Template <class T1, class T2> Class Data { T1, a; T2, b; Public: Void get( ) { Cin >> a >>b; } Void show( ) { Cout << values are = << a << and << b<< end1; } }; Void main( ) { Data <int, float> obj1;\n\nData <double, char> obj2; Cout << enter an int and float; Obj1.get( ); Obj1.show( ); Cout << enter double and char; Obj2.get( ) Obj2.show( ); } Q. write a class template for a class called stack and implement three basic operations of stack push, pop and peek. The stack can be of int, char and flots. Template <class> Class stack { T arr [S]; Int tos; Public: Stack ( ) { Tos = -1; } T pop ( ); }; Template <class T> Void stack <T> : : push (T n) { If (tos = = 4) { Cout << stack overflow; Return; } ++ tos Arr [tos] = n; } Template <class T> T stack <T> : : pop ( ) { If (tos = = -1)\n\n{ Cout << stack under floaw; Return(-1) } Return (arr [tos - -]); } Void main ( ) { Stack <int> sa; Int n; Stack <char> s2; Char ch; For (int i=1; i<=6; i++) { Cout << enter int to be pushed; Cin >> n; S1.push(n); } For (int i=1; I,6; i++) Cout << element popped = << s1.pop( ); } Overloading Of Assignment Operator Class string { Char *p; Public: String (int); Void set string (char *); Void resetstrign ( char *); Void display ( ); String( ); }; String : : string (int n) { P=new int [n+1]; } Void string : : setstring (char *str)\n\n{ Strcpy (p, str); } Void string : : resetstring (char *s) { Strcpy (p, s); } Void string : : display ( ) { Cout << p <<end1; } String : : string( ) { Delete [ ] p; Cout << Memory deallocated; } Void main ( ) { String s1 = 20; String s2 = 20; S11. setstring ( Hello User); S2=s1; S1. display ( ); S2. display ( ); S1. resetstring ( Welcome); S1. display ( ); S2. display( ); } Note:when ever the class is containing dynamic pointer then it is necessary to overload assignment operator At the time of overloading = it is necessary to pass parameter by reference. ( eg ( string & s) ). Note :For cascading of assignment operator it is necessary that return type must be string\n\n## void string : : operator (string &s) { strcpy 9p, s.p); x=s.x; }\n\nString string : : operator = (string &s) { String (p, s, p); X= s,x; Return (* this); } Note :Q. why do we overload assignment operator? Ans:- The default assignment operator provided by C++ is used for copying one object of class to another but it copies the value bit by bit i.e. the value contained in every bit of source object are copied in every bit of destination object. This behaviour is perfect if source cosle is not containing any pointer to dynamically alloveated block, but if it is so then the default equl to (=) operator will simply copy the address stored in the pointer with in source object to pointer in the destination object. This will make two the defferent pointers point to same location which may cause multiple problems like the destructor calling delete operator for the same memory to overcome this, a programmer should overload the equal (=) operator & provide his own definition for copying the data pointer by pointer of source object rather than address. Overloading Of Insertion And Extraction Operator Prototy Of << operator:Friend ostream & operator << (ostream & cout, strudent &p); Class Emp { Int age; Char name ; Float sal; Public: Friend istream & operator >> (istream 7, Emp&); Friend ostream & operator << (ostream &, Emp &); }; Istream & operator >> (istream & in, Emp & p) {\n\nIn >> P.age >> P.sal >> P.name; Return (in); } Ostream & operator << (ostream & out, Emp &P) { Out << P.age << << P.name<< << p.sal; Return out; } Void main( ) { Emp E, F; Cout << Enter age, name and sal; Cin >> E; Cout << Enter age, name & sal; Cin >> F; Cout << E << end1; Cout << F <<end1; Q. What is the need of overloading insertion and extraction operator? Ans:- In C++, all primitive types like int, float, char etc. are display on console using predefined object cout along with insertion operator (<<). Now if programmer wishes to use the same way of displaying objects of his class on screen then he has to overload insertion and extraction operator so that they can be derictly used with user defined objects. Q. Why dont we overload insertion and extraction operator as member function? Ans:- Insertion and Extraction operator are binary operator and if binary operator is overloaded as member function of class then a compulsion is their to keep the object of same class towards left of operator while calling. Thus if it is done then wll would become P << cout where P is object af class. Now this violotes the regular symmetry of insertion operator and so it must be overloaded as friend function. Q. Equal Operator can never be overloaded as friend function. Ans:- Since equal operator should return the reference of calling object (to support coscading). It has to be overloaded as member function as friend function do not have any calling object\n\nNote:Equal operator if defined by base class never inherited by derive class because it needs the same members in source as well as destination object.\n\nTYPE CONVERSION\nType conversion is the process using a programmer can convert value from primitive to non primitive and vice versa as well as from one object of class to another object of different class. Thus type conversion falls in three categories:1> Basic to User Defined 2> User Defined to Basic (Primitive ) 3> User Defined to User Defined Conversion Between Basic TO User Defined Constructor (Basic type { // steps to convert basic to user defined } Conversion Between User Defined TO Basic Operator primitive type of C++ ( ) return type { // steps to convert basic to user defined Return (<basic val>); } Example User To Basic & Basic To User Class Meter { Float length; Public : Meter ( ) { Length = 0.0;\n\n} Meter (float cm) { Length = CM/100; } Operator float( ) { Flaot ans= length * 100.0; Return (ans); } Void accept meters( ) { Cout << enter length in (int meters); Cin >> length ; } Void show meter( ) { Cout << In meter = << length ; } }; Void main ( ) { Meter M1 = 250; Meter M2; M2.accept Meter( ); Float cm = m2; // float CM= M2. operator float( ); Cout << 250 Meters when converted; M1. show Meter( ); M2. show Meter( ); Cout when converted to cms = << cm } Program :- Asignment Class String { Char str ; Public: String (int n) { Itoa (n, str, 10);\n\n} String( ) { Str = \\0; } Operator int( ) { K=1; I= strlen (str); While (i>=0) { Sum = sum + (str [i] -48)* k K * = 10; I - -; } Conversion From User Defined To User Defined 1. Conversion Routine In Source Object:Operator <destination_class-name> ( ) { //routines to convert source to destination Return ( <destination object>); } 2. Conversion Routine In Destination Object:Constructor ( <source class name>) { //routines to convert source to diction } Conversion From User Defined To User Defined By Placing Conversion Routine In Source Object Class Radian { Float rad; Public: Radian ( ) {\n\nRad = 0.0; } Radian (float r) { Rad = r; } Void show( ) { Cout << Radian = << rad << end1; } }; Class Degree { Float deg; Public: Degree( ) { Beg = 0.0; } Operator Radian ( ) { Float x = deg * PI/180; Radian obj = x; Return (obj ); } Void show( ) { Cout << Degree= << deg; } Void getdagree( ) { Cout << enter angle in degree=; Cin >> deg; } }; Void main ( ) { Degree D; d. getdegree( ); Radian R;\n\nR=D; D. show( ); R. show( ); } Class Degree { Float deg; Public: Degree( ) { Deg =0.0; } Void show( )_ { Cout << Degrees= << deg; } Void getdata( ) { Cout << enter angle in degrees; Cin >> deg; } Flaot getDree( ) { Return (deg); } }; Class Radian { Float rad; Public: Radian (Degree Obj) { Rad = obj.get Dece ( ) * PI/180; } Radian ( ) { Rad=0.0; } Void show( )\n\n{ Cout << Radian + << rad; } }; Void main( ) { Degree D; D.getData( ); Radian R = D; D. show( ); R.show( ); } Assignment:#include <iostream.h> #include <conio.h> Class hour { Float time; Public: Hour( ) { Time = 0.0; } Hour (double x) { Time =x/3600; } Operator float( ) { Float x = time * 3600; Return(x); } Void accept( ) { Cout << enter time in hours; Cin >> time; } Void show( )\n\n{ Cout << in hour = << time; } }; Void main( ) { Hour t1=3600; Hour t2; T2. accept( ); Float x = t2; Cout << 3600 when converted to hours=; T1.show( ); T2.shjow( ); Cout << when converted to sconds = <<x; Getch( ); }\n\nCONSOLE I/O OPERATIONS Unformatted 1. Unformatted I/O (a) istream & get (char &) (b) int get( ) (c) istream & get (char *, int) Example :- cin.get(ch); Can read only characters Cin=cin.get( ); Istream & getline (char *, int num, char delimiter); Istream & getline (char *, int num); Example :Void main ( ) { Formatted can be called by cin\n\nChar ch; Cout << enter text and press enter to stop; Cin. Get (ch); While (ch ! =\\n) { Cout << ch; Cin.get(ch); } Getch( ); } Void main ( ) { Char country , capital ; Cout << enter country name:; Cin.get (country,20); cin.getline (country, 20); Cout << enter capital; Cin.getline (capital, 20); Cout << contry is << country << end1; Cout << its capital = << capital << end1; } Note:cin.getline (country, 20, *)\n\nThis will terminate after 19 character or on pressing *, nothing will happen on pressing enter rey. Functions Of Ostream Class (a) ostream & put (char) (b) ostream & write (char *, int) This will start from base add upto no. of integers given. Example :Void main( ) { Char str[ ] = { programming }; Int I; For (i=0; i<strlen (str); i++) { Cout.put (str [i]);\n\nCout << end1; } For (i=1; i<=strlen(str); i++) { Cout.write (str, i); Cout << end1; } For (i=strlen (str) ; i>=1; i--) { Cout.write (str, i); Cout << end1; } }\n\nFORMATTED I/O\nFunction (i)width( ) (ii) precision( ) (iii) fill( ) (iv) setf( ) (v) unsetf( ) Description specifies required no of fields to be used for displaying the output. This fn is used for alignment. specifies the no of values to be displayed after decimal pt specifies a character to be used to fill the unused area of field. By default the fulling is done using spaces. sets the format flag to be used while displaying the output. clears the flogs set using setf & restones the default settling.\n\nDefining field Width Prototype:- int width( ) Returns the current width. Int width (int) Old width aurrent width Void main( )\n\n{ Cout. Width(4); Cout << 123; Cout.width(4); Cout. << 39; Cout.width(2); Cout << 2345 } Setting Precision Prototype :-int precision ( ) By default Int precision (int) precision is of 6 pt. Void main ( ) { Cout precision (2); Cout << 2.23 << end1; Cout << 5. 169 << end1; Cout << 4.003 << end1; } Output 2,23 5.17 4 Filling :Int fill( ) Int fill (char) Void main( ) { Cout.file ( *); Cout.precision(2); Cout.width(6); Cout << 12.53; Cout.width(6); Cout << 20.5; Cout. width(6); Cout << 2; }\n\no/p * 12.53* * * 20.5 * * * * * 2 Note There is no need to set fill and precision again and again while width flag must be set again and again. Formatting With Flags & Bit Fields 1> setf long self (log-setbits, long field) Flag-value (1st argument) Bit field (2nd Arguments Description Ios : : left ios : : adjust field justifies the output on left side. Ios : : right ios : : adjeist field justifies the output in sight align mennes. Ios : : internal ios : : adjust field passing accurs between the sign or base indicator & the value when the value fails to fill the entire width ios : : dec ios : : base field display the data in decimal conversion ios : : oct display the data in actor ios : : hax display the data in hexadecimal ios : : scientific ios : : float field user exponential floating notation ios : : fixed user normal floating notation Example :Void main ( ) { Cout.self (ios : ; left, ios : : adjust field); Cout.fill (*); Cout. precision (2); Cout.width (6); Cout << 12.53; Cout.width (6);\n\nCout << 20.5; Cout.witdth (6); Cout <<2; } 12.53 * 20.5 * * 2 * * * * * Void main ( ) { Cout.self (ios : : internal | ios : : adjust field ); Cout.field (*); Cout. precision (2); Cout. width (10); Cout << -420.53; } Output : - - * * * 4 2 0 . 5 3 Displaying Trailing Zeros & Plus Sign Long self (long setbits) (i) ios : : show pint (for trailing zeros) (ii) ios : : show pos (for plus sign) void main ( ) { Cout.setf (ios : : show point); Cout.precision (2); Cout << 20.55 << end1; Cout << 55.55 << end1; Cout << 20.40 << end1; } Example :Void main ( ) { Cout.setf (ios : : showpoint); Cout.setf (ios : : show pos); Cout.setf (ios : : internal, ios : : adjust field); Cout.precison(3); Cout.width (10); Cout << 420.53;\n\n## output 20.55 55.55 20.40\n\n} Output + * 4 2 0 . 5 3 0\n\nFormatting Data Using Manipulators Non Parameterised Manipulators Manipulator 1. end1 2. dec 3. bex 4. oct 5. flush Description\n\nterminates the line and transfers the cursor to next row. set conversion base to 10. set the conversion field to 16. set the conversion field to 8 flushes the output screen\n\nExample :1. WAP to read a number in decimal and display it in hxadecimal. Void main ( ) { Cout << enter number; Cin a; Cin >> a; Cout << no is = << n << end; Cout << Its hexadecimal value= << hex<<n; Cout. self (ios : : show base); Cout << a; } Output:- no is = 64 Its hexaslccimal value = 40 0x 40 Example :- void main ( ) { Int n; Cout << enter number; Cin >> nex>> n;\n\nCout << number= << n; } Parameterised Manipulators Manipulator 1. setw (int) 2. setprecision (int) 3. setfil (char) 4. setbase (int) 5. setiosflag (long) 6. resetiosflag (long) Description sets the field width sets number of digits to be displayed after decimal pint sets the character to be field set the conversion base. Here possible value of int are 8.10 and 16. sets the format flag. resets the format flag.\n\nExample :Void main ( ) { Int n = 100; Cout << hex <<n << << dec <<n << end1; Float f = 122.3434; Cout << f << end1; Cout << setprecision (3); Cout << f << end1; Cout << setiosflag (ios : : internal } ios : : show base); Cout << hex << n << end1; Cout << setiosfloag (ios : : scientifie) << f << end1; } Output:64 100 122.34339 9 122. 343 0x0064 1. 2 2 3 c + 0. 2\n\nClass Array { Int *P; Int n; Public : Array (int size) { P=new int [size]; N=size; } Int & operator [ ] (int i) { Return (* (p+i) ); } Void fil (int pos, int value) { * (p+pos) = value; } Array ( ) { Delete [ ]p; } }; Void main ( ) { Array obj(5); Int x; For (int i=0; i<5; i++) { Cin >> x; Obj.fill (I, x); or obj [i] = x } For (I = 0; i<5; i++) { Cout << obj [i]; } } Overloading Of Operator ( )\n\nClass Box { Int l, b, h; Public: Box operator ( ) (int, int, int); Void get( ) { Cout << enter l, b and h; Cin >> l >> b >> h; } Void show( ) { Cout << l, << << b << << h; } }; Box Box : : Operator (int l, int b, int h) { Box temp; Temp.l = l + this l; Temp.b = b + this b; Temp.h = h + this h; Return (temp); } Void main ( ) { Box B1; B1.get( ); Box B2; B2 = B1 (5, 3, 9); // B2 = B1. operator ( ) (5, 3, 9); B1. show( ); B2. show( 0; }\n\n## GRAPHICS MODE PROGRAMMING\n\nSteps Required For Designing Graphics Mode Program\n\n1> Concert the display monitor from text mode to graphics mode. 2> Perform the required graphics operation like filling coloring, drawing etc. 3> Finally close the graphics mode and restore character mode. Converting Character To Pixels 1. void init gragraph (int *, int *, char *) Driver resolution path of BGI file Or Mode 2. Drawing using built in functions. 3. void close graph( ); Void restorecrtmode( ); Program:- WAP to couvert monitor display mode from char to pixel and print welcome message. #include <graphics.h> #include <conio.h> #include <stdlib.h> Void main( ) { Int gd, gm, ec; Gd= DETECT; Inttgraph (&gd, &gm, C:\\\\TC\\\\BGI); Ec= graphresult( ); If (ec!=grOk) or (ec!=0) { Printf (error in initialising); Exit(1); } Cleardevice Outtext ( welcome to graphics); Getch( ); Closegraph( ); Restarectmode( ); } Note:-\n\n## Int getmaxx( ); Int getmaxy( ):Assignment:-\n\nreturns the maximum value of x coordinate. returns the minimum value of if coordinate. Modify the above code so that now code display the Message at the necter of screen.\n\nVoid main ( ) { Int gd, gm, ec; Gd = DETECT; Initgraph (&gd, & gm, C:\\\\TC\\\\BGI); Ec=graphresult( ); If (ec ! = grOk) { Printf ( error); Exit(1); } Cleardevice( ); Int a = getmaxx( ); Int b = getmaxy( ); Outtextry ( ( a/2), (b/2), welcome to graphics); Getch( ); Closegraph( ); Restorecrtmode( ); } Q. WAP to accept user name & then convert screen to graphics mode & display the name given by user at the center of the screen on char at a time. Void main ( ) { Char str Int gd, gm, ec; Gd = DETECT; Initgraph (&gd, &gm, c:\\\\TC\\\\BGI); Ec = graphresult( ); If (ec!=grOk) { Printf ( error); Exit(1);\n\n} Cleardevicl ( ); printf (enter name); Move to (getmaxx( ) /2, getmaxy( )/2); For (i=0; str[i]; i++) { Spritf (msg, %c, str[i]); Out text (msg); Delay (1000); } } Changing the font style and size 1. void settextstyle (int font, int dir, int charsize) description of parameters font default FONT (0) TRIPLEX FONT (1) SMALL FONT (2) SANS SERIT FONT (3) GOTHIC FONT (4) SCRIPT FONT (5) SIMPLEX FONT (6) PRIPLEX SCRIPT FONT (7) COMPLEX FONT (8) EUROPEAN FONT (9) BOLD FONT (10) horiz dir (0) Vert dir (1)\n\nDir\n\nCharsize :- 1 to 10 1 being size of 8 x 8 pixel. Program:Void main( ) { Char * font style [ ] = { Defualt font ----, BOLDFONT); Int gd, gm, ec; char msg; Gd = DETECT;\n\nInitgraph (&gd, &gm, C: \\\\ TC\\\\ BGI); Ec = graphresult ( ); If (ec ! = 0) { Printf ( error); Exit (1); } Cleardevicl ( ) Move to (getmaxx( )/2, getmaxy( )/2); For (1=0; i<=10; i++) { Printf (msg = shiv - %s, font-style [i]); Settext style (I, 0, 1); Outtextry (getmaxx( )/2, getmaxy( )/2, msg); Getch( ); Cleardevicl ( ); } Cleardevicl( ); Restorecrtdevicl ( ); }\n\nDRAWING IN GRAPHICS\n1. For Drawing Line:a. void line (int x1, int y1, int x2, int y2) b. void linerel (int, int) c. void line to (int, int) Example :Void maiN ( ) { Int gd, gm, ec, x, y; Gd = DETECT; Initgraph ( &gd, &gm, c: \\\\TC\\\\BGI); Ec = graphresult( ); If (ec ! = 0) { Printf ( error);\n\nExit (1); } Cleardevicl ( ); X=getmaxx( )/2; Line (0, 0, x, y); Getch ( ); Closegraph( ); Restorertmode( ); y=getmaxy( )/2\n\nQ.WAP which draws a line from 20, 30 to 100 pixels further & display the wordinate of line at its end point. Void main ( ) { Int gd, gm, ec, x, y; Gd = DETECT; Initgraph (&gd, & gm, c: \\\\TC\\\\BGI); Ec = graphresult ( ); If (ec! = 0) { Printf ( error); Exit (1); } Cleardericl ( ); Move to (20, 30); Sprintf (msg, %d %d, get x( ), gety( ) ); Outtext (msg); Linerel (100, 100); Sprintf (msg, %d %d, getx( ), get y( ) ); Outtext (msg); } Drawing Linee In Specific Width & Style Void setline style (int style, int pattern, int thickness) Style:- No 0 1 2 Constant SOLID-LINE DOTTED LINE CENTER-LINE meaning solid line line withdot & dash\n\n3 4 Pattern:Thickners:-\n\nDASHED-LINE USERBIT-LINE\n\n## line with dashes\n\nIt is always zero except when first parameter is userbit line. 1 THICK-WIDTH 3 NORM-WIDTH\n\nVoid main( ) { Int gd, gm, ec, I; Char * style [ ] = { solidline}, Dotted-line,----, Dashedline}; Gd = DETECT; Initgraph ( &gd, & gm, C: \\\\TC\\\\BGI); Ec=graphresult( ); If (ec != grok) { Printf ( error); Exit(1); } Deardvice( ); For (i=0; i<4; i++) { Spritf (msg, %s in normal width, style[i]); Outtext (getmaxx( )/2-20, getmaxy( )/2-10, msg); Setlinestyle (I,0, NORM-WIDTH); Line (getmaxx( )/2-50, getmaxy( )/2 +20, getmaxx( )/2 +50, Getmaxy( )/2 +100); Getch( ); Cleardevicl( ); } Restorectdevicl( ); } Defining Pattens in User Defined Way Io define a user bit line wee have to build a sixbit pattern. In this pattern wheever a bit is one the curresposing pixel in the line is drawn in the current drawing colour. For eg:65535 or\n\nsetline style (4, OXFFFF, NORM-WIDTH); this will draw a solid line, setlinestyle (4, Ox3333, NORM-WIDTH) this will draw a dashed line. Drawing Arcs Or Circles Or Rectangles 1. void arc (int x, int y, intstangle, int endangle, int rad) 2. void piesline (int x, int y, int stangle, int endangle, int rad)\n\n3. void circle (int x, int y, int rad) 4. void rectangle (int left, int top, int right, int bottom) 5. void bar (int left, in top, int right, int bottom) Filling Image With Different Patterns Void set color (int) Void setfillstyle (int pattern, int style) Pattern:The pattern parameter signifies the pattern in which filing is to be made. value 0 1 2 3 Result Background color Solid Filling --------------------------\n\n## 5. SLASH-FILL 6 BKLASH-FILE 7. LTBKSLASH-FILE HATCH-FILE XHATCH-FILE INTERLEAVE-FILE WIDE-DOT-FILE CLOSE-DOT-FILE\n\n4 5 6 7 8 9 10 11\n\nWAP which draws a sectangle with white outline & red fill color & should display the fitling in all of the twelve patterns one at a time. Also name of pattern should be displayed with in the rectangle. Void main ( ) { Int gd, gm, ec, left, right, top, bottom, I; Char * style [ ] = { EMPTY-FILE, SOLID-FILE, ----, CLOSED-DOT-FILE}; Gd=DETECT Initgraph (&gd, &gm, C:\\\\TC\\\\BGI); Ec=grapgresult( ); If (ec !=0) { Printf (error); Exit (1); } Cleardevicl ( ); Left = getmaxx( )/2-100; Top=getmaxy( )/2-100 Right getmaxx( )/2+100; Bottom=getmaxy( )/2;\n\nFor(i=0; i<12; i++) { Setfill style (I, RED); Bar(left, top, right bottom); Rectangle (left, top, right, bottom); Outtextxy (left+50, top+50, style[i]); } }\n\n## FILLING CIRCLES & TRIANGLESS\n\n1. void floodfill (int x, int y, int boundary) (x, y) a point with in the figure which has to be filled using flodfill also known as sad. Boundary color:- Color at which filling should stop. Filling Circles With Different Pattern Void main ( ) { Char * pattern = { EMPTY-FILE, --------, CLOSE-DOT-FILE}; Int gd, gm, ec, x, y, 0; Gd=DETECT; Initgraph (&gd, &gm, C:\\\\TC\\\\BGI); Ec=graphresult( ); If (ec! = grOk) { Printf ( error); Exit(1); } Cleardevice( ); X=getmaxx( )/2; y=getmaxy( )/2; For (i=0; i<12; i++) { Setcolor (GREEN); Circle (x, y, 100); Setfill style (I, RED); Floodfill (x, y, GREER); Setcolor (WHITE);\n\nOuttextxy (x-50, y-50, patter[i]); Getch( ); Cleardivice( ); } Getch ( ); Restorecrtdevice( ); } Storing and Drawing Images On Screen 1. void getline (int, int, int, int, void *) 2. int imagesize (int, int, int, int) 3. void putimage (int, int, void *, int option) void getimage (int, int, int, int, void *) copies the bit image of specified postion in memoery 1st parameter indicates left coordinates 2nd parameterer to rd 3 parameter right th 4 parameter bottom th 5 parameter pointer to an array large enough to store bit pattern. Void putimage [int x, int y, void * are, int iption) Copies or outputs the image pattern form memoery to the specified portion on the screen. X = starting left coordinate Y = starting top coordinate Arr = maner un which color of the resultant pixel is to be decided, taking into consideration the pixels stored in memory & the pixels stores on screen. Int image size (int, int, int, int);Returns no. of bytes required to store the image, on any kind of error return -1 Void main ( ) {\n\nInt gd, gm, ec; Char * buffer, msg ; Int size -0f-image; Gd = DETECT; Initgraph ( &gd, &gm, C:\\\\TC\\\\BGI); Ec = graphresult( ); If (ec ! =0) { Printf (error); Exit (1); } Rectangle (150, 150, 200, 200); Size-of-image = image size (150, 150, 200, 200); If (size-of-image = = -1) { Outtextxy (getmaxx( )/2, getmaxy( )/2, Error); Getch( ); Close graph( ); Exit(1); } Buffer = (char *) malloc (size-of-image * size of (char) ); If (buffer = = NULL) { Outtextxy (getmaxx( )/2, getmaxy( )/2, Can not allocate memory); Gecth( ); Exit(1); Close graph( ); } Getimage (150, 150, 200, 200, buffer); Line (200, 220, 220, 220); Putimage (175, 200, buffer, COPY-PUT); Getch( ); Closegraph( ); Restore crtdevice( ); } VALUES COPY-PUT CREEN ON OFF EMORY ON ON OUTPUT ON ON\n\nON OFF XOR-PUT ON OFF ON OFF ON OFF ON OFF ON OFF ON OFF THE END\n\nOR-PUT\n\nAND-PUT"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.65817374,"math_prob":0.8851826,"size":93672,"snap":"2019-51-2020-05","text_gpt3_token_len":27006,"char_repetition_ratio":0.18238886,"word_repetition_ratio":0.15148419,"special_character_ratio":0.34407294,"punctuation_ratio":0.21095529,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9852558,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-06T20:58:26Z\",\"WARC-Record-ID\":\"<urn:uuid:08fd6ec5-c384-4aad-8cdd-32bbf9ca6025>\",\"Content-Length\":\"426279\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ecb06f41-a987-44b1-9978-31e70ce14fdd>\",\"WARC-Concurrent-To\":\"<urn:uuid:19829dc5-2986-46d6-847a-6efe94a707f7>\",\"WARC-IP-Address\":\"151.101.250.152\",\"WARC-Target-URI\":\"https://de.scribd.com/document/66806060/c-Notes-Complet\",\"WARC-Payload-Digest\":\"sha1:JSNTFBNYWFT42YURYUUWAWKXLLKNXZ64\",\"WARC-Block-Digest\":\"sha1:GNOKVPTHUKTSZTVXC7AHXZBLTA3MKVRN\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540490972.13_warc_CC-MAIN-20191206200121-20191206224121-00177.warc.gz\"}"} |
https://waseda.pure.elsevier.com/ja/publications/on-the-%CF%84-functions-of-the-reduced-ostrovsky-equation-and-the-a-su | [
"# On the τ-functions of the reduced Ostrovsky equation and the A (2) 2 two-dimensional Toda system\n\nBao Feng Feng*, Ken Ichi Maruno, Yasuhiro Ohta\n\n*この研究の対応する著者\n\n8 被引用数 (Scopus)\n\n## 抄録\n\nThe reciprocal link between the reduced Ostrovsky equation and the A (2) 2 two-dimensional Toda (2D-Toda) system is used to construct the N-soliton solution of the reduced Ostrovsky equation. The N-soliton solution of the reduced Ostrovsky equation is presented in the form of pfaffian through a hodograph (reciprocal) transformation. The bilinear equations and the τ-function of the reduced Ostrovsky equation are obtained from the period 3-reduction of the B or C 2D-Toda system, i.e. the A (2) 2 2D-Toda system. One of the τ-functions of the A (2) 2 2D-Toda system becomes the square of a pfaffian which also becomes a solution of the reduced Ostrovsky equation. There is another bilinear equation which is a member of the 3-reduced extended BKP hierarchy. Using this bilinear equation, we can also construct the same pfaffian solution.\n\n本文言語 English 355203 Journal of Physics A: Mathematical and Theoretical 45 35 https://doi.org/10.1088/1751-8113/45/35/355203 Published - 2012 はい\n\n## ASJC Scopus subject areas\n\n• 統計物理学および非線形物理学\n• 統計学および確率\n• モデリングとシミュレーション\n• 数理物理学\n• 物理学および天文学(全般)\n\n## フィンガープリント\n\n「On the τ-functions of the reduced Ostrovsky equation and the A <sup>(2)</sup> <sub>2</sub> two-dimensional Toda system」の研究トピックを掘り下げます。これらがまとまってユニークなフィンガープリントを構成します。"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8807845,"math_prob":0.88496417,"size":2313,"snap":"2021-43-2021-49","text_gpt3_token_len":674,"char_repetition_ratio":0.17973149,"word_repetition_ratio":0.77437323,"special_character_ratio":0.2741029,"punctuation_ratio":0.08656036,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9808993,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-09T05:00:44Z\",\"WARC-Record-ID\":\"<urn:uuid:7be8f247-c4f5-436c-8740-956757a7aa7d>\",\"Content-Length\":\"48154\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:aa1f085f-9d43-456b-8ef1-f237b6e063dc>\",\"WARC-Concurrent-To\":\"<urn:uuid:3350aeb3-afbb-4688-a02d-b38b4ee58468>\",\"WARC-IP-Address\":\"13.228.199.194\",\"WARC-Target-URI\":\"https://waseda.pure.elsevier.com/ja/publications/on-the-%CF%84-functions-of-the-reduced-ostrovsky-equation-and-the-a-su\",\"WARC-Payload-Digest\":\"sha1:YAXTXXL7SLDLIZHFCXECHV34EYIBMG7M\",\"WARC-Block-Digest\":\"sha1:KPEVD4LI7VJAXMM4TMMNHXFCHAZGCJK6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363659.21_warc_CC-MAIN-20211209030858-20211209060858-00457.warc.gz\"}"} |
https://www.intechopen.com/books/topics-in-adaptive-optics/measurement-error-of-shack-hartmann-wavefront-sensor | [
"Open access peer-reviewed chapter\n\n# Measurement Error of Shack-Hartmann Wavefront Sensor\n\nBy Chaohong Li, Hao Xian, Wenhan Jiang and Changhui Rao\n\nSubmitted: March 8th 2011Reviewed: August 12th 2011Published: January 20th 2012\n\nDOI: 10.5772/29430\n\n## 1. Introduction\n\nA Shack-Hartmann sensor is one of the most important and popular wavefront sensors used in an adaptive optics system to measure the aberrations caused by either atmospheric turbulence, laser transmission, or the living eye [1-7]. Its design was based on an aperture array that was developed in 1900 by Johannes Franz Hartmann as a means to trace individual rays of light through the optical system of a large telescope, thereby testing the quality of the image. In the late 1960s Roland Shack and Platt modified the Hartmann screen by replacing the apertures in an opaque screen by an array of lenslets [9-10]. The terminology as proposed by Shack and Platt was “Hartmann-screen”. The fundamental principle seems to be documented even before Huygens by the Jesuit philosopher, Christopher Scheiner .\n\nThe schematic of a Shack-Hartman wavefront sensor is shown in Figure 1. It consists of an array of lenses (called lenslets, see Figure 1) of the same focal length. Each is focused onto a photon sensor (typically a CCD array or quad-cell). The local tilt of the wavefront across each lens can then be calculated from the position of the focal spot on the sensor. Any phase aberration can be approximated to a set of discrete tilts. By sampling an array of lenslets, all of these tilts can be measured and the whole wavefront can be approximated. Since only tilts are measured, the Shack-Hartmann can not measure the discontinuous steps of wavefront.\n\nTyler and Fried have obtained the theory expression, which evaluates the angular position error when a quadrant detector is used in the SHWFS . The formula they obtained, based on circular aperture diffraction, is shown in Eq. (1)\n\nσ=3π161SNRλDE1\n\nwhere SNR is defined as the ratio of the signal’s photoelectron counts to the noise’s fluctuation intensity within the detection area, λ is the wavelength, and D is the diameter of the aperture. Their analysis did not discuss the size of the incoming light spot on the detection area in detail. The formula was obtained based on a quadrant detector alone. Generally, the theory expression obtained by Tyler and Fried is not suitable for describing the angular position error when the scale of the discrete detector arrays is greater than 2×2 pixels.\n\nHardy has described formulas that can be used to evaluate the angular position error , under the conditions that the photon shot noise of signal is dominant. Although his formulas discussed the size of the diffraction-limited spot on the discrete detector arrays, it is reliable only under the approximation condition that l/f>>λ/Dor l/f<<λ/Dis satisfied, where l is the length of a pixel, f is the focal length, and Vsis the count of signal photoelectrons. Eq. (2) shows the formulas based on square aperture diffraction.\n\nσ={0.277λD/Vs1/2(whenl/f<<λ/D)0.500λD/Vs1/2(whenl/f>>λ/D)E2\n\nCao et al. have also analyzed the measurement error of a SHWFS. Their work emphasized the discrete sampling error of a CCD and obtained a formula which is used to describe the centroid position error induced by the readout noise of the CCD and the photon shot noise of the signal . Their research results are only the approximation of some cases discussed in this article. Jiang et al. have partially analyzed the measurement error of SHWFS and performed their method by setting a fixed threshold to suppress the impact of the random noise .\n\nIn this chapter, the wavefront error of a Shack-Hartmann wavefront sensor was analyzed in detail based on the research results of angular position error and wavefront error [16-17]. and the formula used to evaluate the wavefront error was derived, it concerns with the signal to noise ratio, number of photons and reconstruction matrix also.\n\n## 2. The angular position error caused by random noise\n\nThe wavefront to be measured is segmented into many subwavefronts by lenslet arrays, and the light spots at the focal plane of the subapertures are detected by the CCD. Particularly, the analysis is based on the notion that the wavefront is essentially flat over each subaperture and r0>D(r0is the coherent length of incoming wavefront). The centroid position can be calculated by Eq. (3) . The detection area of the subaperture is L1×L2pixels, and xnmand ynmare the (n,m)th pixel’s X coordinate and Y coordinates, respectively. Inmis the total intensity in the (n,m)th pixel, including the signal photons and all other noises.\n\nxi=m=1L1n=1L2xnmInmm=1L1n=1L2Inm,yi=m=1L1n=1L2ynmInmm=1L1n=1L2InmE3\n\nThe formulas which evaluate the centroid error associated with the signal’s photon shot noise and the readout noise of the detector, respectively, have been derived by Cao et al. When the photon shot noise of the signal is considered alone, the centroid fluctuation error was obtained by introducing the Gauss width of the signal. When the readout noise of the detector is considered alone, the centroid fluctuation error was also obtained, and its results are shown in Eq. (4) and Eq. (5), respectively . Nr is the rms error induced by the fluctuation of the readout noise in each pixel (with units of photoelectron counts). Vr is the sum of the readout noise’s photoelectron count among all of the pixels within the corresponding subaperture.\n\nϕcs2=Gs2VsE4\nϕcr2=Nr2Vr2L1L2L21112E5\nϕcs2is the variance of the centroid fluctuation in one direction ( X or Y), induced by the photon shot noise of signal itself, and ϕcr2is the variance of the centroid fluctuation in one direction ( X or Y), induced by the readout noise of the detector. ϕcr2and ϕcs2are both defined in pixel2 units. Gsis the equivalent Gauss width of the signal spot and is defined in pixel units by the expression:\nGs=ηflλDE6\n\nwhere η is the positive constant. η is 0.353 when the diffraction aperture is square and 0.431 when the diffraction aperture is circular.\n\nBased on Eq. (3), the centroid position in the X direction can be expressed by Eq. (7). The detailed derivation process of this expression is shown in appendix 1.1.\n\nxc=sbr1+sbrxcs+11+sbrxcbE7\n\nwhere xcis the calculated centroid position of the signal in the X direction, xcsis the real centroid position of the signal in the X direction, and xcbis the centroid position induced by the total noise, except for the signal in the X direction, where the total noise largely includes the readout noise of the detector and the heterogeneous light noise.sbr=<Sn,m>/<Bn,m>, <Snm>is the collective average of the signal intensity in the (n,m)th pixel, and <Bnm>is the collective average of the total noise intensity in the (n,m)th pixel (with units of ADU).\n\nBased on the error transition principles, the rms error of centroid measurement induced by random noise in the X direction can be written in Eq. (8) as:\n\nϕc=[(sbr1+sbr)2ϕcs2+(11+sbr)2ϕcb2]1/2E8\n\nwhere ϕcsis the rms error of centroid measurement in the X direction induced by the signal’s photon shot noise andϕcbis the rms error of centroid measurement in the X direction induced by all the other noise. The signal is mostly comprised of heterogeneous light and readout noise.\n\nIf there were no heterogeneous light and readout noise in the detection area, the signal’s photon shot noise should be the unique noise resource which affects centroid measurement. Based on Eq. (4) and Eq. (6), when the discrete sampling error of the detector is ignored, the rms error of angular position in the X direction caused by the photon shot noise of the signal can be written as:\n\nσ1=(Gs2Vs)1/2lf=ηλDVs1/2E9\n\nWhen the photon shot noise of the signal is small compared with the readout noise and the heterogeneous light noise, the heterogeneous light noise and the readout noise become the primary noise, which affects centroid calculation. When the heterogeneous light noise can be considered as a uniform noise, like the readout noise of the detector, it exists in each pixel and it has the same fluctuation characteristics among the pixels in the detection area. So, the noise in one pixel (including the heterogeneous light noise and the readout noise of the CCD) can be summed and described byNb. Nbis defined as the rms error of the heterogeneous light noise and the readout noise photoelectron count in one pixel, and it has the same fluctuation characteristics as the readout noise of the detector. Nbhas units of ADU. Subsequently, the rms error of centroid measurement in the X direction caused by the heterogeneous light noise and the readout noise of the CCD can be written as:\n\nϕc=11+sbrϕcb=(1+Snm/Bnm)1[L1L2(L121)/12]1/2Nb/Vb=(Bnm/Nb+C(λ,D,l,f)snr)1[L1L2(L121)/12]1/2E10\n\nwheresnr=max(Snm)/Nb, Vb=Bnm, and max(Snm)is the signal’s peak intensity. snr is defined as the ratio of the signal’s peak intensity to the rms error induced by the background noise, Bnmis the average intensity of noise in the (n, m)th pixel, which includes the heterogeneous light noise and the readout noise of the CCD. C(λ,D,l,f)is the light spot constant which is defined as the ratio of the total signal intensity to the signal’s peak intensity in the subaperture, and its value can be measured or calculated exactly byC(λ,D,l,f)=Snm/max(Snm).\n\nThe intensity distribution of the signal’s light spot at the focal plane of the subaperture can be calculated by circular or square aperture diffraction approximations. On the other hand, the Gauss distribution can also be used to approximately describe the intensity distribution of the light spot. The analytic expressions of C(λ,D,l,f)are described in Eq. (11) with different approximation conditions. The detailed derivation process is shown in appendix 2.1~2.3. The value of C(λ,D,l,f)can be calculated by Eq. (11).\n\nC(λ,D,l,f)={1/(1J02(r1)J12(r1))(circularaperturediffractionapproximation)[π2/(sin2x1x1+k=0+(1)k×(2x1)2k+1(2k+1)×(2k+1)!)]2(squareaperturediffractionapproximation)1/[1exp(n2l2D22πf2λ2)](Gaussdistributionapproximation)E11\n\nwherex1=πDλsin(θ)=πDλsin(l/2f)=πDl2λf,r14πx1=4ππDl2λf, n is a positive constant, and θ is the diffraction angle.\n\nWhen the direct current part of the noise (including the heterogeneous light noise and the readout noise of the CCD) is subtracted, it can be considered as white noise, andBnm=0. Then, the standard deviation of the angular position error in the X direction caused by noise can be described by Eq. (12):\n\nσ2=[L1L2(L121)/12]1/2C(λ,D,l,f)1snrlfE12\nWhenL1=L2=L,the Eq. (12) can also be expressed by Eq. (13)\nσ2=[L2(L21)/12]1/2Nb/(L2Nb+Vs1/2)C(λ,D,l,f)max(Snm)/(L2Nb+Vs1/2)lf=[(L21)/(12L2)]1/2L2Nb(L2Nb+Vs1/2)l/fλ/D1SNRλD=ω1SNRλDE13\n\nwhere SNR has the same definition as in Eq. (1). L2Nbis the sum of the rms error of total noise among all of the pixels within the detection area, and expresses the total intensity of noise fluctuation. Vs1/2expresses the photon shot noise induced only by the incoming signal. ω is the position error constant, and it is weighted by the intensity of background noise and the intensity of the signal’s photon shot noise (defined in Eq. (14)):\n\nω=[(L21)/(12L2)]1/2l/fλ/DL2Nb(L2Nb+Vs1/2)E14\n\nSubstituting Eq. (13) and Eq. (9) into Eq. (8), with the assumed condition that there are no correlations among the photon shot noise of the signal, the heterogeneous light noise, and the readout noise of CCD, then the total rms error of angular position in the X direction caused by random noise can be obtained:\n\nσ=(σ12+σ22)1/2ω1SNRλD+ηλDVs1/2E15\n\nEq. (15) is the desired result which can be used to precisely describe the angular position error of a Shack-Hartmann wavefront sensor caused by random noise, and therefore, the centroid algorithm is used to calculate the spot position of the incoming light. Generally, when the ideal detector with very small readout noise is used and there is no background light noise (ω0), the photon shot noise of the signal becomes the theoretical limits imposed on the angular position measurement. Eq. (9) showed this expression. In practice, the theoretical limits may not be achieved for the hardware and environment limitations. When the photon shot noise is small enough compared with the heterogeneous light noise and readout noise, it could be ignored in Eq. (15), and Eq. (13) could be used to describe the angular position error caused by the random noise approximately. Commonly, it has enough accuracy. The position-error constant ω described in Eq. (14) is concerned with the scale of the discrete detector arrays in the detection area, the noise characteristics of the detector, and the system parameters. Clearly, the formula based on a quadrant detector obtained by Tyler and Fried is only a special case in this article. On the other hand, the formula obtained in Eq. (13) is suitable to evaluate the angular position error for both a circular and square aperture.\n\n## 3. Wavefront measurement error caused by centroid position random error\n\nIn this chapter, Zernike modes are used as the basis for wavefront reconstruction. The wavefront measurement error can be written as [13, 19]\n\nΔϕ=ϕϕ'=j=1P(ajaj')Zj=j=1PΔajZjE16\n\nwhereΔϕis the wavefront measurement error induced by centroid position random error, ϕis the wavefront to be measured, ϕ'is the wavefront detected, P is the total number of Zernike modals, aj is the jth Zernike coefficient, and Z expresses the Zernike polynomial. Then, the mean-square of wavefront measurement error can be written as shown in Eq. (17) . The angle brackets denote a collective average.\n\nσϕ2=<ϕ2><ϕ'2>=j=1P<|Δaj|2>E17\n\nBased on the principles of the Zernike modal wavefront reconstruction algorithm , the Zernike-coefficients vector of a wavefront can be obtained:\n\nA=EHE18\n\nwhere E is the modal reconstruction matrix and H is the wavefront slope vector.\n\nTherefore, the variance of the modal Zernike coefficients that describe the wavefront measurement error can be written as:\n\n<|Δaj|2>=<|k=12Qej,kΔhk|2>=k=12Ql=12Qej,kej,l<ΔhkΔhl>E19\n\nwhere Q is the total number of subapertures, ej,kis the element of E, and Δhkis the error of the kth slope element.\n\nIn order to simplify analysis, we assume that there are no correlations among the different slope vectors in the corresponding subapertures and the intensity of the signals are uniform and isotropic among the different subapertures Subsequently, the following expression can be obtained:\n\n<ΔhkΔhl>=σc2f2δ(jkjl)E20\n\nwhere σc2is the variance of centroid position random error induced by random noise, f is the focal length of lenslets, δ(x,y)is the Kronecker delta function , and j and k are the subapertures which are connected with the slope hk and hl. Substituting Eq. (20) and Eq. (19) into Eq. (17), the mean-square of wavefront measurement error can be written as:\n\nσϕ2=j=1P<|Δaj|2>=j=1Pσg2K(j,Q)=j=1P(σcff0)2K(j,Q)E21\n\nwhere σgis the wavefront average slope of the corresponding subaperture in the unit circle,K(j,Q)=k=1Q(ej,2k1+ej,2k)2. It is concerned with the subaperture segmentation number and the distribution of subapertures. f0describes the normalized relationship between the real wavefront slope vector and the normalized wavefront slope vector in the unit circle, and is defined by the expression:\n\nf0=D2λE22\n\nwhere D is the diameter of the aperture and λ is the measuring wavelength.\n\nThen, the root mean square value of wavefront measurement error caused by centroid position random error is obtained:\n\nσϕ=σcD2λf[j=1Pk=1Q(ej,2k1+ej,2k)2]1/2E23\n\nEq. (23) is the desired expression used to evaluate the wavefront measurement error associated with the centroid position random error. σcis the standard deviation in pixels of centroid position random error caused by random noise. The formula described in Eq. (23) can help us to decide what the wavefront measurement error will be when the centroid position randomly fluctuates due to random noise, and it may be a factor that must be considered during the design of the SHWFS.\n\n## 4. Wavefront measurement error analysis based on Zernike modal reconstruction\n\nIn a Shack-Hartmann wavefront sensor, the angular position can be calculated from the centroid position in each subaperture and is proportional to the centroid position. The relationship between centroid and angular position can be described by\n\nσ=σcfE24\n\nIn Eq. (15), the angular position error caused by random noise was obtained. In Eq. (23), the wavefront error caused by random centroid error was obtained. Therefore, the total wavefront measurement error can be described by Eq. (25):\n\nσϕ=σfD2λf[j=1Pk=1Q(ej,2k1+ej,2k)2]1/2=12(1SNR+η/Vs)j=1Pk=1Q(ej,2k1+ej,2k)2E25\n\nIn this formula, we can determine the wavefront measurement error concerned with SNR (see the definition in Eq. (1)), aperture of lenslets (see the definition in Eq. (6)), counts of effective signal, and the reconstruction matrix parameters (see the definition in Eq. (19)).\n\n## 5. Conclusions\n\nIn this chapter, the exact formula (Eq. (25)), which evaluates the Shack-Hartmann wavefront sensor’s measurement error associated with the signal to noise ratio of effective signal, was derived in detail. This study was performed based on a modal wavefront reconstruction with Zernike polynomials, and provided an exact and universal formula to describe the wavefront measurement error of a Shack-Hartmann wavefront sensor with discrete detector arrays. It is critical to an adaptive optics system when the Shack-Hartmann sensor is used as the wavefront sensor, and it provides a reference when designing a Shack-Hartmann wavefront sensor and calculating its reconstruction matrix.\n\n## Acknowledgments\n\nWe would like to give our thanks to Shanqiu Chen, Li Shao, Daoai Dong, and Xuejun Zhang for their great discussion and assistance. We will also give our special thanks to Kevin M. Ivers for his great help in writing this chapter.\n\nchapter PDF\nCitations in RIS format\nCitations in bibtex format\n\n## More\n\n© 2012 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.\n\n## How to cite and reference\n\n### Cite this chapter Copy to clipboard\n\nChaohong Li, Hao Xian, Wenhan Jiang and Changhui Rao (January 20th 2012). Measurement Error of Shack-Hartmann Wavefront Sensor, Topics in Adaptive Optics, Robert K. Tyson, IntechOpen, DOI: 10.5772/29430. Available from:\n\n### chapter statistics\n\n1Crossref citations\n\n### Related Content\n\n#### This Book\n\nEdited by Robert Tyson\n\nNext chapter\n\n#### Acceleration of Computation Speed for Wavefront Phase Recovery Using Programmable Logic\n\nBy Eduardo Magdaleno and Manuel Rodríguez"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.91842324,"math_prob":0.9600151,"size":10280,"snap":"2021-04-2021-17","text_gpt3_token_len":2124,"char_repetition_ratio":0.17127287,"word_repetition_ratio":0.0809838,"special_character_ratio":0.19747081,"punctuation_ratio":0.08803006,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9897733,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-21T03:21:17Z\",\"WARC-Record-ID\":\"<urn:uuid:edbbe7ff-09d4-47eb-bcf1-97a81cb36c3e>\",\"Content-Length\":\"512866\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2001166a-c382-40a6-93f3-626646901e6a>\",\"WARC-Concurrent-To\":\"<urn:uuid:2383fac6-cd00-47a2-8610-eb7c028b6a0d>\",\"WARC-IP-Address\":\"35.171.73.43\",\"WARC-Target-URI\":\"https://www.intechopen.com/books/topics-in-adaptive-optics/measurement-error-of-shack-hartmann-wavefront-sensor\",\"WARC-Payload-Digest\":\"sha1:LVQRM5MVC3JPFFVH4KYXILS7SM65TVMX\",\"WARC-Block-Digest\":\"sha1:YXEVEFL7C6GWLCVQ7PDRADUCNOBDCZQ5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703522150.18_warc_CC-MAIN-20210121004224-20210121034224-00226.warc.gz\"}"} |
https://socratic.org/questions/58d2b088b72cff2f1f5fd5f0 | [
"# Question #fd5f0\n\nJun 7, 2017\n\nI'll do the first one to show the general method of solving differential equations of this form.\n\nThese are all separable differential equations, meaning that all terms with $x$, including $\\text{d} x$, can be moved to one side of the equation, while all terms including $y$ can be moved to the other.\n\nFrom there, both sides can be integrated independently, and the constant of integration found using the initial condition.\n\nDepending on context, it may or may not be possible to solve for $y$ explicitly.\n\nIn the first example\n\n$\\left(\\text{d\"y)/(\"d} x\\right) = \\frac{x}{y} ^ 2$\n\nwe can cross multiply to isolate $x$ terms and $y$ terms as:\n\n${y}^{2} \\textcolor{w h i t e}{.} \\text{d\"y=xcolor(white).\"d} x$\n\nIntegrate both sides:\n\n$\\int {y}^{2} \\textcolor{w h i t e}{.} \\text{d\"y=intxcolor(white).\"d} x$\n\n$\\frac{1}{3} {y}^{3} = \\frac{1}{2} {x}^{2} + C$\n\nNote that $C$, an arbitrary constant of integration, has been added to only the right-hand side of the equation. This is arbitrary—we just as easily could have written $\\left(1 / 3\\right) {y}^{3} + {C}_{1} = \\left(1 / 2\\right) {x}^{2} + {C}_{2}$ or $\\left(1 / 3\\right) {y}^{3} + {C}_{3} = \\left(1 / 2\\right) {x}^{2}$. All are analogous forms.\n\nProceeding with the more standard form, which is to include the constant with the $x$ terms, we can solve for $C$ using the initial condition—that when $x = 1$, $y = 1$ as well.\n\n$\\frac{1}{3} {\\left(1\\right)}^{3} = \\frac{1}{2} {\\left(1\\right)}^{2} + C$\n\n$C = - \\frac{1}{6}$\n\nSo:\n\n$\\frac{1}{3} {y}^{3} = \\frac{1}{2} {x}^{2} - \\frac{1}{6}$\n\n$y = f \\left(x\\right) = {\\left(\\frac{3 {x}^{2} - 1}{2}\\right)}^{1 / 3}$"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.79859424,"math_prob":0.9999987,"size":828,"snap":"2021-43-2021-49","text_gpt3_token_len":265,"char_repetition_ratio":0.1092233,"word_repetition_ratio":0.0,"special_character_ratio":0.3031401,"punctuation_ratio":0.07909604,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99992836,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-07T16:12:38Z\",\"WARC-Record-ID\":\"<urn:uuid:13ec1b69-6ed1-4843-943d-9a10d63bdaad>\",\"Content-Length\":\"36628\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5eda3710-39e2-44ef-854f-3f5bc4448412>\",\"WARC-Concurrent-To\":\"<urn:uuid:f2f9e306-773c-4129-a3da-81ec689bb529>\",\"WARC-IP-Address\":\"216.239.36.21\",\"WARC-Target-URI\":\"https://socratic.org/questions/58d2b088b72cff2f1f5fd5f0\",\"WARC-Payload-Digest\":\"sha1:27PA3DFYO5WQYTDPUN2467FY6JQALBFO\",\"WARC-Block-Digest\":\"sha1:RC56E5SR7KWX6VJIKB32HXVFAMJD2LHO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363400.19_warc_CC-MAIN-20211207140255-20211207170255-00589.warc.gz\"}"} |
http://pubs.sciepub.com/education/6/11/12/index.html | [
" Influence of Problem Based Learning Model and Early Mathematics Ability to Mathematical Communication Skills and Self-Confidence in Junior High School",
null,
"Publications are Open\nAccess in this journal\nArticle Versions\nExport Article\n• Normal Style\n• MLA Style\n• APA Style\n• Chicago Style\nResearch Article\nOpen Access Peer-reviewed\n\nInfluence of Problem Based Learning Model and Early Mathematics Ability to Mathematical Communication Skills and Self-Confidence in Junior High School\n\nAmerican Journal of Educational Research. 2018, 6(11), 1539-1545. DOI: 10.12691/education-6-11-12\nReceived September 10, 2018; Revised October 30, 2018; Accepted November 23, 2018\n\nAbstract\n\nThis study aims to determine: (1) the effect of problem based learning model on students mathematical communication skills, (2) the effect of problem based learning model on students self-confidence, (3) the interaction between learning models (problem based learning and conventional learning) and early mathematics ability to students mathematical communication skills, and (4) the interaction between learning models (problem based learning and conventional learning) and early mathematics ability to student's self-confidence. This study is a quasi-experimental research. The population in this study were all students of class VIII junior high school 2 Tanjung Pura consisting of six classes. Samples were selected by simple random sampling as much as two classes. The selected class is VIII-2 as the experimental class (31 students) and VIII-1 as the control class (32 students). The instruments used are mathematical communication skills test, early mathematics ability test, self-confidence questionnaire. Data obtained from the research instruments were analyzed using two-way ANOVA by SPSS program. The results showed that: (1) there is significant effect of problem based learning model to students mathematical communication skill, (2) there is a significant effect of problem based learning model to students self-confidence, (3) there is interaction between learning models (problem based learning and conventional learning) and early mathematics ability to students' mathematical communication skills, (4) there is interaction between learning models (problem based learning and conventional learning) and early mathematics ability to student's self-confidence.\n\n1. Introduction\n\nIn general mathematics has five objectives in mathematics learning. These objectives include: developing mathematical attitudes, gaining proficiency in the use of mathematical language, gaining insight into applications from mathematics in other disciplines 1. One of the goals of mathematics learning has a purpose so students can communicate well. This is in accordance with the standard of the mathematics learning process formulated by the National Council of Teacher of Mathematics 2, namely: 1) mathematical problem solving, 2) mathematical reasoning, 3) mathematical communication, 4) mathematical connections, and 5) mathematical representation. Learn most of the communication and construct their own knowledge. In addition, with communication students can improve vocabulary, develop speaking skills, write ideas systematically and have better learning abilities 3, 4. Mathematical communication has an important role for students in formulating mathematical concepts and strategies, investing students in the completion of exploration and mathematical investigation, and means for students to communicate to obtain information, share ideas and findings 5, 6. However, the results of the study show that many students experience difficulties in mathematics. Arifin, Kartono, and Sutarto 7 that students' mathematical communication skills are still low. Students have difficulty identifying and conveying ideas contained in a problem 8.\n\nLearning objectives are also seen from the affective domain. In this case one of the affective domains that students need to have in mathematics learning is students' self-confidence attitude. Self-confidence can be interpreted as one part of feelings and thoughts about who we really are 9. In relation to mathematics learning, self-confidence can be built by erasing the impression that mathematics lessons place students as objects by accepting the theory and memorizing formulas. The ease in learning mathematics can make students appreciate and love mathematics 10, 11. From the results of other studies indicate that there is an influence between students 'confidence in students' mathematics learning outcomes 12, 13. In fact, students still have problems with confidence. Students always complain that they have no ability, especially in learning mathematics. When learning, students easily give up and complain that learning is difficult. If asked to work on a question in front of the class, students are overly afraid and feel uncertain about the answer 10, 14.\n\nMore specifically, the low mathematical communication skills and self-confidence of students cannot be separated from the teacher's view of the meaning and model of learning. This is in accordance with what was revealed by Slameto 15 that the role of teachers in the teaching and learning process is to encourage, guide, and provide learning facilities for students to achieve goals. One learning model that can be used to answer these problems is a problem based learning model. Rusman 16 suggests that problem based learning is useful to facilitate the success of problem solving, communication, group work and interpersonal skills better than other approaches. The results show that problem-based learning is effective to improve students' mathematical communication skills. When problem solving is used as a context in mathematics, the focus of learning activities is entirely on students, namely the process to understand a mathematical concept and procedure contained in the problem 17, 18. Amalia, Surya, and Syahputra 19 says that in the use of problem based learning, students were guided to find their own answers by following the steps of the PBL model. As in other studies problem based learning makes students more creative, dare to make decisions, think rationally and collaborate effectively with their classmates 20.\n\nThe success of students in taking mathematics lessons is also very influential on the early mathematical ability factor. Akramunnisa and Sulestry 21 revealed in the learning process, the teacher must pay attention to the students' early mathematical abilities in solving mathematical problems, because the mathematical concepts that are related to each other and form new concepts that are more complex. By knowing the early ability of each student's mathematics, the teacher will be easier in determining the method or strategy suitable for use in the classroom so that the learning carried out will be more effective and efficient 22. The expected end result, the problem based learning model can stimulate students to be more confident both in terms of communication verbally and in writing because each student always interacts between the teacher and other students. This is reinforced by the results of research which revealed that students who have high confidence in mathematics will easily answer mathematical communication problems 23.\n\n2. Literature Review\n\n2.1. Mathematical Communication Skills\n\nFiske 24 also states that communication as social interaction through messages and all communication involves signs and codes. Effendy 25 the communication process is essentially a process of delivering thoughts or feelings by a person (communicator) to others (communicant). The mind can be in the form of ideas, information, opinions, and others that emerge from his mind. Baroody 26 says that education should help children to communicate mathematical ideas with the representation, listening, reading, discussion and writing. Communication through mathematics, teachers can foster student engagement and participation while focusing on a deep conceptual understanding has been mentioned in the general standard of mathematics. Development the language of mathematics to note for the students to better understand mathematical concepts underlying 27. Yusra and Saragih 28 says that the mathematical communication skills is the ability to disclosure of mathematical ideas with symbols, tables, diagrams, or other media to clarify the issue of mathematics and delivered with a mathematical language in teaching and learning mathematics in the learning process of mathematics.\n\nIn this research study, mathematical communication skills of junior high school (SMP) has been expressed by Manitoba Education 29 that the communication in mathematics students daily delivery either orally, through diagrams and pictures, and writing about mathematics. Students need opportunities to read, present, view, write, listen, and discuss mathematical ideas. It is almost similar is said by NCTM 30 that the communication in mathematics for secondary level there are a few things to note are (1) related to mathematical ideas, (2) access some answers solution method, (3) allows multiple presentation of diverse, and (4) a chance in the interpretation, and hypothesizing. Mathematical communication skills Junior high school students are measured from four aspect, namely: 1) convert mathematical situations or ideas into drawing, 2) formulate mathematical ideas into language and mathematical symbols, 3) chage information from an image/table into mathematical language and symbols, and, 4) explain procedures settlement.\n\n2.2. Self-Confidence\n\nSelf-confidence is a positive mental attitude from a person individuals who position or condition themselves to be able to evaluate themselves and their environment so that they fell comfortable doing activities in an effort achieve planned goals 31, 32. Brewer and Barlow 33 also suggest that self-confidence is closely related to ability and knowledge in a particular domain and is influenced by the amount of information taken, the clarity and details of the target. As well as the influence of the domain of knowledge, self-confidence is also influenced by students’ beliefs about, awareness of themselves, general abilities in learning and strategy. Similar to this, Pierce and Stacey 34 defines mathematical beliefs, as students' perceptions of their ability to achieve good results and their assurance that they can handle difficulties in mathematics. Margono 35 divides ones’s confidence in mathematics into three components, namely as follows: 1) trust in understanding and self-awareness regarding mathematical abilities, 2) ability to realistically determine the goals to be achieved and develop an action plan as an effort to achieve the intended goals, and 3) trust in mathematics itself.\n\n2.3. Early Mathematics Ability\n\nAccording to Rusman 16 knowledge of students' early abilities is important for teachers to be able to provide the right portion of the lesson and it is useful to take the necessary steps. Early mathematical abilities describe the readiness of students in doing mathematics learning activities. Another thing explained by Astuti 36 is that initial knowledge is a framework in which students filter new information and find meaning about what is being learned in the learning process. Russeffendi 37 states that from a group of students who are randomly selected (not specifically chosen) there are always a number of children with high, moderate and low ability who are normally distributed. The student's initial mathematical abilities are important to know before he starts with his learning, because it can be known whether students already have or knowledge that is a prerequisite for learning. The extent to which students have known what material will be presented. By knowing this, the teacher will be able to design learning better. Because if students are given material that is already known, they will feel bored quickly. Classification of students' initial knowledge consists of groups consisting of smart groups, moderate and less 38. Akramunnisa and Sulestry 21 reveal the initial ability of mathematics is the level of students' ability to solve existing mathematical problems its relationship with the material underlying these questions. The students' initial mathematical abilities that vary greatly influence the achievement of the next mathematics learning outcomes.\n\n2.4. Model Problem Based Learning\n\nProblem based learning is learning that optimizes students' thinking skills through a systematic process of group work or teams, so students can empower, sharpen, test, and develop their thinking skills in a sustainable manner 16. Problem based learning will encourage students to think divergently. Divergent thought patterns will lead them to the formation of creativity. Learning by giving problems that contain many solutions or many ways of solving can improve students' mathematical creativity 39. Problem based learning has an effect on content knowledge which provides greater opportunities for students to learn with more involvement and increase student active participation, motivation and interest among students. This causes students to have a positive attitude towards mathematics and help them to improve their performance for the most part and which will cause long-term memory 26.\n\nProblem Based Learning is an effective approach to high level thinking. This learning helps students to process the information that has been formed in their minds and compiles their own knowledge about the social world and its surroundings. In this study the stages of the problem based learning model are: 1) Giving problems to students; 2) Reviewing the problem given; 3) Guiding individual and group mastery; 4) Develop and present the work; and 5) Analyze and evaluate the problem solving process. By using problem based learning that is able to develop inquiry and problem solving as well as collaborative, communicative, cooperative learning and learning processes, self-direction can be influenced by students' mathematical communication skills and self-confidence. From the various issues above, the problems that will be discussed in this study are:\n\n1. Is there an effect of problem based learning model on students' mathematical communication skills?\n\n2. Is there an effect of problem based learning model on student self-confidence?\n\n3. Are there interactions between the learning model and early mathematical abilities towards students' mathematical communication skills?\n\n4. Are there interactions between the learning model and the students early mathematical abilities towards students' self-confidence?\n\n3. Methods\n\nThe type of research used in this study is quasi-experimental. This research was conducted at junior high school 2 Tanjung Pura. The population in this study were all eighth grade students. While sample selection is done by cluster random sampling technique. The sample in this study was class, VIII-2 which was used as an experimental class with total 31 students and class VIII-1 was used as a control group totaling 32 students. This study involved two classes treated differently. The experimental group is treated by applying the problem based learning model, while the control group is treated by applying conventional learning. The instruments used are mathematical communication skills test consists of 4 essay, early mathematics ability test amounted to 6 essay, self-confidence questionnaire of 33 statements. Data obtained from research instruments were analyzed using two-way ANOVA through the SPSS program.\n\n4. Result\n\n4.1. Findings\n\nInitial description presented is the result of test early mathematical ability is given to know the average equality experimental class and control class. This test is also conducted to group students, namely high, medium, and low. The results of the summary presented in Table 1. below.\n\nTable 1. gives a conclusion that the average score of the early mathematical abilities for each sample class is relatively the same. For the problem based learning class obtained 73.26 while the conventional learning class was obtained 71.7. So if there is a difference in the results of students' abilities in each sample class is caused by different treatments, not because there are differences before learning. Next, the results of the calculation of mathematical communication skills and self-confidence of students can be seen in the following Table 2:\n\nIn Table 1, it can be seen the average mathematical communication skills and self-confidence of the two groups of students who are taught with the problem based learning and conventional learning models. The problem based learning model obtained an average mathematical communication ability of 76.63, while students who received regular learning obtained an average mathematical communication skills of 69.34. While the problem based learning model obtained an average self-confidence of 72.24, while students who received conventional learning gained an average self-confidence of 68.18. To determine the significance of the data in statistical testing with a two-way anava test, it was previously tested for data normality and homogeneity. The results of homogeneity and normality of the two classes from the tests of mathematical communication skills and self-confidence questionnaire indicated that the two sample groups had homogeneous variance and normal data distribution. The following are the results of the two-way anava calculation of mathematical communication skills presented in Table 3, namely.\n\nBased on the results of a two-way ANOVA in Table 3 above, the p-value was obtained for the study was 0,010 < 0,05, that is enough evidence to reject H0 and accept H1. Means that there is influence of problem based learning models on students' mathematical communication skills. In other words the influence of the problem based learning model on mathematical communication skills is better than conventional learning on mathematical communication skills. In the interaction of learning models (problem based learning and conventional learning) with early mathematical ability obtained p-value = 0.044 < 0.05 or in other words enough evidence to reject H0, thus significantly there is an influence of interaction between learning (problem based learning and conventional) with the early ability of mathematics towards students' mathematical communication skills. The following are the results of the two-way analysis of students' self-confidence in those presented in Table 4, namely.\n\nBased on the results of the two-way ANOVA test in Table 4 above, the p-value was obtained for the study was 0.028 < 0.05, that is enough evidence to reject H0 and accept H1. It means that there is the influence of the problem based learning model on students' self-confidence. In other words, the effect of the problem based learning model on self-confidence is better than conventional learning about self-confidence. In the interaction of learning models (problem based learning and conventional) with early mathematical abilities obtained p-value = 0.035 < 0.05 or in other words enough evidence to reject H0, thus significantly there is an influence of interaction between learning (problem based learning and conventional learning) with early mathematical abilities towards student self-confidence.\n\n4.2. Discussions\n\nThe research findings reveal that the average value of students 'mathematical communication skills taught with the problem based learning model is higher than the average value of students' mathematical communication skills taught by conventional learning. This proves that the problem based learning model is better than the usual learning done by the teacher in developing students' mathematical communication skills. The results of previous studies explain that mathematical communication skills in the experimental class (model problem based learning) is better than conventional learning by teachers 41, 42, 43, 44.\n\nSurya, Syahputra, and Juniati 43 during the process of PBL lead to interaction between students and teachers to learn is often the case, it will eventually make students brainstorm and to reflect the understanding that he had before. Khun-Inkeeree, Omar-Fauzee, and Othman 45 revealed confidence built by the ability to interact in the classroom. This is probably because the module or student activity sheet used in classroom interaction that creates opportunities for students to interact freely during class activities. The learning process is presented by the model problem based learning not only to transfer knowledge from teacher to student, but a process that is conditioned by the teacher so that students are active in a variety of ways to build his own knowledge. Kadir and Parman 46 gives a contextual problem students' attention and challenge students to solve it by means of a mathematical method or communicate their mathematical ideas. Kodariyati and Astuti 36 describes in more visible model of problem based learning students actively in learning. In a study using problem based learning models, communication skills can be developed in the form of questions at the beginning of learning. The use of student worksheets given to each group also influences the course of the learning process.\n\nThe study's findings prove that the use of problem-based student activity sheet structured nature of the inquiry was able to grow in the early mathematical concepts. With the activity sheet, students have no difficulty in answering math problems. students simply follow the patterns that have been provided in the sheet. It tersebur accordance with Kodariyati and Astuti 42 In a study using problem based learning models, communication skills can be developed in the form of questions at the beginning of learning. In addition, the presence of a class discussion makes the students are encouraged to bring an idea or ideas. Use of worksheets that are given to each group also influences the course of the learning process. Astriani, Surya, and Syahputra 47 explains when students are working on the student worksheet and test math skills through contextual problems would encourage students in learning activities to help each other, sharing, respect between the different learning abilities possessed by each student.\n\nThe above explanation is supported by learning theory that supports the mathematical communication skills contained in Runtukahu 48 the symbolic stage in the theory of child development from Bruner states that children manipulate symbols or symbols of certain objects. Students are able to use notation without relying on real objects. Other cognitive theories from Piaget say cognitive development as a child process actively builds systems of meaning and understanding of reality through their experiences and interactions 49. The study's findings also support previous research thatNurbaiti, Irawati and Lichteria 44 found that the model of problem based learning and expository able to jointly provide a positive influence on students' mathematical communication skills. Other results also indicate the interaction between early ability learning model mathematics to students' mathematical ability 50, 51. Problem based learning to students' attitudes that students use problem-based learning show a positive attitude in dealing with problems that are superior because of the element of interaction and constructivism were very prominent in problem-based learning 52.\n\nThe findings also showed that the average value of the self-confidence of students who are taught by a model problem based learning is higher than the average value of the self-confidence of students who are taught by conventional learning. This proved that the model of problem based learning is better than the usual lesson by the teacher in developing students' self-confidence. Results of previous studies also showed that the attitude of self-confidence of students who are taught by a model problem based learning is better than the expository 53, 54, 55. Nurqolbiah 52 the self-confidence of students who are built through problem-based learning with a scientific approach is superior in showing a positive attitude in dealing with problems.\n\nSelf-confidence can also be developed by doing rational and realistic learning in the student environment. This is in line with the problem based learning model which begins to present mathematical problems for students, so students are required to solve problems that are rich in mathematical concepts 56. Previous research by Cerezo 57 suggests that student responses are very positive by liking problem-based learning. Students are able to work in groups. Students feel problem-based learning makes them challenged to think differently, be open to new ideas and not judgmental, and to be supportive of each other. Rokhmawati, Djatmika, and Wardana 58 the application of PBL models can also improve students' positive attitudes. This shows that students have developed skills to adapt to the environment and can have self-control. This can be seen from the courage of students appearing to give opinions, the ability to think positively, and the confidence to communicate in class.\n\nFinally, this study shows that students who have the initial ability of high and moderate mathematics are more benefited than students who have low initial math skills. This means that the problem based learning model is well used for students who have high and moderate categories of early mathematical abilities in developing students' mathematical communication skills and self-confidence. Darkasyi, Johar, and Ahmad 59 explain learning models that enable students to improve students 'mathematical communication skills in terms of students' early mathematical abilities (low, medium, high). Utami and Misnasanti 60 if students can use the early knowledge well in understanding new material, it will affect the ability to solve mathematical problems. Students will be able to solve the problems faced by linking the knowledge they have with new knowledge.\n\n5. Conclusion\n\nBased on the research described in the previous section, can be summed up as follows:\n\n1. Mathematical communication ability of students taught using problem based learning models better than taught using conventional learning.\n\n2. The attitude of self-confidence of students showed better results in the classroom with a model of problem based learning than classroom with conventional learning. This study shows that there are positive effects of problem based learning model with self-confidence of students.\n\n3. There is an interaction between learning models (problem based learning and conventional learning) and early ability of mathematics to students' mathematical communication skills.\n\n4. There is an interaction between learning models (problem based learning and conventional learning) and early ability of mathematics to students' self-confidence.\n\n6. Suggestions for Future Studies\n\nFrom the findings obtained by feeding can be given to the next researcher who develops the learning model, first, the researcher can develop the learning components used in the problem-based learning model and computer-based model development. Second, before giving a learning model, the researcher should further examine students 'and teachers' perceptions of the learning model provided. So that students and teachers really understand the learning model use.\n\nAcknowledgements\n\nThe author would like to thank who have helped write this simple journal. I am really thankful to them.\n\nReferences",
null,
"This work is licensed under a Creative Commons Attribution 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/"
] | [
null,
"http://www.sciepub.com/common/showimages.aspx",
null,
"http://pubs.sciepub.com/images/icon-by.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.81056553,"math_prob":0.64032733,"size":51645,"snap":"2019-26-2019-30","text_gpt3_token_len":12972,"char_repetition_ratio":0.20479852,"word_repetition_ratio":0.51307684,"special_character_ratio":0.22819246,"punctuation_ratio":0.16604763,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.98477286,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-26T09:45:22Z\",\"WARC-Record-ID\":\"<urn:uuid:93abb510-53cc-4d0e-a840-da06a57d6ce7>\",\"Content-Length\":\"167290\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:63e22d68-5787-4236-b92e-65e064083a39>\",\"WARC-Concurrent-To\":\"<urn:uuid:6d4d573a-a6b1-499c-a1c8-bd33d7b212ce>\",\"WARC-IP-Address\":\"70.39.102.108\",\"WARC-Target-URI\":\"http://pubs.sciepub.com/education/6/11/12/index.html\",\"WARC-Payload-Digest\":\"sha1:XWYCBEGDGBBNVUQQI2JQUJKSZAXGLPCP\",\"WARC-Block-Digest\":\"sha1:IRJ3A4EQU5LLJVBA4BYS223XVESLPPBY\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560628000266.39_warc_CC-MAIN-20190626094111-20190626120111-00136.warc.gz\"}"} |
https://bob.cs.sonoma.edu/IntroCompOrg-RPi/sec-codes.html | [
"## Section4.6Other Codes.\n\nThus far in this chapter we have used the binary number system to represent numerical values. It is an efficient code in the sense that each of the $2^{n}$ bit patterns represents a value. On the other hand, there are some limitations in the code. We will explore some other codes in this section.\n\n### Subsection4.6.1BCD Code\n\nOne limitation of using the binary number system is that a decimal number must be converted to binary before storing or performing arithmetic operations on it. And binary numbers must be converted to decimal for most real-world display purposes.\n\nThe Binary Coded Decimal (BCD) code is a code for individual decimal digits. Since there are ten decimal digits, the code must use four bits for each digit. The BCD code is shown in Table 4.6.1.\n\nFor example, in a 16-bit storage location the decimal number 1234 would be stored in the BCD code as\n\n\\begin{gather*} \\binary{0001 \\; 0010 \\; 0011 \\; 0100} \\end{gather*}\n\nand in binary as\n\n\\begin{gather*} \\binary{0000 \\; 0100 \\; 1101 \\; 0010} \\end{gather*}\n\nFrom Table 4.6.1 we can see that six bit patterns are “wasted.” The effect of this inefficiency is that a 16-bit storage location has a range of $0$ – $9999$ if we use BCD, but the range is $0$ – $65535$ if we use binary.\n\nBCD is important in specialized systems that deal primarily with numerical data. There are I/O devices that deal directly with numbers in BCD without converting to/from a character code {for example, ASCII). The COBOL programming language supports a packed BCD format where two digits (in BCD code) are stored in each 8-bit byte. The last (4-bit) digit is used to store the sign of the number as shown in Table 4.6.2. The specific codes used depend upon the particular implementation.\n\nFor example, $\\binary{0001 \\; 0010 \\; 0011 \\; 1010}$ would represent $+123\\text{,}$ $\\binary{0001 \\; 0010 \\; 0011 \\; 1011}$ would represent $-123\\text{,}$ and $\\binary{0001 \\; 0010 \\; 0011 \\; 1111}$ would represent $123\\text{.}$\n\n### Subsection4.6.2Gray Code\n\nOne of the problems with both the binary and BCD codes is that the difference between two adjacent values often requires that more than one bit be changed. For example, three bits must be changed when incrementing from $3$ to $4$ ($\\binary{0011}$ to $\\binary{0100}$). If the value is read during the time when the bits are being switched there may be an error. This is more apt to occur if the bits are implemented with, say, mechanical switches instead of electronic.\n\nThe Gray code is one where there is only one bit that differs between any two adjacent values. As you will see in Section 5.5, this property also allows for a very useful visual tool for simplifying Boolean algebra expressions.\n\n decimal Gray code $0$ $\\binary{0}$ $1$ $\\binary{1}$\n\nTo add a bit, first duplicate the existing pattern, but reflected:\n\n Gray code $\\binary{0}$ $\\binary{1}$ $\\binary{1}$ $\\binary{0}$\n\nThen add a zero to the beginning of each of the original bit patterns and a one to the beginning of each of the reflected set:\n\n decimal Gray code 0 $\\binary{00}$ 1 $\\binary{01}$ 2 $\\binary{11}$ 3 $\\binary{10}$\n\nThe Gray code for four bits is shown in Table 4.6.3. Notice that the pattern of only changing one bit between adjacent values also holds when the bit pattern “wraps around.” That is, only one bit is changed when going from the highest value ($15$ for four bits) to the lowest ($0$)."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.84707373,"math_prob":0.9970338,"size":4182,"snap":"2020-34-2020-40","text_gpt3_token_len":1279,"char_repetition_ratio":0.21182384,"word_repetition_ratio":0.01904762,"special_character_ratio":0.37996173,"punctuation_ratio":0.088507265,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99880457,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-15T04:41:35Z\",\"WARC-Record-ID\":\"<urn:uuid:5fcd9691-a869-436c-992d-b5ab0a717eb9>\",\"Content-Length\":\"35171\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:43d4618a-8099-4cd8-b66d-934df829a59e>\",\"WARC-Concurrent-To\":\"<urn:uuid:38b8d671-9743-4d2d-9437-c18296075463>\",\"WARC-IP-Address\":\"130.157.166.29\",\"WARC-Target-URI\":\"https://bob.cs.sonoma.edu/IntroCompOrg-RPi/sec-codes.html\",\"WARC-Payload-Digest\":\"sha1:6CFX2MV7Z7TYNH4VS7A7ZLZDFAPI7YWM\",\"WARC-Block-Digest\":\"sha1:EZZYQGKEZ3G36V3QV6M3ADO3T3OTOM65\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439740679.96_warc_CC-MAIN-20200815035250-20200815065250-00158.warc.gz\"}"} |
https://mathematica.stackexchange.com/questions/tagged/education?tab=newest&page=2 | [
"# Questions tagged [education]\n\nQuestions about the use of Mathematica/WL in education and/or about teaching Mathematica, or learning it.\n\n127 questions\nFilter by\nSorted by\nTagged with\n58 views\n\n### Looking for introduction to equation solving [closed]\n\nI am a beginner with Mathematica. I wanna learn how to make a program to solve a system of algebraic equations related to my master [?], so I am looking for a way to learn this in as little time as ...\n78 views\n\n### Strain-stress relationship in spherical coordinates with Poisson's ratio [closed]\n\nI am wondering if there is a way to get the relationship between strain and stress in spherical coordinates with Poisson's ratio. I read the documentation here: https://reference.wolfram.com/...\n55 views\n\n### Two variable derivative function mathematica [closed]\n\nI am trying to find the point where the partial derivatives are equal to zero. The function I am analyzing: f[x, y] := 3 xy - x^2 y^2; The area it is ...\n106 views\n\n### Will my procedure be correct?\n\nI am facing a problem when trying to solve the exercise of page 210 of the book GRAPHICS with MATHEMATICA, FRACTALS, JULIA SETS, PATTERNS and NATURAL FORMS of the authors CHONAT GETZ and JANET ...\n137 views\n\n### How can I compute $\\sqrt 5$ to 200 places in base 40 arithmetic\n\nWhat code should I write to compute $\\sqrt{5_{40}}$ to 200 places in base 40 digits.\n71 views\n\n### Detection the coordinates of all points of a curve on an image file [duplicate]\n\nI have a plot of a curve (the black curve without dashes) on an image file.jpg: http://s000.tinyupload.com/index.php?file_id=48857135704766357735 I'd like to detect the coordinates of all point of ...\n473 views\n\n### How can I define a variable so it will be treated as a real number?\n\nI want to define a variable like d as a Real variable and then using that in the other equation like that: $\\qquad d$ is Real $\\qquad f = 5 + (1 + i) d$ But ...\n246 views\n\n### Mathematica problem- consecutive integers\n\nQuestion: The sequence of consecutive integers 140, 141, 142, 143, 144, 145, 146, 147, 148 consists only of composite (non-prime) numbers, and has length 9. Find the longest sequence of consecutive ...\n89 views\n\n### Understanding the question\n\nLet n be the integer shown below: ...\n46 views\n\n### Trying to find how to find how many prime factors have an odd exponent?\n\nHi I am doing a question for Mathematica and I'm having difficulties.The question is in the prime factorization of n, how many prime factors have an odd exponent? The given n is shown below (very ...\n938 views\n\n### Finding slope from a 2D Plot\n\nImagine I have a set of data, the plot is as following(just as an example consider a Gaussian curve): Is there any way to obtain the slop of this curve slop and plotted just using the initial data. ...\n166 views\n\n137 views\n\n### NonlinearModelFit understanding [closed]\n\nI have a trouble with NonlinearModelFit questions. Question: V is a function of T and ...\n114 views\n\n### Limit of an infinite nested radical?\n\nIs it possible to compute the limit of this infinite nested radical in Mathematica: Limit[Sqrt[n+Sqrt[n+Sqrt[n+Sqrt[n+...]]]], n->0]\n139 views\n\n### Mathematica Progression Path from Apprentice to Guru [closed]\n\nThere's an excellent set of responses to a question about learning Python on stack overflow. As there seems to be a large number of Mathematica experts here, it seems of value (to me at least) to ask ...\n43 views\n\n### Speak stops on first Hyphen with IntegerName\n\nIntegerName returns the text of an integer. However, when Speak is applied to the result it stops at the fist hyphen. For ...\n81 views\n\n### How do I use Mathematica to extract data points from DICOM images? [closed]\n\nI have few DICOM images whose data points I would like to extract and probably export (which would really be excellent) to an excel file.\n271 views\n\n### need help to enter the orthogonal sign, the upside down T.\n\nI have search on Google and never came across any idea how to enter this damn sign. would anyone give me some hint?\n332 views\n\n### Solving an ODE using the Taylor Method Help?\n\nNot sure how to proceed after this... ...\n506 views\n\n### How do I recursively calculate this equation and generate a list of iteration?\n\nHow do I write a recursive equation to compute a list of answers? I tried NestList, but it didn't work. ...\n269 views\n\n### Resources to learn about Neural Nets using Mathematica?\n\nI'm interested in learning about Neural Nets and I'd really like my instruction to be in Mathematica if possible. For instance, a tutorial like this one: http://neuralnetworksanddeeplearning.com/...\n345 views\n\n### Timeline Plot with Date Intervals and Caption Array\n\nI'm trying to achieve something similar to this example mentioned in the mathematica website below (https://reference.wolfram.com/language/ref/TimelinePlot.html), except I'm trying to get the event ...\n139 views\n\n### Why does this whole number have a recurring value? [duplicate]\n\nI have a math question which that may seem simple to some but not to me if you where to do the calculation 12.30 * 12 it would equal 147.60 but why is that when I show the answer to more then two ...\n234 views\n\n### Drawing a figure showing an $n$-gon and its dissection into triangles\n\nI don't have so much experience with Mathematica. Could anyone help me to reproduce this picture with Mathematica?\n103 views\n\n### Graph not plotting any points [closed]\n\nf [x_] := (x + ln[x] - 3) Plot[f[x], {x, 0, 5}] How come this comes out with an empty graph? I tried this and same results: ...\n2k views\n\n### List of amazing things you can do with Mathematica [closed]\n\nWhat is the coolest (amazing) thing you can do with Mathematica? for instance there are my favorite things Automating xkcd Diagrams Extending Van Gogh’s Starry Night with Inpainting Making ...\n92 views\n\n### What syntax refers to “this notebook” object?\n\nI want a notebook to automatically save after any cell is evaluated. I found that I should use the command SetOptions[XXX, NotebookAutoSave -> True] to do ...\n933 views\n\n### Do[ ]s and Don't[ ]s when teaching Mathematica in undergraduate courses\n\nQuestion motivated by some horrific homework-related posts that I don't dare to link: What are the top Do's and Dont's when teaching Mathematica to undergrads? I don't ask for the design of a ...\n121 views"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8941707,"math_prob":0.63428336,"size":12297,"snap":"2020-10-2020-16","text_gpt3_token_len":3203,"char_repetition_ratio":0.14430977,"word_repetition_ratio":0.010466223,"special_character_ratio":0.26819548,"punctuation_ratio":0.1324,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99242395,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-04-09T04:48:59Z\",\"WARC-Record-ID\":\"<urn:uuid:33b1cffa-af72-44ce-9cf0-87f1542073a9>\",\"Content-Length\":\"248384\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3fe5102d-c264-4d41-b175-dabcd0a05670>\",\"WARC-Concurrent-To\":\"<urn:uuid:d5b94551-78d2-47f9-bc3e-563fb48fcac6>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://mathematica.stackexchange.com/questions/tagged/education?tab=newest&page=2\",\"WARC-Payload-Digest\":\"sha1:LVL53F7N2JRPRF7GAOKHBFOBJDODUW2G\",\"WARC-Block-Digest\":\"sha1:MY6RKKPHOKCRFEYDWOUAJWSCFYU7H4PM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585371829677.89_warc_CC-MAIN-20200409024535-20200409055035-00127.warc.gz\"}"} |
https://cs50.stackexchange.com/questions/37163/ps2-substitution/37168 | [
"# PS2 substitution\n\nI am trying to check that the substitution key contains all the alphabets but am getting error: expression result unused for this line of code: s, l ++; Can anyone help, this is my code as of now:\n\n`````` #include <cs50.h>\n#include <stdio.h>\n#include <ctype.h>\n#include <string.h>\n\nint main(int argc, string argv[])\n{\nint n = strlen(argv);\nint s = 97, l = 65, count = 0;\nif (argc != 2) //check 1 argument given only\n{\nprintf(\"Usage: ./substitution key\\n\");\nreturn 1;\n}\nelse if (n != 26) //check if key contains exactly 26 characters\n{\nprintf(\"Usage: ./substitution key\\n\");\nreturn 1;\n}\nelse //check key contains each alphabet exactly once\n{\nfor (int i = 0; i < n; i++)\n{\nfor (int j = 0; j < n; j++)\n{\nif (argv[i] == s || argv[i] == l)\n{\ncount ++;\n}\n}\ns, l ++;\n}\n}\nif (count == 26)\n{\nprintf(\"OKAY!\");\n}\n}\n``````\n• If your intention is to increment `s` and `l` then use`s++, l++` – DinoCoderSaurus Apr 29 '20 at 21:14\n\nI'm going to suggest you an easier method. You can simply use the ctype.h library. The library provides you with a function called isalpha(). This function literally checks whether a string contains only alphabetical characters. And I do believe instead of s,l++ you better write them separately.\n\nAnother thing that I think I found is you could edit your code this way\n\n``````for (int i = 0; i < n; i++)\n{\nfor (int j = 0; j < n; j++)\n{\nif (argv[i] == s || argv[i] == l)\n{\ncount ++;\n}\ns++;\nl++;\n}\n}\n``````\n• Thanks for your suggestion but in this case, i am not just checking if all the characters are alphabets but rather whether all 26 alphabets.- a,b,c,d etc are present. – Olivia Apr 30 '20 at 10:35\n\nIf you want to check whether all 26 alphabetical characters are present, you can check whether the characters are not repeated. It works the same way. This was my code. Take a look. Here I checked whether a character is repeated again. :)\n\n``````string repeated = key; //duplicating another equal string to check if same character repeats itself\nfor (int i = 0; i < len; i++) // loop which increases value of i after checking 26 times of k\n{\nfor (int k = 0; k < len; k++)\n{\nif (i != k) //when i is equal to k it does not check the conditition since that step must be skipped\n{\nif (key[i] == repeated[k]) //checking if same character is repeated\n{\nprintf(\"Key must not contain repeated characters\\n\");\nreturn 1;\n}\n}\n}\n}\n``````"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5922864,"math_prob":0.9573935,"size":865,"snap":"2021-04-2021-17","text_gpt3_token_len":278,"char_repetition_ratio":0.09872241,"word_repetition_ratio":0.048780486,"special_character_ratio":0.39884394,"punctuation_ratio":0.19251336,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97926366,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-17T22:17:57Z\",\"WARC-Record-ID\":\"<urn:uuid:378d5a70-6056-49a1-9ce1-2f0f3e9b85cf>\",\"Content-Length\":\"127486\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7dc2d86d-baf7-4e78-8c62-0df238eed691>\",\"WARC-Concurrent-To\":\"<urn:uuid:ae2143ce-b84d-40b8-8a18-0a36a99e8165>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://cs50.stackexchange.com/questions/37163/ps2-substitution/37168\",\"WARC-Payload-Digest\":\"sha1:SWDAPNZOSXALK3DYLV3BFUE3RMYV7BPC\",\"WARC-Block-Digest\":\"sha1:NXV4Z7CXUWKVGT7KKJO2YL3YJZ6XEMVD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703513194.17_warc_CC-MAIN-20210117205246-20210117235246-00102.warc.gz\"}"} |
https://mne.tools/dev/generated/mne.decoding.SSD.html | [
"# mne.decoding.SSD¶\n\nclass mne.decoding.SSD(info, filt_params_signal, filt_params_noise, reg=None, n_components=None, picks=None, sort_by_spectral_ratio=True, return_filtered=False, n_fft=None, cov_method_params=None, rank=None)[source]\n\nM/EEG signal decomposition using the Spatio-Spectral Decomposition (SSD).\n\nSSD seeks to maximize the power at a frequency band of interest while simultaneously minimizing it at the flanking (surrounding) frequency bins (considered noise). It extremizes the covariance matrices associated with signal and noise 1.\n\nSSD can either be used as a dimensionality reduction method or a ‘denoised’ low rank factorization method 2.\n\nParameters\ninfo`mne.Info`\n\nThe `mne.Info` object with information about the sensors and methods of measurement. Must match the input data.\n\nfilt_params_signal`dict`\n\nFiltering for the frequencies of interest.\n\nfilt_params_noise`dict`\n\nFiltering for the frequencies of non-interest.\n\nreg`float` | `str` | `None` (default)\n\nWhich covariance estimator to use. If not None (same as ‘empirical’), allow regularization for covariance estimation. If float, shrinkage is used (0 <= shrinkage <= 1). For str options, reg will be passed to method to `mne.compute_covariance()`.\n\nn_components\n\nThe number of components to extract from the signal. If n_components is None, no dimensionality reduction is applied.\n\npicks\n\nThe indices of good channels.\n\nsort_by_spectral_ratiobool (default `False`)\n\nIf set to True, the components are sorted accordingly to the spectral ratio. See Eq. (24) in 1.\n\nreturn_filteredbool (default `True`)\n\nIf return_filtered is True, data is bandpassed and projected onto the SSD components.\n\nn_fft`int` (default `None`)\n\nIf sort_by_spectral_ratio is set to True, then the SSD sources will be sorted accordingly to their spectral ratio which is calculated based on `mne.time_frequency.psd_array_welch()` function. The n_fft parameter set the length of FFT used. See `mne.time_frequency.psd_array_welch()` for more information.\n\ncov_method_params\n\nAs in `mne.decoding.SPoC` The default is None.\n\nrank`None` | `dict` | ‘info’ | ‘full’\n\nAs in `mne.decoding.SPoC` This controls the rank computation that can be read from the measurement info or estimated from the data. See Notes of `mne.compute_rank()` for details. We recommend to use ‘full’ when working with epoched data.\n\nReferences\n\n1(1,2,3)\n\nVadim V Nikulin, Guido Nolte, and Gabriel Curio. A novel method for reliable and fast extraction of neuronal EEG/MEG oscillations on the basis of spatio-spectral decomposition. NeuroImage, 55(4):1528–1535, 2011. doi:10.1016/j.neuroimage.2011.01.057.\n\n2(1,2)\n\nStefan Haufe, Sven Dähne, and Vadim V Nikulin. Dimensionality reduction for the analysis of brain oscillations. NeuroImage, 101:583–597, 2014. doi:https://doi.org/10.1016/j.neuroimage.2014.06.073.\n\nAttributes\nfilters_`array`, shape (n_channels, n_components)\n\nThe spatial filters to be multiplied with the signal.\n\npatterns_`array`, shape (n_components, n_channels)\n\nThe patterns for reconstructing the signal from the filtered data.\n\nMethods\n\n `__hash__`(/) Return hash(self). Remove selected components from the signal. `fit`(X[, y]) Estimate the SSD decomposition on raw or epoched data. `fit_transform`(X[, y]) Fit to data, then transform it. `get_params`([deep]) Get parameters for this estimator. `get_spectral_ratio`(ssd_sources) Get the spectal signal-to-noise ratio for each spatial filter. Not implemented yet. `set_params`(**params) Set the parameters of this estimator. Estimate epochs sources given the SSD filters.\napply(X)[source]\n\nRemove selected components from the signal.\n\nThis procedure will reconstruct M/EEG signals from which the dynamics described by the excluded components is subtracted (denoised by low-rank factorization). See 2 for more information.\n\nNote\n\nUnlike in other classes with an apply method, only NumPy arrays are supported (not instances of MNE objects).\n\nParameters\nXarray, shape ([n_epochs, ]n_channels, n_times)\n\nThe input data from which to estimate the SSD. Either 2D array obtained from continuous data or 3D array obtained from epoched data.\n\nReturns\nXarray, shape ([n_epochs, ]n_channels, n_times)\n\nThe processed data.\n\nfit(X, y=None)[source]\n\nEstimate the SSD decomposition on raw or epoched data.\n\nParameters\nXarray, shape ([n_epochs, ]n_channels, n_times)\n\nThe input data from which to estimate the SSD. Either 2D array obtained from continuous data or 3D array obtained from epoched data.\n\ny`None` | `array`, shape (n_samples,)\n\nUsed for scikit-learn compatibility.\n\nReturns\nselfinstance of `SSD`\n\nReturns the modified instance.\n\nExamples using `fit`:\n\nfit_transform(X, y=None, **fit_params)[source]\n\nFit to data, then transform it.\n\nFits transformer to X and y with optional parameters fit_params and returns a transformed version of X.\n\nParameters\nX`array`, shape (n_samples, n_features)\n\nTraining set.\n\ny`array`, shape (n_samples,)\n\nTarget values.\n\n**fit_params`dict`\n\nAdditional fitting parameters passed to `self.fit`.\n\nReturns\nX_new`array`, shape (n_samples, n_features_new)\n\nTransformed array.\n\nget_params(deep=True)[source]\n\nGet parameters for this estimator.\n\nParameters\ndeepbool, optional\n\nIf True, will return the parameters for this estimator and contained subobjects that are estimators.\n\nReturns\nparams`dict`\n\nParameter names mapped to their values.\n\nget_spectral_ratio(ssd_sources)[source]\n\nGet the spectal signal-to-noise ratio for each spatial filter.\n\nSpectral ratio measure for best n_components selection See 1, Eq. (24).\n\nParameters\nssd_sources`array`\n\nData projected to SSD space.\n\nReturns\nspec_ratio`array`, shape (n_channels)\n\nArray with the sprectal ratio value for each component.\n\nsorter_spec`array`, shape (n_channels)\n\nArray of indices for sorting spec_ratio.\n\nReferences\n\nExamples using `get_spectral_ratio`:\n\ninverse_transform()[source]\n\nNot implemented yet.\n\nset_params(**params)[source]\n\nSet the parameters of this estimator.\n\nThe method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.\n\nParameters\n**params`dict`\n\nParameters.\n\nReturns\ninstinstance\n\nThe object.\n\ntransform(X)[source]\n\nEstimate epochs sources given the SSD filters.\n\nParameters\nXarray, shape ([n_epochs, ]n_channels, n_times)\n\nThe input data from which to estimate the SSD. Either 2D array obtained from continuous data or 3D array obtained from epoched data.\n\nReturns\nX_ssdarray, shape ([n_epochs, ]n_components, n_times)\n\nThe processed data.\n\nExamples using `transform`:"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.60033226,"math_prob":0.82162255,"size":5539,"snap":"2021-43-2021-49","text_gpt3_token_len":1405,"char_repetition_ratio":0.11996387,"word_repetition_ratio":0.1281709,"special_character_ratio":0.2332551,"punctuation_ratio":0.16067654,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9700053,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-20T19:36:04Z\",\"WARC-Record-ID\":\"<urn:uuid:6e9cd5d2-c137-4c67-b535-7871bb67aff5>\",\"Content-Length\":\"139168\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:57111d26-c269-4a0a-93ed-095815e21512>\",\"WARC-Concurrent-To\":\"<urn:uuid:1757063c-46ed-4169-baa1-ca3e96c84456>\",\"WARC-IP-Address\":\"185.199.108.153\",\"WARC-Target-URI\":\"https://mne.tools/dev/generated/mne.decoding.SSD.html\",\"WARC-Payload-Digest\":\"sha1:PK3UGRWVEKQ6KH6BKOGYAPEOEL46GBZ3\",\"WARC-Block-Digest\":\"sha1:NCUFEQASTJN7TSXDTG2IHOPLJNTWMNEE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585348.66_warc_CC-MAIN-20211020183354-20211020213354-00166.warc.gz\"}"} |
https://www.mathematicshomeworkhelp.com/how-to-solve-ordinary-differential-equation-homework/ | [
"# Strategies for Solving Ordinary Differential Equation Homework Problems with Ease\n\nMay 16, 2023",
null,
"United Kingdom\nMathematics\nEmily Adams is a PhD candidate in Mathematics Education at the University of Cambridge. With years of experience teaching and tutoring students of all levels.\n\nSolving Ordinary Differential Equations (ODEs) can be difficult, especially for novices. Solving ODE homework problems, however, may be both a pleasurable and fruitful endeavor if you properly approach them. In this article, we'll go through nine tried-and-true methods for quickly and easily resolving ODE difficulties. These methods will aid you in comprehending the issue at hand, categorizing the ODE, selecting a suitable approach, checking for errors, gaining experience, making use of software, asking for assistance, pausing for reflection, and showing perseverance. These methods will help you master tackling ODE homework problems and increase your grasp of the material.\n\nIntroduction\n\nStudents fresh to the field may find it difficult to solve ordinary differential equations (ODEs). Solving ODE homework problems, however, may be both a pleasurable and fruitful endeavor if you properly approach them. This blog post will go over some of the most effective methods for quickly and easily resolving ODE difficulties at home.\n\n## Understand the Problem\n\nTo solve ODE homework problems, one must first and foremost comprehend the nature of the problem at hand. Reading the problem statement intently and picking out the crucial details is an important first step in solving the issue. The issue statement's wording, assumptions, and constraints must all make sense to you. The starting and boundary conditions are particularly important to consider because of the effect they can have on the final answer.\n\nThe ability to convert a verbal explanation of an ODE problem into a mathematical form is a crucial skill for solving such problems. To solve many problems, you must convert physical phenomena, such as motion or chemical reactions, into mathematical equations. To accomplish this, you must have a firm grasp of the mathematical principles at play.\n\nRecognizing the problem's classification is also crucial to getting a handle on it. Order, linearity, homogeneity, and boundary/initial conditions are some of the criteria that can be used to categorize ODEs. Knowing the type of ODE you're dealing with can aid in selecting the most effective strategy for solving it.\n\nFinally, make sure you fully understand the question posed by the difficulty. It is crucial to know what is expected of you in each step of an ODE problem. Keep in mind whether a precise solution, an approximate solution, or a solution presented in a series is required.\n\n## Classify the ODE\n\nThe first step in doing homework involving Ordinary Differential Equations (ODEs) is classifying the ODE. Order, linearity, homogeneity, and boundary/initial conditions are some of the criteria used to categorize ODEs. Identifying the correct strategies for tackling the problem is made easier by classifying the ODE first.\n\nTo determine the order of an ODE, we look at the largest derivative of the dependent variable. When y is the dependent variable, the form of an ODE is dy/dx = f(x,y). For example, d2y/dx2 = f(x,y,dy/dx) is an ODE of the second order. If you know the order of the ODE, you can figure out what method will work best to solve it.\n\nIf the dependent variable and its derivatives occur linearly in an ODE, we say that the ODE is linear. When both a(x) and b(x) are functions of x, then a linear ODE takes the form a(x)dy/dx + b(x)y = f(x). Where f(x,y) is a nonlinear function of x and y, we have a nonlinear ordinary differential equation of the type dy/dx = f(x,y). While linear ODEs can be solved with techniques like variable separation, integrating factors, and Laplace transforms, nonlinear ODEs cannot.\n\nWhether or not all terms in an ODE are of the same degree is what is meant by the term's homogeneity. When both f(x,y) and g(x,y) are functions of x and y of the same degree, then the resulting differential equation is said to be homogeneous. When x and y are functions of different degrees, the nonhomogeneous ODE takes the form dy/dx = f(x,y)/g(x,y) + h(x,y)/g(x,y), where h(x,y) is a function of x and y. Substitution methods and variable separation can be used to solve homogeneous ODEs, while more advanced approaches like the variation of parameters and the method of indeterminate coefficients are required to solve nonhomogeneous ODEs.\n\nThe final step in solving an ODE issue is to employ boundary and beginning conditions to identify a single solution. The dependent variable's values at discrete locations are defined by the boundary conditions, whereas the dependent variable's value and its derivative at a single point are defined by the initial conditions. The techniques utilized to solve the ODE may change depending on the boundary/initial circumstances.\n\n## Pick An approach\n\nTo solve problems involving Ordinary Differential Equations (ODEs), you must first understand the problem and categorize the ODE before moving on to selecting an acceptable strategy for solving the ODE. Using the appropriate strategy can streamline the process and guarantee a successful outcome.\n\nSolving ODEs can be done in a few different ways, each with its own set of benefits and drawbacks. Separation of variables, integrating factors, Laplace transforms, the power series approach and numerical methods are all frequently employed in the solution of ODEs.\n\nIf the dependent variable can be expressed as a product of functions of x and y, then the first-order ODE can be solved using the separation of variables approach. Integrating both sides of the equation concerning their variables is what this method is all about.\n\nFor first-order linear ODEs, the integrating factor technique is employed. The left-hand side of the equation becomes a product rule derivative by multiplying it by an appropriate integrating factor, which is incorporated into the multiplication of both sides of the equation. Integration can then be used to resolve the problem.\n\nFor constant-coefficient linear ODEs, the Laplace transform approach is utilized. The Laplace transform is used to convert the ODE into an algebraic equation, which can then be solved for the unknown function.\n\nWhen previous methods fail, the power series method is employed to resolve the underlying ODEs. By inserting the expanded power series into the ODE, we may determine the series' coefficients and so solve the unknown function.\n\nWhen analytic solutions to ODEs cannot be found, they are solved numerically. When solving an ODE numerically, we use numerical algorithms to approximate the solution as a series of discrete points.\n\n## Check for Errors\n\nAfter solving an Ordinary Differential Equation (ODE) problem, it is crucial to double-check it for accuracy. Error checking is a useful tool for catching flaws before submitting a solution.\n\nSeveral techniques exist for verifying the accuracy of an ODE solution. Substituting the answer back into the ODE and checking for satisfaction is one of the simplest approaches. Verification refers to this checking process.\n\nSolution verification requires both the solution and its derivative to be plugged back into the original ODE. The solution is valid if and only if it completely solves the equation. If the equation is not satisfied by the solution, then the solution is incorrect.\n\nThe solution can also be validated by contrasting it with the specified starting and ending points. If the answer is right, it will meet either the initial or boundary requirements. There is a problem with the solution if it fails to meet the starting or boundary requirements.\n\nMathematical flaws, such as algebraic blunders or improper differentiation or integration, should also be double-checked. Mathematical proofreading entails checking each calculation and solution step for accuracy.\n\n## Repeated Practice is Essential\n\nSolving Ordinary Differential Equation (ODE) homework problems is a skill that can only be developed by repeated practice. Solving ODE problems is like learning any other ability; the more you do it, the better you get.\n\nSolving ODE issues is a great way to hone your problem-solving abilities and expand your knowledge of the various approaches available for doing so. ODEs can be solved more quickly and easily if you can spot patterns and similarities between issues.\n\nWorking through textbook exercises and examples is a great method to gain experience in ODE problem-solving. Most textbooks include a variety of challenges, ranging in difficulty, which can be used to progressively hone your abilities.\n\nYou can also use online resources like math discussion boards and educational websites to practice answering ODE issues. You can hone your problem-solving abilities by working through the various scenarios presented in these materials.\n\nIt's also helpful to practice solving ODE problems with a study group or instructor. Working with others is a great way to expand your horizons and learn new ways to tackle difficult situations.\n\nKeeping tabs on your development is crucial for making the most of your practice time. You can gauge your progress and pinpoint trouble areas using practice puzzles of varied difficulties.\n\n## Use Software\n\nOrdinary Differential Equation (ODE) homework can also be tackled by turning to computer applications. Using software to solve problems is an easy way to save time and get better results.\n\nMATLAB, Mathematica, Maple, and Python are just some of the software packages that can be used to solve ODEs. These programs use numerical methods to solve ODEs, allowing for solutions to be obtained even when analytic ones are unavailable.\n\nThe behavior of the solution to ODEs over time can be visualized and explored with the use of the software. This can be very helpful when attempting to solve higher-order or nonlinear ODEs analytically.\n\nUsing software has the potential to lessen the likelihood of arithmetic mistakes. There is less room for error in software calculations because the algorithms used have been refined and tested repeatedly.\n\nThe software has the added benefit of being able to solve massive, complicated issues that would be impossible to tackle by hand. Since ODEs can simulate intricate systems and processes, they can prove extremely helpful in scientific and engineering contexts.\n\nWhile software can be helpful, it should never be utilized in place of a solid grasp of the topic at hand and the mathematics behind it. Before turning to software, it is still crucial to grasp the nature of the issue at hand, categorize the ODE, and settle on an approach.\n\n## Get Some Aid\n\nIf you're struggling with your Ordinary Differential Equation (ODE) homework, don't hesitate to ask for assistance. Seeking assistance when stuck with a problem might lead to fresh perspectives and ideas that can lead to a better resolution.\n\nHelp with ODE homework can be found in a variety of sources. One typical method is to seek help from a teacher or teaching assistant. They can offer advice on how to approach a subject and shed light on obscure ideas.\n\nJoining a study club or getting a tutor are two other options for those who need assistance. Working with others can foster an encouraging learning atmosphere and introduce you to different ways of thinking about ODE challenges.\n\nOnline resources, such as math discussion boards and educational websites, can also be useful. These materials help shed light on typical approaches to problems and answer frequently asked issues.\n\nWhen confronted with a challenging condition, it's also crucial to seek assistance as soon as possible. If you put off getting aid until the last minute, you can end up stressed out and with less-than-ideal results.\n\nSeeking assistance is not a show of weakness, which is something to keep in mind. Getting some outside assistance is a normal and healthy component of learning, especially while working on ODE homework issues. You can learn more about the issue and build better strategies for handling it by getting some outside input.\n\n## Take Breaks\n\nTaking a break is a crucial tactic for completing Ordinary Differential Equation (ODE) homework. Concentration and mental stamina tend to suffer when working on difficult problems for long periods. Taking a short break can assist restore mental energy and sharpen concentration, allowing for more efficient problem-solving once you return to work.\n\nTaking a break might help restore mental energy and boost performance. Taking short rests between problem-solving sessions has been demonstrated to boost performance, accuracy, motivation, and originality.\n\nWhen working on difficult problems that require sustained mental effort, frequent breaks are extremely vital. Taking a short break, even if only for ten or fifteen minutes, every hour may do wonders for the mind and make it easier to focus once you get back on the subject.\n\nRelaxing and enjoyable activities, such as going for a stroll, listening to music, or practicing mindfulness, can be quite beneficial during breaks. Resuming work on the issue after engaging in these pursuits may be approached with greater clarity and vigour.\n\nHaving a balanced schedule of work and play is also essential. Overwhelming and lack of interest might result from spending too much time on ODE homework. Maintaining a sense of equilibrium and developing better problem-solving skills over time requires setting reasonable goals and making self-care a top priority.\n\n## Be Patient\n\nThe solution to Ordinary Differential Equation (ODE) homework problems requires patience. ODEs are difficult mathematical problems to solve, especially for those who are just starting in the field. Therefore, working on ODE problems requires patience and perseverance.\n\nThe iterative procedure of solving ODE issues can be time-consuming and error-prone. Each challenge requires a curious mind and a thirst for knowledge. This entails having patience, both with oneself and the process of finding a solution.\n\nThe time required to resolve ODE difficulties should also be taken into account. When dealing with more complex issues, the time required to find a solution can easily exceed a day. This is frustrating, but keep in mind that even delayed growth is progress.\n\nBreaking down ODE problems into smaller, more manageable pieces can help you keep your patience levels up as you work on them. This may help the problem seem less daunting and more manageable. Setting attainable goals and rewarding progress along the way may keep the motivation and momentum going strong.\n\nLast but not least, keep in mind that patience is essential during the learning process. ODEs are a difficult topic that calls for dedicated study. A thorough comprehension of ODEs and enhanced problem-solving abilities can be attained with time, effort, and dedication.\n\n## Concluding Remarks\n\nFinally, ODE homework problems can be difficult to solve, but they are not insurmountable. You can improve your skill at solving ODE issues and your grasp of the subject by following the advice given in this blog. It's important to think through the problem, categorize the ODE, select the right approach, double-check your work, drill for errors, use software, get feedback, rest, and persevere. These methods will make solving ODE issues a breeze"
] | [
null,
"https://www.mathematicshomeworkhelp.com/uploads/topic_files/Ordinary-Differential-Equation-Homework-Expert.webp",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9368012,"math_prob":0.91443497,"size":15090,"snap":"2023-40-2023-50","text_gpt3_token_len":2963,"char_repetition_ratio":0.13244067,"word_repetition_ratio":0.025427261,"special_character_ratio":0.18422796,"punctuation_ratio":0.09660767,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9578353,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-09T01:18:07Z\",\"WARC-Record-ID\":\"<urn:uuid:b6ff43cc-7c5a-4898-8ca3-00affefdc04b>\",\"Content-Length\":\"49903\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f26787a3-e7fe-487c-9772-d46f4194f763>\",\"WARC-Concurrent-To\":\"<urn:uuid:2c419098-c456-498b-abcd-12b1c0b00d11>\",\"WARC-IP-Address\":\"104.21.38.36\",\"WARC-Target-URI\":\"https://www.mathematicshomeworkhelp.com/how-to-solve-ordinary-differential-equation-homework/\",\"WARC-Payload-Digest\":\"sha1:BZUZ7O3CIRZY3DPSFSR65GM4RUUG5EFN\",\"WARC-Block-Digest\":\"sha1:2ZC7DPOJMDBDXRW2YEQDQMKCRO6FHDSQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100781.60_warc_CC-MAIN-20231209004202-20231209034202-00876.warc.gz\"}"} |
https://labs.tib.eu/arxiv/?author=Sandro%20Wenzel | [
"• Particle physics has an ambitious and broad experimental programme for the coming decades. This programme requires large investments in detector hardware, either to build new facilities and experiments, or to upgrade existing ones. Similarly, it requires commensurate investment in the R&D of software to acquire, manage, process, and analyse the shear amounts of data to be recorded. In planning for the HL-LHC in particular, it is critical that all of the collaborating stakeholders agree on the software goals and priorities, and that the efforts complement each other. In this spirit, this white paper describes the R&D activities required to prepare for this software upgrade.\n• ### Vectorising the detector geometry to optimize particle transport(1312.0816)\n\nDec. 3, 2013 hep-ex, physics.comp-ph\nAmong the components contributing to particle transport, geometry navigation is an important consumer of CPU cycles. The tasks performed to get answers to \"basic\" queries such as locating a point within a geometry hierarchy or computing accurately the distance to the next boundary can become very computing intensive for complex detector setups. So far, the existing geometry algorithms employ mainly scalar optimisation strategies (voxelization, caching) to reduce their CPU consumption. In this paper, we would like to take a different approach and investigate how geometry navigation can benefit from the vector instruction set extensions that are one of the primary source of performance enhancements on current and future hardware. While on paper, this form of microparallelism promises increasing performance opportunities, applying this technology to the highly hierarchical and multiply branched geometry code is a difficult challenge. We refer to the current work done to vectorise an important part of the critical navigation algorithms in the ROOT geometry library. Starting from a short critical discussion about the programming model, we present the current status and first benchmark results of the vectorisation of some elementary geometry shape algorithms. On the path towards a full vector-based geometry navigator, we also investigate the performance benefits in connecting these elementary functions together to develop algorithms which are entirely based on the flow of vector-data. To this end, we discuss core components of a simple vector navigator that is tested and evaluated on a toy detector setup.\n• ### Zero-temperature Monte Carlo study of the non-coplanar phase of the classical bilinear-biquadratic Heisenberg model on the triangular lattice(1305.6418)\n\nSept. 6, 2013 cond-mat.str-el\nWe investigate the ground-state properties of the highly degenerate non-coplanar phase of the classical bilinear-biquadratic Heisenberg model on the triangular lattice with Monte Carlo simulations. For that purpose, we introduce an Ising pseudospin representation of the ground states, and we use a simple Metropolis algorithm with local updates, as well as a powerful cluster algorithm. At sizes that can be sampled with local updates, the presence of long-range order is surprisingly combined with an algebraic decay of correlations and the complete disordering of the chirality. It is only thanks to the investigation of unusually large systems (containing $\\sim 10^8$ spins) with cluster updates that the true asymptotic regime can be reached and that the system can be proven to consist of equivalent (i.e., equally ordered) sublattices. These large-scale simulations also demonstrate that the scalar chirality exhibits long-range order at zero temperature, implying that the system has to undergo a finite-temperature phase transition. Finally, we show that the average distance in the order parameter space, which has the structure of an infinite Cayley tree, remains remarkably small between any pair of points, even in the limit when the real space distance between them tends to infinity.\n• ### Evidence of columnar order in the fully frustrated transverse field Ising model on the square lattice(1207.1618)\n\nNov. 27, 2012 cond-mat.str-el\nUsing extensive classical and quantum Monte Carlo simulations, we investigate the ground-state phase diagram of the fully frustrated transverse field Ising model on the square lattice. We show that pure columnar order develops in the low-field phase above a surprisingly large length scale, below which an effective U(1) symmetry is present. The same conclusion applies to the Quantum Dimer Model with purely kinetic energy, to which the model reduces in the zero-field limit, as well as to the stacked classical version of the model. By contrast, the 2D classical version of the model is shown to develop plaquette order. Semiclassical arguments show that the transition from plaquette to columnar order is a consequence of quantum fluctuations.\n• ### Monte Carlo study of the critical properties of the three-dimensional 120-degree model(1106.3426)\n\nWe report on large scale finite-temperature Monte Carlo simulations of the classical $120^\\circ$ or $e_g$ orbital-only model on the simple cubic lattice in three dimensions with a focus towards its critical properties. This model displays a continuous phase transition to an orbitally ordered phase. While the correlation length exponent $\\nu\\approx0.665$ is close to the 3D XY value, the exponent $\\eta \\approx 0.15$ differs substantially from O(N) values. We also introduce a discrete variant of the $e_g$ model, called $e_g$-clock model, which is found to display the same set of exponents. Further, an emergent U(1) symmetry is found at the critical point $T_c$, which persists for $T<T_c$ below a crossover length scaling as $\\Lambda \\sim \\xi^a$, with an unusually small $a\\approx1.3$.\n• ### Unveiling the nature of three dimensional orbital ordering transitions: the case of $e_g$ and $t_{2g}$ models on the cubic lattice(1101.3259)\n\nMay 17, 2011 cond-mat.str-el\nWe perform large scale finite-temperature Monte Carlo simulations of the classical $e_g$ and $t_{2g}$ orbital models on the simple cubic lattice in three dimensions. The $e_g$ model displays a continuous phase transition to an orbitally ordered phase. While the correlation length exponent $\\nu\\approx0.66(1)$ is close to the 3D XY value, the exponent $\\eta \\approx 0.15(1)$ differs substantially from O(N) values. At $T_c$ a U(1) symmetry emerges, which persists for $T<T_c$ below a crossover length scaling as $\\Lambda \\sim \\xi^a$, with an unusually small $a\\approx1.3$. Finally, for the $t_{2g}$ model we find a {\\em first order} transition into a low-temperature lattice-nematic phase without orbital order.\n• ### Re-examining the directional-ordering transition in the compass model with screw-periodic boundary conditions(1002.3508)\n\nWe study the directional-ordering transition in the two-dimensional classical and quantum compass models on the square lattice by means of Monte Carlo simulations. An improved algorithm is presented which builds on the Wolff cluster algorithm in one-dimensional subspaces of the configuration space. This improvement allows us to study classical systems up to $L=512$. Based on the new algorithm we give evidence for the presence of strongly anomalous scaling for periodic boundary conditions which is much worse than anticipated before. We propose and study alternative boundary conditions for the compass model which do not make use of extended configuration spaces and show that they completely remove the problem with finite-size scaling. In the last part, we apply these boundary conditions to the quantum problem and present a considerably improved estimate for the critical temperature which should be of interest for future studies on the compass model. Our investigation identifies a strong one-dimensional magnetic ordering tendency with a large correlation length as the cause of the unusual scaling and moreover allows for a precise quantification of the anomalous length scale involved.\n• ### Finite-Temperature N\\'eel Ordering of Fluctuations in a Plaquette Orbital Model(0810.2378)\n\nAug. 10, 2009 cond-mat.stat-mech\nWe present a pseudospin model which should be experimentally accessible using solid-state devices and, being a variation on the compass model, adds to the toolbox for the protection of qubits in the area of quantum information. Using Monte Carlo methods, we find for both classical and quantum spins in two and three dimensions Ising-type Neel ordering of energy fluctuations at finite temperatures without magnetic order. We also readdress the controversy concerning the stability of the ordered state in the presence of quenched impurities and present numerical results which are at clear variance with earlier claims in the literature.\n• ### Comprehensive quantum Monte Carlo study of the quantum critical points in planar dimerized/quadrumerized Heisenberg models(0808.1418)\n\nWe study two planar square lattice Heisenberg models with explicit dimerization or quadrumerization of the couplings in the form of ladder and plaquette arrangements. We investigate the quantum critical points of those models by means of (stochastic series expansion) quantum Monte Carlo simulations as a function of the coupling ratio $\\alpha = J^\\prime/J$. The critical point of the order-disorder quantum phase transition in the ladder model is determined as $\\alpha_\\mathrm{c} = 1.9096(2)$ improving on previous studies. For the plaquette model we obtain $\\alpha_\\mathrm{c} = 1.8230(2)$ establishing a first benchmark for this model from quantum Monte Carlo simulations. Based on those values we give further convincing evidence that the models are in the three-dimensional (3D) classical Heisenberg universality class. The results of this contribution shall be useful as references for future investigations on planar Heisenberg models such as concerning the influence of non-magnetic impurities at the quantum critical point.\n• ### Evidence of Unconventional Universality Class in a Two-Dimensional Dimerized Quantum Heisenberg Model(0805.2500)\n\nThe two-dimensional $J$-$J^\\prime$ dimerized quantum Heisenberg model is studied on the square lattice by means of (stochastic series expansion) quantum Monte Carlo simulations as a function of the coupling ratio \\hbox{$\\alpha=J^\\prime/J$}. The critical point of the order-disorder quantum phase transition in the $J$-$J^\\prime$ model is determined as \\hbox{$\\alpha_\\mathrm{c}=2.5196(2)$} by finite-size scaling for up to approximately $10 000$ quantum spins. By comparing six dimerized models we show, contrary to the current belief, that the critical exponents of the $J$-$J^\\prime$ model are not in agreement with the three-dimensional classical Heisenberg universality class. This lends support to the notion of nontrivial critical excitations at the quantum critical point.\n• ### Monte Carlo simulations of the directional-ordering transition in the two-dimensional classical and quantum compass model(0804.2972)\n\nSept. 5, 2008 cond-mat.stat-mech\nA comprehensive study of the two-dimensional (2D) compass model on the square lattice is performed for classical and quantum spin degrees of freedom using Monte Carlo and quantum Monte Carlo methods. We employ state-of-the-art implementations using Metropolis, stochastic series expansion and parallel tempering techniques to obtain the critical ordering temperatures and critical exponents. In a pre-investigation we reconsider the classical compass model where we study and contrast the finite-size scaling behavior of ordinary periodic boundary conditions against annealed boundary conditions. It is shown that periodic boundary conditions suffer from extreme finite-size effects which might be caused by closed loop excitations on the torus. These excitations also appear to have severe effects on the Binder parameter. On this footing we report on a systematic Monte Carlo study of the quantum compass model. Our numerical results are at odds with recent literature on the subject which we trace back to neglecting the strong finite-size effects on periodic lattices. The critical temperatures are obtained as $T_\\mathrm{c}=0.1464(2)J$ and $T_\\mathrm{c}=0.055(1)J$ for the classical and quantum version, respectively, and our data support a transition in the 2D Ising universality class for both cases.\n• ### Percolation of Vortices in the 3D Abelian Lattice Higgs Model(0708.0903)\n\nJuly 9, 2008 hep-lat\nThe compact Abelian Higgs model is simulated on a cubic lattice where it possesses vortex lines and pointlike magnetic monopoles as topological defects. The focus of this high-precision Monte Carlo study is on the vortex network, which is investigated by means of percolation observables. In the region of the phase diagram where the Higgs and confinement phases are separated by a first-order transition, it is shown that the vortices percolate right at the phase boundary, and that the first-order nature of the transition is reflected by the network. In the crossover region, where the phase boundary ceases to be first order, the vortices are shown to still percolate. In contrast to other observables, the percolation observables show finite-size scaling. The exponents characterizing the critical behavior of the vortices in this region are shown to fall in the random percolation universality class.\n• ### Vortex Proliferation and the Dual Superconductor Scenario for Confinement: The 3D Compact U(1) Lattice Higgs Model(hep-lat/0510099)\n\nOct. 24, 2005 hep-lat\nIt is argued that the phase diagram of the 3D Compact U(1) Lattice Higgs Model is more refined than generally thought. The confined and Higgs phases are separated by a well-defined phase boundary, marked by proliferating vortices. It is shown that the confinement mechanism at work is precisely the dual superconductor scenario.\n• ### Kertesz Line in the Three-Dimensional Compact U(1) Lattice Higgs Model(cond-mat/0503599)\n\nMarch 24, 2005 cond-mat.str-el, hep-lat\nThe three-dimensional lattice Higgs model with compact U(1) gauge symmetry and unit charge is investigated by means of Monte Carlo simulations. The full model with fluctuating Higgs amplitude is simulated, and both energy as well as topological observables are measured. The data show a Higgs and a confined phase separated by a well-defined phase boundary, which is argued to be caused by proliferating vortices. For fixed gauge coupling, the phase boundary consists of a line of first-order phase transitions at small Higgs self-coupling, ending at a critical point. The phase boundary then continues as a Kertesz line across which thermodynamic quantities are nonsingular. Symmetry arguments are given to support these findings."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8723821,"math_prob":0.9759108,"size":12811,"snap":"2020-45-2020-50","text_gpt3_token_len":2739,"char_repetition_ratio":0.12727414,"word_repetition_ratio":0.07639979,"special_character_ratio":0.18905628,"punctuation_ratio":0.070935056,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97609127,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-12-04T20:53:01Z\",\"WARC-Record-ID\":\"<urn:uuid:28749a79-a985-431e-8b49-cdec2713ed7e>\",\"Content-Length\":\"116201\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cee4c9e1-f791-491f-a1f5-c60a529c0194>\",\"WARC-Concurrent-To\":\"<urn:uuid:86978a55-39e9-4757-9d90-b322ebcff13c>\",\"WARC-IP-Address\":\"194.95.114.13\",\"WARC-Target-URI\":\"https://labs.tib.eu/arxiv/?author=Sandro%20Wenzel\",\"WARC-Payload-Digest\":\"sha1:TLZ3SCB22QDZFDNZACYBETXPAYQP3D75\",\"WARC-Block-Digest\":\"sha1:Z2XNYHHYVAAWO264GUISSUFSVD3XXRXE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141743438.76_warc_CC-MAIN-20201204193220-20201204223220-00367.warc.gz\"}"} |
https://jamesotto852.github.io/ggdensity/articles/method.html | [
"Almost every function in ggdensity accepts a method argument—this is true for geom_hdr() and other layer functions (geom_hdr_lines(), geom_hdr_points(), …), as well as get_hdr() and get_hdr_1d(). This vignette summarizes the many ways in which the method argument can be specified; first looking at it from a more basic perspective, then from the perspective of a developer wanting to implement additional estimators.\n\n## Using ggdensity’s method_*() functions\n\nFirst, let’s load the necessary packages and generate some sample data.\n\nlibrary(\"ggdensity\"); theme_set(theme_minimal(8))\ntheme_update(legend.position = \"none\") # Suppressing legends for readability\nset.seed(1)\ndf <- data.frame(x = rnorm(500), y = rnorm(500))\np <- ggplot(df, aes(x, y))\np + geom_point()",
null,
"The easiest way to plot HDRs with geom_hdr() (or any other layer function from ggdensity) with a specified density estimator is to provide a character object to the method argument:\n\np + geom_hdr(method = \"kde\")\n\np + geom_hdr(method = \"mvnorm\")\n\np + geom_hdr(method = \"histogram\")\n\np + geom_hdr(method = \"freqpoly\")",
null,
"",
null,
"",
null,
"",
null,
"However, as of ggdensity v1.0.0 there is an alternative approach—providing a method_*() function call:\n\np + geom_hdr(method = method_kde())\n\np + geom_hdr(method = method_mvnorm())\n\np + geom_hdr(method = method_histogram())\n\np + geom_hdr(method = method_freqpoly())",
null,
"",
null,
"",
null,
"",
null,
"The default behaviors of these two approaches are the same and always will be—in this way, they are completely interchangeable. However, the method_*() function call is required to estimate HDRs with non-default estimator parameters. For example, we can set the adjust parameter to apply a multiplicative adjustment to the heuristically determined bandwidth in method_kde() (which itself uses the one computed by MASS::bandwidth.nrd()):\n\np + geom_hdr(method = method_kde(adjust = 1/2))",
null,
"The relevant parameters for each method are documented in their respective ?method_* help pages. Note that these parameters can not be provided to geom_hdr() or stat_hdr() and thus are not accessible if a character value is provided to method.\n\nThe method argument of get_hdr() functions in the same way:\n\nres <- get_hdr(df, method = method_kde(adjust = 1/2))\n\nstr(res)\n#> List of 3\n#> $df_est:'data.frame': 10000 obs. of 5 variables: #> ..$ x : num [1:10000] -3.01 -2.94 -2.87 -2.8 -2.73 ...\n#> ..$y : num [1:10000] -3 -3 -3 -3 -3 ... #> ..$ fhat : num [1:10000] 4.72e-17 1.30e-15 2.88e-14 5.16e-13 7.44e-12 ...\n#> ..$fhat_discretized: num [1:10000] 2.00e-19 5.50e-18 1.22e-16 2.18e-15 3.15e-14 ... #> ..$ hdr : num [1:10000] 1 1 1 1 1 1 1 1 1 1 ...\n#> $breaks: Named num [1:5] 0.00422 0.01273 0.03024 0.07544 Inf #> ..- attr(*, \"names\")= chr [1:5] \"99%\" \"95%\" \"80%\" \"50%\" ... #>$ data :'data.frame': 500 obs. of 3 variables:\n#> ..$x : num [1:500] -0.626 0.184 -0.836 1.595 0.33 ... #> ..$ y : num [1:500] 0.0773 -0.2969 -1.1832 0.0113 0.9916 ...\n#> ..$hdr_membership: num [1:500] 0.5 0.5 0.8 0.8 0.5 0.95 0.8 0.5 0.5 0.5 ... For details on the output of get_hdr(), see ?get_hdr. ### method_*_1d() functions In ggdensity, it is possible to estimate and plot 1-dimensional HDRs with geom_hdr_rug() and get_hdr_1d(). These functions also accept a method argument, but they do not accept the previously discussed method_*() functions. Instead they accept the 1-dimensional analogues: method_*_1d(). p + geom_point() + geom_hdr_rug(method = method_kde_1d()) p + geom_point() + geom_hdr_rug(method = method_norm_1d()) p + geom_point() + geom_hdr_rug(method = method_histogram_1d()) p + geom_point() + geom_hdr_rug(method = method_freqpoly_1d())",
null,
"",
null,
"",
null,
"",
null,
"Just like we saw with geom_hdr(), geom_hdr_rug() also accepts character values for method: p + geom_point() + geom_hdr_rug(method = \"kde\") p + geom_point() + geom_hdr_rug(method = \"norm\") p + geom_point() + geom_hdr_rug(method = \"histogram\") p + geom_point() + geom_hdr_rug(method = \"freqpoly\")",
null,
"",
null,
"",
null,
"",
null,
"Because the return values of the method_*() functions are incompatible with the 1-dimensional HDR estimation procedure, if a 2-dimensional method is specified the following error message is issued: p + geom_point() + geom_hdr_rug(method = method_kde()) #> Warning: Computation failed in stat_hdr_rug() #> Caused by error in get_hdr_1d(): #> ! Invalid method argument -- did you forget the _1d()? Lastly, we see that the method argument of get_hdr_1d() behaves similarly. res <- get_hdr_1d(df$x, method = method_kde_1d())\n\nstr(res)\n#> List of 3\n#> $df_est:'data.frame': 512 obs. of 4 variables: #> ..$ x : num [1:512] -3.01 -2.99 -2.98 -2.97 -2.95 ...\n#> ..$fhat : num [1:512] 0.00728 0.00748 0.00769 0.00789 0.00809 ... #> ..$ fhat_discretized: num [1:512] 9.73e-05 1.00e-04 1.03e-04 1.05e-04 1.08e-04 ...\n#> ..$hdr : num [1:512] 1 1 1 1 1 1 1 1 1 1 ... #>$ breaks: Named num [1:5] 0.0188 0.0562 0.1601 0.3146 Inf\n#> ..- attr(*, \"names\")= chr [1:5] \"99%\" \"95%\" \"80%\" \"50%\" ...\n#> $data :'data.frame': 500 obs. of 2 variables: #> ..$ x : num [1:500] -0.626 0.184 -0.836 1.595 0.33 ...\n#> ..$hdr_membership: num [1:500] 0.5 0.5 0.8 0.95 0.5 0.8 0.5 0.8 0.5 0.5 ... Again, for details on the above output of get_hdr_1d(), see ?get_hdr_1d. ## A detailed look at method_*() functions Now that we understand the ways in which method can be specified let’s look at the internals of the method_*() functions. Note: the implementations discussed in this section depend heavily on topics in functional programming, especially closures and function factories. While not necessary, a good understanding of these ideas is helpful—the linked chapters from Hadley Wickham’s Advanced R are a great place to start. Looking at the definition of method_kde(), we see that it is a function of h and adjust, returning a closure with arguments data, n, rangex, and rangey. The closure passes the x and y columns of data to MASS::kde2d(), returning the estimated density evaluated on a grid with columns x, y, and fhat. This closure is what geom_hdr() expects as its method argument, and is how the HDRs are estimated (via get_hdr()). method_kde function (h = NULL, adjust = c(1, 1)) { function(data, n, rangex, rangey) { if (is.null(h)) { h <- c(MASS::bandwidth.nrd(data$x), MASS::bandwidth.nrd(data$y)) } h <- h * adjust kdeout <- MASS::kde2d(x = data$x, y = data$y, n = n, h = h, lims = c(rangex, rangey)) df <- with(kdeout, expand.grid(x = x, y = y)) df$fhat <- as.vector(kdeout$z) df } } <bytecode: 0x560c272be498> <environment: namespace:ggdensity> Both method_histogram() and method_freqpoly() behave similarly, accepting parameters governing the density estimation procedure and returning a closure with arguments data, n, rangex, and rangey. However, these functions are significantly more complicated as the density estimation procedures are implemented entirely in ggdensity. method_mvnorm() is different in a few ways. The closure it returns is a function of just one argument: data. This is because it does not return the estimated density evaluated on a grid. Instead, it returns yet another closure with (vectorized) arguments x and y. As in method_kde(), the return value of the closure is a representation of the estimated pdf. The difference is the manner in which the pdf is represented. Whereas before we had a pdf defined by a discrete approximation on a grid, we now have an explicit definition of the pdf in terms of x and y. method_mvnorm function () { function(data) { data_matrix <- with(data, cbind(x, y)) mu_hat <- colMeans(data_matrix) R <- chol(cov(data_matrix)) function(x, y) { X <- cbind(x, y) tmp <- backsolve(R, t(X) - mu_hat, transpose = TRUE) logretval <- -sum(log(diag(R))) - log(2 * pi) - 0.5 * colSums(tmp^2) exp(logretval) } } } <bytecode: 0x560c26eab8c0> <environment: namespace:ggdensity> To summarize each of the above cases: in the first example, the method_*() function returned a closure with arguments data, n, rangex, and rangey which itself returned the estimated density evaluated on a grid; in the second, the method_*() function returned a closure with a single argument, data, which itself returned a closure with arguments x and y, representing the estimated density explicitly. In both cases, the method_*() function can have any number of parameters governing the density estimation procedure. These are the two ways the method argument may be specified. The first is necessary for cases in which an explicit definition of the estimated density is not computationally feasible (for example, KDEs). The second is an easier option for the cases in which a closed form of the estimated density is available (for example, parametric estimators). Let’s look at how we might define our own method_*() functions in each case, beginning with a simple parametric estimator. ### Implementing a method returning a PDF In ggdensity, method_mvnorm() estimates HDRs based on the parametric multivariate normal model. If we wanted to fit a simpler model in which the data is further assumed to be independent, we could implement method_mvnorm_ind(). method_mvnorm_ind <- function() { function(data) { xbar <- mean(data$x)\nybar <- mean(data$y) sx <- sd(data$x)\nsy <- sd(data$y) # joint pdf is simply the product of the marginals function(x, y) dnorm(x, xbar, sx) * dnorm(y, ybar, sy) } } To use our method_mvnorm_ind(), we just need to supply it to geom_hdr()’s method argument. ggplot(df, aes(x, y)) + geom_hdr(method = method_mvnorm_ind())",
null,
"If we transform our data to have non-zero covariance we still see the major and minor axes of the contours coincide with the plot axes—exactly what we would expect with this (incorrectly) constrained model. A <- matrix(c( 2*cos(pi/6), -2*sin(pi/6), 1*sin(pi/6), 1*cos(pi/6) ), byrow = TRUE, ncol = 2) df_rot <- as.data.frame(as.matrix(df) %*% A) colnames(df_rot) <- c(\"x\", \"y\") ggplot(df_rot, aes(x, y)) + geom_hdr(method = method_mvnorm_ind()) + geom_point(size = .4) + coord_fixed(xlim = c(-6, 6), ylim = c(-6, 6))",
null,
"Notice, method_mvnorm_ind() accepts no arguments. The density estimation procedure is so simple that there are no parameters to govern it. To allow for circular models in which the fitted variances are required to be equal, we can implement a circular argument. method_mvnorm_ind <- function(circular = FALSE) { function(data) { xbar <- mean(data$x)\nybar <- mean(data$y) if (circular) { sx <- sd(c(data$x - xbar, data$y - ybar)) sy <- sx } else { sx <- sd(data$x)\nsy <- sd(data$y) } function(x, y) dnorm(x, xbar, sx) * dnorm(y, ybar, sy) } } Now, the contours are perfectly circular. ggplot(df_rot, aes(x, y)) + geom_hdr(method = method_mvnorm_ind(circular = TRUE)) + geom_point(size = .4) + coord_fixed(xlim = c(-6, 6), ylim = c(-6, 6))",
null,
"In the above plot, the upper and lower portions of the HDRs are cut off. This is because the default behavior of ggdensity is to not draw HDRs outside of the “bounding box” around observed data. This is not because we are using a custom method_*() function. To fix this, we need to either set a better ylim value for geom_hdr() or specify a larger range in scale_y_continuous(). ggplot(df_rot, aes(x, y)) + geom_hdr(method = method_mvnorm_ind(circular = TRUE), ylim = c(-6, 6)) + geom_point(size = .4) + coord_fixed(xlim = c(-6, 6), ylim = c(-6, 6)) ggplot(df_rot, aes(x, y)) + geom_hdr(method = method_mvnorm_ind(circular = TRUE)) + geom_point(size = .4) + scale_y_continuous(limits = c(-6, 6)) + coord_fixed(xlim = c(-6, 6), ylim = c(-6, 6))",
null,
"",
null,
"Notice, neither of these approaches involve arguments to method_mvnorm_ind(). Internally, the closure returned by method_mvnorm_ind() is used by get_hdr(), along with information from the scales associated with the ggplot object. It is the scales that need adjusting, not anything related to the method argument. ### Implementing a method returning an evaluated PDF To illustrate the other case, in which the object returned by the closure is the estimated density evaluated on a grid, we implement method_mvnorm_ind_grid(). This estimates the same independent normal density as method_mvnorm_ind(), the only difference is the behavior of the returned closure. method_mvnorm_ind_grid <- function() { function(data, n, rangex, rangey) { # First, we estimate the density ----------------------------- xbar <- mean(data$x)\nybar <- mean(data$y) sx <- sd(data$x)\nsy <- sd(data$y) f_est <- function(x, y) dnorm(x, xbar, sx) * dnorm(y, ybar, sy) # Return the density evaluated on a grid --------------------- # df_grid defined by rangex, rangey, and n df_grid <- expand.grid( x = seq(rangex, rangex, length.out = n), y = seq(rangey, rangey, length.out = n) ) df_grid$fhat <- f_est(df_grid$x, df_grid$y)\n\ndf_grid\n}\n\n}\n\nSee that returned closure has additional arguments n, rangex, and rangey which define the grid. Also, the grid is represented a data.frame with columns x, y, and fhat, where fhat is the (potentially unnormalized) density estimate.\n\nAgain, to use our method_mvnorm_ind_grid() we provide it to geom_hdr()’s method argument.\n\nggplot(df, aes(x, y)) +\ngeom_hdr(method = method_mvnorm_ind_grid())",
null,
"Like we saw in the previous example, we could prevent the HDRs from being “cut off” by specifying either the x/ylim arguments in geom_hdr() or by setting a larger range in scale_x/y_continuous().\n\n## The method_*_1d() functions\n\nWe saw before that ggdensity uses method_*_1d() functions for the estimation of 1-dimensional densities. The internals of these functions are very similar to the method_*() functions, the only differences are slight changes to the arguments and return values of the returned closures.\n\nLooking at the definition of method_kde_1d(), we see the returned closure has arguments x, n, and range. This is very similar to method_kde(), the only difference is we are now dealing with univariate data: the vector argument x is used instead of data, and we have a single range parameter instead of rangex and rangey. Similarly, the closure now returns the estimated density evaluated on a univariate grid, with columns x and fhat instead of the bivariate grid with columns x, y, and fhat. Finally, see that method_kde_1d() accepts several arguments governing the density estimation procedure just like method_kde().\n\nmethod_kde_1d\nfunction (bw = \"nrd0\", adjust = 1, kernel = \"gaussian\", weights = NULL,\nwindow = kernel)\n{\nfunction(x, n, range) {\nnx <- length(x)\nif (is.null(weights)) {\nweights <- rep(1/nx, nx)\n}\nelse {\nweights <- normalize(weights)\n}\ndens <- stats::density(x, bw = bw, adjust = adjust, kernel = kernel,\nweights = weights, window = window, n = n, from = range,\nto = range)\ndata.frame(x = dens$x, fhat = dens$y)\n}\n}\n<bytecode: 0x560c28d2c850>\n<environment: namespace:ggdensity>\n\nEstimated univariate densities can also be represented explicitly, as illustrated by method_norm_1d(). Comparing this to the previously discussed method_mvnorm() we see that little has changed: the closure is now a function of a vector x instead of data and returns a function of one variable (x) instead of two (x and y).\n\nmethod_norm_1d\nfunction ()\n{\nfunction(x) {\nmu_hat <- mean(x)\nsigma_hat <- sd(x)\nfunction(x) dnorm(x, mu_hat, sigma_hat)\n}\n}\n<bytecode: 0x560c2edd98b0>\n<environment: namespace:ggdensity>\n\nAdditional method_*_1d() functions can be implemented in the same way as the 2-dimensional method_*() functions, so long as the returned closure is structured in one of the two ways we have seen here."
] | [
null,
"https://jamesotto852.github.io/ggdensity/articles/method_files/figure-html/unnamed-chunk-2-1.png",
null,
"https://jamesotto852.github.io/ggdensity/articles/method_files/figure-html/unnamed-chunk-3-1.png",
null,
"https://jamesotto852.github.io/ggdensity/articles/method_files/figure-html/unnamed-chunk-3-2.png",
null,
"https://jamesotto852.github.io/ggdensity/articles/method_files/figure-html/unnamed-chunk-3-3.png",
null,
"https://jamesotto852.github.io/ggdensity/articles/method_files/figure-html/unnamed-chunk-3-4.png",
null,
"https://jamesotto852.github.io/ggdensity/articles/method_files/figure-html/unnamed-chunk-4-1.png",
null,
"https://jamesotto852.github.io/ggdensity/articles/method_files/figure-html/unnamed-chunk-4-2.png",
null,
"https://jamesotto852.github.io/ggdensity/articles/method_files/figure-html/unnamed-chunk-4-3.png",
null,
"https://jamesotto852.github.io/ggdensity/articles/method_files/figure-html/unnamed-chunk-4-4.png",
null,
"https://jamesotto852.github.io/ggdensity/articles/method_files/figure-html/unnamed-chunk-5-1.png",
null,
"https://jamesotto852.github.io/ggdensity/articles/method_files/figure-html/unnamed-chunk-7-1.png",
null,
"https://jamesotto852.github.io/ggdensity/articles/method_files/figure-html/unnamed-chunk-7-2.png",
null,
"https://jamesotto852.github.io/ggdensity/articles/method_files/figure-html/unnamed-chunk-7-3.png",
null,
"https://jamesotto852.github.io/ggdensity/articles/method_files/figure-html/unnamed-chunk-7-4.png",
null,
"https://jamesotto852.github.io/ggdensity/articles/method_files/figure-html/unnamed-chunk-8-1.png",
null,
"https://jamesotto852.github.io/ggdensity/articles/method_files/figure-html/unnamed-chunk-8-2.png",
null,
"https://jamesotto852.github.io/ggdensity/articles/method_files/figure-html/unnamed-chunk-8-3.png",
null,
"https://jamesotto852.github.io/ggdensity/articles/method_files/figure-html/unnamed-chunk-8-4.png",
null,
"https://jamesotto852.github.io/ggdensity/articles/method_files/figure-html/unnamed-chunk-14-1.png",
null,
"https://jamesotto852.github.io/ggdensity/articles/method_files/figure-html/unnamed-chunk-15-1.png",
null,
"https://jamesotto852.github.io/ggdensity/articles/method_files/figure-html/unnamed-chunk-17-1.png",
null,
"https://jamesotto852.github.io/ggdensity/articles/method_files/figure-html/unnamed-chunk-18-1.png",
null,
"https://jamesotto852.github.io/ggdensity/articles/method_files/figure-html/unnamed-chunk-18-2.png",
null,
"https://jamesotto852.github.io/ggdensity/articles/method_files/figure-html/unnamed-chunk-20-1.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6423976,"math_prob":0.994042,"size":15200,"snap":"2023-14-2023-23","text_gpt3_token_len":4505,"char_repetition_ratio":0.15569887,"word_repetition_ratio":0.12542808,"special_character_ratio":0.31822369,"punctuation_ratio":0.18085831,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99746805,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-01T05:36:32Z\",\"WARC-Record-ID\":\"<urn:uuid:21caf1d5-8569-4ef4-a743-b42230a9d0b9>\",\"Content-Length\":\"75044\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:31beea7a-9d34-45a8-bc0b-c026e6a3c7f3>\",\"WARC-Concurrent-To\":\"<urn:uuid:421ea940-3d01-4cfd-8646-e067c731f0b3>\",\"WARC-IP-Address\":\"185.199.110.153\",\"WARC-Target-URI\":\"https://jamesotto852.github.io/ggdensity/articles/method.html\",\"WARC-Payload-Digest\":\"sha1:XCN6CKSYYWF3O25AOJBNNM6ZHOTHUPAC\",\"WARC-Block-Digest\":\"sha1:4J5HREONAOABFP4EAMAHOQFWG7QPHQML\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224647614.56_warc_CC-MAIN-20230601042457-20230601072457-00312.warc.gz\"}"} |
https://arguablywrong.home.blog/2019/10/17/selection-pressure-on-prevalence-of-a-dominant-mutation/ | [
"Design a site like this with WordPress.com\n\n# Selection pressure on prevalence of a dominant mutation\n\nSuppose we have a dominant mutation in the population with some prevalence",
null,
"$p$. That means that a fraction",
null,
"$p$ of the alleles at that locus in the population are our mutation, while the rest are wild-type. We’d like to know how that prevalence changes over time. Some of those changes will be random, but we can average out to ask what the expectation of the change is at any particular point. We’ll have to make some assumptions first:\n\n• Generations will be single events. We’ll start off with some generation, apply a selection model, and use that to generate the next generation, ad infinitum.\n• Individuals will randomly mate to produce the next generation. This means there won’t be any population substructure and we can calculate the fraction of homozygotes and heterozygotes using only the mutation prevalence",
null,
"$p$.\n• The mutation confers a selective advantage",
null,
"$s$ that is equal in both homozygote and heterozygote carriers.\n\nGiven this, we can work out a table comparing different parts of the population, where",
null,
"$q = 1 - p$ is the probability of the wild-type allele:\n\nRelative fitness here means that parents with the allele will have on average",
null,
"$1 + s$ children if wild type parents have",
null,
"$1$. That lets us calculate the fraction of the parents of the next generation, adjusting for increased reproduction of the carriers.\n\nNow that we know what fraction of the parental population has each status, we can calculate the expected prevalence of the allele after one generation. The next generation will have the same prevalence as in the parental population.",
null,
"$p' = \\frac{pq(1+s) + p^2(1+s)}{1 + sp + spq}$.\n\nThat is, we sum up half the fraction of heterozygotes and the fraction of homozygotes. This simplifies to",
null,
"$p' = \\frac{p (1 + s)}{1 + sp + spq}$.\n\nNote that for the case where",
null,
"$s=0$, this reduces to",
null,
"$p' = p$, indicating no expected change in allele prevalence when there is no selective pressure, as we would expect. We can then calculate the change in prevalence as",
null,
"$\\Delta p = p' - p$, which simplifies down to",
null,
"$\\Delta p = \\frac{spq^2}{1 + sp + spq}$.\n\nThis change is zero if and only if",
null,
"$spq^2$ is zero. That is, if any of",
null,
"$s$,",
null,
"$p$, or",
null,
"$q$ are zero. This means that the allele changes frequency unless there is no selective pressure or the frequency is fixed at 0 or 1.\n\nWe can get a better handle on the behavior of this by treating it as a differential equation and solving. We can rewrite it as",
null,
"$\\frac{dp}{dt} \\frac{-p^2 + 2p + \\frac{1}{s}}{p^3 - 2p^2 + s} = 1$.\n\nThen we integrate both sides with respect to t:",
null,
"$\\int \\frac{-p^2 + 2p + \\frac{1}{s}}{p^3 - 2p^2 + s} \\frac{dp}{dt} dt = \\int 1 dt$.\n\nSimplify the left side by partial fraction decomposition and solve the right side:",
null,
"$\\int \\left [\\frac{1}{sp} + \\frac{1}{1-p} + \\frac{1}{s(1-p)} + \\frac{1}{(1-p)^2} + \\frac{1}{s(1-p)^2} \\right] dp = t + c$.\n\nThen we simply integrate each piece of the left integral separately and combine the constants of integration.",
null,
"$\\frac{1 + s}{sq} + \\frac{1}{s}ln(\\frac{p}{q}) - ln(q) = t + c$.\n\nThere’s no way to solve explicitly for",
null,
"$p$ at any particular number of generations. Instead what this gives us is a solution for",
null,
"$t$, the number of generations it takes to reach any particular prevalence, where the constant of integration is provided by the initial prevalence of the allele.",
null,
"$t(p | s) = \\frac{1+s}{s} \\left ( \\frac{q_0 - q}{q_0q} - ln \\frac{q}{q_0} \\right ) + \\frac{1}{s} ln \\frac{p}{p_0}$.\n\nThis lets us answer the question of how changes in the selection pressure affects the speed of selection. Suppose we keep the same starting prevalence and we wish to know how fast the population will reach some ending prevalence. The time needed is of the form",
null,
"$t(p | s) = \\frac{1}{s}(s K(p) + Q(p))$",
null,
"$K(p) = \\frac{q_0 - q}{q_0q} - ln \\frac{q}{q_0}$",
null,
"$Q(p) = \\frac{q_0 - q}{q_0q} - ln \\frac{q}{q_0} + ln\\frac{p}{p_0}$\n\nIf",
null,
"$K(p) >> Q(p)$ then the time becomes approximately independent of",
null,
"$s$. If",
null,
"$K(p) << Q(p)$ then the time becomes approximately inversely proportional to the selection pressure. This gives us a bound on how fast increased selection can work: at most, doubling the selection pressure will half the time needed to increase allele prevalence to a given point."
] | [
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.89112264,"math_prob":0.9998037,"size":3509,"snap":"2022-40-2023-06","text_gpt3_token_len":714,"char_repetition_ratio":0.15378031,"word_repetition_ratio":0.006779661,"special_character_ratio":0.19321744,"punctuation_ratio":0.093603745,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99990976,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-28T04:52:35Z\",\"WARC-Record-ID\":\"<urn:uuid:98f0bed5-4eb8-4e75-81e6-44cc770b398a>\",\"Content-Length\":\"111204\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:52815365-4c39-4102-89d0-4d63f04f2b8f>\",\"WARC-Concurrent-To\":\"<urn:uuid:02219ec9-3331-48ec-86db-26fc07d3cdd8>\",\"WARC-IP-Address\":\"192.0.78.31\",\"WARC-Target-URI\":\"https://arguablywrong.home.blog/2019/10/17/selection-pressure-on-prevalence-of-a-dominant-mutation/\",\"WARC-Payload-Digest\":\"sha1:AZ7JEW3MD6AK3YVVVKBTXXJEH5U7IST5\",\"WARC-Block-Digest\":\"sha1:XBSHZE6PZFQAOVCCBTUYF2TF35BFTIXA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499470.19_warc_CC-MAIN-20230128023233-20230128053233-00212.warc.gz\"}"} |
https://thtsearch.com/content/Arc_(geometry)/ | [
"You Might Like\n\nIn Euclidean geometry, an arc (symbol: ) is a closed segment of a differentiable curve. A common example in the plane (a two-dimensional manifold), is a segment of a circle called a circular arc. In space, if the arc is part of a great circle (or great ellipse), it is called a great arc.\n\nEvery pair of distinct points on a circle determines two arcs. If the two points are not directly opposite each other, one of these arcs, the minor arc, will subtend an angle at the centre of the circle that is less than π radians (180 degrees), and the other arc, the major arc, will subtend an angle greater than π radians.\n\n# Circular arcs\n\nThe length (more precisely, arc length) of an arc of a circle with radius r and subtending an angle θ (measured in radians) with the circle center — i.e., the central angle — is\n\nThis is because\n\nSubstituting in the circumference\n\nand, with α being the same angle measured in degrees, since θ = α/180π, the arc length equals\n\nA practical way to determine the length of an arc in a circle is to plot two lines from the arc's endpoints to the center of the circle, measure the angle where the two lines meet the center, then solve for L by cross-multiplying the statement:\n\nFor example, if the measure of the angle is 60 degrees and the circumference is 24 inches, then\n\nThis is so because the circumference of a circle and the degrees of a circle, of which there are always 360, are directly proportional.\n\nThe upper half of a circle can be parameterized as\n\nThe area of the sector formed by an arc and the center of a circle (bounded by the arc and the two radii drawn to its endpoints) is\n\nThe area A has the same proportion to the circle area as the angle θ to a full circle:\n\nWe can cancel π on both sides:\n\nBy multiplying both sides by r2, we get the final result:\n\nUsing the conversion described above, we find that the area of the sector for a central angle measured in degrees is\n\nThe area of the shape bounded by the arc and the straight line between its two end points is\n\nUsing the intersecting chords theorem (also known as power of a point or secant tangent theorem) it is possible to calculate the radius r of a circle given the height H and the width W of an arc:\n\nConsider the chord with the same endpoints as the arc. Its perpendicular bisector is another chord, which is a diameter of the circle. The length of the first chord is W, and it is divided by the bisector into two equal halves, each with length W/2. The total length of the diameter is 2r, and it is divided into two parts by the first chord. The length of one part is the sagitta of the arc, H, and the other part is the remainder of the diameter, with length 2r − H. Applying the intersecting chords theorem to these two chords produces\n\nwhence\n\nso"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.92033637,"math_prob":0.99274236,"size":2742,"snap":"2022-40-2023-06","text_gpt3_token_len":634,"char_repetition_ratio":0.16325785,"word_repetition_ratio":0.007736944,"special_character_ratio":0.22428884,"punctuation_ratio":0.08596491,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9995291,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-25T09:04:20Z\",\"WARC-Record-ID\":\"<urn:uuid:9ab1167e-c5db-4788-b3e1-872057e55d1f>\",\"Content-Length\":\"20048\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ca6b56ae-3a8c-4650-a732-14d0a9fc5d1a>\",\"WARC-Concurrent-To\":\"<urn:uuid:d3013bc3-e713-4a99-9877-cc2d9ed74b9b>\",\"WARC-IP-Address\":\"104.21.23.82\",\"WARC-Target-URI\":\"https://thtsearch.com/content/Arc_(geometry)/\",\"WARC-Payload-Digest\":\"sha1:U3UULORRKMYHJCFGKJJ32MP6CAA3VTIH\",\"WARC-Block-Digest\":\"sha1:V4HT6ULNA4F3KQWOKG3VERNR53RYP5ED\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030334515.14_warc_CC-MAIN-20220925070216-20220925100216-00345.warc.gz\"}"} |
https://adaic.org/resources/add_content/standards/95lrm/ARM_HTML/RM-4-4.html | [
"# 4.4 Expressions\n\n1\nAn expression is a formula that defines the computation or retrieval of a value. In this International Standard, the term ``expression'' refers to a construct of the syntactic category expression or of any of the other five syntactic categories defined below.\n\n#### Syntax\n\n2\nexpression ::=\nrelation {and relation} | relation {and then relation}\n| relation {or relation} | relation {or else relation}\n| relation {xor relation}\n3\nrelation ::=\nsimple_expression [relational_operator simple_expression]\n| simple_expression [notin range\n| simple_expression [notin subtype_mark\n4\n5\nterm ::= factor {multiplying_operator factor}\n6\nfactor ::= primary [** primary] | abs primary | not primary\n7\nprimary ::=\nnumeric_literal | null | string_literal | aggregate\n| name | qualified_expression | allocator | (expression)\n\n#### Name Resolution Rules\n\n8\nA name used as a primary shall resolve to denote an object or a value.\n\n#### Static Semantics\n\n9\nEach expression has a type; it specifies the computation or retrieval of a value of that type.\n\n#### Dynamic Semantics\n\n10\nThe value of a primary that is a name denoting an object is the value of the object.\n\n#### Implementation Permissions\n\n11\nFor the evaluation of a primary that is a name denoting an object of an unconstrained numeric subtype, if the value of the object is outside the base range of its type, the implementation may either raise Constraint_Error or return the value of the object.\n\n#### Examples\n\n12\nExamples of primaries:\n13\n4.0 -- real literal\nPi -- named number\n(1 .. 10 => 0) -- array aggregate\nSum -- variable\nInteger'Last -- attribute\nSine(X) -- function call\nColor'(Blue) -- qualified expression\nReal(M*N) -- conversion\n(Line_Count + 10) -- parenthesized expression\n14\nExamples of expressions:\n15\nVolume -- primary\nnot Destroyed -- factor\n2*Line_Count -- term\n-4.0 -- simple expression\n-4.0 + A -- simple expression\nB**2 - 4.0*A*C -- simple expression\nPassword(1 .. 3) = \"Bwv\" -- relation\nCount in Small_Int -- relation\nCount not in Small_Int -- relation\nIndex = 0 or Item_Hit -- expression\n(Cold and Sunny) or Warm -- expression (parentheses are required)\nA**(B**C) -- expression (parentheses are required)"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.54017705,"math_prob":0.9290947,"size":2325,"snap":"2020-45-2020-50","text_gpt3_token_len":583,"char_repetition_ratio":0.17277035,"word_repetition_ratio":0.06970509,"special_character_ratio":0.2627957,"punctuation_ratio":0.09090909,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98522025,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-27T20:26:12Z\",\"WARC-Record-ID\":\"<urn:uuid:aba2347e-6724-4cd3-9146-1ae40d7ddaa3>\",\"Content-Length\":\"18335\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:25754b79-c831-4989-b26f-8c6e3dd124c0>\",\"WARC-Concurrent-To\":\"<urn:uuid:24201d2b-0ea2-4751-a8dc-369c3b38004c>\",\"WARC-IP-Address\":\"35.214.219.108\",\"WARC-Target-URI\":\"https://adaic.org/resources/add_content/standards/95lrm/ARM_HTML/RM-4-4.html\",\"WARC-Payload-Digest\":\"sha1:WSULSUQMWGVS4RWITXOAW4WWSYXNWA2Q\",\"WARC-Block-Digest\":\"sha1:EPNPDHK5K3DI4XBPCO4NRWDP7NBN6KP6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141194171.48_warc_CC-MAIN-20201127191451-20201127221451-00395.warc.gz\"}"} |
http://oeis.org/A165726/internal | [
"The OEIS Foundation is supported by donations from users of the OEIS and by a grant from the Simons Foundation.",
null,
"Hints (Greetings from The On-Line Encyclopedia of Integer Sequences!)\n A165726 Number of reduced words of length n in Coxeter group on 50 generators S_i with relations (S_i)^2 = (S_i S_j)^9 = I. 0\n\n%I\n\n%S 1,50,2450,120050,5882450,288240050,14123762450,692064360050,\n\n%T 33911153642450,1661646528478825,81420679895402400,\n\n%U 3989613314871777600,195491052428573042400,9579061568993020137600,469374016880312098682400\n\n%N Number of reduced words of length n in Coxeter group on 50 generators S_i with relations (S_i)^2 = (S_i S_j)^9 = I.\n\n%C The initial terms coincide with those of A170769, although the two sequences are eventually different.\n\n%C Computed with MAGMA using commands similar to those used to compute A154638.\n\n%H <a href=\"/index/Rec#order_09\">Index entries for linear recurrences with constant coefficients</a>, signature (48, 48, 48, 48, 48, 48, 48, 48, -1176).\n\n%F G.f. (t^9 + 2*t^8 + 2*t^7 + 2*t^6 + 2*t^5 + 2*t^4 + 2*t^3 + 2*t^2 + 2*t +\n\n%F 1)/(1176*t^9 - 48*t^8 - 48*t^7 - 48*t^6 - 48*t^5 - 48*t^4 - 48*t^3 -\n\n%F 48*t^2 - 48*t + 1)\n\n%K nonn\n\n%O 0,2\n\n%A _John Cannon_ and _N. J. A. Sloane_, Dec 03 2009\n\nLookup | Welcome | Wiki | Register | Music | Plot 2 | Demos | Index | Browse | More | WebCam\nContribute new seq. or comment | Format | Style Sheet | Transforms | Superseeker | Recent\nThe OEIS Community | Maintained by The OEIS Foundation Inc.\n\nLast modified April 19 17:46 EDT 2021. Contains 343117 sequences. (Running on oeis4.)"
] | [
null,
"http://oeis.org/banner2021.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6269025,"math_prob":0.92813677,"size":911,"snap":"2021-04-2021-17","text_gpt3_token_len":376,"char_repetition_ratio":0.13340683,"word_repetition_ratio":0.022727273,"special_character_ratio":0.6026345,"punctuation_ratio":0.17073171,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9640882,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-19T23:09:32Z\",\"WARC-Record-ID\":\"<urn:uuid:33e4d5f3-23ea-4f9a-9ea5-0e0ee0af48ff>\",\"Content-Length\":\"8321\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fbeb8ff0-60c7-4ded-a60b-23ae79db5d2c>\",\"WARC-Concurrent-To\":\"<urn:uuid:3a9dd96f-5513-48f1-9ed2-639fde71f456>\",\"WARC-IP-Address\":\"104.239.138.29\",\"WARC-Target-URI\":\"http://oeis.org/A165726/internal\",\"WARC-Payload-Digest\":\"sha1:CVQOR3I3XWZKJR6VG6OHFFRDEBIDRTQ3\",\"WARC-Block-Digest\":\"sha1:E3GMGMZ5GT5IUEKF4D2LJC4EGBPXHJQI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038917413.71_warc_CC-MAIN-20210419204416-20210419234416-00520.warc.gz\"}"} |
https://www.jiskha.com/questions/514200/perimeter-of-a-equilateral-triangle-is-given-as-42-cm-find-the-perimeter-of-a-shaded-part | [
"# maths\n\nperimeter of a equilateral triangle is given as 42 cm. Find the perimeter of a shaded part of triangle if the 3 sectors of a circle marked are identical\n\n1. 👍 0\n2. 👎 0\n3. 👁 206\n1. Sorry, we do not see the sectors or the shading. You will need to describe the figure or try to post a link to the figure.\n\n1. 👍 0\n2. 👎 0\n\n## Similar Questions\n\n1. ### Math\n\nThe formula for finding the perimeter of an equilateral triangle is P = 3s. How can you transform the formula so that it can be used to determine the length of one side of the triangle when given the perimeter?\n\n2. ### Algebra\n\nA regular pentagon and an equilateral triangle have the same perimeter. The perimeter of the pentagon is 5 (1/2x +2) inches. The perimeter of the triangle is 4(x-2) inches. What is the perimeter of each figure?\n\n3. ### Maths\n\nThe sides of the equilateral triangle is 3x+2,2y-x and y+3 find x and y.and the perimeter of the triangle\n\n4. ### geometry\n\nhow do you find if the perimeter of a triangle EFG is 32, triangle EFG scalene, isosceles, or equilateral?\n\n1. ### Algebra\n\nA square and an equilateral triangle have the same perimeter.Each side of the triangle is 5 inches longer than each side of the square. What is the perimeter of the square.\n\n2. ### math\n\nAn equilateral triangle has a perimeter of 15x +18. What is the length of each side of the triangle?\n\n3. ### geometry\n\none side of an equilateral triangle is 7 dm shorter than one side of square.the sum of the perimeter of the two figure is 49 dm . find the perimeter of each figure.\n\n4. ### geometry\n\nfind the perimeter of an equilateral triangle with one side that measures 9.6 centimeters\n\n1. ### Geometry\n\nthe perimeter of an equilateral triangle is 32 centimeters. find the length of an altitude of the triangleto the nearest tenth of a centimeter.\n\n2. ### Geometry\n\nAn equilateral triangle has a perimeter of 120 inches. What is the area of the triangle? Express your answer in simplest radical form.\n\n3. ### Centroids and Triangles - determining perimeter?\n\nLet G denote the centroid of triangle ABC. If triangle ABG is equilateral with side length 2, then determine the perimeter of triangle ABC. I drew the diagram, but it doesn't really help....\n\n4. ### Algebra\n\nThe perimeter of an equilateral triangle is 7 in. more than the perimeter of a square. The side of the triangle is 5 in. longer than the side of the square. Find the length of each side of the triangle. (Note: An equilateral"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.84280735,"math_prob":0.9972501,"size":2158,"snap":"2020-34-2020-40","text_gpt3_token_len":545,"char_repetition_ratio":0.2650882,"word_repetition_ratio":0.1043257,"special_character_ratio":0.23586655,"punctuation_ratio":0.0990566,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999361,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-27T00:17:25Z\",\"WARC-Record-ID\":\"<urn:uuid:88d8c68e-1c14-4f7b-b985-ac04f498a4f2>\",\"Content-Length\":\"17143\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7a665541-1f61-4f79-9309-2e64b99941d3>\",\"WARC-Concurrent-To\":\"<urn:uuid:d31eef8b-6ef7-414e-8808-e8cbbaed86ed>\",\"WARC-IP-Address\":\"66.228.55.50\",\"WARC-Target-URI\":\"https://www.jiskha.com/questions/514200/perimeter-of-a-equilateral-triangle-is-given-as-42-cm-find-the-perimeter-of-a-shaded-part\",\"WARC-Payload-Digest\":\"sha1:E4Y62KP7EFAKIAESGH3FHR6XTZJIDISK\",\"WARC-Block-Digest\":\"sha1:KHQSHE265OJHHXDPSHW4NBCTFELPAYLP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400249545.55_warc_CC-MAIN-20200926231818-20200927021818-00480.warc.gz\"}"} |
https://in.mathworks.com/help/robotics/ug/design-position-controlled-manipulator-using-simscape.html | [
"# Design Position Controlled Manipulator Using Simscape\n\nThis example shows you how to use Simulink® with Robotics System Toolbox™ to design a position controller for a manipulator and compute joint position required to drive the Simscape™ Multibody™ model of the manipulator.\n\nRobotics System Toolbox, Simscape Multibody, and Robotics System Toolbox Robot Library Data support package are required to run this example.\n\n### Introduction\n\nIn this example, you will load an included robot model using `loadrobot` as a `rigidBodyTree` object, then create a Simscape Multibody model of the robot using `smimport`. Configure the model to accept joint torque and return the computed joint position and velocity. Implement a computed torque controller with joint position and velocity feedback using manipulator algorithm blocks. The controller receives joint position and velocity information from the robot model and sends torque commands to drive the robot to the desired joint position computed using Inverse Kinematics (IK).\n\n### Load Robot Model in Workspace\n\nThis example uses a model of the KINOVA® Gen3, a 7 degree-of-freedom robot manipulator. Call `loadrobot` to generate a `rigidBodyTree` model of the robot. Set the `DataFormat` properties to be consistent with Simscape.\n\n`robot = loadrobot(\"kinovaGen3\",DataFormat=\"column\");`\n\n### Generate Simscape Multibody Model from Rigid Body Tree\n\nImport the `robot` object into Simscape Multibody and get the model parameters.\n\n```robotSM = smimport(robot,ModelName=\"ManipulatorPositionControl_Subsystem\"); sm_mdl = get_param(robotSM,\"Name\");```",
null,
"### Configure Simscape Multibody Model\n\nPrepare the Simscape Multibody model to accept the joint torque inputs and return the joint positions and velocities. You can follow the steps below to manually configure the model or use the `helperInstrumentSMModels` helper function to automatically configure the model.\n\n#### Manual Configuration of Simscape Multibody Model\n\n1. In your model, double-click a Joint block. The Property Inspector dialog box opens.\n\n2. In the Property Inspector dialog box, select Z Revolute Primitive (Rz) > Actuation > Torque > Provided by Input, and select Z Revolute Primitive (Rz) > Actuation > Motion > Automatically computed. The block exposes a physical signal input port, labeled `t`.",
null,
"3. Select Z Revolute Primitive (Rz) > Sensing and enable Position, Velocity, and Acceleration. The block exposes a physical signal output ports, labeled `q`, `w`, and `b`.",
null,
"4. Add a Simulink-PS Converter block from the Simscape > Utilities library, connect the Simulink-PS Converter block to physical signal input port `t` of the Joint block.\n\n5. Add a From block from the Simulink > Signal Routing library to the input port of the Simulink-PS Converter block.\n\n6. Add three PS-Simulink Converter blocks from the Simscape > Utilities library, connect the PS-Simulink Converter blocks to physical signal output ports `q`, `w`, and `b` of the Joint block.\n\n7. Add three Goto blocks from the Simulink > Signal Routing library to the output port of the PS-Simulink Converter blocks.",
null,
"8. Repeat these steps for all the Joint blocks.\n\n9. Add a Demux block from the Simulink > Signal Routing library and connect the joint torque Goto blocks related to the respective joint torque From blocks.\n\n10. Add three Mux blocks from the Simulink > Signal Routing library and connect the joint motions From blocks related to the respective joint motions Goto blocks.\n\n11. Create a subsystem of the Simscape Multibody model.\n\n#### Configure Simscape Multibody Model Using Helper Function\n\nUse the `helperInstrumentSMModels` helper function to automatically configure the model.\n\nCall the helper function to automatically configure the Simscape Multibody model to accept torque input.\n\n`helperInstrumentSMModels.instrumentRBTSupportedJointInputs(sm_mdl,robot,\"torque\")`\n\nCall the helper function again to configure the Simscape Multibody model to enable position, velocity, and acceleration sensing at each joint.\n\n`helperInstrumentSMModels.instrumentRBTSupportedJointOutputs(sm_mdl,robot,\"motion\")`\n\nCreate a subsystem of the Simscape Multibody model.\n\n`helperInstrumentSMModels.convertToSubsystem(sm_mdl)`",
null,
"#### Set Up Variables in Model Workspace\n\nSet up the variables in the model workspace that specify the start and end waypoints, and the joint starting position and velocity.\n\n```mdlWks = get_param(robotSM,\"ModelWorkspace\"); assignin(mdlWks,\"robotToTest\",robot) assignin(mdlWks,\"q0\",robot.homeConfiguration) assignin(mdlWks,\"dq0\",zeros(size(robot.homeConfiguration)))```\n\n### Computed Torque Controller\n\nThe Computed Torque Controller subsystem is built using three robotics manipulator blocks: Joint Space Mass Matrix, Velocity Product Torque, and Gravity Torque. The `rigidBodyTree` model, `robotToTest`, is assigned in all those blocks.",
null,
"The Computed Torque Controller subsystem accepts Measured Configuration, Measured Velocities and Desired Configuration and returns Applied Torque for each joint of the manipulator.\n\n### Set Up Controller Input\n\n1. Add a Coordinate Transformation Conversion block from the Robotics System Toolbox > Utilities library to the model. Set the input representation as Translation Vector and the output representation as Homogeneous Transformation.",
null,
"2. Add a Constant block and set the value as `[0.5 0.5 0.5]`. Connect the Constant block to the input port of the Coordinate Transformation Conversion block.\n\n3. Add a Inverse Kinematics block from the Robotics System Toolbox > Manipulator Algorithms library to the model.\n\n4. In the Inverse Kinematics block, specify the Rigid body tree model as `robotToTest`, then click Select body next to the End effector to select the end effector body.",
null,
"5. Connect the output port of Coordinate Transformation Conversion block to the Pose port of the Inverse Kinematics block.\n\n6. Add another Constant block and set the value as `[0 0 0 1 1 1]`. Connect the Constant block to the Weights port of the Inverse Kinematics block.\n\n7. Connect a Delay block to the Config port of the Inverse Kinematics block and specify the Initial condition as `q0`.",
null,
"8. Connect the output of the Delay block to the InitialGuess port of the Inverse Kinematics block.\n\n### Final Setup\n\nConnect the Simscape Multibody Model subsystem, Computed Torque Controller subsystem, Controller Input blocks, and a Scope block as shown in figure.",
null,
"### Simulate Model\n\nOpen the provided ManipulatorPositionControl.slx model and replace the Robot subsystem with the subsystem created in ManipulatorPositionControl_Subsystem model above, for it to be able to fetch the meshes correctly.\n\n```open_system(\"ManipulatorPositionControl.slx\") ```\n\nSave the model and simulate it.\n\n```sim(\"ManipulatorPositionControl.slx\",\"StopTime\",\"5\") ```\n\nVisualize the multibody model in the Mechanics Explorer.",
null,
"Visualize the joint positions in the Scope.",
null,
""
] | [
null,
"https://in.mathworks.com/help/examples/robotics/win64/DesignPositionControlledManipulatorUsingSimscapeExample_01.png",
null,
"https://in.mathworks.com/help/examples/robotics/win64/DesignPositionControlledManipulatorUsingSimscapeExample_02.png",
null,
"https://in.mathworks.com/help/examples/robotics/win64/DesignPositionControlledManipulatorUsingSimscapeExample_03.png",
null,
"https://in.mathworks.com/help/examples/robotics/win64/DesignPositionControlledManipulatorUsingSimscapeExample_04.png",
null,
"https://in.mathworks.com/help/examples/robotics/win64/DesignPositionControlledManipulatorUsingSimscapeExample_05.png",
null,
"https://in.mathworks.com/help/examples/robotics/win64/DesignPositionControlledManipulatorUsingSimscapeExample_06.png",
null,
"https://in.mathworks.com/help/examples/robotics/win64/DesignPositionControlledManipulatorUsingSimscapeExample_07.png",
null,
"https://in.mathworks.com/help/examples/robotics/win64/DesignPositionControlledManipulatorUsingSimscapeExample_08.png",
null,
"https://in.mathworks.com/help/examples/robotics/win64/DesignPositionControlledManipulatorUsingSimscapeExample_09.png",
null,
"https://in.mathworks.com/help/examples/robotics/win64/DesignPositionControlledManipulatorUsingSimscapeExample_10.png",
null,
"https://in.mathworks.com/help/examples/robotics/win64/DesignPositionControlledManipulatorUsingSimscapeExample_11.png",
null,
"https://in.mathworks.com/help/examples/robotics/win64/DesignPositionControlledManipulatorUsingSimscapeExample_12.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.677656,"math_prob":0.7258762,"size":6804,"snap":"2022-27-2022-33","text_gpt3_token_len":1460,"char_repetition_ratio":0.1525,"word_repetition_ratio":0.1138976,"special_character_ratio":0.18239272,"punctuation_ratio":0.11722913,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9633154,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-11T21:31:55Z\",\"WARC-Record-ID\":\"<urn:uuid:ffe2c8aa-bb0a-4940-bdf0-adf6e296d09a>\",\"Content-Length\":\"89689\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:61911300-1938-4da0-92d5-d00c8aa07b10>\",\"WARC-Concurrent-To\":\"<urn:uuid:044a8b7a-2449-4fe5-9253-5c5f10bb3f2e>\",\"WARC-IP-Address\":\"23.39.174.83\",\"WARC-Target-URI\":\"https://in.mathworks.com/help/robotics/ug/design-position-controlled-manipulator-using-simscape.html\",\"WARC-Payload-Digest\":\"sha1:GXJMEZB4ZNOBMVXCDOLYSBY2FQMGNDJT\",\"WARC-Block-Digest\":\"sha1:ER5ZFDXZNQIOLQFWFETD2PVU76WFUKW6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882571502.25_warc_CC-MAIN-20220811194507-20220811224507-00680.warc.gz\"}"} |
https://wizardcalc.com/what-is/7-of-120 | [
"",
null,
"Percent Calculator\n\n## 7% of 120 is 8.4\n\nCalculate percent\nWhat is% of?\n\n## How to calculate 7 percent of 120\n\n• step 1: 7%*120 =\n• step 2: (7:100)*120 =\n• step 3: (7*120):100 =\n• step 4: 840.0:100=8.4\nAnswer: 7 of 120 is 8.4\n\n## 7% of other values:\n\n• 7% of 132 = 9.24\n• 7% of 144 = 10.08\n• 7% of 156 = 10.92\n• 7% of 168 = 11.76\n• 7% of 180 = 12.6\n• 7% of 192 = 13.44\n• 7% of 204 = 14.28\n• 7% of 216 = 15.12\n• 7% of 228 = 15.96\n• 7% of 240 = 16.8\n• 7% of 252 = 17.64\n• 7% of 264 = 18.48\n• 7% of 276 = 19.32\n• 7% of 288 = 20.16\n• 7% of 300 = 21\n• 7% of 312 = 21.84\n• 7% of 324 = 22.68\n\n## How much will you save if product costs 120 and discount is 7%\n\nSavings: Original price * percentage off / 100\nAmount saved: (7 * 120) / 100 =\nSavings: \\$8.4\n\nThat means for an original price of 120 and a 7% discount, you would pay \\$111.6 and save \\$8.4\n\n## What is 7 percent (calculated percentage %) of number 120?\n\n7% of 120 is equal to their multiplication: 7% * 120.\n7 percent of 120:\n• 7% of 120 =\n• 7/100 * 120 =\n• = 8.4"
] | [
null,
"https://wizardcalc.com/backArrow.svg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.91163146,"math_prob":0.9963641,"size":1029,"snap":"2021-31-2021-39","text_gpt3_token_len":458,"char_repetition_ratio":0.21658537,"word_repetition_ratio":0.008196721,"special_character_ratio":0.6122449,"punctuation_ratio":0.16356878,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998766,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-18T13:24:41Z\",\"WARC-Record-ID\":\"<urn:uuid:120b2dec-d16d-4b91-86d9-6089db0c9a00>\",\"Content-Length\":\"11288\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a9b9b082-d478-4a2e-ae12-97ef37f511d0>\",\"WARC-Concurrent-To\":\"<urn:uuid:0abe13f7-1617-4f1f-975f-26b55e5757df>\",\"WARC-IP-Address\":\"76.76.21.21\",\"WARC-Target-URI\":\"https://wizardcalc.com/what-is/7-of-120\",\"WARC-Payload-Digest\":\"sha1:7E5U5757AOZQNX63RD6FDKHLLMQ2Q4VV\",\"WARC-Block-Digest\":\"sha1:VIDBTJ4F4PFZ6LQTORO6OEN5KAEGXL72\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780056476.66_warc_CC-MAIN-20210918123546-20210918153546-00487.warc.gz\"}"} |
http://rportal.lib.ntnu.edu.tw/items/f7c101a0-3ac4-47c9-9ece-8ea6c3b84e17 | [
"# 完全圖上最大權重配對問題之自我穩定演算法的設計及分析\n\nNo Thumbnail Available\n\n2000-04-??\n\n## Publisher\n\nOffice of Research and Development\n\n## Abstract\n\nIn 1974, Dijsktra defined a self-stabilizing system as a system which is guaranteed to arrive at a legitimate state in a finite number of steps regardless of its initial state. Since his introduction, self-stabilizing algorithms gained wide-spread research interest. The objectives of this research are to design and analyze self-stabilizing algorithms for maximal weight matching problem. Firstly, Hsu and Huang proved that the time complexity of their self-stabilizing algorithm for finding a maximal matching in distributed networks is O(n3), where n is the number of nodes in the graph. In 1994, Tel introduced a variant function to show that the time complexity of Hsu-Huang's algorithm is O(n2). In this paper, we design a self-stabilizing algorithm for maximal weight matching of the complete graph and prove its correctness. The maximal weight matching problem is defined not only to find the maximal matching of the complete graph, but also to let the total weight of the matching edges be maximal. We combine Hsu-Huang's maximal matching alogrithm and new swapping rules. This system possesses the properties of fault tolerance and self-stabilization and has a time complexity O(n2+nk), where k is the largest weight over all edges in the graph."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7420317,"math_prob":0.9627091,"size":3182,"snap":"2023-40-2023-50","text_gpt3_token_len":1194,"char_repetition_ratio":0.14348647,"word_repetition_ratio":0.93012047,"special_character_ratio":0.17850408,"punctuation_ratio":0.06990291,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98091626,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-10T19:44:11Z\",\"WARC-Record-ID\":\"<urn:uuid:38e4caa4-4691-4ed0-baed-6b2f320cc17b>\",\"Content-Length\":\"433778\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:59da1f87-cf89-4b5a-aca7-f5fa7b631798>\",\"WARC-Concurrent-To\":\"<urn:uuid:c1276a6c-d45c-4711-adcb-a842efe38ffd>\",\"WARC-IP-Address\":\"140.122.127.138\",\"WARC-Target-URI\":\"http://rportal.lib.ntnu.edu.tw/items/f7c101a0-3ac4-47c9-9ece-8ea6c3b84e17\",\"WARC-Payload-Digest\":\"sha1:O34QWTFBB2QCCMKBHWPN53UQFTV5KI6J\",\"WARC-Block-Digest\":\"sha1:7QLZX4LVNPR7QKIAOMKOI5XYDJ5DLJ7Y\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679102637.84_warc_CC-MAIN-20231210190744-20231210220744-00270.warc.gz\"}"} |
https://www.physicsforums.com/threads/finding-critical-numbers.388758/ | [
"# Finding critical numbers\n\n## Homework Statement\n\nFind all critical numbers of:\ng(x)= sqrt(x2-4)\nand\nf(x)= (1)/(x2-9)\n\nn/a\n\n## The Attempt at a Solution\n\n1) sqrt(x2-4)\nand got zeroes as x=0, x=2, x=-2\nand I got confused because if you do g(2) and g(-2) it equals zero. For some reason I can't tell if they are defined or undefined. x=0 works, so that is a critical number. The other two are throwing me off.\n\n2) (1)/(x2-9)\nzeroes were x=-3, x=3, x=0. Plugging 3 and -3 back into f(x) gave me undefined, so I'm pretty sure 0 is the only critical number.\n\nIf you could please check the second and help with the first that would be great. Thank you\n\nHomework Helper\n\n\"1) sqrt(x2-4)\nsimplified to x(x-2)(x+2)-1/2\"\n\n- is that supposed to be the derivative of $$\\sqrt{\\,x^2 -4}$$? if so, it isn't correct.\n\nwhat is your definition of a critical value? (writing it out can help you see the appropriate path)\n\nI'm honestly not sure what I did there... I re-did my derivative and found\n\nx/((x2-4)1/2)\n\nwhich would go to x/(((x+2)(x-2))1/2)\n\nand give zeroes of -2,2,0. -2 and 2 would make the denominator 0 and be undefined, leaving 0 as the only CP. Sound right? Sorry if any errors I did that really quick.\n\nHomework Helper\nyes, the derivative is\n\n$$\\frac{x}{\\sqrt{\\, x^2 - 4}}$$\n\nand this is zero or undefined for $0 \\text{ and } \\pm 2$.\n\nAgain, what is your definition of a critical number? (same as critical value)\n\nDefinition is:\nx=c is a critical number for f(x) if f(c) is defined and f'(c)=0 or f'(c) is undefined.\n\nSo that means 2 and -2 WOULD be CPs?\n\nHomework Helper\nDefinition is:\nx=c is a critical number for f(x) if f(c) is defined and f'(c)=0 or f'(c) is undefined.\n\nSo that means 2 and -2 WOULD be CPs?\n\nyes.\n\nAnd 0 would not be one because it is undefined in f(x) (sqrt of a negative makes it undefined), correct?\n\nAnd for the second problem, zero was the only one defined in f(x). Therefore it IS indeed a CP."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.93120223,"math_prob":0.99545145,"size":686,"snap":"2021-21-2021-25","text_gpt3_token_len":228,"char_repetition_ratio":0.09530792,"word_repetition_ratio":0.0,"special_character_ratio":0.33965015,"punctuation_ratio":0.085365854,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99873143,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-14T14:36:15Z\",\"WARC-Record-ID\":\"<urn:uuid:b44f3388-e36e-455f-afec-bb84981535cd>\",\"Content-Length\":\"74415\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:aaec179b-6a92-4d57-a7f2-a53300d66a9d>\",\"WARC-Concurrent-To\":\"<urn:uuid:6cab115b-2790-4155-813a-c343aa98fb04>\",\"WARC-IP-Address\":\"104.26.15.132\",\"WARC-Target-URI\":\"https://www.physicsforums.com/threads/finding-critical-numbers.388758/\",\"WARC-Payload-Digest\":\"sha1:RWVKU2EKKT7N3BQHVDDCFBDC53I523VM\",\"WARC-Block-Digest\":\"sha1:OYHLTXTCB44FTJI544QAY5QVYOLV7CUA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243989526.42_warc_CC-MAIN-20210514121902-20210514151902-00138.warc.gz\"}"} |
https://grinebiter.com/Coin/CoinConverter/Nickels-in-Dimes/How-many-Nickels-are-in-1-Dimes.html | [
"How many nickels are in 1 dimes?",
null,
"Here, we will show you how to calculate how many nickels there are in 1 dimes.\n\nFirst, calculate how many cents there are in 1 dimes by multiplying 1 by 10, and then divide that result by 5 cents to get the answer.\n\nHere is the math to illustrate better:\n\n1 dimes x 10 cents\n= 10 cents\n\n10 cents / 5 cents\n= 2 nickels\n\nThus, the answer to the question \"How many nickels are in 1 dimes?\" is as follows:\n\n2 nickels\n\nNote: We multiplied 1 by 10, because there are 10 cents in a dime, and we divided 10 by 5, because there are 5 cents in a nickel.\n\nCoin Converter\nFill out the form below or go here if you need to convert another coin denomination.\n\nHow many\n\nare in\n\nHow many nickels are in 2 dimes?\nHere is the next number of coins we converted."
] | [
null,
"https://grinebiter.com/Images/CoinConverter.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.932305,"math_prob":0.9781114,"size":766,"snap":"2021-31-2021-39","text_gpt3_token_len":205,"char_repetition_ratio":0.15223098,"word_repetition_ratio":0.0,"special_character_ratio":0.26762402,"punctuation_ratio":0.1030303,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99979085,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-19T08:46:51Z\",\"WARC-Record-ID\":\"<urn:uuid:a4f29f8d-d5f2-468b-9176-7a2740839498>\",\"Content-Length\":\"8170\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:17ae92f5-2cdd-47d7-959a-58730605d4ba>\",\"WARC-Concurrent-To\":\"<urn:uuid:5901371d-92e8-4587-be0b-59f7dbc9ecf2>\",\"WARC-IP-Address\":\"99.86.230.8\",\"WARC-Target-URI\":\"https://grinebiter.com/Coin/CoinConverter/Nickels-in-Dimes/How-many-Nickels-are-in-1-Dimes.html\",\"WARC-Payload-Digest\":\"sha1:5APHWTAKFNDDSZHJFIR7CTFUGN7XKKMS\",\"WARC-Block-Digest\":\"sha1:GAACLELHU2HIP4OJN2RVDOHJ4VY7NH4X\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780056752.16_warc_CC-MAIN-20210919065755-20210919095755-00079.warc.gz\"}"} |
https://www.physicsforums.com/threads/net-thermal-radiation-from-an-object-outside-at-night.546962/ | [
"# Net thermal radiation from an object outside at night\n\nI am trying to model the cooling of an object (for example, a sheet of glass) placed outside at night. At the moment I am only considering heat loss by radiation.\n\nI know that the net radiation from the object will be:\n\nRnet = Robj - Rsky\n\nwhere:\nRnet = the net radiation from the object\nRobj = the total thermal radiation from the object\nRsky = the thermal downwelling radiation from the sky\n\nI have come across a formula which does the above, but also takes into consideration the absorptivity of the sheet of glass at the 8-14μm wavelength (I have only considered emissivity of the glass in this wavelength as I'm only concerned with thermal radiation heat loss to the sky).\n\nThe formula is:\n\nRnet = A((εobj2obj2)σTobj4 - εskyσTamb4)\n\nwhere:\nεobj = the emissivity of the object at 8-14μm\nαobj = the absorptivity of the object at 8-14μm\nεsky = the emissivity of the sky\nTobj = the temperature of the object\nTamb4 = the ambient air temperature\nA = the area of the object\n\nWhat I would like to know is where does the εobj2obj2 bit come from? I know if I wasn't considering absorptivity then it would just be εobj, but why are εobj and αobj now squared? I have since lost where I saw it, and I'm pretty sure there wasn't an explanation there anyway. I have searched all over for a derivation of it but have had no luck.\n\nCan anyone help?\n\n## Answers and Replies\n\nI am not familiar with this, but I would agree that something is off here:\n\nSince the eqn is Rnet= Robj - Rsky\n\nthese terms should have equal forms.\n\nBut as you pointed out there is a squared ratio only on one term.\nMy guess is that it may be simplified, but who knows.\n\nSo I can't really answer your question, but I can say that I agree with it.\nPerhaps try to find a different formula / approach to use?\n\nI did find this though - a program for modelling thermal radiation. Looks like it is free to use, and mentions finding radiation from any given objects of given temperature. Maybe it will help. Here's the link http://www.fire-engineering-software.com/tra.html\n\ngood luck\n\nLast edited:\nWithout knowing how the terms are defined, and what units they have, it's impossible to say what is going on - \"I found it on the internet somewhere\" isn't very helpful.",
null,
"What units does each term in this equation have, and does the result (\"net radiation\" is not precise either) make sense?\n\nThanks for the link, elegysix.\n\nJeffKoch - my problem in finding the source of the equation stems from the fact that it was found in a Solar Energy journal somewhere but I can no longer gain access to Solar Energy journals via my university account. Hence why I'm trying to find it by other means. I thought perhaps it may have been used elsewhere but now I'm beginning to realise that perhaps the assumptions and simplifications made in the particular journal article are more important than I'd first thought!\n\nAs for the units, they are all SI units, i.e. T in Kelvin, Stefan-Boltzmann in W(m^−2)(K−4). Emissivity and absorptivity are dimensionless. Net radiation is therefore in units of W. So the equation is dimensionally correct as far as I can tell.\n\nWhat do you mean about \"net radiation\" not being precise? By net radiation, I mean the net thermal radiation (i.e. primarily in the 8-14micro m wavelength band) leaving the surface of the glass, as opposed to the thermal radiation leaving the glass before one has taken into account the downwelling thermal radiation from the sky.\n\n256bits\nGold Member\nThe formula is:\n\nRnet = A((εobj2/αobj2)σTobj4 - εskyσTamb4)\n\nwhere:\nεobj = the emissivity of the object at 8-14μm\nαobj = the absorptivity of the object at 8-14μm\nεsky = the emissivity of the sky\nTobj = the temperature of the object\nTamb4 = the ambient air temperature\nA = the area of the object\n\nYour equation is most likely derived for a particluar situation since it has a ratio of emissivity/ absortivity, which is what I have no idea, and the ambiant air temperature rather than the temperature of the sky ( which is different than the sky temperature ),\n\nWhat do you mean about \"net radiation\" not being precise? By net radiation, I mean the net thermal radiation (i.e. primarily in the 8-14micro m wavelength band) leaving the surface of the glass, as opposed to the thermal radiation leaving the glass before one has taken into account the downwelling thermal radiation from the sky.\n\nThink, man. Net thermal radiation per unit time? Per unit area? Per Hz? Per Sr? Integrated over any of these quantities? Photons or joules? What units do each of the terms have, and does the result make sense? Find a book on this stuff, there are many - for example the very useful one by Rybicki and Lightman."
] | [
null,
"data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.95843196,"math_prob":0.8919592,"size":2515,"snap":"2021-04-2021-17","text_gpt3_token_len":671,"char_repetition_ratio":0.1668658,"word_repetition_ratio":0.9102564,"special_character_ratio":0.23777336,"punctuation_ratio":0.059793815,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9811241,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-15T12:05:00Z\",\"WARC-Record-ID\":\"<urn:uuid:c10fa7df-8765-4e5d-b406-72a3f832626e>\",\"Content-Length\":\"78660\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:931ca4f7-a33c-4a8a-9fb4-01e4fd05338c>\",\"WARC-Concurrent-To\":\"<urn:uuid:049e0b46-df82-4ce3-960d-b4c739d9b9bb>\",\"WARC-IP-Address\":\"172.67.68.135\",\"WARC-Target-URI\":\"https://www.physicsforums.com/threads/net-thermal-radiation-from-an-object-outside-at-night.546962/\",\"WARC-Payload-Digest\":\"sha1:B2Z2RR567IR4IHXUJV7M5S3EA5E6LWEM\",\"WARC-Block-Digest\":\"sha1:JG4ZUYGXN7GSAPR7FBED3PAD4MBVWRP3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038084765.46_warc_CC-MAIN-20210415095505-20210415125505-00248.warc.gz\"}"} |
https://vincenttam.github.io/blog/2014/11/13/plot-polar-coordinates-graphs/ | [
"# Plot Polar Coordinates Graphs\n\n| Comments |\n\nGoogling “tikz tutorial”, one can find several useful PDF documents which are easy to follow, such as the first two results.\n\n1. PGF/TikZ - Graphics for $\\rm \\LaTeX$ —A tutorial by Meik Hellmund\n2. A very minimal introduction to TikZ by Jacques Crémer\n\nThe second PDF document covers plotting graphs in rectangular coordinates. How about polar coordinates?\n\nIt’s easy to find a solution online, but understanding the code and adapting them according to your needs are much harder. One has to make use of “cs”, which stands for “coordinate system”. If one is too lazy to read more webpages, then one may use the code below.\n\nWhen I drew figures with TikZ a recent post on limits of composite functions on $\\R^n$, I found a way to construct random region with TikZ in a $\\rm \\TeX$–$\\rm \\LaTeX$ Stack Exchange question.1 This inspired me to come up with the following graph.\n\n## Output",
null,
"## Source code\n\n• I got the idea of for loop in TikZ from TikZ and PGF Manual.\n• I learnt the syntax for plotting functions from the PDF in item (2).\n\n1. Refer to TikZ: Arbitrary shapes and filling? for details."
] | [
null,
"https://vincenttam.github.io/images/posts/TikZPolar/graph.svg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8695698,"math_prob":0.65550834,"size":1125,"snap":"2019-13-2019-22","text_gpt3_token_len":296,"char_repetition_ratio":0.088314004,"word_repetition_ratio":0.0,"special_character_ratio":0.25955555,"punctuation_ratio":0.083333336,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98940873,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-05-19T08:46:57Z\",\"WARC-Record-ID\":\"<urn:uuid:ff2fd306-9f14-4ee6-a2d7-073777da1bc0>\",\"Content-Length\":\"27206\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a70f0cf4-b144-41e6-b748-a12a4e819477>\",\"WARC-Concurrent-To\":\"<urn:uuid:9575d6ff-eba3-45f3-9ef9-8fb65f6240cb>\",\"WARC-IP-Address\":\"185.199.110.153\",\"WARC-Target-URI\":\"https://vincenttam.github.io/blog/2014/11/13/plot-polar-coordinates-graphs/\",\"WARC-Payload-Digest\":\"sha1:LZRXLL2WFMWAQMK4TWB72HR6NAIJXQUW\",\"WARC-Block-Digest\":\"sha1:HWKSQJSXHIJABYQSSMXYVRVRXMTDTVHA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-22/CC-MAIN-2019-22_segments_1558232254731.5_warc_CC-MAIN-20190519081519-20190519103519-00283.warc.gz\"}"} |
http://computerchess.org.uk/ccrl/404/cgi/engine_details.cgi?match_length=30&print=Details&each_game=1&eng=Asterisk%200.6b | [
"Contents: CCRL Blitz Downloads and Statistics January 9, 2021 Testing summary: Total: 2'437'048 games played by 2'898 programs White wins: 937'753 (38.5%) Black wins: 752'816 (30.9%) Draws: 746'479 (30.6%) White score: 53.8%\n\n## Engine Details\n\n Options Show each game results\nAsterisk 0.6b (2316+36\n−36\n)Quote\n Author: Peter Horvath (Hungary) Link: Homepage\nThis is one of the 2 Asterisk versions we tested: Compare them!\n Opponent Elo Diff Results Score LOS Perf – RESP 0.19 64-bit 2363 +20−20 (+47) 10.5 − 21.5(+8−19=5) 32.8%10.5 / 32 1.1% −85 = 0 1 = 1 1 0 1 0 1 = 0 0 0 1 0 1 1 0 0 0 0 0 0 0 0 = 0 0 = 0 0 – Wasabi 1.3.0 64-bit 2347 +20−20 (+31) 15.5 − 15.5(+14−14=3) 50.0%15.5 / 31 6.5% +34 = 1 0 0 0 1 = 1 0 0 1 0 1 0 0 0 1 1 1 1 1 1 0 = 1 0 0 0 1 0 1 – Halogen 5 64-bit 2340 +20−20 (+24) 15.5 − 15.5(+10−10=11) 50.0%15.5 / 31 11.6% +24 1 0 = = 1 = 0 0 0 = 1 1 0 1 0 = 0 0 1 = 1 1 = = 1 0 = 0 = = 1 – Ceibo 0.5 64-bit 2329 +17−17 (+13) 6 − 12(+3−9=6) 33.3%6.0 / 18 25.2% −102 1 = 0 1 0 0 0 = 0 0 1 = 0 0 = = 0 = – FoxSEE 5.0.1 64-bit 2321 +21−22 (+5) 12.5 − 19.5(+10−17=5) 39.1%12.5 / 32 40.5% −82 1 = 1 0 1 = 0 0 1 0 0 0 1 0 = 0 0 0 0 0 1 0 1 0 0 = 1 = 1 0 0 1 – Fornax 1-2 64-bit 2292 +19−19 (−24) 16.5 − 15.5(+13−12=7) 51.6%16.5 / 32 87.1% −11 0 0 1 0 = 1 = 1 = 0 1 = 1 1 0 0 0 = 1 0 = 1 0 1 1 0 1 0 1 = 1 0 – FoxSEE 5.0.0 64-bit 2283 +19−19 (−33) 19 − 13(+15−9=8) 59.4%19.0 / 32 94.1% +35 1 1 0 0 1 1 1 0 1 1 = 1 = = 0 1 1 0 1 0 = 1 1 1 0 = 1 = = 0 = 0 – Wasabi 1.2.1 64-bit 2265 +21−21 (−51) 45.5 − 24.5(+40−19=11) 65.0%45.5 / 70 99.4% +65 1 1 = 1 0 = = 1 = 1 1 = = 0 = 1 1 1 1 1 1 1 1 0 0 0 0 0 1 1 1 0 1 0 0 0 1 1 1 1 1 1 1 1 1 0 0 1 0 1 0 1 1 1 0 1 1 0 0 0 = 1 1 1 = 1 1 1 = =\n\n### Rating changes by day",
null,
"### Rating changes with played games",
null,
"Created in 2005-2013 by CCRL team Last games added on January 9, 2021"
] | [
null,
"http://computerchess.org.uk/ccrl/404/rating-history-by-day-graphs/Asterisk_0_6b.png",
null,
"http://computerchess.org.uk/ccrl/404/rating-history-by-day-graphs-2/Asterisk_0_6b.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5352794,"math_prob":0.99983555,"size":1348,"snap":"2021-04-2021-17","text_gpt3_token_len":992,"char_repetition_ratio":0.34598213,"word_repetition_ratio":0.39529413,"special_character_ratio":0.92284864,"punctuation_ratio":0.098712444,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98430216,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-15T23:35:20Z\",\"WARC-Record-ID\":\"<urn:uuid:39d81593-bf40-4fc6-956e-bf94c73a57cd>\",\"Content-Length\":\"14128\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1549d477-516c-4b0f-a53e-0c42fcabe198>\",\"WARC-Concurrent-To\":\"<urn:uuid:ad680508-639d-40fd-b407-eb8bd6fd4b86>\",\"WARC-IP-Address\":\"185.45.66.155\",\"WARC-Target-URI\":\"http://computerchess.org.uk/ccrl/404/cgi/engine_details.cgi?match_length=30&print=Details&each_game=1&eng=Asterisk%200.6b\",\"WARC-Payload-Digest\":\"sha1:KJ7XH4JWNFB2XWY5HPKEU4XLUDOWLUNS\",\"WARC-Block-Digest\":\"sha1:PESJ37B6CQ5GHWXSMDRCSCWK7WTRPQCH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703497681.4_warc_CC-MAIN-20210115224908-20210116014908-00504.warc.gz\"}"} |
http://conversion.org/volume/fluid-dram-us/bushel-us-dry-level | [
"# fluid dram (US) to bushel (US dry level) conversion\n\nConversion number between fluid dram (US) [fl dr] and bushel (US dry level) [bu (US lvl)] is 0.00010490319914249. This means, that fluid dram (US) is smaller unit than bushel (US dry level).\n\n### Contents [show][hide]",
null,
"Switch to reverse conversion:\nfrom bushel (US dry level) to fluid dram (US) conversion\n\n### Enter the number in fluid dram (US):\n\nDecimal Fraction Exponential Expression\n [fl dr]\neg.: 10.12345 or 1.123e5\n\nResult in bushel (US dry level)\n\n?\n precision 0 1 2 3 4 5 6 7 8 9 [info] Decimal: Exponential:\n\n### Calculation process of conversion value\n\n• 1 fluid dram (US) = (exactly) (3.6966911953125*10^-06) / (0.03523907016688) = 0.00010490319914249 bushel (US dry level)\n• 1 bushel (US dry level) = (exactly) (0.03523907016688) / (3.6966911953125*10^-06) = 9532.5977489177 fluid dram (US)\n• ? fluid dram (US) × (3.6966911953125*10^-06 (\"m³\"/\"fluid dram (US)\")) / (0.03523907016688 (\"m³\"/\"bushel (US dry level)\")) = ? bushel (US dry level)\n\n### High precision conversion\n\nIf conversion between fluid dram (US) to cubic-metre and cubic-metre to bushel (US dry level) is exactly definied, high precision conversion from fluid dram (US) to bushel (US dry level) is enabled.\n\nDecimal places: (0-800)\n\nfluid dram (US)\nResult in bushel (US dry level):\n?\n\n### fluid dram (US) to bushel (US dry level) conversion chart\n\n Start value: [fluid dram (US)] Step size [fluid dram (US)] How many lines? (max 100)\n\nvisual:\nfluid dram (US)bushel (US dry level)\n00\n100.0010490319914249\n200.0020980639828499\n300.0031470959742748\n400.0041961279656997\n500.0052451599571247\n600.0062941919485496\n700.0073432239399745\n800.0083922559313994\n900.0094412879228244\n1000.010490319914249\n1100.011539351905674\nCopy to Excel\n\n## Multiple conversion\n\nEnter numbers in fluid dram (US) and click convert button.\nOne number per line.\n\nConverted numbers in bushel (US dry level):\nClick to select all\n\n## Details about fluid dram (US) and bushel (US dry level) units:\n\nConvert Fluid dram (US) to other unit:\n\n### fluid dram (US)\n\nDefinition of fluid dram (US) unit: ≡ 1⁄8 US fl oz. Fluid dram (or Drachm in UK spelling) = US fl oz / 8 = 3.6966911953125×10−6 m³\n\nConvert Bushel (US dry level) to other unit:\n\n### bushel (US dry level)\n\nDefinition of bushel (US dry level) unit: ≡ 2150.42 cu in. = 2150.42 × 0.0254³ = 0.03523907016688 m³",
null,
""
] | [
null,
"http://conversion.org/images/switch.png",
null,
"http://conversion.org/menufiles/top.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7180213,"math_prob":0.57854134,"size":1213,"snap":"2019-43-2019-47","text_gpt3_token_len":429,"char_repetition_ratio":0.2158809,"word_repetition_ratio":0.021857923,"special_character_ratio":0.5012366,"punctuation_ratio":0.13304721,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9759735,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-13T23:36:33Z\",\"WARC-Record-ID\":\"<urn:uuid:be40276a-17f4-4152-8602-53c655fcd61f>\",\"Content-Length\":\"35490\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:858dec35-68f9-420d-85c6-e1d9033f658c>\",\"WARC-Concurrent-To\":\"<urn:uuid:9438cb71-33b1-45d7-8f2d-e3ef9af0ae1a>\",\"WARC-IP-Address\":\"142.54.171.162\",\"WARC-Target-URI\":\"http://conversion.org/volume/fluid-dram-us/bushel-us-dry-level\",\"WARC-Payload-Digest\":\"sha1:FGKPMTSMKTC2FM6AXXPDVLCWX3RI3GLW\",\"WARC-Block-Digest\":\"sha1:7DJTBWSM6XLYDHQMKDCRDGLJ37FAKI3J\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496667442.36_warc_CC-MAIN-20191113215021-20191114003021-00285.warc.gz\"}"} |
https://www.expertsmind.com/library/calculate-the-daily-market-return-over-the-last-five-years-51169072.aspx | [
"### Calculate the daily market return over the last five years\n\nAssignment Help Financial Management\n##### Reference no: EM131169072\n\nAssignment\n\nQuestion 1 Risk and Return\n\nCalculate the following using the data from Yahoo Finance (https://au.finance.yahoo.com/) for the company you selected for Question 1 of Assignment .\n\n1. Calculate the daily market return over the last five years from the daily prices, calculate the monthly returns from the daily returns, and calculate the yearly returns from the monthly returns.\n\n2. Calculate the total risk (i.e. yearly standard deviation of the daily returns).\n\n3. Calculate the yearly systematic / market risk using the daily returns of the stock and daily return of the market index.\n\n4. Calculate the unsystematic risk / firm specific risk. Suggest whether this company is a good investment. Answer the following questions while making your suggestion.\n\na) What is the basis for selection of this stock if you suggest this as a good investment?\n\nb) Would you invest all your money into this stock? If not, why not? How will you address this concern?\n\nQuestion 2 Capital Budgeting\n\nABC Ltd. would like to set up a new expansion plant. Currently, ABC has an option to buy an existing building at a cost of AUD 24 000. Necessary equipment for the plant will cost AUD 16 000, including installation costs. The economic life of the equipment and building are 5\nand 40 years, respectively. The project also requires an initial investment of AUD 12 000 in net working capital. The initial working capital investment will be made at the time of the purchase of the building and equipment.\n\nThe project's estimated economic life is four years. At the end of that time, the building is expected to have a market value of AUD 15 000 and a book value of AUD 21 600, whereas the equipment is expected to have a market value of AUD 4 000 and a book value of AUD 3\n200.\n\nAnnual sales will be AUD 80 000. The production department has estimated that variable manufacturing costs will total 60% of sales and that fixed overhead costs, excluding depreciation, will be AUD 10 000 a year. Depreciation expense will be determined for the year using straight line depreciation method.\n\nABC's tax rate is 40%; its cost of capital is 12%; and, for capital budgeting purposes, the company's policy is to assume that operating cash flows occur at the end of each year. The plant will begin operations immediately after the investment is made, and the first operating cash flows will occur exactly one year later.\n\nRequirements:\n\n1. Compute the initial investment outlay, operating cash flow over the project's life, and the terminal-year cash flows for ABC's expansion project.\n\n2. Determine whether the project should be accepted using NPV analysis.\n\n3. Do the sensitivity analysis using different levels of change (e.g. 2%, 5% and 10% increase and decrease) of each of the key inputs (e.g., sales, variable costs and cost of capital)\n\n4. Identify the most sensitive factor\n\n5. Perform the scenario analysis\n\nReturn and Risk Calculation\n\n1. Calculation of daily market returns for the company selected for Module 1 assignment for over 5 years What is daily market return?\nA: Percentage change in prices over one day holding period.\nHow to calculate?\n\nRt=((Pt-Pt-1)/Pt-1)\nWhere,\n\nPt = Price today\nPt-1= Price yesterday\n\nAlternatively:\n\nRt= ln(Pt/Pt-1)\n\nWhere,\n\nln = Natural logarithm\n\nWhat is the source of market price data?\nA: yahoo finance\n\n2. Calculation of monthly market returns from the daily market returns.\n\nHow to calculate monthly returns from daily returns?\n\nA: Demonstrated in the \"return calculation.xlsx\" file available under key resources.\n\nStep 1: Add 1 to the daily returns calculated using either Equation (1) or Equation (2)\n\nStep 2: Use the product function in Excel (i.e., = PRODUCT (select the daily returns in a month)\n\nStep 3: Subtract 1 from the product (I have combined step 2 and step 3 in my calculations)\n\n3. Calculation of yearly market returns from the monthly market returns.\n\nHow to calculate yearly market returns from the monthly market returns?\n\nStep 1: Add 1 to the monthly returns\n\nStep 2: Use the product function in Excel (i.e., = PRODUCT (select the 12 monthly returns in a year)\n\nStep 3: Subtract 1 from the product (I have combined step 2 and step 3 in my calculations)\n\n4. Should I do calculations for five years in one file?\n\nA: For simplicity, it is better to do the calculations for each year separately (so five files for five years). I am, however, happy if you can do all calculations in one file.\n\n5. Calculate yearly standard deviation (i.e. total risk) of the daily returns.\n\nHow to calculate standard deviation of the daily returns?\n\nA: Please use the Excel formula (=STDEV.S (select the range of daily returns))\n\n6. Calculate yearly market risk (or systematic risk).\n\nA: It is a yearly calculation using the daily returns of the company and daily returns of the market\n\nSteps in Excel:\n\ni) Data>Analysis>Regression (if you do not find analysis tab under Data, please add the analysis tool pack from options)\n\nii) Select the range of stock return as Y inputs\n\niii) Select the range of market returns as X inputs\n\niv) Tick the label box\n\nv)Select a cell for the output range\n\nvi) Click OK\n\nvii) The coefficient of the market return is the systematic risk (commonly known as beta). Please ignore the other statistics.\n\n SUMMARY OUTPUT Regression Statistics Multiple R 0.2154 R Square 0.0464 Adjusted R Square 0.0426 Standard Error 0.0265 Observations 252\n\n ANOVA df SS MS F Significance F Regression 1 0.0085 0.0085 12.1683 0.0006 Residual 250 0.1753 0.0007 Total 251 0.1838 Coefficients Standard Error t Stat P-value Lower 95% Upper 95% Intercept -0.0015 0.0017 -0.8779 0.3808 -0.0048 0.0018 Market return 0.5428 0.1556 3.4883 0.0006 0.2364 0.8493\n\n7. How to calculate the daily market returns?\n\nStep 1: Please use the daily price of a market index (i.e., ASX All Ordinaries or ASX S&P200)\n\nStep 2: Use either Equation (1) or Equation (2) to calculate the daily market retruns.\n\n8. How to calculate the unsystematic risk (or firm specific risk)?\n\nStep 1: Use the following model to calculate the daily fitted returns (or forecasted returns) of the stock of the company\n\nE(Ri)= α +β Rm\n\nWhere,\n\nα = Intercept\nβ = Coefficient of X Variable 1 (i.e., beta)\nRm = Daily market return\n\nStep 2: Calculate the residuals (i.e., actual return minus expected return) for every day.\nε=Ri-E( Ri )\n\nStep 3: Calculate the yearly standard deviation of the daily residuals. This is defined as the unsystematic risk.\n\n### Write a Review\n\n#### Unsalable inventory-uncollectible accounts receivable\n\nFinancial statement analysis can be used to identify weaknesses in a firm’s operations. Uncollectible accounts receivable – you could compare the firm’s average collection period to other peer firms and look for trends over time that might suggest de..\n\n#### What is the macaulay duration\n\nThere is a 9 percent coupon bond with six years to maturity and a current price of \\$958.50. What is the dollar value of an 01 for the bond? You find a bond with 14 years until maturity that has a coupon rate of 8.2 percent and a yield to maturity of..\n\n#### Development and testing of the new app will take four months\n\nTake Five Systems, a new start-up, is developing a new iPhone application (“app”) and provides you with the following assumptions: Development and testing of the new app will take four months. Month five is the first month of revenue generation. Init..\n\n#### Basic trade-off between efficiency and equity\n\nThere is a basic trade-off between efficiency and equity because A. Income redistribution tends to reduce incentives for efficient behavior. B. People who are efficient dislike equity. C. Pareto improvements can only be made by sacrificing efficiency..\n\n#### What was the arithmetic average return on the stock\n\nYou’ve observed the following returns on Doyscher Corporation’s stock over the past five years: –12 percent, 21 percent, 27 percent, 6 percent, and 17 percent. What was the arithmetic average return on the stock over this five-year period? What was ..\n\n#### Declining growth stock valuation\n\nDeclining Growth Stock Valuation Brushy Mountain Mining Company's coal reserves are being depleted, so its sales are falling. Also, environmental costs increase each year, so its costs are rising. As a result, the company's earnings and dividends are..\n\n#### Capital asset pricing model and wacc-beta coefficient\n\nThe “beta” coefficient for ABC is 1.25 based on the past information. The 30-day T-bill rate is 1.5%, The 5-year average market return of (say, S&P 500 index) in the same period is 11%. Suppose that ABC's current dividend is \\$1.24 per share with poss..\n\n#### How reasoning errors influence financial decisions\n\nThe textbook describes the field of Behavioral Finance as the study of “how reasoning errors influence financial decisions.” In this context, explain the difference between biases, framing effects and heuristics with examples.\n\n#### Explain what is the price of the coupon bond\n\nWhat is the price of the coupon bond. What is the yield to maturity of the coupon bond. Under the expectations hypothesis, what is the expected realized compound yield of the coupon bond\n\n#### What would be the maintenance margin if a margin call\n\nAssume you sold short 100 shares of common stock at \\$50 per share. The initial margin is 60%. What would be the maintenance margin if a margin call is made at a stock price of \\$60?\n\n#### What annual rate of return is analyst assuming you can earn\n\nA financial analyst tells you that investing in stocks will allow you to double your money in 7 years. What annual rate of return is the analyst assuming you can earn?\n\n#### What will the spot rate be in one year according to the ife\n\nThe one-year interest rate is 11% in the United Kingdom and 7% in Singapore. What will the spot rate be in one year according to the IFE?",
null,
""
] | [
null,
"https://www.expertsmind.com/prostyles/images/3.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.84803194,"math_prob":0.9208385,"size":6606,"snap":"2023-40-2023-50","text_gpt3_token_len":1656,"char_repetition_ratio":0.17540139,"word_repetition_ratio":0.08732877,"special_character_ratio":0.27066302,"punctuation_ratio":0.12930375,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98398983,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-07T06:01:49Z\",\"WARC-Record-ID\":\"<urn:uuid:aa385994-cff1-4a94-9639-68c829c10856>\",\"Content-Length\":\"69774\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:286bd6d8-65da-4c37-9788-9a6b6fa6669c>\",\"WARC-Concurrent-To\":\"<urn:uuid:c43f291b-725b-4968-828f-7c506bd9a474>\",\"WARC-IP-Address\":\"104.21.1.109\",\"WARC-Target-URI\":\"https://www.expertsmind.com/library/calculate-the-daily-market-return-over-the-last-five-years-51169072.aspx\",\"WARC-Payload-Digest\":\"sha1:44XM7LMXF64IB3OTPT6A7D54N2FK2P6I\",\"WARC-Block-Digest\":\"sha1:TDKZZAXQSSFYBUOKLLJJXWFKITJE3M4Q\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100650.21_warc_CC-MAIN-20231207054219-20231207084219-00076.warc.gz\"}"} |
https://silosy-zbozowe.com.pl/coal/Oct-4755/ | [
" how to calculate mill copper recovery\n\n## how to calculate mill copper recovery",
null,
"### Calculate Copper Wire Recovery Rates Gardner Metal\n\n2021-3-15 Fortunately, calculating the copper wire recovery percentage is a fairly simple process. Simply take a small segment of cable and weigh it. Take note of that number and then strip the casing so that you are left with only the copper wire at the core of the cable. Weigh the copper by itself, then divide that number by the overall weight of the ...\n\nMore",
null,
"### How to Calculate the Recovery Rate of your Scrap Copper ...\n\n2021-5-21 You calculate the recovery rate of your scrap copper wire by dividing the weight of the piece of copper before and after the rubber or plastic casing that encloses the wire itself has been removed. It’s also a good idea to have a piece of paper and a pen to help you record the weights. Then, you’re going to want to have a calculator close ...\n\nMore",
null,
"### How To Calculate Copper Wire Recovery Rates » Super\n\n2021-9-20 Fortunately, calculating the copper wire recovery percentage is a fairly simple process. Simply take a small segment of cable and weigh it. Take note of that number and then strip the casing so that you are left with only the copper wire at the core of the cable. Weigh the copper by itself, then divide that number by the overall weight of the ...\n\nMore",
null,
"### Metallurgical Accounting Formulas Concentration and ...\n\n2013-9-8 R = 100 c/Kf = recovery % By weights F and C, plus assays c and t; R = 100 Cc / (Cc+t(F—C)) = recovery % A copper concentrator is milling 15,000 tons/day of a chalcopyrite ore assaying 1.15% copper. The concentrate and tailings produced average 32.7% and 0.18% copper\n\nMore",
null,
"### Common Basic Formulas for Mineral Processing\n\n2016-3-20 Concentration and Recovery Formulas. These are used to compute the production of concentrate in a mill or in a particular circuit. The formulas are based on assays of samples, and the results of the calculations are generally accurate— as\n\nMore",
null,
"### Recovery of Copper - Valencia College\n\n2009-10-27 After our product has been weighed we will need to calculate the percent recovery. The equation for percent recovery can be seen in equation 7. (7) Percent Recovery = (mass of Cu recovered / mass of initial Cu) 100%. Procedure . 1. Weigh 0.35-0.40 g of copper wire and place into a 250 mL graduated beaker. 2.\n\nMore",
null,
"### How to Calculate Percent Recovery - Science Struck\n\nPercent recovery = (8.67 ÷ 11.23) × 100 = 77.20 %. 77.20% of zinc is recouped in this process. Problem II: 14.18 gm of copper is used for a recrystallization experiment. The amount of copper recovered at the end of the purification process is 18.29 gm. Calculate the percentage of copper\n\nMore",
null,
"### Solved Calculate the percent recovery of copper. Use the ...\n\nThis problem has been solved! See the answer. See the answer See the answer done loading. Calculate the percent recovery of copper. Use the formula % recovery= ending mass of copper (g)/inital mass of copper x 100%. Mass of empty centrifuge tube- 2.639 g. Mass of centrifuge + starting copper- 2.868 g. MAss of centrifuge + dried copper- 3.125 g.\n\nMore",
null,
"### How to Calculate Percent Recovery.\n\nFormula to calculate percent recovery. Example: Suppose you had 15g of blue Copper (II) sulfate, after heating it, you were left with 12.8g of white Copper (II) sulfate, Calculate the percent recovery of the compound. Thus, the percent recovery of the substance is 85.3%. Prev Article.\n\nMore",
null,
"### Calculate Copper Wire Recovery Rates Gardner Metal\n\n2021-3-15 Fortunately, calculating the copper wire recovery percentage is a fairly simple process. Simply take a small segment of cable and weigh it. Take note of that number and then strip the casing so that you are left with only the copper wire at the core of the cable. Weigh the copper by itself, then divide that number by the overall weight of the ...\n\nMore",
null,
"### Calculating Copper Recovery Rate For Scrap Cables\n\n2021-6-8 Once you have the weight of your copper wire on the inside of the cable, take that number and divide it by the overall weight of the sample and the result will be the percentage of copper recovery rate from your scrap cable or wire. You can bring that\n\nMore",
null,
"### OneClass: Calculate the percent recovery of copper. Use ...\n\nCalculate the percent recovery of copper. Use the formula % recovery= ending mass of copper (g)/inital mass of copper x 100%. Mass of empty centrifuge tube- 2.639 g. Mass of centrifuge + starting copper- 2.868 g. MAss of centrifuge + dried copper- 3.125 g. Would it be 2.12 or 212 %?\n\nMore",
null,
"### Recovery of Copper - Valencia College\n\n2009-10-27 After our product has been weighed we will need to calculate the percent recovery. The equation for percent recovery can be seen in equation 7. (7) Percent Recovery = (mass of Cu recovered / mass of initial Cu) 100%. Procedure . 1. Weigh 0.35-0.40 g of copper wire and place into a 250 mL graduated beaker. 2.\n\nMore",
null,
"### How to Calculate Percent Recovery - Science Struck\n\nPercent recovery = (8.67 ÷ 11.23) × 100 = 77.20 %. 77.20% of zinc is recouped in this process. Problem II: 14.18 gm of copper is used for a recrystallization experiment. The amount of copper recovered at the end of the purification process is 18.29 gm. Calculate the percentage of copper\n\nMore",
null,
"### Metal Recovery Rate - How to Interpreted the Mineral ...\n\nThe metal recovery rate can be found in a mining company’s National Instrument 43-101 or in a similar applicable international reporting standard. Example: When you read a press release in which a mining company announces a resource of 500 million tonnes of 1% copper and a mineral recovery\n\nMore",
null,
"### Estimation of Open Cut Mining Recovery and Mining\n\n2013-9-27 Mine to mill is a process for optimisation of mining and processing recovery and costs. The choice of either selective mining or bulk mining techniques is a major consideration, however evaluations rarely undertake careful consideration of Mining Recovery and Mining Dilution in the context of mine to mill optimisation.\n\nMore",
null,
"### Solved Calculate the percent recovery of copper. Use the ...\n\nThis problem has been solved! See the answer. See the answer See the answer done loading. Calculate the percent recovery of copper. Use the formula % recovery= ending mass of copper (g)/inital mass of copper x 100%. Mass of empty centrifuge tube- 2.639 g. Mass of centrifuge + starting copper- 2.868 g. MAss of centrifuge + dried copper- 3.125 g.\n\nMore",
null,
"### Copper recovery using leach/solvent extraction ...\n\n2009-8-27 Copper recovery by L/SX/EX in 1968 In 1968 there were only two widely practiced copper leaching processes using dilute sulphuric acid. The first process, vat leaching of high-grade copper oxide ore followed by EW of copper from the leach solution, produced low quality copper cathode at relatively high cost. In 1968 the tonnage of high-\n\nMore",
null,
"### What is a theoretical grade-recovery curve? An example ...\n\n2009-10-26 The theoretical grade-recovery curve for an ore is a definition of the maximum expected recovery by flotation of a mineral or element at a given grade. This is defined by the surface area liberation of the value minerals and is consequently directly related to the grind size utilised in the process. The theoretical grade-recovery can be readily used to quickly identify potential recovery ...\n\nMore",
null,
"### Calculate Copper Wire Recovery Rates Gardner Metal\n\n2021-3-15 Fortunately, calculating the copper wire recovery percentage is a fairly simple process. Simply take a small segment of cable and weigh it. Take note of that number and then strip the casing so that you are left with only the copper wire at the core of the cable. Weigh the copper by itself, then divide that number by the overall weight of the ...\n\nMore",
null,
"### The electrolytic recovery of copper from rod mill pickling ...\n\n2020-10-13 COPPER AVAILABLE FOR RECOVERY. The. two. fao. tors governing. the. amount of copper entering the piokling solution are (ll. the rod mill production rate, and (2) the amount of oopper oxide coating dissolved from each coil of rod. Rod Kl11. Production Rate; In es. t. imatlng the quant. i. ty. ot. oopper available. for. recovery annually. the ...\n\nMore",
null,
"### OneClass: Calculate the percent recovery of copper. Use ...\n\nCalculate the percent recovery of copper. Use the formula % recovery= ending mass of copper (g)/inital mass of copper x 100%. Mass of empty centrifuge tube- 2.639 g. Mass of centrifuge + starting copper- 2.868 g. MAss of centrifuge + dried copper- 3.125 g. Would it be 2.12 or 212 %?\n\nMore",
null,
"### Recovery of Copper - Valencia College\n\n2009-10-27 After our product has been weighed we will need to calculate the percent recovery. The equation for percent recovery can be seen in equation 7. (7) Percent Recovery = (mass of Cu recovered / mass of initial Cu) 100%. Procedure . 1. Weigh 0.35-0.40 g of copper wire and place into a 250 mL graduated beaker. 2.\n\nMore",
null,
"### How to Calculate Percent Recovery - Science Struck\n\nPercent recovery = (8.67 ÷ 11.23) × 100 = 77.20 %. 77.20% of zinc is recouped in this process. Problem II: 14.18 gm of copper is used for a recrystallization experiment. The amount of copper recovered at the end of the purification process is 18.29 gm. Calculate the percentage of copper\n\nMore",
null,
"### Copper recovery using leach/solvent extraction ...\n\n2009-8-27 Copper recovery by L/SX/EX in 1968 In 1968 there were only two widely practiced copper leaching processes using dilute sulphuric acid. The first process, vat leaching of high-grade copper oxide ore followed by EW of copper from the leach solution, produced low quality copper cathode at relatively high cost. In 1968 the tonnage of high-\n\nMore",
null,
"### Dilution and ore recovery - QueensMineDesignWiki\n\n2019-6-28 The generalized equation for recovery is given in the following equation: More specifically, ore recovery can be defined by the percentage of minable reserves extracted in the mining process. The issue of balancing dilution and ore recovery is a challenging one as profitability is to be optimized while not effecting the efficiency of operation.\n\nMore",
null,
"### Copper Recovery - INDUSTRIAL SCRAP RECYCLING\n\nINDUSTRIAL SCRAP. RECYCLING EQUIPMENT. FOR RESOURCE RECOVERY. Copper Recovery is a manufacturer of wire and cable recycling equipment, offering sales and service worldwide. We also act as agent or representative for some of the finest European manufacturers of recycling machinery. Most recently, we infused our knowledge and know-how gained over ...\n\nMore",
null,
"### Recommended machining parameters for copper and\n\n2021-8-12 for copper and copper alloys” contin-ues a long tradition established by the German Copper Institute (DKI). The publication “Processing Copper and Copper Alloys” (“Das Bearbeiten von Kupfer und Kupferlegierungen”) first appeared in 1938 and again in 1940. The handbook “Metal cutting tech-niques for copper and copper alloys”\n\nMore",
null,
"### Percent Recovery Calculator - Calculator Academy\n\n2021-10-13 Percent Recovery Formula. The following equation is used to calculate the percent recovery from a purification process. PR = SAP / SBP *100. Where PR s the percent recovery. SAP is the substance amount after purification. SBP is the\n\nMore"
] | [
null,
"https://silosy-zbozowe.com.pl/pic/yt/69.jpg",
null,
"https://silosy-zbozowe.com.pl/pic/yt/109.jpg",
null,
"https://silosy-zbozowe.com.pl/pic/yt/94.jpg",
null,
"https://silosy-zbozowe.com.pl/pic/yt/7.jpg",
null,
"https://silosy-zbozowe.com.pl/pic/yt/161.jpg",
null,
"https://silosy-zbozowe.com.pl/pic/yt/98.jpg",
null,
"https://silosy-zbozowe.com.pl/pic/yt/124.jpg",
null,
"https://silosy-zbozowe.com.pl/pic/yt/134.jpg",
null,
"https://silosy-zbozowe.com.pl/pic/yt/177.jpg",
null,
"https://silosy-zbozowe.com.pl/pic/yt/69.jpg",
null,
"https://silosy-zbozowe.com.pl/pic/yt/166.jpg",
null,
"https://silosy-zbozowe.com.pl/pic/yt/173.jpg",
null,
"https://silosy-zbozowe.com.pl/pic/yt/98.jpg",
null,
"https://silosy-zbozowe.com.pl/pic/yt/124.jpg",
null,
"https://silosy-zbozowe.com.pl/pic/yt/43.jpg",
null,
"https://silosy-zbozowe.com.pl/pic/yt/150.jpg",
null,
"https://silosy-zbozowe.com.pl/pic/yt/134.jpg",
null,
"https://silosy-zbozowe.com.pl/pic/yt/7.jpg",
null,
"https://silosy-zbozowe.com.pl/pic/yt/189.jpg",
null,
"https://silosy-zbozowe.com.pl/pic/yt/69.jpg",
null,
"https://silosy-zbozowe.com.pl/pic/yt/56.jpg",
null,
"https://silosy-zbozowe.com.pl/pic/yt/173.jpg",
null,
"https://silosy-zbozowe.com.pl/pic/yt/98.jpg",
null,
"https://silosy-zbozowe.com.pl/pic/yt/124.jpg",
null,
"https://silosy-zbozowe.com.pl/pic/yt/7.jpg",
null,
"https://silosy-zbozowe.com.pl/pic/yt/3.jpg",
null,
"https://silosy-zbozowe.com.pl/pic/yt/131.jpg",
null,
"https://silosy-zbozowe.com.pl/pic/yt/142.jpg",
null,
"https://silosy-zbozowe.com.pl/pic/yt/25.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.878884,"math_prob":0.94997233,"size":10920,"snap":"2022-27-2022-33","text_gpt3_token_len":2576,"char_repetition_ratio":0.19961524,"word_repetition_ratio":0.5512465,"special_character_ratio":0.24624541,"punctuation_ratio":0.124824025,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9517807,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58],"im_url_duplicate_count":[null,5,null,1,null,3,null,3,null,1,null,5,null,5,null,2,null,6,null,5,null,1,null,5,null,5,null,5,null,1,null,1,null,2,null,3,null,1,null,5,null,1,null,5,null,5,null,5,null,3,null,1,null,4,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-05T05:54:41Z\",\"WARC-Record-ID\":\"<urn:uuid:ef320ea5-f645-48ea-8a3c-ce5f949b1a62>\",\"Content-Length\":\"23856\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:95032fdb-4c32-45f5-9b1e-f176090fe890>\",\"WARC-Concurrent-To\":\"<urn:uuid:84ac8f1a-ad3f-48fa-a8de-cd75e3b2a5ad>\",\"WARC-IP-Address\":\"172.67.157.221\",\"WARC-Target-URI\":\"https://silosy-zbozowe.com.pl/coal/Oct-4755/\",\"WARC-Payload-Digest\":\"sha1:VZKVEJMOYF5EKFMHBGJDV3OTJIFPPR7Q\",\"WARC-Block-Digest\":\"sha1:PTXOEYXYQ2RT2TLCM6TPMVTYF57BDAN6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104514861.81_warc_CC-MAIN-20220705053147-20220705083147-00779.warc.gz\"}"} |
https://codereview.stackexchange.com/questions/187283/aoc-day-22-sporifica-virus-part-1-a-solution-in-beginners-haskell | [
"# AoC Day 22: Sporifica Virus Part 1, a Solution in Beginner's Haskell\n\nUntil recently, I was very much only devoted to imperative languages (mainly C++ and C, to be precise), when I decided to venture into unknown waters by picking up a new, completely different language, which happened to be Haskell, a decision which happened to be influenced by the fact that I owned a copy of \"Learn You a Haskell for Great Good!\", a book which I very much enjoyed learning from.\n\nSome days ago, I finished said book, and, wanting to apply my newly acquired knowledge, ventured our to find some programming exercises. I quickly remembered Advent of Code, which offers a whole pre-Christmas period's worth of easy to mildly difficult programming problems, a few of which I had already solved during the holidays.\n\nI skimmed through the exercises, looking for one simple enough to be conquerable with my still very inadequate and shaky Haskell skills, and finally chose the task of Day 22.\n\n# Problem Description\n\nDiagnostics indicate that the local grid computing cluster has been contaminated with the Sporifica Virus. The grid computing cluster is a seemingly-infinite two-dimensional grid of compute nodes. Each node is either clean or infected by the virus.\n\nTo prevent overloading the nodes (which would render them useless to the virus) or detection by system administrators, exactly one virus carrier moves through the network, infecting or cleaning nodes as it moves. The virus carrier is always located on a single node in the network (the current node) and keeps track of the direction it is facing.\n\nTo avoid detection, the virus carrier works in bursts; in each burst, it wakes up, does some work, and goes back to sleep. The following steps are all executed in order one time each burst:\n\n• If the current node is infected, it turns to its right. Otherwise, it turns to its left. (Turning is done in-place; the current node does not change.)\n• If the current node is clean, it becomes infected. Otherwise, it becomes cleaned. (This is done after the node is considered for the purposes of changing direction.)\n• The virus carrier moves forward one node in the direction it is facing. Diagnostics have also provided a map of the node infection status (your puzzle input).\n\nClean nodes are shown as .; infected nodes are shown as #. This map only shows the center of the grid; there are many more nodes beyond those shown, but none of them are currently infected.\n\nThe virus carrier begins in the middle of the map facing up.\n\n(The full puzzle description, including examples, is available on the official AoC website.)\n\nThe mentioned puzzle input consists of a file containing a grid of . and #.\n\n# My Solution\n\nimport Data.List.Index\nimport qualified Data.Set as Set\nimport System.Environment\nimport System.IO\nimport qualified System.IO.Strict as IOS\n\ntype Position = (Int, Int)\ntype Dimensions = (Int, Int)\ntype NodeMap = Set.Set Position\n\ndata Rotation = Clockwise | Counterclockwise deriving (Eq, Show)\ndata Direction = North | East | South | West deriving (Eq, Show)\n\nnextDirection :: Rotation -> Direction -> Direction\nnextDirection rot dir\n| rot == Clockwise && dir == West = North\n| rot == Clockwise && dir == East = South\n| rot == Clockwise && dir == North = East\n| rot == Clockwise && dir == South = West\n| rot == Counterclockwise && dir == West = South\n| rot == Counterclockwise && dir == East = North\n| rot == Counterclockwise && dir == North = West\n| rot == Counterclockwise && dir == South = East\n\nparseInput :: String -> (Dimensions, NodeMap)\nparseInput s = ((length . head . lines $s, length . lines$ s),\nfoldl (\\set (index, char) ->\nif char == '#'\nthen Set.insert index set\nelse set)\nSet.empty\n. ifoldl (\\ls index line ->\nls ++ zipWith (\\ix (i, char) ->\n((i, ix), char))\n(repeat index) (indexed line))\n[]\n. lines $s) simulateNBurstsImpl :: Int -> (Int, Position, Direction, NodeMap) -> (Int, Position, Direction, NodeMap) simulateNBurstsImpl 0 x = x simulateNBurstsImpl i (count, pos, dir, map) = simulateNBurstsImpl (i - 1) transitionFunction where step (a, b) dir | dir == West = (a - 1, b) | dir == East = (a + 1, b) | dir == North = (a, b - 1) | dir == South = (a, b + 1) transitionFunction | Set.member pos map == True = let nextDir = nextDirection Clockwise dir in (count, step pos nextDir, nextDir, Set.delete pos map) | Set.member pos map == False = let nextDir = nextDirection Counterclockwise dir in (count + 1, step pos nextDir, nextDir, Set.insert pos map) simulateNBursts :: Int -> Dimensions -> NodeMap -> Int simulateNBursts i (width, height) map = first (simulateNBurstsImpl i (0, startingPos, North, map)) where startingPos = (width quot 2, height quot 2) first (a, _, _, _) = a main = getArgs >>= \\(filename : _) -> withFile filename ReadMode IOS.hGetContents >>= \\content -> let parsed = parseInput content in print . simulateNBursts 10000 (fst parsed) . snd$ parsed\n\n\nNotes:\n\n• The program takes a single argument on the commandline; the path to the file containing the starting grid.\n• I decided on using a set as the underlying data structure as it makes it easy to work with growing grids. My first attempt used a two-dimensional sequence, but extending the grid turned out to be too much of a hassle for my taste.\n• The code assumes that all inputs are valid, including the fact that a commandline parameter was passed and points to a valid file.\n\n# Review Requests\n\nPlease feel free to review anything and everything that comes to mind! That said, I do have a few concrete questions:\n\n1. Having simulateNBursts as a beautified interface to simulateNBurstsImpl seems kind of ugly to me. Is there a way to clean this up and join the two functions? Or is this a common pattern?\n2. How readable is this code? As its author, I find it hard to judge how (un-)pleasant this code is to the eye of a third person, especially since I have next to no experience in reading and writing Haskell code. What can I do to improve readability?\n3. nextDirection seems very verbose to me. Is there a more concise way to implement it?\n\n## 1 Answer\n\nYou can inline simulateNBurstsImpl if you get rid of the recursion:\n\nsimulateNBurstsImpl n = foldr (.) id $replicate n transitionFunction Combining zipWith with repeat is a fool's errand. (I don't know where you get ifoldl and indexed, so I'll assume they start at 0.) parseInput :: String -> (Dimensions, NodeMap) parseInput s = ((length . head$ lines s, length $lines s), Set.fromList [ (x, y) | (y, line) <- zip [0..]$ lines s\n, (x, char) <- zip [0..] line\n, char == '#'\n])\n\n\nDirection and Rotation can be directly represented as offsets and functions on offsets.\n\nimport Data.NumInstances.Tuple\n\ntype Rotation = Direction -> Direction\ntype Direction = (Int, Int)\n\ntransitionFunction (count, pos, (dx,dy), map) = if Set.member pos map\nthen let nextdir = (-dy,dx)\nin (count , pos + nextDir, nextDir, Set.delete pos map)\nelse let nextdir = (dy,-dx)\nin (count + 1, pos + nextDir, nextDir, Set.insert pos map)\n\n\nOptionally: lens and State specialize in this fiddly sort of stuff.\n\ndata S = S\n{ _count :: Int\n, _pos :: Position\n, _dir :: Int\n, _map :: NodeMap\n}\n\nmakeLenses ''S\n\nsimulateNBursts i (width, height) map =\n(evalState (0, (width quot 2, height quot 2), (0,-1), map)) $do replicateM_ i$ do\nhashtagged <- map . contains pos <<%= negate\nnextDir <- dir <%= \\(x,y) -> if hashtagged then (-y,x) else (y,-x)\npos += nextDir\nunless hashtagged \\$ count += 1\nuse count\n\n• Thank you for the answer. ifoldl and indexed are from the ilist package (from Data.List.Index and start, as you guessed, from 0. – Ben Steffan Feb 11 '18 at 16:23"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.89705175,"math_prob":0.89612436,"size":5957,"snap":"2019-35-2019-39","text_gpt3_token_len":1421,"char_repetition_ratio":0.10431715,"word_repetition_ratio":0.05167464,"special_character_ratio":0.25029376,"punctuation_ratio":0.13748854,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9543783,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-21T01:47:36Z\",\"WARC-Record-ID\":\"<urn:uuid:736ee9d1-a3f4-43e5-a30b-9be3934cabd5>\",\"Content-Length\":\"143965\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:435ad3f4-e415-47d3-ae1f-5cbf77783f53>\",\"WARC-Concurrent-To\":\"<urn:uuid:8ebb9ebb-f4e5-4882-9d90-5d49062351c5>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://codereview.stackexchange.com/questions/187283/aoc-day-22-sporifica-virus-part-1-a-solution-in-beginners-haskell\",\"WARC-Payload-Digest\":\"sha1:J7TJ2L7RYLYMNIAJKLYV4R2JR5RGR5QS\",\"WARC-Block-Digest\":\"sha1:QRJMNSS6VRGY7BCOQ3KO5QJNL3A2UWBA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027315695.36_warc_CC-MAIN-20190821001802-20190821023802-00307.warc.gz\"}"} |
https://docs.derivative.ca/LSystem_SOP | [
"# LSystem SOP\n\n## Summary\n\nThe Lsystem SOP implements L-systems (Lindenmayer-systems, named after Aristid Lindenmayer (1925-1989)), allow definition of complex shapes through the use of iteration. They use a mathematical language in which an initial string of characters is evaluated repeatedly, and the results are used to generate geometry. The result of each evaluation becomes the basis for the next iteration of geometry, giving the illusion of growth.\n\nYou begin building an L-system by defining a sequence of rules which are evaluated to produce a new string of characters. Each character of the new string represents one command which affects an imaginary stylus, or \"turtle\". Repeating this process will grow your geometry.\n\nYou can use L-systems to create things such as:\n\n• Create organic objects such as trees, plants, flowers over time.\n• Create animated branching objects such as lightning and snowflakes.\n\nThe file can be read in from disk or from the web. Use http:// when specifying a URL.\n\n### The Algorithmic Beauty of Plants\n\nThe descriptions located here should be enough to get you started in writing your own L-system rules, however, if you have any serious interests in creating L-systems, you should obtain the book:\n\n```The Algorithmic Beauty of Plants\nPrzemyslaw Prusinkiewicz & Aristid Lindenmayer\nSpringer-Verlag, New York, Phone: 212.460.1500\nISBN: 0-387-94676-4, 1996.\n```\n\nwhich is the definitive reference on the subject. It contains a multitude of L-systems examples complete with descriptions of the ideas and theories behind modelling realistic plant growth.\n\n## Parameters - Geometry Page\n\nType `type` - - Provides two options for output geometry:\n\n• Skeleton `skel` - Creates wire frame geometry. This option is ideal for geometry that is stiff and jagged like lightning or snowflakes. It is also useful to reduce SOP cooking time.\n• Tube `tube` - Creates tube geometry. This option can be used with solid geometry that would need smooth curves, like trees or shrubs. Parameters on the Tube Page are only enabled when this Type is selected.\n\nGenerations `generations` - Determines the number of times to apply the rules to the initial string. This value controls the growth of the L-system. Place a time-based function here to animate the L-system growth.\n\nRandom Scale `randscale` - Random Scale as a percentage. This will apply a random scale to the changing geometry's lengths, angles and thickness.\n\nRandom Seed `randseed` - Random Seed for the SOP. This value can be used to select different sequences of random values.\n\nContinuous Angles `contangl` - Calculates the incremental angles of branches, if a non-integer generational value is used. If the Generations field is animating, this should be set to ensure smooth growth.\n\nContinuous Length `contlength` - Calculates the incremental lengths of the geometry points if a non-integer generational value is used. As with Continuous Angles, if the Generations field is animating, this should be set to ensure smooth, continuous growth. The Continuous Width field applies to tube thickness.\n\nContinuous Width `contwidth` - Calculates the incremental lengths of the geometry points if a non-integer generational value is used. As with Continuous Angles, if the Generations field is animating, this should be set to ensure smooth, continuous growth. The Continuous Width field applies to tube thickness.\n\nApply Color `docolor` - Use a TOP to apply color to the L-system as it grows.\n\nImage File `colormap` - Defines a TOP to use when the Apply Color button is selected. Also see the ` and # turtle operators.\n\nUV Increment `inc` - - Defines the default color U, V index increments when the turtle symbols ` or # are used.\n\n• `incu` -\n• `incv` -\n\nPoint Width Attribute `pointwidth` - Adds a point `width` attribute to each point in the geometry. This width is effected by the Thickness and Thickness Scale parameters on the Tube Page.\n\n## Parameters - Tube Page\n\nThe parameters on this page are active only if Geometry Page > Type has been set to the Tube type.\n\nRows `rows` - The first option sets the number of tube sides and the second sets the number of divisions per step length if tube geometry is selected.\n\nColumns `cols` - The first option sets the number of tube sides and the second sets the number of divisions per step length if tube geometry is selected.\n\nTension `tension` - Tension defines the smoothness of branching corners.\n\nBranch Blend `smooth` - Enabling this option allows a child branch to be continuously joined to its parent branch.\n\nThickness `thickinit` - This number defines the default tube thickness.\n\nThickness Scale `thickscale` - This number is the scale factor used with the ! or ? operator.\n\nApply Tube Texture Coordinates `dotexture` - When enabled, UV texture coordinates are applied to the tube segments, such that the texture wraps smoothly and continuously over branches.\n\nVertical Increment `vertinc` - Defines the vertical spacing of texture coordinates over tube geometry when tube texture is applied.\n\n## Parameters - Values Page\n\nStep Size `stepinit` - Step Size allows you to define the default length of the edges when new geometry is generated.\n\nStep Size Scale `stepscale` - Step Size Scale defines the scale by which the geometry will be modified by the \" or _ (double quote, or underscore) turtle operators.\n\nAngle `angleinit` - Angle defines the default turning angle for turns, rolls and pitches.\n\nAngle Scale `anglescale` - Angle Scale allows you to enter the scaling factor to be employed when the ; or @ operators are used.\n\nVariable b `varb` - Substitutes user-defined b, c and d variables in rules or premise. These variables are expanded and so may include system variables such as `\\$F` and `\\$T`.\n\nVariable c `varc` - Substitutes user-defined b, c and d variables in rules or premise. These variables are expanded and so may include system variables such as `\\$F` and `\\$T`.\n\nVariable d `vard` - Substitutes user-defined b, c and d variables in rules or premise. These variables are expanded and so may include system variables such as `\\$F` and `\\$T`.\n\nGravity `gravity` - This parameter determines the amount of gravity applied to the geometry via the T (tropism vector) turtle operator. Tropism is when a plant bends or curves in response to an external stimulus. L-systems employ a tropism vector to simulate this behaviour. The bending is characterised by the fact that the thicker or shorter parts bend less than the longer or thinner parts.\n\n## Parameters - Funcs Page\n\nThe parameters on this page allow you to stamp your leaf geometry (each copy can be different) as opposed to simply copying them. See the example in Example - Stamping L-system Leaves.\n\nPic Image TOP `pictop` - This is the TOP which the pic() function uses. See #Expressions L-system Specific Expression Functions below.\n\nGroup Prefix `grpprefix` - If the production g(n) is encountered, all subsequent geometry is included in a primitive group prefixed with this label and ending with the ascii value of n. See #CreateGroup Creating Groups within L-systems below for an example.\n\nChannel Prefix `chanprefix` - If the expression chan(n) is encountered, it is replaced with the local channel prefixed with this label and ending with the ascii value of n.\n\nLeaf Param A `stampa` - You can determine which parameters are used by leaves.\nSee #CreateGroup Creating Groups within L-systems below for an example.\n\nLeaf Param B `stampb` - You can determine which parameters are used by leaves.\nSee #CreateGroup Creating Groups within L-systems below for an example.\n\nLeaf Param C `stampc` - You can determine which parameters are used by leaves.\nSee #CreateGroup Creating Groups within L-systems below for an example.\n\nRules DAT `rules` - Path to the DAT defining the rules for the LSystem.\n\n• Context Ignore `context_ignore:` - Defining this in the Rules DAT specifies all characters which are to be skipped when testing context sensitivity in the rules below.\n• Premise `premise:` - Define an initial string of characters to which the substitution rules are applied.\n• Rules - This is where the turtle substitution rules are defined.\n\n## Rule Substitution\n\nYou create the highly structured organic and branching objects using L-systems grammar. An L-system is a process in which a sequence of rules are applied to an initial string of characters to create a new string. To build the geometry, each character of the final string represents one command which affects an imaginary stylus, or \"turtle\".\n\nThe process begins by examining the first character of the premise string. All sixteen rules are searched sequentially until an applicable rule is found. The current character is then replaced with one or more characters as defined by the rule. The remaining characters in the premise string are replaced in a similar fashion. The entire process is repeated once for each generation.\n\n### Limitations to Rules\n\n• Polygon {} and branch [] operators can be nested 30 levels deep\n• Rules can be 256 characters in length\n• Variables can have up to 5 parameters\n• Up to 25 rules can be defined\n\n### Turtle Operators\n\n`F` Move forward (creating geometry)\n\n`H` Move forward half the length (creating geometry)\n\n`G` Move forward but don't record a vertex\n\n`f` Move forward (no geometry created)\n\n`h` Move forward a half length (no geometry created)\n\n`J K M` Copy geometry source J, K or M at the turtle's position\nafter rescaling and reorienting the geometry.\n\n`T` Apply tropism vector\n\n`+` Turn right\n\n`-` Turn left (minus sign)\n\n`&` Pitch up\n\n`^` Pitch down\n\n`\\` Roll clockwise\n\n`/` Roll counter-clockwise\n\n`|` Turn 180 degrees\n\n`*` Roll 180 degrees\n\n`~` Pitch / Roll / Turn random amount\n\n`\"` Multiply current length\n\n`!` Multiply current thickness\n\n`;` Multiply current angle\n\n`_` Divide current length (underscore)\n\n`?` Divides current width\n\n`@` Divide current angle\n\n`'` Increment color index U (single quote)\n\n`#` Increment color index V\n\n`%` Cut off remainder of branch\n\n`\\$` Rotate `up' towards the sun about heading\n\n`[` Push turtle state (start a branch)\n\n`]` Pop turtle state (end a branch)\n\n`{` Start a polygon\n\n`.` Make a polygon vertex\n\n`}` End a polygon\n\n`g` Create a new primitive group to which subsequent geometry is added\n\n## Production Rule Syntax\n\nA Touch L-system rule is specified as:\n\n[lc<] pred [>rc] [:cond]=succ [:prob]\n\nwhere:\n\n• lc - Optional left context\n• pred - predecessor symbol to be replaced\n• rc - Optional right context\n• cond - Condition expression (optional)\n• succ - Replacement string\n• prob - Probability of rule execution (optional)\n\n### Context Sensitivity\n\nThe most basic type of rule is:\n\npred = succ\n\nIn this case, a character is replaced with the characters of succ if, and only if, it matches pred.\n\nFor example:\n\nPremise: `ABC`\nRule 1: `B=DOG`\n\nwill result in `ADOGC`\n\npred can only specify one letter, but left and right context symbols can be specified. The general syntax is [lc<] pred [>rc] = succ.\n\nFor example:\n\nPremise: `ABC`\nRule 1: `A<B=DOG`\n\nagain results in `ADOGC` because `B` is preceded by `A`. If the rule were: `Z<B=DOG` or `B>A=DOG` the rule would not be applied.\n\n## Parameter Symbols\n\nEach symbol can have up to five user-defined variables associated with it which can be referenced or assigned in expressions. Variables in the predecessor are instanced while variables in the successor are assigned.\n\nFor example:\n\nThe rule `A(i, j) = A(i+1, j-1)`, will replace each `A` with a new `A` in which the first parameter has been incremented and the second parameter decremented.\n\nNote that the variables in the predecessor can also be referenced by the condition or probability portions of the rule.\n\nFor example:\n\nThe rule `A(i):i<5 = A(i+1) A(i+1)`, will double each `A` a maximum of five times (assuming a Premise of `A(0)`).\n\nParameters assigned to geometric symbols (e.g. `F`, `+`, or `!`) are interpreted geometrically.\n\nFor example:\n\nThe rule `F(i, j) = F(0.5*i, 2*j)`, will again replace each `F` with a new `F` containing modified parameters. In addition to this, the new `F` will now be drawn at half the length and twice the width.\n\n## Operator Override\n\nNormally turtle symbols use the current length/angle/thickness etc. to determine their effect. By providing a turtle operator with an explicit parameter, it will override the value normally used by the turtle operator.\n\nOverride parameters for `F`, `f`, `G`, `h`, `H` take the form of:\n\n`F(i,j,k,l,m)`\n\n• `i` - Override Length.\n• `j` - Override Thickness.\n• `k` - Override # Tube Segments.\n• `l` - Override # Tube Rows.\n• `m` - User parameter.\n\nThe k and l override parameters allow dynamic resolution of tube segments.\n\n### Examples\n\n`F` - Moves forward current length creating geometry.\n\n`H` - Moves forward half current length creating geometry.\n\n`F(i, j)` - Moves forward a distance of `i,` creating geometry of thickness `j`.\n\n`H(i, j)` - Move forward half the distance of `i`, creating geometry of thickness half of `j`.\n\n`+` - Turn by current angle amount.\n`~` - Rotate by random angle.\n\n`+(i)` - Turn by i degrees.\n`~(i)` - Override random angle with value of `i`.\n\n`\\$(x0,y0,z0)` - Points the turtle to location `(x0,y0,z0)`.\n\nGiven the above, the Premise:\n\n`F(1) +(90) F(1) +(90) F(1) +(90) F(1)`\n\ngenerates a unit box regardless of the default Step Size or Angle settings.\n\n### List of Operator Overrides\n\nThe following list describes the geometric interpretation of parameters assigned to certain turtle symbols:\n\n## Edge Rewriting\n\nIn The Algorithmic Beauty of Plants, many examples use a technique called Edge Rewriting which involve left and right subscripts. A typical example might look like:\n\nGenerations `10`\nAngle `90`\nPremise `F(l)`\nRules\n`F(l) = F(l)+F(r)+ F(r) = -F(l)-F(r)`\n\nHowever, Touch doesn't know what `F(l)` and `F(r)` are. In this case, we can modify the rules to use parameter passing. For the `F` turtle symbol, the first four parameters are length, width, tubesides, and tubesegs, leaving the last parameter user-definable. We can define this last parameter such that `0` is left, and `1` is right:\n\nGenerations `10`\nAngle `90`\nPremise `F(1,1,3,3,0)`\nRules\n\nAfter two generations this produces: `Fl+Fr+-Fl-Fr`. There should not be any difference between this final string and `F+F+-F-F.`\n\nAnother approach is to use two new variables, and use a conditional statement on the final step to convert them to `F`:\n\nVariable b `ch(\"generations\")`\nPremise `l`\nRules\n`l:t<b=l+r+ r:t<b=-l-r l=F r=F`\nOutput `l`\n`F`\n`F+F+`\n`F+F++-F-F+`\n\n## Expressions\n\nIn the earlier example, the expressions `0.5*i` and `2*j` are used. In fact, expressions can be used anywhere a numeric field is expected. Currently the following symbols can be used in expressions:\n\n`( )` - brackets for nesting priority\n\n`^ + - * / %` - arithmetic operators\n\n`min() max() sin() cos() asin() acos() pic() in()` - supported functions\n\n`== != = < <= > >=` - logical operators\n\n`& | !` - logical operators: and, or, not\n\n`b c d` - SOP b, c, d parameters after expansion\n\n`x y z` - current turtle position\n\n`g` - age of symbol\n\n`t` - time (generations) of L-system\n\n`a` - SOP angle parameter\n\n`T` - SOP tropism (gravity) parameter\n\nThe pre-defined variables above should not be used in the arguments of the predecessor.\n\nFor example: `A(a,b) = B(a*2,b*2)` is wrong (a is the SOP Angle parameter). `a<b (A,B) = b(A+1,B)` is right.\n\nThe last statement is correct because `a` and `b` are used as symbols and not variable names. `A` and `B` are correct because variable names are case sensitive.\n\n### L-System Specific Expression Functions\n\n• `pic(u, v, c)` - Using the image specified with Pic Image File, this function returns a normalized value (between `0` and `1`) of the pixel at the normalized coordinates `(u,v)`; `c` selects one of four channels to examine:\n`0`\n`1`\n`2`\n`3`\n• `in(q, r, s)` - Given a MetaTest input source containing a metaball geometry, this function returns a `1` if the point `(q, r, s)` is contained within the metaball, and `0` if not. Use `in(x, y, z)` (the letters `x`, `y`, and `z` are special and contain the X, Y, and Z location of the turtle) to test whether or not the turtle is currently inside the metaball to create pruned outputs.\n\n### Conditions\n\nEach rule may have an optional condition specified. The syntax is:\n\n[lc<] pred [>rc]:cond = succ\n\nFor example:\n\nThe Rule1 `A:y>2=J` includes source `J` at all `A`'s above the height of `2`.\n\n### Probability\n\nEach rule can specify the probability that it is used (provided it is otherwise applicable). The syntax is:\n\n[lc<] pred [>rc]:cond=succ=prob\n\nFor example:\n\nRule1 `A=B:0.5`\nRule2 `A=C:0.5 `\n\nwill replace `A` with either `B` or ` C` with equal probability.\n\n## Creating Groups with L-Systems\n\nThere is a group operator '`g`' which lumps all geometry currently being built into group g.\n\nFor example: `g[F]` lumps geometry from F into a group called `lsys0`. You can set the lsys prefix from the Funcs page.\n\n### Optional Group Parameters\n\n`g` takes an optional parameter as well.\n\nFor example: `g(1)[F]` lumps geometry from `f` into a group called `lsys1`. If no parameter is given, the default index is bumped up appropriately.\n\nThe current group container is pushed/popped with the turtle state so you can do things like:\n\n`gF [ gFF] F `- The first and last `F`'s are put into group `0`, and the middle `FF`'s are put into group `1`.\n\n`gF [ FF] F` - The geometry from all four `F`'s are put into group `0`, (pushing the turtle adopts the parent's group).\n\nTo exclude the middle `FF` from the parent's group type: `gF [ g(-1)FF ] F`\n\n## Controlling the Length over Time\n\nTo create an L-system which goes forward X percent less for each iteration, you need to start your Premise with a value, and then within a rule, multiply that value by the percentage you want to remain:\n\nPremise `A(1)`\nRule `A(i) = F(i)A(i*0.5)`\n\nThis way \"`i`\" is scaled before `A` is again evaluated. The important part is the Premise. You need to start with a value to be able to scale that value.\n\n## Example\n\nStep 1) Place a Circle SOP, and set the Number of Divisions to`: param(\"lsys\", 3`)\n\nIt then displays a triangle (3 is default value).\n\nStep 2) Pipe this into the J input of a L-System SOP. If the L-system Premise is:\n\n`J A`\n`J(,,4) A`\n`J(,,5) A`\n\nThis way, you can customize each leaf before it gets copied.\n\nStep 3) Change the Premise and Rule to:\n\nPremise`A(0)`\nRule1`A(i)=FJ(,,i+3)A(i+1)`\n\nThis creates a line of increasing-order polygons.\n\nStep 4) Finally, we will want to create 20 leaves, and put them all into a Switch SOP. Do this by entering the following expression into the Switch SOP's Select Input: `param(\"lsys\",0)`\nStep 5) Then in your L-system, J(,,0) gives you the first SOP, J(,,1) gives you the second, and so on. This solves the problem of a limited number of leaves using only JKM.\n\nAlso note that these examples use only the first stamp parameter, you can use up to three parameters: e.g. J(,,1,2,3)\n\nThe first two parameters of J, K, M are used to override length and width, like symbol F.\n\n• Input 0 -\n• Input 1 -\n• Input 2 -\n• Input 3 -\n\n## Info CHOP Channels\n\nExtra Information for the LSystem SOP can be accessed via an Info CHOP.\n\n### Common SOP Info Channels\n\n• num_points - Number of points in this SOP.\n• num_prims - Number of primitives in this SOP.\n• num_particles - Number of particles in this SOP.\n• last_vbo_update_time - Time spent in another thread updating geometry data on the GPU from the SOP's CPU data. As it is part of another thread, this time is not part of the usual frame time.\n• last_meta_vbo_update_time - Time spent in another thread updating meta surface geometry data (such as metaballs or nurbs) on the GPU from the SOP's CPU data. As it is part of another thread, this time is not part of the usual frame time.\n\n### Common Operator Info Channels\n\n• total_cooks - Number of times the operator has cooked since the process started.\n• cook_time - Duration of the last cook in milliseconds.\n• cook_frame - Frame number when this operator was last cooked relative to the component timeline.\n• cook_abs_frame - Frame number when this operator was last cooked relative to the absolute time.\n• cook_start_time - Time in milliseconds at which the operator started cooking in the frame it was cooked.\n• cook_end_time - Time in milliseconds at which the operator finished cooking in the frame it was cooked.\n• cooked_this_frame - 1 if operator was cooked this frame.\n• warnings - Number of warnings in this operator if any.\n• errors - Number of errors in this operator if any.\n\nTouchDesigner Build:\n\nAdd • Alembic • Align • Arm • Attribute Create • Attribute • Basis • Blend • Bone Group • Boolean • Box • Experimental:Box • Bridge • Cache • Cap • Capture Region • Capture • Carve • CHOP to • Circle • Experimental:Circle • Clay • Clip • Convert • Copy • CPlusPlus • Creep • Curveclay • Curvesect • DAT to • Deform • Delete • Divide • Extrude • Face Track • Facet • File In • Fillet • Fit • Font • Force • Fractal • Grid • Experimental:Grid • Group • Hole • Import Select • In • Introduction To s Vid • Inverse Curve • Iso Surface • Join • Joint • Kinect • Lattice • Limit • Line • Line Thick • LOD • LSystem • Magnet • Material • Merge • Metaball • Model • Noise • Null • Object Merge • Oculus Rift • OpenVR • Out • Particle • Point • Polyloft • Polypatch • Polyreduce • Polyspline • Polystitch • Primitive • Profile • Project • Rails • Raster • Ray • Rectangle • Experimental:Rectangle • Refine • Resample • Revolve • Script • Select • Sequence Blend • Skin • Sort • Sphere • Experimental:Sphere • Spring • Sprinkle • Sprite • Stitch • Subdivide • Superquad • Experimental:Superquad • Surfsect • Sweep • Switch • Text • Texture • Torus • Experimental:Torus • Trace • Trail • Transform • Trim • Tristrip • Tube • Experimental:Tube • Twist • Vertex • Wireframe • ZED"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7875646,"math_prob":0.87549835,"size":20031,"snap":"2022-05-2022-21","text_gpt3_token_len":4818,"char_repetition_ratio":0.12872621,"word_repetition_ratio":0.14794755,"special_character_ratio":0.23977834,"punctuation_ratio":0.10526316,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98141855,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-17T14:00:21Z\",\"WARC-Record-ID\":\"<urn:uuid:c1ed4b8d-ba35-4704-a2a6-dc20b12474a3>\",\"Content-Length\":\"83161\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ebd4c70f-9b55-4abe-9b97-395f4be17fa8>\",\"WARC-Concurrent-To\":\"<urn:uuid:4465b195-cb42-4168-8c6f-e92ec83ecb69>\",\"WARC-IP-Address\":\"3.15.54.19\",\"WARC-Target-URI\":\"https://docs.derivative.ca/LSystem_SOP\",\"WARC-Payload-Digest\":\"sha1:ESJXSMBNXJRK22744UYZJKI5NPWOENHQ\",\"WARC-Block-Digest\":\"sha1:RLYN36FPZOKL2IAD5EH3KB4LXVSWZR2F\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662517485.8_warc_CC-MAIN-20220517130706-20220517160706-00419.warc.gz\"}"} |
https://aalopes.com/blog/?p=291 | [
"### Implementing mod function in C\n\nSomething some people don’t realize is that C and several other programming languages don’t offer you the standard modulo operation but simply a remainder operation (this issue has been in any case discussed countless times, e.g. see, although most of the times without offering a solution).\n\nThe problem arises when people try to use the remainder operator, % or",
null,
"$\\text{rem}$, for the modulo without realizing the implications of doing so.\nI recall when a programmer I knew tried to write C code for a steering angle sensor system using the % operator for the mod operation (which was needed by the algorithm), leading to erratic and difficult to debug results (in this case, a function that was supposed to be smooth exhibited erratic jumps), leaving me the job to understand the algorithm and then find the bug.\nNote that the C standard defines the remainder of an integer division via the % operator for positive integers. For negative integers the operation is implementation dependent and only fixed by the C99 standard or newer. Note that the remainder can be negative but the modulo can’t, and that",
null,
"$\\mod(a,b)$ is contained in",
null,
"$[0,b[$:\n\nThe question is then, if the remainder operation, % here also denoted as rem is available, how to leverage this to calculate the modulo operation? And here’s one way of doing it:",
null,
"$\\mod (a,b) = \\begin{cases} 0, & a = 0 \\vee b = 0\\\\ b - \\text{rem}(-a -1, b) -1, &a<0 \\\\ \\text{rem} (a,b), &a>0\\end{cases}$.\nNote that this is only valid for a non-negative divisor",
null,
"$b$, however the generalization is rather trivial and as is common practice to say, left as an exercise for the reader :)."
] | [
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9398961,"math_prob":0.98257655,"size":1507,"snap":"2021-21-2021-25","text_gpt3_token_len":297,"char_repetition_ratio":0.13040586,"word_repetition_ratio":0.0,"special_character_ratio":0.20238885,"punctuation_ratio":0.09122807,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9865527,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-18T14:13:38Z\",\"WARC-Record-ID\":\"<urn:uuid:0c6f3db5-c7fe-4b25-8f40-d98eed960328>\",\"Content-Length\":\"19140\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8e9fe2c1-3a5d-4759-a4c2-10d6691531a7>\",\"WARC-Concurrent-To\":\"<urn:uuid:bed3ed01-d7ee-4e4b-8e50-05ab6f989b59>\",\"WARC-IP-Address\":\"188.93.230.163\",\"WARC-Target-URI\":\"https://aalopes.com/blog/?p=291\",\"WARC-Payload-Digest\":\"sha1:MJHFZMROIWNVIFD5K7OJZQL3DJZMZTQI\",\"WARC-Block-Digest\":\"sha1:3BMWD57PUW32MDDBECKH6WZDAIU6W4NL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243989637.86_warc_CC-MAIN-20210518125638-20210518155638-00346.warc.gz\"}"} |
https://www.enoisecontrol.com/reverberation-time-gymnasiums-multi-purpose-rooms/ | [
"## Reverberation Time for Gymnasiums and Multi-Purpose Rooms\n\nThe below excerpt from a booklet on classroom acoustics published by the Acoustical Society of America can help understand reverberation time for community and industrial applications. Although this was written about classrooms, the principle is the same for all indoor spaces.\n\nSource: Acoustical Society of America, A publication of the technical committee on architectural acoustics of the Acoustical Society of America. https://asa.aip.org/classroom/booklet.html\n\nOver 100 years ago, a Harvard physics professor named Wallace Clement Sabine developed the first equation for reverberation time, which has since been named after him and is still used today. Reverberation time is defined as the length of time required for sound to decay 60 dB from its initial level. Sabine’s simple formula is",
null,
"where:\n\nRT (60) = reverberation time (seconds)\n\nV = room volume (cubic feet)\n\nS = surface area (square feet)\n\n? = absorption coefficient of material(s) at given frequency\n\n? indicates the summation of S times ? for all room surfaces\n\nTo use this formula, the volume of the room, surface area of each material in the room, and absorption coefficients for those materials must be known. Absorption coefficients are measured in specialized laboratories, and represent the fraction of sound energy (not sound level – dB) the material will absorb as a decimal from 0 to 1. Figure 15 gives absorption coefficients for common classroom materials.",
null,
"A commonly used one-number rating called NRC, noise reduction coefficient, is simply the average of the absorption coefficients at 250, 500, 1000, and 2000 Hz. This simple, one-number rating can be useful for comparing the relative absorption of two materials; however, examining absorption coefficients in each octave band gives a better idea of the performance of a material at various frequencies.\n\nReverberation time is often calculated with the room unoccupied. Since people and their clothing provide additional sound absorption, an unoccupied room is the worst-case scenario, though not an unreasonable one, since occupancy of most classrooms varies. In a complete analysis, this calculation should be performed for each octave band, as the RT can vary widely at different frequencies. However, for a quick estimate, the RT of a classroom can be calculated for just one octave band representative of speech frequencies, such as 1000 Hz. If this RT is acceptable, then the RT throughout the speech range will likely be acceptable."
] | [
null,
"https://www.enoisecontrol.com/wp-content/uploads/2015/01/sabine-formula.jpg",
null,
"https://www.enoisecontrol.com/wp-content/uploads/2015/01/absorption-coefficients.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9296257,"math_prob":0.91018623,"size":2522,"snap":"2023-40-2023-50","text_gpt3_token_len":513,"char_repetition_ratio":0.12986498,"word_repetition_ratio":0.0,"special_character_ratio":0.19547978,"punctuation_ratio":0.11060948,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96365225,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,5,null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-26T22:34:02Z\",\"WARC-Record-ID\":\"<urn:uuid:89fb727c-8949-45e9-994a-5f444b0a407d>\",\"Content-Length\":\"188038\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a636a187-4fa9-41a8-80fd-5a66e0383708>\",\"WARC-Concurrent-To\":\"<urn:uuid:f6839a1b-b3f8-4666-a847-1f191b7ed916>\",\"WARC-IP-Address\":\"64.255.252.21\",\"WARC-Target-URI\":\"https://www.enoisecontrol.com/reverberation-time-gymnasiums-multi-purpose-rooms/\",\"WARC-Payload-Digest\":\"sha1:T54NGXIGWEUHHL4ESWSNH6OCU55QUDSY\",\"WARC-Block-Digest\":\"sha1:BHHQ7E7TIBTNMK66MBH2ZC7EAKGV646X\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510225.44_warc_CC-MAIN-20230926211344-20230927001344-00686.warc.gz\"}"} |
https://wintertalesevents.com/ounces-in-a-pound/ | [
"# How many ounces in a pound\n\nA pound is a unit of weight in the United States customary measurement system. One of the most common questions that people have when purchasing items by weight is, “How many ounces in a pound?” The answer to this question is particularly helpful when calculating the price per pound. It is equivalent to sixteen ounces. Therefore, one pound is equal to sixteen ounces. In other words, there are sixteen ounces in one pound. One pound is a unit of weight in the United States customary measurement system. It equals sixteen ounces, the same as four-quarters, eight-eighths, or sixteen sixteenths.\n\nIn other words, there are sixteen ounces in a pound. A pound is a unit of weight equal to sixteen ounces in the United States standard measurement system. An ounce is 16 ounces, therefore a pound is 16 ounces. This can also be expressed as four-quarters, eight-eighths, or sixteen-sixteenths. The traditional system of measurement used in the United States defines a pound as a unit of weight equal to sixteen ounces. Consequently, one pound contains sixteen ounces. This can also be expressed as four-quarters, eight-eighths, or sixteen-sixteenths.\n\nWhat are ounces?\n\nOunces are a unit of weight measurement used in the imperial and United States customary systems. The weight of an ounce is approximately 28.35 grams. Ounces are commonly used to measure the weight of food, liquids, and other small objects.\n\nWhat is a pound?\n\nThe Imperial system of measurement uses pounds to measure mass or weight. It is equal to 16 ounces and is commonly used to measure a variety of items, including food, medicines, and other materials. The pound is widely used in the United States and around the world in a variety of industries, including retail, health care, and manufacturing. It is also used to measure the weight of people, animals, and other objects. In the United States, the pound is often abbreviated as “lb”.\n\nIs there a difference between imperial and metric units?\n\n1. The Imperial System is based on units of measurement like feet, inches, ounces, pounds, and gallons. The Metric System is based on units of measurement like meters, liters, grams, and kilograms.\n\n2. The Imperial System uses ounces to measure a pound, while the Metric System uses grams to measure a kilogram.\n\n3. The Imperial System is used mainly in the United States, while the Metric System is used by most of the rest of the world.\n\n4. The Imperial System is more complicated and less precise than the Metric System.\n\n5. The Imperial System is based on the British Imperial System of measurement, while the Metric System is based on the metric system of measurement.\n\nOunces in a pound conversion table"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9306485,"math_prob":0.9786085,"size":2915,"snap":"2023-40-2023-50","text_gpt3_token_len":702,"char_repetition_ratio":0.16901408,"word_repetition_ratio":0.14481409,"special_character_ratio":0.24493997,"punctuation_ratio":0.123333335,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9606183,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-01T15:48:27Z\",\"WARC-Record-ID\":\"<urn:uuid:e0b14a78-c838-48e5-b081-ae15af429c3d>\",\"Content-Length\":\"86110\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2e9c987c-f9b4-4eb4-b145-fd2edf1bc7ff>\",\"WARC-Concurrent-To\":\"<urn:uuid:cbaa9350-f795-4d9f-b2e9-4345546d8549>\",\"WARC-IP-Address\":\"172.67.203.230\",\"WARC-Target-URI\":\"https://wintertalesevents.com/ounces-in-a-pound/\",\"WARC-Payload-Digest\":\"sha1:CEKMOXIE5DFT6WXLB5M4WTRGIF7FHMOG\",\"WARC-Block-Digest\":\"sha1:BUTDTEQXYACHPE22V7QFCIS4N6FW72TH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510903.85_warc_CC-MAIN-20231001141548-20231001171548-00834.warc.gz\"}"} |
https://www.etoolsage.com/Calculator/MolarMass.asp?CalText=c6h5n2 | [
"Home|Add to Favorites",
null,
"|Add to My Toolkit Petroleum & Chemicals",
null,
"Molar Mass Calculator",
null,
"|Register|Sign in|Customization",
null,
"# Calculate molar mass\n\nEnter a chemical formula in the field below, pay attention to formula is case sensitive!!!\n\nThe formula should be entered as:\nH2SO4= H2SO4\nCuSO4.5H2O=CuSO4.5H2O\n3BeO.Al2O3.6SiO2= (BeO)3.Al2O3.6SiO2\n Calculations:Formula: c6h5n2Molar Mass: 105.1195 g/mol 1g=9.51298284333544E-03 molPercent composition (by mass):Element Count Atom Mass %(by mass)c 6 12.011 68.56%h 5 1.0079 4.79%n 2 14.007 26.65%\n Top Use:",
null,
"Molar Mass Calculator Formula: Co2(Cr2O7)3Recent user inquiry:2020/6/2 15:08Molar Mass Calculator Formula: Kr2020/6/2 15:07Molar Mass Calculator Formula: n2h42020/6/2 15:06Molar Mass Calculator Formula: (NH4)6.Mo7.O24.4H2O2020/6/2 15:06Molar Mass Calculator Formula: C6H6O32020/6/2 15:06Molar Mass Calculator Formula: (K)2.MnO4"
] | [
null,
"https://www.etoolsage.com/images/noteD.gif",
null,
"https://www.etoolsage.com/images/arrowRight.gif",
null,
"https://www.etoolsage.com/images/DownArrow.gif",
null,
"https://www.etoolsage.com/images/eTools.gif",
null,
"https://www.etoolsage.com/images/rightArrow.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.54526174,"math_prob":0.9585135,"size":423,"snap":"2020-24-2020-29","text_gpt3_token_len":152,"char_repetition_ratio":0.13365155,"word_repetition_ratio":0.0,"special_character_ratio":0.38297874,"punctuation_ratio":0.1724138,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9916902,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-06-02T07:08:23Z\",\"WARC-Record-ID\":\"<urn:uuid:70206f8a-6859-4677-a279-ad039c1a9ad9>\",\"Content-Length\":\"26152\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9c2024c3-fdea-4e98-b255-963ad20e78de>\",\"WARC-Concurrent-To\":\"<urn:uuid:b63f1225-99ad-483d-a68c-a43d833f798d>\",\"WARC-IP-Address\":\"172.67.157.159\",\"WARC-Target-URI\":\"https://www.etoolsage.com/Calculator/MolarMass.asp?CalText=c6h5n2\",\"WARC-Payload-Digest\":\"sha1:YVSXOAHW3X3NNUOBOSNMH2XU3WPALPUN\",\"WARC-Block-Digest\":\"sha1:B4CKCIK4AYPBCEO3R46DYF62FIFZA3TN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347423915.42_warc_CC-MAIN-20200602064854-20200602094854-00353.warc.gz\"}"} |
https://alien.ren/article/7.html | [
"## ES6 对数组的扩展\n\nAlien| 阅读:1795 发表时间:2018-04-08 10:37:08 JavaScript\n\n``````var arr=Array.of(1,2,3,4)\nconsole.log(arr) // [1,2,3,4]``````\n\n1.可以将类似数组的对象或者可遍历的对象转换成真正的数组。\n\n``````let ele = document.getElementsByTagName('a');\nele instanceof Array; //结果:false,不是数组\nele instanceof Object; //结果:true,是对象\nArray.from(ele) instanceof Array; //结果:true,是数组``````\n\n2.将字符串转换成数组\n\n`````` let str='hello';\nlet newarr=Array.from(str)\nconsole.log(newarr) // ['h','e','l','l','o']``````\n\n``````let arr=[1,3,5,6,7]\nlet x=arr.find(function(num){\nreturn num<8 // 1\nnum>2 //3\nnum>9 // underfined\n})``````\n\n``````let arr=[1,3,5,6,7];\nlet x=arr.findIndex(function(num){\nreturn num<8 // 0\nnum>2 // 1\nnum>9 // -1\n})``````\n\n``````let arr=[1,3,5,6,7]\narr.fill(10,0,1)\nconsole.log(arr) // [10,3,5,6,7]``````\n\n``````for(let[key,value] of ['aa','bb'].entries()){\nconsole.log(key,value); // 0 'aa' 1 'bb'\n}``````\n\n``````for(let index of ['aa','bb'].keys()){\nconsole.log(index); // 0 1\n}``````\n\n``````for(let value of ['aa','bb'].values()){\nconsole.log(value); // aa bb\n}``````\n\n*本文由Alien发表并编辑,转载此文章请附上出处及本页链接。如有侵权,请联系本站删除。"
] | [
null
] | {"ft_lang_label":"__label__zh","ft_lang_prob":0.5623868,"math_prob":0.9792599,"size":1344,"snap":"2020-45-2020-50","text_gpt3_token_len":669,"char_repetition_ratio":0.11492537,"word_repetition_ratio":0.0,"special_character_ratio":0.34151787,"punctuation_ratio":0.22408026,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98479545,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-21T11:01:58Z\",\"WARC-Record-ID\":\"<urn:uuid:19cf1916-2762-46a7-850c-066f433cc97e>\",\"Content-Length\":\"17202\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a03d68e2-7731-4ac6-94bc-071cbe63c9f5>\",\"WARC-Concurrent-To\":\"<urn:uuid:b3c1c4f2-3d52-4fbf-8eb9-6a5783069a4d>\",\"WARC-IP-Address\":\"47.106.69.132\",\"WARC-Target-URI\":\"https://alien.ren/article/7.html\",\"WARC-Payload-Digest\":\"sha1:ITYNWYO4KRJQYLO4YAFWPSDSKSASAN4Z\",\"WARC-Block-Digest\":\"sha1:ZZBTPDMUJNDZCRT7HFLJW6M74TC544GW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107876307.21_warc_CC-MAIN-20201021093214-20201021123214-00182.warc.gz\"}"} |
https://itectec.com/matlab/matlab-multiple-plot-lines-on-a-single-plot/ | [
"# MATLAB: Multiple Plot lines on a single plot\n\nMATLABmultiple plotsplot linesplotting coordinates\n\nHi\nI would like to join x,y coordinates for 2 points together on a plot. But also plot all 361 lines.\nI have all coordinates saved as 361 x 1\nx coordinate left = xl\ny coordinate left = yl\nx coordinate right = xr\ny coordinate right = yr\nI can combine into 2 columns if required so which will determine a point on a plot a(1), a(2), b(1), b(2) etc to 361\na = [xl, yl]\nb = [xr, yr]\nBut I basically need to plot a line between\na(1) and b(1)\na(2) and b(2)\nand so on for 361 seperate lines on 1 plot.\nIs this possible and is there a quicker way than using hold on 361 times?\n``x = [xl.'; xr.']; % xl, xr, yl and yr should be column vectorsy = [yl.'; yr.'];plot(x, y)``"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.81053454,"math_prob":0.99893636,"size":609,"snap":"2020-34-2020-40","text_gpt3_token_len":186,"char_repetition_ratio":0.14214876,"word_repetition_ratio":0.0,"special_character_ratio":0.3136289,"punctuation_ratio":0.07042254,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9986536,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-09T21:02:51Z\",\"WARC-Record-ID\":\"<urn:uuid:b46bb214-0c68-4152-a9ab-d75f1f9c29f3>\",\"Content-Length\":\"12672\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:98c9956d-1e9f-483e-baf1-48ebebecd833>\",\"WARC-Concurrent-To\":\"<urn:uuid:f94a66de-d76c-44e3-8c72-16e4473d9b3b>\",\"WARC-IP-Address\":\"45.76.174.189\",\"WARC-Target-URI\":\"https://itectec.com/matlab/matlab-multiple-plot-lines-on-a-single-plot/\",\"WARC-Payload-Digest\":\"sha1:57R53LSVEZYNK4JKE2FMLYMBXCF2CZOT\",\"WARC-Block-Digest\":\"sha1:KXHO7OQKSBZDA2Z2TOUGS32ONWTRF6KZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439738573.99_warc_CC-MAIN-20200809192123-20200809222123-00485.warc.gz\"}"} |
https://academictask.com/maths/mathematics-mcqs/67/ | [
"661. A bag contains 1 rupee, 2 rupee and 5 rupee coins amounting to Rs.264. If the ratio of the number of these coins is 3:5:4, find the number of 1 rupee coins.\nA. 66\nB. 48\nC. 24\nD. 8\n\nExplanation:\nLet the number of 1 rupee, 2 rupee and 5 rupee coins be 3x, 5x and 4x. (1*3x)+(2*5x)+(5*4x)=264 3x+10x+20x=264 33x=264 x=8 No. of 1 rupee coins\n= 3x = 24\n\n662. Divide Rs.2000 into two shares in the ratio 3:2.\nA. Rs.1200,Rs.800\nB. Rs.800,Rs.1200\nC. Rs.1500,Rs.500\nD. Rs.1100, Rs.900\n\nExplanation:\n3x+2x=2000 5x=2000 x=400 3x=1200 2x=800\n\n663. The ratio of two numbers is 5:9. If each number is decreased by 5, the ratio becomes 5:11. Find the numbers.\nA. 30, 19\nB. 21, 37\nC. 15, 34\nD. 15, 27\n\nExplanation:\nLet the two numbers be 5x and 9x.\n(5x-5)/(9x-5) = 5:11\n(5x-5)*11 = (9x-5)*5\n55x – 55 = 45x – 25\n10x = 30\nx = 3\nTherefore, the numbers are 15 and 27.\n\n664. The third proportional to 3.5, 5.6 is:___________?\nA. 8.96\nB. 8\nC. 4.5\nD. 6.2\n\nExplanation:\n3.5/5/6 = 5.6/x ; x = 5.6 * 5.6 / 3.5 = 8.96\n\n665.mAccording to a recipe, 400 grams of flour should be mixed with 500 grams of sugar to bake cookies. If I have only 300 grams of flour, how much sugar should I mix to maintain the same proportion?\nA. 360\nB. 380\nC. 375\nD. 400\n\nExplanation:\nFlour and sugar have to mixed in the ratio 4:5 4:5::300:x 4x=1500 x=375 375 grams of sugar should be mixed.\n\n666. X, Y and Z are quantities of the same kind such that X:Y=5:8 and Y:Z=4:7. Find X:Z.\nA. 32:35\nB. 67:56\nC. 5:14\nD. 5:7\n\nExplanation:\nX:Z=(X:Y)*(Y:Z) =(5:8)*(4:7) = (5/8)*(4/7) = 5/14\n\n667. Two kinds of rice, 1st costs Rs.13 per kg and 2nd costs Rs.19 per kg are mixed together. Find the ratio in which the 2 types are mixed so that the mixture costs Rs.14.2 per kg?\nA. 3:1\nB. 4:1\nC. 3:4\nD. 4:3\n\nExplanation:\nLet the total quantity of mixture be 1. If quantity of 1st type is x then quantity of the 2nd type will be (1-x)\nTherefore, 13x + 19(1-x) = 14.2\n13x -19x + 19 = 14.2\n4.8 = 6x\nx = 0.8 ; (1-x) = 0.2\nTherefore, the ratio is 0.8 : 0.2 = 4:1\n\n668. A, B, C and D divide a sum of money among themselves in the ratio 7:4:3:2. If D gets Rs. 500 less than A, find the total amount.\nA. Rs.100\nB. Rs.700\nC. Rs.1600\nD. Rs.2000\n\nExplanation:\nLet the amount be divided into 7x,4x,3x and 2x.\n7x-2x = 500 5x=500 x = 100\nTotal amount = 7x+4x+3x+2x = 16x = Rs.1600\n\n669. x is 20% more than z and y is 80% more than z. Find x:y.\nA. 1:4\nB. 3:4\nC. 1:3\nD. 2:3\n\nExpanation:\nx = z+(20/100)z = 1.2z y = 1.8z x:y = 1.2z:1.8z = 12:18 = 2:3\n\n670. Solve for x\n2x:25 = 6:(x/3)\nA. 18\nB. 15\nC. 12\nD. 5\n\nExplanation:\n2x/25 = 6/(x/3) 2x/25 = 18/x 2x*x = 18*25 x² = 9*25 x=3*5 x=15"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.79604214,"math_prob":0.9998821,"size":2585,"snap":"2023-40-2023-50","text_gpt3_token_len":1214,"char_repetition_ratio":0.12475785,"word_repetition_ratio":0.015238095,"special_character_ratio":0.5156673,"punctuation_ratio":0.23837902,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999176,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-11T22:18:09Z\",\"WARC-Record-ID\":\"<urn:uuid:faee38d4-e2b0-4cb8-90df-7f5691057249>\",\"Content-Length\":\"65754\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2fc4c6a5-2024-4f66-95b1-c0321cd61553>\",\"WARC-Concurrent-To\":\"<urn:uuid:6659a053-69a4-4af6-95b4-5f1393fae2aa>\",\"WARC-IP-Address\":\"104.21.23.157\",\"WARC-Target-URI\":\"https://academictask.com/maths/mathematics-mcqs/67/\",\"WARC-Payload-Digest\":\"sha1:NQ72EW6RJ5WMBNWGJTYF2X4O532W3WJ2\",\"WARC-Block-Digest\":\"sha1:ATKE54DKTGNYXJ7FHMLNQLWHY6BV4AIC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679518883.99_warc_CC-MAIN-20231211210408-20231212000408-00838.warc.gz\"}"} |
https://zbmath.org/0969.19004 | [
"## Motivic symmetric spectra.(English)Zbl 0969.19004\n\nLet $$T$$ denote the quotient of sheaves $${\\mathbb A}^1/({\\mathbb A}^1-0)$$ in motivic homotopy theory, and let $$X$$ denote a $$T$$-spectrum. Morel and Voevodsky introduced a motivic stable category, which can be obtained by formally inverting the functor $$X\\rightarrow T\\wedge X$$. This category is fundamental for Voevodsky’s proof of the Milnor conjecture. The underlying paper gives a method for importing the stable homotopy theory of symmetric spectra as developed by M. Hovey, B. Shipley and J. Smith [J. Am. Math. Soc. 13, 149-208 (2000; Zbl 0931.55006)] into Morel’s and Voevodsky’s motivic stable category.\nThe paper consists of four chapters, two appendices and an index. The first chapter supplies the necessary tools such as motivic homotopy theory, controlled fibrant models, Nisnevich descent and flasque simplicial presheaves. The second chapter deals with motivic stable categories. Topics discussed are level structures, compact objects, stable closed model structures, change of suspension and bounded cofibrations. The third chapter is on fibre and cofibre sequences. It is subdivided into four sections on exact sequences for $$S^1$$-spectra, weighted stable homotopy groups, fibre and cofibre sequences, and $$T$$-suspensions and $$T$$-loops, respectively. The fourth chapter gives the main results. It discusses motivic symmetric spectra. Subjects dealt with are level structures, stable structures, smash product, equivalence of stable categories, and symmetric $$S^1$$-spectra. On the whole the paper is rather technical and probably meant for specialists in the field. The key result states that “The category $${\\mathcal S}pt^{\\Sigma}_T(Sm|_S)_ {\\text{Nis}}$$ of symmetric $$T$$-spectra on the smooth Nisnevich site, and the classes of stable equivalences, stable fibrations and stable cofibrations, together satisfy the axioms for a proper closed simplicial model category”. The same result holds for the category $${\\mathcal S}pt^{\\Sigma}_{S^1}(Sm|_S)_{\\text{Nis}}$$ of symmetric $$S^1$$-spectra. The first appendix is on properness, and the second deals with motivic homotopy theory of presheaves.\n\n### MSC:\n\n 19E15 Algebraic cycles and motivic cohomology ($$K$$-theoretic aspects) 14F42 Motivic cohomology; motivic homotopy theory 55P42 Stable homotopy theory, spectra 18D15 Closed categories (closed monoidal and Cartesian closed categories, etc.) 14F35 Homotopy theory and fundamental groups in algebraic geometry 55U35 Abstract and axiomatic homotopy theory in algebraic topology\n\nZbl 0931.55006\nFull Text:"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8297317,"math_prob":0.99264437,"size":3057,"snap":"2023-14-2023-23","text_gpt3_token_len":819,"char_repetition_ratio":0.1431379,"word_repetition_ratio":0.0,"special_character_ratio":0.24566568,"punctuation_ratio":0.15789473,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99730057,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-05-31T09:26:38Z\",\"WARC-Record-ID\":\"<urn:uuid:b4e1bd19-213d-4ada-89a8-301dd9a2ec72>\",\"Content-Length\":\"55387\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:39bcb548-15a2-4fb7-a571-783e7efc56c9>\",\"WARC-Concurrent-To\":\"<urn:uuid:dfa7f3fe-83e5-465d-a088-4db47e7b2516>\",\"WARC-IP-Address\":\"141.66.194.2\",\"WARC-Target-URI\":\"https://zbmath.org/0969.19004\",\"WARC-Payload-Digest\":\"sha1:OOM2FQL6QBVG5RM6H5XASUX6GWXX5V5M\",\"WARC-Block-Digest\":\"sha1:YEPB77A4GRYYZ4OBSPALNLY5CHROKBG5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224646457.49_warc_CC-MAIN-20230531090221-20230531120221-00675.warc.gz\"}"} |
https://www.hackmath.net/en/word-math-problems/right-triangle?page_num=7 | [
"# Right triangle - math word problems\n\n1. Cube diagonals",
null,
"Calculate the length of the side and the diagonals of the cube with a volume of 27 cm3.\n2. Diamond diagonals",
null,
"Find the diamond diagonal's lengths if the area is 156 cm2 and side is 13 cm long.\n3. Waste",
null,
"How many percents are waste from a circular plate with a radius of 1 m from which we cut a square with the highest area?\n4. Diagonals of a rhombus 2",
null,
"One diagonal of a rhombus is greater than other by 4 cm . If the area of the rhombus is 96 cm2, find the side of the rhombus.\n5. Spruce height",
null,
"How tall was spruce that was cut at an altitude of 8m above the ground and the top landed at a distance of 15m from the heel of the tree?\n6. Pilot",
null,
"How high is the airplane's pilot to see 0.001 of Earth's surface?\n7. Isosceles trapezium",
null,
"Calculate the area of an isosceles trapezium ABCD if a = 10cm, b = 5cm, c = 4cm.\n8. Isosceles trapezoid",
null,
"The old father decided to change the top plate of an isosceles-like trapezoid with the basic dimensions of 120 cm and 60 cm, and the shoulder is 50 centimeters long. How much does it pay for a new plate and a square meter worth 17 euros?\n9. Diagonals of the rhombus",
null,
"How long are the diagonals e, f in the diamond, if its side is 5 cm long and its area is 20 cm2?\n10. Rectangle diagonal",
null,
"The rectangle, one side of which is 5 cm long, is divided by a 13 cm diagonal into two triangles. Calculate the area of one of these triangles in cm2.\n11. Body diagonal",
null,
"Calculate the cube volume, whose body diagonal size is 75 dm. Draw a picture and highlight the body diagonal.\n12. Rectangular trapezoid",
null,
"The ABCD rectangular trapezoid with the AB and CD bases is divided by the diagonal AC into two equilateral rectangular triangles. The length of the diagonal AC is 62cm. Calculate trapezium area in cm square and calculate how many differs perimeters of the\n13. RT - inscribed circle",
null,
"In a rectangular triangle has sides lengths> a = 30cm, b = 12.5cm. The right angle is at the vertex C. Calculate the radius of the inscribed circle.",
null,
"Adam placed the ladder of the house, the upper end reaching to the window at the height of 3.6m, and the lower end standing on level ground and was distant from a wall of 1.5m. What is the length of the ladder?\n15. Rectangular garden",
null,
"The sides of the rectangular garden are in ratio 1: 2. The diagonal has a length of 20 meters. Calculate the area and perimeter of the garden.\n16. Right 24",
null,
"Right isosceles triangle has an altitude x drawn from the right angle to the hypotenuse dividing it into 2 unequal segments. The length of one segment is 5 cm. What is the area of the triangle? Thank you.\n17. Right isosceles triangle",
null,
"Right isosceles triangle has an altitude x drawn from the right angle to the hypotenuse dividing it into 2 equal segments. The length of one segment is 5 cm. What is the area of the triangle?\n18. The garden",
null,
"The garden has the shape of a rectangular trapezium. The bases have lengths of 27 meters and 36 meters, the trapezoid's height is 12 meters. Calculate how much fence will cost this garden if one meter costs 1.5 €?\n19. The mast",
null,
"The top of the pole we see at an angle of 45°. If we approach the pole by 10 m, we see the top of the pole at an angle of 60°. What is the height of the pole?\n20. Three points",
null,
"Three points A (-3;-5) B (9;-10) and C (2;k) . AB=AC What is value of k?\n\nDo you have an interesting mathematical word problem that you can't solve it? Enter it, and we can try to solve it.\n\nWe will send a solution to your e-mail address. Solved examples are also published here. Please enter the e-mail correctly and check whether you don't have a full mailbox.\n\nPlease do not submit problems from current active competitions such as Mathematical Olympiad, correspondence seminars etc..."
] | [
null,
"https://www.hackmath.net/thumb/13/t_7513.jpg",
null,
"https://www.hackmath.net/thumb/7/t_7507.jpg",
null,
"https://www.hackmath.net/thumb/71/t_7471.jpg",
null,
"https://www.hackmath.net/thumb/20/t_7420.jpg",
null,
"https://www.hackmath.net/thumb/14/t_7414.jpg",
null,
"https://www.hackmath.net/thumb/19/t_7419.jpg",
null,
"https://www.hackmath.net/thumb/95/t_7395.jpg",
null,
"https://www.hackmath.net/thumb/16/t_7416.jpg",
null,
"https://www.hackmath.net/thumb/99/t_7399.jpg",
null,
"https://www.hackmath.net/thumb/77/t_7277.jpg",
null,
"https://www.hackmath.net/thumb/78/t_7278.jpg",
null,
"https://www.hackmath.net/thumb/89/t_7289.jpg",
null,
"https://www.hackmath.net/thumb/51/t_7251.jpg",
null,
"https://www.hackmath.net/thumb/37/t_7237.jpg",
null,
"https://www.hackmath.net/thumb/27/t_7227.jpg",
null,
"https://www.hackmath.net/thumb/16/t_7216.jpg",
null,
"https://www.hackmath.net/thumb/17/t_7217.jpg",
null,
"https://www.hackmath.net/thumb/12/t_7212.jpg",
null,
"https://www.hackmath.net/thumb/93/t_7193.jpg",
null,
"https://www.hackmath.net/thumb/81/t_7181.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.89518195,"math_prob":0.99226075,"size":3451,"snap":"2020-10-2020-16","text_gpt3_token_len":926,"char_repetition_ratio":0.15694807,"word_repetition_ratio":0.08071749,"special_character_ratio":0.2526804,"punctuation_ratio":0.093055554,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9990502,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-03-30T18:36:17Z\",\"WARC-Record-ID\":\"<urn:uuid:9bf27f2c-2d99-4c4a-ba03-67cb0e918bee>\",\"Content-Length\":\"29219\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7ff09d61-546a-4b44-bfc2-e677adf4f4dd>\",\"WARC-Concurrent-To\":\"<urn:uuid:3cad97f8-4d0e-4c89-a68a-a1e4027d51cc>\",\"WARC-IP-Address\":\"104.24.105.91\",\"WARC-Target-URI\":\"https://www.hackmath.net/en/word-math-problems/right-triangle?page_num=7\",\"WARC-Payload-Digest\":\"sha1:K3DI46GBA4HQDFG6TYTQ2EYGQQQGZYAR\",\"WARC-Block-Digest\":\"sha1:TBU75MCTVN4SOG7WO7SC4PN7MSUJOT6K\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585370497301.29_warc_CC-MAIN-20200330181842-20200330211842-00387.warc.gz\"}"} |
https://www.masterorganicchemistry.com/2010/07/19/from-gen-chem-to-organic-chem-pt-3-effective-nuclear-charge/ | [
"From Gen Chem to Organic Chem, Pt. 3 – Effective Nuclear Charge\n\nLast updated: March 18th, 2021 |\n\nIf you’ve taken a physics course, you’ve probably covered the thrilling (to some) topic of electrostatics. There’s a basic formula which allows you to calculate the force between two point charges. We call this relationship Coulomb’s Law.",
null,
"For our purposes we’re not really interested in quantifying the magnitude of the forces. This is the point where you can make fun of organic chemists for not liking math. I prefer to think of it as that we’re more interested in understanding how different variables are related to each other. So let’s boil it down a little bit and then go into detail.We’ll call the first charge q1 the electron charge. This is always going to be –1. The second component q2 is the nuclear charge. This will vary with the number of protons in the nucleus, as we shall see. Finally, we have r, the distance between them. Finally, we’ll get rid of the equals sign and the constant to highlight the proportional nature of these relationships.",
null,
"One thing to note – the signs are opposite, so we’ll get a negative number, which implies an attractive force. If the charges were the same, the force would be the same but act in the opposite direction (i.e. repulsion). Secondly, while it’s true that the electrons are also going to repel each other, it’s relatively small for our purposes and we’re going to ignore it.\n\nTwo observations:\n\n1) the force falls off in proportion to the square of the distance. So in our example, as the electron moves twice as far away from the nucleus, the force will be 1/4 as great. Remember that electrons can’t just go anywhere – their positions are determined by the orbitals. So as the orbitals fill up, the interaction between the valence electrons (the highest energy electrons) and the nucleus is going to be less.\n\n2) Since the electronic charge is constant, the magnitude of the interaction is going to be extremely dependent on the nuclear charge. How do we calculate the nuclear charge? Let’s look at sodium. The valence electron in sodium is in the 3s orbital. The atomic number of sodium is 11, meaning it has 11 protons in the nucleus. So naïvely we might assign +11 as the nuclear charge. This doesn’t make much sense though. If you look at the chart of ionization energies versus atomic number, they have a periodic relationship, so something more complicated is going on.",
null,
"What’s actually going on is that the lower energy electrons in the filled shells are shielding the nuclear charge. So to get a better picture of the true force acting on the electron, we have to account for that. We call this the effective nuclear charge, or Zeff.\n\nIf we naïvely ignore the repulsive effects of the electrons on each other [and also ignore the howls of outrage from the theoretical chemists as we do so], we can come up with a rough value for the effective nuclear charge as follows:\n\nZeff = atomic number – atomic number of the preceding noble gas.\n\nUsing this, we get Zeff(Na) = 11 – 10 [Ne] = 1\n\nUsing this formula, all the alkali metals have a Zeff of +1, all the alkaline earths have a Zeff of +2, the halogens are +7 and so on.\n\nIt is this number, the Zeff, which has the largest impact on the personalities of each of the classes of atoms.\n\nSo based on our equation we can confidently make several important predictions\n\n1. Ionization energy (the energy required to take one electron away) is going to decrease as we go down the periodic table, since we are increasing the distance. This is most vividly demonstrated by the reactions of the different alkali metals with water.\n2. Ionization energy is going to increase as we move across the periodic table, since we’re increasing Zeff.\n3. Atomic radius will increase as we go down the periodic table, since the electrons will be less tightly held.\n4. Zeff reaches a maximum at the noble gas configuration (Zeff = 8). With this electronic configuration, each electron feels the pull of 8 protons on it.\n\nThis is the deep reason for the octet rule. If we tried to add an extra electron to neon (which would have to go into the 3s orbital) the electron would feel a Zeff of 0; we’d be putting it in an orbital where the attraction between the nucleus and the electron would be shielded by the intermediate electrons. Conversely, trying to remove an electron from neon is difficult becuase each electron feels the pull of eight protons on it, which is a very powerful electrostatic interaction.\n\nThis equation also goes a long way toward explaining the electronegativity of elements, Why is fluorine the most electronegative element? Because the electrons feel the highest effective nuclear charge (Z=7) and they are the closest to the nucleus (i.e. d is the smallest).\n\nElectronegativity has a huge effect in organic chemistry, as we will see. So understanding how it is generated from the formula above will help to give you an intuitive feel for how it works.\n\nComment section\n\n15 thoughts on “From Gen Chem to Organic Chem, Pt. 3 – Effective Nuclear Charge”\n\n1.",
null,
"Iz says:\n\nI’m a little confused about how effective nuclear charge relates to electronegativity and ionization energy. Effective nuclear charge is basically the charge of the nucleus “felt” by the valence electrons after the blocking effect of the shielding electrons has been taken into account, right? I don’t know why I’m confused. Maybe you can sum up ENC and how it relates to other periodic trends in other terms? I think it may be because it has been put in my mind to think of things in terms of the octet rule, rather than understanding where the octet rule itself originates and going from there. Thanks, love this blog.\n\n1.",
null,
"james says:\n\nWhat we commonly refer to as “electronegativity” is in fact the attractive force exerted by the nucleus towards electrons in the valence shell. With fluorine essentially you’ve got 7 protons in the nucleus [not counting the 2 whose effects are cancelled by the electrons in the 1s valence shell] exerting an attractive force on 7 electrons in hybrid orbitals of identical energy. You can imagine the effect would be less for oxygen (6 protons) and greater for neon (8 protons), a fact borne out by their measured ionization energies.\n\n2.",
null,
"Anu says:\n\nMy textbook says,\nZeff = Z – S ; where,\nZ is the number of protons in the nucleus (atomic number), and\nS is the average number of electrons between the nucleus and the electron in question (the number of nonvalence electrons).\n\n1.",
null,
"SaO. p says:\n\nWhere Z= atomic number\nand S= number of non-valence electrons\nFor Na:\nZ=11; S=10(since the valence electron, which is the electron found in the outermost shell, is 1)\nZeff=Z-S\nZeff=11-10= +1.\n\n2.",
null,
"Clinton says:\n\nYou are both correct! Think back to electron configurations; there was a short hand and long hand way of writing them. The short hand allows us to ignore all the non-valence electrons and replace them instead with the preceding noble gas. The atomic number of the preceding noble gas = the number of non-valence electrons for an atom of interest. So when James says to subtract the atomic number of the preceding noble gas, that means the same thing as the equation Zeff = Z – S, because S represents the non-valence electrons.\n\n3.",
null,
"Saloni says:\n\nHey\nI’m a bit confused as well. Can you put up a diagram of what is happening here?\nThanks. :)\n\n4.",
null,
"Saloni says:\n5.",
null,
"Jounayd says:\n\nWe created this octet rule checker page in order to help the user to see if the element confirm the octet rule or not. http://mon-ip.awardspace.com/octetrule/\n\n6.",
null,
"jounayd says:\n\nYou can visit my octet rule checker tool here http://mon-ip.awardspace.com/octetrule/ you have to enter the element symbol and you get the explanation\n\n7.",
null,
"SaO. p says:\n\nThanks a whole lot for this website! My fear of gen-chem & orgo is diminishing by every read of your subsequent articles.\n\n8.",
null,
"Shannon says:\n\nYou’re an angel for taking the time to explain all of this. I’ve always wondered what the mathematical/conceptual explanation for all of those numbers is. (I suppose because most chemistry classes gloss over the magnetic quantum number). This makes so much sense, thank you.\n\n1.",
null,
"James Ashenhurst says:\n\nGlad you found it helpful Shannon. The teacher I had in 3rd year quantum chem (Axel Becke) did a fantastic job of explaining all the components of the wave equation.\n\n9.",
null,
"Vikram Nathan says:\n\nThis is incredibly helpful when reviewing for an upcoming Organic Chemistry course. Thank you!!\n\n1.",
null,
"James Ashenhurst says:"
] | [
null,
"data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==",
null,
"data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==",
null,
"data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==",
null,
"data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==",
null,
"data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==",
null,
"data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==",
null,
"data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==",
null,
"data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==",
null,
"data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==",
null,
"data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==",
null,
"data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==",
null,
"data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==",
null,
"data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==",
null,
"data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==",
null,
"data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==",
null,
"data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==",
null,
"data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9114755,"math_prob":0.9634025,"size":8636,"snap":"2022-05-2022-21","text_gpt3_token_len":1934,"char_repetition_ratio":0.14457831,"word_repetition_ratio":0.027665317,"special_character_ratio":0.2153775,"punctuation_ratio":0.0975754,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96931064,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-27T23:28:31Z\",\"WARC-Record-ID\":\"<urn:uuid:a1a782be-e31f-4ef6-aa6e-a132652efd6b>\",\"Content-Length\":\"263045\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3c585e92-f063-4cf0-b82a-81f319dd4e7a>\",\"WARC-Concurrent-To\":\"<urn:uuid:d3362020-d473-4673-ae6b-11f557057323>\",\"WARC-IP-Address\":\"52.1.96.42\",\"WARC-Target-URI\":\"https://www.masterorganicchemistry.com/2010/07/19/from-gen-chem-to-organic-chem-pt-3-effective-nuclear-charge/\",\"WARC-Payload-Digest\":\"sha1:VB2TYF47DMVENXPVNKHIZQMYWDNBLOZD\",\"WARC-Block-Digest\":\"sha1:V4ERKJLXN2A35IWLE5RDSZFNRQQXGCWO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320305317.17_warc_CC-MAIN-20220127223432-20220128013432-00430.warc.gz\"}"} |
https://www.elearningfrench.com/french-course-1-lesson-9-grammar-3.html | [
"## Lesson 9 - Grammar 3 - Order of object pronouns\n\nExamples / Exemples:\nVoulez-vous me la montrer?\nLa femme de chambre va vous y conduire.\nDonnez-m'en quatre.\n\nAll the object pronoun forms and their positions relative to the verb have been presented.\n\nThe object pronoun forms can occur in sequence; more than two never occur. The sequence is shown in the tables below:\n\na. All cases except the affirmative imperative.\n\nSequence\nme\n\nnous\n\nvous\nle\n\nla\n\nles\n\nlui\n\nleur\n\ny\n\nen\n\nVerb\n\nb. Affirmative imperative.\n\nSequence\n\nVerb\nle\n\nla\n\nles\nnous\n\nvous\n\nlui\n\nleur\n\nm'\ny\n\nen\nmoi\n\n## Order of object pronouns - Learn French grammar through examples\n\nObserve, listen and repeat:\n\nLearning 1\n\n1. Il me l'a apporté.\n2. Il me l'a dit.\n3. Il me l'a laissé.\n4. Il me l'a donné.\n5. Il me l'a présenté.\n7. Il l'a envoyé.\n8. Il me l'a apporté.\n\nLearning 2\n\n1. Il me les a apportés.\n2. Il me les a lus.\n3. Il me les a laissés.\n4. Il me les a donnés.\n5. Il me les a présentés.\n6. Il me les a traduits.\n7. Il me les a envoyés.\n8. Il me les a apportés.\n\nLearning 3\n\n1. Je vous les ai montés.\n2. Je vous les ai laissés.\n3. Je vous les ai apportés.\n4. Je vous les ai donnés.\n5. Je vous les ai envoyés.\n6. Je vous les ai traduits.\n7. Je vous les ai achetés.\n8. Je vous les ai présentés.\n\nLearning 4\n\n1. Je vous l'ai dit.\n2. Je vous l'ai lu.\n3. Je vous l'ai apporté.\n4. Je vous l'ai écrit.\n5. Je vous l'ai envoyé.\n6. Je vous l'ai donne.\n7. Je vous l'ai demandé.\n9. Je vous l'ai présenté.\n\nLearning 5\n\n1. Vous nous l'avez dit. .\n2. Vous nous l'avez apporté.\n3. Vous nous l'avez présenté.\n4. Vous l'avez envoyé.\n5. Vous nous l'avez demandé.\n6. Vous nous l'avez écrit.\n7. Vous nous l'avez donné.\n9. Vous nous l'avez lu.\n\nLearning 6\n\n1. Nous vous l'avons apporté.\n2. Nous vous l'avons présenté.\n3. Nous vous l'avons dit.\n4. Nous vous l'avons écrit.\n6. Nous vous l'avons demandé.\n7. Nous vous l'avons envoyé.\n\nLearning 7\n\n1. Vous nous les avez achetés.\n2. Vous nous les avez demandés.\n3. Vous nous les avez envoyés.\n4. Vous nous les avez apportés.\n5. Vous nous les avez traduits.\n6. Vous nous les avez présentés.\n7. Vous nous les avez laissés.\n8. Vous nous les avez achetés.\n\nLearning 8\n\n1. Je vous en ai apporté.\n2. Je vous en ai laissé.\n3. Je vous en ai traduit.\n4. Je vous en ai acheté.\n5. Je vous en ai pris.\n6. Je vous en ai monté.\n7. Je vous en ai donné.\n8. Je vous en ai demandé.\n\nLearning 9\n\n1. Ils m'en ont apporté.\n2. Ils m'en ont laissé.\n4. Ils m'en ont acheté.\n5. Ils m'en ont pris.\n6. Ils m'en ont monté.\n7. Ils m'en ont donné.\n8. Ils m'en ont demandé.\n\nLearning 10\n\n1. Je vous y ai conduit, n'est-ce pas?\n2. Il vous y a conduit, n'est-ce pas?\n3. Ils vous y ont conduit, n'est-ce pas?\n4. Elle vous y a conduit, n'est-ce pas?\n5. Elles vous y ont conduit, n'est-ce pas?\n6. On vous y a conduit, n'est-ce pas?\n7. Nous les y avons conduits, n'est-ce pas?\n8. Vous les y avez conduits, n'est-ce pas?\n9. Elles les y ont conduits, n'est-ce pas?\n10. On les y a conduits, n'est-ce pas?\n11. Il les y a conduits, n'est-ce pas?\n12. Ils nous y ont conduits, n1est-ce pas?\n13. Elle nous y a conduits, n'est-ce pas?\n14. On nous y a conduits, n'est-ce pas?\n15. Vous nous y avez conduits, n'est-ce pas?\n16. Nous vous y avons conduits, n'est-ce pas?\n17. Je les y ai conduits, n'est-ce pas?\n18. Ils les y ont conduits, n'est-ce pas?\n\nLearning 11\n\n1. Ils m'y ont conduit dans l'après-midi.\n2. Il m'y a conduit dans l'après-midi.\n3. Elles m'y ont conduit dans l'après-midi.\n4. Elle m'y a conduit dans l'après-midi.\n5. On m'y a conduit dans l'après-midi.\n6. Vous m'y avez conduit dans l'après-midi.\n7. Je l'y ai conduit dans l'après-midi.\n8. Elle l'y a conduit dans l'après-midi.\n9. Elles l'y ont conduit dans l'après-midi.\n10. Il l'y a conduit dans l'après-midi.\n11. Ils l'y ont conduit dans l'après-midi.\n12. On l'y a conduit dans l'après-midi.\n13. Vous l'y avez conduit dans l'après-midi.\n14. Nous l'y avons conduit dans l'après-midi.\n\nLearning 12\n\n1. Ils m’en ont laissé.\n2. Ils vous en ont laissé.\n3. Ils nous en ont laissé.\n4. Il me les a laissés.\n5. Il vous les a laissés.\n6. Il nous les a laissés.\n7. Ils m'en ont laissé.\n\nLearning 13\n\n1. Il lui en a apporté.\n2. Il nous en a apporté.\n3. Il vous en a apporté.\n4. Il m'en a apporté.\n5. Il le lui a apporté.\n6. Il le leur a apporté.\n7. Il les leur a apportés.\n8. Il les lui a apportés.\n9. Il la lui a apportée.\n10. Il la leur a apportée.\n11. Il lui en a apporté.\n12. Il leur en a apporté.\n\nLearning 14\n\n1. Vous lui en avez envoyé.\n2. Vous m'en avez envoyé.\n3. Vous nous en avez envoyé.\n4. Vous leur en avez envoyé.\n5. Vous la lui avez envoyée.\n6. Vous la leur avez envoyée.\n7. Vous les lui avez envoyés.\n8. Vous les leur avez envoyés.\n9. Vous le leur avez envoyé.\n10. Vous le lui avez envoyé."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.51880854,"math_prob":0.4591757,"size":567,"snap":"2020-10-2020-16","text_gpt3_token_len":164,"char_repetition_ratio":0.11545293,"word_repetition_ratio":0.0,"special_character_ratio":0.24691358,"punctuation_ratio":0.11320755,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95769316,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-21T10:25:50Z\",\"WARC-Record-ID\":\"<urn:uuid:4a497211-717f-47d5-a464-a0f513c0be23>\",\"Content-Length\":\"30647\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3f57c9ca-e0a2-4729-950c-50c052f830b2>\",\"WARC-Concurrent-To\":\"<urn:uuid:0b1482b6-9126-4363-bbea-943010c66ec3>\",\"WARC-IP-Address\":\"119.59.104.11\",\"WARC-Target-URI\":\"https://www.elearningfrench.com/french-course-1-lesson-9-grammar-3.html\",\"WARC-Payload-Digest\":\"sha1:HGJCKYHM2AQFKDUAR2BTU2F4MDCUHDOI\",\"WARC-Block-Digest\":\"sha1:CS3NXF45BL2FKPBHRORSGH7IC55GYR3U\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875145500.90_warc_CC-MAIN-20200221080411-20200221110411-00436.warc.gz\"}"} |
https://community.wolfram.com/groups/-/m/t/1732535?p_p_auth=3cuh5Ehz | [
"# [WSC19] A Computational Method to Predict X Ray Diffraction (XRD) Patterns\n\nPosted 2 years ago\n8981 Views\n|\n9 Replies\n|\n9 Total Likes\n|",
null,
"## Background\n\nEver wondered how DNA's double helix structure was discovered? How drugs are investigated? Well, welcome to the world of X-ray diffraction! 13 Nobel prizes were awarded for developments involving this old but effective technique, in fields ranging from physics to medicine. But, how is it so effective?\n\nXRD is a powerful technique employed in various domains of science to determine the chemical makeup and thereby physical properties of various structures. Each lattice structure has its own \"XRD fingerprint\" which keys scientists in to its chemical makeup. This fingerprint is characterized by peaks with different intensities at different angles. Here is a sample for a face-centered cubic copper lattice structure:",
null,
"The first image in this post is a comparison of experimental and predicted results for a Silver crystal structure.\n\nHowever, predicting these fingerprints given little experimental data is a mathematically involved procedure. This summer, as part of the Wolfram High School Summer Camp, I implemented a framework for predicting these fingerprints for various cubic lattice structures.\n\n## Getting the Bragg Peak Positions\n\nYou might be wondering what the numbers on the top of the peaks mean. These numbers are Miller indices, which are descriptions of the planes in a unit cell that are producing the peaks. The first step is to use these planes to generate the Bragg peak positions:\n\n$$d=\\frac{a}{\\sqrt{h^2+k^2+l^2}}$$ $$\\theta =2\\arcsin{\\left(\\frac{\\lambda}{2 d}\\right)}$$\n\nHere, $a$ denotes the lattice constant (length of a side in a cubic unit cell), $(h,k,l)$ denote the Miller indices, and $\\lambda$ denotes the wavelength of the X-ray used. This is based on Bragg's law, see https://demonstrations.wolfram.com/BraggsLaw/ .\n\nHowever, certain $(h,k,l)$ are forbidden in some structures. For example, in a body-centered cubic structure, $h+k+l$ has to be even. The function PossiblePlanes accounts for these and has access to an extensive dataset of compounds and their structures.\n\nTo make coding easier, a list of associations was made with a certain $\\theta$ being the key for a list of $hkl$ values.\n\ngrouped[elementlist_, n_] :=\nGroupBy[ PossiblePlanes[elementlist, n],\n1/Sqrt[(#[]^2 + #[]^2 + #[]^2)] &]\n\nassociation[elementlist_, n_, wavelength_] :=\nSort[MapThread[#1 -> #2 &, {ToTheta[wavelength, elementlist, n],\ngrouped[elementlist, n] // Values}]]\n\n\n## Atomic Form Factor\n\nTo account for different electron densities, atomic form factors were calculated using a dataset tabulated by the International Tables for Crystallography: http://it.iucr.org/Cb/ch6o1v0001/. These form factors vary by angle; shown below is copper:",
null,
"These atomic form factors are then used in the structure factor calculation, which is directly proportional to the square root of intensity. For unary systems, the structure factor calculation is relatively easy. For binary systems, however, the parity of the Miller indices must be taken into account.\n\nevenodd[b_, elementlist_, theta_, w_] :=\nIf[b, Total[atomdata[#, theta, w] & /@ elementlist],\nDifferences[atomdata[#, theta, w] & /@ elementlist] // First]\n\n\nHere, atomdata gives the atomic form factor at a specific point. This function is mapped to a set of True/False (Even or not) values and returns the structure factor. For a face-centered cubic cell, if the parity of $hkl$ is even, then the atomic form factors are summed, but if the parity is odd, the atomic form factors are subtracted.\n\n## Multiplicities\n\nNow, back to the Miller indices. Take a look at the following graphic:",
null,
"You might notice that if we reflect $(100)$ we can get $(010)$ and $(001)$ . We can also get negative indices, usually denoted $\\overline{1}$ instead of $-1$. This gives us 6 total planes that are symmetry-equivalent, and correspond to the same peak. Hence, we say that the class of Miller indices $(h00)$ has a multiplicity of 6. These multiplicities range from 6 to 48 for a cubic lattice structure, but can get as low as 2 with less symmetric structures.\n\nTherefore, instead of calculating the contributed intensity of each plane, we count them as one plane and multiply the resultant intensity by a specific multiplicity. This multiplicity is then used to calculate peak intensity.\n\n## Intensity Calculation\n\n$$I_{hkl}=\\underbrace{\\frac{1+\\cos^2 (2\\theta)}{\\sin^2(\\theta)}}_{\\text{Lorentz Polarization Correction}} \\times \\ \\ \\ \\ \\ \\text{Multiplicity}_{hkl} \\ \\ \\ \\times \\underbrace{F_{hkl}^2}_{\\text{Structure Factor}}$$\n\nThe Lorentz polarization correction was introduced to improve accuracy and match experimental conditions as X-rays will not be completely polarized at every angle.\n\nintensity[w_, elementlist_, n_] :=\nTranspose@{(association[Flatten @ elementlist, n, w] //\nKeys), (.5 (1 + (Cos[#])^2)/(Sin[#/2]^2 *\nCos[#/2])) & /@ (association[Flatten @ elementlist, n, w] //\nKeys) *\n(multiplicity /@ (Last /@ (association[Flatten @ elementlist, n,\nw] // Values)))*(structurefactor[elementlist,\nw, (association[Flatten @ elementlist, n, w])]) ^2 }\n\n\nintensity gives a list of Bragg peak positions and their respective intensities using the aforementioned formula. This intensity function is then inputed in a function which finally plots the diffraction pattern.\n\npeak[{theta_, intensity_}] :=\nintensity * Exp[-10000 Pi (t - theta)^2]\n\n\nWhere $t$ is the variable to be plotted against. Here is a comparison of the predicted XRD pattern vs the real diffracted pattern for a Copper FCC structure:",
null,
"The absolute intensities have little use, as relative intensities are primarily used to analyze these patterns.\n\n## Future Research\n\nFor future research, I have many ideas I want to implement. Thanks to Mr. Wolfram, I certainly have a lot to do this summer! Perhaps the most ambitious of my future plans is doing the inverse problem: predicting the lattice structure from a given XRD pattern.\n\n## Acknowledgements\n\nI would like to thank my mentor, Eryn Gillam, for helping me throughout my project. I would also like to thank the other mentors for their help, and Mohammad Bahrami for his lectures. Wolfram Summer Camp truly gave me an outlet to express my creativity in novel ways, and the two weeks I spent here were invaluable. Wolfram Summer Camp gave me a novel perspective on how to approach all aspects of life, and key insight into how computational thinking can change the world. For these reasons and more, I am beyond grateful to have been a part of this camp, and am looking forward to apply my new skills.\n\n## Computational Essay\n\nhttps://github.com/hamza314/WSS-Template/blob/master/Final%20Project/Final%20Submission/Hamza%20Alsamraee%20WSC19.nb",
null,
"Attachments:",
null,
"Answer\n9 Replies\nSort By:\nPosted 2 years ago\n Your project is really kul !!!",
null,
"Answer\nPosted 2 years ago\n Thanks Khang.",
null,
"Answer\nPosted 2 years ago\n Nice project! Those plots are spot on.",
null,
"Answer\nPosted 2 years ago\n Hopefully I will get them even more accurate over the summer!",
null,
"Answer\nPosted 2 years ago\n Nice job!",
null,
"Answer\nPosted 2 years ago\n Thanks so much Sunny!",
null,
"Answer\nPosted 2 years ago\n Nice! Really cool project!",
null,
"Answer\nPosted 2 years ago\n Thank you, hopefully I will improve it further soon!",
null,
"Answer\nPosted 2 years ago\n Well done Hamza! Looking forward to see how much you can go with the inverse problem, which is quite challenging. BTW, you've developed some interesting functions (e.g., intensity, peak and etc) that are suitable for Wolfram Function Repository. What about do you think about submitting them to WFR, then other users can use it?",
null,
"Answer"
] | [
null,
"https://community.wolfram.com//c/portal/getImageAttachment",
null,
"https://community.wolfram.com//c/portal/getImageAttachment",
null,
"https://community.wolfram.com//c/portal/getImageAttachment",
null,
"https://community.wolfram.com//c/portal/getImageAttachment",
null,
"https://community.wolfram.com//c/portal/getImageAttachment",
null,
"https://community.wolfram.com/community-theme/images/attachment-paperclip.png",
null,
"https://community.wolfram.com/community-theme/images/common/checked.png",
null,
"https://community.wolfram.com/community-theme/images/common/checked.png",
null,
"https://community.wolfram.com/community-theme/images/common/checked.png",
null,
"https://community.wolfram.com/community-theme/images/common/checked.png",
null,
"https://community.wolfram.com/community-theme/images/common/checked.png",
null,
"https://community.wolfram.com/community-theme/images/common/checked.png",
null,
"https://community.wolfram.com/community-theme/images/common/checked.png",
null,
"https://community.wolfram.com/community-theme/images/common/checked.png",
null,
"https://community.wolfram.com/community-theme/images/common/checked.png",
null,
"https://community.wolfram.com/community-theme/images/common/checked.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.86381143,"math_prob":0.9338951,"size":6684,"snap":"2021-43-2021-49","text_gpt3_token_len":1576,"char_repetition_ratio":0.10808383,"word_repetition_ratio":0.009118541,"special_character_ratio":0.2363854,"punctuation_ratio":0.12541528,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9889286,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-18T08:24:44Z\",\"WARC-Record-ID\":\"<urn:uuid:a77a6957-0797-4226-bcb8-7611f4afc5bf>\",\"Content-Length\":\"180173\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1d2151d3-c10b-4f64-8a6f-f5934bbb786a>\",\"WARC-Concurrent-To\":\"<urn:uuid:a14a7af0-df59-40ca-9328-52cfb427e501>\",\"WARC-IP-Address\":\"140.177.204.58\",\"WARC-Target-URI\":\"https://community.wolfram.com/groups/-/m/t/1732535?p_p_auth=3cuh5Ehz\",\"WARC-Payload-Digest\":\"sha1:4V5XO7KLTWMU25VNJQFH2ULZP26W5I2I\",\"WARC-Block-Digest\":\"sha1:6NSCDN7DDOD5VLBYMCC3ZZ5KKXJTGBBZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585199.76_warc_CC-MAIN-20211018062819-20211018092819-00554.warc.gz\"}"} |
https://www.ams.org/journals/tran/1996-348-05/S0002-9947-96-01480-8/home.html | [
"# Transactions of the American Mathematical Society\n\nPublished by the American Mathematical Society, the Transactions of the American Mathematical Society (TRAN) is devoted to research articles of the highest quality in all areas of pure and applied mathematics.\n\nISSN 1088-6850 (online) ISSN 0002-9947 (print)\n\nThe 2020 MCQ for Transactions of the American Mathematical Society is 1.43.\n\nWhat is MCQ? The Mathematical Citation Quotient (MCQ) measures journal impact by looking at citations over a five-year period. Subscribers to MathSciNet may click through for more detailed information.\n\n## Simultaneous rational approximation to binomial functionsHTML articles powered by AMS MathViewer\n\nby Michael A. Bennett\nTrans. Amer. Math. Soc. 348 (1996), 1717-1738 Request permission\n\n## Abstract:\n\nWe apply Padé approximation techniques to deduce lower bounds for simultaneous rational approximation to one or more algebraic numbers. In particular, we strengthen work of Osgood, Fel′dman and Rickert, proving, for example, that $\\max \\left \\{ \\left | \\sqrt {2} - p_{1}/q \\right | , \\left | \\sqrt {3} - p_{2}/q \\right | \\right \\} > q^{-1.79155}$ for $q > q_{0}$ (where the latter is an effective constant). Some of the Diophantine consequences of such bounds will be discussed, specifically in the direction of solving simultaneous Pell’s equations and norm form equations.\nSimilar Articles\n• Retrieve articles in Transactions of the American Mathematical Society with MSC (1991): 11J68, 11J82, 11D57\n• Retrieve articles in all journals with MSC (1991): 11J68, 11J82, 11D57"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.63054377,"math_prob":0.8373078,"size":4348,"snap":"2022-40-2023-06","text_gpt3_token_len":1457,"char_repetition_ratio":0.12638122,"word_repetition_ratio":0.028892456,"special_character_ratio":0.40455383,"punctuation_ratio":0.2613391,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9564199,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-02T15:47:38Z\",\"WARC-Record-ID\":\"<urn:uuid:1c155cb4-8328-4536-89f2-14efecce8413>\",\"Content-Length\":\"68847\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:86bfa8db-21cc-4030-9aea-3c320b8be48b>\",\"WARC-Concurrent-To\":\"<urn:uuid:e1d303a3-0c40-410c-9f65-a16b4b096ca3>\",\"WARC-IP-Address\":\"130.44.204.100\",\"WARC-Target-URI\":\"https://www.ams.org/journals/tran/1996-348-05/S0002-9947-96-01480-8/home.html\",\"WARC-Payload-Digest\":\"sha1:VFDID5XG5BQUUTACRTX3QIYA3FPUAQYZ\",\"WARC-Block-Digest\":\"sha1:4XK3FUZFPMYBNSJGXN7QOGKB5JBA7DRU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500028.12_warc_CC-MAIN-20230202133541-20230202163541-00349.warc.gz\"}"} |
https://lifewithdata.com/2023/07/05/how-to-use-the-confint-function-in-r/ | [
"# How to Use the confint() Function in R\n\nThe confint() function is a built-in function in R that computes confidence intervals for one or more parameters in a fitted model. Confidence intervals are widely used in statistical analysis to express the degree of uncertainty or margin of error around a sample statistic.\n\n## Understanding Confidence Intervals\n\nBefore diving into the use of confint(), it’s crucial to understand what confidence intervals represent. A confidence interval is a range of values that is likely to contain the value of an unknown population parameter. The interval has an associated confidence level that quantifies the level of confidence that the parameter lies within the interval.\n\nFor example, a 95% confidence interval means that if the same population were sampled on numerous occasions, computed confidence intervals would encompass the true population parameter approximately 95% of the time.\n\n## Basics of confint( )\n\nThe generic confint() function is used in R to compute confidence intervals of one or more parameters in a fitted model. The structure of the function is as follows:\n\nconfint(object, parm, level = 0.95, ...)\n\nHere:\n\n• object is a fitted model object\n• parm is a specification of which parameters are to be given confidence intervals, either a vector of numbers or a vector of names. If omitted, all parameters are considered.\n• level specifies the confidence level and is set to 0.95 by default, which corresponds to a 95% confidence interval.\n• ... represents other arguments.\n\n## Using confint( ) in R: A Practical Example\n\nLet’s take a simple linear regression model as an example. We will use the mtcars dataset which is pre-loaded in R. This data frame comprises fuel consumption and 10 aspects of car design and performance for 32 automobiles, 1973–74 models.\n\nWe’ll model miles per gallon (mpg) based on horsepower (hp) and weight (wt).\n\nFirst, we fit a linear model using the lm() function:\n\ndata(mtcars)\nmodel <- lm(mpg ~ hp + wt, data = mtcars)\n\nNext, we use confint() to calculate the confidence intervals for the parameters of our model:\n\nconfint(model)\n\nThe output will be a matrix with columns providing the lower and upper limits of the confidence intervals, and rows corresponding to the model parameters (i.e., the intercept and the coefficients for hp and wt).\n\nIf we want to calculate the confidence interval for a specific parameter, for instance, the coefficient for wt, we would specify this parameter in the parm argument:\n\nconfint(model, parm = \"wt\")\n\nThis command would output the lower and upper limits of the confidence interval specifically for the wt coefficient.\n\n## Interpretation of the confint( ) Output\n\nThe output of the confint() function is a two-column matrix, where the first column is the lower limit of the confidence interval, and the second column is the upper limit. Each row represents a parameter in the model.\n\nThe values in the matrix represent the range in which the corresponding parameters can fall, with a specific level of confidence. If the confidence interval for a coefficient does not include zero, it suggests that the parameter is statistically significant at the given confidence level.\n\nFor example, if the 95% confidence interval for the wt coefficient is [0.5, 1.5], it indicates that we are 95% confident that the actual coefficient of wt in the population lies within this interval.\n\n## Advanced Usage of confint( )\n\nWhile confint() can be used for simple models such as linear or logistic regressions, it is versatile and can also be applied to more complex models, such as generalized linear models, mixed-effects models, survival models, and many more.\n\nFor instance, here is how you might use confint() with a generalized linear model:\n\n# Fit a generalized linear model\nmodel_glm <- glm(vs ~ hp + wt, family = binomial(), data = mtcars)\n\n# Compute confidence intervals\nconfint(model_glm)\n\nThe application of confint() remains the same regardless of the model type; it provides the confidence intervals for the parameters in the fitted model.\n\n## Potential Limitations\n\nIt’s crucial to keep in mind that the accuracy of the confint() function’s output depends on the correctness and suitability of the fitted model for the given data. If the model does not fit the data well, or if the underlying assumptions of the model are violated, the confidence intervals obtained may not be reliable.\n\nTherefore, before using confint(), it’s important to conduct proper exploratory data analysis and model checking to ensure the model is well-specified.\n\n## Conclusion\n\nIn statistical analysis, understanding the uncertainty associated with estimates is as important as the estimates themselves. The confint() function in R is a powerful tool that allows statisticians and data scientists to quantify this uncertainty by computing confidence intervals for model parameters.\n\nWhether you’re dealing with a simple linear regression model or more complex models, confint() provides a straightforward and efficient way to compute confidence intervals, making it a valuable addition to your data analysis toolkit. However, as with any statistical tool, it’s essential to understand the underlying assumptions and potential limitations to ensure accurate and reliable results.\n\nPosted in RTagged"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7977982,"math_prob":0.9569626,"size":5175,"snap":"2023-40-2023-50","text_gpt3_token_len":1051,"char_repetition_ratio":0.19145232,"word_repetition_ratio":0.029520296,"special_character_ratio":0.2005797,"punctuation_ratio":0.10262009,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99681437,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-02T12:35:46Z\",\"WARC-Record-ID\":\"<urn:uuid:3e319f59-869e-40b8-91c3-c1f48a22cda3>\",\"Content-Length\":\"106675\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e804aff4-66b8-47be-812b-4486f4a06d56>\",\"WARC-Concurrent-To\":\"<urn:uuid:4eef2185-6bcd-44ab-bbce-6ec25ea95bcc>\",\"WARC-IP-Address\":\"192.0.78.145\",\"WARC-Target-URI\":\"https://lifewithdata.com/2023/07/05/how-to-use-the-confint-function-in-r/\",\"WARC-Payload-Digest\":\"sha1:6S7F6GBLCJPZ2RA3J2SACSNP723JZZZ4\",\"WARC-Block-Digest\":\"sha1:UYJPMICADQLOG66THHB4DYQCJ3PQP74Y\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510994.61_warc_CC-MAIN-20231002100910-20231002130910-00395.warc.gz\"}"} |
http://keywen.com/en/HYPOTENUSE | [
" \"Hypotenuse\" related terms, short phrases and links",
null,
"Web keywen.com\nHypotenuse Article History Tree Map\n Encyclopedia of Keywords > Right Triangle > Hypotenuse Michael Charnine\n\n Keywords and Sections\n Review of Short Phrases and Links\n\nThis Review contains major \"Hypotenuse\"- related terms, short phrases and links grouped together in the form of Encyclopedia article.\n\n### DefinitionsUpDw('Definitions','-Abz-');\n\n1. A hypotenuse is the longest side of a right-angled triangle, the side opposite the right angle.\n2. The hypotenuse is the side opposite the right angle, or defined as the longest side of a right-angled triangle, in this case h. (Web site)\n3. If the hypotenuse is twice as long, so are the sides. (Web site)\n\n### RatioUpDw('RATIO','-Abz-');\n\n1. That is, for any similar triangle the ratio of the hypotenuse (for example) and another of the sides remains the same.\n2. The ratio of the hypotenuse to an arm of an isosceles right triangle is a: b expressed in the smallest units possible. (Web site)\n\n### Horizontal LineUpDw('HORIZONTAL_LINE','-Abz-');\n\n1. A horizontal line through P intersects this minor auxiliary circle of radius b, establishing another right triangle with altitude y and hypotenuse b. (Web site)\n\n### Square RootUpDw('SQUARE_ROOT','-Abz-');\n\n1. By the Pythagorean theorem, the length of the hypotenuse is the length of a leg times the square root of two. (Web site)\n\n### TriangleUpDw('TRIANGLE','-Abz-');\n\n1. The remainder of the flag is medium blue with seven five-pointed white stars and two half stars top and bottom along the hypotenuse of the triangle. (Web site)\n2. The 'triangle' with sides dy, dx and dc is so small that we can treat the arc dc as if it were a straight line and the hypotenuse of this triangle. (Web site)\n\n### AngleUpDw('ANGLE','-Abz-');\n\n1. The sine of an angle is the ratio of the length of the opposite side to the length of the hypotenuse. (Web site)\n2. Angle E is therefore the adjacent angle of a right triangle with hypotenuse 1 + e cosθ, adjacent side e + cosθ, and opposite side √(1-e 2) sinθ. (Web site)\n\n### CosineUpDw('COSINE','-Abz-');\n\n1. The cosine of an angle is the ratio of the length of the adjacent side to the length of the hypotenuse. (Web site)\n2. The cosine of angle A is the ratio of the hyperbolic tangent of the adjacent leg to the hyperbolic tangent of the hypotenuse.\n\n### LegUpDw('LEG','-Abz-');\n\n1. Hypotenuse-Leg (HL) Theorem: The hypotenuse and a leg in a right triangle have the same length as those in another right triangle. (Web site)\n2. Similarly, assume an isosceles right triangle whose leg and hypotenuse have respective integer lengths n and m.\n\n### LegsUpDw('LEGS','-Abz-');\n\n1. The \"hypotenuse\" is the base of the tetrahedron at the back of the figure, and the \"legs\" are the three sides emanating from the vertex in the foreground. (Web site)\n\n### SideUpDw('SIDE','-Abz-');\n\n1. One can arrive at the Pythagorean theorem by studying how changes in a side produce a change in the hypotenuse and employing a little calculus.\n2. If the angle were exactly 36 and one side were exactly 100 units, then the other side would be 72.6542 units, while the hypotenuse would be 123.6068 units. (Web site)\n\n### SidesUpDw('SIDES','-Abz-');\n\n1. Here the vectors v and w are akin to the sides of a right triangle with hypotenuse given by the vector sum v + w. (Web site)\n2. If we let c be the length of the hypotenuse and a and b be the lengths of the other two sides, the theorem can be expressed as the equation: a^2 + b^2 = c^2. (Web site)\n3. The vectors a, - 2 b, and a - 2 b form the sides of a right-angled triangle, with sides of length 8, 6 and hypotenuse of length. (Web site)\n\n### SumUpDw('SUM','-Abz-');\n\n1. Find a right triangle having the property that the hypotenuse equals the sum of one leg plus the altitude on the hypotenuse. (Web site)\n\n### SquareUpDw('SQUARE','-Abz-');\n\n1. And triangle 3 has area, and it is half of the square on the hypotenuse. (Web site)\n\n### SquaresUpDw('SQUARES','-Abz-');\n\n1. Putting the two rectangles together to reform the square on the hypotenuse, its area is the same as the sum of the area of the other two squares.\n2. In a right-angled triangle, the square of the hypotenuse equals the sum of the squares of the other two sides (Pythagoras' Theorem). (Web site)\n3. That line divides the square on the hypotenuse into two rectangles, each having the same area as one of the two squares on the legs.\n\n### Pythagorean TheoremUpDw('PYTHAGOREAN_THEOREM','-Abz-');\n\n1. The Pythagorean theorem: The sum of the areas of the two squares on the legs (a and b) equals the area of the square on the hypotenuse (c). (Web site)\n2. This is easy to see by studying a right triangle of hypotenuse 1 and applying the Pythagorean theorem.\n3. By the Pythagorean theorem, it follows that the hypotenuse of this triangle also has length c. (Web site)\n\n### Side OppositeUpDw('SIDE_OPPOSITE','-Abz-');\n\n1. The sine of angle A is the ratio of the hyperbolic sine of the side opposite the angle to the hyperbolic sine of the hypotenuse.\n\n### Right TriangleUpDw('RIGHT_TRIANGLE','-Abz-');\n\n1. Pythagorean theorem For any right triangle, the sum of the squares of the measures of the legs equals the square of the measure of the hypotenuse. (Web site)\n2. The side opposite to the right angle is the hypotenuse; it is the longest side in the right triangle. (Web site)\n3. A right triangle with legs both equal to one unit has hypotenuse length square root of two.\n\n### HypotenuseUpDw('HYPOTENUSE','-Abz-');\n\n1. The sum of the areas of the squares on the legs of a right triangle is equal to the area of the square on the hypotenuse. (Web site)\n2. It says that the sum of the squares of the lengths of the legs is equal to the square of the length of the hypotenuse (the side opposite the right angle). (Web site)\n3. The theorem that the sum of the squares of the lengths of the sides of a right triangle is equal to the square of the length of the hypotenuse. (Web site)\n\n### CategoriesUpDw('Categories','-Abz-');\n\n1. Right Triangle\n2. Side Opposite\n3. Geometry > Polygons > Triangles > Pythagorean Theorem\n4. Right Angle\n5. Squares\n6. Books about \"Hypotenuse\" in Amazon.com",
null,
"",
null,
"Short phrases about \"Hypotenuse\" Originally created: August 01, 2010. Links checked: February 08, 2013. Please send us comments and questions by this Online Form Please click on",
null,
"to move good phrases up.\n0.0137 sec. a=1.."
] | [
null,
"http://keywen.com/Keywen.gif",
null,
"http://www.assoc-amazon.com/e/ir",
null,
"https://www.paypal.com/en_US/i/scr/pixel.gif",
null,
"http://keywen.com/Up.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8851097,"math_prob":0.96132815,"size":5210,"snap":"2021-31-2021-39","text_gpt3_token_len":1286,"char_repetition_ratio":0.2754514,"word_repetition_ratio":0.15650406,"special_character_ratio":0.23685221,"punctuation_ratio":0.069881886,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9979954,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-23T19:03:04Z\",\"WARC-Record-ID\":\"<urn:uuid:6852fd3b-1230-41af-8448-06cbc24b8dd3>\",\"Content-Length\":\"22667\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:793abd49-2b62-4587-9c9c-dccfc6536a91>\",\"WARC-Concurrent-To\":\"<urn:uuid:c71cfd08-13c7-4a71-9ce3-3ad29dba3262>\",\"WARC-IP-Address\":\"83.149.227.181\",\"WARC-Target-URI\":\"http://keywen.com/en/HYPOTENUSE\",\"WARC-Payload-Digest\":\"sha1:JXNV5EO7GWWOWDET6HVT57F7OGHXXEO5\",\"WARC-Block-Digest\":\"sha1:UNKXNWAQ75W6EIDS5ZHIEPEO2R3S573R\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057427.71_warc_CC-MAIN-20210923165408-20210923195408-00486.warc.gz\"}"} |
https://scirp.org/journal/paperinformation.aspx?paperid=81913 | [
"Comparison of Stereotactic Body Radiotherapy Delivery Techniques for Early-Stage Lung Cancer Using Lung Toxicity Modeling\n\nAbstract\n\nPurpose: Lung toxicity is a primary side effect in stereotactic radiotherapy (SBRT) for early-stage non-small cell lung cancer (NSCLC). We aimed to use a set of radiobiological models to evaluate and compare modern IMRT delivery techniques with three-dimensional conformal techniques for SBRT treatment of NSCLC in terms of lung toxicity, and aimed to compare the results from different radiobiologcal models. Methods: Ten early-stage NSCLC patients treated with SBRT were retrospectively selected. Five treatment plans were generated to deliver 50 Gy in five fractions to the planning target volume for each case: a helical tomotherapy (HT) plan, two three-dimensional cofnromal radiotherapy (3D-CRT) plans using 6-MV and 10-MV photon beams respectively, and two volumetric modulated arc therapy (VMAT) plans using one and two arc fields respectively. The lung RDV was calculated with three parallel functional sub-unit (FSU) models and two normal tissue complication probability (NTCP) models. Results: Both the HT and VMAT plans showed significantly higher contralateral mean lung dose and lower ipsilateral mean lung dose compared to the 3D-CRT plans. There was no statistically significant difference in terms of lung toxicities between the IMRT and 3D-CRT techniques using either the FSU models or the NTCP models. Based on both the FSU and the NTCP models, there was strong correlation between lung toxicity and the mean lung dose in SBRT treatment plans. Conclusions: Based on both the NTCP and parallel FSU models, both IMRT and traditional 3D-CRT delivery techniques could achieve comparable lung sparing inn SBRT treatment of early-stage lung cancer. However, the validity of the radiobiological model results should be checked by clinical data.\n\nShare and Cite:\n\nHan, C. , Schultheiss, T. and Wong, J. (2018) Comparison of Stereotactic Body Radiotherapy Delivery Techniques for Early-Stage Lung Cancer Using Lung Toxicity Modeling. International Journal of Medical Physics, Clinical Engineering and Radiation Oncology, 7, 1-14. doi: 10.4236/ijmpcero.2018.71001.\n\n1. Introduction\n\nIn recent years, hypofractionated stereotactic body radiotherapy (SBRT) has been widely implemented as a definitive treatment modality for early-stage non-small cell lung cancer (NSCLC), especially for patients who are not candidates for surgery due to existing morbidities including cardiopulmonary complications . In a systematic review of thirty-five published studies, the local control rate was all above 80% at 1 - 5 years for stage I NSCLC treated with SBRT, and a significant number of patients did not have any sign of adverse effects during the course of treatment . While early-stage lung cancer patients could potentially be cured from SBRT treatments, it is imperative to minimize radiation-induced toxicities to reduce post-treatment morbidities. As multiple radiation therapy (RT) delivery techniques exist to deliver SBRT to the lung, including three-dimensional conformal radiotherapy (3D-CRT), and intensity- modulated radiotherapy (IMRT) techniques, objective and quantitative methods to evaluate treatment plan quality are needed to choose the optimal treatment plan for disease control and sparing of normal organs.\n\nIn SBRT treatment of lung cancer, normal lung toxicity is a primary concern for treatment complications . In conventional lung cancer RT, commonly used dosimetric quantities include the volume of the lung receiving dose above 20 Gy (V20), and the mean lung dose (MLD) . Due to the large dose per fraction in SBRT treatments, clinical experiences based on conventional RT fractionation schemes may not be applicable in SBRT treatment plan evaluation . More sophisticated radiobiological models have been proposed to predict radiation toxicities to the lung. The equivalent uniform dose (EUD) model has long been used to evaluate organ toxicities. In the EUD model, a power-law formula is used to convert the dose-volume histogram (DVH) from a RT plan to a single equivalent dose parameter, which can be translated to normal tissue complication probability (NTCP) through a sigmoid function . On the other hand, the DVH can be reduced to a volume parameter using mathematical formulations based on the parallel functional sub-unit (FSU) model . In contrast to the EUD concept, the parallel FSU model quantifies the percentage of lung volume damaged by RT treatment, which is potentially a clinically measurable quantity. Previous studies using the parallel FSU model indicate that this model could be relevant in evaluation of organ toxicities in RT treatments .\n\nIn this study, we aimed to use the existing lung toxicity models to quantify lung toxicities for comparison of different RT delivery techniques. We also aimed to evaluate the discrepancies between existing radiobiological models in lung toxicity modeling in SBRT treatments.\n\n2. Materials and Methods\n\nWe retrospectively selected ten patients with early-stage non-small cell lung cancer who previously received definitive SBRT treatment at our institution. Table 1 lists patient characteristics. Prior to treatment planning, each patient\n\nTable 1. Characteristics of patients in this study.\n\nreceived computed tomography (CT) scans in the thoracic region with 3 mm slice thickness in the helical mode. Three CT scans were performed for each patient, while the patient was in shallow free breathing and at the end of the inspiration and expiration phases, respectively. In a treatment planning system (Eclipse Version 11, Varian Medical Systems, Inc., Palo Alto, California), the three CT image sets were registered rigidly, and contours of critical organs including the lungs, heart, spinal cord, esophagus were drawn on the free-breathing CT images. The gross target volume (GTV) was drawn on each of the CT image sets, and then combined to form the internal target volume (ITV). The planning target volume (PTV) was created by adding 5-mm lateral margin and 10-mm superior-inferior margin to the ITV. The average PTV volume was 91.1 ± 67.7 cm3 (range: 30.2 - 206.5 cm3).\n\nIn this study, we generated the following five treatment plans for each patient:\n\n1) A 3D-CRT treatment plan using 6MV photon beams (3D-6MV plan). Each plan used 12 non-coplanar fields, with the orientation and relative weighting of each field manually chosen for each individual plan to minimize critical organ dose. In the Eclipse treatment planning system, the analytical anisotropic algorithm (AAA) was used for dosimetric calculation with heterogeneity correction applied.\n\n2) A 3D-CRT plan using 10MV photon beams (3D-10MV plan). For each case, the 3D-10MV plan used the same field geometry as the 3D-6MV plan. The same dosimetric calculation algorithm was used.\n\n3) A helical tomotherapy plan (HT plan). The jaw size of either 2.5 cm or 1.0 cm was used, depending on PTV dimension. A pitch of 0.15 was used in all the HT plans. A superposition-convolution algorithm with heterogeneity correction was used for dose calculation.\n\n4) A single-arc RapidArc VMAT plan (VMAT-1 plan). In this plan, a 6-MV single arc field rotates around the patient for a complete gantry rotation. Both the dose rate and the gantry speed were allowed to modulate during the arc rotation. The Progressive Resolution Optimization (PRO) algorithm in the Eclipse treatment planning system was used for optimization, and the AAA algorithm was used for final dose calculation with heterogeneity correction.\n\n5) A two-arc RapidArc VMAT plan (VMAT-2 plan). In this plan, two 6-MV arc fields, with identical complete gantry rotation range but opposing rotational directions, were used. The same optimization and dose calculation algorithms were used as in the VMAT-1 plans.\n\nThe RTOG Protocol 0623 was followed as the guideline for dose prescription and normal organ dose constraints. The PTV receives 50 Gy in 5 uniform fractions in each treatment plan. All the treatment plans were normalized so that 95% of the PTV received at least the prescription dose. In IMRT treatment plan optimization, higher priority was on minimizing normal lung dose, rather than on dose homogeneity of the PTV.\n\nLung toxicity was evaluated using the parallel FSU model in the following steps. First, the dose-volume histogram (DVH) of the normal lung volume in the differential form was converted to the normalized biologic equivalent (NBE) DVH using the linear-quadratic model with the α/β ratio of 3 Gy for the normal lung . Specifically, the dose in the i-th bin of the lung DVH is normalized using the following equation:\n\n${D}_{i,\\text{normalized}}={D}_{i}\\cdot \\frac{1+\\left({D}_{i}/n\\right)/3Gy}{1+2Gy/3Gy}$ , where n is the number of treatment fractions.\n\nSecond, the normalized DVH was reduced to a single lung toxicity parameter that represents the relative damaged volume (RDV) for the lung. To calculate the RDV for each treatment plan, a local effective function, E(D), was calculated as a function of the local lung dose. The RDV is then given by :\n\n$\\text{RDV}=\\underset{i}{\\sum }E\\left({D}_{i,\\text{normalized}}\\right)\\cdot {V}_{i}$\n\nwhere Di,normalized and Vi are the normalized dose and percentage volume in the i-th bin of the differential DVH, respectively. This definition is based on the assumption that the lung is composed of parallel functional sub-units with identical radiation response characteristics.\n\nMultiple mathematical formulations exist in the literature for calculating the effective dose function E(D). When E(D) is a linear function of the local dose D, RDV is mathematically equivalent to using the MLD in plan evaluation. Other authors used sigmoid forms for the calculation of E(D). To evaluate the robustness and consistency of the models, we applied the following three mathematical formulations in this study.\n\n1) The logistic model formulation . With this model, E(D) is given by a\n\nlogistic function: $E\\left(D\\right)=\\frac{1}{1+{\\left(\\frac{{D}_{\\text{L50}}}{D}\\right)}^{k}}$ . DL50 is the dose level at which the local\n\ndose effect is 50% (E(DL50) = 50%), and k represents the steepness of the logistic function curve. In this study, k is taken as 2.\n\n2) The S-shape model formulation . With this model, E(D) is given by:\n\n$E\\left(D\\right)=\\left\\{\\begin{array}{l}\\frac{\\frac{D}{{D}_{\\text{L50}}}-1}{1+{\\left(\\frac{D}{{D}_{\\text{L50}}}-1\\right)}^{2}}+1/2,\\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}\\text{when}\\text{\\hspace{0.17em}}D\\le 2{D}_{\\text{L50}}\\\\ 1,\\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}\\text{when}\\text{\\hspace{0.17em}}D>2{D}_{\\text{L50}}\\end{array}$\n\n3) The modified linear model formulation. E(D) is given by:\n\n$E\\left(D\\right)=\\left\\{\\begin{array}{l}\\frac{D}{2{D}_{\\text{L50}}},\\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}\\text{when}\\text{\\hspace{0.17em}}D\\le 2{D}_{\\text{L50}}\\\\ 1,\\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}\\text{when}\\text{\\hspace{0.17em}}D>2{D}_{\\text{L50}}\\end{array}$\n\nFigure 1 plots the three local effective dose functions for comparison. While E(D) approaches 1 asymptotically with increasing dose in the logistic model, it reaches 1 when D = 2DL50 in both the S-shape model and the modified linear model.\n\nIn published literature, the value of DL50 had significant uncertainty due to heterogeneity in datasets used and toxicity level evaluated . To evaluate the robustness of the parallel FSU model, we allowed DL50 to vary from 20 Gy to 40 Gy in the analysis. For comparison, we also calculated normal tissue complication probability (NTCP) of the total lung for each treatment plan, based on the Lyman-Kutcher-Burman (LKB) model as well as the mean-lung-dose (MLD) model, respectively. With the LKB model, the EUD of the normal lung is given by:\n\n$\\text{EUD}={\\left({\\sum }_{i}{D}_{i}^{1/n}\\frac{{V}_{i}}{{V}_{\\text{total}}}\\right)}^{n}$\n\nand the NTCP value is obtained by:\n\nFigure 1. Three local effective dose functions used in this study: the S-shaped function (solid line), the logistic function (dotted line), and the linear function (dashed line).\n\n$\\text{NTCP}=\\frac{1}{\\sqrt{\\text{2π}}}{\\int }_{-\\infty }^{t}{\\text{e}}^{-\\frac{{x}^{2}}{2}}\\text{d}x$ , where $t=\\frac{\\text{EUD}-{\\text{TD}}_{50}}{m\\cdot {\\text{TD}}_{50}}$ .\n\nThree parameters, n, m, and TD50, exist in the LKB model.\n\nWith the MLD formulation, the NTCP can be expressed by a logistic function:\n\n$\\text{NTCP}=\\frac{{\\text{e}}^{{b}_{0}+{b}_{1}\\cdot \\text{MLD}}}{1+{\\text{e}}^{{b}_{0}+{b}_{1}\\cdot \\text{MLD}}}$\n\nParameters for the LKB and MLD models were obtained from , based on clinical data for radiation pneumonitis.\n\nCorrelation between the mean lung dose and the modeling outcomes (RDV or NTCP values) was evaluated using Spearman’s rank order correlation analysis, using the statistical computing system R .\n\n3. Results\n\nTable 2 lists average and standard deviation values for the maximum PTV dose and mean PTV dose for each treatment technique. On average, the maximum dose to the PTV was about 34% higher relative to the prescription dose in the 3D-6MV and 3D-10MV plans; it was 23% - 24% higher in the HT, VMAT-1, and VMAT-2 plans. Paired t-tests showed that both the maximum dose and the mean dose to the PTV were significantly higher in the 3D-6MV and 3D-10MV plans compared to the IMRT plans (two tailed p-value < 0.05). There was no statistically significant difference in the maximum dose or the mean dose between the 3D-6MV and 3D-10MV plans, or among the HT, VMAT-1, and VMAT-2 plans.\n\nTable 3 lists average values of the mean dose to the ipsilateral and contralateral lungs, mean dose to the heart, and the maximum dose to the spinal cord with each treatment technique. Paired t-tests were performed to evaluate the statistical significance in difference for each dosimetric parameter between any pair of treatment techniques. There was no significant difference in the maximum dose to the spinal cord or the mean dose to the heart among all the treatment techniques. As to the contralateral lung, each of the three IMRT techniques showed significantly higher mean lung dose compared to the 3D techniques, while there was no significant difference among the three IMRT techniques. For the ipsilateral lung, each of the three IMRT techniques showed significantly\n\nTable 2. PTV dose statistics for each treatment technique.\n\nDmax: maximum dose; Dmean: mean dose; StdDev: standard deviation.\n\nTable 3. Dosimetric statistics for major organs.\n\nStdDev: standard deviation.\n\nlower mean lung dose compared to the 3D techniques. There was no significant difference between the HT plans and the VMAT-1 plans, or between the HT plans and the VMAT-2 plans. However, the VMAT-2 plans showed significantly lower ipsilateral mean lung dose compared to the VMAT-1 plans (p = 0.002).\n\nFigure 2 plots the average RDV as a function of DL50 using the S-shaped model, the logistic model, and the linear model, respectively. Paired t-tests were used to compare the RDV values among different models using the same treatment technique and DL50. The logistic model gave significantly larger RDV values compared to the S-shaped model (p < 0.05), and the linear model gave significantly larger RDV values compared to the other two models (p < 0.05) over the evaluated range of DL50. Although the HT and VMAT-2 plans showed lower average RDV values compared to the other treatment techniques with each of the three RDV models, the absolute difference is relatively small. Paried t-tests showed no significant difference in RDV values in all the five delivery techniques.\n\nTable 4 lists average NTCP values for each delivery technique using the LKB model and the MLD model, respectively, at the prescription dose level of 50 Gy. Similar to results obtained using the RDV models, the differences among the delivery techniques were not statistically significant using either the LKB model or the MLD model (p > 0.05). The NTCP values obtained using the LKB model were significantly larger than those obtained using the MLD model (p < 0.05).\n\nFigure 3 shows the RDV value as a function of mean lung dose (MLD) in treatment plans with each of the three RDV models. In general, the RDV increases with increasing MLD. The correlation between the MLD and the RDV was analyzed by evaluating Spearman’s rank order correlation coefficients; the results showed strong correlation (correlation coefficient > 0.9) with statistical significance (p-value < 0.01) for any given delivery technique and the RDV formulation used. Figure 4 shows the NTCP value as a function of mean lung dose with with the LKB and the MLD models, respectively. In general, both the RDV and the NTCP values increase with increasing MLD value. The correlation between the MLD and the NTCP value was analyzed by evaluating Spearman’s rank order correlation coefficients; the results showed strong correlation (correlation coefficient > 0.95) with statistical significance (p-value < 0.01) for any\n\nFigure 2. Comparison of average RDV values with each treatment delivery technique using the S-shaped model (a), the logistic model (b), and the linear model (c).\n\ngiven delivery technique and the NTCP formulation used.\n\n4. Discussions\n\nRadiation pneumonitis (RP) is a primary concern in SBRT treatment of early stage non-small cell lung cancer. While most patients will develop asymptomatic\n\nFigure 3. RDV values as a function of mean lung dose (MLD) with each treatment delivery technique using the S-shaped model (a), the logistic model (b), and the linear model (c), respectively.\n\nGrade-1 RP, the chance of developing Grade-2 or 3 RP is relatively low . However, pulmonary toxicity is the primary type of late toxicity after SBRT treatments to the lung, and grade 5 pulmonary toxicities have been reported . As patients can be potentially cured with SBRT treatments, it is imperative to quantitatively assess RP risks during SBRT treatment planning to avoid treatment-induced morbidities. Given hypofractionation scheme in SBRT treatments, normal lung response to radiation could be different from that in conventional RT treatments. However, most recent dosimetric planning studies still used conventional dosimetric parameters in evaluation of lung toxicity .\n\nFigure 4. NTCP values as a function of mean lung dose (MLD) with each treatment delivery technique using the LKB model (a) and the MLD model (b), respectively.\n\nWe believe that this is the first study that used a comprehensive set of radiobiological models for comparison of multiple delivery techniques for SBRT treatment of early-stage NSCLC.\n\nVMAT and HT are two modern IMRT techniques that utilize a rotating gantry to delivery radiation from a large number of beam portals. When the two techniques are used to deliver radiation to the thoracic region, a significantly higher percentage of the normal lung volume typically receives low dose radiation compared to 3D-CRT techniques, which has raised concerns over increased lung toxicities . Therefore, we were motivated to carry out this study to compare the VMAT and HT techniques with conventional 3D-CRT techniques\n\nTable 4. Average normal tissue complication probability with each delivery technique using the Lyman-Kutcher-Burman (LKB) model and the mean-lung-dose (MLD) model, respectively.\n\nStdDev: standard deviation.\n\nin terms of lung toxicities by using radiobiological models. Based on the results given by existing radiobiological models, the VMAT and HT techniques could at least achieve similar levels of lung sparing compared to 3D-CRT techniques.\n\nThe EUD and FSU models are two widely used normal tissue toxicity models. While both models reduce the normal organ DVH to a single parameter, they differ by representing the DVH with a dose parameter and a volume parameter, respectively.\n\nFor a comprehensive evaluation of lung toxicity with different delivery techniques, both models were used in this study. It is interesting to note that the results from both models showed remarkable agreement in this study, indicating similar predictive power with these two types of models.\n\nFigure 3 and Figure 4 provided practical results relevant for clinical dosimetry planning in SBRT treatment of early-stage non-small lung cancer patients. Even with non-linear NTCP and RDV models, the degree of lung toxicities, as measured by the NTCP value or the RDV value, increases approximately monotonously with the mean lung dose. Note that since the normalized lung DVH was used in model calculations, the RDV value does not have a linear relationship with the mean lung dose even with the linear model. Based on results from the radiobiological models, the mean lung dose can be an efficient parameter to use in evaluation of lung toxicities in lung SBRT treatment plans.\n\nDifferent dose normalization methods could affect results in studies that correlate dosimetric parameters with RP occurrence risks. Baker et al. evaluated a set of dosimetric and clinical parameters for correlations with RP after five-frac- tion SBRT treatments . No DVH normalization was performed. While certain dosimetric parameters were found to be predictive of RP in univariate analysis, the correlations were not significant in multivariable analysis. Guckenberger et al. analyzed 59 patients who received SBRT treatments to the lung with various fractionation schemes . The lung DVHs were normalized using α/β ratio of 3 Gy. The MLD of the ipsilateral lung, as well as the ipsilateral lung volume exposed to doses between 2.5 and 50 Gy, were found to be correlated to RP incidence. Scheenstra et al. evaluated the relation between local dose and relative lung perfusion reduction after SBRT treatment, using α/β ratio of 3 Gy to normalize the lung DVH . The study found that the relation between local dose and perfusion reduction can be best modeled by a logistic function. The k value in the logistic model was found to be 2.2.\n\nThe value of DL50 depends on the type of radiation response endpoints used, and in the case of RP, the grade of RP used in the analysis. Theuws et al. and Marks et al. studied radiation-induced lung perfusion reduction for patients who received radiation treatments in the thorax region including lymphoma and breast cancer patients . Their combined data gave DL50 of 55 Gy and k value of 2.2 in the logistic model. In contrast, Scheenstra et al. evaluated lung perfusion reduction for lung cancer patients receiving SBRT treatments, and they found DL50 to be 28.7 Gy (95% confidence interval (CI): 26.3 - 31.1) and k to be 2.2 (95% CI: 1.8 - 2.5) in the logistic model (12). Marks et al. compiled clinical RP data (5). Using the MLD model and the LKB model to fit the data, DL50 was found to be 30.8 Gy (95% CI: 28.7 - 33.9) and 31.4 Gy (95% CI: 29.0 - 34.7), respectively. It should be noted that heterogeneous RP criteria were used in the compiled data. Due to the uncertainty of DL50 values in published results, we allowed DL50 to vary in the range of 20 to 40 Gy in this study.\n\n5. Conclusion\n\nUsing different radiobiological models for radiation-induced lung toxicities, a comprehensive set of delivery techniques were compared for SBRT treatment of early-stage lung cancer. The current study showed that VMAT and HT plans could achieve comparable lung sparing compared to traditional 3D-CRT techniques. The NTCP modeling results confirmed the results based on parallel FSU models. However, the validity of the radiobiological models should be tested by clinical data.\n\nCompliance with Ethical Standards\n\nConflict of Interest\n\nThe authors have no conflict of interest to declare.\n\nResource of Funding\n\nThe authors declare no funding resources used for this study.\n\nResearch Involving Human Participants and/or Animals\n\nThis study did not involve human participants and/or animals.\n\nInformed Consent\n\nInformed consent was not applicable in this study.\n\nConflicts of Interest\n\nThe authors declare no conflicts of interest.\n\n Chang, B.K. and Timmerman, R.D. (2007) Stereotactic Body Radiation Therapy: A Comprehensive Review. American Journal of Clinical Oncology, 30, 637-644. https://doi.org/10.1097/COC.0b013e3180ca7cb1 Timmerman, R., Paulus, R., Galvin, J., et al. (2010) Stereotactic Body Radiation Therapy for Inoperable Early Stage Lung Cancer. JAMA, 303, 1070-1076. https://doi.org/10.1001/jama.2010.261 Baker, S., Dahele, M., Lagerwaard, F.J. and Senan, S. (2016) A Critical Review of Recent Developments in Radiotherapy for Non-Small Cell Lung Cancer. Radiation Oncology, 11, 115. https://doi.org/10.1186/s13014-016-0693-8 Chi, A., Liao, Z., Nguyen, N.P., et al. (2010) Systematic Review of the Patterns of Failure Following Stereotactic Body Radiation Therapy in Early-Stage Non-Small-Cell Lung Cancer: Clinical Implications. Radiotherapy and Oncology, 94, 1-11. https://doi.org/10.1016/j.radonc.2009.12.008 Marks, L.B., Bentzen, S.M., Deasy, J.O., et al. (2010) Radiation Dose-Volume Effects in the Lung. International Journal of Radiation Oncology * Biology * Physics, 76, S70-S76. https://doi.org/10.1016/j.ijrobp.2009.06.091 Jin, J., Kong, F., Chetty, I.J., et al. (2010) Impact of Fraction Size on Lung Radiation Toxicity: Hypofractionation May Be Beneficial in Dose Escalation of Radiotherapy for Lung Cancers. International Journal of Radiation Oncology * Biology * Physics, 76, 782-788. https://doi.org/10.1016/j.ijrobp.2009.02.079 Niemierko, A. (1997) Reporting and Analyzing Dose Distributions: A Concept of Equivalent Uniform Dose. Medical Physics, 24, 103-110. https://doi.org/10.1118/1.598063 Jackson, A., Kutcher, G.J., Yorke, E.D., et al. (1993) Probability of Radiation-Induced Complications for Normal Tissue with Parallel Architecture Subject to Non-Uniform Irradiation. Medical Physics, 20, 613-625. https://doi.org/10.1118/1.597056 Yorke, E.D., Kutcher, G.J., Jackson, A. and Ling, C.C. (1993) Probability of Radiation-Induced Complications in Normal Tissues with Parallel Architecture under Conditions of Uniform Whole or Partial Organ Irradiation. Radiotherapy and Oncology, 26, 226-237. https://doi.org/10.1016/0167-8140(93)90264-9 Jackson, A., Ten Haken, R.K., Robertson, J.M., et al. (1995) Analysis of Clinical Complication Data for Radiation Hepatitis Using a Parallel Architecture Model. International Journal of Radiation Oncology * Biology * Physics, 31, 883-891. https://doi.org/10.1016/0360-3016(94)00471-4 Kwa, S.L.S., Theuws, J.C.T., Wagenaar, A., et al. (1998) Evaluation of Two Dose-Volume Histogram Reduction Models for the Prediction of Radiation Pneumonitis. Radiotherapy and Oncology, 48, 61-69. https://doi.org/10.1016/S0167-8140(98)00020-6 Scheenstra, A.E.H., Rossi, M.M.G., Belderbos, J.S.A., et al. (2013) Local Dose-Effect Relations for Lung Perfusion Post Stereotactic Body Radiotherapy. Radiotherapy and Oncology, 107, 398-402. https://doi.org/10.1016/j.radonc.2013.04.003 Guckenberger, M., Baier, K., Polat, B., et al. (2010) Dose-Response Relationship for Radiation-Induced Pneumonitis after Pulmonary Stereotactic Body Radiotherapy. Radiotherapy and Oncology, 97, 65-70. https://doi.org/10.1016/j.radonc.2010.04.027 R Core Team (2013) R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna. http://www.R-project.org/ Yamashita, H., Takahashi, W. and Haga, A. and Nakagawa, K. (2014) Radiation Pneumonitis after Stereotactic Radiation Therapy for Lung Cancer. World Journal of Radiology, 6, 708-715. https://doi.org/10.4329/wjr.v6.i9.708 Ong, C.L., Verbakel, W.F.A.R., Cuijpers. J.P., et al. (2010) Stereotactic Radiotherapy for Peripheral Lung Tumors: A Comparison of Volumetric Modulated Arc Therapy with 3 Other Delivery Techniques. Radiotherapy and Oncology, 97, 437-442. https://doi.org/10.1016/j.radonc.2010.09.027 McGrath, S.D., Matuszak, M.M., Yan, D., et al. (2010) Volumetric Modulated Arc Therapy for Delivery of Hypofractionated Stereotactic Lung Radiotherapy: A Dosimetric and Treatment Efficiency Analysis. Radiotherapy and Oncology, 95, 153-157. https://doi.org/10.1016/j.radonc.2009.12.039 Jo, I., Kay, C., Kim, J., et al. (2010) Significance of Low-Dose Radiation Distribution in Development of Radiation Pneumonitis after Helical-Tomotherapy-Based Hypofractionated Radiotherapy for Pulmonary Metastases. Journal of Radiation Research, 55, 105-112. https://doi.org/10.1093/jrr/rrt080 Khalil, A.A., Hoffmann, L., Moeller, D.S., et al. (2015) New Dose Constraint Reduces Radiation-Induced Fatal Pneumonitis in Locally Advanced Non-Small Cell Lung Cancer Patients Treated with Intensity-Modulated Radiotherapy. Acta Oncologica, 54, 1343-1349. https://doi.org/10.3109/0284186X.2015.1061216 Baker, R., Han, G., Sarangkasiri, S., et al. (2013) Clinical and Dosimetric Predictors of Radiation Pneumonitis in a Large Series of Patients Treated with Stereotactic Radiation Therapy to the Lung. International Journal of Radiation Oncology Biology Physics, 85, 190-195. https://doi.org/10.1016/j.ijrobp.2012.03.041 Theuws, J.C.M., Kwa, S.L.S., Wagenaar, A.C., et al. (1998) Dose-Effect Relations for Early Local Pulmonary Injury after Irradiation for Malignant Lymphoma and Breast Cancer. Radiotherapy and Oncology, 48, 33-43.https://doi.org/10.1016/S0167-8140(98)00019-X Marks, L.B., Munley, M.T., Spencer, D.P., et al. (1997) Quantification of Radiation-Induced Regional Lung Injury with Perfusion Imaging. International Journal of Radiation Oncology Biology Physics, 38, 399-409.https://doi.org/10.1016/S0360-3016(97)00013-8\n\nJournals Menu\n\n#### Journals Menu\n\nFollow SCIRP",
null,
"",
null,
"",
null,
"",
null,
"Contact us",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"Copyright © 2021 by authors and Scientific Research Publishing Inc.",
null,
"This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License."
] | [
null,
"https://scirp.org/images/Twitter.svg",
null,
"https://scirp.org/images/fb.svg",
null,
"https://scirp.org/images/in.svg",
null,
"https://scirp.org/images/weibo.svg",
null,
"https://scirp.org/images/email.svg",
null,
"https://scirp.org/images/WhatsApp.svg",
null,
"https://scirp.org/images/qq.svg",
null,
"https://scirp.org/images/weixinsrp120.jpg",
null,
"https://scirp.org/images/weixin.svg",
null,
"https://scirp.org/Images/ccby.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9084853,"math_prob":0.85195446,"size":20617,"snap":"2021-21-2021-25","text_gpt3_token_len":4558,"char_repetition_ratio":0.15529035,"word_repetition_ratio":0.06282246,"special_character_ratio":0.21108794,"punctuation_ratio":0.09763235,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96024686,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-23T09:57:39Z\",\"WARC-Record-ID\":\"<urn:uuid:b1028603-b4f2-4285-b557-aa9298be13b9>\",\"Content-Length\":\"141393\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:92e4c43c-0412-46cc-a28c-44830c602830>\",\"WARC-Concurrent-To\":\"<urn:uuid:05c903a4-886c-4796-9c26-5d22b927c944>\",\"WARC-IP-Address\":\"192.111.37.22\",\"WARC-Target-URI\":\"https://scirp.org/journal/paperinformation.aspx?paperid=81913\",\"WARC-Payload-Digest\":\"sha1:FITL6RMUL6OKIZ3LY6NGWC6P4OET4OJT\",\"WARC-Block-Digest\":\"sha1:LDCHEJTRGESQ4ARSAVUO6NMFHY5BXVEZ\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623488536512.90_warc_CC-MAIN-20210623073050-20210623103050-00139.warc.gz\"}"} |
https://writersestate.wordpress.com/2017/09/19/business-research-paper-help/ | [
"# Business Research paper help\n\nBuy your research paper by clicking http://www.customwritings-us.com/orders.php\n\nEmail us: [email protected]\n\nInstructions: Enter all answers directly in this worksheet. When finished select Save As, and save this document using your last name and student ID as the file name. Upload the data sheet to Blackboard as a .doc, .docx or .rtf file when you are finished.\n\nQuestion 1: (10 points). (Bond valuation) Calculate the value of a bond that matures in 12 years and has \\$1,000 par value. The annual coupon interest rate is 9 percent and the market’s required yield to maturity on a comparable-risk bond is 12 percent. Round to the nearest cent.\n\n The value of the bond is\n\nQuestion 2: (10 points). (Bond valuation) Enterprise, Inc. bonds have an annual coupon rate of 11 percent. The interest is paid semiannually and the bonds mature in 9 years. Their par value is \\$1,000. If the market’s required yield to maturity on a comparable-risk bond is 14 percent, what is the value of the bond? What is its value if the interest is paid annually and semiannually? (Round to the nearest cent.)\n\n a. The value of the Enterprise bonds if the interest is paid semiannually is \\$ b. The value of the Enterprise bonds if the interest is paid annually is \\$\n\nQuestion 3: (10 points). (Yield to maturity) The market price is \\$750 for a 20-year bond (\\$1,000 par value) that pays 9 percent annual interest, but makes interest payments on a semiannual basis (4.5 percent semiannually). What is the bond’s yield to maturity? (Round to two decimal places.)\n\n The bond’s yield to maturity is %\n\nQuestion 4: (10 points). (Yield to maturity) A bond’s market price is \\$950. It has a \\$1,000 par value, will mature in 14 years, and has a coupon interest rate of 8 percent annual interest, but makes its interest payments semiannually. What is the bond’s yield to maturity? What happens to the bond’s yield to maturity if the bond matures in 28 years? What if it matures in 7 years? (Round to two decimal places.)\n\n The bond’s yield to maturity if it matures in 14 years is % The bond’s yield to maturity if it matures in 28 years is % The bond’s yield to maturity if it matures in 7 years is %\n\nQuestion 5: (15 points). (Bond valuation relationships) Arizona Public Utilities issued a bond that pays \\$70 in interest, with a \\$1,000 par value and matures in 25 years. The markers required yield to maturity on a comparable-risk bond is 8 percent. (Round to the nearest cent.) For questions with two answer options (e.g. increase/decrease) choose the best answer and write it in the answer block.\n\n Question Answer a. What is the value of the bond if the markers required yield to maturity on a comparable-risk bond is 8 percent? \\$ b. What is the value of the bond if the markers required yield to maturity on a comparable-risk bond increases to 11 percent? \\$ c. What is the value of the bond if the market’s required yield to maturity on a comparable-risk bond decreases to 7 percent? \\$ d. The change in the value of a bond caused by changing interest rates is called interest-rate risk. Based on the answer: in parts b and c, a decrease in interest rates (the yield to maturity) will cause the value of a bond to (increase/decrease): By contrast in interest rates will cause the value to (increase/decrease): Also, based on the answers in part b, if the yield to maturity (current interest rate) equals the coupon interest rate, the bond will sell at (par/face value): exceeds the bond’s coupon rate, the bond will sell at a (discount/premium): and is less than the bond’s coupon rate, the bond will sell at a (discount/premium): e. Assume the bond matures in 5 years instead of 25 years, what is the value of the bond if the yield to maturity on a comparable-risk bond is 8 percent? \\$ 960.07 Assume the bond matures in 5 years instead of 25 years, what is the value of the bond if the yield to maturity on a comparable-risk bond is 11 percent? \\$ f. Assume the bond matures in 5 years instead of 25 years, what is the value of the bond if the yield to maturity on a comparable-risk bond is 7 percent? \\$ g. From the findings in part e, we can conclude that a bondholder owning a long-term bond is exposed to (more/less) interest-rate risk than one owning a short-term bond.\n\nQuestion 6: (5 points). (Measuring growth) If Pepperdine, Inc.’s return on equity is 14 percent and the management plans to retain 55 percent of earnings for investment purposes, what will be the firm’s growth rate? (Round to two decimal places.)\n\n The firm’s growth rate will be %\n\nQuestion 7: (10 points). (Common stock valuation) The common stock of NCP paid \\$1.29 in dividends last year. Dividends are expected to grow at an annual rate of 6.00 percent for an indefinite number of years. (Round to the nearest cent.)\n\n a. If your required rate of return is 8.70 percent, the value of the stock for you is: \\$ b. You (should/should not) make the investment if your expected value of the stock is (greater/less) than the current market price because the stock would be undervalued.\n\nQuestion 8: (10 points). (Measuring growth) Given that a firm’s return on equity is 22 percent and management plans to retain 37 percent of earnings for investment purposes, what will be the firm’s growth rate? If the firm decides to increase its retention rate, what will happen to the value of its common stock? (Round to two decimal places.)\n\n a. The firm’s growth rate will be: b. If the firm decides to increase its retention ratio, what will happen to the value of its common stock? An increase in the retention rate will (increase/decrease) the rate of growth in dividends, which in turn will (increase/decrease) the value of the common stock.\n\nQuestion 9: (10 points). (Relative valuation of common stock) Using the P/E ratio approach to valuation, calculate the value of a share of stock under the following conditions:\n\n• the investor’s required rate of return is 13 percent,\n• the expected level of earnings at the end of this year (E1) is \\$8,\n• the firm follows a policy of retaining 40 percent of its earnings,\n• the return on equity (ROE) is 15 percent, and\n• similar shares of stock sell at multiples of 8.571 times earnings per share.\n\nNow show that you get the same answer using the discounted dividend model. (Round to the nearest cent.)\n\n a. The stock price using the P/E ratio valuation method is: \\$ b. The stock price using the dividend discount model is: \\$\n\nQuestion 10: (10 points) (Preferred stock valuation) Calculate the value of a preferred stock that pays a dividend of \\$8.00 per share when the market’s required yield on similar shares is 13 percent. (Round to the nearest cent.)\n\n a. The value of the preferred stock is \\$ Per share\n\nBuy your research paper by clicking http://www.customwritings-us.com/orders.php\n\nEmail us: [email protected]"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8858147,"math_prob":0.95295644,"size":6837,"snap":"2019-35-2019-39","text_gpt3_token_len":1651,"char_repetition_ratio":0.17400849,"word_repetition_ratio":0.2544229,"special_character_ratio":0.25171858,"punctuation_ratio":0.11245552,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99542904,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-21T22:04:15Z\",\"WARC-Record-ID\":\"<urn:uuid:c651b8ee-178d-4584-83d6-18c7d983f16c>\",\"Content-Length\":\"147415\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f1839bdc-2922-4f26-800d-a9e3aa635879>\",\"WARC-Concurrent-To\":\"<urn:uuid:f5e714e6-9141-4fe0-8a3c-6efabf820f26>\",\"WARC-IP-Address\":\"192.0.78.12\",\"WARC-Target-URI\":\"https://writersestate.wordpress.com/2017/09/19/business-research-paper-help/\",\"WARC-Payload-Digest\":\"sha1:I56NVRCTBDUHS7OEX6C3EZHYJOIIB424\",\"WARC-Block-Digest\":\"sha1:7JP734JKFLSHMKB33EMCFXTQ75PDVOYM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514574665.79_warc_CC-MAIN-20190921211246-20190921233246-00097.warc.gz\"}"} |
https://zbmath.org/?q=an:0817.35022 | [
"# zbMATH — the first resource for mathematics\n\nIntegral representation of solutions of semilinear elliptic equations in cylinders and applications. (English) Zbl 0817.35022\nFrom the introduction: Let $$(M,\\mu)$$ be a positive measured space and $$L$$ a linear operator of $$L^ 1_ \\mu (M)$$ with dense domain $$\\text{Dom} (L)$$. We consider the following equation in $$\\mathbb R^ + \\times M$$ $\\partial_{tt} u+a \\partial_ t u+ Lu-lu+ f(u)+ \\Phi(t)=0 \\tag{1}$ and the following hypotheses: $$f$$ is a nonnegative locally Lipschitz continuous function defined on $$\\mathbb R^ +$$ vanishing at 0 and satisfying $$\\lim f(u)/u= \\infty$$ (as $$u\\to +\\infty)$$, $$L$$ is a linear $$T$$-dissipative operator of $$L^ 1_ \\mu (M)$$ with domain $$\\text{Dom} (L)$$ and there exists $$\\zeta\\in L^ 1_ \\mu (M)\\cap L^ \\infty_ \\mu (M)$$ such that $$\\zeta\\geq 0$$, $$\\int_ M \\zeta \\,d\\mu =1$$ and $\\int_ M \\zeta L\\omega \\,d\\mu\\geq- \\lambda \\int_ M \\int_ M \\zeta\\omega \\,d\\mu \\qquad (\\forall\\omega\\in \\text{Dom} (L),\\;\\omega\\geq 0 \\text{ a.e.})$ for some constant $$\\lambda\\geq 0$$. We assume, moreover, that $$L$$ is $$m$$-dissipative with dense domain in $$L^ 1_{\\zeta\\mu} (M)$$, where $$L^ 1_{\\zeta\\mu} (M)$$ is the space of $$\\mu$$-measurable functions which are integrable for the measure $$\\zeta\\mu$$. $$\\Phi$$ belongs to $$L^ 1_{\\text{loc}} (\\mathbb R^ +; L^ 1 (M))$$, $$\\Phi\\geq 0$$ and $$a$$ and $$l$$ are constants, $$l>0$$.\nWe call $$(S(t) )_{t\\geq 0}$$ the continuous semigroup of sub-Markovian operators of $$L^ 1_{\\zeta \\mu} (M)$$ generated by $$-(-L+ (a^ 2/4+ l)I)^{1/2}$$. Our main result is the following theorem.\nTheorem. Let $$u$$ belong to $$W^{2,1}_{\\text{loc}} ([0, \\infty); (L^ 1_ \\mu (M)))\\cap L^ 1_{\\text{loc}} ([0, \\infty); \\text{Dom} (L))$$ be a solution of (1) such that $$u\\geq 0$$ a.e. on $$\\mathbb R^ + \\times M$$. Then $$\\int_ \\rho^{\\rho+1} \\int_ M (f(u)+ \\Phi) (t) \\zeta \\,d\\mu \\,dt$$ remain bounded independently of $$\\rho\\geq 0$$ and the following formula is valid $u(t)= e^{-at/2} S(t) u(0)+ \\int_ 0^ t e^{-as/2} S(s) \\int_ 0^ \\infty e^{a\\tau/2} S(\\tau) (f(u)+ \\Phi)(t+ \\tau- s)\\,d\\tau \\, ds$ for any $$t\\geq 0$$. Therefore, $$u$$ belongs to $$L^ \\infty (\\mathbb R^ +; L^ 1_{\\zeta\\mu} (M))$$.\n\n##### MSC:\n 35J61 Semilinear elliptic equations 35C15 Integral representations of solutions to PDEs 58J05 Elliptic equations on manifolds, general theory 35B45 A priori estimates in context of PDEs 47D07 Markov semigroups and applications to diffusion processes\nFull Text:\n##### References:\n Aviles, P., Local behaviour of solutions of some elliptic equations, Communs math. phys., 108, 177-192, (1987) · Zbl 0617.35040 Bidaut-Veron, M.F.; Veron, L., Nonlinear elliptic equations on compact Riemannian manifolds and asymptotics of Emden equations, Invent. math., 106, 489-539, (1991) · Zbl 0755.35036 Gilgarg, D.; Trudinger, N.S., Elliptic partial differential equations of second order, grundleheren math. wiss., Vol. 224, (1983), Springer Berlin Gidas, B.; Spruck, J., Global and local behaviour of positive solutions of nonlinear elliptic equations, Communs pure appl. math., 34, 525-598, (1981) · Zbl 0465.35003 Bidaut-Veron, M.F.; Bouhar, M., On characterization of solutions on some nonlinear differential equations and applications, SIAM J. math. analysis, 25, 859-875, (1994) · Zbl 0807.34050 Balakrishnan, I.V., Fractional powers of closed operators and the semi-groups generated by them, Pacif. J. math., 10, 419-437, (1961) · Zbl 0103.33502 Brezis, H.; Strauss, W.A., Semilinear second order elliptic equation in L1, J. math. soc. Japan, 25, 565-590, (1973) · Zbl 0278.35041 Bandle, C.; Essen, M., On the positive solutions of nonlinear elliptic equations in cone-like domains, Archs ration. mech. analysis, 112, 319-338, (1990) · Zbl 0727.35051 Bardos, C., Problèmes aux limites pour des équations aux dérivées partielles du premier ordre à coefficients réels, Ann. sci. E.N.S., 3, 185-233, (1970) · Zbl 0202.36903 Veron, L., Equations d’évolution semi-linéaires du second ordre dans L1, Rev. roum. math. pura appl., 27, 95-123, (1982) · Zbl 0489.35010 Veron, L., Comportement asymptotique des solutions d’équations elliptiques semi-linéaires dans $$R$$^N, Annali mat. pura appl., 127, 25-50, (1981) · Zbl 0467.35013 Stein, E.M., Boundary behaviour of holomorphic functions of several complex variables, (1972), Princeton University Press Princeton, (Mathematics Notes No. 9) Stein, E.M.; Weiss, G., On the convergence of Poisson integrals, Trans. am. math. soc., 140, 35-54, (1969) · Zbl 0182.10801\nThis reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6706746,"math_prob":0.99972695,"size":5328,"snap":"2021-04-2021-17","text_gpt3_token_len":1850,"char_repetition_ratio":0.11626597,"word_repetition_ratio":0.023195876,"special_character_ratio":0.37387386,"punctuation_ratio":0.21150443,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999479,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-21T14:44:34Z\",\"WARC-Record-ID\":\"<urn:uuid:7e40ed1a-f74f-4009-b76c-0f7340bd81b7>\",\"Content-Length\":\"54445\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:53483717-f6b2-43da-96bb-dbd6a8f5ddad>\",\"WARC-Concurrent-To\":\"<urn:uuid:dc7d39ec-713d-4bf4-b069-faf7b137e2ab>\",\"WARC-IP-Address\":\"141.66.194.2\",\"WARC-Target-URI\":\"https://zbmath.org/?q=an:0817.35022\",\"WARC-Payload-Digest\":\"sha1:FSFR775D36UA6Q5VPUCQDQJRSW7NEKVN\",\"WARC-Block-Digest\":\"sha1:WN3NU5CFJNMXXNADH6OHPNM4KTQ6BLIP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703524858.74_warc_CC-MAIN-20210121132407-20210121162407-00745.warc.gz\"}"} |
https://online.2iim.com/CAT-question-paper/CAT-2020-Question-Paper-Slot-3-Quant/quants-question-25.shtml | [
"# CAT 2020 Question Paper | Quants Slot 3\n\n###### CAT Previous Year Paper | CAT Quants Questions | Question 25\n\nThis question is from Co-ordinate Geometry. Take Geometry, add one unit of algebra; take a diagram, explain it with x's and y's - you get Co-ordinate Geometry. For the purists, it is geometry without the romance, for the pragmatists it is Geometry with expanded scope. CAT exam does test one on ideas from Coordinate Geometry once in a while. It is important to cover ideas from Coordinate Geometry in your CAT Preparation.\n\nQuestion 25 : The points (2 , 1) and (-3 , -4) are opposite vertices of a parellelogram. If the other two vertices lie on the line x + 9y + c = 0, then c is\n\n1. 15\n2. 13\n3. 14\n4. 12\n\n## Best CAT Coaching in Chennai\n\n#### CAT Coaching in Chennai - CAT 2022Limited Seats Available - Register Now!\n\nThe line x + 9y + c = 0, forms one of the diagonal of the parallelogram.\nThe other diagonal connects the points (2, 1) and (-3, -4).\n\nThe diagonals of a parellogram bisect each other, hence the line x + 9y + c = 0 should pass hrough the midpoint of the line joining (2, 1) and (-3, -4).\n\nThe midpoint of the line joining (2, 1) and (-3, -4) = ( $$frac{2-3}{2}$ , $\\frac{1-4}{2}$ ) =$ $$frac{-1}{2}$ , $\\frac{-3}{2}$ ) =$-0.5 , -1.5)\nTherefore, the line x + 9y + c = 0 passes through (-0.5 , -1.5),\nHence, (-0.5) + 9(-1.5) + c = 0\n(-0.5) + 27(-0.5) + c = 0\n28 (-0.5) + c = 0\n-14 + c = 0\nc = 14.\n\nThe question is \"The points (2 , 1) and (-3 , -4) are opposite vertices of a parellelogram. If the other two vertices lie on the line x + 9y + c = 0, then c is\"\n\n##### Hence, the answer is, \"14\"\n\n###### CAT Coaching in ChennaiCAT 2023\n\nClassroom Batches Starting Now! @Gopalapuram\n\n###### Best CAT Coaching in Chennai Introductory offer of 5000/-\n\nAttend a Demo Class\n\n##### Where is 2IIM located?\n\n2IIM Online CAT Coaching\nA Fermat Education Initiative,\n58/16, Indira Gandhi Street,\nKaveri Rangan Nagar, Saligramam, Chennai 600 093\n\n##### How to reach 2IIM?\n\nMobile: (91) 99626 48484 / 94459 38484\nWhatsApp: WhatsApp Now\nEmail: [email protected]"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.79052407,"math_prob":0.98576206,"size":2477,"snap":"2023-14-2023-23","text_gpt3_token_len":756,"char_repetition_ratio":0.11362717,"word_repetition_ratio":0.2079646,"special_character_ratio":0.32054904,"punctuation_ratio":0.13265306,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99417406,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-06T09:59:40Z\",\"WARC-Record-ID\":\"<urn:uuid:7dc35712-6b00-45e0-9c74-9b698074850f>\",\"Content-Length\":\"56673\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:974c308c-536e-4bd9-b6f3-46326f187247>\",\"WARC-Concurrent-To\":\"<urn:uuid:5b2a55af-a63f-41f5-980f-5559c5969101>\",\"WARC-IP-Address\":\"139.59.53.99\",\"WARC-Target-URI\":\"https://online.2iim.com/CAT-question-paper/CAT-2020-Question-Paper-Slot-3-Quant/quants-question-25.shtml\",\"WARC-Payload-Digest\":\"sha1:PWXVMFNNG7AC4GXOZUTJ5ZZNNQ7EXYLY\",\"WARC-Block-Digest\":\"sha1:HP7NOAEOAOIBSTJKXB2BXOIW4XBQWGL5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224652494.25_warc_CC-MAIN-20230606082037-20230606112037-00039.warc.gz\"}"} |
https://www.kofastudy.com/courses/jss2-mathematics-1st-term/lessons/significant-figures-s-f-week-8/topic/whole-number-quantities/ | [
"Lesson 8, Topic 3\nIn Progress\n\n# Whole Number & Quantities\n\nLesson Progress\n0% Complete\n\nTo round off a number or quantity to the nearest whole number or to the nearest of that quantity, compare the last digit before the decimal point and the first digit after the decimal point.\n\nExample 1\n\nRound off the following to the nearest whole number.\n\na. 6.2009\n\nb. 524.89\n\nc. 0.952\n\nSolution\n\na. 6.2009\n\n= 6 to the nearest whole number\n\nb. 524.89\n\n= 525 to the nearest whole number\n\nc. 0.952\n\n= 1 to the nearest whole number\n\nExample 2\n\nRound off each of the following;\n\na. 42.90cm to the nearest cm = 43cm\n\nb. 7.3m to the nearest m = 7m\n\nc. 3975km to the nearest 10km = 3975km = 3980km\n\nd. 6506g to the nearest 100g = 6506g = 6500g",
null,
"error:"
] | [
null,
"https://www.kofastudy.com/kike/wpfront-scroll-top/images/icons/114.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.76150465,"math_prob":0.9984648,"size":717,"snap":"2021-43-2021-49","text_gpt3_token_len":239,"char_repetition_ratio":0.2398317,"word_repetition_ratio":0.02962963,"special_character_ratio":0.34588563,"punctuation_ratio":0.14375,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99751866,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-11-30T15:10:55Z\",\"WARC-Record-ID\":\"<urn:uuid:75142bc0-6223-4d09-bf26-e8dc4f45aed9>\",\"Content-Length\":\"422009\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f044b52c-b588-4269-bd00-d43c3875e996>\",\"WARC-Concurrent-To\":\"<urn:uuid:5d76f22d-0ac5-43e5-ba0e-f3d610fda010>\",\"WARC-IP-Address\":\"95.179.192.21\",\"WARC-Target-URI\":\"https://www.kofastudy.com/courses/jss2-mathematics-1st-term/lessons/significant-figures-s-f-week-8/topic/whole-number-quantities/\",\"WARC-Payload-Digest\":\"sha1:I7ZDIEKGURC6HR236TQMBR5QMZ5GG2MM\",\"WARC-Block-Digest\":\"sha1:IGDUHHUNEDON5TR5FJLRS7XG3YXZOLJS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964359037.96_warc_CC-MAIN-20211130141247-20211130171247-00319.warc.gz\"}"} |
http://www.electronic-circuits-diagrams.com/capacitors-tutorials/ | [
"•\n•\n\n# Capacitors Tutorials\n\n### Chapter 2: Capacitor\n\nJan 26th 2001 , Naveen P N\n\nCapacitors in electronics are like storage tanks. They store charges. Basically a capacitor is made up of 2 parallel plates separated by a very small gap, but there are other ways of making a capacitor too. A capacitor has two terminals.\n\nIt is charged by applying a voltage across its terminals, and discharged by shorting the two terminals. When a capacitor is charged, a voltage develops across its terminals even when the charging source is removed. The voltage across the terminals of a capacitor is related to the amount of charge stored in it by the relation:\n\nVoltage = Charge/Capacitance\n\nor\n\nV = Q / C\n\nWhere C is the capacitance of the capacitor and is measured in a unit called ‘farad’, denoted by F. If 1V is developed across the terminals of a capacitor by the storage of 1C(coloumb) of charge then its capacitance is 1 F.\n\nUsually a farad is a large value and for most of the applications the value is expressed in terms of microfarads, nanofarads or picofarads.\n\n1 microfarad ( uF ) = 10e-6 or 1/1000000th of a farad.\n\n1 nanofarad ( nF ) = 10e-9 farad OR 1/1000th of a mic\n\n( in uF above, ‘u’ is the greek letter mu and not the english letter ‘u’ )\n\n#### Types of capacitors:\n\nThere are many types of capacitors but the two main types are:\n\n– non-electrolytic\n\n– electrolytic\n\nNon-Electrolytic capacitors are non-polarised, i.e they can be connected either way in a circuit without having to worry about + & -. The most common is the disc-type capacitor that we normally use in electronics. The other types are ceramic, mica etc. In almost all applications we use the disc-type capacitor which is brown in color and has the shape of a disc. Its value ranges between about a few pF to as high as 1uF. ( You also get non-polarised capacitors of higher values and such capacitors have ‘NP’ written on them indicating Non Polarised)\n\nElectrolytic capacitors are polarised and they are supposed to be connected in a specific way in the circuit. Their + and – terminals have to coincide with that specified in the circuit. They are much bulkier than the non-electrolytic type and hence have to be avoided when possible. They are used only if very high capacitance values are needed. Also the electrolytic capacitors are not very stable regarding their value i.e. their values change slightly with the temperature and other physical parameters. The non-electrolytic capacitors are relatively stabler. Electrolytic capacitors are available usually 1uF and upwards upto about 4700uF!\nThey are much costlier than the non-electrolytic capacitors.\nCAUTION: Connecting an electrolytic capacitor in the wrong polarity may lead to an explosion! ( electrically controlled firecrackers? )\n\n#### Circuit Symbol:\n\nThe symbol used for electrolytic and non-electrolytic capacitors are different.\n\nThe symbol for non-electrolytic capacitor is:",
null,
"where the dark lines indicate the two plates, and the thin lines represent the two terminals.\n\nThe symbol for electrolytic capacitor is:",
null,
"The terminal marked as + is the positive (or +) terminal and the other (unmarked) terminal is the negative or – terminal. When + is not indicated, the terminal near the curved line is assumed to be negative. On the actual electrolytic package, the negative terminal is usually indicated by a black line with arrows pointing towards the negative terminal.\n\nWhen you buy a capacitor at a store, you have to specify 3 things:\n\n1) Electrolytic/ non-electrolytic.\n2) Capacitance.\n3) Max. tolerable voltage.\n\n1) Electrolytic/non-electrolytic: This depends usually on the value of the capacitance. If its less then 1uF then go for non-electrolytic and for higher values use electrolytic.\n2) As mentioned above, the value of a capacitor or its capacitance id specified in uF / nF / pF\n3) All capacitors have a max. voltage specified and which is the max. voltage that can be applied across its terminals. If a higher voltage is applied, it may damage the capacitor.\n\nThe max. voltage for a non-electrolytic capacitor is usually a few hundred volts and is not specified in circuit diagrams since this voltage is much higher than the supply voltage of many electronic circuits.\nFor the electrolytic capacitors, the max. voltage is almost always specified in the circuit. If its not specified, assume it to be a little higher than the supply voltage to the circuit. For example if circuit operates at 12V then the electrolytic capacitors can be purchased with a max. voltage of about 16V.Note here that as the max. voltage increases the cost of the capacitor also increases.\n\n#### How to read a capacitor’s value?\n\nNon-Electrolytic:\nSome capacitors have their values printed in them. Unfortunately, there are various formats for printing the values and only a few can be discussed here:\n\n1) If the printed value is like 101,102,103,204 etc then the value of the capacitor= (first 2 digits X 10 raised to the 3rd digit) pF.\nFor example if the value is 104 then capacitance = ( 10X 10e4) pF = 10e5 pF = 10e-7 F = 0.1uF\nRemember a few of them: 104 = 0.1uF , 224=0.22uF , 103= 0.01uF, 102= 0.001uF\n\n2) If the printed value is like 1K5, 100,220,10K etc,\nThen capacitance = (printed value) pF\nFor example if the value is 10K then capacitance = 10K pF = 10X10e3 pF= 10e-8 F= 0.01uF\n1K5 means 1.5K pF and so on.\n\nElectrolytic:\nFortunately, both the capacitance and the max. voltage are both printed on the electrolytic in plain English!\n\nThere are a few capacitance available with color band coding like in the resistors, but their value is in pF and has to be multiplied by 10e-12 after de-coding the value.\n\n#### Capacitors in Series and Parallel:\n\nThis is very similar to the resistors except the formula for series and parallel connections are interchanged.\nIf Cab is the effective capacitance of a series or parallel combination then, it is given by.\n\n1/Cab(series) = 1/C1 + 1/C2 + 1/C3 where C1,C2,C3 are individual capacitances.\n\nCab(parallel) = C1 + C2+ C3\n\n#### Variable Capacitors(trimmers):\n\nVariable capacitors are available only for very small values like pF and should be normally avoided. They, like variable resistors have three terminals for the same reasons as discussed in the chapter on resistors.\n\nThe main use of variable resistors are in the radio and are used for tuning.\n\nThe circuit symbol of variable capacitor is:",
null,
"Applications of capacitors:\n\nCapacitors are as indispensable as the resistor in electronics. You can find them in almost every electronic circuit. They are used mainly in delay circuits like timers, noise suppression(smoothing) , oscillators to name a few.\n\nThe capacitor is used to:\n1. Block DC\n2. Pass AC\n3. Store charges.\n\n#### Formulae to memorize:\n\n1) V=Q/C\n\n2) C(parallel)= C1+C2\n\n3) C(series)= (C1*C2)/(C1+C2)\n\nby Naveen P N\n\n\\$9600.0",
null,
"\\$50.0",
null,
"\\$1099.0",
null,
"\\$389.0",
null,
"\\$499.0",
null,
"",
null,
""
] | [
null,
"http://electronic-circuits-diagrams.com/tutorials/csymbol.gif",
null,
"http://electronic-circuits-diagrams.com/tutorials/cpolsymbol.gif",
null,
"http://electronic-circuits-diagrams.com/tutorials/cvarsymbol.gif",
null,
"http://www.electronic-circuits-diagrams.com/store/img/m8Eo8tNUncg9umlumnhZzyA/140/2012-Viora-Reaction-Radio-Frequency-Laser-for-Body.jpg",
null,
"http://www.electronic-circuits-diagrams.com/store/img/m802cJ-d9GWIPizsA5BdfRA/140/Pair-Vintage-Weston-model-401-Radio-Frequency-Pane.jpg",
null,
"http://www.electronic-circuits-diagrams.com/store/img/mnsQtVZBYZR_ddbJgV1INNQ/140/6in1-Ultrasonic-Cavitation-Radio-Frequency-Slim-Ma.jpg",
null,
"http://www.electronic-circuits-diagrams.com/store/img/m4JxZ8iXm5FSKuUe8aWXonA/140/Best-Cavitation-Machine-5-in-1-RF-Radio-Frequency-.jpg",
null,
"http://www.electronic-circuits-diagrams.com/store/img/mv3yjyPyQdvWtrRGTe0pr7Q/140/Fractional-RF-Radio-Frequency-Dot-Matrix-Cooling-S.jpg",
null,
"http://www.electronic-circuits-diagrams.com/store/e.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.89557046,"math_prob":0.9523568,"size":6834,"snap":"2019-51-2020-05","text_gpt3_token_len":1723,"char_repetition_ratio":0.18740849,"word_repetition_ratio":0.012152778,"special_character_ratio":0.23456249,"punctuation_ratio":0.104055084,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9846402,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18],"im_url_duplicate_count":[null,5,null,5,null,5,null,1,null,1,null,1,null,1,null,1,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-15T01:01:12Z\",\"WARC-Record-ID\":\"<urn:uuid:c953b3ec-ea2e-472c-a492-0d56f67334ac>\",\"Content-Length\":\"78680\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ad865d11-c9ce-46d6-bbe3-31aea17be9d9>\",\"WARC-Concurrent-To\":\"<urn:uuid:897461bb-e8ad-454a-8f83-c11a515b3fb9>\",\"WARC-IP-Address\":\"208.117.87.195\",\"WARC-Target-URI\":\"http://www.electronic-circuits-diagrams.com/capacitors-tutorials/\",\"WARC-Payload-Digest\":\"sha1:DU42WGIYIIXH7R4OVFG2XELT3ILUZHRD\",\"WARC-Block-Digest\":\"sha1:DZIEM5EDECFU22PECRTRKZ7KTT7JFZAK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575541297626.61_warc_CC-MAIN-20191214230830-20191215014830-00388.warc.gz\"}"} |
https://ask.libreoffice.org/en/question/88389/how-do-i-set-colors-of-datarows-in-a-linechart-by-macro/ | [
"# how do i set colors of datarows in a linechart by macro?\n\ni can set axis colors like this: Chart.Diagram.XMainGrid.LineColor = RGB(192, 192, 192) (where Chart is ThisComponent.Sheets(0).Charts(0).EmbeddedObject) but how do i set the colors of the datalines ?\n\ni've found \"interface XColorScheme\" with \"getColorByIndex\" in the docs but calling it like \"Chart.Diagram.getColorByIndex(1)\" gives me an error, not sure if \"scheme\" is the right thing to use in this case either.\n\nedit retag close merge delete\n\nSort by » oldest newest most voted",
null,
"You can set line colors with:\n\n 'Get first chart on specified sheet'\noChart = thisComponent.Sheets.getByName(\"YOUR_SHEET\").getCharts().getByIndex(0)\noEmbeddedObject = oChart.getEmbeddedObject()\noFirstDiagram = oEmbeddedObject.getFirstDiagram()\noCoordinateSystems = oFirstDiagram.getCoordinateSystems()\noXCoordinateSystem = oCoordinateSystems(0)\noChartTypes = oXCoordinateSystem.getChartTypes()\noXChartType = oChartTypes(0)\noDataSeries = oXChartType.getDataSeries()\n'Index indicates which line you are working with'\noXDataSeries = oDataSeries(0)\n'Line color to be used'\noXDataSeries.Color = RGB(192, 192, 192)\n\nmore"
] | [
null,
"https://ask.libreoffice.org/upfiles/avatars/Ratslinger/resized/32/Haleakala.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.64198333,"math_prob":0.8033591,"size":470,"snap":"2019-26-2019-30","text_gpt3_token_len":128,"char_repetition_ratio":0.105150215,"word_repetition_ratio":0.0,"special_character_ratio":0.24893618,"punctuation_ratio":0.15625,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9578547,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-20T11:44:01Z\",\"WARC-Record-ID\":\"<urn:uuid:4a94b57a-56cb-44ec-b8b0-cd72ac50e634>\",\"Content-Length\":\"58580\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e8a092bc-7b71-46e0-ad6d-98314c9cf0a1>\",\"WARC-Concurrent-To\":\"<urn:uuid:18c66115-817c-4cda-86ef-cb11098dc074>\",\"WARC-IP-Address\":\"89.238.68.146\",\"WARC-Target-URI\":\"https://ask.libreoffice.org/en/question/88389/how-do-i-set-colors-of-datarows-in-a-linechart-by-macro/\",\"WARC-Payload-Digest\":\"sha1:GBE5OZZWERWSPX56DFKWCSP64QEEP3MU\",\"WARC-Block-Digest\":\"sha1:IMFQE5D7ASDP2UR2WBOGZKCACVPAF5JG\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195526508.29_warc_CC-MAIN-20190720111631-20190720133631-00039.warc.gz\"}"} |
https://briefbriefing.com/2021/04/13/analysis-of-my-tesla-financial-model-on-github/ | [
"# Analysis of My Tesla Financial Model on GitHub",
null,
"Environment\n\nYesterday, I got some good feedback on my post about releasing a Tesla Financial Model on GitHub*.\n\nIt occurred to me an analysis of my Tesla financial model would be useful. I can’t expect everyone to download it. I was able to work and create a simple macro that copied and pasted the Monte Carlo values 5,000 times. It’s not the prettiest approach, and it takes a minute to run, but it works.",
null,
"### Line-by-line assumptions\n\nLet’s go through line by line.\n\nEV Revenue growth rate — This is how much electric vehicle revenue can grow year over year. The limits I set were -20% and 80%. Over many runs, the average should be 30% growth year over year.\n\nEGS Revenue growth rate — This is how much Tesla’s Energy Generation and Storage revenue can grow year over year. The limits I set were 0% and 100%. Over many runs, the average should be 50% growth year over year. EGS revenue is expected to grow faster than EV revenue, since it is starting from a smaller base.\n\nWright’s Law decreases ASPs by — Every single time cumulative production doubles, Wright’s Law will step in and reduce revenue between 10% and 25%. (In reality, it is more of a gradual change.) In this run, the randomly generated value was 14.8%. The average over many runs should be 18%. Note that 20–25% was reported by researchers over time for Li-ion batteries. The value independently applies for EV and EGS growth, but EV and EGS may double cumulative production in different years.\n\nFree Cash Flow growth rate — Free Cash Flow is cash flow from operations minus any capital expenditures. It is “what cash is remaining” after subtracting costs to invest in the business. Future Free Cash Flow is discounted back to the present to produce an estimate of a stock’s value. Over time, -4.8% and 5% will offset each other, or one is the inverse growth of the other (1.05 * 0.952 is close to 1).\n\nInterest rate for NPV calculations — This number determines how much to discount future cash flow back to the present. I picked 2% to 8%, which is the same as ARK’s model, and I thought that made sense. In this simulation run, the independent, random variable gave an interest rate of 7.9%.\n\nEV Revenue (millions) — Tesla’s full year EV revenue for 2020.\n\nEV Produced — Tesla’s full year number of EVs produced for 2020. Both are from SEC filings and quarterly press releases.\n\nEGS Revenue (millions) — Tesla’s full year Energy Generation and Storage revenue for 2020.\n\nEnergy Generation (GWh) — Tesla’s full year EGS in GWh for 2020.\n\nCumulative EGS Produced (GWh) — I fudged this number. It’s greater than 3. I haven’t gone through the filings to update this number yet for cumulative production. I am using the Energy Generation numbers from 2020. This is a large simplification of utility Megapacks, residential Powerwalls, and residential solar and solar roofs.\n\nFree Cash Flow (millions) — This is how much Free Cash Flow Tesla had in 2020. It works out to be close to 9.5% of Tesla’s revenue in 2020. This is from its 2020 Q4 shareholder deck.\n\n### How does my model work?",
null,
"Tiny! Zoom in 200 to 250% for a better view.\n\nMy model works simply. Every year, a revenue growth number is generated between the revenue bounds we set on the model assumption page. This is a random number that is weighted between the lower and upper bounds. This growth factor is applied to our starting revenue for each area. The same percentage growth is applied to EV and EGS starting production numbers. This yearly production is added to the prior year’s cumulative growth. When the cumulative growth from our starting values is greater than multiples of 2, my model will discount revenue by Wright’s Law on the model assumption tab. CleanTechnica reader “Actually Thoughtful” pointed out that Tesla may be able to reduce revenue less than how much costs drop due to Wright’s Law, and keep the extra revenue for itself. This is a good point, and I’ll have to figure out how to model that difference. It’s not present in the above model. Wright’s Law and revenue generation are independent for now. I assume that revenue goes down by the Wright’s Law cost difference every time cumulative production doubles.\n\nThe total revenue for both areas is generated. We randomly generate a free cash flow–to–revenue factor, which is added to our starting free cash flow–to–revenue factor of 9.5%. My thought process here is we don’t know the path of Tesla’s free cash flow over time. It depends on how much they invest in the business and how successful they are in converting revenue to free cash flow. It can be wildly positive or even negative. Again, to keep things simple, both areas are independent for now. This generates a yearly free cash flow value, which is discounted by our interest rate of 7.9%.\n\nA perpetuity is a stream of constant cash flow that continues forever. To get the present value of a perpetuity, you take Tesla’s free cash value from 2030 and divide by the interest rate. Then you divide by (1.079)^10 to bring that value back to present day 2021. In our example above, Tesla’s net present value (NPV) over the next 10 years is \\$75 billion and the post-2030 value is \\$96 billion, giving us a total estimated value of \\$171 billion. Net present value is how you take a stream of cash going in and out, and determine its value in the present day.\n\n### How good is the model?\n\nI may be overstating things, but the model seems decent. I know it’s wrong, but it’s still useful.",
null,
"The model averages for individual variables is very close to their predicted average, over 5000 runs. I bet if I changed the free cash flow growth rate to multiply together, we would get it closer to 0%. I’ll make that improvement next time I re-run the numbers.",
null,
"Thank God, Vijay, a chart! Hard to read, and my eyes are watering, but we’ll take it.\n\nThis is the distribution of the Tesla model’s discounted Net Present Value. And here are some high-level statistics:\n\n Average \\$395,670.67 Standard Deviation \\$511,487.20 Maximum NPV (thousands) \\$5,976,195.02 Minimum NPV (thousands) (\\$1,064,735.57) Count below \\$0 483 Count above \\$1T 444\n\nI don’t know why, but I think a gamma model would be a good fit, perhaps because it’s easier to calculate. 🙂 A lognormal distribution looks like it would work, too.\n\nAs a reference, as of the close of the US market on 4/12/2021, Tesla’s value according to Google Finance was \\$673.8 billion.\n\n### What does the model tell us?\n\nI found it interesting to look at what model values created the maximum and minimum NPVs.\n\nThe maximum was run #582. In this example, Wright’s Law was 12%, interest rates were 2%, Tesla produced 22.7 million vehicles in 2030 and 242 GWh in EGS. The free cash flow–to–revenue ratio was 22%, giving us a value close to \\$6 trillion.\n\nThe minimum was run #1886. Here, Wright’s Law was 17%, interest rates were 2%, Tesla produced 20.7 million vehicles in 2030 and 146 GWh in EGS, but the free cash flow–to–revenue ratio was -6.5%, giving us a value of close to negative \\$1 trillion.\n\nRun #663 was close to the average. Wright’s Law was 18%, interest rates were 6%, Tesla produced 11.2 million vehicles in 2030 and 139 GWh in EGS, and the free cash flow–to–revenue ratio was 11.8%, slightly above where we ended 2020. This gave us a value of \\$395.6 billion.\n\nRun #503, with \\$673 billion, is close to Tesla’s current value. Wright’s Law was 20%, interest rates were 2%, Tesla produced 3 million vehicles in 2030 and 236 GWh in EGS, and the free cash flow–to–revenue ratio was 16.8%, well above where we ended 2020.\n\nWhat is clear is that Tesla has multiple paths to justify its current value.\n\nWhat were the best scenarios? They were future paths where Wright’s Law was less than 18% cost reduction, there were low interest rates below 5%, and there were high free cash flow rates as a percentage of revenue — above 9.5%. Stocks are supposed to be valued on future free cash flow, discounted to the present. EVs as a percentage of revenue were similar to our current ratio. This makes intuitive sense. Looking at the average of the above scenarios, I get \\$1.03 trillion in value.\n\nThe worst scenarios were a future of high interest rates, fast production cost declines of Wright’s Law of more than 18%, and low to negative free cash flow. This makes sense too. Tesla will find it tough to scale revenue when production doubles quickly, and revenue plus costs drop by 20% or more each double. Higher interest costs reduce the value of future free cash flow. If you couple that with lower or negative free cash flow rates, this is problematic for future value. When I looked at scenarios where Wright’s Law was above 18% cost reduction, interest rates were above 5%, and there were low free cash flow rates as a percentage of revenue (below 9.5%), I got \\$73.6 billion in value. (Don’t kill me! That’s what the model says, not what I believe.)\n\nThe difference in current value and average model value has to be made up of higher EV and EGS growth rates than modeled, higher than expected FCF rates, and lower than expected Wright’s Law and interest rates. The first seems a lock, based on Q1 2021 production numbers, and the other three factors are unknown and will take time to be known. The big wildcard would be expected autonomy and outside revenue, which is what I plan to model next. This is currently a large difference from current value, which gave me pause. Is much of the growth from autonomy and high EV growth baked in already? FSD 9.0 Beta renewed my optimism that perhaps autonomy is closer than we thought. Once we get the Q1 financials, I’ll weight the numbers based on my model average and the actual numbers, run the model again, and then do so once more with autonomy baked in.\n\nYou can see the numbers from my model on GitHub here, under Tesla model.xlsm.\n\nLet me know your thoughts below. I can take it. I appreciate your honesty of how much of a noob I am in data modeling and with GitHub.\n\nNote: Author purchased shares on 4/12/2021, nothing here is financial advice, more entertainment than anything. Please due your own due diligence with a proper financial advisor before making any investments, etc., etc.\n\n*GitHub, not Git. I’m learning.",
null,
"Appreciate CleanTechnica’s originality? Consider becoming a CleanTechnica Member, Supporter, Technician, or Ambassador — or a patron on Patreon.\n\nHave a tip for CleanTechnica, want to advertise, or want to suggest a guest for our CleanTech Talk podcast? Contact us here."
] | [
null,
"https://cleantechnica.com/files/2019/04/Tesla-seat-factory-fresh-13.jpg",
null,
"https://cleantechnica.com/files/2021/04/Tesla-model-assumptions-12.04.2021.jpg",
null,
"https://cleantechnica.com/files/2021/04/Tesla-model-assumptions-2-12.04.2021.jpg",
null,
"https://cleantechnica.com/files/2021/04/Tesla-model-assumptions-3-12.04.2021.jpg",
null,
"https://cleantechnica.com/files/2021/04/Tesla-model-assumptions-4-12.04.2021.jpg",
null,
"https://cleantechnica.com/files/2021/04/cleantechnica-1.1.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9546491,"math_prob":0.84382284,"size":10115,"snap":"2022-27-2022-33","text_gpt3_token_len":2405,"char_repetition_ratio":0.13332015,"word_repetition_ratio":0.042134833,"special_character_ratio":0.2495304,"punctuation_ratio":0.115983024,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9587816,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,2,null,3,null,3,null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-29T21:37:17Z\",\"WARC-Record-ID\":\"<urn:uuid:ed191898-6516-4d30-b77d-347f1b301397>\",\"Content-Length\":\"78644\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3a6fbf86-011d-49e0-b695-19a745e50571>\",\"WARC-Concurrent-To\":\"<urn:uuid:29fe0c64-9dcd-47f4-b614-5f30f85474d2>\",\"WARC-IP-Address\":\"104.21.46.214\",\"WARC-Target-URI\":\"https://briefbriefing.com/2021/04/13/analysis-of-my-tesla-financial-model-on-github/\",\"WARC-Payload-Digest\":\"sha1:D5ZOV37JW33624HLHRMHMPTPSQUGKWT7\",\"WARC-Block-Digest\":\"sha1:LUGF73YEAJYGGDKOBDNPBSS4DQYAM5L2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103645173.39_warc_CC-MAIN-20220629211420-20220630001420-00046.warc.gz\"}"} |
https://quantumcomputing.stackexchange.com/questions/18521/what-does-it-mean-that-1-qubit-can-do-the-job-of-1-ebit-entanglement-bit-sec | [
"# What does it mean that 1 qubit can do the job of 1 ebit (entanglement bit)? (second Bennett's law)\n\nI just came across Bennett's laws and I wonder what the second law mean. It states that 1 qubit \"can do the job\" of 1 ebit. However, the definition of ebit (entanglement bit, wiki just refers it to the Bell state) and the notion of \"can do the job\" are unclear. Can you clarify more rigorously what the second Bennett's law says?\n\nWithout sacrificing any generality we can define an ebit as a Bell state $$\\frac{1}{\\sqrt{2}}(|00\\rangle + |11\\rangle)$$ shared between two parties $$A$$ and $$B$$, and since we're concerned with communicating information each party gets to use any many local operations as they want.\n\nThen $$\\text{1 qubit} \\geq \\text{1 ebit }$$\n\ncan be understood as \"transmitting a qubit between $$A$$ and $$B$$ allows you to share an ebit over $$AB$$. The protocol is simple: $$A$$ prepares a bell state and transmits one of the two qubits to $$B$$. This consumes $$\\text{1 qubit}_\\rightarrow$$ and results in $$\\text{1 ebit}$$ shared between the parties, where the subscript indicates that we only considered one-way transmission from $$A \\rightarrow B$$ of the qubit. But since the same is true if we switch roles, we also have $$\\text{1 qubit}_\\leftarrow \\geq \\text{1 ebit}$$ and we can generally drop the arrows and recover law #2. This interpretation also applies to the other laws:\n\n(1) $$\\text{1 qubit}_\\rightarrow \\geq \\text{1 cbit}_\\rightarrow$$ means you can transmit one cbit in one direction by the following protocol: $$A$$ prepares a qubit in either $$|0\\rangle$$ or $$|1\\rangle$$, transmits it to $$B$$, and then $$B$$ performs a computational basis measurement.\n\n(3) $$\\text{1 ebit} + \\text{1 qubit}_\\rightarrow \\geq \\text{2 cbits}_\\rightarrow$$ means shared entanglement plus one-way transmission of a qubit can be used to transmit two cbits in the same direction (superdense coding).\n\n(4) $$\\text{1 ebit} + \\text{2 cbits}_\\rightarrow \\geq \\text{1 qubit}_\\rightarrow$$ means shared entanglement plus one-way transmission of two cbits can be used to transmit a qubit of information (quantum teleportation).\n\nNote since all of these protocols can be performed in the opposite direction then the arrows indicating $$A$$ transmitting to $$B$$ aren't strictly necessary and they can be omitted. But they help clarify that in any of the above protocols information flows in a specific direction; for example it would be very strange to see a statement like $$\\text{1 qubit}_\\rightarrow \\geq \\text{1 cbit}_\\leftarrow$$\n\n• Ok, I am trying to frame it in context of transmitting information via some quantum (or even any) channel. I have problem with the notion of \"sharing\" ebit (or anybit). More precisely, I wonder if $\\text{1 anybit}\\geqslant\\text{1 anybit}_\\rightarrow$ and if $\\text{1 anybit}_\\leftarrow+\\text{1 anybit}_\\rightarrow=\\text{1 anybit}$. The first one is intuitive, if we share the same $\\text{1 anybit}$ and if you send me your $\\text{1 anybit}$ then I as well may look at my $\\text{1 anybit}$. However, if we swap $\\text{anybits}$ do we share $\\text{1 anybit}$ or $\\text{2 anybits}$? Jul 21, 2021 at 19:33\n• Or does the sharing only have sense in the context of $\\text{ebits}$ by entanglement? Jul 21, 2021 at 19:39\n• The sharing only really makes sense in terms of an $\\text{ebit}$, because its the only resource that we define as having been split between both systems - a bell state won't let us do quantum teleportation if $A$ holds both of its qubits. Jul 21, 2021 at 20:25\n• maybe I don't understand the other statements; however a statement like $\\text{1 qubit} \\geq \\text{1 qubit}_\\rightarrow$ doesn't really parse without explicitly saying which direction the qubit on the LHS is transmitted; meanwhile $\\text{1 qubit}_\\rightarrow + \\text{1 qubit}_\\leftarrow$ actually requires two uses of a quantum channel so that equality you gave doesn't work. These kinds of statements are made in order to count the uses of a classical or quantum channel are required for a protocol (or composition of protocols) so the number of transmitted (qu)bits is the relevant quantity. Jul 21, 2021 at 20:34\n\nSimilarities ebit and qubit:\n\nAn ebit is one unit of bipartite entanglement, the amount of entanglement that is contained in a maximally entangled two-qubit state (Bell state).\n\nRequirement:\n\nIf a state is said to have X ebits of entanglement (quantified by some entanglement measure) it has the same amount of entanglement (in that measure) as X Bell states. If a task requires Y ebits, it can be done with Y or more Bell states, but not with fewer.\n\n\"Can do the job\"\n\nThere must also always be at least as many ebits as there are qubits. This is what is meant by \"can do the job\".\n\n• \"There must also be at least as many qubits as there are ebits\". Actually, I would believe that is the case (this in fact motivated me to post the question), but the law actually states the converse. Jul 21, 2021 at 18:52\n• Thanks for the hint, I got it mixed up. Let me correct that. Jul 21, 2021 at 18:55"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8777341,"math_prob":0.99524665,"size":5158,"snap":"2022-27-2022-33","text_gpt3_token_len":1392,"char_repetition_ratio":0.15424913,"word_repetition_ratio":0.019184653,"special_character_ratio":0.26405585,"punctuation_ratio":0.071283095,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9980351,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-09T17:17:13Z\",\"WARC-Record-ID\":\"<urn:uuid:02571956-395f-4273-ab7b-7f127c3cb99c>\",\"Content-Length\":\"243106\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8d7a7f4c-76eb-4b3f-a420-b4e780751ce7>\",\"WARC-Concurrent-To\":\"<urn:uuid:605238b1-6bc6-44cc-a5ce-a1900d82115a>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://quantumcomputing.stackexchange.com/questions/18521/what-does-it-mean-that-1-qubit-can-do-the-job-of-1-ebit-entanglement-bit-sec\",\"WARC-Payload-Digest\":\"sha1:POJO4FUBKYMBFEX6UOGFVQWTD66ZU626\",\"WARC-Block-Digest\":\"sha1:VS32VZZFTNYT4EVPNX54GUJZ6ID55VIZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882571056.58_warc_CC-MAIN-20220809155137-20220809185137-00755.warc.gz\"}"} |
https://gocoding.org/functions-in-c/ | [
"# Functions in C\n\n## Introduction\n\nFrom declaring a variable to printing something, to decision making, to iterations using loops, we have learned many basic concepts of C language till now. This and the next few chapters are going to be new but related to our basics. In this article, we will learn about Functions in C.\n\nA function in C is a set of instructions/statements grouped as code with a particular name. It can be called by its name anytime and anywhere in the program whenever needed.\n\nFor example, a user wants to maintain the record of the candidates based on their voting eligibility. So, he will make a function with a code that asks for necessary details like the candidate’s age, etc., and gives output. Thus, whenever a candidate arrives, there is no need to write the whole code again, just call the function and proceed accordingly.\n\n## Why functions in C?\n\nThe above example gives a relevant answer to this question. Functions reduce the workload of programmers and save time. In short, rewriting the same code can be avoided by using functions.\n\nMoreover, functions make our code neat and easily readable.\n\n## Benefits of functions in C\n\n2. Better memory utilization.\n3. Easy to read and modify.\n\n## Types of functions in C\n\nThere are two types of functions:\n\n1. Library functions\n2. User-defined functions.\n\n### Library functions in C\n\nLibrary functions are t functions that are pre-defined in the C library and are used by the programmers whenever needed. ‘printf()’ and ‘scanf()’ are predefined functions that have already been used in our programs.\n\nMoreover, using such library functions has many advantages like they work very well and are easy to use. These predefined functions save users time. Printing anything on the screen, taking the output from the user, and many more are already defined. This enables the programmers to work efficiently.\n\n### User-defined functions in C\n\nWhile writing a program, we make many functions according to our needs so that our code runs well. All such functions defined by the user are user-defined functions.\n\nUser-defined function mainly consists of 3 parts:\n\n1. Function declaration\n2. Function definition\n3. Function call\n\n## How to declare a function in C?\n\nA function is declared as follows:\n\nSyntax:\n\nreturn_type function_name (parameters);\n\nExample:\n\nTo calculate the area of a rectangle having two dimensions length and breadth. The ‘area’ function is declared as\n\nfloat area (float length, float breadth);\n\nHere the function named ‘area’ is declared, taking ‘length’ and ‘breadth’ of float type as input and returning the area value of type ‘float’.\n\nfloat average (int a1, int a2);\n\nThe function named ‘average’ is declared, which will return the average value of type ‘float’ after calculating it from two inputs ‘a1’, ‘a2’ of type ‘int’.\n\n## Defining functions in C\n\nSyntax:\n\nreturn_type function_name (parameters)\n\n{\n\n//code\n\n}\n\nLet’s define the ‘area’ function declared above.\n\nfloat area (float length, float breadth)\n\n{\n\nfloat A;\n\nreturn A;\n\n}\n\n‘A’ is a variable of type ‘float’ declared inside the ‘area’ function. It is only valid inside the function and not outside it. Such variables are called ‘local variables’. Thus, ‘A’ is a local variable.\n\nNote: Specifying the parameter type along with the parameters is necessary while defining the function.\n\nThe calculated value of ‘length*breadth’ will be assigned to ‘A’.\n\nreturn A; this function will give us the value of ‘A’, which type ‘float’.\n\n## Calling a function in C\n\nTill now, we declared a function and defined it according to our use. The function can be called in the program once or more according to the user’s need.\n\nWhen a call is made, the control will move to that function, and the code written in its definition will be executed. After that, the control passes to the main program.\n\nSyntax:\n\nfunction_name (parameters);\n\n‘area’ function can be called as\n\nSo, now that we are ready with functions, let’s write a program to find the area of a rectangle.\n\n```#include<stdio.h>\nint main()\n{\nfloat l, b;\nfloat a;\nprintf(\"enter the value of length:\\n\");\nscanf(\"%f\", &l);\nscanf(\"%f\", &b);\na=area (l, b); // function call\nprintf (\"Area is %f\\n\", a);\nreturn 0;\n}\n\nfloat area (float length, float breadth) // definition\n{\nfloat A;\nreturn A;\n}```\n\nOutput:",
null,
"float area(float length, float breadth); The function is declared first even before the ‘main’ function so that when the ‘main’ function encounters the ‘area’ function during the execution of code, it must know that there exists a function named ‘area’. It will search for the function definition.\n\nThe function named ‘area’ declared with ‘float’ type with 2 arguments of ‘float’. It means that while calling, we must parameters pass two ‘float’ inputs, and we will get a ‘float’ output in return.\n\nNow come to the ‘function call’ statement in the ‘main’ function.\n\na=area (l, b); The function call is made by passing two parameters of type ‘float’. Since ‘a’ was also declared as float type, ’80.500000’ gets stored in ‘a’.\n\nWhen the compiler reaches the function call, it searches for the ‘function definition’, written after the ‘main’ function in the above function.\n\n‘Function definition’ can also be made after declaration.\n\n```#include<stdio.h>\n{\nfloat A;\nreturn A;\n}\n\nint main()\n{\nfloat l,b;\nfloat a;\nprintf(\"enter the value of length:\\n\");\nscanf(\"%f\",&l);\n\nscanf(\"%f\",&b);\n\na=area(l,b);\nprintf(\"Area is %.3f\\n\",a);\nreturn 0;\n}```\n\nOutput:",
null,
"The function definition is made with its declaration. So in this case, while executing ‘main’ the compiler will know that there is a function named ‘area’ because it is defined above from where it is called.\n\nOne more example of a function is given below:\n\n```#include<stdio.h>\nvoid show (int a)\n{\nprintf(\"number is %d\\n\",a);\n}\n\nint main()\n{\nint x;\nprintf(\"enter a number:\\n\");\nscanf(\"%d\",&x);\n\nshow(x);\nreturn 0;\n}```\n\nOutput:",
null,
"void show (int a), In this statement, the ‘show’ function has return type ‘void’, which means that the ‘show’ function is not returning anything. The rest of the program is quite simple. The only concern to write the above program is to tell about the ‘void’ return type.\n\nThe function definition is also possible without passing any argument. See an example of this.\n\n```#include<stdio.h>\nvoid display ()\n{\nprintf(\"function with no argument\\n\");\n}\nint main()\n{\ndisplay();\nreturn 0;\n}```\n\nOutput:",
null,
"void display ( ), This function has empty ‘( )’ brackets, which means that we are passing no arguments in the function ‘display( )’.\n\n## Points to remember\n\n1. The function is an operation/perform task, which once defined, can be used many times/ can be called many times.\n2. One function in the program must be ‘main( )’.\n3. A program may have any number of functions.\n4. No function is defined in another function.\n5. Every function has its unique name.\n6. Every program’s execution starts with the ‘main( )’ function.\n7. Since ‘main( )’ is not a user-defined function, the operating system does the function call for ‘main( )’.\n\nThis site uses Akismet to reduce spam. Learn how your comment data is processed."
] | [
null,
"https://gocoding.org/wp-content/uploads/2021/08/Calling-a-function-in-C.png",
null,
"https://gocoding.org/wp-content/uploads/2021/08/function-definition.png",
null,
"https://gocoding.org/wp-content/uploads/2021/08/Function-Example-in-C.png",
null,
"https://gocoding.org/wp-content/uploads/2021/08/Void-Function.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.84583706,"math_prob":0.94226056,"size":7188,"snap":"2022-27-2022-33","text_gpt3_token_len":1641,"char_repetition_ratio":0.17706013,"word_repetition_ratio":0.022165388,"special_character_ratio":0.24151364,"punctuation_ratio":0.14346895,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9741161,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,3,null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-29T12:51:50Z\",\"WARC-Record-ID\":\"<urn:uuid:4fe0c809-cff0-4d10-a701-d471c7bddb49>\",\"Content-Length\":\"213460\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5c863985-bd55-40f9-a1e3-6396aaf2fe2a>\",\"WARC-Concurrent-To\":\"<urn:uuid:6c5d3521-507e-42b0-911a-1374d8794c3a>\",\"WARC-IP-Address\":\"3.234.104.255\",\"WARC-Target-URI\":\"https://gocoding.org/functions-in-c/\",\"WARC-Payload-Digest\":\"sha1:ZU3JG2SIME75MMIQEJE5FP53YEQJKQGM\",\"WARC-Block-Digest\":\"sha1:5AIAZ2F624Z34EELOBZFVQWL2UC5FAFI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103639050.36_warc_CC-MAIN-20220629115352-20220629145352-00779.warc.gz\"}"} |
https://leetcode.ca/2018-05-16-898-Bitwise-ORs-of-Subarrays/ | [
"##### Welcome to Subscribe On Youtube\n\nFormatted question description: https://leetcode.ca/all/898.html\n\n# 898. Bitwise ORs of Subarrays\n\nMedium\n\n## Description\n\nWe have an array A of non-negative integers.\n\nFor every (contiguous) subarray B = [A[i], A[i+1], ..., A[j]] (with i <= j), we take the bitwise OR of all the elements in B, obtaining a result A[i] | A[i+1] | ... | A[j].\n\nReturn the number of possible results. (Results that occur more than once are only counted once in the final answer.)\n\nExample 1:\n\nInput: \n\nOutput: 1\n\nExplanation:\n\nThere is only one possible result: 0.\n\nExample 2:\n\nInput: [1,1,2]\n\nOutput: 3\n\nExplanation:\n\nThe possible subarrays are , , , [1, 1], [1, 2], [1, 1, 2].\n\nThese yield the results 1, 1, 2, 1, 3, 3.\n\nThere are 3 unique values, so the answer is 3.\n\nExample 3:\n\nInput: [1,2,4]\n\nOutput: 6\n\nExplanation:\n\nThe possible results are 1, 2, 3, 4, 6, and 7.\n\nNote:\n\n1. 1 <= A.length <= 50000\n2. 0 <= A[i] <= 10^9\n\n## Solution\n\nUse a set to store all different bitwise OR results and use a list to store the previous bitwise OR results. For each number in A, add it to the set and the list, and use another list to store the current bitwise OR results. If a bitwise OR result of the current number is greater than the number, then the bitwise OR result is a new result, so add the new result into the second list. After the results of the current number are calculated, assign the second list to the first list. Finally, return the set’s size.\n\n• class Solution {\npublic int subarrayBitwiseORs(int[] A) {\nSet<Integer> totalSet = new HashSet<Integer>();\nList<Integer> prevList = new ArrayList<Integer>();\nint length = A.length;\nfor (int i = 0; i < length; i++) {\nint num = A[i];\nList<Integer> curList = new ArrayList<Integer>();\nfor (int prevNum : prevList) {\nint bitOR = num | prevNum;\nif (bitOR > num)\nnum = bitOR;\n}\nprevList = new ArrayList<Integer>(curList);\n}\n}\n}\n\n• // OJ: https://leetcode.com/problems/bitwise-ors-of-subarrays/\n// Time: O(30N)\n// Space: O(30N)\nclass Solution {\npublic:\nint subarrayBitwiseORs(vector<int>& A) {\nunordered_set<int> all, cur, next;\nfor (int n : A) {\nnext.clear();\nnext.insert(n);\nfor (int prev : cur) next.insert(prev | n);\nfor (int m : next) all.insert(m);\nswap(cur, next);\n}\nreturn all.size();\n}\n};\n\n• class Solution:\ndef subarrayBitwiseORs(self, arr: List[int]) -> int:\ns = set()\nprev = 0\nfor i, v in enumerate(arr):\nprev |= v\ncurr = 0\nfor j in range(i, -1, -1):\ncurr |= arr[j]\nif curr == prev:\nbreak\nreturn len(s)\n\n############\n\nclass Solution(object):\ndef subarrayBitwiseORs(self, A):\n\"\"\"\n:type A: List[int]\n:rtype: int\n\"\"\"\nres = set()\ncur = set()\nfor a in A:\ncur = {n | a for n in cur} | {a}\nres |= cur\nreturn len(res)\n\n• func subarrayBitwiseORs(arr []int) int {\ns := map[int]bool{}\nprev := 0\nfor i, v := range arr {\nprev |= v\ncurr := 0\nfor j := i; j >= 0; j-- {\ncurr |= arr[j]\ns[curr] = true\nif curr == prev {\nbreak\n}\n}\n}\nreturn len(s)\n}"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5038153,"math_prob":0.9912234,"size":2990,"snap":"2023-40-2023-50","text_gpt3_token_len":902,"char_repetition_ratio":0.11855325,"word_repetition_ratio":0.0,"special_character_ratio":0.3451505,"punctuation_ratio":0.22205663,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99860746,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-26T02:32:49Z\",\"WARC-Record-ID\":\"<urn:uuid:e54f5d60-a4e0-4d71-9280-09f90ad4fd52>\",\"Content-Length\":\"34513\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6556481f-f5da-4928-b711-65fa5a76460f>\",\"WARC-Concurrent-To\":\"<urn:uuid:de0907f4-a2c0-4527-83f3-60fb452d6d7b>\",\"WARC-IP-Address\":\"18.165.83.112\",\"WARC-Target-URI\":\"https://leetcode.ca/2018-05-16-898-Bitwise-ORs-of-Subarrays/\",\"WARC-Payload-Digest\":\"sha1:HUU3JHYEYRDFX76DEQOLM64ZP7PYAQPK\",\"WARC-Block-Digest\":\"sha1:HZCKPYARLUMW6AZPLHZX265KBIYTRBKM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510130.53_warc_CC-MAIN-20230926011608-20230926041608-00804.warc.gz\"}"} |
http://php.he.net/manual/en/mcrypt.examples.php | [
"The PHP Online Conference 2021\n\n# Examples\n\nMcrypt can be used to encrypt and decrypt using the above mentioned ciphers. If you linked against `libmcrypt-2.2.x`, the four important mcrypt commands (mcrypt_cfb(), mcrypt_cbc(), mcrypt_ecb(), and mcrypt_ofb()) can operate in both modes which are named `MCRYPT_ENCRYPT` and `MCRYPT_DECRYPT`, respectively.\n\nIf you linked against libmcrypt 2.4.x or 2.5.x, these functions are still available, but it is recommended that you use the advanced functions.\n\nExample #1 Encrypt an input value with `AES` with a 256-bit key under 2.4.x and higher in `CBC` mode\n\n``` <?php \\$key = hash('sha256', 'this is a secret key', true); \\$input = \"Let us meet at 9 o'clock at the secret place.\"; \\$td = mcrypt_module_open('rijndael-128', '', 'cbc', ''); \\$iv = mcrypt_create_iv(mcrypt_enc_get_iv_size(\\$td), MCRYPT_DEV_URANDOM); mcrypt_generic_init(\\$td, \\$key, \\$iv); \\$encrypted_data = mcrypt_generic(\\$td, \\$input); mcrypt_generic_deinit(\\$td); mcrypt_module_close(\\$td);?> ```\nThis example will give you the encrypted data as a string in `\\$encrypted_data`. For a full example see mcrypt_module_open().",
null,
"add a note\n\n### User Contributed Notes 2 notes\n\njizz @ Nowhere\n12 years ago\n``` Ok after having a problem using triple des with .net/visual basic with php I think this could help someone:Visual Basic 9 with .net 2.0Encrypting as a stream into the IO/Memory as bytesThen they get converted back after encryption I wanted to use base64 encoding to store the VB encryptionThe problem I found was ... I could En/Decrypt within VB and PHP just fine But when I tried to encrypt one in VB and decrypt in PHPI got the wrong values with the mcrypt function aloneI found that at least with VB9 that the stream encryption uses a UTF char that is the value for how many missing bytes left in the 8 bit stream.So if you encrypt 1234 it will add chr(4) four times (the amount of missing bytes)In php use chr otherwise most browsers/client cant read it.Im not good at explaining things but the php code I figured out is below.It will find the missing bytes on input as visual basic doesand replace as needed. For both encryption and decryption.Example is triple_des and cbc with self key and iv for storing in base64\\$key = \"E4HD9h4DhS23DYfhHemkS3Nf\";// 24 bit Key\\$iv = \"fYfhHeDm\";// 8 bit IV\\$input = \"Text to encrypt\";// text to encrypt\\$bit_check=8;// bit amount for diff algor.\\$str= encrypt(\\$input,\\$key,\\$iv,\\$bit_check);echo \"Start: \\$input - Excrypted: \\$str - Decrypted: \".decrypt(\\$str,\\$key,\\$iv,\\$bit_check);function encrypt(\\$text,\\$key,\\$iv,\\$bit_check) {\\$text_num =str_split(\\$text,\\$bit_check);\\$text_num = \\$bit_check-strlen(\\$text_num[count(\\$text_num)-1]);for (\\$i=0;\\$i<\\$text_num; \\$i++) {\\$text = \\$text . chr(\\$text_num);}\\$cipher = mcrypt_module_open(MCRYPT_TRIPLEDES,'','cbc','');mcrypt_generic_init(\\$cipher, \\$key, \\$iv);\\$decrypted = mcrypt_generic(\\$cipher,\\$text);mcrypt_generic_deinit(\\$cipher);return base64_encode(\\$decrypted);}function decrypt(\\$encrypted_text,\\$key,\\$iv,\\$bit_check){\\$cipher = mcrypt_module_open(MCRYPT_TRIPLEDES,'','cbc','');mcrypt_generic_init(\\$cipher, \\$key, \\$iv);\\$decrypted = mdecrypt_generic(\\$cipher,base64_decode(\\$encrypted_text));mcrypt_generic_deinit(\\$cipher);\\$last_char=substr(\\$decrypted,-1);for(\\$i=0;\\$i<\\$bit_check-1; \\$i++){ if(chr(\\$i)==\\$last_char){ \\$decrypted=substr(\\$decrypted,0,strlen(\\$decrypted)-\\$i); break; }}return \\$decrypted;} ```\nivoras at gmail dot com\n10 years ago\n``` Note that there can be standard padding in block modes:http://www.di-mgt.com.au/cryptopad.html ```",
null,
""
] | [
null,
"http://php.he.net/images/[email protected]",
null,
"http://php.he.net/images/[email protected]",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7337239,"math_prob":0.90234554,"size":2193,"snap":"2020-45-2020-50","text_gpt3_token_len":602,"char_repetition_ratio":0.15395157,"word_repetition_ratio":0.02846975,"special_character_ratio":0.28454173,"punctuation_ratio":0.1764706,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9799908,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-20T12:25:09Z\",\"WARC-Record-ID\":\"<urn:uuid:3be24754-9061-425b-ad7a-c3da04c537b3>\",\"Content-Length\":\"26011\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ce385e93-4bef-495c-8520-4479051355fc>\",\"WARC-Concurrent-To\":\"<urn:uuid:770479e2-3c31-4813-a1ab-266d51451605>\",\"WARC-IP-Address\":\"64.71.164.5\",\"WARC-Target-URI\":\"http://php.he.net/manual/en/mcrypt.examples.php\",\"WARC-Payload-Digest\":\"sha1:AVEC5AXK4VYQILT5TWRUW3ZFTUMTD3JI\",\"WARC-Block-Digest\":\"sha1:XKTDWKKCU3KKKBDPHIF7LCQ7PFHBRZIH\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107872686.18_warc_CC-MAIN-20201020105000-20201020135000-00663.warc.gz\"}"} |
https://smartypants.savingadvice.com/2008/12/20/am-i-dreaming_46336/ | [
"User Real IP - 3.238.204.31\n```Array\n(\n => Array\n(\n => 182.68.68.92\n)\n\n => Array\n(\n => 101.0.41.201\n)\n\n => Array\n(\n => 43.225.98.123\n)\n\n => Array\n(\n => 2.58.194.139\n)\n\n => Array\n(\n => 46.119.197.104\n)\n\n => Array\n(\n => 45.249.8.93\n)\n\n => Array\n(\n => 103.12.135.72\n)\n\n => Array\n(\n => 157.35.243.216\n)\n\n => Array\n(\n => 209.107.214.176\n)\n\n => Array\n(\n => 5.181.233.166\n)\n\n => Array\n(\n => 106.201.10.100\n)\n\n => Array\n(\n => 36.90.55.39\n)\n\n => Array\n(\n => 119.154.138.47\n)\n\n => Array\n(\n => 51.91.31.157\n)\n\n => Array\n(\n => 182.182.65.216\n)\n\n => Array\n(\n => 157.35.252.63\n)\n\n => Array\n(\n => 14.142.34.163\n)\n\n => Array\n(\n => 178.62.43.135\n)\n\n => Array\n(\n => 43.248.152.148\n)\n\n => Array\n(\n => 222.252.104.114\n)\n\n => Array\n(\n => 209.107.214.168\n)\n\n => Array\n(\n => 103.99.199.250\n)\n\n => Array\n(\n => 178.62.72.160\n)\n\n => Array\n(\n => 27.6.1.170\n)\n\n => Array\n(\n => 182.69.249.219\n)\n\n => Array\n(\n => 110.93.228.86\n)\n\n => Array\n(\n => 72.255.1.98\n)\n\n => Array\n(\n => 182.73.111.98\n)\n\n => Array\n(\n => 45.116.117.11\n)\n\n => Array\n(\n => 122.15.78.189\n)\n\n => Array\n(\n => 14.167.188.234\n)\n\n => Array\n(\n => 223.190.4.202\n)\n\n => Array\n(\n => 202.173.125.19\n)\n\n => Array\n(\n => 103.255.5.32\n)\n\n => Array\n(\n => 39.37.145.103\n)\n\n => Array\n(\n => 140.213.26.249\n)\n\n => Array\n(\n => 45.118.166.85\n)\n\n => Array\n(\n => 102.166.138.255\n)\n\n => Array\n(\n => 77.111.246.234\n)\n\n => Array\n(\n => 45.63.6.196\n)\n\n => Array\n(\n => 103.250.147.115\n)\n\n => Array\n(\n => 223.185.30.99\n)\n\n => Array\n(\n => 103.122.168.108\n)\n\n => Array\n(\n => 123.136.203.21\n)\n\n => Array\n(\n => 171.229.243.63\n)\n\n => Array\n(\n => 153.149.98.149\n)\n\n => Array\n(\n => 223.238.93.15\n)\n\n => Array\n(\n => 178.62.113.166\n)\n\n => Array\n(\n => 101.162.0.153\n)\n\n => Array\n(\n => 121.200.62.114\n)\n\n => Array\n(\n => 14.248.77.252\n)\n\n => Array\n(\n => 95.142.117.29\n)\n\n => Array\n(\n => 150.129.60.107\n)\n\n => Array\n(\n => 94.205.243.22\n)\n\n => Array\n(\n => 115.42.71.143\n)\n\n => Array\n(\n => 117.217.195.59\n)\n\n => Array\n(\n => 182.77.112.56\n)\n\n => Array\n(\n => 182.77.112.108\n)\n\n => Array\n(\n => 41.80.69.10\n)\n\n => Array\n(\n => 117.5.222.121\n)\n\n => Array\n(\n => 103.11.0.38\n)\n\n => Array\n(\n => 202.173.127.140\n)\n\n => Array\n(\n => 49.249.249.50\n)\n\n => Array\n(\n => 116.72.198.211\n)\n\n => Array\n(\n => 223.230.54.53\n)\n\n => Array\n(\n => 102.69.228.74\n)\n\n => Array\n(\n => 39.37.251.89\n)\n\n => Array\n(\n => 39.53.246.141\n)\n\n => Array\n(\n => 39.57.182.72\n)\n\n => Array\n(\n => 209.58.130.210\n)\n\n => Array\n(\n => 104.131.75.86\n)\n\n => Array\n(\n => 106.212.131.255\n)\n\n => Array\n(\n => 106.212.132.127\n)\n\n => Array\n(\n => 223.190.4.60\n)\n\n => Array\n(\n => 103.252.116.252\n)\n\n => Array\n(\n => 103.76.55.182\n)\n\n => Array\n(\n => 45.118.166.70\n)\n\n => Array\n(\n => 103.93.174.215\n)\n\n => Array\n(\n => 5.62.62.142\n)\n\n => Array\n(\n => 182.179.158.156\n)\n\n => Array\n(\n => 39.57.255.12\n)\n\n => Array\n(\n => 39.37.178.37\n)\n\n => Array\n(\n => 182.180.165.211\n)\n\n => Array\n(\n => 119.153.135.17\n)\n\n => Array\n(\n => 72.255.15.244\n)\n\n => Array\n(\n => 139.180.166.181\n)\n\n => Array\n(\n => 70.119.147.111\n)\n\n => Array\n(\n => 106.210.40.83\n)\n\n => Array\n(\n => 14.190.70.91\n)\n\n => Array\n(\n => 202.125.156.82\n)\n\n => Array\n(\n => 115.42.68.38\n)\n\n => Array\n(\n => 102.167.13.108\n)\n\n => Array\n(\n => 117.217.192.130\n)\n\n => Array\n(\n => 205.185.223.156\n)\n\n => Array\n(\n => 171.224.180.29\n)\n\n => Array\n(\n => 45.127.45.68\n)\n\n => Array\n(\n => 195.206.183.232\n)\n\n => Array\n(\n => 49.32.52.115\n)\n\n => Array\n(\n => 49.207.49.223\n)\n\n => Array\n(\n => 45.63.29.61\n)\n\n => Array\n(\n => 103.245.193.214\n)\n\n => Array\n(\n => 39.40.236.69\n)\n\n => Array\n(\n => 62.80.162.111\n)\n\n => Array\n(\n => 45.116.232.56\n)\n\n => Array\n(\n => 45.118.166.91\n)\n\n => Array\n(\n => 180.92.230.234\n)\n\n => Array\n(\n => 157.40.57.160\n)\n\n => Array\n(\n => 110.38.38.130\n)\n\n => Array\n(\n => 72.255.57.183\n)\n\n => Array\n(\n => 182.68.81.85\n)\n\n => Array\n(\n => 39.57.202.122\n)\n\n => Array\n(\n => 119.152.154.36\n)\n\n => Array\n(\n => 5.62.62.141\n)\n\n => Array\n(\n => 119.155.54.232\n)\n\n => Array\n(\n => 39.37.141.22\n)\n\n => Array\n(\n => 183.87.12.225\n)\n\n => Array\n(\n => 107.170.127.117\n)\n\n => Array\n(\n => 125.63.124.49\n)\n\n => Array\n(\n => 39.42.191.3\n)\n\n => Array\n(\n => 116.74.24.72\n)\n\n => Array\n(\n => 46.101.89.227\n)\n\n => Array\n(\n => 202.173.125.247\n)\n\n => Array\n(\n => 39.42.184.254\n)\n\n => Array\n(\n => 115.186.165.132\n)\n\n => Array\n(\n => 39.57.206.126\n)\n\n => Array\n(\n => 103.245.13.145\n)\n\n => Array\n(\n => 202.175.246.43\n)\n\n => Array\n(\n => 192.140.152.150\n)\n\n => Array\n(\n => 202.88.250.103\n)\n\n => Array\n(\n => 103.248.94.207\n)\n\n => Array\n(\n => 77.73.66.101\n)\n\n => Array\n(\n => 104.131.66.8\n)\n\n => Array\n(\n => 113.186.161.97\n)\n\n => Array\n(\n => 222.254.5.7\n)\n\n => Array\n(\n => 223.233.67.247\n)\n\n => Array\n(\n => 171.249.116.146\n)\n\n => Array\n(\n => 47.30.209.71\n)\n\n => Array\n(\n => 202.134.13.130\n)\n\n => Array\n(\n => 27.6.135.7\n)\n\n => Array\n(\n => 107.170.186.79\n)\n\n => Array\n(\n => 103.212.89.171\n)\n\n => Array\n(\n => 117.197.9.77\n)\n\n => Array\n(\n => 122.176.206.233\n)\n\n => Array\n(\n => 192.227.253.222\n)\n\n => Array\n(\n => 182.188.224.119\n)\n\n => Array\n(\n => 14.248.70.74\n)\n\n => Array\n(\n => 42.118.219.169\n)\n\n => Array\n(\n => 110.39.146.170\n)\n\n => Array\n(\n => 119.160.66.143\n)\n\n => Array\n(\n => 103.248.95.130\n)\n\n => Array\n(\n => 27.63.152.208\n)\n\n => Array\n(\n => 49.207.114.96\n)\n\n => Array\n(\n => 102.166.23.214\n)\n\n => Array\n(\n => 175.107.254.73\n)\n\n => Array\n(\n => 103.10.227.214\n)\n\n => Array\n(\n => 202.143.115.89\n)\n\n => Array\n(\n => 110.93.227.187\n)\n\n => Array\n(\n => 103.140.31.60\n)\n\n => Array\n(\n => 110.37.231.46\n)\n\n => Array\n(\n => 39.36.99.238\n)\n\n => Array\n(\n => 157.37.140.26\n)\n\n => Array\n(\n => 43.246.202.226\n)\n\n => Array\n(\n => 137.97.8.143\n)\n\n => Array\n(\n => 182.65.52.242\n)\n\n => Array\n(\n => 115.42.69.62\n)\n\n => Array\n(\n => 14.143.254.58\n)\n\n => Array\n(\n => 223.179.143.236\n)\n\n => Array\n(\n => 223.179.143.249\n)\n\n => Array\n(\n => 103.143.7.54\n)\n\n => Array\n(\n => 223.179.139.106\n)\n\n => Array\n(\n => 39.40.219.90\n)\n\n => Array\n(\n => 45.115.141.231\n)\n\n => Array\n(\n => 120.29.100.33\n)\n\n => Array\n(\n => 112.196.132.5\n)\n\n => Array\n(\n => 202.163.123.153\n)\n\n => Array\n(\n => 5.62.58.146\n)\n\n => Array\n(\n => 39.53.216.113\n)\n\n => Array\n(\n => 42.111.160.73\n)\n\n => Array\n(\n => 107.182.231.213\n)\n\n => Array\n(\n => 119.82.94.120\n)\n\n => Array\n(\n => 178.62.34.82\n)\n\n => Array\n(\n => 203.122.6.18\n)\n\n => Array\n(\n => 157.42.38.251\n)\n\n => Array\n(\n => 45.112.68.222\n)\n\n => Array\n(\n => 49.206.212.122\n)\n\n => Array\n(\n => 104.236.70.228\n)\n\n => Array\n(\n => 42.111.34.243\n)\n\n => Array\n(\n => 84.241.19.186\n)\n\n => Array\n(\n => 89.187.180.207\n)\n\n => Array\n(\n => 104.243.212.118\n)\n\n => Array\n(\n => 104.236.55.136\n)\n\n => Array\n(\n => 106.201.16.163\n)\n\n => Array\n(\n => 46.101.40.25\n)\n\n => Array\n(\n => 45.118.166.94\n)\n\n => Array\n(\n => 49.36.128.102\n)\n\n => Array\n(\n => 14.142.193.58\n)\n\n => Array\n(\n => 212.79.124.176\n)\n\n => Array\n(\n => 45.32.191.194\n)\n\n => Array\n(\n => 105.112.107.46\n)\n\n => Array\n(\n => 106.201.14.8\n)\n\n => Array\n(\n => 110.93.240.65\n)\n\n => Array\n(\n => 27.96.95.177\n)\n\n => Array\n(\n => 45.41.134.35\n)\n\n => Array\n(\n => 180.151.13.110\n)\n\n => Array\n(\n => 101.53.242.89\n)\n\n => Array\n(\n => 115.186.3.110\n)\n\n => Array\n(\n => 171.49.185.242\n)\n\n => Array\n(\n => 115.42.70.24\n)\n\n => Array\n(\n => 45.128.188.43\n)\n\n => Array\n(\n => 103.140.129.63\n)\n\n => Array\n(\n => 101.50.113.147\n)\n\n => Array\n(\n => 103.66.73.30\n)\n\n => Array\n(\n => 117.247.193.169\n)\n\n => Array\n(\n => 120.29.100.94\n)\n\n => Array\n(\n => 42.109.154.39\n)\n\n => Array\n(\n => 122.173.155.150\n)\n\n => Array\n(\n => 45.115.104.53\n)\n\n => Array\n(\n => 116.74.29.84\n)\n\n => Array\n(\n => 101.50.125.34\n)\n\n => Array\n(\n => 45.118.166.80\n)\n\n => Array\n(\n => 91.236.184.27\n)\n\n => Array\n(\n => 113.167.185.120\n)\n\n => Array\n(\n => 27.97.66.222\n)\n\n => Array\n(\n => 43.247.41.117\n)\n\n => Array\n(\n => 23.229.16.227\n)\n\n => Array\n(\n => 14.248.79.209\n)\n\n => Array\n(\n => 117.5.194.26\n)\n\n => Array\n(\n => 117.217.205.41\n)\n\n => Array\n(\n => 114.79.169.99\n)\n\n => Array\n(\n => 103.55.60.97\n)\n\n => Array\n(\n => 182.75.89.210\n)\n\n => Array\n(\n => 77.73.66.109\n)\n\n => Array\n(\n => 182.77.126.139\n)\n\n => Array\n(\n => 14.248.77.166\n)\n\n => Array\n(\n => 157.35.224.133\n)\n\n => Array\n(\n => 183.83.38.27\n)\n\n => Array\n(\n => 182.68.4.77\n)\n\n => Array\n(\n => 122.177.130.234\n)\n\n => Array\n(\n => 103.24.99.99\n)\n\n => Array\n(\n => 103.91.127.66\n)\n\n => Array\n(\n => 41.90.34.240\n)\n\n => Array\n(\n => 49.205.77.102\n)\n\n => Array\n(\n => 103.248.94.142\n)\n\n => Array\n(\n => 104.143.92.170\n)\n\n => Array\n(\n => 219.91.157.114\n)\n\n => Array\n(\n => 223.190.88.22\n)\n\n => Array\n(\n => 223.190.86.232\n)\n\n => Array\n(\n => 39.41.172.80\n)\n\n => Array\n(\n => 124.107.206.5\n)\n\n => Array\n(\n => 139.167.180.224\n)\n\n => Array\n(\n => 93.76.64.248\n)\n\n => Array\n(\n => 65.216.227.119\n)\n\n => Array\n(\n => 223.190.119.141\n)\n\n => Array\n(\n => 110.93.237.179\n)\n\n => Array\n(\n => 41.90.7.85\n)\n\n => Array\n(\n => 103.100.6.26\n)\n\n => Array\n(\n => 104.140.83.13\n)\n\n => Array\n(\n => 223.190.119.133\n)\n\n => Array\n(\n => 119.152.150.87\n)\n\n => Array\n(\n => 103.125.130.147\n)\n\n => Array\n(\n => 27.6.5.52\n)\n\n => Array\n(\n => 103.98.188.26\n)\n\n => Array\n(\n => 39.35.121.81\n)\n\n => Array\n(\n => 74.119.146.182\n)\n\n => Array\n(\n => 5.181.233.162\n)\n\n => Array\n(\n => 157.39.18.60\n)\n\n => Array\n(\n => 1.187.252.25\n)\n\n => Array\n(\n => 39.42.145.59\n)\n\n => Array\n(\n => 39.35.39.198\n)\n\n => Array\n(\n => 49.36.128.214\n)\n\n => Array\n(\n => 182.190.20.56\n)\n\n => Array\n(\n => 122.180.249.189\n)\n\n => Array\n(\n => 117.217.203.107\n)\n\n => Array\n(\n => 103.70.82.241\n)\n\n => Array\n(\n => 45.118.166.68\n)\n\n => Array\n(\n => 122.180.168.39\n)\n\n => Array\n(\n => 149.28.67.254\n)\n\n => Array\n(\n => 223.233.73.8\n)\n\n => Array\n(\n => 122.167.140.0\n)\n\n => Array\n(\n => 95.158.51.55\n)\n\n => Array\n(\n => 27.96.95.134\n)\n\n => Array\n(\n => 49.206.214.53\n)\n\n => Array\n(\n => 212.103.49.92\n)\n\n => Array\n(\n => 122.177.115.101\n)\n\n => Array\n(\n => 171.50.187.124\n)\n\n => Array\n(\n => 122.164.55.107\n)\n\n => Array\n(\n => 98.114.217.204\n)\n\n => Array\n(\n => 106.215.10.54\n)\n\n => Array\n(\n => 115.42.68.28\n)\n\n => Array\n(\n => 104.194.220.87\n)\n\n => Array\n(\n => 103.137.84.170\n)\n\n => Array\n(\n => 61.16.142.110\n)\n\n => Array\n(\n => 212.103.49.85\n)\n\n => Array\n(\n => 39.53.248.162\n)\n\n => Array\n(\n => 203.122.40.214\n)\n\n => Array\n(\n => 117.217.198.72\n)\n\n => Array\n(\n => 115.186.191.203\n)\n\n => Array\n(\n => 120.29.100.199\n)\n\n => Array\n(\n => 45.151.237.24\n)\n\n => Array\n(\n => 223.190.125.232\n)\n\n => Array\n(\n => 41.80.151.17\n)\n\n => Array\n(\n => 23.111.188.5\n)\n\n => Array\n(\n => 223.190.125.216\n)\n\n => Array\n(\n => 103.217.133.119\n)\n\n => Array\n(\n => 103.198.173.132\n)\n\n => Array\n(\n => 47.31.155.89\n)\n\n => Array\n(\n => 223.190.20.253\n)\n\n => Array\n(\n => 104.131.92.125\n)\n\n => Array\n(\n => 223.190.19.152\n)\n\n => Array\n(\n => 103.245.193.191\n)\n\n => Array\n(\n => 106.215.58.255\n)\n\n => Array\n(\n => 119.82.83.238\n)\n\n => Array\n(\n => 106.212.128.138\n)\n\n => Array\n(\n => 139.167.237.36\n)\n\n => Array\n(\n => 222.124.40.250\n)\n\n => Array\n(\n => 134.56.185.169\n)\n\n => Array\n(\n => 54.255.226.31\n)\n\n => Array\n(\n => 137.97.162.31\n)\n\n => Array\n(\n => 95.185.21.191\n)\n\n => Array\n(\n => 171.61.168.151\n)\n\n => Array\n(\n => 137.97.184.4\n)\n\n => Array\n(\n => 106.203.151.202\n)\n\n => Array\n(\n => 39.37.137.0\n)\n\n => Array\n(\n => 45.118.166.66\n)\n\n => Array\n(\n => 14.248.105.100\n)\n\n => Array\n(\n => 106.215.61.185\n)\n\n => Array\n(\n => 202.83.57.179\n)\n\n => Array\n(\n => 89.187.182.176\n)\n\n => Array\n(\n => 49.249.232.198\n)\n\n => Array\n(\n => 132.154.95.236\n)\n\n => Array\n(\n => 223.233.83.230\n)\n\n => Array\n(\n => 183.83.153.14\n)\n\n => Array\n(\n => 125.63.72.210\n)\n\n => Array\n(\n => 207.174.202.11\n)\n\n => Array\n(\n => 119.95.88.59\n)\n\n => Array\n(\n => 122.170.14.150\n)\n\n => Array\n(\n => 45.118.166.75\n)\n\n => Array\n(\n => 103.12.135.37\n)\n\n => Array\n(\n => 49.207.120.225\n)\n\n => Array\n(\n => 182.64.195.207\n)\n\n => Array\n(\n => 103.99.37.16\n)\n\n => Array\n(\n => 46.150.104.221\n)\n\n => Array\n(\n => 104.236.195.147\n)\n\n => Array\n(\n => 103.104.192.43\n)\n\n => Array\n(\n => 24.242.159.118\n)\n\n => Array\n(\n => 39.42.179.143\n)\n\n => Array\n(\n => 111.93.58.131\n)\n\n => Array\n(\n => 193.176.84.127\n)\n\n => Array\n(\n => 209.58.142.218\n)\n\n => Array\n(\n => 69.243.152.129\n)\n\n => Array\n(\n => 117.97.131.249\n)\n\n => Array\n(\n => 103.230.180.89\n)\n\n => Array\n(\n => 106.212.170.192\n)\n\n => Array\n(\n => 171.224.180.95\n)\n\n => Array\n(\n => 158.222.11.87\n)\n\n => Array\n(\n => 119.155.60.246\n)\n\n => Array\n(\n => 41.90.43.129\n)\n\n => Array\n(\n => 185.183.104.170\n)\n\n => Array\n(\n => 14.248.67.65\n)\n\n => Array\n(\n => 117.217.205.82\n)\n\n => Array\n(\n => 111.88.7.209\n)\n\n => Array\n(\n => 49.36.132.244\n)\n\n => Array\n(\n => 171.48.40.2\n)\n\n => Array\n(\n => 119.81.105.2\n)\n\n => Array\n(\n => 49.36.128.114\n)\n\n => Array\n(\n => 213.200.31.93\n)\n\n => Array\n(\n => 2.50.15.110\n)\n\n => Array\n(\n => 120.29.104.67\n)\n\n => Array\n(\n => 223.225.32.221\n)\n\n => Array\n(\n => 14.248.67.195\n)\n\n => Array\n(\n => 119.155.36.13\n)\n\n => Array\n(\n => 101.50.95.104\n)\n\n => Array\n(\n => 104.236.205.233\n)\n\n => Array\n(\n => 122.164.36.150\n)\n\n => Array\n(\n => 157.45.93.209\n)\n\n => Array\n(\n => 182.77.118.100\n)\n\n => Array\n(\n => 182.74.134.218\n)\n\n => Array\n(\n => 183.82.128.146\n)\n\n => Array\n(\n => 112.196.170.234\n)\n\n => Array\n(\n => 122.173.230.178\n)\n\n => Array\n(\n => 122.164.71.199\n)\n\n => Array\n(\n => 51.79.19.31\n)\n\n => Array\n(\n => 58.65.222.20\n)\n\n => Array\n(\n => 103.27.203.97\n)\n\n => Array\n(\n => 111.88.7.242\n)\n\n => Array\n(\n => 14.171.232.77\n)\n\n => Array\n(\n => 46.101.22.182\n)\n\n => Array\n(\n => 103.94.219.19\n)\n\n => Array\n(\n => 139.190.83.30\n)\n\n => Array\n(\n => 223.190.27.184\n)\n\n => Array\n(\n => 182.185.183.34\n)\n\n => Array\n(\n => 91.74.181.242\n)\n\n => Array\n(\n => 222.252.107.14\n)\n\n => Array\n(\n => 137.97.8.28\n)\n\n => Array\n(\n => 46.101.16.229\n)\n\n => Array\n(\n => 122.53.254.229\n)\n\n => Array\n(\n => 106.201.17.180\n)\n\n => Array\n(\n => 123.24.170.129\n)\n\n => Array\n(\n => 182.185.180.79\n)\n\n => Array\n(\n => 223.190.17.4\n)\n\n => Array\n(\n => 213.108.105.1\n)\n\n => Array\n(\n => 171.22.76.9\n)\n\n => Array\n(\n => 202.66.178.164\n)\n\n => Array\n(\n => 178.62.97.171\n)\n\n => Array\n(\n => 167.179.110.209\n)\n\n => Array\n(\n => 223.230.147.172\n)\n\n => Array\n(\n => 76.218.195.160\n)\n\n => Array\n(\n => 14.189.186.178\n)\n\n => Array\n(\n => 157.41.45.143\n)\n\n => Array\n(\n => 223.238.22.53\n)\n\n => Array\n(\n => 111.88.7.244\n)\n\n => Array\n(\n => 5.62.57.19\n)\n\n => Array\n(\n => 106.201.25.216\n)\n\n => Array\n(\n => 117.217.205.33\n)\n\n => Array\n(\n => 111.88.7.215\n)\n\n => Array\n(\n => 106.201.13.77\n)\n\n => Array\n(\n => 50.7.93.29\n)\n\n => Array\n(\n => 123.201.70.112\n)\n\n => Array\n(\n => 39.42.108.226\n)\n\n => Array\n(\n => 27.5.198.29\n)\n\n => Array\n(\n => 223.238.85.187\n)\n\n => Array\n(\n => 171.49.176.32\n)\n\n => Array\n(\n => 14.248.79.242\n)\n\n => Array\n(\n => 46.219.211.183\n)\n\n => Array\n(\n => 185.244.212.251\n)\n\n => Array\n(\n => 14.102.84.126\n)\n\n => Array\n(\n => 106.212.191.52\n)\n\n => Array\n(\n => 154.72.153.203\n)\n\n => Array\n(\n => 14.175.82.64\n)\n\n => Array\n(\n => 141.105.139.131\n)\n\n => Array\n(\n => 182.156.103.98\n)\n\n => Array\n(\n => 117.217.204.75\n)\n\n => Array\n(\n => 104.140.83.115\n)\n\n => Array\n(\n => 119.152.62.8\n)\n\n => Array\n(\n => 45.125.247.94\n)\n\n => Array\n(\n => 137.97.37.252\n)\n\n => Array\n(\n => 117.217.204.73\n)\n\n => Array\n(\n => 14.248.79.133\n)\n\n => Array\n(\n => 39.37.152.52\n)\n\n => Array\n(\n => 103.55.60.54\n)\n\n => Array\n(\n => 102.166.183.88\n)\n\n => Array\n(\n => 5.62.60.162\n)\n\n => Array\n(\n => 5.62.60.163\n)\n\n => Array\n(\n => 160.202.38.131\n)\n\n => Array\n(\n => 106.215.20.253\n)\n\n => Array\n(\n => 39.37.160.54\n)\n\n => Array\n(\n => 119.152.59.186\n)\n\n => Array\n(\n => 183.82.0.164\n)\n\n => Array\n(\n => 41.90.54.87\n)\n\n => Array\n(\n => 157.36.85.158\n)\n\n => Array\n(\n => 110.37.229.162\n)\n\n => Array\n(\n => 203.99.180.148\n)\n\n => Array\n(\n => 117.97.132.91\n)\n\n => Array\n(\n => 171.61.147.105\n)\n\n => Array\n(\n => 14.98.147.214\n)\n\n => Array\n(\n => 209.234.253.191\n)\n\n => Array\n(\n => 92.38.148.60\n)\n\n => Array\n(\n => 178.128.104.139\n)\n\n => Array\n(\n => 212.154.0.176\n)\n\n => Array\n(\n => 103.41.24.141\n)\n\n => Array\n(\n => 2.58.194.132\n)\n\n => Array\n(\n => 180.190.78.169\n)\n\n => Array\n(\n => 106.215.45.182\n)\n\n => Array\n(\n => 125.63.100.222\n)\n\n => Array\n(\n => 110.54.247.17\n)\n\n => Array\n(\n => 103.26.85.105\n)\n\n => Array\n(\n => 39.42.147.3\n)\n\n => Array\n(\n => 137.97.51.41\n)\n\n => Array\n(\n => 71.202.72.27\n)\n\n => Array\n(\n => 119.155.35.10\n)\n\n => Array\n(\n => 202.47.43.120\n)\n\n => Array\n(\n => 183.83.64.101\n)\n\n => Array\n(\n => 182.68.106.141\n)\n\n => Array\n(\n => 171.61.187.87\n)\n\n => Array\n(\n => 178.162.198.118\n)\n\n => Array\n(\n => 115.97.151.218\n)\n\n => Array\n(\n => 196.207.184.210\n)\n\n => Array\n(\n => 198.16.70.51\n)\n\n => Array\n(\n => 41.60.237.33\n)\n\n => Array\n(\n => 47.11.86.26\n)\n\n => Array\n(\n => 117.217.201.183\n)\n\n => Array\n(\n => 203.192.241.79\n)\n\n => Array\n(\n => 122.165.119.85\n)\n\n => Array\n(\n => 23.227.142.218\n)\n\n => Array\n(\n => 178.128.104.221\n)\n\n => Array\n(\n => 14.192.54.163\n)\n\n => Array\n(\n => 139.5.253.218\n)\n\n => Array\n(\n => 117.230.140.127\n)\n\n => Array\n(\n => 195.114.149.199\n)\n\n => Array\n(\n => 14.239.180.220\n)\n\n => Array\n(\n => 103.62.155.94\n)\n\n => Array\n(\n => 118.71.97.14\n)\n\n => Array\n(\n => 137.97.55.163\n)\n\n => Array\n(\n => 202.47.49.198\n)\n\n => Array\n(\n => 171.61.177.85\n)\n\n => Array\n(\n => 137.97.190.224\n)\n\n => Array\n(\n => 117.230.34.142\n)\n\n => Array\n(\n => 103.41.32.5\n)\n\n => Array\n(\n => 203.90.82.237\n)\n\n => Array\n(\n => 125.63.124.238\n)\n\n => Array\n(\n => 103.232.128.78\n)\n\n => Array\n(\n => 106.197.14.227\n)\n\n => Array\n(\n => 81.17.242.244\n)\n\n => Array\n(\n => 81.19.210.179\n)\n\n => Array\n(\n => 103.134.94.98\n)\n\n => Array\n(\n => 110.38.0.86\n)\n\n => Array\n(\n => 103.10.224.195\n)\n\n => Array\n(\n => 45.118.166.89\n)\n\n => Array\n(\n => 115.186.186.68\n)\n\n => Array\n(\n => 138.197.129.237\n)\n\n => Array\n(\n => 14.247.162.52\n)\n\n => Array\n(\n => 103.255.4.5\n)\n\n => Array\n(\n => 14.167.188.254\n)\n\n => Array\n(\n => 5.62.59.54\n)\n\n => Array\n(\n => 27.122.14.80\n)\n\n => Array\n(\n => 39.53.240.21\n)\n\n => Array\n(\n => 39.53.241.243\n)\n\n => Array\n(\n => 117.230.130.161\n)\n\n => Array\n(\n => 118.71.191.149\n)\n\n => Array\n(\n => 5.188.95.54\n)\n\n => Array\n(\n => 66.45.250.27\n)\n\n => Array\n(\n => 106.215.6.175\n)\n\n => Array\n(\n => 27.122.14.86\n)\n\n => Array\n(\n => 103.255.4.51\n)\n\n => Array\n(\n => 101.50.93.119\n)\n\n => Array\n(\n => 137.97.183.51\n)\n\n => Array\n(\n => 117.217.204.185\n)\n\n => Array\n(\n => 95.104.106.82\n)\n\n => Array\n(\n => 5.62.56.211\n)\n\n => Array\n(\n => 103.104.181.214\n)\n\n => Array\n(\n => 36.72.214.243\n)\n\n => Array\n(\n => 5.62.62.219\n)\n\n => Array\n(\n => 110.36.202.4\n)\n\n => Array\n(\n => 103.255.4.253\n)\n\n => Array\n(\n => 110.172.138.61\n)\n\n => Array\n(\n => 159.203.24.195\n)\n\n => Array\n(\n => 13.229.88.42\n)\n\n => Array\n(\n => 59.153.235.20\n)\n\n => Array\n(\n => 171.236.169.32\n)\n\n => Array\n(\n => 14.231.85.206\n)\n\n => Array\n(\n => 119.152.54.103\n)\n\n => Array\n(\n => 103.80.117.202\n)\n\n => Array\n(\n => 223.179.157.75\n)\n\n => Array\n(\n => 122.173.68.249\n)\n\n => Array\n(\n => 188.163.72.113\n)\n\n => Array\n(\n => 119.155.20.164\n)\n\n => Array\n(\n => 103.121.43.68\n)\n\n => Array\n(\n => 5.62.58.6\n)\n\n => Array\n(\n => 203.122.40.154\n)\n\n => Array\n(\n => 222.254.96.203\n)\n\n => Array\n(\n => 103.83.148.167\n)\n\n => Array\n(\n => 103.87.251.226\n)\n\n => Array\n(\n => 123.24.129.24\n)\n\n => Array\n(\n => 137.97.83.8\n)\n\n => Array\n(\n => 223.225.33.132\n)\n\n => Array\n(\n => 128.76.175.190\n)\n\n => Array\n(\n => 195.85.219.32\n)\n\n => Array\n(\n => 139.167.102.93\n)\n\n => Array\n(\n => 49.15.198.253\n)\n\n => Array\n(\n => 45.152.183.172\n)\n\n => Array\n(\n => 42.106.180.136\n)\n\n => Array\n(\n => 95.142.120.9\n)\n\n => Array\n(\n => 139.167.236.4\n)\n\n => Array\n(\n => 159.65.72.167\n)\n\n => Array\n(\n => 49.15.89.2\n)\n\n => Array\n(\n => 42.201.161.195\n)\n\n => Array\n(\n => 27.97.210.38\n)\n\n => Array\n(\n => 171.241.45.19\n)\n\n => Array\n(\n => 42.108.2.18\n)\n\n => Array\n(\n => 171.236.40.68\n)\n\n => Array\n(\n => 110.93.82.102\n)\n\n => Array\n(\n => 43.225.24.186\n)\n\n => Array\n(\n => 117.230.189.119\n)\n\n => Array\n(\n => 124.123.147.187\n)\n\n => Array\n(\n => 216.151.184.250\n)\n\n => Array\n(\n => 49.15.133.16\n)\n\n => Array\n(\n => 49.15.220.74\n)\n\n => Array\n(\n => 157.37.221.246\n)\n\n => Array\n(\n => 176.124.233.112\n)\n\n => Array\n(\n => 118.71.167.40\n)\n\n => Array\n(\n => 182.185.213.161\n)\n\n => Array\n(\n => 47.31.79.248\n)\n\n => Array\n(\n => 223.179.238.192\n)\n\n => Array\n(\n => 79.110.128.219\n)\n\n => Array\n(\n => 106.210.42.111\n)\n\n => Array\n(\n => 47.247.214.229\n)\n\n => Array\n(\n => 193.0.220.108\n)\n\n => Array\n(\n => 1.39.206.254\n)\n\n => Array\n(\n => 123.201.77.38\n)\n\n => Array\n(\n => 115.178.207.21\n)\n\n => Array\n(\n => 37.111.202.92\n)\n\n => Array\n(\n => 49.14.179.243\n)\n\n => Array\n(\n => 117.230.145.171\n)\n\n => Array\n(\n => 171.229.242.96\n)\n\n => Array\n(\n => 27.59.174.209\n)\n\n => Array\n(\n => 1.38.202.211\n)\n\n => Array\n(\n => 157.37.128.46\n)\n\n => Array\n(\n => 49.15.94.80\n)\n\n => Array\n(\n => 123.25.46.147\n)\n\n => Array\n(\n => 117.230.170.185\n)\n\n => Array\n(\n => 5.62.16.19\n)\n\n => Array\n(\n => 103.18.22.25\n)\n\n => Array\n(\n => 103.46.200.132\n)\n\n => Array\n(\n => 27.97.165.126\n)\n\n => Array\n(\n => 117.230.54.241\n)\n\n => Array\n(\n => 27.97.209.76\n)\n\n => Array\n(\n => 47.31.182.109\n)\n\n => Array\n(\n => 47.30.223.221\n)\n\n => Array\n(\n => 103.31.94.82\n)\n\n => Array\n(\n => 103.211.14.45\n)\n\n => Array\n(\n => 171.49.233.58\n)\n\n => Array\n(\n => 65.49.126.95\n)\n\n => Array\n(\n => 69.255.101.170\n)\n\n => Array\n(\n => 27.56.224.67\n)\n\n => Array\n(\n => 117.230.146.86\n)\n\n => Array\n(\n => 27.59.154.52\n)\n\n => Array\n(\n => 132.154.114.10\n)\n\n => Array\n(\n => 182.186.77.60\n)\n\n => Array\n(\n => 117.230.136.74\n)\n\n => Array\n(\n => 43.251.94.253\n)\n\n => Array\n(\n => 103.79.168.225\n)\n\n => Array\n(\n => 117.230.56.51\n)\n\n => Array\n(\n => 27.97.187.45\n)\n\n => Array\n(\n => 137.97.190.61\n)\n\n => Array\n(\n => 193.0.220.26\n)\n\n => Array\n(\n => 49.36.137.62\n)\n\n => Array\n(\n => 47.30.189.248\n)\n\n => Array\n(\n => 109.169.23.84\n)\n\n => Array\n(\n => 111.119.185.46\n)\n\n => Array\n(\n => 103.83.148.246\n)\n\n => Array\n(\n => 157.32.119.138\n)\n\n => Array\n(\n => 5.62.41.53\n)\n\n => Array\n(\n => 47.8.243.236\n)\n\n => Array\n(\n => 112.79.158.69\n)\n\n => Array\n(\n => 180.92.148.218\n)\n\n => Array\n(\n => 157.36.162.154\n)\n\n => Array\n(\n => 39.46.114.47\n)\n\n => Array\n(\n => 117.230.173.250\n)\n\n => Array\n(\n => 117.230.155.188\n)\n\n => Array\n(\n => 193.0.220.17\n)\n\n => Array\n(\n => 117.230.171.166\n)\n\n => Array\n(\n => 49.34.59.228\n)\n\n => Array\n(\n => 111.88.197.247\n)\n\n => Array\n(\n => 47.31.156.112\n)\n\n => Array\n(\n => 137.97.64.180\n)\n\n => Array\n(\n => 14.244.227.18\n)\n\n => Array\n(\n => 113.167.158.8\n)\n\n => Array\n(\n => 39.37.175.189\n)\n\n => Array\n(\n => 139.167.211.8\n)\n\n => Array\n(\n => 73.120.85.235\n)\n\n => Array\n(\n => 104.236.195.72\n)\n\n => Array\n(\n => 27.97.190.71\n)\n\n => Array\n(\n => 79.46.170.222\n)\n\n => Array\n(\n => 102.185.244.207\n)\n\n => Array\n(\n => 37.111.136.30\n)\n\n => Array\n(\n => 50.7.93.28\n)\n\n => Array\n(\n => 110.54.251.43\n)\n\n => Array\n(\n => 49.36.143.40\n)\n\n => Array\n(\n => 103.130.112.185\n)\n\n => Array\n(\n => 37.111.139.202\n)\n\n => Array\n(\n => 49.36.139.108\n)\n\n => Array\n(\n => 37.111.136.179\n)\n\n => Array\n(\n => 123.17.165.77\n)\n\n => Array\n(\n => 49.207.143.206\n)\n\n => Array\n(\n => 39.53.80.149\n)\n\n => Array\n(\n => 223.188.71.214\n)\n\n => Array\n(\n => 1.39.222.233\n)\n\n => Array\n(\n => 117.230.9.85\n)\n\n => Array\n(\n => 103.251.245.216\n)\n\n => Array\n(\n => 122.169.133.145\n)\n\n => Array\n(\n => 43.250.165.57\n)\n\n => Array\n(\n => 39.44.13.235\n)\n\n => Array\n(\n => 157.47.181.2\n)\n\n => Array\n(\n => 27.56.203.50\n)\n\n => Array\n(\n => 191.96.97.58\n)\n\n => Array\n(\n => 111.88.107.172\n)\n\n => Array\n(\n => 113.193.198.136\n)\n\n => Array\n(\n => 117.230.172.175\n)\n\n => Array\n(\n => 191.96.182.239\n)\n\n => Array\n(\n => 2.58.46.28\n)\n\n => Array\n(\n => 183.83.253.87\n)\n\n => Array\n(\n => 49.15.139.242\n)\n\n => Array\n(\n => 42.107.220.236\n)\n\n => Array\n(\n => 14.192.53.196\n)\n\n => Array\n(\n => 42.119.212.202\n)\n\n => Array\n(\n => 192.158.234.45\n)\n\n => Array\n(\n => 49.149.102.192\n)\n\n => Array\n(\n => 47.8.170.17\n)\n\n => Array\n(\n => 117.197.13.247\n)\n\n => Array\n(\n => 116.74.34.44\n)\n\n => Array\n(\n => 103.79.249.163\n)\n\n => Array\n(\n => 182.189.95.70\n)\n\n => Array\n(\n => 137.59.218.118\n)\n\n => Array\n(\n => 103.79.170.243\n)\n\n => Array\n(\n => 39.40.54.25\n)\n\n => Array\n(\n => 119.155.40.170\n)\n\n => Array\n(\n => 1.39.212.157\n)\n\n => Array\n(\n => 70.127.59.89\n)\n\n => Array\n(\n => 14.171.22.58\n)\n\n => Array\n(\n => 194.44.167.141\n)\n\n => Array\n(\n => 111.88.179.154\n)\n\n => Array\n(\n => 117.230.140.232\n)\n\n => Array\n(\n => 137.97.96.128\n)\n\n => Array\n(\n => 198.16.66.123\n)\n\n => Array\n(\n => 106.198.44.193\n)\n\n => Array\n(\n => 119.153.45.75\n)\n\n => Array\n(\n => 49.15.242.208\n)\n\n => Array\n(\n => 119.155.241.20\n)\n\n => Array\n(\n => 106.223.109.155\n)\n\n => Array\n(\n => 119.160.119.245\n)\n\n => Array\n(\n => 106.215.81.160\n)\n\n => Array\n(\n => 1.39.192.211\n)\n\n => Array\n(\n => 223.230.35.208\n)\n\n => Array\n(\n => 39.59.4.158\n)\n\n => Array\n(\n => 43.231.57.234\n)\n\n => Array\n(\n => 60.254.78.193\n)\n\n => Array\n(\n => 122.170.224.87\n)\n\n => Array\n(\n => 117.230.22.141\n)\n\n => Array\n(\n => 119.152.107.211\n)\n\n => Array\n(\n => 103.87.192.206\n)\n\n => Array\n(\n => 39.45.244.47\n)\n\n => Array\n(\n => 50.72.141.94\n)\n\n => Array\n(\n => 39.40.6.128\n)\n\n => Array\n(\n => 39.45.180.186\n)\n\n => Array\n(\n => 49.207.131.233\n)\n\n => Array\n(\n => 139.59.69.142\n)\n\n => Array\n(\n => 111.119.187.29\n)\n\n => Array\n(\n => 119.153.40.69\n)\n\n => Array\n(\n => 49.36.133.64\n)\n\n => Array\n(\n => 103.255.4.249\n)\n\n => Array\n(\n => 198.144.154.15\n)\n\n => Array\n(\n => 1.22.46.172\n)\n\n => Array\n(\n => 103.255.5.46\n)\n\n => Array\n(\n => 27.56.195.188\n)\n\n => Array\n(\n => 203.101.167.53\n)\n\n => Array\n(\n => 117.230.62.195\n)\n\n => Array\n(\n => 103.240.194.186\n)\n\n => Array\n(\n => 107.170.166.118\n)\n\n => Array\n(\n => 101.53.245.80\n)\n\n => Array\n(\n => 157.43.13.208\n)\n\n => Array\n(\n => 137.97.100.77\n)\n\n => Array\n(\n => 47.31.150.208\n)\n\n => Array\n(\n => 137.59.222.65\n)\n\n => Array\n(\n => 103.85.127.250\n)\n\n => Array\n(\n => 103.214.119.32\n)\n\n => Array\n(\n => 182.255.49.52\n)\n\n => Array\n(\n => 103.75.247.72\n)\n\n => Array\n(\n => 103.85.125.250\n)\n\n => Array\n(\n => 183.83.253.167\n)\n\n => Array\n(\n => 1.39.222.111\n)\n\n => Array\n(\n => 111.119.185.9\n)\n\n => Array\n(\n => 111.119.187.10\n)\n\n => Array\n(\n => 39.37.147.144\n)\n\n => Array\n(\n => 103.200.198.183\n)\n\n => Array\n(\n => 1.39.222.18\n)\n\n => Array\n(\n => 198.8.80.103\n)\n\n => Array\n(\n => 42.108.1.243\n)\n\n => Array\n(\n => 111.119.187.16\n)\n\n => Array\n(\n => 39.40.241.8\n)\n\n => Array\n(\n => 122.169.150.158\n)\n\n => Array\n(\n => 39.40.215.119\n)\n\n => Array\n(\n => 103.255.5.77\n)\n\n => Array\n(\n => 157.38.108.196\n)\n\n => Array\n(\n => 103.255.4.67\n)\n\n => Array\n(\n => 5.62.60.62\n)\n\n => Array\n(\n => 39.37.146.202\n)\n\n => Array\n(\n => 110.138.6.221\n)\n\n => Array\n(\n => 49.36.143.88\n)\n\n => Array\n(\n => 37.1.215.39\n)\n\n => Array\n(\n => 27.106.59.190\n)\n\n => Array\n(\n => 139.167.139.41\n)\n\n => Array\n(\n => 114.142.166.179\n)\n\n => Array\n(\n => 223.225.240.112\n)\n\n => Array\n(\n => 103.255.5.36\n)\n\n => Array\n(\n => 175.136.1.48\n)\n\n => Array\n(\n => 103.82.80.166\n)\n\n => Array\n(\n => 182.185.196.126\n)\n\n => Array\n(\n => 157.43.45.76\n)\n\n => Array\n(\n => 119.152.132.49\n)\n\n => Array\n(\n => 5.62.62.162\n)\n\n => Array\n(\n => 103.255.4.39\n)\n\n => Array\n(\n => 202.5.144.153\n)\n\n => Array\n(\n => 1.39.223.210\n)\n\n => Array\n(\n => 92.38.176.154\n)\n\n => Array\n(\n => 117.230.186.142\n)\n\n => Array\n(\n => 183.83.39.123\n)\n\n => Array\n(\n => 182.185.156.76\n)\n\n => Array\n(\n => 104.236.74.212\n)\n\n => Array\n(\n => 107.170.145.187\n)\n\n => Array\n(\n => 117.102.7.98\n)\n\n => Array\n(\n => 137.59.220.0\n)\n\n => Array\n(\n => 157.47.222.14\n)\n\n => Array\n(\n => 47.15.206.82\n)\n\n => Array\n(\n => 117.230.159.99\n)\n\n => Array\n(\n => 117.230.175.151\n)\n\n => Array\n(\n => 157.50.97.18\n)\n\n => Array\n(\n => 117.230.47.164\n)\n\n => Array\n(\n => 77.111.244.34\n)\n\n => Array\n(\n => 139.167.189.131\n)\n\n => Array\n(\n => 1.39.204.103\n)\n\n => Array\n(\n => 117.230.58.0\n)\n\n => Array\n(\n => 182.185.226.66\n)\n\n => Array\n(\n => 115.42.70.119\n)\n\n => Array\n(\n => 171.48.114.134\n)\n\n => Array\n(\n => 144.34.218.75\n)\n\n => Array\n(\n => 199.58.164.135\n)\n\n => Array\n(\n => 101.53.228.151\n)\n\n => Array\n(\n => 117.230.50.57\n)\n\n => Array\n(\n => 223.225.138.84\n)\n\n => Array\n(\n => 110.225.67.65\n)\n\n => Array\n(\n => 47.15.200.39\n)\n\n => Array\n(\n => 39.42.20.127\n)\n\n => Array\n(\n => 117.97.241.81\n)\n\n => Array\n(\n => 111.119.185.11\n)\n\n => Array\n(\n => 103.100.5.94\n)\n\n => Array\n(\n => 103.25.137.69\n)\n\n => Array\n(\n => 47.15.197.159\n)\n\n => Array\n(\n => 223.188.176.122\n)\n\n => Array\n(\n => 27.4.175.80\n)\n\n => Array\n(\n => 181.215.43.82\n)\n\n => Array\n(\n => 27.56.228.157\n)\n\n => Array\n(\n => 117.230.19.19\n)\n\n => Array\n(\n => 47.15.208.71\n)\n\n => Array\n(\n => 119.155.21.176\n)\n\n => Array\n(\n => 47.15.234.202\n)\n\n => Array\n(\n => 117.230.144.135\n)\n\n => Array\n(\n => 112.79.139.199\n)\n\n => Array\n(\n => 116.75.246.41\n)\n\n => Array\n(\n => 117.230.177.126\n)\n\n => Array\n(\n => 212.103.48.134\n)\n\n => Array\n(\n => 102.69.228.78\n)\n\n => Array\n(\n => 117.230.37.118\n)\n\n => Array\n(\n => 175.143.61.75\n)\n\n => Array\n(\n => 139.167.56.138\n)\n\n => Array\n(\n => 58.145.189.250\n)\n\n => Array\n(\n => 103.255.5.65\n)\n\n => Array\n(\n => 39.37.153.182\n)\n\n => Array\n(\n => 157.43.85.106\n)\n\n => Array\n(\n => 185.209.178.77\n)\n\n => Array\n(\n => 1.39.212.45\n)\n\n => Array\n(\n => 103.72.7.16\n)\n\n => Array\n(\n => 117.97.185.244\n)\n\n => Array\n(\n => 117.230.59.106\n)\n\n => Array\n(\n => 137.97.121.103\n)\n\n => Array\n(\n => 103.82.123.215\n)\n\n => Array\n(\n => 103.68.217.248\n)\n\n => Array\n(\n => 157.39.27.175\n)\n\n => Array\n(\n => 47.31.100.249\n)\n\n => Array\n(\n => 14.171.232.139\n)\n\n => Array\n(\n => 103.31.93.208\n)\n\n => Array\n(\n => 117.230.56.77\n)\n\n => Array\n(\n => 124.182.25.124\n)\n\n => Array\n(\n => 106.66.191.242\n)\n\n => Array\n(\n => 175.107.237.25\n)\n\n => Array\n(\n => 119.155.1.27\n)\n\n => Array\n(\n => 72.255.6.24\n)\n\n => Array\n(\n => 192.140.152.223\n)\n\n => Array\n(\n => 212.103.48.136\n)\n\n => Array\n(\n => 39.45.134.56\n)\n\n => Array\n(\n => 139.167.173.30\n)\n\n => Array\n(\n => 117.230.63.87\n)\n\n => Array\n(\n => 182.189.95.203\n)\n\n => Array\n(\n => 49.204.183.248\n)\n\n => Array\n(\n => 47.31.125.188\n)\n\n => Array\n(\n => 103.252.171.13\n)\n\n => Array\n(\n => 112.198.74.36\n)\n\n => Array\n(\n => 27.109.113.152\n)\n\n => Array\n(\n => 42.112.233.44\n)\n\n => Array\n(\n => 47.31.68.193\n)\n\n => Array\n(\n => 103.252.171.134\n)\n\n => Array\n(\n => 77.123.32.114\n)\n\n => Array\n(\n => 1.38.189.66\n)\n\n => Array\n(\n => 39.37.181.108\n)\n\n => Array\n(\n => 42.106.44.61\n)\n\n => Array\n(\n => 157.36.8.39\n)\n\n => Array\n(\n => 223.238.41.53\n)\n\n => Array\n(\n => 202.89.77.10\n)\n\n => Array\n(\n => 117.230.150.68\n)\n\n => Array\n(\n => 175.176.87.60\n)\n\n => Array\n(\n => 137.97.117.87\n)\n\n => Array\n(\n => 132.154.123.11\n)\n\n => Array\n(\n => 45.113.124.141\n)\n\n => Array\n(\n => 103.87.56.203\n)\n\n => Array\n(\n => 159.89.171.156\n)\n\n => Array\n(\n => 119.155.53.88\n)\n\n => Array\n(\n => 222.252.107.215\n)\n\n => Array\n(\n => 132.154.75.238\n)\n\n => Array\n(\n => 122.183.41.168\n)\n\n => Array\n(\n => 42.106.254.158\n)\n\n => Array\n(\n => 103.252.171.37\n)\n\n => Array\n(\n => 202.59.13.180\n)\n\n => Array\n(\n => 37.111.139.137\n)\n\n => Array\n(\n => 39.42.93.25\n)\n\n => Array\n(\n => 118.70.177.156\n)\n\n => Array\n(\n => 117.230.148.64\n)\n\n => Array\n(\n => 39.42.15.194\n)\n\n => Array\n(\n => 137.97.176.86\n)\n\n => Array\n(\n => 106.210.102.113\n)\n\n => Array\n(\n => 39.59.84.236\n)\n\n => Array\n(\n => 49.206.187.177\n)\n\n => Array\n(\n => 117.230.133.11\n)\n\n => Array\n(\n => 42.106.253.173\n)\n\n => Array\n(\n => 178.62.102.23\n)\n\n => Array\n(\n => 111.92.76.175\n)\n\n => Array\n(\n => 132.154.86.45\n)\n\n => Array\n(\n => 117.230.128.39\n)\n\n => Array\n(\n => 117.230.53.165\n)\n\n => Array\n(\n => 49.37.200.171\n)\n\n => Array\n(\n => 104.236.213.230\n)\n\n => Array\n(\n => 103.140.30.81\n)\n\n => Array\n(\n => 59.103.104.117\n)\n\n => Array\n(\n => 65.49.126.79\n)\n\n => Array\n(\n => 202.59.12.251\n)\n\n => Array\n(\n => 37.111.136.17\n)\n\n => Array\n(\n => 163.53.85.67\n)\n\n => Array\n(\n => 123.16.240.73\n)\n\n => Array\n(\n => 103.211.14.183\n)\n\n => Array\n(\n => 103.248.93.211\n)\n\n => Array\n(\n => 116.74.59.127\n)\n\n => Array\n(\n => 137.97.169.254\n)\n\n => Array\n(\n => 113.177.79.100\n)\n\n => Array\n(\n => 74.82.60.187\n)\n\n => Array\n(\n => 117.230.157.66\n)\n\n => Array\n(\n => 169.149.194.241\n)\n\n => Array\n(\n => 117.230.156.11\n)\n\n => Array\n(\n => 202.59.12.157\n)\n\n => Array\n(\n => 42.106.181.25\n)\n\n => Array\n(\n => 202.59.13.78\n)\n\n => Array\n(\n => 39.37.153.32\n)\n\n => Array\n(\n => 177.188.216.175\n)\n\n => Array\n(\n => 222.252.53.165\n)\n\n => Array\n(\n => 37.139.23.89\n)\n\n => Array\n(\n => 117.230.139.150\n)\n\n => Array\n(\n => 104.131.176.234\n)\n\n => Array\n(\n => 42.106.181.117\n)\n\n => Array\n(\n => 117.230.180.94\n)\n\n => Array\n(\n => 180.190.171.5\n)\n\n => Array\n(\n => 150.129.165.185\n)\n\n => Array\n(\n => 51.15.0.150\n)\n\n => Array\n(\n => 42.111.4.84\n)\n\n => Array\n(\n => 74.82.60.116\n)\n\n => Array\n(\n => 137.97.121.165\n)\n\n => Array\n(\n => 64.62.187.194\n)\n\n => Array\n(\n => 137.97.106.162\n)\n\n => Array\n(\n => 137.97.92.46\n)\n\n => Array\n(\n => 137.97.170.25\n)\n\n => Array\n(\n => 103.104.192.100\n)\n\n => Array\n(\n => 185.246.211.34\n)\n\n => Array\n(\n => 119.160.96.78\n)\n\n => Array\n(\n => 212.103.48.152\n)\n\n => Array\n(\n => 183.83.153.90\n)\n\n => Array\n(\n => 117.248.150.41\n)\n\n => Array\n(\n => 185.240.246.180\n)\n\n => Array\n(\n => 162.253.131.125\n)\n\n => Array\n(\n => 117.230.153.217\n)\n\n => Array\n(\n => 117.230.169.1\n)\n\n => Array\n(\n => 49.15.138.247\n)\n\n => Array\n(\n => 117.230.37.110\n)\n\n => Array\n(\n => 14.167.188.75\n)\n\n => Array\n(\n => 169.149.239.93\n)\n\n => Array\n(\n => 103.216.176.91\n)\n\n => Array\n(\n => 117.230.12.126\n)\n\n => Array\n(\n => 184.75.209.110\n)\n\n => Array\n(\n => 117.230.6.60\n)\n\n => Array\n(\n => 117.230.135.132\n)\n\n => Array\n(\n => 31.179.29.109\n)\n\n => Array\n(\n => 74.121.188.186\n)\n\n => Array\n(\n => 117.230.35.5\n)\n\n => Array\n(\n => 111.92.74.239\n)\n\n => Array\n(\n => 104.245.144.236\n)\n\n => Array\n(\n => 39.50.22.100\n)\n\n => Array\n(\n => 47.31.190.23\n)\n\n => Array\n(\n => 157.44.73.187\n)\n\n => Array\n(\n => 117.230.8.91\n)\n\n => Array\n(\n => 157.32.18.2\n)\n\n => Array\n(\n => 111.119.187.43\n)\n\n => Array\n(\n => 203.101.185.246\n)\n\n => Array\n(\n => 5.62.34.22\n)\n\n => Array\n(\n => 122.8.143.76\n)\n\n => Array\n(\n => 115.186.2.187\n)\n\n => Array\n(\n => 202.142.110.89\n)\n\n => Array\n(\n => 157.50.61.254\n)\n\n => Array\n(\n => 223.182.211.185\n)\n\n => Array\n(\n => 103.85.125.210\n)\n\n => Array\n(\n => 103.217.133.147\n)\n\n => Array\n(\n => 103.60.196.217\n)\n\n => Array\n(\n => 157.44.238.6\n)\n\n => Array\n(\n => 117.196.225.68\n)\n\n => Array\n(\n => 104.254.92.52\n)\n\n => Array\n(\n => 39.42.46.72\n)\n\n => Array\n(\n => 221.132.119.36\n)\n\n => Array\n(\n => 111.92.77.47\n)\n\n => Array\n(\n => 223.225.19.152\n)\n\n => Array\n(\n => 159.89.121.217\n)\n\n => Array\n(\n => 39.53.221.205\n)\n\n => Array\n(\n => 193.34.217.28\n)\n\n => Array\n(\n => 139.167.206.36\n)\n\n => Array\n(\n => 96.40.10.7\n)\n\n => Array\n(\n => 124.29.198.123\n)\n\n => Array\n(\n => 117.196.226.1\n)\n\n => Array\n(\n => 106.200.85.135\n)\n\n => Array\n(\n => 106.223.180.28\n)\n\n => Array\n(\n => 103.49.232.110\n)\n\n => Array\n(\n => 139.167.208.50\n)\n\n => Array\n(\n => 139.167.201.102\n)\n\n => Array\n(\n => 14.244.224.237\n)\n\n => Array\n(\n => 103.140.31.187\n)\n\n => Array\n(\n => 49.36.134.136\n)\n\n => Array\n(\n => 160.16.61.75\n)\n\n => Array\n(\n => 103.18.22.228\n)\n\n => Array\n(\n => 47.9.74.121\n)\n\n => Array\n(\n => 47.30.216.159\n)\n\n => Array\n(\n => 117.248.150.78\n)\n\n => Array\n(\n => 5.62.34.17\n)\n\n => Array\n(\n => 139.167.247.181\n)\n\n => Array\n(\n => 193.176.84.29\n)\n\n => Array\n(\n => 103.195.201.121\n)\n\n => Array\n(\n => 89.187.175.115\n)\n\n => Array\n(\n => 137.97.81.251\n)\n\n => Array\n(\n => 157.51.147.62\n)\n\n => Array\n(\n => 103.104.192.42\n)\n\n => Array\n(\n => 14.171.235.26\n)\n\n => Array\n(\n => 178.62.89.121\n)\n\n => Array\n(\n => 119.155.4.164\n)\n\n => Array\n(\n => 43.250.241.89\n)\n\n => Array\n(\n => 103.31.100.80\n)\n\n => Array\n(\n => 119.155.7.44\n)\n\n => Array\n(\n => 106.200.73.114\n)\n\n => Array\n(\n => 77.111.246.18\n)\n\n => Array\n(\n => 157.39.99.247\n)\n\n => Array\n(\n => 103.77.42.132\n)\n\n => Array\n(\n => 74.115.214.133\n)\n\n => Array\n(\n => 117.230.49.224\n)\n\n => Array\n(\n => 39.50.108.238\n)\n\n => Array\n(\n => 47.30.221.45\n)\n\n => Array\n(\n => 95.133.164.235\n)\n\n => Array\n(\n => 212.103.48.141\n)\n\n => Array\n(\n => 104.194.218.147\n)\n\n => Array\n(\n => 106.200.88.241\n)\n\n => Array\n(\n => 182.189.212.211\n)\n\n => Array\n(\n => 39.50.142.129\n)\n\n => Array\n(\n => 77.234.43.133\n)\n\n => Array\n(\n => 49.15.192.58\n)\n\n => Array\n(\n => 119.153.37.55\n)\n\n => Array\n(\n => 27.56.156.128\n)\n\n => Array\n(\n => 168.211.4.33\n)\n\n => Array\n(\n => 203.81.236.239\n)\n\n => Array\n(\n => 157.51.149.61\n)\n\n => Array\n(\n => 117.230.45.255\n)\n\n => Array\n(\n => 39.42.106.169\n)\n\n => Array\n(\n => 27.71.89.76\n)\n\n => Array\n(\n => 123.27.109.167\n)\n\n => Array\n(\n => 106.202.21.91\n)\n\n => Array\n(\n => 103.85.125.206\n)\n\n => Array\n(\n => 122.173.250.229\n)\n\n => Array\n(\n => 106.210.102.77\n)\n\n => Array\n(\n => 134.209.47.156\n)\n\n => Array\n(\n => 45.127.232.12\n)\n\n => Array\n(\n => 45.134.224.11\n)\n\n => Array\n(\n => 27.71.89.122\n)\n\n => Array\n(\n => 157.38.105.117\n)\n\n => Array\n(\n => 191.96.73.215\n)\n\n => Array\n(\n => 171.241.92.31\n)\n\n => Array\n(\n => 49.149.104.235\n)\n\n => Array\n(\n => 104.229.247.252\n)\n\n => Array\n(\n => 111.92.78.42\n)\n\n => Array\n(\n => 47.31.88.183\n)\n\n => Array\n(\n => 171.61.203.234\n)\n\n => Array\n(\n => 183.83.226.192\n)\n\n => Array\n(\n => 119.157.107.45\n)\n\n => Array\n(\n => 91.202.163.205\n)\n\n => Array\n(\n => 157.43.62.108\n)\n\n => Array\n(\n => 182.68.248.92\n)\n\n => Array\n(\n => 157.32.251.234\n)\n\n => Array\n(\n => 110.225.196.188\n)\n\n => Array\n(\n => 27.71.89.98\n)\n\n => Array\n(\n => 175.176.87.3\n)\n\n => Array\n(\n => 103.55.90.208\n)\n\n => Array\n(\n => 47.31.41.163\n)\n\n => Array\n(\n => 223.182.195.5\n)\n\n => Array\n(\n => 122.52.101.166\n)\n\n => Array\n(\n => 103.207.82.154\n)\n\n => Array\n(\n => 171.224.178.84\n)\n\n => Array\n(\n => 110.225.235.187\n)\n\n => Array\n(\n => 119.160.97.248\n)\n\n => Array\n(\n => 116.90.101.121\n)\n\n => Array\n(\n => 182.255.48.154\n)\n\n => Array\n(\n => 180.149.221.140\n)\n\n => Array\n(\n => 194.44.79.13\n)\n\n => Array\n(\n => 47.247.18.3\n)\n\n => Array\n(\n => 27.56.242.95\n)\n\n => Array\n(\n => 41.60.236.83\n)\n\n => Array\n(\n => 122.164.162.7\n)\n\n => Array\n(\n => 71.136.154.5\n)\n\n => Array\n(\n => 132.154.119.122\n)\n\n => Array\n(\n => 110.225.80.135\n)\n\n => Array\n(\n => 84.17.61.143\n)\n\n => Array\n(\n => 119.160.102.244\n)\n\n => Array\n(\n => 47.31.27.44\n)\n\n => Array\n(\n => 27.71.89.160\n)\n\n => Array\n(\n => 107.175.38.101\n)\n\n => Array\n(\n => 195.211.150.152\n)\n\n => Array\n(\n => 157.35.250.255\n)\n\n => Array\n(\n => 111.119.187.53\n)\n\n => Array\n(\n => 119.152.97.213\n)\n\n => Array\n(\n => 180.92.143.145\n)\n\n => Array\n(\n => 72.255.61.46\n)\n\n => Array\n(\n => 47.8.183.6\n)\n\n => Array\n(\n => 92.38.148.53\n)\n\n => Array\n(\n => 122.173.194.72\n)\n\n => Array\n(\n => 183.83.226.97\n)\n\n => Array\n(\n => 122.173.73.231\n)\n\n => Array\n(\n => 119.160.101.101\n)\n\n => Array\n(\n => 93.177.75.174\n)\n\n => Array\n(\n => 115.97.196.70\n)\n\n => Array\n(\n => 111.119.187.35\n)\n\n => Array\n(\n => 103.226.226.154\n)\n\n => Array\n(\n => 103.244.172.73\n)\n\n => Array\n(\n => 119.155.61.222\n)\n\n => Array\n(\n => 157.37.184.92\n)\n\n => Array\n(\n => 119.160.103.204\n)\n\n => Array\n(\n => 175.176.87.21\n)\n\n => Array\n(\n => 185.51.228.246\n)\n\n => Array\n(\n => 103.250.164.255\n)\n\n => Array\n(\n => 122.181.194.16\n)\n\n => Array\n(\n => 157.37.230.232\n)\n\n => Array\n(\n => 103.105.236.6\n)\n\n => Array\n(\n => 111.88.128.174\n)\n\n => Array\n(\n => 37.111.139.82\n)\n\n => Array\n(\n => 39.34.133.52\n)\n\n => Array\n(\n => 113.177.79.80\n)\n\n => Array\n(\n => 180.183.71.184\n)\n\n => Array\n(\n => 116.72.218.255\n)\n\n => Array\n(\n => 119.160.117.26\n)\n\n => Array\n(\n => 158.222.0.252\n)\n\n => Array\n(\n => 23.227.142.146\n)\n\n => Array\n(\n => 122.162.152.152\n)\n\n => Array\n(\n => 103.255.149.106\n)\n\n => Array\n(\n => 104.236.53.155\n)\n\n => Array\n(\n => 119.160.119.155\n)\n\n => Array\n(\n => 175.107.214.244\n)\n\n => Array\n(\n => 102.7.116.7\n)\n\n => Array\n(\n => 111.88.91.132\n)\n\n => Array\n(\n => 119.157.248.108\n)\n\n => Array\n(\n => 222.252.36.107\n)\n\n => Array\n(\n => 157.46.209.227\n)\n\n => Array\n(\n => 39.40.54.1\n)\n\n => Array\n(\n => 223.225.19.254\n)\n\n => Array\n(\n => 154.72.150.8\n)\n\n => Array\n(\n => 107.181.177.130\n)\n\n => Array\n(\n => 101.50.75.31\n)\n\n => Array\n(\n => 84.17.58.69\n)\n\n => Array\n(\n => 178.62.5.157\n)\n\n => Array\n(\n => 112.206.175.147\n)\n\n => Array\n(\n => 137.97.113.137\n)\n\n => Array\n(\n => 103.53.44.154\n)\n\n => Array\n(\n => 180.92.143.129\n)\n\n => Array\n(\n => 14.231.223.7\n)\n\n => Array\n(\n => 167.88.63.201\n)\n\n => Array\n(\n => 103.140.204.8\n)\n\n => Array\n(\n => 221.121.135.108\n)\n\n => Array\n(\n => 119.160.97.129\n)\n\n => Array\n(\n => 27.5.168.249\n)\n\n => Array\n(\n => 119.160.102.191\n)\n\n => Array\n(\n => 122.162.219.12\n)\n\n => Array\n(\n => 157.50.141.122\n)\n\n => Array\n(\n => 43.245.8.17\n)\n\n => Array\n(\n => 113.181.198.179\n)\n\n => Array\n(\n => 47.30.221.59\n)\n\n => Array\n(\n => 110.38.29.246\n)\n\n => Array\n(\n => 14.192.140.199\n)\n\n => Array\n(\n => 24.68.10.106\n)\n\n => Array\n(\n => 47.30.209.179\n)\n\n => Array\n(\n => 106.223.123.21\n)\n\n => Array\n(\n => 103.224.48.30\n)\n\n => Array\n(\n => 104.131.19.173\n)\n\n => Array\n(\n => 119.157.100.206\n)\n\n => Array\n(\n => 103.10.226.73\n)\n\n => Array\n(\n => 162.208.51.163\n)\n\n => Array\n(\n => 47.30.221.227\n)\n\n => Array\n(\n => 119.160.116.210\n)\n\n => Array\n(\n => 198.16.78.43\n)\n\n => Array\n(\n => 39.44.201.151\n)\n\n => Array\n(\n => 71.63.181.84\n)\n\n => Array\n(\n => 14.142.192.218\n)\n\n => Array\n(\n => 39.34.147.178\n)\n\n => Array\n(\n => 111.92.75.25\n)\n\n => Array\n(\n => 45.135.239.58\n)\n\n => Array\n(\n => 14.232.235.1\n)\n\n => Array\n(\n => 49.144.100.155\n)\n\n => Array\n(\n => 62.182.99.33\n)\n\n => Array\n(\n => 104.243.212.187\n)\n\n => Array\n(\n => 59.97.132.214\n)\n\n => Array\n(\n => 47.9.15.179\n)\n\n => Array\n(\n => 39.44.103.186\n)\n\n => Array\n(\n => 183.83.241.132\n)\n\n => Array\n(\n => 103.41.24.180\n)\n\n => Array\n(\n => 104.238.46.39\n)\n\n => Array\n(\n => 103.79.170.78\n)\n\n => Array\n(\n => 59.103.138.81\n)\n\n => Array\n(\n => 106.198.191.146\n)\n\n => Array\n(\n => 106.198.255.122\n)\n\n => Array\n(\n => 47.31.46.37\n)\n\n => Array\n(\n => 109.169.23.76\n)\n\n => Array\n(\n => 103.143.7.55\n)\n\n => Array\n(\n => 49.207.114.52\n)\n\n => Array\n(\n => 198.54.106.250\n)\n\n => Array\n(\n => 39.50.64.18\n)\n\n => Array\n(\n => 222.252.48.132\n)\n\n => Array\n(\n => 42.201.186.53\n)\n\n => Array\n(\n => 115.97.198.95\n)\n\n => Array\n(\n => 93.76.134.244\n)\n\n => Array\n(\n => 122.173.15.189\n)\n\n => Array\n(\n => 39.62.38.29\n)\n\n => Array\n(\n => 103.201.145.254\n)\n\n => Array\n(\n => 111.119.187.23\n)\n\n => Array\n(\n => 157.50.66.33\n)\n\n => Array\n(\n => 157.49.68.163\n)\n\n => Array\n(\n => 103.85.125.215\n)\n\n => Array\n(\n => 103.255.4.16\n)\n\n => Array\n(\n => 223.181.246.206\n)\n\n => Array\n(\n => 39.40.109.226\n)\n\n => Array\n(\n => 43.225.70.157\n)\n\n => Array\n(\n => 103.211.18.168\n)\n\n => Array\n(\n => 137.59.221.60\n)\n\n => Array\n(\n => 103.81.214.63\n)\n\n => Array\n(\n => 39.35.163.2\n)\n\n => Array\n(\n => 106.205.124.39\n)\n\n => Array\n(\n => 209.99.165.216\n)\n\n => Array\n(\n => 103.75.247.187\n)\n\n => Array\n(\n => 157.46.217.41\n)\n\n => Array\n(\n => 75.186.73.80\n)\n\n => Array\n(\n => 212.103.48.153\n)\n\n => Array\n(\n => 47.31.61.167\n)\n\n => Array\n(\n => 119.152.145.131\n)\n\n => Array\n(\n => 171.76.177.244\n)\n\n => Array\n(\n => 103.135.78.50\n)\n\n => Array\n(\n => 103.79.170.75\n)\n\n => Array\n(\n => 105.160.22.74\n)\n\n => Array\n(\n => 47.31.20.153\n)\n\n => Array\n(\n => 42.107.204.65\n)\n\n => Array\n(\n => 49.207.131.35\n)\n\n => Array\n(\n => 92.38.148.61\n)\n\n => Array\n(\n => 183.83.255.206\n)\n\n => Array\n(\n => 107.181.177.131\n)\n\n => Array\n(\n => 39.40.220.157\n)\n\n => Array\n(\n => 39.41.133.176\n)\n\n => Array\n(\n => 103.81.214.61\n)\n\n => Array\n(\n => 223.235.108.46\n)\n\n => Array\n(\n => 171.241.52.118\n)\n\n => Array\n(\n => 39.57.138.47\n)\n\n => Array\n(\n => 106.204.196.172\n)\n\n => Array\n(\n => 39.53.228.40\n)\n\n => Array\n(\n => 185.242.5.99\n)\n\n => Array\n(\n => 103.255.5.96\n)\n\n => Array\n(\n => 157.46.212.120\n)\n\n => Array\n(\n => 107.181.177.138\n)\n\n => Array\n(\n => 47.30.193.65\n)\n\n => Array\n(\n => 39.37.178.33\n)\n\n => Array\n(\n => 157.46.173.29\n)\n\n => Array\n(\n => 39.57.238.211\n)\n\n => Array\n(\n => 157.37.245.113\n)\n\n => Array\n(\n => 47.30.201.138\n)\n\n => Array\n(\n => 106.204.193.108\n)\n\n => Array\n(\n => 212.103.50.212\n)\n\n => Array\n(\n => 58.65.221.187\n)\n\n => Array\n(\n => 178.62.92.29\n)\n\n => Array\n(\n => 111.92.77.166\n)\n\n => Array\n(\n => 47.30.223.158\n)\n\n => Array\n(\n => 103.224.54.83\n)\n\n => Array\n(\n => 119.153.43.22\n)\n\n => Array\n(\n => 223.181.126.251\n)\n\n => Array\n(\n => 39.42.175.202\n)\n\n => Array\n(\n => 103.224.54.190\n)\n\n => Array\n(\n => 49.36.141.210\n)\n\n => Array\n(\n => 5.62.63.218\n)\n\n => Array\n(\n => 39.59.9.18\n)\n\n => Array\n(\n => 111.88.86.45\n)\n\n => Array\n(\n => 178.54.139.5\n)\n\n => Array\n(\n => 116.68.105.241\n)\n\n => Array\n(\n => 119.160.96.187\n)\n\n => Array\n(\n => 182.189.192.103\n)\n\n => Array\n(\n => 119.160.96.143\n)\n\n => Array\n(\n => 110.225.89.98\n)\n\n => Array\n(\n => 169.149.195.134\n)\n\n => Array\n(\n => 103.238.104.54\n)\n\n => Array\n(\n => 47.30.208.142\n)\n\n => Array\n(\n => 157.46.179.209\n)\n\n => Array\n(\n => 223.235.38.119\n)\n\n => Array\n(\n => 42.106.180.165\n)\n\n => Array\n(\n => 154.122.240.239\n)\n\n => Array\n(\n => 106.223.104.191\n)\n\n => Array\n(\n => 111.93.110.218\n)\n\n => Array\n(\n => 182.183.161.171\n)\n\n => Array\n(\n => 157.44.184.211\n)\n\n => Array\n(\n => 157.50.185.193\n)\n\n => Array\n(\n => 117.230.19.194\n)\n\n => Array\n(\n => 162.243.246.160\n)\n\n => Array\n(\n => 106.223.143.53\n)\n\n => Array\n(\n => 39.59.41.15\n)\n\n => Array\n(\n => 106.210.65.42\n)\n\n => Array\n(\n => 180.243.144.208\n)\n\n => Array\n(\n => 116.68.105.22\n)\n\n => Array\n(\n => 115.42.70.46\n)\n\n => Array\n(\n => 99.72.192.148\n)\n\n => Array\n(\n => 182.183.182.48\n)\n\n => Array\n(\n => 171.48.58.97\n)\n\n => Array\n(\n => 37.120.131.188\n)\n\n => Array\n(\n => 117.99.167.177\n)\n\n => Array\n(\n => 111.92.76.210\n)\n\n => Array\n(\n => 14.192.144.245\n)\n\n => Array\n(\n => 169.149.242.87\n)\n\n => Array\n(\n => 47.30.198.149\n)\n\n => Array\n(\n => 59.103.57.140\n)\n\n => Array\n(\n => 117.230.161.168\n)\n\n => Array\n(\n => 110.225.88.173\n)\n\n => Array\n(\n => 169.149.246.95\n)\n\n => Array\n(\n => 42.106.180.52\n)\n\n => Array\n(\n => 14.231.160.157\n)\n\n => Array\n(\n => 123.27.109.47\n)\n\n => Array\n(\n => 157.46.130.54\n)\n\n => Array\n(\n => 39.42.73.194\n)\n\n => Array\n(\n => 117.230.18.147\n)\n\n => Array\n(\n => 27.59.231.98\n)\n\n => Array\n(\n => 125.209.78.227\n)\n\n => Array\n(\n => 157.34.80.145\n)\n\n => Array\n(\n => 42.201.251.86\n)\n\n => Array\n(\n => 117.230.129.158\n)\n\n => Array\n(\n => 103.82.80.103\n)\n\n => Array\n(\n => 47.9.171.228\n)\n\n => Array\n(\n => 117.230.24.92\n)\n\n => Array\n(\n => 103.129.143.119\n)\n\n => Array\n(\n => 39.40.213.45\n)\n\n => Array\n(\n => 178.92.188.214\n)\n\n => Array\n(\n => 110.235.232.191\n)\n\n => Array\n(\n => 5.62.34.18\n)\n\n => Array\n(\n => 47.30.212.134\n)\n\n => Array\n(\n => 157.42.34.196\n)\n\n => Array\n(\n => 157.32.169.9\n)\n\n => Array\n(\n => 103.255.4.11\n)\n\n => Array\n(\n => 117.230.13.69\n)\n\n => Array\n(\n => 117.230.58.97\n)\n\n => Array\n(\n => 92.52.138.39\n)\n\n => Array\n(\n => 221.132.119.63\n)\n\n => Array\n(\n => 117.97.167.188\n)\n\n => Array\n(\n => 119.153.56.58\n)\n\n => Array\n(\n => 105.50.22.150\n)\n\n => Array\n(\n => 115.42.68.126\n)\n\n => Array\n(\n => 182.189.223.159\n)\n\n => Array\n(\n => 39.59.36.90\n)\n\n => Array\n(\n => 111.92.76.114\n)\n\n => Array\n(\n => 157.47.226.163\n)\n\n => Array\n(\n => 202.47.44.37\n)\n\n => Array\n(\n => 106.51.234.172\n)\n\n => Array\n(\n => 103.101.88.166\n)\n\n => Array\n(\n => 27.6.246.146\n)\n\n => Array\n(\n => 103.255.5.83\n)\n\n => Array\n(\n => 103.98.210.185\n)\n\n => Array\n(\n => 122.173.114.134\n)\n\n => Array\n(\n => 122.173.77.248\n)\n\n => Array\n(\n => 5.62.41.172\n)\n\n => Array\n(\n => 180.178.181.17\n)\n\n => Array\n(\n => 37.120.133.224\n)\n\n => Array\n(\n => 45.131.5.156\n)\n\n => Array\n(\n => 110.39.100.110\n)\n\n => Array\n(\n => 176.110.38.185\n)\n\n => Array\n(\n => 36.255.41.64\n)\n\n => Array\n(\n => 103.104.192.15\n)\n\n => Array\n(\n => 43.245.131.195\n)\n\n => Array\n(\n => 14.248.111.185\n)\n\n => Array\n(\n => 122.173.217.133\n)\n\n => Array\n(\n => 106.223.90.245\n)\n\n => Array\n(\n => 119.153.56.80\n)\n\n => Array\n(\n => 103.7.60.172\n)\n\n => Array\n(\n => 157.46.184.233\n)\n\n => Array\n(\n => 182.190.31.95\n)\n\n => Array\n(\n => 109.87.189.122\n)\n\n => Array\n(\n => 91.74.25.100\n)\n\n => Array\n(\n => 182.185.224.144\n)\n\n => Array\n(\n => 106.223.91.221\n)\n\n => Array\n(\n => 182.190.223.40\n)\n\n => Array\n(\n => 2.58.194.134\n)\n\n => Array\n(\n => 196.246.225.236\n)\n\n => Array\n(\n => 106.223.90.173\n)\n\n => Array\n(\n => 23.239.16.54\n)\n\n => Array\n(\n => 157.46.65.225\n)\n\n => Array\n(\n => 115.186.130.14\n)\n\n => Array\n(\n => 103.85.125.157\n)\n\n => Array\n(\n => 14.248.103.6\n)\n\n => Array\n(\n => 123.24.169.247\n)\n\n => Array\n(\n => 103.130.108.153\n)\n\n => Array\n(\n => 115.42.67.21\n)\n\n => Array\n(\n => 202.166.171.190\n)\n\n => Array\n(\n => 39.37.169.104\n)\n\n => Array\n(\n => 103.82.80.59\n)\n\n => Array\n(\n => 175.107.208.58\n)\n\n => Array\n(\n => 203.192.238.247\n)\n\n => Array\n(\n => 103.217.178.150\n)\n\n => Array\n(\n => 103.66.214.173\n)\n\n => Array\n(\n => 110.93.236.174\n)\n\n => Array\n(\n => 143.189.242.64\n)\n\n => Array\n(\n => 77.111.245.12\n)\n\n => Array\n(\n => 145.239.2.231\n)\n\n => Array\n(\n => 115.186.190.38\n)\n\n => Array\n(\n => 109.169.23.67\n)\n\n => Array\n(\n => 198.16.70.29\n)\n\n => Array\n(\n => 111.92.76.186\n)\n\n => Array\n(\n => 115.42.69.34\n)\n\n => Array\n(\n => 73.61.100.95\n)\n\n => Array\n(\n => 103.129.142.31\n)\n\n => Array\n(\n => 103.255.5.53\n)\n\n => Array\n(\n => 103.76.55.2\n)\n\n => Array\n(\n => 47.9.141.138\n)\n\n => Array\n(\n => 103.55.89.234\n)\n\n => Array\n(\n => 103.223.13.53\n)\n\n => Array\n(\n => 175.158.50.203\n)\n\n => Array\n(\n => 103.255.5.90\n)\n\n => Array\n(\n => 106.223.100.138\n)\n\n => Array\n(\n => 39.37.143.193\n)\n\n => Array\n(\n => 206.189.133.131\n)\n\n => Array\n(\n => 43.224.0.233\n)\n\n => Array\n(\n => 115.186.132.106\n)\n\n => Array\n(\n => 31.43.21.159\n)\n\n => Array\n(\n => 119.155.56.131\n)\n\n => Array\n(\n => 103.82.80.138\n)\n\n => Array\n(\n => 24.87.128.119\n)\n\n => Array\n(\n => 106.210.103.163\n)\n\n => Array\n(\n => 103.82.80.90\n)\n\n => Array\n(\n => 157.46.186.45\n)\n\n => Array\n(\n => 157.44.155.238\n)\n\n => Array\n(\n => 103.119.199.2\n)\n\n => Array\n(\n => 27.97.169.205\n)\n\n => Array\n(\n => 157.46.174.89\n)\n\n => Array\n(\n => 43.250.58.220\n)\n\n => Array\n(\n => 76.189.186.64\n)\n\n => Array\n(\n => 103.255.5.57\n)\n\n => Array\n(\n => 171.61.196.136\n)\n\n => Array\n(\n => 202.47.40.88\n)\n\n => Array\n(\n => 97.118.94.116\n)\n\n => Array\n(\n => 157.44.124.157\n)\n\n => Array\n(\n => 95.142.120.13\n)\n\n => Array\n(\n => 42.201.229.151\n)\n\n => Array\n(\n => 157.46.178.95\n)\n\n => Array\n(\n => 169.149.215.192\n)\n\n => Array\n(\n => 42.111.19.48\n)\n\n => Array\n(\n => 1.38.52.18\n)\n\n => Array\n(\n => 145.239.91.241\n)\n\n => Array\n(\n => 47.31.78.191\n)\n\n => Array\n(\n => 103.77.42.60\n)\n\n => Array\n(\n => 157.46.107.144\n)\n\n => Array\n(\n => 157.46.125.124\n)\n\n => Array\n(\n => 110.225.218.108\n)\n\n => Array\n(\n => 106.51.77.185\n)\n\n => Array\n(\n => 123.24.161.207\n)\n\n => Array\n(\n => 106.210.108.22\n)\n\n => Array\n(\n => 42.111.10.14\n)\n\n => Array\n(\n => 223.29.231.175\n)\n\n => Array\n(\n => 27.56.152.132\n)\n\n => Array\n(\n => 119.155.31.100\n)\n\n => Array\n(\n => 122.173.172.127\n)\n\n => Array\n(\n => 103.77.42.64\n)\n\n => Array\n(\n => 157.44.164.106\n)\n\n => Array\n(\n => 14.181.53.38\n)\n\n => Array\n(\n => 115.42.67.64\n)\n\n => Array\n(\n => 47.31.33.140\n)\n\n => Array\n(\n => 103.15.60.234\n)\n\n => Array\n(\n => 182.64.219.181\n)\n\n => Array\n(\n => 103.44.51.6\n)\n\n => Array\n(\n => 116.74.25.157\n)\n\n => Array\n(\n => 116.71.2.128\n)\n\n => Array\n(\n => 157.32.185.239\n)\n\n => Array\n(\n => 47.31.25.79\n)\n\n => Array\n(\n => 178.62.85.75\n)\n\n => Array\n(\n => 180.178.190.39\n)\n\n => Array\n(\n => 39.48.52.179\n)\n\n => Array\n(\n => 106.193.11.240\n)\n\n => Array\n(\n => 103.82.80.226\n)\n\n => Array\n(\n => 49.206.126.30\n)\n\n => Array\n(\n => 157.245.191.173\n)\n\n => Array\n(\n => 49.205.84.237\n)\n\n => Array\n(\n => 47.8.181.232\n)\n\n => Array\n(\n => 182.66.2.92\n)\n\n => Array\n(\n => 49.34.137.220\n)\n\n => Array\n(\n => 209.205.217.125\n)\n\n => Array\n(\n => 192.64.5.73\n)\n\n => Array\n(\n => 27.63.166.108\n)\n\n => Array\n(\n => 120.29.96.211\n)\n\n => Array\n(\n => 182.186.112.135\n)\n\n => Array\n(\n => 45.118.165.151\n)\n\n => Array\n(\n => 47.8.228.12\n)\n\n => Array\n(\n => 106.215.3.162\n)\n\n => Array\n(\n => 111.92.72.66\n)\n\n => Array\n(\n => 169.145.2.9\n)\n\n => Array\n(\n => 106.207.205.100\n)\n\n => Array\n(\n => 223.181.8.12\n)\n\n => Array\n(\n => 157.48.149.78\n)\n\n => Array\n(\n => 103.206.138.116\n)\n\n => Array\n(\n => 39.53.119.22\n)\n\n => Array\n(\n => 157.33.232.106\n)\n\n => Array\n(\n => 49.37.205.139\n)\n\n => Array\n(\n => 115.42.68.3\n)\n\n => Array\n(\n => 93.72.182.251\n)\n\n => Array\n(\n => 202.142.166.22\n)\n\n => Array\n(\n => 157.119.81.111\n)\n\n => Array\n(\n => 182.186.116.155\n)\n\n => Array\n(\n => 157.37.171.37\n)\n\n => Array\n(\n => 117.206.164.48\n)\n\n => Array\n(\n => 49.36.52.63\n)\n\n => Array\n(\n => 203.175.72.112\n)\n\n => Array\n(\n => 171.61.132.193\n)\n\n => Array\n(\n => 111.119.187.44\n)\n\n => Array\n(\n => 39.37.165.216\n)\n\n => Array\n(\n => 103.86.109.58\n)\n\n => Array\n(\n => 39.59.2.86\n)\n\n => Array\n(\n => 111.119.187.28\n)\n\n => Array\n(\n => 106.201.9.10\n)\n\n => Array\n(\n => 49.35.25.106\n)\n\n => Array\n(\n => 157.49.239.103\n)\n\n => Array\n(\n => 157.49.237.198\n)\n\n => Array\n(\n => 14.248.64.121\n)\n\n => Array\n(\n => 117.102.7.214\n)\n\n => Array\n(\n => 120.29.91.246\n)\n\n => Array\n(\n => 103.7.79.41\n)\n\n => Array\n(\n => 132.154.99.209\n)\n\n => Array\n(\n => 212.36.27.245\n)\n\n => Array\n(\n => 157.44.154.9\n)\n\n => Array\n(\n => 47.31.56.44\n)\n\n => Array\n(\n => 192.142.199.136\n)\n\n => Array\n(\n => 171.61.159.49\n)\n\n => Array\n(\n => 119.160.116.151\n)\n\n => Array\n(\n => 103.98.63.39\n)\n\n => Array\n(\n => 41.60.233.216\n)\n\n => Array\n(\n => 49.36.75.212\n)\n\n => Array\n(\n => 223.188.60.20\n)\n\n => Array\n(\n => 103.98.63.50\n)\n\n => Array\n(\n => 178.162.198.21\n)\n\n => Array\n(\n => 157.46.209.35\n)\n\n => Array\n(\n => 119.155.32.151\n)\n\n => Array\n(\n => 102.185.58.161\n)\n\n => Array\n(\n => 59.96.89.231\n)\n\n => Array\n(\n => 119.155.255.198\n)\n\n => Array\n(\n => 42.107.204.57\n)\n\n => Array\n(\n => 42.106.181.74\n)\n\n => Array\n(\n => 157.46.219.186\n)\n\n => Array\n(\n => 115.42.71.49\n)\n\n => Array\n(\n => 157.46.209.131\n)\n\n => Array\n(\n => 220.81.15.94\n)\n\n => Array\n(\n => 111.119.187.24\n)\n\n => Array\n(\n => 49.37.195.185\n)\n\n => Array\n(\n => 42.106.181.85\n)\n\n => Array\n(\n => 43.249.225.134\n)\n\n => Array\n(\n => 117.206.165.151\n)\n\n => Array\n(\n => 119.153.48.250\n)\n\n => Array\n(\n => 27.4.172.162\n)\n\n => Array\n(\n => 117.20.29.51\n)\n\n => Array\n(\n => 103.98.63.135\n)\n\n => Array\n(\n => 117.7.218.229\n)\n\n => Array\n(\n => 157.49.233.105\n)\n\n => Array\n(\n => 39.53.151.199\n)\n\n => Array\n(\n => 101.255.118.33\n)\n\n => Array\n(\n => 41.141.246.9\n)\n\n => Array\n(\n => 221.132.113.78\n)\n\n => Array\n(\n => 119.160.116.202\n)\n\n => Array\n(\n => 117.237.193.244\n)\n\n => Array\n(\n => 157.41.110.145\n)\n\n => Array\n(\n => 103.98.63.5\n)\n\n => Array\n(\n => 103.125.129.58\n)\n\n => Array\n(\n => 183.83.254.66\n)\n\n => Array\n(\n => 45.135.236.160\n)\n\n => Array\n(\n => 198.199.87.124\n)\n\n => Array\n(\n => 193.176.86.41\n)\n\n => Array\n(\n => 115.97.142.98\n)\n\n => Array\n(\n => 222.252.38.198\n)\n\n => Array\n(\n => 110.93.237.49\n)\n\n => Array\n(\n => 103.224.48.122\n)\n\n => Array\n(\n => 110.38.28.130\n)\n\n => Array\n(\n => 106.211.238.154\n)\n\n => Array\n(\n => 111.88.41.73\n)\n\n => Array\n(\n => 119.155.13.143\n)\n\n => Array\n(\n => 103.213.111.60\n)\n\n => Array\n(\n => 202.0.103.42\n)\n\n => Array\n(\n => 157.48.144.33\n)\n\n => Array\n(\n => 111.119.187.62\n)\n\n => Array\n(\n => 103.87.212.71\n)\n\n => Array\n(\n => 157.37.177.20\n)\n\n => Array\n(\n => 223.233.71.92\n)\n\n => Array\n(\n => 116.213.32.107\n)\n\n => Array\n(\n => 104.248.173.151\n)\n\n => Array\n(\n => 14.181.102.222\n)\n\n => Array\n(\n => 103.10.224.252\n)\n\n => Array\n(\n => 175.158.50.57\n)\n\n => Array\n(\n => 165.22.122.199\n)\n\n => Array\n(\n => 23.106.56.12\n)\n\n => Array\n(\n => 203.122.10.146\n)\n\n => Array\n(\n => 37.111.136.138\n)\n\n => Array\n(\n => 103.87.193.66\n)\n\n => Array\n(\n => 39.59.122.246\n)\n\n => Array\n(\n => 111.119.183.63\n)\n\n => Array\n(\n => 157.46.72.102\n)\n\n => Array\n(\n => 185.132.133.82\n)\n\n => Array\n(\n => 118.103.230.148\n)\n\n => Array\n(\n => 5.62.39.45\n)\n\n => Array\n(\n => 119.152.144.134\n)\n\n => Array\n(\n => 172.105.117.102\n)\n\n => Array\n(\n => 122.254.70.212\n)\n\n => Array\n(\n => 102.185.128.97\n)\n\n => Array\n(\n => 182.69.249.11\n)\n\n => Array\n(\n => 105.163.134.167\n)\n\n => Array\n(\n => 111.119.187.38\n)\n\n => Array\n(\n => 103.46.195.93\n)\n\n => Array\n(\n => 106.204.161.156\n)\n\n => Array\n(\n => 122.176.2.175\n)\n\n => Array\n(\n => 117.99.162.31\n)\n\n => Array\n(\n => 106.212.241.242\n)\n\n => Array\n(\n => 42.107.196.149\n)\n\n => Array\n(\n => 212.90.60.57\n)\n\n => Array\n(\n => 175.107.237.12\n)\n\n => Array\n(\n => 157.46.119.152\n)\n\n => Array\n(\n => 157.34.81.12\n)\n\n => Array\n(\n => 162.243.1.22\n)\n\n => Array\n(\n => 110.37.222.178\n)\n\n => Array\n(\n => 103.46.195.68\n)\n\n => Array\n(\n => 119.160.116.81\n)\n\n => Array\n(\n => 138.197.131.28\n)\n\n => Array\n(\n => 103.88.218.124\n)\n\n => Array\n(\n => 192.241.172.113\n)\n\n => Array\n(\n => 110.39.174.106\n)\n\n => Array\n(\n => 111.88.48.17\n)\n\n => Array\n(\n => 42.108.160.218\n)\n\n => Array\n(\n => 117.102.0.16\n)\n\n => Array\n(\n => 157.46.125.235\n)\n\n => Array\n(\n => 14.190.242.251\n)\n\n => Array\n(\n => 47.31.184.64\n)\n\n => Array\n(\n => 49.205.84.157\n)\n\n => Array\n(\n => 122.162.115.247\n)\n\n => Array\n(\n => 41.202.219.74\n)\n\n => Array\n(\n => 106.215.9.67\n)\n\n => Array\n(\n => 103.87.56.208\n)\n\n => Array\n(\n => 103.46.194.147\n)\n\n => Array\n(\n => 116.90.98.81\n)\n\n => Array\n(\n => 115.42.71.213\n)\n\n => Array\n(\n => 39.49.35.192\n)\n\n => Array\n(\n => 41.202.219.65\n)\n\n => Array\n(\n => 131.212.249.93\n)\n\n => Array\n(\n => 49.205.16.251\n)\n\n => Array\n(\n => 39.34.147.250\n)\n\n => Array\n(\n => 183.83.210.185\n)\n\n => Array\n(\n => 49.37.194.215\n)\n\n => Array\n(\n => 103.46.194.108\n)\n\n => Array\n(\n => 89.36.219.233\n)\n\n => Array\n(\n => 119.152.105.178\n)\n\n => Array\n(\n => 202.47.45.125\n)\n\n => Array\n(\n => 156.146.59.27\n)\n\n => Array\n(\n => 132.154.21.156\n)\n\n => Array\n(\n => 157.44.35.31\n)\n\n => Array\n(\n => 41.80.118.124\n)\n\n => Array\n(\n => 47.31.159.198\n)\n\n => Array\n(\n => 103.209.223.140\n)\n\n => Array\n(\n => 157.46.130.138\n)\n\n => Array\n(\n => 49.37.199.246\n)\n\n => Array\n(\n => 111.88.242.10\n)\n\n => Array\n(\n => 43.241.145.110\n)\n\n => Array\n(\n => 124.153.16.30\n)\n\n => Array\n(\n => 27.5.22.173\n)\n\n => Array\n(\n => 111.88.191.173\n)\n\n => Array\n(\n => 41.60.236.200\n)\n\n => Array\n(\n => 115.42.67.146\n)\n\n => Array\n(\n => 150.242.173.7\n)\n\n => Array\n(\n => 14.248.71.23\n)\n\n => Array\n(\n => 111.119.187.4\n)\n\n => Array\n(\n => 124.29.212.118\n)\n\n => Array\n(\n => 51.68.205.163\n)\n\n => Array\n(\n => 182.184.107.63\n)\n\n => Array\n(\n => 106.211.253.87\n)\n\n => Array\n(\n => 223.190.89.5\n)\n\n => Array\n(\n => 183.83.212.63\n)\n\n => Array\n(\n => 129.205.113.227\n)\n\n => Array\n(\n => 106.210.40.141\n)\n\n => Array\n(\n => 91.202.163.169\n)\n\n => Array\n(\n => 76.105.191.89\n)\n\n => Array\n(\n => 171.51.244.160\n)\n\n => Array\n(\n => 37.139.188.92\n)\n\n => Array\n(\n => 23.106.56.37\n)\n\n => Array\n(\n => 157.44.175.180\n)\n\n => Array\n(\n => 122.2.122.97\n)\n\n => Array\n(\n => 103.87.192.194\n)\n\n => Array\n(\n => 192.154.253.6\n)\n\n => Array\n(\n => 77.243.191.19\n)\n\n => Array\n(\n => 122.254.70.46\n)\n\n => Array\n(\n => 154.76.233.73\n)\n\n => Array\n(\n => 195.181.167.150\n)\n\n => Array\n(\n => 209.209.228.5\n)\n\n => Array\n(\n => 203.192.212.115\n)\n\n => Array\n(\n => 221.132.118.179\n)\n\n => Array\n(\n => 117.208.210.204\n)\n\n => Array\n(\n => 120.29.90.126\n)\n\n => Array\n(\n => 36.77.239.190\n)\n\n => Array\n(\n => 157.37.137.127\n)\n\n => Array\n(\n => 39.40.243.6\n)\n\n => Array\n(\n => 182.182.41.201\n)\n\n => Array\n(\n => 39.59.32.46\n)\n\n => Array\n(\n => 111.119.183.36\n)\n\n => Array\n(\n => 103.83.147.61\n)\n\n => Array\n(\n => 103.82.80.85\n)\n\n => Array\n(\n => 103.46.194.161\n)\n\n => Array\n(\n => 101.50.105.38\n)\n\n => Array\n(\n => 111.119.183.58\n)\n\n => Array\n(\n => 47.9.234.51\n)\n\n => Array\n(\n => 120.29.86.157\n)\n\n => Array\n(\n => 175.158.50.70\n)\n\n => Array\n(\n => 112.196.163.235\n)\n\n => Array\n(\n => 139.167.161.85\n)\n\n => Array\n(\n => 106.207.39.181\n)\n\n => Array\n(\n => 103.77.42.159\n)\n\n => Array\n(\n => 185.56.138.220\n)\n\n => Array\n(\n => 119.155.33.205\n)\n\n => Array\n(\n => 157.42.117.124\n)\n\n => Array\n(\n => 103.117.202.202\n)\n\n => Array\n(\n => 220.253.101.109\n)\n\n => Array\n(\n => 49.37.7.247\n)\n\n => Array\n(\n => 119.160.65.27\n)\n\n => Array\n(\n => 114.122.21.151\n)\n\n => Array\n(\n => 157.44.141.83\n)\n\n => Array\n(\n => 103.131.9.7\n)\n\n => Array\n(\n => 125.99.222.21\n)\n\n => Array\n(\n => 103.238.104.206\n)\n\n => Array\n(\n => 110.93.227.100\n)\n\n => Array\n(\n => 49.14.119.114\n)\n\n => Array\n(\n => 115.186.189.82\n)\n\n => Array\n(\n => 106.201.194.2\n)\n\n => Array\n(\n => 106.204.227.28\n)\n\n => Array\n(\n => 47.31.206.13\n)\n\n => Array\n(\n => 39.42.144.109\n)\n\n => Array\n(\n => 14.253.254.90\n)\n\n => Array\n(\n => 157.44.142.118\n)\n\n => Array\n(\n => 192.142.176.21\n)\n\n => Array\n(\n => 103.217.178.225\n)\n\n => Array\n(\n => 106.78.78.16\n)\n\n => Array\n(\n => 167.71.63.184\n)\n\n => Array\n(\n => 207.244.71.82\n)\n\n => Array\n(\n => 71.105.25.145\n)\n\n => Array\n(\n => 39.51.250.30\n)\n\n => Array\n(\n => 157.41.120.160\n)\n\n => Array\n(\n => 39.37.137.81\n)\n\n => Array\n(\n => 41.80.237.27\n)\n\n => Array\n(\n => 111.119.187.50\n)\n\n => Array\n(\n => 49.145.224.252\n)\n\n => Array\n(\n => 106.197.28.106\n)\n\n => Array\n(\n => 103.217.178.240\n)\n\n => Array\n(\n => 27.97.182.237\n)\n\n => Array\n(\n => 106.211.253.72\n)\n\n => Array\n(\n => 119.152.154.172\n)\n\n => Array\n(\n => 103.255.151.148\n)\n\n => Array\n(\n => 154.157.80.12\n)\n\n => Array\n(\n => 156.146.59.28\n)\n\n => Array\n(\n => 171.61.211.64\n)\n\n => Array\n(\n => 27.76.59.22\n)\n\n => Array\n(\n => 167.99.92.124\n)\n\n => Array\n(\n => 132.154.94.51\n)\n\n => Array\n(\n => 111.119.183.38\n)\n\n => Array\n(\n => 115.42.70.169\n)\n\n => Array\n(\n => 109.169.23.83\n)\n\n => Array\n(\n => 157.46.213.64\n)\n\n => Array\n(\n => 39.37.179.171\n)\n\n => Array\n(\n => 14.232.233.32\n)\n\n => Array\n(\n => 157.49.226.13\n)\n\n => Array\n(\n => 185.209.178.78\n)\n\n => Array\n(\n => 222.252.46.230\n)\n\n => Array\n(\n => 139.5.255.168\n)\n\n => Array\n(\n => 202.8.118.12\n)\n\n => Array\n(\n => 39.53.205.63\n)\n\n => Array\n(\n => 157.37.167.227\n)\n\n => Array\n(\n => 157.49.237.121\n)\n\n => Array\n(\n => 208.89.99.6\n)\n\n => Array\n(\n => 111.119.187.33\n)\n\n => Array\n(\n => 39.37.132.101\n)\n\n => Array\n(\n => 72.255.61.15\n)\n\n => Array\n(\n => 157.41.69.126\n)\n\n => Array\n(\n => 27.6.193.15\n)\n\n => Array\n(\n => 157.41.104.8\n)\n\n => Array\n(\n => 157.41.97.162\n)\n\n => Array\n(\n => 95.136.91.67\n)\n\n => Array\n(\n => 110.93.209.138\n)\n\n => Array\n(\n => 119.152.154.82\n)\n\n => Array\n(\n => 111.88.239.223\n)\n\n => Array\n(\n => 157.230.62.100\n)\n\n => Array\n(\n => 37.111.136.167\n)\n\n => Array\n(\n => 139.167.162.65\n)\n\n => Array\n(\n => 120.29.72.72\n)\n\n => Array\n(\n => 39.42.169.69\n)\n\n => Array\n(\n => 157.49.247.12\n)\n\n => Array\n(\n => 43.231.58.221\n)\n\n => Array\n(\n => 111.88.229.18\n)\n\n => Array\n(\n => 171.79.185.198\n)\n\n => Array\n(\n => 169.149.193.102\n)\n\n => Array\n(\n => 207.244.89.162\n)\n\n => Array\n(\n => 27.4.217.129\n)\n\n => Array\n(\n => 91.236.184.12\n)\n\n => Array\n(\n => 14.192.154.150\n)\n\n => Array\n(\n => 167.172.55.253\n)\n\n => Array\n(\n => 103.77.42.192\n)\n\n => Array\n(\n => 39.59.122.140\n)\n\n => Array\n(\n => 41.80.84.46\n)\n\n => Array\n(\n => 202.47.52.115\n)\n\n => Array\n(\n => 222.252.43.47\n)\n\n => Array\n(\n => 119.155.37.250\n)\n\n => Array\n(\n => 157.41.18.88\n)\n\n => Array\n(\n => 39.42.8.59\n)\n\n => Array\n(\n => 39.45.162.110\n)\n\n => Array\n(\n => 111.88.237.25\n)\n\n => Array\n(\n => 103.76.211.168\n)\n\n => Array\n(\n => 178.137.114.165\n)\n\n => Array\n(\n => 43.225.74.146\n)\n\n => Array\n(\n => 157.42.25.26\n)\n\n => Array\n(\n => 137.59.146.63\n)\n\n => Array\n(\n => 119.160.117.190\n)\n\n => Array\n(\n => 1.186.181.133\n)\n\n => Array\n(\n => 39.42.145.94\n)\n\n => Array\n(\n => 203.175.73.96\n)\n\n => Array\n(\n => 39.37.160.14\n)\n\n => Array\n(\n => 157.39.123.250\n)\n\n => Array\n(\n => 95.135.57.82\n)\n\n => Array\n(\n => 162.210.194.35\n)\n\n => Array\n(\n => 39.42.153.135\n)\n\n => Array\n(\n => 118.103.230.106\n)\n\n => Array\n(\n => 108.61.39.115\n)\n\n => Array\n(\n => 102.7.108.45\n)\n\n => Array\n(\n => 183.83.138.134\n)\n\n => Array\n(\n => 115.186.70.223\n)\n\n => Array\n(\n => 157.34.17.139\n)\n\n => Array\n(\n => 122.166.158.231\n)\n\n => Array\n(\n => 43.227.135.90\n)\n\n => Array\n(\n => 182.68.46.180\n)\n\n => Array\n(\n => 223.225.28.138\n)\n\n => Array\n(\n => 103.77.42.220\n)\n\n => Array\n(\n => 192.241.219.13\n)\n\n => Array\n(\n => 103.82.80.113\n)\n\n => Array\n(\n => 42.111.243.151\n)\n\n => Array\n(\n => 171.79.189.247\n)\n\n => Array\n(\n => 157.32.132.102\n)\n\n => Array\n(\n => 103.130.105.243\n)\n\n => Array\n(\n => 117.223.98.120\n)\n\n => Array\n(\n => 106.215.197.187\n)\n\n => Array\n(\n => 182.190.194.179\n)\n\n => Array\n(\n => 223.225.29.42\n)\n\n => Array\n(\n => 117.222.94.151\n)\n\n => Array\n(\n => 182.185.199.104\n)\n\n => Array\n(\n => 49.36.145.77\n)\n\n => Array\n(\n => 103.82.80.73\n)\n\n => Array\n(\n => 103.77.16.13\n)\n\n => Array\n(\n => 221.132.118.86\n)\n\n => Array\n(\n => 202.47.45.77\n)\n\n => Array\n(\n => 202.8.118.116\n)\n\n => Array\n(\n => 42.106.180.185\n)\n\n => Array\n(\n => 203.122.8.234\n)\n\n => Array\n(\n => 88.230.104.245\n)\n\n => Array\n(\n => 103.131.9.33\n)\n\n => Array\n(\n => 117.207.209.60\n)\n\n => Array\n(\n => 42.111.253.227\n)\n\n => Array\n(\n => 23.106.56.54\n)\n\n => Array\n(\n => 122.178.143.181\n)\n\n => Array\n(\n => 111.88.180.5\n)\n\n => Array\n(\n => 174.55.224.161\n)\n\n => Array\n(\n => 49.205.87.100\n)\n\n => Array\n(\n => 49.34.183.118\n)\n\n => Array\n(\n => 124.155.255.154\n)\n\n => Array\n(\n => 106.212.135.200\n)\n\n => Array\n(\n => 139.99.159.11\n)\n\n => Array\n(\n => 45.135.229.8\n)\n\n => Array\n(\n => 88.230.106.85\n)\n\n => Array\n(\n => 91.153.145.221\n)\n\n => Array\n(\n => 103.95.83.33\n)\n\n => Array\n(\n => 122.178.116.76\n)\n\n => Array\n(\n => 103.135.78.14\n)\n\n => Array\n(\n => 111.88.233.206\n)\n\n => Array\n(\n => 192.140.153.210\n)\n\n => Array\n(\n => 202.8.118.69\n)\n\n => Array\n(\n => 103.83.130.81\n)\n\n => Array\n(\n => 182.190.213.143\n)\n\n => Array\n(\n => 198.16.74.204\n)\n\n => Array\n(\n => 101.128.117.248\n)\n\n => Array\n(\n => 103.108.5.147\n)\n\n => Array\n(\n => 157.32.130.158\n)\n\n => Array\n(\n => 103.244.172.93\n)\n\n => Array\n(\n => 47.30.140.126\n)\n\n => Array\n(\n => 223.188.40.124\n)\n\n => Array\n(\n => 157.44.191.102\n)\n\n => Array\n(\n => 41.60.237.62\n)\n\n => Array\n(\n => 47.31.228.161\n)\n\n => Array\n(\n => 137.59.217.188\n)\n\n => Array\n(\n => 39.53.220.237\n)\n\n => Array\n(\n => 45.127.45.199\n)\n\n => Array\n(\n => 14.190.71.19\n)\n\n => Array\n(\n => 47.18.205.54\n)\n\n => Array\n(\n => 110.93.240.11\n)\n\n => Array\n(\n => 134.209.29.111\n)\n\n => Array\n(\n => 49.36.175.104\n)\n\n => Array\n(\n => 203.192.230.61\n)\n\n => Array\n(\n => 176.10.125.115\n)\n\n => Array\n(\n => 182.18.206.17\n)\n\n => Array\n(\n => 103.87.194.102\n)\n\n => Array\n(\n => 171.79.123.106\n)\n\n => Array\n(\n => 45.116.233.35\n)\n\n => Array\n(\n => 223.190.57.225\n)\n\n => Array\n(\n => 114.125.6.158\n)\n\n => Array\n(\n => 223.179.138.176\n)\n\n => Array\n(\n => 111.119.183.61\n)\n\n => Array\n(\n => 202.8.118.43\n)\n\n => Array\n(\n => 157.51.175.216\n)\n\n => Array\n(\n => 41.60.238.100\n)\n\n => Array\n(\n => 117.207.210.199\n)\n\n => Array\n(\n => 111.119.183.26\n)\n\n => Array\n(\n => 103.252.226.12\n)\n\n => Array\n(\n => 103.221.208.82\n)\n\n => Array\n(\n => 103.82.80.228\n)\n\n => Array\n(\n => 111.119.187.39\n)\n\n => Array\n(\n => 157.51.161.199\n)\n\n => Array\n(\n => 59.96.88.246\n)\n\n => Array\n(\n => 27.4.181.183\n)\n\n => Array\n(\n => 43.225.98.124\n)\n\n => Array\n(\n => 157.51.113.74\n)\n\n => Array\n(\n => 207.244.89.161\n)\n\n => Array\n(\n => 49.37.184.82\n)\n\n => Array\n(\n => 111.119.183.4\n)\n\n => Array\n(\n => 39.42.130.147\n)\n\n => Array\n(\n => 103.152.101.2\n)\n\n => Array\n(\n => 111.119.183.2\n)\n\n => Array\n(\n => 157.51.171.149\n)\n\n => Array\n(\n => 103.82.80.245\n)\n\n => Array\n(\n => 175.107.207.133\n)\n\n => Array\n(\n => 103.204.169.158\n)\n\n => Array\n(\n => 157.51.181.12\n)\n\n => Array\n(\n => 195.158.193.212\n)\n\n => Array\n(\n => 204.14.73.85\n)\n\n => Array\n(\n => 39.59.59.31\n)\n\n => Array\n(\n => 45.148.11.82\n)\n\n => Array\n(\n => 157.46.117.250\n)\n\n => Array\n(\n => 157.46.127.170\n)\n\n => Array\n(\n => 77.247.181.165\n)\n\n => Array\n(\n => 111.119.183.54\n)\n\n => Array\n(\n => 41.60.232.183\n)\n\n => Array\n(\n => 157.42.206.174\n)\n\n => Array\n(\n => 196.53.10.246\n)\n\n => Array\n(\n => 27.97.186.131\n)\n\n => Array\n(\n => 103.73.101.134\n)\n\n => Array\n(\n => 111.119.183.35\n)\n\n => Array\n(\n => 202.8.118.111\n)\n\n => Array\n(\n => 103.75.246.207\n)\n\n => Array\n(\n => 47.8.94.225\n)\n\n => Array\n(\n => 106.202.40.83\n)\n\n => Array\n(\n => 117.102.2.0\n)\n\n => Array\n(\n => 156.146.59.11\n)\n\n => Array\n(\n => 223.190.115.125\n)\n\n => Array\n(\n => 169.149.212.232\n)\n\n => Array\n(\n => 39.45.150.127\n)\n\n => Array\n(\n => 45.63.10.204\n)\n\n => Array\n(\n => 27.57.86.46\n)\n\n => Array\n(\n => 103.127.20.138\n)\n\n => Array\n(\n => 223.190.27.26\n)\n\n => Array\n(\n => 49.15.248.78\n)\n\n => Array\n(\n => 130.105.135.103\n)\n\n => Array\n(\n => 47.31.3.239\n)\n\n => Array\n(\n => 185.66.71.8\n)\n\n => Array\n(\n => 103.226.226.198\n)\n\n => Array\n(\n => 39.34.134.16\n)\n\n => Array\n(\n => 95.158.53.120\n)\n\n => Array\n(\n => 45.9.249.246\n)\n\n => Array\n(\n => 223.235.162.157\n)\n\n => Array\n(\n => 37.111.139.23\n)\n\n => Array\n(\n => 49.37.153.47\n)\n\n => Array\n(\n => 103.242.60.205\n)\n\n => Array\n(\n => 185.66.68.18\n)\n\n => Array\n(\n => 162.221.202.138\n)\n\n => Array\n(\n => 202.63.195.29\n)\n\n => Array\n(\n => 112.198.75.226\n)\n\n => Array\n(\n => 46.200.69.233\n)\n\n => Array\n(\n => 103.135.78.30\n)\n\n => Array\n(\n => 119.152.226.9\n)\n\n => Array\n(\n => 167.172.242.50\n)\n\n => Array\n(\n => 49.36.151.31\n)\n\n => Array\n(\n => 111.88.237.156\n)\n\n => Array\n(\n => 103.215.168.1\n)\n\n => Array\n(\n => 107.181.177.137\n)\n\n => Array\n(\n => 157.119.186.202\n)\n\n => Array\n(\n => 37.111.139.106\n)\n\n => Array\n(\n => 182.180.152.198\n)\n\n => Array\n(\n => 43.248.153.72\n)\n\n => Array\n(\n => 64.188.20.84\n)\n\n => Array\n(\n => 103.92.214.11\n)\n\n => Array\n(\n => 182.182.14.148\n)\n\n => Array\n(\n => 116.75.154.119\n)\n\n => Array\n(\n => 37.228.235.94\n)\n\n => Array\n(\n => 197.210.55.43\n)\n\n => Array\n(\n => 45.118.165.153\n)\n\n => Array\n(\n => 122.176.32.27\n)\n\n => Array\n(\n => 106.215.161.20\n)\n\n => Array\n(\n => 152.32.113.58\n)\n\n => Array\n(\n => 111.125.106.132\n)\n\n => Array\n(\n => 212.102.40.72\n)\n\n => Array\n(\n => 2.58.194.140\n)\n\n => Array\n(\n => 122.174.68.115\n)\n\n => Array\n(\n => 117.241.66.56\n)\n\n => Array\n(\n => 71.94.172.140\n)\n\n => Array\n(\n => 103.209.228.139\n)\n\n => Array\n(\n => 43.242.177.140\n)\n\n => Array\n(\n => 38.91.101.66\n)\n\n => Array\n(\n => 103.82.80.67\n)\n\n => Array\n(\n => 117.248.62.138\n)\n\n => Array\n(\n => 103.81.215.51\n)\n\n => Array\n(\n => 103.253.174.4\n)\n\n => Array\n(\n => 202.142.110.111\n)\n\n => Array\n(\n => 162.216.142.1\n)\n\n => Array\n(\n => 58.186.7.252\n)\n\n => Array\n(\n => 113.203.247.66\n)\n\n => Array\n(\n => 111.88.50.63\n)\n\n => Array\n(\n => 182.182.94.227\n)\n\n => Array\n(\n => 49.15.232.50\n)\n\n => Array\n(\n => 182.189.76.225\n)\n\n => Array\n(\n => 139.99.159.14\n)\n\n => Array\n(\n => 163.172.159.235\n)\n\n => Array\n(\n => 157.36.235.241\n)\n\n => Array\n(\n => 111.119.187.3\n)\n\n => Array\n(\n => 103.100.4.61\n)\n\n => Array\n(\n => 192.142.130.88\n)\n\n => Array\n(\n => 43.242.176.114\n)\n\n => Array\n(\n => 180.178.156.165\n)\n\n => Array\n(\n => 182.189.236.77\n)\n\n => Array\n(\n => 49.34.197.239\n)\n\n => Array\n(\n => 157.36.107.107\n)\n\n => Array\n(\n => 103.209.85.175\n)\n\n => Array\n(\n => 203.139.63.83\n)\n\n => Array\n(\n => 43.242.177.161\n)\n\n => Array\n(\n => 182.182.77.138\n)\n\n => Array\n(\n => 114.124.168.117\n)\n\n => Array\n(\n => 124.253.79.191\n)\n\n => Array\n(\n => 192.142.168.235\n)\n\n => Array\n(\n => 14.232.235.111\n)\n\n => Array\n(\n => 152.57.124.214\n)\n\n => Array\n(\n => 123.24.172.48\n)\n\n => Array\n(\n => 43.242.176.87\n)\n\n => Array\n(\n => 43.242.176.101\n)\n\n => Array\n(\n => 49.156.84.110\n)\n\n => Array\n(\n => 58.65.222.6\n)\n\n => Array\n(\n => 157.32.189.112\n)\n\n => Array\n(\n => 47.31.155.87\n)\n\n => Array\n(\n => 39.53.244.182\n)\n\n => Array\n(\n => 39.33.221.76\n)\n\n => Array\n(\n => 161.35.130.245\n)\n\n => Array\n(\n => 152.32.113.137\n)\n\n => Array\n(\n => 192.142.187.220\n)\n\n => Array\n(\n => 185.54.228.123\n)\n\n => Array\n(\n => 103.233.87.221\n)\n\n => Array\n(\n => 223.236.200.224\n)\n\n => Array\n(\n => 27.97.189.170\n)\n\n => Array\n(\n => 103.82.80.212\n)\n\n => Array\n(\n => 43.242.176.37\n)\n\n => Array\n(\n => 49.36.144.94\n)\n\n => Array\n(\n => 180.251.62.185\n)\n\n => Array\n(\n => 39.50.243.227\n)\n\n => Array\n(\n => 124.253.20.21\n)\n\n => Array\n(\n => 41.60.233.31\n)\n\n => Array\n(\n => 103.81.215.57\n)\n\n => Array\n(\n => 185.91.120.16\n)\n\n => Array\n(\n => 182.190.107.163\n)\n\n => Array\n(\n => 222.252.61.68\n)\n\n => Array\n(\n => 109.169.23.78\n)\n\n => Array\n(\n => 39.50.151.222\n)\n\n => Array\n(\n => 43.242.176.86\n)\n\n => Array\n(\n => 178.162.222.161\n)\n\n => Array\n(\n => 37.111.139.158\n)\n\n => Array\n(\n => 39.57.224.97\n)\n\n => Array\n(\n => 39.57.157.194\n)\n\n => Array\n(\n => 111.119.183.48\n)\n\n => Array\n(\n => 180.190.171.129\n)\n\n => Array\n(\n => 39.52.174.177\n)\n\n => Array\n(\n => 43.242.176.103\n)\n\n => Array\n(\n => 124.253.83.14\n)\n\n => Array\n(\n => 182.189.116.245\n)\n\n => Array\n(\n => 157.36.178.213\n)\n\n => Array\n(\n => 45.250.65.119\n)\n\n => Array\n(\n => 103.209.86.6\n)\n\n => Array\n(\n => 43.242.176.80\n)\n\n => Array\n(\n => 137.59.147.2\n)\n\n => Array\n(\n => 117.222.95.23\n)\n\n => Array\n(\n => 124.253.81.10\n)\n\n => Array\n(\n => 43.242.177.21\n)\n\n => Array\n(\n => 182.189.224.186\n)\n\n => Array\n(\n => 39.52.178.142\n)\n\n => Array\n(\n => 106.214.29.176\n)\n\n => Array\n(\n => 111.88.145.107\n)\n\n => Array\n(\n => 49.36.142.67\n)\n\n => Array\n(\n => 202.142.65.50\n)\n\n => Array\n(\n => 1.22.186.76\n)\n\n => Array\n(\n => 103.131.8.225\n)\n\n => Array\n(\n => 39.53.212.111\n)\n\n => Array\n(\n => 103.82.80.149\n)\n\n => Array\n(\n => 43.242.176.12\n)\n\n => Array\n(\n => 103.109.13.189\n)\n\n => Array\n(\n => 124.253.206.202\n)\n\n => Array\n(\n => 117.195.115.85\n)\n\n => Array\n(\n => 49.36.245.229\n)\n\n => Array\n(\n => 42.118.8.100\n)\n\n => Array\n(\n => 1.22.73.17\n)\n\n => Array\n(\n => 157.36.166.131\n)\n\n => Array\n(\n => 182.182.38.223\n)\n\n => Array\n(\n => 49.14.150.21\n)\n\n => Array\n(\n => 43.242.176.89\n)\n\n => Array\n(\n => 157.46.185.69\n)\n\n => Array\n(\n => 103.31.92.150\n)\n\n => Array\n(\n => 59.96.90.94\n)\n\n => Array\n(\n => 49.156.111.64\n)\n\n => Array\n(\n => 103.75.244.16\n)\n\n => Array\n(\n => 54.37.18.139\n)\n\n => Array\n(\n => 27.255.173.50\n)\n\n => Array\n(\n => 84.202.161.120\n)\n\n => Array\n(\n => 27.3.224.180\n)\n\n => Array\n(\n => 39.44.14.192\n)\n\n => Array\n(\n => 37.120.133.201\n)\n\n => Array\n(\n => 109.251.143.236\n)\n\n => Array\n(\n => 23.80.97.111\n)\n\n => Array\n(\n => 43.242.176.9\n)\n\n => Array\n(\n => 14.248.107.50\n)\n\n => Array\n(\n => 182.189.221.114\n)\n\n => Array\n(\n => 103.253.173.74\n)\n\n => Array\n(\n => 27.97.177.45\n)\n\n => Array\n(\n => 49.14.98.9\n)\n\n => Array\n(\n => 163.53.85.169\n)\n\n => Array\n(\n => 39.59.90.168\n)\n\n => Array\n(\n => 111.88.202.253\n)\n\n => Array\n(\n => 111.119.178.155\n)\n\n => Array\n(\n => 171.76.163.75\n)\n\n => Array\n(\n => 202.5.154.23\n)\n\n => Array\n(\n => 119.160.65.164\n)\n\n => Array\n(\n => 14.253.253.190\n)\n\n => Array\n(\n => 117.206.167.25\n)\n\n => Array\n(\n => 61.2.183.186\n)\n\n => Array\n(\n => 103.100.4.83\n)\n\n => Array\n(\n => 124.253.71.126\n)\n\n => Array\n(\n => 182.189.49.217\n)\n\n => Array\n(\n => 103.196.160.41\n)\n\n => Array\n(\n => 23.106.56.35\n)\n\n => Array\n(\n => 110.38.12.70\n)\n\n => Array\n(\n => 154.157.199.239\n)\n\n => Array\n(\n => 14.231.163.113\n)\n\n => Array\n(\n => 103.69.27.232\n)\n\n => Array\n(\n => 175.107.220.192\n)\n\n => Array\n(\n => 43.231.58.173\n)\n\n => Array\n(\n => 138.128.91.215\n)\n\n => Array\n(\n => 103.233.86.1\n)\n\n => Array\n(\n => 182.187.67.111\n)\n\n => Array\n(\n => 49.156.71.31\n)\n\n => Array\n(\n => 27.255.174.125\n)\n\n => Array\n(\n => 195.24.220.35\n)\n\n => Array\n(\n => 120.29.98.28\n)\n\n => Array\n(\n => 41.202.219.255\n)\n\n => Array\n(\n => 103.88.3.243\n)\n\n => Array\n(\n => 111.125.106.75\n)\n\n => Array\n(\n => 106.76.71.74\n)\n\n => Array\n(\n => 112.201.138.85\n)\n\n => Array\n(\n => 110.137.101.229\n)\n\n => Array\n(\n => 43.242.177.96\n)\n\n => Array\n(\n => 39.36.198.196\n)\n\n => Array\n(\n => 27.255.181.140\n)\n\n => Array\n(\n => 194.99.104.58\n)\n\n => Array\n(\n => 78.129.139.109\n)\n\n => Array\n(\n => 47.247.185.67\n)\n\n => Array\n(\n => 27.63.37.90\n)\n\n => Array\n(\n => 103.211.54.1\n)\n\n => Array\n(\n => 94.202.167.139\n)\n\n => Array\n(\n => 111.119.183.3\n)\n\n => Array\n(\n => 124.253.194.1\n)\n\n => Array\n(\n => 192.142.188.115\n)\n\n => Array\n(\n => 39.44.137.107\n)\n\n => Array\n(\n => 43.251.191.25\n)\n\n => Array\n(\n => 103.140.30.114\n)\n\n => Array\n(\n => 117.5.194.159\n)\n\n => Array\n(\n => 109.169.23.79\n)\n\n => Array\n(\n => 122.178.127.170\n)\n\n => Array\n(\n => 45.118.165.156\n)\n\n => Array\n(\n => 39.48.199.148\n)\n\n => Array\n(\n => 182.64.138.32\n)\n\n => Array\n(\n => 37.73.129.186\n)\n\n => Array\n(\n => 182.186.110.35\n)\n\n => Array\n(\n => 43.242.177.24\n)\n\n => Array\n(\n => 119.155.23.112\n)\n\n => Array\n(\n => 84.16.238.119\n)\n\n => Array\n(\n => 41.202.219.252\n)\n\n => Array\n(\n => 43.242.176.119\n)\n\n => Array\n(\n => 111.119.187.6\n)\n\n => Array\n(\n => 95.12.200.188\n)\n\n => Array\n(\n => 139.28.219.138\n)\n\n => Array\n(\n => 89.163.247.130\n)\n\n => Array\n(\n => 122.173.103.88\n)\n\n => Array\n(\n => 103.248.87.10\n)\n\n => Array\n(\n => 23.106.249.36\n)\n\n => Array\n(\n => 124.253.94.125\n)\n\n => Array\n(\n => 39.53.244.147\n)\n\n => Array\n(\n => 193.109.85.11\n)\n\n => Array\n(\n => 43.242.176.71\n)\n\n => Array\n(\n => 43.242.177.58\n)\n\n => Array\n(\n => 47.31.6.139\n)\n\n => Array\n(\n => 39.59.34.67\n)\n\n => Array\n(\n => 43.242.176.58\n)\n\n => Array\n(\n => 103.107.198.198\n)\n\n => Array\n(\n => 147.135.11.113\n)\n\n => Array\n(\n => 27.7.212.112\n)\n\n => Array\n(\n => 43.242.177.1\n)\n\n => Array\n(\n => 175.107.227.27\n)\n\n => Array\n(\n => 103.103.43.254\n)\n\n => Array\n(\n => 49.15.221.10\n)\n\n => Array\n(\n => 43.242.177.43\n)\n\n => Array\n(\n => 36.85.59.11\n)\n\n => Array\n(\n => 124.253.204.50\n)\n\n => Array\n(\n => 5.181.233.54\n)\n\n => Array\n(\n => 43.242.177.154\n)\n\n => Array\n(\n => 103.84.37.169\n)\n\n => Array\n(\n => 222.252.54.108\n)\n\n => Array\n(\n => 14.162.160.254\n)\n\n => Array\n(\n => 178.151.218.45\n)\n\n => Array\n(\n => 110.137.101.93\n)\n\n => Array\n(\n => 122.162.212.59\n)\n\n => Array\n(\n => 81.12.118.162\n)\n\n => Array\n(\n => 171.76.186.148\n)\n\n => Array\n(\n => 182.69.253.77\n)\n\n => Array\n(\n => 111.119.183.43\n)\n\n => Array\n(\n => 49.149.74.226\n)\n\n => Array\n(\n => 43.242.177.63\n)\n\n => Array\n(\n => 14.99.243.54\n)\n\n => Array\n(\n => 110.137.100.25\n)\n\n => Array\n(\n => 116.107.25.163\n)\n\n => Array\n(\n => 49.36.71.141\n)\n\n => Array\n(\n => 182.180.117.219\n)\n\n => Array\n(\n => 150.242.172.194\n)\n\n => Array\n(\n => 49.156.111.40\n)\n\n => Array\n(\n => 49.15.208.115\n)\n\n => Array\n(\n => 103.209.87.219\n)\n\n => Array\n(\n => 43.242.176.56\n)\n\n => Array\n(\n => 103.132.187.100\n)\n\n => Array\n(\n => 49.156.96.120\n)\n\n => Array\n(\n => 192.142.176.171\n)\n\n => Array\n(\n => 51.91.18.131\n)\n\n => Array\n(\n => 103.83.144.121\n)\n\n => Array\n(\n => 1.39.75.72\n)\n\n => Array\n(\n => 14.231.172.177\n)\n\n => Array\n(\n => 94.232.213.159\n)\n\n => Array\n(\n => 103.228.158.38\n)\n\n => Array\n(\n => 43.242.177.100\n)\n\n => Array\n(\n => 171.76.149.130\n)\n\n => Array\n(\n => 113.183.26.59\n)\n\n => Array\n(\n => 182.74.232.166\n)\n\n => Array\n(\n => 47.31.205.211\n)\n\n => Array\n(\n => 106.211.253.70\n)\n\n => Array\n(\n => 39.51.233.214\n)\n\n => Array\n(\n => 182.70.249.161\n)\n\n => Array\n(\n => 222.252.40.196\n)\n\n => Array\n(\n => 49.37.6.29\n)\n\n => Array\n(\n => 119.155.33.170\n)\n\n => Array\n(\n => 43.242.177.79\n)\n\n => Array\n(\n => 111.119.183.62\n)\n\n => Array\n(\n => 137.59.226.97\n)\n\n => Array\n(\n => 42.111.18.121\n)\n\n => Array\n(\n => 223.190.46.91\n)\n\n => Array\n(\n => 45.118.165.159\n)\n\n => Array\n(\n => 110.136.60.44\n)\n\n => Array\n(\n => 43.242.176.57\n)\n\n => Array\n(\n => 117.212.58.0\n)\n\n => Array\n(\n => 49.37.7.66\n)\n\n => Array\n(\n => 39.52.174.33\n)\n\n => Array\n(\n => 150.242.172.55\n)\n\n => Array\n(\n => 103.94.111.236\n)\n\n => Array\n(\n => 106.215.239.184\n)\n\n => Array\n(\n => 101.128.117.75\n)\n\n => Array\n(\n => 162.210.194.10\n)\n\n => Array\n(\n => 136.158.31.132\n)\n\n => Array\n(\n => 39.51.245.69\n)\n\n => Array\n(\n => 39.42.149.159\n)\n\n => Array\n(\n => 51.77.108.159\n)\n\n => Array\n(\n => 45.127.247.250\n)\n\n => Array\n(\n => 122.172.78.22\n)\n\n => Array\n(\n => 117.220.208.38\n)\n\n => Array\n(\n => 112.201.138.95\n)\n\n => Array\n(\n => 49.145.105.113\n)\n\n => Array\n(\n => 110.93.247.12\n)\n\n => Array\n(\n => 39.52.150.32\n)\n\n => Array\n(\n => 122.161.89.41\n)\n\n => Array\n(\n => 39.52.176.49\n)\n\n => Array\n(\n => 157.33.12.154\n)\n\n => Array\n(\n => 73.111.248.162\n)\n\n => Array\n(\n => 112.204.167.67\n)\n\n => Array\n(\n => 107.150.30.182\n)\n\n => Array\n(\n => 115.99.222.229\n)\n\n => Array\n(\n => 180.190.195.96\n)\n\n => Array\n(\n => 157.44.57.255\n)\n\n => Array\n(\n => 39.37.9.167\n)\n\n => Array\n(\n => 39.49.48.33\n)\n\n => Array\n(\n => 157.44.218.118\n)\n\n => Array\n(\n => 103.211.54.253\n)\n\n => Array\n(\n => 43.242.177.81\n)\n\n => Array\n(\n => 103.111.224.227\n)\n\n => Array\n(\n => 223.176.48.237\n)\n\n => Array\n(\n => 124.253.87.117\n)\n\n => Array\n(\n => 124.29.247.14\n)\n\n => Array\n(\n => 182.189.232.32\n)\n\n => Array\n(\n => 111.68.97.206\n)\n\n => Array\n(\n => 103.117.15.70\n)\n\n => Array\n(\n => 182.18.236.101\n)\n\n => Array\n(\n => 43.242.177.60\n)\n\n => Array\n(\n => 180.190.7.178\n)\n\n => Array\n(\n => 112.201.142.95\n)\n\n => Array\n(\n => 122.178.255.123\n)\n\n => Array\n(\n => 49.36.240.103\n)\n\n => Array\n(\n => 210.56.16.13\n)\n\n => Array\n(\n => 103.91.123.219\n)\n\n => Array\n(\n => 39.52.155.252\n)\n\n => Array\n(\n => 192.142.207.230\n)\n\n => Array\n(\n => 188.163.82.179\n)\n\n => Array\n(\n => 182.189.9.196\n)\n\n => Array\n(\n => 175.107.221.51\n)\n\n => Array\n(\n => 39.53.221.200\n)\n\n => Array\n(\n => 27.255.190.59\n)\n\n => Array\n(\n => 183.83.212.118\n)\n\n => Array\n(\n => 45.118.165.143\n)\n\n => Array\n(\n => 182.189.124.35\n)\n\n => Array\n(\n => 203.101.186.1\n)\n\n => Array\n(\n => 49.36.246.25\n)\n\n => Array\n(\n => 39.42.186.234\n)\n\n => Array\n(\n => 103.82.80.14\n)\n\n => Array\n(\n => 210.18.182.42\n)\n\n => Array\n(\n => 42.111.13.81\n)\n\n => Array\n(\n => 46.200.69.240\n)\n\n => Array\n(\n => 103.209.87.213\n)\n\n => Array\n(\n => 103.31.95.95\n)\n\n => Array\n(\n => 180.190.174.25\n)\n\n => Array\n(\n => 103.77.0.128\n)\n\n => Array\n(\n => 49.34.103.82\n)\n\n => Array\n(\n => 39.48.196.22\n)\n\n => Array\n(\n => 192.142.166.20\n)\n\n => Array\n(\n => 202.142.110.186\n)\n\n => Array\n(\n => 122.163.135.95\n)\n\n => Array\n(\n => 183.83.255.225\n)\n\n => Array\n(\n => 157.45.46.10\n)\n\n => Array\n(\n => 182.189.4.77\n)\n\n => Array\n(\n => 49.145.104.71\n)\n\n => Array\n(\n => 103.143.7.34\n)\n\n => Array\n(\n => 61.2.180.15\n)\n\n => Array\n(\n => 103.81.215.61\n)\n\n => Array\n(\n => 115.42.71.122\n)\n\n => Array\n(\n => 124.253.73.20\n)\n\n => Array\n(\n => 49.33.210.169\n)\n\n => Array\n(\n => 78.159.101.115\n)\n\n => Array\n(\n => 42.111.17.221\n)\n\n => Array\n(\n => 43.242.178.67\n)\n\n => Array\n(\n => 36.68.138.36\n)\n\n => Array\n(\n => 103.195.201.51\n)\n\n => Array\n(\n => 79.141.162.81\n)\n\n => Array\n(\n => 202.8.118.239\n)\n\n => Array\n(\n => 103.139.128.161\n)\n\n => Array\n(\n => 207.244.71.84\n)\n\n => Array\n(\n => 124.253.184.45\n)\n\n => Array\n(\n => 111.125.106.124\n)\n\n => Array\n(\n => 111.125.105.139\n)\n\n => Array\n(\n => 39.59.94.233\n)\n\n => Array\n(\n => 112.211.60.168\n)\n\n => Array\n(\n => 103.117.14.72\n)\n\n => Array\n(\n => 111.119.183.56\n)\n\n => Array\n(\n => 47.31.53.228\n)\n\n => Array\n(\n => 124.253.186.8\n)\n\n => Array\n(\n => 183.83.213.214\n)\n\n => Array\n(\n => 103.106.239.70\n)\n\n => Array\n(\n => 182.182.92.81\n)\n\n => Array\n(\n => 14.162.167.98\n)\n\n => Array\n(\n => 112.211.11.107\n)\n\n => Array\n(\n => 77.111.246.20\n)\n\n => Array\n(\n => 49.156.86.182\n)\n\n => Array\n(\n => 47.29.122.112\n)\n\n => Array\n(\n => 125.99.74.42\n)\n\n => Array\n(\n => 124.123.169.24\n)\n\n => Array\n(\n => 106.202.105.128\n)\n\n => Array\n(\n => 103.244.173.14\n)\n\n => Array\n(\n => 103.98.63.104\n)\n\n => Array\n(\n => 180.245.6.60\n)\n\n => Array\n(\n => 49.149.96.14\n)\n\n => Array\n(\n => 14.177.120.169\n)\n\n => Array\n(\n => 192.135.90.145\n)\n\n => Array\n(\n => 223.190.18.218\n)\n\n => Array\n(\n => 171.61.190.2\n)\n\n => Array\n(\n => 58.65.220.219\n)\n\n => Array\n(\n => 122.177.29.87\n)\n\n => Array\n(\n => 223.236.175.203\n)\n\n => Array\n(\n => 39.53.237.106\n)\n\n => Array\n(\n => 1.186.114.83\n)\n\n => Array\n(\n => 43.230.66.153\n)\n\n => Array\n(\n => 27.96.94.247\n)\n\n => Array\n(\n => 39.52.176.185\n)\n\n => Array\n(\n => 59.94.147.62\n)\n\n => Array\n(\n => 119.160.117.10\n)\n\n => Array\n(\n => 43.241.146.105\n)\n\n => Array\n(\n => 39.59.87.75\n)\n\n => Array\n(\n => 119.160.118.203\n)\n\n => Array\n(\n => 39.52.161.76\n)\n\n => Array\n(\n => 202.168.84.189\n)\n\n => Array\n(\n => 103.215.168.2\n)\n\n => Array\n(\n => 39.42.146.160\n)\n\n => Array\n(\n => 182.182.30.246\n)\n\n => Array\n(\n => 122.173.212.133\n)\n\n => Array\n(\n => 39.51.238.44\n)\n\n => Array\n(\n => 183.83.252.51\n)\n\n => Array\n(\n => 202.142.168.86\n)\n\n => Array\n(\n => 39.40.198.209\n)\n\n => Array\n(\n => 192.135.90.151\n)\n\n => Array\n(\n => 72.255.41.174\n)\n\n => Array\n(\n => 137.97.92.124\n)\n\n => Array\n(\n => 182.185.159.155\n)\n\n => Array\n(\n => 157.44.133.131\n)\n\n => Array\n(\n => 39.51.230.253\n)\n\n => Array\n(\n => 103.70.87.200\n)\n\n => Array\n(\n => 103.117.15.82\n)\n\n => Array\n(\n => 103.217.244.69\n)\n\n => Array\n(\n => 157.34.76.185\n)\n\n => Array\n(\n => 39.52.130.163\n)\n\n => Array\n(\n => 182.181.41.39\n)\n\n => Array\n(\n => 49.37.212.226\n)\n\n => Array\n(\n => 119.160.117.100\n)\n\n => Array\n(\n => 103.209.87.43\n)\n\n => Array\n(\n => 180.190.195.45\n)\n\n => Array\n(\n => 122.160.57.230\n)\n\n => Array\n(\n => 203.192.213.81\n)\n\n => Array\n(\n => 182.181.63.91\n)\n\n => Array\n(\n => 157.44.184.5\n)\n\n => Array\n(\n => 27.97.213.128\n)\n\n => Array\n(\n => 122.55.252.145\n)\n\n => Array\n(\n => 103.117.15.92\n)\n\n => Array\n(\n => 42.201.251.179\n)\n\n => Array\n(\n => 122.186.84.53\n)\n\n => Array\n(\n => 119.157.75.242\n)\n\n => Array\n(\n => 39.42.163.6\n)\n\n => Array\n(\n => 14.99.246.78\n)\n\n => Array\n(\n => 103.209.87.227\n)\n\n => Array\n(\n => 182.68.215.31\n)\n\n => Array\n(\n => 45.118.165.140\n)\n\n => Array\n(\n => 207.244.71.81\n)\n\n => Array\n(\n => 27.97.162.57\n)\n\n => Array\n(\n => 103.113.106.98\n)\n\n => Array\n(\n => 95.135.44.103\n)\n\n => Array\n(\n => 125.209.114.238\n)\n\n => Array\n(\n => 77.123.14.176\n)\n\n => Array\n(\n => 110.36.202.169\n)\n\n => Array\n(\n => 124.253.205.230\n)\n\n => Array\n(\n => 106.215.72.117\n)\n\n => Array\n(\n => 116.72.226.35\n)\n\n => Array\n(\n => 137.97.103.141\n)\n\n => Array\n(\n => 112.79.212.161\n)\n\n => Array\n(\n => 103.209.85.150\n)\n\n => Array\n(\n => 103.159.127.6\n)\n\n => Array\n(\n => 43.239.205.66\n)\n\n => Array\n(\n => 143.244.51.152\n)\n\n => Array\n(\n => 182.64.15.3\n)\n\n => Array\n(\n => 182.185.207.146\n)\n\n => Array\n(\n => 45.118.165.155\n)\n\n => Array\n(\n => 115.160.241.214\n)\n\n => Array\n(\n => 47.31.230.68\n)\n\n => Array\n(\n => 49.15.84.145\n)\n\n => Array\n(\n => 39.51.239.206\n)\n\n => Array\n(\n => 103.149.154.212\n)\n\n => Array\n(\n => 43.239.207.155\n)\n\n => Array\n(\n => 182.182.30.181\n)\n\n => Array\n(\n => 157.37.198.16\n)\n\n => Array\n(\n => 162.239.24.60\n)\n\n => Array\n(\n => 106.212.101.97\n)\n\n => Array\n(\n => 124.253.97.44\n)\n\n => Array\n(\n => 106.214.95.176\n)\n\n => Array\n(\n => 102.69.228.114\n)\n\n => Array\n(\n => 116.74.58.221\n)\n\n => Array\n(\n => 162.210.194.38\n)\n\n => Array\n(\n => 39.52.162.121\n)\n\n => Array\n(\n => 103.216.143.255\n)\n\n => Array\n(\n => 103.49.155.134\n)\n\n => Array\n(\n => 182.191.119.236\n)\n\n => Array\n(\n => 111.88.213.172\n)\n\n => Array\n(\n => 43.239.207.207\n)\n\n => Array\n(\n => 140.213.35.143\n)\n\n => Array\n(\n => 154.72.153.215\n)\n\n => Array\n(\n => 122.170.47.36\n)\n\n => Array\n(\n => 51.158.111.163\n)\n\n => Array\n(\n => 203.122.10.150\n)\n\n => Array\n(\n => 47.31.176.111\n)\n\n => Array\n(\n => 103.75.246.34\n)\n\n => Array\n(\n => 103.244.178.45\n)\n\n => Array\n(\n => 182.185.138.0\n)\n\n => Array\n(\n => 183.83.254.224\n)\n\n => Array\n(\n => 49.36.246.145\n)\n\n => Array\n(\n => 202.47.60.85\n)\n\n => Array\n(\n => 180.190.163.160\n)\n\n => Array\n(\n => 27.255.187.221\n)\n\n => Array\n(\n => 14.248.94.2\n)\n\n => Array\n(\n => 185.233.17.187\n)\n\n => Array\n(\n => 139.5.254.227\n)\n\n => Array\n(\n => 103.149.160.66\n)\n\n => Array\n(\n => 122.168.235.47\n)\n\n => Array\n(\n => 45.113.248.224\n)\n\n => Array\n(\n => 110.54.170.142\n)\n\n => Array\n(\n => 223.235.226.55\n)\n\n => Array\n(\n => 157.32.19.235\n)\n\n => Array\n(\n => 49.15.221.114\n)\n\n => Array\n(\n => 27.97.166.163\n)\n\n => Array\n(\n => 223.233.99.5\n)\n\n => Array\n(\n => 49.33.203.53\n)\n\n => Array\n(\n => 27.56.214.41\n)\n\n => Array\n(\n => 103.138.51.3\n)\n\n => Array\n(\n => 111.119.183.21\n)\n\n => Array\n(\n => 47.15.138.233\n)\n\n => Array\n(\n => 202.63.213.184\n)\n\n => Array\n(\n => 49.36.158.94\n)\n\n => Array\n(\n => 27.97.186.179\n)\n\n => Array\n(\n => 27.97.214.69\n)\n\n => Array\n(\n => 203.128.18.163\n)\n\n => Array\n(\n => 106.207.235.63\n)\n\n => Array\n(\n => 116.107.220.231\n)\n\n => Array\n(\n => 223.226.169.249\n)\n\n => Array\n(\n => 106.201.24.6\n)\n\n => Array\n(\n => 49.15.89.7\n)\n\n => Array\n(\n => 49.15.142.20\n)\n\n => Array\n(\n => 223.177.24.85\n)\n\n => Array\n(\n => 37.156.17.37\n)\n\n => Array\n(\n => 102.129.224.2\n)\n\n => Array\n(\n => 49.15.85.221\n)\n\n => Array\n(\n => 106.76.208.153\n)\n\n => Array\n(\n => 61.2.47.71\n)\n\n => Array\n(\n => 27.97.178.79\n)\n\n => Array\n(\n => 39.34.143.196\n)\n\n => Array\n(\n => 103.10.227.158\n)\n\n => Array\n(\n => 117.220.210.159\n)\n\n => Array\n(\n => 182.189.28.11\n)\n\n => Array\n(\n => 122.185.38.170\n)\n\n => Array\n(\n => 112.196.132.115\n)\n\n => Array\n(\n => 187.156.137.83\n)\n\n => Array\n(\n => 203.122.3.88\n)\n\n => Array\n(\n => 51.68.142.45\n)\n\n => Array\n(\n => 124.253.217.55\n)\n\n => Array\n(\n => 103.152.41.2\n)\n\n => Array\n(\n => 157.37.154.219\n)\n\n => Array\n(\n => 39.45.32.77\n)\n\n => Array\n(\n => 182.182.22.221\n)\n\n => Array\n(\n => 157.43.205.117\n)\n\n => Array\n(\n => 202.142.123.58\n)\n\n => Array\n(\n => 43.239.207.121\n)\n\n => Array\n(\n => 49.206.122.113\n)\n\n => Array\n(\n => 106.193.199.203\n)\n\n => Array\n(\n => 103.67.157.251\n)\n\n => Array\n(\n => 49.34.97.81\n)\n\n => Array\n(\n => 49.156.92.130\n)\n\n => Array\n(\n => 203.160.179.210\n)\n\n => Array\n(\n => 106.215.33.244\n)\n\n => Array\n(\n => 191.101.148.41\n)\n\n => Array\n(\n => 203.90.94.94\n)\n\n => Array\n(\n => 105.129.205.134\n)\n\n => Array\n(\n => 106.215.45.165\n)\n\n => Array\n(\n => 112.196.132.15\n)\n\n => Array\n(\n => 39.59.64.174\n)\n\n => Array\n(\n => 124.253.155.116\n)\n\n => Array\n(\n => 94.179.192.204\n)\n\n => Array\n(\n => 110.38.29.245\n)\n\n => Array\n(\n => 124.29.209.78\n)\n\n => Array\n(\n => 103.75.245.240\n)\n\n => Array\n(\n => 49.36.159.170\n)\n\n => Array\n(\n => 223.190.18.160\n)\n\n => Array\n(\n => 124.253.113.226\n)\n\n => Array\n(\n => 14.180.77.240\n)\n\n => Array\n(\n => 106.215.76.24\n)\n\n => Array\n(\n => 106.210.155.153\n)\n\n => Array\n(\n => 111.119.187.42\n)\n\n => Array\n(\n => 146.196.32.106\n)\n\n => Array\n(\n => 122.162.22.27\n)\n\n => Array\n(\n => 49.145.59.252\n)\n\n => Array\n(\n => 95.47.247.92\n)\n\n => Array\n(\n => 103.99.218.50\n)\n\n => Array\n(\n => 157.37.192.88\n)\n\n => Array\n(\n => 82.102.31.242\n)\n\n => Array\n(\n => 157.46.220.64\n)\n\n => Array\n(\n => 180.151.107.52\n)\n\n => Array\n(\n => 203.81.240.75\n)\n\n => Array\n(\n => 122.167.213.130\n)\n\n => Array\n(\n => 103.227.70.164\n)\n\n => Array\n(\n => 106.215.81.169\n)\n\n => Array\n(\n => 157.46.214.170\n)\n\n => Array\n(\n => 103.69.27.163\n)\n\n => Array\n(\n => 124.253.23.213\n)\n\n => Array\n(\n => 157.37.167.174\n)\n\n => Array\n(\n => 1.39.204.67\n)\n\n => Array\n(\n => 112.196.132.51\n)\n\n => Array\n(\n => 119.152.61.222\n)\n\n => Array\n(\n => 47.31.36.174\n)\n\n => Array\n(\n => 47.31.152.174\n)\n\n => Array\n(\n => 49.34.18.105\n)\n\n => Array\n(\n => 157.37.170.101\n)\n\n => Array\n(\n => 118.209.241.234\n)\n\n => Array\n(\n => 103.67.19.9\n)\n\n => Array\n(\n => 182.189.14.154\n)\n\n => Array\n(\n => 45.127.233.232\n)\n\n => Array\n(\n => 27.96.94.91\n)\n\n => Array\n(\n => 183.83.214.250\n)\n\n => Array\n(\n => 47.31.27.140\n)\n\n => Array\n(\n => 47.31.129.199\n)\n\n => Array\n(\n => 157.44.156.111\n)\n\n => Array\n(\n => 42.110.163.2\n)\n\n => Array\n(\n => 124.253.64.210\n)\n\n => Array\n(\n => 49.36.167.54\n)\n\n => Array\n(\n => 27.63.135.145\n)\n\n => Array\n(\n => 157.35.254.63\n)\n\n => Array\n(\n => 39.45.18.182\n)\n\n => Array\n(\n => 197.210.85.102\n)\n\n => Array\n(\n => 112.196.132.90\n)\n\n => Array\n(\n => 59.152.97.84\n)\n\n => Array\n(\n => 43.242.178.7\n)\n\n => Array\n(\n => 47.31.40.70\n)\n\n => Array\n(\n => 202.134.10.136\n)\n\n => Array\n(\n => 132.154.241.43\n)\n\n => Array\n(\n => 185.209.179.240\n)\n\n => Array\n(\n => 202.47.50.28\n)\n\n => Array\n(\n => 182.186.1.29\n)\n\n => Array\n(\n => 124.253.114.229\n)\n\n => Array\n(\n => 49.32.210.126\n)\n\n => Array\n(\n => 43.242.178.122\n)\n\n => Array\n(\n => 42.111.28.52\n)\n\n => Array\n(\n => 23.227.141.44\n)\n\n => Array\n(\n => 23.227.141.156\n)\n\n => Array\n(\n => 103.253.173.79\n)\n\n => Array\n(\n => 116.75.231.74\n)\n\n => Array\n(\n => 106.76.78.196\n)\n\n => Array\n(\n => 116.75.197.68\n)\n\n => Array\n(\n => 42.108.172.131\n)\n\n => Array\n(\n => 157.38.27.199\n)\n\n => Array\n(\n => 103.70.86.205\n)\n\n => Array\n(\n => 119.152.63.239\n)\n\n => Array\n(\n => 103.233.116.94\n)\n\n => Array\n(\n => 111.119.188.17\n)\n\n => Array\n(\n => 103.196.160.156\n)\n\n => Array\n(\n => 27.97.208.40\n)\n\n => Array\n(\n => 188.163.7.136\n)\n\n => Array\n(\n => 49.15.202.205\n)\n\n => Array\n(\n => 124.253.201.111\n)\n\n => Array\n(\n => 182.190.213.246\n)\n\n => Array\n(\n => 5.154.174.10\n)\n\n => Array\n(\n => 103.21.185.16\n)\n\n => Array\n(\n => 112.196.132.67\n)\n\n => Array\n(\n => 49.15.194.230\n)\n\n => Array\n(\n => 103.118.34.103\n)\n\n => Array\n(\n => 49.15.201.92\n)\n\n => Array\n(\n => 42.111.13.238\n)\n\n => Array\n(\n => 203.192.213.137\n)\n\n => Array\n(\n => 45.115.190.82\n)\n\n => Array\n(\n => 78.26.130.102\n)\n\n => Array\n(\n => 49.15.85.202\n)\n\n => Array\n(\n => 106.76.193.33\n)\n\n => Array\n(\n => 103.70.41.30\n)\n\n => Array\n(\n => 103.82.78.254\n)\n\n => Array\n(\n => 110.38.35.90\n)\n\n => Array\n(\n => 181.214.107.27\n)\n\n => Array\n(\n => 27.110.183.162\n)\n\n => Array\n(\n => 94.225.230.215\n)\n\n => Array\n(\n => 27.97.185.58\n)\n\n => Array\n(\n => 49.146.196.124\n)\n\n => Array\n(\n => 119.157.76.144\n)\n\n => Array\n(\n => 103.99.218.34\n)\n\n => Array\n(\n => 185.32.221.247\n)\n\n => Array\n(\n => 27.97.161.12\n)\n\n => Array\n(\n => 27.62.144.214\n)\n\n => Array\n(\n => 124.253.90.151\n)\n\n => Array\n(\n => 49.36.135.69\n)\n\n => Array\n(\n => 39.40.217.106\n)\n\n => Array\n(\n => 119.152.235.136\n)\n\n => Array\n(\n => 103.91.103.226\n)\n\n => Array\n(\n => 117.222.226.93\n)\n\n => Array\n(\n => 182.190.24.126\n)\n\n => Array\n(\n => 27.97.223.179\n)\n\n => Array\n(\n => 202.137.115.11\n)\n\n => Array\n(\n => 43.242.178.130\n)\n\n => Array\n(\n => 182.189.125.232\n)\n\n => Array\n(\n => 182.190.202.87\n)\n\n => Array\n(\n => 124.253.102.193\n)\n\n => Array\n(\n => 103.75.247.73\n)\n\n => Array\n(\n => 122.177.100.97\n)\n\n => Array\n(\n => 47.31.192.254\n)\n\n => Array\n(\n => 49.149.73.185\n)\n\n => Array\n(\n => 39.57.147.197\n)\n\n => Array\n(\n => 103.110.147.52\n)\n\n => Array\n(\n => 124.253.106.255\n)\n\n => Array\n(\n => 152.57.116.136\n)\n\n => Array\n(\n => 110.38.35.102\n)\n\n => Array\n(\n => 182.18.206.127\n)\n\n => Array\n(\n => 103.133.59.246\n)\n\n => Array\n(\n => 27.97.189.139\n)\n\n => Array\n(\n => 179.61.245.54\n)\n\n => Array\n(\n => 103.240.233.176\n)\n\n => Array\n(\n => 111.88.124.196\n)\n\n => Array\n(\n => 49.146.215.3\n)\n\n => Array\n(\n => 110.39.10.246\n)\n\n => Array\n(\n => 27.5.42.135\n)\n\n => Array\n(\n => 27.97.177.251\n)\n\n => Array\n(\n => 93.177.75.254\n)\n\n => Array\n(\n => 43.242.177.3\n)\n\n => Array\n(\n => 112.196.132.97\n)\n\n => Array\n(\n => 116.75.242.188\n)\n\n => Array\n(\n => 202.8.118.101\n)\n\n => Array\n(\n => 49.36.65.43\n)\n\n => Array\n(\n => 157.37.146.220\n)\n\n => Array\n(\n => 157.37.143.235\n)\n\n => Array\n(\n => 157.38.94.34\n)\n\n => Array\n(\n => 49.36.131.1\n)\n\n => Array\n(\n => 132.154.92.97\n)\n\n => Array\n(\n => 132.154.123.115\n)\n\n => Array\n(\n => 49.15.197.222\n)\n\n => Array\n(\n => 124.253.198.72\n)\n\n => Array\n(\n => 27.97.217.95\n)\n\n => Array\n(\n => 47.31.194.65\n)\n\n => Array\n(\n => 197.156.190.156\n)\n\n => Array\n(\n => 197.156.190.230\n)\n\n => Array\n(\n => 103.62.152.250\n)\n\n => Array\n(\n => 103.152.212.126\n)\n\n => Array\n(\n => 185.233.18.177\n)\n\n => Array\n(\n => 116.75.63.83\n)\n\n => Array\n(\n => 157.38.56.125\n)\n\n => Array\n(\n => 119.157.107.195\n)\n\n => Array\n(\n => 103.87.50.73\n)\n\n => Array\n(\n => 95.142.120.141\n)\n\n => Array\n(\n => 154.13.1.221\n)\n\n => Array\n(\n => 103.147.87.79\n)\n\n => Array\n(\n => 39.53.173.186\n)\n\n => Array\n(\n => 195.114.145.107\n)\n\n => Array\n(\n => 157.33.201.185\n)\n\n => Array\n(\n => 195.85.219.36\n)\n\n => Array\n(\n => 105.161.67.127\n)\n\n => Array\n(\n => 110.225.87.77\n)\n\n => Array\n(\n => 103.95.167.236\n)\n\n => Array\n(\n => 89.187.162.213\n)\n\n => Array\n(\n => 27.255.189.50\n)\n\n => Array\n(\n => 115.96.77.54\n)\n\n => Array\n(\n => 223.182.220.223\n)\n\n => Array\n(\n => 157.47.206.192\n)\n\n => Array\n(\n => 182.186.110.226\n)\n\n => Array\n(\n => 39.53.243.237\n)\n\n => Array\n(\n => 39.40.228.58\n)\n\n => Array\n(\n => 157.38.60.9\n)\n\n => Array\n(\n => 106.198.244.189\n)\n\n => Array\n(\n => 124.253.51.164\n)\n\n => Array\n(\n => 49.147.113.58\n)\n\n => Array\n(\n => 14.231.196.229\n)\n\n => Array\n(\n => 103.81.214.152\n)\n\n => Array\n(\n => 117.222.220.60\n)\n\n => Array\n(\n => 83.142.111.213\n)\n\n => Array\n(\n => 14.224.77.147\n)\n\n => Array\n(\n => 110.235.236.95\n)\n\n => Array\n(\n => 103.26.83.30\n)\n\n => Array\n(\n => 106.206.191.82\n)\n\n => Array\n(\n => 103.49.117.135\n)\n\n => Array\n(\n => 202.47.39.9\n)\n\n => Array\n(\n => 180.178.145.205\n)\n\n => Array\n(\n => 43.251.93.119\n)\n\n => Array\n(\n => 27.6.212.182\n)\n\n => Array\n(\n => 39.42.156.20\n)\n\n => Array\n(\n => 47.31.141.195\n)\n\n => Array\n(\n => 157.37.146.73\n)\n\n => Array\n(\n => 49.15.93.155\n)\n\n => Array\n(\n => 162.210.194.37\n)\n\n => Array\n(\n => 223.188.160.236\n)\n\n => Array\n(\n => 47.9.90.158\n)\n\n => Array\n(\n => 49.15.85.224\n)\n\n => Array\n(\n => 49.15.93.134\n)\n\n => Array\n(\n => 107.179.244.94\n)\n\n => Array\n(\n => 182.190.203.90\n)\n\n => Array\n(\n => 185.192.69.203\n)\n\n => Array\n(\n => 185.17.27.99\n)\n\n => Array\n(\n => 119.160.116.182\n)\n\n => Array\n(\n => 203.99.177.25\n)\n\n => Array\n(\n => 162.228.207.248\n)\n\n => Array\n(\n => 47.31.245.69\n)\n\n => Array\n(\n => 49.15.210.159\n)\n\n => Array\n(\n => 42.111.2.112\n)\n\n => Array\n(\n => 223.186.116.79\n)\n\n => Array\n(\n => 103.225.176.143\n)\n\n => Array\n(\n => 45.115.190.49\n)\n\n => Array\n(\n => 115.42.71.105\n)\n\n => Array\n(\n => 157.51.11.157\n)\n\n => Array\n(\n => 14.175.56.186\n)\n\n => Array\n(\n => 59.153.16.7\n)\n\n => Array\n(\n => 106.202.84.144\n)\n\n => Array\n(\n => 27.6.242.91\n)\n\n => Array\n(\n => 47.11.112.107\n)\n\n => Array\n(\n => 106.207.54.187\n)\n\n => Array\n(\n => 124.253.196.121\n)\n\n => Array\n(\n => 51.79.161.244\n)\n\n => Array\n(\n => 103.41.24.100\n)\n\n => Array\n(\n => 195.66.79.32\n)\n\n => Array\n(\n => 117.196.127.42\n)\n\n => Array\n(\n => 103.75.247.197\n)\n\n => Array\n(\n => 89.187.162.107\n)\n\n => Array\n(\n => 223.238.154.49\n)\n\n => Array\n(\n => 117.223.99.139\n)\n\n => Array\n(\n => 103.87.59.134\n)\n\n => Array\n(\n => 124.253.212.30\n)\n\n => Array\n(\n => 202.47.62.55\n)\n\n => Array\n(\n => 47.31.219.128\n)\n\n => Array\n(\n => 49.14.121.72\n)\n\n => Array\n(\n => 124.253.212.189\n)\n\n => Array\n(\n => 103.244.179.24\n)\n\n => Array\n(\n => 182.190.213.92\n)\n\n => Array\n(\n => 43.242.178.51\n)\n\n => Array\n(\n => 180.92.138.54\n)\n\n => Array\n(\n => 111.119.187.26\n)\n\n => Array\n(\n => 49.156.111.31\n)\n\n => Array\n(\n => 27.63.108.183\n)\n\n => Array\n(\n => 27.58.184.79\n)\n\n => Array\n(\n => 39.40.225.130\n)\n\n => Array\n(\n => 157.38.5.178\n)\n\n => Array\n(\n => 103.112.55.44\n)\n\n => Array\n(\n => 119.160.100.247\n)\n\n => Array\n(\n => 39.53.101.15\n)\n\n => Array\n(\n => 47.31.207.117\n)\n\n => Array\n(\n => 112.196.158.155\n)\n\n => Array\n(\n => 94.204.247.123\n)\n\n => Array\n(\n => 103.118.76.38\n)\n\n => Array\n(\n => 124.29.212.208\n)\n\n => Array\n(\n => 124.253.196.250\n)\n\n => Array\n(\n => 118.70.182.242\n)\n\n => Array\n(\n => 157.38.78.67\n)\n\n => Array\n(\n => 103.99.218.33\n)\n\n => Array\n(\n => 137.59.220.191\n)\n\n => Array\n(\n => 47.31.139.182\n)\n\n => Array\n(\n => 182.179.136.36\n)\n\n => Array\n(\n => 106.203.73.130\n)\n\n => Array\n(\n => 193.29.107.188\n)\n\n => Array\n(\n => 81.96.92.111\n)\n\n => Array\n(\n => 110.93.203.185\n)\n\n => Array\n(\n => 103.163.248.128\n)\n\n => Array\n(\n => 43.229.166.135\n)\n\n => Array\n(\n => 43.230.106.175\n)\n\n => Array\n(\n => 202.47.62.54\n)\n\n => Array\n(\n => 39.37.181.46\n)\n\n => Array\n(\n => 49.15.204.204\n)\n\n => Array\n(\n => 122.163.237.110\n)\n\n => Array\n(\n => 45.249.8.92\n)\n\n => Array\n(\n => 27.34.50.159\n)\n\n => Array\n(\n => 39.42.171.27\n)\n\n => Array\n(\n => 124.253.101.195\n)\n\n => Array\n(\n => 188.166.145.20\n)\n\n => Array\n(\n => 103.83.145.220\n)\n\n => Array\n(\n => 39.40.96.137\n)\n\n => Array\n(\n => 157.37.185.196\n)\n\n => Array\n(\n => 103.115.124.32\n)\n\n => Array\n(\n => 72.255.48.85\n)\n\n => Array\n(\n => 124.253.74.46\n)\n\n => Array\n(\n => 60.243.225.5\n)\n\n => Array\n(\n => 103.58.152.194\n)\n\n => Array\n(\n => 14.248.71.63\n)\n\n => Array\n(\n => 152.57.214.137\n)\n\n => Array\n(\n => 103.166.58.14\n)\n\n => Array\n(\n => 14.248.71.103\n)\n\n => Array\n(\n => 49.156.103.124\n)\n\n => Array\n(\n => 103.99.218.56\n)\n\n => Array\n(\n => 27.97.177.246\n)\n\n => Array\n(\n => 152.57.94.84\n)\n\n => Array\n(\n => 111.119.187.60\n)\n\n => Array\n(\n => 119.160.99.11\n)\n\n => Array\n(\n => 117.203.11.220\n)\n\n => Array\n(\n => 114.31.131.67\n)\n\n => Array\n(\n => 47.31.253.95\n)\n\n => Array\n(\n => 83.139.184.178\n)\n\n => Array\n(\n => 125.57.9.72\n)\n\n => Array\n(\n => 185.233.16.53\n)\n\n => Array\n(\n => 49.36.180.197\n)\n\n => Array\n(\n => 95.142.119.27\n)\n\n => Array\n(\n => 223.225.70.77\n)\n\n => Array\n(\n => 47.15.222.200\n)\n\n => Array\n(\n => 47.15.218.231\n)\n\n => Array\n(\n => 111.119.187.34\n)\n\n => Array\n(\n => 157.37.198.81\n)\n\n => Array\n(\n => 43.242.177.92\n)\n\n => Array\n(\n => 122.161.68.214\n)\n\n => Array\n(\n => 47.31.145.92\n)\n\n => Array\n(\n => 27.7.196.201\n)\n\n => Array\n(\n => 39.42.172.183\n)\n\n => Array\n(\n => 49.15.129.162\n)\n\n => Array\n(\n => 49.15.206.110\n)\n\n => Array\n(\n => 39.57.141.45\n)\n\n => Array\n(\n => 171.229.175.90\n)\n\n => Array\n(\n => 119.160.68.200\n)\n\n => Array\n(\n => 193.176.84.214\n)\n\n => Array\n(\n => 43.242.177.77\n)\n\n => Array\n(\n => 137.59.220.95\n)\n\n => Array\n(\n => 122.177.118.209\n)\n\n => Array\n(\n => 103.92.214.27\n)\n\n => Array\n(\n => 178.62.10.228\n)\n\n => Array\n(\n => 103.81.214.91\n)\n\n => Array\n(\n => 156.146.33.68\n)\n\n => Array\n(\n => 42.118.116.60\n)\n\n => Array\n(\n => 183.87.122.190\n)\n\n => Array\n(\n => 157.37.159.162\n)\n\n => Array\n(\n => 59.153.16.9\n)\n\n => Array\n(\n => 223.185.43.241\n)\n\n => Array\n(\n => 103.81.214.153\n)\n\n => Array\n(\n => 47.31.143.169\n)\n\n => Array\n(\n => 112.196.158.250\n)\n\n => Array\n(\n => 156.146.36.110\n)\n\n => Array\n(\n => 27.255.34.80\n)\n\n => Array\n(\n => 49.205.77.19\n)\n\n => Array\n(\n => 95.142.120.20\n)\n\n => Array\n(\n => 171.49.195.53\n)\n\n => Array\n(\n => 39.37.152.132\n)\n\n => Array\n(\n => 103.121.204.237\n)\n\n => Array\n(\n => 43.242.176.153\n)\n\n => Array\n(\n => 43.242.176.120\n)\n\n => Array\n(\n => 122.161.66.120\n)\n\n => Array\n(\n => 182.70.140.223\n)\n\n => Array\n(\n => 103.201.135.226\n)\n\n => Array\n(\n => 202.47.44.135\n)\n\n => Array\n(\n => 182.179.172.27\n)\n\n => Array\n(\n => 185.22.173.86\n)\n\n => Array\n(\n => 67.205.148.219\n)\n\n => Array\n(\n => 27.58.183.140\n)\n\n => Array\n(\n => 39.42.118.163\n)\n\n => Array\n(\n => 117.5.204.59\n)\n\n => Array\n(\n => 223.182.193.163\n)\n\n => Array\n(\n => 157.37.184.33\n)\n\n => Array\n(\n => 110.37.218.92\n)\n\n => Array\n(\n => 106.215.8.67\n)\n\n => Array\n(\n => 39.42.94.179\n)\n\n => Array\n(\n => 106.51.25.124\n)\n\n => Array\n(\n => 157.42.25.212\n)\n\n => Array\n(\n => 43.247.40.170\n)\n\n => Array\n(\n => 101.50.108.111\n)\n\n => Array\n(\n => 117.102.48.152\n)\n\n => Array\n(\n => 95.142.120.48\n)\n\n => Array\n(\n => 183.81.121.160\n)\n\n => Array\n(\n => 42.111.21.195\n)\n\n => Array\n(\n => 50.7.142.180\n)\n\n => Array\n(\n => 223.130.28.33\n)\n\n => Array\n(\n => 107.161.86.141\n)\n\n => Array\n(\n => 117.203.249.159\n)\n\n => Array\n(\n => 110.225.192.64\n)\n\n => Array\n(\n => 157.37.152.168\n)\n\n => Array\n(\n => 110.39.2.202\n)\n\n => Array\n(\n => 23.106.56.52\n)\n\n => Array\n(\n => 59.150.87.85\n)\n\n => Array\n(\n => 122.162.175.128\n)\n\n => Array\n(\n => 39.40.63.182\n)\n\n => Array\n(\n => 182.190.108.76\n)\n\n => Array\n(\n => 49.36.44.216\n)\n\n => Array\n(\n => 73.105.5.185\n)\n\n => Array\n(\n => 157.33.67.204\n)\n\n => Array\n(\n => 157.37.164.171\n)\n\n => Array\n(\n => 192.119.160.21\n)\n\n => Array\n(\n => 156.146.59.29\n)\n\n => Array\n(\n => 182.190.97.213\n)\n\n => Array\n(\n => 39.53.196.168\n)\n\n => Array\n(\n => 112.196.132.93\n)\n\n => Array\n(\n => 182.189.7.18\n)\n\n => Array\n(\n => 101.53.232.117\n)\n\n => Array\n(\n => 43.242.178.105\n)\n\n => Array\n(\n => 49.145.233.44\n)\n\n => Array\n(\n => 5.107.214.18\n)\n\n => Array\n(\n => 139.5.242.124\n)\n\n => Array\n(\n => 47.29.244.80\n)\n\n => Array\n(\n => 43.242.178.180\n)\n\n => Array\n(\n => 194.110.84.171\n)\n\n => Array\n(\n => 103.68.217.99\n)\n\n => Array\n(\n => 182.182.27.59\n)\n\n => Array\n(\n => 119.152.139.146\n)\n\n => Array\n(\n => 39.37.131.1\n)\n\n => Array\n(\n => 106.210.99.47\n)\n\n => Array\n(\n => 103.225.176.68\n)\n\n => Array\n(\n => 42.111.23.67\n)\n\n => Array\n(\n => 223.225.37.57\n)\n\n => Array\n(\n => 114.79.1.247\n)\n\n => Array\n(\n => 157.42.28.39\n)\n\n => Array\n(\n => 47.15.13.68\n)\n\n => Array\n(\n => 223.230.151.59\n)\n\n => Array\n(\n => 115.186.7.112\n)\n\n => Array\n(\n => 111.92.78.33\n)\n\n => Array\n(\n => 119.160.117.249\n)\n\n => Array\n(\n => 103.150.209.45\n)\n\n => Array\n(\n => 182.189.22.170\n)\n\n => Array\n(\n => 49.144.108.82\n)\n\n => Array\n(\n => 39.49.75.65\n)\n\n => Array\n(\n => 39.52.205.223\n)\n\n => Array\n(\n => 49.48.247.53\n)\n\n => Array\n(\n => 5.149.250.222\n)\n\n => Array\n(\n => 47.15.187.153\n)\n\n => Array\n(\n => 103.70.86.101\n)\n\n => Array\n(\n => 112.196.158.138\n)\n\n => Array\n(\n => 156.241.242.139\n)\n\n => Array\n(\n => 157.33.205.213\n)\n\n => Array\n(\n => 39.53.206.247\n)\n\n => Array\n(\n => 157.45.83.132\n)\n\n => Array\n(\n => 49.36.220.138\n)\n\n => Array\n(\n => 202.47.47.118\n)\n\n => Array\n(\n => 182.185.233.224\n)\n\n => Array\n(\n => 182.189.30.99\n)\n\n => Array\n(\n => 223.233.68.178\n)\n\n => Array\n(\n => 161.35.139.87\n)\n\n => Array\n(\n => 121.46.65.124\n)\n\n => Array\n(\n => 5.195.154.87\n)\n\n => Array\n(\n => 103.46.236.71\n)\n\n => Array\n(\n => 195.114.147.119\n)\n\n => Array\n(\n => 195.85.219.35\n)\n\n => Array\n(\n => 111.119.183.34\n)\n\n => Array\n(\n => 39.34.158.41\n)\n\n => Array\n(\n => 180.178.148.13\n)\n\n => Array\n(\n => 122.161.66.166\n)\n\n => Array\n(\n => 185.233.18.1\n)\n\n => Array\n(\n => 146.196.34.119\n)\n\n => Array\n(\n => 27.6.253.159\n)\n\n => Array\n(\n => 198.8.92.156\n)\n\n => Array\n(\n => 106.206.179.160\n)\n\n => Array\n(\n => 202.164.133.53\n)\n\n => Array\n(\n => 112.196.141.214\n)\n\n => Array\n(\n => 95.135.15.148\n)\n\n => Array\n(\n => 111.92.119.165\n)\n\n => Array\n(\n => 84.17.34.18\n)\n\n => Array\n(\n => 49.36.232.117\n)\n\n => Array\n(\n => 122.180.235.92\n)\n\n => Array\n(\n => 89.187.163.177\n)\n\n => Array\n(\n => 103.217.238.38\n)\n\n => Array\n(\n => 103.163.248.115\n)\n\n => Array\n(\n => 156.146.59.10\n)\n\n => Array\n(\n => 223.233.68.183\n)\n\n => Array\n(\n => 103.12.198.92\n)\n\n => Array\n(\n => 42.111.9.221\n)\n\n => Array\n(\n => 111.92.77.242\n)\n\n => Array\n(\n => 192.142.128.26\n)\n\n => Array\n(\n => 182.69.195.139\n)\n\n => Array\n(\n => 103.209.83.110\n)\n\n => Array\n(\n => 207.244.71.80\n)\n\n => Array\n(\n => 41.140.106.29\n)\n\n => Array\n(\n => 45.118.167.65\n)\n\n => Array\n(\n => 45.118.167.70\n)\n\n => Array\n(\n => 157.37.159.180\n)\n\n => Array\n(\n => 103.217.178.194\n)\n\n => Array\n(\n => 27.255.165.94\n)\n\n => Array\n(\n => 45.133.7.42\n)\n\n => Array\n(\n => 43.230.65.168\n)\n\n => Array\n(\n => 39.53.196.221\n)\n\n => Array\n(\n => 42.111.17.83\n)\n\n => Array\n(\n => 110.39.12.34\n)\n\n => Array\n(\n => 45.118.158.169\n)\n\n => Array\n(\n => 202.142.110.165\n)\n\n => Array\n(\n => 106.201.13.212\n)\n\n => Array\n(\n => 103.211.14.94\n)\n\n => Array\n(\n => 160.202.37.105\n)\n\n => Array\n(\n => 103.99.199.34\n)\n\n => Array\n(\n => 183.83.45.104\n)\n\n => Array\n(\n => 49.36.233.107\n)\n\n => Array\n(\n => 182.68.21.51\n)\n\n => Array\n(\n => 110.227.93.182\n)\n\n => Array\n(\n => 180.178.144.251\n)\n\n => Array\n(\n => 129.0.102.0\n)\n\n => Array\n(\n => 124.253.105.176\n)\n\n => Array\n(\n => 105.156.139.225\n)\n\n => Array\n(\n => 208.117.87.154\n)\n\n => Array\n(\n => 138.68.185.17\n)\n\n => Array\n(\n => 43.247.41.207\n)\n\n => Array\n(\n => 49.156.106.105\n)\n\n => Array\n(\n => 223.238.197.124\n)\n\n => Array\n(\n => 202.47.39.96\n)\n\n => Array\n(\n => 223.226.131.80\n)\n\n => Array\n(\n => 122.161.48.139\n)\n\n => Array\n(\n => 106.201.144.12\n)\n\n => Array\n(\n => 122.178.223.244\n)\n\n => Array\n(\n => 195.181.164.65\n)\n\n => Array\n(\n => 106.195.12.187\n)\n\n => Array\n(\n => 124.253.48.48\n)\n\n => Array\n(\n => 103.140.30.214\n)\n\n => Array\n(\n => 180.178.147.132\n)\n\n => Array\n(\n => 138.197.139.130\n)\n\n => Array\n(\n => 5.254.2.138\n)\n\n => Array\n(\n => 183.81.93.25\n)\n\n => Array\n(\n => 182.70.39.254\n)\n\n => Array\n(\n => 106.223.87.131\n)\n\n => Array\n(\n => 106.203.91.114\n)\n\n => Array\n(\n => 196.70.137.128\n)\n\n => Array\n(\n => 150.242.62.167\n)\n\n => Array\n(\n => 184.170.243.198\n)\n\n => Array\n(\n => 59.89.30.66\n)\n\n => Array\n(\n => 49.156.112.201\n)\n\n => Array\n(\n => 124.29.212.168\n)\n\n => Array\n(\n => 103.204.170.238\n)\n\n => Array\n(\n => 124.253.116.81\n)\n\n => Array\n(\n => 41.248.102.107\n)\n\n => Array\n(\n => 119.160.100.51\n)\n\n => Array\n(\n => 5.254.40.91\n)\n\n => Array\n(\n => 103.149.154.25\n)\n\n => Array\n(\n => 103.70.41.28\n)\n\n => Array\n(\n => 103.151.234.42\n)\n\n => Array\n(\n => 39.37.142.107\n)\n\n => Array\n(\n => 27.255.186.115\n)\n\n => Array\n(\n => 49.15.193.151\n)\n\n => Array\n(\n => 103.201.146.115\n)\n\n => Array\n(\n => 223.228.177.70\n)\n\n => Array\n(\n => 182.179.141.37\n)\n\n => Array\n(\n => 110.172.131.126\n)\n\n => Array\n(\n => 45.116.232.0\n)\n\n => Array\n(\n => 193.37.32.206\n)\n\n => Array\n(\n => 119.152.62.246\n)\n\n => Array\n(\n => 180.178.148.228\n)\n\n => Array\n(\n => 195.114.145.120\n)\n\n => Array\n(\n => 122.160.49.194\n)\n\n => Array\n(\n => 103.240.237.17\n)\n\n => Array\n(\n => 103.75.245.238\n)\n\n => Array\n(\n => 124.253.215.148\n)\n\n => Array\n(\n => 45.118.165.146\n)\n\n => Array\n(\n => 103.75.244.111\n)\n\n => Array\n(\n => 223.185.7.42\n)\n\n => Array\n(\n => 139.5.240.165\n)\n\n => Array\n(\n => 45.251.117.204\n)\n\n => Array\n(\n => 132.154.71.227\n)\n\n => Array\n(\n => 178.92.100.97\n)\n\n => Array\n(\n => 49.48.248.42\n)\n\n => Array\n(\n => 182.190.109.252\n)\n\n => Array\n(\n => 43.231.57.209\n)\n\n => Array\n(\n => 39.37.185.133\n)\n\n => Array\n(\n => 123.17.79.174\n)\n\n => Array\n(\n => 180.178.146.215\n)\n\n => Array\n(\n => 41.248.83.40\n)\n\n => Array\n(\n => 103.255.4.79\n)\n\n => Array\n(\n => 103.39.119.233\n)\n\n => Array\n(\n => 85.203.44.24\n)\n\n => Array\n(\n => 93.74.18.246\n)\n\n => Array\n(\n => 95.142.120.51\n)\n\n => Array\n(\n => 202.47.42.57\n)\n\n => Array\n(\n => 41.202.219.253\n)\n\n => Array\n(\n => 154.28.188.182\n)\n\n => Array\n(\n => 14.163.178.106\n)\n\n => Array\n(\n => 118.185.57.226\n)\n\n => Array\n(\n => 49.15.141.102\n)\n\n => Array\n(\n => 182.189.86.47\n)\n\n => Array\n(\n => 111.88.68.79\n)\n\n => Array\n(\n => 156.146.59.8\n)\n\n => Array\n(\n => 119.152.62.82\n)\n\n => Array\n(\n => 49.207.128.103\n)\n\n => Array\n(\n => 203.212.30.234\n)\n\n => Array\n(\n => 41.202.219.254\n)\n\n => Array\n(\n => 103.46.203.10\n)\n\n => Array\n(\n => 112.79.141.15\n)\n\n => Array\n(\n => 103.68.218.75\n)\n\n => Array\n(\n => 49.35.130.14\n)\n\n => Array\n(\n => 172.247.129.90\n)\n\n => Array\n(\n => 116.90.74.214\n)\n\n => Array\n(\n => 180.178.142.242\n)\n\n => Array\n(\n => 111.119.183.59\n)\n\n => Array\n(\n => 117.5.103.189\n)\n\n => Array\n(\n => 203.110.93.146\n)\n\n => Array\n(\n => 188.163.97.86\n)\n\n => Array\n(\n => 124.253.90.47\n)\n\n => Array\n(\n => 139.167.249.160\n)\n\n => Array\n(\n => 103.226.206.55\n)\n\n => Array\n(\n => 154.28.188.191\n)\n\n => Array\n(\n => 182.190.197.205\n)\n\n => Array\n(\n => 111.119.183.33\n)\n\n => Array\n(\n => 14.253.254.64\n)\n\n => Array\n(\n => 117.237.197.246\n)\n\n => Array\n(\n => 172.105.53.82\n)\n\n => Array\n(\n => 124.253.207.164\n)\n\n => Array\n(\n => 103.255.4.33\n)\n\n => Array\n(\n => 27.63.131.206\n)\n\n => Array\n(\n => 103.118.170.99\n)\n\n => Array\n(\n => 111.119.183.55\n)\n\n => Array\n(\n => 14.182.101.109\n)\n\n => Array\n(\n => 175.107.223.199\n)\n\n => Array\n(\n => 39.57.168.94\n)\n\n => Array\n(\n => 122.182.213.139\n)\n\n => Array\n(\n => 112.79.214.237\n)\n\n => Array\n(\n => 27.6.252.22\n)\n\n => Array\n(\n => 89.163.212.83\n)\n\n => Array\n(\n => 182.189.23.1\n)\n\n => Array\n(\n => 49.15.222.253\n)\n\n => Array\n(\n => 125.63.97.110\n)\n\n => Array\n(\n => 223.233.65.159\n)\n\n => Array\n(\n => 139.99.159.18\n)\n\n => Array\n(\n => 45.118.165.137\n)\n\n => Array\n(\n => 39.52.2.167\n)\n\n => Array\n(\n => 39.57.141.24\n)\n\n => Array\n(\n => 27.5.32.145\n)\n\n => Array\n(\n => 49.36.212.33\n)\n\n => Array\n(\n => 157.33.218.32\n)\n\n => Array\n(\n => 116.71.4.122\n)\n\n => Array\n(\n => 110.93.244.176\n)\n\n => Array\n(\n => 154.73.203.156\n)\n\n => Array\n(\n => 136.158.30.235\n)\n\n => Array\n(\n => 122.161.53.72\n)\n\n => Array\n(\n => 106.203.203.156\n)\n\n => Array\n(\n => 45.133.7.22\n)\n\n => Array\n(\n => 27.255.180.69\n)\n\n => Array\n(\n => 94.46.244.3\n)\n\n => Array\n(\n => 43.242.178.157\n)\n\n => Array\n(\n => 171.79.189.215\n)\n\n => Array\n(\n => 37.117.141.89\n)\n\n => Array\n(\n => 196.92.32.64\n)\n\n => Array\n(\n => 154.73.203.157\n)\n\n => Array\n(\n => 183.83.176.14\n)\n\n => Array\n(\n => 106.215.84.145\n)\n\n => Array\n(\n => 95.142.120.12\n)\n\n => Array\n(\n => 190.232.110.94\n)\n\n => Array\n(\n => 179.6.194.47\n)\n\n => Array\n(\n => 103.62.155.172\n)\n\n => Array\n(\n => 39.34.156.177\n)\n\n => Array\n(\n => 122.161.49.120\n)\n\n => Array\n(\n => 103.58.155.253\n)\n\n => Array\n(\n => 175.107.226.20\n)\n\n => Array\n(\n => 206.81.28.165\n)\n\n => Array\n(\n => 49.36.216.36\n)\n\n => Array\n(\n => 104.223.95.178\n)\n\n => Array\n(\n => 122.177.69.35\n)\n\n => Array\n(\n => 39.57.163.107\n)\n\n => Array\n(\n => 122.161.53.35\n)\n\n => Array\n(\n => 182.190.102.13\n)\n\n => Array\n(\n => 122.161.68.95\n)\n\n => Array\n(\n => 154.73.203.147\n)\n\n => Array\n(\n => 122.173.125.2\n)\n\n => Array\n(\n => 117.96.140.189\n)\n\n => Array\n(\n => 106.200.244.10\n)\n\n => Array\n(\n => 110.36.202.5\n)\n\n => Array\n(\n => 124.253.51.144\n)\n\n => Array\n(\n => 176.100.1.145\n)\n\n => Array\n(\n => 156.146.59.20\n)\n\n => Array\n(\n => 122.176.100.151\n)\n\n => Array\n(\n => 185.217.117.237\n)\n\n => Array\n(\n => 49.37.223.97\n)\n\n => Array\n(\n => 101.50.108.80\n)\n\n => Array\n(\n => 124.253.155.88\n)\n\n => Array\n(\n => 39.40.208.96\n)\n\n => Array\n(\n => 122.167.151.154\n)\n\n => Array\n(\n => 172.98.89.13\n)\n\n => Array\n(\n => 103.91.52.6\n)\n\n => Array\n(\n => 106.203.84.5\n)\n\n => Array\n(\n => 117.216.221.34\n)\n\n => Array\n(\n => 154.73.203.131\n)\n\n => Array\n(\n => 223.182.210.117\n)\n\n => Array\n(\n => 49.36.185.208\n)\n\n => Array\n(\n => 111.119.183.30\n)\n\n => Array\n(\n => 39.42.107.13\n)\n\n => Array\n(\n => 39.40.15.174\n)\n\n => Array\n(\n => 1.38.244.65\n)\n\n => Array\n(\n => 49.156.75.252\n)\n\n => Array\n(\n => 122.161.51.99\n)\n\n => Array\n(\n => 27.73.78.57\n)\n\n => Array\n(\n => 49.48.228.70\n)\n\n => Array\n(\n => 111.119.183.18\n)\n\n => Array\n(\n => 116.204.252.218\n)\n\n => Array\n(\n => 73.173.40.248\n)\n\n => Array\n(\n => 223.130.28.81\n)\n\n => Array\n(\n => 202.83.58.81\n)\n\n => Array\n(\n => 45.116.233.31\n)\n\n => Array\n(\n => 111.119.183.1\n)\n\n => Array\n(\n => 45.133.7.66\n)\n\n => Array\n(\n => 39.48.204.174\n)\n\n => Array\n(\n => 37.19.213.30\n)\n\n => Array\n(\n => 111.119.183.22\n)\n\n => Array\n(\n => 122.177.74.19\n)\n\n => Array\n(\n => 124.253.80.59\n)\n\n => Array\n(\n => 111.119.183.60\n)\n\n => Array\n(\n => 157.39.106.191\n)\n\n => Array\n(\n => 157.47.86.121\n)\n\n => Array\n(\n => 47.31.159.100\n)\n\n => Array\n(\n => 106.214.85.144\n)\n\n => Array\n(\n => 182.189.22.197\n)\n\n => Array\n(\n => 111.119.183.51\n)\n\n => Array\n(\n => 202.47.35.57\n)\n\n => Array\n(\n => 42.108.33.220\n)\n\n => Array\n(\n => 180.178.146.158\n)\n\n => Array\n(\n => 124.253.184.239\n)\n\n => Array\n(\n => 103.165.20.8\n)\n\n => Array\n(\n => 94.178.239.156\n)\n\n => Array\n(\n => 72.255.41.142\n)\n\n => Array\n(\n => 116.90.107.102\n)\n\n => Array\n(\n => 39.36.164.250\n)\n\n => Array\n(\n => 124.253.195.172\n)\n\n => Array\n(\n => 203.142.218.149\n)\n\n => Array\n(\n => 157.43.165.180\n)\n\n => Array\n(\n => 39.40.242.57\n)\n\n => Array\n(\n => 103.92.43.150\n)\n\n => Array\n(\n => 39.42.133.202\n)\n\n => Array\n(\n => 119.160.66.11\n)\n\n => Array\n(\n => 138.68.3.7\n)\n\n => Array\n(\n => 210.56.125.226\n)\n\n => Array\n(\n => 157.50.4.249\n)\n\n => Array\n(\n => 124.253.81.162\n)\n\n => Array\n(\n => 103.240.235.141\n)\n\n => Array\n(\n => 132.154.128.20\n)\n\n => Array\n(\n => 49.156.115.37\n)\n\n => Array\n(\n => 45.133.7.48\n)\n\n => Array\n(\n => 122.161.49.137\n)\n\n => Array\n(\n => 202.47.46.31\n)\n\n => Array\n(\n => 192.140.145.148\n)\n\n => Array\n(\n => 202.14.123.10\n)\n\n => Array\n(\n => 122.161.53.98\n)\n\n => Array\n(\n => 124.253.114.113\n)\n\n => Array\n(\n => 103.227.70.34\n)\n\n => Array\n(\n => 223.228.175.227\n)\n\n => Array\n(\n => 157.39.119.110\n)\n\n => Array\n(\n => 180.188.224.231\n)\n\n => Array\n(\n => 132.154.188.85\n)\n\n => Array\n(\n => 197.210.227.207\n)\n\n => Array\n(\n => 103.217.123.177\n)\n\n => Array\n(\n => 124.253.85.31\n)\n\n => Array\n(\n => 123.201.105.97\n)\n\n => Array\n(\n => 39.57.190.37\n)\n\n => Array\n(\n => 202.63.205.248\n)\n\n => Array\n(\n => 122.161.51.100\n)\n\n => Array\n(\n => 39.37.163.97\n)\n\n => Array\n(\n => 43.231.57.173\n)\n\n => Array\n(\n => 223.225.135.169\n)\n\n => Array\n(\n => 119.160.71.136\n)\n\n => Array\n(\n => 122.165.114.93\n)\n\n => Array\n(\n => 47.11.77.102\n)\n\n => Array\n(\n => 49.149.107.198\n)\n\n => Array\n(\n => 192.111.134.206\n)\n\n => Array\n(\n => 182.64.102.43\n)\n\n => Array\n(\n => 124.253.184.111\n)\n\n => Array\n(\n => 171.237.97.228\n)\n\n => Array\n(\n => 117.237.237.101\n)\n\n => Array\n(\n => 49.36.33.19\n)\n\n => Array\n(\n => 103.31.101.241\n)\n\n => Array\n(\n => 129.0.207.203\n)\n\n => Array\n(\n => 157.39.122.155\n)\n\n => Array\n(\n => 197.210.85.120\n)\n\n => Array\n(\n => 124.253.219.201\n)\n\n => Array\n(\n => 152.57.75.92\n)\n\n => Array\n(\n => 169.149.195.121\n)\n\n => Array\n(\n => 198.16.76.27\n)\n\n => Array\n(\n => 157.43.192.188\n)\n\n => Array\n(\n => 119.155.244.221\n)\n\n => Array\n(\n => 39.51.242.216\n)\n\n => Array\n(\n => 39.57.180.158\n)\n\n => Array\n(\n => 134.202.32.5\n)\n\n => Array\n(\n => 122.176.139.205\n)\n\n => Array\n(\n => 151.243.50.9\n)\n\n => Array\n(\n => 39.52.99.161\n)\n\n => Array\n(\n => 136.144.33.95\n)\n\n => Array\n(\n => 157.37.205.216\n)\n\n => Array\n(\n => 217.138.220.134\n)\n\n => Array\n(\n => 41.140.106.65\n)\n\n => Array\n(\n => 39.37.253.126\n)\n\n => Array\n(\n => 103.243.44.240\n)\n\n => Array\n(\n => 157.46.169.29\n)\n\n => Array\n(\n => 92.119.177.122\n)\n\n => Array\n(\n => 196.240.60.21\n)\n\n => Array\n(\n => 122.161.6.246\n)\n\n => Array\n(\n => 117.202.162.46\n)\n\n => Array\n(\n => 205.164.137.120\n)\n\n => Array\n(\n => 171.237.79.241\n)\n\n => Array\n(\n => 198.16.76.28\n)\n\n => Array\n(\n => 103.100.4.151\n)\n\n => Array\n(\n => 178.239.162.236\n)\n\n => Array\n(\n => 106.197.31.240\n)\n\n => Array\n(\n => 122.168.179.251\n)\n\n => Array\n(\n => 39.37.167.126\n)\n\n => Array\n(\n => 171.48.8.115\n)\n\n => Array\n(\n => 157.44.152.14\n)\n\n => Array\n(\n => 103.77.43.219\n)\n\n => Array\n(\n => 122.161.49.38\n)\n\n => Array\n(\n => 122.161.52.83\n)\n\n => Array\n(\n => 122.173.108.210\n)\n\n => Array\n(\n => 60.254.109.92\n)\n\n => Array\n(\n => 103.57.85.75\n)\n\n => Array\n(\n => 106.0.58.36\n)\n\n => Array\n(\n => 122.161.49.212\n)\n\n => Array\n(\n => 27.255.182.159\n)\n\n => Array\n(\n => 116.75.230.159\n)\n\n => Array\n(\n => 122.173.152.133\n)\n\n => Array\n(\n => 129.0.79.247\n)\n\n => Array\n(\n => 223.228.163.44\n)\n\n => Array\n(\n => 103.168.78.82\n)\n\n => Array\n(\n => 39.59.67.124\n)\n\n => Array\n(\n => 182.69.19.120\n)\n\n => Array\n(\n => 196.202.236.195\n)\n\n => Array\n(\n => 137.59.225.206\n)\n\n => Array\n(\n => 143.110.209.194\n)\n\n => Array\n(\n => 117.201.233.91\n)\n\n => Array\n(\n => 37.120.150.107\n)\n\n => Array\n(\n => 58.65.222.10\n)\n\n => Array\n(\n => 202.47.43.86\n)\n\n => Array\n(\n => 106.206.223.234\n)\n\n => Array\n(\n => 5.195.153.158\n)\n\n => Array\n(\n => 223.227.127.243\n)\n\n => Array\n(\n => 103.165.12.222\n)\n\n => Array\n(\n => 49.36.185.189\n)\n\n => Array\n(\n => 59.96.92.57\n)\n\n => Array\n(\n => 203.194.104.235\n)\n\n => Array\n(\n => 122.177.72.33\n)\n\n => Array\n(\n => 106.213.126.40\n)\n\n => Array\n(\n => 45.127.232.69\n)\n\n => Array\n(\n => 156.146.59.39\n)\n\n => Array\n(\n => 103.21.184.11\n)\n\n => Array\n(\n => 106.212.47.59\n)\n\n => Array\n(\n => 182.179.137.235\n)\n\n => Array\n(\n => 49.36.178.154\n)\n\n => Array\n(\n => 171.48.7.128\n)\n\n => Array\n(\n => 119.160.57.96\n)\n\n => Array\n(\n => 197.210.79.92\n)\n\n => Array\n(\n => 36.255.45.87\n)\n\n => Array\n(\n => 47.31.219.47\n)\n\n => Array\n(\n => 122.161.51.160\n)\n\n => Array\n(\n => 103.217.123.129\n)\n\n => Array\n(\n => 59.153.16.12\n)\n\n => Array\n(\n => 103.92.43.226\n)\n\n => Array\n(\n => 47.31.139.139\n)\n\n => Array\n(\n => 210.2.140.18\n)\n\n => Array\n(\n => 106.210.33.219\n)\n\n => Array\n(\n => 175.107.203.34\n)\n\n => Array\n(\n => 146.196.32.144\n)\n\n => Array\n(\n => 103.12.133.121\n)\n\n => Array\n(\n => 103.59.208.182\n)\n\n => Array\n(\n => 157.37.190.232\n)\n\n => Array\n(\n => 106.195.35.201\n)\n\n => Array\n(\n => 27.122.14.83\n)\n\n => Array\n(\n => 194.193.44.5\n)\n\n => Array\n(\n => 5.62.43.245\n)\n\n => Array\n(\n => 103.53.80.50\n)\n\n => Array\n(\n => 47.29.142.233\n)\n\n => Array\n(\n => 154.6.20.63\n)\n\n => Array\n(\n => 173.245.203.128\n)\n\n => Array\n(\n => 103.77.43.231\n)\n\n => Array\n(\n => 5.107.166.235\n)\n\n => Array\n(\n => 106.212.44.123\n)\n\n => Array\n(\n => 157.41.60.93\n)\n\n => Array\n(\n => 27.58.179.79\n)\n\n => Array\n(\n => 157.37.167.144\n)\n\n => Array\n(\n => 119.160.57.115\n)\n\n => Array\n(\n => 122.161.53.224\n)\n\n => Array\n(\n => 49.36.233.51\n)\n\n => Array\n(\n => 101.0.32.8\n)\n\n => Array\n(\n => 119.160.103.158\n)\n\n => Array\n(\n => 122.177.79.115\n)\n\n => Array\n(\n => 107.181.166.27\n)\n\n => Array\n(\n => 183.6.0.125\n)\n\n => Array\n(\n => 49.36.186.0\n)\n\n => Array\n(\n => 202.181.5.4\n)\n\n => Array\n(\n => 45.118.165.144\n)\n\n => Array\n(\n => 171.96.157.133\n)\n\n => Array\n(\n => 222.252.51.163\n)\n\n => Array\n(\n => 103.81.215.162\n)\n\n => Array\n(\n => 110.225.93.208\n)\n\n => Array\n(\n => 122.161.48.200\n)\n\n => Array\n(\n => 119.63.138.173\n)\n\n => Array\n(\n => 202.83.58.208\n)\n\n => Array\n(\n => 122.161.53.101\n)\n\n => Array\n(\n => 137.97.95.21\n)\n\n => Array\n(\n => 112.204.167.123\n)\n\n => Array\n(\n => 122.180.21.151\n)\n\n => Array\n(\n => 103.120.44.108\n)\n\n => Array\n(\n => 49.37.220.174\n)\n\n => Array\n(\n => 1.55.255.124\n)\n\n => Array\n(\n => 23.227.140.173\n)\n\n => Array\n(\n => 43.248.153.110\n)\n\n => Array\n(\n => 106.214.93.101\n)\n\n => Array\n(\n => 103.83.149.36\n)\n\n => Array\n(\n => 103.217.123.57\n)\n\n => Array\n(\n => 193.9.113.119\n)\n\n => Array\n(\n => 14.182.57.204\n)\n\n => Array\n(\n => 117.201.231.0\n)\n\n => Array\n(\n => 14.99.198.186\n)\n\n => Array\n(\n => 36.255.44.204\n)\n\n => Array\n(\n => 103.160.236.42\n)\n\n => Array\n(\n => 31.202.16.116\n)\n\n => Array\n(\n => 223.239.49.201\n)\n\n => Array\n(\n => 122.161.102.149\n)\n\n => Array\n(\n => 117.196.123.184\n)\n\n => Array\n(\n => 49.205.112.105\n)\n\n => Array\n(\n => 103.244.176.201\n)\n\n => Array\n(\n => 95.216.15.219\n)\n\n => Array\n(\n => 103.107.196.174\n)\n\n => Array\n(\n => 203.190.34.65\n)\n\n => Array\n(\n => 23.227.140.182\n)\n\n => Array\n(\n => 171.79.74.74\n)\n\n => Array\n(\n => 106.206.223.244\n)\n\n => Array\n(\n => 180.151.28.140\n)\n\n => Array\n(\n => 165.225.124.114\n)\n\n)\n```\nAm I dreaming?: SmartyPants's Personal Finance Blog\n Layout: Blue and Brown (Default) Author's Creation\n Home > Am I dreaming?\n\n# Am I dreaming?\n\nDecember 20th, 2008 at 12:38 am\n\nAm I really done with all of my finals?\nPinch me, coz I think I am dreaming. What am I going to do with all this free time?? It's snowing really hard here today. I am sitting in watching dvds borrowed from the library.\n\n### 3 Responses to “Am I dreaming?”\n\n1. Ms. Pearl Says:\n\nCongratulations...I know that feeling and it is heaven isn't it?\n\n2. Petunia Says:\n\nPinch, pinch. It's a great feeling to be done with them and looking forward to a break. Enjoy your DVDs!\n\n3. baselle Says:\n\nIf you are dreaming during the day, you're probably sleep deprived. Get some ZZZZs!\n\n(Note: If you were logged in, we could automatically fill in these fields for you.)\n Name: * Email: Will not be published. Subscribe: Notify me of additional comments to this entry. URL: Verification: * Please spell out the number 4. [ Why? ]\n\nvB Code: You can use these tags: [b] [i] [u] [url] [email]"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9490158,"math_prob":0.9994455,"size":474,"snap":"2021-43-2021-49","text_gpt3_token_len":152,"char_repetition_ratio":0.14468086,"word_repetition_ratio":0.17045455,"special_character_ratio":0.37974682,"punctuation_ratio":0.15789473,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9997588,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-27T18:54:37Z\",\"WARC-Record-ID\":\"<urn:uuid:ace7e172-5c94-45ac-801b-01a771ba8282>\",\"Content-Length\":\"289742\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:57211c1d-9bc4-4101-b8a8-c066060eb601>\",\"WARC-Concurrent-To\":\"<urn:uuid:bb0c32cc-a5ba-46cd-b65c-9eb6e6025c48>\",\"WARC-IP-Address\":\"173.231.200.26\",\"WARC-Target-URI\":\"https://smartypants.savingadvice.com/2008/12/20/am-i-dreaming_46336/\",\"WARC-Payload-Digest\":\"sha1:J6VWVRCNRPW57IU42Z2GYHI2BCLJRD77\",\"WARC-Block-Digest\":\"sha1:C3OXAWNEJSUW4M4J6GW6AZUOMRICMJ5L\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323588242.22_warc_CC-MAIN-20211027181907-20211027211907-00346.warc.gz\"}"} |
https://medium.com/analytics-vidhya/facial-age-prediction-with-fastai2-d67fdb575539 | [
"# Facial age prediction with Fastai2\n\nNot so long ago, age prediction applications were quite trending among the iOS phone users. In this post, we will create a Deep Learning model that predicts the age of a person based on their facial image.\n\nLet’s get started….!!\n\nWe will be using Fastai2 library for this model. It contains all the sub-libraries needed for NLP, Recommendation systems and computer vision. For this computer vision task, we will use the vision sub-library.\n\n`!pip install fastai2 -qfrom fastai2.vision.all import *from fastai2.basics import *`\n\nWe need a dataset that contains facial images along with their age. Our model should extract the features of these images by passing it through several layers of matrix multiplications and output a number(i.e. age), which is called the ‘Forward pass’. Now, this predicted age should be compared with the actual age to calculate loss and go back and change the values(weights) of our matrices and this, not so surprising, is called ‘Backward pass’.\n\nOnce uploaded to Drive, start the Google Colaboratory environment, connect to a runtime and mount the Drive. We need a path to where our data is stored.\n\n`path=Path('/content/drive/My Drive/face_age')path.ls()`\n\nIn this dataset, there are 99 folders named by the corresponding age of the people’s images in that folder. So the output, y value, is the label of the folder.\n\nNow we shall create a get_y function which gets the name of the folder and converts it into an integer to do the regression. The Pipeline is particular to Fastai and it facilitates the processes to happen in a sequence.\n\n`def to_num(x:str): return int(x)get_y= Pipeline([parent_label,to_num])`\n\nWe’ll use the DataBlock API to get the data, apply transformations, augmentations, split them into training and validation sets, get the y-value and normalise.\n\n`dblock=DataBlock(blocks=[ImageBlock,RegressionBlock()],get_items=get_image_files,splitter=RandomSplitter(),get_y=get_y,item_tfms=Resize(240, method='squish'),batch_tfms=[*aug_transforms(size=224, max_warp=0, max_rotate=7.0, max_zoom=1.0)])`\n\nNow, let’s make this data block into a data loader with a batch size of 64.\n\n`dls=dblock.dataloaders(path,bs=64,verbose=True)dls.show_batch()`\n\nLet’s create a default CNN learner using the cnn_learner() function and let’s use resnet18 architecture. As this is a regression problem, it is mandatory to specify the y_range.\n\n`learn=cnn_learner(dls,resnet18,loss_func=MSELossFlat(), y_range=(10.0,70.0))`\n\nNow let’s train the model, i.e. fit, for 5 epochs and make a prediction on an image.\n\n`learn.fit_one_cycle(5,0.0055)fnames=get_image_files(path/'050')pred,_,_=learn.predict(fnames);pred`\n\nThe output age I got was 50.8 and of course it might vary each time you run the model. It is because of the randomness in parameter initialisation.\n\nThat, by all means, is a fairly good result, yet still, we’ll hack into the cnn_learner() and customise it with a new architecture, activation function, self-attention and optimiser. If we look inside the learner.py notebook in Fastai’s Github repo, what cnn_leaner() does is it creates a cnn_model() and passes the model into a Learner() and the cnn_model() call the create_head() and create_body() to create the model.\n\nWe shall copy the create_body() function and make some changes so that we can pass in an activation function and self-attention. (changes made in the respective lines are shown below)\n\n`def create_custom_body(arch, n_in=3, pretrained=True,act_cls=nn.ReLU(),sa=False, cut=None):model = arch(pretrained=pretrained,act_cls=act_cls,sa=sa)`\n\nNow, we shall use xresnet18's architecture, use Mish as activation and set self-attention as True.\n\n`body=create_custom_body(xresnet18, pretrained=True, act_cls=Mish, sa=True)`\n\nTo create the head we need to find the number of outputs from body and number of outputs from the head. We’ve to double the number of input features to the head because our head will contain average pooling and max-pooling layers. As we are doing a regression we have to set a y_range (i.e. output boundaries)\n\n`nf=num_features_model(nn.Sequential(*body.children())) * 2; nfhead=create_head(nf,dls.c, y_range=(0,100))`\n\n`model=nn.Sequential(body, head)apply_init(model, nn.init.kaiming_normal_)`\n\nWe can now pass our model into a Learner(). Since Fastai uses discriminative learning rate, we need to spilt the model so that each set will be trained at a different learning rate accordingly and the split function can also be got from the same learner.py notebook. Also, we’ll use ranger optimiser which is nothing but a RAdam optimizer passed to a LookAhead().\n\n`def _xresnet_split(m): return L(m[:3], m[3:], m[1:]).map(params)learn=Learner(dls, model, loss_func=MSELossFlat(), splitter=_xresnet_split, opt_func=ranger)`\n\nNow our learner is ready! We shall freeze the learner and train it and roughly after 10 epochs we can see the validation loss heavily drops and now we can do a prediction on an image.\n\nAs we can see, the loss has not yet started to shoot up which means we can still train for few more epochs with reduced learning rate. Also, we can unfreeze the model and train for a few more epochs to get even better results. Another trick that could improve the accuracy is to increase the size of the images, say from too 240 to 360, and train it.\n\nYou can get the full code from the GitHub link below and check out Jeremy Howard’s course on Deep Learning to know more about Fastai.\n\nThank you and keep learning!\n\nWritten by\n\nWritten by\n\n## Ajaykumaar S\n\n#### Hello World!",
null,
"## More From Medium\n\nWelcome to a place where words matter. On Medium, smart voices and original ideas take center stage - with no ads in sight. Watch\nFollow all the topics you care about, and we’ll deliver the best stories for you to your homepage and inbox. Explore\nGet unlimited access to the best stories on Medium — and support writers while you’re at it. Just \\$5/month. Upgrade"
] | [
null,
"https://miro.medium.com/fit/c/80/80/1*miCA9MEw8TjpXyR0xY1w-A.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.81631464,"math_prob":0.90600526,"size":5790,"snap":"2020-24-2020-29","text_gpt3_token_len":1394,"char_repetition_ratio":0.1004148,"word_repetition_ratio":0.002352941,"special_character_ratio":0.23471503,"punctuation_ratio":0.14039621,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9612552,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-05-29T03:17:11Z\",\"WARC-Record-ID\":\"<urn:uuid:be95d0c5-027c-4019-8c7a-0e1426ef4bb4>\",\"Content-Length\":\"192941\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:96001516-38f7-4f85-8775-425817c1da1b>\",\"WARC-Concurrent-To\":\"<urn:uuid:58c28b8c-5eff-4a29-89f7-5b812d82ad03>\",\"WARC-IP-Address\":\"104.16.120.127\",\"WARC-Target-URI\":\"https://medium.com/analytics-vidhya/facial-age-prediction-with-fastai2-d67fdb575539\",\"WARC-Payload-Digest\":\"sha1:NLVW6JEYWSOPQ6E6GV3HG5V5PGGDWALR\",\"WARC-Block-Digest\":\"sha1:K2GLZL2MXFOXLEOMUHBVZ5NSIMLI34JL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347401260.16_warc_CC-MAIN-20200529023731-20200529053731-00396.warc.gz\"}"} |
https://mathvis.academic.wlu.edu/ | [
"# Parametrizing Petal Projections\n\nWritten by Keally Rohrbacher and Sawyer Dunn-Matrullo (students in Math 383D Knot Theory Spring 2023).\n\nOur plan was to create models of knots with petal projections. This is a particular type of diagram of knots that has only one crossing, and a certain odd number of arcs which cross each other there. For example, Figure 1 shows the petal projection of the Trefoil Knot (image from Wikipedia). The numbers written on the arcs in Figure 1 are important, as the order in which the knot crosses through itself is what distinguishes different petal projections of knots from each other.\n\nWe wanted to produce 3D models of the 41, 52, and 61 knots, all of which have petal projections (shown in the figures below).\n\nSince it is not important what the actual picture of the petal projection is, just that it crosses through the center in a particular order, we had some choice as to how we were doing to construct these knots. We decided that it would be interesting to construct these as parameterized functions. We knew that we could make the x and y-components of this function fairly easily using a rose curve, a type of polar function which produces a cool rose that looks just like the petal projection in 2D. So, we knew that all we had to do was come up with a function for the z-axis which would parametrize the rose curve to go around and hit the center at certain heights to produce the petal we wanted. We knew which order the strand should go through the crossing in for each knot from the paper “Knot Projections with a Single Multi-Cossing” by Colin Adams and his coauthors.\n\nWe considered a couple ideas for finding a function that would hit these particular heights, but ultimately decided to try to find a polynomial function.\n\nWe did this by creating points with the heights we needed to hit at even intervals and plugged these points into an online calculator which uses Lagrangian interpolation to produce a polynomial which hit all of these points. Once we defined this as the z-component for our function, and used the rose curve for the x- and y-components, we plugged our curve into a 3D graphing calculator called GeoGebra. The side views, in figures 2, 4 and 6 (above and below), depict these heights being hit in proper order in accordance with the Adams’ paper. The top views in figures 3, 5 and 7 show the petal projection shape of the rose curve projected to the xy-plane.\n\nWe ran into a couple issues during this project. The most annoying of which was trying to define our curve as a parametrized function in Cinema 4D.\n\nBecause we had to hit so many points, the function on the z-axis ended up being a degree 11 polynomial for both knots. And while we were able to produce this on GeoGebra, a powerful calculator, we could not make a satisfactory spline of our function in Cinema 4D. We spent a long time trying to fix this problem by manipulating points on the software. We were exasperated to find that we could simply download an .stl file directly from GeoGebra, where we already had constructed the knot, circumventing our entire issue. (Note that 3D printers use .stl files as their start point.) This made its own set of issues, however, as the shapes from GeoGebra were not smooth. But, for our final print of the knots, we also graphed the curves on Mathematica like we did in GeoGebra, again avoiding the issue of plotting a curve in Cinema 4D and Mathematica gave us models with much smoother curves.\n\nThe most challenging part of coming up with our functions though was creating and working with high order polynomials, but this mostly just involved us typing out long equations many times. We found that WolframAlpha easily came up with the required higher order polynomials for the 61 knot (which ended up being a degree 14 polynomial).\n\n# Tying Knots on the Shortest Lattice Walks\n\nWritten by Chadrack Bantange and JCW (students in Math 383D Knot Theory Spring 2023).\n\n# Background:\n\nWhat is a minimal cubic lattice knot?\n\nLet’s start with: what is the cubic lattice? The cubic lattice is all the points (a, b, c) in R3 such that a, b, c are integers. That is, the cubic lattice is composed of all the points in three dimensions that have integer entries. The image to the right is a depiction of the cubic lattice (from Wolfram MathWorld).\n\nNext, what would it mean for a knot to be in the cubic lattice? Cubic lattice knots are knots whose vertices lie in the cubic lattice. That is, the vertices of the knot can be represented by (a, b, c) where a, b, c are integers.\n\nAnother way to think about cubic lattice knots would be to imagine you start your knot on the point (0, 0, 0). To take a step tracing the knot one must take one of the steps as depicted below. Observe that on this first step from the origin one has six options. One could go: left, right, up, down, forward, or back. Consider the diagram below which presents all six of the options. The step which traces the knot is itself not a part of the cubic lattice. If we move up from the origin to point (0, 0, 1) all the points between these vertices makeup the knot, but are not part of the cubic lattice per se.\n\nIn each of the subsequent steps you have similar options. Because you are walking within the cubic lattice, on each step you can only ever add +1 or -1 to just one of your x, y, or z coordinates to get your next vertex. Note: since we are drawing a knot, you will have one less option than when starting at the origin, because you do not want to walk back on the knot you have already drawn. If you go up from (0, 0, 0) to (0, 0, 1), then you could not go back down to (0, 0, 0) when tracing your knot.\n\nLastly, what is a minimal cubic lattice knot? Each knot has multiple equivalent representations and diagrams that can be reached via planar isotopies and Reidemeister moves. This means there are many versions of each kind of knot in the cubic lattice.\n\nConsider the trefoil. One could trace a trefoil knot in the cubic lattice that globally looked somewhat smooth and like a common depiction, as below on the left. This, however, is not particularly mathematically interesting or remarkable. Rather, to ask the more interesting question we ask: what is the minimal cubic lattice knot? That is, for each of the knot equivalence classes what is the shortest walk we could take in the cubic lattice to trace out a knot in the equivalence class? On the right, observe the minimal cubic lattice knot for the trefoil. In the case of the trefoil (31), the minimal walk in the cubic lattice is 24 steps. (Left image from the Rolfsen Knot Table Mosaic.)",
null,
"Figure showing the standard image of the trefoil knot (left), and the image of the minimal length trefoil in the cubic lattice (right).\n\n# Construction\n\nTo build the cubic lattice knots, we used Cinema 4D. As part of this process, we referred to Andrew Rechnitzer’s website on plotting minimal cubic lattices). For all the cubic lattices, we noticed that when importing the data from Notepad into Cinema 4D, one line of the coordinates was missing, making the object look incomplete. This was a challenge we experienced in constructing our knots. Correcting this issue involved manually attaching numbers to each line of the coordinates while also inserting a row of zeros in the first line of the dataset. As we went on building more knots, we noticed that this process was time-consuming. Alternatively, we used Microsoft Excel to copy our coordinates from Notepad, pasted into Excel using the “paste special” option, and we were able to use Excel to automatically attach numbers to the individual coordinates. We then pasted these coordinates back into Notepad and saved it as a “txt” format.\n\nOnce the data was imported into Cinema 4D, we went to “view” in the left menu, selected “Frame Geometry” to get a better geometric view of the knot. The spline automatically connected all the points except the last to the first. Before connecting them, we first made sure that under Object manager, we have “Rectangle—Spline” selected; under “Attributes—Object—Type” , choose “bezier or linear” depending on how smooth we want the corners of the knot to look like. For some of the knots, “bezier” was the best smoother of the corners, while for others, “linear” was best. To connect the vertices, we selected “spline pen” and were able to click on the two, unconnected points to add the last edge to the spline.\n\nAt this point all we had was a spline made of 1-dimensional lines. In order to make the knot 3-dimensional for printing and visualization, we added “circle” and “sweep” to our Cinema 4D workspace. The circle would serve as the cross section for our knot and the sweep is what swept the circle cross section along the spline we created above. We then adjusted the radius of the knot, which in most cases was 0.25 cm. In order to smooth out the corners, we added a Chamfer to the knots and set its radius at 0.25 cm for the 3, 4, 5, and 6 crossing knots and 0.3 cm for the 7 crossing cubic knots. We then scaled the object to be hand sized. There were a lot of variations in this regard depending on the knot, especially since some of the minimal cubic lattice knots occupied the space of a cube, while others occupied the space of a rectangular prism. In general, however, the scale ranged from 4 to 8 cm in all the three axes (x,y,z). We saved both .stl and .c4d files, ready to be printed in the IQ center at WLU.\n\nHere are some screenshots of our work. In both cases, the standard image of the knot came from the Rolfsen Knot Table Mosaic.",
null,
"On the left, the standard view of 5_2 knot. On the right, the minimal cubic lattice knot of 5_2.",
null,
"On the left the standard view of the knot 6_1. On the right the minimal cubic lattice knot of 6_1.\n\n# Flowering 3D Models\n\nWritten by Claire Gilreath. Joanne Wang, and Selihom Gobeze (students in Math 383D Knot Theory Spring 2023).\n\n# Building Petal Knots in Cinema 4D\n\nOnce we had found all of our coordinates for our petal knots, we were excited to actually build the knots in Cinema4D! Of course, this was not without its challenges. We started with 3_1. Our first step was to to import our points to a spline. Naturally, we ran into issues here because we were not aware that the points from the first row are read as the x, y, and z labels, so the data is not actually imported. We fixed this by editing our .txt file to include a row of 0s at the top. Also, one of the points was incorrect and we had to go back to WolframAlpha for a quick fix (this happened a couple of times throughout the process).\n\nAt this point, we were not aware that we needed to connect the last point to the first point, so we skipped that step and decided to try the Spline Smooth tool to round out our polygonal edges. We found that the result was not as uniform as we had expected but we decided to keep going to see what would happen after the sweep. We made a circle and decided to set the radius at 0.25cm in order to avoid self-intersections but give the model enough structure to support itself. Then, we made the sweep and looked at our slightly wonky 3_1 knot. We realized that the ends were not connected so we used stitch and sew to fix that (this was a result of failing to connect the last two points of the spline). To the right is our final improved 3_1 knot.\n\nWe moved on to the 5_1 knot and followed the same procedure. Once we had built it, Professor Denne looked at our slightly wonky 5_1, and suggested we try to use Chamfer tool instead of the Spline Smooth tool to make the model look smoother and more uniform, with less harsh edges of the petals. In doing this, we discovered that our last point vanished when we tried to Chamfer. We realized that we needed to close the spline by deleting our last point at (0,0,0) and using the Spline Pen to connect the first point to the last point to close the gap. We set the radius of the Chamfer to 3cm and compared our new 5_1 to the wonky one. We decided we liked the Chamfer version better and fixed our 3_1 the same way. This time we set the radius to 5cm, which we found looked much smoother and decided to stick with that for all knots. Our completed 5_1 knot is pictured above.\n\nWe repeated this same process with 6_2, expecting everything to be a lot easier now that we had solidified our methods. However, this was not the case. As we rotated around the knot after sweeping, we noticed that we had a self-intersection at one point and two strands that were concerningly close together. Though we’re not entirely sure why this happened, we think the Chamfer forced the two strands too close together. We first tried to make the radius of the circle inside the sweep smaller, but the strands were still touching at 0.17cm, so we decided to manually pull the spline points apart after the Chamfer but before the sweep. This fixed our problem and everything looked good after the sweep this time. Our completed 6_2 knot is pictured above.\n\nAs we built 6_3 we were worried about self-intersections, since the (x,y) coordinates were the same as in 6_2. We did not run into any problems this time, perhaps because of the different heights. Below is a picture of 6_3 from four angles, showing the “nice” petal view as well as the less attractive side views.\n\n# Trig Trials\n\nWritten by Claire Gilreath. Joanne Wang, and Selihom Gobeze (students in Math 383D Knot Theory Spring 2023).\n\n# Finding the Coordinates of Petal Knots\n\nWe decided we wanted to try to 3D print the petal projections of knots for our project because they looked pretty and seemed like a challenge.\n\nPetal projections of knots are projections where all of the crossings are aligned through a single line in 3d space, giving the knot a flower-like appearance. A two-dimensional petal projection of the trefoil (3_1) is pictured to the right (via this image from Wikipedia). We decided to work on creating petal versions of 3_1, 5_1, 6_2, and 6_3 knots. We relied heavily on the work described in Colin Adams’ “Knot Projections with a Single Multi-Crossing” paper. In the appendix of this paper, Adams and his coauthors described the order of each petal for the knots we hoped to construct. Also, we utilized the Wikipedia page about petal projections of knots to better understand their structure.\n\nWe were a bit uncertain of the best method to create these knots, since as far as we knew, there were no equations we could plug into Cinema 4D, nor were there coordinates constructed by someone else. After talking to Professor Denne, we felt that our best bet would be creating our own sets of coordinates using trig and polar coordinates. We decided to find four points per petal: the center, the top of the arc of each petal, and the end of both edges. To do this we constructed a circle through the ends of the edges, shown in red in the diagram of the trefoil below.",
null,
"This figure shows the details behind the computations needed to find the coordinates of points along the petal trefoil knot.\n\nWe then found the angle between each of the edges using the formula Pi/(number of petals), so for the trefoil, we had Pi/5. The length of each red edge was the radius of the circle, r, which we determined in a later step after finding the heights. Next, we added lines from the top of each petal to the center of our circle, shown in green, and found the angle between these lines using the formula 2Pi/(the number of petals), this was 2Pi/5 for the trefoil. To find the length of the green lines, we treated the curved part of the petal as a semicircle and found its radius by finding the length of the chord between the two red edges and dividing by 2. We then found the length of the other piece by using the Pythagorean theorem and added both measurements together. These calculations are shown in purple in the diagram above, where the length of the green line is given by the following formula: the square root of (r2.-(r*sin(Pi/5))2) plus r*sin(Pi/5). Using trig, this is r(cos(Pi/5)+sin(Pi/5)). We called this quantity x as we labeled the points of the trefoil in the diagram above.\n\nThe next step was to find the heights of each point and then convert everything into Cartesian coordinates in order to have points that Cinema4D could understand.\n\nWe used the orders from the last page of Adams’ paper which gave us the heights for the edges at (0,0). We also assigned heights to the points we found by averaging the heights of the edges to find the height for the top of the petal. We then averaged the height of the top of the petal and the edges at (0,0) to find the height of the end of each edge. For the trefoil, we came up with the heights to the right and then scaled them up by 1.5, so our model would be a bit larger after printing. This also guided our decision to make the radius of our circle (r) be 3 for 3_1 and 5_1. We used r = 3.5 for 6_2 and 6_3.\n\nAt this point, we had found the polar coordinates for all of the points we wished to plot in Cinema4D, so we converted them to decimal approximations with the help of WolframAlpha. The Cartesian coordinates of 3_1 are shown in the table below. We followed a similar process for 5_1, 6_2, and 6_3 knots. Points completed!\n\n Vertex x y z 1 0 0 6 2 -0.927050983 2.853169549 4.875 3 0 4.19040674 3.75 4 0.927050983 2.853169549 2.625 5 0 0 1.5 6 -0.927050983 -2.853169549 2.25 7 -2.463059283 -3.390110266 3 8 -2.427050983 -1.763355757 3.75 9 0 0 4.5 10 2.427050983 1.763355757 3.375 11 3.985313636 1.294906896 2.25 12 3 0 1.125 13 0 0 0 14 -3 0 0.75 15 -3.985313636 1.294906896 1.5 16 -2.427050983 1.763355757 2.25 17 0 0 3 18 2.427050983 -1.763355757 3.75 19 2.463059283 -3.390110266 4.5 20 0.927050983 -2.853169549 5.25 21 0 0 6\n\n# The Figure-8 Knot and its Mirror Image\n\nWritten by Elizabeth Marshall, Mason Shelley, and Libby Kerr (students in Math 383D Knot Theory Spring 2023).\n\nWe built different models depicting how the figure-8 knot 41 is achiral, meaning the knot is equivalent to its mirror image. In this case, if we were given a knot diagram, its mirror image would be the same diagram with swapped crossings. We observe some knot theory facts: the figure-8 knot (41) is a 1 component alternating knot with a crossing number of 4 and an unknotting number of 1. The crossing number of a link is the minimum number of crossings needed in a diagram, and the unknotting number is the minimum number of times the knot must pass through itself before it becomes the unknot. While the knot looks trivial, its composition is surprisingly difficult to model.\n\nStarting with a mirror image of the figure-8 knot (Step 1 below), each of the subsequent steps shows one or two R-moves that lead the knot back to its original state. An R-move is a simple manipulation of a piece of the knot in space, where a series of them can result in significant alteration of the knot. The steps are as follows:",
null,
"Step2: Make an R-2 move that forms two “over” crossings on top of the figure-8 shape of the knot.",
null,
"Step 3: Make an R-3 move that brings the arc on the left of the figure-8 shape to the center of the diagram.",
null,
"Step 4: Make an R-2 move that pulls the now central arc over to the right of the main shape.",
null,
"Step 5: Complete an R-3 and an R-1 move that form an outer arc to the top right.",
null,
"Step 7: Flip the figure in Step 6 180 degrees. This move reveals the figure-8 knot with crossings that are the opposite of those seen in Figure 1.\n\nWe used Cinema 4D to develop our models. On this software, we used the drawing tool to build our original knot shown in Step 1. To draw the subsequent steps representative of the R-moves we manipulated the knot from the previous step, scaled it appropriately, and checked to make sure all crossings were correct.\n\nUsing new software presented numerous challenges that we faced throughout the project. We initially found parametric equations on Wikipedia for the figure-8 knot that resulted in the wrong knot when visualized in Mathematica and then resulted in a simple line in space when entered in Cinema 4D. This meant that the this parameterization was not going to work. Without these equations, we were forced to draw the knot using the spline pen. It took nearly 10 attempts to draw the first image of the knot but eventually, we made a cohesive, recognizable figure-8 knot.\n\nUsing this tool, created one big challenge with regards to placing the points for future manipulation during the design process. Our most successful strategy was to draw the structure of the knot in 2D (holding the z-axis flat) and manipulating the points to create the over/under crossings after the knot was drawn.\n\nAnother challenge was figuring out how to properly demonstrate the transition of the knot’s mirror image back to its original state. In order to make the manipulations clear, we made a video showing the use of R-moves to visualize how the diagrams were interconnected.\n\nTowards the end of our design process, we decided to take out the 6th step (shown above) because there was too much similarity between steps 6 and 7 (final model); that is why you’ll see 7 steps and only 6 models.\n\nOverall, the main challenge that we ran into was using the software and properly putting our ideas into the diagrams. A massive help to our group was Dave Pfaff (IQ Center WLU), who is the expert that helped us navigate Cinema 4D. Thank you so much Dave, all of your help is greatly appreciated.\n\n# New Torus Link, Improved Visualizations, and Cinema 4D Problems.\n\nWritten by Hillis Burns, Shannon Timoney, Hall Pritchard (students in Math 383D Knot Theory Spring 2023).\n\nWe created the T(2, 8) torus link (or 821 link) using Cinema 4D. The equations for this two component link are x = Cos[t]*(3+Cos[4t]), y = Sin[t]*(3+Cos[4t]), and z = Sin[4t]. The second component is created by the equations x = Cos[t]*(3+Cos[4t+Pi]), y = Sin[t]*(3+Cos[4t+Pi]), and z = Sin[4t+Pi].\n\nWe also created the T(3, 3) torus link (or 632 link). With the 632 link, each of the three components goes around the longitude once and goes around the meridian once. The equations for this knot are x1 = Cos[t]*(3+Cos[t]), y1 = Sin[t]*(3+Cos[t]), z1 = Sin[t], x2 = Cos[t]*(3+Cos[t+2*Pi/3]), y2 = Sin[t]*(3+Cos[t+2*Pi/3]), z2 = Sin[t+2*Pi/3], and x3 = Cos[t]*(3+Cos[t+4*Pi/3]), y3 = Sin[t]*(3+Cos[t+4*Pi/3]), z3 = Sin[t+4*Pi/3].\n\nAfter creating the T(3, 3) or 632 link, we wanted to build a model that helps demonstrate what the torus link actually is. We did this by first opening back up the T(3, 6) or 632 in Cinema 4D. Then we created a torus surface and rotated it 90° so that the torus link sat in the right position on the torus surface. Then we changed the radius of both the meridian and the longitude so that the 3d model was in a presentable format. Our final model which is shown above gives a good physical representation of how a torus link is constructed.\n\nWe also created the T(3, 6) torus link (or 633 link). With the 633 link, each of the three components goes once around the longitude and goes three times around the meridian. The equations for this knot are x1 = Cos[t]*(3+Cos[2t]), y1 = Sin[1t]*(3+Cos[2t]), z1 = Sin[2t], x2 = Cos[t]*(3+Cos[2t+2*Pi/3]), y2 = Sin[t]*(3+Cos[2t+Pi]), z2 = Sin[2t+2*Pi/3], and x3 = Cos[t]*(3+Cos[2t+4*Pi/3]), y3 = Sin[t]*(3+Cos[2t+4*Pi/3]), z3 = Sin[2t+4*Pi/3].\n\nWhile using the Cinema 4D software, the biggest problem we had was fixing the join at the end of two strands. In Cinema 4D, the join will sometimes not look correct. In order to fix this, we will first decrease the period of the parametric equations in order to make the join fully noticeable. We decreased it from 2π (~6.28) to 6.275.\n\nWe then try to highlight all the points at the end of the knot, in order to use the “stitch and sew” function. We came across problems when we accidentally highlighted other points not at the end. This is shown in the figure to the right. This would happen more often when we increased the sample size to a number that was higher than necessary (>300). This is the case in the image below. By decreasing the sample size, it made it easier to highlight just the end points of the knot, as shown below. At this point we were able to successfully use the “stitch and sew” function.",
null,
"This image shows that just the end points of the two ends of the torus link are highlighted. This allowed us to successfully use the Stitch and Sew function to join the ends together.\n\n# Overview of Torus Shapes, Knots, and Links\n\nWritten by Hillis Burns, Shannon Timoney, Hall Pritchard (students in Math 383D Knot Theory Spring 2023).\n\nThe torus is the surface of a donut in 3-dimensions. A torus knot/link is a knot/link that can be moved to lay on the torus surface in R3.The image on the right shows a link being wrapped to lie on a torus; this is the T(3, 6) or 633 torus link. Knots are also commonly described in knot tables using the notation, Crnj. The crossing number is denoted by Cr, the number of components by n, and the certain configuration is j. As seen in this image, the torus has two key circles: the longitude, which wraps around the long way of the torus, and the meridian, which wraps around the short way. These are illustrated in the image below.\n\nThe notation for torus knots is T(p, q); The knot wraps around the longitude p times, while it wraps q times around the meridian. The two figures are from a Mathematica file which visualizes torus links, and this website. This Knot Plot website also has a neat table showing many of the torus knots and links.\n\nUsing a Mathematica file provided by Professor Denne, we were able to start creating the T(2, 4) torus link (or 421) in Cinema 4D. This is a two-component link where each component goes once around the longitude and twice around the meridian, as illustrated below.\n\nThe parametric equations that create the link are x = Cos[t]*(2+Cos[2t]), y = Sin[t]*(2+Cos[2t]), z = Sin[2t], with t going from 0 to 2Pi. However, for this link there are two components. Thus, a second equation was needed for the second component. To create a torus link like this, the second equation must be rotated 180 degrees to fit with the first curve. To do that, we added Pi to the trigonometric equations of the first sweep: x = Cos[t]*(3+Cos[2t+Pi]), y = Sin[t]*(3+Cos[2t+Pi]), z = Sin[2t+Pi].\n\nWe also decided to create the T(2, 11) torus knot (also known as 111) in addition to the links in Cinema 4D. This is a knot (one component link) where the curve goes twice around the longitude and 11 times around the meridian. The topmost figure below shows the original image of the torus knot that we created. The knot does not look smooth, as Cinema 4D only evaluated a few points along the parametrized curve. However, after adding more sample points we were able to make the torus knot smoother. The progression of sample points, from 20, to 50, to 100, to 200, is shown from top to bottom:",
null,
"",
null,
"We also had to adjust the radius because the loops were too close together. In order to spread out the components, the radius was changed from 2 to 3.\n\nNext, we created a T(2, 6) torus link (or 621) in Cinema 4D. With the 621, each of the two components goes once around the longitude and three times around the meridian. Using the Mathematica file, we knew that the equations for this link were, x = Cos[t]*(3+Cos[3t]), y = Sin[t]*(3+Cos[3t]), z = Sin[3t]. The second component’s equations, again rotated by Pi, consist of, x = Cos[t]*(3+Cos[3t+Pi]), y = Sin[t]*(3+Cos[3t+Pi]), z = Sin[3t+Pi].\n\n# 3D Printed 7_1 Mosaic Knot\n\nWritten by Sion Jang and Charlotte Peete (students in Math 383D Knot Theory Spring 2023).\n\nThe mosaic number for the 71 knot is six, meaning that it cannot be created on a grid using mosaic tiles smaller than 6 x 6. We are creating a 3D version of 71 mosaic knot using Cinema 4D as our main design program. We created this knot using the same method as our Trefoil knot. However, we made changes to the Chamfering process, the diameter of the tube, and the distance between some of the over-strands and feet.\n\nWhen creating this knot with the same process as the Trefoil and Figure-8 knots, we came across a few problems.",
null,
"Figure 1: The red arrows show the three feet where the z-coordinates were changed.\n\nSince the 71 knot has more crossings than either of the other two knots we created, we found that the close proximity of the feet would lead to self-intersections. We first changed the diameter of the circle from 6 to 4 mm. While this change helped to solve the intersection problem, the feet still seemed to be close together. Aesthetically, we still weren’t satisfied with how crowded the feet looked. So, we changed the z-coordinates of the horizontal feet to create more space between the adjacent vertical feet. The arrows in Figure 1 point to the three feet for which we changed these coordinates. Figure 2 shows an overhead view of the spacing between the feet with these changes.\n\nThe biggest challenge we came across with this knot was figuring out how to properly curve vertices without distorting the rest of the knot. Our original method of Chamfering did not work because there wasn’t enough space between the curves of the knot and the feet. To fix this problem, we added an additional point next to each vertex of the over-strand immediately before the foot. These points were added as close to the original vertices as possible.\n\nFigure 3 shows our final product.\n\n# 3D Printed Trefoil Mosaic Knot\n\nWritten by Sion Jang and Charlotte Peete (students in Math 383D Knot Theory Spring 2023).\n\nMosaic knot theory uses a combination of the following eleven tiles to create a knot or a link representation on an nxn grid. These tiles are shown in Figure 1. As explained in the Knot Mosaic Tabulation paper by Hwa Jeong Lee, Lewis D. Ludwig, Joseph S. Paat, and Amanda Peiffer, the mosaic number of a knot K is the smallest integer n for which K can be represented on an mosaic board. This mosaic number is a knot invariant and can be used to distinguish between two knots.\n\nThe Knot Mosaic Tabulation Paper provided the minimal grid mosaic diagrams for all 36 prime knots of eight crossings or fewer. The mosaic number for a trefoil is four, m(31)=4. Thus, a trefoil cannot be created on a 3 × 3 grid, and the minimum grid needed is a 4×4 grid. We created a 3D version of 31 Trefoil mosaic knot using Cinema 4D as our main design program. To create this knot, we first drew a coordinate plane onto the diagram of the Trefoil knot, as shown in Figure 2.\n\nWe did this so that we could insert integer coordinates into Cinema 4D that would allow for accurate spacing in our knot. Our final knot was scaled to be around 7×7 cm without the circle sweep, which is the thick tube around the curve.\n\nOne of the main challenges we encountered in constructing this knot was figuring out how to represent the over and under-crossings. Using this blog by Laura Taalman as inspiration, we decided to create “feet” for our knot. The distance from the center of each foot to the center of the over-strand is 1 cm. We also put a .125 mm space on either side of the over-strand between the legs to show that the strands are not connected. Figure 3 shows a close-up representation of one of the feet and legs on this knot.",
null,
"Figure 3: The foot and two legs create a strand which crosses under a different strand of the knot.\n\nThe coordinates we chose made the knot have an angular rather than curved shape, and we manually curved the knot in Cinema 4D using the “Chamfer tool” once our knot was constructed. Figure 4 shows our knot before we curved the edges. Figure 5 shows our knot after we curved the edges. For the different curves, we used different values of the Chamfer to get the desired look.",
null,
"Figure 4: Top view & angled view of the trefoil knot before curving the corners.",
null,
"Figure 5: Top view & angled view of the trefoil knot after curving the corners.\n\nUsing this same process, we created 41, the Figure-8 knot. The final product is shown in Figure 6.\n\n# Modeling and 3D-Printing Equilateral Stick Knots\n\nWritten by Aidan Aengus Kitchen, Arun Ghosh, and Alex Wolff (students in Math 383D Knot Theory Spring 2023).\n\nBrief Math Background:\n\nA knot is a simple, closed curve in space. This means that it forms a closed loop and does not intersect itself. The figure to the left illustrates the simplest knots with 0 to 8 crossings (from https://knotplot.com/zoo/). The knots are labelled with the crossing number and a subscript which is a number that distinguishes between different knots with the same number of crossings. A polygonal knot is composed of a finite number of edges or straight sticks. For an equilateral stick knot, the edges have the same length. We constructed equilateral stick knots representing the knots shown above. In the models we made, we used the minimum number of sticks necessary to construct the knots.\n\nClayton Shonkwiler is an Associate Professor of Mathematics from the University of Colorado. His primary research is using geometry to solve topological and physical problems. His recent work published in 2022, New Stick number bounds from random sampling confined polygons, looks at equilateral stick knots. The paper focuses on finding the upper and lower bounds for stick number and also gives the coordinates for constructing the knots in 3 dimensional space. These coordinates can be found in following GitHub repository: https://github.com/thomaseddy/stick-knot-gen/tree/master/stick_number/mseq_knots\n\nThe table below displays the crossing number of the knots we constructed, as well as the number of sticks we used for each one.\n\n Knot Crossing # # of sticks used 31 3 6 41 4 7 51 5 8 52 5 8 61 6 8 62 6 8 63 6 8 71 7 9 72 7 9 73 7 9 74 7 9 75 7 9 76 7 9 77 7 9 949 9 9 K11a1 11 12 K12n63 12 11\n\nHow We Built the Knots:\n\nFor each knot, we first retrieved the coordinate data from the GitHub repository and saved it in a text file. Then, we imported the data into Cinema4D by creating a linear spline object, opening the Structure window, clicking on “Import ASCII Data” and navigating to the text file. In order to close the knot, we had to add a point at the origin and connect it to the last point in the imported data using the Spline Pen.\n\nThen, we resized the spline to 6-7 cm for each dimension, and swept a circle of radius 0.3 cm around the spline, creating a tube around the knot. We did this to make the knots 3-dimensional, because we cannot 3D-print a spline (a 1-dimensional object in 3D space). To round the corners, we used the Chamfer function. Finally, we exported the models as .stl files and 3D printed them. In total, we designed all of the equilateral stick knots with three, four, five, six, and seven crossings. Additionally, we modeled knots with higher crossing number, such as 949, k11A1, and k12N603. The images above and below depict the 31 and 61 equilateral stick knot models.\n\nChallenges and Observations:\n\n• Scaling\n• Self-intersections\n\nInitially, we did not scale our models to the appropriate size (suggested 6-7cm, or palm size). We also moved vertices around in the original 63 model, so the sticks lengths were altered, and we had to redo it. Additionally, the radius of the sticks was too small to neatly 3D print a label on the physical knot, so we decided to tape the labels on after the knots were printed. One interesting observation we made was increasing the radius of the tubes created self-intersections between the “tubes” that were not evident in the spline. The images above and below highlight self-intersections in the 61 and K12n630 models."
] | [
null,
"https://mathvis.academic.wlu.edu/files/2023/05/Bantange.Ward-Image3-300x127.jpg",
null,
"https://mathvis.academic.wlu.edu/files/2023/05/Bantange.Ward-Image4-300x127.jpg",
null,
"https://mathvis.academic.wlu.edu/files/2023/05/Bantange.Ward-Image5-300x116.jpg",
null,
"https://mathvis.academic.wlu.edu/files/2023/05/Gilreath.Gobeze.Wang-Post-1-Fig-2-300x287.jpg",
null,
"https://mathvis.academic.wlu.edu/files/2023/05/fig8-step-2-233x300.jpg",
null,
"https://mathvis.academic.wlu.edu/files/2023/05/fig8-step-3-230x300.jpg",
null,
"https://mathvis.academic.wlu.edu/files/2023/05/fig8-step-4-235x300.jpg",
null,
"https://mathvis.academic.wlu.edu/files/2023/05/fig8-step-5-231x300.jpg",
null,
"https://mathvis.academic.wlu.edu/files/2023/05/fig8-step-7-229x300.jpg",
null,
"https://mathvis.academic.wlu.edu/files/2023/05/Burns-Timoney-Pritchard-figure-10-300x167.jpg",
null,
"https://mathvis.academic.wlu.edu/files/2023/05/Burns-Timoney-Pritchard-figure-6-300x208.jpg",
null,
"https://mathvis.academic.wlu.edu/files/2023/05/Burns-Timoney-Pritchard-figure-7-300x225.jpg",
null,
"https://mathvis.academic.wlu.edu/files/2023/05/Jang.Peete_.-blog2fig1-300x281.jpg",
null,
"https://mathvis.academic.wlu.edu/files/2023/05/Jang.Peete_.-fig3.jpg",
null,
"https://mathvis.academic.wlu.edu/files/2023/05/Jang.Peete_.-fig4-300x140.jpg",
null,
"https://mathvis.academic.wlu.edu/files/2023/05/Jang.Peete_.-fig5-300x150.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9665075,"math_prob":0.9680191,"size":3773,"snap":"2023-14-2023-23","text_gpt3_token_len":823,"char_repetition_ratio":0.118599094,"word_repetition_ratio":0.0030165913,"special_character_ratio":0.20699708,"punctuation_ratio":0.07786885,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9853471,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32],"im_url_duplicate_count":[null,3,null,3,null,3,null,3,null,4,null,4,null,4,null,4,null,4,null,4,null,3,null,3,null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-05-31T17:05:25Z\",\"WARC-Record-ID\":\"<urn:uuid:d75d20de-70a4-4a20-b24f-00feff5ae978>\",\"Content-Length\":\"146584\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6ff4325e-afd1-4b48-a70d-04ea68af2ef1>\",\"WARC-Concurrent-To\":\"<urn:uuid:88700b25-cefb-4a3d-8188-be2346d5980e>\",\"WARC-IP-Address\":\"51.81.77.114\",\"WARC-Target-URI\":\"https://mathvis.academic.wlu.edu/\",\"WARC-Payload-Digest\":\"sha1:OWV3L3WG65YI5FFWEXUUOX7SD2N5KX3C\",\"WARC-Block-Digest\":\"sha1:QY26WGGDYDR5YFHTWFQFSDRP3ADCHMY5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224646937.1_warc_CC-MAIN-20230531150014-20230531180014-00136.warc.gz\"}"} |
https://www.analog.com/en/analog-dialogue/articles/adjustable-cable-equalizer-wideband-differential-receiver.html | [
"Originally intended to carry LAN traffic, category-5 (Cat-5) unshielded twisted-pair (UTP) cable has become an economical solution in many other signal-transmission applications, owing to its respectable performance and low cost. For instance, an application that has become popular is keyboard-video-mouse (KVM) networking, in which three of the four twisted pairs carry the red, green, and blue (RGB) video signals.\n\nLike any transmission medium, Cat-5 imposes transmission losses on the signals it carries, manifested as signal dispersion and loss of high-frequency content. Unless something is done to compensate for these losses, they can render the cables useless for transmitting high-resolution video signals over reasonable distances. Presented here is a practical technique to compensate for Cat-5 losses by introducing an equalizer (EQ), with eleven (11) switchable cable-range settings, at the receiving end of the cable. Because each setting of the EQ provides the proper amount of frequency-dependent gain to make up for the cable losses, the EQ-cable combination becomes suitable for high-resolution video transmission.\n\nThe first step in the EQ design is to derive a model for the Cat-5 frequency response. It is well known that the frequency response of metallic cable follows a low-pass characteristic, with an exponential roll-off that depends on the square root of frequency. Figure 1 depicts this relationship for lengths of Cat-5 from 100 feet (30.48 m) through 1000 feet (304.8 m), in 100-foot increments. In this illustration, it should be evident that the power loss at a given frequency is characterized by a constant attenuation rate (expressed in dB/ft).\n\nTable I presents the Cat-5 equivalent voltage-attenuation magnitudes as a function of frequency for the same cable lengths as shown in Figure 1.\n\nTable I. Voltage-attenuation magnitude ratios of Cat-5 cable. For example, 500 feet of cable attenuates a 10-MHz, 1-V signal to 0.32 V, which corresponds to about –9.90 dB (Figure 1).\n\n Frequency 100 ft 200 ft 300 ft 400 ft 500 ft 600 ft 700 ft 800 ft 900 ft 1000 ft 1 MHz 0.932 0.869 0.8100 0.7550 0.7040 0.65600 0.6120 0.57000 0.53200 0.496000 14 MHz 0.866 0.750 0.6490 0.5620 0.4870 0.42200 0.3650 0.31600 0.27400 0.237000 10 MHz 0.796 0.634 0.5040 0.4020 0.3200 0.25400 0.2030 0.16100 0.12800 0.102000 16 MHz 0.750 0.562 0.4220 0.3160 0.2370 0.17800 0.1330 0.10000 0.07500 0.056300 20 MHz 0.722 0.521 0.3760 0.2710 0.1960 0.14100 0.1020 0.07350 0.05300 0.038300 31 MHz 0.663 0.440 0.2920 0.1940 0.1280 0.08510 0.0565 0.03750 0.02480 0.016500 63 MHz 0.551 0.303 0.1670 0.0920 0.0507 0.02790 0.0154 0.00846 0.00466 0.002570 100 MHz 0.462 0.214 0.0987 0.0456 0.0211 0.00973 0.0045 0.00208 0.00096 0.000444\n\nUsing the data in Table I, the frequency response for each cable length can be approximated by a mathematical model based on a negative-real-axis pole-zero transfer function. Any one of the many available math software packages with the capability of least-squares polynomial curve fitting can be used to perform the approximation. Figure 1 suggests that, for long cables at high frequencies—because of the steepening slope, exceeding 20 dB/decade—consecutive negative-real-axis poles are required to obtain a close fit, while at low frequencies—to fit the nearly linear slope—alternating poles and zeros are required. As an extreme example, the frequency response for 1000 feet of cable at 100 MHz is rolling off approximately as 1/f4, which can only be attained by a model having multiple consecutive poles.\n\nEqualization is achieved by passing the signal received over the cable through an equalizer whose transfer function is the reciprocal of the cable pole-zero model’s transfer function. To neutralize the cable’s frequency-dependency, the EQ has poles that are coincident with the zeros of the cable model and zeros that are coincident with the poles of the cable model.\n\nOne of the properties of passive RC networks is that the alternating poles and zeros of their driving-point impedances are restricted to the negative real axis. This property also holds for those operational-amplifier circuits having a transfer function determined by the simple ratio of feedback-impedance to gain-impedance (Zf/Zg), where these impedances are RC networks. (The property does not hold for other cases, such as active RC filter sections that synthesize conjugate pole-pairs.)\n\nFor a practical equalizer design, we prefer that an EQ be based on a single amplifier stage in order to keep its adjustability manageable and to minimize cost and complexity. The equalizer to be discussed here uses RC networks of the former type, described by Budak, with alternating poles and zeros; but such a design precludes the use of a single amplifier stage to realize the consecutive zeros required to compensate for consecutive poles in the cable model at all frequencies. As a compromise that will provide good equalization for all but long cables at high frequencies, the design chosen uses a single amplifier to realize two zeros and two poles, alternating on the negative real axis.\n\nBecause equalization requires increased gain at the high end of the band, a low-noise amplifier is required. To avoid introducing significant errors due to amplifier dynamics, a large gain-bandwidth product is needed. For the specific design requirements of this application, the amplifier must have the capacity to perform frequency-dependent differential-to-single-ended transformations with voltage gain. The Analog Devices AD8129, just such an amplifier, is the heart of the basic frequency-dependent gain stage in the EQ. Figure 2 shows the dual-differential-input architecture of the AD8129, and its standard closed-loop configuration for applications requiring voltage gain.\n\nAs can be seen, the AD8129 circuitry and operation differ from those of the traditional op amp; principally, it provides the designer with a beneficial separation of circuitry between the differential input and the feedback network. The two input stages are high-impedance, high-common-mode-rejection (CMR), wideband, high-gain transconductance amplifiers with closely matched gm. The output currents of the two transconductance amplifiers are summed (at high impedance), and the voltage at the summing node is buffered to provide a low impedance output. The output current of amplifier A equals the negative of the output current of amplifier B, and their transconductances are closely matched, so negative feedback applied around amplifier B drives vout to the level that forces the input voltage of amplifier B to equal the negative of the input voltage of amplifier A. From the above discussion, the closed-loop voltage gain for the ideal case can be expressed as:",
null,
"(1)\n\nThe EQ is designed using this gain equation, with RC networks for Zf and Zg. Its canonical circuit is depicted in Figure 3, which represents an EQ designed to compensate for a given length of cable.\n\nIn Figure 3, the high differential input impedance of the upper transconductance amplifier facilitates provision of a good impedance match for the signal to be received over the Cat-5 cable; the lower amplifier provides the negative feedback circuit that implements the frequency-dependent gain. The Bode plot for the circuit has a high-pass characteristic, as shown in Figure 4. Zn and Pn are the respective zeros and poles of the equalizer.\n\nIn the following analysis, where the pole-zero pairs in Figure 4 are sufficiently separated, the capacitors can be approximated as short- or open circuits. The pole- and zero frequencies are expressed in radians per second. At low frequencies, all capacitors are open circuits, and the gain is simply\n\nThis gain, set to compensate for flat (i.e., dc) losses, includes any loss due to matching and the cable’s low-frequency flat loss. It also provides the flat gain required to stabilize the AD8129 when equalizing short cables (to be covered in greater depth below).\n\nMoving up in frequency, the lowest-frequency pole-zero EQ section, comprising the series-connected REQ and CEQ, starts to take effect, producing Z1 and P1. By approximating Cf and CS as open circuits, the following equations can be written:",
null,
"(2)",
null,
"(3)\n\nThe magnitude of the frequency response asymptotically approaches\n\nas CEQ approaches a short circuit.\n\nAs frequency increases, CS begins to take effect, introducing another zero, Z2. The primary function of Cf is to keep the amplifier stable by compensating for CS. By initially approximating Cf as an open circuit (Cf <<CS), and presuming that Z2 is sufficiently far in frequency from P1 that CEQ can be considered as having negligible impedance compared to REQ, the approximate expression for Z2 can be written:",
null,
"(4)\n\nFinally, P2 can be expressed as:",
null,
"(5)\n\nBetween P2 and P3, the magnitude of the frequency response asymptotically approaches the closed-loop gain produced by the capacitive divider formed by Cf and CS,\n\nThis is the closed-loop gain at frequencies leading up to P3, so P3, which is due to the amplifier’s dominant-pole roll-off, can be approximated as:",
null,
"(6)\n\nwhere AO is the amplifier’s dc open-loop gain, and ωp is the amplifier’s dominant pole. This result follows directly from standard op-amp gain-bandwidth analysis. P3 is imposed by the gain-bandwidth product of the amplifier, and sets the approximate upper frequency limit of the equalizer. Using the above results, along with the cable’s pole-zero model, an EQ can be designed for any practical length of cable that can be modeled by two alternating pole-zero pairs, provided that the amplifier has a sufficiently high gain-bandwidth product.\n\nIn order for the EQ to be useful over a wide range of cable lengths, it must be adjustable. A simple means of adding adjustability is to switch different RC networks between the feedback pin of the AD8129 and ground. This scheme is illustrated in Figure 5.\n\nEach EQ section in Figure 5 is appropriate for a range of cable lengths. Section EQ0 covers 0 to 50 feet, and Section EQ10 covers 950 to 1000 feet. The other sections are centered on 100 feet, 200 feet, etc., and cover ±50 feet from their centers. This resolution is sufficient for most RGB applications.\n\n### Practical Matters\n\nThe AD8129 is stable for gains greater than 10 V/V, where it has a nominal phase margin of 56°, but if care is taken with regard to layout and parasitic capacitance, it can be successfully operated with a gain of 8, where it has approximately 45° of phase margin. This gain is required at high frequencies. For the longer cable lengths, sufficient high frequency gain is provided by the high-pass nature of the equalizer. For cable lengths between 0 and 300 feet, however, excess flat gain is required in order to keep the AD8129 stable. Because the excess gain is flat, it can be easily inserted by adjusting the Rf/Rg ratio, and removed by switching in the same amount of flat attenuation after the equalizer.\n\nThe AD8129’s input stage has a limited linear dynamic range (±0.5-V operating range). For optimum performance, it is best to attenuate the 700-mV RGB video signals by a factor of four before applying them to the AD8129 inputs. Sometimes the video signals are already attenuated by a factor of two before transmission over the cable. (This is not the matching loss—which is normally accounted for by using a cable driver with a gain of 2.) In this case, an additional factor-of-two attenuation can be inserted at the input to the AD8129 to produce an end-to-end flat attenuation factor of four. A buffer with a flat gain of four, placed after the EQ, is used to compensate for this attenuation (the AD8001 is an excellent choice for this stage). The buffer also simplifies the switched attenuator at the EQ output, which can be a simple L-pad.\n\nThe parasitic capacitance of each off channel in the ADG704 analog multiplexers used to select the EQ sections is 9 pF. The parasitic capacitance of the sum all of the unselected EQ sections is therefore quite large; it adds to the CS value of the selected EQ section. For the EQ sections from 400 to 1000 feet, this parasitic capacitance can usually be absorbed into CS. For the shorter sections, the excess closed-loop gain described above is used to compensate for the peaking caused by the parasitic capacitance. As a general rule, it is best to scale the impedances used in the EQ sections in such a way as to maximize the capacitance values, thus allowing absorption of as much parasitic capacitance into CS as possible. This can’t be carried too far however, since it reduces the associated resistances. The scaling is also limited by the parasitic inductance in the traces that connect the EQ sections. Small resistances provide little damping; if the resistance levels are too small, a moderate-Q tank circuit, resulting from the parasitic trace inductances and switch capacitances, can cause instability in the AD8129.\n\nOptimizing the EQ PCB layout is of paramount importance. The major part of all power- and ground-plane copper must be removed from all layers under the traces that connect to the AD8129 summing node. Small ground-plane strips can be strategically placed as needed in these areas to provide low-Z return current paths, while minimizing stray capacitance at the summing node. The AD8129s and ADG704s should be in µSOIC packages, and the AD8001 should be in the SOT-5 package. Trace inductance in the EQ sections must be kept to an absolute minimum to avoid instability in the AD8129, so 0402 packages should be used for the resistors and capacitors, and the EQ sections should be laid out in such a way as to minimize trace lengths.\n\nAfter the RC values that are based on the cable model have been determined, and the parasitic effects have been taken into account, a final tuning process in the time domain is required for RGB video applications. This is because one of the most important performance metrics for RGB video circuits is the step response; the step response of the cable and EQ combination must be tuned so as to exhibit fast rise time, minimum overshoot and ringing, and short settling time. CS has the greatest effect on overshoot and ringing, and the series connection of REQ and CEQ has the greatest effect on the long-term settling time. The positions of the pole and the zero produced by the series connection of REQ and CEQ can be altered somewhat without changing the frequency response a great deal, because they are placed where the cable’s frequency response has a rather gradual roll-off. This means that the equalized frequency response can appear to be quite good, while the positions of the pole and zero can be suboptimal from a step-response standpoint. It is therefore best to fine-tune the values of CS, REQ, and CEQ in the time domain by adjusting their values to produce a step response with the shortest settling time.\n\nSince the equalizer must interface with long differential cables with no ground reference, the received signal may contain large common-mode voltage swings with respect to the power supply voltages at the receiver. It is therefore best to use dual power supplies of at least ±5 V. This also allows the output signal to swing to 0 V, which is generally required for video signals.\n\n### Conclusion\n\nThe equalizer presented here can stably compensate for lengths of Cat-5 cable from 0 to 1000 feet at frequencies to greater than 100 MHz at short cable lengths and to 25 MHz at 1000 feet, making it suitable for KVM networking and other high-resolution video transmission applications.\n\n### References\n\nPassive and Active Network Analysis and Synthesis, by Aram Budak, Houghton Mifflin, 1974."
] | [
null,
"https://www.analog.com/-/media/images/analog-dialogue/en/volume-38/number-3/articles/adjustable-cable-equalizer-wideband-differential-receiver/cable-equalizer-eq-01.gif",
null,
"https://www.analog.com/-/media/images/analog-dialogue/en/volume-38/number-3/articles/adjustable-cable-equalizer-wideband-differential-receiver/cable-equalizer-eq-03.gif",
null,
"https://www.analog.com/-/media/images/analog-dialogue/en/volume-38/number-3/articles/adjustable-cable-equalizer-wideband-differential-receiver/cable-equalizer-eq-04.gif",
null,
"https://www.analog.com/-/media/images/analog-dialogue/en/volume-38/number-3/articles/adjustable-cable-equalizer-wideband-differential-receiver/cable-equalizer-eq-06.gif",
null,
"https://www.analog.com/-/media/images/analog-dialogue/en/volume-38/number-3/articles/adjustable-cable-equalizer-wideband-differential-receiver/cable-equalizer-eq-07.gif",
null,
"https://www.analog.com/-/media/images/analog-dialogue/en/volume-38/number-3/articles/adjustable-cable-equalizer-wideband-differential-receiver/cable-equalizer-eq-09.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9153476,"math_prob":0.96067727,"size":15718,"snap":"2023-40-2023-50","text_gpt3_token_len":3664,"char_repetition_ratio":0.1310933,"word_repetition_ratio":0.011124355,"special_character_ratio":0.24277899,"punctuation_ratio":0.104899704,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9552609,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,9,null,9,null,9,null,9,null,9,null,9,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-03T13:00:34Z\",\"WARC-Record-ID\":\"<urn:uuid:b0c6c0b2-7e75-4ec9-b649-33f160c4f12f>\",\"Content-Length\":\"95007\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b1535fba-028e-453d-896f-21e69a9bf9f6>\",\"WARC-Concurrent-To\":\"<urn:uuid:2f6a080d-c7cb-493f-97ee-3b10678aff53>\",\"WARC-IP-Address\":\"23.208.33.208\",\"WARC-Target-URI\":\"https://www.analog.com/en/analog-dialogue/articles/adjustable-cable-equalizer-wideband-differential-receiver.html\",\"WARC-Payload-Digest\":\"sha1:BWHDBUJXV7B74MKDSWZLHYU7DBEAH5HF\",\"WARC-Block-Digest\":\"sha1:RIWCIPK4UNMEG66JDW4VZ2BI3WMLJDZ3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233511106.1_warc_CC-MAIN-20231003124522-20231003154522-00661.warc.gz\"}"} |
http://www.kylesconverter.com/area-density/zentners-per-hectare-to-milligrams-per-square-centimeter | [
"# Convert Zentners Per Hectare to Milligrams Per Square Centimeter\n\n### Kyle's Converter > Area Density > Zentners Per Hectare > Zentners Per Hectare to Milligrams Per Square Centimeter\n\n Zentners Per Hectare (Zentners/ha) Milligrams Per Square Centimeter (mg/cm2) Precision: 0 1 2 3 4 5 6 7 8 9 12 15 18\nReverse conversion?\nMilligrams Per Square Centimeter to Zentners Per Hectare\n(or just enter a value in the \"to\" field)\n\nPlease share if you found this tool useful:\n\nUnit Descriptions\n1 Zentner per Hectare:\nMass of zentners per area of one hectare. A German zentner having 50 kilograms. 1 Zentner/ha ? 0.005 kg/m2.\n1 Milligram per Square Centimeter:\nMass of milligrams per an area of a square centimeter. 1 mg/cm2 = 0.01 kg/m2.\n\nLink to Your Exact Conversion\n\nConversions Table\n1 Zentners Per Hectare to Milligrams Per Square Centimeter = 0.570 Zentners Per Hectare to Milligrams Per Square Centimeter = 35\n2 Zentners Per Hectare to Milligrams Per Square Centimeter = 180 Zentners Per Hectare to Milligrams Per Square Centimeter = 40\n3 Zentners Per Hectare to Milligrams Per Square Centimeter = 1.590 Zentners Per Hectare to Milligrams Per Square Centimeter = 45\n4 Zentners Per Hectare to Milligrams Per Square Centimeter = 2100 Zentners Per Hectare to Milligrams Per Square Centimeter = 50\n5 Zentners Per Hectare to Milligrams Per Square Centimeter = 2.5200 Zentners Per Hectare to Milligrams Per Square Centimeter = 100\n6 Zentners Per Hectare to Milligrams Per Square Centimeter = 3300 Zentners Per Hectare to Milligrams Per Square Centimeter = 150\n7 Zentners Per Hectare to Milligrams Per Square Centimeter = 3.5400 Zentners Per Hectare to Milligrams Per Square Centimeter = 200\n8 Zentners Per Hectare to Milligrams Per Square Centimeter = 4500 Zentners Per Hectare to Milligrams Per Square Centimeter = 250\n9 Zentners Per Hectare to Milligrams Per Square Centimeter = 4.5600 Zentners Per Hectare to Milligrams Per Square Centimeter = 300\n10 Zentners Per Hectare to Milligrams Per Square Centimeter = 5800 Zentners Per Hectare to Milligrams Per Square Centimeter = 400\n20 Zentners Per Hectare to Milligrams Per Square Centimeter = 10900 Zentners Per Hectare to Milligrams Per Square Centimeter = 450\n30 Zentners Per Hectare to Milligrams Per Square Centimeter = 151,000 Zentners Per Hectare to Milligrams Per Square Centimeter = 500\n40 Zentners Per Hectare to Milligrams Per Square Centimeter = 2010,000 Zentners Per Hectare to Milligrams Per Square Centimeter = 5000\n50 Zentners Per Hectare to Milligrams Per Square Centimeter = 25100,000 Zentners Per Hectare to Milligrams Per Square Centimeter = 50000\n60 Zentners Per Hectare to Milligrams Per Square Centimeter = 301,000,000 Zentners Per Hectare to Milligrams Per Square Centimeter = 500000"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5560277,"math_prob":0.9910203,"size":2033,"snap":"2019-13-2019-22","text_gpt3_token_len":597,"char_repetition_ratio":0.28092656,"word_repetition_ratio":0.45731708,"special_character_ratio":0.2690605,"punctuation_ratio":0.031055901,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9846172,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-05-22T09:10:51Z\",\"WARC-Record-ID\":\"<urn:uuid:bf820339-4759-486e-b85b-f754f042bde6>\",\"Content-Length\":\"20104\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e6bdeb8b-25e8-451b-8137-7337070f6ee5>\",\"WARC-Concurrent-To\":\"<urn:uuid:efc589c4-7705-40a7-abc0-0a6844138cb3>\",\"WARC-IP-Address\":\"99.84.185.144\",\"WARC-Target-URI\":\"http://www.kylesconverter.com/area-density/zentners-per-hectare-to-milligrams-per-square-centimeter\",\"WARC-Payload-Digest\":\"sha1:6AEEE43YHYR2LJ3EY7TN57EZC3LCDOV6\",\"WARC-Block-Digest\":\"sha1:5OYNC4Z3MTJ3HVM3UYBM3DSWLX2RU43A\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-22/CC-MAIN-2019-22_segments_1558232256778.29_warc_CC-MAIN-20190522083227-20190522105227-00552.warc.gz\"}"} |
https://ubraintv-jp.com/if-a-equals-b-and-b-equals-c/ | [
"Theorem: allow a, b, and c it is in integers with a e 0 and also b e 0. If a|b and also b|c, then a|c.\n\nIn order to prove this statement, we very first need to recognize what the math notation colorreda|b implies.\n\nYou are watching: If a equals b and b equals c\n\nI have a separate lesson discussing the meaning of a|b.\n\nTo review, the mathematics notation a|b is check out as “a divides b “. The assumption is that both a and b space integers but a doesn’t equal zero, a e 0. In addition, the vertical bar in a|b is called pipe.",
null,
"As it stands, the notation a|b is not useful to us due to the fact that in its present form, there’s no way that we can algebraically manipulate it. Us must convert it in an equation form.\n\nHere’s the thing, a|b deserve to be written in the equation together b = ar where r is one integer.",
null,
"For example, in 2|10, we understand that 2 same divides 10. That means there is one integer once multiplied to 2 offers a product the 10.\n\nWhat could that number be? that is colorred5 since 2 imes 5 = 10.\n\nThus, we say 2|10 implies 10 = 2left( 5 ight)\n\n## BRAINSTORM prior to WRITING THE PROOF",
null,
"Note: The objective of brainstorming in writing proof is for united state to understand what the organize is trying come convey; and also gather enough information to connect the dots, which will certainly be used to leg the hypothesis and the conclusion.\n\nSince we room using the method of direct proof, we want to present that we deserve to manipulate the hypothesis to arrive at the conclusion.\n\nHypothesis: a divides b and also b divides c\n\nConclusion: a divides c",
null,
"Now, let’s express each notation right into an equation. Us hope the by doing therefore will expose an opportunity so we deserve to proceed with our heat of reasoning.\n\n Notations Equations Notes a|b b = to be ← Equation #1 m is one integer b|c c = bn ← Equation #2 n is an integer\n\nWhat should we perform next? Well, we can substitute the expression for b that Equation #1 into the b the Equation #2.",
null,
"After substitution, we obtain the one below.\n\nc = left( am ight)n\n\nApply the Associative building of Multiplication. Notification that the group symbol (parenthesis) moves from am to mn.\n\nThe Associative property of Multiplication assures that when multiplying numbers, the product is constantly the same no matter just how we team the numbers. Thus, left( am ight)n = aleft( mn ight).\n\nThis property allows us to rewrite the equation there is no breaking any type of math laws since the 2 equations may look different however they are essentially the very same or equivalent.\n\nI expect you deserve to see currently why we have to perform such slight adjustment utilizing the Associative Property.\n\nc = left( am ight)n → c = aleft( mn ight)\n\nAfter we substitute the expression the largeb from Equation #1 right into the largeb of Equation #2, and apply the Associative residential or commercial property of Multiplication, us are ready to move to the next step.\n\nNotice the inside the parenthesis are two arbitrarily integers that are being multiplied.\n\nIf girlfriend remember, over there is a straightforward yet an extremely useful property of the set of Integers ( the symbol because that the collection of integers is mathbbZ ).\n\nThe home is dubbed the Closure residential or commercial property of Multiplication. It says that if m and also n space integers then the product that m and n is likewise an integer. Therefore, m imes n in mathbbZ.\n\nFrom wherein we left off, we have\n\nc = aleft( mn ight).\n\nSince mn is just one more integer using the Closure building of Multiplication, that way we can let mn = k whereby k is one integer.\n\nWe have the right to rewrite c = aleft( mn ight) as c = aleft( k ight).\n\nSee more: What Is A Mare A Female Horse Called? Everything You Should Know\n\nThe equation c = aleft( k ight) have the right to be express in notation form as a|c which means that a divides c.\n\nThis is exactly where we desire to show! now it’s time to create the actual proof.\n\n### WRITE THE PROOF\n\nTHEOREM: allow a, b, and also c it is in integers through a e 0 and b e 0. If a|b and also b|c, climate a|c.\n\nPROOF: mean a, b, and c space integers whereby both a and also b perform not equal to zero. Due to the fact that a divides b, a|b, then there exists an integer m such that b = to be (Equation #1). Similarly, since b divides c, b|c, over there exists an integer n such that c=bn (Equation #2). Now, substitute the expression that b native Equation #1 into the b in Equation #2. By law so, the equation c=bm is changed to c=(am)n. Next, apply the Associative home of Multiplication on the equation c=(am)n to acquire c=a(mn). Because m and also n space integers, their product must also be an creature by the Closure residential property of Multiplication; that is, m imes n in mathbbZ. Allow k = m imes n. In the equation c=a(mn), instead of mn through k to obtain c=ak.The equation c=ak implies that a divides c or when written in shorthand we have actually a|c. Therefore, we have actually proved the a divides c. ◾️"
] | [
null,
"https://ubraintv-jp.com/if-a-equals-b-and-b-equals-c/imager_1_3228_700.jpg",
null,
"https://ubraintv-jp.com/if-a-equals-b-and-b-equals-c/imager_2_3228_700.jpg",
null,
"https://ubraintv-jp.com/if-a-equals-b-and-b-equals-c/imager_3_3228_700.jpg",
null,
"https://ubraintv-jp.com/if-a-equals-b-and-b-equals-c/imager_4_3228_700.jpg",
null,
"https://ubraintv-jp.com/if-a-equals-b-and-b-equals-c/imager_5_3228_700.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9143786,"math_prob":0.99402505,"size":4990,"snap":"2021-43-2021-49","text_gpt3_token_len":1232,"char_repetition_ratio":0.13477738,"word_repetition_ratio":0.02383532,"special_character_ratio":0.23607214,"punctuation_ratio":0.10085633,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9994858,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,2,null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-11-28T17:23:59Z\",\"WARC-Record-ID\":\"<urn:uuid:57e81763-733c-47d6-a602-e1653261a57c>\",\"Content-Length\":\"14902\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a63081ee-faab-41a4-8cdd-2d0bcc1aa7ab>\",\"WARC-Concurrent-To\":\"<urn:uuid:0388b0c4-2487-458a-86ec-3cae1c26d7d4>\",\"WARC-IP-Address\":\"172.67.160.91\",\"WARC-Target-URI\":\"https://ubraintv-jp.com/if-a-equals-b-and-b-equals-c/\",\"WARC-Payload-Digest\":\"sha1:Q3ZEQDN5GYT4SUVMXCLPQ6KCX7CUWHMK\",\"WARC-Block-Digest\":\"sha1:BVZ6V7KYGIWZOVS33NCOK4YSBSDTNEOQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964358570.48_warc_CC-MAIN-20211128164634-20211128194634-00288.warc.gz\"}"} |
https://lilianweng.github.io/posts/2018-04-08-policy-gradient/ | [
"[Updated on 2018-06-30: add two new policy gradient methods, SAC and D4PG.]\n[Updated on 2019-06-26: Thanks to Chanseok, we have a version of this post in Korean].\n[Updated on 2020-10-15: add a new policy gradient method PPG & some new discussion in PPO.]\n[Updated on 2021-09-19: Thanks to Wenhao & 爱吃猫的鱼, we have this post in Chinese1 & Chinese2].\n\nPolicy gradient is an approach to solve reinforcement learning problems. If you haven’t looked into the field of reinforcement learning, please first read the section “A (Long) Peek into Reinforcement Learning » Key Concepts” for the problem definition and key concepts.\n\n## Notations#\n\nSymbol Meaning\n$s \\in \\mathcal{S}$ States.\n$a \\in \\mathcal{A}$ Actions.\n$r \\in \\mathcal{R}$ Rewards.\n$S_t, A_t, R_t$ State, action, and reward at time step $t$ of one trajectory. I may occasionally use $s_t, a_t, r_t$ as well.\n$\\gamma$ Discount factor; penalty to uncertainty of future rewards; $0<\\gamma \\leq 1$.\n$G_t$ Return; or discounted future reward; $G_t = \\sum_{k=0}^{\\infty} \\gamma^k R_{t+k+1}$.\n$P(s', r \\vert s, a)$ Transition probability of getting to the next state $s'$ from the current state $s$ with action $a$ and reward $r$.\n$\\pi(a \\vert s)$ Stochastic policy (agent behavior strategy); $\\pi_\\theta(.)$ is a policy parameterized by $\\theta$.\n$\\mu(s)$ Deterministic policy; we can also label this as $\\pi(s)$, but using a different letter gives better distinction so that we can easily tell when the policy is stochastic or deterministic without further explanation. Either $\\pi$ or $\\mu$ is what a reinforcement learning algorithm aims to learn.\n$V(s)$ State-value function measures the expected return of state $s$; $V_w(.)$ is a value function parameterized by $w$.\n$V^\\pi(s)$ The value of state $s$ when we follow a policy $\\pi$; $V^\\pi (s) = \\mathbb{E}_{a\\sim \\pi} [G_t \\vert S_t = s]$.\n$Q(s, a)$ Action-value function is similar to $V(s)$, but it assesses the expected return of a pair of state and action $(s, a)$; $Q_w(.)$ is a action value function parameterized by $w$.\n$Q^\\pi(s, a)$ Similar to $V^\\pi(.)$, the value of (state, action) pair when we follow a policy $\\pi$; $Q^\\pi(s, a) = \\mathbb{E}_{a\\sim \\pi} [G_t \\vert S_t = s, A_t = a]$.\n$A(s, a)$ Advantage function, $A(s, a) = Q(s, a) - V(s)$; it can be considered as another version of Q-value with lower variance by taking the state-value off as the baseline.\n\nThe goal of reinforcement learning is to find an optimal behavior strategy for the agent to obtain optimal rewards. The policy gradient methods target at modeling and optimizing the policy directly. The policy is usually modeled with a parameterized function respect to $\\theta$, $\\pi_\\theta(a \\vert s)$. The value of the reward (objective) function depends on this policy and then various algorithms can be applied to optimize $\\theta$ for the best reward.\n\nThe reward function is defined as:\n\n$$J(\\theta) = \\sum_{s \\in \\mathcal{S}} d^\\pi(s) V^\\pi(s) = \\sum_{s \\in \\mathcal{S}} d^\\pi(s) \\sum_{a \\in \\mathcal{A}} \\pi_\\theta(a \\vert s) Q^\\pi(s, a)$$\n\nwhere $d^\\pi(s)$ is the stationary distribution of Markov chain for $\\pi_\\theta$ (on-policy state distribution under $\\pi$). For simplicity, the parameter $\\theta$ would be omitted for the policy $\\pi_\\theta$ when the policy is present in the subscript of other functions; for example, $d^{\\pi}$ and $Q^\\pi$ should be $d^{\\pi_\\theta}$ and $Q^{\\pi_\\theta}$ if written in full.\n\nImagine that you can travel along the Markov chain’s states forever, and eventually, as the time progresses, the probability of you ending up with one state becomes unchanged — this is the stationary probability for $\\pi_\\theta$. $d^\\pi(s) = \\lim_{t \\to \\infty} P(s_t = s \\vert s_0, \\pi_\\theta)$ is the probability that $s_t=s$ when starting from $s_0$ and following policy $\\pi_\\theta$ for t steps. Actually, the existence of the stationary distribution of Markov chain is one main reason for why PageRank algorithm works. If you want to read more, check this.\n\nIt is natural to expect policy-based methods are more useful in the continuous space. Because there is an infinite number of actions and (or) states to estimate the values for and hence value-based approaches are way too expensive computationally in the continuous space. For example, in generalized policy iteration, the policy improvement step $\\arg\\max_{a \\in \\mathcal{A}} Q^\\pi(s, a)$ requires a full scan of the action space, suffering from the curse of dimensionality.\n\nUsing gradient ascent, we can move $\\theta$ toward the direction suggested by the gradient $\\nabla_\\theta J(\\theta)$ to find the best $\\theta$ for $\\pi_\\theta$ that produces the highest return.\n\nComputing the gradient $\\nabla_\\theta J(\\theta)$ is tricky because it depends on both the action selection (directly determined by $\\pi_\\theta$) and the stationary distribution of states following the target selection behavior (indirectly determined by $\\pi_\\theta$). Given that the environment is generally unknown, it is difficult to estimate the effect on the state distribution by a policy update.\n\nLuckily, the policy gradient theorem comes to save the world! Woohoo! It provides a nice reformation of the derivative of the objective function to not involve the derivative of the state distribution $d^\\pi(.)$ and simplify the gradient computation $\\nabla_\\theta J(\\theta)$ a lot.\n\n\\begin{aligned} \\nabla_\\theta J(\\theta) &= \\nabla_\\theta \\sum_{s \\in \\mathcal{S}} d^\\pi(s) \\sum_{a \\in \\mathcal{A}} Q^\\pi(s, a) \\pi_\\theta(a \\vert s) \\\\ &\\propto \\sum_{s \\in \\mathcal{S}} d^\\pi(s) \\sum_{a \\in \\mathcal{A}} Q^\\pi(s, a) \\nabla_\\theta \\pi_\\theta(a \\vert s) \\end{aligned}\n\n## Proof of Policy Gradient Theorem#\n\nThis session is pretty dense, as it is the time for us to go through the proof (Sutton & Barto, 2017; Sec. 13.1) and figure out why the policy gradient theorem is correct.\n\n\\begin{aligned} & \\nabla_\\theta V^\\pi(s) \\\\ =& \\nabla_\\theta \\Big(\\sum_{a \\in \\mathcal{A}} \\pi_\\theta(a \\vert s)Q^\\pi(s, a) \\Big) & \\\\ =& \\sum_{a \\in \\mathcal{A}} \\Big( \\nabla_\\theta \\pi_\\theta(a \\vert s)Q^\\pi(s, a) + \\pi_\\theta(a \\vert s) \\color{red}{\\nabla_\\theta Q^\\pi(s, a)} \\Big) & \\scriptstyle{\\text{; Derivative product rule.}} \\\\ =& \\sum_{a \\in \\mathcal{A}} \\Big( \\nabla_\\theta \\pi_\\theta(a \\vert s)Q^\\pi(s, a) + \\pi_\\theta(a \\vert s) \\color{red}{\\nabla_\\theta \\sum_{s', r} P(s',r \\vert s,a)(r + V^\\pi(s'))} \\Big) & \\scriptstyle{\\text{; Extend } Q^\\pi \\text{ with future state value.}} \\\\ =& \\sum_{a \\in \\mathcal{A}} \\Big( \\nabla_\\theta \\pi_\\theta(a \\vert s)Q^\\pi(s, a) + \\pi_\\theta(a \\vert s) \\color{red}{\\sum_{s', r} P(s',r \\vert s,a) \\nabla_\\theta V^\\pi(s')} \\Big) & \\scriptstyle{P(s',r \\vert s,a) \\text{ or } r \\text{ is not a func of }\\theta}\\\\ =& \\sum_{a \\in \\mathcal{A}} \\Big( \\nabla_\\theta \\pi_\\theta(a \\vert s)Q^\\pi(s, a) + \\pi_\\theta(a \\vert s) \\color{red}{\\sum_{s'} P(s' \\vert s,a) \\nabla_\\theta V^\\pi(s')} \\Big) & \\scriptstyle{\\text{; Because } P(s' \\vert s, a) = \\sum_r P(s', r \\vert s, a)} \\end{aligned}\n\nNow we have:\n\n$$\\color{red}{\\nabla_\\theta V^\\pi(s)} = \\sum_{a \\in \\mathcal{A}} \\Big( \\nabla_\\theta \\pi_\\theta(a \\vert s)Q^\\pi(s, a) + \\pi_\\theta(a \\vert s) \\sum_{s'} P(s' \\vert s,a) \\color{red}{\\nabla_\\theta V^\\pi(s')} \\Big)$$\n\nThis equation has a nice recursive form (see the red parts!) and the future state value function $V^\\pi(s')$ can be repeated unrolled by following the same equation.\n\nLet’s consider the following visitation sequence and label the probability of transitioning from state s to state x with policy $\\pi_\\theta$ after k step as $\\rho^\\pi(s \\to x, k)$.\n\n$$s \\xrightarrow[]{a \\sim \\pi_\\theta(.\\vert s)} s' \\xrightarrow[]{a \\sim \\pi_\\theta(.\\vert s')} s'' \\xrightarrow[]{a \\sim \\pi_\\theta(.\\vert s'')} \\dots$$\n• When k = 0: $\\rho^\\pi(s \\to s, k=0) = 1$.\n• When k = 1, we scan through all possible actions and sum up the transition probabilities to the target state: $\\rho^\\pi(s \\to s', k=1) = \\sum_a \\pi_\\theta(a \\vert s) P(s' \\vert s, a)$.\n• Imagine that the goal is to go from state s to x after k+1 steps while following policy $\\pi_\\theta$. We can first travel from s to a middle point s' (any state can be a middle point, $s' \\in \\mathcal{S}$) after k steps and then go to the final state x during the last step. In this way, we are able to update the visitation probability recursively: $\\rho^\\pi(s \\to x, k+1) = \\sum_{s'} \\rho^\\pi(s \\to s', k) \\rho^\\pi(s' \\to x, 1)$.\n\nThen we go back to unroll the recursive representation of $\\nabla_\\theta V^\\pi(s)$! Let $\\phi(s) = \\sum_{a \\in \\mathcal{A}} \\nabla_\\theta \\pi_\\theta(a \\vert s)Q^\\pi(s, a)$ to simplify the maths. If we keep on extending $\\nabla_\\theta V^\\pi(.)$ infinitely, it is easy to find out that we can transition from the starting state s to any state after any number of steps in this unrolling process and by summing up all the visitation probabilities, we get $\\nabla_\\theta V^\\pi(s)$!\n\n\\begin{aligned} & \\color{red}{\\nabla_\\theta V^\\pi(s)} \\\\ =& \\phi(s) + \\sum_a \\pi_\\theta(a \\vert s) \\sum_{s'} P(s' \\vert s,a) \\color{red}{\\nabla_\\theta V^\\pi(s')} \\\\ =& \\phi(s) + \\sum_{s'} \\sum_a \\pi_\\theta(a \\vert s) P(s' \\vert s,a) \\color{red}{\\nabla_\\theta V^\\pi(s')} \\\\ =& \\phi(s) + \\sum_{s'} \\rho^\\pi(s \\to s', 1) \\color{red}{\\nabla_\\theta V^\\pi(s')} \\\\ =& \\phi(s) + \\sum_{s'} \\rho^\\pi(s \\to s', 1) \\color{red}{\\nabla_\\theta V^\\pi(s')} \\\\ =& \\phi(s) + \\sum_{s'} \\rho^\\pi(s \\to s', 1) \\color{red}{[ \\phi(s') + \\sum_{s''} \\rho^\\pi(s' \\to s'', 1) \\nabla_\\theta V^\\pi(s'')]} \\\\ =& \\phi(s) + \\sum_{s'} \\rho^\\pi(s \\to s', 1) \\phi(s') + \\sum_{s''} \\rho^\\pi(s \\to s'', 2)\\color{red}{\\nabla_\\theta V^\\pi(s'')} \\scriptstyle{\\text{ ; Consider }s'\\text{ as the middle point for }s \\to s''}\\\\ =& \\phi(s) + \\sum_{s'} \\rho^\\pi(s \\to s', 1) \\phi(s') + \\sum_{s''} \\rho^\\pi(s \\to s'', 2)\\phi(s'') + \\sum_{s'''} \\rho^\\pi(s \\to s''', 3)\\color{red}{\\nabla_\\theta V^\\pi(s''')} \\\\ =& \\dots \\scriptstyle{\\text{; Repeatedly unrolling the part of }\\nabla_\\theta V^\\pi(.)} \\\\ =& \\sum_{x\\in\\mathcal{S}}\\sum_{k=0}^\\infty \\rho^\\pi(s \\to x, k) \\phi(x) \\end{aligned}\n\nThe nice rewriting above allows us to exclude the derivative of Q-value function, $\\nabla_\\theta Q^\\pi(s, a)$. By plugging it into the objective function $J(\\theta)$, we are getting the following:\n\n\\begin{aligned} \\nabla_\\theta J(\\theta) &= \\nabla_\\theta V^\\pi(s_0) & \\scriptstyle{\\text{; Starting from a random state } s_0} \\\\ &= \\sum_{s}\\color{blue}{\\sum_{k=0}^\\infty \\rho^\\pi(s_0 \\to s, k)} \\phi(s) &\\scriptstyle{\\text{; Let }\\color{blue}{\\eta(s) = \\sum_{k=0}^\\infty \\rho^\\pi(s_0 \\to s, k)}} \\\\ &= \\sum_{s}\\eta(s) \\phi(s) & \\\\ &= \\Big( {\\sum_s \\eta(s)} \\Big)\\sum_{s}\\frac{\\eta(s)}{\\sum_s \\eta(s)} \\phi(s) & \\scriptstyle{\\text{; Normalize } \\eta(s), s\\in\\mathcal{S} \\text{ to be a probability distribution.}}\\\\ &\\propto \\sum_s \\frac{\\eta(s)}{\\sum_s \\eta(s)} \\phi(s) & \\scriptstyle{\\sum_s \\eta(s)\\text{ is a constant}} \\\\ &= \\sum_s d^\\pi(s) \\sum_a \\nabla_\\theta \\pi_\\theta(a \\vert s)Q^\\pi(s, a) & \\scriptstyle{d^\\pi(s) = \\frac{\\eta(s)}{\\sum_s \\eta(s)}\\text{ is stationary distribution.}} \\end{aligned}\n\nIn the episodic case, the constant of proportionality ($\\sum_s \\eta(s)$) is the average length of an episode; in the continuing case, it is 1 (Sutton & Barto, 2017; Sec. 13.2). The gradient can be further written as:\n\n\\begin{aligned} \\nabla_\\theta J(\\theta) &\\propto \\sum_{s \\in \\mathcal{S}} d^\\pi(s) \\sum_{a \\in \\mathcal{A}} Q^\\pi(s, a) \\nabla_\\theta \\pi_\\theta(a \\vert s) &\\\\ &= \\sum_{s \\in \\mathcal{S}} d^\\pi(s) \\sum_{a \\in \\mathcal{A}} \\pi_\\theta(a \\vert s) Q^\\pi(s, a) \\frac{\\nabla_\\theta \\pi_\\theta(a \\vert s)}{\\pi_\\theta(a \\vert s)} &\\\\ &= \\mathbb{E}_\\pi [Q^\\pi(s, a) \\nabla_\\theta \\ln \\pi_\\theta(a \\vert s)] & \\scriptstyle{\\text{; Because } (\\ln x)' = 1/x} \\end{aligned}\n\nWhere $\\mathbb{E}_\\pi$ refers to $\\mathbb{E}_{s \\sim d_\\pi, a \\sim \\pi_\\theta}$ when both state and action distributions follow the policy $\\pi_\\theta$ (on policy).\n\nThe policy gradient theorem lays the theoretical foundation for various policy gradient algorithms. This vanilla policy gradient update has no bias but high variance. Many following algorithms were proposed to reduce the variance while keeping the bias unchanged.\n\n$$\\nabla_\\theta J(\\theta) = \\mathbb{E}_\\pi [Q^\\pi(s, a) \\nabla_\\theta \\ln \\pi_\\theta(a \\vert s)]$$\n\nHere is a nice summary of a general form of policy gradient methods borrowed from the GAE (general advantage estimation) paper (Schulman et al., 2016) and this post thoroughly discussed several components in GAE , highly recommended.",
null,
"Tons of policy gradient algorithms have been proposed during recent years and there is no way for me to exhaust them. I’m introducing some of them that I happened to know and read about.\n\n## REINFORCE#\n\nREINFORCE (Monte-Carlo policy gradient) relies on an estimated return by Monte-Carlo methods using episode samples to update the policy parameter $\\theta$. REINFORCE works because the expectation of the sample gradient is equal to the actual gradient:\n\n\\begin{aligned} \\nabla_\\theta J(\\theta) &= \\mathbb{E}_\\pi [Q^\\pi(s, a) \\nabla_\\theta \\ln \\pi_\\theta(a \\vert s)] & \\\\ &= \\mathbb{E}_\\pi [G_t \\nabla_\\theta \\ln \\pi_\\theta(A_t \\vert S_t)] & \\scriptstyle{\\text{; Because } Q^\\pi(S_t, A_t) = \\mathbb{E}_\\pi[G_t \\vert S_t, A_t]} \\end{aligned}\n\nTherefore we are able to measure $G_t$ from real sample trajectories and use that to update our policy gradient. It relies on a full trajectory and that’s why it is a Monte-Carlo method.\n\nThe process is pretty straightforward:\n\n1. Initialize the policy parameter $\\theta$ at random.\n2. Generate one trajectory on policy $\\pi_\\theta$: $S_1, A_1, R_2, S_2, A_2, \\dots, S_T$.\n3. For t=1, 2, … , T:\n1. Estimate the the return $G_t$;\n2. Update policy parameters: $\\theta \\leftarrow \\theta + \\alpha \\gamma^t G_t \\nabla_\\theta \\ln \\pi_\\theta(A_t \\vert S_t)$\n\nA widely used variation of REINFORCE is to subtract a baseline value from the return $G_t$ to reduce the variance of gradient estimation while keeping the bias unchanged (Remember we always want to do this when possible). For example, a common baseline is to subtract state-value from action-value, and if applied, we would use advantage $A(s, a) = Q(s, a) - V(s)$ in the gradient ascent update. This post nicely explained why a baseline works for reducing the variance, in addition to a set of fundamentals of policy gradient.\n\n## Actor-Critic#\n\nTwo main components in policy gradient are the policy model and the value function. It makes a lot of sense to learn the value function in addition to the policy, since knowing the value function can assist the policy update, such as by reducing gradient variance in vanilla policy gradients, and that is exactly what the Actor-Critic method does.\n\nActor-critic methods consist of two models, which may optionally share parameters:\n\n• Critic updates the value function parameters w and depending on the algorithm it could be action-value $Q_w(a \\vert s)$ or state-value $V_w(s)$.\n• Actor updates the policy parameters $\\theta$ for $\\pi_\\theta(a \\vert s)$, in the direction suggested by the critic.\n\nLet’s see how it works in a simple action-value actor-critic algorithm.\n\n1. Initialize $s, \\theta, w$ at random; sample $a \\sim \\pi_\\theta(a \\vert s)$.\n2. For $t = 1 \\dots T$:\n1. Sample reward $r_t \\sim R(s, a)$ and next state $s' \\sim P(s' \\vert s, a)$;\n2. Then sample the next action $a' \\sim \\pi_\\theta(a' \\vert s')$;\n3. Update the policy parameters: $\\theta \\leftarrow \\theta + \\alpha_\\theta Q_w(s, a) \\nabla_\\theta \\ln \\pi_\\theta(a \\vert s)$;\n4. Compute the correction (TD error) for action-value at time t:\n$\\delta_t = r_t + \\gamma Q_w(s', a') - Q_w(s, a)$\nand use it to update the parameters of action-value function:\n$w \\leftarrow w + \\alpha_w \\delta_t \\nabla_w Q_w(s, a)$\n5. Update $a \\leftarrow a'$ and $s \\leftarrow s'$.\n\nTwo learning rates, $\\alpha_\\theta$ and $\\alpha_w$, are predefined for policy and value function parameter updates respectively.\n\nBoth REINFORCE and the vanilla version of actor-critic method are on-policy: training samples are collected according to the target policy — the very same policy that we try to optimize for. Off policy methods, however, result in several additional advantages:\n\n1. The off-policy approach does not require full trajectories and can reuse any past episodes (“experience replay”) for much better sample efficiency.\n2. The sample collection follows a behavior policy different from the target policy, bringing better exploration.\n\nNow let’s see how off-policy policy gradient is computed. The behavior policy for collecting samples is a known policy (predefined just like a hyperparameter), labelled as $\\beta(a \\vert s)$. The objective function sums up the reward over the state distribution defined by this behavior policy:\n\n$$J(\\theta) = \\sum_{s \\in \\mathcal{S}} d^\\beta(s) \\sum_{a \\in \\mathcal{A}} Q^\\pi(s, a) \\pi_\\theta(a \\vert s) = \\mathbb{E}_{s \\sim d^\\beta} \\big[ \\sum_{a \\in \\mathcal{A}} Q^\\pi(s, a) \\pi_\\theta(a \\vert s) \\big]$$\n\nwhere $d^\\beta(s)$ is the stationary distribution of the behavior policy $\\beta$; recall that $d^\\beta(s) = \\lim_{t \\to \\infty} P(S_t = s \\vert S_0, \\beta)$; and $Q^\\pi$ is the action-value function estimated with regard to the target policy $\\pi$ (not the behavior policy!).\n\nGiven that the training observations are sampled by $a \\sim \\beta(a \\vert s)$, we can rewrite the gradient as:\n\n\\begin{aligned} \\nabla_\\theta J(\\theta) &= \\nabla_\\theta \\mathbb{E}_{s \\sim d^\\beta} \\Big[ \\sum_{a \\in \\mathcal{A}} Q^\\pi(s, a) \\pi_\\theta(a \\vert s) \\Big] & \\\\ &= \\mathbb{E}_{s \\sim d^\\beta} \\Big[ \\sum_{a \\in \\mathcal{A}} \\big( Q^\\pi(s, a) \\nabla_\\theta \\pi_\\theta(a \\vert s) + \\color{red}{\\pi_\\theta(a \\vert s) \\nabla_\\theta Q^\\pi(s, a)} \\big) \\Big] & \\scriptstyle{\\text{; Derivative product rule.}}\\\\ &\\stackrel{(i)}{\\approx} \\mathbb{E}_{s \\sim d^\\beta} \\Big[ \\sum_{a \\in \\mathcal{A}} Q^\\pi(s, a) \\nabla_\\theta \\pi_\\theta(a \\vert s) \\Big] & \\scriptstyle{\\text{; Ignore the red part: } \\color{red}{\\pi_\\theta(a \\vert s) \\nabla_\\theta Q^\\pi(s, a)}}. \\\\ &= \\mathbb{E}_{s \\sim d^\\beta} \\Big[ \\sum_{a \\in \\mathcal{A}} \\beta(a \\vert s) \\frac{\\pi_\\theta(a \\vert s)}{\\beta(a \\vert s)} Q^\\pi(s, a) \\frac{\\nabla_\\theta \\pi_\\theta(a \\vert s)}{\\pi_\\theta(a \\vert s)} \\Big] & \\\\ &= \\mathbb{E}_\\beta \\Big[\\frac{\\color{blue}{\\pi_\\theta(a \\vert s)}}{\\color{blue}{\\beta(a \\vert s)}} Q^\\pi(s, a) \\nabla_\\theta \\ln \\pi_\\theta(a \\vert s) \\Big] & \\scriptstyle{\\text{; The blue part is the importance weight.}} \\end{aligned}\n\nwhere $\\frac{\\pi_\\theta(a \\vert s)}{\\beta(a \\vert s)}$ is the importance weight. Because $Q^\\pi$ is a function of the target policy and thus a function of policy parameter $\\theta$, we should take the derivative of $\\nabla_\\theta Q^\\pi(s, a)$ as well according to the product rule. However, it is super hard to compute $\\nabla_\\theta Q^\\pi(s, a)$ in reality. Fortunately if we use an approximated gradient with the gradient of Q ignored, we still guarantee the policy improvement and eventually achieve the true local minimum. This is justified in the proof here (Degris, White & Sutton, 2012).\n\nIn summary, when applying policy gradient in the off-policy setting, we can simple adjust it with a weighted sum and the weight is the ratio of the target policy to the behavior policy, $\\frac{\\pi_\\theta(a \\vert s)}{\\beta(a \\vert s)}$.\n\n## A3C#\n\n[paper|code]\n\nAsynchronous Advantage Actor-Critic (Mnih et al., 2016), short for A3C, is a classic policy gradient method with a special focus on parallel training.\n\nIn A3C, the critics learn the value function while multiple actors are trained in parallel and get synced with global parameters from time to time. Hence, A3C is designed to work well for parallel training.\n\nLet’s use the state-value function as an example. The loss function for state value is to minimize the mean squared error, $J_v(w) = (G_t - V_w(s))^2$ and gradient descent can be applied to find the optimal w. This state-value function is used as the baseline in the policy gradient update.\n\nHere is the algorithm outline:\n\n1. We have global parameters, $\\theta$ and $w$; similar thread-specific parameters, $\\theta'$ and $w'$.\n\n2. Initialize the time step $t = 1$\n\n3. While $T \\leq T_\\text{MAX}$:\n\n1. Reset gradient: $\\mathrm{d}\\theta = 0$ and $\\mathrm{d}w = 0$.\n2. Synchronize thread-specific parameters with global ones: $\\theta' = \\theta$ and $w' = w$.\n3. $t_\\text{start}$ = t and sample a starting state $s_t$.\n4. While ($s_t$ != TERMINAL) and $t - t_\\text{start} \\leq t_\\text{max}$:\n1. Pick the action $A_t \\sim \\pi_{\\theta'}(A_t \\vert S_t)$ and receive a new reward $R_t$ and a new state $s_{t+1}$.\n2. Update $t = t + 1$ and $T = T + 1$\n5. Initialize the variable that holds the return estimation\n$$R = \\begin{cases} 0 & \\text{if } s_t \\text{ is TERMINAL} \\\\ V_{w'}(s_t) & \\text{otherwise} \\end{cases}$$\n6. For $i = t-1, \\dots, t\\_\\text{start}$: 1. $R \\leftarrow \\gamma R + R\\_i$; here R is a MC measure of $G\\_i$. 2. Accumulate gradients w.r.t. $\\theta'$: $d\\theta \\leftarrow d\\theta + \\nabla\\_{\\theta'} \\log \\pi\\_{\\theta'}(a\\_i \\vert s\\_i)(R - V\\_{w'}(s\\_i))$;\nAccumulate gradients w.r.t. w': $dw \\leftarrow dw + 2 (R - V\\_{w'}(s\\_i)) \\nabla\\_{w'} (R - V\\_{w'}(s\\_i))$.\n1. Update asynchronously $\\theta$ using $\\mathrm{d}\\theta$, and $w$ using $\\mathrm{d}w$.\n\nA3C enables the parallelism in multiple agent training. The gradient accumulation step (6.2) can be considered as a parallelized reformation of minibatch-based stochastic gradient update: the values of $w$ or $\\theta$ get corrected by a little bit in the direction of each training thread independently.\n\n## A2C#\n\n[paper|code]\n\nA2C is a synchronous, deterministic version of A3C; that’s why it is named as “A2C” with the first “A” (“asynchronous”) removed. In A3C each agent talks to the global parameters independently, so it is possible sometimes the thread-specific agents would be playing with policies of different versions and therefore the aggregated update would not be optimal. To resolve the inconsistency, a coordinator in A2C waits for all the parallel actors to finish their work before updating the global parameters and then in the next iteration parallel actors starts from the same policy. The synchronized gradient update keeps the training more cohesive and potentially to make convergence faster.\n\nA2C has been shown to be able to utilize GPUs more efficiently and work better with large batch sizes while achieving same or better performance than A3C.",
null,
"## DPG#\n\n[paper|code]\n\nIn methods described above, the policy function $\\pi(. \\vert s)$ is always modeled as a probability distribution over actions $\\mathcal{A}$ given the current state and thus it is stochastic. Deterministic policy gradient (DPG) instead models the policy as a deterministic decision: $a = \\mu(s)$. It may look bizarre — how can you calculate the gradient of the action probability when it outputs a single action? Let’s look into it step by step.\n\nRefresh on a few notations to facilitate the discussion:\n\n• $\\rho_0(s)$: The initial distribution over states\n• $\\rho^\\mu(s \\to s', k)$: Starting from state s, the visitation probability density at state s' after moving k steps by policy $\\mu$.\n• $\\rho^\\mu(s')$: Discounted state distribution, defined as $\\rho^\\mu(s') = \\int_\\mathcal{S} \\sum_{k=1}^\\infty \\gamma^{k-1} \\rho_0(s) \\rho^\\mu(s \\to s', k) ds$.\n\nThe objective function to optimize for is listed as follows:\n\n$$J(\\theta) = \\int_\\mathcal{S} \\rho^\\mu(s) Q(s, \\mu_\\theta(s)) ds$$\n\nDeterministic policy gradient theorem: Now it is the time to compute the gradient! According to the chain rule, we first take the gradient of Q w.r.t. the action a and then take the gradient of the deterministic policy function $\\mu$ w.r.t. $\\theta$:\n\n\\begin{aligned} \\nabla_\\theta J(\\theta) &= \\int_\\mathcal{S} \\rho^\\mu(s) \\nabla_a Q^\\mu(s, a) \\nabla_\\theta \\mu_\\theta(s) \\rvert_{a=\\mu_\\theta(s)} ds \\\\ &= \\mathbb{E}_{s \\sim \\rho^\\mu} [\\nabla_a Q^\\mu(s, a) \\nabla_\\theta \\mu_\\theta(s) \\rvert_{a=\\mu_\\theta(s)}] \\end{aligned}\n\nWe can consider the deterministic policy as a special case of the stochastic one, when the probability distribution contains only one extreme non-zero value over one action. Actually, in the DPG paper, the authors have shown that if the stochastic policy $\\pi_{\\mu_\\theta, \\sigma}$ is re-parameterized by a deterministic policy $\\mu_\\theta$ and a variation variable $\\sigma$, the stochastic policy is eventually equivalent to the deterministic case when $\\sigma=0$. Compared to the deterministic policy, we expect the stochastic policy to require more samples as it integrates the data over the whole state and action space.\n\nThe deterministic policy gradient theorem can be plugged into common policy gradient frameworks.\n\nLet’s consider an example of on-policy actor-critic algorithm to showcase the procedure. In each iteration of on-policy actor-critic, two actions are taken deterministically $a = \\mu_\\theta(s)$ and the SARSA update on policy parameters relies on the new gradient that we just computed above:\n\n\\begin{aligned} \\delta_t &= R_t + \\gamma Q_w(s_{t+1}, a_{t+1}) - Q_w(s_t, a_t) & \\small{\\text{; TD error in SARSA}}\\\\ w_{t+1} &= w_t + \\alpha_w \\delta_t \\nabla_w Q_w(s_t, a_t) & \\\\ \\theta_{t+1} &= \\theta_t + \\alpha_\\theta \\color{red}{\\nabla_a Q_w(s_t, a_t) \\nabla_\\theta \\mu_\\theta(s) \\rvert_{a=\\mu_\\theta(s)}} & \\small{\\text{; Deterministic policy gradient theorem}} \\end{aligned}\n\nHowever, unless there is sufficient noise in the environment, it is very hard to guarantee enough exploration due to the determinacy of the policy. We can either add noise into the policy (ironically this makes it nondeterministic!) or learn it off-policy-ly by following a different stochastic behavior policy to collect samples.\n\nSay, in the off-policy approach, the training trajectories are generated by a stochastic policy $\\beta(a \\vert s)$ and thus the state distribution follows the corresponding discounted state density $\\rho^\\beta$:\n\n\\begin{aligned} J_\\beta(\\theta) &= \\int_\\mathcal{S} \\rho^\\beta Q^\\mu(s, \\mu_\\theta(s)) ds \\\\ \\nabla_\\theta J_\\beta(\\theta) &= \\mathbb{E}_{s \\sim \\rho^\\beta} [\\nabla_a Q^\\mu(s, a) \\nabla_\\theta \\mu_\\theta(s) \\rvert_{a=\\mu_\\theta(s)} ] \\end{aligned}\n\nNote that because the policy is deterministic, we only need $Q^\\mu(s, \\mu_\\theta(s))$ rather than $\\sum_a \\pi(a \\vert s) Q^\\pi(s, a)$ as the estimated reward of a given state s. In the off-policy approach with a stochastic policy, importance sampling is often used to correct the mismatch between behavior and target policies, as what we have described above. However, because the deterministic policy gradient removes the integral over actions, we can avoid importance sampling.\n\n## DDPG#\n\n[paper|code]\n\nDDPG (Lillicrap, et al., 2015), short for Deep Deterministic Policy Gradient, is a model-free off-policy actor-critic algorithm, combining DPG with DQN. Recall that DQN (Deep Q-Network) stabilizes the learning of Q-function by experience replay and the frozen target network. The original DQN works in discrete space, and DDPG extends it to continuous space with the actor-critic framework while learning a deterministic policy.\n\nIn order to do better exploration, an exploration policy $\\mu'$ is constructed by adding noise $\\mathcal{N}$:\n\n$$\\mu'(s) = \\mu_\\theta(s) + \\mathcal{N}$$\n\nIn addition, DDPG does soft updates (“conservative policy iteration”) on the parameters of both actor and critic, with $\\tau \\ll 1$: $\\theta' \\leftarrow \\tau \\theta + (1 - \\tau) \\theta'$. In this way, the target network values are constrained to change slowly, different from the design in DQN that the target network stays frozen for some period of time.\n\nOne detail in the paper that is particularly useful in robotics is on how to normalize the different physical units of low dimensional features. For example, a model is designed to learn a policy with the robot’s positions and velocities as input; these physical statistics are different by nature and even statistics of the same type may vary a lot across multiple robots. Batch normalization is applied to fix it by normalizing every dimension across samples in one minibatch.",
null,
"## D4PG#\n\n[paper|code (Search “github d4pg” and you will see a few.)]\n\nDistributed Distributional DDPG (D4PG) applies a set of improvements on DDPG to make it run in the distributional fashion.\n\n(1) Distributional Critic: The critic estimates the expected Q value as a random variable ~ a distribution $Z_w$ parameterized by $w$ and therefore $Q_w(s, a) = \\mathbb{E} Z_w(x, a)$. The loss for learning the distribution parameter is to minimize some measure of the distance between two distributions — distributional TD error: $L(w) = \\mathbb{E}[d(\\mathcal{T}_{\\mu_\\theta}, Z_{w'}(s, a), Z_w(s, a)]$, where $\\mathcal{T}_{\\mu_\\theta}$ is the Bellman operator.\n\nThe deterministic policy gradient update becomes:\n\n\\begin{aligned} \\nabla_\\theta J(\\theta) &\\approx \\mathbb{E}_{\\rho^\\mu} [\\nabla_a Q_w(s, a) \\nabla_\\theta \\mu_\\theta(s) \\rvert_{a=\\mu_\\theta(s)}] & \\scriptstyle{\\text{; gradient update in DPG}} \\\\ &= \\mathbb{E}_{\\rho^\\mu} [\\mathbb{E}[\\nabla_a Z_w(s, a)] \\nabla_\\theta \\mu_\\theta(s) \\rvert_{a=\\mu_\\theta(s)}] & \\scriptstyle{\\text{; expectation of the Q-value distribution.}} \\end{aligned}\n\n(2) $N$-step returns: When calculating the TD error, D4PG computes $N$-step TD target rather than one-step to incorporate rewards in more future steps. Thus the new TD target is:\n\n$$r(s_0, a_0) + \\mathbb{E}[\\sum_{n=1}^{N-1} r(s_n, a_n) + \\gamma^N Q(s_N, \\mu_\\theta(s_N)) \\vert s_0, a_0 ]$$\n\n(3) Multiple Distributed Parallel Actors: D4PG utilizes $K$ independent actors, gathering experience in parallel and feeding data into the same replay buffer.\n\n(4) Prioritized Experience Replay (PER): The last piece of modification is to do sampling from the replay buffer of size $R$ with an non-uniform probability $p_i$. In this way, a sample $i$ has the probability $(Rp_i)^{-1}$ to be selected and thus the importance weight is $(Rp_i)^{-1}$.",
null,
"[paper|code]\n\nMulti-agent DDPG (MADDPG) (Lowe et al., 2017) extends DDPG to an environment where multiple agents are coordinating to complete tasks with only local information. In the viewpoint of one agent, the environment is non-stationary as policies of other agents are quickly upgraded and remain unknown. MADDPG is an actor-critic model redesigned particularly for handling such a changing environment and interactions between agents.\n\nThe problem can be formalized in the multi-agent version of MDP, also known as Markov games. MADDPG is proposed for partially observable Markov games. Say, there are N agents in total with a set of states $\\mathcal{S}$. Each agent owns a set of possible action, $\\mathcal{A}_1, \\dots, \\mathcal{A}_N$, and a set of observation, $\\mathcal{O}_1, \\dots, \\mathcal{O}_N$. The state transition function involves all states, action and observation spaces $\\mathcal{T}: \\mathcal{S} \\times \\mathcal{A}_1 \\times \\dots \\mathcal{A}_N \\mapsto \\mathcal{S}$. Each agent’s stochastic policy only involves its own state and action: $\\pi_{\\theta_i}: \\mathcal{O}_i \\times \\mathcal{A}_i \\mapsto [0, 1]$, a probability distribution over actions given its own observation, or a deterministic policy: $\\mu_{\\theta_i}: \\mathcal{O}_i \\mapsto \\mathcal{A}_i$.\n\nLet $\\vec{o} = {o_1, \\dots, o_N}$, $\\vec{\\mu} = {\\mu_1, \\dots, \\mu_N}$ and the policies are parameterized by $\\vec{\\theta} = {\\theta_1, \\dots, \\theta_N}$.\n\nThe critic in MADDPG learns a centralized action-value function $Q^\\vec{\\mu}_i(\\vec{o}, a_1, \\dots, a_N)$ for the i-th agent, where $a_1 \\in \\mathcal{A}_1, \\dots, a_N \\in \\mathcal{A}_N$ are actions of all agents. Each $Q^\\vec{\\mu}_i$ is learned separately for $i=1, \\dots, N$ and therefore multiple agents can have arbitrary reward structures, including conflicting rewards in a competitive setting. Meanwhile, multiple actors, one for each agent, are exploring and upgrading the policy parameters $\\theta_i$ on their own.\n\nActor update:\n\n$$\\nabla_{\\theta_i} J(\\theta_i) = \\mathbb{E}_{\\vec{o}, a \\sim \\mathcal{D}} [\\nabla_{a_i} Q^{\\vec{\\mu}}_i (\\vec{o}, a_1, \\dots, a_N) \\nabla_{\\theta_i} \\mu_{\\theta_i}(o_i) \\rvert_{a_i=\\mu_{\\theta_i}(o_i)} ]$$\n\nWhere $\\mathcal{D}$ is the memory buffer for experience replay, containing multiple episode samples $(\\vec{o}, a_1, \\dots, a_N, r_1, \\dots, r_N, \\vec{o}')$ — given current observation $\\vec{o}$, agents take action $a_1, \\dots, a_N$ and get rewards $r_1, \\dots, r_N$, leading to the new observation $\\vec{o}'$.\n\nCritic update:\n\n\\begin{aligned} \\mathcal{L}(\\theta_i) &= \\mathbb{E}_{\\vec{o}, a_1, \\dots, a_N, r_1, \\dots, r_N, \\vec{o}'}[ (Q^{\\vec{\\mu}}_i(\\vec{o}, a_1, \\dots, a_N) - y)^2 ] & \\\\ \\text{where } y &= r_i + \\gamma Q^{\\vec{\\mu}'}_i (\\vec{o}', a'_1, \\dots, a'_N) \\rvert_{a'_j = \\mu'_{\\theta_j}} & \\scriptstyle{\\text{; TD target!}} \\end{aligned}\n\nwhere $\\vec{\\mu}'$ are the target policies with delayed softly-updated parameters.\n\nIf the policies $\\vec{\\mu}$ are unknown during the critic update, we can ask each agent to learn and evolve its own approximation of others' policies. Using the approximated policies, MADDPG still can learn efficiently although the inferred policies might not be accurate.\n\nTo mitigate the high variance triggered by the interaction between competing or collaborating agents in the environment, MADDPG proposed one more element - policy ensembles:\n\n1. Train K policies for one agent;\n2. Pick a random policy for episode rollouts;\n3. Take an ensemble of these K policies to do gradient update.\n\n• Centralized critic + decentralized actors;\n• Actors are able to use estimated policies of other agents for learning;\n• Policy ensembling is good for reducing variance.",
null,
"## TRPO#\n\n[paper|code]\n\nTo improve training stability, we should avoid parameter updates that change the policy too much at one step. Trust region policy optimization (TRPO) (Schulman, et al., 2015) carries out this idea by enforcing a KL divergence constraint on the size of policy update at each iteration.\n\nConsider the case when we are doing off-policy RL, the policy $\\beta$ used for collecting trajectories on rollout workers is different from the policy $\\pi$ to optimize for. The objective function in an off-policy model measures the total advantage over the state visitation distribution and actions, while the mismatch between the training data distribution and the true policy state distribution is compensated by importance sampling estimator:\n\n\\begin{aligned} J(\\theta) &= \\sum_{s \\in \\mathcal{S}} \\rho^{\\pi_{\\theta_\\text{old}}} \\sum_{a \\in \\mathcal{A}} \\big( \\pi_\\theta(a \\vert s) \\hat{A}_{\\theta_\\text{old}}(s, a) \\big) & \\\\ &= \\sum_{s \\in \\mathcal{S}} \\rho^{\\pi_{\\theta_\\text{old}}} \\sum_{a \\in \\mathcal{A}} \\big( \\beta(a \\vert s) \\frac{\\pi_\\theta(a \\vert s)}{\\beta(a \\vert s)} \\hat{A}_{\\theta_\\text{old}}(s, a) \\big) & \\scriptstyle{\\text{; Importance sampling}} \\\\ &= \\mathbb{E}_{s \\sim \\rho^{\\pi_{\\theta_\\text{old}}}, a \\sim \\beta} \\big[ \\frac{\\pi_\\theta(a \\vert s)}{\\beta(a \\vert s)} \\hat{A}_{\\theta_\\text{old}}(s, a) \\big] & \\end{aligned}\n\nwhere $\\theta_\\text{old}$ is the policy parameters before the update and thus known to us; $\\rho^{\\pi_{\\theta_\\text{old}}}$ is defined in the same way as above; $\\beta(a \\vert s)$ is the behavior policy for collecting trajectories. Noted that we use an estimated advantage $\\hat{A}(.)$ rather than the true advantage function $A(.)$ because the true rewards are usually unknown.\n\nWhen training on policy, theoretically the policy for collecting data is same as the policy that we want to optimize. However, when rollout workers and optimizers are running in parallel asynchronously, the behavior policy can get stale. TRPO considers this subtle difference: It labels the behavior policy as $\\pi_{\\theta_\\text{old}}(a \\vert s)$ and thus the objective function becomes:\n\n$$J(\\theta) = \\mathbb{E}_{s \\sim \\rho^{\\pi_{\\theta_\\text{old}}}, a \\sim \\pi_{\\theta_\\text{old}}} \\big[ \\frac{\\pi_\\theta(a \\vert s)}{\\pi_{\\theta_\\text{old}}(a \\vert s)} \\hat{A}_{\\theta_\\text{old}}(s, a) \\big]$$\n\nTRPO aims to maximize the objective function $J(\\theta)$ subject to, trust region constraint which enforces the distance between old and new policies measured by KL-divergence to be small enough, within a parameter δ:\n\n$$\\mathbb{E}_{s \\sim \\rho^{\\pi_{\\theta_\\text{old}}}} [D_\\text{KL}(\\pi_{\\theta_\\text{old}}(.\\vert s) \\| \\pi_\\theta(.\\vert s)] \\leq \\delta$$\n\nIn this way, the old and new policies would not diverge too much when this hard constraint is met. While still, TRPO can guarantee a monotonic improvement over policy iteration (Neat, right?). Please read the proof in the paper if interested :)\n\n## PPO#\n\n[paper|code]\n\nGiven that TRPO is relatively complicated and we still want to implement a similar constraint, proximal policy optimization (PPO) simplifies it by using a clipped surrogate objective while retaining similar performance.\n\nFirst, let’s denote the probability ratio between old and new policies as:\n\n$$r(\\theta) = \\frac{\\pi_\\theta(a \\vert s)}{\\pi_{\\theta_\\text{old}}(a \\vert s)}$$\n\nThen, the objective function of TRPO (on policy) becomes:\n\n$$J^\\text{TRPO} (\\theta) = \\mathbb{E} [ r(\\theta) \\hat{A}_{\\theta_\\text{old}}(s, a) ]$$\n\nWithout a limitation on the distance between $\\theta_\\text{old}$ and $\\theta$, to maximize $J^\\text{TRPO} (\\theta)$ would lead to instability with extremely large parameter updates and big policy ratios. PPO imposes the constraint by forcing $r(\\theta)$ to stay within a small interval around 1, precisely $[1-\\epsilon, 1+\\epsilon]$, where $\\epsilon$ is a hyperparameter.\n\n$$J^\\text{CLIP} (\\theta) = \\mathbb{E} [ \\min( r(\\theta) \\hat{A}_{\\theta_\\text{old}}(s, a), \\text{clip}(r(\\theta), 1 - \\epsilon, 1 + \\epsilon) \\hat{A}_{\\theta_\\text{old}}(s, a))]$$\n\nThe function $\\text{clip}(r(\\theta), 1 - \\epsilon, 1 + \\epsilon)$ clips the ratio to be no more than $1+\\epsilon$ and no less than $1-\\epsilon$. The objective function of PPO takes the minimum one between the original value and the clipped version and therefore we lose the motivation for increasing the policy update to extremes for better rewards.\n\nWhen applying PPO on the network architecture with shared parameters for both policy (actor) and value (critic) functions, in addition to the clipped reward, the objective function is augmented with an error term on the value estimation (formula in red) and an entropy term (formula in blue) to encourage sufficient exploration.\n\n$$J^\\text{CLIP'} (\\theta) = \\mathbb{E} [ J^\\text{CLIP} (\\theta) - \\color{red}{c_1 (V_\\theta(s) - V_\\text{target})^2} + \\color{blue}{c_2 H(s, \\pi_\\theta(.))} ]$$\n\nwhere Both $c_1$ and $c_2$ are two hyperparameter constants.\n\nPPO has been tested on a set of benchmark tasks and proved to produce awesome results with much greater simplicity.\n\nIn a later paper by Hsu et al., 2020, two common design choices in PPO are revisited, precisely (1) clipped probability ratio for policy regularization and (2) parameterize policy action space by continuous Gaussian or discrete softmax distribution. They first identified three failure modes in PPO and proposed replacements for these two designs.\n\nThe failure modes are:\n\n1. On continuous action spaces, standard PPO is unstable when rewards vanish outside bounded support.\n2. On discrete action spaces with sparse high rewards, standard PPO often gets stuck at suboptimal actions.\n3. The policy is sensitive to initialization when there are locally optimal actions close to initialization.\n\nDiscretizing the action space or use Beta distribution helps avoid failure mode 1&3 associated with Gaussian policy. Using KL regularization (same motivation as in TRPO) as an alternative surrogate model helps resolve failure mode 1&2.",
null,
"## PPG#\n\n[paper|code]\n\nSharing parameters between policy and value networks have pros and cons. It allows policy and value functions to share the learned features with each other, but it may cause conflicts between competing objectives and demands the same data for training two networks at the same time. Phasic policy gradient (PPG; Cobbe, et al 2020) modifies the traditional on-policy actor-critic policy gradient algorithm. precisely PPO, to have separate training phases for policy and value functions. In two alternating phases:\n\n1. The policy phase: updates the policy network by optimizing the PPO objective $L^\\text{CLIP} (\\theta)$;\n2. The auxiliary phase: optimizes an auxiliary objective alongside a behavioral cloning loss. In the paper, value function error is the sole auxiliary objective, but it can be quite general and includes any other additional auxiliary losses.\n\\begin{aligned} L^\\text{joint} &= L^\\text{aux} + \\beta_\\text{clone} \\cdot \\mathbb{E}_t[\\text{KL}[\\pi_{\\theta_\\text{old}}(\\cdot\\mid s_t), \\pi_\\theta(\\cdot\\mid s_t)]] \\\\ L^\\text{aux} &= L^\\text{value} = \\mathbb{E}_t \\big[\\frac{1}{2}\\big( V_w(s_t) - \\hat{V}_t^\\text{targ} \\big)^2\\big] \\end{aligned}\n\nwhere $\\beta_\\text{clone}$ is a hyperparameter for controlling how much we would like to keep the policy not diverge too much from its original behavior while optimizing the auxiliary objectives.",
null,
"where\n\n• $N_\\pi$ is the number of policy update iterations in the policy phase. Note that the policy phase performs multiple iterations of updates per single auxiliary phase.\n• $E_\\pi$ and $E_V$ control the sample reuse (i.e. the number of training epochs performed across data in the reply buffer) for the policy and value functions, respectively. Note that this happens within the policy phase and thus $E_V$ affects the learning of true value function not the auxiliary value function.\n• $E_\\text{aux}$ defines the sample reuse in the auxiliary phrase. In PPG, value function optimization can tolerate a much higher level sample reuse; for example, in the experiments of the paper, $E_\\text{aux} = 6$ while $E_\\pi = E_V = 1$.\n\nPPG leads to a significant improvement on sample efficiency compared to PPO.",
null,
"## ACER#\n\n[paper|code]\n\nACER, short for actor-critic with experience replay (Wang, et al., 2017), is an off-policy actor-critic model with experience replay, greatly increasing the sample efficiency and decreasing the data correlation. A3C builds up the foundation for ACER, but it is on policy; ACER is A3C’s off-policy counterpart. The major obstacle to making A3C off policy is how to control the stability of the off-policy estimator. ACER proposes three designs to overcome it:\n\n• Use Retrace Q-value estimation;\n• Truncate the importance weights with bias correction;\n• Apply efficient TRPO.\n\nRetrace Q-value Estimation\n\nRetrace is an off-policy return-based Q-value estimation algorithm with a nice guarantee for convergence for any target and behavior policy pair $(\\pi, \\beta)$, plus good data efficiency.\n\nRecall how TD learning works for prediction:\n\n1. Compute TD error: $\\delta_t = R_t + \\gamma \\mathbb{E}_{a \\sim \\pi} Q(S_{t+1}, a) - Q(S_t, A_t)$; the term $r_t + \\gamma \\mathbb{E}_{a \\sim \\pi} Q(s_{t+1}, a)$ is known as “TD target”. The expectation $\\mathbb{E}_{a \\sim \\pi}$ is used because for the future step the best estimation we can make is what the return would be if we follow the current policy $\\pi$.\n2. Update the value by correcting the error to move toward the goal: $Q(S_t, A_t) \\leftarrow Q(S_t, A_t) + \\alpha \\delta_t$. In other words, the incremental update on Q is proportional to the TD error: $\\Delta Q(S_t, A_t) = \\alpha \\delta_t$.\n\nWhen the rollout is off policy, we need to apply importance sampling on the Q update:\n\n$$\\Delta Q^\\text{imp}(S_t, A_t) = \\gamma^t \\prod_{1 \\leq \\tau \\leq t} \\frac{\\pi(A_\\tau \\vert S_\\tau)}{\\beta(A_\\tau \\vert S_\\tau)} \\delta_t$$\n\nThe product of importance weights looks pretty scary when we start imagining how it can cause super high variance and even explode. Retrace Q-value estimation method modifies $\\Delta Q$ to have importance weights truncated by no more than a constant $c$:\n\n$$\\Delta Q^\\text{ret}(S_t, A_t) = \\gamma^t \\prod_{1 \\leq \\tau \\leq t} \\min(c, \\frac{\\pi(A_\\tau \\vert S_\\tau)}{\\beta(A_\\tau \\vert S_\\tau)}) \\delta_t$$\n\nACER uses $Q^\\text{ret}$ as the target to train the critic by minimizing the L2 error term: $(Q^\\text{ret}(s, a) - Q(s, a))^2$.\n\nImportance weights truncation\n\nTo reduce the high variance of the policy gradient $\\hat{g}$, ACER truncates the importance weights by a constant c, plus a correction term. The label $\\hat{g}_t^\\text{acer}$ is the ACER policy gradient at time t.\n\n\\begin{aligned} \\hat{g}_t^\\text{acer} = & \\omega_t \\big( Q^\\text{ret}(S_t, A_t) - V_{\\theta_v}(S_t) \\big) \\nabla_\\theta \\ln \\pi_\\theta(A_t \\vert S_t) & \\scriptstyle{\\text{; Let }\\omega_t=\\frac{\\pi(A_t \\vert S_t)}{\\beta(A_t \\vert S_t)}} \\\\ = & \\color{blue}{\\min(c, \\omega_t) \\big( Q^\\text{ret}(S_t, A_t) - V_w(S_t) \\big) \\nabla_\\theta \\ln \\pi_\\theta(A_t \\vert S_t)} \\\\ & + \\color{red}{\\mathbb{E}_{a \\sim \\pi} \\big[ \\max(0, \\frac{\\omega_t(a) - c}{\\omega_t(a)}) \\big( Q_w(S_t, a) - V_w(S_t) \\big) \\nabla_\\theta \\ln \\pi_\\theta(a \\vert S_t) \\big]} & \\scriptstyle{\\text{; Let }\\omega_t (a) =\\frac{\\pi(a \\vert S_t)}{\\beta(a \\vert S_t)}} \\end{aligned}\n\nwhere $Q_w(.)$ and $V_w(.)$ are value functions predicted by the critic with parameter w. The first term (blue) contains the clipped important weight. The clipping helps reduce the variance, in addition to subtracting state value function $V_w(.)$ as a baseline. The second term (red) makes a correction to achieve unbiased estimation.\n\nEfficient TRPO\n\nFurthermore, ACER adopts the idea of TRPO but with a small adjustment to make it more computationally efficient: rather than measuring the KL divergence between policies before and after one update, ACER maintains a running average of past policies and forces the updated policy to not deviate far from this average.\n\nThe ACER paper is pretty dense with many equations. Hopefully, with the prior knowledge on TD learning, Q-learning, importance sampling and TRPO, you will find the paper slightly easier to follow :)\n\n## ACTKR#\n\n[paper|code]\n\nACKTR (actor-critic using Kronecker-factored trust region) (Yuhuai Wu, et al., 2017) proposed to use Kronecker-factored approximation curvature (K-FAC) to do the gradient update for both the critic and actor. K-FAC made an improvement on the computation of natural gradient, which is quite different from our standard gradient. Here is a nice, intuitive explanation of natural gradient. One sentence summary is probably:\n\n“we first consider all combinations of parameters that result in a new network a constant KL divergence away from the old network. This constant value can be viewed as the step size or learning rate. Out of all these possible combinations, we choose the one that minimizes our loss function.”\n\nI listed ACTKR here mainly for the completeness of this post, but I would not dive into details, as it involves a lot of theoretical knowledge on natural gradient and optimization methods. If interested, check these papers/posts, before reading the ACKTR paper:\n\nHere is a high level summary from the K-FAC paper:\n\n“This approximation is built in two stages. In the first, the rows and columns of the Fisher are divided into groups, each of which corresponds to all the weights in a given layer, and this gives rise to a block-partitioning of the matrix. These blocks are then approximated as Kronecker products between much smaller matrices, which we show is equivalent to making certain approximating assumptions regarding the statistics of the network’s gradients.\n\nIn the second stage, this matrix is further approximated as having an inverse which is either block-diagonal or block-tridiagonal. We justify this approximation through a careful examination of the relationships between inverse covariances, tree-structured graphical models, and linear regression. Notably, this justification doesn’t apply to the Fisher itself, and our experiments confirm that while the inverse Fisher does indeed possess this structure (approximately), the Fisher itself does not.”\n\n## SAC#\n\n[paper|code]\n\nSoft Actor-Critic (SAC) (Haarnoja et al. 2018) incorporates the entropy measure of the policy into the reward to encourage exploration: we expect to learn a policy that acts as randomly as possible while it is still able to succeed at the task. It is an off-policy actor-critic model following the maximum entropy reinforcement learning framework. A precedent work is Soft Q-learning.\n\nThree key components in SAC:\n\n• An actor-critic architecture with separate policy and value function networks;\n• An off-policy formulation that enables reuse of previously collected data for efficiency;\n• Entropy maximization to enable stability and exploration.\n\nThe policy is trained with the objective to maximize the expected return and the entropy at the same time:\n\n$$J(\\theta) = \\sum_{t=1}^T \\mathbb{E}_{(s_t, a_t) \\sim \\rho_{\\pi_\\theta}} [r(s_t, a_t) + \\alpha \\mathcal{H}(\\pi_\\theta(.\\vert s_t))]$$\n\nwhere $\\mathcal{H}(.)$ is the entropy measure and $\\alpha$ controls how important the entropy term is, known as temperature parameter. The entropy maximization leads to policies that can (1) explore more and (2) capture multiple modes of near-optimal strategies (i.e., if there exist multiple options that seem to be equally good, the policy should assign each with an equal probability to be chosen).\n\nPrecisely, SAC aims to learn three functions:\n\n• The policy with parameter $\\theta$, $\\pi_\\theta$.\n• Soft Q-value function parameterized by $w$, $Q_w$.\n• Soft state value function parameterized by $\\psi$, $V_\\psi$; theoretically we can infer $V$ by knowing $Q$ and $\\pi$, but in practice, it helps stabilize the training.\n\nSoft Q-value and soft state value are defined as:\n\n\\begin{aligned} Q(s_t, a_t) &= r(s_t, a_t) + \\gamma \\mathbb{E}_{s_{t+1} \\sim \\rho_{\\pi}(s)} [V(s_{t+1})] & \\text{; according to Bellman equation.}\\\\ \\text{where }V(s_t) &= \\mathbb{E}_{a_t \\sim \\pi} [Q(s_t, a_t) - \\alpha \\log \\pi(a_t \\vert s_t)] & \\text{; soft state value function.} \\end{aligned}\n$$\\text{Thus, } Q(s_t, a_t) = r(s_t, a_t) + \\gamma \\mathbb{E}_{(s_{t+1}, a_{t+1}) \\sim \\rho_{\\pi}} [Q(s_{t+1}, a_{t+1}) - \\alpha \\log \\pi(a_{t+1} \\vert s_{t+1})]$$\n\n$\\rho_\\pi(s)$ and $\\rho_\\pi(s, a)$ denote the state and the state-action marginals of the state distribution induced by the policy $\\pi(a \\vert s)$; see the similar definitions in DPG section.\n\nThe soft state value function is trained to minimize the mean squared error:\n\n\\begin{aligned} J_V(\\psi) &= \\mathbb{E}_{s_t \\sim \\mathcal{D}} [\\frac{1}{2} \\big(V_\\psi(s_t) - \\mathbb{E}[Q_w(s_t, a_t) - \\log \\pi_\\theta(a_t \\vert s_t)] \\big)^2] \\\\ \\text{with gradient: }\\nabla_\\psi J_V(\\psi) &= \\nabla_\\psi V_\\psi(s_t)\\big( V_\\psi(s_t) - Q_w(s_t, a_t) + \\log \\pi_\\theta (a_t \\vert s_t) \\big) \\end{aligned}\n\nwhere $\\mathcal{D}$ is the replay buffer.\n\nThe soft Q function is trained to minimize the soft Bellman residual:\n\n\\begin{aligned} J_Q(w) &= \\mathbb{E}_{(s_t, a_t) \\sim \\mathcal{D}} [\\frac{1}{2}\\big( Q_w(s_t, a_t) - (r(s_t, a_t) + \\gamma \\mathbb{E}_{s_{t+1} \\sim \\rho_\\pi(s)}[V_{\\bar{\\psi}}(s_{t+1})]) \\big)^2] \\\\ \\text{with gradient: } \\nabla_w J_Q(w) &= \\nabla_w Q_w(s_t, a_t) \\big( Q_w(s_t, a_t) - r(s_t, a_t) - \\gamma V_{\\bar{\\psi}}(s_{t+1})\\big) \\end{aligned}\n\nwhere $\\bar{\\psi}$ is the target value function which is the exponential moving average (or only gets updated periodically in a “hard” way), just like how the parameter of the target Q network is treated in DQN to stabilize the training.\n\nSAC updates the policy to minimize the KL-divergence:\n\n\\begin{aligned} \\pi_\\text{new} &= \\arg\\min_{\\pi' \\in \\Pi} D_\\text{KL} \\Big( \\pi'(.\\vert s_t) \\| \\frac{\\exp(Q^{\\pi_\\text{old}}(s_t, .))}{Z^{\\pi_\\text{old}}(s_t)} \\Big) \\\\[6pt] &= \\arg\\min_{\\pi' \\in \\Pi} D_\\text{KL} \\big( \\pi'(.\\vert s_t) \\| \\exp(Q^{\\pi_\\text{old}}(s_t, .) - \\log Z^{\\pi_\\text{old}}(s_t)) \\big) \\\\[6pt] \\text{objective for update: } J_\\pi(\\theta) &= \\nabla_\\theta D_\\text{KL} \\big( \\pi_\\theta(. \\vert s_t) \\| \\exp(Q_w(s_t, .) - \\log Z_w(s_t)) \\big) \\\\[6pt] &= \\mathbb{E}_{a_t\\sim\\pi} \\Big[ - \\log \\big( \\frac{\\exp(Q_w(s_t, a_t) - \\log Z_w(s_t))}{\\pi_\\theta(a_t \\vert s_t)} \\big) \\Big] \\\\[6pt] &= \\mathbb{E}_{a_t\\sim\\pi} [ \\log \\pi_\\theta(a_t \\vert s_t) - Q_w(s_t, a_t) + \\log Z_w(s_t) ] \\end{aligned}\n\nwhere $\\Pi$ is the set of potential policies that we can model our policy as to keep them tractable; for example, $\\Pi$ can be the family of Gaussian mixture distributions, expensive to model but highly expressive and still tractable. $Z^{\\pi_\\text{old}}(s_t)$ is the partition function to normalize the distribution. It is usually intractable but does not contribute to the gradient. How to minimize $J_\\pi(\\theta)$ depends our choice of $\\Pi$.\n\nThis update guarantees that $Q^{\\pi_\\text{new}}(s_t, a_t) \\geq Q^{\\pi_\\text{old}}(s_t, a_t)$, please check the proof on this lemma in the Appendix B.2 in the original paper.\n\nOnce we have defined the objective functions and gradients for soft action-state value, soft state value and the policy network, the soft actor-critic algorithm is straightforward:",
null,
"## SAC with Automatically Adjusted Temperature#\n\n[paper|code]\n\nSAC is brittle with respect to the temperature parameter. Unfortunately it is difficult to adjust temperature, because the entropy can vary unpredictably both across tasks and during training as the policy becomes better. An improvement on SAC formulates a constrained optimization problem: while maximizing the expected return, the policy should satisfy a minimum entropy constraint:\n\n$$\\max_{\\pi_0, \\dots, \\pi_T} \\mathbb{E} \\Big[ \\sum_{t=0}^T r(s_t, a_t)\\Big] \\text{s.t. } \\forall t\\text{, } \\mathcal{H}(\\pi_t) \\geq \\mathcal{H}_0$$\n\nwhere $\\mathcal{H}_0$ is a predefined minimum policy entropy threshold.\n\nThe expected return $\\mathbb{E} \\Big[ \\sum_{t=0}^T r(s_t, a_t)\\Big]$ can be decomposed into a sum of rewards at all the time steps. Because the policy $\\pi_t$ at time t has no effect on the policy at the earlier time step, $\\pi_{t-1}$, we can maximize the return at different steps backward in time — this is essentially DP.\n\n$$\\underbrace{\\max_{\\pi_0} \\Big( \\mathbb{E}[r(s_0, a_0)]+ \\underbrace{\\max_{\\pi_1} \\Big(\\mathbb{E}[...] + \\underbrace{\\max_{\\pi_T} \\mathbb{E}[r(s_T, a_T)]}_\\text{1st maximization} \\Big)}_\\text{second but last maximization} \\Big)}_\\text{last maximization}$$\n\nwhere we consider $\\gamma=1$.\n\nSo we start the optimization from the last timestep $T$:\n\n$$\\text{maximize } \\mathbb{E}_{(s_T, a_T) \\sim \\rho_{\\pi}} [ r(s_T, a_T) ] \\text{ s.t. } \\mathcal{H}(\\pi_T) - \\mathcal{H}_0 \\geq 0$$\n\nFirst, let us define the following functions:\n\n\\begin{aligned} h(\\pi_T) &= \\mathcal{H}(\\pi_T) - \\mathcal{H}_0 = \\mathbb{E}_{(s_T, a_T) \\sim \\rho_{\\pi}} [-\\log \\pi_T(a_T\\vert s_T)] - \\mathcal{H}_0\\\\ f(\\pi_T) &= \\begin{cases} \\mathbb{E}_{(s_T, a_T) \\sim \\rho_{\\pi}} [ r(s_T, a_T) ], & \\text{if }h(\\pi_T) \\geq 0 \\\\ -\\infty, & \\text{otherwise} \\end{cases} \\end{aligned}\n\nAnd the optimization becomes:\n\n$$\\text{maximize } f(\\pi_T) \\text{ s.t. } h(\\pi_T) \\geq 0$$\n\nTo solve the maximization optimization with inequality constraint, we can construct a Lagrangian expression with a Lagrange multiplier (also known as “dual variable”), $\\alpha_T$:\n\n$$L(\\pi_T, \\alpha_T) = f(\\pi_T) + \\alpha_T h(\\pi_T)$$\n\nConsidering the case when we try to minimize $L(\\pi_T, \\alpha_T)$ with respect to $\\alpha_T$ - given a particular value $\\pi_T$,\n\n• If the constraint is satisfied, $h(\\pi_T) \\geq 0$, at best we can set $\\alpha_T=0$ since we have no control over the value of $f(\\pi_T)$. Thus, $L(\\pi_T, 0) = f(\\pi_T)$.\n• If the constraint is invalidated, $h(\\pi_T) < 0$, we can achieve $L(\\pi_T, \\alpha_T) \\to -\\infty$ by taking $\\alpha_T \\to \\infty$. Thus, $L(\\pi_T, \\infty) = -\\infty = f(\\pi_T)$.\n\nIn either case, we can recover the following equation,\n\n$$f(\\pi_T) = \\min_{\\alpha_T \\geq 0} L(\\pi_T, \\alpha_T)$$\n\nAt the same time, we want to maximize $f(\\pi_T)$,\n\n$$\\max_{\\pi_T} f(\\pi_T) = \\min_{\\alpha_T \\geq 0} \\max_{\\pi_T} L(\\pi_T, \\alpha_T)$$\n\nTherefore, to maximize $f(\\pi_T)$, the dual problem is listed as below. Note that to make sure $\\max_{\\pi_T} f(\\pi_T)$ is properly maximized and would not become $-\\infty$, the constraint has to be satisfied.\n\n\\begin{aligned} \\max_{\\pi_T} \\mathbb{E}[ r(s_T, a_T) ] &= \\max_{\\pi_T} f(\\pi_T) \\\\ &= \\min_{\\alpha_T \\geq 0} \\max_{\\pi_T} L(\\pi_T, \\alpha_T) \\\\ &= \\min_{\\alpha_T \\geq 0} \\max_{\\pi_T} f(\\pi_T) + \\alpha_T h(\\pi_T) \\\\ &= \\min_{\\alpha_T \\geq 0} \\max_{\\pi_T} \\mathbb{E}_{(s_T, a_T) \\sim \\rho_{\\pi}} [ r(s_T, a_T) ] + \\alpha_T ( \\mathbb{E}_{(s_T, a_T) \\sim \\rho_{\\pi}} [-\\log \\pi_T(a_T\\vert s_T)] - \\mathcal{H}_0) \\\\ &= \\min_{\\alpha_T \\geq 0} \\max_{\\pi_T} \\mathbb{E}_{(s_T, a_T) \\sim \\rho_{\\pi}} [ r(s_T, a_T) - \\alpha_T \\log \\pi_T(a_T\\vert s_T)] - \\alpha_T \\mathcal{H}_0 \\\\ &= \\min_{\\alpha_T \\geq 0} \\max_{\\pi_T} \\mathbb{E}_{(s_T, a_T) \\sim \\rho_{\\pi}} [ r(s_T, a_T) + \\alpha_T \\mathcal{H}(\\pi_T) - \\alpha_T \\mathcal{H}_0 ] \\end{aligned}\n\nWe could compute the optimal $\\pi_T$ and $\\alpha_T$ iteratively. First given the current $\\alpha_T$, get the best policy $\\pi_T^{*}$ that maximizes $L(\\pi_T^{*}, \\alpha_T)$. Then plug in $\\pi_T^{*}$ and compute $\\alpha_T^{*}$ that minimizes $L(\\pi_T^{*}, \\alpha_T)$. Assuming we have one neural network for policy and one network for temperature parameter, the iterative update process is more aligned with how we update network parameters during training.\n\n\\begin{aligned} \\pi^{*}_T &= \\arg\\max_{\\pi_T} \\mathbb{E}_{(s_T, a_T) \\sim \\rho_{\\pi}} [ r(s_T, a_T) + \\alpha_T \\mathcal{H}(\\pi_T) - \\alpha_T \\mathcal{H}_0 ] \\\\ \\color{blue}{\\alpha^{*}_T} &\\color{blue}{=} \\color{blue}{\\arg\\min_{\\alpha_T \\geq 0} \\mathbb{E}_{(s_T, a_T) \\sim \\rho_{\\pi^{*}}} [\\alpha_T \\mathcal{H}(\\pi^{*}_T) - \\alpha_T \\mathcal{H}_0 ]} \\end{aligned}\n$$\\text{Thus, }\\max_{\\pi_T} \\mathbb{E} [ r(s_T, a_T) ] = \\mathbb{E}_{(s_T, a_T) \\sim \\rho_{\\pi^{*}}} [ r(s_T, a_T) + \\alpha^{*}_T \\mathcal{H}(\\pi^{*}_T) - \\alpha^{*}_T \\mathcal{H}_0 ]$$\n\nNow let’s go back to the soft Q value function:\n\n\\begin{aligned} Q_{T-1}(s_{T-1}, a_{T-1}) &= r(s_{T-1}, a_{T-1}) + \\mathbb{E} [Q(s_T, a_T) - \\alpha_T \\log \\pi(a_T \\vert s_T)] \\\\ &= r(s_{T-1}, a_{T-1}) + \\mathbb{E} [r(s_T, a_T)] + \\alpha_T \\mathcal{H}(\\pi_T) \\\\ Q_{T-1}^{*}(s_{T-1}, a_{T-1}) &= r(s_{T-1}, a_{T-1}) + \\max_{\\pi_T} \\mathbb{E} [r(s_T, a_T)] + \\alpha_T \\mathcal{H}(\\pi^{*}_T) & \\text{; plug in the optimal }\\pi_T^{*} \\end{aligned}\n\nTherefore the expected return is as follows, when we take one step further back to the time step $T-1$:\n\n\\begin{aligned} &\\max_{\\pi_{T-1}}\\Big(\\mathbb{E}[r(s_{T-1}, a_{T-1})] + \\max_{\\pi_T} \\mathbb{E}[r(s_T, a_T] \\Big) \\\\ &= \\max_{\\pi_{T-1}} \\Big( Q^{*}_{T-1}(s_{T-1}, a_{T-1}) - \\alpha^{*}_T \\mathcal{H}(\\pi^{*}_T) \\Big) & \\text{; should s.t. } \\mathcal{H}(\\pi_{T-1}) - \\mathcal{H}_0 \\geq 0 \\\\ &= \\min_{\\alpha_{T-1} \\geq 0} \\max_{\\pi_{T-1}} \\Big( Q^{*}_{T-1}(s_{T-1}, a_{T-1}) - \\alpha^{*}_T \\mathcal{H}(\\pi^{*}_T) + \\alpha_{T-1} \\big( \\mathcal{H}(\\pi_{T-1}) - \\mathcal{H}_0 \\big) \\Big) & \\text{; dual problem w/ Lagrangian.} \\\\ &= \\min_{\\alpha_{T-1} \\geq 0} \\max_{\\pi_{T-1}} \\Big( Q^{*}_{T-1}(s_{T-1}, a_{T-1}) + \\alpha_{T-1} \\mathcal{H}(\\pi_{T-1}) - \\alpha_{T-1}\\mathcal{H}_0 \\Big) - \\alpha^{*}_T \\mathcal{H}(\\pi^{*}_T) \\end{aligned}\n\nSimilar to the previous step,\n\n\\begin{aligned} \\pi^{*}_{T-1} &= \\arg\\max_{\\pi_{T-1}} \\mathbb{E}_{(s_{T-1}, a_{T-1}) \\sim \\rho_\\pi} [Q^{*}_{T-1}(s_{T-1}, a_{T-1}) + \\alpha_{T-1} \\mathcal{H}(\\pi_{T-1}) - \\alpha_{T-1} \\mathcal{H}_0 ] \\\\ \\color{green}{\\alpha^{*}_{T-1}} &\\color{green}{=} \\color{green}{\\arg\\min_{\\alpha_{T-1} \\geq 0} \\mathbb{E}_{(s_{T-1}, a_{T-1}) \\sim \\rho_{\\pi^{*}}} [ \\alpha_{T-1} \\mathcal{H}(\\pi^{*}_{T-1}) - \\alpha_{T-1}\\mathcal{H}_0 ]} \\end{aligned}\n\nThe equation for updating $\\alpha_{T-1}$ in green has the same format as the equation for updating $\\alpha_{T-1}$ in blue above. By repeating this process, we can learn the optimal temperature parameter in every step by minimizing the same objective function:\n\n$$J(\\alpha) = \\mathbb{E}_{a_t \\sim \\pi_t} [-\\alpha \\log \\pi_t(a_t \\mid s_t) - \\alpha \\mathcal{H}_0]$$\n\nThe final algorithm is same as SAC except for learning $\\alpha$ explicitly with respect to the objective $J(\\alpha)$ (see Fig. 7):",
null,
"## TD3#\n\n[paper|code]\n\nThe Q-learning algorithm is commonly known to suffer from the overestimation of the value function. This overestimation can propagate through the training iterations and negatively affect the policy. This property directly motivated Double Q-learning and Double DQN: the action selection and Q-value update are decoupled by using two value networks.\n\nTwin Delayed Deep Deterministic (short for TD3; Fujimoto et al., 2018) applied a couple of tricks on DDPG to prevent the overestimation of the value function:\n\n(1) Clipped Double Q-learning: In Double Q-Learning, the action selection and Q-value estimation are made by two networks separately. In the DDPG setting, given two deterministic actors $(\\mu_{\\theta_1}, \\mu_{\\theta_2})$ with two corresponding critics $(Q_{w_1}, Q_{w_2})$, the Double Q-learning Bellman targets look like:\n\n\\begin{aligned} y_1 &= r + \\gamma Q_{w_2}(s', \\mu_{\\theta_1}(s'))\\\\ y_2 &= r + \\gamma Q_{w_1}(s', \\mu_{\\theta_2}(s')) \\end{aligned}\n\nHowever, due to the slow changing policy, these two networks could be too similar to make independent decisions. The Clipped Double Q-learning instead uses the minimum estimation among two so as to favor underestimation bias which is hard to propagate through training:\n\n\\begin{aligned} y_1 &= r + \\gamma \\min_{i=1,2}Q_{w_i}(s', \\mu_{\\theta_1}(s'))\\\\ y_2 &= r + \\gamma \\min_{i=1,2} Q_{w_i}(s', \\mu_{\\theta_2}(s')) \\end{aligned}\n\n(2) Delayed update of Target and Policy Networks: In the actor-critic model, policy and value updates are deeply coupled: Value estimates diverge through overestimation when the policy is poor, and the policy will become poor if the value estimate itself is inaccurate.\n\nTo reduce the variance, TD3 updates the policy at a lower frequency than the Q-function. The policy network stays the same until the value error is small enough after several updates. The idea is similar to how the periodically-updated target network stay as a stable objective in DQN.\n\n(3) Target Policy Smoothing: Given a concern with deterministic policies that they can overfit to narrow peaks in the value function, TD3 introduced a smoothing regularization strategy on the value function: adding a small amount of clipped random noises to the selected action and averaging over mini-batches.\n\n\\begin{aligned} y &= r + \\gamma Q_w (s', \\mu_{\\theta}(s') + \\epsilon) & \\\\ \\epsilon &\\sim \\text{clip}(\\mathcal{N}(0, \\sigma), -c, +c) & \\scriptstyle{\\text{ ; clipped random noises.}} \\end{aligned}\n\nThis approach mimics the idea of SARSA update and enforces that similar actions should have similar values.\n\nHere is the final algorithm:",
null,
"## SVPG#\n\n[paper|code for SVPG]\n\nStein Variational Policy Gradient (SVPG; Liu et al, 2017) applies the Stein variational gradient descent (SVGD; Liu and Wang, 2016) algorithm to update the policy parameter $\\theta$.\n\nIn the setup of maximum entropy policy optimization, $\\theta$ is considered as a random variable $\\theta \\sim q(\\theta)$ and the model is expected to learn this distribution $q(\\theta)$. Assuming we know a prior on how $q$ might look like, $q_0$, and we would like to guide the learning process to not make $\\theta$ too far away from $q_0$ by optimizing the following objective function:\n\n$$\\hat{J}(\\theta) = \\mathbb{E}_{\\theta \\sim q} [J(\\theta)] - \\alpha D_\\text{KL}(q\\|q_0)$$\n\nwhere $\\mathbb{E}_{\\theta \\sim q} [R(\\theta)]$ is the expected reward when $\\theta \\sim q(\\theta)$ and $D_\\text{KL}$ is the KL divergence.\n\nIf we don’t have any prior information, we might set $q_0$ as a uniform distribution and set $q_0(\\theta)$ to a constant. Then the above objective function becomes SAC, where the entropy term encourages exploration:\n\n\\begin{aligned} \\hat{J}(\\theta) &= \\mathbb{E}_{\\theta \\sim q} [J(\\theta)] - \\alpha D_\\text{KL}(q\\|q_0) \\\\ &= \\mathbb{E}_{\\theta \\sim q} [J(\\theta)] - \\alpha \\mathbb{E}_{\\theta \\sim q} [\\log q(\\theta) - \\log q_0(\\theta)] \\\\ &= \\mathbb{E}_{\\theta \\sim q} [J(\\theta)] + \\alpha H(q(\\theta)) \\end{aligned}\n\nLet’s take the derivative of $\\hat{J}(\\theta) = \\mathbb{E}_{\\theta \\sim q} [J(\\theta)] - \\alpha D_\\text{KL}(q|q_0)$ w.r.t. $q$:\n\n\\begin{aligned} \\nabla_q \\hat{J}(\\theta) &= \\nabla_q \\big( \\mathbb{E}_{\\theta \\sim q} [J(\\theta)] - \\alpha D_\\text{KL}(q\\|q_0) \\big) \\\\ &= \\nabla_q \\int_\\theta \\big( q(\\theta) J(\\theta) - \\alpha q(\\theta)\\log q(\\theta) + \\alpha q(\\theta) \\log q_0(\\theta) \\big) \\\\ &= \\int_\\theta \\big( J(\\theta) - \\alpha \\log q(\\theta) -\\alpha + \\alpha \\log q_0(\\theta) \\big) \\\\ &= 0 \\end{aligned}\n\nThe optimal distribution is:\n\n$$\\log q^{*}(\\theta) = \\frac{1}{\\alpha} J(\\theta) + \\log q_0(\\theta) - 1 \\text{ thus } \\underbrace{ q^{*}(\\theta) }_\\textrm{\"posterior\"} \\propto \\underbrace{\\exp ( J(\\theta) / \\alpha )}_\\textrm{\"likelihood\"} \\underbrace{q_0(\\theta)}_\\textrm{prior}$$\n\nThe temperature $\\alpha$ decides a tradeoff between exploitation and exploration. When $\\alpha \\rightarrow 0$, $\\theta$ is updated only according to the expected return $J(\\theta)$. When $\\alpha \\rightarrow \\infty$, $\\theta$ always follows the prior belief.\n\nWhen using the SVGD method to estimate the target posterior distribution $q(\\theta)$, it relies on a set of particle $\\{\\theta_i\\}_{i=1}^n$ (independently trained policy agents) and each is updated:\n\n$$\\theta_i \\gets \\theta_i + \\epsilon \\phi^{*}(\\theta_i) \\text{ where } \\phi^{*} = \\max_{\\phi \\in \\mathcal{H}} \\{ - \\nabla_\\epsilon D_\\text{KL} (q'_{[\\theta + \\epsilon \\phi(\\theta)]} \\| q) \\text{ s.t. } \\|\\phi\\|_{\\mathcal{H}} \\leq 1\\}$$\n\nwhere $\\epsilon$ is a learning rate and $\\phi^{*}$ is the unit ball of a RKHS (reproducing kernel Hilbert space) $\\mathcal{H}$ of $\\theta$-shaped value vectors that maximally decreases the KL divergence between the particles and the target distribution. $q'(.)$ is the distribution of $\\theta + \\epsilon \\phi(\\theta)$.\n\nMethod Update space\nPlain gradient $\\Delta \\theta$ on the parameter space\nNatural gradient $\\Delta \\theta$ on the search distribution space\nSVGD $\\Delta \\theta$ on the kernel function space (edited)\n\nOne estimation of $\\phi^{*}$ has the following form. A positive definite kernel $k(\\vartheta, \\theta)$, i.e. a Gaussian radial basis function, measures the similarity between particles.\n\n\\begin{aligned} \\phi^{*}(\\theta_i) &= \\mathbb{E}_{\\vartheta \\sim q'} [\\nabla_\\vartheta \\log q(\\vartheta) k(\\vartheta, \\theta_i) + \\nabla_\\vartheta k(\\vartheta, \\theta_i)]\\\\ &= \\frac{1}{n} \\sum_{j=1}^n [\\color{red}{\\nabla_{\\theta_j} \\log q(\\theta_j) k(\\theta_j, \\theta_i)} + \\color{green}{\\nabla_{\\theta_j} k(\\theta_j, \\theta_i)}] & \\scriptstyle{\\text{;approximate }q'\\text{ with current particle values}} \\end{aligned}\n• The first term in red encourages $\\theta_i$ learning towards the high probability regions of $q$ that is shared across similar particles. => to be similar to other particles\n• The second term in green pushes particles away from each other and therefore diversifies the policy. => to be dissimilar to other particles",
null,
"Usually the temperature $\\alpha$ follows an annealing scheme so that the training process does more exploration at the beginning but more exploitation at a later stage.\n\n## IMPALA#\n\n[paper|code]\n\nIn order to scale up RL training to achieve a very high throughput, IMPALA (“Importance Weighted Actor-Learner Architecture”) framework decouples acting from learning on top of basic actor-critic setup and learns from all experience trajectories with V-trace off-policy correction.\n\nMultiple actors generate experience in parallel, while the learner optimizes both policy and value function parameters using all the generated experience. Actors update their parameters with the latest policy from the learner periodically. Because acting and learning are decoupled, we can add many more actor machines to generate a lot more trajectories per time unit. As the training policy and the behavior policy are not totally synchronized, there is a gap between them and thus we need off-policy corrections.",
null,
"Let the value function $V_\\theta$ parameterized by $\\theta$ and the policy $\\pi_\\phi$ parameterized by $\\phi$. Also we know the trajectories in the replay buffer are collected by a slightly older policy $\\mu$.\n\nAt the training time $t$, given $(s_t, a_t, s_{t+1}, r_t)$, the value function parameter $\\theta$ is learned through an L2 loss between the current value and a V-trace value target. The $n$-step V-trace target is defined as:\n\n\\begin{aligned} v_t &= V_\\theta(s_t) + \\sum_{i=t}^{t+n-1} \\gamma^{i-t} \\big(\\prod_{j=t}^{i-1} c_j\\big) \\color{red}{\\delta_i V} \\\\ &= V_\\theta(s_t) + \\sum_{i=t}^{t+n-1} \\gamma^{i-t} \\big(\\prod_{j=t}^{i-1} c_j\\big) \\color{red}{\\rho_i (r_i + \\gamma V_\\theta(s_{i+1}) - V_\\theta(s_i))} \\end{aligned}\n\nwhere the red part $\\delta_i V$ is a temporal difference for $V$. $\\rho_i = \\min\\big(\\bar{\\rho}, \\frac{\\pi(a_i \\vert s_i)}{\\mu(a_i \\vert s_i)}\\big)$ and $c_j = \\min\\big(\\bar{c}, \\frac{\\pi(a_j \\vert s_j)}{\\mu(a_j \\vert s_j)}\\big)$ are truncated importance sampling (IS) weights. The product of $c_t, \\dots, c_{i-1}$ measures how much a temporal difference $\\delta_i V$ observed at time $i$ impacts the update of the value function at a previous time $t$. In the on-policy case, we have $\\rho_i=1$ and $c_j=1$ (assuming $\\bar{c} \\geq 1$) and therefore the V-trace target becomes on-policy $n$-step Bellman target.\n\n$\\bar{\\rho}$ and $\\bar{c}$ are two truncation constants with $\\bar{\\rho} \\geq \\bar{c}$. $\\bar{\\rho}$ impacts the fixed-point of the value function we converge to and $\\bar{c}$ impacts the speed of convergence. When $\\bar{\\rho} =\\infty$ (untruncated), we converge to the value function of the target policy $V^\\pi$; when $\\bar{\\rho}$ is close to 0, we evaluate the value function of the behavior policy $V^\\mu$; when in-between, we evaluate a policy between $\\pi$ and $\\mu$.\n\nThe value function parameter is therefore updated in the direction of:\n\n$$\\Delta\\theta = (v_t - V_\\theta(s_t))\\nabla_\\theta V_\\theta(s_t)$$\n\nThe policy parameter $\\phi$ is updated through policy gradient,\n\n\\begin{aligned} \\Delta \\phi &= \\rho_t \\nabla_\\phi \\log \\pi_\\phi(a_t \\vert s_t) \\big(r_t + \\gamma v_{t+1} - V_\\theta(s_t)\\big) + \\nabla_\\phi H(\\pi_\\phi)\\\\ &= \\rho_t \\nabla_\\phi \\log \\pi_\\phi(a_t \\vert s_t) \\big(r_t + \\gamma v_{t+1} - V_\\theta(s_t)\\big) - \\nabla_\\phi \\sum_a \\pi_\\phi(a\\vert s_t)\\log \\pi_\\phi(a\\vert s_t) \\end{aligned}\n\nwhere $r_t + \\gamma v_{t+1}$ is the estimated Q value, from which a state-dependent baseline $V_\\theta(s_t)$ is subtracted. $H(\\pi_\\phi)$ is an entropy bonus to encourage exploration.\n\nIn the experiments, IMPALA is used to train one agent over multiple tasks. Two different model architectures are involved, a shallow model (left) and a deep residual model (right).",
null,
"# Quick Summary#\n\nAfter reading through all the algorithms above, I list a few building blocks or principles that seem to be common among them:\n\n• Try to reduce the variance and keep the bias unchanged to stabilize learning.\n• Off-policy gives us better exploration and helps us use data samples more efficiently.\n• Experience replay (training data sampled from a replay memory buffer);\n• Target network that is either frozen periodically or updated slower than the actively learned policy network;\n• Batch normalization;\n• Entropy-regularized reward;\n• The critic and actor can share lower layer parameters of the network and two output heads for policy and value functions.\n• It is possible to learn with deterministic policy rather than stochastic one.\n• Put constraint on the divergence between policy updates.\n• New optimization methods (such as K-FAC).\n• Entropy maximization of the policy helps encourage exploration.\n• Try not to overestimate the value function.\n• Think twice whether the policy and value network should share parameters.\n• TBA more.\n\nCited as:\n\n@article{weng2018PG,\nauthor = \"Weng, Lilian\",\njournal = \"lilianweng.github.io\",\nyear = \"2018\",\n}\n\n\n jeremykun.com Markov Chain Monte Carlo Without all the Bullshit\n\n Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction; 2nd Edition. 2017.\n\n John Schulman, et al. “High-dimensional continuous control using generalized advantage estimation.\" ICLR 2016.\n\n Thomas Degris, Martha White, and Richard S. Sutton. “Off-policy actor-critic.\" ICML 2012.\n\n timvieira.github.io Importance sampling\n\n Mnih, Volodymyr, et al. “Asynchronous methods for deep reinforcement learning.\" ICML. 2016.\n\n David Silver, et al. “Deterministic policy gradient algorithms.\" ICML. 2014.\n\n Timothy P. Lillicrap, et al. “Continuous control with deep reinforcement learning.\" arXiv preprint arXiv:1509.02971 (2015).\n\n Ryan Lowe, et al. “Multi-agent actor-critic for mixed cooperative-competitive environments.\" NIPS. 2017.\n\n John Schulman, et al. “Trust region policy optimization.\" ICML. 2015.\n\n Ziyu Wang, et al. “Sample efficient actor-critic with experience replay.\" ICLR 2017.\n\n Rémi Munos, Tom Stepleton, Anna Harutyunyan, and Marc Bellemare. “Safe and efficient off-policy reinforcement learning” NIPS. 2016.\n\n Yuhuai Wu, et al. “Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation.\" NIPS. 2017.\n\n kvfrans.com A intuitive explanation of natural gradient descent\n\n “Going Deeper Into Reinforcement Learning: Fundamentals of Policy Gradients.\" - Seita’s Place, Mar 2017.\n\n “Notes on the Generalized Advantage Estimation Paper.\" - Seita’s Place, Apr, 2017.\n\n Gabriel Barth-Maron, et al. “Distributed Distributional Deterministic Policy Gradients.\" ICLR 2018 poster.\n\n Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. “Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor.\" arXiv preprint arXiv:1801.01290 (2018).\n\n Scott Fujimoto, Herke van Hoof, and Dave Meger. “Addressing Function Approximation Error in Actor-Critic Methods.\" arXiv preprint arXiv:1802.09477 (2018).\n\n Tuomas Haarnoja, et al. “Soft Actor-Critic Algorithms and Applications.\" arXiv preprint arXiv:1812.05905 (2018).\n\n David Knowles. “Lagrangian Duality for Dummies” Nov 13, 2010.\n\n Yang Liu, et al. “Stein variational policy gradient.\" arXiv preprint arXiv:1704.02399 (2017).\n\n Qiang Liu and Dilin Wang. “Stein variational gradient descent: A general purpose bayesian inference algorithm.\" NIPS. 2016.\n\n Lasse Espeholt, et al. “IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures” arXiv preprint 1802.01561 (2018).\n\n Karl Cobbe, et al. “Phasic Policy Gradient.\" arXiv preprint arXiv:2009.04416 (2020).\n\n Chloe Ching-Yun Hsu, et al. “Revisiting Design Choices in Proximal Policy Optimization.\" arXiv preprint arXiv:2009.10897 (2020)."
] | [
null,
"https://lilianweng.github.io/posts/2018-04-08-policy-gradient/general_form_policy_gradient.png",
null,
"https://lilianweng.github.io/posts/2018-04-08-policy-gradient/A3C_vs_A2C.png",
null,
"https://lilianweng.github.io/posts/2018-04-08-policy-gradient/DDPG_algo.png",
null,
"https://lilianweng.github.io/posts/2018-04-08-policy-gradient/D4PG_algo.png",
null,
"https://lilianweng.github.io/posts/2018-04-08-policy-gradient/MADDPG.png",
null,
"https://lilianweng.github.io/posts/2018-04-08-policy-gradient/ppo-loss-functions.png",
null,
"https://lilianweng.github.io/posts/2018-04-08-policy-gradient/PPG_algo.png",
null,
"https://lilianweng.github.io/posts/2018-04-08-policy-gradient/PPG_exp.png",
null,
"https://lilianweng.github.io/posts/2018-04-08-policy-gradient/SAC_algo.png",
null,
"https://lilianweng.github.io/posts/2018-04-08-policy-gradient/SAC2_algo.png",
null,
"https://lilianweng.github.io/posts/2018-04-08-policy-gradient/TD3.png",
null,
"https://lilianweng.github.io/posts/2018-04-08-policy-gradient/SVPG.png",
null,
"https://lilianweng.github.io/posts/2018-04-08-policy-gradient/IMPALA.png",
null,
"https://lilianweng.github.io/posts/2018-04-08-policy-gradient/IMPALA-arch.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8368803,"math_prob":0.9990337,"size":55378,"snap":"2023-40-2023-50","text_gpt3_token_len":13480,"char_repetition_ratio":0.1564272,"word_repetition_ratio":0.009825974,"special_character_ratio":0.24601828,"punctuation_ratio":0.110892646,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99995255,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28],"im_url_duplicate_count":[null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-02T12:09:28Z\",\"WARC-Record-ID\":\"<urn:uuid:b774c541-407c-4ab8-afa7-12407578e9f6>\",\"Content-Length\":\"205127\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:177148d0-ccd2-4a78-a3a1-4f2710f65cc6>\",\"WARC-Concurrent-To\":\"<urn:uuid:82418485-451a-46f2-8557-159882e5c467>\",\"WARC-IP-Address\":\"185.199.108.153\",\"WARC-Target-URI\":\"https://lilianweng.github.io/posts/2018-04-08-policy-gradient/\",\"WARC-Payload-Digest\":\"sha1:KCLRJKWLZ7BZWVYEVDNXZHSSZUWTU5MA\",\"WARC-Block-Digest\":\"sha1:VK4NMQB56Y5SKQ5C6W7DUWXF4LBQX24G\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510994.61_warc_CC-MAIN-20231002100910-20231002130910-00874.warc.gz\"}"} |
https://www.colorhexa.com/0f5e72 | [
"# #0f5e72 Color Information\n\nIn a RGB color space, hex #0f5e72 is composed of 5.9% red, 36.9% green and 44.7% blue. Whereas in a CMYK color space, it is composed of 86.8% cyan, 17.5% magenta, 0% yellow and 55.3% black. It has a hue angle of 192.1 degrees, a saturation of 76.7% and a lightness of 25.3%. #0f5e72 color hex could be obtained by blending #1ebce4 with #000000. Closest websafe color is: #006666.\n\n• R 6\n• G 37\n• B 45\nRGB color chart\n• C 87\n• M 18\n• Y 0\n• K 55\nCMYK color chart\n\n#0f5e72 color description : Very dark cyan.\n\n# #0f5e72 Color Conversion\n\nThe hexadecimal color #0f5e72 has RGB values of R:15, G:94, B:114 and CMYK values of C:0.87, M:0.18, Y:0, K:0.55. Its decimal value is 1007218.\n\nHex triplet RGB Decimal 0f5e72 `#0f5e72` 15, 94, 114 `rgb(15,94,114)` 5.9, 36.9, 44.7 `rgb(5.9%,36.9%,44.7%)` 87, 18, 0, 55 192.1°, 76.7, 25.3 `hsl(192.1,76.7%,25.3%)` 192.1°, 86.8, 44.7 006666 `#006666`\nCIE-LAB 36.595, -14.791, -17.718 7.236, 9.321, 17.336 0.213, 0.275, 9.321 36.595, 23.08, 230.146 36.595, -24.946, -22.317 30.531, -11.122, -12.296 00001111, 01011110, 01110010\n\n# Color Schemes with #0f5e72\n\n• #0f5e72\n``#0f5e72` `rgb(15,94,114)``\n• #72230f\n``#72230f` `rgb(114,35,15)``\nComplementary Color\n• #0f7255\n``#0f7255` `rgb(15,114,85)``\n• #0f5e72\n``#0f5e72` `rgb(15,94,114)``\n• #0f2d72\n``#0f2d72` `rgb(15,45,114)``\nAnalogous Color\n• #72550f\n``#72550f` `rgb(114,85,15)``\n• #0f5e72\n``#0f5e72` `rgb(15,94,114)``\n• #720f2d\n``#720f2d` `rgb(114,15,45)``\nSplit Complementary Color\n• #5e720f\n``#5e720f` `rgb(94,114,15)``\n• #0f5e72\n``#0f5e72` `rgb(15,94,114)``\n• #720f5e\n``#720f5e` `rgb(114,15,94)``\n• #0f7223\n``#0f7223` `rgb(15,114,35)``\n• #0f5e72\n``#0f5e72` `rgb(15,94,114)``\n• #720f5e\n``#720f5e` `rgb(114,15,94)``\n• #72230f\n``#72230f` `rgb(114,35,15)``\n• #06262e\n``#06262e` `rgb(6,38,46)``\n• #093945\n``#093945` `rgb(9,57,69)``\n• #0c4b5b\n``#0c4b5b` `rgb(12,75,91)``\n• #0f5e72\n``#0f5e72` `rgb(15,94,114)``\n• #127189\n``#127189` `rgb(18,113,137)``\n• #15839f\n``#15839f` `rgb(21,131,159)``\n• #1896b6\n``#1896b6` `rgb(24,150,182)``\nMonochromatic Color\n\n# Alternatives to #0f5e72\n\nBelow, you can see some colors close to #0f5e72. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #0f726d\n``#0f726d` `rgb(15,114,109)``\n• #0f6f72\n``#0f6f72` `rgb(15,111,114)``\n• #0f6672\n``#0f6672` `rgb(15,102,114)``\n• #0f5e72\n``#0f5e72` `rgb(15,94,114)``\n• #0f5672\n``#0f5672` `rgb(15,86,114)``\n• #0f4e72\n``#0f4e72` `rgb(15,78,114)``\n• #0f4572\n``#0f4572` `rgb(15,69,114)``\nSimilar Colors\n\n# #0f5e72 Preview\n\nThis text has a font color of #0f5e72.\n\n``<span style=\"color:#0f5e72;\">Text here</span>``\n#0f5e72 background color\n\nThis paragraph has a background color of #0f5e72.\n\n``<p style=\"background-color:#0f5e72;\">Content here</p>``\n#0f5e72 border color\n\nThis element has a border color of #0f5e72.\n\n``<div style=\"border:1px solid #0f5e72;\">Content here</div>``\nCSS codes\n``.text {color:#0f5e72;}``\n``.background {background-color:#0f5e72;}``\n``.border {border:1px solid #0f5e72;}``\n\n# Shades and Tints of #0f5e72\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. In this example, #01080a is the darkest color, while #f8fdfe is the lightest one.\n\n• #01080a\n``#01080a` `rgb(1,8,10)``\n• #04171b\n``#04171b` `rgb(4,23,27)``\n• #06252d\n``#06252d` `rgb(6,37,45)``\n• #08333e\n``#08333e` `rgb(8,51,62)``\n• #0a414f\n``#0a414f` `rgb(10,65,79)``\n• #0d5061\n``#0d5061` `rgb(13,80,97)``\n• #0f5e72\n``#0f5e72` `rgb(15,94,114)``\n• #116c83\n``#116c83` `rgb(17,108,131)``\n• #147b95\n``#147b95` `rgb(20,123,149)``\n• #1689a6\n``#1689a6` `rgb(22,137,166)``\n• #1897b7\n``#1897b7` `rgb(24,151,183)``\n• #1aa5c9\n``#1aa5c9` `rgb(26,165,201)``\n• #1db4da\n``#1db4da` `rgb(29,180,218)``\n• #28bde3\n``#28bde3` `rgb(40,189,227)``\n• #39c2e5\n``#39c2e5` `rgb(57,194,229)``\n• #4ac8e7\n``#4ac8e7` `rgb(74,200,231)``\n• #5ccdea\n``#5ccdea` `rgb(92,205,234)``\n• #6dd2ec\n``#6dd2ec` `rgb(109,210,236)``\n• #7ed7ee\n``#7ed7ee` `rgb(126,215,238)``\n• #90ddf0\n``#90ddf0` `rgb(144,221,240)``\n• #a1e2f3\n``#a1e2f3` `rgb(161,226,243)``\n• #b2e7f5\n``#b2e7f5` `rgb(178,231,245)``\n• #c4edf7\n``#c4edf7` `rgb(196,237,247)``\n• #d5f2f9\n``#d5f2f9` `rgb(213,242,249)``\n• #e6f7fc\n``#e6f7fc` `rgb(230,247,252)``\n• #f8fdfe\n``#f8fdfe` `rgb(248,253,254)``\nTint Color Variation\n\n# Tones of #0f5e72\n\nA tone is produced by adding gray to any pure hue. In this case, #3c4345 is the less saturated color, while #006781 is the most saturated one.\n\n• #3c4345\n``#3c4345` `rgb(60,67,69)``\n• #37464a\n``#37464a` `rgb(55,70,74)``\n• #32494f\n``#32494f` `rgb(50,73,79)``\n• #2d4c54\n``#2d4c54` `rgb(45,76,84)``\n• #284f59\n``#284f59` `rgb(40,79,89)``\n• #23525e\n``#23525e` `rgb(35,82,94)``\n• #1e5563\n``#1e5563` `rgb(30,85,99)``\n• #195868\n``#195868` `rgb(25,88,104)``\n• #145b6d\n``#145b6d` `rgb(20,91,109)``\n• #0f5e72\n``#0f5e72` `rgb(15,94,114)``\n• #0a6177\n``#0a6177` `rgb(10,97,119)``\n• #05647c\n``#05647c` `rgb(5,100,124)``\n• #006781\n``#006781` `rgb(0,103,129)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #0f5e72 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5070775,"math_prob":0.6983147,"size":3689,"snap":"2019-43-2019-47","text_gpt3_token_len":1711,"char_repetition_ratio":0.12591587,"word_repetition_ratio":0.011090573,"special_character_ratio":0.5605855,"punctuation_ratio":0.23783186,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9882366,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-18T11:50:31Z\",\"WARC-Record-ID\":\"<urn:uuid:da022225-8908-4f3b-a2c9-421be35bf3bf>\",\"Content-Length\":\"36266\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9d762442-73f9-44c9-95f4-d95e72019e9a>\",\"WARC-Concurrent-To\":\"<urn:uuid:453e23a9-bf97-446a-be82-cd34b0a1e8fd>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/0f5e72\",\"WARC-Payload-Digest\":\"sha1:OLJLFST3FVOVGHXCVEL24EE57LHBTWVY\",\"WARC-Block-Digest\":\"sha1:IAS3NGYTGXHJ3YRFSBR5LYN3INIUDQVF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496669755.17_warc_CC-MAIN-20191118104047-20191118132047-00523.warc.gz\"}"} |
https://www.angelfire.com/electronic/planetarycom/Electromagnetic.html | [
"Electromagnetism warp drive solid state and liquid state !!1 ----------------- Bulletin Message ----------------- From: INFINITE INTELLIGENT BEINGS Date: 21/11/2007 Electromagnetism warp drive solid state and liquid state\n\nMagnetism, electricity, and special relativity\n\nMain article: Electromagnetism\n\nAs a consequence of Einstein's theory of special relativity, electricity and magnetism are understood to be fundamentally interlinked. Both magnetism without electricity, and electricity without magnetism, are inconsistent with special relativity, due to such effects as length contraction, time dilation, and the fact that the magnetic force is velocity-dependent. However, when both electricity and magnetism are taken into account, the resulting theory (electromagnetism) is fully consistent with special relativity. In particular, a phenomenon that appears purely electric to one observer may be purely magnetic to another, or more generally the relative contributions of electricity and magnetism are dependent on the frame of reference. Thus, special relativity \"mixes\" electricity and magnetism into a single, inseparable phenomenon called electromagnetism (analogously to how special relativity \"mixes\" space and time into spacetime).\n\n Magnetic fields and forces\nMagnetic lines of force of a bar magnet shown by iron filings on paper\nMagnetic lines of force of a bar magnet shown by iron filings on paper\n\nMain article: Magnetic field\n\nThe phenomenon of magnetism is \"mediated\" by the magnetic field -- i.e., an electric current or magnetic dipole creates a magnetic field, and that field, in turn, imparts magnetic forces on other particles that are in the fields.\n\nTo an excellent approximation (but ignoring some quantum effects---see quantum electrodynamics), Maxwell's equations (which simplify to the Biot-Savart law in the case of steady currents) describe the origin and behavior of the fields that govern these forces. Therefore magnetism is seen whenever electrically charged particles are in motion---for example, from movement of electrons in an electric current, or in certain cases from the orbital motion of electrons around an atom's nucleus. They also arise from \"intrinsic\" magnetic dipoles arising from quantum effects, i.e. from quantum-mechanical spin.\n\nThe same situations which create magnetic fields (charge moving in a current or in an atom, and intrinsic magnetic dipoles) are also the situations in which a magnetic field has an effect, creating a force. Following is the formula for moving charge; for the forces on an intrinsic dipole, see magnetic dipole.\n\nWhen a charged particle moves through a magnetic field B, it feels a force F given by the cross product:\n\n..vec{F} = q ..vec{v} ..times ..vec{B}\n\nwhere q.., is the electric charge of the particle, ..vec{v} .., is the velocity vector of the particle, and ..vec{B} .., is the magnetic field. Because this is a cross product, the force is perpendicular to both the motion of the particle and the magnetic field. It follows that the magnetic force does no work on the particle; it may change the direction of the particle's movement, but it cannot cause it to speed up or slow down. The magnitude of the force is\n\nF = q v B ..sin..theta..,\n\nwhere ..theta .., is the angle between the ..vec{v} .., and ..vec{B} .., vectors.\n\nOne tool for determining the direction of the velocity vector of a moving charge, the magnetic field, and the force exerted is labeling the index finger \"V\", the middle finger \"B\", and the thumb \"F\" with your right hand. When making a gun-like configuration (with the middle finger crossing under the index finger), the fingers represent the velocity vector, magnetic field vector, and force vector, respectively. See also right hand rule.\n\nLenz's law gives the direction of the induced electromotive force (emf) and current resulting from electromagnetic induction. German physicist Heinrich Lenz formulated it in 1834.",
null,
"",
null,
"Mini Micro Wireless COLOR Pinhole Spy Camera * Smallest * Perfect for Nanny Cam, R/C Helicopter, RC Car & Airplane + Security ONLY \\$49.95",
null,
"",
null,
"",
null,
"Science Fiction eBooks",
null,
"",
null,
"Music eBooks",
null,
"",
null,
"Music",
null,
"",
null,
"Email: [email protected]"
] | [
null,
"http://www.awltovhc.com/image-2615312-9482441",
null,
"http://www.tqlkg.com/image-2615312-10118393",
null,
"http://www.ftjcfx.com/image-2615312-10481196",
null,
"http://www.lduhtrp.net/image-2615312-9325067",
null,
"http://www.awltovhc.com/image-2615312-10446699",
null,
"http://www.tqlkg.com/image-2615312-7330593",
null,
"http://www.tqlkg.com/image-2615312-10440424",
null,
"http://www.lduhtrp.net/image-2615312-7330596",
null,
"http://www.ftjcfx.com/image-2615312-10403183",
null,
"http://www.awltovhc.com/image-2615312-8829222",
null,
"https://www.angelfire.com/cgi-bin/Count.cgi",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.87096924,"math_prob":0.9648843,"size":4142,"snap":"2021-04-2021-17","text_gpt3_token_len":897,"char_repetition_ratio":0.16843887,"word_repetition_ratio":0.044444446,"special_character_ratio":0.21269917,"punctuation_ratio":0.1559633,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9570853,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22],"im_url_duplicate_count":[null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-20T13:21:14Z\",\"WARC-Record-ID\":\"<urn:uuid:dbbaa799-734c-4d87-9628-59bbfd7beec4>\",\"Content-Length\":\"9312\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:afd3c7b6-640a-469e-b72e-09e9ee3f5657>\",\"WARC-Concurrent-To\":\"<urn:uuid:ec618a6d-dc84-4bba-879a-ad6f4ecc6712>\",\"WARC-IP-Address\":\"209.202.252.105\",\"WARC-Target-URI\":\"https://www.angelfire.com/electronic/planetarycom/Electromagnetic.html\",\"WARC-Payload-Digest\":\"sha1:K77L7DTREEHBFRO3B6IUR35HKTN3EF4P\",\"WARC-Block-Digest\":\"sha1:SSMGZHGZVIEGX4S6SDMBQN5LDJU7RQI3\",\"WARC-Identified-Payload-Type\":\"message/rfc822\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703520883.15_warc_CC-MAIN-20210120120242-20210120150242-00672.warc.gz\"}"} |
http://7776590.com/qspevdu_t1006003/ | [
"• 防诈骗中心\n• 客服中心 |\n• 网站导航 |\n• 设为主页 |\n• 加入收藏\n• 您当前位置: 首页>产品库>商务服务>交通运输\n相关分类:\n• 湖南\n• 长沙市\n• 常德市\n• 郴州市\n• 衡阳市\n• 怀化市\n• 娄底市\n• 邵阳市\n• 湘潭市\n• 湘西土家族苗族自治州\n• 益阳市\n• 永州市\n• 岳阳市\n• 张家界市\n• 株洲市\n• 山西\n• 长治市\n• 大同市\n• 晋城市\n• 晋中市\n• 临汾市\n• 吕梁市\n• 朔州市\n• 太原市\n• 忻州市\n• 阳泉市\n• 运城市\n• 安徽\n• 安庆市\n• 蚌埠市\n• 亳州市\n• 巢湖市\n• 池州市\n• 滁州市\n• 阜阳市\n• 合肥市\n• 淮北市\n• 淮南市\n• 黄山市\n• 六安市\n• 马鞍山市\n• 宿州市\n• 铜陵市\n• 芜湖市\n• 宣城市\n• 广西\n• 百色市\n• 北海市\n• 崇左市\n• 防城港市\n• 贵港市\n• 桂林市\n• 河池市\n• 贺州市\n• 来宾市\n• 柳州市\n• 南宁市\n• 钦州市\n• 梧州市\n• 玉林市\n• 河南\n• 安阳市\n• 鹤壁市\n• 焦作市\n• 开封市\n• 洛阳市\n• 漯河市\n• 南阳市\n• 平顶山市\n• 濮阳市\n• 三门峡市\n• 商丘市\n• 新乡市\n• 信阳市\n• 许昌市\n• 郑州市\n• 周口市\n• 驻马店市\n• 吉林\n• 白城市\n• 白山市\n• 长春市\n• 吉林市\n• 辽源市\n• 四平市\n• 松原市\n• 通化市\n• 延边朝鲜族自治州\n• 广东\n• 潮州市\n• 东莞市\n• 佛山市\n• 广州市\n• 河源市\n• 惠州市\n• 江门市\n• 揭阳市\n• 茂名市\n• 梅州市\n• 清远市\n• 汕头市\n• 汕尾市\n• 韶关市\n• 深圳市\n• 阳江市\n• 云浮市\n• 湛江市\n• 肇庆市\n• 中山市\n• 珠海市\n• 辽宁\n• 鞍山市\n• 本溪市\n• 朝阳市\n• 大连市\n• 丹东市\n• 抚顺市\n• 阜新市\n• 葫芦岛市\n• 锦州市\n• 辽阳市\n• 盘锦市\n• 沈阳市\n• 铁岭市\n• 营口市\n• 湖北\n• 鄂州市\n• 恩施土家族苗族自治州\n• 黄冈市\n• 黄石市\n• 荆门市\n• 荆州市\n• 直辖行政单位\n• 十堰市\n• 随州市\n• 武汉市\n• 咸宁市\n• 襄阳市\n• 孝感市\n• 宜昌市\n• 江西\n• 抚州市\n• 赣州市\n• 吉安市\n• 景德镇市\n• 九江市\n• 南昌市\n• 萍乡市\n• 上饶市\n• 新余市\n• 宜春市\n• 鹰潭市\n• 浙江\n• 杭州市\n• 湖州市\n• 嘉兴市\n• 金华市\n• 丽水市\n• 宁波市\n• 衢州市\n• 绍兴市\n• 台州市\n• 温州市\n• 舟山市\n• 青海\n• 果洛藏族自治州\n• 海北藏族自治州\n• 海东地区\n• 海南藏族自治州\n• 海西蒙古族藏族自治州\n• 黄南藏族自治州\n• 西宁市\n• 玉树藏族自治州\n• 甘肃\n• 白银市\n• 定西市\n• 甘南藏族自治州\n• 嘉峪关市\n• 金昌市\n• 酒泉市\n• 兰州市\n• 临夏回族自治州\n• 陇南市\n• 平凉市\n• 庆阳市\n• 天水市\n• 武威市\n• 张掖市\n• 贵州\n• 安顺市\n• 毕节市\n• 贵阳市\n• 六盘水市\n• 黔东南苗族侗族自治州\n• 黔南布依族苗族自治州\n• 黔西南布依族苗族自治州\n• 铜仁地区\n• 遵义市\n• 陕西\n• 安康市\n• 宝鸡市\n• 汉中市\n• 商洛市\n• 铜川市\n• 渭南市\n• 西安市\n• 咸阳市\n• 延安市\n• 榆林市\n• 西藏\n• 阿里地区\n• 昌都地区\n• 拉萨市\n• 林芝地区\n• 那曲地区\n• 日喀则地区\n• 山南地区\n• 宁夏\n• 固原市\n• 石嘴山市\n• 吴忠市\n• 银川市\n• 中卫市\n• 福建\n• 福州市\n• 龙岩市\n• 南平市\n• 宁德市\n• 莆田市\n• yabo国际市\n• 三明市\n• 厦门市\n• 漳州市\n• 内蒙古\n• 阿拉善盟\n• 巴彦淖尔市\n• 包头市\n• 赤峰市\n• 鄂尔多斯市\n• 呼和浩特市\n• 呼伦贝尔市\n• 通辽市\n• 乌海市\n• 乌兰察布市\n• 锡林郭勒盟\n• 兴安盟\n• 云南\n• 保山市\n• 楚雄彝族自治州\n• 大理白族自治州\n• 德宏傣族景颇族自治州\n• 迪庆藏族自治州\n• 红河哈尼族彝族自治州\n• 昆明市\n• 丽江市\n• 临沧市\n• 怒江傈僳族自治州\n• 曲靖市\n• 思茅市\n• 文山壮族苗族自治州\n• 西双版纳傣族自治州\n• 玉溪市\n• 昭通市\n• 新疆\n• 阿克苏地区\n• 阿勒泰地区\n• 巴音郭楞蒙古自治州\n• 博尔塔拉蒙古自治州\n• 昌吉回族自治州\n• 哈密地区\n• 和田地区\n• 喀什地区\n• 克拉玛依市\n• 克孜勒苏柯尔克孜自治州\n• 直辖行政单位\n• 塔城地区\n• 吐鲁番地区\n• 乌鲁木齐市\n• 伊犁哈萨克自治州\n• 黑龙江\n• 大庆市\n• 大兴安岭地区\n• 哈尔滨市\n• 鹤岗市\n• 黑河市\n• 鸡西市\n• 佳木斯市\n• 牡丹江市\n• 七台河市\n• 齐齐哈尔市\n• 双鸭山市\n• 绥化市\n• 伊春市\n• 香港\n• 香港\n• 九龙\n• 新界\n• 澳门\n• 澳门\n• 其它地区\n• 台湾\n• 台中市\n• 台南市\n• 高雄市\n• 台北市\n• 基隆市\n• 嘉义市\n•",
null,
"大量供应销量好的金1丝猴电动车,湖北金1丝猴电动车\n\n品牌:欢乐豆电动车,一一新能源车业,金鹰电动车\n\n出厂地:凤山县(凤城镇)\n\n报价:面议\n\n临沂一一新能源车业有限公司\n\n黄金会员:",
null,
"主营:欢乐豆电动车,一一新能源车业,金鹰电动车,小红豆电动车,电动车\n\n•",
null,
"飞翼车厢,展翼集装箱,车厢制造厂,飞翼式厢车订制,价格优惠\n\n品牌:信合\n\n出厂地:罗城仡佬族自治县(东门镇)\n\n报价:面议\n\n河北沧州信合集装箱制造有限公司\n\n经营模式:生产型\n\n主营:预制舱 特种集装箱 飞翼集装箱 亚博体育官网下载苹果集装箱 开顶集装箱 标准集装箱 集装箱配件\n\n•",
null,
"河北预制舱厂家专业生产电气亚博体育官网下载苹果预制舱\n\n品牌:信合\n\n出厂地:罗城仡佬族自治县(东门镇)\n\n报价:面议\n\n河北沧州信合集装箱制造有限公司\n\n经营模式:生产型\n\n主营:预制舱 特种集装箱 飞翼集装箱 亚博体育官网下载苹果集装箱 开顶集装箱 标准集装箱 集装箱配件\n\n•",
null,
"买金鹰电动车在哪买更划算_威海金鹰电动车供应商\n\n品牌:欢乐豆电动车,一一新能源车业,金鹰电动车\n\n出厂地:凤山县(凤城镇)\n\n报价:面议\n\n临沂一一新能源车业有限公司\n\n黄金会员:",
null,
"主营:欢乐豆电动车,一一新能源车业,金鹰电动车,小红豆电动车,电动车\n\n•",
null,
"山东品质好的临沂一一新能源电动车 广东电动车价格\n\n品牌:欢乐豆电动车,一一新能源车业,金鹰电动车\n\n出厂地:凤山县(凤城镇)\n\n报价:面议\n\n临沂一一新能源车业有限公司\n\n黄金会员:",
null,
"主营:欢乐豆电动车,一一新能源车业,金鹰电动车,小红豆电动车,电动车\n\n•",
null,
"报价:面议\n\n河北沧州信合集装箱制造有限公司\n\n经营模式:生产型\n\n主营:预制舱 特种集装箱 飞翼集装箱 亚博体育官网下载苹果集装箱 开顶集装箱 标准集装箱 集装箱配件\n\n•",
null,
"集装箱角件 178*162*118 标准角件 厂家\n\n品牌:信合\n\n出厂地:罗城仡佬族自治县(东门镇)\n\n报价:面议\n\n河北沧州信合集装箱制造有限公司\n\n经营模式:生产型\n\n主营:预制舱 特种集装箱 飞翼集装箱 亚博体育官网下载苹果集装箱 开顶集装箱 标准集装箱 集装箱配件\n\n•",
null,
"报价:面议\n\n经营模式:未登记\n\n主营:\n\n•",
null,
"10kv预制舱 二次亚博体育官网下载苹果预制舱 光伏预制舱全新定制\n\n品牌:信合\n\n出厂地:罗城仡佬族自治县(东门镇)\n\n报价:面议\n\n河北沧州信合集装箱制造有限公司\n\n经营模式:生产型\n\n主营:预制舱 特种集装箱 飞翼集装箱 亚博体育官网下载苹果集装箱 开顶集装箱 标准集装箱 集装箱配件\n\n•",
null,
"【光头强集装箱】烟台集装箱_烟台二手集装箱_烟台集装箱租赁\n\n品牌:光头强,光头强集装箱,\n\n出厂地:凤山县(凤城镇)\n\n报价:面议\n\n烟台光头强集装箱有限公司\n\n黄金会员:",
null,
"主营:烟台集装箱租赁,烟台集装箱销售,烟台住人集装箱,烟台冷藏集装箱,烟台二手海运集装...\n\n• 没有找到合适的供应商?您可以发布采购信息\n\n没有找到满足要求的供应商?您可以搜索 交通运输批发 交通运输公司 交通运输厂\n\n### 最新入驻厂家\n\n相关产品:\n金1丝猴电动车 飞翼车厢 预制舱厂家 金鹰电动车 临沂一一新能源电动车 新能源亚博体育官网下载苹果箱 集装箱角件 电动车 10kv预制舱 烟台集装箱"
] | [
null,
"http://image-ali.bianjiyi.com/1/2020/0319/10/15845843917484.jpg",
null,
"http://www.shengyibao.com/Public/Images/ForeApps/grade2.png",
null,
"http://imagebooksir.258fuwu.com/images/business/202039/11/4511417681583726045.jpeg",
null,
"http://imagebooksir.258fuwu.com/images/business/2019529/8/4511417681559089496.jpeg",
null,
"http://image-ali.bianjiyi.com/1/2020/0319/09/15845829668902.jpg",
null,
"http://www.shengyibao.com/Public/Images/ForeApps/grade2.png",
null,
"http://image-ali.bianjiyi.com/1/2020/0319/10/15845862592245.jpg",
null,
"http://www.shengyibao.com/Public/Images/ForeApps/grade2.png",
null,
"http://imagebooksir.258fuwu.com/images/business/2020421/15/4511417681587452666.jpeg",
null,
"http://imagebooksir.258fuwu.com/images/business/2019916/9/4511417681568596559.jpeg",
null,
"http://image-ali.bianjiyi.com/1/2016/0719/21/578e2bdd5ca36.jpg",
null,
"http://imagebooksir.258fuwu.com/images/business/2020925/14/4511417681601013750.jpeg",
null,
"http://image-ali.bianjiyi.com/1/2020/1020/15/16031776597389.jpg",
null,
"http://www.shengyibao.com/Public/Images/ForeApps/grade2.png",
null
] | {"ft_lang_label":"__label__zh","ft_lang_prob":0.64945,"math_prob":0.45056283,"size":668,"snap":"2020-45-2020-50","text_gpt3_token_len":801,"char_repetition_ratio":0.19277108,"word_repetition_ratio":0.0,"special_character_ratio":0.22305389,"punctuation_ratio":0.2635135,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.963703,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28],"im_url_duplicate_count":[null,2,null,null,null,1,null,1,null,1,null,null,null,2,null,null,null,1,null,1,null,3,null,1,null,1,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-20T11:10:14Z\",\"WARC-Record-ID\":\"<urn:uuid:12dd7fc4-c7e6-4a68-a3d9-53672665a668>\",\"Content-Length\":\"101171\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:42f33ba2-92f0-41f8-9882-7cddd5b2ddd0>\",\"WARC-Concurrent-To\":\"<urn:uuid:038009b2-d6e2-4d8e-811b-23ed0661bca2>\",\"WARC-IP-Address\":\"161.123.161.67\",\"WARC-Target-URI\":\"http://7776590.com/qspevdu_t1006003/\",\"WARC-Payload-Digest\":\"sha1:KOY77EQFB2HCGKFNUQQF7VMCGFBYVXG5\",\"WARC-Block-Digest\":\"sha1:CWUPOVW2ZAVMXNLTRUUBJNRF2PJUSWCO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107872686.18_warc_CC-MAIN-20201020105000-20201020135000-00166.warc.gz\"}"} |
https://convertoctopus.com/1293-grams-to-pounds | [
"Conversion formula\n\nThe conversion factor from grams to pounds is 0.0022046226218488, which means that 1 gram is equal to 0.0022046226218488 pounds:\n\n1 g = 0.0022046226218488 lb\n\nTo convert 1293 grams into pounds we have to multiply 1293 by the conversion factor in order to get the mass amount from grams to pounds. We can also form a simple proportion to calculate the result:\n\n1 g → 0.0022046226218488 lb\n\n1293 g → M(lb)\n\nSolve the above proportion to obtain the mass M in pounds:\n\nM(lb) = 1293 g × 0.0022046226218488 lb\n\nM(lb) = 2.8505770500505 lb\n\nThe final result is:\n\n1293 g → 2.8505770500505 lb\n\nWe conclude that 1293 grams is equivalent to 2.8505770500505 pounds:\n\n1293 grams = 2.8505770500505 pounds\n\nAlternative conversion\n\nWe can also convert by utilizing the inverse value of the conversion factor. In this case 1 pound is equal to 0.35080616395978 × 1293 grams.\n\nAnother way is saying that 1293 grams is equal to 1 ÷ 0.35080616395978 pounds.\n\nApproximate result\n\nFor practical purposes we can round our final result to an approximate numerical value. We can say that one thousand two hundred ninety-three grams is approximately two point eight five one pounds:\n\n1293 g ≅ 2.851 lb\n\nAn alternative is also that one pound is approximately zero point three five one times one thousand two hundred ninety-three grams.\n\nConversion table\n\ngrams to pounds chart\n\nFor quick reference purposes, below is the conversion table you can use to convert from grams to pounds\n\ngrams (g) pounds (lb)\n1294 grams 2.853 pounds\n1295 grams 2.855 pounds\n1296 grams 2.857 pounds\n1297 grams 2.859 pounds\n1298 grams 2.862 pounds\n1299 grams 2.864 pounds\n1300 grams 2.866 pounds\n1301 grams 2.868 pounds\n1302 grams 2.87 pounds\n1303 grams 2.873 pounds"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.792209,"math_prob":0.9976227,"size":1748,"snap":"2022-05-2022-21","text_gpt3_token_len":493,"char_repetition_ratio":0.18520643,"word_repetition_ratio":0.007017544,"special_character_ratio":0.38157895,"punctuation_ratio":0.10557185,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99836594,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-17T17:15:52Z\",\"WARC-Record-ID\":\"<urn:uuid:b232f11d-bc36-46f8-93f5-b2c6cb01fe8a>\",\"Content-Length\":\"29194\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5524aaf6-c031-441a-b025-8963207d77dc>\",\"WARC-Concurrent-To\":\"<urn:uuid:2d9d9fcf-35e5-4012-87bd-fe77fd7e7725>\",\"WARC-IP-Address\":\"172.67.155.243\",\"WARC-Target-URI\":\"https://convertoctopus.com/1293-grams-to-pounds\",\"WARC-Payload-Digest\":\"sha1:M7REBXYWJFY3X5M6HZA3URFB5PFVKTJ6\",\"WARC-Block-Digest\":\"sha1:CMCUBD4FYCKDOTIKZ4NHSK27BUNAU3NZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320300574.19_warc_CC-MAIN-20220117151834-20220117181834-00230.warc.gz\"}"} |
https://trycolors.com/colors/A47A74 | [
"#A47A74\n\nColor name: Pharlap\n\nHex #A47A74 has hue angle of 8 degrees, value = 64 and saturation = 29. #A47A74 can be obtained by mixing 4 colors: 36% of YELLOW, 29% of MAGENTA, 29% of BLUE, 7% of CYAN. Click \"ADJUST\" button to move #A47A74 to the mixer and play with it.\n\n#A47A74\n36%\n5\nYELLOW\n29%\n4\nMAGENTA\n29%\n4\nBLUE\n7%\n1\nCYAN\n\nMixing #A47A74 step by step\n\nThe diagram shows the process of mixing multiple colors step by step. Here you can see the mix of 5 drops of YELLOW, 4 drops of MAGENTA, 4 drops of BLUE, 1 drop of CYAN.\n\n1\n=\n1\n1\n+\n1\n=\n2\n1\n+\n2\n=\n3\n1\n+\n3\n=\n4\n1\n+\n4\n=\n5\n1\n+\n5\n=\n6\n1\n+\n6\n=\n7\n1\n+\n7\n=\n8\n1\n+\n8\n=\n9\n1\n+\n9\n=\n10\n1\n+\n10\n=\n11\n1\n+\n11\n=\n12\n1\n+\n12\n=\n13\n1\n+\n13\n=\n14\n\nColor #A47A74 conversion table\n\nHEX\n#A47A74\n\nHSV\n8°, 29, 64\n\nHSL\n8°, 21, 55\n\nCIE Lab\n55.15, 15.48, 9.68\n\nRGB decimal\n164, 122, 116\n\nRGB percent\n64.3%, 47.8%, 45.5%\n\nCMYK\n0, 26, 29, 36\n\nColor name\nPharlap\n\nMix of color #A47A74 with water\n\nBelow you can see the model of the mix of #A47A74 with pure water. Labels indicate the transparency of the mixture.\n\n0%\n10%\n20%\n30%\n40%\n50%\n60%\n70%\n80%\n90%"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8938022,"math_prob":0.8793759,"size":621,"snap":"2022-05-2022-21","text_gpt3_token_len":182,"char_repetition_ratio":0.14262562,"word_repetition_ratio":0.0,"special_character_ratio":0.30756843,"punctuation_ratio":0.12592593,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9608946,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-20T17:27:13Z\",\"WARC-Record-ID\":\"<urn:uuid:d399017f-1124-4b48-ac9c-b74291d38190>\",\"Content-Length\":\"165809\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:aa9fa3c8-7364-4c1b-b35e-01c6df48eedd>\",\"WARC-Concurrent-To\":\"<urn:uuid:64d015f7-0476-47cc-9726-c4a0a6235641>\",\"WARC-IP-Address\":\"76.76.21.21\",\"WARC-Target-URI\":\"https://trycolors.com/colors/A47A74\",\"WARC-Payload-Digest\":\"sha1:KSH3HEWOESNQH7IFUX6P5VAWD22C2ZZS\",\"WARC-Block-Digest\":\"sha1:JARA5QTS43AHENQ4N2KPSIUP7JQBCTWX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320302355.97_warc_CC-MAIN-20220120160411-20220120190411-00152.warc.gz\"}"} |
https://algebra-help.com/algebra-help-factor/geometry/plane-trigonometry-problems.html | [
"",
null,
"# Our users:\n\nMy husband has been using the software since he went back to school a few months ago. Hes been out of college for over 10 years so he was very rusty with his math skills. A teacher friend of ours suggested the program since she uses it to teach her students fractions. Mike has been doing well in his two math classes. Thank you!\nCatherine, IL\n\nOur daughter is making the grades she is capable of thanks to the Algebrator. Hats off to you all! Thank you!\nTommy Hobroken, WY\n\nThank you for the responses. You actually make learning Algebra sort of fun.\nMaria Chavez, TX\n\n# Students struggling with all kinds of algebra problems find out that our software is a life-saver. Here are the search phrases that today's searchers used to find our site. Can you find yours among them?\n\nSearch phrases used on 2015-03-02:\n\n• picture of trag adition\n• add fration eqation how to do videos\n• how to learn algerbra books\n• 6th grade egypt math word problems\n• whats a diffrence from a rhombus to a square\n• learn integers step by step\n• solve for inverse cubed\n• Glencoe Advanced Mathematical Concepts Answer Key Maker(CD)\n• math worksheet - lcd\n• simplify boolean algebra calculator\n• where can i find tutorial on induction in algebra\n• Algebra 2 Formula worker\n• Algebra 2 Help calculater\n• Physics Formula Calculator\n• square formula for java\n• math eqations\n• what is the slope formula for a quadratic equation\n• maths test / changing fractions into decimals\n• solve quadratic equations by finding square roots\n• matrix intermediate teacher's book download\n• online GED algebra practice\n• chemistry worksheet with solution\n• free advance math tests online\n• how to find out solution of a quadratic equation with the help of maple\n• solving complex expressions\n• help with graph equations\n• convert fraction to least common denominator calculator\n• ti 83 cube root\n• find slope on graphing calculator\n• find fraction not equivalent worksheet\n• mcdougal geometry resource book test answers\n• free printable two - three step math problems for fourth grade\n• algebra homework solver\n• formula of fraction to decimal\n• Aptitude questions\n• evaluating expressions worksheet\n• free online solutions manual to \"contemporary abstract algebra by gallian\"\n• free lessons of algebra 10th grade\n• Multiplication of Radical Expression\n• doing radicals on a TI 83 plus\n• simplify radical expression calculator\n• Download TI-84 plus\n• online solving logarithms calculator\n• poems that has mathematical terms\n• math + how to simplify the right side of an equation\n• least common multiple tool\n• free answers to math homework\n• plotting simultaneous equations in matlab\n• simultaneous solver\n• glencoe/mcgraw hill 6th grade math answer keys\n• radical equations in real life\n• simple fractions first grade\n• algebra jokes + trivia\n• saxon math answer sheet\n• Radical expression calculator\n• algerbra 2\n• science worksheets for GED\n• year 11 algebra\n• prentice hall pre-algebra california edition answers\n• factor my equation\n• SOLVING LINEAR EQUATION ON TI 84\n• Holt Algebra 1 Workbook answers\n• fraction computation worksheets\n• how to do a mixed number with a TI-83 Plus Calculator\n• quad root\n• rudin solution manual\n• Basic algebra sample test and answers\n• math trivia about whole numders for elementary\n• the hardest math problem for a 6th grader\n• www.howtouseti-84plus.com\n• fraction+worksheets+grade 3\n• free online fraction solver\n• glenco Algebra 1 Book Online\n• formulas algebra 1\n• multiply integers using line graph\n• using ratio boxes pre algebra worksheets\n• common denominator tool\n• non-homogenous ordinary differential equations\n• ohio free first grade math and writing\n• algebra solve for x calculator\n• maple syntax for the solution of non linear algebraic equation\n• online calculator with triple fractions\n• inverse logaritme ti 89\n• how to teach polynomials step by step\n• how to find out the r2 value using graphics calculator\n• online calculator complex\n• box method factoring algebra worksheet\n• first grader font download\n• positive negative numbers add worksheet\n• grade 10 polynomial questions"
] | [
null,
"https://algebra-help.com/images/template/phone.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8456668,"math_prob":0.9679283,"size":4099,"snap":"2021-04-2021-17","text_gpt3_token_len":954,"char_repetition_ratio":0.13553114,"word_repetition_ratio":0.0028735632,"special_character_ratio":0.21273482,"punctuation_ratio":0.031298906,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99954784,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-18T20:16:07Z\",\"WARC-Record-ID\":\"<urn:uuid:b12e6ec9-5526-427e-997e-ffc1b6d96b14>\",\"Content-Length\":\"12678\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c2d1d9a7-fa7f-4b3c-9a4f-55210de834a4>\",\"WARC-Concurrent-To\":\"<urn:uuid:ac7e0317-0a1a-4aec-b277-d4337144cad7>\",\"WARC-IP-Address\":\"54.197.228.212\",\"WARC-Target-URI\":\"https://algebra-help.com/algebra-help-factor/geometry/plane-trigonometry-problems.html\",\"WARC-Payload-Digest\":\"sha1:4ZN6WC5S5IZ7Q4PS2ZWOCZ3D3HIDES2B\",\"WARC-Block-Digest\":\"sha1:HZ3ZJ3OLU4JHKTSCGUODEHTG53MPKWFX\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038860318.63_warc_CC-MAIN-20210418194009-20210418224009-00566.warc.gz\"}"} |
https://jerseycycling.je/race/divisional-circuit-series-division-4-2-3/ | [
"# Divisional Circuit Series Division 4 (2/3)\n\n### Race Report\n\n#### Course\n\nFrank Machon Circuit\nLes Quennevais Circuit, Les Quennevais Sports Centre, St Brelade\n\n#### Details\n\nDate Time Series Season Club\n25th August 2020 18:00 2020 Divisional Circuit Series 2020 VSJ\n\n#### Race Results\n\n```Array\n(\n => Array\n(\n[position] => Array\n(\n => 27\n)\n\n[time] => 0:00\n[rank_position] => 9\n[jrr_club] =>\n[jrr_category_position] =>\n[number] =>\n)\n\n => Array\n(\n[position] => Array\n(\n => 29\n)\n\n[time] => 0:00\n[rank_position] => 13\n[jrr_club] =>\n[number] =>\n)\n\n => Array\n(\n[position] => Array\n(\n => 28\n)\n\n[time] => 0:00\n[rank_position] => 7\n[jrr_club] =>\n[jrr_category_position] => 1st Super Veteran\n[number] =>\n)\n\n => Array\n(\n[position] => Array\n(\n => 27\n)\n\n[time] => 0:00\n[rank_position] => 2\n[jrr_club] =>\n[jrr_category_position] =>\n[number] =>\n)\n\n => Array\n(\n[position] => Array\n(\n => 27\n)\n\n[time] => 0:00\n[rank_position] => 1\n[jrr_club] =>\n[jrr_category_position] => 1st Senior\n[number] =>\n)\n\n => Array\n(\n[position] => Array\n(\n => 134\n)\n\n[time] => 0:00\n[rank_position] => 14\n[jrr_club] =>\n[number] =>\n)\n\n => Array\n(\n[position] => Array\n(\n => 133\n)\n\n[time] => 0:00\n[rank_position] => 3\n[jrr_club] =>\n[jrr_category_position] => 1st Youth C\n[number] =>\n)\n\n => Array\n(\n[position] => Array\n(\n => 42\n)\n\n[time] => 0:00\n[rank_position] => 4\n[jrr_club] =>\n[jrr_category_position] => 1st Youth A\n[number] =>\n)\n\n => Array\n(\n[position] => Array\n(\n => 133\n)\n\n[time] => 0:00\n[rank_position] => 10\n[jrr_club] =>\n[number] =>\n)\n\n => Array\n(\n[position] => Array\n(\n => 26\n)\n\n[time] => 0:00\n[rank_position] => 15\n[jrr_club] =>\n[jrr_category_position] =>\n[number] =>\n)\n\n => Array\n(\n[position] => Array\n(\n => 28\n)\n\n[time] => 0:00\n[rank_position] => 99\n[jrr_club] =>\n[jrr_category_position] =>\n[number] =>\n)\n\n => Array\n(\n[position] => Array\n(\n => 27\n)\n\n[time] => 0:00\n[rank_position] => 11\n[jrr_club] =>\n[jrr_category_position] =>\n[number] =>\n)\n\n => Array\n(\n[position] => Array\n(\n => 133\n)\n\n[time] => 0:00\n[rank_position] => 6\n[jrr_club] =>\n[jrr_category_position] =>\n[number] =>\n)\n\n => Array\n(\n[position] => Array\n(\n => 29\n)\n\n[time] => 0:00\n[rank_position] => 12\n[jrr_club] =>\n[number] =>\n)\n\n => Array\n(\n[position] => Array\n(\n => 30\n)\n\n[time] => 0:00\n[rank_position] => 5\n[jrr_club] =>\n[jrr_category_position] => 1st Junior\n[number] =>\n)\n\n => Array\n(\n[position] => Array\n(\n => 115\n)\n\n[time] => 0:00\n[rank_position] => 8\n[jrr_club] =>\n[jrr_category_position] =>\n[number] =>\n)\n\n => Array\n(\n[position] => Array\n(\n => 43\n)\n\n[time] => 0:00\n[rank_position] => 16\n[jrr_club] =>\n[jrr_category_position] => 1st Yoth B\n[number] =>\n)\n\n)\n```\nPosition Rider Club Category Category Position\n9Ben StantonGuest RiderSenior\n7Colin HidrioCaesarean Cycling ClubSuper Veteran1st Super Veteran\n2Dan GarridoGuest RiderSenior\n1David BaileyGuest RiderSenior1st Senior"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.51963854,"math_prob":0.99845356,"size":1229,"snap":"2021-43-2021-49","text_gpt3_token_len":370,"char_repetition_ratio":0.20163265,"word_repetition_ratio":0.04712042,"special_character_ratio":0.28315705,"punctuation_ratio":0.02,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95378625,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-26T06:14:48Z\",\"WARC-Record-ID\":\"<urn:uuid:d36f6908-d58e-4f61-831f-dae4ee3507d8>\",\"Content-Length\":\"50911\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1d352246-7f71-4a9a-ae91-c470a074e494>\",\"WARC-Concurrent-To\":\"<urn:uuid:2df10c45-2d33-413a-b908-4329728c8ef9>\",\"WARC-IP-Address\":\"160.153.138.219\",\"WARC-Target-URI\":\"https://jerseycycling.je/race/divisional-circuit-series-division-4-2-3/\",\"WARC-Payload-Digest\":\"sha1:TSGG5KLN2EP6MP6BBFSAP623R5RX5OMT\",\"WARC-Block-Digest\":\"sha1:5EMRR6OXEJI3EXJM7ZAFV547CBWMHIFK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323587799.46_warc_CC-MAIN-20211026042101-20211026072101-00142.warc.gz\"}"} |
https://whatisconvert.com/253-deciliters-in-imperial-fluid-ounces | [
"# What is 253 Deciliters in Imperial Fluid Ounces?\n\n## Convert 253 Deciliters to Imperial Fluid Ounces\n\nTo calculate 253 Deciliters to the corresponding value in Imperial Fluid Ounces, multiply the quantity in Deciliters by 3.5195079727854 (conversion factor). In this case we should multiply 253 Deciliters by 3.5195079727854 to get the equivalent result in Imperial Fluid Ounces:\n\n253 Deciliters x 3.5195079727854 = 890.43551711471 Imperial Fluid Ounces\n\n253 Deciliters is equivalent to 890.43551711471 Imperial Fluid Ounces.\n\n## How to convert from Deciliters to Imperial Fluid Ounces\n\nThe conversion factor from Deciliters to Imperial Fluid Ounces is 3.5195079727854. To find out how many Deciliters in Imperial Fluid Ounces, multiply by the conversion factor or use the Volume converter above. Two hundred fifty-three Deciliters is equivalent to eight hundred ninety point four three six Imperial Fluid Ounces.\n\n## Definition of Deciliter\n\nA deciliter (also written \"decilitre\", symbol: dL) is a metric unit of capacity, equal to one tenth of a liter or about 3.38 U.S. fluid ounces.\n\n## Definition of Imperial Fluid Ounce\n\nA fluid ounce (abbreviated fl oz, fl. oz. or oz. fl.) is a unit of volume. It is equal to about 28.41 ml in the imperial system or about 29.57 ml in the US system. The fluid ounce is sometimes referred to simply as an \"ounce\" in applications where its use is implicit.\n\n## Using the Deciliters to Imperial Fluid Ounces converter you can get answers to questions like the following:\n\n• How many Imperial Fluid Ounces are in 253 Deciliters?\n• 253 Deciliters is equal to how many Imperial Fluid Ounces?\n• How to convert 253 Deciliters to Imperial Fluid Ounces?\n• How many is 253 Deciliters in Imperial Fluid Ounces?\n• What is 253 Deciliters in Imperial Fluid Ounces?\n• How much is 253 Deciliters in Imperial Fluid Ounces?\n• How many uk fl oz are in 253 dL?\n• 253 dL is equal to how many uk fl oz?\n• How to convert 253 dL to uk fl oz?\n• How many is 253 dL in uk fl oz?\n• What is 253 dL in uk fl oz?\n• How much is 253 dL in uk fl oz?"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.80641055,"math_prob":0.90460855,"size":1994,"snap":"2021-43-2021-49","text_gpt3_token_len":541,"char_repetition_ratio":0.25477386,"word_repetition_ratio":0.14782609,"special_character_ratio":0.28184554,"punctuation_ratio":0.11253197,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99837464,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-06T08:01:01Z\",\"WARC-Record-ID\":\"<urn:uuid:65e11033-c199-4ab7-99c0-439b396ffef0>\",\"Content-Length\":\"32813\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c188d979-6140-43bf-8b42-cecaf7c0d36b>\",\"WARC-Concurrent-To\":\"<urn:uuid:16cd2dc6-1f8c-496f-b5f4-2ba63079abfe>\",\"WARC-IP-Address\":\"104.21.33.10\",\"WARC-Target-URI\":\"https://whatisconvert.com/253-deciliters-in-imperial-fluid-ounces\",\"WARC-Payload-Digest\":\"sha1:N6QBPY3ZJZI6D6XYDDVB55KFCAARRES6\",\"WARC-Block-Digest\":\"sha1:AWUNPT2DY5E5CGP23NMQPPYZM55ZHPCW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363290.59_warc_CC-MAIN-20211206072825-20211206102825-00361.warc.gz\"}"} |
https://discuss.pytorch.org/t/custom-loss-function-based-on-external-library/76166 | [
"# Custom loss function based on external library\n\nHow to write a model / loss function where the loss is calculated in external tool.\n\nI wrote a psedocode to illustrate what I want to achieve:\n\n``````model = CustomModel().to(device) # Lets say 1000 inputs and 10 outputs\n\nmodel_out = model(inputs) # 10 values that will be used to calculate loss\n\nsave_outs_to_file('model_out.txt', model_out)\n\nproc = subprocess.Popen([\"./calculate_loss\", \"-i\", \"model_out.txt\"], stdout=subprocess.PIPE)\n\n# What to do next with this loss?\n# How to perform backprop?\n# ... .backward()\n\noptimizer.step()\n``````\n\nSince you are leaving PyTorch, you would have to write a custom `autograd.Function`, as described here, and also implement the `backward` pass manually.\nAutograd won’t be able to track the operations in the other process.\n\nFor what kind of applications would you need external tool to calculate the loss @adm ? Never encountered this kind of problem, just curious to know",
null,
"Thank you for your help. As far as I understand I will not be able to solve this in such form as I have to provide it with forward and backward function (what means, I have to perform all operations the external tool is doing so the pytorch will be able to compute the gradient). The problem is that I do not know what exactly the external tool does.\n\nThan I doubt it’ll be possible to use it inside your forward pass.",
null,
"Would it be possible to use an alternative, “open”, approach using a different library, so that you could at least see, which operations are performed?\n\nActually what came into my mind is that I can calculate the gradient numerically. I know the range of possible values and all are integers, so the epsilon will be 1. The question is how can I join it with the pytorch pipe? Let’s say, there is a model of X layers that outputs 10 parameters. Now I want to put this custom numerical loss on these 10 parameters.\n\nYou could define a custom `autograd.Function` and calculate the gradient in the `backward` method manually as described here.\n\nHave you solved this problem?I am also faced with this problem recently.If you have some ideas, i am happy you share with me.Thank you!",
null,
""
] | [
null,
"https://discuss.pytorch.org/images/emoji/apple/slight_smile.png",
null,
"https://discuss.pytorch.org/images/emoji/apple/confused.png",
null,
"https://discuss.pytorch.org/images/emoji/apple/grin.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.90829307,"math_prob":0.5733148,"size":1780,"snap":"2022-05-2022-21","text_gpt3_token_len":402,"char_repetition_ratio":0.09065315,"word_repetition_ratio":0.0,"special_character_ratio":0.23876405,"punctuation_ratio":0.13483146,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9852641,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-26T07:46:42Z\",\"WARC-Record-ID\":\"<urn:uuid:7c523fcf-9f59-44dd-b8a6-bbebc176af87>\",\"Content-Length\":\"30194\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8511ed81-4f82-498e-9358-e86b6b0954f3>\",\"WARC-Concurrent-To\":\"<urn:uuid:be55f806-0b10-4d5f-ab3c-99dd051949e3>\",\"WARC-IP-Address\":\"159.203.145.104\",\"WARC-Target-URI\":\"https://discuss.pytorch.org/t/custom-loss-function-based-on-external-library/76166\",\"WARC-Payload-Digest\":\"sha1:3GBLFZV7NZJHNEEHDYP4VYNVS6YYI3UX\",\"WARC-Block-Digest\":\"sha1:XELIXZGNF2NPY64PZ5VOQ7SKHL534HDY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662604495.84_warc_CC-MAIN-20220526065603-20220526095603-00520.warc.gz\"}"} |
https://tecpaper.org/2020/10/01/algorithm3100/ | [
"Chat with us, powered by LiveChat Algorithm3100 | Economics Write\n+1(978)310-4246 [email protected]\nSelect Page\n\nNeed the answer and the code\nhw1.pdf\n\nhw2.pdf\n\nUnformatted Attachment Preview\n\nHomework 1:\ndue September 4, 2020\nIndividual contributions only, submit via D2L, only typeset solutions in pdf-format are accepted\nIn this homework, we evaluate the performance of a recursive version of Fibonacci that\nThe following Python code calculates recursively the Fibonacci numbers, defined by\nfn =\n0\nfor n = 0\n1\nfor n = 1 .\nfn−1 + fn−2 for n ≥ 2\ndef rec_fib(n):\nif n < 2: return n else: return rec_fib(n-1)+rec_fib(n-2) If you implement this code and run it, you will find that for even moderately large arguments n, this implementation will take too long. On my machine, I can barely calculate rec_fib(35), and the situation does not become much better if I use C++ instead of Python. The reason is that for each increment of n, I have two recursive calls, which often create additional recursive calls. In fact, we will show that the number of recursive calls increases very much like the Fibonacci sequence itself. If the argument n is 0 or 1, there is no recursive call. If the argument is 2, then there are recursive calls with arguments 0 and 1 and no further calls. If the argument is 3, then there will be recursive calls with arguments 2 and 1, the former creating 2 more recursive calls. Thus, we have Recursive Calls rn Argument 0 r0 = 0 1 r1 = 0 2 r2 = 2 3 r3 = r2 + r1 + 2 = 4 Develop a recurrence relation for the number of recursive calls rn. Then prove by induction that rn = − 2 + fn−1 + fn + fn+1 for n > 1.\nFirst Programming Assignment\nIn this assignment, you will measure the speed of a recursive version of the Fibonacci number\non your system and compare it to a non-recursive version.\nMeasuring Time:\nIn principle, we could measure the timing of computer programs using a stop-watch. This\nwould involve determining exactly when a program terminates, which could be difficult to\nachieve. In general, we are better off using the system time. System time has a better\nresolution than a stopwatch or a phone application and human reaction times do not have to\nbe taken into consideration. Almost all programming environments allow you to measure time\nwell, for example, using the time module in Python 3 or the library in C++.\nUnfortunately, other processes in a system can have a large influx on measured times. For\nexample, in a Java environment, measurements are almost useless because the garbage\ncollector can start and slow down any program. This is why you are not allowed to use Java\nfor this programming assignment. Even in a modern multi-core architecture with maybe a\ndozen of threads that can run in parallel, contention for the RAM-cache interface, contention\nfor shared caches, or a sudden burst of system processes can slow down any single thread. It\nis therefore best to measure performance several times. In what follows, we will use a for loop\nto execute a process to be measured several times. Then we will repeat several times to get a\nnumber of timings. Finally, we will use some statistics to find confidence intervals for the\ntiming.\nWhen you measure timing, you are really measuring the timing of an implementation. If you are\nusing a compiler, you can set its optimization levels. At one setting, the compiler might figure\nout that you are not actually using the result of the function whose timing you are measuring\nand optimize the function call away. If you get very good runtimes, this might be the reason.\nYou will still see counter-intuitive timings that are attributable to such things as cold cache\nmisses.\nFibonacci Numbers:\nThe Fibonacci numbers are defined by\nfi =\ni\nif i < 2 {fi−1 + fi−2 if i ≥ 2 The recursive implementation uses exactly this definition. As each function call with an argument larger than one generates at least two function call, which in turn can each generate two function calls, the number of function calls even for moderate argument values is very high. In contrast, maintaining two variables and updating them is much more efficient. After initializing two variables, cur and pre, we just update them using cur, pre = cur+pre, cur. If you use C or C++, you need to implement this tuple assignment using a temporary variable. Figure 1: C++20 implementation of the timer Statistical Processing All measurements are subject to measurement errors. We usually use a statistical model in order to extract information on measurement errors. We assume that our runtimes consists of the true runtime plus an error component that is normally distributed. This is certainly not the case, but it is a good enough assumption in our case. We repeat each measurement several times, a good value would be 25 times. We then calculate the sample mean (average) and sample standard deviation. From these and the count, we can calculate the confidence interval size of the student-t distribution. We finally graph average-confidence interval size and average+confidence interval size. The reason for this procedure is that the average value of a number of runs is much closer to being normally distributed. However, we also need to measure the sample standard deviation, which creates its own error, so we use the student-t distribution instead of the normal distribution. Figure 2: Python implementation of a timer. This will measure the performance for each value in 25 batches of 50 runs each. Hand-In: You need to submit a single pdf file with: (1) (2) (3) (4) (5) A title and your name A description of your code (one paragraph) A listing of your code (as figures or embedded in the text). A table with your results after statistical processing A graphical representation of your results. Extra credit if you figure out how to use errorbars, either using excel (very difficult), matlab or Mathematica (not so easy), or seaborn (simple, but you need to know to use numpy, matplotlib.pyplot, and seaborn). (6) A short text that summarizes your findings and references the table and the figure. Here is a Table with some values as an example on how to format results: Value Recursive Fibonacci (nsec) Good Fibonacci (nsec) 0 17.245 ± 2.398 10.012± 0.238 1 11.650 ± 0.134 11.592± 0.024 2 13.650 ± 0.282 11.452± 0.109 3 15.783 ± 1.094 12.761± 0.823 ... Purchase answer to see full attachment\n\nerror: Content is protected !!"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8765543,"math_prob":0.9490113,"size":6230,"snap":"2021-21-2021-25","text_gpt3_token_len":1482,"char_repetition_ratio":0.10970125,"word_repetition_ratio":0.0037209303,"special_character_ratio":0.23499197,"punctuation_ratio":0.11201299,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99163276,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-18T23:40:34Z\",\"WARC-Record-ID\":\"<urn:uuid:dadeb75d-1081-440b-9ad6-392d5ae8cb62>\",\"Content-Length\":\"40198\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0706823a-58f0-43ca-8829-75c5e65deb34>\",\"WARC-Concurrent-To\":\"<urn:uuid:4d0a4a46-0b4b-4ba0-a031-2b85ef84fc8c>\",\"WARC-IP-Address\":\"68.65.122.69\",\"WARC-Target-URI\":\"https://tecpaper.org/2020/10/01/algorithm3100/\",\"WARC-Payload-Digest\":\"sha1:MSYGELY3OI6UVOBPDWCLSCQURLNTS4CC\",\"WARC-Block-Digest\":\"sha1:AFQDGFMDHFRQME6YYBQHDFPGH3L5HPKB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487643354.47_warc_CC-MAIN-20210618230338-20210619020338-00452.warc.gz\"}"} |
http://www.88dzw.com/dz/dianlu6/4626977.html | [
"# 指针作为函数参数\n\n### 文章摘要:指针作为函数参数大多数C程序员开始使用指针时是用它们实现函数的参数,所谓可变参数。为了理解可变参数是如何工作的,我们看看在C语言中如何执行一个交换函数。要执行一个交换函数,您要做的是引入两个变量,让函数交换它们的值。这里是一个执行交换函数的试验――输入并执行以下代码,看看会发生什么:#include void swap(int i, int j){int t;t=i;i=j;j=t;}void main(){int a,b;a=5;b=10;printf(\"%d %dn\", a, b);swap(a,b);printf(\"%d %dn\n\n`#include void swap(int i, int j){int t;t=i;i=j;j=t;}void main(){int a,b;a=5;b=10;printf(\"%d %dn\", a, b);swap(a,b);printf(\"%d %dn\", a, b);}`\n\n`#include void swap(int *i, int *j){int t;t = *i;*i = *j;*j = t;}void main(){int a,b;a=5;b=10;printf(\"%d %dn\",a,b);swap(printf(\"%d %dn\",a,b);}`",
null,
"Tag:电路基础电子电路基础,模拟电路基础电路基础"
] | [
null,
"http://www.88dzw.com/pd_dianzi/UploadPic/2013-9/201391213820220.gif",
null
] | {"ft_lang_label":"__label__zh","ft_lang_prob":0.9378606,"math_prob":0.9973311,"size":1738,"snap":"2019-35-2019-39","text_gpt3_token_len":1374,"char_repetition_ratio":0.115340255,"word_repetition_ratio":0.0,"special_character_ratio":0.27272728,"punctuation_ratio":0.20723684,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.977246,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-21T11:14:10Z\",\"WARC-Record-ID\":\"<urn:uuid:125d093a-3716-4e2a-b84a-5a0eeb77c7db>\",\"Content-Length\":\"21194\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:11922938-6703-422f-9ecc-04f25d9fd4f4>\",\"WARC-Concurrent-To\":\"<urn:uuid:9c653d64-e819-4c70-a3b9-5cc5f9f72f2e>\",\"WARC-IP-Address\":\"60.169.75.26\",\"WARC-Target-URI\":\"http://www.88dzw.com/dz/dianlu6/4626977.html\",\"WARC-Payload-Digest\":\"sha1:QI2QSXWIHT73DYKUYFUOFPVXTWOBMNVN\",\"WARC-Block-Digest\":\"sha1:T55I6R7PW7AWHSAMIZ2F2UKYFCZNQGWL\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514574409.16_warc_CC-MAIN-20190921104758-20190921130758-00400.warc.gz\"}"} |
https://optovr.com/ordering-fractions-decimals-percentages-teach-maths-free-resources-teaching-worksheets/ | [
"# Ordering Fractions Decimals Percentages Teach Maths Free Resources Teaching Worksheets",
null,
"Ordering Fractions Decimals Percentages Teach Maths Free Resources Teaching Worksheets.\n\nIt may be printed, downloaded or saved and used in your classroom, home school, or other. percentage decrease worksheets. these superhero-themed percentage decrease sheets are sure to go down a treat with your students (well, compared to worksheets that have the incredible hulk and wonder woman on them anyway).\n\nMatching fractions decimals percentages worksheet kids network teaching worksheets. Grade 5 math worksheets addition teaching mind puzzles practice test fun projects kids percentages. Free worksheets activities lesson plans handouts teaching percentages. Percentage worksheets grade 7 numbers number genetics practice problems worksheet answers rational place adding subtracting teaching percentages. Maths worksheets fraction decimal percentage teaching percentages. Maths worksheets grade decimals workbooks easy teaching math lessons free percentages worksheet adding. Activities literacy worksheets prep teaching autism sounds free addition games kindergarten division remainders math dittos grade percentages resources geometry worksheet."
] | [
null,
"https://optovr.com/images/ordering-fractions-decimals-percentages-teach-maths-free-resources-teaching-worksheets.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.81501865,"math_prob":0.7018274,"size":1231,"snap":"2021-04-2021-17","text_gpt3_token_len":192,"char_repetition_ratio":0.22982885,"word_repetition_ratio":0.06535948,"special_character_ratio":0.1429732,"punctuation_ratio":0.0867052,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.980385,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-17T09:28:58Z\",\"WARC-Record-ID\":\"<urn:uuid:cb994ea0-056f-4437-98c2-cdcd88866339>\",\"Content-Length\":\"21858\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1a50461b-390e-4886-9c5b-74b729b59d5d>\",\"WARC-Concurrent-To\":\"<urn:uuid:60cf807e-7172-42cf-be4a-762340ab8708>\",\"WARC-IP-Address\":\"104.21.5.91\",\"WARC-Target-URI\":\"https://optovr.com/ordering-fractions-decimals-percentages-teach-maths-free-resources-teaching-worksheets/\",\"WARC-Payload-Digest\":\"sha1:M7APWWUXMJ7DCBELGQT26XJ2LTV5K32Y\",\"WARC-Block-Digest\":\"sha1:M652OYYA24VO6AJODASCHVY6FWLQG23N\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038118762.49_warc_CC-MAIN-20210417071833-20210417101833-00411.warc.gz\"}"} |
https://opac.iub.edu.bd/cgi-bin/koha/opac-detail.pl?biblionumber=20147 | [
"Normal view\n\n# Fractal geography / André Dauphiné.\n\nMaterial type:",
null,
"TextSeries: ISTEPublication details: ©2012. Description: 1 online resource (xvii, 241 pages) : illustrationsContent type: text Media type: computer Carrier type: online resourceISBN: 9781118603024; 1118603028; 9781118603178; 1118603176; 9781118603161; 1118603168Genre/Form: Electronic books.Additional physical formats: Print version:: Fractal geography.DDC classification: 910.01/514742 LOC classification: G70.23 | .D37 2012ebOther classification: RB 10103 Online resources: Wiley Online Library\nContents:\nCover; Title Page; Copyright Page; Table of Contents; Introduction; Chapter 1. A Fractal World; 1.1. Fractals pervade into geography; 1.1.1. From geosciences to physical geography; 1.1.2. Urban geography: a big beneficiary; 1.2. Forms of fractal processes; 1.2.1. Some fractal forms that make use of the principle of allometry; 1.2.2. Time series and processes are also fractal; 1.2.3. Rank-size rules are generally fractal structures; 1.3. First reflections on the link between power laws and fractals; 1.3.1. Brief introduction into power laws.\n1.3.2. Some power laws recognized before the fractal era1.4. Conclusion; Chapter 2. Auto-similar and Self-affine Fractals; 2.1. The rarity of auto-similar terrestrial forms; 2.2. Yet more classes of self-affine fractal forms and processes; 2.2.1. Brownian, fractional Brownian and multi-fractional Brownian motion; 2.2.2. Lévy models; 2.2.3. Four examples of generalizations for simulating realistic forms; 2.3. Conclusion; Chapter 3. From the Fractal Dimension to Multifractal Spectrums; 3.1. Two extensions of the fractal dimension: lacunarity and codimension.\n3.1.1. Some territorial textures differentiated by their lacunarity3.1.2. Codimension as a relative fractal dimension; 3.2. Some corrections to the power laws: semifractals, parabolicfractals and log-periodic distributions; 3.2.1. Semifractals and double or truncated Pareto distributions; 3.2.2. The parabolic fractal model; 3.2.3. Log-periodic distributions; 3.3. A routine technique in medical imaging: fractal scanning; 3.4. Multifractals used to describe all the irregularities of a setdefined by measurement; 3.4.1. Definition and characteristics of a multifractal.\n3.4.2. Two functions to interpret: generalized dimension spectrumand singularity spectrum3.4.3. An approach that is classical in geosciences but exceptional in social sciences; 3.4.4. Three potential generalizations; 3.5. Conclusion; Chapter 4. Calculation and Interpretation of Fractal Dimensions; 4.1. Test data representing three categories of fractals: black and white maps, grayscale Landsat images and pluviometric chronicle series; 4.2. A first incontrovertible stage: determination of the fractal classof the geographical phenomenon studied.\n4.2.1. Successive tests using Fourier or wavelet decompositions4.2.2. Decadal rainfall in Barcelona and Beirut are fractionalGaussian noise; 4.3. Some algorithms for the calculation of the fractal dimensionsof auto-similar objects; 4.3.1. Box counting, information and area measurementdimensions for auto-similar objects; 4.3.2. A geographically inconclusive application from perception; 4.4. The fractal dimensions of objects and self-affine processes; 4.4.1. A multitude of algorithms; 4.4.2. High irregularity of decadal rainfall for Barcelona and Beirut; 4.5. Conclusion.\nSummary: Our daily universe is rough and infinitely diverse. The fractal approach clarifies and orders these disparities. It helps us to envisage new explanations of geographical phenomena, which are, however, considered as definitely understood. Written for use by geographers and researchers from similar disciplines, such as ecologists, economists, historians and sociologists, this book presents the algorithms best adapted to the phenomena encountered, and proposes case studies illustrating their applications in concrete situations. An appendix is also provided that develops programs writ.\nTags from this library: No tags from this library for this title.\nNo physical items for this record\n\nIncludes bibliographical references (pages 221]-238) and index.\n\nPrint version record.\n\nCover; Title Page; Copyright Page; Table of Contents; Introduction; Chapter 1. A Fractal World; 1.1. Fractals pervade into geography; 1.1.1. From geosciences to physical geography; 1.1.2. Urban geography: a big beneficiary; 1.2. Forms of fractal processes; 1.2.1. Some fractal forms that make use of the principle of allometry; 1.2.2. Time series and processes are also fractal; 1.2.3. Rank-size rules are generally fractal structures; 1.3. First reflections on the link between power laws and fractals; 1.3.1. Brief introduction into power laws.\n\n1.3.2. Some power laws recognized before the fractal era1.4. Conclusion; Chapter 2. Auto-similar and Self-affine Fractals; 2.1. The rarity of auto-similar terrestrial forms; 2.2. Yet more classes of self-affine fractal forms and processes; 2.2.1. Brownian, fractional Brownian and multi-fractional Brownian motion; 2.2.2. Lévy models; 2.2.3. Four examples of generalizations for simulating realistic forms; 2.3. Conclusion; Chapter 3. From the Fractal Dimension to Multifractal Spectrums; 3.1. Two extensions of the fractal dimension: lacunarity and codimension.\n\n3.1.1. Some territorial textures differentiated by their lacunarity3.1.2. Codimension as a relative fractal dimension; 3.2. Some corrections to the power laws: semifractals, parabolicfractals and log-periodic distributions; 3.2.1. Semifractals and double or truncated Pareto distributions; 3.2.2. The parabolic fractal model; 3.2.3. Log-periodic distributions; 3.3. A routine technique in medical imaging: fractal scanning; 3.4. Multifractals used to describe all the irregularities of a setdefined by measurement; 3.4.1. Definition and characteristics of a multifractal.\n\n3.4.2. Two functions to interpret: generalized dimension spectrumand singularity spectrum3.4.3. An approach that is classical in geosciences but exceptional in social sciences; 3.4.4. Three potential generalizations; 3.5. Conclusion; Chapter 4. Calculation and Interpretation of Fractal Dimensions; 4.1. Test data representing three categories of fractals: black and white maps, grayscale Landsat images and pluviometric chronicle series; 4.2. A first incontrovertible stage: determination of the fractal classof the geographical phenomenon studied.\n\n4.2.1. Successive tests using Fourier or wavelet decompositions4.2.2. Decadal rainfall in Barcelona and Beirut are fractionalGaussian noise; 4.3. Some algorithms for the calculation of the fractal dimensionsof auto-similar objects; 4.3.1. Box counting, information and area measurementdimensions for auto-similar objects; 4.3.2. A geographically inconclusive application from perception; 4.4. The fractal dimensions of objects and self-affine processes; 4.4.1. A multitude of algorithms; 4.4.2. High irregularity of decadal rainfall for Barcelona and Beirut; 4.5. Conclusion.\n\nOur daily universe is rough and infinitely diverse. The fractal approach clarifies and orders these disparities. It helps us to envisage new explanations of geographical phenomena, which are, however, considered as definitely understood. Written for use by geographers and researchers from similar disciplines, such as ecologists, economists, historians and sociologists, this book presents the algorithms best adapted to the phenomena encountered, and proposes case studies illustrating their applications in concrete situations. An appendix is also provided that develops programs writ.\n\nThere are no comments on this title."
] | [
null,
"https://opac.iub.edu.bd/opac-tmpl/lib/famfamfam/BK.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.76378596,"math_prob":0.81981206,"size":4414,"snap":"2022-05-2022-21","text_gpt3_token_len":1125,"char_repetition_ratio":0.12154195,"word_repetition_ratio":0.010398613,"special_character_ratio":0.24535568,"punctuation_ratio":0.26041666,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96328115,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-25T10:59:43Z\",\"WARC-Record-ID\":\"<urn:uuid:784b82af-ec2c-497d-9e3b-7f40a7f1d91c>\",\"Content-Length\":\"86588\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2d365558-07c8-47ed-b609-5c0afc9354fb>\",\"WARC-Concurrent-To\":\"<urn:uuid:4b19bb07-4920-42f6-b059-f4825a17915d>\",\"WARC-IP-Address\":\"103.81.70.81\",\"WARC-Target-URI\":\"https://opac.iub.edu.bd/cgi-bin/koha/opac-detail.pl?biblionumber=20147\",\"WARC-Payload-Digest\":\"sha1:P53BU52A4TNBOK4G3SL5IPTE5Y5WMTXX\",\"WARC-Block-Digest\":\"sha1:TEEEOULPJT7GABGVMJM6QPYIBUONCP4R\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662584398.89_warc_CC-MAIN-20220525085552-20220525115552-00105.warc.gz\"}"} |
https://homeworkhelpprofessors.com/math/do-math-in-computer/ | [
"",
null,
"# How to do math in a computer\n\nHow to do math in a computer? Mathematics is a crucial discipline, which cannot be overlooked, especially in the field of technology. Computers accurately perform mathematical calculations in binary form, which is quite different from what we use: the decimal numbering. Binary typically refers to a numbering system invented by Gottfried Leibniz. In this approach of numbering, a digit has only two possible states or values, i.e. 0 and 1. 0s denote OFF, false or low, and 1s indicate ON, true or high in a transistor. It allows the storage of numbers in a computing device, and therefore calculation becomes possible.\n\n## Using binary digits\n\nA bit is a binary digit. It is the least unit of data in a computing device. The value of a bit is either 0 or 1 when storing informing or executing instructions. For example, decimal numbers 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 are represented in binary form in a computer, as shown below.\n0=0, 1=1, 2=10, 3=11, 4=100, 5=101, 6=110, 7=111, 8=1000, 9=1001, 10=1010\nUnlike the decimal/denary number system which applies the power of ten (10n), binary uses the power of two (2n) i.e.\nDecimal system/ base 10\n101 = 10\n102= 100\n103 = 1,000\n106 =1,000,000\nBinary system/ base 2\n21 = 2\n22 = 4\n23 = 8\n24=16\n26 =64\nTo clearly understand how to do math in a computer, it is essential to note that the calculation of binary starts from right to left. On the left is the most significant bit (MSB), and on the absolute right is the least significant bit (LSB). Starting with “1” from the right, an 8-bit number would be;\n128 64 32 16 8 4 2 1\n8-bits are equal to 1 byte. When adding in the binary counting system, the bit rules outlined below apply.\n0+0=0, 1+0=1, 0+1=1, 1+1=0\nAdding all the numbers above equals to 421.\n128+64+32+16+8+4+2+1=421\n42110 = 1101001012\n\n## Using Boolean logic\n\nComputers operate using Boolean logic. That is a simple method to discover the “true” state of an expression in the binary numbering approach. Boolean operators AND, OR, XOR, and NOT evaluate values to give a true (1) or false (0) outcome. For intricate calculations, the computer processor interlinks several Boolean statements to produce accurate results.\n\n### Conclusion\n\nAs illustrated above, mathematics has been used immensely in the advancement of technology. In one way or another, the application of mathematics in our day to day lives has a tremendous positive impact. That is why our math specialists have made it their priority to help students having challenges in math excel. If you want to pay someone to do your math homework, do not hesitate to get in touch with us. Our customer support department is available all the time to offer any form of assistance that you may require.\nYou do not have to get overwhelmed by complex math problems and assignments, spending sleepless nights trying to beat tight deadlines. At very competitive prices, we ensure your math homework gets completed in time with quality results. We always guarantee high grades, no matter the complexity level of your math homework. Go ahead and place your order with us today, and you will never regret it.\n\nFind more knowledge about math subject here:"
] | [
null,
"https://homeworkhelpprofessors.com/wp-content/uploads/2020/02/geometry-mathematics-volume-1044090-1024x674.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8913914,"math_prob":0.9760851,"size":3479,"snap":"2020-45-2020-50","text_gpt3_token_len":887,"char_repetition_ratio":0.10388489,"word_repetition_ratio":0.0032786885,"special_character_ratio":0.27249208,"punctuation_ratio":0.12343967,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9973944,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-22T08:57:43Z\",\"WARC-Record-ID\":\"<urn:uuid:898b1c7d-579e-49f4-a611-c31037efad50>\",\"Content-Length\":\"44831\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:010a4fa9-d557-497c-b0c7-aa0bd34a4d9d>\",\"WARC-Concurrent-To\":\"<urn:uuid:822539ee-a9ee-4c8c-b4a3-e1e7fb7c0749>\",\"WARC-IP-Address\":\"51.89.40.16\",\"WARC-Target-URI\":\"https://homeworkhelpprofessors.com/math/do-math-in-computer/\",\"WARC-Payload-Digest\":\"sha1:SDR6HOKIWKHJROT66FDEGFEKLYGYUDTB\",\"WARC-Block-Digest\":\"sha1:UHWDNXOSR6RFCV4PJNL4MB5LFYNHUVSN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107879362.3_warc_CC-MAIN-20201022082653-20201022112653-00111.warc.gz\"}"} |
https://javadoc.scijava.org/Fiji/bunwarpj/MathTools.html | [
"bunwarpj\n\n## Class MathTools\n\n• ```public class MathTools\nextends Object```\nThis class has the math methods to deal with b-splines and images.\n• ### Constructor Summary\n\nConstructors\nConstructor and Description\n`MathTools()`\n• ### Method Summary\n\nAll Methods\nModifier and Type Method and Description\n`static double[][]` ```antiSymmetricPadding(double[][] c, int extra)```\n`static double[]` ```antiSymmetricPadding(double[] c, int n, int extra)```\n`static float[]` ```antiSymmetricPadding(float[] c, int n, int extra)```\n`static double` `Bspline01(double x)`\nB-spline 01.\n`static double` `Bspline02(double x)`\nB-spline 02.\n`static double` `Bspline03(double x)`\nB-spline 03.\n`static double` ```EuclideanNorm(double a, double b)```\nEuclidean Norm.\n`static boolean` ```invertMatrixSVD(int Ydim, int Xdim, double[][] B, double[][] iB)```\nInvert a matrix by the Singular Value Decomposition method.\n`static double[]` ```linearLeastSquares(double[][] A, double[] b)```\nGives the least-squares solution to (A * x = b) such that (A^T * A)^-1 * A^T * b = x is a vector of size (column), where A is a (line x column) matrix, and where b is a vector of size (line).\n`static double` ```nchoosek(int n, int k)```\nN choose K.\n`static void` ```QRdecomposition(double[][] Q, double[][] R)```\nDecomposes the (line x column) input matrix Q into an orthonormal output matrix Q of same size (line x column) and an upper-diagonal square matrix R of size (column x column), such that the matrix product (Q * R) gives the input matrix, and such that the matrix product (Q^T * Q) gives the identity.\n`static void` ```showMatrix(int Ydim, int Xdim, double[][] A)```\nMethod to display the matrix in the command line.\n`static void` ```singularValueBackSubstitution(double[][] U, double[] W, double[][] V, double[] B, double[] X)```\nsolve (U.W.Transpose(V)).X == B in terms of X {U, W, V} are given by SingularValueDecomposition by convention, set w[i,j]=0 to get (1/w[i,j])=0 the size of the input matrix U is (Lines x Columns) the size of the vector (1/W) of singular values is (Columns) the size of the untransposed orthogonal matrix V is (Columns x Columns) the size of the input vector B is (Lines) the size of the output vector X is (Columns)\n`static void` ```singularValueDecomposition(double[][] U, double[] W, double[][] V)```\nSingular Value Decomposition.\n• ### Methods inherited from class java.lang.Object\n\n`clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait`\n• ### Constructor Detail\n\n• #### MathTools\n\n`public MathTools()`\n• ### Method Detail\n\n• #### Bspline01\n\n`public static double Bspline01(double x)`\nB-spline 01.\nParameters:\n`x` -\n• #### Bspline02\n\n`public static double Bspline02(double x)`\nB-spline 02.\nParameters:\n`x` -\n• #### Bspline03\n\n`public static double Bspline03(double x)`\nB-spline 03.\nParameters:\n`x` -\n• #### EuclideanNorm\n\n```public static double EuclideanNorm(double a,\ndouble b)```\nEuclidean Norm.\nParameters:\n`a` -\n`b` -\n• #### invertMatrixSVD\n\n```public static boolean invertMatrixSVD(int Ydim,\nint Xdim,\ndouble[][] B,\ndouble[][] iB)```\nInvert a matrix by the Singular Value Decomposition method.\nParameters:\n`Ydim` - input, Y-dimension\n`Xdim` - input, X-dimension\n`B` - input, matrix to invert\n`iB` - output, inverted matrix\nReturns:\nunder-constrained flag\n• #### linearLeastSquares\n\n```public static double[] linearLeastSquares(double[][] A,\ndouble[] b)```\nGives the least-squares solution to (A * x = b) such that (A^T * A)^-1 * A^T * b = x is a vector of size (column), where A is a (line x column) matrix, and where b is a vector of size (line). The result may differ from that obtained by a singular-value decomposition in the cases where the least-squares solution is not uniquely defined (SVD returns the solution of least norm, not QR).\nParameters:\n`A` - An input matrix A[line][column] of size (line x column)\n`b` - An input vector b[line] of size (line)\nReturns:\nAn output vector x[column] of size (column)\n• #### nchoosek\n\n```public static double nchoosek(int n,\nint k)```\nN choose K.\nParameters:\n`n` -\n`k` -\n• #### QRdecomposition\n\n```public static void QRdecomposition(double[][] Q,\ndouble[][] R)```\nDecomposes the (line x column) input matrix Q into an orthonormal output matrix Q of same size (line x column) and an upper-diagonal square matrix R of size (column x column), such that the matrix product (Q * R) gives the input matrix, and such that the matrix product (Q^T * Q) gives the identity.\nParameters:\n`Q` - An in-place (line x column) matrix Q[line][column], which expects as input the matrix to decompose, and which returns as output an orthonormal matrix\n`R` - An output (column x column) square matrix R[column][column]\n• #### showMatrix\n\n```public static void showMatrix(int Ydim,\nint Xdim,\ndouble[][] A)```\nMethod to display the matrix in the command line.\nParameters:\n`Ydim` - Y-dimension\n`Xdim` - X-dimension\n`A` - matrix to display\n• #### singularValueDecomposition\n\n```public static void singularValueDecomposition(double[][] U,\ndouble[] W,\ndouble[][] V)```\nSingular Value Decomposition.\nParameters:\n`U` - input matrix\n`W` - vector of singular values\n`V` - untransposed orthogonal matrix\n• #### singularValueBackSubstitution\n\n```public static void singularValueBackSubstitution(double[][] U,\ndouble[] W,\ndouble[][] V,\ndouble[] B,\ndouble[] X)```\nsolve (U.W.Transpose(V)).X == B in terms of X {U, W, V} are given by SingularValueDecomposition by convention, set w[i,j]=0 to get (1/w[i,j])=0 the size of the input matrix U is (Lines x Columns) the size of the vector (1/W) of singular values is (Columns) the size of the untransposed orthogonal matrix V is (Columns x Columns) the size of the input vector B is (Lines) the size of the output vector X is (Columns)\nParameters:\n`U` - input matrix\n`W` - vector of singular values\n`V` - untransposed orthogonal matrix\n`B` - input vector\n`X` - returned solution\n\n```public static double[][] antiSymmetricPadding(double[][] c,\nint extra)```\n```public static double[] antiSymmetricPadding(double[] c,\n```public static float[] antiSymmetricPadding(float[] c,"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5550057,"math_prob":0.98397565,"size":3787,"snap":"2021-21-2021-25","text_gpt3_token_len":1033,"char_repetition_ratio":0.21623051,"word_repetition_ratio":0.125,"special_character_ratio":0.27171904,"punctuation_ratio":0.10769231,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99967754,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-06T05:53:47Z\",\"WARC-Record-ID\":\"<urn:uuid:ee0cc62c-602e-4b6f-90e6-fc221bab0fcd>\",\"Content-Length\":\"25274\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6a5b064c-37c3-4340-a4b9-dce1ddbcbc48>\",\"WARC-Concurrent-To\":\"<urn:uuid:a5ca143b-c03d-458c-816f-8513b6fae972>\",\"WARC-IP-Address\":\"144.92.48.198\",\"WARC-Target-URI\":\"https://javadoc.scijava.org/Fiji/bunwarpj/MathTools.html\",\"WARC-Payload-Digest\":\"sha1:CFBFTXKIJ4CVKGGYX7IZ32LYAO2MRLTK\",\"WARC-Block-Digest\":\"sha1:U4WFK3JTEYOUKMNYOIP565KV4THRDXRE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243988741.20_warc_CC-MAIN-20210506053729-20210506083729-00327.warc.gz\"}"} |
https://www.onworks.net/zh-CN/os-distributions/programs/mia-2dseriessmoothgradmad-online | [
"",
null,
"",
null,
"",
null,
"",
null,
"",
null,
"### 程序:\n\n#### 配置\n\n-i --in-file=(输入,必填); 细绳\n\n-o --out-file=(输出,需要); io\n\n-k --skip=0\n\n-e --放大边界=5\n\n-c --crop\n\n-g --gauss=1\n\n-V --verbose=警告\n\ninfo - 低级消息\n\nmessage - 普通消息\n\n- 版权\n\n-h --帮助\n\n-? - 用法\n\n- 版\n\n--线程=-1\n\nnumber 应该小于或等于逻辑处理器内核的数量\n\n#### 插件:1d/空间内核\n\n(无参数)\n\nw = 1; 输入 [0, inf)\n\n#### 插件:一维/样条内核\n\nd = 3; [0, 5] 中的整数\n\nd = 3; [3, 3] 中的整数\n\n(无参数)\n\n(无参数)\n\nDIV 图像组合器“div”\n\n(无参数)\n\nMUL 图像组合器“mul”\n\n(无参数)\n\n(无参数)\n\n#### 插件:2d图像/过滤器\n\nw = 2; int in [1, inf)\n\nw = 1; int in [1, inf)\n\nITER = 100; [1, 10000] 中的整数\n\nk = -1; 浮动在 [0, 100]\nk 噪声阈值(<=0 -> 自适应)。\n\nn = 8; 放\n\nPSI = 火鸡; 字典\n\n- 测试停止功能\n\npm1 - 停止功能1\npm2 - 停止功能2\n\n)\n\nfalse,第一个操作符是通过过滤器管道的图像,以及\n\nop =(必需,工厂)\n\na = 1; 漂浮\n\nb = 0; 漂浮\n\n- 有符号的 32 位\n\n- 有符号的 16 位\nUINT - 无符号 32 位\n\n- 二进制数据\n\n)\n\n(无参数)\n\nb = [[1,1]]; 2d边界\n\nbx = 1; 输入 [1, inf)\nx 方向的块大小。\n\nby = 1; 输入 [1, inf)\ny 方向的块大小。\n\n)\n\nw = 1; int in [0, inf)\n\n(无参数)\n\nc = 3; int in [2, inf)\n\nn = 4n; 工厂\n\n= 分钟; 字典\n\n- 将掩码外的值设置为零。\n\n= 0; 布尔值\n\nw = 1; int in [1, inf)\n\nw = 1; int in [1, inf)\n\nw = 1; int in [1, inf)\n\n(无参数)\n\ng = [高斯:mu=0,sigma=10]; 工厂\n\nMOD = 0; 布尔值\n\n)\n\nITER = 0; [1, 1000000] 中的整数\n\nn = 8n; 工厂\n\nw = 1; int in [1, inf)\n\ns = [[0,0]]; 2d边界\n\nsx = 0; 输入 [0, inf)\nx 方向的目标大小,0:使用输入大小。\n\nsy = 0; 输入 [0, inf)\ny 方向的目标大小,0:使用输入大小。\n\n(无参数)\n\nsepconv 2D图像强度分离卷积滤波器,支持的参数有:\n\nkx = [高斯:w=1]; 工厂\n\nky = [高斯:w=1]; 工厂\n\nDIR = x; 字典\n\ny - y 方向的梯度\nx - x 方向的梯度\n\n(无参数)\n\nSWS 种子水源。 该算法提取的区域正好与初始区域一样多\n\nn = [球体:r=1]; 工厂\n\nITER = 0; [1, 1000000] 中的整数\n\nws 基本水源分割,支持的参数有:\n\nn = [球体:r=1]; 工厂\n\n#### 插件:二维图像/IO\n\nBMP BMP 2D 图像输入/输出支持\n\nJPG 用于 jpeg 灰度图像的 2dimage io 插件\n\nPNG 一个用于 png 图像的 2dimage io 插件\n\nRAW 2D 图像输出支持\n\nTIF TIFF 2D 图像输入/输出支持\n\n#### 插件:2d图像/形状\n\n1n 只包含中心点的形状\n\n(无参数)\n\n4n 4n邻域二维形状\n\n(无参数)\n\n8n 8n邻域二维形状\n\n(无参数)\n\n= 1; 布尔值\n\nr = 2; 浮在 (0, inf)\n\n= 1; 布尔值\n\n#### 插件:2d变换/io\n\nBBS 2D 转换的二进制(不可移植)序列化 IO\n\nXML 2D 转换的 XML 序列化 IO\n\nmu = 0; 漂浮\n\na = 0; 漂浮\n\nb = 1; 漂浮\n\nOpenEXR 格式的图像。"
] | [
null,
"https://gtranslate.net/flags/blank.png",
null,
"https://gtranslate.net/flags/blank.png",
null,
"https://gtranslate.net/flags/blank.png",
null,
"https://www.onworks.net/images/onworkslogofavicon.ico",
null,
"https://www.onworks.net/imagescropped/mia2dseriessmoothgradmad.png_3.webp",
null
] | {"ft_lang_label":"__label__zh","ft_lang_prob":0.9740737,"math_prob":0.999584,"size":7972,"snap":"2022-05-2022-21","text_gpt3_token_len":6918,"char_repetition_ratio":0.13202812,"word_repetition_ratio":0.11990212,"special_character_ratio":0.32752132,"punctuation_ratio":0.10261313,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9796304,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-29T12:06:47Z\",\"WARC-Record-ID\":\"<urn:uuid:d4677427-96c9-4353-885a-62987948ad37>\",\"Content-Length\":\"172476\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cb2dd90b-dbcf-4fbf-bed2-8bb5037d5c64>\",\"WARC-Concurrent-To\":\"<urn:uuid:fa5b0217-16ca-40a5-8327-f300b2e12345>\",\"WARC-IP-Address\":\"51.195.46.31\",\"WARC-Target-URI\":\"https://www.onworks.net/zh-CN/os-distributions/programs/mia-2dseriessmoothgradmad-online\",\"WARC-Payload-Digest\":\"sha1:SH7C757S6HP3XJKYCZG4OVJAACKDN6JE\",\"WARC-Block-Digest\":\"sha1:FYV3VJKLMSKTJORY6JPHYN6X7RNSAI4X\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662644142.66_warc_CC-MAIN-20220529103854-20220529133854-00029.warc.gz\"}"} |
https://mathjs.org/docs/reference/functions/simplify.html | [
"# Function simplify #\n\nSimplify an expression tree.\n\nA list of rules are applied to an expression, repeating over the list until no further changes are made. It’s possible to pass a custom set of rules to the function as second argument. A rule can be specified as an object, string, or function:\n\n``````const rules = [\n{ l: 'n1*n3 + n2*n3', r: '(n1+n2)*n3' },\n'n1*n3 + n2*n3 -> (n1+n2)*n3',\nfunction (node) {\n// ... return a new node or return the node unchanged\nreturn node\n}\n]\n``````\n\nString and object rules consist of a left and right pattern. The left is used to match against the expression and the right determines what matches are replaced with. The main difference between a pattern and a normal expression is that variables starting with the following characters are interpreted as wildcards:\n\n• ‘n’ - matches any Node\n• ‘c’ - matches any ConstantNode\n• ‘v’ - matches any Node that is not a ConstantNode\n\nThe default list of rules is exposed on the function as `simplify.rules` and can be used as a basis to built a set of custom rules.\n\nFor more details on the theory, see:\n\nAn optional `options` argument can be passed as last argument of `simplify`. There is currently one option available:\n\n• `exactFractions`: a boolean which is `true` by default.\n• `fractionsLimit`: when `exactFractions` is true, a fraction will be returned only when both numerator and denominator are smaller than `fractionsLimit`. Default value is 10000.\n\n## Syntax #\n\n``````simplify(expr)\nsimplify(expr, rules)\nsimplify(expr, rules)\nsimplify(expr, rules, scope)\nsimplify(expr, rules, scope, options)\nsimplify(expr, scope)\nsimplify(expr, scope, options)\n``````\n\n### Parameters #\n\nParameter Type Description\n`expr` Node | string The expression to be simplified\n`rules` Array<{l:string, r: string} | string | function> Optional list with custom rules\n\n### Returns #\n\nType Description\nNode Returns the simplified form of `expr`\n\n## Examples #\n\n``````math.simplify('2 * 1 * x ^ (2 - 1)') // Node \"2 * x\"\nmath.simplify('2 * 3 * x', {x: 4}) // Node \"24\"\nconst f = math.parse('2 * 1 * x ^ (2 - 1)')\nmath.simplify(f) // Node \"2 * x\"\nmath.simplify('0.4 * x', {}, {exactFractions: true}) // Node \"x * 2 / 5\"\nmath.simplify('0.4 * x', {}, {exactFractions: false}) // Node \"0.4 * x\"\n``````"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6859435,"math_prob":0.95190114,"size":2295,"snap":"2021-31-2021-39","text_gpt3_token_len":604,"char_repetition_ratio":0.1440419,"word_repetition_ratio":0.031578947,"special_character_ratio":0.28366014,"punctuation_ratio":0.1421801,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98941773,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-24T12:58:25Z\",\"WARC-Record-ID\":\"<urn:uuid:8c3c1785-3a6e-4f16-a5b7-1111bb438efb>\",\"Content-Length\":\"12287\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3ecbeb13-9cfa-41a6-9cb9-f03ae5129a3e>\",\"WARC-Concurrent-To\":\"<urn:uuid:37713226-3531-4ffa-b897-e0318ef930d0>\",\"WARC-IP-Address\":\"185.199.109.153\",\"WARC-Target-URI\":\"https://mathjs.org/docs/reference/functions/simplify.html\",\"WARC-Payload-Digest\":\"sha1:BDNH2DPXLPQ5AYAOWLZLWHYLTYUP4ASC\",\"WARC-Block-Digest\":\"sha1:X5MSSVENILUVVVFGKN6CBUTG7FOS5FZR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046150266.65_warc_CC-MAIN-20210724125655-20210724155655-00300.warc.gz\"}"} |
https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatLUFactor.html | [
"petsc-3.11.2 2019-05-18\nReport Typos and Errors\n\nMatLUFactor\n\nPerforms in-place LU factorization of matrix.\n\nSynopsis\n\n```#include \"petscmat.h\"\nPetscErrorCode MatLUFactor(Mat mat,IS row,IS col,const MatFactorInfo *info)\n```\nCollective on Mat\n\nInput Parameters\n\n mat - the matrix row - row permutation col - column permutation info - options for factorization, includes\n``` fill - expected fill as ratio of original fill.\n```\n``` dtcol - pivot tolerance (0 no pivot, 1 full column pivoting)\n```\n``` Run with the option -info to determine an optimal value to use\n```\n\nNotes\n\nMost users should employ the simplified KSP interface for linear solvers instead of working directly with matrix algebra routines such as this. See, e.g., KSPCreate().\n\nThis changes the state of the matrix to a factored matrix; it cannot be used for example with MatSetValues() unless one first calls MatSetUnfactored().\n\nMatLUFactorSymbolic(), MatLUFactorNumeric(), MatCholeskyFactor(),\nMatGetOrdering(), MatSetUnfactored(), MatFactorInfo, MatGetFactor()\n\nDeveloper Note: fortran interface is not autogenerated as the f90 interface defintion cannot be generated correctly [due to MatFactorInfo]\n\ndeveloper\n\nLocation\n\nsrc/mat/interface/matrix.c\n\nImplementations\n\nMatLUFactor_SeqAIJ in src/mat/impls/aij/seq/aijfact.c\nMatLUFactor_SeqBAIJ in src/mat/impls/baij/seq/baijfact.c\nMatLUFactor_SeqDense in src/mat/impls/dense/seq/dense.c\nMatLUFactor_Elemental in src/mat/impls/elemental/matelem.cxx\n\nIndex of all Mat routines"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5434428,"math_prob":0.5955198,"size":1513,"snap":"2019-26-2019-30","text_gpt3_token_len":413,"char_repetition_ratio":0.12789927,"word_repetition_ratio":0.0,"special_character_ratio":0.20621282,"punctuation_ratio":0.12048193,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9572563,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-20T16:05:50Z\",\"WARC-Record-ID\":\"<urn:uuid:d1928144-92a5-4504-9f2f-9ec90c8dc61e>\",\"Content-Length\":\"4771\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8b82248d-d413-4adc-b035-333182e44f70>\",\"WARC-Concurrent-To\":\"<urn:uuid:61a05b27-c3fd-43da-9cc0-19d095d35b67>\",\"WARC-IP-Address\":\"140.221.6.95\",\"WARC-Target-URI\":\"https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatLUFactor.html\",\"WARC-Payload-Digest\":\"sha1:S2BCFD7FQWL75T6SV72NBSEVQ3RCFJ63\",\"WARC-Block-Digest\":\"sha1:FLBS7AIJOR4BCBOLVQILT7CY3STSZ45A\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627999261.43_warc_CC-MAIN-20190620145650-20190620171650-00361.warc.gz\"}"} |
https://www.colorhexa.com/8df6f0 | [
"# #8df6f0 Color Information\n\nIn a RGB color space, hex #8df6f0 is composed of 55.3% red, 96.5% green and 94.1% blue. Whereas in a CMYK color space, it is composed of 42.7% cyan, 0% magenta, 2.4% yellow and 3.5% black. It has a hue angle of 176.6 degrees, a saturation of 85.4% and a lightness of 75.9%. #8df6f0 color hex could be obtained by blending #ffffff with #1bede1. Closest websafe color is: #99ffff.\n\n• R 55\n• G 96\n• B 94\nRGB color chart\n• C 43\n• M 0\n• Y 2\n• K 4\nCMYK color chart\n\n#8df6f0 color description : Very soft cyan.\n\n# #8df6f0 Color Conversion\n\nThe hexadecimal color #8df6f0 has RGB values of R:141, G:246, B:240 and CMYK values of C:0.43, M:0, Y:0.02, K:0.04. Its decimal value is 9303792.\n\nHex triplet RGB Decimal 8df6f0 `#8df6f0` 141, 246, 240 `rgb(141,246,240)` 55.3, 96.5, 94.1 `rgb(55.3%,96.5%,94.1%)` 43, 0, 2, 4 176.6°, 85.4, 75.9 `hsl(176.6,85.4%,75.9%)` 176.6°, 42.7, 96.5 99ffff `#99ffff`\nCIE-LAB 90.717, -31.87, -6.658 59.664, 77.862, 94.318 0.257, 0.336, 77.862 90.717, 32.558, 191.799 90.717, -46.992, -5.221 88.239, -33.724, -1.607 10001101, 11110110, 11110000\n\n# Color Schemes with #8df6f0\n\n• #8df6f0\n``#8df6f0` `rgb(141,246,240)``\n• #f68d93\n``#f68d93` `rgb(246,141,147)``\nComplementary Color\n• #8df6bc\n``#8df6bc` `rgb(141,246,188)``\n• #8df6f0\n``#8df6f0` `rgb(141,246,240)``\n• #8dc8f6\n``#8dc8f6` `rgb(141,200,246)``\nAnalogous Color\n• #f6bc8d\n``#f6bc8d` `rgb(246,188,141)``\n• #8df6f0\n``#8df6f0` `rgb(141,246,240)``\n• #f68dc8\n``#f68dc8` `rgb(246,141,200)``\nSplit Complementary Color\n• #f6f08d\n``#f6f08d` `rgb(246,240,141)``\n• #8df6f0\n``#8df6f0` `rgb(141,246,240)``\n• #f08df6\n``#f08df6` `rgb(240,141,246)``\n• #93f68d\n``#93f68d` `rgb(147,246,141)``\n• #8df6f0\n``#8df6f0` `rgb(141,246,240)``\n• #f08df6\n``#f08df6` `rgb(240,141,246)``\n• #f68d93\n``#f68d93` `rgb(246,141,147)``\n• #46f0e7\n``#46f0e7` `rgb(70,240,231)``\n• #5ef2ea\n``#5ef2ea` `rgb(94,242,234)``\n• #75f4ed\n``#75f4ed` `rgb(117,244,237)``\n• #8df6f0\n``#8df6f0` `rgb(141,246,240)``\n• #a5f8f3\n``#a5f8f3` `rgb(165,248,243)``\n• #bcfaf6\n``#bcfaf6` `rgb(188,250,246)``\n• #d4fcf9\n``#d4fcf9` `rgb(212,252,249)``\nMonochromatic Color\n\n# Alternatives to #8df6f0\n\nBelow, you can see some colors close to #8df6f0. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #8df6d6\n``#8df6d6` `rgb(141,246,214)``\n• #8df6df\n``#8df6df` `rgb(141,246,223)``\n• #8df6e7\n``#8df6e7` `rgb(141,246,231)``\n• #8df6f0\n``#8df6f0` `rgb(141,246,240)``\n• #8df3f6\n``#8df3f6` `rgb(141,243,246)``\n• #8debf6\n``#8debf6` `rgb(141,235,246)``\n• #8de2f6\n``#8de2f6` `rgb(141,226,246)``\nSimilar Colors\n\n# #8df6f0 Preview\n\nThis text has a font color of #8df6f0.\n\n``<span style=\"color:#8df6f0;\">Text here</span>``\n#8df6f0 background color\n\nThis paragraph has a background color of #8df6f0.\n\n``<p style=\"background-color:#8df6f0;\">Content here</p>``\n#8df6f0 border color\n\nThis element has a border color of #8df6f0.\n\n``<div style=\"border:1px solid #8df6f0;\">Content here</div>``\nCSS codes\n``.text {color:#8df6f0;}``\n``.background {background-color:#8df6f0;}``\n``.border {border:1px solid #8df6f0;}``\n\n# Shades and Tints of #8df6f0\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. In this example, #010d0d is the darkest color, while #fafffe is the lightest one.\n\n• #010d0d\n``#010d0d` `rgb(1,13,13)``\n• #021f1e\n``#021f1e` `rgb(2,31,30)``\n• #04322f\n``#04322f` `rgb(4,50,47)``\n• #054440\n``#054440` `rgb(5,68,64)``\n• #075651\n``#075651` `rgb(7,86,81)``\n• #086863\n``#086863` `rgb(8,104,99)``\n• #0a7a74\n``#0a7a74` `rgb(10,122,116)``\n• #0b8d85\n``#0b8d85` `rgb(11,141,133)``\n• #0d9f96\n``#0d9f96` `rgb(13,159,150)``\n• #0eb1a8\n``#0eb1a8` `rgb(14,177,168)``\n• #0fc3b9\n``#0fc3b9` `rgb(15,195,185)``\n• #11d5ca\n``#11d5ca` `rgb(17,213,202)``\n• #12e7db\n``#12e7db` `rgb(18,231,219)``\n• #20ede2\n``#20ede2` `rgb(32,237,226)``\n• #32efe4\n``#32efe4` `rgb(50,239,228)``\n• #44f0e6\n``#44f0e6` `rgb(68,240,230)``\n• #56f2e9\n``#56f2e9` `rgb(86,242,233)``\n• #69f3eb\n``#69f3eb` `rgb(105,243,235)``\n• #7bf5ee\n``#7bf5ee` `rgb(123,245,238)``\n• #8df6f0\n``#8df6f0` `rgb(141,246,240)``\n• #9ff7f2\n``#9ff7f2` `rgb(159,247,242)``\n• #b1f9f5\n``#b1f9f5` `rgb(177,249,245)``\n• #c4faf7\n``#c4faf7` `rgb(196,250,247)``\n• #d6fcfa\n``#d6fcfa` `rgb(214,252,250)``\n• #e8fdfc\n``#e8fdfc` `rgb(232,253,252)``\n• #fafffe\n``#fafffe` `rgb(250,255,254)``\nTint Color Variation\n\n# Tones of #8df6f0\n\nA tone is produced by adding gray to any pure hue. In this case, #c1c2c2 is the less saturated color, while #88fbf4 is the most saturated one.\n\n• #c1c2c2\n``#c1c2c2` `rgb(193,194,194)``\n• #bcc7c6\n``#bcc7c6` `rgb(188,199,198)``\n• #b8cbca\n``#b8cbca` `rgb(184,203,202)``\n• #b3d0ce\n``#b3d0ce` `rgb(179,208,206)``\n• #aed5d3\n``#aed5d3` `rgb(174,213,211)``\n``#a9dad7` `rgb(169,218,215)``\n• #a5dedb\n``#a5dedb` `rgb(165,222,219)``\n• #a0e3df\n``#a0e3df` `rgb(160,227,223)``\n• #9be8e3\n``#9be8e3` `rgb(155,232,227)``\n• #96ede8\n``#96ede8` `rgb(150,237,232)``\n• #92f1ec\n``#92f1ec` `rgb(146,241,236)``\n• #8df6f0\n``#8df6f0` `rgb(141,246,240)``\n• #88fbf4\n``#88fbf4` `rgb(136,251,244)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #8df6f0 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.53334993,"math_prob":0.76000077,"size":3727,"snap":"2021-43-2021-49","text_gpt3_token_len":1726,"char_repetition_ratio":0.12301907,"word_repetition_ratio":0.011090573,"special_character_ratio":0.5320633,"punctuation_ratio":0.23783186,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98512965,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-22T12:28:58Z\",\"WARC-Record-ID\":\"<urn:uuid:7a67ea69-a6d8-499f-b1d8-b5934b4a26bf>\",\"Content-Length\":\"36243\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:be4fa49a-97e6-4021-8ff5-36c2f19b902c>\",\"WARC-Concurrent-To\":\"<urn:uuid:0586998c-630b-4050-875b-4efcdcb34e8d>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/8df6f0\",\"WARC-Payload-Digest\":\"sha1:W2OEASMRNMKGP7NN4BA72L5LJXXN2K7F\",\"WARC-Block-Digest\":\"sha1:3LCU6G3MTJ4NJJICG2QO3CIPUIOW74GV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585507.26_warc_CC-MAIN-20211022114748-20211022144748-00576.warc.gz\"}"} |
https://www.arxiv-vanity.com/papers/1009.3449/ | [
"# Modeling scale-dependent bias on the baryonic acoustic scale with the statistics of peaks of Gaussian random fields\n\nVincent Desjacques Institute for Theoretical Physics, University of Zurich, 8057 Zurich, Switzerland Martin Crocce Institut de Ciències de l’Espai, IEEC-CSIC, Campus UAB, Facultat de Ciències, Barcelona 08193, Spain Roman Scoccimarro Center for Cosmology and Particle Physics, Department of Physics, New York University, NY 10003, USA Ravi K. Sheth Center for Particle Cosmology, University of Pennsylvania, 209 S 33rd Street, Philadelphia, PA 19104, USA\n###### Abstract\n\nModels of galaxy and halo clustering commonly assume that the tracers can be treated as a continuous field locally biased with respect to the underlying mass distribution. In the peak model pioneered by Bardeen et al. (1986), one considers instead density maxima of the initial, Gaussian mass density field as an approximation to the formation site of virialized objects. In this paper, the peak model is extended in two ways to improve its predictive accuracy. Firstly, we derive the two-point correlation function of initial density peaks up to second order and demonstrate that a peak-background split approach can be applied to obtain the -independent and -dependent peak bias factors at all orders. Secondly, we explore the gravitational evolution of the peak correlation function within the Zel’dovich approximation. We show that the local (Lagrangian) bias approach emerges as a special case of the peak model, in which all bias parameters are scale-independent and there is no statistical velocity bias. We apply our formulae to study how the Lagrangian peak biasing, the diffusion due to large scale flows and the mode-coupling due to nonlocal interactions affect the scale dependence of bias from small separations up to the baryon acoustic oscillation (BAO) scale. For 2 density peaks collapsing at , our model predicts a 5% residual scale-dependent bias around the acoustic scale that arises mostly from first-order Lagrangian peak biasing (as opposed to second-order gravity mode-coupling). We also search for a scale dependence of bias in the large scale auto-correlation of massive halos extracted from a very large N-body simulation provided by the MICE collaboration. For halos with mass , our measurements demonstrate a scale-dependent bias across the BAO feature which is very well reproduced by a prediction based on the peak model.\n\n###### pacs:\n98.80.-k, 98.65.-r, 95.35.+d, 98.80.Es\n\n## I Introduction\n\nA considerable amount of effort has already been invested in measuring the large scale distribution of galaxies, especially the galaxy two-point correlation function and power spectrum, to constrain viable cosmological models (e.g., 1990Natur.348..705E, ; 1995MNRAS.276L..59B, ; 1999MNRAS.305..527T, ; 2001MNRAS.327.1297P, ; 2004ApJ…606..702T, ; 2005MNRAS.362..505C, ; 2005ApJ…633..560E, ; 2006PhRvD..74l3507T, ; 2006A&A…459..375H, ; 2007MNRAS.381.1053P, ; 2009MNRAS.400.1643S, ; 2009MNRAS.393.1183C, ; 2009ApJ…696L..93M, ; 2010MNRAS.404…60R, ). The amplitude, shape and baryon acoustic feature in these two-point statistics encode a wealth of cosmological information (1996ApJ…471…30H, ; 1998ApJ…504L..57E, ; 2001ApJ…557L…7C, ; 2003PhRvD..68f3004H, ; 2003ApJ…594..665B, ; 2003PhRvD..68h3504L, ; 2004ApJ…615..573M, ; 2005MNRAS.357..429A, ; 2005MNRAS.363.1329B, ; 2005ApJ…631….1G, ; 2006MNRAS.366..884D, ; 2006ApJ…644..663Z, ; 2006MNRAS.365..255B, ; 2008PhRvD..77l3540P, ; 2009MNRAS.399.1663G, ; 2009ApJ…693.1404S, ; 2009ApJ…698..967Y, ; 2010ApJ…710.1444K, ). Ongoing and planned galaxy surveys of the high redshift Universe will furnish measurements of the underlying mass distribution with unprecedent precision and statistics. Alongside this great observational effort, interpreting this vast amount of data will require a much better understanding of the relation between the surveyed galaxies and the mass fluctuations they are thought to trace.\n\nEssentially all models of galaxy clustering assume that galaxies are biased tracers of the mass density fluctuation field. Although this bias is expected to be nonlinear, scale-dependent and stochastic (1996MNRAS.282..347M, ; 1999MNRAS.304..767S, ), the simpler, linear, scale-independent, deterministic model has proved to be an extremely useful first order approximation (1984ApJ…284L…9K, ; 1986ApJ…304…15B, ; 1989MNRAS.237.1127C, ). However, in order to predict corrections beyond linear order to the galaxy two-point correlation, or even the leading order contribution to higher order statistics such as the galaxy three-point correlation or bispectrum, one must address the complications which arise from nonlinearity, scale dependence and stochasticity. For example, if the bias relation is established in coordinate space, then nonlinear biasing will produce scale dependence and stochasticity in Fourier space, and vice-versa (e.g., 1999ApJ…525..543M, ). This randomness will add to other sources of stochasticity which may arise, for example, from the fact that the formation of galaxies and halos depends on quantities other than the mass density field (e.g., the surrounding tidal field). Moreover, the bias may be established in the initial conditions (Lagrangian bias) or, alternatively, at the present time (Eulerian bias). In the former case, the bias between the tracers and the mass will be affected by the subsequent, nonlinear gravitational evolution. This will introduce additional nonlinearity, scale dependence and stochasticity. Furthermore, if the velocities of the tracers differ from those of the mass elements, then this will complicate the application of the continuity equation to describe the redshift evolution of bias.\n\nCurrent analytic approaches to galaxy and dark matter halo clustering take into account some of these complications. In most models, the fundamental quantity is the overdensity of tracers within a sphere of radius centered at position . It is commonly assumed that is solely a function of the local mass overdensity (1993ApJ…413..447F, ; 1988ApJ…333…21S, ) (see also 2009JCAP…08..020M, ), whose Taylor expansion coefficients are the galaxy or halo bias parameters (1997MNRAS.284..189M, ; 2001ApJ…546…20S, ; 2010MNRAS.402..589M, ). If this bias is established at a different time than the epoch at which the tracers are observed, then this local bias scheme is combined with some (Eulerian or Lagrangian) perturbative treatment of gravitational instability (see 2002PhR…367….1B, , for a review of perturbation theory) to predict the galaxy or halo power spectrum, bispectrum etc. (e.g., 1998MNRAS.301..797H, ; 1998MNRAS.297..692C, ; 1998MNRAS.298.1097P, ; 2000ApJ…537…37T, ; 2000ApJ…530…36B, ; 2001ApJ…546…20S, ; 2006PhRvD..74j3512M, ; 2007PhRvD..75f3512S, ; 2007PhRvD..76h3004S, ; 2007PASJ…59…93N, ; 2008PhRvD..78h3519M, ; 2009ApJ…691..569J, ; 2009PhRvD..80h3528S, ; 2010arXiv1009.1131V, ). This formalism can be extended to include stochasticity by formulating the biasing in terms of the conditional probability distribution of at a given (e.g., 1998ApJ…504..601P, ; 1999ApJ…520…24D, ; 1999ApJ…525..543M, ; 2000ApJ…537…37T, ; 2000ApJ…542..559T, ). One of the main caveats with such local biasing schemes is that galaxies (or halos) are treated as though they define a continuous field smoothed on some scale , whereas they are, in fact, discrete objects.\n\nThe peaks approach to galaxy and dark matter halo clustering is interesting because it exhibits all of the complications mentioned above while also accounting for the discrete nature of the tracers (after all, peaks define a point process). In this model, the fundamental quantity is the set of positions which are local maxima of the density field (from which a peak overabundance in spheres of radius could in principle be derived). Since the evolved density field is highly nonlinear, the peak constraint is generally applied to the initial (Lagrangian) Gaussian density field, with the assumption that the most prominent peaks should be in one-to-one correspondence with luminous galaxies or massive halos in the low redshift Universe (see, e.g., 1988ApJ…327..507F, ; 1993MNRAS.265..689K, ; 2002MNRAS.332..339P, , for numerical studies of this association). Peak abundances, profiles and correlation functions in real and redshift space have been studied in the literature (1985MNRAS.217..805P, ; 1985ApJ…297…16H, ; 1986ApJ…304…15B, ; 1986PhRvL..56.1878O, ; 1987MNRAS.225..777C, ; 1989MNRAS.238..319C, ; 1989MNRAS.238..293L, ; 1990MNRAS.243..133P, ; 1995MNRAS.272..447R, ; 1998ApJ…499..548M, ; 2008PhRvD..78j3503D, ; 2010PhRvD..81b3526D, ). Some of these results have been used to interpret the abundance and clustering of rich clusters (1985ApJ…297..365K, ; 1988lsmu.book..419B, ; 1989ApJ…345….3C, ; 1997MNRAS.284..189M, ; 1998ApJ…509..494C, ; 2001NYASA.927….1S, ), constrain the power spectrum of mass fluctuations (1998ApJ…495..554C, ; 2010MNRAS.401.1989D, ), and study evolution bias (2008MNRAS.385L..78P, ) and assembly bias (2008ApJ…687…12D, ).\n\nOn asymptotically large scales, peaks are linearly biased tracers of the mass density field, and this bias is scale independent (1984ApJ…284L…9K, ; 1984ApJ…285L…1P, ; 1986ApJ…304…15B, ; 1997MNRAS.284..189M, ). However, these conclusions are based on a configuration space argument known as the peak background split – which establishes a relation between the sensitivity of the peak bias factors and the peak abundances on the peak height – whereas a Fourier space analysis suggests that the linear bias factor of peaks is the sum of two terms, one of which is -dependent (1999ApJ…525..543M, ; 2008PhRvD..78j3503D, ). In configuration space, this leads to scale dependence of the bias and stochasticity. The -dependence of the linear peak bias arises from the peak constraint, i.e. the fact that one must specify not only the value of the mass density field but also its first two derivatives to define a peak. Therefore, this is a model in which the bias depends on quantities other than the local density. Moreover, as mentioned above, the peak biasing is applied to the initial Gaussian density field so that the late time peak bias is modified by nonlinear evolution and associated stochasticity. In this regards, peaks exhibit a nontrivial velocity bias (2010PhRvD..81b3526D, ), which further complicates the nonlinear evolution.\n\nIn the peak model, both the constant and the -dependent piece of the linear bias factor depend on peak height. As shown in (2010PhRvD..81b3526D, ), the scale-independent contribution can be derived from the peak background split argument. In the first half of this paper, we demonstrate that the Fourier space approach also predicts constant and -dependent contributions to the second and higher order peak bias factors. We then show that the scale-independent parts of all these nonlinear peak bias factors can be also derived from the peak background split argument, thus generalizing the result of (2010PhRvD..81b3526D, ). We go on to show how the peak background split approach can be used to determine the scale-dependent part of the peak bias factors, first at linear order, and then for all nonlinear orders as well. This is particularly interesting because it illustrates how the peak background split argument should be implemented if the abundance of the biased tracers (in this case, peaks) depends on quantities other than the local mass overdensity (in this case, the first and second derivatives of the mass density field).\n\nAs recognized in 2008PhRvD..78j3503D , the -dependence of the first order peak bias strongly amplifies the contrast of the baryon acoustic oscillation (or BAO. see 2009arXiv0910.5224B, , and references therein) in the correlation of initial density maxima. However, this calculation was performed for peaks identified in the initial conditions, so there was no clear connection with the clustering of dark matter halos and galaxies. This is also true of all the results presented in the first half of this paper. To remedy this problem, we show in the second half how the effects of the (nonlinear, nonlocal) gravitational evolution of density peaks can be incorporated in the peak model. This allows us to ascertain the extent to which the initial scale dependence of bias across the BAO survives at late times. Our analysis incorporates two main complications that are usually ignored in local bias schemes. Namely, peak biasing depends on more than just the value of the local density, and peaks exhibit a velocity bias which (in addition to merging) complicates analyses based on the continuity equation. Finally, we show that taking into account these effects is of more than academic interest: Our peaks model provides a very good description of the scale dependence of the bias of massive halos in numerical simulations – halos that are expected to host the luminous red galaxies (LRGs) which are often targeted in BAO experiments.\n\nThe paper is organized as follows. Section §II briefly reviews known results and introduce some useful definitions. Section §III focuses on the correlation of initial density peaks of a Gaussian random field. It is shown that the scale-dependent and scale-independent parts of the peak bias parameters can be derived from a peak-background split argument. Section §IV considers the gravitational evolution of the peak correlation function in the Zel’dovich approximation. It is shown that, in addition to gravity mode-coupling, the Lagrangian peak biasing can generate a significant scale-dependent bias across the baryonic acoustic feature at the collapse epoch. Measurements of bias at BAO scales from the clustering of massive halos are also presented and compared with the model. Section §V summarizes our results. Technical details of the calculation can be found in Appendix §A and B.\n\n## Ii Definitions, notations and known results\n\nWe begin by introducing some definitions and reviewing known results about the clustering of density peaks in Gaussian random fields. Next, we derive the peak correlation at second order. This result will serve as input to the calculation of the evolved correlation of density peaks.\n\n### ii.1 Spectral moments\n\nThe statistical properties of density peaks depend not only on the underlying density field, but also on its first and second derivatives. We are, therefore, interested in the linear (Gaussian) density field and its first and second derivatives, and . In this regard, it is convenient to introduce the normalized variables , and , where the are the spectral moments of the matter power spectrum,\n\n σ2n(RS,z0)≡12π2∫∞0dkk2(n+1)Pδ(k,z0)W(kRS)2. (1)\n\ndenotes the dimensionless power spectrum of the linear density field at redshift , and is a spherically symmetric smoothing kernel of length introduced to ensure convergence of all spectral moments. A Gaussian filter will be adopted throughout this paper. We will use the notation to denote . The ratio is proportional to the typical separation between zero-crossings of the density field (1986ApJ…304…15B, ). For subsequent use, we also define the spectral parameters\n\n γn(RS)=σ2nσn−1σn+1 (2)\n\nwhich reflect the range over which is large. We will also work with the scaled velocities and with the curvature . Here, is the -th component of the (proper) peculiar velocity, , is the logarithmic derivative of the linear theory growth rate and is the Laplacian. Note that has dimensions of length.\n\nThe analogous quantities to at non-zero separation are defined as follows:\n\n ξ(n)ℓ(RS,r,z0)=12π2∫∞0dkk2(n+1)PδS(k,z0)jℓ(kr), (3)\n\nwhere are spherical Bessel functions. As gets larger, these harmonic transforms become increasingly sensitive to small scale power. The auto- and cross-correlations of the fields , , , and can generally be decomposed into components with definite transformation properties under rotations. 2008PhRvD..78j3503D gives explicit expressions for the isotropic and homogeneous linear density field.\n\n### ii.2 Peak biasing and 2-point correlation function at the first order\n\nAlthough density peaks form a well-behaved point process, the large scale asympotics of the two-point correlation and line of sight mean streaming of peaks of height and curvature identified on a scale in the initial Gaussian density field linearly extrapolated at redshift can be thought of as arising from the continuous, deterministic bias relation (2008PhRvD..78j3503D, ; 2010PhRvD..81b3526D, )\n\n δpk(ν,u,RS,x) =bνδS(x,z0)−bζ∂2δS(x,z0), (4) vpk(RS,x,z0) =vS(x,z0)−σ20σ21∂δS(x,z0), (5)\n\nwhich is nonlocal owing to the smoothing of the mass distribution. Here, and are the average peak overdensity and velocity, and are the mass density and velocity smoothed at scale (so as to retain only the large scale, coherent motion of the peak), is the Laplacian, and the bias parameters and are\n\n bν(ν,u,RS,z0)≡1σ0(RS,z0)(ν−γ1u1−γ21),bζ(ν,u,RS,z0)≡1σ2(RS,z0)(u−γ1ν1−γ21). (6)\n\nThe bias coefficient is dimensionless, whereas has units of (length). In fact, is precisely the amplification factor found by 1986ApJ…304…15B who neglected derivatives of the density correlation function (i.e. their analysis assumes ). Unlike , does not depend on (as expected) because the redshift dependence of , cancels the factor coming from . Note also that, if the effective peak density can be less than -1 in deep voids. However, this is not a problem because is not an observable quantity (this is not a count-in-cell density).\n\nIn what follows, we will focus on the clustering of initial density peaks of significance , for which the first order bias parameters are\n\n ¯bν(ν,RS,z0) ≡1σ0(RS,z0)(ν−γ1¯u1−γ21) (7) ¯bζ(ν,RS,z0) ≡1σ2(RS,z0)(¯u−γ1ν1−γ21). (8)\n\nHere, the overline denotes the averaging over the peak curvature, so that is the mean curvature of peaks of height on filtering scale . It is convenient to define the quantity as the Fourier space multiplication by\n\n bspk(k,z0)=bν+bζk2, (9)\n\nwhere we have omitted the explicit dependence on , and for brevity. Although has the same functional form as Eq. (57) of 1999ApJ…525..543M , this author approximated density peaks by density extrema. Therefore, our coefficients agree with his expressions only in the limit , in which and . As we will see shortly, the product of factors can be used to define spatial bias parameters at all orders. For peaks of significance , the first order biasing is equivalent to the Fourier space multiplication by , i.e.\n\n ~bI(k,z0)≡~b10+~b01k2where~b10≡¯bν, ~b01≡¯bζ. (10)\n\nWe emphasize that this result is exact: there are no higher powers such as , etc. In , and count the number of factors of and , respectively (our notation should not be confounded with that of 2010PhRvD..81f3530G, ). In Sec.§III.2, we will demonstrate that the are the bias parameters in the local bias model. Eq.(10) defines the first order bias for peaks of height . Notice that, in real space, is a differential operator acting on fields and correlation functions. Hence, the first order average peak overabundance can also be rewritten .\n\nUsing the peak bias (9), it is straightforward to show that the real space cross- and auto-power spectrum are\n\n P(1)pk,δ(ν,RS,k,z0) =~bI(k,z0)Pδ(k,z0)W(kRS) (11) P(1)pk(ν,RS,k) =~b2I(k,z0)Pδ(k,z0)W2(kRS). (12)\n\nThe corresponding relations for the correlation functions are\n\n ξ(1)pk,δ(ν,RS,r,z0) =(~bIξ(0)0)=~b10ξ(0)0(RS,r,z0)+~b01ξ(1)0(RS,r,z0) (13) ξ(1)pk(ν,RS,r) =(~b2Iξ(0)0)=~b210ξ(0)0(RS,r,z0)+2~b10~b01ξ(1)0(RS,r,z0)+~b201ξ(2)0(RS,r,z0). (14)\n\nNote that the cross-correlations with the linear density field depend explicitly on . As shown in 2008PhRvD..78j3503D ; 2010PhRvD..81b3526D , these expressions agree with those obtained from a rather lengthy derivation based on the peak constraint, which involves joint probability distributions of the density field and its derivatives. It is worth noticing that, while expressions (12) and (14) are only valid at leading order, the cross-correlation functions (11) and (13) are exact to all orders.\n\nWe emphasize that the biasing (5) is a mean bias relation that does not contain any information about stochasticity. Due to the discrete nature of density peaks however, one can expect that the average peak overabundance in a cell centered at generally be a random function of the underlying matter density (and its derivatives) in some neighborhood of that point. In fact, while the bias is deterministic in Fourier space, it is generally stochastic and scale-dependent in configuration space (2010PhRvD..81b3526D, ).\n\n### ii.3 Velocities\n\nIn what follows, we will be interested in the gravitational evolution of the correlation of initial density peaks for which the velocity field also matters. As can be seen from, e.g., the average bias relation (5), peaks locally move with the dark matter (since the gradient of the density vanishes at the position of a peak). However, the three-dimensional velocity dispersion of peaks, , is smaller than the mass velocity dispersion (1986ApJ…304…15B, ; 1987AcPhH..62..263S, ),\n\n σ2vpk=σ2−1(1−γ20), (15)\n\nbecause large scale flows are more likely to be directed towards peaks than to be oriented randomly. As recognized in 2010PhRvD..81b3526D , the -dependence of the first order peak bias leads to a -dependence of the peak velocity statistics even though the peaks move with the dark matter flows. Taking the divergence of the peak velocity Eq. (5) and Fourier transforming, we find\n\n θpk(RS,k,z0)=(1−σ20σ21k2)W(kRS)θ(k,z0)≡bvpk(k)θS(k,z0), (16)\n\nwhere is the velocity divergence. This defines the linear velocity bias factor for peaks of significance and curvature . Note that it does not depend on , nor on redshift and, for the highest peaks, it remains scale dependent even though the spatial bias has no -dependence. Nonetheless, for notational consistency, we define\n\n ~bvpk(k)≡¯bvpk(k)=bvpk(k) (17)\n\nas being the velocity bias of peaks of height .\n\n### ii.4 Smoothing scale and peak height\n\nTo illustrate the key predictions of the peak formalism, we will present results for the two-point correlation of density peaks in a CDM cosmology with , , , and normalization consistent with the latest constraints from the CMB (2010arXiv1001.4538K, ). The sound horizon at recombination is .\n\nThe peak height and the filtering radius could in principle be treated as two independent variables. However, in order to make as much connection with dark matter halos (and, to a lesser extent, galaxies) as possible, we assume that density maxima with height identified in the smoothed density field linearly extrapolated at are related to dark matter halos of mass collapsing at redshift , where is the critical density for collapse in the spherical model (1972ApJ…176….1G, ; 1974ApJ…187..425P, ) and we use a Gaussian filter to relate smoothing scale to mass. A more realistic treatment should include non-spherical collapse since the maxima of a Gaussian density field are inherently triaxial (1970Afz…..6..581D, ; 1996ApJS..103….1B, ; 2001MNRAS.323….1S, ; 2008MNRAS.388..638D, ). In the background cosmology we assume, the linear critical density for spherical collapse at is . The Gaussian smoothing scale at which is , which corresponds to a characteristic mass scale .\n\nWhile there is a direct correspondence between massive halos in the evolved density field and the largest maxima of the initial density field, the extent to which galaxy-sized halos trace the initial density maxima is unclear. Therefore, we will only consider mass scales significantly larger than the characteristic mass for clustering, , for which the peak model is expected to work best. For sake of illustration, we will present results for (2) and (3) density peaks. At redshift , this corresponds to a filtering length and or, equivalently, a mass scale and . To help set scales in the discussion which follows, the associated values of are and , respectively (note that bias factors here are Lagrangian ones). The three-dimensional velocity dispersion of these peaks is : for our two smoothing scales, this corresponds to and (recall that our velocities are in units of at , so dispersions have dimensions of (length)).\n\n## Iii Correlation of initial density peaks at second order\n\n### iii.1 The general formula\n\nCorrelations of density maxima can be evaluated using the Kac-Rice formula (Kac1943, ; Rice1945, ). In this approach, is Taylor-expanded around the position of a local maximum. The number density of peaks of height at position in the smoothed density field reads as (we drop the subscript in the right-hand side for notational convenience)\n\n npk(ν′,RS,x)≡33/2R31|detζ(x)|δ(3)[η(x)]θ[λ3(x)]δ[ν(x)−ν′] (18)\n\nwhere\n\n R1≡√3σ1σ2 (19)\n\nis the characteristic radius of a peak. Note that Eq. (18) is independent of the redshift for peaks at fixed . The three-dimensional Dirac delta ensures that all extrema are included. The product of the theta function , where is the lowest eigenvalue of the shear tensor , and the Dirac delta further restrict the set to density maxima with specific height . The 2-point correlation function for maxima of a given significance separated by a distance thus is\n\n 1+ξpk(ν,RS,r)=⟨npk(ν,RS,x1)npk(ν,RS,x2)⟩¯n2pk(ν,RS), (20)\n\nwhere is the differential average number density of peaks of height on filtering scale (1986ApJ…304…15B, ),\n\n ¯npk(ν,RS)=1(2π)2R31e−ν2/2G(1)0(γ1,γ1ν). (21)\n\nNote that it does not depend on (or, equivalently, on the amplitude of density fluctuations) at fixed . The function is defined in Eq.(143). For the 2 and 3 density peaks considered here, the mean abundance is and , respectively. While the calculation of Eq.(20) at first order in the mass correlation and its derivatives is rather straightforward (2008PhRvD..78j3503D, ) (this is Eq.14), at second order it is quite involved. The main steps are detailed in Appendix A. Fortunately, most of the terms nicely combine together, and the final result can be recast into the compact form\n\n ξpk(ν,RS,r) (22)\n\nIn the right-hand side of Eq.(III.1), all the correlations are function of , and . More precisely, the first line contains terms involving first and second order peak bias parameters and , the second line has a -dependence through the function (which is displayed in Fig.9), and the last two terms depend on the separation (and ) only. Note that this expression exhibits not only terms quadratic in bias parameters but, unlike standard local bias (Eulerian or Lagrangian), also terms linear in them. These terms involve derivatives of the linear mass correlation that vanish at zero lag. They arise because the peak correlation depends also on the statistical properties of and .",
null,
"Figure 1: Left panel : Lagrangian bias coefficients characterizing the second order peak bias ~bII(q1,q2), Eqs. (24) – (26), as a function of peak height for a filtering radius RS=2.9 h−1Mpc or, equivalently, a mass scale MS=3×1013 M⊙/h. The shape parameter is γ1≈0.65. For the 2σ peaks considered in subsequent illustrations, ~b20 is negative, ~b20≈−1.2. Right panel : The second and fourth root ~b1/211 and ~b1/402 define a characteristic scale below which the scale dependence of ~bII is large. In the limit ν→∞, ~b02 becomes negative and converges towards −σ−22(1−γ21)−1, whereas ~b11 asymptotes to the constant (γ1/σ1)2(1−γ21)−1. Note that, in contrast to ~b1/211 and ~b1/402 that have units of length, ~b20 is dimensionless.\n\nIn analogy with , the action of the second order peak bias is defined as the Fourier space multiplication by\n\n (23)\n\nwhere and are wavemodes and the coefficients , and describing the peak bias at second order are\n\n ~b20(ν,RS,z0) ≡¯bνν−1σ20(1−γ21)=1σ20⎡⎣ν2−2γ1ν¯u+γ21¯¯¯¯¯u2(1−γ21)2−1(1−γ21)⎤⎦ (24) ~b11(ν,RS,z0) (25) ~b02(ν,RS,z0) ≡¯bζζ−1σ22(1−γ21)=1σ22⎡⎣¯¯¯¯¯u2−2γ1ν¯u+γ21ν2(1−γ21)2−1(1−γ21)⎤⎦. (26)\n\nSee Fig. 1 for the relative size of these contributions as a function of peak height. As shown in Appendix §A, , and arise upon averaging the products , and over the peak curvature. In this respect, is the th moment of the peak curvature at a given significance . For sake of completeness, acts on the functions and according to\n\n (ξ(n1)ℓ1~b2IIξ(n2)ℓ2)≡14π4∫∞0dq1∫∞0dq2q2(n1+1)1q2(n2+1)2~b2II(q1,q2,z0)PδS(q1,z0)PδS(q2,z0)jℓ1(q1r)jℓ2(q2r). (27)\n\nWhen , the real space counterpart of is readily obtained by making the replacement (which reflects the fact that is a solution to the Helmholtz equation ). Eq.(III.1) is the main result of this Section. We note that 1995MNRAS.272..447R also computed second order corrections to the peak correlation for which, however, they did not provide any explicit expression.\n\nBefore illustrating the impact of the second order terms on the correlation of initial density maxima, we remark that, although the calculation of the peak correlation at third order is very involved, the contribution proportional to can be derived relatively easily. We find\n\n (28)\n\nwhere the third order coefficient is defined as\n\n ¯bννν(ν,RS,z0)=ν3−3γ1ν2¯u+3γ21ν¯¯¯¯¯u2−γ31¯¯¯¯¯u3σ30(1−γ21)3. (29)\n\nThus, up to third order, the peak correlation may be cast into the form\n\nwhere the missing terms, while of the same order as the ones we display, have a more complicated structure (see Eq.(III.1) for second order contributions). In the limit , the scale-independent pieces , and to the bias asymptote to the values , and obtained in the high level excursion set approximation (1984ApJ…284L…9K, ). We will see shortly that these bias factors are indeed equal to the peak-background split biases derived from the average peak abundance Eq.(21).\n\nIn Fig.1, the second order Lagrangian biases , and are shown as a function of the peak height for the mass scale (left panel). The second and fourth root and , respectively, define a characteristic comoving scale below which the corresponding scale-dependent terms and are large. In the limit , the scale-independent piece increasingly dominates whereas and tend towards the constant value and . For a realistic threshold height however, the scale-dependent contributions cannot be neglected since and/or are typically much larger than . Although the exact value of the second order biases somewhat changes with the mass scale , their overall behavior varies little over the range as weakly depends on . Therefore, our conclusions hold regardless the exact amount of smoothing. It should also be noted that, if one wishes to associate these bias factors to halos of mass , then the variation with is in fact a variation with redshift.\n\nThe correlation is shown in Fig.2 for the 2 and 3 initial density peaks collapsing at redshift . The solid (green) curve represents the first order term (Eq.(14)) while the long-dashed-dotted curve is the full second order correlation (Eq.(III.1). We have also plotted the second order contributions quadratic in , linear in and independent of separately. They are shown as the short-dashed-dotted, short-dashed and long-dashed curve, respectively. Notice that and for the low and high threshold, respectively. For the 3 peaks, the term linear in is negative over the range of distances considered and, thus, appears as a dotted line. In fact, the piece linear in is the only negative contribution at small separations but it vanishes at zero lag. Since the true peak correlation rapidly converges to -1 for (as shown by 1989MNRAS.238..293L , the small scale behavior of is dominated by an exponential term ), small scale exclusion should manifest itself in higher-order terms, but it is beyond the scope of this paper to calculate them. We can also observe that the correlation of 2 peaks is negative on scales . However, this is likely an artifact of truncating the expansion at second order in the correlations .\n\nFigure 3 focuses on the baryon acoustic oscillation. At separation , second order corrections are negligibly small, so that is given by Eq.(14) with an accuracy better than 1%. For comparison, we also plot the first order correlation arising in a local biasing scheme with same value of . It is important to note that is the correlation of the unsmoothed, linear mass density field (in practice we use ). It is quite remarkable how the “sombrero”-shaped terms and restore the contrast of the baryonic feature otherwise smeared out by the large filtering (Recall that and 5.3 for the 2 and 3 peaks, respectively). The BAO in the peak correlations is even sharpened relative to the BAO in . A thorough discussion of this effect can be found in 2008PhRvD..78j3503D . In §IV, we will see that, although most of the initial enhancement of the BAO contrast is smeared out by the gravitational motions of the peaks, some of it survives at the epoch of collapse.",
null,
"Figure 2: The correlation of initial density peaks at the second order is shown as the long dashed-dotted (magenta) curve for 2σ (left panel) and 3σ (right panel) density peaks collapsing at redshift z0=0.3 according to the spherical collapse prescription. For the Gaussian filter used in this paper, this corresponds to a mass scale MS=3×1013 and 2×1014 M⊙/h, respectively. The individual contributions appearing in Eq. (III.1) are shown separately. Namely, the solid (cyan) curve is the first order contribution ~b2Iξ(0)0, whereas the second order term quadratic in ~bII, linear in ~bII and independent of ~bII are shown as the short dashed-dotted, short-dashed and long-dashed curve, respectively. A dotted line indicates negative values.\n\n### iii.2 A peak-background split derivation of the peak bias factors\n\nThe peak-background split (1986ApJ…304…15B, ; 1989MNRAS.237.1127C, ; 1996MNRAS.282..347M, ; 1999MNRAS.308..119S, ) is a heuristic argument that furnishes another way to derive the large scale bias of density peaks. This approach is quite different from ours because it is based on number counts in configuration space and, thus, does not make reference to the bias in Fourier space.\n\nThere are two ways in which the peak-background split is implemented. In the first (1986ApJ…304…15B, ; 1989MNRAS.237.1127C, ; 1997MNRAS.284..189M, ), the -th order bias parameter is related to the -th order derivative of the differential number density of virialized objects according to\n\n bN(ν,z0)≡(−1σ0(z0))N¯n(ν)−1∂N[¯n(ν)]∂νN, (31)\n\nwith the important caveat that the mass function is universal (i.e. it depends on solely). For density peaks, on setting and performing the derivatives with respect to at fixed smoothing radius , one obtains (1997MNRAS.284..189M, )\n\n bN(ν,RS,z0)≡(−1σ0(RS,z0))N¯npk(ν,RS)−1∂N[¯npk(ν,RS)]∂νN. (32)\n\nAs noted by 2010PhRvD..81b3526D , the first order peak-background split bias is\n\n bI(ν,RS,z0)=~b10(ν,RS,z0). (33)\n\nThis means that the large scale, constant and deterministic bias factor returned by the peak-background split argument is exactly the same as in our approach, when we are on large enough scales that the -dependence associated with the term can be ignored. It turns out that Eq.(33) generalizes as follows: higher order derivatives of the peak number density (21) with respect to (which are reported in 1997MNRAS.284..189M, ) result in the large scale, -independent peak bias coefficients\n\n bII(ν,RS,z0)=~b20(ν,RS,z0),bIII(ν,RS,z0)=~b30(ν,RS,z0),etc.. (34)\n\nHowever, derivatives of Eq.(21) cannot produce the -dependent bias terms like , etc., which arise owing to the constraints imposed by derivatives of the mass density field.\n\nTherefore, we will now consider the second implementation of the peak background split (1996MNRAS.282..347M, ; 1999MNRAS.308..119S, ) in which the dependence of the mass function on the overdensity of the background is derived explicitly. The ratio of this conditional mass function to the universal one is then expanded in powers of the background density. The bias factors are the coefficients of this expansion. We will demonstrate below that this is the correct approach to recover the scale or -dependence of the peak bias parameters.\n\nThe key quantity is the average number density of peaks identified on scale as a function of the overdensity defined on another smoothing scale (we use the subscript because we are mainly be interested in the regime in which the scale of the background satisfies ). This conditional peak number density is\n\n ¯npk(ν,RS|δB,RB)=G(0)0(~γ1,~γ1~ν)(2π)3/2R31exp[−(ν−ϵνB)2/2(1−ϵ2)]√2π(1−ϵ2), (35)\n\nwhere\n\n νB≡δBσ0B,⟨ννB⟩≡ϵ=σ20×σ0Sσ0B,⟨uνB⟩≡γ1ϵr,r≡⟨k2⟩×⟨k2⟩S=σ21×/σ21Sσ20×/σ20S, (36) ⟨u|ν,νB⟩≡~γ1~ν=γ1ν(1−ϵ2r1−ϵ2)−γ1(1−r1−ϵ2)ϵνB,Var(u|ν,νB)≡1−~γ21≡1−γ21[1+ϵ2(1−r)21−ϵ2], (37)\n\nwith , , and we have defined\n\n σ2n×≡12π2∫∞0dkk2(n+1)Pδ(k)W(kRS)W(kRB) (38)\n\n(see equation E5 of 1986ApJ…304…15B, ). Here, the denotes the splitting of smoothing scales, i.e. one filter is of size , the other of size . We have deliberately written as an average of to emphasize that we naively expect it to give rise to the dependence of peak bias. This will eventually be proven correct. In addition, note that . In what follows, it will be convenient to define . When , this ratio is of order unity (there is a form factor that depends on the shape of the smoothing filter).\n\nNotice that the integral of Eq. (35) over all gives the unconditional number density of Eq. (21). The peak background split expands the ratio in powers of . This ratio is then interpreted as representing the average overabundance of peaks in regions which have mass overdensity although, strictly speaking, it is a statement about cells of overdensity that have a peak at their center. Therefore, it is not a statement about randomly placed cells, even though, as we discuss below, it is often treated as such.\n\nIf we set and then the coefficient of the term of order gives , that of order gives gives , etc. Note that in this limit,"
] | [
null,
"https://media.arxiv-vanity.com/render-output/3618529/x1.png",
null,
"https://media.arxiv-vanity.com/render-output/3618529/x3.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9003566,"math_prob":0.97735274,"size":33793,"snap":"2021-43-2021-49","text_gpt3_token_len":7956,"char_repetition_ratio":0.16428423,"word_repetition_ratio":0.021264477,"special_character_ratio":0.24800995,"punctuation_ratio":0.16120516,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.982495,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-19T18:34:53Z\",\"WARC-Record-ID\":\"<urn:uuid:8780128e-f74f-4870-8978-839a69145ef6>\",\"Content-Length\":\"1049554\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a1196554-522d-4e9f-86e0-36ab6f6e8b45>\",\"WARC-Concurrent-To\":\"<urn:uuid:38e9aa66-1b32-4def-93ad-f9a390583c8a>\",\"WARC-IP-Address\":\"104.21.14.110\",\"WARC-Target-URI\":\"https://www.arxiv-vanity.com/papers/1009.3449/\",\"WARC-Payload-Digest\":\"sha1:3LHR2MWSK7JYBD6ZEJFTC2G3IYLPJJUC\",\"WARC-Block-Digest\":\"sha1:4RSPQFDQIVCT6WFOWD5BSPVJCZVESAYR\",\"WARC-Truncated\":\"length\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585280.84_warc_CC-MAIN-20211019171139-20211019201139-00394.warc.gz\"}"} |
https://pgxn.org/dist/hyperloglog_estimator/1.2.6/ | [
"# hyperloglog_estimator 1.2.6\n\nThis Release\nhyperloglog_estimator 1.2.6\nDate\nStatus\nTesting\nOther Releases\nAbstract\nEstimates number of distinct elements in a data set (aggregate and a data type).\nDescription\nProvides an alternative to COUNT(DISTINCT) aggregate, computing an estimate of number of distinct values, and a data type that may be used within a table (and updated continuously). This implementation is based on HyperLogLog algorithm, an enhancement of LogLog (see the paper 'HyperLogLog: the analysis of near-optimal cardinality estimation algorithm' by Flajolet, Fusy, Gandouet and Meunier, published in 2007).\nReleased By\nLicense\nThe (three-clause) BSD License\nResources\nSpecial Files\nTags\n\n### Extensions\n\nhyperloglog_counter 1.2.6\n\n# HyperLogLog Estimator\n\nThis is an implementation of HyperLogLog algorithm as described in the paper \"HyperLogLog: the analysis of near-optimal cardinality estimation algorithm\", published by Flajolet, Fusy, Gandouet and Meunier in 2007. Generally it is an improved version of LogLog algorithm with the last step modified, to combine the parts using harmonic means.\n\nThis is not the only (or first) PostgreSQL extension implementing the HyperLogLog estimator - since 2013/02 there's postgresql-hll It's a nice mature extension, so you may try it. I plan to write some article comparing the pros/cons of the two implementations eventually.\n\n## Contents of the extension\n\nThe extension provides the following elements\n\n• hyperloglog_estimator data type (may be used for columns, in PL/pgSQL)\n\n• functions to work with the hyperloglog_estimator data type\n\n• `hyperloglog_size(error_rate real)`\n• `hyperloglog_init(error_rate real)`\n\n• `hyperloglog_add_item(counter hyperloglog_estimator, item anyelement)`\n\n• `hyperloglog_get_estimate(counter hyperloglog)`\n\n• `hyperloglog_reset(counter hyperloglog)`\n\n• `length(counter hyperloglog_estimator)`\n\nThe purpose of the functions is quite obvious from the names, alternatively consult the SQL script for more details.\n\n• aggregate functions\n\n• `hyperloglog_distinct(anyelement, real)`\n• `hyperloglog_distinct(anyelement)`\n\nwhere the 1-parameter version uses default error rate 2%. That's quite generous and it may result in unnecessarily large estimators, so if you can work with lower precision, supply your error rate.\n\n## Usage\n\nUsing the aggregate is quite straightforward - just use it like a regular aggregate function\n\n``````db=# SELECT hyperloglog_distinct(i, 0.01)\nFROM generate_series(1,100000) s(i);\n``````\n\nand you can use it from a PL/pgSQL (or another PL) like this:\n\n``````DO LANGUAGE plpgsql \\$\\$\nDECLARE\nv_counter hyperloglog_estimator := hyperloglog_init(32, 0.025);\nv_estimate real;\nBEGIN\nPERFORM hyperloglog_add_item(v_counter, 1);\nPERFORM hyperloglog_add_item(v_counter, 2);\nPERFORM hyperloglog_add_item(v_counter, 3);\n\nSELECT hyperloglog_get_estimate(v_counter) INTO v_estimate;\n\nRAISE NOTICE 'estimate = %',v_estimate;\nEND\\$\\$;\n``````\n\nAnd this can be done from a trigger (updating an estimate stored in a table).\n\n## Problems\n\nBe careful about the implementation, as the estimators may easily occupy several kilobytes (depends on the precision etc.). Keep in mind that the PostgreSQL MVCC works so that it creates a copy of the row on update, an that may easily lead to bloat. So group the updates or something like that.\n\nThis is of course made worse by using unnecessarily large estimators, so always tune the estimator to use the lowest amount of memory."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7097838,"math_prob":0.6163196,"size":2664,"snap":"2019-13-2019-22","text_gpt3_token_len":610,"char_repetition_ratio":0.16954887,"word_repetition_ratio":0.0,"special_character_ratio":0.2042042,"punctuation_ratio":0.11655012,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9773251,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-05-22T05:17:32Z\",\"WARC-Record-ID\":\"<urn:uuid:f243adc1-8db9-48f6-aa9d-b479d50cb906>\",\"Content-Length\":\"12203\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9624bd49-c5f6-420e-bc9e-8efea82f9234>\",\"WARC-Concurrent-To\":\"<urn:uuid:2f9569f0-5c29-4462-96dc-bacb9317cea3>\",\"WARC-IP-Address\":\"88.198.49.178\",\"WARC-Target-URI\":\"https://pgxn.org/dist/hyperloglog_estimator/1.2.6/\",\"WARC-Payload-Digest\":\"sha1:A62IR5GC4HZPN57JU52NILF57KQYVPW3\",\"WARC-Block-Digest\":\"sha1:WUVRNYREFNYVFJYGVA42CFJQRR255TCH\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-22/CC-MAIN-2019-22_segments_1558232256763.42_warc_CC-MAIN-20190522043027-20190522065027-00209.warc.gz\"}"} |
https://numberworld.info/202331120031131 | [
"# Number 202331120031131\n\n### Properties of number 202331120031131\n\nCross Sum:\nFactorization:\nDivisors:\nCount of divisors:\nSum of divisors:\nPrime number?\nNo\nFibonacci number?\nNo\nBell Number?\nNo\nCatalan Number?\nNo\nBase 3 (Ternary):\nBase 4 (Quaternary):\nBase 5 (Quintal):\nBase 8 (Octal):\nb804e289559b\nBase 32:\n5o0jh8ilcr\nsin(202331120031131)\n0.58137376003262\ncos(202331120031131)\n0.81363662106959\ntan(202331120031131)\n0.71453735608454\nln(202331120031131)\n32.940926679369\nlg(202331120031131)\n14.30606268563\nsqrt(202331120031131)\n14224314.39582\nSquare(202331120031131)\n4.0937882133052E+28\n\n### Number Look Up\n\n202331120031131 which is pronounced (two hundred two trillion three hundred thirty-one billion one hundred twenty million thirty-one thousand one hundred thirty-one) is a great figure. The cross sum of 202331120031131 is 23. If you factorisate the figure 202331120031131 you will get these result 7 * 13 * 2223418901441. The number 202331120031131 has 8 divisors ( 1, 7, 13, 91, 2223418901441, 15563932310087, 28904445718733, 202331120031131 ) whith a sum of 249022916961504. 202331120031131 is not a prime number. 202331120031131 is not a fibonacci number. 202331120031131 is not a Bell Number. 202331120031131 is not a Catalan Number. The convertion of 202331120031131 to base 2 (Binary) is 101110000000010011100010100010010101010110011011. The convertion of 202331120031131 to base 3 (Ternary) is 222112101122222122020022112012. The convertion of 202331120031131 to base 4 (Quaternary) is 232000103202202111112123. The convertion of 202331120031131 to base 5 (Quintal) is 203004443113211444011. The convertion of 202331120031131 to base 8 (Octal) is 5600234242252633. The convertion of 202331120031131 to base 16 (Hexadecimal) is b804e289559b. The convertion of 202331120031131 to base 32 is 5o0jh8ilcr. The sine of the figure 202331120031131 is 0.58137376003262. The cosine of the figure 202331120031131 is 0.81363662106959. The tangent of 202331120031131 is 0.71453735608454. The root of 202331120031131 is 14224314.39582.\nIf you square 202331120031131 you will get the following result 4.0937882133052E+28. The natural logarithm of 202331120031131 is 32.940926679369 and the decimal logarithm is 14.30606268563. that 202331120031131 is amazing number!"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.67249143,"math_prob":0.8194359,"size":2682,"snap":"2023-14-2023-23","text_gpt3_token_len":896,"char_repetition_ratio":0.24197163,"word_repetition_ratio":0.2670623,"special_character_ratio":0.56301266,"punctuation_ratio":0.15081206,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.998414,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-05-31T22:33:51Z\",\"WARC-Record-ID\":\"<urn:uuid:32239798-094f-4176-99f9-b57b95c54934>\",\"Content-Length\":\"14584\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f31a48ed-47b2-424f-aa83-820c46492bef>\",\"WARC-Concurrent-To\":\"<urn:uuid:d65aaebd-4263-4199-a1c3-4ad25d175629>\",\"WARC-IP-Address\":\"176.9.140.13\",\"WARC-Target-URI\":\"https://numberworld.info/202331120031131\",\"WARC-Payload-Digest\":\"sha1:KMNKJGDMCLEBNNLY5MHQNM5H7KLXZQ4D\",\"WARC-Block-Digest\":\"sha1:I7CN2SEABB7HQ5PVQ5NHKFFHVIOPGRKA\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224647459.8_warc_CC-MAIN-20230531214247-20230601004247-00711.warc.gz\"}"} |
https://csegrecorder.com/articles/view/refraction-tomography-a-practical-overview-of-emerging-technologies | [
"Near-surface velocity anomalies produce severe distortions in seismic images. If one knew the detailed structure of these anomalies, the best way to tackle this problem would be to perform wave-equation datuming or depth migration from the surface. However, 3-D prestack depth imaging and datuming are computational challenges and highly sensitive to the near-surface model. Therefore, statics application, assuming surface-consistent ray propagation through the near surface, remains the most common approach to account for near-surface anomalies. For estimating the short-period part of statics (so-called residual statics) analyzing the stacking response of reflection data works well. However, estimating the long-period statics, using reflection data alone, is more problematic. Therefore, solving for the near-surface velocity model from refraction data, followed by calculating predominantly long-period statics, is customary in seismic processing.\n\nRefraction statics have long been implemented using delay-time methods. For example, the generalized reciprocal method has been widely applied to 2-D data. Unfortunately, for 3-D seismic, it is difficult to apply due to the lack of reciprocal data. However, the concept of delay times is useful for 3-D refraction statics calculations by assuming first arrivals to be the onset of head waves propagating along the refracting interfaces of locally flat layers on the scale of the offset range (Figure 1). First-arrival picks are decomposed in a surface-consistent manner into delay times and refracting-layer velocities. These are then converted to layer thicknesses assuming a critical angle of incidence at the refracting layers. The model used in the delay-time method is also used in head-wave tomography, as implemented, for example, in Generalized Linear Inversion (Hampson and Russell, 1984). In this approach, instead of the two-step inversion via delay times, traveltimes are inverted directly for layer thicknesses and velocities after ray tracing of head waves.\n\nHead-wave methods are in general robust because the relationship between the delay times and the observed traveltimes is linear. However, in areas with complex geology and rough terrain the layered model typically employed is often too simple to explain important data features. Furthermore, since head-wave methods do not account for nonlinear moveout of first arrivals, it is often necessary to limit the offset range. This limiting of the offset range contributes to a fundamental velocity/depth ambiguity. To address this issue, it is common practice to specify the weathering velocity prior to depth estimation (Hampson and Russell, 1984).\n\nRecently, diving-wave, or turning-ray, tomography has become a popular alternative to head-wave methods (Zhu et al., 1992; Bell et al., 1994; Osypov, 1998; Zhu et al., 2000). In this approach, the medium is typically parameterized as a number of cells. Turning rays are then traced through the model, and traveltime residuals are inverted for velocity perturbations in every cell crossed by rays. Since this method adds more degrees of freedom to the model, it generally fits the observed first-arrival moveouts better than headwave methods. Specifically, the model for diving waves includes vertical velocity gradients (Figure 2). This accounts for the nonlinear moveout of first arrivals and allows for the solution to incorporate a wider offset range. However, the relationship between the model parameters and the traveltimes becomes nonlinear due to the significant sensitivity of turning-ray paths to the velocity model. The nonlinearity is usually handled by iterating the ray tracing and the model updating using a local linearization. This makes the tomographic results sensitive to the initial model. Since the quality of the initial model depends on the analyst’s expertise, this may cause a bias in the final solution. The above issues are exacerbated when the data quality is poor. In other words, the results of diving-wave tomography are usually more sensitive to pick errors than are the solutions of head-wave methods.",
null,
"Figure 2: Model and rays for diving waves.\n\nA desirable goal for diving-wave tomography is to reformulate it as a linear problem in order to remove the initial-model dependency. Toward this goal, traveltime inversion in the τ-p domain is very attractive, as it possesses an inherently linear formulation. Different forms of τ-p traveltime inversion have long been used in earthquake seismology. However, the practical application to 3-D refraction tomography, until recently, was constrained by a 1-D assumption. Figure 3 illustrates a 1-D inversion producing a unique velocity/depth model from the observed traveltime curve under the assumption of velocity increasing with depth. For the velocity model shown in Figure 3d, the traveltime vs. offset curve is plotted in Figure 3a. Given the observed traveltimes along this curve, there exist two particular approaches for solving the inverse problem to estimate the implied velocity model. One option is to take a derivative of the traveltime curve to calculate apparent slowness as a function of offset x (Figure 3c). The apparent slowness in the 1-D case corresponds to the ray parameter, p. In the x-p domain, one can then apply the Herglotz-Wiechert (H-W) formula (Aki and Richards, 1980) to derive the depths of penetration for diving waves corresponding to the ray parameters for the different offsets. Since the p value corresponds to the physical velocity at the turning point, this transformation provides an estimate of the desired velocity/depth function in Figure 3d.",
null,
"Figure 3: Illustration of the 1-D traveltime inversion. (a) The traveltime curve as a function of offset x; (b) The representation of the traveltime curve in τ-p domain; (c) The representation of the traveltime curve in x-p domain; (d) Velocity V vs. depth z corresponding to the traveltime curve in (a). Velocity/depth model in (d) can be estimated using (b) or (c) representations.\n\nThe other solution of interest for the inverse problem, is to calculate intercept times, τ, for each p (Figure 3b). Then, one can apply another form of the H-W formula in the τ-p domain to estimate the velocity/depth model in Figure 3d. A discrete form of this H-W transformation corresponds to the ray geometry for head waves, yielding a multi-layer delay-time relationship. Numerically, the HW transformation in the x-p domain is more accurate and stable for continuous velocity functions. However, the transformation in the τ-p domain is better for treating velocity discontinuities. This leads us to adopt a hybrid form of the H-W transformation to invert both head and diving waves for a velocity model with both sharp velocity transitions and mild vertical velocity gradients. This transformation may be extended to the case of velocity inversions, but the solution becomes non-unique.\n\nIn order to extend this hybrid approach to 2-D and 3-D, one has to decompose the observed first-arrival traveltimes into an equivalent τ-p representation. I introduced a form of refraction tomography without ray tracing using a local, 1-D H-W transformation applied to the traveltime curves obtained by the decomposition of first arrivals (Osypov, 1999). Later, I modified the decomposition by exploiting the tomographic-like integrals coupled with a 3-D extension of the hybrid H-W approach (Osypov, 2000). This founded the basis for a new τ-p refraction tomography method.\n\nI implement τ-p refraction tomography as a two-step process. First, I decompose the first-arrival picks to a best-fit τ-p representation in 3-D using a linear inversion that does not require any explicit ray tracing, cell parameterization, or initial model. The second step is a separate, local inversion that involves the estimation of the 3-D velocity/depth model from the derived τ-p representation. This model building process may incorporate a priori information such as upholes.",
null,
"Figure 4: Tomographic velocity/depth model. The lines correspond to the regional interfaces between different geological layers interpreted from well information.\n\nFigures 4, 5, and 6 show the results of applying τ-p refraction tomography to a data example from Tunisia (courtesy of Petro-Canada). The results demonstrate a good agreement with the independent geologic and uphole information. Other examples comparing the accuracy and robustness of different forms of refraction tomography may be seen in Osypov (1998, 1999, and 2000).",
null,
"Figure 5: Stack after the application of statics corresponding to the tomographic model in Figure 4.\n\nIn conclusion, τ-p refraction tomography is an emerging technology that complements other well-known methods for modeling the near surface and producing static corrections. This method of traveltime inversion differs from conventional tomographic approaches in that it performs no explicit ray tracing, although its formulation is implicitly based on ray theory. The approach combines the robustness of delay-time methods, as it does not require an initial model, and the flexibility of tomography, as it inverts both head and diving waves over the complete offset range.",
null,
"Figure 6: Comparison of vertical times through the tomographic model in Figure 4 with the uphole times.",
null,
"### About the Author(s)\n\nKonstantin Osypov received his M.S. degree and Ph.D. in geophysics from St. Petersburg University, Russia, in 1988 and 1992, respectively.\n\nPrior to joining Western’s R&D in Denver in 1997 he worked at the University of Uppsala, Sweden, and the Colorado School of Mines.\n\nHis main research interests lie in seismic tomography, inverse problems, statistical timeseries analysis, and signal processing.\n\n### References\n\nAki, K., and Richards, P. G., 1980, Quantitative seismology: Theory and Methods, Freeman and Co.\n\nBell, M. L., Lara, R., and Gray, W. C., 1994, Application of turning-ray tomography to the offshore Mississippi delta: 64th Ann. Internat. Mtg., Soc. Expl. Geophys., 1509-1512.\n\nHampson, D., and Russell, B., 1984, First-break interpretation using generalized linear inversion: 54th Ann. Internat. Mtg., Soc. Expl. Geophys., 532-534.\n\nOsypov, K., 1998, A comparative study between 3-D diving-wave tomography and head-wave refraction methods: 68th Ann. Internat. Mtg., Soc. Expl. Geophys., 1222-1225.\n\nOsypov, K., 1999, Refraction tomography without ray tracing: 69th Ann. Internat. Mtg., Soc. Expl. Geophys., 1283-1286.\n\nOsypov, K., 2000, Robust refraction tomography: 70th Ann. Internat. Mtg., Soc. Expl. Geophys., 2032-2035.\n\nZhu, T., Cheadle, S., Petrella, A., and Gray, S., 2000, First-arrival tomography: method and application: 70th Ann. Internat. Mtg., Soc. Expl. Geophys., 2028-2031.\n\nZhu, X., Sixta, D. P., and Angstman, B. G., 1992, Tomostatics: Turningray tomography + static correction: The Leading Edge, 11, 15-23.\n\n### Join the Conversation\n\nInterested in starting, or contributing to a conversation about an article or issue of the RECORDER? Join our CSEG LinkedIn Group."
] | [
null,
"https://csegrecorder.com/assets/images/articles/archive/2001-02-refraction-fig02.jpg",
null,
"https://csegrecorder.com/assets/images/articles/archive/2001-02-refraction-fig03.jpg",
null,
"https://csegrecorder.com/assets/images/articles/archive/2001-02-refraction-fig04.jpg",
null,
"https://csegrecorder.com/assets/images/articles/archive/2001-02-refraction-fig05.jpg",
null,
"https://csegrecorder.com/assets/images/articles/archive/2001-02-refraction-fig06.jpg",
null,
"https://csegrecorder.com/assets/images/global/end-sign.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8780494,"math_prob":0.90967596,"size":10210,"snap":"2019-13-2019-22","text_gpt3_token_len":2229,"char_repetition_ratio":0.123260826,"word_repetition_ratio":0.011811024,"special_character_ratio":0.20910871,"punctuation_ratio":0.15592515,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96804667,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,1,null,2,null,2,null,2,null,2,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-25T01:27:25Z\",\"WARC-Record-ID\":\"<urn:uuid:bb77539b-db83-4ad0-bd1c-577f29b44b59>\",\"Content-Length\":\"31195\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:29f43c7f-d8d9-489c-bec1-3f27fb21f795>\",\"WARC-Concurrent-To\":\"<urn:uuid:1f9472d7-c1b0-47ff-b2ae-2a1c4138fc24>\",\"WARC-IP-Address\":\"64.207.176.211\",\"WARC-Target-URI\":\"https://csegrecorder.com/articles/view/refraction-tomography-a-practical-overview-of-emerging-technologies\",\"WARC-Payload-Digest\":\"sha1:72MJC4OXAOQ6M62TUCEFGVI5C5N4HEPF\",\"WARC-Block-Digest\":\"sha1:I5XKQADR7COD4B5MAFRY4TYXPQTNTCJO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912203547.62_warc_CC-MAIN-20190325010547-20190325032547-00464.warc.gz\"}"} |
https://answers.everydaycalculation.com/divide-fractions/56-70-divided-by-10-90 | [
"Solutions by everydaycalculation.com\n\n## Divide 56/70 with 10/90\n\n56/70 ÷ 10/90 is 36/5.\n\n#### Steps for dividing fractions\n\n1. Find the reciprocal of the divisor\nReciprocal of 10/90: 90/10\n2. Now, multiply it with the dividend\nSo, 56/70 ÷ 10/90 = 56/70 × 90/10\n3. = 56 × 90/70 × 10 = 5040/700\n4. After reducing the fraction, the answer is 36/5\n5. In mixed form: 71/5\n\nMathStep (Works offline)",
null,
"Download our mobile app and learn to work with fractions in your own time:"
] | [
null,
"https://answers.everydaycalculation.com/mathstep-app-icon.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.81423247,"math_prob":0.94995457,"size":267,"snap":"2022-40-2023-06","text_gpt3_token_len":94,"char_repetition_ratio":0.15209125,"word_repetition_ratio":0.0,"special_character_ratio":0.3857678,"punctuation_ratio":0.08928572,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9669371,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-07T11:36:01Z\",\"WARC-Record-ID\":\"<urn:uuid:3db03488-1250-4a63-b17c-302dc07eb7bd>\",\"Content-Length\":\"7042\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e19eddea-1586-4c37-867a-645a378b25fc>\",\"WARC-Concurrent-To\":\"<urn:uuid:54bbed6b-623d-41ad-8a48-36646e45b951>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/divide-fractions/56-70-divided-by-10-90\",\"WARC-Payload-Digest\":\"sha1:YEY5G7C6RALCURAAGIK5U3EX3IKPNDVA\",\"WARC-Block-Digest\":\"sha1:3O23YYZAVKSNDXD3GC7B3CCP4Y337BNP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500456.61_warc_CC-MAIN-20230207102930-20230207132930-00142.warc.gz\"}"} |
https://stats.stackexchange.com/questions/589771/why-reversible-jump-mcmc-has-only-one-step-increase-decrease | [
"# Why Reversible jump mcmc has only one step increase/ decrease?\n\nI was applying reversible jump MCMC for joint estimation of model order and parameter estimation. I've a conceptual question in my mind. First of all, the algorithm has 3 steps, namely the birth, death and update. In birth and death steps, the model order is either increased or decreased by 1.\n\nI've seen some papers where after each iteration, a hyper parameter ($$\\Lambda$$) is updated based on a MH step and that again is used as the Possion distribution parameter for the model order in the next iteration. In short, the model order on every iteration is adaptively chosen and then the birth/ death moves are considered again. This makes it converge faster.\n\nThis made me think the following. Instead of increasing/ deceasing the model order by 1, why not do the following.\n\n1. Randomly choose an order based on the Poisson distribution. Based on the current model order, add or remove parameters and compute the a MH step and accept one of the orders.\n2. Update step\n3. Repeat\n\n1 and 2 can be performed based on a uniform random draw is less/ greater than a threshold.\n\nThe parameter of the Poisson can be used as a hyper parameter nicely here.\n\nIt’s more like a joint estimation but with arbitrary steps.\n\nIs it popular? Or, is it very stupid of me to think an approach like this. I’m sure the authors thought very carefully about the increase/ decrease by only 1 in the RJMCMC.\n\n• Increasing/decreasing by one is nothing special. The RJMCMC algorithm allows for proposed moves between arbitrary pairs of models. For instance, in Bayesian Core, we detail a RJMCMC for $AR(k)$ models where the proposed moves are between $k$ and $k\\pm 1, k\\pm 2$. Choosing an order with a Poisson proposal is thus perfectly legit. Sep 23 at 6:35"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9174987,"math_prob":0.9666225,"size":1373,"snap":"2022-40-2023-06","text_gpt3_token_len":301,"char_repetition_ratio":0.1395179,"word_repetition_ratio":0.0,"special_character_ratio":0.21267298,"punctuation_ratio":0.096296296,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98803896,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-07T22:00:13Z\",\"WARC-Record-ID\":\"<urn:uuid:feaca8f2-25c8-46b6-a993-b813ee642c38>\",\"Content-Length\":\"134686\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f80387b4-161b-4046-a122-2645a61f4585>\",\"WARC-Concurrent-To\":\"<urn:uuid:c12b883a-3a61-4d3e-88e0-ba6298127c43>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://stats.stackexchange.com/questions/589771/why-reversible-jump-mcmc-has-only-one-step-increase-decrease\",\"WARC-Payload-Digest\":\"sha1:TCGU6UVYCE3B7TYDWLJYNKJD2EDUBXCZ\",\"WARC-Block-Digest\":\"sha1:7DGQMFRGHXPIIWKOR7SPQ2R4DUTUFRRE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030338280.51_warc_CC-MAIN-20221007210452-20221008000452-00778.warc.gz\"}"} |
https://www.datasciencemadesimple.com/mode-function-python-pandas-dataframe-row-column-wise-mode/ | [
"# Mode Function in Python pandas (Dataframe, Row and column wise mode)\n\nMode Function in python pandas is used to calculate the mode or most repeated value of a given set of numbers. mode() function is used in creating most repeated value of a data frame, we will take a look at on how to get mode of all the column and mode of rows as well as mode of a specific column, let’s see an example of each We need to use the package name “statistics” in calculation of mode. In this tutorial we will learn,\n\n• How to find the mode of a given set of numbers\n• How to find mode of a dataframe in pandas\n• How to find the mode of a column in dataframe\n• How to find row mode of a dataframe\n\nSyntax of Mode Function:\n\nDataFrame.mode(axis=0, numeric_only=False, dropna=True)\n axis 0 – get mode of each column 1 -get mode of each row numeric_only if True, only apply to numeric columns dropna Don’t consider the counts of NaN\n\n#### Simple mode function in python is shown below\n\n```# calculate mode or most repeated value\nImport statistics\n\nprint(statistics.mode([1,5,5,7,5,6,8,7]))\nprint(statistics.mode(['lion', 'cat', 'cat','dog','tiger']))\n\n```\n\n5\ncat\n\n#### Mode of a dataframe:\n\nCreate dataframe\n\n```import pandas as pd\nimport numpy as np\n\n#Create a DataFrame\nd = {\n'Rahul','David','Andrew','Ajay','Teresa'],\n'Score1':[62,47,55,74,47,77,85,63,42,32,71,57],\n'Score2':[89,87,67,55,47,72,76,79,44,67,99,69],\n'Score3':[56,86,77,45,73,62,74,89,71,67,97,68]}\n\ndf = pd.DataFrame(d)\ndf\n\n```\n\nSo the resultant dataframe will be",
null,
"#### Mode of the dataframe:\n\n```# mode of the dataframe\ndf.mode()\n\n```\n\nwill calculate the mode of the dataframe across columns so the output will be",
null,
"#### Column Mode of the dataframe in python pandas :\n\nmode function takes axis =0 as argument. so that it calculates a column wise mode.\n\n```# column mode of the dataframe\ndf.mode(axis=0)\n\n```\n\naxis=0 argument calculates the column wise mode of the dataframe so the result will be",
null,
"#### Row Mode of the dataframe in python pandas :\n\nmode function takes axis =1 as argument, so that it calculates the row wise mode.\n\n```# Row mode of the dataframe\ndf.mode(axis=1)\n\n```\n\naxis=1 argument calculates the row wise mode of the dataframe so the result will be",
null,
"#### Calculate the mode of the specific Column – pandas\n\n```# mode of the specific column\ndf.loc[:,\"Score1\"].mode()\n\n```\n\nthe above code calculates the mode of the “Score1” column so the result will be\n\n0 47\ndtype: int64"
] | [
null,
"https://www.datasciencemadesimple.com/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif",
null,
"https://www.datasciencemadesimple.com/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif",
null,
"https://www.datasciencemadesimple.com/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif",
null,
"https://www.datasciencemadesimple.com/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.61942554,"math_prob":0.9912284,"size":2319,"snap":"2020-45-2020-50","text_gpt3_token_len":607,"char_repetition_ratio":0.20475163,"word_repetition_ratio":0.11141305,"special_character_ratio":0.28891763,"punctuation_ratio":0.17017208,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99943435,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-27T12:34:06Z\",\"WARC-Record-ID\":\"<urn:uuid:a1b170f2-af83-4163-acd2-98137e2e5690>\",\"Content-Length\":\"64061\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2de309e4-9f11-4f84-9c1d-7a5a22f50c3d>\",\"WARC-Concurrent-To\":\"<urn:uuid:f6171577-d94e-4d59-a64c-61ff45503d8c>\",\"WARC-IP-Address\":\"104.27.149.144\",\"WARC-Target-URI\":\"https://www.datasciencemadesimple.com/mode-function-python-pandas-dataframe-row-column-wise-mode/\",\"WARC-Payload-Digest\":\"sha1:SIT4RZUCAPQV3A2FAMBPSQZYOWUE7GSX\",\"WARC-Block-Digest\":\"sha1:6QCT46AQI2M7NGLUQXSVJN7VYYO2MJCG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141191692.20_warc_CC-MAIN-20201127103102-20201127133102-00173.warc.gz\"}"} |
https://cs.stackexchange.com/questions/74785/given-the-alphabet-a-b-c-how-many-words-can-we-form-with-4-letters/76155 | [
"# Given the alphabet $\\{a, b, c\\}$, how many words can we form with 4 letters?\n\nQuestion: Given the alphabet $\\{a, b, c\\}$, how many words can we form with 4 letters? And how many words can we form with up to 4 letters?\n\nI was thinking about the logic behind this and came up with this: perhaps the number of words that can be formed with 4 letters is $4^3 = 64$ words. Is that correct?\n\nI could not think about how many words up to 4 letters, because that includes words with 1, 2 and 3 letters.\n\n• Hint: by the same token, the words having only 1 letter are $1^3 = 1$. Does it look right? For \"up to four\", count the words having 0,1,2,3,4 letters using the same \"corrected\" formula. – chi May 1 '17 at 19:32\n\nAssume you have the alphabet $\\{A,B,C\\}$ and you want to form words of length 4.\nFor the first letter you have 3 choices, $A, B$ or $C$. For the second letter you have again 3 choices, $A,B$ or $C$ and so on. In total: $3 \\cdot 3 \\cdot 3 \\cdot 3 = 3 ^ 4 = 81$ possibilities.\nDoes not \"with up to 4 letters\" mean that we should count 1-letter, 2-letter, 3-letter, and 4-letter words? Then the answer is $3 + 3^2 + 3^3 + 3^4$."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.984888,"math_prob":0.99609023,"size":414,"snap":"2019-51-2020-05","text_gpt3_token_len":111,"char_repetition_ratio":0.17804877,"word_repetition_ratio":0.07594936,"special_character_ratio":0.2777778,"punctuation_ratio":0.12631579,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99472743,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-19T03:40:05Z\",\"WARC-Record-ID\":\"<urn:uuid:adba94f8-afe8-4594-9280-40c0d7d9a9a6>\",\"Content-Length\":\"143671\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:463984f5-47a1-44e0-aaed-c599c2b9aac8>\",\"WARC-Concurrent-To\":\"<urn:uuid:5af23ef0-6b5d-4853-b921-d34a6a0f199b>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://cs.stackexchange.com/questions/74785/given-the-alphabet-a-b-c-how-many-words-can-we-form-with-4-letters/76155\",\"WARC-Payload-Digest\":\"sha1:UIAH4YKT6QNS3WXLC26CQ3UZLGMP5JYU\",\"WARC-Block-Digest\":\"sha1:3LKOEYUVTSKNDZ5WAIMFNHYAMXE76XOU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250594101.10_warc_CC-MAIN-20200119010920-20200119034920-00166.warc.gz\"}"} |
https://man.linuxreviews.org/man3/dgesdd.3.html | [
"# dgesdd.f\n\nSection: LAPACK (3)\nUpdated: Tue Nov 14 2017\nPage Index\n\ndgesdd.f\n\n## SYNOPSIS\n\n### Functions/Subroutines\n\nsubroutine dgesdd (JOBZ, M, N, A, LDA, S, U, LDU, VT, LDVT, WORK, LWORK, IWORK, INFO)\nDGESDD\n\n## Function/Subroutine Documentation\n\n### subroutine dgesdd (character JOBZ, integer M, integer N, double precision, dimension( lda, * ) A, integer LDA, double precision, dimension( * ) S, double precision, dimension( ldu, * ) U, integer LDU, double precision, dimension( ldvt, * ) VT, integer LDVT, double precision, dimension( * ) WORK, integer LWORK, integer, dimension( * ) IWORK, integer INFO)\n\nDGESDD\n\nPurpose:\n\n``` DGESDD computes the singular value decomposition (SVD) of a real\nM-by-N matrix A, optionally computing the left and right singular\nvectors. If singular vectors are desired, it uses a\ndivide-and-conquer algorithm.\n\nThe SVD is written\n\nA = U * SIGMA * transpose(V)\n\nwhere SIGMA is an M-by-N matrix which is zero except for its\nmin(m,n) diagonal elements, U is an M-by-M orthogonal matrix, and\nV is an N-by-N orthogonal matrix. The diagonal elements of SIGMA\nare the singular values of A; they are real and non-negative, and\nare returned in descending order. The first min(m,n) columns of\nU and V are the left and right singular vectors of A.\n\nNote that the routine returns VT = V**T, not V.\n\nThe divide and conquer algorithm makes very mild assumptions about\nfloating point arithmetic. It will work on machines with a guard\ndigit in add/subtract, or on those binary machines without guard\ndigits which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or\nCray-2. It could conceivably fail on hexadecimal or decimal machines\nwithout guard digits, but we know of none.\n```\n\nParameters:\n\nJOBZ\n\n``` JOBZ is CHARACTER*1\nSpecifies options for computing all or part of the matrix U:\n= 'A': all M columns of U and all N rows of V**T are\nreturned in the arrays U and VT;\n= 'S': the first min(M,N) columns of U and the first\nmin(M,N) rows of V**T are returned in the arrays U\nand VT;\n= 'O': If M >= N, the first N columns of U are overwritten\non the array A and all rows of V**T are returned in\nthe array VT;\notherwise, all columns of U are returned in the\narray U and the first M rows of V**T are overwritten\nin the array A;\n= 'N': no columns of U or rows of V**T are computed.\n```\n\nM\n\n``` M is INTEGER\nThe number of rows of the input matrix A. M >= 0.\n```\n\nN\n\n``` N is INTEGER\nThe number of columns of the input matrix A. N >= 0.\n```\n\nA\n\n``` A is DOUBLE PRECISION array, dimension (LDA,N)\nOn entry, the M-by-N matrix A.\nOn exit,\nif JOBZ = 'O', A is overwritten with the first N columns\nof U (the left singular vectors, stored\ncolumnwise) if M >= N;\nA is overwritten with the first M rows\nof V**T (the right singular vectors, stored\nrowwise) otherwise.\nif JOBZ .ne. 'O', the contents of A are destroyed.\n```\n\nLDA\n\n``` LDA is INTEGER\nThe leading dimension of the array A. LDA >= max(1,M).\n```\n\nS\n\n``` S is DOUBLE PRECISION array, dimension (min(M,N))\nThe singular values of A, sorted so that S(i) >= S(i+1).\n```\n\nU\n\n``` U is DOUBLE PRECISION array, dimension (LDU,UCOL)\nUCOL = M if JOBZ = 'A' or JOBZ = 'O' and M < N;\nUCOL = min(M,N) if JOBZ = 'S'.\nIf JOBZ = 'A' or JOBZ = 'O' and M < N, U contains the M-by-M\northogonal matrix U;\nif JOBZ = 'S', U contains the first min(M,N) columns of U\n(the left singular vectors, stored columnwise);\nif JOBZ = 'O' and M >= N, or JOBZ = 'N', U is not referenced.\n```\n\nLDU\n\n``` LDU is INTEGER\nThe leading dimension of the array U. LDU >= 1; if\nJOBZ = 'S' or 'A' or JOBZ = 'O' and M < N, LDU >= M.\n```\n\nVT\n\n``` VT is DOUBLE PRECISION array, dimension (LDVT,N)\nIf JOBZ = 'A' or JOBZ = 'O' and M >= N, VT contains the\nN-by-N orthogonal matrix V**T;\nif JOBZ = 'S', VT contains the first min(M,N) rows of\nV**T (the right singular vectors, stored rowwise);\nif JOBZ = 'O' and M < N, or JOBZ = 'N', VT is not referenced.\n```\n\nLDVT\n\n``` LDVT is INTEGER\nThe leading dimension of the array VT. LDVT >= 1;\nif JOBZ = 'A' or JOBZ = 'O' and M >= N, LDVT >= N;\nif JOBZ = 'S', LDVT >= min(M,N).\n```\n\nWORK\n\n``` WORK is DOUBLE PRECISION array, dimension (MAX(1,LWORK))\nOn exit, if INFO = 0, WORK(1) returns the optimal LWORK;\n```\n\nLWORK\n\n``` LWORK is INTEGER\nThe dimension of the array WORK. LWORK >= 1.\nIf LWORK = -1, a workspace query is assumed. The optimal\nsize for the WORK array is calculated and stored in WORK(1),\nand no other work except argument checking is performed.\n\nLet mx = max(M,N) and mn = min(M,N).\nIf JOBZ = 'N', LWORK >= 3*mn + max( mx, 7*mn ).\nIf JOBZ = 'O', LWORK >= 3*mn + max( mx, 5*mn*mn + 4*mn ).\nIf JOBZ = 'S', LWORK >= 4*mn*mn + 7*mn.\nIf JOBZ = 'A', LWORK >= 4*mn*mn + 6*mn + mx.\nThese are not tight minimums in all cases; see comments inside code.\nFor good performance, LWORK should generally be larger;\na query is recommended.\n```\n\nIWORK\n\n``` IWORK is INTEGER array, dimension (8*min(M,N))\n```\n\nINFO\n\n``` INFO is INTEGER\n= 0: successful exit.\n< 0: if INFO = -i, the i-th argument had an illegal value.\n> 0: DBDSDC did not converge, updating process failed.\n```\n\nAuthor:\n\nUniv. of Tennessee\n\nUniv. of California Berkeley\n\nNAG Ltd.\n\nDate:\n\nJune 2016\n\nContributors:\n\nMing Gu and Huan Ren, Computer Science Division, University of California at Berkeley, USA\n\nDefinition at line 220 of file dgesdd.f.\n\n## Author\n\nGenerated automatically by Doxygen for LAPACK from the source code."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7845094,"math_prob":0.9738982,"size":4548,"snap":"2022-27-2022-33","text_gpt3_token_len":1352,"char_repetition_ratio":0.13820423,"word_repetition_ratio":0.15172414,"special_character_ratio":0.29397538,"punctuation_ratio":0.15377268,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99847007,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-04T02:58:17Z\",\"WARC-Record-ID\":\"<urn:uuid:c77bfc98-7c85-4f4a-baca-ce64041ed4d2>\",\"Content-Length\":\"10443\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1c8ae0dc-d47a-42dc-97b7-3ddce2f3662d>\",\"WARC-Concurrent-To\":\"<urn:uuid:9709ccbd-fd9e-4c22-952c-e41442f28406>\",\"WARC-IP-Address\":\"85.25.199.78\",\"WARC-Target-URI\":\"https://man.linuxreviews.org/man3/dgesdd.3.html\",\"WARC-Payload-Digest\":\"sha1:DBPF4TADIYQLZ4HDQ6LL7HBQEYSPFCWI\",\"WARC-Block-Digest\":\"sha1:2454MMHPFASP4Q4R2JEQPUJ4MD4UQEPQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104293758.72_warc_CC-MAIN-20220704015700-20220704045700-00315.warc.gz\"}"} |
https://numbermatics.com/n/6550588/ | [
"# 6550588\n\n## 6,550,588 is an even composite number composed of three prime numbers multiplied together.\n\nWhat does the number 6550588 look like?\n\nThis visualization shows the relationship between its 3 prime factors (large circles) and 24 divisors.\n\n6550588 is an even composite number. It is composed of three distinct prime numbers multiplied together. It has a total of twenty-four divisors.\n\n## Prime factorization of 6550588:\n\n### 22 × 11 × 533\n\n(2 × 2 × 11 × 53 × 53 × 53)\n\nSee below for interesting mathematical facts about the number 6550588 from the Numbermatics database.\n\n### Names of 6550588\n\n• Cardinal: 6550588 can be written as Six million, five hundred fifty thousand, five hundred eighty-eight.\n\n### Scientific notation\n\n• Scientific notation: 6.550588 × 106\n\n### Factors of 6550588\n\n• Number of distinct prime factors ω(n): 3\n• Total number of prime factors Ω(n): 6\n• Sum of prime factors: 66\n\n### Divisors of 6550588\n\n• Number of divisors d(n): 24\n• Complete list of divisors:\n• Sum of all divisors σ(n): 12746160\n• Sum of proper divisors (its aliquot sum) s(n): 6195572\n• 6550588 is a deficient number, because the sum of its proper divisors (6195572) is less than itself. Its deficiency is 355016\n\n### Bases of 6550588\n\n• Binary: 110001111110100001111002\n• Base-36: 3WEGS\n\n### Squares and roots of 6550588\n\n• 6550588 squared (65505882) is 42910203145744\n• 6550588 cubed (65505883) is 281087061804072897472\n• The square root of 6550588 is 2559.4116511415\n• The cube root of 6550588 is 187.1084617723\n\n### Scales and comparisons\n\nHow big is 6550588?\n• 6,550,588 seconds is equal to 10 weeks, 5 days, 19 hours, 36 minutes, 28 seconds.\n• To count from 1 to 6,550,588 would take you about sixteen weeks!\n\nThis is a very rough estimate, based on a speaking rate of half a second every third order of magnitude. If you speak quickly, you could probably say any randomly-chosen number between one and a thousand in around half a second. Very big numbers obviously take longer to say, so we add half a second for every extra x1000. (We do not count involuntary pauses, bathroom breaks or the necessity of sleep in our calculation!)\n\n• A cube with a volume of 6550588 cubic inches would be around 15.6 feet tall.\n\n### Recreational maths with 6550588\n\n• 6550588 backwards is 8850556\n• The number of decimal digits it has is: 7\n• The sum of 6550588's digits is 37\n• More coming soon!"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.87692237,"math_prob":0.97597075,"size":2863,"snap":"2020-24-2020-29","text_gpt3_token_len":780,"char_repetition_ratio":0.1304652,"word_repetition_ratio":0.042410713,"special_character_ratio":0.34055188,"punctuation_ratio":0.16970803,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99358624,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-08T11:19:21Z\",\"WARC-Record-ID\":\"<urn:uuid:bf8d5ab1-7ce5-476c-ab72-01bfa65a02bc>\",\"Content-Length\":\"18393\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:566f3ba2-6a9b-44c3-a8d0-64860f5cc9ea>\",\"WARC-Concurrent-To\":\"<urn:uuid:a7069128-9241-4bfa-9632-61b32fe54989>\",\"WARC-IP-Address\":\"72.44.94.106\",\"WARC-Target-URI\":\"https://numbermatics.com/n/6550588/\",\"WARC-Payload-Digest\":\"sha1:J7DC63PT3AGXE5CHNIQ7TTSG6GEMXJWH\",\"WARC-Block-Digest\":\"sha1:36J3LHTT3AH3LWOP7OR4RMCOQC2VGMWW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655896932.38_warc_CC-MAIN-20200708093606-20200708123606-00463.warc.gz\"}"} |
https://www.nagwa.com/en/worksheets/409123138312/ | [
"# Lesson Worksheet: Surface of Revolution of Parametric Curves Mathematics\n\nIn this worksheet, we will practice using integration to find the area of the surface of revolution of a parametrically defined curve.\n\nQ1:\n\nConsider the parametric equations and , where .The area of the surface obtained by rotating this parametric curve radians about the -axis can be calculated by evaluating the integral where .\n\nFind .\n\n• A\n• B\n• C\n• D\n• E\n\nHence, find the surface area of by evaluating the integral.\n\n• A\n• B\n• C\n• D\n• E\n\nQ2:\n\nConsider the parametric equations and , where . Calculate the area of the surface obtained when the curve is rotated radians about the -axis.\n\n• A\n• B\n• C\n• D\n• E\n\nQ3:\n\nDetermine the surface area of the solid obtained by rotating the parametric curve and , where , about the .\n\n• A\n• B\n• C24\n• D\n• E\n\nQ4:\n\nDetermine the surface area of the solid obtained by rotating the parametric curve and , where , about the .\n\n• A\n• B\n• C\n• D\n• E\n\nQ5:\n\nDetermine the surface area of the solid obtained by rotating the parametric curve and , where about the . Approximate your answer to the nearest decimal place.\n\nQ6:\n\nCalculate the surface area of the solid obtained by revolving the curve given by the parametric equations and such that about the line . Round your answer to two decimal places.\n\nQ7:\n\nCalculate the surface area of the solid obtained by revolving the curve given by the parametric equations and such that about the . Round your answer to two decimal places.\n\nQ8:\n\nCalculate the surface area of the solid obtained by revolving the curve given by the parametric equations and such that about the . Round your answer to two decimal places.\n\nQ9:\n\nCalculate the surface area of the solid obtained by revolving the curve given by the parametric equations and such that about the . Round your answer to two decimal places.\n\nQ10:\n\nCalculate the surface area of the solid obtained by revolving the curve given by the parametric equations and such that about the line . Round your answer to two decimal places.\n\nThis lesson includes 72 additional question variations for subscribers."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.89006674,"math_prob":0.9989126,"size":2073,"snap":"2022-27-2022-33","text_gpt3_token_len":470,"char_repetition_ratio":0.17061383,"word_repetition_ratio":0.57591623,"special_character_ratio":0.22624215,"punctuation_ratio":0.10909091,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998673,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-30T05:25:24Z\",\"WARC-Record-ID\":\"<urn:uuid:56fd642f-3709-4a9e-9041-9e6f60759ac2>\",\"Content-Length\":\"65910\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7b96c6f2-2bfe-4fc4-abb1-4e4b3a85368c>\",\"WARC-Concurrent-To\":\"<urn:uuid:d6bf7d01-b01a-43d2-a19f-2e22ec5a1ac8>\",\"WARC-IP-Address\":\"13.248.238.219\",\"WARC-Target-URI\":\"https://www.nagwa.com/en/worksheets/409123138312/\",\"WARC-Payload-Digest\":\"sha1:MKS4MYPODXQCBK6V5DZOMG5CYUUSDXID\",\"WARC-Block-Digest\":\"sha1:ZFOQIPODHZARID5BJQVINROH3TQIADQH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103661137.41_warc_CC-MAIN-20220630031950-20220630061950-00467.warc.gz\"}"} |
https://www.reddit.com/r/Python/comments/1i7has/django_querysets_fucking_awesome_yes/ | [
"×\n\n[–][deleted] 14 points15 points (0 children)\n\nI don't really understand what the fuss is about. A queryset is comprised of two parts:\n\n• lazy evaluation of SQL queries\n• a datastructure for representing queries\n\nDjango querysets do a nice job of the lazy evaluation, but I think they're pretty lacking as a data-structure...You have to use all sorts of weird APIs to represent various types of expressions (`Q`, `F`, `annotate` and `aggregate`, `values`, etc).\n\nSQLAlchemy and Peewee are much more expressive. I posted this elsewhere, but here's a post I wrote about shortcomings of the ORM.\n\n[–] 2 points3 points (5 children)\n\nNice article, but why not use python native iterators?\n\n``````import itertools\n\nclass Node(models.Model):\nparent = models.ForeignKey(to='self', null=True, blank=True)\nvalue = models.IntegerField()\n\ndef __unicode__(self):\nreturn '%s #%s' % (self.__class__.__name__, self.id)\n\n@property\ndef ancestors(self):\nif self.parent is None:\nreturn ()\nreturn itertools.chain((self.parent,), self.parent.ancestors)\n\n@property\ndef larger_ancestors(self):\nreturn (a for a in self.ancestors if a.value > self.value)\n``````\n\n[–]Django, gevent 5 points6 points (4 children)\n\nBecause\n\n``````(a for a in self.ancestors if a.value > self.value)\n``````\n\nLoads all values from the database and filters them in python, while\n\n``````self.get_ancestors().filter(value__gt=self.value)\n``````\n\nlets the database handle filtering (which might be more efficient) and reduces network traffic. Additionally, handing back a queryset means you can use other queryset operations, like counting and aggregation, without ever getting the individual rows from the database.\n\n[–] 2 points3 points (3 children)\n\nAFAIK Django can't create recursive SQL requests so one way or another it will be filtered in Python.\n\n[–]Django, gevent 3 points4 points (2 children)\n\nYou're right. In this case you already have to get the ancestors one query at a time from the parent attribute of each ancestor. You could do it in fewer queries by filtering in python, and still use less memory. I overlooked this in my previous comment.\n\nThat said, I think this is more of a flaw in the simplified example. The point of the example was to demonstrate how querysets can be filtered and combined, which gets lost when you use pythons iterators. I agree that your proposal would be more efficient for this case in a real system, but there are lots of places you can get advantages from querysets, and the point here was to demonstrate how to manipulate querysets.\n\n[–] 1 point2 points (1 child)\n\nI couldn't get out of my head those DB multi requests so I wrote this monstrosity, which bothers DB only once and returns QuerySet. Maybe someone will finds this interesting:\n\n``````class Node(models.Model):\nparent = models.ForeignKey(to='self', null=True, blank=True)\nvalue = models.IntegerField()\n\ndef __unicode__(self):\nreturn '%s #%s' % (self.__class__.__name__, self.id)\n\ndef get_ancestors(self, max_ancestors_num=10):\nif self.parent is None:\nreturn self.__class__.objects.none()\n\ndb_table = self._meta.db_table\ndb_id_field = self._meta.get_field('id').column\ndb_parent_field = self._meta.get_field('parent').column\n\nsql_ids_str = ', '.join('t_%i.%s' % (i, db_id_field)\nfor i in range(2, max_ancestors_num + 1))\nsql_joins = ' '.join(\n'LEFT JOIN {table} AS {alias_2} '\n'ON {alias_2}.{id_} = {alias_1}.{parent}'.format(\ntable=db_table,\nid_=db_id_field,\nparent=db_parent_field,\nalias_1=('t_%i' % i),\nalias_2=('t_%i' % (i + 1))\n)\nfor i in range(1, max_ancestors_num)\n)\nsql_ancestors_ids = (\n\"SELECT CONCAT_WS(',', {ids_str}) \"\n\"FROM {table} AS t_1 \"\n\"{joins} \"\n\"WHERE t_1.{id_} = '{self_id}'\".format(\ntable=db_table,\nid_=db_id_field,\nids_str=sql_ids_str,\njoins=sql_joins,\nself_id=self.id\n)\n)\nreturn self.__class__.objects.extra(where=('FIND_IN_SET(%s, (%s))' %\n(db_id_field, sql_ancestors_ids),))\n\ndef get_larger_ancestors(self, max_ancestors_num=10):\nreturn self.get_ancestors(max_ancestors_num).filter(value__gt=self.value)\n``````\n\nThe amount of generated JOIN's depends on max_ancestors_num variable, which is 10 by default. Thereby this code can't handle infinite recursions. This SQL (MySQL) should explain basic principle:\n\n``````CREATE TABLE t(\nid INT PRIMARY KEY AUTO_INCREMENT,\nparent INT NULL\n);\n\nINSERT INTO t(parent)\nVALUES (5), (NULL), (1), (2), (6), (4);\n\nSELECT * FROM t;\n\nSELECT *\nFROM t\nWHERE FIND_IN_SET(id, (\nSELECT CONCAT_WS(',', t2.id, t3.id, t4.id, t5.id)\nFROM t as t1\nLEFT JOIN t as t2 ON t2.id = t1.parent\nLEFT JOIN t as t3 ON t3.id = t2.parent\nLEFT JOIN t as t4 ON t4.id = t3.parent\nLEFT JOIN t as t5 ON t5.id = t4.parent\nWHERE t1.id = 3\n));\n``````\n\nPS. PostgreSQL have native recursion support: http://www.postgresql.org/docs/8.4/static/queries-with.html\n\n[–]Django, gevent 0 points1 point (0 children)\n\nYou might also be interested in django-mptt, which adds more general tree support to django models.\n\n[–]django n' shit 3 points4 points (3 children)\n\nA cool trick with Django QuerySet is to replace the Q() object.\n\n``````mymodel.objects.filter(Q(a=1)|Q(b=2))\n``````\n\nYou can write\n\n``````mymodel.objects.filter(a=1) | mymodel.objects.filter(b=2)\n``````\n\n[–]imported from __future__ 0 points1 point (2 children)\n\nIt seems like the former would be a lot faster.\n\n[–] 5 points6 points (1 child)\n\nI was an unbeliever too until two minutes ago.\n\nWhen the or operator `|` or the and operator `&` is used to link QuerySets the entire expression is reduced to a single SQL query. So the second example:\n\n``````mymodel.objects.filter(a=1) | mymodel.objects.filter(b=2)\n``````\n\nWill result in a SQL query on the `mymodel` table with a single `WHERE` statement operating with `OR` on the filter expressions, just like the Q object based filter. Had to test it out using Django debug toolbar's debugsqlshell.\n\n[–]imported from __future__ 1 point2 points (0 children)\n\nWhoa. That's awesome.\n\n[–] 3 points4 points (0 children)\n\nYes, relational algebra is very cool.\n\n[–] 2 points3 points (0 children)\n\nThe elders of Reddit's Python will downvote me to hell, but it is interesting to point out that web2py's queryset is just as nice, if not more expressive and powerful.\n\n``````db( db.article.id > 0 )\ndb( (db.article.name == 'John') & (db.article.tags.contain('news')))\n``````\n\nWhy is it more powerful? Because inner joins are done nicely and intuitively, when two tables are linked together. For example, in table \"thing\", there is a field \"owner_id\" that references (is a foreign key of) table \"person\":\n\n``````db(db.person.id==db.thing.owner_id)\n``````"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.62059134,"math_prob":0.8817117,"size":566,"snap":"2021-21-2021-25","text_gpt3_token_len":185,"char_repetition_ratio":0.34163702,"word_repetition_ratio":0.054054055,"special_character_ratio":0.3074205,"punctuation_ratio":0.048780486,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.96419644,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-16T06:59:45Z\",\"WARC-Record-ID\":\"<urn:uuid:302d2caa-5dc3-4803-aa74-e3e75ac092a0>\",\"Content-Length\":\"113459\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c135d759-1d76-4ac0-a551-04021117860c>\",\"WARC-Concurrent-To\":\"<urn:uuid:ad307bb8-3d84-4cd1-82d5-ed790651c790>\",\"WARC-IP-Address\":\"151.101.249.140\",\"WARC-Target-URI\":\"https://www.reddit.com/r/Python/comments/1i7has/django_querysets_fucking_awesome_yes/\",\"WARC-Payload-Digest\":\"sha1:ZNXPLRQE3ZI6WRF3VCE7IWZLTS3KLILP\",\"WARC-Block-Digest\":\"sha1:SEAVVPV7BC4LZ6PYKA3Q5WO67J7DRD6Z\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243989690.55_warc_CC-MAIN-20210516044552-20210516074552-00426.warc.gz\"}"} |
https://stage.geogebra.org/m/X6x8Bu4B | [
"# Homework\n\n1) Here are some isosceles triangles.\nAt least two sides of the triangles were congruent.\nFor every isosceles triangle there is 1 pair of congruent segments and 1 pair of congruent angles\n2)Triangles with Two congruent angles.\nIf two angles were congruent the sides opposite the angles were congruent"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9477331,"math_prob":0.73868036,"size":340,"snap":"2022-40-2023-06","text_gpt3_token_len":82,"char_repetition_ratio":0.22321428,"word_repetition_ratio":0.0,"special_character_ratio":0.19411765,"punctuation_ratio":0.06896552,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9987933,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-07T15:56:28Z\",\"WARC-Record-ID\":\"<urn:uuid:8fc0c6d8-c98f-4187-b0df-fd58070208eb>\",\"Content-Length\":\"52784\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9fb7bc43-ebb8-4b24-b60f-5691f2a83b92>\",\"WARC-Concurrent-To\":\"<urn:uuid:19ee9db6-199a-4ae3-83c5-2d992eb0f67e>\",\"WARC-IP-Address\":\"18.67.76.62\",\"WARC-Target-URI\":\"https://stage.geogebra.org/m/X6x8Bu4B\",\"WARC-Payload-Digest\":\"sha1:5OYAAEZ7BSLDHJRJMZHF2ZCBOJGIAUNC\",\"WARC-Block-Digest\":\"sha1:NLFS4T3DUEILIXN4BE2VWJCAM5T6ALH5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500619.96_warc_CC-MAIN-20230207134453-20230207164453-00440.warc.gz\"}"} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.