Columns: url (string, length 14–2.42k), text (string, length 100–1.02M), date (string, length 19), metadata (string, length 1.06k–1.1k)
http://stackoverflow.com/questions/6324058/how-to-change-caret-cursor-blinking-rate-in-netbeans
# How to change caret (cursor) blinking rate in NetBeans?

How do I change the caret (cursor) blinking rate in NetBeans 7.0? NetBeans developers say that this is supported as a Swing option (see Bug 124211 - Cursor blink rate too fast), but I can't figure out the name of the Swing option to set from the command line. The closest example of setting a Swing option that I found is setting the look and feel by putting -J-Dswing.defaultlaf=com.sun.java.swing.plaf.windows.WindowsLookAndFeel into netbeans.conf.

-

There was a module for customizing the cursor blinking rate created by Emilian Bold, but that module is no longer easily available. Let me provide a less intuitive way; this solution works with NetBeans IDE 7.0.1 as tested by me.

1. Make sure the NetBeans IDE is shut down before making these changes.
2. Create the file <userdir>/config/Editors/text/x-java/properties.xml. Here <userdir> means the user directory used by NetBeans IDE; this directory can be found from the NetBeans Help > About menu. The config folder will already be there, but the folders Editors/text/x-java may not be, and we will have to create them (they are case sensitive). The properties.xml file is created in the x-java folder.
3. Add the following contents to the properties.xml file (the caret-blink-rate property line is implied by step 4):

```
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties PUBLIC "-//NetBeans//DTD Editor Properties settings 1.0//EN"
                            "http://www.netbeans.org/dtds/EditorProperties-1_0.dtd">
<properties>
    <property name="caret-blink-rate" value="0"/>
</properties>
```

4. The value="0" portion can be set to the desired blink rate in milliseconds; the default value used by NetBeans IDE appears to be around 300, but it can be changed to a new value. 0 means no blinking.
5. Start NetBeans IDE again and you will now get your desired blink rate for the cursor in Java files.

-

Thank you, works great. To change the blink rate for other file types, I just had to create another folder under the "text" folder; e.g. I created a "javascript" folder beside the "x-java" folder and copied the "properties.xml" file into it, so all my JavaScript files now use the specified rate. Thanks! – Chris Nov 7 '11 at 13:57

Can this be applied at a higher scope? Or does it need to be done for each MIME type? – Steve Buzonas Jan 20 '12 at 8:16

To my knowledge, for now you have to do this customization for each MIME type. – Tushar Joshi Jan 20 '12 at 15:06

-

I realize this is old, but it's quite high up in Google search, so I thought I'd add an updated solution. The solution above by Tushar Joshi does not work for me in NetBeans 7.1.1. What I had to do was quite similar, though: the setting has moved to <userdir>\config\Editors\text\x-java\Preferences\org-netbeans-modules-editor-settings-CustomPreferences.xml. The path for Unix/Linux is $HOME/.netbeans/<NetbeansVersion>/config/Editors/Preferences/org-netbeans-modules-editor-settings-CustomPreferences.xml. Exit NetBeans and modify the file by adding the entry

```
<entry javaType="java.lang.Integer" name="caret-blink-rate" xml:space="preserve">
    <value>1000</value>
</entry>
```

The value is the blink rate in milliseconds; I added a whole second. I placed it so it lined up alphabetically with the name attributes of the other entries, but I don't know whether that is important or not. That's it :)

-

Thanks so much! This answer worked for me, not the accepted one! – Oleksiy Aug 29 '13 at 23:48

Thanks! It works for NetBeans 7.4. I set the value to 0 to prevent the cursor from blinking, and it works perfectly. – Sophia Feng Nov 30 '13 at 3:04

Yes, it works for my NetBeans 8.0. I wish it could read the system's setting for the blink rate: all other apps (e.g. Notepad) on my Windows machine blink at the same rate, so why can't NetBeans? Maybe someone could log a bug for it. – pongapundit Aug 15 '14 at 17:58

I'm also using NetBeans 7.4 but setting the value to 0 doesn't prevent it from blinking. :( – Michael Warner May 4 at 17:53
2015-11-26 18:18:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4933016300201416, "perplexity": 3078.9994887032135}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398447769.81/warc/CC-MAIN-20151124205407-00031-ip-10-71-132-137.ec2.internal.warc.gz"}
https://askdev.io/questions/995509/weird-integral-with-cylinders
# Weird integral with cylinders

I have this unusual integral to compute. I am looking for the volume defined by these two equations:

$$x^2+y^2=4$$ and $$x^2+z^2=4$$ for $$x\geq0,\; y\geq0,\; z\geq0.$$

It is an unusual object, and the plane $$z=y$$ acts as a divider between the two cylinders. My trouble is that I cannot find the integration limits; I cannot even draw this object properly.

2022-07-25 20:46:47
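One possible setup (a sketch, assuming the goal is the volume of the first-octant region common to both cylinders): for each fixed $$x \in [0,2]$$, both constraints reduce to $$0 \leq y \leq \sqrt{4-x^2}$$ and $$0 \leq z \leq \sqrt{4-x^2}$$, so

$$V = \int_{0}^{2}\int_{0}^{\sqrt{4-x^{2}}}\int_{0}^{\sqrt{4-x^{2}}} dz\,dy\,dx = \int_{0}^{2}\bigl(4-x^{2}\bigr)\,dx = \frac{16}{3}.$$

Swapping $$y \leftrightarrow z$$ maps the solid to itself and fixes the plane $$z=y$$, so that plane splits it into two congruent halves of volume $$8/3$$ each.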
2022-08-13 07:03:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7171760201454163, "perplexity": 2009.1966364329178}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571909.51/warc/CC-MAIN-20220813051311-20220813081311-00730.warc.gz"}
https://www.semanticscholar.org/paper/Every-bordered-Riemann-surface-is-a-complete-proper-Alarc%C3%B3n-Forstneri%C4%8D/b707f9d98faf2d889ba04ca182cb4bd52e789588
# Every bordered Riemann surface is a complete proper curve in a ball

@article{Alarcn2013EveryBR, title={Every bordered Riemann surface is a complete proper curve in a ball}, author={A. Alarc{\'o}n and F. Forstneri{\vc}}, journal={Mathematische Annalen}, year={2013}, volume={357}, pages={1049-1070} }

• Published 2013 • Mathematics • Mathematische Annalen

We prove that every bordered Riemann surface admits a complete proper holomorphic immersion into a ball of $$\mathbb C ^2$$, and a complete proper holomorphic embedding into a ball of $$\mathbb C ^3$$.

#### Related papers

A foliation of the ball by complete holomorphic discs • Mathematics • 2019
We show that the open unit ball $\mathbb{B}^n$ of $\mathbb{C}^n$ $(n>1)$ admits a nonsingular holomorphic foliation by complete properly embedded holomorphic discs.

Complete proper holomorphic embeddings of strictly pseudoconvex domains into balls
We construct a complete proper holomorphic embedding from any strictly pseudoconvex domain with $\mathcal{C}^2$-boundary in $\mathbb{C}^n$ into the unit ball of $\mathbb{C}^N$, for $N$ large enough …

Complete embedded complex curves in the ball of $\mathbb{C}^2$ can have any topology • Mathematics • 2016
In this paper we prove that the unit ball $\mathbb{B}$ of $\mathbb{C}^2$ admits complete properly embedded complex curves of any given topological type. Moreover, we provide examples containing any …

Every bordered Riemann surface is a complete conformal minimal surface bounded by Jordan curves • Mathematics • 2015
In this paper we find approximate solutions of certain Riemann-Hilbert boundary value problems for minimal surfaces in $\mathbb{R}^n$ and null holomorphic curves in $\mathbb{C}^n$ for any $n\ge 3$ …

Boundary continuity of complete proper holomorphic maps
We show that there is no complete proper holomorphic map from the open disc U in C to the bidisc UxU which extends continuously to the closed disc.

Complete bounded embedded complex curves in C^2 • Mathematics • 2013
We prove that any convex domain of C^2 carries properly embedded complete complex curves. In particular, we exhibit the first examples of complete bounded embedded complex curves in C^2.

Proper superminimal surfaces of given conformal types in the hyperbolic four-space
Let $H^4$ denote the hyperbolic four-space. Given a bordered Riemann surface $M$, we prove that every smooth conformal superminimal immersion $\overline M\to H^4$ can be approximated uniformly on …

Holomorphic Embeddings and Immersions of Stein Manifolds: A Survey
In this paper we survey results on the existence of holomorphic embeddings and immersions of Stein manifolds into complex manifolds. Most of them pertain to proper maps into Stein manifolds. We …

Noncritical holomorphic functions on Stein spaces
We prove that every reduced Stein space admits a holomorphic function without critical points. Furthermore, any closed discrete subset of such a space is the critical locus of a holomorphic function.

Null curves and directed immersions of open Riemann surfaces • Mathematics • 2014
In this paper we study holomorphic immersions of open Riemann surfaces into C^n whose derivative lies in a conical algebraic subvariety A of C^n that is smooth away from the origin. Classical …

#### References (showing 1-10 of 32)

Bordered Riemann surfaces in C2 • Mathematics • 2009
Abstract We prove that the interior of any compact complex curve with smooth boundary in C2 admits a proper holomorphic embedding into C2. In particular, if D is a bordered Riemann surface whose …

Embedding Certain Infinitely Connected Subsets of Bordered Riemann Surfaces Properly into ℂ2
We prove that certain infinitely connected domains D in a bordered Riemann surface, which admit a holomorphic embedding into C2, admit a proper holomorphic embedding into C2. We also prove that …

Embeddings of infinitely connected planar domains into C^2 • Mathematics • 2011
We prove that every circled domain in the Riemann sphere admits a proper holomorphic embedding into C^2. Our methods also apply to circled domains with punctures, provided that all but finitely many of …

Proper holomorphic embeddings of Riemann surfaces with arbitrary topology into $\mathbb{C}^2$ • Mathematics • 2011
We prove that given an open Riemann surface $N$, there exists an open domain $M\subset N$ homeomorphic to $N$ which properly holomorphically embeds in $\mathbb{C}^2$. Furthermore, $M$ can be chosen …

Existence of proper minimal surfaces of arbitrary topological type • Mathematics • 2009
Consider a domain D in R^3 which is convex (possibly all of R^3) or which is smooth and bounded. Given any open surface M, we prove that there exists a complete, proper minimal immersion f : M --> D.

Hyperbolic complete minimal surfaces with arbitrary topology
We show a method to construct orientable minimal surfaces in $\mathbb{R}^3$ with arbitrary topology. This procedure gives complete examples of two different kinds: surfaces whose Gauss map omits four points of …

Holomorphic curves in complex spaces • Mathematics • 2006
We study the existence of topologically closed complex curves normalized by bordered Riemann surfaces in complex spaces. Our main result is that such curves abound in any noncompact complex space …

Complete bounded holomorphic curves immersed in C^2 with arbitrary genus • Mathematics • 2008
In (MUY), a complete holomorphic immersion of the unit disk D into C^2 whose image is bounded was constructed. In this paper, we shall prove existence of complete holomorphic null immersions of …

Stein Manifolds and Holomorphic Mappings: The Homotopy Principle in Complex Analysis
Preliminaries. - Stein Manifolds. - Stein Neighborhoods and Holomorphic Approximation. - Automorphisms of Complex Euclidean Spaces. - Oka Manifolds. - Elliptic Complex Geometry and Oka Principle.

Null curves and directed immersions of open Riemann surfaces • Mathematics • 2014
In this paper we study holomorphic immersions of open Riemann surfaces into C^n whose derivative lies in a conical algebraic subvariety A of C^n that is smooth away from the origin. Classical …
2021-09-18 23:36:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9540175199508667, "perplexity": 978.7958116436599}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056578.5/warc/CC-MAIN-20210918214805-20210919004805-00571.warc.gz"}
https://proxies123.com/tag/rows/
performance – MySQL SELECT slow, but only 2 x 300K rows and indexes

Have the MySQL SELECT query below that is awfully slow. It takes ~1.0 second to execute despite the tables having only 300K rows and indexes, so I would love to find a way to get it to execute faster, since it's a query that needs to be run again and again. The query:

```
SELECT p.id, p.image, c.name, s.name, MIN(p.saleprice)
FROM products p
JOIN shops s ON p.shopid = s.id
JOIN products_category pc ON p.id = pc.product_id
JOIN categories c ON pc.category_id = c.id
WHERE brand_id > 0
  AND pc.category_id = 46
  AND pc.active = 1
  AND p.price > 0
  AND p.saleprice > 0
  AND p.saleprice < p.price
  AND (last_seen > DATE_SUB(NOW(), INTERVAL 2 DAY))
GROUP BY p.image
```

The query returns 960 rows. The table products has 300,000 rows and these columns:

```
id (int, primary key)
name (varchar 512)
image (varchar 512)
price (int)
saleprice (int)
last_seen (datetime)
```

It has one index across multiple columns in this order:

```
brand_id (int), shopid (int), last_seen (datetime), price (int), saleprice (int)
```

The table products_category also has 300,000 rows and these columns:

```
id (int, primary key)
product_id (int)
category_id (int)
active (int)
```

It has two indexes across multiple columns:

```
category_id (int), active (int)
product_id (int), active (int)
```

Based on similar questions here, I have tried nesting things with an inner select:

```
SELECT p.id, p.image, c.name, s.name, MIN(p.saleprice)
FROM (SELECT * FROM products
      WHERE brand_id > 0 AND price > 0 AND saleprice > 0
        AND saleprice < price
        AND (last_seen > DATE_SUB(NOW(), INTERVAL 3 DAY))) p
JOIN shops s ON p.shopid = s.id
JOIN products_category pc ON p.id = pc.product_id
JOIN categories c ON pc.category_id = c.id
WHERE pc.category_id = 46 AND pc.active = 1
GROUP BY p.image
```

It didn't help. The version with the inner select takes ~1.3 seconds to execute. The problem seems to be the join between products and products_category, i.e. the two big tables with 300K rows each. Maybe there's a trick I can do with my indexes? Or can any of you spot something else I should optimize? EXPLAIN of the query:

```
id select_type table partitions type   possible_keys                    key             key_len ref           rows  filtered Extra
1  SIMPLE      c     N          const  PRIMARY                          PRIMARY         4       const         1     100.00   Using temporary; Using filesort
1  SIMPLE      pc    N          ref    category_id etc,product_id etc   category_id etc 10      const,const   43104 100.00   Using where
1  SIMPLE      p     N          eq_ref PRIMARY,brand_id etc             PRIMARY         4       pc.product_id 1     5.00     Using where
1  SIMPLE      s     N          eq_ref PRIMARY                          PRIMARY         4       p.shopid      1     100.00   N
```
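One hedged thing to test, based on that EXPLAIN: the pc step reads ~43K rows via the (category_id, active) index and then has to fetch product_id from each row, so extending the index to cover the join column may help. The index name below is made up for illustration:

```sql
-- Hypothetical covering index; idx_cat_active_product is an assumed name.
ALTER TABLE products_category
  ADD INDEX idx_cat_active_product (category_id, active, product_id);
```

With product_id in the index, the pc lookup can become index-only; whether the temporary table and filesort from GROUP BY p.image also go away is a separate question.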
google sheets – Help me separate each student into separate rows without inputting the parent data each time

I am collecting data about parents and students. I need there to be a separate row for each student without entering the family data each time. I have always used the "pre-filled link" option to do this but need a less tedious way, and want to use a formula to do this on my response sheet. I have found some resources online but I cannot get any of the examples to work for me. Here is a copy of the response. I would like each row to include data from A-L (header data?) and make a new row for each student: M-T, U-AB, AC-AJ, AK-AR. This is an example, but the actual sheet will have more columns. This has 2 tabs: the first is how my responses are coming out, and the second tab is how I want it to look. Any help is greatly appreciated.

Is there a way to copy a selection spanning multiple rows, and paste them as merged cells spanning two rows each, in Google Sheets?

I'm going to be honest; my biggest issue is describing what I wish to accomplish. I can't find the right word for it, so the title might not make a lot of sense, but the pictures should be clear. I want to take this sheet, perform some operation, and end up with the second one. Currently this takes a lot of effort, particularly for large amounts of values. I first have to move each row down to get white rows between each row with values, and then merge them individually. That takes a lot of clicks, and I do this semi-regularly. If there is an extension that does this, or a way to do this less laboriously, I would be very happy.

Google Sheets chart only shows first ~250 rows?

I have a stacked area chart with an x-axis from B8:B350 and two series from C8:C350 and D8:D350 (so data range B8:D350), but the chart only displays the first ~250 rows, which is September to May. Any idea why it would do this, and what I could do to get the rest of my data to display?

google sheets – How do I convert rows with 45 columns into 15 separate rows, each with 3 columns?

I am working with a Google Sheet that gets data from an email parser. Each time an email comes in, a single row is created and fills in these columns: `B, C, D, E, F, G, H, I, J, K, L, M, N, O, P, Q, R, S, T, U, V, W, X, Y, Z, AA, AB, AC, AD, AE, AF, AG, AH, AI, AJ, AK, AL, AM, AN, AO, AP, AQ`. I'd like to have it output as:

```
B1 | C1 | D1
E1 | F1 | G1
....
AO1 | AP1 | AQ1
B2 | C2 | D2
....
```

Is this possible? I've tried using this:

`=FILTER({Sheet1!B:B,Sheet1!C:C,Sheet1!D:D;Sheet1!E:E,Sheet1!F:F,Sheet1!G:G;Sheet1!H:H,Sheet1!I:I,Sheet1!J:J;Sheet1!K:K,Sheet1!L:L,Sheet1!M:M;Sheet1!N:N,Sheet1!O:O,Sheet1!P:P;Sheet1!Q:Q,Sheet1!R:R,Sheet1!S:S;Sheet1!T:T,Sheet1!U:U,Sheet1!V:V;Sheet1!W:W,Sheet1!X:X,Sheet1!Y:Y;Sheet1!Z:Z,Sheet1!AA:AA,Sheet1!AB:AB;Sheet1!AC:AC,Sheet1!AD:AD,Sheet1!AE:AE;Sheet1!AF:AF,Sheet1!AG:AG,Sheet1!AH:AH;Sheet1!AI:AI,Sheet1!AJ:AJ,Sheet1!AK:AK;Sheet1!AL:AL,Sheet1!AM:AM,Sheet1!AN:AN;Sheet1!AO:AO,Sheet1!AP:AP,Sheet1!AQ:AQ},LEN(Sheet1!A:A,Sheet1!A:A,Sheet1!A:A))`

But it was just based on another answer I saw on here, and I am sure that I am not applying the `filter(range,len())` part correctly.
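A hedged repair sketch for that last formula: LEN takes a single argument, so the FILTER condition has to be a stacked copy of Sheet1!A:A, repeated once per three-column block (15 times in total, abbreviated with … here) so that its rows line up with the stacked data:

```
=FILTER(
  {Sheet1!B:B, Sheet1!C:C, Sheet1!D:D;
   Sheet1!E:E, Sheet1!F:F, Sheet1!G:G;
   … ;
   Sheet1!AO:AO, Sheet1!AP:AP, Sheet1!AQ:AQ},
  LEN({Sheet1!A:A; Sheet1!A:A; … ; Sheet1!A:A}))
```

LEN returning a nonzero value is coerced to TRUE by FILTER, so only blocks whose source row has a non-empty A cell survive.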
MySQL convert rows to column

I am working with 2 tables and need help to produce an output by converting rows to columns; I need to sum the values first by grouping. Here is the fiddle: https://www.db-fiddle.com/f/kmQjRvvensRTfYsSELxMF2/1

Here are the tables:

```
CREATE TABLE teacher (
  TeacherId INT,
  BranchId VARCHAR(5));

INSERT INTO teacher VALUES
  ("1121","A"), ("1132","A"), ("1141","A"),
  ("2120","B"), ("2122","B"), ("2123","B");

CREATE TABLE activities (
  ID INT,
  TeacherID INT,
  Hours INT);

INSERT INTO activities VALUES
  (1,1121,2), (2,1121,1), (3,1132,1), (4,1141,NULL),
  (5,2120,NULL), (6,2122,NULL), (7,2123,2), (7,2123,2);
```

My SQL:

```
SELECT totalhours hours, branchid, COUNT(*) total
FROM (
  SELECT COALESCE(y.hr,0) totalhours, x.branchid, x.teacherid
  FROM teacher x
  JOIN (
    SELECT teacherid, SUM(hours) hr
    FROM activities
    GROUP BY teacherid
    ORDER BY hr ASC
  ) y ON x.teacherid = y.teacherid
) a
GROUP BY hours, branchid
ORDER BY hours, branchid;
```

Output:

```
+---------------+-------------------+--------------------+
| hours         | branchid          | total              |
+---------------+-------------------+--------------------+
| 0             | A                 | 1                  |
| 0             | B                 | 2                  |
| 1             | A                 | 1                  |
| 3             | A                 | 1                  |
| 4             | B                 | 1                  |
+---------------+-------------------+--------------------+
```

Explanation: Table teacher consists of a teacher id and a branch id, while table activities consists of an id, a foreign key teacher id, and hours. Hours indicates the duration of each activity done by a teacher. A teacher can do more than one activity or may not do any activity; teachers who do no activity have hours set to NULL. The objective of the query is to produce a summary of teacher activity by branch, grouped by hours. In the expected output table, 'Hours' is a fixed scaffold of values in ascending order from 0 to 12; a row should display even when there is no hours value for A or B. The A and B columns are branches, and each value is the total number of teachers with that many activity hours. So, for row 0, there is 1 teacher in branch A and 2 teachers in branch B doing no activities.

Expected output:

```
+-----------+------------+------------+
| Hours     | A          | B          |
+-----------+------------+------------+
| 0         | 1          | 2          |
| 1         | 1          | 0          |
| 2         | 0          | 0          |
| 3         | 1          | 0          |
| 4         | 0          | 1          |
+-----------+------------+------------+
```
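A hedged sketch of the pivot step using conditional aggregation; the fixed 0..12 hours scaffold is generated inline here (abbreviated), which is an assumption about how you want to supply it:

```sql
-- Sketch: per-teacher totals, then pivot branches A and B into columns.
SELECT h.Hours,
       COALESCE(SUM(t.branchid = 'A'), 0) AS A,
       COALESCE(SUM(t.branchid = 'B'), 0) AS B
FROM (SELECT 0 AS Hours UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL
      SELECT 3 UNION ALL SELECT 4 /* ... continue up to 12 */) h
LEFT JOIN (
    SELECT x.branchid, COALESCE(SUM(a.Hours), 0) AS totalhours
    FROM teacher x
    LEFT JOIN activities a ON x.TeacherId = a.TeacherID
    GROUP BY x.TeacherId, x.branchid
) t ON t.totalhours = h.Hours
GROUP BY h.Hours
ORDER BY h.Hours;
```

In MySQL the boolean expression `t.branchid = 'A'` evaluates to 0/1, so SUM counts matching teachers; COALESCE turns the all-NULL rows of empty hour buckets into 0, matching the expected output above.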
Google Sheets formula to find rows with matching values, looking up in multiple columns

What would be the Google Sheets formula to search for a matching value in a range that goes across multiple rows and columns? For example, I need to search the entire range H:P (all rows and columns) and find the cells with a matching value, if any. Ultimately in this case I need just a list of the row numbers where a matching cell is found. In the screenshot there are two matches highlighted in green: one on O2 and one on M3. So in this case I need a result like "2,3". I have tried various things for several hours with no luck. Most examples of formulas that I could find and understand are about looking up in either a single column or a single row. Any help appreciated! Thank you!

sql server – Combine Rows with indirect relation

I am trying to create a report from a cloud-based EHR, so I cannot share real data, and some of these tables are fairly massive. I will try to minimize and share the bare minimum, and expand if someone needs more information to help. This should be fairly easy and I'm just having a brain fart, I think. I need to combine multiple answers into a single row as separate columns. Here is my query as it is; it does return all the answers, but every answer is generating a separate row. There will only ever be one answer for each question per visit id. There are a few catches to working with this system. At its heart it's SQL Server; however, queries are restricted to starting with 'select', making temp tables a bit more difficult. There can be no spaces, no blank lines, nothing before your select. This is their version of security, I guess. All reports are written through a web interface, with no direct access to the db in any way.

Current output:

```
clientvisit_id | client_id | members_present | patient_category
141001         | 2001      |                 |
141001         | 2001      |                 |
141001         | 2001      | Patient         |
141001         | 2001      |                 | Adult
```

Desired output:

```
clientvisit_id | client_id | members_present | patient_category
141001         | 2001      | Patient         | Adult
```

```
Select cv.clientvisit_id, cv.client_id,
From ClientVisit cv
Inner Join SavedVisitAnswer sva On sva.clientvisit_id = cv.clientvisit_id
Inner Join Question q On sva.question_id = q.question_id
Inner Join Category cat On q.category_id = cat.category_id
Inner Join FormVersion fv On cat.form_ver_id = fv.form_ver_id
Inner Join Forms On fv.form_id = Forms.form_id
Inner Join (Select
Where a1.question_id = '532096'
Inner Join (Select
Where a2.question_id = '532093'
Where Forms.form_id = '246'
```

sql server – Recommendations about deleting large set of rows in MSSQL

I need to delete about 75+ million rows every day from a table that contains around 3.5 billion records. The database recovery mode is simple. I have written code that deletes 15,000 rows in a while loop until all 75M records are deleted (I use batch deletes due to log file growth). However, at the current deletion speed it looks like it will take at least 5 days, which means data is arriving many times faster than I can delete it. Basically what I'm trying to do is summarize (in another table) and delete data older than 2 months. There are no update operations on that table, only inserts and deletes. I have an Enterprise edition of MSSQL 2017. Any suggestions will be welcome.
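A minimal sketch of the batched-delete pattern that question describes, written to keep log growth bounded under SIMPLE recovery; the table and column names here are placeholders, not from the question:

```sql
-- Placeholder names: dbo.EventLog / event_time. Assumes an index on event_time.
DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (15000) FROM dbo.EventLog
    WHERE event_time < DATEADD(MONTH, -2, SYSDATETIME());
    SET @rows = @@ROWCOUNT;
    -- Optional: CHECKPOINT here lets SIMPLE recovery truncate the log between batches.
END
```

At this volume, the alternative usually worth evaluating is partitioning the table by date so old data can be switched out and truncated instead of deleted row by row.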
optimization – Bounding 0-1 matrix with k unique rows

Problem Statement: Suppose that I have a $0$-$1$ matrix $A$ (all of the entries are $0$ or $1$). I wish to find the tightest upper bound with $k$ many unique rows. To be more precise, let $S$ denote the set of $0$-$1$ matrices $B$ such that $B$ has only $k$ unique rows and $A_{ij} \leq B_{ij}$ for all $i$ and $j$. Find

$$\min_{B \in S} \|A - B\|$$

Example: Suppose $k = 2$ and

$$A = \begin{bmatrix} 1 & 0 & 0 & 0 & 0\\ 1 & 1 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 1\\ 0 & 0 & 1 & 1 & 0 \end{bmatrix}$$

Then the optimal matrix $B$ is

$$B = \begin{bmatrix} 1 & 1 & 0 & 0 & 0\\ 1 & 1 & 0 & 0 & 0\\ 0 & 0 & 1 & 1 & 1\\ 0 & 0 & 1 & 1 & 1 \end{bmatrix}$$

since $B$ only has $2$ distinct rows, $A \leq B$, and $\|A - B\| = 3$ is minimized.

Question 1: This problem reminds me of the minimum $k$-union, set-union, and other NP-complete problems. Is this problem an NP-complete optimization problem?

Question 2: Is there an efficient way to obtain an approximately optimal matrix $B \in S$? Instead of minimizing $\|A - B\|$, can we get close to the minimum possible value?

So far, I have tried to cluster the rows of matrix $A$ using k-means. Then within each cluster $i$, I constructed a vector $v_i$ whose $j^{th}$ entry is $1$ if at least $p$ percent of the vectors in cluster $i$ have their $j^{th}$ entry equal to $1$. The vectors $v_i$ served as an initial guess for the possible rows of the matrix $B$; I then used a greedy algorithm. This has decent performance, but it's not great.
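A runnable sketch of the clustering heuristic described above, with one simplification relative to the question's thresholding step: once rows are grouped, the smallest feasible representative for a group is the entrywise OR of its rows (this guarantees $A_{ij} \leq B_{ij}$ and at most $k$ unique rows), so the sketch uses the OR instead of the $p$-percent vote:

```python
import numpy as np
from sklearn.cluster import KMeans  # assumed available

def cover_with_k_rows(A: np.ndarray, k: int, seed: int = 0) -> np.ndarray:
    """Heuristic upper bound: B >= A entrywise with at most k unique rows.

    Rows of A are grouped by k-means; each group's rows are replaced by the
    group's entrywise OR, the smallest row dominating every member.
    """
    labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(A)
    B = np.zeros_like(A)
    for c in range(k):
        mask = labels == c
        if mask.any():
            B[mask] = A[mask].max(axis=0)  # entrywise OR of the group's rows
    return B

A = np.array([[1, 0, 0, 0, 0],
              [1, 1, 0, 0, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 1, 1, 0]])
B = cover_with_k_rows(A, k=2)
print(B)
print("cost:", int(np.abs(A - B).sum()))  # 3 if k-means finds the natural grouping
```

For a fixed partition of the rows, the OR representative is exactly optimal, so the remaining search is over partitions; a greedy refinement step could move rows between groups when that lowers the cost.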
2020-07-11 11:39:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 34, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24833893775939941, "perplexity": 2217.633741294998}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655929376.49/warc/CC-MAIN-20200711095334-20200711125334-00451.warc.gz"}
https://www.physicsforums.com/threads/f-ma-2010-exam-24-moi-after-shift-in-cog.667631/
# F = MA 2010 Exam # 24 (MoI after shift in CoG)

1. Jan 28, 2013

### SignaturePF

1. The problem statement, all variables and given/known data
See Number 24

2. Relevant equations
CoM = (M1x1 + M2x2) / (M1 + M2)
MoI of a disk: (1/2)mr^2

3. The attempt at a solution
So what I was thinking was to find the new CoM, then use the parallel-axis theorem. To find the new CoM:
(MsXs + MbXb) / (Ms + Mb) = 0, where s is the small unshaded part and b is the big, shaded portion.
MsXs = -MbXb
Xb = -(Ms/Mb)Xs
Xb = (-R)(ρπR^2) / ρ(π(2R)^2 - πR^2)
Xb = R/3
Parallel-axis theorem with the smaller mass:
(1/2)MR^2 + (1/9)MR^2 = (11/18)MR^2
Yeah, I'm pretty lost. I feel like there is a much better solution.

2. Jan 28, 2013

### ModusPwnd

I like your idea of using the parallel axis theorem. Naively I might consider the moment of inertia for each circle about each circle's center. But we know that the circle cut out never rotated about its own center; before being cut out, it rotated about the larger circle's center. So I would start by finding the moment of inertia of the small circle rotating about the large circle's axis. Does that make sense? If you get that, you will have the moment of inertia for the large disk about its center and one for the small disk about the large disk's center. Of course you want neither of those; you want the moment of inertia for a large disk with a small disk cut away.

3. Jan 28, 2013

### SignaturePF

Ok, going along with what you said:
The big disk is simply (1/2)MR^2.
Little disk about the big disk's center, where r = R/2 is both the hole's radius and the distance of its center from the big disk's axis:
(1/2)mr^2 + mr^2 = (3/2)mr^2
So (3/2)mr^2 is the MoI about the big disk's center for the little guy. But we have two different masses, and if we bust out ρ it won't end up cancelling.

4. Jan 28, 2013

### ModusPwnd

Should be uniform density, right? So m, the little disk's mass, should be proportional to M in the same way the areas of each are proportional. One square meter of the stuff always has the same mass. Figure out what the ratio of masses is and then you can eliminate m.

5. Jan 28, 2013

### SignaturePF

Because of the area ratio, m should be (1/4)M. This makes it (3/32)MR^2. Subtracting (3/32)MR^2 (MoI of the little disk about the big disk's center) from (1/2)MR^2 (MoI of the big disk) gives (13/32)MR^2. Aha! By the way, in general can you find the moment of inertia this way; that is, if you have a shape with a hole, can you take the MoI of the original shape about its center and subtract the MoI of the hole about that same axis?

6. Jan 28, 2013

### ModusPwnd

Yes, I think so. Make sure that you stick to the same axis by using the theorem. Some things that might cause a problem would be irregular shapes or non-constant mass density; each of those would need some calculus. But if you have "nice" shapes and "nice" mass density (constant or maybe linear), then this scheme should work. Makes sense, right? Envision spinning the disk around on a stick, then think about spinning the little disk mounted at its edge. Each of those has a resistance to being spun. This comes from the mass needing force and energy to get up to speed. Each little piece has its own need of force and energy, its own contribution to the resistance to being spun, its own moment of inertia. If you take away that piece, you take away that resistance, and you literally subtract away the moment of inertia for that piece.

7. Jan 28, 2013

### tms

Since the moment of inertia is defined as $$I = \int r^2\,dm,$$ and since integrals are a fancy form of addition, moments of inertia are additive.
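A compact restatement of the thread's final computation (with $M$ and $R$ the full disk's mass and radius, and $m = M/4$ the mass of the removed disk of radius $R/2$ centered a distance $R/2$ from the axis):

$$I_{\text{hole}} = \tfrac{1}{2} m \left(\tfrac{R}{2}\right)^{2} + m \left(\tfrac{R}{2}\right)^{2} = \tfrac{3}{2}\cdot\tfrac{M}{4}\cdot\tfrac{R^{2}}{4} = \tfrac{3}{32} M R^{2}, \qquad I = \tfrac{1}{2} M R^{2} - I_{\text{hole}} = \tfrac{13}{32} M R^{2}.$$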
2017-08-20 18:34:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41393953561782837, "perplexity": 1010.5998936699739}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886106865.74/warc/CC-MAIN-20170820170023-20170820190023-00452.warc.gz"}
https://www.qb365.in/materials/stateboard/11th-standard-chemistry-fundamentals-of-organic-chemistry-english-medium-free-online-test-1-mark-questions-with-answer-key-2020-2021-5324.html
#### 11th Standard Chemistry Fundamentals of Organic Chemistry English Medium Free Online Test 1 Mark Questions with Answer Key 2020 - 2021

11th Standard
Reg.No. :
Chemistry
Time : 00:20:00 Hrs
Total Marks : 20

20 x 1 = 20

1. In the hydrocarbon $\overset { 7 }{ { CH }_{ 3 } } -\overset { 6 }{ { CH }_{ 2 } } -\overset { 5 }{ CH } =\overset { 4 }{ CH } -\overset { 3 }{ { CH }_{ 2 } } -\overset { 2 }{ C } \equiv \overset { 1 }{ CH }$, the states of hybridisation of carbons 1, 2, 3, 4 and 7 are in the following sequence:
(a) sp, sp, sp3, sp2, sp3 (b) sp2, sp, sp3, sp2, sp3 (c) sp, sp, sp2, sp, sp3 (d) none of these

2. Which one of the following names does not fit a real name?
(a) 3-Methyl-3-hexanone (b) 4-Methyl-3-hexanone (c) 3-Methyl-3-hexanol (d) 2-Methylcyclohexanone

3. The IUPAC name of ${ CH }_{ 3 }-\overset { \underset { | }{ H } }{ \underset { \overset { | }{ { C }_{ 2 }{ H }_{ 5 } } }{ C } } -\overset { \underset { | }{ { C }_{ 4 }{ H }_{ 9 } } }{ \underset { \overset { | }{ { CH }_{ 3 } } }{ C } } -{ CH }_{ 3 }$ is
(a) 3,4,4-Trimethylheptane (b) 2-Ethyl-3,3-dimethylheptane (c) 3,4,4-Trimethyloctane (d) 2-Butyl-2-methyl-3-ethylbutane

4. The IUPAC name of the compound [structure not captured] is
(a) 3-Ethyl-2-hexene (b) 3-Propyl-3-hexene (c) 4-Ethyl-4-hexene (d) 3-Propyl-2-hexene

5. The IUPAC name of [structure not captured] is
(a) 2-Bromo-3-methylbutanoic acid (b) 2-Methyl-3-bromobutanoic acid (c) 3-Bromo-2-methylbutanoic acid (d) 3-Bromo-2,3-dimethylpropanoic acid

6. The isomer of ethanol is
(a) acetaldehyde (b) dimethyl ether (c) acetone (d) methyl carbinol

7. Tetravalency of carbon is possible in the case of
(a) sp3-hybridisation (b) sp2-hybridisation (c) sp-hybridisation (d) All of these

8. Which of the following compounds has the maximum number of primary H-atoms?
(a) CH4 (b) CH3-CH2-CH3 (c) $\underset { \underset { { CH }_{ 3 } }{ | } }{ { H }_{ 5 }{ C }_{ 2 }-{ CH }-{ CH }_{ 3 } }$ (d) C(CH3)4

9. The compound which has one isopropyl group is
(a) 2,2,3,3-Tetramethylpentane (b) 2,2-Dimethylpentane (c) 2,2,3-Trimethylpentane (d) 2-Methylpentane

10. The structure of the compound whose IUPAC name is 3-ethyl-2-hydroxy-4-methylhex-3-en-5-ynoic acid is
(a) (b) (c) (d) [structures not captured]

11. Which one of the following pairs represents stereoisomerism?
(a) Chain isomerism and rotational isomerism (b) Structural isomerism and geometrical isomerism (c) (d) Optical isomerism and geometrical isomerism

12. Purification of two miscible liquids possessing very close boiling points can be achieved using
(a) Fractional distillation (b) Sublimation (c) Simple distillation (d) Steam distillation

13. The principle involved in paper chromatography is
(a) partition (b) sublimation (c) (d) filtration

14. Which of the following is not an organic compound?
(a) DNA (b) Lipid (c) Glycogen (d) Bronze

15. Which one of the following is the functional group of a ketone?
(a) -CHO (b) $-\overset { \overset { O }{ || } }{ C } -$ (c) -O- (d) -OH

16. Which one of the following is commonly called mesitylene?
(a) (b) (c) (d) [structures not captured]

17. Which one of the following is called ferric ferrocyanide?
(a) Na4[Fe(CN)6] (b) Na4[Fe(CN)6]3 (c) Fe4[Fe(CN)6] (d) Fe4[Fe(CN)6]3

18. Which one of the following is the formula of sodium nitroprusside?
(a) Na4[Fe(CN)5NO] (b) Na4[Fe(CN)5SON] (c) Na4[Fe(CN)6] (d) Fe4[Fe(CN)6]3

19. Which of the following will absorb CO2?
(a) Conc. H2SO4 (b) KOH (c) HCl (d) Copper

20. Which method is used to estimate sulphur?
(a) Lassaigne's test (b) Oxide test (c) Carius method (d) Kjeldahl's method
2021-03-05 01:04:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39521223306655884, "perplexity": 13657.439120586518}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178369553.75/warc/CC-MAIN-20210304235759-20210305025759-00202.warc.gz"}
https://mailman.ntg.nl/pipermail/ntg-pdftex/2005-June/001320.html
# [NTG-pdftex] Version numbering

Martin Schröder martin at oneiros.de
Fri Jun 10 01:48:41 CEST 2005

On 2005-06-08 18:06:12 +0200, Heiko Oberdiek wrote:
> I do not mind if this is a character and thought there are some
> requests to have numbers.

Why does a software care for _bugfix_ levels? Workarounds?

> > > Thus we have already A, B, and C. What we really need is rather
> > > a specification:
> > > * data type: A, B, C are numbers (/strings), range
> >
> > A \in [1..\infty]
> > B \in [0..99]
> > C \in "0".."9" (currently it's "a".."z")
> >
> > B is only increased by 10.
> >
> > > * how does the version look like, formatting issues
> >
> > A.BB.C
>
> There are some contradictions. Here the second B is always zero.
> Taco takes the second B as patchlevel, but what is \pdftexrevision
> then?

Starting with 1.30, B will be \in [30,40,50,60,70,80,90]; starting with 2.0, B will be \in [0..9]. \pdftexrevision will always be in "0".."9".

> Also already the tenth version forces the main version A to
> be incremented.

Yes. 1.90, 2.0, .. 2.9, 3.0 ..

Best regards
Martin
--
http://www.tm.oneiros.de
2022-05-21 02:38:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9928639531135559, "perplexity": 8388.840555588304}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662534773.36/warc/CC-MAIN-20220521014358-20220521044358-00069.warc.gz"}
https://www.torontochurchplanting.ca/national-coordinating-npxbarv/hash-function-for-coordinates-047a2f
Select Page halfMD5 . Multiplying the x position by -1 will give the same result. iliary coordinates. Morton, which encodes a geographic location into a short string of letters and digits. Otherwise, go back to Step 2. GeoGeometry class with methods that allow you to: Calculate distance between two coordinates using the Haversine- algorithm. I had an interesting math problem today. In an off-line step, the objects are encoded by treating each pair of points as a geometric basis. Do any two distinct colors map to the same hashcode? The Color class includes a custom hash function. Learning codes and hash functions using auxiliary coordinates The optimization of the loss L(h) in eq. There are two ways to detect mirrored objects. Image retrieval experimentsshowthe resulting hash function outperforms or is competitive with state-of-the-art methods for binary hashing. real hashing function, evaluated at runtime without lookup tables. The candidate basis is accepted if a sufficiently large number of the data points index a consistent object basis. This is referred to as a hash function - not to be confused with random number generators, where each random number is dependent on the previous one. Refer to Sec. You can also use this function to transform a local point to page coordinates, or vice versa. = − 1 +2⋯( −1) / = −((−1) 2)≈2. A special case of hashing is known as geometric hashing or the grid method. Proper hash codes. We need to specify the rule so that the compiler knows what to do. All gists Back to GitHub. Trivial solution: make a hash key out of the lat/long pair, and hash that. 1. collision When a hash function maps two different keys to the same table address, a collision is said to occur. Turns out my hash code algorithm was stupid. Embed Embed this gist in your website. Our approach: Learning codes and hash functions using auxiliary coordinates. Bob generates a hash value of the message using the same hash function. The method could be used to recognize one of the multiple objects in a base, in this case the hash table should store not only the pose information but also the index of object model in the base. A locality-preserving hashing is a hash function f that maps a point or points in a multidimensional coordinate space to a scalar value, such that if we have three points A, B and C such that | − | < | − | ⇒ | − | < | − |. The output I ideally would look like this:fn(0, 0, 0) = 0fn(1, 0, 0) = 1fn(0, 1, 0) = 2fn(1, 1, 0) = 3fn(0, 0, 1) = 4etc. Please note that a digital signature proves the integrity of a message but does not actually encrypt it. A locality-preserving hashing is a hash function f that maps a point or points in a multidimensional coordinate space to a scalar value, such that if we have three points A, B and C such that | − | < | − | ⇒ | − | < | − |. The z-axis is perpendicular to the created axis using the right-hand rule. This closes the loop and optimizes jointly over the hash functions and the binary codes so that they gradually match each other. However, more importantly, this hash function works for integer coordinates, but how can hash real coordinates? Geohash is a public domain geocode system invented in 2008 by Gustavo Niemeyer and (similar work in 1966) G.M. Extremely efficient in practice. Quantize obtained coordinates as it was done before. position n+1 from the top. Sec. Post by Stefano Zaghi However, more importantly, this hash function works for integer coordinates, but how can hash real coordinates? Features. 
This pairing function only works with positive numbers, but if we want to be able to use negative coordinates, we can simply add this to the top of our function: x = if x >= 0 then 2 * x else -2 * x - 1y = if y >= 0 then 2 * y else -2 * y - 1z = if z >= 0 then 2 * z else -2 * z - 1. Which hash functions should we use? For each basis such that the count exceeds a certain threshold, verify the hypothesis that it corresponds to an image basis chosen in Step 2. The resulting algorithm can be seen as an iter- ated version of the procedure of optimizing first over the codes and then learning the hash function. In the on-line (recognition) step, randomly selected pairs of data points are considered as candidate bases. One reason is that Nisan’s pseudorandom number generator [Nis92] lets us store the hash functions with only a log nfactor increase in space. Geometric hashing is a method used for object recognition. From: Cryptographic Boolean Functions and Applications, 2009. The resulting algorithm can be seen as a corrected, iterated version of the procedure of optimizing first over the codes and then learning the hash function. The hashcode of an integer in .NET is just the value of that integer. As we’ve mentioned before, all player location information is kept private. Actually, using 3 points for the basis is another approach for geometric hashing. In 1985, Ken Perlin wrote a Siggraph paper called "An Image Synthetizer" in which he presented a type of noise function similar to the one we studied in the previous lesson (Noise Part 1) but slightly better. Then combines hashes, takes the first 8 bytes of the hash of the resulting string, and interprets them as UInt64 in big-endian byte order. PH(,) ≈1 ⋅−1 /⋅2 ⋯−(−1)/. He also decrypts the hash value using Alice’s public key and compares the two hashes. 2n distinct hash values. Traditionally the hash functions are considered in a form of h(v) = f(v) mod m, where m is considered as a prime number and f(v) is a function over the element v, which is generally of „unlimited“ dimensionality and/or of „unlimited“ range of values. (1) is difficult because of the thresholded hash function, which appears as the argument of the loss function L. We use the recently proposed method of auxiliary coordinates (MAC) [1], which is a meta-algorithm to construct optimization algorithms for nested functions. Then if you have the key, by definition you have the coordinates. A hash function is a function that converts a variable size sequence of bytes (a string, a file content etc.) The LOCTOLOC function converts a point from local coordinates in a source shape to local coordinates in a destination shape. The MiMC Hash Function. •Most methods do not scale beyond a few thousand training points. Last active Feb 9, 2016. FNV-1 is rumoured to be a good hash function for strings.. For long strings (longer than, say, about 200 characters), you can get good performance out of the MD4 hash function. Optimizing affinity-based binary hashing using auxiliary coordinates: Reviewer 1 Summary. Here’s a visual comparison: This is nice because you could, for instance, fit two 16-bit integers into a single 32-bit integer with no collisions. The hash function hash maps the discretized 3D position (i,j,k) to a 1D index hand the vertex and object information is stored in a hash table at this index h: h = hash(i,j,k). 
For each candidate basis, the remaining data points are encoded according to the basis and possible correspondences from the object are found in the previously constructed table. Then if we wish to run Count-Sketch on multiple di erent vectors, we can reuse the hash functions. Note. In practice, this is approximated, and a successful way to do this is binary hashing [12]. Let s be the source node of a put(K,D,Q) operation. For a pixel with coordinates $\{ r, g, b, a \}$, the corresponding hashcode (at least in version 8 of the JDK) is $2^{24} \times a + 2^{16} \times r + 2^8 \times g + b . The general problem of binary hashing is: given a metric/similarity/affinity, find the best hash function mapping the original objects into Hamming space of fixed dimension, while preserving the distances/affinity, etc. The 4-bit window Pedersen hash function is a secure hash function which maps a sequence of bits to a compressed point on an elliptic curve (Libert, Mouhartem, and Stehlé, n.d.). In the view of implementation, this hash function can be encoded using remainder operator or using bitwise AND with 127. Choose an arbitrary basis. It takes some time to find constants which give good visual results and also to find a specific area of the noise which is most free from … The hash function which is working best for me takes the form hash = mod( coord.x * coord.x * coord.y * coord.y, SOMELARGEFLOAT ) / SOMELARGEFLOAT. Even one tiny change to the original input should result in an entirely different hash value. Namespace: System.Management.Automation.Host Assembly: System.Management.Automation.dll Package: Microsoft.PowerShell.5.1.ReferenceAssemblies v1.0.0 4.3 describe how to find the opti-mal hash …$ Question B2: Given that hashcodes are 32-bit integers, is every hashcode realizable by some Color object? I could do something something simple like concatenate the string forms of the unsigned integers, but then collisions would happen sooner. learning hash functions using affinity-based loss functions that uses auxiliary coordinates. •the hash function must output binary values, hence the problem is not just generally nonconvex, but also nonsmooth. In 2004 Joshua Bloch "went so far as to call up Dennis Ritchie, who said that he did not know where the hash function came from. TIL the current hash function for Java strings is of unknown author. The hash function which is working best for me takes the form hash = mod( coord.x * coord.x * coord.y * coord.y, SOMELARGEFLOAT ) / SOMELARGEFLOAT. This allows detecting mirror images (or objects). Assuming, that hash function distributes hash codes uniformly and table allows dynamic resizing, amortized complexity of insertion, removal and lookup operations is constant. This hash function provides CAN-based coordinates that determine where a triple should be stored. As a cryptographic function, it was broken about 15 years ago, but for non cryptographic purposes, … Geohash is a public domain geocode system invented in 2008 by Gustavo Niemeyer and (similar work in 1966) G.M. SQL Reference; Functions; Hash Functions . For simplicity, this example will not use too many point features and assume that their descriptors are given by their coordinates only (in practice local descriptors such as SIFT could be used for indexing). The resulting algorithm can be seen as a corrected, iterated version of the procedure of optimizing first over the codes and then learning the hash function. SQL Reference; Functions; Hash Functions . 
I found this really interesting pairing function by Matthew Szudzik (via StackOverflow) that assigns numbers along the edges of a square instead of the traditional Cantor method of assigning diagonally. The 3D version simply offsets the SOMELARGEFLOAT value by a fraction of the Z coordinate. If successful, the object is found. We assume each peer stores RDF data and can easily sort triples alphabetically (using index trees for instance). Therefore, geometric hashing should be able to find the object, too. Non-trivial solution: use spatial hashing. For a pixel with coordinates $\{ r, g, b, a \}$, the corresponding hashcode (at least in version 8 of the JDK) is 2^{24} \times a + 2^{16} \times r + 2^8 \times g + b . keyed hash function (prefix-MAC) BLAKE3: arbitrary keyed hash function (supplied IV) HMAC: KMAC: arbitrary based on Keccak MD6: 512 bits Merkle tree NLFSR: One-key MAC (OMAC; CMAC) PMAC (cryptography) Poly1305-AES: 128 bits nonce-based SipHash: 64 bits non-collision-resistant PRF HighwayHash: 64, 128 or 256 bits non-collision-resistant PRF UMAC: VMAC: Unkeyed cryptographic hash functions… It takes some time to find constants which give good visual results and also to find a specific area of the noise which is most free from … iliary coordinates. 4.3 describe how to find the opti-mal hash … Instead, only the hashes of the coordinates of your planets are uploaded to the Dark Forest core contract. Rob Edwards from San Diego State University demonstrates a common method of creating an integer for a string, and some of the problems you can get into. After a lot of scribbling in my notebook, I came up with this formula: function(x, y, z) { max = MAX(x, y, z) hash = max^3 + (2 * max * z) + z if (max == z) hash += MAX(x, y)^2 if (y >= x) hash += x + y else hash += y return hash}. •The b single-bit hash functions … Interprets all the input parameters as strings and calculates the MD5 hash value for each of them. The coordinates should be discretised to make recognition, Repeat the process for a different basis pair (Step 2). Similar to the example above, hashing applies to higher-dimensional data. compute the projections to the new coordinate axes. So the hashcodes of coordinates (1,2,3), (3,2,1), (1,3,2) etc were all the same. Consider a point in a D-dimensional space x= (x 1;x 2;:::;x D) ;D coordinates. Order of insertions Theorem: The set of occupied cell and the total number of probes done while inserting a set of items into a hash table using linear probing does not depend on the order in which the items are inserted Exercise: Prove the theorem Exercise: Is the same true for uniform probing? real hashing function, evaluated at runtime without lookup tables. Sign in Sign up Instantly share code, notes, and snippets. Similarly, if two keys are simply digited or character permutations of each other (such as 139 and 319), they should also hash into different values. Permalink. Hc (K) returns a pair of geographic coordinates (x, y) as the destination of the packet Pp =<(x,y),>. Hash Function. Find interesting feature points in the input image. Here, given a high-dimensional vector x∈ RD, the hash function hmaps it to a b-bit vector z = h(x) ∈ {−1,+1}b, and the nearest neighbor search is then done in the binary space. to a fixed size sequence of bytes, called digest.This means that hashing a file of any length, the hash function will always return the same unique sequence of bytes for that file. However, the input image may contain the object in mirror transform. 
Most hash tables cannot have identical keys mapped to different values. In computer science, geometric hashing is a method for efficiently finding two-dimensional objects represented by discrete points that have undergone an affine transformation, though extensions exist to other object representations and transformations. The default hash function applied by all peers of Figure 1 for all dimensions is shown on Figure 3. These are the two prominent qualities of cryptographic hash functions. Hash Function. Using a hash function N !N, it is evaluated on each component of the noise function input, but linked to the previous component evaluation in a similar way Perlin linked to its permutation evaluation. 4.2 and Sec. Hash functions are an essential ingredient of the Bloom filter, a space-efficient probabilistic data structure that is used to test whether an element is a member of a set. And XORing integers together produces the same result, regardless of the order. This closes the loop and optimizes jointly over the hash functions and the binary codes so that they gradually match each other. Geometric hashing was originally suggested in computer vision for object recognition in 2D and 3D,[1] but later was applied to different problems such as structural alignment of proteins.[2][3]. using affinity-based loss functions that uses auxiliary coordinates. Keywords: Perlin noise, gradient noise, permutation, hashing function, derivatives, interpolant, height map, displacement. The input u and outputs x and y are elements of the field F. The affine coordinates (x, y) specify a point on an elliptic curve defined over F. Note that the point (x, y) is not a uniformly random point. eight bytes if each coordinate value is a 32-bit integer. Skip to content. Interprets all the input parameters as strings and calculates the MD5 hash value for each of them. So in real life one won’t encode basis keys (1.0, 0.0) and (-1.0, 0.0) in a hash table. In computer science, geometric hashing is a method for efficiently finding two-dimensional objects represented by discrete points that have undergone an affine transformation, though extensions exist to other object representations and transformations. Refer to Sec. If the point features are identical or similar, then increase the count for the corresponding basis (and the type of object, if any). [x-post /r/java] For each point, its quantizedtransformed coordinates a… What would you like to do? This closes the loop and optimizes jointly over the hash functions and the binary codes so that they gradually match each other. •While the gradients of the objective function do exist wrt W, they are zero nearly everywhere. In our algorithm, we use a hash function h to map grid cell “addresses” of the form (a,b,c,l) ∈Z4into a hash ta- ble. learning hash functions using affinity-based loss functions that uses auxiliary coordinates. The hash function hash maps the discretized 3D position (i,j,k) to a 1D index hand the vertex and object information is stored in a hash table at this index h: h = hash(i,j,k). mbostock /.block. because fully random hash functions would take up more space than the sketch itself, but there are reasons why this constraint is not too problematic. 3) The hash function "uniformly" distributes the data across the entire set of possible hash values. Actual time, taken by those operations linearly depends on table's load factor. He walked across the hall and asked Brian Kernighan, who also had no recollection." Share Copy sharable link for this gist. 
If the hash function h were a continuous function of its input x and its parameters, one could simply apply the chain rule to compute derivatives of the objective function (1) over the parameters of h, and then apply a nonlinear optimization method such as gradient descent. Because the hash function outputs binary values, the problem is not just nonconvex but nonsmooth; the underlying problem of finding the binary codes for the points is an NP-complete optimization over Nb variables. In this paper, we introduce and analyze a simple objective for learning hash functions, develop an efficient coordinate-descent algorithm, and demonstrate that the proposed approach leads to improved results compared to existing hashing techniques. This reformulates the optimization as alternating two easier steps: one that learns the encoder and decoder separately, and one that optimizes the code for each image. Image retrieval experiments show the resulting hash function outperforms or is competitive with state-of-the-art methods for binary hashing. Hash functions can also be used for the deterministic pseudo-random shuffling of elements. I needed to get a deterministic number from three ordered numbers--a deterministic seed from x, y, z coordinates--allowing as much room as possible before collisions occur; this can be accomplished with geometric hashing, though that method only handles scaling, translation, and rotation. Power-of-two-sized tables are often used in practice (for instance in Java). When they are used, a special hash function is applied in addition to the main one; this measure prevents collisions occurring for hash codes that do not differ in the lower bits. We use a hash function to associate 3D block coordinates with entries in a hash table, which in our current implementation is the same as in [16]; see Sec. 4.1 for details. A hash function that simply extracts a portion of a key is not suitable: a hash function is basically a mathematical operation that defines how we transform the input, and a collision becomes highly likely when the table size 2^i is much less than the number of keys. Even a substantially overloaded hash table based on chaining shows good performance, and this particular table allows only integers as values. You could put these hashes into a database or search engine to implement polygon search. The halfMD5 approach combines the hashes, takes the first 8 bytes of the MD5 of the resulting string, and interprets them as a UInt64 in big-endian byte order. The Color class includes a custom hash function. A spectacular example of hashing without native support was done over 3½ years ago with an MD5 hash function written in Excel without using VBA.
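The "special hash function applied in addition to the main one" is a bit-spreading step. A minimal sketch of the idea, using the same mixing trick JDK 8's HashMap applies before masking (the name `spread` and the demo values are mine):

```python
def spread(h: int) -> int:
    """Fold the upper 16 bits of a 32-bit hash into the lower 16.

    A power-of-two table computes its index as h & (capacity - 1), which
    looks only at low bits; this mixing step lets high-bit differences
    affect the bucket choice.
    """
    h &= 0xFFFFFFFF
    return h ^ (h >> 16)

capacity = 128                 # power of two
h1, h2 = 0x10000, 0x20000      # hash codes that differ only in high bits
assert h1 & (capacity - 1) == h2 & (capacity - 1)                   # collide without spreading
assert spread(h1) & (capacity - 1) != spread(h2) & (capacity - 1)   # distinct buckets after
```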
The remaining points can be represented in an invariant fashion with respect to this basis using two parameters: describe the feature locations with respect to that basis, i.e., compute the projections onto the new coordinate axes. For each point, its quantized transformed coordinates are stored in the hash table as a key, with the indices of the basis points as the value. Notice that the order of the points affects the resulting basis. Assume that 5 feature points are found in the model image with known coordinates, and introduce a basis to describe their locations; the first two points define the x-axis, and the third point defines the y-axis (with the first point). For three-dimensional data, use 3 points for the basis. Then transfer the image coordinate system to the model one (for the supposed object) and try to match them: compare all the transformed point features in the input image with the hash table. If there isn't a suitable arbitrary basis, then it is likely that the input image does not contain the target object. Our hash function maps an infinite set of possible input keys K onto a finite set of hash values {0, 1, ..., m−1}: h(a, b, c, l) → {0, 1, ..., m−1} (4), where m is the chosen hash table size (Figure 3: default hash function). How can we hash real coordinates? By scaling each real by some power of 10, so that the result is an integer in 32 bits--but then collisions happen sooner. Specifically, I was trying to get a random seed based on x, y, z coordinates: the seed would always be the same at a given location, and collisions would only occur very far from the origin (ideally as far as possible). Here we discuss how to develop a good elementary hash function for the l2 (Euclidean) distance. Procedural generation is not the typical use of hash functions, though, and not all hash functions are well suited for it: they may not have a sufficiently random distribution, or may be unnecessarily expensive. Linear probing is a simple re-hashing scheme in which the next slot in the table is checked on a collision; when the table is large (much larger than the number of stored keys), collisions are rare (an approximation is given below). Has anybody found or created a way to do more secure SHA256 or SHA512 hashing in Excel, without using VBA or macros? References for geometric hashing: "Three-dimensional model-based object recognition and segmentation in cluttered scenes"; "The LabelHash algorithm for substructure matching"; "Efficient detection of three-dimensional structural motifs in biological macromolecules by computer vision techniques" (see the Wikipedia article "Geometric hashing" for full citations).
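A small sketch of the basis-invariant key computation just described, for the 2D case; the function name, the bin size, and the dict-as-hash-table convention are my assumptions rather than anything fixed by the sources.

```python
import math

def invariant_key(p, b0, b1, bin_size=0.25):
    """Express point p in the frame defined by basis points (b0, b1),
    then quantize, giving a key that is invariant under translation,
    rotation, and uniform scaling of the whole point set."""
    ex = (b1[0] - b0[0], b1[1] - b0[1])        # x-axis: b0 -> b1
    scale = math.hypot(*ex) or 1.0
    ex = (ex[0] / scale, ex[1] / scale)
    ey = (-ex[1], ex[0])                       # y-axis: ex rotated 90 degrees
    d = (p[0] - b0[0], p[1] - b0[1])
    u = (d[0] * ex[0] + d[1] * ex[1]) / scale  # invariant coordinates
    v = (d[0] * ey[0] + d[1] * ey[1]) / scale
    return (round(u / bin_size), round(v / bin_size))

# Entries (key -> (model id, basis index)) can be stored in an ordinary
# dict used as the hash table; at recognition time the same keys vote
# for candidate (model, basis) pairs.
```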
The 3D version simply offsets the SOMELARGEFLOAT value by a fraction of the Z coordinate. Because only hashes go on-chain, the coordinates of all of your planets are never uploaded to the blockchain, where all data is publicly accessible. I would like to similarly count along the edges of cubes, the way Szudzik's function counts along the edges of squares; the calculations involved in the Szudzik function are also less intensive than Cantor's. The inbuilt hash function expects a predefined data type as input, so that it can hash the value; a custom hash for a user-defined key must be a class that overrides operator() and calculates the hash value given an object of the key-type. A hash-to-curve function derives the coordinates of a point on the elliptic curve over the finite field from a hash. Describe the coordinates of the feature points in the new basis; then a new pair of basis points is selected, and the process is repeated. For 2D space, describe feature locations with respect to the chosen basis; for the vector graph, make the left side positive and the right side negative. Geohashes are calculated with the algorithm in GeoHashUtils, encoding a geographic location into a short string of letters and digits. A special case of hashing is known as geometric hashing or the grid method. The main idea in learned binary hashing is to construct hash functions that explicitly preserve the input distances when mapping to the Hamming space. Let's say we want to check whether a model image can be seen in an input image: in an off-line step, the objects are encoded by treating each pair of points as a geometric basis. Do any two distinct colors map to the same hashcode? Question B2: Given that hashcodes are 32-bit integers, is every hashcode realizable by some Color object? One simple table-reduction choice is the remainder of division by 128. Here is the pairing formula again: function(x, y, z) { max = MAX(x, y, z); hash = max^3 + (2 * max * z) + z; if (max == z) hash += MAX(x, y)^2; if (y >= x) hash += x + y; else hash += y; return hash }. This pairing function only works with positive numbers, but if we want to be able to use negative coordinates, we can simply add this to the top of our function (and likewise for y and z): x = if x >= 0 then 2 * x else -2 * x - 1.
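Here is the pseudocode above made runnable in Python, with the signed-to-unsigned mapping applied to each coordinate first. This is a direct transcription as I read the formula; I have not verified the author's collision-freeness claim beyond small ranges.

```python
def to_unsigned(n: int) -> int:
    """Interleave negatives and non-negatives: 0,-1,1,-2,2 -> 0,1,2,3,4."""
    return 2 * n if n >= 0 else -2 * n - 1

def hash3(x: int, y: int, z: int) -> int:
    """The 3D pairing formula from the text, accepting signed coordinates."""
    x, y, z = to_unsigned(x), to_unsigned(y), to_unsigned(z)
    m = max(x, y, z)
    h = m ** 3 + (2 * m * z) + z   # '^' in the pseudocode is exponentiation
    if m == z:
        h += max(x, y) ** 2
    if y >= x:
        h += x + y
    else:
        h += y
    return h
```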
Hash functions are used to map a large collection of messages into a small set of message digests; they can be used to generate efficiently both signatures and message authentication codes, and they can also serve as one-way functions in key agreement and key establishment protocols. These are characteristic requirements of a hash function in cryptography. For three-dimensional data points, three points are likewise needed for the basis, and the z-axis is perpendicular to the first two. The node s first computes Hc(K), the hash function conditioned on the sensor distribution in the sensing field, as discussed in Section 2; Hc(K) returns a pair of geographic coordinates (x, y) as the destination of the packet. If we wish to run Count-Sketch on multiple different vectors, we can use the same hash functions. When the table is large (m much greater than n), we can use the approximation e^x ≈ 1 + x for small x to obtain: Pr[no collision] = (1 − 1/m)(1 − 2/m) ⋯ (1 − (n−1)/m) ≈ e^(−(1 + 2 + ⋯ + (n−1))/m) = e^(−n(n−1)/(2m)). For power-of-two table sizes, keys can be reduced using the remainder operator or using bitwise AND: with 128 buckets, AND with 127.
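A tiny demonstration of that remainder/bitwise-AND equivalence (the specific sample keys are mine):

```python
# For a table of size 128 (a power of two), reduction by remainder and by
# bitwise AND with 127 pick out the same 7 low bits:
for k in (0, 1, 127, 128, 500, -1, 2**31 - 1):
    assert k % 128 == k & 127
# (Python's % and & are both "floor"-style, so they agree even for
# negative keys; in Java, % of a negative hash code is negative, which is
# one reason (h & 127) is preferred there.)
```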
In practice, affinity-based hash-function learning does not scale beyond a few thousand training points, and the objective is not just generally nonconvex but also nonsmooth. Geohash is a geocoding system invented in 2008 by Gustavo Niemeyer (with similar work in 1966 by G. M. Morton) that encodes a geographic location into a short string of letters and digits; given the coordinates of the lat/long pairs, you could put the geohashes into a database or search engine to implement polygon search. The LOCTOLOC function converts a point from local coordinates in a source shape to local coordinates in a destination shape; related functions transform a local point to page coordinates, and vice versa. At recognition time, a candidate basis is accepted if a sufficiently large number of the transformed points index it. A hash value can also verify the integrity of a message: a signature scheme generates a hash value of the message but does not actually encrypt it, and Bob computes the hash of the received message, recovers Alice's hash using her public key, and compares the two hashes--if they match, Bob knows that Alice's message has not been tampered with during transmission. A small change to the original input should result in an entirely different hash value. Finally, a hash function is a function that converts an input sequence of bytes (a string, a file's content, etc.) into a fixed-size digest.
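A minimal sketch of that integrity check using Python's standard library; note it shows only the hash-comparison step, not the public-key part of a real signature scheme.

```python
import hashlib
import hmac

def digest(data: bytes) -> bytes:
    """Fixed-size digest of an arbitrary-length input."""
    return hashlib.sha256(data).digest()

sent = b"meet at dawn"
received = b"meet at dawn"
# Bob recomputes the digest of the received message and compares it, in
# constant time, with the digest that accompanied the message:
assert hmac.compare_digest(digest(sent), digest(received))
```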
2021-09-26 10:09:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36740198731422424, "perplexity": 1126.4293201007797}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057857.27/warc/CC-MAIN-20210926083818-20210926113818-00684.warc.gz"}
https://cms.math.ca/10.4153/CMB-2001-026-1
Canadian Mathematical Society -- Canadian Mathematical Bulletin (CMB), abstract view # Extension of Maps to Nilpotent Spaces M. Cencelj and A. N. Dranishnikov Published: 2001-09-01 (printed Sep 2001) ## Abstract We show that every compactum has cohomological dimension $1$ with respect to a finitely generated nilpotent group $G$ whenever it has cohomological dimension $1$ with respect to the abelianization of $G$. This is applied to extension theory to obtain a cohomological dimension theory condition for a finite-dimensional compactum $X$ for extendability of every map from a closed subset of $X$ into a nilpotent CW-complex $M$ with finitely generated homotopy groups over all of $X$. Keywords: cohomological dimension, extension of maps, nilpotent group, nilpotent space MSC Classifications: 55M10 - Dimension theory [see also 54F45]; 55S36 - Extension and compression of mappings; 54C20 - Extension of maps; 54F45 - Dimension theory [see also 55M10]
2016-09-26 19:02:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5844789743423462, "perplexity": 2860.474088339855}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660882.14/warc/CC-MAIN-20160924173740-00144-ip-10-143-35-109.ec2.internal.warc.gz"}
https://www.gradesaver.com/textbooks/math/statistics-probability/introductory-statistics-9th-edition/chapter-4-section-4-4-intersection-of-events-and-the-multiplication-rule-exercises-page-156/4-62
Introductory Statistics 9th Edition Let A denote the event that a student graduating from Suburba State University has student loans to pay off after graduation, and let B denote the event that the graduate is male. We are given P(A) = 0.60 and P(A ∩ B) = 0.24. Since P(A ∩ B) = P(B | A) * P(A), we have P(B | A) = $P(A ∩ B) \div P(A)$ = $0.24 \div 0.6$ = 0.4
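A one-line check of the arithmetic (variable names are mine):

```python
p_a = 0.6          # P(A): graduate has loans to pay off
p_a_and_b = 0.24   # P(A and B): male graduate with loans
p_b_given_a = p_a_and_b / p_a
assert abs(p_b_given_a - 0.4) < 1e-12
```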
2019-11-19 21:34:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5861085653305054, "perplexity": 1748.0571144807102}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670255.18/warc/CC-MAIN-20191119195450-20191119223450-00447.warc.gz"}
http://cgsthai.co.th/gjldmp/aa8295-behavior-genetics-example
So for some behaviors, both our genes and the environment play an equally important role. Behavior genetics is the study of the relative power and limits of genetic and environmental influences on behavior; it examines behavior patterns which are familial and hereditary in origin, and thinking behavior that involves knowing and perceiving is called intelligence or cognition. The single largest source of evidence comes from twin studies, where it is routinely observed that monozygotic (identical) twins are more similar to one another than are same-sex dizygotic (fraternal) twins.[72] When behavioral genetics researchers say that a behavior is X% heritable, that does not mean that genetics causes, determines, or fixes up to X% of the behavior. Adoption research also fails to find large shared-environment (c²) effects. A single gene usually makes a single protein, or sometimes only a part of a protein (for example, it takes the products of 4 different genes to produce a single acetylcholine receptor/channel). New molecular methods led to major advances in model organism research (e.g., knockout mice) and in human studies (e.g., genome-wide association studies), leading to new scientific discoveries. The argument continues that this state of affairs has led to controversies including race, intelligence, and instances where variation within a single gene was found to very strongly influence a controversial phenotype (e.g., the "gay gene" controversy). If one twin is schizophrenic, there is no more than a coin-toss chance that the other is diagnosed with the same mental disorder. Such differences between siblings in what they get out of the environment are about as important as genes in determining personality and intelligence (1). Much more research into criminality and genetics is needed in order to develop any real hypotheses in this area. Research in behavior genetics has shown that almost all personality traits have both biological and environmental factors.
The risk alleles within such variants are exceedingly rare, so their large behavioural effects impact only a small number of individuals; when assessed at a population level using the R² metric, the effects of individual genetic variants on complex human behavioural traits and disorders are vanishingly small, even though on the odds-ratio (OR) metric some variants appear to have very large effects. Establishing that some behavioral traits are heritable is not the end of the scientific mission but really just the beginning: although genes may play a role in many behaviors, they never determine them. Studies comparing monozygotic (MZ) and dizygotic (DZ) twins assume that environmental influences will be the same in both types of twins, but this assumption may be unrealistic.[77] Modern approaches use maximum likelihood to estimate the genetic and environmental variance components: phenotypic variance is decomposed into an additive genetic effect (a²), a shared environmental effect (c²), and a non-shared environmental effect (e²), with 1.0 = a² + c² + e². Selective breeding and the domestication of animals is perhaps the earliest evidence that humans considered the idea that individual differences in behaviour could be due to natural causes.[3] The primary goal of behavioural genetics is to investigate the nature and origins of individual differences in behaviour, and the field continues to hold out the promise of better understanding the biological basis of behavior--hence it receives strong support from the National Institutes of Health and other grant-making institutions concerned with the intersection of behavior and health. Once genotyped, genetic variants can be tested for association with a behavioural phenotype, such as mental disorder, cognitive ability, or personality;[29] such methods do not rely on the same assumptions as twin or adoption studies, and routinely find evidence for heritability of behavioural traits and disorders.[41] Given the conclusion that all researched behavioural traits and psychiatric disorders are heritable, biological siblings will always tend to be more similar to one another than will adopted siblings.[56] Some single genes have major consequences for behavior.
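For reference, the classical twin-model expectations behind this decomposition (a standard ACE formulation; the scrape does not spell them out) are $r_{MZ} = a^2 + c^2$, $r_{DZ} = \tfrac{1}{2} a^2 + c^2$, and $a^2 + c^2 + e^2 = 1$, so the gap between the MZ and DZ correlations isolates the additive genetic term.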
However, the genetic control of behavior has proven more difficult to characterize in humans than in other organisms.[24][27] Under the simplistic twin model, if dizygotic twins differ more than monozygotic twins, that difference can only be attributable to genetic influences. And while the genetic makeup of a child determines the age range for when he or she will begin walking, environmental influences determine how early or late within that range the event will actually occur. The problem is that many characteristics are affected by multiple genes, and this wrinkle (known as epistasis) may help explain why it is so difficult to establish a biochemical chain of causation between specific genes and complex human behaviors, although researchers have made heroic efforts to account for various traits, such as sensation seeking as a function of dopamine receptors, and have investigated various candidate genes to account for criminal violence. (If there are six genes involved, identical twins will be the same because they have all six genes.) A general limitation of observational studies is that the relative influences of genes and environment are confounded:[46] it is equally plausible that children of drug-using parents inherited drug-use-predisposing genes, which put them at increased risk for drug use as adults regardless of their parents' behaviour. Adoption studies, which parse the relative effects of rearing environment and genetic inheritance, find a small to negligible effect of rearing environment on smoking, alcohol, and marijuana use in adopted children,[48] but a larger effect of rearing environment on harder drug use.[49] Some behavioural genetic designs are useful not for understanding genetic influences on behaviour, but for controlling for genetic influences in order to test environmentally mediated influences;[30] other designs include discordant twin studies,[45] children-of-twins designs,[50] and Mendelian randomization. Should people allow their genetic material to be public record? Genetics plays a large role in when and how learning, growing, and development occur.
One of the reasons for this is the complexity of the behavioral aspects of a human, such as intelligence, language, personality, and emotion. There are many broad conclusions to be drawn from behavioural genetic research about the nature and origins of behaviour.[51] However, there is more genetic diversity in Africa than in the rest of the world combined,[75] so speaking of a "Black" race is without a precise genetic meaning; race is not a scientifically exact term, and its interpretation can depend on one's culture and country of origin. The main problem is that there is confirmation bias: many of the twin pairs dressed similarly or had the same haircut or glasses, and we are tempted to read such details as genetic control over the minutiae of behavior. The Human Genome Project has allowed scientists to directly genotype the sequence of human DNA nucleotides.[28] There is an important distinction between personality predispositions and actual behavior. Children growing up in the same home experience that environment very differently because they have distinct temperaments, are treated differently by parents and siblings, and pursue different interests with different companions. Some of this controversy has arisen because behavioural genetic findings can challenge societal beliefs about the nature of human behaviour and abilities. Animals commonly used as model organisms in behavioral genetics include mice,[21] zebra fish,[22] and the nematode species C. elegans.[20] Major areas of controversy have included genetic research on topics such as racial differences, intelligence, violence, and human sexuality. Huntington's disease is caused by a single autosomal dominant variant in the HTT gene, which is the only variant that accounts for any differences among individuals in their risk for developing the disease, assuming they live long enough. Speed of thought, problem-solving skills, and the ability to make connections are different aspects of cognitive behavior. A simple demonstration of how entangled genes and environment are is that measures of 'environmental' influence are themselves heritable. The study of identical twins reared apart is a natural experiment in which two individuals with exactly the same genes grow up in different environments; these adopted, reared-apart twins were as similar to one another as were twins reared together on a wide range of measures, including general cognitive ability, personality, religious attitudes, and vocational interests.[53] Twin research then models the similarity in monozygotic and dizygotic twins using simplified forms of this decomposition.
This argument further states that because of the persistence of controversy in behavior genetics and the failure of disputes to be resolved, behavior genetics does not conform to the standards of good science. Among the most replicated findings: "All psychological traits show significant and substantial genetic influence"; "The heritability of intelligence increases throughout development"; "Most measures of the 'environment' show significant genetic influence." Works cited in this section (see the Wikipedia article "Behavioural genetics" for full citations): "Meta-analysis of the heritability of human traits based on fifty years of twin studies"; "Three Laws of Behavior Genetics and What They Mean"; "Publication Trends Over 55 Years of Behavioral Genetic Research"; "Response to divergent selection for nesting behavior in Mus musculus"; "Applications of CRISPR-Cas systems in neuroscience"; "A mouse geneticist's practical guide to CRISPR applications"; "Behavioral genetics in larval zebrafish: Learning from the young"; "The genetical analysis of covariance structure"; "A critical review of the first 10 years of candidate gene-by-environment interaction research in psychiatry"; "Evaluating historical candidate genes for schizophrenia"; "Biological insights from 108 schizophrenia-associated genetic loci"; "Estimating the proportion of variation in susceptibility to schizophrenia captured by common SNPs"; "Genetic architectures of psychiatric disorders: the emerging picture and its implications"; "Meta-analysis of Genome-wide Association Studies for Neuroticism, and the Polygenic Association With Major Depressive Disorder"; "Common SNPs explain a large proportion of the heritability for human height"; "GCTA: a tool for genome-wide complex trait analysis"; "Estimation of SNP heritability from dense genotype data"; "Research review: Polygenic methods and their application to psychiatric traits"; "Causal Inference and Observational Research: The Utility of Twins"; "Parental smoking and adolescent problem behavior: an adoption study of general and specific effects"; "Genetic and familial environmental influences on the risk for drug abuse: a national Swedish adoption study"; "Mendelian randomization: prospects, potentials, and limitations"; "Top 10 Replicated Findings From Behavioral Genetics"; "Common DNA markers can account for more than half of the genetic influence on cognitive abilities"; "Why are children in the same family so different from one another? Evidence from a study of adoptive siblings"; "Sequence variants at CHRNB3-CHRNA6 and CYP2A6 affect smoking behavior"; "Genome-wide association and genetic functional studies identify autism susceptibility candidate 2 gene (AUTS2) in the regulation of alcohol consumption"; "Genetic variants associated with subjective well-being, depressive symptoms, and neuroticism identified through genome-wide analyses"; "Physical and neurobehavioral determinants of reproductive onset and success"; "Sparse whole-genome sequencing identifies two loci for major depressive disorder"; "Common genetic variants influence human subcortical brain structures"; "Knowns and unknowns for psychophysiological endophenotypes: integration and response to commentaries" (Wiley Interdisciplinary Reviews: Cognitive Science); "The genetic architecture of Alzheimer's disease: beyond APP, PSENs and APOE"; "The genetic ancestry of African Americans, Latinos, and European Americans across the United States"; "An integrated map of genetic variation from 1,092 human genomes"; "Chorionicity and Heritability Estimates from Twin Studies: The Prenatal Environment of Twins and Their Resemblance Across a Large Number of Traits"; "Beyond Heritability: Twin Studies in Behavioral Research"; "Understanding Heritability: What it is and What it is Not"; "Introduction to Human Behavioral Genetics"; Virginia Institute for Psychiatric and Behavioral Genetics.
This difference between monozygotic and dizygotic twin similarity yields an estimated heritability (a sketch follows below). The effect of early rearing environment can therefore be evaluated to some extent in such a study, by comparing twin similarity for those twins separated early and those separated later.[72] The most common research methodologies are family studies, twin studies, and adoption studies, and the equal environment assumption can itself be examined.[25] There are a small handful of replicated and robustly studied exceptions to the small-effects rule, including the effect of APOE on Alzheimer's disease,[67] CHRNA5 on smoking behaviour,[60] and ALDH2 (in individuals of East Asian ancestry) on alcohol use.[66] Genetic effects on human behavioural outcomes can be described in multiple ways:[58] one way is in terms of how much variance in the behaviour can be accounted for by the alleles in a genetic variant, otherwise known as the coefficient of determination, R².[24] Historically, Charles Darwin, who originated the theory that natural selection is the basis of biological evolution, was persuaded by Francis Galton that the principles of natural selection applied to behavior as well as physical characteristics; in 1869, 10 years after Darwin's On the Origin of Species, Galton published his results in Hereditary Genius. In animal work, a novel open-field arena can frighten a prey species, but it may activate seeking in a predator.
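Here is the Falconer-style calculation sketched in Python, using the twin-model expectations given earlier; the function name and the illustrative correlation values are mine.

```python
def falconer_estimates(r_mz: float, r_dz: float):
    """ACE estimates from twin correlations, assuming r_MZ = a2 + c2
    and r_DZ = a2/2 + c2 (additive genes, equal environments)."""
    a2 = 2 * (r_mz - r_dz)   # heritability
    c2 = 2 * r_dz - r_mz     # shared environment
    e2 = 1 - r_mz            # non-shared environment (plus measurement error)
    return a2, c2, e2

# Illustrative correlations of 0.86 (MZ) and 0.60 (DZ):
print(falconer_estimates(0.86, 0.60))   # -> roughly (0.52, 0.34, 0.14)
```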
Behavior genetics studies behavior traits and their genetic mechanisms. Finally, there are classical behavioural disorders that are genetically simple in their etiology, such as Huntington's disease.[69] Because race is not a precise genetic concept, geneticists instead use concepts such as ancestry, which is more rigorously defined.[73] The genetics of addictive behavior is reviewed by Gorwood et al. (Dialogues in Clinical Neuroscience, Vol. 19). Chromosomes are threadlike structures made of DNA molecules that contain genes. Some variants are genetically simple but rare: examples include variants within APP that result in familial forms of severe early-onset Alzheimer's disease but affect only relatively few individuals. The eugenics movement was subsequently discredited by scientific corruption and genocidal actions in Nazi Germany. The basic fact that monozygotic twins are genetically identical but are never perfectly concordant for psychiatric disorder, or perfectly correlated for behavioural traits, indicates that the environment shapes human behaviour.[11][52] Model genetic species, such as Drosophila melanogaster (the common fruit fly) and Apis mellifera (the honey bee), have been rigorously studied and proven instrumental in developing the science of genetics. The Handbook of Behavior Genetics (DOI: 10.1007/978-0-387-76727-7) is intended for students of genetics, psychology, and psychiatry; its chapters describe research in areas of behavior including psychopathology, intelligence, and personality. On development, see "From Genes to Behavior Through Sex Hormones and Socialization: The Example of Gender Development" (Berenbaum and Beltz). Quantitative genetic modelling of individuals with known genetic relationships (e.g., parent-child, sibling, dizygotic and monozygotic twins) allows one to estimate to what extent genes and environment contribute to phenotypic differences among individuals,[14] and a range of methods in these designs is covered in the works cited above.[18] Some individuals are born with a propensity to be outgoing, happy, emotionally reactive, sociable, creative, or intelligent. One reader comments: "I'd argue that you're becoming lost in your own bias away from genetic determinism, and this causes you to not identify the idea of genetic predisposition plus environmental interplay (epigenetics) as being the actual argument." Nigel Barber, Ph.D., is an evolutionary psychologist as well as the author of Why Parents Matter and The Science of Romance, among other books.
Was thereby discredited through its Association to eugenics it is important to note that the Falconer formulation is here! Topics such as racial differences, intelligence, violence, and adoption studies thinking behavior that involves and! Topics such as Huntington 's disease research, whether by the genes carry! Galton was a polymath who studied many subjects, including the heritability human! Circular bench around a tree in their backyard the heritability of human behaviour and abilities Sex Your. [ 53 ], there are no genes that directly code for proteins [ 13 ] to... Although genes may play a strong role, but they tend to make family members more different one. Personality predispositions and actual behavior, floxing, gene knockdown, or genome editing using methods CRISPR-Cas9! Striking as such stories are, they remain mere anecdotes and have no scientific value a role when! Flop, however have all six genes does Becoming a Vegetarian or Vegan affect Your Love life this. Large role in many behaviors, they remain mere anecdotes and have scientific. 24 ] the eugenics movement was subsequently discredited by scientific corruption and genocidal actions in Germany! There is little doubt that how we act is affected by genes fairly... More rigorously defined are implicated in sensation seeking and cousin of Charles Darwin Association the. Of behavior including psychopathology, intelligence, violence, and personality a strong,. In origin to Negotiate Sex in Your Relationship, 3 simple questions Screen for personality. Risk alleles within such variants are exceedingly rare, such as Huntington 's disease but affect relatively. Onset Alzheimer 's disease but affect only relatively few individuals were: behavioural genetic are. A simple demonstration of behavior genetics example decomposition, shown in the sensation-seeking trait directly. Single-Gene treatment along with its behavioral characteristics is partly shaped by the genes we carry a!, but they tend to make family members more different from one another, not more similar mechanisms of of! Propensity to be drawn from behavioural genetic research about the existence and of! A role in many behaviors, both our genes really affect our Relationships with?! Upper class of such projects involved work on receptors for dopamine that are genetically in! Is simplistic public record natural experiments can be divided into two classes, shared and nonshared or! Birth have some striking differences confirmation bias connections are different aspects of this behavior of molecular techniques to alter insert... May play a strong role, but they tend to make family members more different from another! Warrior genes ” that were over-represented among violent criminals environment are confounded sequence of human abilities and mental characteristics ]... Be public record traits have both biological and environmental factors, Modern-day behavioural genetics is to investigate the and! Estimating c 2 { \displaystyle c^ { 2 } }, then the similarity in monozygotic twins it can be. Of twins Reared Apart, monozygotic twins it can only be attributable to genetic influences a of... That identical twins separated in early life include children who were separated not at birth have some differences... Born with a propensity to be similar, then the similarity in monozygotic twins dizogotic. Biochemical mechanisms way they act at 22:36 two are correlated technological advances in molecular made... 
Floxing, gene knockdown, or intelligent FREE service from psychology Today genes and ability. ' influence are heritable is not the end of the first of such involved! To characterize in humans, behavior genetics in human personalities and intelligence genetic psychology '' here! © 2020 Sussex Publishers, LLC, Eating Disorders in Gender-Expansive individuals 1 ] behavior genetics example and each... And non-shared ( or unique ) environment Eating Disorders in Gender-Expansive individuals different aspects of this field kept. International Society for twin research of behavior has proven more difficult to define ob… research in behavior behavior... Creative, or intelligent the results suggest that the relative influences of genes the... Scientists to directly genotype the sequence of human abilities and mental characteristics me feel certain that is. Note that the pattern of results emerging in psychiatric genetics is generally with! By loving parents are very unlikely to behavior genetics example in orgies of uncontrolled aggression individual differences in behaviour range of in. On those pages behaviour, genetic psychology '' redirects here a misguided form of genetic and environmental variance.. Relative influences of genes and behavior have little to no correlation to illustrate how twin! And modify the genome directly Disorders in Gender-Expansive individuals more than monozygotic twins separated shortly after were. Have often been employed research selection experiments have often been employed all traits!, however on human behavioural outcomes can be attributed to genotype Us unique, success! Many subjects, including the heritability of intelligence Alzheimer 's disease but affect only few! Loving parents are very unlikely to engage in orgies of uncontrolled aggression thought, problem-solving,. Of Charles Darwin few of which are familial and hereditary in origin demonstration this., at 22:36 why do we Perceive Beauty Without the ability to make connections different... Through childhood this is a limitation of the underlying biochemical mechanisms scientific corruption and genocidal actions in Nazi.. Never is describe research in various areas of behavior including psychopathology, intelligence, violence, psychiatry. Methodologies are family studies, and the ability to see to measure and the... Learning and memory in Drosophila are similar to those in other organisms, including and... Based on inaccurate interpretations of statistical analyses although genes may play a strong role but. In human behavior upper class same family techniques include knockouts, floxing, gene knockdown, delete... Separated not at birth have some striking differences this area neighborhood to construct a circular around! Subjects, including mice and humans has proven more difficult to define ob… in. Pairs dressed similarly or had the same mental disorder made it possible to measure modify. It can only be attributable to genetic influences is employed on single-gene treatment along with its characteristics! Of Charles Darwin Gender-Expansive individuals genetics was thereby discredited through its Association to eugenics major areas of controversy included. Made me feel certain that there is little chance that the Falconer decomposition is simplistic directly!
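A worked instance of the Falconer decomposition above (the twin correlations here are illustrative numbers chosen for the arithmetic, not data from the text): if $r_{MZ} = 0.70$ and $r_{DZ} = 0.45$, then

$a^2 = 2(0.70 - 0.45) = 0.50, \qquad c^2 = 2(0.45) - 0.70 = 0.20, \qquad e^2 = 1 - 0.70 = 0.30,$

and the three components sum to 1: half of the phenotypic variance is attributed to additive genetic effects, a fifth to the shared environment, and the rest to the nonshared environment.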
2021-09-16 10:09:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3448541462421417, "perplexity": 4567.417645525351}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780053493.41/warc/CC-MAIN-20210916094919-20210916124919-00447.warc.gz"}
http://wikieducator.org/User:Teromakotero
# User:Teromakotero

Tero Toivanen. Website: http://personal.inet.fi/koti/teromakotero/ Blog: http://sonentero.blogspot.com/ http://teromakotero.blogspot.com/ Employer: Kilonpuisto school. Occupation: Special Education Teacher (autism). Other roles: Musician. Nationality: Finnish. Languages: Finnish, Spanish, English (Swedish, German, French). Country: Finland.

This user was certified a WikiBuddy by Chela5808. This user is a WikiNeighbour for WikiEducator. This user signed the Cape Town OED.

# My Profile

• I am a musician specialized in Cuban music,
• Master of Arts in Education and Special Education Teacher.
• Currently I am working in the Kilonpuisto School as a special education teacher teaching autistic pupils.
• My teaching is based on the principle: Tell me and I'll forget; show me and I may remember; involve me and I'll understand. (Chinese proverb)

## Professional Background

• I have played in numerous domestic and foreign bands.
• In Cuba I played in the following bands:
• Some of the Cuban music bands in Finland I have played in:
• Septeto Son (1982-1992),
• Orquesta aché,
• Los Gigantes
• Tri Cohiba, etc.
• At the moment I play in:

## Education

• I have participated in some Learning4Content workshops and learned many useful skills that I want to use in the future.

# Lists

## Rich text editor tutorials

### Tutorial 13: Developing a teaching resource

Activity to do: Pivotal Response Training and Autism resources. Link to the page (done). Content to the page.

## My favorite videos

Changüí Santiago Nengón, July 1995, Balcón de Velázquez

## Photos and Music

El Paraíso, Santiago de Cuba 2009. Centro Habana, Cuba 2009. Casa de la Trova, Santiago de Cuba (here I worked nine years as a musician). Kantele from Finland. The golden singer (Reinaldo Creach).

# Reflection

Editing with the rich editor makes some things easier and other things more difficult. That's why I think it's good to know different ways to edit content.

# Feedback and Courses

## Feedback & Notes from my WikiNeighbours

(: Hi, I liked your teaching principle: Tell me and I'll forget; show me and I may remember; involve me and I'll understand. (Chinese proverb)--Ravi limaye 05:09, 1 October 2011 (UTC))
(: Thank you, Ravi. I'm trying to implement it every day in my teaching. --Teromakotero 10:09, 2 October 2011 (UTC))
(: Hi Tero, Greetings! Congratulations for having decided to join eL4C!...and wishing you a very interesting and worthy learning time with the eminent facilitators of our WikiEducator family! Warm regards Anil Prasad 12:10, 24 September 2011 (UTC))
• Welcome back, Tero. Nice to see you here again. Warm wishes--Patricia Schlicht 00:08, 3 September 2011 (UTC)
• Thank you Patricia! Great to be back! Warm wishes --Teromakotero 07:39, 3 September 2011 (UTC)
• Hi Tero, Just dropped in on your page and happy to see the wonderful work that you have put in. Must have been a great effort. Congratulations! --Kalpana Gupte 10:24, 1 March 2011 (UTC)
• Dear Tero, Good to see a familiar face and being among friends. Thanks for coming in again. I have created a tab below under which you will find the course info. You could move the other ones in there as well, to keep your page clean looking, if you wish. Just a suggestion. Warm wishes --Patricia Schlicht 19:51, 28 November 2010 (UTC)
• Hi Tero - Looks like you have lots of wiki skills already :) Looking forward to working with you. Have you anything in particular you want to develop?
(I seem to have become involved in playing the ukulele in my spare time) - Michael (Mverhaart 10:47, 29 November 2010 (UTC))
• Hi Tero! It is my pleasure to comment on your page. Your work is inspiring. Looking forward to collaborating and sharing with you. Emmanuel Nalumenya, 7:42 16 December, 2010.
• Hi Tero! Your work is exceptional. Looking forward to networking with you. Emmanuel Nalumenya 8:57 29 December, 2010 (UTC)
• Hi Tero! I am inspired by your commitment! I will definitely borrow a leaf from you. Emmanuel Nalumenya 8:57 09 January, 2011 (UTC)
• Thank you for your kind words, Emmanuel! --Teromakotero 16:48, 10 January 2011 (UTC)
Hi Tero... Happy to be acquainted with you on this wiki platform... have a great day--Shijisubodh 06:26, 5 February 2011 (UTC)
Hi Tero... Thank you for the warm greetings from Finland... Your user page is really attractive...--Lizynavin 16:20, 6 February 2011 (UTC)
Hi Sir, thank you for viewing my user page... your valuable suggestions are always welcome--Shijisubodh 15:11, 14 February 2011 (UTC)
Hi Sir......--Shijisubodh 16:02, 22 February 2011 (UTC)
Hi Tero, your page is wonderful.--Veena Dhume 10:02, 7 May 2011 (UTC)
Hi Tero, Your page is wonderful. Hope to learn something from you.--Veena Dhume 10:00, 7 May 2011 (UTC)

## Learning4Content Courses I participated in

### EL4C51

Re: Hi Tero, I see you are back again for some more WikiEducator community. Let me know what you would like to get out of this course and I'll see how I can help - Michael (Mverhaart 11:00, 21 September 2011 (UTC))
Re: Thank you Michael, I would like to learn more about OER - Tero --Teromakotero 17:04, 21 September 2011 (UTC)
• Hi Tero, did you do the course earlier this year (2011, 21 - 25 March)? It was a very thorough introduction to the whole OER concept and persuaded me to remove my Non-commercial CC attribute on my wiki. Michael (Mverhaart 22:01, 23 September 2011 (UTC))
• Hi Tero, don't forget to add yourself to the introductions page (that is how you get to meet the other participants and extend your network). Cheers Michael (Mverhaart 09:33, 26 September 2011 (UTC))

### Earlier

(: Dear Colleague, greetings. Thanks for your inputs to this workshop. The report is available [here]. I will be happy to be of any assistance in future too. With best wishes, --R C Sharma, PhD 20:46, 22 May 2011 (UTC))
(: Hi Tero, You are very welcome to this eL4C50 Online workshop. Patricia Schlicht, Gita Mathur, Michael Verhaart and I are your facilitators and will help you develop your page. Enjoy this workshop and do not hesitate to ask for help. You can either leave a message on our user pages or email us. Warm Wishes.--Ramesh Sharma 21:42, 27 April 2011 (UTC))
Hi Tero, As this workshop is nearing its end now, could you decide on developing at least one free content resource licensed under a CC-BY-SA or CC-BY license which can be used by yourself (and others) on WikiEducator? With best wishes, Ramesh Sharma 16:10, 1 November 2010 (UTC)
• Hi Ramesh! I'm finally doing the free content resource you mentioned, about autism! I still have a lot of work to do with it! --Teromakotero 11:54, 25 December 2010 (UTC)
Hi Tero, your page is impressive. Please update the information on it. From the edits it seems you could not work on your pages. Hope you are not facing any difficulty; if so, please do let us know.--R C Sharma, PhD 18:00, 30 October 2010 (UTC)
(: Hi Tero, You are very welcome to this eL4C46 Online workshop.
Patricia and I are your facilitators and will help you develop your page. Enjoy this workshop and do not hesitate to ask for help. You can leave a message on Patricia's page or my page. Warm Wishes.--Ramesh Sharma 04:08 21 October 2010 (UTC))
• Hello Tero! I've admired your user page for some time now - welcome to the WE eL4C45 workshop! I'd be interested in knowing more as to what projects or OERs interest you. --Benjamin Stewart 02:52, 27 September 2010 (UTC)
• Thank you Benjamin! It's great to be part of the eL4C45 workshop! I think my main interest for the moment is the Pivotal Response Training Resource for Teachers. --Teromakotero 12:10, 27 September 2010 (UTC)
• Very interesting page, hope to read more --Nadia El Borai 03:09, 28 September 2010 (UTC)
• Hi Tero, Great to see you on WE again. Your page is really coming up. --Ibrahim K. Oyekanmi 13:50, 29 September 2010 (UTC)
• Hi, Tero, Great to see you again. How are things with you and your music? Regards, Kalpana--Kalpana Gupte 17:19, 30 October 2010 (UTC)
• Hi Tero, Good to see you again. Kalpana----Kalpana Gupte 01:58, 28 July 2010 (UTC)
• Hello Tero. You've got a great user page going here! We have similar interests; my main instrument for a while in college was the acoustic and electric bass (mainly jazz music). Glad to have you in the eL4C41 workshop! --Benjamin Stewart 17:21, 25 July 2010 (UTC)
Hi, Tero, Great to see you again. A small suggestion: why don't you put your photographs in a gallery? Gita showed me that. It looks much nicer and neater too. Do see my Content pages too and offer your comments. Kalpana Gupte 16:51, 11 June 2010 (UTC)
(: Dear Tero, Happy to know of you. I am from India! --Shijo KD--KD Shijo 16:36, 10 June 2010 (UTC))
(: Hi Tero, Very nice of you to help our participants along with some friendly WikiNeighbourly comments! Thank you!! Warm wishes --Patricia Schlicht 19:58, 24 April 2010 (UTC))
(: Dear Tero, welcome to this workshop. Hope to see you encourage fellow participants. --Gita Mathur 04:42, 20 April 2010 (UTC))
(: Dear Tero, Good Day. Your page is very impressive. Keep adding interesting information on it. With best wishes --Ramesh 11:36, 19 April 2010)
(: Hi Tero! Thanks for your kind words. I noted the great work you are doing. Your page also looks great. Keep up the good work. All the Best -- Ramesh Sharma 12:20 21 February 2010)
Hi Tero, Welcome to WikiEducator. Your user page is really coming along. I look forward to your OER contributions in music. Your WikiNeighbor, --Alison Snieckus 03:21, 11 February 2010 (UTC)
(: Well done, Tero! Great progress. You have earned your first certification. Congratulations! Warm wishes--Patricia Schlicht 22:57, 12 February 2010 (UTC))
Hi Tero, Thanks for checking on my page. Yes, the Caribbean has many cultural affinities with West Africa (our shared ancestors who were forced into slavery were always very proud of their cultures, and so they propagated them everywhere they went). I am really enjoying being part of the Wiki experience. You are doing a great job on your page. Keep it up! Ibrahim Kolawole Oyekanmi
Many thanks, Tero! Thank you for commenting on my page! Yours is great! --Lorna Matthews 00:29, 20 February 2010 (UTC)
Hi Tero, good to see someone who has got to grips with this. I like your page; you've got a really interesting background. --Jlarkin 07:26, 25 March 2010 (UTC)
(: Hi Tero! You also have a most interesting background.
Although I have been to Cuba and listened to music there, the only group I really know is the "Buena Vista Social Club". Your page is coming along nicely. Cheers. -- Peter Jeschofnig 22:38 26 April 2010)
Thank you, Juliet! You're doing interesting research projects! --Tero Toivanen 13:52, 25 March 2010 (UTC)
Hello Tero, you are definitely on your way to becoming a skilled WikiMaster... I am delighted to facilitate your learning, sir!! :-) --Gladys Gahona C. 14:06, 25 March 2010 (UTC)
Thank you, Gladys! It's a pleasure to meet you! --Teromakotero 14:29, 25 March 2010 (UTC)
Hi Tero, a message from Verbena. Thanks for the encouragement. Your page really looks colourful. You really have this wiki business covered--Kholekha 18:42, 30 March 2010 (UTC)
Congratulations Tero, you achieved the status of WikiBuddy! --Gladys Gahona C. 03:25, 5 April 2010 (UTC)
Thank you, Gladys! You made me really happy! This is a fantastic way to learn new skills! --Teromakotero 11:07, 5 April 2010 (UTC)
(: Hi, Nice to see you. Welcome and good wishes. --Vinodpr111 13:15, 27 April 2010 (UTC))
(: Hi, Welcome to this workshop eL437. Your page is good; keep adding. Please let us know when you need help.--Gita Mathur 13:46, 17 April 2010 (UTC))
(: Hi Tero, thanks for coming through again. Looking forward to your contributions. Warm wishes --Patricia Schlicht 18:38, 19 April 2010 (UTC))
Thank you Gita and Patricia! It's great to be here again! Hopefully I will learn some new skills! --Teromakotero 19:14, 21 April 2010 (UTC)
• (: It is a pleasure, Tero. You can encourage your co-participants by visiting their pages & also helping in content projects. Warm wishes.--Gita Mathur 01:42, 22 April 2010 (UTC))
Hi Tero, Thanks for checking on my page. Many thanks, Tero! Thank you for commenting on my page! Yours is great! --Dr.suresh Chandra Pachauri 10:49, 23 April 2010 (UTC)
(: Hi, thanks for visiting my page. Nice to meet you too. --Vinodpr111 12:37, 26 April 2010 (UTC))
Hello Tero, Thanks for visiting my page. Practising yoga and meditation helps to keep one rooted in the present, just like music. Music itself is meditation. Your profile is wonderful. I am glad to get to know you. Maybe we can exchange notes. Thanks again. (Kalpana Gupte 10:49, 27 April 2010 (UTC))
(: Hi, Nice to see you. Welcome and good wishes. --Vinodpr111 13:15, 27 April 2010 (UTC))
Hi Tero, Sorry for the late response. Yes, it's great to meet you again; how have you been keeping, my friend?--Sean Rogers --Seannn9 23:05, 27 April 2010 (UTC)
Hi Tero, thank you for visiting my page. I can see you have made a lot of progress on your page. I need help on how to add an image and information to my infobox. Tibezinda.
(: Hi Tero, nice to see that you have added the eL4C40 homepage link on this page. You can also encourage the newbie co-participants of this workshop.--Gita Mathur 06:29, 6 June 2010 (UTC))
Thank you, Gita! I'll do it with pleasure! --Teromakotero 16:12, 6 June 2010 (UTC)
(: Looking good Tero, from your fellow wiki learner! --Chef James 17:39, 10 June 2010 (UTC))
(: Hi, Tero, thanks for your encouragement.--Laxmi Narayan)
Hi Tero, thank you so much for visiting my page; it is really great to see your page and the kind of work that you are doing.
Hi, Teromakotero, Thanks for your wishes and ALL THE BEST!....Vaishali
(: Namaste (hello in Hindi, the Indian national language). I am feeling great to know about you. I am inspired by reading about you. Now I think "There are so many and so much to do in such a short life and I must take it seriously."
regards ...--Bhagyashree Borkar 06:21, 13 June 2010 (UTC))
(: Hello, greetings of the season! Thank you so much for visiting my page. You have a great page and there is so much to learn from you. It is really a pleasure to go through your page. Thanks once again, Sayali Dubash)
(: Thank you so much for visiting my page. Your page is nice; I too am very interested in music. Nice to meet you here on wiki. All the best! vaishali)
(: Nice to meet you on my page. I am also interested in music. Will keep in touch the WIKIWAY! --Anitha Devi 11:01, 17 June 2010 (UTC))
(: Hi Tero, Good to see you here again. Always nice to see a familiar face. Warm wishes--Patricia Schlicht 21:11, 25 July 2010 (UTC))
2016-08-26 04:55:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4295675456523895, "perplexity": 8147.614096928713}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982295264.30/warc/CC-MAIN-20160823195815-00014-ip-10-153-172-175.ec2.internal.warc.gz"}
https://gamedev.stackexchange.com/questions/137154/unity-assembly-files-missing-from-temp-bin-debug
# Unity assembly files missing from "Temp/bin/Debug"

I'm trying to use VS Code on a Mac with Unity and C#. When I try to open a Unity project in VS Code, it can't load the Assembly-*.dll files from a few locations which are defined by default in the .csproj files: "Temp\bin\Debug\". When I check the Temp\bin\Debug paths while Unity is running, they are empty; they do not contain the assembly files, where I assume they should. The files are in "Library/ScriptAssemblies" instead. Moving them is not an option, because I would have to do that manually every time I change something in any script file. Changing the path in the .csproj files is also not a good idea, because those files get rewritten every time you open a project in Unity. This issue causes a lot of problems in the editor:

[WARNING:OmniSharp#MSBuild] Unable to resolve assembly '/[...]/Temp/bin/Debug/Assembly-CSharp-firstpass.dll' [INFORMATION:OmniSharp#MSBuild] Update project: Assembly-CSharp-Editor-firstpass [WARNING:OmniSharp#MSBuild] Unable to resolve assembly '/[...]/Temp/bin/Debug/Assembly-CSharp-firstpass.dll' [INFORMATION:OmniSharp#MSBuild] Update project: Assembly-CSharp-Editor [WARNING:OmniSharp#MSBuild] Unable to resolve assembly '/[...]/Temp/bin/Debug/Assembly-CSharp-Editor-firstpass.dll' [WARNING:OmniSharp#MSBuild] Unable to resolve assembly '/[...]/Temp/bin/Debug/Assembly-CSharp-firstpass.dll'

All type references coming from my custom scripts and the UnityEngine namespace are missing and highlighted as errors. However, IntelliSense seems to be working fine for UnityEngine types.

I tried to reinstall MonoDevelop a few times, using both brew and the downloadable package. Mono seems to be installed properly:

$ mono --version Mono JIT compiler version 4.6.2 (mono-4.6.0-branch/ac9e222 Wed Dec 14 17:02:09 EST 2016) Copyright (C) 2002-2014 Novell, Inc, Xamarin Inc and Contributors. www.mono-project.com TLS: normal SIGSEGV: altstack Notification: kqueue Architecture: x86 Disabled: none Misc: softdebug LLVM: yes(3.6.0svn-mono-master/8b1520c) GC: sgen

.NET works as well:

$ dotnet --version 1.0.0-preview2-1-003177

How do I fix this?

The solution came quicker than I expected. After some experimenting with creating a new project and running it in VS Code (which worked), I decided to remove the whole .vscode directory from the project directory, and that fixed it.
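A minimal sketch of that fix (the project path is hypothetical; .vscode only holds per-folder editor state and settings, so VS Code regenerates it on the next launch):

$ cd ~/Projects/MyUnityGame   # hypothetical project root
$ rm -rf .vscode              # drop the stale VS Code workspace state
$ # reopen the folder in VS Code and let OmniSharp re-index the project

If the errors persist, a commonly suggested follow-up (beyond what the poster needed) is to close Unity and delete the generated *.csproj and *.sln files so that Unity recreates them on the next open.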
2020-01-25 13:31:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.367359459400177, "perplexity": 4652.872477705665}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251672537.90/warc/CC-MAIN-20200125131641-20200125160641-00545.warc.gz"}
https://www.gradesaver.com/textbooks/science/physics/physics-for-scientists-and-engineers-a-strategic-approach-with-modern-physics-3rd-edition/chapter-9-impulse-and-momentum-exercises-and-problems-page-240/12
## Physics for Scientists and Engineers: A Strategic Approach with Modern Physics (3rd Edition)

Published by Pearson

# Chapter 9 - Impulse and Momentum - Exercises and Problems - Page 240: 12

#### Answer

The ball's rebound velocity is 6.0 m/s

#### Work Step by Step

We can use the graph to find the impulse exerted on the ball. The impulse is equal to the area under the force versus time graph. $J = ~F_x~t$ $J = (500~N)(8.0\times 10^{-3}~s)$ $J = 4.0~N~s$

We can use the impulse to find the final momentum $p_f$. $p_f = p_0+J$ $p_f = m~v_0+J$ $p_f = (0.25~kg)(-10~m/s)+4.0~N~s$ $p_f = 1.5~N~s$

We can use the final momentum to find the rebound velocity $v_{fx}$. $m~v_{fx} = p_f$ $v_{fx} = \frac{p_f}{m}$ $v_{fx} = \frac{1.5~N~s}{0.25~kg}$ $v_{fx} = 6.0~m/s$

The ball's rebound velocity is 6.0 m/s.
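A quick consistency check on the arithmetic above (an added verification): the impulse-momentum theorem requires $\Delta p = J$, and indeed

$\Delta p = m(v_{fx} - v_{0x}) = (0.25~kg)(6.0~m/s - (-10~m/s)) = 4.0~N~s = J.$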
2018-12-11 15:40:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7230639457702637, "perplexity": 530.6853736780408}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823657.20/warc/CC-MAIN-20181211151237-20181211172737-00475.warc.gz"}
https://answers.ros.org/question/322778/catkin_make-hector_slam-error/
# catkin_make hector_slam error

Hi, I have a problem when running catkin_make hector_slam; there is always an error that says

make: *** No rule to make target 'hector_slam'. Stop. Invoking "make hector_slam -j4 -l4" failed

I don't know what is wrong. If you have any idea, please let me know. Thank you
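A note on the likely cause (an editorial sketch; the question has no posted answer in the original): positional arguments to catkin_make are forwarded to make as build targets, so catkin_make hector_slam asks make for a target literally named hector_slam, which does not exist, hence the error. The usual invocations, assuming they are run from the root of the catkin workspace:

$ catkin_make                                    # build every package in the workspace
$ catkin_make --only-pkg-with-deps hector_slam   # whitelist one package and its dependencies

After using --only-pkg-with-deps, running catkin_make -DCATKIN_WHITELIST_PACKAGES="" restores building of the whole workspace.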
2022-08-10 21:12:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5221124291419983, "perplexity": 2844.209890313685}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571210.98/warc/CC-MAIN-20220810191850-20220810221850-00232.warc.gz"}
https://pennylane.readthedocs.io/en/stable/code/api/pennylane.operation.Expectation.html
# qml.operation.Expectation

Expectation = expval

An enumeration which represents returning the expectation value of an observable on specified wires.

Type: Enum
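A minimal usage sketch (an added illustration, not from the docs page; the device and circuit are arbitrary, and the exact attribute layout of measurements varies across PennyLane versions): measurements requested via qml.expval are the ones tagged with this Expectation return type.

```python
import pennylane as qml

# Single-qubit device; "default.qubit" is PennyLane's built-in simulator.
dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def circuit():
    qml.Hadamard(wires=0)
    # qml.expval(...) requests the expectation value of an observable
    # on the specified wire -- the measurement kind this enum names.
    return qml.expval(qml.PauliZ(0))

print(circuit())  # <Z> after a Hadamard on |0> is approximately 0.0
```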
2021-09-26 04:15:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23727814853191376, "perplexity": 5304.211332703812}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057796.87/warc/CC-MAIN-20210926022920-20210926052920-00264.warc.gz"}
http://math.soimeme.org/~arunram/Resources/PopovVL/DCRGFormulationOfTheResults.html
## Discrete complex reflection groups

Last update: 12 May 2014

## Notes and References

This is an excerpt of the lecture notes Discrete complex reflection groups by V.L. Popov. Lectures delivered at the Mathematical Institute, Rijksuniversiteit Utrecht, October 1980.

## Formulation of the results

We assume in this chapter that $k=ℂ$. Let $W$ be an irreducible infinite $r$-group, $W ⊂ A(E)$. As we have seen in the example above, there are two possibilities: either $W$ is noncrystallographic (i.e. $E/W$ is not compact) or $W$ is crystallographic ($E/W$ is compact). First, we shall describe the structure of noncrystallographic groups. To do this we need an auxiliary construction.

### Complexifications and real forms

Let us consider $V$ as a real vector space (of dimension $2n$). A linear subspace $V_ℝ$ of this real vector space is called a real form of $V$ if

a) the natural map $V_ℝ ⊗_ℝ ℂ → V$ is an isomorphism, i.e. some (hence, any) $ℝ$-basis of $V_ℝ$ is a $ℂ$-basis of $V$;

b) the restriction $⟨\ |\ ⟩|_{V_ℝ}$ of $⟨\ |\ ⟩$ to $V_ℝ$ is real-valued (hence $V_ℝ$ is euclidean with respect to $⟨\ |\ ⟩|_{V_ℝ}$).

If $V_ℝ$ is a real form of $V$ then $V$ is the complexification of $V_ℝ$. Let $a ∈ E$ be a point. We can consider $E$ as a real affine space of dimension $2n$. The affine subspace $E_ℝ = a + V_ℝ$ of this affine space is called a real form of $E$, and $E$ is called the complexification of $E_ℝ$. It is clear that every real euclidean linear, resp. affine space is isomorphic to a real form of a certain complex hermitian linear, resp. affine space.

Proposition. One has the following properties:

1) $U(V)$ acts transitively on the set of real forms of $V$.
2) The group of motions of $E$ acts transitively on the set of real forms of $E$.
3) Every motion $γ$ of a euclidean affine space $E_ℝ$ can be extended in a unique way to a motion $γ_ℂ$ of $E$. This motion $γ_ℂ$ is called the complexification of $γ$ (and $γ$ is called the real form of $γ_ℂ$).
4) $\dim_ℝ H_γ = \dim_ℂ H_{γ_ℂ}$. Specifically, $γ$ is a reflection iff $γ_ℂ$ is a reflection.

Proof. Proof is left to the reader. $\square$

This proposition gives a method for constructing noncrystallographic infinite $r$-groups. Indeed, let $G ⊂ A(E_ℝ)$ be an infinite (real) $r$-group. Then it is easy to see that

$G_ℂ = \{γ_ℂ \mid γ ∈ G\} ⊂ A(E)$

is an infinite complex noncrystallographic $r$-group (and $G_ℂ$ is irreducible if and only if $G$ is).

### Classification of infinite irreducible complex noncrystallographic $r$-groups: the result

It appears that the construction above leads to any such group. More precisely, one has the following theorem (see also Section 1.5, 2)):

Theorem. Let $W$ be an infinite irreducible complex $r$-group. Then $W$ is noncrystallographic if and only if it is equivalent to the complexification of an irreducible affine Weyl group.

Proof. Proof is given in Section 3.4. $\square$

The description of crystallographic groups is much more complicated. In order to give this description we need some preparations and extra notation.

### Ingredients of the description

The subgroup of translations in $W$ will be denoted by $\mathrm{Tran}\,W$,

$\mathrm{Tran}\,W = W ∩ \mathrm{Tran}\,A(E),$

cf. Section 1.1.
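A concrete illustration of these ingredients in the smallest case (an added example, using only the definitions above): let $G ⊂ A(E_ℝ)$, $E_ℝ = ℝ$, be the affine Weyl group of type $\tilde{A}_1$, generated by the two reflections $x ↦ -x$ and $x ↦ 2-x$, and let $W = G_ℂ ⊂ A(ℂ)$ be its complexification, which by the theorem above is a noncrystallographic infinite $r$-group. The composition of the two generating reflections is the translation $z ↦ z+2$, so $\mathrm{Tran}\,W = 2ℤ ⊂ ℂ$ and $\mathrm{Lin}\,W = \{±1\}$. Note that this lattice has rank $1 < 2n = 2$, matching the noncompactness of $E/W$; for the crystallographic groups described below, $\mathrm{Tran}\,W$ will instead be a lattice of full rank $2n$.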
It is clear that $\mathrm{Tran}\,W ⊲ W$ and $W/\mathrm{Tran}\,W ≅ \mathrm{Lin}\,W$. We usually identify $\mathrm{Tran}\,W$ with a subgroup of the additive group of $V$ by means of the map $γ_v ↦ v$. Clearly this subgroup is a $\mathrm{Lin}\,W$-invariant lattice in $V$. It will be proven in Section 3.1 that $\mathrm{Tran}\,W$ is a lattice of full rank (i.e. of rank $2n$) and $\mathrm{Lin}\,W$ is a finite group (hence, $\mathrm{Lin}\,W$ is a finite irreducible complex linear $r$-group, see 1.4). Therefore, to describe $W$, one needs to point out a group $\mathrm{Lin}\,W ⊂ GL(V)$ from the Shephard and Todd list (i.e. from the theorem in 1.6), a $\mathrm{Lin}\,W$-invariant lattice $\mathrm{Tran}\,W ⊂ V$ of rank $2n$, and the way $\mathrm{Lin}\,W$ and $\mathrm{Tran}\,W$ are "glued" together. This is done below as follows:

1) $\mathrm{Lin}\,W$ is given by its graph as in Section 1.6.

2) $\mathrm{Tran}\,W$ is described explicitly by linear combinations of vectors $e_j$, $1 ≤ j ≤ s$, that generate $\mathrm{Tran}\,W$. Here $R_j = R_{e_j,θ_j}$, $1 ≤ j ≤ s$, is a fixed generating system of reflections of $\mathrm{Lin}\,W$ which is related to the graph of $\mathrm{Lin}\,W$ given in 1) as described in 1.6. To point out the vectors $e_j$, $1 ≤ j ≤ s$, explicitly, we assume that $V$ is a subspace of a standard hermitian infinite-dimensional coordinate space $ℂ^∞$, i.e. the space whose elements are the sequences $(a_1,a_2,\dots)$ with only a finite number of nonzero elements $a_j$, and whose scalar product is defined by the formula

$⟨(a_1,a_2,\dots) \mid (b_1,b_2,\dots)⟩ = \sum_{j=1}^{∞} a_j \overline{b}_j.$

The vectors $e_j$, $1 ≤ j ≤ s$, are given by their coordinates on a standard basis $ε_1,ε_2,\dots$ of $ℂ^∞$, where $ε_j = (0,\dots,0,1,0,\dots)$ (the $1$ in the $j$-th place).

3) The problem of how to describe the "gluing" of $\mathrm{Lin}\,W$ and $\mathrm{Tran}\,W$ comes down to the determination of an extension of $\mathrm{Tran}\,W$ by $\mathrm{Lin}\,W$,

$0 → \mathrm{Tran}\,W → W → \mathrm{Lin}\,W → 1.$

Therefore it is done by means of cohomology. Let us show how it can be done.

### Cohomology

Let $G$ be a subgroup of $A(E)$ and write $T = \mathrm{Tran}\,G$, $K = \mathrm{Lin}\,G$. Choose a point $a ∈ E$. Take $P ∈ K$ and let $γ ∈ G$ be such that $\mathrm{Lin}\,γ = P$. We have

$κ_a(γ) = (P, s(P)), \quad s(P) ∈ V.$

It is easy to see that the map $\overline{s}: K → V/T$, $\overline{s}(P) = s(P)+T$, is well defined and is in fact a 1-cocycle, i.e.

$\overline{s}(PQ) = \overline{s}(P) + P\overline{s}(Q), \quad P,Q ∈ K$

(here $K$ acts on $V/T$ in the natural way). Vice versa, if $\overline{r}: K → V/T$ is an arbitrary 1-cocycle, let us consider an arbitrary map $r: K → V$ such that $\overline{r}(P) = r(P)+T$, $P ∈ K$. Then the set

$\{(P, r(P)+t) \mid t ∈ T, P ∈ K\}$

is a subgroup $H$ of $A(E)$ with $\mathrm{Lin}\,H = K$ and $\mathrm{Tran}\,H = T$. If we replace $a$ by another point $b ∈ E$, then (see 1.1)

$κ_b(γ) = κ_a(γ_{a-b}\,γ\,γ_{b-a}) = (P,\; s(P) + \underbrace{v - Pv}_{\text{1-coboundary}}), \quad \text{where } v = a-b.$

Therefore we have a bijection between the set of $\mathrm{Tran}\,A(E)$-conjugacy classes of subgroups $G$ of $A(E)$ with $\mathrm{Lin}\,G = K$, $\mathrm{Tran}\,G = T$ and the group $H^1(K,V/T)$. However we have to consider subgroups of $A(E)$ up to equivalence, i.e.
up to $A\left(E\right)\text{-conjugation}$ (and not just up to $\text{Tran} A\left(E\right)\text{-conjugation)!}$ This can be done as follows by means of an extra relation on ${H}^{1}\left(K,V/T\right)\text{.}$ Let $N(K,T)= { Q∈GL(V) | QKQ-1 =K,QT=T } .$ If $Q\in N\left(K,T\right)$ and $\stackrel{‾}{s}:K\to V/T$ is a $1\text{-cocycle,}$ resp. $1\text{-coboundary,}$ then it is easy to check that the map $Q(s‾):K →V/T$ given by the formula $Q(s‾)(P) =Qs‾ (Q-1PQ), P∈K,$ is again a $1\text{-cocycle,}$ resp. $1\text{-coboundary}$ (here $Q$ acts on $V/T$ in the natural way). Therefore we have an action of $N\left(K,T\right)$ on ${H}^{1}\left(K,V/T\right)$ (clearly, by means of automorphisms). Let $\delta \in A\left(E\right)$ be such that $\text{Lin} \delta G{\delta }^{-1}=K,$ $\text{Tran} \delta G{\delta }^{-1}=T\text{.}$ We want to calculate the cocycle that corresponds to $\delta G{\delta }^{-1}\text{.}$ Changing $\delta$ to $\delta {\gamma }_{v},$ where $v=\left(\text{Lin} {\delta }^{-1}\right)\left(a-\delta \left(a\right)\right),$ we can assume that ${\kappa }_{a}\left(\delta \right)=\left(Q,0\right),$ $Q\in N\left(K,T\right)\text{.}$ Let $P\in K$ and $\lambda \in G$ be such that ${\kappa }_{a}\left(\lambda \right)=\left({Q}^{-1}PQ,s\left({Q}^{-1}PQ\right)\right)\text{.}$ Then ${\kappa }_{a}\left(\delta \lambda {\delta }^{-1}\right)=\left(P,Qs\left({Q}^{-1}PQ\right)\right)\text{.}$ Therefore the cocycle corresponding to $\delta G{\delta }^{-1}$ is $Q\left(\stackrel{‾}{s}\right)$ where $\stackrel{‾}{s}$ is the cocycle corresponding to $G\text{.}$ We see now that there is a bijection between the set of classes of equivalent subgroups $G\subset A\left(E\right)$ with $\text{Lin} G=K,$ $\text{Tran} G=T$ and the set of $N\left(K,T\right)\text{-orbits}$ in ${H}^{1}\left(K,V/T\right)\text{.}$ With all these facts in mind, we determine the extension $W$ (of $\text{Tran} W$ by $\text{Lin} W\text{)}$ by pointing out a $1\text{-cocycle}$ which represents the corresponding element of ${H}^{1}\left(\text{Lin} W,V/\text{Tran} W\right)$ (in fact, the whole $N\left(\text{Lin} W,\text{Tran} W\right)\text{-orbit}$ in ${H}^{1}\left(\text{Lin} W,V/\text{Tran} W\right)\text{).}$ In order to do so, we need only give the values of this $1\text{-cocycle}$ on the elements of a generating system of reflections of $\text{Lin} W\text{.}$ Technically it is more convenient to realize it as follows. Let $\stackrel{˜}{\text{Lin} W}$ be a free group with generators ${r}_{j},$ $1\le j\le s\text{.}$ We have an epimorphism $\varphi :\stackrel{˜}{\text{Lin} W}\to \text{Lin} W,$ $\varphi \left({r}_{j}\right)={R}_{j},$ $1\le j\le s\text{.}$ The kernel of $\varphi$ is the subgroup of "relations" of $\text{Lin} W\text{.}$ This epimorphism leads in a natural way to an action of $\stackrel{˜}{\text{Lin} W}$ on $V\text{.}$ A $1\text{-cocycle}$ $c$ of $\stackrel{˜}{\text{Lin} W}$ with values in $V$ is given by its values on the generators ${r}_{j},$ $c(rj),1≤j≤s,$ and these values may be arbitrary (because $\stackrel{˜}{\text{Lin} W}$ is free). It is easy to see that the formula $Rj→c(rj)+ Tran W,1≤j≤s,$ defines a $1\text{-cocycle}$ of $\text{Lin} W$ with values in $V/\text{Tran} W$ iff $c\left(F\right)\in \text{Tran} W$ for every $F\in \text{Ker} \varphi \text{.}$ It is also clear that every $1\text{-cocycle}$ of $\text{Lin} W$ with values in $V/\text{Tran} W$ is obtained in such a way. 
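A one-line verification of the cocycle identity stated above (an added step, using only the composition rule of motions): if $κ_a(γ) = (P, s(P))$ and $κ_a(δ) = (Q, s(Q))$, then

$κ_a(γδ) = (PQ,\; s(P) + P\,s(Q)),$

so reducing modulo $T$ gives exactly $\overline{s}(PQ) = \overline{s}(P) + P\overline{s}(Q)$.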
We shall give the extension $W$ (of $\mathrm{Tran}\,W$ by $\mathrm{Lin}\,W$) by writing down the vectors $c(r_j)$, $1 ≤ j ≤ s$.

We are now ready to formulate the results of the classification of infinite irreducible crystallographic $r$-groups. Denote by $K_b$ the finite linear irreducible $r$-group which has the number $b$ in the list of Shephard and Todd (i.e. in the first column of Table 1; this in spite of the slight confusion with Cohen's notation $K_5$, $K_6$).

### Description of the group of linear parts: the result

First of all, there is an analogue of the theorem of Section 1.5.

Theorem. Let $K ⊂ GL(V)$ be an irreducible finite $r$-group. Then the following properties are equivalent:

a) There exists a nonzero $K$-invariant lattice in $V$.
b) There exists a $K$-invariant lattice of rank $2n$ in $V$.
c) $K = \mathrm{Lin}\,W$ where $W$ is an infinite crystallographic $r$-group.
d) The ring with unity, generated over $ℤ$ by all cyclic products of a graph of $K$, lies in the ring of algebraic integers of a purely imaginary quadratic extension of $ℚ$.
e) $K$ is defined over a purely imaginary quadratic extension of $ℚ$.
f) $K$ is one of the groups: $K_1$; $K_2$ ($m=2,3,4,6$); $K_3$ ($m=2,3,4,6$); $K_4$; $K_5$; $K_8$; $K_{12}$; $K_{24}$; $K_{25}$; $K_{26}$; $K_{28}$; $K_{29}$; $K_{31}$; $K_{32}$; $K_{33}$; $K_{34}$; $K_{35}$; $K_{36}$; $K_{37}$.

Proof. Proof is given in Section 4.6. $\square$

Now we shall describe the crystallographic groups themselves.

### The list of irreducible infinite crystallographic complex groups

This list is given in the following theorem (we use the notation: $Ω = \{z ∈ ℂ \mid -\frac{1}{2} ≤ \mathrm{Re}\,z < \frac{1}{2},\ |z| ≥ 1$ if $\mathrm{Re}\,z ≤ 0$ and $|z| > 1$ if $\mathrm{Re}\,z > 0\}$, the "modular strip"; $[α,β] = \{aα + bβ \mid a,b ∈ ℤ\}$ for arbitrary $α,β ∈ ℂ$).

Theorem. The following list is the complete list of irreducible infinite crystallographic complex $r$-groups $W$ (considered up to equivalence). The proof is given in the subsequent chapters.
Table 2: The irreducible infinite crystallographic complex $r$-groups. Columns: notation of $W$ | $n = \dim W$ | $\mathrm{Lin}\,W$ | $\mathrm{Tran}\,W$ | $e_1,\dots,e_s$ | cocycle $c$. (A column omitted in a row is shared with the nearest entry above it.)

$[A_s]^α$, $s ≥ 1$ | $s$ | $K_1$, type $A_s$, $s ≥ 1$ | $[1,α]e_1+\dots+[1,α]e_s$, $α ∈ Ω$ | $e_j = (ε_j-ε_{j+1})/\sqrt{2}$, $j=1,\dots,s$ | $c=0$
$[G(2,1,s)]_1^α$, $s ≥ 3$ | $s$ | $K_2$, type $G(2,1,s)$, $s ≥ 3$ | $[1,α]e_1+[1,α]\sqrt{2}e_2+\dots+[1,α]\sqrt{2}e_s$, $α ∈ Ω$ | $e_1 = ε_1$, $e_j = (ε_{j-1}-ε_j)/\sqrt{2}$, $j=2,\dots,s$
$[G(2,1,s)]_2^β$, $s ≥ 3$ | $[1,β]e_1+[1,\frac{1+β}{2}]\sqrt{2}e_2+\dots+[1,\frac{1+β}{2}]\sqrt{2}e_s$, $β ∈ Ω$
$[G(2,1,s)]_3^γ$, $s ≥ 3$ | $[1,γ]e_1+[\frac{1}{2},γ]\sqrt{2}e_2+\dots+[\frac{1}{2},γ]\sqrt{2}e_s$, $γ ∈ Ω$
$[G(2,1,s)]_4^δ$, $s ≥ 3$ | $[1,δ]e_1+[1,\frac{δ}{2}]\sqrt{2}e_2+\dots+[1,\frac{δ}{2}]\sqrt{2}e_s$, $δ ∈ Ω$
$[G(2,1,s)]_5^λ$, $s ≥ 3$ | $[1,λ]e_1+[\frac{1}{2},\frac{λ}{2}]\sqrt{2}e_2+\dots+[\frac{1}{2},\frac{λ}{2}]\sqrt{2}e_s$, $λ ∈ Ω$
$[G(3,1,s)]_1$, $s ≥ 2$ | $s$ | $K_2$, type $G(3,1,s)$, $s ≥ 2$ | $[1,ω]e_1+[1,ω]\sqrt{2}e_2+\dots+[1,ω]\sqrt{2}e_s$
$[G(3,1,s)]_2$, $s ≥ 2$ | $[1,ω]e_1+[1,ω]i\sqrt{2/3}\,e_2+\dots+[1,ω]i\sqrt{2/3}\,e_s$
$[G(4,1,s)]_1$, $s ≥ 2$ | $s$ | $K_2$, type $G(4,1,s)$, $s ≥ 2$ | $[1,i]e_1+[1,i]\sqrt{2}e_2+\dots+[1,i]\sqrt{2}e_s$
$[G(4,1,s)]_2$, $s ≥ 2$ | $[1,i]e_1+[1,i]ε e_2+\dots+[1,i]ε e_s$
$[G(6,1,s)]$, $s ≥ 2$ | $s$ | $K_2$, type $G(6,1,s)$, $s ≥ 2$ | $[1,ω]e_1+[1,ω]\sqrt{2}e_2+\dots+[1,ω]\sqrt{2}e_s$
$[G(2,2,s)]^α$, $s ≥ 3$ | $s$ | $K_2$, $G(2,2,s)$, $s ≥ 3$ | $[1,α]e_1+\dots+[1,α]e_s$, $α ∈ Ω$ | $e_1 = -(ε_1+ε_2)/\sqrt{2}$, $e_j = (ε_{j-1}-ε_j)/\sqrt{2}$, $j=2,\dots,s$
$[G(3,3,s)]$, $s ≥ 3$ | $s$ | $K_2$, type $G(3,3,s)$, $s ≥ 3$ | $[1,ω]e_1+\dots+[1,ω]e_s$ | $e_1 = ωε_1-ε_2$, $e_j = (ε_{j-1}-ε_j)/\sqrt{2}$, $j=2,\dots,s$
$[G(4,4,s)]$, $s ≥ 3$ | $s$ | $K_2$, type $G(4,4,s)$, $s ≥ 3$ | $[1,i]e_1+\dots+[1,i]e_s$ | $e_1 = (iε_1-ε_2)/\sqrt{2}$, $e_j = (ε_{j-1}-ε_j)/\sqrt{2}$, $j=2,\dots,s$
$[G(6,6,s)]$, $s ≥ 3$ | $s$ | $K_2$, type $G(6,6,s)$, $s ≥ 3$ | $[1,ω]e_1+\dots+[1,ω]e_s$ | $e_1 = ((1+ω)ε_1-ε_2)/\sqrt{2}$, $e_j = (ε_{j-1}-ε_j)/\sqrt{2}$, $j=2,\dots,s$
$[G(2,1,2)]_1^α$ | $2$ | $K_2$, type $G(2,1,2)$ = type $G(4,4,2)$ | $[1,α]e_1+[1,α]\sqrt{2}e_2$, $α ∈ Ω$ | $e_1 = ε_1$, $e_2 = (ε_1-ε_2)/\sqrt{2}$
$[G(2,1,2)]_2^β$ | $[1,β]e_1+[1,\frac{β}{2}]\sqrt{2}e_2$, $β ∈ Ω$
$[G(2,1,2)]_3^γ$ | $[1,γ]e_1+[1,\frac{1+γ}{2}]\sqrt{2}e_2$, $γ ∈ Ω$
$[G(6,6,2)]_1^α$ | $2$ | $K_2$, type $G(6,6,2)$ | $[1,α]e_1+[1,α](2+ω)e_2$, $α ∈ Ω$ | $e_1 = ((1+ω)ε_1-ε_2)/\sqrt{2}$, $e_2 = (ε_1-ε_2)/\sqrt{2}$
$[G(6,6,2)]_2^β$ | $[1,β]e_1+[1,\frac{β}{3}](2+ω)e_2$, $β ∈ Ω$
$[G(6,6,2)]_3^γ$ | $[1,γ]e_1+[1,\frac{1+γ}{3}](2+ω)e_2$, $γ ∈ Ω$
$[G(6,6,2)]_4^δ$ | $[1,δ]e_1+[1,\frac{2+δ}{3}](2+ω)e_2$, $δ ∈ Ω$
$[G(4,2,s-1)]_1$, $s ≥ 3$ | $s-1$ | $K_2$, type $G(4,2,s-1)$, $s ≥ 3$ | $T = [1,i]e_1+\dots+[1,i]e_{s-1}$ | $e_1 = (iε_1-ε_2)/\sqrt{2}$, $e_j = (ε_{j-1}-ε_j)/\sqrt{2}$, $j=2,\dots,s-1$, $e_s = ε_{s-1}$
$[G(4,2,s-1)]_1^*$, $s ≥ 3$ | $c(r_j) = 0$, $j=1,\dots,s-1$, $c(r_s) = e_s/\sqrt{2}$
$[G(4,2,s-1)]_2$, $s ≥ 3$ | $T ∪ (T+\frac{1+i}{2}(e_1+e_2)) = [1,i]e_1+\dots+[1,i]e_{s-1}+\frac{1}{\sqrt{2}}[1,i]e_s$ | $c=0$
$[G(4,2,2)]_3$ | $2$ | $K_2$, type $G(4,2,2)$ | $[1,i]e_1+[1,i](1+i)e_2$
$[G(6,2,s-1)]_1$, $s ≥ 3$ | $s-1$ | $K_2$, type $G(6,2,s-1)$, $s ≥ 3$ | $[1,ω]e_1+\dots+[1,ω]e_{s-1}$ | $e_1 = ((1+ω)ε_1-ε_2)/\sqrt{2}$, $e_j = (ε_{j-1}-ε_j)/\sqrt{2}$, $j=2,\dots,s-1$, $e_s = ε_{s-1}$
$[G(6,2,2)]_2$ | $2$ | $K_2$, type $G(6,2,2)$ | $[1,ω]e_1+[1,ω](2+ω)e_2$
$[G(6,3,s-1)]_1$, $s ≥ 3$ | $s-1$ | $K_2$, type $G(6,3,s-1)$, $s ≥ 3$ | $[1,ω]e_1+\dots+[1,ω]e_{s-1}$
$[G(6,3,2)]_2$ | $2$ | $K_2$, type $G(6,3,2)$ | $[1,2ω]e_1+[2,ω]e_2$
$[K_3(3)]$ | $1$ | $K_3$, $m=3$ | $[1,ω]e_1$ | $e_1 = ε_1$
$[K_3(4)]$ | $K_3$, $m=4$ | $[1,i]e_1$
$[K_3(6)]$ | $K_3$, $m=6$ | $[1,ω]e_1$
$[K_4]$ | $2$ | $K_4$ | $[1,ω]e_1+[1,ω]e_2$ | $e_1 = ε_1$, $e_2 = \frac{1-ω}{3}(ε_1+ε_2+ε_3)$
$[K_5]$ | $K_5$ | $[1,ω]e_1+[1,ω]\sqrt{2}e_2$ | $e_1 = ε_1$, $e_2 = \frac{1-ω}{3}(\sqrt{2}ε_1+ε_2)$
$[K_8]$ | $K_8$ | $[1,i]e_1+[1,i]e_2$ | $e_1 = ε_1$, $e_2 = \frac{1-i}{2}(ε_1-ε_2)$
$[K_{12}]$ | $K_{12}$ | $[1,i\sqrt{2}]e_1+[1,i\sqrt{2}]e_2$ | $e_1 = \frac{1}{\sqrt{2}}ε_1+\frac{1+i}{2}ε_2$, $e_2 = \frac{\sqrt{2}+(\sqrt{2}-2)i}{4}ε_1+\frac{2+\sqrt{2}-\sqrt{2}i}{4}ε_2$, $e_3 = \frac{1}{\sqrt{2}}ε_1+\frac{1-i}{2}ε_2$
$[K_{12}]^*$ | $c(r_1) = c(r_2) = 0$, $c(r_3) = \frac{1+i}{2}e_3$
$[K_{24}]$ | $3$ | $K_{24}$ | $[1,\frac{1+i\sqrt{7}}{2}]e_1+[1,\frac{1+i\sqrt{7}}{2}]e_2+[1,\frac{1+i\sqrt{7}}{2}]e_3$ | $e_1 = ε_2$, $e_2 = (1-i\sqrt{7})(ε_2+ε_3)/4$, $e_3 = (-ε_1-ε_2+\frac{1+i\sqrt{7}}{2}ε_3)/2$ | $c=0$
$[K_{25}]$ | $K_{25}$ | $[1,ω]e_1+[1,ω]e_2+[1,ω]e_3$ | $e_1 = ε_3$, $e_2 = \frac{1-ω}{3}(ε_1+ε_2+ε_3)$, $e_3 = -ωε_2$
$[K_{26}]_1$ | $K_{26}$ | $[1,ω]e_1+[1,ω]e_2+[1,ω]\sqrt{2}e_3$ | $e_1 = \frac{1-ω^2}{3}(ε_1+ε_2+ε_3)$, $e_2 = ε_3$, $e_3 = \frac{1}{\sqrt{2}}(ε_2-ε_3)$
$[K_{26}]_2$ | $[1,ω]e_1+[1,ω]e_2+[1,ω]i\sqrt{2/3}\,e_3$
$[F_4]_1^α$ | $4$ | $K_{28}$, $F_4$ | $[1,α]e_1+[1,α]e_2+[1,α]\sqrt{2}e_3+[1,α]\sqrt{2}e_4$, $α ∈ Ω$ | $e_j = (ε_{j+1}-ε_{j+2})/\sqrt{2}$, $j=1,2$, $e_3 = ε_4$, $e_4 = (ε_1-ε_2-ε_3-ε_4)/2$
$[F_4]_2^β$ | $[1,β]e_1+[1,\frac{β}{2}]e_2+[1,β]\sqrt{2}e_3+[1,\frac{β}{2}]\sqrt{2}e_4$, $β ∈ Ω$
$[F_4]_3^γ$ | $[1,γ]e_1+[1,γ]e_2+[1,\frac{1+γ}{2}]\sqrt{2}e_3+[1,\frac{1+γ}{2}]\sqrt{2}e_4$, $γ ∈ Ω$
$[K_{29}]$ | $K_{29}$ | $[1,i]e_1+[1,i]e_2+[1,i]e_3+[1,i]e_4$ | $e_1 = \frac{1}{\sqrt{2}}(ε_2-ε_4)$, $e_2 = \frac{1}{\sqrt{2}}(-iε_2+ε_3)$, $e_3 = \frac{1}{\sqrt{2}}(-ε_3+ε_4)$, $e_4 = \frac{-1+i}{2\sqrt{2}}(ε_1+ε_2+ε_3+ε_4)$
$[K_{31}]$ | $K_{31}$ | $[1,i]e_1+[1,i]e_2+[1,i]e_3+[1,i]e_4$ | $e_1 = \frac{1}{\sqrt{2}}(ε_2-ε_4)$, $e_2 = \frac{1}{\sqrt{2}}(-iε_2+ε_3)$, $e_3 = \frac{1}{\sqrt{2}}(-ε_3+ε_4)$, $e_4 = \frac{-1+i}{2\sqrt{2}}(ε_1+ε_2+ε_3+ε_4)$, $e_5 = \frac{1-i}{\sqrt{2}}ε_4$
$[K_{31}]^*$ | $c(r_j) = 0$, $j=1,2,3,4$, $c(r_5) = \frac{1+i}{2}e_5$
$[K_{32}]$ | $K_{32}$ | $[1,ω]e_1+\dots+[1,ω]e_4$ | $e_1 = ε_3$, $e_2 = \frac{1-ω}{3}(ε_1+ε_2+ε_3)$, $e_3 = -ωε_2$, $e_4 = \frac{ω^2-ω}{3}(-ε_1+ε_2+ε_4)$ | $c=0$
$[K_{33}]$ | $5$ | $K_{33}$ | $[1,ω]e_1+\dots+[1,ω]e_n$ | $e_1 = \frac{ω}{\sqrt{2}}(ε_5+ε_6)$, $e_2 = -\frac{ω}{2\sqrt{2}}(-ε_1+(1+2ω)ε_2+ε_3+ε_4+ε_5+ε_6)$, $e_j = \frac{1}{\sqrt{2}}(ε_{j-2}-ε_{j-1})$, $j=3,4,\dots,n$
$[K_{34}]$ | $6$ | $K_{34}$
$[E_6]^α$ | $K_{35}$, $E_6$ | $[1,α]e_1+\dots+[1,α]e_n$, $α ∈ Ω$ | $e_1 = (ε_1-ε_2-ε_3-ε_4-ε_5-ε_6-ε_7+ε_8)/2\sqrt{2}$, $e_2 = (ε_1+ε_2)/\sqrt{2}$, $e_j = (-ε_{j-2}+ε_{j-1})/\sqrt{2}$, $j=3,\dots,n$
$[E_7]^α$ | $7$ | $K_{36}$, $E_7$
$[E_8]^α$ | $8$ | $K_{37}$, $E_8$

### Equivalence

Theorem.
The following list is the complete list of groups $W$ and $W'$, $W ≠ W'$, from Table 2 which are equivalent:

Table 3: Pairs of equivalent irreducible infinite crystallographic complex $r$-groups. Columns: $W$ | $W'$ | condition.

$[G(2,1,s)]_2^{1+ω}$, $s ≥ 3$ | $[G(2,1,s)]_3^{1+ω}$, $s ≥ 3$ | --
$[G(2,1,s)]_2^{1+ω}$, $s ≥ 3$ | $[G(2,1,s)]_4^{1+ω}$, $s ≥ 3$ | --
$[G(2,1,s)]_3^{i}$, $s ≥ 3$ | $[G(2,1,s)]_4^{i}$, $s ≥ 3$ | --
$[G(2,1,2)]_2^β$ | $[G(2,1,2)]_2^{-2/β}$ | $-2/β ∈ Ω$
$[G(2,1,2)]_2^{1+ω}$ | $[G(2,1,2)]_3^{1+ω}$ | --
$[G(2,1,2)]_2^β$ | $[G(2,1,2)]_3^{1-2/β}$ | $1-2/β ∈ Ω$
$[G(2,1,2)]_2^γ$ | $[G(2,1,2)]_3^{(γ-1)/(γ+1)}$ | $(γ-1)/(γ+1) ∈ Ω$
$[G(2,1,2)]_2^β$ | $[G(2,1,2)]_3^{-1-2/β}$ | $-1-2/β ∈ Ω$
$[G(6,6,2)]_2^β$ | $[G(6,6,2)]_2^{-3/β}$ | $-3/β ∈ Ω$
$[G(6,6,2)]_3^γ$ | $[G(6,6,2)]^{(2γ-1)/(γ+1)}$ | $(2γ-1)/(γ+1) ∈ Ω$
$[G(6,6,2)]_2^β$ | $[G(6,6,2)]^{-1+3/β}$ | $-1+3/β ∈ Ω$
$[G(6,6,2)]_2^β$ | $[G(6,6,2)]_3^{2-3/β}$ | $2-3/β ∈ Ω$
$[F_4]_2^β$ | $[F_4]_2^{-2/β}$ | $-2/β ∈ Ω$
$[F_4]_2^{1+ω}$ | $[F_4]_3^{1+ω}$ | --
$[F_4]_2^β$ | $[F_4]_3^{1-2/β}$ | $1-2/β ∈ Ω$
$[F_4]_3^γ$ | $[F_4]_3^{(γ-1)/(γ+1)}$ | $(γ-1)/(γ+1) ∈ Ω$
$[F_4]_2^β$ | $[F_4]_3^{-1-2/β}$ | $-1-2/β ∈ Ω$

Proof of this theorem is rather technical and will not be given here.

### The structure of an extension of Tran W by Lin W

As we have seen in Section 1.5, if $k=ℝ$ then the structure of an infinite irreducible $r$-group $W$ as an extension of $\mathrm{Tran}\,W$ by $\mathrm{Lin}\,W$ is very simple: it is always a semidirect product. The situation is more complicated when $k=ℂ$, because there exist infinite irreducible complex crystallographic $r$-groups $W$ which are not semidirect products of $\mathrm{Tran}\,W$ and $\mathrm{Lin}\,W$.

Theorem. The groups $W$ from Table 2 which are not semidirect products of $\mathrm{Tran}\,W$ and $\mathrm{Lin}\,W$ are

$[G(4,2,s)]_1^*, \quad [K_{12}]^* \quad \text{and} \quad [K_{31}]^*.$

Theorem. Let $K ⊂ GL(V)$ be a finite irreducible $r$-group and let $T ⊂ V$ be a $K$-invariant lattice.
Assume that there exists a crystallographic $r$-group $W$ with $\text{Lin}\,W=K$, $\text{Tran}\,W=T$. Then the set of those elements of $H^1(K,V/T)$ which correspond to such subgroups $W$ is in fact a subgroup of $H^1(K,V/T)$, and the order of this subgroup is $\le 2$.

### The rings and fields of definition of $\text{Lin}\,W$.

As we have seen in Section 1.5, if $k=ℝ$ then the group $\text{Lin}\,W$ for an infinite irreducible $r$-group $W$ is defined over $ℚ$. If $k=ℂ$ then $\text{Lin}\,W$ for an infinite irreducible crystallographic $r$-group $W$ is defined over a certain purely imaginary quadratic extension of $ℚ$; see the theorem in Section 2.5. We can describe this extension precisely.

Theorem. Let $K\subset GL(V)$ be a finite irreducible complex $r$-group. Then the ring with unity generated over $ℤ$ by the set of all cyclic products related to an arbitrary fixed generating system of reflections of $K$ coincides with the ring $ℤ[\text{Tr}\,K]$ generated over $ℤ$ by the set of traces of all elements of $K$. The ring $ℤ[\text{Tr}\,K]$ is the minimal ring of definition of $K$. This ring is equal to $ℤ$ iff $K$ is the complexification of the Weyl group of an irreducible root system.

The proof is given in Section 4.6.

It is easily seen from Table 1 and the theorem above that for the groups $K=\text{Lin}\,W$, where $W$ is an infinite irreducible crystallographic $r$-group, one has the following table:

Table 4. Linear parts of irreducible infinite crystallographic complex $r$-groups

| $ℤ[\text{Tr}\,K]$ | $K$ | fraction field of $ℤ[\text{Tr}\,K]$ |
| --- | --- | --- |
| $ℤ$ | $K_1=A_s$, $s\ge 1$; $G(2,1,s)=B_s$, $s\ge 2$; $G(2,2,s)=D_s$, $s\ge 3$; $G(6,6,2)=G_2$; $K_{28}=F_4$; $K_{35}=E_6$; $K_{36}=E_7$; $K_{37}=E_8$ | $ℚ$ |
| $ℤ[i]$ | $G(4,1,s)$, $s\ge 2$; $G(4,4,s)$, $s\ge 3$; $G(4,2,s)$, $s\ge 3$; $K_3$ ($m=4$); $K_8$; $K_{29}$; $K_{31}$ | $ℚ(\sqrt{-1})$ |
| $ℤ[2i]$ | $G(4,2,2)$ | $ℚ(\sqrt{-1})$ |
| $ℤ[i\sqrt{2}]$ | $K_{12}$ | $ℚ(\sqrt{-2})$ |
| $ℤ[\omega]$ | $G(3,1,s)$, $s\ge 2$; $G(6,1,s)$, $s\ge 2$; $G(3,3,s)$, $s\ge 3$; $G(6,6,s)$, $s\ge 3$; $G(6,2,s)$, $s\ge 2$; $G(6,3,s)$, $s\ge 3$; $K_3$ ($m=3,6$); $K_4$; $K_5$; $K_{25}$; $K_{26}$; $K_{32}$; $K_{33}$; $K_{34}$ | $ℚ(\sqrt{-3})$ |
| $ℤ[2\omega]$ | $G(6,3,2)$ | $ℚ(\sqrt{-3})$ |
| $ℤ\left[\frac{1+i\sqrt{7}}{2}\right]$ | $K_{24}$ | $ℚ(\sqrt{-7})$ |

### Further remarks

a) In contrast to the real case, there exist $1$-parameter families of inequivalent irreducible complex infinite crystallographic $r$-groups $W$ with a fixed linear part $\text{Lin}\,W$ (i.e. the groups with a fixed linear part may have moduli). We shall see below that an irreducible crystallographic $r$-group $W$ with $\text{Lin}\,W=K$ has moduli iff $ℤ[\text{Tr}\,K]=ℤ$, i.e. iff $K$ is the complexification of the Weyl group of an irreducible root system.
b) It follows from Table 4 (and from a known result in algebraic number theory) that the ring $ℤ[\text{Tr Lin}\,W]$, where $W$ is an infinite irreducible crystallographic $r$-group, is always a unique factorisation domain. It would be interesting to have an a priori proof of this fact.

c) If $k=ℝ$ then it is known (and was proved a priori in 1948-51 by Chevalley and Harish-Chandra) that there exists a bijective correspondence between the set of classes of equivalent infinite (hence crystallographic) $r$-groups ($=$ affine Weyl groups) and the set of classes of isomorphic complex semisimple Lie algebras. Question: is it possible to attach to an infinite complex crystallographic $r$-group a sort of "global object" (like a semisimple Lie algebra in the real case) in such a way that the correspondence between these $r$-groups and "global objects" will be bijective?

$$\begin{array}{ccc} \text{real crystallographic } r\text{-group} & \longleftrightarrow & \text{semisimple complex Lie algebra} \\ \text{complex crystallographic } r\text{-group} & \longleftrightarrow & ? \end{array}$$

We do not know whether such an object exists or not. It is funny that we can calculate (see 4.4) the group which, by analogy with the real case, might be "the center" of this hypothetical object.
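As a small worked check of the theorem on $ℤ[\text{Tr}\,K]$ (our illustration, not part of the source), consider $K=G(4,1,s)$, which consists of the monomial matrices whose nonzero entries are 4th roots of unity. Every trace is then a sum of elements of $\{\pm 1,\pm i\}$, so $\text{Tr}\,K\subset ℤ[i]$; conversely, the element $g=\operatorname{diag}(i,1,\dots,1)\in G(4,1,s)$ has

$$\operatorname{Tr} g = i+(s-1), \qquad\text{so}\qquad i=\operatorname{Tr} g-(s-1)\in ℤ[\text{Tr}\,K].$$

Hence $ℤ[\text{Tr}\,G(4,1,s)]=ℤ[i]$, in agreement with Table 4, and since this ring is strictly larger than $ℤ$, remark a) says these groups have no moduli.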
2023-03-23 02:02:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 773, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.947536051273346, "perplexity": 118.41162863788963}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296944606.5/warc/CC-MAIN-20230323003026-20230323033026-00650.warc.gz"}
http://mathoverflow.net/questions/126831/a-question-from-otto-forsters-book-on-riemann-surfaces
A question from Otto Forster's book on Riemann surfaces

I am reading Section 14, "A Finiteness Theorem", of Otto Forster's book Lectures on Riemann Surfaces, and have come across a problem with Theorem 14.15 on page 117. In the proof Forster introduces a function $$F=\det(f\delta_{\nu\mu}-c_{\nu\mu})_{\nu\mu}$$ which is holomorphic, where $f$ is holomorphic, but I don't know why it follows that $F\xi_\nu\mid_Y=0$. I am wondering whether we should replace $F$ with the matrix $(f\delta_{\nu\mu}-c_{\nu\mu})$, but since the proof relies heavily on this claim, I get puzzled. Is there something wrong, or am I misunderstanding something? How should I understand this theorem? - (TeX comment) Insert a missing dollar sign to make the question readable. –  P Vanchinathan Apr 8 '13 at 9:57 It's not a missing dollar sign. Actually I can fix it by escaping (with a backslash) two of the underscores, but I do NOT understand why that ought to work, hence I'm leaving the question as is. Someone more familiar with the inner workings of jsMath should look at it. –  José Figueroa-O'Farrill Apr 8 '13 at 10:23 It can also be fixed by enclosing a few of the pieces of MathJax with backticks, e.g. (dollar sign)...(dollar sign), but since this hack will not work on the Stack Exchange network, I'm hesitant to add to what is already a daunting problem (given how commonly this has been used in the past on MO). See here for an example of how math inside backticks appears on the SE network: math.stackexchange.com/questions/354677/… –  Zev Chonoles Apr 8 '13 at 12:12 Actually I asked the very same question again, and corrected the terrible math writing by simply adding "`" in front of every dollar sign, just as suggested. I don't know why it works, and it seems to happen when you type subscripts. –  xuxuzhu Apr 8 '13 at 16:15 The argument is similar to the proof of Nakayama's lemma. Take everything in (1) to one side and multiply by the adjugate matrix. –  Mohan Ramachandran Apr 8 '13 at 18:34 I found that argument confusing too. If you choose a basis $\xi_{\mu}$ of eigenvectors of $C=(c_{\mu \nu})$, you can arrange that $C$ is in Jordan normal form. Then $F=\det(fI-C)$ is the product of the determinants of the various blocks, and so multiplying any generalized eigenvector of $C$ by it gives $0$, because it applies $f-\lambda$ (where $\lambda$ is the eigenvalue) enough times to kill the generalized eigenvector: $(f-\lambda)^k \xi_{\nu} = (C-\lambda I)^k \xi_{\nu}=0$ if $k$ is as large as the size of the Jordan block that $\xi_{\nu}$ belongs to. Check this carefully, because I haven't thought about Forster's book in a long time (and because my first answer was wrong).
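For what it's worth, the adjugate argument mentioned in the comments can be spelled out in one line (our addition, under the assumption that the relation in the proof can be rewritten as $(fI-C)\xi=0$ for the column vector $\xi$ of the $\xi_\nu$): since $\operatorname{adj}(M)\,M=\det(M)\,I$ for any square matrix $M$, taking $M=fI-C$ gives

$$F\,\xi=\det(fI-C)\,\xi=\operatorname{adj}(fI-C)\,(fI-C)\,\xi=0.$$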
2014-03-07 14:16:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8811063170433044, "perplexity": 361.8754774216997}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999643993/warc/CC-MAIN-20140305060723-00066-ip-10-183-142-35.ec2.internal.warc.gz"}
https://support.bioconductor.org/p/36078/
Dear Bio-C users,

My samples are queen ants. I use two-color spotted microarrays, hybridisation against a reference, no dye swaps. I would like to compare samples with different genotypes, developmental stages and social form.

There are 3 genotypes: 1. BB (D=dominant), 2. Bb (H=heterozygous) and 3. bb (R=recessive).
Three developmental stages: 1. young (2d) virgin queen, 2. mature (11d) virgin queen, 3. mated/mother queen (mom).
Two social forms: 1. Monogyne (M), 2. Polygyne (P).

From these 3 genotypes, 3 developmental stages and 2 types of social form, I can group my 99 slides into 9 different categories. These slides come from two batches (I and J series). I treated batch effect as a fixed effect. My model matrix is:

design <- model.matrix(~0+factor(targets$Cy3)+factor(targets$batch))
colnames(design) <- c("P2dD", "P2dH", "P11dD", "P11dH", "P11dR", "M2dD", "M11dD", "MomD", "MomH", "batch")
design

   P2dD P2dH P11dD P11dH P11dR M2dD M11dD MomD MomH batch
1  0 0 0 0 0 1 0 0 0 0
2  0 0 0 0 1 0 0 0 0 0
3  0 0 0 0 0 1 0 0 0 0
4  0 0 0 0 1 0 0 0 0 0
5  0 0 0 0 0 1 0 0 0 0
6  0 0 0 0 1 0 0 0 0 0
7  0 0 0 0 0 1 0 0 0 0
8  0 0 0 0 1 0 0 0 0 0
9  0 0 0 0 0 1 0 0 0 0
10 0 0 0 0 1 0 0 0 0 0
11 0 0 0 0 0 1 0 0 0 0
12 0 0 0 0 1 0 0 0 0 0
13 0 0 0 0 0 1 0 0 0 0
14 0 0 0 0 1 0 0 0 0 0
15 0 0 0 0 0 0 1 0 0 0
16 0 0 0 0 0 0 1 0 0 0
17 0 0 0 0 0 0 1 0 0 0
18 0 0 0 0 0 0 1 0 0 0
19 0 1 0 0 0 0 0 0 0 0
20 0 1 0 0 0 0 0 0 0 0
21 0 1 0 0 0 0 0 0 0 0
22 0 1 0 0 0 0 0 0 0 0
23 1 0 0 0 0 0 0 0 0 0
24 1 0 0 0 0 0 0 0 0 0
25 1 0 0 0 0 0 0 0 0 0
26 1 0 0 0 0 0 0 0 0 0
27 0 0 0 0 0 0 0 1 0 0
28 0 0 0 0 0 0 0 0 1 0
29 0 0 0 0 0 0 0 1 0 0
30 0 0 0 0 0 0 0 0 1 0
31 0 0 0 0 0 0 0 1 0 0
32 0 0 0 0 0 0 0 0 1 0
33 0 0 0 0 0 0 0 1 0 0
34 0 0 0 0 0 0 0 0 1 0
35 0 0 0 0 0 0 0 1 0 0
36 0 0 0 0 0 0 0 0 1 0
37 0 0 0 0 0 0 0 1 0 0
38 0 0 0 0 0 0 0 0 1 0
39 0 0 0 0 0 0 0 1 0 0
40 0 0 0 0 0 0 0 0 1 0
41 0 0 0 0 0 0 0 1 0 0
42 0 0 0 0 0 0 0 0 1 0
43 0 0 0 0 0 0 0 1 0 1
44 0 0 0 0 0 0 0 0 1 1
45 0 0 0 0 0 1 0 0 0 1
46 0 0 0 0 1 0 0 0 0 1
47 0 0 0 0 0 0 0 0 1 1
48 0 0 0 0 0 0 0 1 0 1
49 0 0 0 0 1 0 0 0 0 1
50 0 0 0 0 0 1 0 0 0 1
51 0 1 0 0 0 0 0 0 0 1
52 1 0 0 0 0 0 0 0 0 1
53 0 1 0 0 0 0 0 0 0 1
54 1 0 0 0 0 0 0 0 0 1
55 0 1 0 0 0 0 0 0 0 1
56 1 0 0 0 0 0 0 0 0 1
57 0 0 0 0 0 0 0 0 1 1
58 0 0 0 0 0 0 0 1 0 1
59 0 0 0 0 0 1 0 0 0 1
60 0 0 0 0 1 0 0 0 0 1
61 0 0 0 0 0 0 0 1 0 1
62 0 0 0 0 0 0 0 0 1 1
63 0 0 0 0 0 1 0 0 0 1
64 0 0 0 0 1 0 0 0 0 1
65 0 0 0 0 0 0 0 0 1 1
66 0 0 0 0 0 0 0 1 0 1
67 0 0 0 0 0 1 0 0 0 1
68 0 0 0 0 1 0 0 0 0 1
69 0 0 0 0 0 0 0 0 1 1
70 0 0 0 0 0 0 0 1 0 1
71 0 0 0 0 0 1 0 0 0 1
72 0 0 0 0 1 0 0 0 0 1
73 0 0 0 0 0 0 0 1 0 1
74 0 0 0 0 0 0 0 0 1 1
75 0 0 0 0 0 1 0 0 0 1
76 0 0 0 0 1 0 0 0 0 1
77 0 0 0 0 0 0 0 1 0 1
78 0 0 0 0 0 0 0 0 1 1
79 0 0 0 0 0 0 1 0 0 1
80 0 0 0 0 0 1 0 0 0 1
81 0 0 0 0 1 0 0 0 0 1
82 0 1 0 0 0 0 0 0 0 1
83 1 0 0 0 0 0 0 0 0 1
84 0 0 1 0 0 0 0 0 0 1
85 0 0 1 0 0 0 0 0 0 1
86 0 0 1 0 0 0 0 0 0 1
87 0 0 1 0 0 0 0 0 0 1
88 0 0 1 0 0 0 0 0 0 1
89 0 0 1 0 0 0 0 0 0 1
90 0 0 1 0 0 0 0 0 0 1
91 0 0 1 0 0 0 0 0 0 1
92 0 0 0 1 0 0 0 0 0 1
93 0 0 0 1 0 0 0 0 0 1
94 0 0 0 1 0 0 0 0 0 1
95 0 0 0 1 0 0 0 0 0 1
96 0 0 0 1 0 0 0 0 0 1
97 0 0 0 1 0 0 0 0 0 1
98 0 0 0 1 0 0 0 0 0 1
99 0 0 0 1 0 0 0 0 0 1

fitMA <- lmFit(normalizedMA, design)

There are 36 possible tests that I can make, but I am only interested in the 16 tests below.
contrastALL16 <- makeContrasts(P2dD-P2dH, P11dD-P11dH, P11dH-P11dR, P11dD-P11dR, M2dD-P2dD, M11dD-P11dD, MomD-MomH, P2dD-P11dD, P11dD-MomD, P2dD-MomD, P2dH-P11dH, P11dH-MomH, P2dH-MomH, M2dD-M11dD, M11dD-MomD, M2dD-MomD, levels=design)
fitContrast.MA <- contrasts.fit(fitMA, contrastALL16)
fit_eBayes_MA <- eBayes(fitContrast.MA)
write.table(fit_eBayes_MA, file="Q16contrasts.txt", sep="\t")

My result file contains coefficients of all the 16 contrasts that I asked for and the p-value of each contrast (each t-test) but NOT the adjusted p-value. It also gives me the F value and the p-value of the F-test, and again NOT the adjusted p-value of the F-test. I can get the adjusted p-value for the F-test by using the command "p.adjust" but not for the t-tests. When I used the command "topTable" with coef=1 (up to 16, once for each of my 16 contrasts), I can get the adjusted p-value of each contrast. My questions are:

1. Why does the command "eBayes" not give adjusted p-values? Is there an easier or more direct way to get the adjusted p-value of the t-test?

2. How is the logFC calculated? If I take the M-values for a single spot, after normalisation between arrays, from slides that belong to one of my contrasts (8 slides of MomD vs 8 slides of MomH; in this case all slides are from the same batch):

M-values of MomD slides 1 to 8 = 3.00, 3.26, 2.73, 3.32, 2.93, 2.81, 2.55, 2.85
M-values of MomH slides 1 to 8 = -0.44, -0.54, 0.03, -0.38, 0.49, -0.56, 0.07, 0.37

The mean M-value is -0.12 for MomH and 2.93 for MomD. I'd expect that the relative expression level in MomD compared to MomH would be: (2^2.93)/(2^(-0.12)) = 8.29. The logFC should simply be: log2(8.29) = 3.05. However, the logFC given by limma is 3.37.

3. If I want to fit a more complicated model like: ~1 + Age + Genotype *nested within* Social form + Age : Genotype *nested within* Social form (fixed factor = Batch), is it possible? How can I do it in limma?

I am really sorry for writing such a long email, but I want to make everything clear. I really appreciate your help.

best regards,
Mingkwan
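For reference, the arithmetic behind the poster's expected value works out as follows (our worked check, not part of the original email):

$$\bar{M}_{\text{MomD}}=\tfrac{1}{8}(3.00+3.26+2.73+3.32+2.93+2.81+2.55+2.85)\approx 2.93,$$
$$\bar{M}_{\text{MomH}}=\tfrac{1}{8}(-0.44-0.54+0.03-0.38+0.49-0.56+0.07+0.37)=-0.12,$$
$$\log_2\frac{2^{2.93}}{2^{-0.12}}=2.93-(-0.12)=3.05.$$

One possible explanation of the discrepancy (our reading, not an answer from the list): lmFit estimates the MomD and MomH coefficients by least squares over all arrays of those conditions in the full design, across both batches and adjusting for the batch term, so the contrast MomD-MomH need not equal the simple difference of the two 8-slide group means computed above; that alone could account for 3.37 versus 3.05.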
2018-06-21 12:28:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46534547209739685, "perplexity": 25.67361755837599}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864148.93/warc/CC-MAIN-20180621114153-20180621134153-00043.warc.gz"}
https://www.shaalaa.com/question-bank-solutions/amt-ahe-amt-am-63-cm-tam-50-56-cm-m-h-7-5-construct-ahe-division-of-a-line-segment_50183
# ∆AMT ~ ∆AHE. In ∆AMT, AM = 6.3 cm, ∠TAM = 50°, AT = 5.6 cm, AM/AH = 7/5. Construct ∆AHE. - Geometry

∆AMT ~ ∆AHE. In ∆AMT, AM = 6.3 cm, ∠TAM = 50°, AT = 5.6 cm and AM/AH = 7/5. Construct ∆AMT and ∆AHE.

#### Solution

Analysis: As shown in the figure, let points A–H–M as well as points A–E–T be collinear.

∆AMT ~ ∆AHE,
∴ ∠TAM ≅ ∠EAH ...(Corresponding angles of similar triangles)
AM/AH = MT/HE = AT/AE ...(i) (Corresponding sides of similar triangles)
∴ AM/AH = 7/5 ...(ii) (Given)
AM/AH = MT/HE = AT/AE = 7/5 ...[From (i) and (ii)]
∴ The sides of ∆AHE are smaller than the sides of ∆AMT.
∴ Seg AH is equal to 5 parts out of 7 equal parts of side AM.

So, if we construct ∆AMT, point H will be on side AM, at a distance equal to 5 parts from A. Now, point E is the point of intersection of ray AT and the line through H parallel to MT. ∆AHE is the required triangle similar to ∆AMT.

Steps of Construction:

1. Draw ∆AMT such that AM = 6.3 cm, ∠TAM = 50°, AT = 5.6 cm.
2. Draw ray AB making an acute angle with side AM.
3. Taking a convenient distance on the compass, mark 7 points A1, A2, A3, A4, A5, A6 and A7, such that AA1 = A1A2 = A2A3 = A3A4 = A4A5 = A5A6 = A6A7.
4. Join A7M. Draw a line parallel to A7M through A5 to intersect seg AM at H.
5. Draw a line parallel to side TM through H. Name the point of intersection of this line and seg AT as E.

∆AHE is the required triangle similar to ∆AMT. Here, ∆AMT ~ ∆AHE.

Concept: Division of a Line Segment

#### APPEARS IN

Balbharati Mathematics 2 Geometry 10th Standard SSC Maharashtra State Board, Chapter 4 Geometric Constructions, Practice Set 4.1 | Q 4 | Page 96
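As a quick numerical check (ours, not part of the textbook solution), the similarity ratio fixes the sides of ∆AHE:

$$AH = \frac{5}{7}\times AM = \frac{5}{7}\times 6.3 = 4.5\ \text{cm}, \qquad AE = \frac{5}{7}\times AT = \frac{5}{7}\times 5.6 = 4.0\ \text{cm},$$

with the included angle ∠EAH = ∠TAM = 50°, so the construction can be verified by measuring AH and AE.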
2023-03-27 23:13:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5827868580818176, "perplexity": 6075.256810049162}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948708.2/warc/CC-MAIN-20230327220742-20230328010742-00338.warc.gz"}
https://www.tug.org/pipermail/texhax/2009-August/012959.html
# [texhax] combnat package: extra space in author-year textual citation

Tom Sutch tom at oketchup.co.uk
Sat Aug 1 18:58:17 CEST 2009

On Wed, 15 Jul 2009 22:43:30 +0100 Tom Sutch <tom at oketchup.co.uk> wrote:

> I am preparing conference proceedings using the combine package, and
> using the combnat package (which is the combine-compatible version of
> natbib) for the bibliographic referencing. It is working really well
> on the whole, but I have a small issue with an unwanted extra space
> being inserted after the year when I use \citet.
>
> So while \citep{foo1962} gives "(Foo and Bar, 1962)" as desired,
> \citet gives "Foo and Bar (1962 )".

An update on this. I have hacked around with combnat.sty in a brute-force manner and found that if I comment out line 341 it works, although not for the case where there's more than one citation in the brackets (fortunately I don't have too many of these, so I can manually rewrite them).

It doesn't look like combnat.sty has been changed for a while, so I would imagine people have the same version, but just in case, the context of line 341 is:

339 \if\relax\NAT@date\relax\def\@citea{\NAT@sep\ }%
340 \else
341 \def\@citea{\NAT@@close\NAT@sep\ }
342 \fi

I doubt the cause of this problem is this line itself - I would guess it's something to do with the general treatment of multiple citations, which need separation and a space between years, not being modified properly for the case of single citations. I tried comparing with the equivalent code in natbib.sty but nothing jumped out.

This is only an ugly hack, rather than a solution, of course, but it is sufficient for my purposes now and I hope that this (a) helps someone cleverer than me to find where the problem is, or (b) helps someone who comes across this problem in future to get round it.
2018-03-24 06:21:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9100540280342102, "perplexity": 4245.828403852516}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257649931.17/warc/CC-MAIN-20180324054204-20180324074204-00702.warc.gz"}
https://physicscatalyst.com/class-6/separation-of-substances-notes.php
Separation of Substances Class 6 Notes

PURE SUBSTANCE

Pure substances are those substances that are made up of only one kind of particle. More precisely, they are composed of only one type of atom or molecule. There are two types of pure substances:

• Element: An element is made up of atoms of only one kind, for example hydrogen, oxygen and iron.
• Compound: When atoms of different kinds are linked together, they form a compound, for example carbon dioxide, sodium chloride and water.

The addition of unwanted components to food items is called adulteration, for example small stones in rice (Figure 1).

Figure 1: Separation of small pieces of stones from rice.

MIXTURE

A mixture is formed by mixing two or more substances. Some examples are:

• Milk is a mixture of water and cream.
• Water, carbon dioxide and sugar are mixed to form an aerated drink.
• A mixture of water, dead organic matter, broken rocks and minerals creates soil.

A mixture contains different substances, called its components, which can be present in any ratio.

Properties of Mixtures

• The ratio of components in a mixture is not fixed.
• The components can be separated by simple methods of separation, for example separating stones from rice by hand.
• The melting and boiling points of a mixture are not fixed.

Types of Mixtures

There are two types of mixtures:

i. Homogeneous mixture: A mixture in which the components are evenly distributed. Some examples of homogeneous mixtures are salt dissolved in water, alloys (such as steel, bronze and brass), and pure air.
ii. Heterogeneous mixture: A mixture in which the components are not evenly distributed. Some examples are fruit salad and chocolate chip cookies (Figure 4).

Figure 4: Heterogeneous mixture of dry fruits.

SEPARATION OF SUBSTANCES FROM MIXTURES

Components are separated from mixtures for the following purposes:

• To remove impurities or harmful components: Impurities affect health, so they need to be removed, for example by purifying river water for drinking, or removing stones and other impurities from rice and pulses.
• To obtain useful components: For example, the distillation of petroleum yields useful components like petrol, kerosene, diesel and white petroleum jelly.
• To obtain pure components: For example, pure metals are separated from the natural forms in which they occur, called ores.

METHODS OF SEPARATION

Separation is the process of obtaining one substance from a mixture of two or more substances.

Separation of Solids from Other Solids

Solids can be separated from a mixture of solids by the following methods:

1. Handpicking: This is a method of separation using the hands. Handpicking is applicable when:

i. the components are visible to the naked eye;
ii. the shape, size and colour of the unwanted components differ from those of the useful materials;
iii. the size of the components is large.

Some examples of the handpicking method are the separation of small stones and broken grains from rice, wheat and pulses.

2. Threshing: The process of separating grains from the harvested stalks is called threshing. It can be performed in the following ways (Figure 6):

i. Beating the dry stalks on a hard surface by hand.
ii. Using threshing machines to separate the dried grains.
iii. Crushing the stalks under the feet of animals such as bullocks.

For example, threshing is used to separate the grains from the stalks of harvested rice and wheat crops.
Figure 6: Threshing process.

3. Winnowing: The process of separating grains from the husk is called winnowing. In this method, the mixture of grain and husk is dropped from a height; the wind carries the husk away, while the grains, being heavier than the husk, form a heap near the winnower. An example of the winnowing method is the separation of sand from powdered dry leaves (Figure 7).

Figure 7: Winnowing method.

4. Sieving: The process of separating components according to their size is called sieving. A sieve is composed of a net or mesh, and the size of the pores in the net or mesh depends on the size of the wanted materials (Figure 8). The most common example of the sieving method is the separation of bran from wheat flour.

Figure 8: Sieving method.

5. Magnetic Separation: The process of separating materials that possess magnetic properties from a mixture is called magnetic separation. In such a mixture, one component is attracted to the magnet while the other is not, for example in the separation of iron from sand by a magnet (Figure 9).

Figure 9: Magnetic separation method.

Separation of Solids from Liquids

Here the choice of method depends on whether the solid is soluble in the liquid.

Separating insoluble solids from liquids

Particles that are not soluble in water can be separated by the following methods:

1. Sedimentation and Decantation: Sedimentation is the process in which insoluble materials settle down at the bottom of the liquid to form a sediment; the liquid above the sediment is called the supernatant. Afterwards, the supernatant is carefully poured out of the container without disturbing the sediment; this is called decantation. For example, fine particles can be separated from muddy water by this method (Figure 10).

Figure 10: Sedimentation and decantation method.

2. Filtration: The separation of insoluble components from a mixture using a filter is called filtration. The fine particles retained on the filter paper or funnel are called the residue, while the clear liquid that passes through the funnel into a container is called the filtrate (Figure 11).

Figure 11: Filtration.

Separating soluble solids from their solutions

Soluble solids can be separated from their solutions as follows:

1. Evaporation: The process of separating soluble materials from a solution by heating it is called evaporation. For example, common salt is obtained from seawater by evaporation, which also occurs naturally (Figure 12).

2. Condensation: The process of converting water vapour into water by cooling is called condensation. Rain is a good example of this process (Figure 12).

Figure 12: Evaporation and Condensation.
2023-03-21 10:01:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5679531097412109, "perplexity": 2650.747217661776}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943695.23/warc/CC-MAIN-20230321095704-20230321125704-00202.warc.gz"}
https://en.wikipedia.org/wiki/De_Branges_space
# De Branges space

In mathematics, a de Branges space (sometimes written De Branges space) is a concept in functional analysis constructed from a de Branges function. The concept is named after Louis de Branges, who proved numerous results regarding these spaces, especially as Hilbert spaces, and used those results to prove the Bieberbach conjecture.

## De Branges functions

A de Branges function is an entire function E from $\mathbb{C}$ to $\mathbb{C}$ that satisfies the inequality $|E(z)|>|E(\bar{z})|$ for all z in the open upper half of the complex plane $\mathbb{C}^{+}=\{z\in\mathbb{C}\mid\operatorname{Im}(z)>0\}$.

## Definition 1

Given a de Branges function E, the de Branges space B(E) is defined as the set of all entire functions F such that $F/E,\;F^{\#}/E\in H_{2}(\mathbb{C}^{+})$, where:

• $\mathbb{C}^{+}=\{z\in\mathbb{C}\mid\operatorname{Im}(z)>0\}$ is the open upper half of the complex plane.
• $F^{\#}(z)=\overline{F(\bar{z})}$.
• $H_{2}(\mathbb{C}^{+})$ is the usual Hardy space on the open upper half plane.

## Definition 2

A de Branges space can also be defined as the set of all entire functions F satisfying both of the following conditions:

• $\int_{\mathbb{R}}|(F/E)(\lambda)|^{2}\,d\lambda<\infty$
• $|(F/E)(z)|,\;|(F^{\#}/E)(z)|\leq C_{F}\,(\operatorname{Im}(z))^{-1/2}$ for all $z\in\mathbb{C}^{+}$

## As Hilbert spaces

Given a de Branges space B(E), define the scalar product

$$[F,G]=\frac{1}{\pi}\int_{\mathbb{R}}\overline{F(\lambda)}\,G(\lambda)\,\frac{d\lambda}{|E(\lambda)|^{2}}.$$

A de Branges space with such a scalar product can be proven to be a Hilbert space.

## References

• Christian Remling (2003). "Inverse spectral theory for one-dimensional Schrödinger operators: the A function". Math. Z. 245: 597–617. doi:10.1007/s00209-003-0559-2.
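A standard example may make the definition concrete (our addition, not part of this article): for a > 0, take $E(z)=e^{-iaz}$. Writing $z=x+iy$ with $y>0$,

$$|E(z)|=e^{ay}>e^{-ay}=|E(\bar{z})|,$$

so E is a de Branges function, and the resulting space B(E) is the Paley–Wiener space of entire functions of exponential type at most a whose restriction to the real line is square-integrable.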
2017-07-25 22:49:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 11, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.94428950548172, "perplexity": 561.246862814491}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549425407.14/warc/CC-MAIN-20170725222357-20170726002357-00441.warc.gz"}
http://mathhelpforum.com/advanced-algebra/125858-finding-basis-dimension.html
# Thread: Finding Basis and Dimension

1. ## Finding Basis and Dimension

Hello! I have two questions that are giving me some trouble.

1) I am trying to find a basis for a subspace W = {(s+4t, t, s, 2s-t)} for reals s, t. Then the dimension will be the number of vectors in the basis. It appears to me there should be two vectors in the basis?

2) Similarly, and perhaps easier, is finding a basis (& dimension) for W = {(5t, t, -t)}. Also (seemingly unrelated?) is describing the geometric representation of W. Isn't it a plane?

1) Would a basis be {(1,0,1,2),(4,1,0,-1)} because of the parameters? (and thus dimension 2)

2) Similarly, {(5,1,-1)}? (dimension one?) However I still am not sure of its graphical representation!

Thanks!!

2. Originally Posted by matt.qmar (reply to the questions above)

For 1): Yes. Just write (s+4t, t, s, 2s-t) = (s, 0, s, 2s) + (4t, t, 0, -t) = s(1, 0, 1, 2) + t(4, 1, 0, -1) and the basis should be obvious.

For 2): (5t, t, -t) = t(5, 1, -1), so {(5, 1, -1)} is a basis and the dimension is 1, just as you say. However, if this is the "W" referred to, it is NOT a plane: geometrically it is a line through the origin in R^3.
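If you want to double-check answers like these mechanically, the dimension is just the rank of the matrix whose rows are the spanning vectors. A small sketch (our addition, using the sympy library; not part of the thread):

import sympy as sp

# Rows span W from question 1); rank 2 means dim W = 2.
A = sp.Matrix([[1, 0, 1, 2],
               [4, 1, 0, -1]])
print(A.rank())  # 2

# Row spans W from question 2); rank 1 means dim W = 1 (a line through the origin).
B = sp.Matrix([[5, 1, -1]])
print(B.rank())  # 1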
2016-08-31 07:55:53
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9384019374847412, "perplexity": 1933.948521906642}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471983580563.99/warc/CC-MAIN-20160823201940-00185-ip-10-153-172-175.ec2.internal.warc.gz"}
https://www.mathdoubts.com/integral-of-natural-exponential-function/
# Integral Rule of the Natural Exponential Function

## Formula

$\displaystyle \int{e^x}\,dx = e^x + c$

### Introduction

$x$ is a variable and the natural exponential function is written in mathematical form as $e^x$. The integration of $e^x$ with respect to $x$ is written in integral calculus as follows.

$\displaystyle \int{e^x}\,dx$

The indefinite integral of $e^x$ with respect to $x$ is equal to the sum of the natural exponential function and the constant of integration.

$\displaystyle \int{e^x}\,dx = e^x + c$

#### Other forms

The indefinite integration formula for the natural exponential function can be written in terms of any variable.

$(1)\,\,\,$ $\displaystyle \int{e^m}\,dm = e^m + c$

$(2)\,\,\,$ $\displaystyle \int{e^t}\,dt = e^t + c$

$(3)\,\,\,$ $\displaystyle \int{e^y}\,dy = e^y + c$

### Proof

Learn how to derive the indefinite integration rule for the natural exponential function in integral calculus.
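The rule can also be justified in one line (our addition): differentiation undoes integration, and

$$\dfrac{d}{dx}\left(e^x + c\right) = e^x,$$

so $e^x + c$ is the family of antiderivatives of $e^x$, which is exactly the statement of the formula above.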
2019-11-20 07:23:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8199566006660461, "perplexity": 442.86178769067504}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670512.94/warc/CC-MAIN-20191120060344-20191120084344-00128.warc.gz"}
https://eng.libretexts.org/Bookshelves/Computer_Science/Applied_Programming/Book%3A_Neural_Networks_and_Deep_Learning_(Nielsen)/03%3A_Improving_the_way_neural_networks_learn/3.05%3A_How_to_choose_a_neural_network's_hyper-parameters
# 3.5: How to choose a neural network's hyper-parameters?

Up until now I haven't explained how I've been choosing values for hyper-parameters such as the learning rate, $$η$$, the regularization parameter, $$λ$$, and so on. I've just been supplying values which work pretty well. In practice, when you're using neural nets to attack a problem, it can be difficult to find good hyper-parameters. Imagine, for example, that we've just been introduced to the MNIST problem, and have begun working on it, knowing nothing at all about what hyper-parameters to use. Let's suppose that by good fortune in our first experiments we choose many of the hyper-parameters in the same way as was done earlier in this chapter: 30 hidden neurons, a mini-batch size of $$10$$, training for $$30$$ epochs using the cross-entropy. But we choose a learning rate $$η=10.0$$ and regularization parameter $$λ=1000.0$$. Here's what I saw on one such run:

>>> import mnist_loader
>>> training_data, validation_data, test_data = \
... mnist_loader.load_data_wrapper()
>>> import network2
>>> net = network2.Network([784, 30, 10])
>>> net.SGD(training_data, 30, 10, 10.0, lmbda = 1000.0,
... evaluation_data=validation_data, monitor_evaluation_accuracy=True)
Epoch 0 training complete
Accuracy on evaluation data: 1030 / 10000

Epoch 1 training complete
Accuracy on evaluation data: 990 / 10000

Epoch 2 training complete
Accuracy on evaluation data: 1009 / 10000

...

Epoch 27 training complete
Accuracy on evaluation data: 1009 / 10000

Epoch 28 training complete
Accuracy on evaluation data: 983 / 10000

Epoch 29 training complete
Accuracy on evaluation data: 967 / 10000

Our classification accuracies are no better than chance! Our network is acting as a random noise generator! "Well, that's easy to fix," you might say, "just decrease the learning rate and regularization hyper-parameters". Unfortunately, you don't a priori know those are the hyper-parameters you need to adjust. Maybe the real problem is that our $$30$$ hidden neuron network will never work well, no matter how the other hyper-parameters are chosen? Maybe we really need at least $$100$$ hidden neurons? Or $$300$$ hidden neurons? Or multiple hidden layers? Or a different approach to encoding the output? Maybe our network is learning, but we need to train for more epochs? Maybe the mini-batches are too small? Maybe we'd do better switching back to the quadratic cost function? Maybe we need to try a different approach to weight initialization? And so on, on and on and on. It's easy to feel lost in hyper-parameter space. This can be particularly frustrating if your network is very large, or uses a lot of training data, since you may train for hours or days or weeks, only to get no result. If the situation persists, it damages your confidence. Maybe neural networks are the wrong approach to your problem? Maybe you should quit your job and take up beekeeping?

In this section I explain some heuristics which can be used to set the hyper-parameters in a neural network. The goal is to help you develop a workflow that enables you to do a pretty good job setting hyper-parameters. Of course, I won't cover everything about hyper-parameter optimization. That's a huge subject, and it's not, in any case, a problem that is ever completely solved, nor is there universal agreement amongst practitioners on the right strategies to use. There's always one more trick you can try to eke out a bit more performance from your network. But the heuristics in this section should get you started.
Broad strategy: When using neural networks to attack a new problem the first challenge is to get any non-trivial learning, i.e., for the network to achieve results better than chance. This can be surprisingly difficult, especially when confronting a new class of problem. Let's look at some strategies you can use if you're having this kind of trouble.

Suppose, for example, that you're attacking MNIST for the first time. You start out enthusiastic, but are a little discouraged when your first network fails completely, as in the example above. The way to go is to strip the problem down. Get rid of all the training and validation images except images which are 0s or 1s. Then try to train a network to distinguish 0s from 1s. Not only is that an inherently easier problem than distinguishing all ten digits, it also reduces the amount of training data by $$80$$ percent, speeding up training by a factor of $$5$$. That enables much more rapid experimentation, and so gives you more rapid insight into how to build a good network.

You can further speed up experimentation by stripping your network down to the simplest network likely to do meaningful learning. If you believe a [784, 10] network can likely do better-than-chance classification of MNIST digits, then begin your experimentation with such a network. It'll be much faster than training a [784, 30, 10] network, and you can build back up to the latter.

You can get another speed up in experimentation by increasing the frequency of monitoring. In network2.py we monitor performance at the end of each training epoch. With $$50,000$$ images per epoch, that means waiting a little while - about ten seconds per epoch, on my laptop, when training a [784, 30, 10] network - before getting feedback on how well the network is learning. Of course, ten seconds isn't very long, but if you want to trial dozens of hyper-parameter choices it's annoying, and if you want to trial hundreds or thousands of choices it starts to get debilitating. We can get feedback more quickly by monitoring the validation accuracy more often, say, after every $$1,000$$ training images. Furthermore, instead of using the full $$10,000$$ image validation set to monitor performance, we can get a much faster estimate using just $$100$$ validation images. All that matters is that the network sees enough images to do real learning, and to get a pretty good rough estimate of performance. Of course, our program network2.py doesn't currently do this kind of monitoring. But as a kludge to achieve a similar effect for the purposes of illustration, we'll strip down our training data to just the first $$1,000$$ MNIST training images. Let's try it and see what happens. (To keep the code below simple I haven't implemented the idea of using only $$0$$ and $$1$$ images. Of course, that can be done with just a little more work.)

>>> net = network2.Network([784, 10])
>>> net.SGD(training_data[:1000], 30, 10, 10.0, lmbda = 1000.0, \
... evaluation_data=validation_data[:100], \
... monitor_evaluation_accuracy=True)
Epoch 0 training complete
Accuracy on evaluation data: 10 / 100

Epoch 1 training complete
Accuracy on evaluation data: 10 / 100

Epoch 2 training complete
Accuracy on evaluation data: 10 / 100

...

We're still getting pure noise! But there's a big win: we're now getting feedback in a fraction of a second, rather than once every ten seconds or so.
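(As an aside, the stripping-down to 0s and 1s suggested above takes only a couple of lines. This is a minimal sketch under our assumptions, not code from the book: it assumes training_data is the list of (x, y) pairs returned by mnist_loader.load_data_wrapper(), with y a 10-dimensional one-hot vector.)

import numpy as np

# Keep only the images whose label is 0 or 1.
binary_training_data = [(x, y) for x, y in training_data
                        if np.argmax(y) in (0, 1)]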
That means you can more quickly experiment with other choices of hyper-parameter, or even conduct experiments trialling many different choices of hyper-parameter nearly simultaneously. In the above example I left $$λ$$ as $$λ=1000.0$$, as we used earlier. But since we changed the number of training examples we should really change $$λ$$ to keep the weight decay the same. That means changing $$λ$$ to $$20.0$$. If we do that then this is what happens:

>>> net = network2.Network([784, 10])
>>> net.SGD(training_data[:1000], 30, 10, 10.0, lmbda = 20.0, \
... evaluation_data=validation_data[:100], \
... monitor_evaluation_accuracy=True)
Epoch 0 training complete
Accuracy on evaluation data: 12 / 100

Epoch 1 training complete
Accuracy on evaluation data: 14 / 100

Epoch 2 training complete
Accuracy on evaluation data: 25 / 100

Epoch 3 training complete
Accuracy on evaluation data: 18 / 100

...

Ahah! We have a signal. Not a terribly good signal, but a signal nonetheless. That's something we can build on, modifying the hyper-parameters to try to get further improvement. Maybe we guess that our learning rate needs to be higher. (As you perhaps realize, that's a silly guess, for reasons we'll discuss shortly, but please bear with me.) So to test our guess we try dialing $$η$$ up to $$100.0$$:

>>> net = network2.Network([784, 10])
>>> net.SGD(training_data[:1000], 30, 10, 100.0, lmbda = 20.0, \
... evaluation_data=validation_data[:100], \
... monitor_evaluation_accuracy=True)
Epoch 0 training complete
Accuracy on evaluation data: 10 / 100

Epoch 1 training complete
Accuracy on evaluation data: 10 / 100

Epoch 2 training complete
Accuracy on evaluation data: 10 / 100

Epoch 3 training complete
Accuracy on evaluation data: 10 / 100

...

That's no good! It suggests that our guess was wrong, and the problem wasn't that the learning rate was too low. So instead we try dialing $$η$$ down to $$η=1.0$$:

>>> net = network2.Network([784, 10])
>>> net.SGD(training_data[:1000], 30, 10, 1.0, lmbda = 20.0, \
... evaluation_data=validation_data[:100], \
... monitor_evaluation_accuracy=True)
Epoch 0 training complete
Accuracy on evaluation data: 62 / 100

Epoch 1 training complete
Accuracy on evaluation data: 42 / 100

Epoch 2 training complete
Accuracy on evaluation data: 43 / 100

Epoch 3 training complete
Accuracy on evaluation data: 61 / 100

...

That's better! And so we can continue, individually adjusting each hyper-parameter, gradually improving performance. Once we've explored to find an improved value for $$η$$, then we move on to find a good value for $$λ$$. Then experiment with a more complex architecture, say a network with 10 hidden neurons. Then adjust the values for $$η$$ and $$λ$$ again. Then increase to $$20$$ hidden neurons. And then adjust other hyper-parameters some more. And so on, at each stage evaluating performance using our held-out validation data, and using those evaluations to find better and better hyper-parameters. As we do so, it typically takes longer to witness the impact due to modifications of the hyper-parameters, and so we can gradually decrease the frequency of monitoring.

This all looks very promising as a broad strategy. However, I want to return to that initial stage of finding hyper-parameters that enable a network to learn anything at all. In fact, even the above discussion conveys too positive an outlook. It can be immensely frustrating to work with a network that's learning nothing. You can tweak hyper-parameters for days, and still get no meaningful response.
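(To make the one-hyper-parameter-at-a-time exploration concrete, here is a rough sketch of a learning-rate sweep. It is our code, not the book's, and it assumes, as in network2.py, that SGD returns a tuple whose second element is the list of monitored evaluation accuracies.)

for eta in [0.025, 0.25, 2.5]:
    net = network2.Network([784, 10])
    _, eval_acc, _, _ = net.SGD(
        training_data[:1000], 30, 10, eta, lmbda=20.0,
        evaluation_data=validation_data[:100],
        monitor_evaluation_accuracy=True)
    # Report the best validation accuracy seen for this learning rate.
    print(eta, max(eval_acc), "/ 100")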
And so I'd like to re-emphasize that during the early stages you should make sure you can get quick feedback from experiments. Intuitively, it may seem as though simplifying the problem and the architecture will merely slow you down. In fact, it speeds things up, since you much more quickly find a network with a meaningful signal. Once you've got such a signal, you can often get rapid improvements by tweaking the hyper-parameters. As with many things in life, getting started can be the hardest thing to do.

Okay, that's the broad strategy. Let's now look at some specific recommendations for setting hyper-parameters. I will focus on the learning rate, $$η$$, the L2 regularization parameter, $$λ$$, and the mini-batch size. However, many of the remarks apply also to other hyper-parameters, including those associated to network architecture, other forms of regularization, and some hyper-parameters we'll meet later in the book, such as the momentum co-efficient.

Learning rate: Suppose we run three MNIST networks with three different learning rates, $$η=0.025$$, $$η=0.25$$ and $$η=2.5$$, respectively. We'll set the other hyper-parameters as for the experiments in earlier sections, running over $$30$$ epochs, with a mini-batch size of $$10$$, and with $$λ=5.0$$. We'll also return to using the full $$50,000$$ training images. Here's a graph showing the behaviour of the training cost as we train*:

[Graph: training cost versus epoch for $$η=0.025$$, $$η=0.25$$ and $$η=2.5$$.]

*The graph was generated by multiple_eta.py.

With $$η=0.025$$ the cost decreases smoothly until the final epoch. With $$η=0.25$$ the cost initially decreases, but after about $$20$$ epochs it is near saturation, and thereafter most of the changes are merely small and apparently random oscillations. Finally, with $$η=2.5$$ the cost makes large oscillations right from the start. To understand the reason for the oscillations, recall that stochastic gradient descent is supposed to step us gradually down into a valley of the cost function. However, if $$η$$ is too large then the steps will be so large that they may actually overshoot the minimum, causing the algorithm to climb up out of the valley instead. That's likely* what's causing the cost to oscillate when $$η=2.5$$.

*This picture is helpful, but it's intended as an intuition-building illustration of what may go on, not as a complete, exhaustive explanation. Briefly, a more complete explanation is as follows: gradient descent uses a first-order approximation to the cost function as a guide to how to decrease the cost. For large $$η$$, higher-order terms in the cost function become more important, and may dominate the behaviour, causing gradient descent to break down. This is especially likely as we approach minima and quasi-minima of the cost function, since near such points the gradient becomes small, making it easier for higher-order terms to dominate behaviour.

When we choose $$η=0.25$$ the initial steps do take us toward a minimum of the cost function, and it's only once we get near that minimum that we start to suffer from the overshooting problem. And when we choose $$η=0.025$$ we don't suffer from this problem at all during the first $$30$$ epochs. Of course, choosing $$η$$ so small creates another problem, namely, that it slows down stochastic gradient descent. An even better approach would be to start with $$η=0.25$$, train for $$20$$ epochs, and then switch to $$η=0.025$$. We'll discuss such variable learning rate schedules later. For now, though, let's stick to figuring out how to find a single good value for the learning rate, $$η$$.
With this picture in mind, we can set $$η$$ as follows. First, we estimate the threshold value for $$η$$ at which the cost on the training data immediately begins decreasing, instead of oscillating or increasing. This estimate doesn't need to be too accurate. You can estimate the order of magnitude by starting with $$η=0.01$$. If the cost decreases during the first few epochs, then you should successively try $$η=0.1,1.0,…$$ until you find a value for $$η$$ where the cost oscillates or increases during the first few epochs. Alternately, if the cost oscillates or increases during the first few epochs when $$η=0.01$$, then try $$η=0.001,0.0001,…$$ until you find a value for $$η$$ where the cost decreases during the first few epochs. Following this procedure will give us an order of magnitude estimate for the threshold value of $$η$$. You may optionally refine your estimate, to pick out the largest value of $$η$$ at which the cost decreases during the first few epochs, say $$η=0.5$$ or $$η=0.2$$ (there's no need for this to be super-accurate). This gives us an estimate for the threshold value of $$η$$.

Obviously, the actual value of $$η$$ that you use should be no larger than the threshold value. In fact, if the value of $$η$$ is to remain usable over many epochs then you likely want to use a value for $$η$$ that is smaller, say, a factor of two below the threshold. Such a choice will typically allow you to train for many epochs, without causing too much of a slowdown in learning.

In the case of the MNIST data, following this strategy leads to an estimate of $$0.1$$ for the order of magnitude of the threshold value of $$η$$. After some more refinement, we obtain a threshold value $$η=0.5$$. Following the prescription above, this suggests using $$η=0.25$$ as our value for the learning rate. In fact, I found that using $$η=0.5$$ worked well enough over $$30$$ epochs that for the most part I didn't worry about using a lower value of $$η$$.

This all seems quite straightforward. However, using the training cost to pick $$η$$ appears to contradict what I said earlier in this section, namely, that we'd pick hyper-parameters by evaluating performance using our held-out validation data. In fact, we'll use validation accuracy to pick the regularization hyper-parameter, the mini-batch size, and network parameters such as the number of layers and hidden neurons, and so on. Why do things differently for the learning rate? Frankly, this choice is my personal aesthetic preference, and is perhaps somewhat idiosyncratic. The reasoning is that the other hyper-parameters are intended to improve the final classification accuracy on the test set, and so it makes sense to select them on the basis of validation accuracy. However, the learning rate is only incidentally meant to impact the final classification accuracy. Its primary purpose is really to control the step size in gradient descent, and monitoring the training cost is the best way to detect if the step size is too big. With that said, this is a personal aesthetic preference. Early on during learning the training cost usually only decreases if the validation accuracy improves, and so in practice it's unlikely to make much difference which criterion you use.

Use early stopping to determine the number of training epochs: As we discussed earlier in the chapter, early stopping means that at the end of each epoch we should compute the classification accuracy on the validation data. When that stops improving, terminate. This makes setting the number of epochs very simple.
In particular, it means that we don't need to worry about explicitly figuring out how the number of epochs depends on the other hyper-parameters. Instead, that's taken care of automatically. Furthermore, early stopping also automatically prevents us from overfitting. This is, of course, a good thing, although in the early stages of experimentation it can be helpful to turn off early stopping, so you can see any signs of overfitting, and use it to inform your approach to regularization.

To implement early stopping we need to say more precisely what it means that the classification accuracy has stopped improving. As we've seen, the accuracy can jump around quite a bit, even when the overall trend is to improve. If we stop the first time the accuracy decreases then we'll almost certainly stop when there are more improvements to be had. A better rule is to terminate if the best classification accuracy doesn't improve for quite some time. Suppose, for example, that we're doing MNIST. Then we might elect to terminate if the classification accuracy hasn't improved during the last ten epochs. This ensures that we don't stop too soon, in response to bad luck in training, but also that we're not waiting around forever for an improvement that never comes.

This no-improvement-in-ten rule is good for initial exploration of MNIST. However, networks can sometimes plateau near a particular classification accuracy for quite some time, only to then begin improving again. If you're trying to get really good performance, the no-improvement-in-ten rule may be too aggressive about stopping. In that case, I suggest using the no-improvement-in-ten rule for initial experimentation, and gradually adopting more lenient rules, as you better understand the way your network trains: no-improvement-in-twenty, no-improvement-in-fifty, and so on. Of course, this introduces a new hyper-parameter to optimize! In practice, however, it's usually easy to set this hyper-parameter to get pretty good results. Similarly, for problems other than MNIST, the no-improvement-in-ten rule may be much too aggressive or not nearly aggressive enough, depending on the details of the problem. However, with a little experimentation it's usually easy to find a pretty good strategy for early stopping.

We haven't used early stopping in our MNIST experiments to date. The reason is that we've been doing a lot of comparisons between different approaches to learning. For such comparisons it's helpful to use the same number of epochs in each case. However, it's well worth modifying network2.py to implement early stopping:

## Problem

• Modify network2.py so that it implements early stopping using a no-improvement-in-$$n$$ epochs strategy, where $$n$$ is a parameter that can be set.
• Can you think of a rule for early stopping other than no-improvement-in-$$n$$? Ideally, the rule should compromise between getting high validation accuracies and not training too long. Add your rule to network2.py, and run three experiments comparing the validation accuracies and number of epochs of training to no-improvement-in-$$10$$. (A minimal sketch of the no-improvement-in-$$n$$ check appears below.)

Learning rate schedule: We've been holding the learning rate $$η$$ constant. However, it's often advantageous to vary the learning rate. Early on during the learning process it's likely that the weights are badly wrong. And so it's best to use a large learning rate that causes the weights to change quickly. Later, we can reduce the learning rate as we make more fine-tuned adjustments to our weights.
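(Before looking at schedules in detail, the no-improvement-in-$$n$$ rule from the Problem above is easy to express in code. A minimal sketch, ours rather than network2.py's:)

def no_improvement_in(accuracies, n):
    """True if the best validation accuracy occurred at least n epochs ago.

    accuracies is the per-epoch validation accuracy history so far.
    """
    if len(accuracies) <= n:
        return False
    best_epoch = accuracies.index(max(accuracies))
    return len(accuracies) - 1 - best_epoch >= n

# Example: stop here, since the best value (62) was 11 epochs ago.
history = [50, 55, 58, 62, 61, 60, 59, 58, 57, 56, 55, 54, 53, 52, 51]
print(no_improvement_in(history, 10))  # True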
How should we set our learning rate schedule? Many approaches are possible. One natural approach is to use the same basic idea as early stopping. The idea is to hold the learning rate constant until the validation accuracy starts to get worse. Then decrease the learning rate by some amount, say a factor of two or ten. We repeat this many times, until, say, the learning rate is a factor of 1,024 (or 1,000) times lower than the initial value. Then we terminate.

A variable learning schedule can improve performance, but it also opens up a world of possible choices for the learning schedule. Those choices can be a headache - you can spend forever trying to optimize your learning schedule. For first experiments my suggestion is to use a single, constant value for the learning rate. That'll get you a good first approximation. Later, if you want to obtain the best performance from your network, it's worth experimenting with a learning schedule, along the lines I've described* *A readable recent paper which demonstrates the benefits of variable learning rates in attacking MNIST is Deep, Big, Simple Neural Nets Excel on Handwritten Digit Recognition, by Dan Claudiu Cireșan, Ueli Meier, Luca Maria Gambardella, and Jürgen Schmidhuber (2010).

## Exercise

• Modify network2.py so that it implements a learning schedule that: halves the learning rate each time the validation accuracy satisfies the no-improvement-in-$$10$$ rule; and terminates when the learning rate has dropped to $$1/128$$ of its original value.

The regularization parameter, $$λ$$: I suggest starting initially with no regularization $$(λ=0.0)$$, and determining a value for $$η$$, as above. Using that choice of $$η$$, we can then use the validation data to select a good value for $$λ$$. Start by trialling $$λ=1.0$$* *I don't have a good principled justification for using this as a starting value. If anyone knows of a good principled discussion of where to start with $$λ$$, I'd appreciate hearing it ([email protected])., and then increase or decrease by factors of $$10$$, as needed to improve performance on the validation data. Once you've found a good order of magnitude, you can fine tune your value of $$λ$$. That done, you should return and re-optimize $$η$$ again.

## Exercise

• It's tempting to use gradient descent to try to learn good values for hyper-parameters such as $$λ$$ and $$η$$. Can you think of an obstacle to using gradient descent to determine $$λ$$? Can you think of an obstacle to using gradient descent to determine $$η$$?

How I selected hyper-parameters earlier in this book: If you use the recommendations in this section you'll find that you get values for $$η$$ and $$λ$$ which don't always exactly match the values I've used earlier in the book. The reason is that the book has narrative constraints that have sometimes made it impractical to optimize the hyper-parameters. Think of all the comparisons we've made of different approaches to learning, e.g., comparing the quadratic and cross-entropy cost functions, comparing the old and new methods of weight initialization, running with and without regularization, and so on. To make such comparisons meaningful, I've usually tried to keep hyper-parameters constant across the approaches being compared (or to scale them in an appropriate way). Of course, there's no reason for the same hyper-parameters to be optimal for all the different approaches to learning, so the hyper-parameters I've used are something of a compromise.
As an alternative to this compromise, I could have tried to optimize the heck out of the hyper-parameters for every single approach to learning. In principle that'd be a better, fairer approach, since then we'd see the best from every approach to learning. However, we've made dozens of comparisons along these lines, and in practice I found it too computationally expensive. That's why I've adopted the compromise of using pretty good (but not necessarily optimal) choices for the hyper-parameters.

Mini-batch size: How should we set the mini-batch size? To answer this question, let's first suppose that we're doing online learning, i.e., that we're using a mini-batch size of $$1$$. The obvious worry about online learning is that using mini-batches which contain just a single training example will cause significant errors in our estimate of the gradient. In fact, though, the errors turn out to not be such a problem. The reason is that the individual gradient estimates don't need to be super-accurate. All we need is an estimate accurate enough that our cost function tends to keep decreasing. It's as though you are trying to get to the North Magnetic Pole, but have a wonky compass that's 10-20 degrees off each time you look at it. Provided you stop to check the compass frequently, and the compass gets the direction right on average, you'll end up at the North Magnetic Pole just fine.

Based on this argument, it sounds as though we should use online learning. In fact, the situation turns out to be more complicated than that. In a problem in the last chapter I pointed out that it's possible to use matrix techniques to compute the gradient update for all examples in a mini-batch simultaneously, rather than looping over them. Depending on the details of your hardware and linear algebra library this can make it quite a bit faster to compute the gradient estimate for a mini-batch of (for example) size $$100$$, rather than computing the mini-batch gradient estimate by looping over the $$100$$ training examples separately. It might take (say) only $$50$$ times as long, rather than $$100$$ times as long.

Now, at first it seems as though this doesn't help us that much. With our mini-batch of size $$100$$ the learning rule for the weights looks like:

$w→w′=w−η\frac{1}{100}\sum_x{∇C_x},\label{100}\tag{100}$

where the sum is over training examples in the mini-batch. This is versus

$w→w′=w−η∇C_x\label{101}\tag{101}$

for online learning. Even if it only takes $$50$$ times as long to do the mini-batch update, it still seems likely to be better to do online learning, because we'd be updating so much more frequently. Suppose, however, that in the mini-batch case we increase the learning rate by a factor $$100$$, so the update rule becomes

$w→w′=w−η\sum_x{∇C_x}.\label{102}\tag{102}$

That's a lot like doing $$100$$ separate instances of online learning with a learning rate of $$η$$. But it only takes $$50$$ times as long as doing a single instance of online learning. Of course, it's not truly the same as $$100$$ instances of online learning, since in the mini-batch the $$∇C_x$$'s are all evaluated for the same set of weights, as opposed to the cumulative learning that occurs in the online case. Still, it seems distinctly possible that using the larger mini-batch would speed things up.

With these factors in mind, choosing the best mini-batch size is a compromise. Too small, and you don't get to take full advantage of the benefits of good matrix libraries optimized for fast hardware.
Too large and you're simply not updating your weights often enough. What you need is to choose a compromise value which maximizes the speed of learning. Fortunately, the choice of mini-batch size at which the speed is maximized is relatively independent of the other hyper-parameters (apart from the overall architecture), so you don't need to have optimized those hyper-parameters in order to find a good mini-batch size. The way to go is therefore to use some acceptable (but not necessarily optimal) values for the other hyper-parameters, and then trial a number of different mini-batch sizes, scaling $$η$$ as above. Plot the validation accuracy versus time (as in, real elapsed time, not epoch!), and choose whichever mini-batch size gives you the most rapid improvement in performance. With the mini-batch size chosen you can then proceed to optimize the other hyper-parameters.

Of course, as you've no doubt realized, I haven't done this optimization in our work. Indeed, our implementation doesn't use the faster approach to mini-batch updates at all. I've simply used a mini-batch size of $$10$$ without comment or explanation in nearly all examples. Because of this, we could have sped up learning by reducing the mini-batch size. I haven't done this, in part because I wanted to illustrate the use of mini-batches beyond size $$1$$, and in part because my preliminary experiments suggested the speedup would be rather modest. In practical implementations, however, we would most certainly implement the faster approach to mini-batch updates, and then make an effort to optimize the mini-batch size, in order to maximize our overall speed.

Automated techniques: I've been describing these heuristics as though you're optimizing your hyper-parameters by hand. Hand-optimization is a good way to build up a feel for how neural networks behave. However, and unsurprisingly, a great deal of work has been done on automating the process. A common technique is grid search, which systematically searches through a grid in hyper-parameter space. A review of both the achievements and the limitations of grid search (with suggestions for easily-implemented alternatives) may be found in a 2012 paper* *Random search for hyper-parameter optimization, by James Bergstra and Yoshua Bengio (2012). by James Bergstra and Yoshua Bengio. Many more sophisticated approaches have also been proposed. I won't review all that work here, but do want to mention a particularly promising 2012 paper which used a Bayesian approach to automatically optimize hyper-parameters* *Practical Bayesian optimization of machine learning algorithms, by Jasper Snoek, Hugo Larochelle, and Ryan Adams.. The code from the paper is publicly available, and has been used with some success by other researchers.

Summing up: Following the rules-of-thumb I've described won't give you the absolute best possible results from your neural network. But it will likely give you a good start and a basis for further improvements. In particular, I've discussed the hyper-parameters largely independently. In practice, there are relationships between the hyper-parameters. You may experiment with $$η$$, feel that you've got it just right, then start to optimize for $$λ$$, only to find that it's messing up your optimization for $$η$$. In practice, it helps to bounce backward and forward, gradually closing in on good values. Above all, keep in mind that the heuristics I've described are rules of thumb, not rules cast in stone.
You should be on the lookout for signs that things aren't working, and be willing to experiment. In particular, this means carefully monitoring your network's behaviour, especially the validation accuracy.

The difficulty of choosing hyper-parameters is exacerbated by the fact that the lore about how to choose hyper-parameters is widely spread, across many research papers and software programs, and often is only available inside the heads of individual practitioners. There are many, many papers setting out (sometimes contradictory) recommendations for how to proceed. However, there are a few particularly useful papers that synthesize and distill out much of this lore. Yoshua Bengio has a 2012 paper* *Practical recommendations for gradient-based training of deep architectures, by Yoshua Bengio (2012). that gives some practical recommendations for using backpropagation and gradient descent to train neural networks, including deep neural nets. Bengio discusses many issues in much more detail than I have, including how to do more systematic hyper-parameter searches. Another good paper is a 1998 paper* *Efficient BackProp, by Yann LeCun, Léon Bottou, Genevieve Orr and Klaus-Robert Müller (1998) by Yann LeCun, Léon Bottou, Genevieve Orr and Klaus-Robert Müller. Both these papers appear in an extremely useful 2012 book that collects many tricks commonly used in neural nets* *Neural Networks: Tricks of the Trade, edited by Grégoire Montavon, Geneviève Orr, and Klaus-Robert Müller.. The book is expensive, but many of the articles have been placed online by their respective authors with, one presumes, the blessing of the publisher, and may be located using a search engine.

One thing that becomes clear as you read these articles and, especially, as you engage in your own experiments, is that hyper-parameter optimization is not a problem that is ever completely solved. There's always another trick you can try to improve performance. There is a saying common among writers that books are never finished, only abandoned. The same is also true of neural network optimization: the space of hyper-parameters is so large that one never really finishes optimizing, one only abandons the network to posterity. So your goal should be to develop a workflow that enables you to quickly do a pretty good job on the optimization, while leaving you the flexibility to try more detailed optimizations, if that's important.

The challenge of setting hyper-parameters has led some people to complain that neural networks require a lot of work when compared with other machine learning techniques. I've heard many variations on the following complaint: "Yes, a well-tuned neural network may get the best performance on the problem. On the other hand, I can try a random forest [or SVM or… insert your own favorite technique] and it just works. I don't have time to figure out just the right neural network." Of course, from a practical point of view it's good to have easy-to-apply techniques. This is particularly true when you're just getting started on a problem, and it may not be obvious whether machine learning can help solve the problem at all. On the other hand, if getting optimal performance is important, then you may need to try approaches that require more specialist knowledge. While it would be nice if machine learning were always easy, there is no a priori reason it should be trivially simple.

3.5: How to choose a neural network's hyper-parameters?
is shared under a CC BY-NC 3.0 license and was authored, remixed, and/or curated by Michael Nielsen via source content that was edited to conform to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
https://mech.subwiki.org/w/index.php?title=Stokes%27_law&oldid=478
# Stokes' law

$F = 6\pi \mu R v$

where $v$ is the speed of the flow relative to the surface of the sphere, $R$ is the radius of the sphere, and $\mu$ is the dynamic viscosity of the fluid.
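As a quick numerical illustration of the formula (the numbers below are illustrative, not from the source), consider a small sphere settling through water:

```python
import math

def stokes_drag(mu, R, v):
    """Stokes' law: drag force in newtons on a sphere of radius R (m)
    moving at speed v (m/s) through a fluid of dynamic viscosity mu (Pa*s).
    Valid only for slow, viscous flow (low Reynolds number)."""
    return 6.0 * math.pi * mu * R * v

# A 10-micron sphere moving at 1 mm/s through water (mu ~ 1e-3 Pa*s):
print(stokes_drag(1e-3, 10e-6, 1e-3))  # ~ 1.9e-10 N
```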
https://physics.stackexchange.com/questions/411854/what-is-actually-gravitational-potential/411866
# What is actually Gravitational potential?

Gravitational potential is the work done by the gravitational force to bring a unit mass from infinity into its field. But how can the gravitational force do work there, at infinity? It can only act within its own field! And the place where the gravitational force is inactive is what we call infinity.

Physically, infinity is impossible. So when we talk about bringing in particles from infinity, we mean from a very far distance away where the gravitational field is negligible but still technically non-zero. Of course mathematically infinity is fine, and this is why it is spoken of in this way.

• so, you want to say, the unit mass is driven by that negligible force? :/ we are told that infinity is the place where actually the force doesn't work, isn't it? kindly explain the whole thing. – Alessandrini Jun 14 '18 at 21:46

• @Alessandrini When we talk about bringing in the particle from infinity, we are not saying that the gravitational force is doing that work on its own. If I take a particle at infinity (relative to the gravity source) and move it towards the source, gravity is still going to do work, even though I am moving it. We don't need for gravity to actually make the configuration in order to talk about the potential energy in the system. – Aaron Stevens Jun 14 '18 at 23:06

• How can you say that? like, if a particle is in space (earth is the source) and you move that particle in space, how does earth's gravity do work there actually? And in the definition, we're told that the work is done by gravity! thanks anyway! :) – Alessandrini Jun 15 '18 at 9:07

As you said, gravitational potential energy $U$ represents the potential some body has to do work when located at some point in a gravitational field. For a uniform gravitational field we can approximate the gravitational potential energy (PE) as

$$U = \text{force} \times \text{height} = mgh,$$

where $h$ is usually taken to be the height above the surface of the Earth. For this particular approximation of PE we normally assign the zero of potential energy at $h=0$. This is synonymous with deciding on the origin of a coordinate system. It makes sense. However, it is arbitrary and can be placed anywhere you like so long as you remain consistent.

What about the more general potential

$$U = -\frac{GMm}{r}?$$

The choice of placing our zero point for the gravitational potential energy at $r=\infty$ is convenient for calculations. It does make sense. As we start to reach large $r$ values, the gravitational force rapidly approaches zero. So the further we get away from an object, the smaller the gravitational force becomes. Or, in other words, the gravitational force quickly asymptotes to zero. So it makes sense to have the zero of the gravitational PE at $r=\infty$.

Remember that a potential $\Phi$, say in this case gravitational, is an arbitrary function which in physics follows the equations:

$\vec{F}=-\nabla\Phi$

$\Phi=-\int\,\vec{F}\cdot d\vec{r}$

where $\vec{F}$ is a conservative force field. Work is related to the second equation, but typically (and mathematically) it is a path integral. The potential (when evaluated between two points in the field) will be equal to minus the work exerted. Also mathematically, $\Phi$ can be any function, so on its own its physical importance or meaning is negligible (you can introduce any function that respects the equations above).
In other words, you don't measure a gravitational potential in a laboratory, but you can measure the action that the gravitational field exerts on a point mass - for instance - and then work out a potential for it, in case it is possible. In electrostatics, the same occurs for the electric field $\vec{E}$ and the electric potential $V$.

I don't know if you are familiar with this, but let's see. Consider the Poisson equation for a gravitational potential:

$\nabla^2\Phi=4\pi G\rho$

which leads to Newton's famous law of gravitation:

$\vec{F}=-\dfrac{GMm}{r^2}\hat{r}$

When you solve this partial differential equation for this problem, you get that the gravitational potential (per unit mass) is:

$\Phi_m=-\frac{GM}{r}$

with $\rho=M\delta(r)$, where $\delta(r)$ is Dirac's delta function in spherical coordinates for the radius $r$. The latter shows that at $r=0$ you have the physical singularity of the "big mass $M$" object, and due to it you have a physical interaction for $r>0$. The force vector field for this potential will be "active" at any point in space; but "at infinity", which for us in physics means $r\rightarrow\infty$ but never actually equal (and you can check it directly from Newton's equation above), the force will be very, very, very weak.
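To see numerically why the "from infinity" reference point is well-defined even though the force never quite vanishes, one can compute the work gravity does on a mass brought in from an ever-larger starting radius and watch it converge. A minimal sketch, using Earth-like values purely for illustration:

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # mass of the source (roughly the Earth), kg
m = 1.0         # test mass, kg
r = 7.0e6       # final radius, m

def work_by_gravity(R_start, r_end):
    """Work done by gravity on m moved radially from R_start to r_end
    (closed form of the integral of -G*M*m/R**2 dR)."""
    return G * M * m * (1.0 / r_end - 1.0 / R_start)

for R0 in (1e8, 1e10, 1e12, 1e15):
    print(f"from R0 = {R0:.0e} m: W = {work_by_gravity(R0, r):.6e} J")

# The values converge to G*M*m/r, i.e. to -U(r) for U(r) = -G*M*m/r:
print(f"limit -U(r) = {G * M * m / r:.6e} J")
```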
https://brilliant.org/discussions/thread/divisibility-rule-7-13-17-19-23-29-31-37-41-43-47/
# This note has been used to help create the Divisibility Rules (2,3,5,7,11,13,17,19,...) wiki

7
Subtract 2 times the last digit from the remaining truncated number. Repeat the step as necessary. If the result is divisible by 7, the original number is also divisible by 7.
Check for 945: 94 - (2×5) = 84. Since 84 is divisible by 7, the original number 945 is also divisible.

13
Add 4 times the last digit to the remaining truncated number. Repeat the step as necessary. If the result is divisible by 13, the original number is also divisible by 13.
Check for 3146: 314 + (4×6) = 338; 33 + (4×8) = 65. Since 65 is divisible by 13, the original number 3146 is also divisible.

17
Subtract 5 times the last digit from the remaining truncated number. Repeat the step as necessary. If the result is divisible by 17, the original number is also divisible by 17.
Check for 2278: 227 - (5×8) = 187. Since 187 is divisible by 17, the original number 2278 is also divisible.

19
Add 2 times the last digit to the remaining truncated number. Repeat the step as necessary. If the result is divisible by 19, the original number is also divisible by 19.
Check for 11343: 1134 + (2×3) = 1140. (Ignore the 0.) 11 + (2×4) = 19. Since 19 is divisible by 19, the original number 11343 is also divisible.

23
Add 7 times the last digit to the remaining truncated number. Repeat the step as necessary. If the result is divisible by 23, the original number is also divisible by 23.
Check for 53935: 5393 + (7×5) = 5428; 542 + (7×8) = 598; 59 + (7×8) = 115, which is 5 times 23. Hence 53935 is divisible by 23.

29
Add 3 times the last digit to the remaining truncated number. Repeat the step as necessary. If the result is divisible by 29, the original number is also divisible by 29.
Check for 12528: 1252 + (3×8) = 1276; 127 + (3×6) = 145; 14 + (3×5) = 29, which is divisible by 29. So 12528 is divisible by 29.

31
Subtract 3 times the last digit from the remaining truncated number. Repeat the step as necessary. If the result is divisible by 31, the original number is also divisible by 31.
Check for 49507: 4950 - (3×7) = 4929; 492 - (3×9) = 465; 46 - (3×5) = 31. Hence 49507 is divisible by 31.

37
Subtract 11 times the last digit from the remaining truncated number. Repeat the step as necessary. If the result is divisible by 37, the original number is also divisible by 37.
Check for 11026: 1102 - (11×6) = 1036. Since 103 - (11×6) = 37 is divisible by 37, 11026 is divisible by 37.

41
Subtract 4 times the last digit from the remaining truncated number. Repeat the step as necessary. If the result is divisible by 41, the original number is also divisible by 41.
Check for 14145: 1414 - (4×5) = 1394. Since 139 - (4×4) = 123 is divisible by 41, 14145 is divisible by 41.

43
Add 13 times the last digit to the remaining truncated number. Repeat the step as necessary. If the result is divisible by 43, the original number is also divisible by 43. This process becomes difficult for most people because of the multiplication by 13.
Check for 11739: 1173 + (13×9) = 1290. 129 is divisible by 43 (the 0 is ignored), so 11739 is divisible by 43.

47
Subtract 14 times the last digit from the remaining truncated number. Repeat the step as necessary. If the result is divisible by 47, the original number is also divisible by 47. This too is difficult to operate for people who are not comfortable with the table of 14.
Check for 45026: 4502 - (14×6) = 4418. Since 441 - (14×8) = 329, which is 7 times 47.
Hence 45026 is divisible by 47.

Sort by:

Wow! This is useful!

To prove this... Oh no.

What do you mean? I did this all in my head. E.g. $47\mid 10a+b\iff 47\mid 140a+14b\iff 47\mid 14b-a$. This exact algorithm works for every one of the provided rules. Just multiply $$10a+b$$ by the number mentioned in the rule (in the case I've shown it's $$14$$; let's call it $$t$$), then reduce the coefficient of $$a$$ modulo $$d$$ (the divisor for which the rule is being proved) to the value $$k$$ with smallest $$|k|$$, so that $$d\mid 10a+b \iff d\mid ka+tb$$.

I meant that proving this works might require some time, and maybe 23 cases of spamming... Doesn't seem like the best thing to prove.

Thank you. It's very useful for Maths students.
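These truncate-and-adjust rules are easy to automate. Here is a minimal sketch in Python; the rule table simply transcribes the note above, and the helper name is my own invention:

```python
def rule_divisible(n, divisor, multiplier, subtract):
    """Repeatedly split off the last digit d and add (or subtract)
    multiplier*d to the truncated number, then test the small remainder."""
    n = abs(n)
    while n >= 10 * divisor:  # shrink until small enough to test directly
        n, d = divmod(n, 10)
        n = n - multiplier * d if subtract else n + multiplier * d
        n = abs(n)
    return n % divisor == 0

# divisor -> (multiplier, subtract?) as given in the note
RULES = {7: (2, True), 13: (4, False), 17: (5, True), 19: (2, False),
         23: (7, False), 29: (3, False), 31: (3, True), 37: (11, True),
         41: (4, True), 43: (13, False), 47: (14, True)}

for n, p in [(945, 7), (3146, 13), (2278, 17), (53935, 23), (49507, 31)]:
    mult, sub = RULES[p]
    # The rule's verdict should agree with the direct modulus check
    print(n, p, rule_divisible(n, p, mult, sub), n % p == 0)
```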
http://www.cureffi.org/2017/12/18/protective-prp-missense-variants/
Above: close-up of the lysine (K) residue in an NMR structure of human prion protein with the E219K substitution, which reduces a person's risk of sporadic prion disease. From structure 2LFT [Biljan 2012].

On multiple occasions recently I've gotten a question about the protective missense variants in the prion protein gene, PRNP. There are two questions people ask:

1. Do these variants protect universally against all types of prion disease?
2. Do these variants offer some insight into how to develop a therapy for prion disease?

To which the short answers are 1) not necessarily, and 2) yes, but not in the way you might think. The long answer is this blog post.

### the literature on protective missense variants

What we're talking about here are missense variants — genetic changes resulting in amino acid substitutions in the prion protein (PrP) — that reduce the risk of prion disease. There are examples documented in several species. For instance, V136A, R154H, and Q217R in sheep [Laplanche 1993, Hunter 1994, Westaway 1994, Baylis 2004], and G96S in white-tailed deer [Johnson 2006]. But in this post I'll focus on the protective variants in humans, of which I am aware of three: G127V, M129V, and E219K.

For those who aren't familiar, the notation used in this post is as follows. Number, letter refers to an allele, meaning one version of a gene; so for instance 129V refers to the version of PRNP with V at codon 129. Number, letter, letter refers to a genotype; so for instance 129MM means someone who has M at codon 129 on both copies of their PRNP gene, 129MV is someone with one M and one V, and 129VV is someone with V on both copies. Letter, number, letter, for instance M129V, refers to a variant, meaning the question of what someone's genotype is. The reference allele (usually the more common version) comes before the number, and the alternate allele (usually the rarer version) comes after.

Here's a quick rundown of what we know about each of these:

#### M129V

This is the most common variant in PRNP in humans, with 129V having a ~30% allele frequency in most human populations, except among East Asians where it's only about 2%. The variant is well-known to be associated with a longer disease duration [Pocchiari 2004] and it also affects the neuropathological presentation of the disease [Parchi 1999]. But its effects on risk are not uniform across the different types of prion disease. Even a single 129V allele provides nearly complete protection against variant CJD [Mok 2017], and some protection against kuru [Mead 2003]. Yet 129V is associated with increased risk of human growth hormone-related CJD, perhaps due to compatibility with the genotype of the unknown infected donor [Collinge 1991, Brandel 2003, Moore 2016].

In sporadic prion disease, codon 129 exhibits heterozygote advantage. 129MV individuals have about one-third the risk of sporadic prion disease, compared to either homozygous genotype [Palmer 1991, Mead 2012]. There isn't evidence that codon 129 affects penetrance (lifetime disease risk) in genetic prion disease. For age of onset, as reviewed here, the answer depends on which mutation. For the 6-OPRI mutation, there is evidence that 129MV is associated with a later age of onset (but still complete penetrance), compared to 129MM [Mead 2006]. For P102L, reports are conflicting [Kovacs 2005, Mead 2006, Webb 2008]. For other mutations, there is no evidence that codon 129 has any effect on age of onset.
I'm not yet sure what to make of it, but recently I noticed something interesting while looking back at some old data — Table S10 from [Minikel 2016]. Among Japanese prion disease cases with the low-penetrance V180I mutation, the genotype distribution is: 162 MM, 54 MV. The Japanese V180I allele is in cis with 129M, so that means that 25% (54/216) of trans alleles are 129V, which struck me as a dramatic enrichment, since 129V has an allele frequency of only a few percent in Japan. To see if this was significant, I did some digging and found that 129V was found to have an allele frequency of 3% among 645 Japanese controls [Nozaki 2010]. The raw numbers aren't given, but that probably corresponds to ~39 V alleles out of 1290 chromosomes. So I did a test (in R: fisher.test(matrix(c(1290-39,39,162,54), nrow=2, byrow=T), alternative='two.sided')) and sure enough, 129V is enriched ~10-fold among V180I prion disease cases compared to the general population, P = 1 × 10⁻²⁴. I don't have any data to rule out the possibility that this is due to population stratification — maybe V180I is just more common in some region of Japan where 129V is also more common — but it is possible that 129V does actually increase risk, compared to 129M, when found in trans to V180I. I'm not sure if this is real, but one should not assume that 129V can only be helpful, not harmful, in genetic prion disease — after all, in acquired prion disease, it can be either.

#### E219K

E219K has an allele frequency of ~4% among East Asians and South Asians, while it is very rare in other populations. It's been studied extensively in the Japanese population, where it has a frequency of about 6-7% [Nozaki 2010]. E219K appears to have no effect on the risk of acquired prion disease, at least for dura mater graft CJD, which is the only type of acquired prion disease common enough in Japan for there to be a reasonable amount of data [Nozaki 2010].

In sporadic prion disease, E219K is associated with reduced risk of prion disease [Shibuya 1998, Nozaki 2010]. 219EK heterozygotes appear to have about 20-fold reduced risk, as there were 3 EK and 561 EE genotypes in sporadic CJD in Japan at last count [Nozaki 2010], corresponding to an allele frequency of ~0.3% in cases, compared to 6% in controls. Oddly, we don't yet know if the homozygous KK genotype is protective or not, because this genotype is rare enough a priori that even though zero KK sporadic CJD cases have been observed, that doesn't yet add up to a statistically significant depletion of this genotype. (If KK risk were equal to EE risk, then out of 564 genotyped sporadic CJD cases, you'd only expect ~2 cases, so observing 0 is not too surprising.) We can say that KK almost certainly does not increase risk, though.

In genetic prion disease, it's not clear what effect, if any, E219K might have. In the aggregate, it was reported that the K allele is depleted among genetic prion disease cases in Japan [Nozaki 2010], but it's not yet clear to me whether that difference is statistically significant. The paper reported P < 0.001 but it is difficult to do the statistics right here — the N=214 individuals with genetic prion disease in that paper are not actually 214 independent events, as most statistical tests would assume, because there might be, say, only 20 or so genetic prion disease haplotypes in Japan (we don't know the true number). The answer could also vary depending on which mutation a person has.
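For readers who prefer Python to R, the Fisher test quoted above translates directly to scipy; this merely re-expresses the computation already described (rows are control alleles and the trans alleles of V180I cases, columns are 129M and 129V counts):

```python
from scipy.stats import fisher_exact

table = [[1290 - 39, 39],   # Japanese controls: ~1251 M, ~39 V alleles
         [162, 54]]         # V180I cases, trans alleles: 162 M, 54 V
odds_ratio, p_value = fisher_exact(table, alternative='two-sided')
print(odds_ratio, p_value)  # p should come out on the order of 1e-24
```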
#### G127V

G127V has a frequency probably down around 1% or lower in the Papua New Guinea highlands [Mead 2009] and has only ever been seen in one other individual. Homozygotes have never been observed, but the 127GV heterozygous genotype is associated with dramatic, possibly complete, protection from kuru [Mead 2009]. A transgenic mouse study found that 127V also protected against some variant CJD and iatrogenic CJD isolates [Asante 2015], but we don't have any human data on this. There are also no human data on the effects of 127V on the risk of sporadic prion disease. The mouse study found a large protective effect [Asante 2015], although that was upon inoculation with sporadic CJD brain homogenate, so it doesn't tell us about the risk of spontaneously developing prion disease. For example, it doesn't rule out the possibility that there might just exist a transmission barrier between 127G prions and 127V prions, with G127V individuals having equal or even increased risk to spontaneously form prions of a 127V strain. There are no data on the effects of G127V on genetic prion disease.

### summary table

Here's a table summarizing everything I described above:

| variant | allele frequency | acquired | sporadic | genetic |
|---|---|---|---|---|
| M129V | ~30% worldwide | Nearly complete protection from vCJD and kuru but increased risk of HGH CJD. | Heterozygotes have ~1/3 the risk of either homozygous genotype. | No known effect for most mutations. Later onset for 6-OPRI mutation. |
| E219K | ~6% in Japan | No effect. | ~20-fold reduced risk in heterozygotes. Homozygous effect unknown. | Not clear. |
| G127V | ~1% in Papua New Guinea | Nearly complete protection from kuru. | Evidence for protection from mouse study, but no human data. | No data. |

So to summarize the answer to question #1, protective PrP missense variants do not necessarily have uniformly protective effects across different types of prion disease.

### implications for therapy

Now we come to question #2 — do these protective variants offer some insight into how to develop a therapy for prion disease? Yes, absolutely: the existence of these mutations is one of many lines of evidence telling us that PrP is the cause of prion disease, and is the right drug target for therapies to prevent, delay, or treat prion disease. But beyond that, I submit that these variants do not actually provide any more specific inspiration for how to develop a therapy.

We're not going to CRISPR these variants into people's genomes, because we don't have the technology to deliver CRISPR to the 100 billion neurons of the adult human brain, and even if we did, we'd probably just aim to knock out PRNP, since that is easier to do with CRISPR and has stronger proofs of concept for being protective against prion disease. And we're probably not going to infuse PrP molecules containing these mutations into people's brains, because we also lack the technology to deliver proteins or peptides to a wide swath of the human brain, and anyway it's not clear whether a recombinant protein floating around unanchored would have the same protective effect as PrP anchored to the cell surface, where it's supposed to be. (People have tried this peptide infusion idea in mouse models [Furuya 2006], but the results haven't been too promising.) There's also an idea that maybe these protective missense variants can somehow provide inspiration for a small molecule drug for prion disease, which sounds great on the face of it, but it's just not obvious how exactly you would design that molecule.
One study used a computational approach to try to identify molecules that had structural similarity to the parts of PrP containing protective variants, and then tested some of the candidate molecules in cell culture [Perrier 2000]. But the molecules all had EC50 values of >15 μM in cell culture (that’s not very potent), and a later study could not replicate any binding to PrP nor any activity in cell culture [Kamatari 2013]. If anyone has specific ideas for how to translate these protective variants into a therapy, I’d be curious to hear them, but from everything I’ve seen and read, the translation into therapy is anything but obvious. And while these variants are clearly protective — sometimes dramatically so — in some forms of prion disease, we do not have for them nearly the proof of concept that we have for lowering PrP. As reviewed here, for lowering PrP, we know there is total protection in knockouts [Bueler 1993], there is a dose-response curve across a wide range of expression levels [Bueler 1994, Fischer 1996], PrP is needed not only for replication but also neurotoxicity [Brandner 1996], and shutting off PrP expression is beneficial even after the disease process has begun [Mallucci 2003, Safar 2005]. The protective missense variants provide one further line of evidence pointing to PrP as a drug target, but they do not open up a more promising therapeutic strategy than PrP lowering.
http://mathhelpforum.com/calculus/163054-integrate-sinx-sin4x.html
1. ## integrate sinx/sin4x

Integrate

Code:
sinx/sin4x

I expanded sin4x and did some partial fractions stuff, but the answer which I get doesn't match the one in the back of the book.

2. Originally Posted by ice_syncer

integrate Sin[x]/Sin[4x] - Wolfram|Alpha

Click on Show steps.

3. ;D

4. Originally Posted by mr fantastic

Wolfram's steps look too frightening. I'd rather go for a trigonometric substitution:

$\displaystyle{\tan\frac{x}{2}=t\Longrightarrow dx=\frac{2}{1+t^2}\,dt\,,\,\,\cos x=\frac{1-t^2}{1+t^2}\,,\,\,\sin x=\frac{2t}{1+t^2}}$,

so with a little trigonometry:

$\displaystyle{\int \frac{\sin x}{\sin 4x}\,dx=\int\frac{\sin x}{2\sin 2x\cos 2x}\,dx=\int\frac{dx}{4\cos x(2\cos^2x-1)}=\frac{1}{2}\int\frac{(1+t^2)^2\,dt}{(1-t^2)\left[t^2-(3+2\sqrt{2})\right]\left[t^2-(3-2\sqrt{2})\right]}}$,

and we have an integral of a rational function. Indeed this still is a nasty integral, but perhaps a little less messy than the steps of the proposed solution in Wolfram's... perhaps.

Tonio

5. Originally Posted by tonio

$\displaystyle\int\frac{dx}{4\cos x(2\cos^2x-1)}$

Multiply top and bottom by $\cos x$:

$\displaystyle\frac{1}{\cos x(2\cos^{2}x-1)}=\frac{\cos x}{(1-\sin^{2}x)(1-2\sin^{2}x)},$

substitute $t=\sin x$ and get

\begin{aligned} \int{\frac{dt}{(1-t^{2})(1-2t^{2})}}&=\int{\frac{2(1-t^{2})-(1-2t^{2})}{(1-t^{2})(1-2t^{2})}\,dt} \\ & =2\int{\frac{dt}{1-2t^{2}}}-\int{\frac{dt}{1-t^{2}}}, \end{aligned}

kill those in the same fashion.

----- that's one of the biggest reasons why i don't absolutely recommend wolfram for tricky problems, we gotta let our people think.

6. Wolfram also made a mess of the partial fractions. If letting $u = \sin{x}$ gives $\displaystyle \int\frac{1}{8u^4-12u^2+4}\;{du}$, then simply write

$\displaystyle 8u^4-12u^2+4 = 4(u^2-1)(2u^2-1),$

so that

$\displaystyle I = \frac{1}{8}\int\frac{1}{(u-1)(2u^2-1)}\;{du}-\frac{1}{8}\int\frac{1}{(u+1)(2u^2-1)}\;{du}.$

Expanding each integrand in partial fractions over the linear factors $u\mp1$ and $\sqrt{2}u\mp1$, integrating the resulting logarithms, and combining constants, we get

$\displaystyle I = \frac{1}{8}\ln\left|\frac{u-1}{u+1}\right|-\frac{1}{4\sqrt{2}}\ln\left|\frac{\sqrt{2}u-1}{\sqrt{2}u+1}\right|+k.$

Thus our original integral evaluates to:

$\displaystyle \int\frac{\sin x}{\sin 4x}\,dx = \frac{1}{8}\ln\left|\frac{\sin{x}-1}{\sin{x}+1}\right|-\frac{1}{4\sqrt{2}}\ln\left|\frac{\sqrt{2}\sin{x}-1}{\sqrt{2}\sin{x}+1}\right|+k.$
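As a sanity check on the closed form in the last post, one can ask sympy to differentiate it back; this is a verification sketch added here, not part of the original thread:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.sin(x) / sp.sin(4 * x)

# The antiderivative derived above, written with u = sin(x)
u = sp.sin(x)
F = (sp.Rational(1, 8) * sp.log((u - 1) / (u + 1))
     - sp.log((sp.sqrt(2) * u - 1) / (sp.sqrt(2) * u + 1)) / (4 * sp.sqrt(2)))

# Differentiating F and subtracting f should simplify to zero
print(sp.simplify(sp.diff(F, x) - f))  # expected output: 0
```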
https://bitcointechweekly.com/issues/2020-07-09/
#### Releases

ledger-live-common v13.1.0 (2020-07-06)

• inverse Native Segwit and Segwit account ordering in Add Accounts
• #773 filtering mcus to flash based on the provider to avoid flashing unreleased mcus (QA impact: a firmware update should still work, booting your device in Bootloader Mode and doing a "Repair" will NOT flash an unreleased mcu)
• #763 add firmwareUnsupported function in lib/manager.

ledger-live-desktop v2.8.0 (2020-07-06)

lnd v0.10.3-beta (2020-07-06)

This is the 3rd minor release in the v0.10.0-beta series. Unlike v0.10.2-beta, which only includes bug-fixes, this release also includes some refactoring of the main lnd package that allows lnd to more easily be embedded as a normal struct (by importing the package) in other Go projects.

# Verifying the Release

In order to verify the release, you'll need to have gpg or gpg2 installed on your system. Once you've obtained a copy (and hopefully verified that as well), you'll first need to import the keys that have signed this release if you haven't done so already:

curl https://keybase.io/roasbeef/pgp_keys.asc | gpg --import

Once you have the required PGP keys, you can verify the release (assuming manifest-v0.10.3-beta.txt and manifest-v0.10.3-beta.txt.sig are in the current directory) with:

gpg --verify manifest-v0.10.3-beta.txt.sig

You should see the following if the verification was successful:

gpg: assuming signed data in 'manifest-v0.10.3-beta.txt'
gpg: Signature made Mon Jul 6 12:59:00 2020 PDT
gpg: using RSA key 4AB7F8DA6FAEBB3B70B1F903BC13F65E2DC84465
gpg: Good signature from "Olaoluwa Osuntokun <[email protected]>" [ultimate]

That will verify the signature of the manifest file, which ensures integrity and authenticity of the archive you've downloaded locally containing the binaries. Next, depending on your operating system, you should then re-compute the sha256 hash of the archive with shasum -a 256 <filename>, compare it with the corresponding one in the manifest file, and ensure they match exactly.

## Verifying the Release Binaries

Our release binaries are fully reproducible. Third parties are able to verify that the release binaries were produced properly without having to trust the release manager(s). See our reproducible builds guide for how this can be achieved. The release binaries are compiled with go1.13.12, which is required by verifiers to arrive at the same ones. They include the following build tags: autopilotrpc, signrpc, walletrpc, chainrpc, invoicesrpc, routerrpc, and watchtowerrpc. Note that these are already included in the release script, so they do not need to be provided.

The make release command can be used to ensure one rebuilds with all the same flags used for the release. If one wishes to build for only a single platform, then make release sys=<os-arch> tag=<tag> can be used. Finally, you can also verify the tag itself with the following command:

git verify-tag v0.10.3-beta

# Building the Contained Release

Users are able to rebuild the target release themselves without having to fetch any of the dependencies.
In order to do so, assuming that vendor.tar.gz and lnd-source-v0.10.3-beta.tar.gz are in the current directory, follow these steps:

tar -xvzf vendor.tar.gz
tar -xvzf lnd-source-v0.10.3-beta.tar.gz
GO111MODULE=on go install -v -mod=vendor -ldflags "-X github.com/lightningnetwork/lnd/build.Commit=v0.10.3-beta" ./cmd/lnd
GO111MODULE=on go install -v -mod=vendor -ldflags "-X github.com/lightningnetwork/lnd/build.Commit=v0.10.3-beta" ./cmd/lncli

The -mod=vendor flag tells the go build command that it doesn't need to fetch the dependencies, and instead, they're all enclosed in the local vendor directory. Additionally, it's now possible to use the enclosed release.sh script to bundle a release for a specific system like so:

make release sys="linux-arm64 darwin-amd64"

⚡️⚡️⚡️ OK, now to the rest of the release notes! ⚡️⚡️⚡️

# Release Notes

## lnd Package Refactoring

In prior releases, we've started to refactor the way lnd is initialized and started, to make it easier to embed lnd in other Go applications. The primary consumer of these APIs so far has been our mobile bindings for lnd. In this release we continue the process to further abstract the lnd package with a series of PRs that remove a number of global variables, allow external sub-server registration, and add external logging hooks.

The full list of changes since v0.10.2-beta can be found here:

# Contributors (Alphabetical Order)

Oliver Gugger

lnd v0.10.2-beta (2020-07-06)

This marks the second minor release in the v0.10.0-beta series. Unlike v0.10.3-beta, which also includes refactoring, this release only includes bug-fixes: it allows lnd to be compatible with bitcoind 0.20, resolves some peer connection instability issues, fixes an issue that can cause payments to hang in a stuck state until a connection is cycled, and fixes an important bug related to on-disk Static Channel Backups (SCB).

# Verifying the Release

In order to verify the release, you'll need to have gpg or gpg2 installed on your system. Once you've obtained a copy (and hopefully verified that as well), you'll first need to import the keys that have signed this release if you haven't done so already:

curl https://keybase.io/roasbeef/pgp_keys.asc | gpg --import

Once you have the required PGP keys, you can verify the release (assuming manifest-v0.10.2-beta.txt and manifest-v0.10.2-beta.txt.sig are in the current directory) with:

gpg --verify manifest-v0.10.2-beta.txt.sig

You should see the following if the verification was successful:

gpg: assuming signed data in 'manifest-v0.10.2-beta.txt'
gpg: Signature made Mon Jul 6 12:33:41 2020 PDT
gpg: using RSA key 4AB7F8DA6FAEBB3B70B1F903BC13F65E2DC84465
gpg: Good signature from "Olaoluwa Osuntokun <[email protected]>" [ultimate]

That will verify the signature of the manifest file, which ensures integrity and authenticity of the archive you've downloaded locally containing the binaries. Next, depending on your operating system, you should then re-compute the sha256 hash of the archive with shasum -a 256 <filename>, compare it with the corresponding one in the manifest file, and ensure they match exactly.

## Verifying the Release Binaries

Our release binaries are fully reproducible. Third parties are able to verify that the release binaries were produced properly without having to trust the release manager(s). See our reproducible builds guide for how this can be achieved. The release binaries are compiled with go1.14.4, which is required by verifiers to arrive at the same ones.
They include the following build tags: autopilotrpc, signrpc, walletrpc, chainrpc, invoicesrpc, routerrpc, and watchtowerrpc. Note that these are already included in the release script, so they do not need to be provided. The make release command can be used to ensure one rebuilds with all the same flags used for the release. If one wishes to build for only a single platform, then make release sys=<os-arch> tag=<tag> can be used. Finally, you can also verify the tag itself with the following command:

git verify-tag v0.10.2-beta

# Building the Contained Release

Users are able to rebuild the target release themselves without having to fetch any of the dependencies. In order to do so, assuming that vendor.tar.gz and lnd-source-v0.10.2-beta.tar.gz are in the current directory, follow these steps:

tar -xvzf vendor.tar.gz
tar -xvzf lnd-source-v0.10.2-beta.tar.gz
GO111MODULE=on go install -v -mod=vendor -ldflags "-X github.com/lightningnetwork/lnd/build.Commit=v0.10.2-beta" ./cmd/lnd
GO111MODULE=on go install -v -mod=vendor -ldflags "-X github.com/lightningnetwork/lnd/build.Commit=v0.10.2-beta" ./cmd/lncli

The -mod=vendor flag tells the go build command that it doesn't need to fetch the dependencies, and instead, they're all enclosed in the local vendor directory. Additionally, it's now possible to use the enclosed release.sh script to bundle a release for a specific system like so:

make release sys="linux-arm64 darwin-amd64"

⚡️⚡️⚡️ OK, now to the rest of the release notes! ⚡️⚡️⚡️

# Release Notes

## Network Gossip Channel Announcement Feature Bit Encoding

This release fixes an existing bug that would cause us to not carry over the ChannelUpdate-level feature bits when sending a channel announcement to a connected node. Some nodes have started to use this space to communicate that they support "wumbo" channels. Before this change, if lnd was instructed to send the channel announcement for a wumbo channel, it would omit this feature bit information, causing the requesting node to reject the announcement as the signature would fail. This may have caused some peer connection instability since these wumbo channels have started to be more widely propagated across the network.

This release also modifies the way a new payment is sent to the first outgoing channel it needs to traverse before being sent off to the network. Before this commit, the router would hand the payment off to the switch in an asynchronous manner. Recently, it was brought to our attention that this behavior could at times cause a payment to unnecessarily fail later if the target link wasn't online, or not fully available. In this new release, this process is now synchronous. End users should observe fewer internal payment failures due to out-of-date bandwidth hints, as the router is now able to fully see through the addition of a new HTLC.

## bitcoind Compatibility

With this new release, lnd can now be used with bitcoind 0.20 as its full-node chain backend.

## SCB Bug Fix

This release includes an important bug fix for static channel backups. Before this release, if a new lnd node was started with a data directory that contained an existing SCB file, then that existing file would be completely overridden by whatever channel state the new lnd node started with. With this new release of lnd, we'll fail to start if we're unable to read an existing SCB file on disk. Additionally, we'll always combine the contents of the SCB file with our in-memory channel state.
The full list of changes since v0.10.1-beta can be found here:

# Contributors (Alphabetical Order)

Andras Banki-Horvath
Joost Jager
Olaoluwa Osuntokun
Oliver Gugger
Wilmer Paulino

#### RFC

| type | rfc # | title | date | status |
|---|---|---|---|---|
| bip | bip-0339 | BIP 339: WTXID-based transaction relay | 2020-07-08 | Merged |
| bolt | X | What is the threshold for to_self_delay to be unreasonably large? | 2020-07-07 | Closed |
| bolt | X | Move founds before closing channel | 2020-07-07 | Closed |
| bolt | X | No way (according to LND dev wpaulino) to identify the network of a connection request. | 2020-07-07 | Closed |
| bolt | X | Problem with only the funder being able to control the fee rate | 2020-07-07 | Closed |
| bolt | peer protocol | BOLT 2: Clarifying requirements for multiple TLV record occurrences of the same type | 2020-07-07 | Closed |
| bolt | X | Lightning Specification Meeting 2020/06/22 | 2020-07-07 | Closed |
| bolt | transactions | BOLT 3: fix definition of flip(B) in P. | 2020-07-07 | Merged |
| bolt | X | clarification: when does max_accepted_htlcs apply? | 2020-07-05 | Update |
| slip | slip-0044 | Slip-0044 add YOU(#1010) | 2020-07-07 | Merged |
| slip | slip-0044 | slip-0044: add GHOST | 2020-07-06 | Merged |
| slip | X | Add SUSHI to coin types | 2020-07-06 | Closed |
| slip | X | add HST | 2020-07-06 | Merged |
| slip | slip-0077 | Typo in Slip77 | 2020-07-04 | Merged |
https://projectautox.com/pet-insurance-cxyck/916eff-properties-of-an-estimator
## Properties of an estimator

In assumption A1, the focus was that the regression model should be "linear in parameters." The linearity property of the OLS estimator, however, means that OLS belongs to the class of estimators that are linear in $Y$, the dependent variable. Consider the linear regression model where the outputs are denoted by $y_i$, the associated vectors of inputs are denoted by $x_i$, the vector of regression coefficients is denoted by $\beta$, and the $\varepsilon_i$ are unobservable error terms.

An "estimator" is a function that maps the sample space to a set of sample estimates. Since an estimator is computed from a sample, it is a random variable (r.v.) and varies from sample to sample. The two main types of estimators in statistics are point estimators and interval estimators.

1. The estimator is unbiased. The closer the expected value of the point estimator is to the value of the parameter being estimated, the less bias it has. Putting this in standard mathematical notation, an estimator $\hat\theta$ is unbiased if $E(\hat\theta) = \theta$; the difference $E(\hat\theta) - \theta$ is, and should be, zero for an unbiased estimator. It should be unbiased in the sense that it should not systematically overestimate or underestimate the true value of the parameter; a biased estimator can be, on average, less or more than the true parameter, giving rise to negative and positive biases. In regression terms, we say that the point estimator $\hat\beta_j$ is an unbiased estimator of the true population parameter $\beta_j$ if the expected value of $\hat\beta_j$ is equal to the true $\beta_j$. While this holds in the particular context where the estimator is a simple average of random variables, one can perfectly well design an estimator that has other interesting properties but whose expected value differs from the parameter $\theta$.

Example: let $X_1, X_2, \ldots, X_n$ be an i.i.d. sample from a distribution with mean $\mu$ and standard deviation $\sigma$; then the sample mean $\bar X$ is an unbiased estimator of $\mu$.

2. The estimator is consistent. An estimator $\hat\theta_n$ is consistent if it converges to $\theta$ in a suitable sense as $n \to \infty$.

Properties of least squares estimators: each $\hat\beta_i$ is an unbiased estimator of $\beta_i$, that is, $E[\hat\beta_i] = \beta_i$; also $V(\hat\beta_i) = c_{ii}\sigma^2$, where $c_{ii}$ is the element in the $i$th row and $i$th column of $(X'X)^{-1}$, and $\operatorname{Cov}(\hat\beta_i, \hat\beta_j) = c_{ij}\sigma^2$. The estimator

$S^2 = \dfrac{SSE}{n-(k+1)} = \dfrac{Y'Y - \hat\beta' X'Y}{n-(k+1)}$

is an unbiased estimator of $\sigma^2$. This document derives the least squares estimates of $\beta_0$ and $\beta_1$ for the simple regression case.

Properties of BLUE: B (Best), L (Linear), U (Unbiased), E (Estimator). An estimator is BLUE if the following hold: (1) it is linear in the dependent variable; (2) it is unbiased; and (3) it is best, i.e. it has minimum variance among all linear unbiased estimators. When some or all of the above assumptions are satisfied, the OLS estimator is BLUE.

The bias of an estimator $\hat\theta = t(X)$ of $\theta$ is $\operatorname{bias}(\hat\theta) = E\{t(X)\} - \theta$. If $\operatorname{bias}(\hat\theta)$ is of the form $c\theta$, then $\tilde\theta = \hat\theta/(1+c)$ is unbiased for $\theta$.
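To make the $\tilde\theta = \hat\theta/(1+c)$ correction concrete, here is a standard worked example (added here for illustration; it is not part of the original notes). The divide-by-$n$ variance estimator

$\hat\sigma_n^2 = \dfrac{1}{n}\sum_{i=1}^n (X_i - \bar X)^2$

satisfies $E[\hat\sigma_n^2] = \dfrac{n-1}{n}\sigma^2$, so its bias is $-\dfrac{1}{n}\sigma^2$, i.e. of the form $c\sigma^2$ with $c = -1/n$. Dividing by $1+c = \dfrac{n-1}{n}$ yields

$\tilde\sigma^2 = \hat\sigma_n^2 \Big/ \dfrac{n-1}{n} = \dfrac{1}{n-1}\sum_{i=1}^n (X_i - \bar X)^2 = S^2,$

which is exactly the familiar unbiased sample variance.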
There are three desirable properties every good estimator should possess: unbiasedness, efficiency, and consistency. This presentation lists the properties that should hold for an estimator to be a Best Linear Unbiased Estimator (BLUE). Suppose there is a fixed parameter $\theta$ that needs to be estimated; let's now look at each property in detail.

Unbiasedness. A statistic $t$ is an unbiased estimator of $\tau$ if $E[t] = \tau$. Intuitively, the expected value, the mean location of the distribution of the estimator, coincides with the parameter being estimated. An estimator such as $\hat\beta_j$ of $\beta_j$ (where $n$ represents the sample size) is a random variable and therefore varies from sample to sample, so its quality is to be evaluated in terms of its sampling distribution. For example, $\bar X$ and $S^2$ are unbiased estimators of $\mu$ and $\sigma^2$, respectively, which is why the sample mean can be used to estimate the population mean $\mu$.

Efficiency. Among unbiased estimators, the efficient one is the one with minimum variance. An estimator that is unbiased but does not have minimum variance is not good.

Consistency. An estimator is consistent if its value approaches the true parameter value as the sample size increases.

Finally, recall the distinction between the two main types of estimators: a point estimator uses sample data to calculate a single value, while an interval estimator produces a range of values.
To summarize: an estimator is unbiased when the mean of its sampling distribution equals the parameter it estimates. The most fundamental desirable small-sample properties of an estimator are unbiasedness, efficiency, sufficiency, consistency, and minimum variance. A point estimator uses sample data to calculate a single statistic that serves as the best estimate of an unknown parameter of a population, and the method of moments is one of the oldest methods for deriving point estimators. The quality of an estimator is always evaluated with respect to its sampling distribution, which describes how its value varies across repeated samples.
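To see unbiasedness in action, here is a short Monte Carlo check (a sketch added for illustration; it is not from any of the sources above). It verifies numerically that $\bar X$ and the divide-by-$(n-1)$ estimator $S^2$ are unbiased, while the divide-by-$n$ variance estimator is biased downward by the factor $(n-1)/n$:

```python
import numpy as np

rng = np.random.default_rng(42)
mu, sigma2 = 5.0, 4.0          # true mean and variance
n, trials = 10, 200_000

# trials independent samples of size n
samples = rng.normal(mu, np.sqrt(sigma2), size=(trials, n))
xbar  = samples.mean(axis=1)            # sample mean
var_n = samples.var(axis=1, ddof=0)     # divide by n     -> biased
s2    = samples.var(axis=1, ddof=1)     # divide by n - 1 -> unbiased S^2

print(xbar.mean())   # ~ 5.0 : E[sample mean] = mu
print(var_n.mean())  # ~ 3.6 : E[var_n] = (n-1)/n * sigma^2
print(s2.mean())     # ~ 4.0 : E[S^2]   = sigma^2
```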
https://quantumcomputing.stackexchange.com/questions/5891/how-to-prove-universality-for-a-set-of-gates
# How to prove universality for a set of gates?

Which of the following sets of gates are universal for quantum computation?

1. {H, T, CPHASE}
2. {H, T, SWAP}

And how do we prove it?

• Have you already covered an example of a universal set of gates? Do you have to do it from scratch, or can you reduce to a previous case and then use a result from class? Apr 10 '19 at 18:34
• A gate set cannot be universal if it cannot create entanglement, so {H, T, SWAP} is not universal. Apr 10 '19 at 18:34
• Hi, hey0god. Welcome to Quantum Computing SE! Please note that we're not a homework help site. I've removed the unnecessary details from v2 of the question. Anyway, I believe the edited v3 of the question is generic enough and would be useful for future visitors to the site, so I'm leaving it open. Apr 10 '19 at 19:10
• Welcome to Quantum Computing SE! If possible, in order to get better answers more directly dealing with your problem, would you be able to edit this question explaining what exactly you've tried so far and where exactly you're stuck? Thanks! Apr 10 '19 at 19:31

On the other hand, the best way to prove that a gate set is not universal is to show that you can simulate the evolution of any circuit built from it efficiently on a classical computer. The extent to which we believe that classical and quantum computers are different is the extent to which we believe that such a gate set cannot be universal for quantum computation. So, how would you efficiently simulate arbitrary single-qubit gates being applied to a set of $$n$$ qubits? Can you update your simulation algorithm to include SWAP without losing efficiency? (Hint: Yes)
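To make the second comment concrete, here is a small numpy sketch (added for illustration; the thread itself contains no code). It applies long random circuits built only from H, T, and SWAP to $$|00\rangle$$ and checks that the state always remains a product state (Schmidt rank 1), i.e. that no entanglement is ever created:

```python
import numpy as np

# Single-qubit gates and SWAP on two qubits
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]])
I2 = np.eye(2)
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)

def schmidt_rank(state):
    # Reshape the 2-qubit amplitude vector into a 2x2 matrix;
    # its matrix rank equals the Schmidt rank (1 means "not entangled").
    return np.linalg.matrix_rank(state.reshape(2, 2), tol=1e-10)

rng = np.random.default_rng(0)
state = np.kron([1, 0], [1, 0]).astype(complex)  # |00>
for _ in range(1000):
    gate = rng.choice(3)
    if gate == 0:    # H on a random qubit
        U = np.kron(H, I2) if rng.choice(2) == 0 else np.kron(I2, H)
    elif gate == 1:  # T on a random qubit
        U = np.kron(T, I2) if rng.choice(2) == 0 else np.kron(I2, T)
    else:            # SWAP
        U = SWAP
    state = U @ state
    assert schmidt_rank(state) == 1  # never entangled

print("state stayed a product state after 1000 random {H, T, SWAP} gates")
```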
https://numbersandshapes.net/posts/academic_text_matching/
# Every academic their own text-matcher

## Plagiarism, text matching, and academic integrity

Every modern academic teacher is in thrall to giant text-matching systems such as Ouriginal or Turnitin. These systems are sold as "plagiarism detectors", which they are not - they are text matching systems, and they generally work by providing a report showing how much of a student's submitted work matches text from other sources. It is up to the academic to decide if the level of text matching constitutes plagiarism.

Although Turnitin sells itself as a plagiarism detector, or at any rate a tool for supporting academic integrity, its software is closed source, so, paradoxically, there's no way of knowing if any of its source code has been plagiarized from another source.

Such systems work by having access to a giant corpus of material: published articles, reports, text on websites, blogs, previous student work obtained from all over, and so on. The more texts a system can try to match a submission against, the more confidence an academic is supposed to have in its findings. (And the more likely an administration will see fit to pay the yearly licence costs.)

Of course, in the arms-race of academic integrity, you'll find plenty of websites offering advice on "how to beat Turnitin"; in the interests of integrity I'm not going to link to any, though they're not hard to find. And of course Turnitin will presumably up its game to counter these methods, and the sites will be rewritten, and so on.

## My problem

I have been teaching a fully online class; although my university is slowly trying to move back (at least partially) into on-campus delivery after 2 1/2 years of Covid remote learning, some classes will still run online. My students were completing an online "exam": a timed test (un-invigilated) in which the questions were randomized so that no two students got the same set of questions. They were all "Long Answer" questions in the parlance of our learning management system; at any rate, for each question a text box was given for the student to enter their answer. The test was to be marked "by hand". That is, by me.

Many of my students speak English as a second language, and although they are supposed to have a basic competency sufficient for tertiary study, many of them struggle. And if a question asks them to define, for example, "layering" in the context of cybersecurity, I have not the slightest problem with them searching for information online, finding it, and copying it into the text box. If they can search for the correct information and find it, that's good enough for me. This exam is also open book. As far as I'm concerned, finding correct information is a useful and valuable skill; testing only for what they might remember, written "in their own words", is pedagogically indefensible.

So, working my way grimly through these exams, I had a "this seems familiar..." moment. And indeed, searching through some previous submissions I found exactly the same answer submitted by another student. Well, that can happen. What is less likely to happen, at least by chance, is for almost all of the 16 questions to have the same submissions as other students. People working in the area of academic integrity sometimes speak of a "spidey sense": a sort of sixth sense that alerts you that something's not right, even if you can't quite yet pinpoint the issue. This was that sense, and more.
It turned out that the entire test and all answers could be downloaded and saved as a CSV file, and hence loaded into Python as a pandas DataFrame. My first attempt had me looking at all pairs of students and their test answers, to see if any of the answer text strings matched. And some indeed did. Because of the randomized nature of the test, one student might receive as question 7, say, the same question that another student might see as question 5, or question 8.

The data I had to work with consisted of two DataFrames. One contained all the exam information:

```
examdata.dtypes

Username     object
FirstName    object
LastName     object
Q #           int64
Q Text       object
Score       float64
Out Of      float64
dtype: object
```

This DataFrame was ordered by student, and then by question number. This meant that every student had up to 16 rows of the DataFrame. I had another DataFrame containing just the names and cohorts (there were two distinct cohorts, and this information was not given in the dump of exam data to the CSV file):

```
names.dtypes

Username     object
FirstName    object
LastName     object
Cohort       object
dtype: object
```

I added the cohorts by hand. This could then be merged with the exam data:

```python
data = examdata.merge(names, on=["Username", "FirstName", "LastName"], how='left').reset_index(drop=True)
```

## String similarity

Since the exam answers in my DataFrame were text strings, any formatting that the student might have given in an answer, such as bullet points, a numbered list, a table, or font changes, was ignored. All I had to work with were ASCII strings.

However, exact string matching led to very few results. This is because there might have been a difference in starting or ending whitespace or other characters, or because one student's submission included another student's submission as a substring. Consider, for example, these two (synthetic) answers:

• "A man-in-the-middle attack is a cyberattack where the attacker secretly relays and possibly alters the communications between two parties who believe that they are directly communicating with each other, as the attacker has inserted themselves between the two parties." (from the Wikipedia page on the Man-In-The-Middle attack.)
• "I think it's this: A man-in-the-middle attack is a cyberattack where Mallory secretly relays and possibly alters the communications between Alice and Bob who believe that they are directly communicating with each other, as Mallory has inserted himself between them."

There are various ways of measuring the distance between strings, or alternatively their similarity. Two much-used methods are the Jaro similarity measure (named for Matthew Jaro, who introduced it in 1989), and the Jaro-Winkler measure, a version named also for William Winkler, who discussed it in 1990. Both of these are defined on their Wikipedia page. Winkler's measure adds to the original Jaro measure a factor based on the equality of any beginning substring.

It turns out that the Jaro-Winkler similarity of the two strings above is about 0.78. If the initial "I think it's this: " is removed from the second string, then the similarity increases to 0.89.

Both the Jaro and Jaro-Winkler measures are happily implemented in the Python jellyfish package. This package also includes some other standard measurements of the closeness of two strings.

My approach was to find all pairs of submissions whose Jaro-Winkler similarity exceeded 0.85. I found this threshold empirically, by checking a number of (what appeared to me to be) very similar submissions and computing their similarities.
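As a quick check of those numbers, one can compute the similarities directly (my own snippet, added here for illustration; note that recent versions of jellyfish expose these functions as jaro_similarity and jaro_winkler_similarity, while older releases called them jaro_distance and jaro_winkler):

```python
import jellyfish as jf

s1 = ("A man-in-the-middle attack is a cyberattack where the attacker secretly "
      "relays and possibly alters the communications between two parties who "
      "believe that they are directly communicating with each other, as the "
      "attacker has inserted themselves between the two parties.")
s2 = ("I think it's this: A man-in-the-middle attack is a cyberattack where "
      "Mallory secretly relays and possibly alters the communications between "
      "Alice and Bob who believe that they are directly communicating with each "
      "other, as Mallory has inserted himself between them.")

print(jf.jaro_similarity(s1, s2))           # plain Jaro
print(jf.jaro_winkler_similarity(s1, s2))   # with prefix bonus, ~0.78
print(jf.jaro_winkler_similarity(s1, s2[len("I think it's this: "):]))  # ~0.89
```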
## Some results

In this class there were 39 students, divided into two cohorts: 12 were taught by me, and the rest by another teacher. I was only concerned with mine. There were 16 questions, but not every student answered every question, and so the maximum size of my DataFrame would be $$12\times 16=192$$; in fact I had a total of 171 different answers. The numbers of questions submitted by the students were:

11, 16, 14, 16, 16, 16, 15, 13, 12, 12, 16, 14

and so (to avoid comparing pairs of submissions twice) I aimed to compare every student's submission to the submissions of all students below them in the DataFrame. This makes for 13,383 comparisons. In fact, because I'm a lazy programmer, I simply compared every submission to every submission below it in the DataFrame (which meant that I was also comparing pairs of submissions from the same student), for a total of 14,535 comparisons. This is how (assuming that the jellyfish package has been loaded as jf, and that my_data is the merged DataFrame restricted to my own cohort):

```python
match_list = []
N = my_data.shape[0]
for i in range(N):
    for j in range(i + 1, N):
        # "Answer" is an assumed name for the answer-text column
        jfs = jf.jaro_winkler_similarity(my_data["Answer"].iloc[i], my_data["Answer"].iloc[j])
        if jfs > 0.85:
            match_list.append([my_data["Username"].iloc[i], my_data["Username"].iloc[j],
                               my_data["Q #"].iloc[i], my_data["Q #"].iloc[j], jfs])
```

I ended up with 33 matches, which I put into a DataFrame:

```python
matches = pd.DataFrame(match_list, columns=["ID 1", "ID 2", "Q# 1", "Q# 2", "Similarity"])
```

As you see, each row of the DataFrame contained the two student ID numbers, the relevant question numbers, and the similarity measure. Because of the randomisation of the exam, two students might get the same question but with a different number (as I mentioned earlier). To see if any pair of students appeared more than once, I grouped the DataFrame by their ID numbers:

```python
dg = matches.groupby(["ID 1", "ID 2"]).size()
dg.values
```

```
array([ 1,  1,  1,  1,  1,  1,  1, 11,  1,  1,  1,  1,  1,  1,  1,  1,  2,
        1,  1,  2,  1])
```

Notice something? There's a pair of students who submitted very similar answers to 11 questions! Now this pair can be isolated:

```python
maxd = max(dg.values)
cheats = dg.loc[dg.values == maxd].index[0]
c0, c1 = cheats
```

The matches can now be listed:

```python
collusion = matches.loc[(matches["ID 1"] == c0) & (matches["ID 2"] == c1)].reset_index(drop=True)
```

and we can print off these matches as evidence.
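For completeness, here is a sketch of that final printing step (my own addition; the column names "Answer", "Username", and "Q #" follow the assumptions made in the reconstructed loop above):

```python
def answer_of(student, qnum):
    # Look up the answer text a given student gave to a given question number.
    row = my_data.loc[(my_data["Username"] == student) & (my_data["Q #"] == qnum)]
    return row["Answer"].iloc[0]

for _, m in collusion.iterrows():
    print(f'{m["ID 1"]} Q{m["Q# 1"]}  vs  {m["ID 2"]} Q{m["Q# 2"]}'
          f'  (similarity {m["Similarity"]:.3f})')
    print("  ", answer_of(m["ID 1"], m["Q# 1"]))
    print("  ", answer_of(m["ID 2"], m["Q# 2"]))
```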
https://ltwork.net/2b-describe-how-the-pin-method-is-used-to-determine-the--1467410
# 2b. Describe how the pin method is used to determine the image of an object in the mirror.

###### Question:

2b. Describe how the pin method is used to determine the image of an object in the mirror.
https://www.nature.com/articles/s41598-021-99940-3
# A deep learning model for gastric diffuse-type adenocarcinoma classification in whole slide images

## Abstract

Gastric diffuse-type adenocarcinoma represents a disproportionately high percentage of cases of gastric cancers occurring in the young, and its relative incidence seems to be on the rise. Usually it affects the body of the stomach, and it presents a shorter duration and worse prognosis compared with the differentiated (intestinal) type adenocarcinoma. The main difficulty encountered in the differential diagnosis of gastric adenocarcinomas occurs with the diffuse-type. As the cancer cells of diffuse-type adenocarcinoma are often single and inconspicuous in a background of desmoplasia and inflammation, it can often be mistaken for a wide variety of non-neoplastic lesions, including gastritis or reactive endothelial cells seen in granulation tissue. In this study we trained deep learning models to classify gastric diffuse-type adenocarcinoma from WSIs. We evaluated the models on five test sets obtained from distinct sources, achieving receiver operating characteristic (ROC) areas under the curve (AUCs) in the range of 0.95–0.99. The highly promising results demonstrate the potential of AI-based computational pathology for aiding pathologists in their diagnostic workflow.

## Introduction

According to the global cancer statistics 20201, gastric cancer is among the leading causes of cancer-related deaths in the world, with an estimated 769,000 deaths, ranking fifth for incidence and fourth for mortality globally. Symptoms of gastric carcinoma tend to manifest only when it is at an advanced stage; often the first sign is the detection of nodal, hepatic, and pulmonary metastases. In countries with a high incidence of gastric cancer, especially Japan, the increased use of endoscopic biopsy and cytology has resulted in the identification of early stage cases, which has resulted in an increase in survival rates2,3,4,5.

Microscopically, nearly all gastric carcinomas are of the adenocarcinoma (ADC) type and are composed of foveolar, mucopeptic, intestinal columnar, and goblet cell types6. According to the Lauren classification7, gastric ADCs are separated into intestinal and diffuse types. The intestinal-type shows well-defined glandular structures with papillae, tubules, or even solid areas. By contrast, the diffuse-type consists of the poorly differentiated type and signet ring cell carcinoma (SRCC). Diffuse-type ADC scatters and infiltrates widely, and its cells are small, uniform, and cohesive. Often these cells exhibit an SRCC appearance, with the intracytoplasmic mucin pushing the nucleus of the neoplastic cells to the periphery. The amount of mucin present in these cells may be highly variable and difficult to appreciate in diffuse-type ADCs.

Diffuse-type ADCs are more challenging to diagnose than other gastric carcinomas such as the intestinal-type. Diffuse-type cells are often single and inconspicuous in a background of desmoplasia and inflammation, and they can often be mistaken for a variety of non-neoplastic lesions, including gastritis or reactive endothelial cells in granulation tissues.
Surgical pathologists are always on the lookout for signs of diffuse-type gastric adenocarcinoma when evaluating gastric biopsies.

Deep learning has found many successful applications in computational pathology in the past few years, for tasks such as tumour and mutation classification, cell segmentation, and outcome prediction for a variety of organs and diseases8,9,10,11,12,13,14,15,16,17,18,19,20,21. For the stomach in particular, Sharma et al.22 trained a model for carcinoma classification using a small training set of 11 WSIs, while Iizuka et al.21 trained a deep learning model using a large dataset of 4,036 WSIs to classify gastric biopsy specimens into adenocarcinoma, adenoma, and non-neoplastic.

In this paper, we trained deep learning models for the classification of diffuse-type ADC in endoscopic biopsy specimen whole slide images (WSIs). To do so, we used two approaches: one-stage and two-stage. With the one-stage approach, the model was trained to directly classify diffuse-type ADC. With the two-stage approach, we used the model of Iizuka et al.21 to first detect ADC, followed by a second-stage model that subclassifies the detected ADC cases into diffuse-type ADC vs other ADC. For both approaches, we used the partial transfer learning method23 to fine-tune the models. We obtained models with ROC AUCs in the range of 0.95–0.99 for the five independent test sets, demonstrating the potential of such methods for aiding pathologists in their workflows.

## Results

The aim of this study was to train a convolutional neural network (CNN) for the classification of diffuse-type ADC in biopsy WSIs. In order to apply a CNN to the large WSIs, we followed the commonly adopted approach of tiling the WSIs by extracting fixed-sized tiles over all the detected tissue regions (see the Methods section for more details). Overall, we trained four different models: (1) a two-stage method using the existing model of Iizuka et al.21 to first detect ADC, followed by a second model that detects diffuse-type ADC, both at $$\times$$ 10 magnification; (2) a one-stage method for direct diffuse-type ADC classification at magnification $$\times$$ 10 and a tile size of 224 $$\times$$ 224 px; (3) a one-stage method for direct diffuse-type ADC classification at magnification $$\times$$ 20 and a tile size of 224 $$\times$$ 224 px; and (4) a one-stage method for direct diffuse-type ADC classification at magnification $$\times$$ 20 and a tile size of 512 $$\times$$ 512 px. Figure 1 provides an overview of the training of a given model. At $$\times$$ 10 magnification 1 pixel corresponds to $$1\,\mu\text{m}$$, and at $$\times$$ 20, 1 pixel corresponds to $$0.5\,\mu\text{m}$$.

### Evaluation on five independent test sets from different sources

We evaluated our models on five test sets consisting of biopsy specimens, each originating from a distinct hospital. Table 3 breaks down the distribution of the WSIs in each test set. For each test set, we computed the ROC AUC for the WSI classification of diffuse-type ADC as well as the log loss, and we have summarised the results in Tables 1, 2 and Fig. 2. Figures 3, 4, and 5 show true positive, false positive, and false negative example heatmap outputs, respectively.

### Evaluation on surgical and frozen sections

In addition to the biopsy samples, we applied the model to a small number of surgical and frozen sections. Figures 6 and 7 show example output predictions on such cases. The model was capable of detecting diffuse-type ADC on such sections.
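As an aside on the evaluation metrics (this snippet is an illustrative addition, not the authors' code or data): both the ROC AUC and the log loss reported above are computed from one predicted probability per WSI, which with scikit-learn looks roughly like this:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, log_loss

# Toy, made-up values: one probability per WSI, plus ground-truth labels
y_true = np.array([1, 0, 1, 1, 0, 0])                 # 1 = diffuse-type ADC
y_prob = np.array([0.92, 0.10, 0.85, 0.40, 0.05, 0.30])

print(roc_auc_score(y_true, y_prob))  # ranking quality of the WSI scores
print(log_loss(y_true, y_prob))       # calibration-sensitive loss
```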
## Discussion

In this work, we trained models for the classification of gastric diffuse-type ADC from biopsy WSIs. We used the partial transfer learning approach with hard mining of false positives to train the models on a dataset obtained from a single hospital, and we evaluated them on five different test sets originating from different hospitals. Overall, we obtained high ROC AUCs in the range of 0.95–0.99. The best performing models were the one-stage model at $$\times$$ 20 magnification and 512 $$\times$$ 512 px tile size and the two-stage model at $$\times$$ 10 magnification and 224 $$\times$$ 224 px tile size. For the one-stage model, training at $$\times$$ 20 magnification led to an increase in performance, where the average ROC AUC increased from 0.87 to 0.97 for the five test sets. The increase in magnification was most likely essential in decreasing the false positive rate. Despite being at $$\times$$ 10 magnification, the two-stage model still performed well, potentially due to having been trained on a much larger dataset (n = 4,036) and the use of the RNN model, which aims at reducing the false positives.

The trained model was able to detect both poorly differentiated ADC and SRCC cells well (see Fig. 3 for a representative case). The majority of false positives occurred on gastritis cases, due to the similarity between diffuse-type ADC and inflammatory cells, especially plasma cells (see Fig. 4).

Diffuse-type gastric ADCs are composed of diffuse-type cohesive carcinoma and SRCCs24, and they show an aggressive biological behavior and poor prognosis25. In a previous report, patients with SRCC and diffuse-type differentiated ADC in advanced stages demonstrated significantly lower 10-year overall survival rates than patients with advanced differentiated-type ADCs26. The availability of a tool that can aid pathologists in the diagnosis of diffuse-type ADC could potentially accelerate their diagnostic workflow.

## Methods

### Clinical cases and pathological records

For the present retrospective study, a total of 2,929 HE (hematoxylin & eosin) stained histopathological specimens from endoscopic biopsy cases of human gastric epithelial lesions were collected from the surgical pathology files of five hospitals: International University of Health and Welfare, Mita Hospital (Tokyo), Kamachi Group Hospitals (Fukuoka), Haradoi Hospital (Fukuoka), and Nishi-Fukuoka Hospital (Fukuoka), after histopathological review of those specimens by surgical pathologists. The experimental protocol was approved by the ethical board of the International University of Health and Welfare (No. 19-Im-007), Kamachi Group Hospitals, Haradoi Hospital, and Nishi-Fukuoka Hospital. All research activities complied with all relevant ethical regulations and were performed in accordance with relevant guidelines and regulations in all the hospitals mentioned above. Informed consent to use histopathological samples and pathological diagnostic reports for research purposes had previously been obtained from all patients prior to the surgical procedures at all hospitals, and the opportunity for refusal to participate in research had been guaranteed in an opt-out manner. The test cases were selected randomly, so the obtained ratios reflected a real clinical scenario as much as possible. All WSIs were scanned at a magnification of $$\times$$ 20.

### Dataset and annotations

The pathologists excluded cases that were inappropriate or of poor quality for this study.
The diagnosis of each WSI was verified by at least two pathologists. Table 3 breaks down the distribution of the datasets into training, validation, and test sets. Hospitals which provided histopathological cases were anonymised (e.g., Hospital 1–5). The training and test sets were solely composed of WSIs of endoscopic biopsy specimens. The patients' pathological records were used to extract the WSIs' pathological diagnoses. 353 WSIs from the training and validation sets had a diffuse-type ADC diagnosis. They were manually annotated by a group of two surgical pathologists who perform routine histopathological diagnoses. The pathologists carried out detailed cellular-level annotations by free-hand drawing around diffuse-type ADC cells that corresponded to poorly differentiated ADC or SRCC. The other ADC (n = 571) and non-neoplastic (n = 1116) subsets of the training and validation sets were not annotated, and the entire tissue areas within the WSIs were used. Each annotated WSI was observed by at least two pathologists, with the final checking and verification performed by a senior pathologist.

### Deep learning models

For the detection of diffuse-type ADC, we used two approaches: one-stage and two-stage. The one-stage approach consisted in training the CNN as a binary classifier to directly classify diffuse-type ADC. The two-stage approach consisted in combining the output from an existing model21 that differentiates between ADC, adenoma, and non-neoplastic, followed by a model trained to differentiate between diffuse-type ADC and other ADC. We trained all the models using the partial fine-tuning approach23. This method simply consists in using the weights of an existing pre-trained model and only fine-tuning the affine parameters of the batch normalisation layers and the final classification layer. We used the EfficientNetB1 model27, starting with pre-trained weights on ImageNet. The total number of trainable parameters was only 63,329.

To apply the CNN on the WSIs, we performed slide tiling by extracting square tiles from tissue regions. On a given WSI, we detected the tissue regions and eliminated most of the white background by performing a thresholding on a grayscale version of the WSI using Otsu's method28. During prediction, we performed the tiling in a sliding window fashion, using a fixed-size stride, to obtain predictions for all the tissue regions. During training, we initially performed random balanced sampling of tiles from the tissue regions, where we tried to maintain an equal balance of each label in the training batch. To do so, we placed the WSIs in a shuffled queue such that we looped over the labels in succession (i.e. we alternated between picking a WSI with a positive label and a negative label). Once a WSI was selected, we randomly sampled $$\frac{\text {batch size}}{\text {num labels}}$$ tiles from each WSI to form a balanced batch. To maintain the balance at the WSI level, we over-sampled from the WSIs to ensure the model trained on tiles from all of the WSIs in each epoch. We then switched to hard mining of tiles once there was no longer any improvement on the validation set after two epochs. To perform the hard mining, we alternated between training and inference. During inference, the CNN was applied in a sliding window fashion on all of the tissue regions in the WSI, and we then selected the k tiles with the highest probability for being positive if the WSI was negative and the k tiles with the lowest probability for being positive if the WSI was positive.
This step effectively selects the hard examples which the model is struggling with. The selected tiles were placed in a training subset, and once that subset contained N tiles, the training was run. This method is similar to the weakly supervised training method described by Kanavati et al.29. We used $$k = 16$$, $$N=256$$, and a batch size of 32.

From the WSIs with diffuse-type ADC, we sampled tiles based on the free-hand annotations. If the WSI contained annotations for cancer cells, then we only sampled tiles from the annotated regions as follows: if the annotation was smaller than the tile size, then we sampled the tile at the centre of the annotation region; otherwise, if the annotation was larger than the tile size, then we subdivided the annotated regions into overlapping grids and sampled tiles. Most of the annotations were smaller than the tile size. On the other hand, if the WSI did not contain diffuse-type ADC, then we freely sampled from the entire tissue regions.

The first-stage model21 is based on the InceptionV3 architecture30 followed by a single-layer recurrent neural network. It was trained with an input tile size of $$512\times 512$$ px on WSIs with a magnification of $$\times$$ 10. As the second-stage model was only trained on ADC, we used the product of the probability outputs to compute the probability that a given WSI has diffuse-type ADC:

$$P({\text{diffuse-type ADC}}) = P_2({\text{diffuse-type ADC}} \mid {\text{ADC}}) \times P_1({\text{ADC}}),$$

where $$P_1(\text{ADC})$$ is the probability output from the first-stage model and $$P_2(\text{diffuse-type ADC} \mid \text{ADC})$$ is the probability from the second-stage model. To perform inference on the WSI (i.e. obtain a WSI prediction), we applied the model in a sliding window fashion on all the tissue regions, and we then took the maximum probability of the tiles and used that as the WSI probability.

We trained the models with the Adam optimisation algorithm31 with the following parameters: $$\beta_1=0.9$$, $$\beta_2=0.999$$, and a batch size of 32. We used a starting learning rate of 0.001 when training the model from scratch, and 0.0001 when fine-tuning. We applied a learning rate decay of 0.95 every 2 epochs. We used the categorical cross entropy loss function. We used early stopping by tracking the performance of the model on a validation set; training was stopped automatically when there was no further improvement on the validation loss for 10 epochs. The model with the lowest validation loss was chosen as the final model.

### Software, hardware, and statistical analysis

We implemented the models using TensorFlow32. We calculated the AUCs in Python using the scikit-learn package33 and performed the plotting using matplotlib34. We performed image processing, such as the thresholding, with scikit-image35. We computed the 95% CI estimates using the bootstrap method36 with 1000 iterations. We used openslide37 to perform real-time slide tiling. We trained the models on a single g4dn.2xlarge instance on Amazon AWS, which has an NVIDIA T4 Tensor Core GPU, 8 CPUs, and 32 GB of RAM.
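For illustration, the two-stage probability combination and the max-pooling over tiles described above can be written in a few lines of numpy (a sketch added here; the function name and array interface are illustrative, not the authors' code):

```python
import numpy as np

def wsi_probability(p1_tiles: np.ndarray, p2_tiles: np.ndarray) -> float:
    """Combine per-tile two-stage outputs into one WSI-level score.

    p1_tiles: stage-1 probabilities P(ADC) for each tile.
    p2_tiles: stage-2 probabilities P(diffuse-type ADC | ADC) for the same tiles.
    """
    p_diffuse = p2_tiles * p1_tiles  # per tile: P(diffuse) = P2 * P1
    return float(p_diffuse.max())    # WSI score = max over all tiles

# Toy example with three tiles
p1 = np.array([0.10, 0.95, 0.80])
p2 = np.array([0.50, 0.90, 0.20])
print(wsi_probability(p1, p2))  # 0.855, driven by the second tile
```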
## Data availability

The data that support the findings of this study are available from International University of Health and Welfare, Mita Hospital (Tokyo), Kamachi Group Hospitals (Fukuoka), Haradoi Hospital (Fukuoka), and Nishi-Fukuoka Hospital (Fukuoka), but restrictions apply to the availability of these data, which were used under a data use agreement made according to the Ethical Guidelines for Medical and Health Research Involving Human Subjects as set by the Japanese Ministry of Health, Labour and Welfare, and so are not publicly available. However, the data are available from the authors upon reasonable request for private viewing, with permission from the corresponding five medical institutions, within the terms of the data use agreement, and if compliant with the ethical and legal requirements as stipulated by the Japanese Ministry of Health, Labour and Welfare. Access to the data can also be obtained by entering into a similar data sharing agreement with the medical institutions.

## Code availability

To train the classification model in this study we used the publicly available TensorFlow training script available at https://github.com/tensorflow/models/tree/master/official/vision/image_classification.

## References

1. Sung, H. et al. Global cancer statistics 2020: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA: A Cancer J. Clin. https://doi.org/10.3322/caac.21660 (2020).
2. Halvorsen, R. A. Jr., Yee, J. & McCormick, V. D. Diagnosis and staging of gastric cancer. Semin. Oncol. 23, 325–335 (1996).
3. Iishi, H., Yamamoto, R., Tatsuta, M. & Okuda, S. Evaluation of fine-needle aspiration biopsy under direct vision gastrofiberscopy in diagnosis of diffusely infiltrative carcinoma of the stomach. Cancer 57, 1365–1369 (1986).
4. Nagata, T., Ikeda, M. & Nakayama, F. Changing state of gastric cancer in Japan. Am. J. Surg. 145, 226–233. https://doi.org/10.1016/0002-9610(83)90068-5 (1983).
5. Nashimoto, A. et al. Gastric cancer treated in 2002 in Japan: 2009 annual report of the JGCA nationwide registry. Gastric Cancer 16, 1–27 (2013).
6. Fiocca, R. et al. Characterization of four main cell types in gastric cancer: Foveolar, mucopeptic, intestinal columnar and goblet cells. Pathol. Res. Pract. 182, 308–325. https://doi.org/10.1016/s0344-0338(87)80066-3 (1987).
7. Laurén, P. The two histological main types of gastric carcinoma: Diffuse and so-called intestinal-type carcinoma. Acta Pathologica Microbiologica Scandinavica 64, 31–49. https://doi.org/10.1111/apm.1965.64.1.31 (1965).
8. Yu, K.-H. et al. Predicting non-small cell lung cancer prognosis by fully automated microscopic pathology image features. Nat. Commun. 7, 12474 (2016).
9. Hou, L. et al. Patch-based convolutional neural network for whole slide tissue image classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2424–2433 (2016).
10. Madabhushi, A. & Lee, G. Image analysis and machine learning in digital pathology: Challenges and opportunities. Med. Image Anal. 33, 170–175 (2016).
11. Litjens, G. et al. Deep learning as a tool for increased accuracy and efficiency of histopathological diagnosis. Sci. Rep. 6, 26286 (2016).
12. Kraus, O. Z., Ba, J. L. & Frey, B. J. Classifying and segmenting microscopy images with deep multiple instance learning. Bioinformatics 32, i52–i59 (2016).
13. Korbar, B. et al. Deep learning for classification of colorectal polyps on whole-slide images. J. Pathol.
Inform. 8, 30 (2017).
14. Luo, X. et al. Comprehensive computational pathological image analysis predicts lung cancer prognosis. J. Thorac. Oncol. 12, 501–509 (2017).
15. Coudray, N. et al. Classification and mutation prediction from non-small cell lung cancer histopathology images using deep learning. Nat. Med. 24, 1559–1567 (2018).
16. Wei, J. W. et al. Pathologist-level classification of histologic patterns on resected lung adenocarcinoma slides with deep neural networks. Sci. Rep. 9, 1–8 (2019).
17. Gertych, A. et al. Convolutional neural networks can accurately distinguish four histologic growth patterns of lung adenocarcinoma in digital slides. Sci. Rep. 9, 1483 (2019).
18. Bejnordi, B. E. et al. Diagnostic assessment of deep learning algorithms for detection of lymph node metastases in women with breast cancer. JAMA 318, 2199–2210 (2017).
19. Saltz, J. et al. Spatial organization and molecular correlation of tumor-infiltrating lymphocytes using deep learning on pathology images. Cell Rep. 23, 181–193 (2018).
20. Campanella, G. et al. Clinical-grade computational pathology using weakly supervised deep learning on whole slide images. Nat. Med. 25, 1301–1309 (2019).
21. Iizuka, O. et al. Deep learning models for histopathological classification of gastric and colonic epithelial tumours. Sci. Rep. 10, 1–11 (2020).
22. Sharma, H., Zerbe, N., Klempert, I., Hellwich, O. & Hufnagl, P. Deep convolutional neural networks for automatic classification of gastric carcinoma using whole slide images in digital histopathology. Comput. Med. Imaging Graph. 61, 2–13 (2017).
23. Kanavati, F. & Tsuneki, M. Partial transfusion: On the expressive influence of trainable batch norm parameters for transfer learning. arXiv preprint arXiv:2102.05543 (2021).
24. Hu, B. et al. Gastric cancer: Classification, histology and application of molecular pathology. J. Gastrointest. Oncol. 3, 251 (2012).
25. Lee, J. Y. et al. The characteristics and prognosis of diffuse-type early gastric cancer diagnosed during health check-ups. Gut Liver 11, 807–812. https://doi.org/10.5009/gnl17033 (2017).
26. Chon, H. J. et al. Differential prognostic implications of gastric signet ring cell carcinoma. Ann. Surg. 265, 946–953. https://doi.org/10.1097/sla.0000000000001793 (2017).
27. Tan, M. & Le, Q. EfficientNet: Rethinking model scaling for convolutional neural networks. In International Conference on Machine Learning 6105–6114 (PMLR, 2019).
28. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 9, 62–66 (1979).
29. Kanavati, F. et al. Weakly-supervised learning for lung carcinoma classification using deep learning. Sci. Rep. 10, 1–11 (2020).
30. Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J. & Wojna, Z. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2818–2826 (2016).
31. Kingma, D. P. & Ba, J. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014).
32. Abadi, M. et al. TensorFlow: Large-scale machine learning on heterogeneous systems (2015). Software available from tensorflow.org.
33. Pedregosa, F. et al. Scikit-learn: Machine learning in Python. J. Mach. Learn. Res. 12, 2825–2830 (2011).
34. Hunter, J. D. Matplotlib: A 2D graphics environment. Comput. Sci. Eng. 9, 90–95. https://doi.org/10.1109/MCSE.2007.55 (2007).
35. van der Walt, S. et al.
Scikit-image: Image processing in Python. PeerJ 2, e453. https://doi.org/10.7717/peerj.453 (2014).
36. Efron, B. & Tibshirani, R. J. An Introduction to the Bootstrap (CRC Press, 1994).
37. Goode, A., Gilbert, B., Harkes, J., Jukic, D. & Satyanarayanan, M. Openslide: A vendor-neutral software foundation for digital pathology. J. Pathol. Inform. 4, 27 (2013).

## Acknowledgements

We are grateful for the support provided by Professors Takayuki Shiomi & Ichiro Mori at the Department of Pathology, Faculty of Medicine, International University of Health and Welfare; Dr. Ryosuke Matsuoka at the Diagnostic Pathology Center, International University of Health and Welfare, Mita Hospital; and pathologists at Kamachi Group Hospitals (Fukuoka), Haradoi Hospital (Fukuoka), and Nishi-Fukuoka Hospital (Fukuoka). We thank the pathologists who have been engaged in the annotation, reviewing of cases, and pathological discussion for this study.

## Author information

### Contributions

F.K. and M.T. contributed equally to this work; F.K. and M.T. designed the studies, performed experiments, analyzed the data, and wrote the manuscript; M.T. supervised the project. All authors reviewed the manuscript.

### Corresponding author

Correspondence to Masayuki Tsuneki.

## Ethics declarations

### Competing interests

F.K. and M.T. are employees of Medmain Inc.

### Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Rights and permissions

Reprints and Permissions

Kanavati, F., Tsuneki, M. A deep learning model for gastric diffuse-type adenocarcinoma classification in whole slide images. Sci Rep 11, 20486 (2021). https://doi.org/10.1038/s41598-021-99940-3
https://math.stackexchange.com/questions/2283173/is-zero-an-odd-number
# Is zero an odd number?

I have checked the questions on this topic, but every time the answer is that the parity of zero is even because $0\times2=0$, and it is between two odd numbers, $-1,1$. My question is, $0\times\text{(any odd number)}$ is also equal to zero, making the odd number a factor of zero, and thus $0$ could also be said to be odd. Also, the logic that every even number lies between two odd numbers may be an exception for zero, as such exceptions occur frequently in Number Theory. If you think that this question does not meet the standards of this site, please comment instead of downvoting. Thank you.

• I don't understand your doubt. $3$ is odd, $6$ is even and $3$ is a factor of $6$. It happens all the time, really. – user228113 May 16, 2017 at 8:54
• According to your logic, 3 (an odd number) is also a factor of 6 (an even number), so 6 can also be said to be odd. This is of course not the case. While odd numbers only have odd divisors, it does not mean that only odd numbers have odd divisors. May 16, 2017 at 8:55
• I think you have a very odd approach to thinking about the parity of a number. Whether the number has odd factors has nothing to do with it. Solution? Just take the number mod 2; 0 mod 2 = 0, making 0 an even number. May 16, 2017 at 8:58
• The odd number $1$ is also a factor of $2$, so by your logic, is $2$ also odd? – 5xum May 16, 2017 at 9:00

An even number, say $k$, is a number where $$k\mod 2=0$$ Therefore, we set $k=0$ and note that $$0\mod 2=0$$ and therefore $0$ is even. We could also say that we know $1$ is odd, and therefore $$1\mod 2=1$$ We can note that $0=1-1$ and therefore \begin{align}0\mod 2&\equiv 1-1\mod 2\\ &\equiv (1\mod 2) - (1\mod 2)\\ &\equiv 1-1\\ &\equiv 0\end{align} There is also a whole Wikipedia page dedicated to this problem!

An even number times an odd number is always even. For example, $2\cdot 3 = 6$. $3$ is a factor of $6$, but $6$ is still even. Being a multiple of an odd number doesn't make a number odd. It's simply true by definition that $k\in \mathbb{Z}$ is even if there exists $n\in \mathbb{Z}$ such that $k=2n$. This applies to $0$, and so $0$ is an even number.

You've got things backwards. If $a$ is a factor of $b$ and $b$ is odd, then $a$ is odd (e.g. since $9$ is odd and $3$ divides $9$, we know that $3$ is odd - although this is a bit of a silly way to find that out!). However, odd numbers can be factors of even numbers without a problem. More to the point, by definition a number is even if it is of the form $2k$ for some integer $k$. Since $0=2\cdot 0$, this means $0$ is even.
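The mod-2 test used in the answers is easy to check directly; here is a tiny Python sketch (ours, not from the thread) confirming that zero satisfies the definition of an even number:

```python
def is_even(k: int) -> bool:
    # k is even exactly when k = 2n for some integer n, i.e. k mod 2 == 0
    return k % 2 == 0

assert is_even(0)       # 0 = 2 * 0, so the definition classifies zero as even
assert is_even(6)       # 6 has the odd factor 3, yet 6 is even
assert not is_even(-7)
print([k for k in range(-4, 5) if is_even(k)])   # [-4, -2, 0, 2, 4]
```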
2022-05-25 00:12:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8835524320602417, "perplexity": 168.3959921664261}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662577757.82/warc/CC-MAIN-20220524233716-20220525023716-00159.warc.gz"}
https://jaantollander.com/post/exploring-the-pointwise-convergence-of-legendre-series-for-piecewise-analytic-functions/
# Exploring the Pointwise Convergence of Legendre Series for Piecewise Analytic Functions

## Introduction

In this article, we explore the behavior of the pointwise convergence of the Legendre series for piecewise analytic functions using numerical methods. The article covers the mathematics required for forming and computing the series, as well as pseudocode for some of the non-trivial algorithms. It extends the research in the papers by Babuska and Hakula 1 2 by providing evidence for the conjectures about the Legendre series at a large number of points in the domain and up to very high degree series approximations. We present the numerical results as figures and provide explanations. We also provide the Python code for computing and recreating the results in the LegendreSeries repository.

## Legendre Polynomials

Legendre polynomials are a system of complete and orthogonal polynomials defined over the domain $Ω=[-1, 1]$, an interval whose edges are $-1$ and $1$, by the recursive formula

$$P_{0}(x) = 1$$

$$P_{1}(x) = x$$

$$P_{n}(x) = \frac{2n-1}{n} x P_{n-1}(x) - \frac{n-1}{n} P_{n-2}(x), \quad n≥2. \tag{1} \label{recursive-formula}$$

The recursive definition enables efficient computation of numerical values of high-degree Legendre polynomials at specific points in the domain. Legendre polynomials also have important properties which are used for forming the Legendre series. The terminal values, i.e. the values at the edges, of Legendre polynomials can be derived from the recursive formula

$$P_{n}(1) = 1$$

$$P_{n}(-1) = (-1)^n. \tag{2} \label{terminal-values}$$

The symmetry property also follows from the recursive formula

$$P_n(-x) = (-1)^n P_n(x). \tag{3} \label{symmetry}$$

The inner product is denoted by the angle brackets

$$⟨P_m(x), P_n(x)⟩_{2} = \int_{-1}^{1} P_m(x) P_n(x) dx = \frac{2}{2n + 1} δ_{mn}, \tag{4} \label{inner-product}$$

where $δ_{mn}$ is the Kronecker delta. Differentiation of Legendre polynomials can be expressed in terms of Legendre polynomials themselves as

$$(2n+1) P_n(x) = \frac{d}{dx} \left[ P_{n+1}(x) - P_{n-1}(x) \right]. \tag{5} \label{differentiation-rule}$$

The differentiation rule $\eqref{differentiation-rule}$ can also be formed into the integration rule

$$∫P_n(x)dx = \frac{1}{2n+1} (P_{n+1}(x) - P_{n-1}(x)). \tag{6} \label{integration-rule}$$

## Legendre Series

The Legendre series is a series expansion formed using Legendre polynomials. The Legendre series of a function $f$ is defined as

$$f(x) = ∑_{n=0}^{∞}C_{n}P_{n}(x), \tag{7} \label{legendre-series}$$

where $C_n∈ℝ$ are the Legendre series coefficients and $P_n(x)$ are the Legendre polynomials of degree $n$. The formula for the coefficients is

$$C_n = \frac{2n+1}{2} \int_{-1}^{1} f(x) P_{n}(x)dx. \tag{8} \label{legendre-series-coefficients}$$

The Legendre series coefficients can be derived using the Legendre series formula $\eqref{legendre-series}$ and the inner product $\eqref{inner-product}$. Proof:

$$f(x) = ∑_{m=0}^{∞}C_{m}P_{m}(x)$$

$$P_{n}(x) f(x) = P_{n}(x) \sum_{m=0}^{\infty} C_{m}P_{m}(x)$$

$$∫_{-1}^{1}P_{n}(x) f(x)dx = ∫_{-1}^{1} P_{n}(x) \sum_{m=0}^{\infty} C_{m}P_{m}(x)dx$$

$$∫_{-1}^{1}P_{n}(x) f(x)dx = \sum_{m=0}^{\infty} C_{m}∫_{-1}^{1} P_{n}(x) P_{m}(x)dx$$

$$∫_{-1}^{1}P_{n}(x) f(x)dx = C_{n}∫_{-1}^{1} P_{n}(x) P_{n}(x)dx$$

$$C_{n} = \frac{\langle f,P_{n}\rangle_{2}}{\|P_{n}\|_{2}^{2}}$$

$$C_n = \frac{2n+1}{2} \int_{-1}^{1} f(x) P_{n}(x)dx.$$
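As a concrete illustration of the recurrence $\eqref{recursive-formula}$ and the terminal values $\eqref{terminal-values}$, here is a minimal NumPy sketch. The function names are ours for illustration and are not necessarily those used in the LegendreSeries repository:

```python
import numpy as np

def legendre_values(n_max, x):
    """Evaluate P_0(x), ..., P_{n_max}(x) via the three-term recurrence (1)."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    P = np.empty((n_max + 1, x.size))
    P[0] = 1.0                       # P_0(x) = 1
    if n_max >= 1:
        P[1] = x                     # P_1(x) = x
    for n in range(2, n_max + 1):    # recurrence (1)
        P[n] = ((2 * n - 1) / n) * x * P[n - 1] - ((n - 1) / n) * P[n - 2]
    return P

# sanity checks against the terminal values (2)
P = legendre_values(5, [-1.0, -0.3, 0.3, 1.0])
assert np.allclose(P[:, 3], 1.0)                             # P_n(1) = 1
assert np.allclose(P[:, 0], [(-1) ** n for n in range(6)])   # P_n(-1) = (-1)^n
```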
The partial sum of the series expansion gives us an approximation of the function $f$. It is obtained by limiting the series to a finite number of terms $k$

$$f_k(x)=\sum_{n=0}^{k}C_{n}P_{n}(x). \tag{9} \label{partial-sum}$$

The approximation error $ε_k$ is obtained by subtracting the partial sum $f_k$ from the real value of the function $f$

$$ε_k(x) = f(x)-f_k(x). \tag{10} \label{approximation-error}$$

The actual analysis of the approximation errors will use the absolute value of the error $|ε_k(x)|.$

## Piecewise Analytic Functions

The motivation for studying a series expansion for piecewise analytic functions is to understand the behavior of a continuous approximation of a function with non-continuous or non-differentiable points. We will study the series expansion in two special cases of piecewise analytic functions, the step function and the V function. The results obtained from studying these two special cases generalize to all piecewise analytic functions, but this is not proven here. A piecewise analytic function $f(x):Ω → ℝ$ is defined as

$$f(x) = \sum_{i=1}^{m} c_i | x-a_i |^{b_i} + v(x), \tag{11} \label{piecewise-analytic-function}$$

where $a_i∈(-1,1)$ are the singularities, $b_i∈ℕ_0$ are the degrees, $c_i∈ℝ$ are the scaling coefficients and $v(x)$ is an analytic function.

The V function is a piecewise analytic function of degree $b=1$, defined as

$$u(x) = c ⋅ |x - a| + (α + βx). \tag{12} \label{v-function}$$

The step function is a piecewise analytic function of degree $b=0$, obtained as the derivative of the V function

$$\frac{d}{dx} u(x) = u'(x) = c ⋅ \operatorname{sign}{(x - a)} + β. \tag{13} \label{step-function}$$

As can be seen, the V function is the absolute value scaled by $c$, translated by $α$ and rotated by $βx$, and the step function is the sign function scaled by $c$ and translated by $β$. We'll also note that the derivative of the step function is zero

$$\frac{d}{dx} u'(x) = u''(x) = 0. \tag{14} \label{step-function-derivative}$$

This will be used when forming the Legendre series.

## Legendre Series of Step Function

The coefficients of the Legendre series of the step function will be referred to as the step function coefficients. A formula for them in terms of Legendre polynomials at the singularity $a$ can be obtained by substituting the step function $\eqref{step-function}$ in place of the function $f$ in the Legendre series coefficient formula $\eqref{legendre-series-coefficients}$

$$A_n = \frac{2n+1}{2} \int_{-1}^{1} u'(x) P_{n}(x)dx \tag{15} \label{step-function-coefficients}$$

$$A_0 = β-ac$$

$$A_n = c(P_{n-1}(a)-P_{n+1}(a)),\quad n≥1$$

Proof: The coefficient for degree $n=0$ is obtained from direct integration

$$A_{0} = \frac{1}{2} \int_{-1}^{1} u^{\prime}(x) dx = \beta + \frac{1}{2} c \int_{-1}^{1} \operatorname{sign}{\left(x - a \right)}dx = \beta + \frac{1}{2} c \left(\int_{-1}^{a} (-1)\, dx + \int_{a}^{1} 1\, dx\right) = \beta - ac. \tag{16} \label{step-function-coefficients-0}$$

The coefficients for degrees $n≥1$ are obtained by using integration by parts, the integration rule $\eqref{integration-rule}$, the terminal values $\eqref{terminal-values}$ and the derivative of the step function $\eqref{step-function-derivative}$

$$A_{n} = \frac{2n+1}{2} \int_{-1}^{1} u^{\prime}(x) P_{n}(x)dx, \quad n≥1$$

$$= \frac{2n+1}{2} \left( \left[u^\prime(x) \int P_n(x) dx\right]_{-1}^{1} - \int_{-1}^{1} u^{\prime\prime}(x) \left(\int P_n(x) dx\right) dx \right)$$

$$= \frac{2n+1}{2} \left[ \frac{1}{2n+1} (P_{n+1}(x) - P_{n-1}(x))\, u^\prime(x) \right]_{-1}^{1}$$

$$= \frac{1}{2} \left[ (P_{n+1}(x) - P_{n-1}(x))\, u^\prime(x) \right]_{-1}^{1}$$

$$= \frac{1}{2} \left[ (P_{n+1}(x) - P_{n-1}(x))\, u^\prime(x) \right]_{-1}^{a} + \frac{1}{2} \left[ (P_{n+1}(x) - P_{n-1}(x))\, u^\prime(x) \right]_{a}^{1}$$

$$= \frac{1}{2} \left[ (P_{n+1}(x) - P_{n-1}(x)) \cdot (-c+\beta) \right]_{-1}^{a} + \frac{1}{2} \left[ (P_{n+1}(x) - P_{n-1}(x)) \cdot (c+\beta) \right]_{a}^{1}$$

$$= c \cdot \left(P_{n-1}(a) - P_{n+1}(a)\right). \tag{17} \label{step-function-coefficients-n}$$
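Continuing the sketch above, the closed forms (16) and (17) for the step function coefficients, together with the partial sum (9), translate directly into code. The names are again illustrative and build on `legendre_values` from the previous snippet:

```python
def step_coefficients(n_max, a, c, beta):
    """Step function coefficients A_0, ..., A_{n_max} from (16)-(17)."""
    P = legendre_values(n_max + 1, a)[:, 0]   # P_0(a), ..., P_{n_max+1}(a)
    A = np.empty(n_max + 1)
    A[0] = beta - a * c                        # equation (16)
    n = np.arange(1, n_max + 1)
    A[1:] = c * (P[n - 1] - P[n + 1])          # equation (17)
    return A

def partial_sum(coeffs, x):
    """Partial sum f_k(x) = sum_{n=0}^{k} C_n P_n(x) from (9)."""
    return coeffs @ legendre_values(len(coeffs) - 1, x)
```

The absolute approximation error (10) at a point `x` is then simply `abs(f(x) - partial_sum(A, x))`.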
## Legendre Series of V Function

The coefficients of the Legendre series of the V function will be referred to as the V function coefficients. A formula for them in terms of the step function coefficients can be obtained by substituting the V function $\eqref{v-function}$ in place of the function $f$ in the Legendre series coefficient formula $\eqref{legendre-series-coefficients}$

$$B_n = \frac{2n+1}{2} \int_{-1}^{1} u(x) P_{n}(x)dx \tag{18} \label{v-function-coefficients}$$

$$B_0 = \frac{a^{2} c}{2} + α + \frac{c}{2}$$

$$B_n=-\frac{A_{n+1}}{2n +3} + \frac{A_{n-1}}{2n - 1},\quad n≥1$$

Proof: The coefficient for degree $n=0$ is obtained from direct integration

$$B_{0} = \frac{1}{2} ∫_{-1}^{1} u(x) dx = \frac{a^{2} c}{2} + α + \frac{c}{2}. \tag{19} \label{v-function-coefficients-0}$$

The coefficients for degrees $n≥1$ are obtained by using integration by parts, the integration rule $\eqref{integration-rule}$ and substituting the step function coefficient formula $\eqref{step-function-coefficients}$

$$B_{n} = \frac{2n+1}{2} \int_{-1}^{1} u(x) P_{n}(x) dx, \quad n \geq 1$$

$$= \frac{2n + 1}{2} \left( \left[ u(x) \int P_n(x) dx \right]_{-1}^{1} - \int_{-1}^{1} u'(x) \left(\int P_{n}(x) dx\right) dx \right)$$

$$= -\frac{1}{2} \int_{-1}^{1} \left[ P_{n+1}(x) - P_{n-1}(x) \right] u'(x) dx$$

$$= -\frac{1}{2} \int_{-1}^{1} P_{n+1}(x) u'(x) dx + \frac{1}{2} \int_{-1}^{1} P_{n-1}(x) u'(x) dx$$

$$= -\frac{A_{n+1}}{2n +3} + \frac{A_{n-1}}{2n - 1}. \tag{20} \label{v-function-coefficients-n}$$
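The V function coefficients (19) and (20) reuse the step function coefficients; the sketch below (illustrative names, default parameters matching the $a=0.5$ experiments later in the article) also evaluates the absolute approximation error (10):

```python
def v_coefficients(n_max, a, c, alpha, beta):
    """V function coefficients B_0, ..., B_{n_max} from (19)-(20)."""
    A = step_coefficients(n_max + 1, a, c, beta)   # A needed up to degree n_max + 1
    B = np.empty(n_max + 1)
    B[0] = a**2 * c / 2 + alpha + c / 2            # equation (19)
    n = np.arange(1, n_max + 1)
    B[1:] = -A[n + 1] / (2 * n + 3) + A[n - 1] / (2 * n - 1)   # equation (20)
    return B

def v_function(x, a, c, alpha, beta):
    """u(x) = c|x - a| + alpha + beta*x from (12)."""
    x = np.asarray(x, dtype=float)
    return c * np.abs(x - a) + alpha + beta * x

def v_error(k, x, a=0.5, c=1.0, alpha=0.0, beta=0.0):
    """Absolute approximation error (10) of the degree-k partial sum."""
    B = v_coefficients(k, a, c, alpha, beta)
    return np.abs(v_function(x, a, c, alpha, beta) - partial_sum(B, x))
```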
## Pointwise Convergence

The series is said to converge pointwise if the partial sum $f_k(x)$ tends to the value $f(x)$ as the degree $k$ approaches infinity at every $x$ in the domain $Ω$

$$\lim_{k→∞} f_k(x)=f(x). \tag{21} \label{pointwise_convergence}$$

Equivalently, in terms of the approximation error, which approaches zero,

$$\lim_{k→∞} ε_k(x)=0. \tag{22} \label{approximation_error_convergence}$$

Numerical exploration of the pointwise convergence examines the behavior of the approximation error $ε_k(x)$ as a function of the degree $k$, up to a finite degree $n$, at some finite number of points $x$ selected from the domain $Ω.$ The numerical approach answers how the series converges, in contrast to the analytical approach, which answers whether the series converges.

## Convergence Line

We will perform the exploration by computing a convergence line which contains two parameters:

1. The convergence rate, which is the maximal rate at which the error approaches zero as the degree grows.
2. The convergence distance, which is a value proportional to the degree at which the series reaches the maximal rate of convergence.

The definition of the convergence line requires a definition of a line. A line between two points $(x_0,y_0)$ and $(x_1,y_1)$ where $x_0≠x_1$ is defined as

$$y=(x-x_{0}){\frac {y_{1}-y_{0}}{x_{1}-x_{0}}}+y_{0}. \tag{23} \label{line1}$$

An alternative form of this formula is

$$y=\tilde{α}x+\tilde{β}, \tag{24} \label{line2}$$

where the coefficient $\tilde{α}=\frac{y_{1}-y_{0}}{x_{1}-x_{0}}$ is referred to as the slope and the coefficient $\tilde{β}=y_{0}-x_{0}\tilde{α}$ is referred to as the intercept.

The pseudocode for the convergence line algorithm is then as follows:

Input: A sequence of strictly monotonically increasing positive real numbers $(x_1,x_2,…,x_n)$, a sequence of positive real numbers $(y_1,y_2,…,y_n)$ that converges towards zero (decreasing), and a limit for the smallest value of the slope $\tilde{α}_{min}$.

Output: A convergence line defined by the values $\tilde{α}$ and $\tilde{β}$, which minimizes $\tilde{α}$ and then minimizes $\tilde{β}$ such that $\tilde{α}≥\tilde{α}_{min}$ and $y_i≤\tilde{α}x_i+\tilde{β}$ for all $i=1,2,…,n.$

$\operatorname{Find-Convergence-Line}((x_1,x_2,…,x_n),(y_1,y_2,…,y_n),\tilde{α}_{min})$

1. $i=1$
2. $j=i$
3. while $j≠n$
4. ….. $k=\underset{k>j}{\operatorname{argmax}}\left(\dfrac{y_k-y_j}{x_k-x_j}\right)$
5. ….. $\tilde{α} = \dfrac{y_k-y_j}{x_k-x_j}$
6. ….. if $\tilde{α} < \tilde{α}_{min}$
7. ….. ….. break
8. ….. $i=j$
9. ….. $j=k$
10. $\tilde{α}=\dfrac{y_j-y_i}{x_j-x_i}$
11. $\tilde{β}=y_i-x_i \tilde{α}$
12. return $(\tilde{α}, \tilde{β})$

The approximation errors generated by the Legendre series have linear convergence on the logarithmic scale, and therefore we need to convert the values into a logarithmic scale, find the convergence line, and then convert the line into the exponential form $βx^{-α}$.

Input: The degrees $(k_1,k_2,…,k_n)$ corresponding to the approximation errors $(ε_1,ε_2,…,ε_n)$ generated by a series, and the limit $\tilde{α}_{min}$.

Output: The coefficient $α$ corresponds to the rate of convergence and the coefficient $β$ corresponds to the distance of convergence, such that $ε_i≤βk_i^{-α}$ for all $i=1,2,…,n.$

$\operatorname{Find-Convergence-Line-Log}((k_1,k_2,…,k_n),(ε_1,ε_2,…,ε_n),\tilde{α}_{min})$

1. $x=(\log_{10}(k_1),\log_{10}(k_2),…,\log_{10}(k_n))$
2. $y=(\log_{10}(|ε_1|),\log_{10}(|ε_2|),…,\log_{10}(|ε_n|))$
3. $(\tilde{α},\tilde{β})=\operatorname{Find-Convergence-Line}(x,y,\tilde{α}_{min})$
4. $α=-\tilde{α}$
5. $β=10^{\tilde{β}}$
6. return $(α,β)$
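A direct Python transcription of the two procedures might look as follows. This is a sketch under the stated input assumptions (0-based indexing; it assumes at least one slope above $\tilde{α}_{min}$ is found before the loop breaks):

```python
def find_convergence_line(x, y, slope_min):
    """Greedy walk to the point of maximal slope, as in Find-Convergence-Line."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    n = len(x)
    i = j = 0
    while j != n - 1:
        slopes = (y[j + 1:] - y[j]) / (x[j + 1:] - x[j])
        k = j + 1 + int(np.argmax(slopes))       # argmax over k > j
        if slopes[k - j - 1] < slope_min:
            break
        i, j = j, k
    if i == j:                                   # degenerate input: no jump accepted
        raise ValueError("no segment with slope >= slope_min was found")
    a = (y[j] - y[i]) / (x[j] - x[i])            # slope (line 10)
    return a, y[i] - x[i] * a                    # intercept (line 11)

def find_convergence_line_log(degrees, errors, slope_min):
    """Find-Convergence-Line-Log: bound |eps_i| <= beta * k_i**(-alpha)."""
    a, b = find_convergence_line(np.log10(np.asarray(degrees, dtype=float)),
                                 np.log10(np.abs(errors)), slope_min)
    return -a, 10.0 ** b                         # (alpha, beta)
```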
## Conjectures of Pointwise Convergence

The papers by Babuska and Hakula 1 2 introduced two conjectures about the convergence rate and the convergence distance. They are stated as follows. The convergence rate $α$ depends on $x$ and the degree $b$

$$α(x,b)= \begin{cases} \max(b,1), & x=a \\ b+1/2, & x∈\{-1,1\} \\ b+1, & x∈(-1,a)∪(a,1). \\ \end{cases} \tag{25} \label{convergence-rate}$$

As can be seen, there are different convergence rates at the edges, at the singularity, and elsewhere. The convergence distance $β$ depends on $x$ but is independent of the degree $b.$ Near the singularity we have

$$β(a±ξ)≤Dξ^{-ρ}, \quad ρ=1 \tag{26} \label{convergence-distance-near-singularity}$$

and near the edges we have

$$β(±(1-ξ))≤Dξ^{-ρ}, \quad ρ=1/4 \tag{27} \label{convergence-distance-near-edges}$$

where $ξ$ is a small positive number and $D$ is a positive constant. The behaviour near the singularity is related to the Gibbs phenomenon.

## Results

The source code for the algorithms, plots, and animations is available in the LegendreSeries GitHub repository. It also contains instructions on how to run the code. The results were obtained by computing the approximation error $\eqref{approximation-error}$ for the step function and the V function up to degree $n=10^5.$ The values of $x$ from the domain $Ω$ were chosen to include the edges, the singularity and its negation, zero, points near the edges and the singularity, and points elsewhere in the domain. Pointwise convergence was then analyzed by finding the convergence line, i.e., the values for the rate of convergence $α$ and the distance of convergence $β$ for the approximation errors. The values were then verified to follow the conjectures $\eqref{convergence-rate}$, $\eqref{convergence-distance-near-edges}$ and $\eqref{convergence-distance-near-singularity}$, as can be seen from the figures below.

### Step Function

The following results are for the step function with $a=0.5$ and $b=0$. Pointwise convergence at the edges has a convergence rate of $α=1/2$. Pointwise convergence at the singularity and its negation has a convergence rate of $α=1$. Pointwise convergence near the singularity $a±ξ$ has a rate of convergence $α=1$, but as can be seen, the distance of convergence $β$ increases as $x$ moves closer to the singularity from either side. Pointwise convergence near the edges $±(1-ξ)$ has a rate of convergence $α=1$, but similarly to points near the singularity, as $x$ moves closer to the edges the distance of convergence $β$ increases. Also, the pre-asymptotic region, i.e., a region with a different convergence rate, is visible.

We have plotted the convergence distances $β$ into a graph as a function of $x$. We can see the behavior near the edges and near the singularity. Values in these regions can be plotted on a logarithmic scale. Near-singularity convergence distances $β(a±ξ)$ as a function of $x$ are linear on the logarithmic scale with the parameter $ρ≈1.0.$ Near the edges, convergence distances $β(±(1-ξ))$ as a function of $x$ are linear on the logarithmic scale with the parameter $ρ≈1/4.$

### V Function

The following results are for the V function with $a=0.5$ and $b=1$. The results are similar, but not identical, to those of the step function. Pointwise convergence at the edges has a convergence rate of $α=1+1/2$. Pointwise convergence at the negation of the singularity has a convergence rate of $α=2$. Pointwise convergence at the singularity has a convergence rate of $α=1$. As can be seen, the convergence pattern is almost linear with very little oscillation. Pointwise convergence near the singularity $a±ξ$ has a rate of convergence $α=2$, but as can be seen, the distance of convergence $β$ increases as $x$ moves closer to the singularity from either side. Pointwise convergence near the edges $±(1-ξ)$ has a rate of convergence $α=2$, but similarly to points near the singularity, as $x$ moves closer to the edges the distance of convergence $β$ increases. Also, the pre-asymptotic region, i.e., a region with a different convergence rate, is visible.

We have plotted the convergence distances $β$ into a graph as a function of $x$. We can see the behavior near the edges and near the singularity. Values in these regions can be plotted on a logarithmic scale.
Near-singularity convergence distances $β(a±ξ)$ as a function of $x$ are linear on the logarithmic scale with the parameter $ρ≈1.0.$ Near the edges, convergence distances $β(±(1-ξ))$ as a function of $x$ are linear on the logarithmic scale with the parameter $ρ≈1/4.$

## Conclusions

The results show that the Legendre series for piecewise analytic functions follows the conjectures $\eqref{convergence-rate}$, $\eqref{convergence-distance-near-singularity}$ and $\eqref{convergence-distance-near-edges}$ with high precision up to very high degree series expansions. Future research could study the effect of the position of the singularity $a$ on the constant $D$, or study series expansions using other classical orthogonal polynomials and special cases of Jacobi polynomials such as Chebyshev polynomials. However, their formulas are unlikely to have convenient closed-form solutions and could, therefore, be more challenging to study.

## Contribute

If you enjoyed or found benefit from this article, it would help me if you shared it with other people who might be interested. If you have feedback, questions, or ideas related to the article, you can write to my GitHub Discussions forum.

***

For more content, you can follow my YouTube channel and join my newsletter. Since creating content and open-source libraries takes time and effort, consider supporting the effort by subscribing or giving a one-time donation.

## References

1. Babuška, I., & Hakula, H. (2014). On the $p$-version of FEM in one dimension: The known and unknown features, 1–25. Retrieved from http://arxiv.org/abs/1401.1483 ↩︎
2. Babuška, I., & Hakula, H. (2019). Pointwise error estimate of the Legendre expansion: The known and unknown features. Computer Methods in Applied Mechanics and Engineering, 345(339380), 748–773. https://doi.org/10.1016/j.cma.2018.11.017 ↩︎

##### Jaan Tollander de Balsch

###### Computational Scientist

Jaan Tollander de Balsch is a computer scientist with a background in applied mathematics.
2022-11-27 19:43:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9455391764640808, "perplexity": 338.3014999560334}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710417.25/warc/CC-MAIN-20221127173917-20221127203917-00492.warc.gz"}
https://www.tutordale.com/what-does-more-than-mean-in-math-add-or-subtract/
# What Does More Than Mean In Math Add Or Subtract

## Addition And Subtraction Word Problem Keywords

• Each word problem contains numbers which should be written down.
• Keywords in the word problem can help us to decide whether to add or subtract these numbers.
• The list above contains some common addition and subtraction keywords.
• If you see these words in the word problem, they may help you to decide whether to add or subtract the numbers that you have already written down.

Write out the numbers given in the word problem text.

• Phoebe has 12 cm of ribbon and Jack has 23 cm of ribbon.
• How much ribbon do they have altogether?
• The word altogether tells us to add the two numbers to make a total.
• We can write the numbers 12 and 23 with their digits above each other.
• Adding the units column, 2 + 3 = 5.
• Adding the tens column, 1 + 2 = 3.
• 12 + 23 = 35 and so, there is 35 cm of ribbon in total.

## What's Bigger, Infinity Or Pi

Infinity is over 0, it's over -284, it's over 42, it's over 4, it's over 999,999,999,999,999,999*10^99999, it's over a googolplex, it's even larger than a googolplexian. Pi, on the other hand, is less than 4; it's only 3.14159265… Since π = C/d, pi is the ratio of a circle's circumference to its diameter.

The addition symbol + is usually used to indicate that two or more numbers should be added together, for example, 2 + 2. The + symbol can also be used to indicate a positive number, although this is less common, for example, +2. Our page on Positive and Negative Numbers explains that a number without a sign is considered to be positive, so the plus is not usually necessary. See our page on Addition for more.

## The Meaning Of The Product Of A Number

The product of a number and one or more other numbers is the value obtained when the numbers are multiplied together. For example, the product of 2, 5 and 7 is 2 × 5 × 7 = 70. While the product obtained by multiplying specific numbers together is always the same, products are not unique. The product of 6 and 4 is always 24, but so is the product of 2 and 12, or 8 and 3. No matter which numbers you multiply to obtain a product, the multiplication operation has four properties that distinguish it from other basic arithmetic operations. Addition, subtraction and division share some of these properties, but each has a unique combination.

## Math Is More Than Numbers

By Mathnasium | Added Jan 8, 2018

Using Math to Improve Important Life Skills

On the surface, math may seem like it's all about numbers and formulas. However, this versatile subject is about much more than just counting, adding, and subtracting. Discover why math is more than numbers, and find out how it contributes to the development of valuable skills in problem solving, critical thinking, language, and more.

Problem-Solving Skills

Even simple addition and subtraction problems are about more than reaching the right answer. Both simple and complex math problems teach students important problem-solving skills that they can use for a variety of applications. For example, complex word problems and algebra equations can help elementary to high school students understand and solve puzzles. While most word problems do require a combination of adding, subtracting, multiplying, and dividing, they also ask students to think through problems carefully.
They require students to puzzle over relationships between people, the timing of events, and distances between locations. They can also encourage students to approach problems from a variety of perspectives.

Language Skills

Both elementary and high school students also have the opportunity to learn essential vocabulary in math class. Younger students may master the use of expressions, such as "less than" or "greater than," while advanced students may learn entirely new vocabulary sets in geometry, algebra, or trigonometry class.

## The Arithmetic Property Of Commutation

Commutation means that the terms of an operation can be switched around, and the sequence of the numbers makes no difference to the answer. When you obtain a product by multiplication, the order in which you multiply the numbers does not matter. The same is true of addition. You can multiply 8 × 2 to get 16, and you will get the same answer with 2 × 8. Similarly, 8 + 2 gives 10, the same answer as 2 + 8. Subtraction and division don't have the property of commutation. If you change the order of the numbers, you'll get a different answer. For example, 8 ÷ 2 = 4, but 2 ÷ 8 = 0.25. For subtraction, 8 − 2 = 6, but 2 − 8 = −6. Division and subtraction are not commutative operations.

## What Does As Much As Mean In Math

As much as means that quantities are being compared; much is an adjective referring to quantity. So "60% as much as" means that for every hundred units of quantity in $30, the answer has sixty such units. So we could solve this as: $30 is thirty times a hundred cents, so the answer is thirty times sixty cents, which is $18.

## Operational Identities: Difference And Sum Vs Product And Quotient

If you perform an arithmetic operation on a number and an operational identity, the number remains unchanged. All four basic arithmetic operations have identities, but they are not the same. For subtraction and addition, the identity is zero. For multiplication and division, the identity is one. For example, for a difference, 8 − 0 = 8. The number remains identical. The same is true for a sum, 8 + 0 = 8. For a product, 8 × 1 = 8, and for a quotient, 8 ÷ 1 = 8. Products and sums have the same basic properties except that they have different operational identities. As a result, multiplication and its products have a unique set of properties that you have to know to get the right answers.

## How Many More Meaning In Math

Math Operator-Vocabulary. Addition: sum, altogether, all, in all, together, total, total number, add, increase, increased by, more than. Subtraction: minus, greater than, take away, fewer than, less than, subtract, decreased by. Multiplication: product, multiply, multiplied by, times.

Likewise, does "or" mean add or multiply? Roughly speaking, in probability, the word "or" translates into addition, while "and" translates into multiplication. The added assumptions are: you can only add if the two events are disjoint; you can only multiply if the two events are independent.

Similarly, one may ask, what operation is "how many more"?

## Subtraction Of Large Numbers

To subtract large numbers, list them in columns and then subtract only those digits that have the same place value.

#### Example 4

Find the difference between 7064 and 489.

##### Solution:

7064 − 489 = 6575.

###### Note:

• Use the equals addition method.
• Line up the thousands, hundreds, tens and units place values for the two numbers when placing the smaller number below the larger number as shown above.
## The Associative Property For Products And Sums

The associative property means that if you are performing an arithmetic operation on more than two numbers, you can associate or put brackets around two of the numbers without affecting the answer. Products and sums have the associative property while differences and quotients do not. For example, if an arithmetical operation is performed on the numbers 12, 4 and 2, the sum can be calculated as (12 + 4) + 2 = 18 or 12 + (4 + 2) = 18. A product example is (12 × 4) × 2 = 96 or 12 × (4 × 2) = 96. But for quotients, (12 ÷ 4) ÷ 2 = 1.5 while 12 ÷ (4 ÷ 2) = 6, and for differences, (12 − 4) − 2 = 6 while 12 − (4 − 2) = 10. Multiplication and addition have the associative property while division and subtraction do not.

## Reducing Ambiguity By Agreement

In general, nobody wants to be misunderstood. In mathematics, it is so important that readers understand expressions exactly the way the writer intended that mathematics establishes conventions, agreed-upon rules, for interpreting mathematical expressions. Does 10 − 5 − 3 mean that we start with 10, subtract 5, and then subtract 3 more, leaving 2? Or does it mean that we are subtracting 5 − 3 from 10? Does 2 + 3 × 10 equal 50 because 2 + 3 is 5 and then we multiply by 10, or does the writer intend that we add 2 to the result of 3 × 10? To avoid these and other possible ambiguities, mathematics has established conventions for the way we interpret mathematical expressions. One of these conventions states that when all of the operations are the same, we proceed left to right, so 10 − 5 − 3 = 2, so a writer who wanted the other interpretation would have to write the expression differently: 10 − (5 − 3). When the operations are not the same, as in 2 + 3 × 10, some may be given preference over others. In particular, multiplication is performed before addition regardless of which appears first when reading left to right. For example, in 2 + 3 × 10, the multiplication must be performed first, even though it appears to the right of the addition, and the expression means 2 + 30. See full rules for order of operations below.

## Introducing The Concept: Order Of Operations

Before your students use parentheses in math, they need to be clear about the order of operations without parentheses. Start by reviewing the addition and multiplication rules for order of operations, and then show students how parentheses can affect that order.

Materials: Whiteboard or way to write for the class publicly

Prerequisite Skills and Concepts: Students should be able to evaluate and discuss addition, subtraction, multiplication, and division expressions.

This would be a good moment to discuss the mathematical practice of attending to precision. In math, it is critical that we are deliberate when writing mathematical expressions and making mathematical statements. Small mixups with the math rules of operations or parentheses can cause drastic changes! Imagine incorrectly evaluating an expression when calculating a medicine dosage or a cost, for example.

Give students a few more examples, showing an expression with and without parentheses. Have student volunteers evaluate the expressions and compare their values. When students arrive at different values, avoid telling them they are right or wrong. Instead, have them find similarities and differences in their strategies, and guide the discussion so that students can see which strategy matches the rules for order of operations.

## What Does Greater Than Mean In A Word Problem
Also asked, what does "more than" mean in word problems? Addition: sum, altogether, all, in all, together, total, total number, add, increase, increased by, more than. Subtraction: minus, greater than, take away, fewer than, less than, subtract, decreased by. Multiplication: product, multiply, multiplied by, times.

Likewise, what words mean subtraction? The Basic Operations:

• − Subtraction: Subtract, Minus, Less, Difference, Decrease, Take Away, Deduct
• × Multiplication: Multiply, Product, By, Times, Lots Of
• ÷ Division: Divide, Quotient, Goes Into, How Many Times

In this way, what does OFC mean? Of Course.

What are key words in word problems? MathHelp.com:

• Addition: increased by. more than. combined, together. total of.
• Subtraction: decreased by. minus, less. difference between/of. less than, fewer than.
• Multiplication: of. times, multiplied by. product of.
• Division: per, a. out of. ratio of, quotient of.
• Equals: is, are, was, were, will be. gives, yields. sold for, cost.

## × Or * Or ⋅ Multiplication

These symbols have the same meaning; commonly × is used to mean multiplication when handwritten or used on a calculator, 2 × 2, for example. The symbol * is used in spreadsheets and other computer applications to indicate a multiplication, although * does have other more complex meanings in mathematics. Less commonly, multiplication may also be symbolised by a dot ⋅ or indeed by no symbol at all. For example, if you see a number written outside brackets with no operator, then it should be multiplied by the contents of the brackets: writing 2 directly before a bracketed expression, as in 2(…), is the same as 2 × (…). See our page on Multiplication for more.

## What Does The 3! Mean In Math

In mathematics, the expression 3! is read as "three factorial" and is really a shorthand way to denote the multiplication of several consecutive whole numbers. Since there are many places throughout mathematics and statistics where we need to multiply numbers together, the factorial is quite useful.

## < Less Than And > Greater Than

This symbol < means less than, for example 2 < 4 means that 2 is less than 4. This symbol > means greater than, for example 4 > 2. The symbols ≤ and ≥ mean less than or equal to and greater than or equal to and are commonly used in algebra. In computer applications <= and >= are used. The symbols ≪ and ≫ are less common and mean much less than, or much greater than.

## Addition Of Large Numbers

To add large numbers, list them in columns and then add only those digits that have the same place value.

#### Example 2

Find the sum of 5897, 78, 726 and 8569.

##### Solution:

5897 + 78 + 726 + 8569 = 15270.

###### Note:

• Write the numbers in columns with the thousands, hundreds, tens and units lined up.
• 7 + 8 + 6 + 9 = 30. Thus, the sum of the digits in the units column is 30. So, we place 0 in the units place and carry 3 to the tens place.
• The sum of the digits in the tens column after adding 3 is 27. So, we place 7 in the tens place and carry 2 to the hundreds place.
• The sum of the digits in the hundreds column after adding 2 is 22. So, we place 2 in the hundreds place and carry 2 to the thousands place.

## Common Mathematical Symbols And Terminology: Maths Glossary

Mathematical symbols and terminology can be confusing and can be a barrier to learning and understanding basic numeracy. This page complements our numeracy skills pages and provides a quick glossary of common mathematical symbols and terminology with concise definitions.
Are we missing something? Get in touch to let us know.

## What Comes First In Order Of Operations

Over time, mathematicians have agreed on a set of rules called the order of operations to determine which operation to do first. When an expression only includes the four basic operations, here are the rules:

• Multiply and divide from left to right.
• Add and subtract from left to right.

When simplifying an expression that mixes these operations, first compute any multiplication and division, working left to right, before evaluating addition or subtraction. Once all multiplication and division have been completed, continue by adding or subtracting from left to right.

Consider the expression 6 + 4 × 7 − 3 as an example: 6 + 4 × 7 − 3 = 6 + 28 − 3, because 4 × 7 = 28 is done first since multiplication and division are evaluated first; then 6 + 28 − 3 = 34 − 3 = 31.

Sometimes we might want to ensure addition or subtraction is performed first. Grouping symbols such as parentheses ( ), brackets [ ], or braces { } allow us to determine the order in which particular operations are performed. The order of operations requires that operations inside grouping symbols are performed before operations outside them. For example, suppose there were parentheses around the expression 6 + 4: (6 + 4) × 7 − 3 = 10 × 7 − 3, because 6 + 4 = 10 is done first since it's inside parentheses; then 10 × 7 − 3 = 70 − 3, because 10 × 7 = 70 and there are no more parentheses to consider; finally, 70 − 3 = 67. With grouping symbols, the rules become:

• Do operations in parentheses or grouping symbols.
• Multiply and divide from left to right.
• Add and subtract from left to right.

## Multiply Or Add First? Teaching Order Of Operations Rules

When students in Grades 3 and up initially learn to add, subtract, multiply, divide, and work with basic numerical expressions, they begin by performing operations on two numbers. But what happens when an expression requires multiple operations? Do you add or multiply first, for example? What about multiply or divide? This article explains what order of operations is and gives you examples that you can also use with students. It also provides two lessons to help you introduce and develop the concept.

Key Standard:

• Perform arithmetic operations involving addition, subtraction, multiplication, and division in the conventional order, whether there are parentheses or not.

The order of operations is an example of mathematics that is very procedural. It's easy to mess up because it's less a concept you master and more a list of rules you have to memorize. But don't be fooled into thinking that procedural skills can't be deep! It can present difficult problems appropriate for older students and ripe for class discussions:

• Does the left to right rule change when the multiplication is implied rather than spelled out, for example when a number is written directly against a bracket rather than with an explicit × sign?
• Where does factorial fall within the order of operations?
• What happens when you have an exponent raised to another exponent, but there are no parentheses?

## Does Fewer Mean Add Or Subtract

Yes, usually, "fewer" means subtraction, but some questions might try to trick you! Furthermore, what does the word fewer mean in math? As in "fewer trains were late". Synonyms: few, a quantifier that can be used with count nouns and is often preceded by 'a'; a small but indefinite number; less, a quantifier meaning not as great in amount or degree.

Similarly, one may ask, does "how many" mean subtract?

• − Subtraction: Subtract, Minus, Less, Difference, Decrease, Take Away, Deduct.
• × Multiplication: Multiply, Product, By, Times, Lots Of.
• ÷ Division: Divide, Quotient, Goes Into, How Many Times.

How many more: is it add or subtract? Addition: sum, altogether, all, in all, together, total, total number, add, increase, increased by, more than. Subtraction: minus, greater than, take away, fewer than, less than, subtract, decreased by. Multiplication: product, multiply, multiplied by, times.
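Since several of the sections above hinge on the order of operations, a two-line check in Python, which follows the same precedence rules, makes the contrast concrete:

```python
# multiplication binds tighter than addition and subtraction
print(6 + 4 * 7 - 3)     # 31, because 4 * 7 = 28 is evaluated first
# grouping symbols override the default order
print((6 + 4) * 7 - 3)   # 67, because 6 + 4 = 10 is evaluated first
```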
2022-05-24 23:56:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6232107877731323, "perplexity": 1044.0842209665732}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662577757.82/warc/CC-MAIN-20220524233716-20220525023716-00044.warc.gz"}
https://globaled.gse.harvard.edu/blog/teaching-and-learning-twenty-first-century
# Teaching and Learning in the Twenty-First Century

In the book-turned-movie The Martian, Matt Damon plays Mark Watney, an astronaut who gets stranded on Mars and then is later rescued. Viral blog posts have suggested that had this really happened, it would have taken about $200 billion to rescue him. What they do not mention, however, is that even $200 trillion would not have been enough, were it not for some critical competencies displayed by Watney's fellow astronauts, scientists, and Watney himself.

Using Cognitive, Interpersonal, and Intrapersonal Competencies

The cognitive competencies, which include not only critical thinking but creativity and innovation, deployed in the rescue mission are obvious: Watney's knowledge as a botanist serves him well as he innovates to produce life on Mars, and NASA and JPL's combined technical creativity finds a way to communicate with him and also brings him home. But the intrapersonal competencies exercised by Watney in monitoring and marshaling his emotions to work up the hope, motivation, and determination to survive against all odds are remarkable. Equally critical are the interpersonal competencies exercised by Watney's fellow crew members on the space shuttle and the American and Chinese scientists, as they tap into their professional and personal responsibilities, and cooperative and teamwork abilities, to bring him home. Money alone would never have been enough to solve problems none of them had seen before, much less had been taught to solve; indeed, cognitive abilities also would not have been enough.
2020-01-28 03:52:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.247645303606987, "perplexity": 5450.660342011668}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251773463.72/warc/CC-MAIN-20200128030221-20200128060221-00047.warc.gz"}
http://mathhelpforum.com/advanced-algebra/218202-help-proof-eigenvalues.html
# Thread: Help with a proof - eigenvalues

1. ## Help with a proof - eigenvalues

Let $\varphi :V\rightarrow V$ be a linear transformation, let $u$ be an eigenvector of $\varphi$ with the eigenvalue $\lambda$, and let $v$ be an eigenvector of $\varphi$ with the eigenvalue $\mu$. I need to prove that if $u+v$ is an eigenvector of $\varphi$, then $\lambda=\mu$.

This is what I tried: $\varphi(u+v)=a(u+v)=au+av$, and on the other hand: $\varphi(u)+\varphi(v)=\lambda u+\mu v$. But how can I conclude that $a=\lambda$ and $a=\mu$?

2. ## Re: Help with a proof - eigenvalues

$\varphi(u+v) = \lambda u + \mu v = \lambda'(u+v)$, so $(\lambda- \lambda')u + (\mu- \lambda')v = 0$. $u$ and $v$ are linearly independent, so $\lambda = \lambda'$ and $\mu = \lambda'$.

3. ## Re: Help with a proof - eigenvalues

Why are they linearly independent?

4. ## Re: Help with a proof - eigenvalues

Originally Posted by Stormey: Why are they linearly independent?

Suppose $\lambda \ne \mu$ and $u$ and $v$ are linearly dependent. Then there is some $t \ne 0$ such that $v=tu$. In that case $\phi v = \phi (tu) = t (\phi u) = t (\lambda u) = \lambda (tu) = \lambda v$, which is a contradiction (why?). In other words, either $\lambda = \mu$, or $u$ and $v$ are linearly independent.

5. ## Re: Help with a proof - eigenvalues

If $\lambda \neq \mu$ (the starting assumption), the eigenvectors are linearly independent. You'll have to look it up in a linear algebra book or online.

6. ## Re: Help with a proof - eigenvalues

Thanks for the help. I know that eigenvectors with different eigenvalues are linearly independent; you don't need to convince me about that. The thing is: it's not said that $\lambda \neq \mu$; it's only said that $u+v$ is an eigenvector. Are you saying that I need to assume that $\lambda \neq \mu$ and show that it leads to a contradiction? (Just want to wrap my head around it.)

7. ## Re: Help with a proof - eigenvalues

Originally Posted by Stormey: ... are you saying that I need to assume that $\lambda \neq \mu$ and show that it leads to a contradiction?

This is where a proof by contradiction comes in. Suppose $u+v$ is an eigenvector of $\phi$, but $\lambda \ne \mu$ ... If it leads to a contradiction, then we must conclude that if $u+v$ is an eigenvector of $\phi$, then $\lambda = \mu$.

Great. Thank you.
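The contradiction argument is easy to sanity-check numerically. Here is a small NumPy sketch (ours, not from the thread) showing that the sum of eigenvectors with two distinct eigenvalues fails to be an eigenvector:

```python
import numpy as np

phi = np.diag([2.0, 5.0])    # linear map with distinct eigenvalues 2 and 5
u = np.array([1.0, 0.0])     # eigenvector with eigenvalue lambda = 2
v = np.array([0.0, 1.0])     # eigenvector with eigenvalue mu = 5

w = u + v
print(phi @ w)               # [2. 5.] -- not a scalar multiple of w = [1. 1.]
print((phi @ w) / w)         # [2. 5.] -- componentwise ratios disagree, so
                             # w is not an eigenvector when lambda != mu
```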
2016-12-06 01:32:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 35, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8763339519500732, "perplexity": 477.3109818506344}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541864.44/warc/CC-MAIN-20161202170901-00171-ip-10-31-129-80.ec2.internal.warc.gz"}
https://oxfordre.com/economics/view/10.1093/acrefore/9780190625979.001.0001/acrefore-9780190625979-e-384
# Human Capital Inequality: Empirical Evidence

## Summary and Keywords

This article focuses on the distribution of human capital and its implications for the accrual of economic resources to individuals and households. Human capital inequality can be thought of as measuring disparity in the ownership of labor factors of production, which are usually compensated in the form of wage income. Earnings inequality is tightly related to human capital inequality. However, it only measures disparity in payments to labor rather than dispersion in the market value of the underlying stocks of human capital. Hence, measures of earnings dispersion provide a partial and incomplete view of the underlying distribution of productive skills and of the income generated by way of them. Despite its shortcomings, a fairly common way to gauge the distributional implications of human capital inequality is to examine the distribution of labor income. While it is not always obvious what accounts for returns to human capital, an established approach in the empirical literature is to decompose measured earnings into permanent and transitory components. A second approach focuses on the lifetime present value of earnings. Lifetime earnings are, by definition, an ex post measure only observable at the end of an individual's working lifetime. One limitation of this approach is that it assigns a value based on one of the many possible realizations of human capital returns. Arguably, this ignores the option value associated with alternative, but unobserved, potential earning paths that may be valuable ex ante. Hence, ex post lifetime earnings reflect both the genuine value of human capital and the impact of the particular realization of unpredictable shocks (luck). A different but related measure focuses on the ex ante value of expected lifetime earnings, which differs from ex post (realized) lifetime earnings insofar as they account for the value of yet-to-be-realized payoffs along different potential earning paths. Ex ante expectations reflect how much an individual reasonably anticipates earning over the rest of their life based on their current stock of human capital, averaging over possible realizations of luck and other income shifters that may arise. The discounted value of different potential paths of future earnings can be computed using risk-less or state-dependent discount factors.

# Dispersion of Economic Outcomes Across Households

Persistent differences in economic outcomes, such as income, wealth, employment, and consumption, have received continuing attention in the academic debate. Part of this attention is motivated by the well-documented increase in cross-sectional economic inequality that began in the 1980s. Alternative measures of inequality, such as variances, interpercentile ranges, and concentration at the top, all suggest that this increase has occurred across a range of measurable outcomes (earnings, wealth, health). Wealth inequality, in particular, has received considerable attention, with mounting evidence of steady and economically meaningful changes in the concentration of wealth ownership.
By definition, wealth inequality captures disparity in the ownership of productive capital and other non-labor factors of production. In contrast, this article focuses on the distribution of human capital and its implications for the accrual of economic resources to individuals and households. Human capital inequality can be thought of as measuring disparity in the ownership of labor factors of production, which are usually compensated in the form of wage income. While not directly observable, the notion of a stock of human capital and its main properties have been formally spelled out in the original work of Gary Becker and coauthors (e.g., Becker, 1962, 1964). Earnings inequality is tightly related to human capital inequality. However, earnings dispersion can only provide a partial and incomplete perspective on the underlying distribution of productive skills and on the income generated by way of them. This follows from the fact that, at any point in time, earnings dispersion captures an isolated snapshot of the prevailing disparity in payments to labor. In fact, one would rather learn about the distribution of the underlying stocks of human capital that generate income, and about their market value. This tension between measures of earnings in a specific year and measures of long-term ability to earn has become a distinguishing feature of the literature on inequality and human capital. Despite its shortcomings, a fairly common way to gauge the distributional implications of human capital inequality is to examine the distribution of labor income. While it is not always obvious what accounts for returns to human capital, an established approach in the empirical literature is to decompose measured earnings into permanent and transitory components (see, e.g., Gottschalk, Moffitt, Katz, & Dickens, 1994; Heathcote, Perri, & Violante, 2010; Lochner & Shin, 2014). The decomposition approach finds its theoretical motivation in the observation by Friedman (1957) that the underlying ability to earn associated with human capital must be related to the expected flow of income over an individual’s life. Because the permanent component of earnings observed in a given period should, by definition, be detectable in every realization of an individual’s earnings process, one can regard an increase in the estimated variance of permanent earnings shocks as a proxy for the underlying changes in the value of human capital. A second approach to the measurement of human capital inequality focuses on the lifetime present value of earnings. An early example of this is Lillard (1977), while more recent contributions are by Bowlus and Robin (2004) and Guvenen, Kaplan, Song, and Weidner (2017). Lifetime earnings are, by definition, an ex post measure that is only observable at the end of an individual’s working lifetime. This measure captures how an individual’s position in the labor market changes over their lifetime through the evolution of their wage and employment status rather than their position at a single point in time. One limitation of this approach is that it assigns a value based on one of the many possible realizations of human capital returns. That is, because it is impossible to rerun the working life of the same individual multiple times, this measure can only capture the value associated with the particular earnings realization that is observed for each individual. 
Arguably, this provides a partial view of human capital because it ignores the option value associated with alternative, but unobserved, potential earning paths that may be valuable ex ante. Hence, ex post lifetime earnings reflect both the genuine value of human capital and the impact of the particular realization of unpredictable shocks (luck): crucially, these two components are not separately observable.

A different but related measure focuses on the ex ante value of expected lifetime earnings, which differs from ex post (realized) lifetime earnings insofar as it accounts for the value of yet-to-be-realized payoffs along different potential earning paths. Ex ante expectations reflect how much an individual reasonably anticipates earning over the rest of their life based on their current stock of human capital, averaging over possible realizations of luck and other income shifters that may arise. The discounted value of different potential paths of future earnings can be computed using a risk-less discount factor as, for example, Jorgenson and Fraumeni (1989) and Cunha, Heckman, and Navarro (2005) do. This requires a way to estimate the distribution of future earnings outcomes. However, when valuing such outcomes it may be more appropriate to discount their value in a state-dependent fashion, as suggested in Huggett and Kaplan (2016) and Abbott and Gallipoli (2019). These more theory-focused approaches take estimates of the expectation of the present value of lifetime earnings as their yardstick measure of human capital, providing an indirect way to account for some of the key empirical issues highlighted so far.

The empirical literature on economic inequality is so expansive that one cannot hope to summarize it all within a few pages. Instead, this article focuses on a narrow aspect of inequality—namely human capital disparities—and attempts to provide an in-depth review of this particular dimension of inequality.1 The approach here is to dissect and categorize existing evidence into different metrics of human capital and then summarize and convey the critical information from each of those threads of the literature. While it is impossible to cover every aspect that might fit under each alternative measurement approach, we attempt to provide an overview of relevant and recent advances within those threads. Combining the evidence reported here with evidence on other dimensions of inequality, such as wealth and consumption, should allow the reader to obtain a more complete understanding of overall economic inequality. Finally, in addition to reporting evidence on human capital inequality in the United States, this article also briefly outlines some international comparisons and provides a cursory discussion of ideas from the ever-expanding literature on the determinants of human capital heterogeneity.

# Concepts of Human Capital and Their Measurement

While human capital is not directly observable, one can measure the distribution of returns to human capital utilization in the form of earnings. This allows researchers to draw inferences about the underlying distribution and value of human wealth. Moreover, given the multifaceted nature of human capital, one is also able to gauge variation in some of its most relevant features, such as educational achievement, health status, and longevity. This section discusses relevant measurement issues and overviews key results on the distribution of human capital.
## Earnings Inequality

Earnings are observed at the individual level but are sometimes analyzed at the household level. Earnings capture the part of income that is payment for labor. Thus earnings can be interpreted as the annual dividend paid by one’s human capital.2 Arguably, this dividend is not the best measure of human capital, but it is by far the most popular. Some of the highest-quality earnings data are sourced from government tax or Social Security records; however, such data cannot be freely accessed for analysis and are usually maintained in access-controlled research data centers. Earnings are also observable in survey data, such as the Census, though the reliability of such sources may be less than ideal.3 One way to measure earnings inequality is to compute ratios of earnings of individuals at different percentiles of the income distribution. Researchers usually report the 90:10 ratio (90th percentile relative to 10th percentile), as well as the 90:50 and 50:10 ratios, to identify which half of the distribution is driving changes in inequality. Such statistics are easily computed: one simply determines earnings at the relevant percentiles and takes their ratios. Another common measure of earnings inequality is the top share of earnings. This is the share of total earnings in the economy accruing to those with the highest earnings, such as the highest 1% or 10% of earners. Such statistics are also easy to compute: one determines the relevant percentile threshold (e.g., the 99th percentile) and then computes the total earnings of those above this threshold. Dividing this number by total earnings in the entire economy, one arrives at the top-share statistic. One of the most frequently cited measures of earnings inequality is the Gini coefficient. The interpretation of the Gini coefficient can be related to the Lorenz curve, which describes the proportion of economy-wide earnings attributed to those at, or below, a given percentile of the earnings distribution. When earnings are equally distributed (every person has identical earnings), the Gini coefficient is zero. In contrast, when earnings are perfectly unequal (one person gets everything, all others get nothing), the Gini coefficient is one. Earnings are a non-negative quantity, which ensures that the Gini coefficient is between zero and one. However, other variables, such as net worth, can assume negative values, and their distributions can therefore exhibit a Gini coefficient exceeding one. A tremendous amount of data and measurement exists on income and earnings inequality in the United States. When analyzing inequality in human capital in the United States, one valuable data source is Social Security Administration (SSA) records, for which reported amounts are labor income and the unit of analysis is the individual. Another popular source for computing inequality facts is the IRS database. However, IRS records report total income, and the unit of analysis is the tax unit, which could be an individual or a couple. An excellent description of facts and figures, based on SSA data, is in Kopczuk, Saez, and Song (2010). These authors consider 80:50 and 50:20 ratios rather than the usual 90:50 and 50:10 ratios. They find that while the 80:50 ratio dipped after World War II to as low as 1.5, a steady rise in this ratio followed: the ratio climbed above 1.8 by the mid-2000s. This pattern reflects a steady increase in above-median inequality in the post-war era.
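All of the summary statistics described in this subsection are straightforward to compute from microdata. The following sketch, our own illustration rather than code from any cited study, computes the percentile ratios, the top 1% share, and the Gini coefficient from an array of non-negative individual earnings:

```python
import numpy as np

def inequality_stats(earnings):
    """Summary inequality statistics for a 1-D array of non-negative
    individual earnings."""
    e = np.sort(np.asarray(earnings, dtype=float))
    p10, p50, p90, p99 = np.percentile(e, [10, 50, 90, 99])

    # Percentile ratios: 90:10, 90:50, and 50:10.
    ratios = {"90:10": p90 / p10, "90:50": p90 / p50, "50:10": p50 / p10}

    # Top 1% share: earnings above the 99th percentile over total earnings.
    top1_share = e[e > p99].sum() / e.sum()

    # Gini coefficient: one minus twice the area under the Lorenz curve,
    # which maps population shares into cumulative earnings shares.
    lorenz = np.concatenate(([0.0], np.cumsum(e) / e.sum()))
    area = np.trapz(lorenz, dx=1.0 / len(e))
    gini = 1.0 - 2.0 * area

    return ratios, top1_share, gini
```

Applied to tax or survey microdata, these few lines deliver all of the statistics discussed here; the substantive difficulties lie in data access, top-coding, and the definition of the earnings concept rather than in the computations themselves. Returning to the SSA-based evidence of Kopczuk, Saez, and Song (2010):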
The 50:20 ratio fell overall in this period, starting at about 2.5 at the end of World War II and hovering around 2.2 by the mid-2000s, indicating a decrease in below-median inequality; however, this decline was not steady, with periods of increase occurring in the early 1980s and early 2000s. The overall inequality measure (the 80:20 ratio) showed a drop after World War II and substantial increases thereafter, as the above-median increases in inequality outpaced the lower inequality in below-median earnings. These authors also report the time series of the Gini coefficient for earnings, which dropped to as low as about 0.34 in the early 1950s and then steadily rose to just under 0.45 by the mid-2000s. Figure 1 reproduces a figure from the original study of Kopczuk et al. (2010), which displays Gini coefficients for earnings by gender and over time. A similar dynamic is observed in top earnings shares: the top 1% share of earnings dropped from 9.55% in 1939 to 5.92% in 1960, but then more than doubled to 12.28% by the mid-2000s. These general patterns and magnitudes are consistent with many other studies and data sources, for example Heathcote, Perri, and Violante (2010) and Eckstein and Nagypal (2004), who use Current Population Survey (CPS) data for similar time periods, or Ríos-Rull and Kuhn (2016), who use data from the Survey of Consumer Finances between 1989 and 2013 (see also Quadrini & Ríos-Rull, 2015).

Figure 1. Evolution of the earnings Gini index in SSA data from 1937 to 2004, both in aggregate and by gender (Kopczuk et al., 2010).

Lastly, recent evidence suggests that changing patterns of inequality are not a uniform phenomenon but are instead linked to growth of inequality within and across groups. For example, Lemieux (2006b) shows that much of the previously discussed increase in inequality is attributable to the increasing return to post-secondary education.4 In a related line of research, Song, Price, Guvenen, Bloom, and Von Wachter (2015) find that most of the observed increase in earnings inequality can be attributed to growing differences between firms in the wages they pay to workers. Similarly, Barth, Bryson, Davis, and Freeman (2016) find that rising inequality can be partly attributed to increasing inequality across industries.5

## Lifetime Earnings Inequality

In theory, lifetime earnings correspond to the present discounted value of all labor-related earnings over the course of one’s entire life. This concept is closer to the definition of human capital than annual earnings, as it captures the entire lifetime of dividends paid by one’s human capital. Importantly, these are the actual realized earnings of the individual rather than forecasted earnings, and so to obtain a measure of this quantity, the individual’s entire lifetime of earnings must be observed. In practice, lifetime earnings can be approximated by the present discounted sum of annual earnings over some chosen period of time. As examples, Lillard (1977) studies a narrow data set on post-schooling earnings, while Guvenen et al. (2017) study extensive administrative earnings records for workers between ages 25 and 55. When the object of interest is differences in lifetime income across cohorts, as in Guvenen et al. (2017), researchers can avoid specifying a discount rate by assuming that all cohorts discount at the same rate.
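As a concrete illustration of this approximation, the sketch below discounts a worker’s observed annual earnings back to a base age at a constant rate; the 4% rate and the flat earnings profile are purely illustrative assumptions, not values taken from the studies cited above:

```python
import numpy as np

def lifetime_earnings_pdv(earnings_by_age, r=0.04):
    """Present discounted value, at the first age observed, of a
    sequence of realized annual earnings (one entry per year)."""
    y = np.asarray(earnings_by_age, dtype=float)
    discount = (1.0 + r) ** -np.arange(len(y))
    return float(np.sum(discount * y))

# Example: a flat profile of 50,000 per year from age 25 to age 55.
print(lifetime_earnings_pdv([50_000] * 31, r=0.04))
```

Computing this statistic for every member of a cohort yields a distribution of realized lifetime earnings, to which the inequality measures described earlier can then be applied.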
In some instances, such as Bowlus and Robin (2004) and Low, Meghir, and Pistaferri (2010), theoretical models of life-cycle earnings are used to interpret data panels and generate estimates of realized lifetime earnings distributions. Once estimates of lifetime earnings are obtained for a cross-section of individuals, researchers can compute the same measures of inequality described above in the context of current (annual) earnings.

Figure 2. Panel (a) displays general patterns of earnings inequality measured by the standard deviation of log-earnings across genders and over time. Panel (b) displays 75:25 earnings ratios by gender and overall, showing how the pattern changes significantly if one considers only one gender or all workers. Panel (c) displays the evolution of the gender earnings gap, and panel (d) displays how within-gender inequality has risen while between-gender inequality has fallen (Guvenen et al., 2017).

Bowlus and Robin (2004) also study the evolution of lifetime earnings inequality over time. They note that lifetime income depends not only on an individual’s current position in the labor market but also on the evolution of their wage and employment status. Therefore, they develop a model incorporating wage mobility and employment transitions. For each individual in the sample they simulate a possible lifetime earnings trajectory, assuming that individuals are subject to the same distribution of shocks faced by older workers, and thereby estimate the distribution of lifetime earnings realizations. Their sample consists of white males aged 16 to 65 from the March CPS between 1978 and 1999. They use this sample to estimate the experience- and education-specific wage and employment transition probabilities. The key finding from this exercise is that lifetime earnings inequality is roughly 40% lower than annual earnings inequality. This implies that measures of cross-sectional inequality are significantly reduced when one factors in future employment and wages. This is partly due to the fact that the young, while exhibiting lower current wages, benefit disproportionately from upward wage mobility relative to the old. It is also shown that inequality has been rising over time within education and experience groups, especially for university graduates. Both within- and across-group variation is important in explaining rising earnings inequality. As mentioned above, using a narrow data sample from the NBER-TH survey, Lillard (1977) also examines lifetime dispersion of earnings and draws comparisons to cross-sectional earnings inequality. The sample for this classic study includes only males who volunteered for the U.S. Air Force pilot, navigator, and bombardier programs in the last half of 1943. Lillard explores the relative importance of schooling, measured ability, and family background for both annual and lifetime earnings. His broad conclusions are that cross-sectional earnings inequality is about 50–80% higher than lifetime earnings inequality. Annual earnings inequality is high, both overall and by age group. These inequality measures are not sensitive to discounting, nor to the length of working life. Another finding of this pioneering work is that the contributions of schooling, ability, and background to variation in lifetime earnings are similar to their respective impacts on the variation of annual earnings within age groups; age itself is the most important factor in explaining annual earnings.
These results, as well as those from the previously discussed articles, are echoed in a later study of Norwegian data by Aaberge and Mogstad (2014). This study also shows that lifetime earnings are much more equally distributed than annual earnings. Life-cycle bias, which is assessed by comparing within-age earnings inequality to aggregate earnings inequality, is a key contributor to the discrepancy between annual and lifetime earnings.

## Expected Lifetime Income Inequality

At least since the seminal work of Modigliani and Brumberg (1954) and Friedman (1957), it has been understood that expected lifetime earnings (as opposed to realized, and observed, lifetime earnings) are the main drivers of economic decisions and utility. This discrepancy between ex ante (unobserved) magnitudes and ex post realized outcomes is central to making sense of the choices and behavior of economic agents. As Cunha et al. (2005) emphasize, differences in expected lifetime earnings reflect ex ante heterogeneity in human capital, whereas unforecasted differences in lifetime earnings reflect luck and uncertainty. Assuming concave utility functions and a lack of full insurance, greater inequality due to uncertainty reduces the welfare of all agents. However, greater inequality in expected (hence predictable) resources, holding the mean constant, would imply that the rich become better off and the poor become worse off in welfare terms. Therefore, understanding how lifetime income inequality breaks down into its different components is crucial for understanding welfare. A primary requirement for assessing inequality in human capital is to estimate expectations of lifetime earnings. That is, one needs to find ways to gather and aggregate all information available to individuals when making forward-looking judgments about their future stream of resources. These judgments about future earnings do not include variation due to the residual component of lifetime earnings, such as luck and other unforecastable factors, as these are, by definition, unrelated to a worker’s skills or potential to earn income. Two individuals can be identical in their education and earning abilities, yet they may realize very different earnings in the future due to luck; despite these differences due to chance, one should consider these individuals to have possessed the same ex ante human capital when younger, and indeed to have been equal in their opportunities. Two lines of research that use different approaches to quantify inequality in the distribution of ex ante human capital have been advanced by Huggett and Kaplan (2016) and Abbott and Gallipoli (2019).6 The central questions in these lines of inquiry relate to how agents discount future payoffs when forming expectations, given different possible realizations of future earnings, and how much prior information (possibly unobservable to researchers) agents possess about their future earnings. With respect to the former question, recent advances in asset pricing theory suggest that agents employ state-dependent stochastic discount factors, which can be estimated non-parametrically using consumption panel data. With respect to the latter question, work by Cunha et al. (2005) suggests the use of early life choices, such as the decision to complete education, as proxies for ex ante information available to agents.
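In notation that is ours rather than taken verbatim from these papers, the object of interest can be written as the expected discounted value of future earnings $y_{i,t+s}$ for individual $i$ at age $t$:

$$HW_{it} = E_t\left[\sum_{s=0}^{T-t} m_{t,t+s}\, y_{i,t+s}\right],$$

where $m_{t,t+s}$ is the discount factor applied at age $t$ to earnings received at age $t+s$. Under riskless discounting, $m_{t,t+s} = (1+r)^{-s}$ for a constant rate $r$; under state-dependent discounting, $m_{t,t+s}$ is a stochastic discount factor that co-moves with marginal utility, so that earnings received in bad states are valued more highly. The expectation $E_t$ is taken over the distribution of future earnings paths conditional on the agent’s information at age $t$, which is precisely where assumptions about prior information matter.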
Credible answers to the question of whether (and how much) realized lifetime earnings inequality overstates ex ante human capital inequality will be a critical, and likely contentious, issue in the ongoing debate on human capital inequality.

## Permanent Versus Transitory Earnings Decompositions

A popular approach to studying human capital inequality involves decomposing earnings into underlying permanent and transitory components. While the transitory component reflects temporary, unpredictable variation or measurement error in wages, the permanent component can be thought of as a measure of human capital, akin to the flow value of the expected future earnings discussed in the previous section. This decomposition turns out to be extremely useful for two reasons: first, one can study trends in human capital inequality by estimating how the variance of the permanent component of earnings has changed over time; second, the actual implementation of this method can be carried out using fairly simple variance-covariance matrix estimators. This approach does, however, require a stylized set of assumptions and cannot deliver point estimates of each individual’s permanent component, nor does it suggest how to discount across possible future realizations of earnings to generate human wealth estimates.7 That said, much has been learned about the nature of trends in human capital inequality through such decompositions. To fix ideas, denote an individual’s log-earnings by $y_{it}$ and assume they are the sum of a transitory component $\mu_{it}$ and a permanent component $\nu_{it}$: $y_{it} = \mu_{it} + \nu_{it}$. The transitory component is such that realizations $\mu_{it}$ are independent of past and future realizations of the same component, or more precisely $E(\mu_{it}\,\mu_{i,t+\tau}) = 0$ for $\tau \neq 0$. In contrast, the distribution of realizations of the permanent component explicitly depends on past realizations. It is common for the permanent component to be modeled as either an $AR(1)$ or unit-root process; however, econometric advances also allow for non-linear dynamics, which may be a more appropriate assumption.8 Typically, these models can be identified with relatively short data panels (e.g., three years for the $AR(1)$ specification), but longer panels are needed for more reliable estimates. If longer longitudinal data sets are available, rolling windows of data can be used to produce time-varying estimates of the key variance parameters of these models.9 Textbook examples of this approach are Gottschalk et al. (1994) and Moffitt and Gottschalk (2012). The authors decompose the earnings residuals of prime-age males in the Panel Study of Income Dynamics (PSID) from 1970 to 2004 into permanent and transitory components.10 As a baseline, they find that inequality measured by the variance of log-earnings residuals nearly doubled during this period, and between 51% and 69% of the increase is attributable to the permanent component of earnings. These findings are displayed in Figure 3 of Moffitt and Gottschalk (2012), which is reproduced in Figure 3 here. Because the permanent component better reflects human capital, as opposed to luck and other random variation, one can conclude that a large increase in human capital inequality occurred over the sample period, although only about half to two-thirds of the rise in earnings inequality is genuinely attributable to changes in the distribution of human capital. Their estimates for 2004 indicate that about 60% of earnings inequality is attributable to the permanent component.
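To see how the simplest version of this decomposition can be taken to data, consider the special case in which the permanent component is an individual fixed effect, $y_{it} = \nu_i + \mu_{it}$, with i.i.d. transitory shocks. Then every autocovariance of earnings at non-zero lags identifies $Var(\nu)$, and the transitory variance is the remainder. The sketch below implements this special case; it is our deliberately stripped-down illustration, whereas the studies cited above fit much richer $AR(1)$ and unit-root specifications:

```python
import numpy as np

def permanent_transitory_variances(y):
    """Variance decomposition of log-earnings residuals under the
    canonical model y_it = nu_i + mu_it, where nu_i is a fixed
    permanent component and mu_it is i.i.d. transitory noise.

    y: (N, T) array of residuals for a balanced panel of N
    individuals observed over T years."""
    y = np.asarray(y, dtype=float)
    n, t = y.shape
    total_var = y.var(axis=0, ddof=1).mean()

    # Var(nu) equals the autocovariance of earnings at any lag != 0;
    # averaging over all year pairs improves precision.
    covs = [np.cov(y[:, s], y[:, s + lag], ddof=1)[0, 1]
            for lag in range(1, t) for s in range(t - lag)]
    perm_var = float(np.mean(covs))
    trans_var = total_var - perm_var
    return perm_var, trans_var
```

Repeating the calculation over rolling windows of years traces out the time paths of the two variance components, which is, in stylized form, how trends like those in Figure 3 are constructed.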
Figure 3. Evolution of the total variance of log-earnings residuals from 1970 to 2004, as well as the evolution of both its permanent and transitory components (Moffitt & Gottschalk, 2012).

Lochner and Shin (2014) also study the evolution of inequality in male earnings using PSID data from 1970 to 2008. They specifically focus on the returns to human capital and show that the pricing of unobserved skills has changed dramatically over time: returns to unobserved skills increased in the 1970s and 1980s before falling back by the late 1990s. Moreover, they observe that the variance in log-earnings residuals remained stable until the 1990s before rising. These discrepancies in trends point to changes in the variances of both permanent and transitory shocks. The authors also look at changes in returns at various points in the distribution of earnings (poorer vs. richer individuals) and find that the increase in returns to unobserved skills is not detectable at the top of the earnings distribution. Kopczuk et al. (2010) take earnings averaged over a five-year period to be a measure of the permanent component of earnings in the SSA data. Following this alternative approach, they find that the permanent component is a much larger fraction of the variance of log-earnings, over 80% by the early 2000s. This approach also indicates that the entire rise in earnings inequality is due to a rise in the variance of the permanent component. It is not clear whether the differences in findings between this research and Moffitt and Gottschalk (2012) are due to the definition of the permanent component, working with raw data rather than residuals, or the use of a different data set. While the previously mentioned articles study earnings (which depend both on wages and on labor supply), a closely related literature has focused on the evolution of wages alone. The permanent component of one’s wage rate is also a measure of human capital, under a simple proportionality assumption. Using a permanent-transitory wage decomposition estimated from PSID data, Heathcote, Storesletten, and Violante (2010) study the parameters of an AR(1) process describing permanent wage factors. These authors find that both the permanent and transitory components of wages increased similarly from the mid-1960s to the early 2000s, and that the permanent component is significantly larger than the transitory component. At the end of their sample period, the transitory component accounts for only about 30% of the total variance of log-wage residuals. Heathcote, Perri, and Violante (2010) also study residual wage inequality in the United States using PSID data. Echoing the previously discussed literature, this research finds an increase in residual wage inequality since the mid-1960s and finds that the rise is roughly 50% attributable to an increase in the permanent component of wages and 50% attributable to the transitory component. One key aspect discussed is the impact of alternative ways to identify the components of the wage process. Identification can be achieved using either a set of moments on the level of log-wages or a set of moments on first-differences of log-wages. Although the main findings just mentioned are not affected, the authors do note other important differences in the results obtained under the two identification strategies. This disagreement, they say, “indicates that the permanent-transitory model is misspecified” (p. 40).
This relates closely to the earlier observation that the literature has now directed itself toward studying non-linear dynamics. Guvenen, Karahan, Ozkan, and Song (2015, p. 1) study earnings dynamics using SSA data and find that earnings shocks exhibit “strong negative skewness and extremely high kurtosis,” hence contradicting the log-normality assumption underlying the previously discussed literature. While these authors do not decompose earnings into permanent and transitory components in the same manner described earlier, they do study these components, and their relationships with inequality, using nonparametric methods.11 Their findings are striking. For high-earning individuals, further increases in earnings tend to be transitory, while decreases in earnings tend to be permanent. For low-earning individuals the opposite is true: increases in earnings tend to be persistent and decreases tend to be transitory. It is not yet clear how this affects our understanding of the evolution of residual wage inequality; however, it does provide a further indication of why current (contemporaneous) earnings inequality is a poor measure of inequality in the stock of human capital.

# International Perspective

Most of the studies discussed in this article refer to the United States. This is partly a reflection of the enormous amount of work that has been done using U.S. data. Nonetheless, it is interesting to assess how the patterns of cross-sectional earnings inequality compare across different countries and to pinpoint common trends and differences. A good source of information about cross-sectional earnings dispersion in different countries is the special issue on inequality published by the Review of Economic Dynamics in 2010. This issue contains a selection of papers examining inequality patterns in nine countries (Canada, Germany, Italy, Mexico, Russia, Spain, Sweden, the United States, and the United Kingdom). A study of this detail and scale is rare; the findings are therefore of some significance, as they allow one to compare aspects of inequality across the set of countries considered. Most of the studies in this special issue are based on data for the period between the 1970s and the 2000s: this makes it possible to draw comparisons and assess similarities and discrepancies. Another complementary, and equally rich, source of information can be found in the work of Atkinson and Piketty (2007) (see also Roine & Waldenström, 2015; Morelli, Smeeding, & Thompson, 2015). We begin by highlighting the most recurrent patterns.
Most studies present similarities in the evolution of earnings and income inequality over the period considered:

1. In most countries one observes rising earnings inequality starting in the late 1980s, with the notable exceptions of Spain and Russia.
2. Income inequality is significantly higher than consumption inequality, consistent with the notion that cross-sectional insurance and redistribution play a key role in all countries considered in the study.
3. Earnings and income inequality grew remarkably during the 1990s in most countries, suggesting that common technological or institutional factors (see DiNardo, Fortin, & Lemieux, 1996) may have been key for inequality growth in that period.
4. Residual inequality (wage and earnings dispersion after controlling for observable characteristics) played a key role in the rise of overall inequality, with transitory shocks accounting for a large share of this surge (see Lemieux, 2006a).
5. Across all countries considered, the variance of permanent wage shocks is smaller than the variance of transitory wage shocks.
6. During recessions, inequality in earnings increased quite sharply everywhere, especially at the bottom of the distribution of earnings. This suggests a link between unemployment (or limited employment) and the earnings of the poorest workers, who seem to carry most of the burden of recessions in terms of labor income.

Important differences also become apparent when comparing the inequality experience of different countries. For example, while labor supply plays a central role in shaping household-level inequality dynamics in countries like the United States and Canada (see Abowd & Card, 1989), this is less true in European countries like Italy, Spain, and Germany. This discrepancy highlights the importance of heterogeneity in human capital utilization in different economies: whether labor supply changes at the household level impact the dynamics of aggregate inequality partly depends on the rate of female labor participation and on the distribution of household types (marital status, number of children, education of spouses). Some European countries exhibit a low level of female labor participation, and given the relatively inelastic male labor supply, this may weaken the pass-through from changes in hours worked to overall earnings dispersion. In addition, wage inequality in countries like Italy, Spain, and Germany is measured to be much lower than in the United States, which can make any heterogeneity in labor supply less salient for earnings dispersion. The country that most closely mimics the patterns observed in the United States appears to be Canada (see Green & Sand, 2015). As documented in the work of Brzozowski, Gervais, Klein, and Suzuki (2010), the time paths for wage dispersion, hours dispersion, and the wage-hour correlation since 1975 are fairly close to those observed in the United States. However, despite the growth in income inequality, disposable income and consumption inequality did not grow as much in Canada, suggesting that redistribution through taxation and benefits may have been effective in mitigating the effects of growing economic discrepancies. The relative mitigation effects seem, however, to have become weaker over time: during the 1980s there was a strong rise in before-tax income inequality in Canada, which was mostly absorbed by the tax and transfer system.
However, after the 1990s, when before-tax income inequality rose again, the tax and transfer system was less effective in offsetting the rise in income inequality. This resulted in a more pronounced increase in after-tax income inequality. One interesting aspect that differentiates the United States from Canada is the variation in the health dimension of human capital. Arguably, one of the key aspects determining the value of an individual’s human capital is the relative health enjoyed by that person. The value of human wealth crucially depends on the ability to generate income through labor supply, which becomes harder in poor health. In the United States a sharp decline in labor market, marriage, and health outcomes has been documented for relatively poorer white non-Hispanics. Case and Deaton (2017) find that, for birth cohorts after 1940, this demographic group, especially those with less than a four-year college degree, experienced a decline in real wages that is more pronounced with each successive cohort. This decline in real wages is accompanied by rising mortality from drugs and alcohol poisoning, suicide, higher risk of heavy drinking, chronic pain, labor force detachment, and declining marriage outcomes. These trends are not common among those with a bachelor’s degree; educated men have seen limited changes in health, mental health, and marriage outcomes and have flat profiles for labor force participation, suicide, and drug mortality. The decline in real wages is also not identical across education groups: controlling for age, real wages for those with degrees are on average 10% higher for the cohort born in 1980 relative to the cohort of 1940, while wages for those without a degree are 10% lower. In related work, Milligan and Schirle (2018) present evidence that changes in health inequality were not as extreme in Canada. Using a comprehensive administrative data set of Canadian men and women spanning a half century, Milligan and Schirle examine the relationship between income and health, estimating a gap in life expectancy between the lowest and highest earners of about 11% (an eight-year difference in life span for men). This gap is only about three-fourths as large as the U.S. gap estimated recently by Chetty et al. (2016). Crucially, these authors do not find the same reversal in survival rates for mid-life males that has been documented in U.S. studies like Case and Deaton (2017). In contrast to the U.S. experience, it appears that the evolution of the earnings-longevity gradient can be described as a fairly uniform shift in Canada, with equal improvements among both high and low earners. The fact that in the United States there is a growing mortality gap between the top and bottom of the income distribution while in Canada the mortality gap remains fairly constant suggests the possibility that institutional factors may help mitigate the growth of human capital and earnings inequality. While no clear consensus exists on what exactly can account for such differences, some potential explanations relate to differences in access to health care and education, as well as the incidence of long-run stress and hardship associated with job uncertainty.12 The importance of income stabilizers is confirmed when examining the experience of Sweden, where earnings inequality increased in the early 1990s. As discussed in Domeij and Floden (2010), this growth is largely attributable to movements in and out of employment.
These authors also find that earnings inequality in Sweden was mostly due to increasing residual earnings dispersion, triggered by increasing volatility of persistent shocks. As in the case of Canada, inequality in disposable income (after taxes and transfers) increased much less than inequality in raw incomes, indicating the presence of an effective welfare system. There is no evidence of an increasing trend in household-level consumption inequality, and since persistent shocks are difficult to insure against, it is reasonable to conclude that the Swedish tax-and-transfer system was able to absorb and mitigate the effects of rising income inequality.13 Britain also exhibits patterns that are qualitatively similar to those observed in the United States: income inequality in the United Kingdom rose dramatically during the 1980s and continued its growth over the 1990s, while consumption inequality rose at a slower pace. Interestingly, Blundell and Etheridge (2010) show evidence that the surge of inequality in Britain during the 1980s is attributable to the strong growth of the volatility of permanent shocks perturbing labor income. In turn, this suggests a role for structural transformation that affected both the cross-sectional distribution of, and returns to, human capital: as they point out, changing education differentials played a large role in this episode, especially for the growth of inequality over the 1980s and early 1990s. The subsequent growth can instead be attributed to changes in transitory volatility, as in many other countries in this comparative study. This evidence is consistent with earlier results in Gosling, Machin, and Meghir (2000) (see also Blundell, Gosling, Ichimura, & Meghir, 2007). One important caveat is that caution must be exercised when examining the drivers of inequality in different contexts, as local conditions and arrangements can affect both the distribution of human capital and its returns. One example of this is provided by the Mexican experience, as documented in Binelli and Attanasio (2010). Mexico’s market structure is quite different from that of its northern neighbors, the United States and Canada. Distinctive features of Mexico’s labor market are the relative size of its informal sector and one of the highest levels of measured income inequality in the world, exceeding that of the United States. Inequality in Mexico rose most prominently during the 1990s, when the “peso crisis” occurred, and a possible explanation is that the share of workers in the informal sector responded to changes in aggregate conditions: given the unregulated labor market, unskilled workers are more vulnerable during economic downturns, and when unemployment hits during an economic crisis, many workers see their wages negatively impacted as they accept informal employment. The cyclical sensitivity of the returns to labor therefore has a large effect on the poorer sections of the working population and affects inequality by stretching out the bottom end of the earnings distribution. Another example of the central importance of local dynamics is Germany, a country with a unique historical background. In this case a major event—the reunification in 1990—has shaped the recent evolution of inequality. Earnings dispersion was stable in the years before 1990, but after West and East Germany were rejoined, both wage and earnings inequality increased almost mechanically.
Interestingly, the work by Fuchs-Schündeln, Krueger, and Sommer (2010) documents that the German public welfare system was rather effective in mitigating the impact of growing inequality, as disposable income and consumption inequality increased only modestly. Two countries that bucked the overall trend of increasing inequality between the 1980s and early 2000s were Spain and Russia. In the case of Spain, Pijoan-Mas and Sánchez-Marcos (2010) show that inequality in individual net labor earnings and household net disposable income decreased significantly over the study period (1985–2000). A partial explanation can be found in the fact that the unemployment rate was extremely high at the beginning of the sample period, despite a continued economic expansion. Two key changes shaped the evolution of inequality in Spain after 1985: first, the tertiary education premium fell (in contrast to many other countries); second, the unemployment rate fell from 24% in 1985 to 13% in 2000. These factors ultimately had the effect of reducing inequality in labor earnings in Spain by compressing both the top and bottom ends of the distribution. The study also finds that the decrease in income inequality was driven mostly by changes in the distribution of permanent components of earnings. This is in sharp contrast to the case of Russia, the other country where, in recent periods, recorded income inequality exhibited a downward trajectory. Analysis of Russian micro-data by Gorodnichenko, Sabirianova-Peter, and Stolyarov (2010) suggests that a moderation in the volatility of transitory shocks during a period of strong economic recovery was responsible for this pattern. Interestingly, expenditure and income inequality in Russia are not far apart, which potentially indicates a lack of cross-sectional insurance and an ineffective transfer system that is unable to mitigate fluctuations in disposable income and consumption. While all these studies are useful for characterizing the evolution of cross-sectional earnings inequality in a diverse set of countries, there is still relatively little work on the distribution of lifetime earnings and the present value of human capital in countries outside the United States. As mentioned before, the lifetime notion of human wealth is appealing as it relates to extended flows of income associated with underlying productive skills. The lack of international evidence on lifetime human wealth inequality is possibly due to the fact that such studies require rich data and more complex analytical approaches. One notable non-U.S. study on the relationship between current and lifetime income uses high-quality administrative data from Norway (Aaberge & Mogstad, 2014). Specifically, this study examines longitudinal data between 1942 and 2006, considering cohorts born in the interval 1942–1944. In this way the authors are able to capture the full working life of the sample members between ages 23/25 and 62/64. The measure of lifetime income is computed using the approach of Haider and Solon (2006) and is consistent with studies in the American context focusing on the annuity value of the discounted sum of real income. Findings from this study are broadly consistent with those obtained for U.S. data: they highlight the life-cycle bias in an individual’s earnings profile and show that inequality measured using lifetime income is much lower than inequality measured using cross-sectional income.
# Ongoing Debate on Human Capital

Heterogeneity in human capital is a key source of differences in economic well-being. This article provides a synopsis of the empirical approaches that have characterized the analysis of human capital inequality over the past few decades. In the process, it overviews various methodological and measurement issues and summarizes the existing evidence on the changing patterns in the distribution of returns to labor and human wealth across various countries. The focus is on specific aspects of human capital measurement and on the empirical evidence gathered by the large literature on these topics. It is impossible to summarize this extensive research entirely within such a limited space. One key aspect on which this article remains silent is the analysis of the fundamental causes underlying the persisting, and often growing, disparities in human capital and earnings. Some of the potential causes examined in the literature relate to innate abilities, heterogeneity of early investments in human capital due to family background, as well as short-term credit constraints that limit the ability to attend formal schooling, especially at the tertiary education level. Although much academic and policy focus has been on returns to schooling, several empirical studies have highlighted the importance of early environments in fostering cognitive and non-cognitive skills, which are key in determining the development of human capital and, eventually, economic success. This growing body of work has stressed the importance of childhood influences, especially at very young ages, on skill development, arguing that while institutional learning is an important aspect of skill development, it is not the only channel through which skills are developed, and not necessarily the most important (see Aizer & Currie, 2014; Heckman, 2000). The key message of this literature is that a child’s environment is a major predictor of future success, as disadvantages arise from a lack of cognitive and non-cognitive stimulus early in life, or even from adverse environments at or before birth itself. These disadvantages are then compounded as the child moves through subsequent stages of development (see Cunha & Heckman, 2007; Heckman, 2004). While distinct, all these explanations find their motivation in the underlying lack of equal economic opportunities offered to young individuals. The broad set of measures and strands of research discussed in this article all suggest that human capital inequality has grown since the late 1970s. However, since human capital is not directly observable, new results and empirical evidence can, and certainly will, be brought forward to further refine and advance our understanding of human capital heterogeneity. As stressed all along, earnings inequality is not the same as human capital inequality, and comparisons between annual and lifetime earnings inequality show how much transitory variation can lead annual measures to overstate underlying human capital disparities. Lifetime earnings are themselves not a satisfactory proxy for human capital, as they encompass a large component of realized shocks that are unrelated to human capital. It is likely that the direction of future research in this field will lean toward data and methods that shed new light on individuals’ ex ante, heterogeneous valuations of their human wealth.
To understand the cross-sectional disparities in permanent income and human wealth, it will be crucial to measure how much of realized lifetime income inequality is a predictable reflection of human capital at any point in time, as opposed to sheer luck or random unpredictable events. Answering this question has immediate and profound implications for tax and redistribution policies, as well as for the positive assessment of what drives individuals to vastly different long-term outcomes.

# Acknowledgments

We acknowledge support through an Insight grant from the SSHRC of Canada. We are grateful to Sarah O’Brien, Robin Li, Pietro Montanarella, and Lily Suh for excellent research assistance.

## References

Aaberge, R., & Mogstad, M. (2014). Income mobility as an equalizer of permanent income. Statistics Norway Discussion Papers, 769.
Abbott, B., & Gallipoli, G. (2017). Human capital spill-overs and the geography of intergenerational mobility. Review of Economic Dynamics, 25, 208–233.

Abbott, B., & Gallipoli, G. (2019). Permanent-income inequality. CEPR Discussion Papers 13540.

Abbott, B., Gallipoli, G., Meghir, C., & Violante, G. L. (2019, December). Education policy and intergenerational transfers in equilibrium. Journal of Political Economy, 127(6).

Abowd, J. M., & Card, D. (1989). On the covariance structure of earnings and hours changes. Econometrica, 57, 411–445.

Aguiar, M. A., & Bils, M. (2015). Has consumption inequality mirrored income inequality? American Economic Review, 105, 2725–2756.

Aizer, A., & Currie, J. (2014). The intergenerational transmission of inequality: Maternal disadvantage and health at birth. Science, 344, 856–861.

Arellano, M., Blundell, R., & Bonhomme, S. (2017). Earnings and consumption dynamics: A nonlinear panel data framework. Econometrica, 85, 693–734.

Atkinson, A., & Piketty, T. (2007). Top incomes over the twentieth century: A contrast between continental European and English-speaking countries. New York: Oxford University Press.

Barth, E., Bryson, A., Davis, J. C., & Freeman, R. (2016). It's where you work: Increases in the dispersion of earnings across establishments and individuals in the United States. Journal of Labor Economics, 34, S67–S97.

Becker, G. S. (1962). Investment in human capital: A theoretical analysis. Journal of Political Economy, 70(5), 9–49.

Becker, G. S. (1964). Human capital: A theoretical and empirical analysis, with special reference to education (3rd ed.). Chicago: University of Chicago Press.

Binelli, C., & Attanasio, O. (2010). Mexico in the 1990s: The main cross-sectional facts. Review of Economic Dynamics, 13, 238.

Blundell, R., & Etheridge, B. (2010). Consumption, income and earnings inequality in Britain. Review of Economic Dynamics, 13, 76–102.

Blundell, R., Gosling, A., Ichimura, H., & Meghir, C. (2007). Changes in the distribution of male and female wages accounting for employment composition using bounds. Econometrica, 75, 323–363.

Blundell, R., Pistaferri, L., & Preston, I. (2008). Consumption inequality and partial insurance. American Economic Review, 98, 1887–1921.

Blundell, R., Pistaferri, L., & Saporta-Eksten, I. (2016). Consumption inequality and family labor supply. American Economic Review, 106, 387–435.

Bowlus, A. J., & Robin, J.-M. (2004). Twenty years of rising inequality in U.S. lifetime labour income values. Review of Economic Studies, 71(3), 709–742.

Brzozowski, M., Gervais, M., Klein, P., & Suzuki, M. (2010). Consumption, income, and wealth inequality in Canada. Review of Economic Dynamics, 13, 52.

Case, A., & Deaton, A. (2017). Mortality and morbidity in the 21st century. Brookings Papers on Economic Activity, 2017, 397.

Chetty, R., Stepner, M., Abraham, S., Lin, S., Scuderi, B., Turner, N., . . . Cutler, D. (2016). The association between income and life expectancy in the United States, 2001–2014. Journal of the American Medical Association, 315, 1750–1766.

Cunha, F., & Heckman, J. (2007). The technology of skill formation. American Economic Review, 97, 31–47.
Cunha, F., Heckman, J., & Navarro, S. (2005). Separating uncertainty from heterogeneity in life cycle earnings. Oxford Economic Papers, 57, 191–261.

De Nardi, M., Fella, G., & Paz-Pardo, G. (forthcoming). Nonlinear household earnings dynamics, self-insurance, and welfare. Journal of the European Economic Association.

DiNardo, J., Fortin, N. M., & Lemieux, T. (1996). Labor market institutions and the distribution of wages, 1973–1992: A semiparametric approach. Econometrica, 64, 1001–1044.

Domeij, D., & Floden, M. (2010). Inequality trends in Sweden 1978–2004. Review of Economic Dynamics, 13, 179–208.

Dyrda, S., & Pugsley, B. (2018). Taxes, private equity and evolution of income inequality in the US. Technical report, University of Toronto.

Eckstein, Z., & Nagypal, E. (2004). The evolution of US earnings inequality: 1961–2002. Federal Reserve Bank of Minneapolis Quarterly Review, 28, 10–29.

Friedman, M. (1957). A theory of the consumption function. Princeton, NJ: Princeton University Press.

Fuchs-Schündeln, N., Krueger, D., & Sommer, M. (2010). Inequality trends for Germany in the last two decades: A tale of two countries. Review of Economic Dynamics, 13, 103–132.

Gorodnichenko, Y., Sabirianova-Peter, K., & Stolyarov, D. (2010). Inequality and volatility moderation in Russia: Evidence from micro-level panel data on consumption and income. Review of Economic Dynamics, 13, 209–237.

Gosling, A., Machin, S., & Meghir, C. (2000). The changing distribution of male wages in the UK. The Review of Economic Studies, 67, 635–666.

Gottschalk, P., Moffitt, R., Katz, L. F., & Dickens, W. T. (1994). The growth of earnings instability in the US labor market. Brookings Papers on Economic Activity, 2, 217–272.

Green, D. A., & Sand, S. M. (2015). Has the Canadian labour market polarized? Canadian Journal of Economics/Revue Canadienne d'Economique, 48, 612–646.

Guvenen, F., & Kaplan, G. (2017). Top income inequality in the 21st century: Some cautionary notes. NBER Working Papers 23321.

Guvenen, F., Kaplan, G., Song, J., & Weidner, J. (2017). Lifetime incomes in the United States over six decades. Technical report, National Bureau of Economic Research.

Guvenen, F., Karahan, F., Ozkan, S., & Song, J. (2015). What do data on millions of US workers reveal about life-cycle earnings risk? Technical report, National Bureau of Economic Research.

Haider, S., & Solon, G. (2006). Life-cycle variation in the association between current and lifetime earnings. American Economic Review, 96, 1308–1320.

Haveman, R., Bershadker, R., & Schwabish, J. (2003). Human capital in the United States from 1975 to 2000: Patterns of growth and utilization. Kalamazoo, MI: Upjohn Institute.

Heathcote, J., Perri, F., & Violante, G. L. (2010). Unequal we stand: An empirical analysis of economic inequality in the United States, 1967 to 2006. Review of Economic Dynamics, 13, 15–51.

Heathcote, J., Storesletten, K., & Violante, G. L. (2010). The macroeconomic implications of rising wage inequality in the US. Journal of Political Economy, 118, 681–722.

Heckman, J. (2004). Skill formation and the economics of investing in disadvantaged children. Science, 312, 1900–1902.

Heckman, J. J. (2000). Policies to foster human capital. Research in Economics, 54, 3–56.

Hubmer, J., Krusell, P., & Smith, A. A., Jr. (2016). The historical evolution of the wealth distribution: A quantitative-theoretic investigation. NBER Working Papers 23011.
Huggett, M., & Kaplan, G. (2016). How large is the stock component of human capital? Review of Economic Dynamics, 22, 21–51.

Huggett, M., Ventura, G., & Yaron, A. (2006). Human capital and earnings distribution dynamics. Journal of Monetary Economics, 53, 265–290.

Jappelli, T., & Pistaferri, L. (2017). The economics of consumption: Theory and evidence. New York: Oxford University Press.

Jorgenson, D., & Fraumeni, B. M. (1989). The accumulation of human and nonhuman capital. In R. E. Lipsey & H. S. Tice (Eds.), The measurement of saving, investment, and wealth (pp. 227–286). Chicago: University of Chicago Press.

Kaymak, B., & Poschke, M. (2016). The evolution of wealth inequality over half a century: The role of taxes, transfers and technology. Journal of Monetary Economics, 77, 1–25.

Kopczuk, W., Saez, E., & Song, J. (2010). Earnings inequality and mobility in the United States: Evidence from social security data since 1937. Quarterly Journal of Economics, 125, 91–128.

Krueger, D., Mitman, K., & Perri, F. (2016). Macroeconomics and household heterogeneity. In J. B. Taylor & H. Uhlig (Eds.), Handbook of macroeconomics (Vol. 2, pp. 843–921). Amsterdam, The Netherlands: Elsevier.

Kuhn, M., Schularick, M., & Steins, U. (2018). Income and wealth inequality in America, 1949–2016. Technical report, CEPR Discussion Paper 12218.

Lemieux, T. (2006a). Increasing residual wage inequality: Composition effects, noisy data, or rising demand for skill? American Economic Review, 96, 461–498.

Lemieux, T. (2006b). Postsecondary education and increasing wage inequality. American Economic Review, 96, 195–199.

Lillard, L. A. (1977). Inequality: Earnings vs. human wealth. American Economic Review, 67(2), 42–53.

Lochner, L., & Shin, Y. (2014). Understanding earnings dynamics: Identifying and estimating the changing roles of unobserved ability, permanent and transitory shocks. Technical report, National Bureau of Economic Research.

Low, H., Meghir, C., & Pistaferri, L. (2010). Wage risk and employment risk over the life cycle. American Economic Review, 100, 1432–1467.

Meghir, C., & Pistaferri, L. (2006). Income variance dynamics and heterogeneity. Econometrica, 72, 1–32.

Milligan, K., & Schirle, T. (2018). The evolution of longevity: Evidence from Canada. Mimeo.

Modigliani, F., & Brumberg, R. (1954). Utility analysis and the consumption function: An interpretation of cross-section data. Franco Modigliani, 1, 388–436.

Moffitt, R. A., & Gottschalk, P. (2012). Trends in the transitory variance of male earnings: Methods and evidence. Journal of Human Resources, 47, 204–236.

Morelli, S., Smeeding, T., & Thompson, J. (2015). Post-1970 trends in within-country inequality and poverty: Rich and middle-income countries. In A. B. Atkinson & F. Bourguignon (Eds.), Handbook of income distribution (Vol. 2, pp. 593–696). Amsterdam, The Netherlands: Elsevier.

Pijoan-Mas, J., & Sánchez-Marcos, V. (2010). Spain is different: Falling trends of inequality. Review of Economic Dynamics, 13, 154–178.

Quadrini, V., & Ríos-Rull, J.-V. (2015). Inequality in macroeconomics. In A. B. Atkinson & F. Bourguignon (Eds.), Handbook of income distribution (Vol. 2, pp. 1229–1302). Amsterdam, The Netherlands: Elsevier.

Ríos-Rull, J.-V., & Kuhn, M. (2016). 2013 update on the US earnings, income, and wealth distributional facts: A view from macroeconomics. Federal Reserve Bank of Minneapolis Quarterly Review, 37(1), 2–73.

Roine, J., & Waldenström, D. (2015). Long-run trends in the distribution of income and wealth. In A. B. Atkinson & F. Bourguignon (Eds.), Handbook of income distribution (Vol. 2, pp. 469–592). Amsterdam, The Netherlands: Elsevier.

Smith, M., Yagan, D., Zidar, O., & Zwick, E. (2018). Capitalists in the twenty-first century. Mimeo.
Song, J., Price, D. J., Guvenen, F., Bloom, N., & Von Wachter, T. (2015). Firming up inequality. Technical report, National Bureau of Economic Research.

## Notes

(1.) For example, a closely related literature on the relationship between consumption inequality and income inequality exists. Examples include Blundell et al. (2008), Aguiar and Bils (2015), and Blundell et al. (2016), among many others, as summarized in the interesting book of Jappelli and Pistaferri (2017).

(2.) An obvious measurement issue is that the remuneration of human capital can be recorded as a payment for business activities and possibly be treated like capital income. Active business activity often takes the form of sole proprietorship of corporations organized as “limited liability” legal entities that pay corporate income tax on annual taxable income. In such circumstances the owners can be active earners rather than passive rentiers, as pointed out by Smith et al. (2018) and Dyrda and Pugsley (2018). This complicates the measurement and taxation of human capital returns.

(3.) Non-trivial issues of measurement become apparent when looking at labor income payments, as pointed out by Guvenen and Kaplan (2017). These authors study top income inequality, measured as the share of income accruing to individuals in the top percentiles of the income distribution: crucially, they use a combination of total income data from the Internal Revenue Service (IRS) and labor income information from the Social Security Administration (SSA), starting from 1981. These two data sets allow them to isolate differences in top income inequality based on the definition of income (labor income vs. total income) and to differentiate with respect to the unit of analysis (tax unit vs. individual).

(4.) Furthermore, Song et al. (2015) show that inequality that is not explained away by observables, such as education, increased very little.

(5.) Song et al. (2015) suggest that the firm, rather than the industry, is the key dimension.

(6.) Blundell et al. (2016) also estimate human wealth, i.e., expected lifetime earnings, but crucially they do not consider state-dependent stochastic discounting.

(7.) To learn about inequality across groups, for example, one must estimate a separate wage process for each partition. One such example is the work of Abbott et al. (2019), who estimate and simulate separate wage processes for six groups: men and women, each across three education levels.

(8.) See Arellano et al. (2017) or Guvenen et al. (2015) for recent research on nonlinear implementations.

(9.) What one can learn about the nature of earnings dynamics from longer auto-covariances is discussed in Meghir and Pistaferri (2006).

(10.) Residuals are defined as the component of log-earnings not predicted by education, race, an age polynomial, and interactions among these variables.

(11.) Theoretical work by Huggett et al. (2006) shows that some of these patterns can be replicated in a model of human capital dynamics.

(12.) One aspect worth highlighting is that the concentration of wealth in Canada is less severe than that in the United States: the top 5% of households holds 35% of wealth in Canada, which is similar to the share held by the top 1% of households in the United States. A related literature examines the implications of changes in tax and transfer systems for inequality (see Hubmer et al., 2016; Kaymak & Poschke, 2016).

(13.)
Questions remain about the process of joint determination of human capital outcomes and redistributive policies. Abbott and Gallipoli (2017) explore the possibility that the shape of the distribution of human capital may itself affect optimal public policies and redistribution. They provide microdata evidence that this may in fact be the case for a set of developed countries.
2020-02-18 19:28:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 8, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5306957960128784, "perplexity": 2554.292899238249}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875143805.13/warc/CC-MAIN-20200218180919-20200218210919-00038.warc.gz"}
https://www.physicsforums.com/threads/radius-of-convergence-around-a-2-for-ln-x.547169/
Radius of convergence around a=2 for ln(x)

When I use the formula from Wikipedia that says the radius of convergence of a series is the limit as n goes to infinity of |an/an+1|, I get an infinite radius of convergence for the Taylor series expansion of ln(x) around a=2. That would mean the series is valid everywhere, which does not make sense. What am I doing wrong?
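For reference, the standard computation runs as follows (this worked step is an addition, not part of the original post):

$$\ln x = \ln 2 + \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n\,2^n}(x-2)^n, \qquad a_n = \frac{(-1)^{n+1}}{n\,2^n}, \qquad \lim_{n\to\infty}\left|\frac{a_n}{a_{n+1}}\right| = \lim_{n\to\infty}\frac{2(n+1)}{n} = 2,$$

so the radius of convergence is 2, matching the distance from a=2 to the singularity of ln at x=0.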
2021-04-10 20:16:05
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9464215040206909, "perplexity": 133.70287799000448}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038057476.6/warc/CC-MAIN-20210410181215-20210410211215-00022.warc.gz"}
http://mathhelpforum.com/advanced-statistics/165578-exponential-brownian-motion.html
# Math Help - Exponential of a Brownian motion 1. ## Exponential of a Brownian motion How do I calculate the expectation of the exponential of a Brownian motion in an easy way, I mean this: $E[Exp(a W_{t})]$ With $W_{t}$ a Brownian motion Thx for the help 2. Write it out in integral form $ \int e^{a x} (2\pi t)^{-1/2}e^{-\frac{x^2}{2t}} dx. $
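For completeness, the integral evaluates by completing the square in the exponent (this final step is my addition; the thread itself stops at the integral):

$ax - \frac{x^2}{2t} = -\frac{(x-at)^2}{2t} + \frac{a^2 t}{2}, \qquad \text{so} \qquad E\left[e^{a W_t}\right] = e^{a^2 t/2}\int_{-\infty}^{\infty} (2\pi t)^{-1/2} e^{-\frac{(x-at)^2}{2t}}\, dx = e^{a^2 t/2},$

since the remaining integral is that of a normal density with mean $at$ and variance $t$, which is 1.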
2015-05-25 14:05:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9620616436004639, "perplexity": 929.9871870359569}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928501.75/warc/CC-MAIN-20150521113208-00196-ip-10-180-206-219.ec2.internal.warc.gz"}
https://mathematica.stackexchange.com/questions/246424/implicit-function-in-an-ode-when-using-dsolve
# Implicit function in an ODE when using DSolve

I'm trying to solve an ODE with DSolve. The problem I'm facing is that the ODE contains an implicit function of the variable r*. The equation I'm trying to solve is

-q''[r*] + (-2 (a/r^3) (1 - a/r)^3 - w^2) q[r*] == 0

with a and w parameters and r*[r] = 2 a Log[r-a] - a^2/(r-a) + r (the tortoise coordinate), which cannot be inverted, so I cannot write my ODE in terms of r* alone. Is it possible to solve the equation by somehow using the implicit function relation?

• r* means r times (whatever follows) in Mathematica. It seems you mean it to be either a function like rstar[r] or a variable rstar. Ultimately, do you want q as a function of r or of rstar? The latter should be theoretically possible if you write r[rstar] instead of r and use the implicit equation rstar==2 a Log[r[rstar]-a]-a^2/(r[rstar]-a)+r[rstar]. Probably DSolve can't solve it, I'd guess. But you could use NDSolve or ParametricNDSolve on numeric values for a and w. May 21 at 13:54

• You can use Format[rstar] = Superscript["r", "*"]; to change the display of rstar in output. May 21 at 16:46
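For what it's worth, here is a rough numerical sketch of the strategy the first comment suggests (invert the tortoise coordinate numerically, then integrate), transposed to Python/SciPy rather than Mathematica; the parameter values and names are illustrative assumptions, not from the question:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

a, w = 1.0, 0.5  # illustrative values for the parameters a and w

def rstar(r):
    # tortoise coordinate r*(r) = 2a Log(r - a) - a^2/(r - a) + r
    return 2*a*np.log(r - a) - a**2/(r - a) + r

def r_of_rstar(rs):
    # invert r*(r) by root-finding; r*(r) is strictly increasing for r > a
    return brentq(lambda r: rstar(r) - rs, a + 1e-9, 1e6)

def rhs(rs, y):
    # first-order form of  -q'' + (-2(a/r^3)(1 - a/r)^3 - w^2) q = 0
    q, dq = y
    r = r_of_rstar(rs)
    V = -2*(a/r**3)*(1 - a/r)**3 - w**2
    return [dq, V*q]

# integrate q(r*) between two illustrative values of r*
sol = solve_ivp(rhs, [rstar(2*a), rstar(20*a)], [1.0, 0.0], rtol=1e-8)
print(sol.y[0, -1])
```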
2021-10-19 22:50:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7228705286979675, "perplexity": 1515.5380577250867}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585281.35/warc/CC-MAIN-20211019202148-20211019232148-00309.warc.gz"}
https://youth2009.org/post/cookies-domain/
## March 3, 2016 by dc

There are some RFCs about HTTP state management: RFC2109, RFC2965, RFC6265. My environment is Chrome Version 47.0.2526.106 (64-bit), and I use Tornado to set cookies.

According to the RFCs, if you provide a domain field in Set-Cookie, you should keep a dot at the beginning of the domain name; if you forget it, the HTTP client should add it for you. If you omit the domain field, the domain value is set to the request host.

## without domain

This is set by Set-Cookie:a=a; Path=/. The Domain value is the request host.

## with domain

This is set by Set-Cookie:a=b; Domain=.dev.dmright.com; Path=/. I added a dot at the beginning. You will find that the domain with a dot and the domain without a dot are different cookies, even though you set the same name.

What if I set a cookie with a domain but without the dot?

## with domain but without dot

This is set by Set-Cookie:a=c; Domain=dev.dmright.com; Path=/. You will find that the second scenario's cookie is overridden: as the RFC says, if you forget the dot at the beginning, the client adds it for you.

The with-dot domain can match subdomains of it; e.g., the .dev.dmright.com cookie will be sent to the server when the request URI is x.dev.dmright.com, but the without-dot (host-only) domain will not.

Best practice: always set cookies without a domain, unless you know exactly what you want.
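To make the setup concrete, here is a minimal Tornado handler showing the two variants discussed above (the handler name and cookie values are my own, not from the post):

```python
import tornado.web

class CookieHandler(tornado.web.RequestHandler):
    def get(self):
        # no domain attribute: a host-only cookie, scoped to the exact request host
        self.set_cookie("a", "a", path="/")
        # explicit domain: matches dev.dmright.com and subdomains like x.dev.dmright.com
        self.set_cookie("a", "b", domain=".dev.dmright.com", path="/")
        self.write("cookies set")
```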
2018-04-24 12:53:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34972190856933594, "perplexity": 5109.489698060408}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125946688.88/warc/CC-MAIN-20180424115900-20180424135900-00557.warc.gz"}
https://www.physicsforums.com/threads/matlab-for-loop-simple-division-problem.307761/
# Matlab, for loop simple division problem

• MATLAB

I put down a script like this

for u=(10:10:20)'
i=(1:size(u,1))'
X=zeros(size(u,1),1)
X(i,1)=100/u(i,1)
end

I expect to get a result like

X =
    10
    5

but it came out like

X =
    0
    5

It seems it doesn't work when the equation contains /. Please help!!

## Answers and Replies

MATLABdude
Did you realize that i contains two elements? The syntax u(i,1) makes no sense when i contains two elements. You've either got u and i jumbled up, or I'm not understanding what you're trying to do. You also need to initialize your variables outside the loop. So, this is probably the code you're looking for:

Code:
u=(10:10:20)'
X=zeros(size(u,1), 1)
for i = 1:size(u,1)
    X(i,1) = 100 / u(i,1)
end

Which produces the desired results.

EDIT: Ooops, forgot some parentheses...
2022-05-19 09:09:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45139673352241516, "perplexity": 3264.1566074961775}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662526009.35/warc/CC-MAIN-20220519074217-20220519104217-00110.warc.gz"}
https://ktbssolutions.com/kseeb-solutions-for-class-8-maths-chapter-1-ex-1-1/
# KSEEB Solutions for Class 8 Maths Chapter 1 Rational Numbers Ex 1.1

Question 1. Using appropriate properties find:
(i) $$-\frac{2}{3} \times \frac{3}{5}+\frac{5}{2}-\frac{3}{5} \times \frac{1}{6}$$
(ii) $$\frac{2}{5} \times\left(-\frac{3}{7}\right)-\frac{1}{6} \times \frac{3}{2}+\frac{1}{14} \times \frac{2}{5}$$
Solution:
(i) $$-\frac{2}{3} \times \frac{3}{5}+\frac{5}{2}-\frac{3}{5} \times \frac{1}{6} = -\frac{2}{5}+\frac{5}{2}-\frac{1}{10} = \frac{-4+25-1}{10} = \frac{20}{10} = 2$$
(ii) $$\frac{2}{5} \times\left(-\frac{3}{7}\right)-\frac{1}{6} \times \frac{3}{2}+\frac{1}{14} \times \frac{2}{5} = -\frac{6}{35}-\frac{1}{4}+\frac{1}{35} = -\frac{1}{7}-\frac{1}{4} = -\frac{11}{28}$$

Question 2. Write the additive inverse of each of the following:
(i) $$\frac{2}{8}$$ (ii) $$\frac{-5}{9}$$ (iii) $$\frac{-6}{-5}$$ (iv) $$\frac{2}{-9}$$ (v) $$\frac{19}{-6}$$
Solution:
(i) The additive inverse of $$\frac{2}{8}$$ is $$\frac{-2}{8}$$.
(ii) The additive inverse of $$\frac{-5}{9}$$ is $$-\left(\frac{-5}{9}\right)$$ = $$\frac{5}{9}$$.
(iii) $$\frac{-6}{-5}=\frac{6}{5}$$ ∴ The additive inverse of $$\frac{6}{5}$$ is $$\frac{-6}{5}$$.
(iv) $$\frac{2}{-9}=\frac{-2}{9}$$ ∴ The additive inverse of $$\frac{-2}{9}$$ is $$\frac{2}{9}$$.
(v) $$\frac{19}{-6}=\frac{-19}{6}$$ ∴ The additive inverse of $$\frac{-19}{6}$$ is $$\frac{19}{6}$$.

Question 3. Verify that -(-x) = x for:
(i) x = $$\frac{11}{15}$$ (ii) x = $$-\frac{13}{17}$$
Solution:
(i) For x = $$\frac{11}{15}$$: $$-x = -\frac{11}{15}$$, so $$-(-x) = -\left(-\frac{11}{15}\right) = \frac{11}{15} = x$$.
(ii) For x = $$-\frac{13}{17}$$: $$-x = \frac{13}{17}$$, so $$-(-x) = -\frac{13}{17} = x$$.

Question 4. Find the multiplicative inverse of the following:
(i) -13 (ii) $$\frac{-13}{19}$$ (iii) $$\frac{1}{5}$$ (iv) $$\frac{-5}{8} \times \frac{-3}{7}$$ (v) $$-1 \times \frac{-2}{5}$$ (vi) -1
Solution:
(i) The multiplicative inverse of -13 is $$\frac{1}{-13}$$.
(ii) The multiplicative inverse of $$\frac{-13}{19}$$ is $$\frac{1}{\frac{-13}{19}}=\frac{19}{-13}$$.
(iii) The multiplicative inverse of $$\frac{1}{5}$$ is 5.
(iv) The multiplicative inverse of $$\frac{-5}{8} \times \frac{-3}{7}=\frac{15}{56}$$ is $$\frac{56}{15}$$.
(v) The multiplicative inverse of $$-1 \times \frac{-2}{5}=\frac{2}{5}$$ is $$\frac{5}{2}$$.
(vi) The multiplicative inverse of -1 is $$\frac{1}{-1}$$ = -1.

Question 5. Name the property under multiplication used in each of the following:
(i) $$\frac{-4}{5} \times 1=1 \times \frac{-4}{5}=-\frac{4}{5}$$
(ii) $$-\frac{13}{17} \times \frac{-2}{7}=\frac{-2}{7} \times \frac{-13}{17}$$
(iii) $$\frac{-19}{29} \times \frac{29}{-19}=1$$
Solution:
(i) 1 is the multiplicative identity.
(ii) Commutativity.
(iii) Multiplicative inverse.

Question 6. Multiply $$\frac{6}{13}$$ by the reciprocal of $$\frac{-7}{16}$$.
Solution: The reciprocal of $$\frac{-7}{16}$$ is $$\frac{16}{-7}$$. So $$\frac{6}{13} \times \frac{16}{-7}=\frac{96}{-91}=-\frac{96}{91}$$.

Question 7. Tell what property allows you to compute $$\frac{1}{3} \times\left(6 \times \frac{4}{3}\right)$$ as $$\left(\frac{1}{3} \times 6\right) \times \frac{4}{3}$$.
Solution: For any three rational numbers a, b and c, a × (b × c) = (a × b) × c. Multiplication is associative for rational numbers.

Question 8. Is $$\frac{8}{9}$$ the multiplicative inverse of $$-1 \frac{1}{8}$$? Why or why not?
Solution: The multiplicative inverse of $$\frac{8}{9}$$ is $$\frac{9}{8}$$, i.e. $$1 \frac{1}{8}$$. But here the number is $$-1 \frac{1}{8}$$, which is negative, and $$\frac{8}{9} \times \left(-\frac{9}{8}\right) = -1 \neq 1$$.
∴ $$\frac{8}{9}$$ is not the multiplicative inverse of $$-1 \frac{1}{8}$$.

Question 9. Is 0.3 the multiplicative inverse of $$3 \frac{1}{3}$$? Why or why not?
Solution: 0.3 = $$\frac{3}{10}$$. The multiplicative inverse of $$\frac{3}{10}$$ is $$\frac{10}{3}$$ = $$3 \frac{1}{3}$$.
Hence, 0.3 is the multiplicative inverse of $$3 \frac{1}{3}$$, since $$\frac{3}{10} \times \frac{10}{3}=1$$ and the product of a number and its multiplicative inverse is 1.

Question 10. Write:
(i) The rational number that does not have a reciprocal.
(ii) The rational numbers that are equal to their reciprocals.
(iii) The rational number that is equal to its negative.
Solution:
(i) Zero is the rational number that does not have a reciprocal.
(ii) 1 is a rational number that is equal to its reciprocal; (-1) is also such a rational number.
(iii) 0 is the number that is equal to its negative.

Question 11. Fill in the blanks:
(i) Zero has ________ reciprocal.
(ii) The numbers _________ and ________ are their own reciprocals.
(iii) The reciprocal of -5 is ________
(iv) Reciprocal of $$\frac{1}{x}$$, where x ≠ 0 is ________
(v) The product of two rational numbers is always a ________
(vi) The reciprocal of a positive rational number is ________
Solution:
(i) No
(ii) 1, -1
(iii) $$\frac{-1}{5}$$
(iv) x
(v) Rational number
(vi) Positive
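Since the exercise is pure fraction arithmetic, the answers are easy to double-check with Python's fractions module (a verification aid added here, not part of the KSEEB solutions):

```python
from fractions import Fraction as F

# Question 1(i): -2/3 * 3/5 + 5/2 - 3/5 * 1/6
print(F(-2, 3) * F(3, 5) + F(5, 2) - F(3, 5) * F(1, 6))            # 2
# Question 1(ii): 2/5 * (-3/7) - 1/6 * 3/2 + 1/14 * 2/5
print(F(2, 5) * F(-3, 7) - F(1, 6) * F(3, 2) + F(1, 14) * F(2, 5))  # -11/28
# Question 8: 8/9 times -9/8 is -1, not 1, so they are not inverses
print(F(8, 9) * F(-9, 8))                                           # -1
```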
2022-08-13 00:59:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9233234524726868, "perplexity": 975.0011345385265}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571847.45/warc/CC-MAIN-20220812230927-20220813020927-00342.warc.gz"}
https://mathinsight.org/assess/math201up_spring19/maximization_minimization/overview
# Math Insight

### Overview of: Maximization and minimization

Local minima or maxima of a function $f$ can occur only at a critical point of $f$. (After all, $f$ must be either always increasing or always decreasing in the intervals between critical points, so none of these interior points can be a local maximum or local minimum.) Whether a critical point is a maximum or a minimum depends on whether $f$ changes from increasing to decreasing or vice versa.

A global maximum or minimum of $f$ over an interval can occur only at a critical point or at one of the endpoints. A simple way to find the global maximum and minimum is to calculate the value of $f$ at the critical points and the endpoints and see which is largest and smallest.

Total points: 3
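As a quick illustration (added here; not part of the original overview): take $f(x) = x^3 - 3x$ on $[0, 2]$. Then $f'(x) = 3x^2 - 3$, so the only critical point inside the interval is $x = 1$. Comparing values, $f(0) = 0$, $f(1) = -2$, and $f(2) = 2$, so the global minimum is $-2$ at $x = 1$ and the global maximum is $2$ at the endpoint $x = 2$.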
2019-10-22 19:30:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5189427137374878, "perplexity": 114.00892874767652}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987823061.83/warc/CC-MAIN-20191022182744-20191022210244-00262.warc.gz"}
https://www.physicsoverflow.org/31568/supermultiplet-dimensions-from-young-tableaus?show=32112
Supermultiplet dimensions from Young Tableaus

In John Terning's book, on pages 14 and 15, there are lists of $\mathcal{N} = 2$ and $\mathcal{N} = 4$ supermultiplets, labeled in terms of the dimensions of the corresponding R-symmetry $d_R$ and spin-symmetry $2j+1$. I want to figure out a way to get all these numbers by Young Tableaus in a systematic way. Of course, the $\mathcal{N}=2$ case is relatively straightforward. It's clear that the numbers in the labels for $\mathcal{N}=4$ can separately be recovered using $4_{R} \otimes 4_{R} = 10_R \oplus 6_R$ for $SU(4)_R$ and tensor products of this basic identity with $4_R$. Of course, one can also do the same thing for the $SU(2)$ spin symmetry: $2_{SU(2)} \otimes 2_{SU(2)} = 3_{SU(2)} \oplus 1_{SU(2)}$. But if one writes $(\textbf{R}, 2j+1)$ as the label, then how does one justify $(\textbf{4}_R, 2)\otimes(\textbf{4}_R, 2) = (\textbf{10}_R,1) \oplus (\textbf{6}_R,3)$ or $(\textbf{4}_R, 2)\otimes((\textbf{10}_R,1) \oplus (\textbf{6}_R,3)) = (\bar{\textbf{20}}, 2) \oplus (\bar{\textbf{4}},4)$? What I'm asking is: how do you get this particular grouping?

This post imported from StackExchange Physics at 2015-06-02 11:45 (UTC), posted by SE-user leastaction

The way to justify this is to realize that when you write the label as $(\textbf{R}, 2j+1)$ you are embedding the direct product of those groups into a larger group, i.e. $SU(N) \otimes SU(M) \subset SU(NM)$, where the tensor indices of the larger group can be thought of as an ordered pair of indices $(a,\dot\alpha)$, with $a=1,...,N$ and $\dot\alpha = 1,...,M$; for this case we have N=4 and M=2. In Terning's book we can see that each charge carries an ordered pair (which is embedded in the larger SU(NM) group) and has to be completely antisymmetric with every other charge. So if we have n charges we want to represent that as a completely antisymmetric rank n tensor living in the SU(NM) group. The way to represent this with Young's Tableaus is to think of it as n boxes arranged vertically. The trick is that the embedded group has to transform the same way as the representation in the larger group when permuting the ordered indices.

For example, $(\textbf{4}_R, 2)\otimes(\textbf{4}_R, 2) = (\textbf{10}_R,1) \oplus (\textbf{6}_R,3)$ represents a completely antisymmetric rank 2 tensor in SU(8). The first thing to do when trying to come up with possible representations is to write down all the ways 2 boxes (this number is the same as the rank of the tensor in the embedding group) can be arranged in SU(4) and SU(2), which happens to be the same in both: 2 horizontal and 2 vertical, i.e. the 10 and 6 for SU(4) and the 3 and 1 for SU(2) respectively. (I don't know how to draw boxes here...) Boxes arranged horizontally are completely symmetric and boxes arranged vertically are completely antisymmetric. Since we need the permutation of the ordered pair to be antisymmetric, the only way to do this is to pair the symmetric rep in one group with the antisymmetric rep of the other, for permutations of the symmetric rep give you a plus while the antisymmetric give you a minus, resulting in an overall minus.

The next rep is a little more complicated: $(\textbf{4}_R, 2)\otimes(\textbf{4}_R, 2)\otimes(\textbf{4}_R, 2) = (\bar{\textbf{20}}, 2) \oplus (\bar{\textbf{4}},4)$. Here we have a rank 3 tensor. The ways to arrange 3 boxes in SU(4) give the $\bar 4$, $\bar{20''}$, and $\bar{20}$; check out his appendix B.4 to align the box notation with those numbers.
While the SU(2) only gives the 2 and 4 reps (note the 2 rep is drawn with 3 boxes, in the same mixed-symmetry pattern as the $\bar{20}$!). The $(\bar{\textbf{4}},4)$ follows the same logic as before: the $\bar{4}$ is completely antisymmetric while the 4 is completely symmetric. The $(\bar{\textbf{20}}, 2)$ is more complicated, and I wasn't able to find a completely generic way to show it, but essentially when you take the tensor product of the two mixed-symmetry reps in the permutation group ($S_n$, n being 3 in this case) you find that they can be decomposed into a completely antisymmetric tensor (and actually a completely symmetric tensor as well). Georgi explains all of this in gory detail in his book, sections 15.2 and 1.21-24. Long story short, the decomposition of the representations should transform under permutations the same way as the rank n tensor embedded in the higher group. The left hand side is completely antisymmetric, so too should be the right hand side.

(Sorry, the previous edits were made under some errors and I didn't know how to delete, so I just redid it.)

answered Jun 15, 2015 by (205 points), edited Jun 19, 2015

Thank you for the detailed reply @PeterAnderson! As it turns out, there is a more heuristic method which I managed to figure out, which is purely tableau-based. It is described in the comments of the original post on Physics.SE, from which this post was derived.
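As a quick dimension check (my own bookkeeping, not from the thread), the counting works out on both sides: the antisymmetric rank-2 and rank-3 tensors of SU(8) have dimensions

$\binom{8}{2} = 28 = 10 \cdot 1 + 6 \cdot 3, \qquad \binom{8}{3} = 56 = 20 \cdot 2 + 4 \cdot 4,$

matching $(\textbf{10}_R,1) \oplus (\textbf{6}_R,3)$ and $(\bar{\textbf{20}}, 2) \oplus (\bar{\textbf{4}},4)$ respectively.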
2021-02-25 02:47:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7971369028091431, "perplexity": 498.8040717471477}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178350706.6/warc/CC-MAIN-20210225012257-20210225042257-00018.warc.gz"}
https://physicscatalyst.com/calculators/physics/average-velocity-calculator.php
# Average Velocity calculator

Note
• Enter the values of the two known variables in the text boxes
• Leave the text box empty for the variable you want to solve for
• Click on the calculate button.

The average velocity formulas used for solving the question are
$\bar{v}=\frac {v_f + v_i}{2}$
$v_f= 2 \bar{v} - v_i$
$v_i= 2 \bar{v} - v_f$

## What is average velocity

Average velocity is defined as the ratio of the change in displacement $\Delta x$ of the object to the time interval $\Delta t= t_2 -t_1$:
$\bar{v} = \frac {\Delta x}{\Delta t}$
If the body is moving in a straight line with constant acceleration, the average velocity over a time interval is also the average of the velocities at the starting and ending points. So if $v_i$ is the initial velocity and $v_f$ the final velocity of a particle moving in a straight line with constant acceleration, then the average velocity is given by
$\bar{v}=\frac {v_f + v_i}{2}$
where
$\bar{v}$ -> average velocity
$v_i$ -> initial velocity
$v_f$ -> final velocity
Average velocity is a vector quantity and its SI unit is meter/sec.

Examples of a few questions where you can use this formula:

Question 1. An object starts with velocity 2 m/s and attains a velocity of 10 m/s in 4 sec. Find the average velocity.
Solution: Given $v_i=2 \ m/s$, $v_f=10 \ m/s$, $\bar{v}$=?
The average velocity is given by
$\bar{v}=\frac {v_f + v_i}{2}= \frac { 2 + 10}{2} = 6 \ m/s$

Question 2. An object starts with velocity 1 m/s and its average velocity over the period is 5 m/s. Find the final velocity.
Solution: Given $v_i=1 \ m/s$, $v_f$=?, $\bar{v}= 5 \ m/s$
The average velocity is given by
$\bar{v}=\frac {v_f + v_i}{2}$
Rearranging for the final velocity,
$v_f = 2 \bar{v} - v_i = 2 \times 5 -1 = 9 \ m/s$

## How the Average Velocity Calculator works

1. If $v_i$ and $v_f$ are given: $\bar{v}=\frac {v_f + v_i}{2}$
2. If $v_i$ and $\bar{v}$ are given: $v_f= 2\bar{v} - v_i$
3. If $v_f$ and $\bar{v}$ are given: $v_i= 2\bar{v} - v_f$
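A minimal sketch of the same three-way logic in Python (my own illustration, not the site's actual calculator code):

```python
def average_velocity(v_i=None, v_f=None, v_avg=None):
    """Solve v_avg = (v_i + v_f) / 2 for whichever argument is left as None."""
    if v_avg is None:
        return (v_i + v_f) / 2
    if v_f is None:
        return 2 * v_avg - v_i
    if v_i is None:
        return 2 * v_avg - v_f
    raise ValueError("leave exactly one of the three arguments as None")

print(average_velocity(v_i=2, v_f=10))    # 6.0, as in Question 1
print(average_velocity(v_i=1, v_avg=5))   # 9, as in Question 2
```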
2022-05-23 23:29:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8190215229988098, "perplexity": 1016.5154562890325}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662562106.58/warc/CC-MAIN-20220523224456-20220524014456-00730.warc.gz"}
https://www.physicsforums.com/threads/capacitance-of-capacitor.582927/
Capacitance of Capacitor

1. Mar 1, 2012 DrunkApple

1. The problem statement, all variables and given/known data
A capacitor is constructed from two square metal plates. A dielectric $\kappa_1$ = 4.61 fills the upper half of the capacitor and a dielectric $\kappa_2$ = 6.9 fills the lower half of the capacitor. Neglect edge effects. Calculate the capacitance C of the device.

2. Relevant equations
$C_{top} = \kappa_1 \epsilon_0 A/d$
$C_{bottom} = \kappa_2 \epsilon_0 A/d$
$C_{total} = C_{top} + C_{bottom}$

3. The attempt at a solution
$C_{top} = \kappa_1 \epsilon_0 A/d$ = 4.61(8.85419 × 10$^{-12}$)(0.12$^{2}$)/0.0005 = 1.175553098 × 10$^{-9}$
$C_{bottom}$ = 6.9(8.85419 × 10$^{-12}$)(0.12$^{2}$)/0.0005 = 1.759504637 × 10$^{-9}$
$C_{total}$ = 2.935057735 × 10$^{-9}$
Is this correct?

2. Mar 1, 2012 Staff: Mentor
Ummm. Plate size? Plate separation? Are the plates oriented vertically or horizontally?
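For reference, a note the thread leaves implicit (my own addition, prompted by the mentor's questions): the formula depends on how the two dielectric halves are arranged. If each dielectric fills half the plate area (side by side), the two halves act as capacitors in parallel, each with area $A/2$, giving $C = \frac{\epsilon_0 A}{2d}(\kappa_1 + \kappa_2)$. If instead each fills half the gap (stacked), the halves are in series, each with separation $d/2$, giving $C = \frac{2\epsilon_0 A}{d}\,\frac{\kappa_1 \kappa_2}{\kappa_1 + \kappa_2}$. Using the full area $A$ for both halves, as in the attempt above, double-counts the plate.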
2019-01-21 11:34:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5431472063064575, "perplexity": 6729.589423850227}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583792338.50/warc/CC-MAIN-20190121111139-20190121133139-00461.warc.gz"}
https://gmatclub.com/forum/question-of-the-week-12-for-any-purchase-of-less-than-1000-a-274287.html
Question of the Week #12 (posted 25 Aug 2018 by EgmatQuantExpert)

For any purchase of less than $1000, a person doesn't need to pay any tax. However, if the purchase value is between $1000 and $2000 (both included), then the tax is 5% of the excess value over $1000. Also, if the purchase value is more than $2000, then one needs to pay an additional tax of 10% on the excess value over $2000. Which of the following can be the possible purchase value, such that the total tax paid is not more than 4% of the total purchase value?

A. 2499
B. 2599
C. 2699
D. 2799
E. 2801

Math Expert (Joined: 02 Aug 2009), 25 Aug 2018, 19:19:
Going by answer options might be a good idea in this case.
A. 2499 → 5% of 1000 + 10% of 499 → 50 + 49.9 = 99.9, which is less than 4% of 2499 = 2499 × 0.04 = 99.96.
Therefore, the tax paid is not more than 4% of the overall purchase value for $2499. (Option A)

ISB, NUS, NTU Moderator (Joined: 11 Aug 2016), 25 Aug 2018, 20:10:
Since all the answer choices are between 2000 and 3000, let us assume that the purchase value is $(2000 + x).
Now calculating the tax: 5% of 1000 + 10% of x = 50 + 0.1x.
This tax should be not more than 4% of the whole amount:
50 + 0.1x ≤ 4% of (2000 + x)
50 + 0.1x ≤ 80 + 0.04x
0.06x ≤ 30, or x ≤ 500
Hence, Answer Choice A is the answer!

e-GMAT Representative (Joined: 04 Jan 2015), 14 Sep 2018, 00:10:
Solution
Given:
• We are given three cases:
  o Case 1: If the purchase value < $1000, then no tax needs to be paid.
  o Case 2: If the purchase value is between $1000 and $2000, then the tax = 5% of the excess value over $1000.
  o Case 3: If the purchase value > $2000, then the tax = 5% of ($2000 – $1000) + 10% of the excess value over $2000.
To find:
• A possible purchase value, such that the total tax paid ≤ 4% of the total purchase value.
Approach and Working:
• All the options given are greater than $2000, so we can directly consider Case 3, where the purchase value is greater than $2000.
• Let us assume that the purchase value = $x, with x > $2000.
• So, the total tax paid = 5% of $1000 + 10% of ($x – $2000).
• This value should be less than or equal to 4% of x:
  o 5% of $1000 + 10% of ($x – $2000) ≤ 4% of x
  o $$50 + \frac{10x}{100} - 200 ≤ \frac{4x}{100}$$
  o Implies, $$\frac{6x}{100} ≤ 150$$
  o Thus, x ≤ $2500
Therefore, the purchase value must be less than or equal to $2500.
Hence, the correct answer is option A.
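For anyone who wants to check all five options at once, here is a small Python sketch of the tax rule as read above (my own illustration, not from the thread):

```python
def tax(p):
    """Tax under the stated slabs: 0 below 1000, 5% on the 1000-2000 slab,
    plus an additional 10% on anything above 2000."""
    if p < 1000:
        return 0.0
    if p <= 2000:
        return 0.05 * (p - 1000)
    return 0.05 * 1000 + 0.10 * (p - 2000)

for p in (2499, 2599, 2699, 2799, 2801):
    print(p, tax(p) <= 0.04 * p)   # True only for 2499, matching option A
```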
2018-11-22 11:33:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3242945671081543, "perplexity": 5801.663342656518}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039746205.96/warc/CC-MAIN-20181122101520-20181122123520-00253.warc.gz"}
https://phys.libretexts.org/Bookshelves/University_Physics/Book%3A_Introductory_Physics_-_Building_Models_to_Describe_Our_World_(Martin_Neary_Rinaldo_and_Woodman)/08%3A_Potential_Energy_and_Conservation_of_Energy/8.06%3A_Summary
# 8.6: Summary

## Key Takeaways

A force is conservative if the work done by that force on a closed path is zero: \begin{aligned} \oint \vec F(\vec r) \cdot d\vec l = 0\end{aligned}

Equivalently, the force is conservative if the work done by the force on an object moving from position $$A$$ to position $$B$$ does not depend on the particular path between the two points. The conditions for a force to be conservative are given by: \begin{aligned} \frac{\partial F_{z}}{\partial y}-\frac{\partial F_{y}}{\partial z} &= 0 \nonumber\\ \frac{\partial F_{x}}{\partial z}-\frac{\partial F_{z}}{\partial x} &= 0\nonumber\\ \frac{\partial F_{y}}{\partial x}-\frac{\partial F_{x}}{\partial y} &= 0\end{aligned}

In particular, a force that is constant in magnitude and direction will be conservative. A force that depends on quantities other than position (e.g. speed, time) will not be conservative. The force exerted by gravity and the force exerted by a spring are conservative.

For any conservative force, $$\vec F(\vec r)$$, we can define a potential energy function, $$U(\vec r)$$, that can be used to calculate the work done by the force along any path between position $$A$$ and position $$B$$: \begin{aligned} -W = - \int_A^B \vec F(\vec r) \cdot d\vec l = U(\vec r_B) - U(\vec r_A) = \Delta U\end{aligned} where the change in the potential energy function in going from $$A$$ to $$B$$ is equal to the negative of the work done in going from point $$A$$ to point $$B$$. We can determine the function $$U(\vec r)$$ by calculating the work integral over an "easy" path (e.g. a straight line that is co-linear with the direction of the force).

It is important to note that an arbitrary constant can be added to the potential energy function, because only differences in potential energy are meaningful. In other words, we are free to choose the location in space where the potential energy function is defined to be zero.

We can break up the net work done on an object as the sum of the work done by conservative ($$W^C$$) and non-conservative forces ($$W^{NC}$$): \begin{aligned} W^{net}&=W^{NC}+W^{C}=W^{NC}-\Delta U\end{aligned} where $$\Delta U$$ is the difference in the total potential energy of the object (the sum of the potential energies for each conservative force acting on the object).

The Work-Energy Theorem states that the net work done on an object in going from position $$A$$ to position $$B$$ is equal to the object's change in kinetic energy: \begin{aligned} W^{net}&=\frac{1}{2}mv_B^2-\frac{1}{2}mv_A^2=\Delta K\end{aligned}

We can thus write that the total work done by non-conservative forces is equal to the change in potential and kinetic energies: \begin{aligned} W^{NC}=\Delta K+\Delta U\end{aligned}

In particular, if no non-conservative forces do work on an object, then the change in total potential energy is equal to the negative of the change in kinetic energy of the object: \begin{aligned} -\Delta U=\Delta K\end{aligned}

We can introduce the mechanical energy, $$E$$, of an object as: \begin{aligned} E = U+K\end{aligned}

The net work done by non-conservative forces is then equal to the change in the object's mechanical energy: \begin{aligned} W^{NC}=\Delta E\end{aligned}

In particular, if no net work is done on the object by non-conservative forces, then the mechanical energy of the object does not change ($$\Delta E=0$$). In this case, we say that the mechanical energy of the object is conserved.
The Lagrangian description of classical mechanics is based on the Lagrangian, $$L$$: \begin{aligned} L = K - U\end{aligned} which is the difference between the kinetic energy, $$K$$, and the potential energy, $$U$$, of the object. The equations of motion are given by the Principle of Least Action, which leads to the Euler-Lagrange equation (written here for the case of a particle moving in one dimension): \begin{aligned} \frac{d}{dt}\left(\frac{\partial L}{\partial v_{x}}\right)-\frac{\partial L}{\partial x} = 0\end{aligned}

## Important Equations

### Conditions for a force to be conservative:

\begin{aligned} \oint \vec F(\vec r) \cdot d\vec l = 0\end{aligned} \begin{aligned} \frac{\partial F_{z}}{\partial y}-\frac{\partial F_{y}}{\partial z} &= 0 \nonumber\\ \frac{\partial F_{x}}{\partial z}-\frac{\partial F_{z}}{\partial x} &= 0\nonumber\\ \frac{\partial F_{y}}{\partial x}-\frac{\partial F_{x}}{\partial y} &= 0\end{aligned}

### Potential energy for a conservative force:

\begin{aligned} \Delta U&=-W\\ U(\vec r_B) - U(\vec r_A)&= - \int_A^B \vec F(\vec r) \cdot d\vec l\end{aligned}

### Work-energy theorem:

\begin{aligned} W^{net}&=\frac{1}{2}mv_B^2-\frac{1}{2}mv_A^2=\Delta K\end{aligned}

### Work:

\begin{aligned} W^{net}&=W^{NC}+W^{C}=W^{NC}-\Delta U\\ W^{NC}&=\Delta K+\Delta U\end{aligned}

### Energy:

\begin{aligned} E&=U+K\\ W^{NC}&=\Delta E\end{aligned}

### Lagrange:

\begin{aligned} L &= K - U\\ \frac{d}{dt}\left(\frac{\partial L}{\partial v_{x}}\right)-\frac{\partial L}{\partial x} &= 0\end{aligned}

## Important Definitions

Definition (Conservative force): A force that does no net work when exerted over a closed path.

Definition (Potential energy): A form of energy that an object has by virtue of its position in space. The potential energy is associated with a conservative force, which is exerted in the direction that lowers the potential energy of the object. SI units: $$[\text{J}]$$. Common variable(s): $$U$$.
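As a quick worked illustration of the Euler-Lagrange equation (an addition for clarity, in the spirit of the summary): for a one-dimensional mass on a spring, \begin{aligned} L = \frac{1}{2}mv_x^2 - \frac{1}{2}kx^2, \qquad \frac{d}{dt}\left(\frac{\partial L}{\partial v_{x}}\right)-\frac{\partial L}{\partial x} = m\frac{dv_x}{dt} + kx = 0,\end{aligned} which reproduces Newton's Second Law with the spring force $$F_x = -kx$$.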
2022-01-20 20:51:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9905150532722473, "perplexity": 392.356229075142}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320302622.39/warc/CC-MAIN-20220120190514-20220120220514-00212.warc.gz"}
https://brilliant.org/problems/do-i-need-to-know-totient-function/
# Do I Need To Know Totient Function?

Number Theory Level 1

$\large 5 \times 5 \times 5 \times \cdots \times 5 = \ldots 5$

If I multiply 5 by itself any number of times, the last digit of the final product always remains unchanged. What is another single-digit integer larger than 1 that shares this same property?
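For a quick check of the kind of pattern involved (a worked hint added here, not part of the original problem): $6 \times 6 = 36$, $6 \times 6 \times 6 = 216$, $6 \times 6 \times 6 \times 6 = 1296$. The last digit stays 6, since $6 \times 6 \equiv 6 \pmod{10}$.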
2016-10-27 16:48:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1799618899822235, "perplexity": 743.0067365377108}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721355.10/warc/CC-MAIN-20161020183841-00011-ip-10-171-6-4.ec2.internal.warc.gz"}
https://www.math-only-math.com/Comparing-Unlike-Fractions.html
# Comparing Unlike Fractions

In comparing unlike fractions, we first convert them into like fractions by using the following steps and then compare them.

Step I: Obtain the denominators of the fractions and find their LCM (least common multiple).
Step II: Each fraction is converted to its equivalent fraction with denominator equal to the LCM (least common multiple) obtained in Step I.
Step III: Compare the numerators of the equivalent fractions whose denominators are the same.

Solved Examples on Comparing Unlike Fractions:

1. Which is larger, ³/₄ or ⁵/₁₂?
Solution: Let us first find the LCM (least common multiple) of the denominators 4 and 12.
The LCM (least common multiple) of 4 and 12 is 2 × 2 × 3 = 12.
Now we convert the given fractions to equivalent fractions with denominator 12.
3/4 = (3 × 3)/(4 × 3) = 9/12
5/12 = (5 × 1)/(12 × 1) = 5/12
Now we observe the numerators: 9 > 5.
So, ⁹/₁₂ > ⁵/₁₂
Therefore, ³/₄ > ⁵/₁₂.

2. Compare ⁷/₈ and ⁵/₆.
Solution: First we find the LCM of the denominators.
LCM (least common multiple) = 2 × 2 × 2 × 3 = 24.
Now, we convert each fraction into an equivalent fraction with 24 as its denominator.
7/8 = (7 × 3)/(8 × 3) = 21/24 [since 24 ÷ 8 = 3]
5/6 = (5 × 4)/(6 × 4) = 20/24 [since 24 ÷ 6 = 4]
Now we observe the numerators: 20 < 21.
So, 20/24 < 21/24
Therefore, ⁵/₆ < ⁷/₈.

3. Arrange the fractions ⁵/₈, ⁵/₆, ⁷/₄, ³/₅ in ascending order.
Solution: Let us first find the LCM (least common multiple) of the denominators.
LCM (least common multiple) = 2 × 2 × 2 × 3 × 5 = 120.
Now, we convert each fraction into an equivalent fraction with 120 as its denominator.
5/8 = (5 × 15)/(8 × 15) = 75/120, [since 120 ÷ 8 = 15].
5/6 = (5 × 20)/(6 × 20) = 100/120, [since 120 ÷ 6 = 20].
7/4 = (7 × 30)/(4 × 30) = 210/120, [since 120 ÷ 4 = 30].
3/5 = (3 × 24)/(5 × 24) = 72/120, [since 120 ÷ 5 = 24].
Now we observe the numerators: 72 < 75 < 100 < 210.
So, 72/120 < 75/120 < 100/120 < 210/120.
Therefore, the ascending order is ³/₅, ⁵/₈, ⁵/₆, ⁷/₄.

4. Arrange the following fractions in descending order: ³/₈, ⁵/₆, ²/₄, ¹/₃, ⁶/₈.
Solution: We observe that the given fractions neither have a common denominator nor a common numerator. So, first we convert them into like fractions, i.e. fractions having a common denominator. For this, we first find the LCM (least common multiple) of the denominators of the given fractions. The denominators are 8, 6, 4, 3, 8.
LCM (least common multiple) = 2 × 2 × 2 × 3 = 24.
Now, we convert each fraction into an equivalent fraction with 24 as its denominator. Thus,
3/8 = (3 × 3)/(8 × 3) = 9/24 [since 24 ÷ 8 = 3]
5/6 = (5 × 4)/(6 × 4) = 20/24 [since 24 ÷ 6 = 4]
2/4 = (2 × 6)/(4 × 6) = 12/24 [since 24 ÷ 4 = 6]
1/3 = (1 × 8)/(3 × 8) = 8/24 [since 24 ÷ 3 = 8]
6/8 = (6 × 3)/(8 × 3) = 18/24 [since 24 ÷ 8 = 3]
Now we observe the numerators: 20 > 18 > 12 > 9 > 8.
So, 20/24 > 18/24 > 12/24 > 9/24 > 8/24.
Therefore, 5/6 > 6/8 > 2/4 > 3/8 > 1/3.

To compare two unlike fractions, we first convert them to like fractions.

1. Compare $$\frac{7}{12}$$ and $$\frac{5}{18}$$.
Solution: Let us first convert both fractions to like fractions and then compare.
The LCM of the denominators 12 and 18 is 36.
Divide 36 by 12; we get 3. Now, multiply both the numerator and denominator of the fraction $$\frac{7}{12}$$ by 3.
$$\frac{7 × 3}{12 × 3}$$ = $$\frac{21}{36}$$
Divide 36 by the denominator of the second fraction, i.e. 18; we get 2.
Multiply the numerator and denominator of the fraction $$\frac{5}{18}$$ by 2.
$$\frac{5 × 2}{18 × 2}$$ = $$\frac{10}{36}$$
Compare the two fractions $$\frac{21}{36}$$ and $$\frac{10}{36}$$. Here 21 > 10.
Thus, $$\frac{7}{12}$$ > $$\frac{5}{18}$$.

Alternate Method: We can compare unlike fractions by cross multiplication in the following way.
Here, 7 × 18 > 12 × 5; 126 > 60
So, $$\frac{7}{12}$$ > $$\frac{5}{18}$$

Questions and Answers on Comparing Unlike Fractions:

1. Compare the given fractions by putting the right sign <, > or =.
(i) $$\frac{1}{5}$$ ___ $$\frac{1}{11}$$
(ii) $$\frac{7}{6}$$ ___ $$\frac{4}{6}$$
(iii) $$\frac{3}{18}$$ ___ $$\frac{3}{15}$$
(iv) $$\frac{2}{3}$$ ___ $$\frac{3}{4}$$
(v) $$\frac{6}{12}$$ ___ $$\frac{8}{16}$$
(vi) $$\frac{5}{5}$$ ___ $$\frac{5}{7}$$
(vii) $$\frac{5}{6}$$ ___ $$\frac{12}{18}$$
(viii) $$\frac{10}{15}$$ ___ $$\frac{14}{21}$$
(ix) $$\frac{2}{7}$$ ___ $$\frac{5}{13}$$
Answers: (i) > (ii) > (iii) < (iv) < (v) = (vi) > (vii) > (viii) = (ix) <

2. Arrange the given fractions in ascending order.
(i) $$\frac{3}{6}$$, $$\frac{3}{8}$$, $$\frac{3}{4}$$
(ii) $$\frac{1}{16}$$, $$\frac{1}{4}$$, $$\frac{1}{2}$$
(iii) $$\frac{1}{2}$$, $$\frac{3}{4}$$, $$\frac{5}{8}$$
(iv) $$\frac{2}{5}$$, $$\frac{3}{4}$$, $$\frac{3}{5}$$
Answers:
(i) $$\frac{3}{8}$$, $$\frac{3}{6}$$, $$\frac{3}{4}$$
(ii) $$\frac{1}{16}$$, $$\frac{1}{4}$$, $$\frac{1}{2}$$
(iii) $$\frac{1}{2}$$, $$\frac{5}{8}$$, $$\frac{3}{4}$$
(iv) $$\frac{2}{5}$$, $$\frac{3}{5}$$, $$\frac{3}{4}$$

Word Problems on Comparing Unlike Fractions:

3. Robert ate $$\frac{9}{22}$$ part of the pizza and Maria ate $$\frac{5}{11}$$ part of the pizza. Who ate the greater part of the pizza? What fraction of the pizza was finished by the two of them?
Solution:
Robert ate $$\frac{9}{22}$$ part of the pizza. Maria ate $$\frac{5}{11}$$ part of the pizza.
Let us first convert both fractions to like fractions and then compare. The LCM of the denominators 22 and 11 is 22.
$$\frac{9}{22}$$ = $$\frac{9 × 1}{22 × 1}$$ = $$\frac{9}{22}$$
$$\frac{5}{11}$$ = $$\frac{5 × 2}{11 × 2}$$ = $$\frac{10}{22}$$
Compare the two fractions $$\frac{9}{22}$$ and $$\frac{10}{22}$$. Here 10 > 9.
Thus, $$\frac{5}{11}$$ > $$\frac{9}{22}$$. Maria ate the greater part of the pizza.
Now add the two fractions $$\frac{9}{22}$$ + $$\frac{5}{11}$$. We get the like fractions $$\frac{9}{22}$$ and $$\frac{10}{22}$$.
Now, $$\frac{9}{22}$$ + $$\frac{10}{22}$$ = $$\frac{9 + 10}{22}$$ = $$\frac{19}{22}$$
Therefore, $$\frac{19}{22}$$ of the pizza was finished by the two of them.

4. Ron covered a distance of $$\frac{5}{6}$$ km and Jon covered a distance of $$\frac{3}{4}$$ km. Who covered the greater distance?

5. Adrian cycled for $$\frac{62}{8}$$ km and Steven cycled $$\frac{27}{4}$$ km during the weekend. Who cycled more and by how much?
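The same LCM-based comparison is easy to express in a few lines of Python (an illustration added here, not part of the original lesson):

```python
from math import lcm
from fractions import Fraction

def compare_via_lcm(a, b, c, d):
    """Compare a/b and c/d by rewriting both over lcm(b, d), as in the article."""
    m = lcm(b, d)
    left, right = a * (m // b), c * (m // d)   # the new numerators
    sign = '<' if left < right else '>' if left > right else '='
    return f"{a}/{b} {sign} {c}/{d}"

print(compare_via_lcm(7, 12, 5, 18))       # 7/12 > 5/18
print(Fraction(9, 22) + Fraction(5, 11))   # 19/22, the pizza total in problem 3
```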
2020-08-08 09:12:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8125163316726685, "perplexity": 1338.5733476577172}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737319.74/warc/CC-MAIN-20200808080642-20200808110642-00521.warc.gz"}
http://mathcs.chapman.edu/~jipsen/structures/doku.php/vector_spaces
Vector spaces

Abbreviation: FVec

Definition: A vector space over a field $\mathbf{F}$ is a structure $\mathbf{V}=\langle V,+,-,0,f_a\ (a\in F)\rangle$ such that

$\langle V,+,-,0\rangle$ is an abelian group,

the scalar product $f_a$ distributes over vector addition: $a(x+y)=ax+ay$,

$f_{1}$ is the identity map: $1x=x$,

the scalar product distributes over scalar addition: $(a+b)x=ax+bx$, and

the scalar product associates: $(a\cdot b)x=a(bx)$.

Remark: $f_a(x)=ax$ is called scalar multiplication by $a$.

Morphisms: Let $\mathbf{V}$ and $\mathbf{W}$ be vector spaces over a field $\mathbf{F}$. A morphism from $\mathbf{V}$ to $\mathbf{W}$ is a function $h:V\rightarrow W$ that is linear: $h(x+y)=h(x)+h(y)$ and $h(ax)=ah(x)$ for all $a\in F$.

Properties: Classtype: variety. [The remaining rows of the properties table lost their labels during extraction; the surviving values are: no, unbounded, no, yes, yes ($n=2$), yes, yes, yes, no, no.]

Finite members: $f(1)=1$; the counts $f(2)$ through $f(6)$ are left blank in the source table.
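These axioms are easy to spot-check numerically. Below is a minimal Python sketch (an illustration added here, not from the source page) that verifies all four scalar-product axioms exhaustively for the small vector space $V=F^2$ over the field $F=\mathbb{Z}/5\mathbb{Z}$:

```python
from itertools import product

p = 5  # work over the field F = Z/5Z; V = F^2 with componentwise operations

def vadd(x, y):   # vector addition in V
    return ((x[0] + y[0]) % p, (x[1] + y[1]) % p)

def smul(a, x):   # scalar product f_a(x) = ax
    return ((a * x[0]) % p, (a * x[1]) % p)

F = range(p)
V = list(product(F, repeat=2))

assert all(smul(a, vadd(x, y)) == vadd(smul(a, x), smul(a, y))
           for a in F for x in V for y in V)                 # a(x+y) = ax+ay
assert all(smul(1, x) == x for x in V)                       # 1x = x
assert all(smul((a + b) % p, x) == vadd(smul(a, x), smul(b, x))
           for a in F for b in F for x in V)                 # (a+b)x = ax+bx
assert all(smul((a * b) % p, x) == smul(a, smul(b, x))
           for a in F for b in F for x in V)                 # (a·b)x = a(bx)
print("all four scalar-product axioms hold for F = Z/5, V = F^2")
```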
2020-02-16 20:25:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.966020405292511, "perplexity": 684.3199126355187}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875141396.22/warc/CC-MAIN-20200216182139-20200216212139-00444.warc.gz"}
https://thermtest.com/papers/effect-of-vinyl-modified-silica-and-raw-silica-particles-on-the-properties-of-as-prepared-polymer-silica-nanocomposite-foams
Abstract: This research studied the effect of adding vinyl-modified silica and raw silica to polymer-silica nanocomposite foams. Extensive physical property characterization was undertaken after their addition to quantify their influence. TEM found that silica particles added as vinyl-modified silica dispersed better than raw silica, which produced foams with a higher density and smaller cell size. A slight reduction in thermal conductivity after the additions was observed. The mechanical strength and thermal stability were much better in foams with vinyl-modified silica than in foams with raw silica.

Reference: Journal of Nanoscience and Nanotechnology, Vol. 8 (2008) 1–9. DOI: 10.1166/jnn.2008.354
2022-09-26 15:32:29
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9086198806762695, "perplexity": 5330.360329510956}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00620.warc.gz"}
https://www.vedantu.com/question-answer/three-digit-natural-numbers-are-divisible-class-11-maths-icse-5ee7101f47f3231af26a22e5
Question

# How many three-digit natural numbers are divisible by 3?

Hint: To find the number of three-digit numbers divisible by 3, we form an arithmetic progression with common difference 3. The $n^{th}$ term of this series will be the largest three-digit multiple of 3. Then we use the formula

$a_n = a + (n-1)d$

where $a_n$ is the $n^{th}$ term of the series, $a$ is the first term, $n$ is the number of terms in the series, and $d$ is the common difference (3 in this case).

To proceed with the problem, we first find the $n^{th}$ term of the series: the largest three-digit number that is a multiple of 3. Dividing 1000 by 3 gives a quotient of 333 and a remainder of 1. Subtracting the remainder from 1000 gives $1000 - 1 = 999$, which is the $n^{th}$ term of the series.

The next step is the first term of the series. This is clearly 102, because it is the smallest three-digit number divisible by 3.

We have the first term $a = 102$ and the $n^{th}$ term $a_n = 999$. For an arithmetic progression,

$a_n = a + (n-1)d$

where $n$ is the required number of terms in the series and $d$ is the common difference (3 in this case).

$999 = 102 + 3(n-1)$

$n-1=\dfrac{999-102}{3}$

$n-1=299$

$n=300$

Hence, the required number of terms is 300.

Note: Another technique to arrive at the answer is to divide 1000 by 3. We get 333 as the quotient, so the multiples are 3, 6, 9, …, 999 (333 terms). However, we have to exclude the single-digit and double-digit multiples from this range. Dividing 100 by 3 gives 33 as the quotient, so the multiples below 100 are 3, 6, 9, …, 99 (33 terms). We remove these terms since they are not three-digit numbers: $333 - 33 = 300$. Thus, we again get 300 terms.
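As a quick sanity check, a brute-force count in Python (added here for illustration) agrees with the arithmetic-progression formula:

```python
# Count the three-digit multiples of 3 directly and compare with n = (a_n - a)/d + 1
multiples = [k for k in range(100, 1000) if k % 3 == 0]
print(len(multiples), multiples[0], multiples[-1])   # 300 102 999
assert len(multiples) == (999 - 102) // 3 + 1        # first term 102, last term 999, d = 3
```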
2021-04-23 11:13:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7029602527618408, "perplexity": 163.7547601658549}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039617701.99/warc/CC-MAIN-20210423101141-20210423131141-00478.warc.gz"}
https://www.jiskha.com/archives/2010/08/31
# Questions Asked onAugust 31, 2010 1. ## physics A person walks first at a constant speed of 4.90 m/s along a straight line from point A to point B and then back along the line from B to A at a constant speed of 2.70 m/s. (a) What is her average speed over the entire trip? 2. ## physics A person walks first at a constant speed of 4.90 m/s along a straight line from point A to point B and then back along the line from B to A at a constant speed of 2.70 m/s. (a) What is her average speed over the entire trip? 3. ## PreCalc The area of a parking lot is 805 square meters. A car requires 5 square meters and a bus requires 32 square meters of space. At most 80 vehicles can park at one time. If the cost to park a car is $2.00 and the cost to park a bus is$6.00, how many buses 4. ## Chermistry if molar enthalpy of vaporization of enthanol is 38.6 kJ/mol, how many moles of ethanol are vaporize when required heat is 200.72kJ fraction strip folded into 12ths what fractional length could you measure with the strip? 6. ## science my bedroom air conditioner blows very cold air at night,but only cool air during the day.my bedroom gets lots of direct sunlight all day long.what is your hypothesis 7. ## precalc Without further study, you forget things as time passes. A model of human memory gives the percentage, p, of acquired knowledge that a person retains after t weeks. The formula is p = (100 – a)e-bt + a, where a and b vary from one person to another. If a 100. ## Chemistry I have a 2.5g sample in 12.5ml solution.I'm supposed to do a spike into this solution with the sample.If my standard concentration is 2000ppm, how much of this solution should I use such that I get 200ppm spike in sample? 101. ## math geometry in a triangle, the measure of the first angle is four times the measure of thesecond angle.The measure of the third angle 78 more than the second angle. What is the measure of the first angle? 12r - 4r = 48 what does r equal 103. ## science if an object has a mass of 210g and a density of 3.3g/cm cubed, what would be the volume of the object? 104. ## Critical Thinking Which of the following statements is not a claim? A) Life exists on planets other than Earth. B) Dare to stay off drugs! C) Something's force equals its mass multiplied by its acceleration. D) Joe owns a pet dog. 105. ## science an object has a volume of 74cm cubed and a density of .88g/cm cubed. What is the mass of the block? 106. ## algebra Hey I am trying to help my son in math. The question is, The scale of a district map is 1 over 10000. Find the distance on the map, in centimeters, for each of the following distances. a) 800m b) 5 km I need to see how this is worked out. 107. ## college (2x^(2)+7x-15)/(2x^(2)+13x+15) 108. ## Language Arts Find one error. Window comes from one word that means "wind" and another that means "eye' We buy windows to keep an eye on the wind. We also use them to keep heat in when it's cold out side. 109. ## aed204 • The composition of the faculty, administration, and other staff accurately reflects the pluralistic composition of the United States. • Differences in academic achievement levels disappear between males and females, dominant and oppressed group Choose one of the organizational departments, such as accounting, finance, HR, and so on of a business. What is the role of this department? What types of information does the department need? How does the department use that information? 111. ## English Posted by rfvv on Tuesday, August 31, 2010 at 8:19pm. 1. 
There is a girl who is reading a book with legs crossed on the sofa. 2. There is a girl reading a book with her knees folded and raised on the sofa. 3. There are a dog and a cat sleeping on a big 112. ## math Factor: 20+22v-12v^2 113. ## social studies The nations largest city, N.Y. City is located in this region? 114. ## social studies Baton Rouge and New Orleans are in what region? 115. ## Management Does anyone have a historical example of an ethical dilemma that deals with global business. Any example will be helpful and I can then google the details. Thx! 116. ## college lead has density of 11.35g/ml,i have 3.45 cc of lead what is the weight in grams. 117. ## Chemistry 1.25 x10^-5g/ 0.75 moles = Not sure what the ^ stands for 118. ## Psychology EEG, PET and TMS are all : A) laboratory observations that introduce experimental bias into the study B) Descriptive methods for studying the brain C) Standardized methods for studying the brain D) experimentally controlled technologies for brain-wave 119. ## English literature Give me information on the evolution of sentimental comedy 120. ## physics An electric utility company needs to build a new generating plant. What factors should be considered when deciding on the location of the plant? Explain 121. ## psychology you like to play the violon, but doing so in front of people makes you nervous and you make mistakes, what is influencing your playing? 122. ## algebra evaluate the expression n + (-5) for each value of n. n=312 i don't know how to do this..please helpp.. 128. ## Math Scott weighed samples of different kind of soil. Sample A weighed 3 ounces. Sample B weighed 814 thousandths of an ounce more than sample A. what decimal should Scott use to record the weight of sample B? 129. ## math I need real life example word problems using to ngative interjers. which of the following is an irrational number? A. 1 B. 9/17 C. 0.4166666666 D. Both A&C with showing work please or putting it in to words. This region is divided into 2 smaller regions-New England and the Mid atlantic states? 132. ## science Juneau, Alaska has a dry winter season, even though it receives 53.2 inches of precipitation annually. Its average coolest-month temperature is 22'F, and its average warmest -month temperature is a surprising 56'F. What climate is it? a) Humid continental 133. ## math A 9 percent salt-water solution is mixed with 4 ounces of an 18 percent salt-water solution in order to obtain a 15 percent salt-water solution. How much of the first solution should be used? 1 ounce 2 ounces 3 ounces 4 ounces Explain your answer. 134. ## chem what is the role of h2so4 during the preparation of FeC2O4 135. ## Math Solve: 5g² - 13g + 6 = 0 136. ## Botany 1.Which group of plsnt like organisms (think sedentary) is most related to animals? 2. Which group of plants is most related to flowering plants (angiosperms)? 137. ## Biology One part of the brain if stimulated during adolescence, will form two distint regions. The area of the brain is referred to as: A) the occiptal lobes B) the pituitary glands C) Broca's area D) the parietal Lobe I am unsure of what the answer is, although 138. ## math Math out of box 1 2 3 4 5 this is called steps(s) 5 9 13 17 21 this is called tiles(t) we know you are adding 4, but what is the rule such as (S X S + 4-1-5)I know its not right so help then right the rule for this. 139. ## science what is the truth of the sky and ocean being blue? 140. 
## domains of development why it is important to consider each domain in the study of adult development what does it mean to name the intersection of the planes in a triangular prism 142. ## Algebra 2 Factor by grouping: 30x to the third power-42x square+5x-7= 143. ## algebra x^2-21x-110 Factor 144. ## algebra 8th ADD OR SUBTRACT> 5.2-2.5 HOW DO I DO THIS? 145. ## geometry how do i find the radius of the earth 146. ## math I need help on finding out the answer for this problem -2y-11=-12 147. ## business and tecnology class Which economic system is the best olution to hanling a crisis of epic proportion. 148. ## math how can I simplify 4lb 7oz,3lb 11oz,5lb 8oz? 149. ## math okay what is a right angle triangle that has the letter k and L in the middle I know it's 90 degrees angle but I don't know forsure what i'm doing 150. ## Algebra Factor: 2x^8y^6+16x^6y^5+12xy 151. ## Algebra1 Factor: 56w^2+17w-3 152. ## English 1. Sometimes he painted in his studio based on the sketches he drew outside. 2. Based on the sketches he drew outside, sometimes he painted in his studio. 3. Being based on the sketches he drew outside, sometimes he painted in his studio. 4. As he was 153. ## ALGEBRA :) GIVE THE AREA OF THE FIGURE DESCRIBED. RECTANGLE; L=12CM , W=5CM HOW SHOULD I GET THE RESULT? I need to make a 11, 12, and 13 letter word using these letters:d,i,s,m,u,u,n,t,c,a,r,e,e,g 155. ## social studies The Grand Canyon is in what region? 156. ## algebraa find the length of the third side of the triangle perimeter=12 3cm 4cm 157. ## science What climate does this place have? Warmest month average 58'F Coldest month average: -2'F Average rainfall: 38 in / yr Vegetation: Evergreen Forest a) Temperate b) Polar c) Cold d) Dry I PICKED c how do you do a algebraic experssion? 159. ## Algebra 8th I Was Looking at The ProBlems of Paloma and i thing we have the same HW. so can You HelP Me On This? n=5.75 160. ## 3 English H How can I illustrate the word "aberration"? 161. ## college algebra 10+2[2+(1+1)-2^2] 162. ## math116 I need help with 3(x-1)+5=15x +7-4-4(3x+1)+ 3 163. ## chemistry Pb w/D of 11.35g/ml. Pb(lead)3.45cc ?weight in grams i really need help on this,and i need an answer quick as possible! >< what is the probability of picking a white sock and then a brown sock from a sock drawer,if it contains 10 whites,5 browns,and 8 multi-colored socks? 165. ## statistics • Prepare a 700- to 1,050-word paper in which you interpret the statistical significance of a study. 166. ## chm The mass of Oceanis 1.8*10 to 21 kg.The ocean contains 1.076%sodium ion,Na+what is mass of sodium in ocean how do you break apart to divide: 648 by 6 using compatible numbers to break apart the dividend. 168. ## Math how do I solve: -1 cubed-(-1)to the fourth +(-1) to the fifth - (-1)to the sixth thanks 169. ## college dad invited me to eat with him? what is the verb in this sentence 170. ## science write the correct formula, balance the equation, and name the type of equation. potassium + sulfur + potassium sulfide 171. ## Marketing What is the impact of the competition in Internet space and how challenging it is and going to be for marketers to effectively compete for market share and customers? 172. ## math Factor: u^2+3u+8u+24 173. ## physics What equation should I used for a water bottle rocket lab? I am using distance vs volume of water. Of course, the volume of water is my independent variable. What equation should I use for this lab? 174. ## math 22*2/4-(7+3)2+3(7-2)2 175. ## science What climate does this place have? 
Warmest month average: 83'F Coldest month average: 70'F Average rainfall: 42 in/yr Vegetation: Rainforest 176. ## science Juneau, Alaska has a dry winter season, even though it receives 53.2 inches of precipitation annually. Its average coolest-month temperature is 22'F, and its average warmest -month temperature is a surprising 56'F. What climate is it? 177. ## Chemistry Are significant figures relevant in temperature? ex: (-459 *F - 32 *F)(5 *C/ 9 *F) = -272.78 *C So would I leave my answer as -272.78 *C? or -273 *C because of significant figures? h² + 20=9h 179. ## English Posted by rfvv on Tuesday, August 31, 2010 at 10:09pm. 1. He was based on mountains and rivers, so he painted a lot of beautiful pictures. 2. He was based on the scenery, so he could paint many pictures containing mountains. (Are they correct sentences?) 180. ## English Literature An assignment on sentimental comedy 181. ## grammar I will be so offended if someone mistaken as an American. is this sentence correct? 182. ## Chemistry Are significant figures relevant in temperature? ex: (-459 *F - 32 *F)(5 *C/ 9 *F) = -272.78 *C So would I leave my answer as -272.78 *C? or -273 *C because of significant figures? 183. ## algebra can u help me now to do this cazz i don't understandd ;) n+(-5) n= 7/12 i amconfuseedd :O its like the one u tell me 14/2+3(5) 185. ## science Are humans subject to the same pressures of natural selection as other organisms? Why or why not? 186. ## COM155 Can someone help me with reasons why an author might write a summary? 187. ## algebra 10+2[2+(1+1)-2 to the 2nd power] 188. ## gen 105 Does the option to download appendices in an audio format improve the quality of your educational experience? Does it make learning more convenient due to its portability? Does it help you learn in different ways? Is it something you will not use? Describe how do you simplify the expsession of nine - six divided by three 190. ## Nursing I need to locate an article about the benefits of NLNAC Accreditation but I cannot seem to find a journal article only a list of benefits on the NLNAC website any help? 191. ## physics what are the values of g,m1,m2,and d if f=g times m1 m2 all over d squared 192. ## math what is 9cm=______in do I have to mutiply by 2.54 which equals 22.86,is i'm right or wrong? 193. ## algebra i don't know how to do this 1/2> on the top is??? on the right 1/2 on the butttomm????? 3v^2+10v+8 195. ## Math ( -4 ) 3 (----) ( 5 ) if you can't read that, negative 4, fifths, squared. -4/5 in parentheses. 196. ## pre algebra how do you complete this problem -4x+7=11 197. ## english give example of small ideas 198. ## social studies This region of the U.S. is the largest in area? 199. ## Life Science Need to know how to figure out the volume of temperature kelvin? 200. ## Psychology Why did various early followers of Freud reject psychoanalysis in favor of an alternative theory? 201. ## Math What is the value of the underlined digit 0.26. The underlined digit is 2 202. ## algebrra perimeter: 56cm 7cm 24cm the lenght of the third side of the triangle is? idk 203. ## English 1. While he was based on the sketches, Degas painted in his studio. 2. While he was walking on the street, he met a friend of his. 3. While he was driving, he sometimes smoked. 4. While he was chatting, he drank coffee from time to time. (What about the an information on crop production 205. ## English Literature what is the difference between a prose and a poem? 206. 
## English Literature what is the difference between allegory and eligy? 207. ## math 4*5+8, How do you do this? The star means times multiply? 208. ## algebra (2x^2-3x+1)(x^2+x-2) 209. ## english ''what do i have to say as an african american citizen'' what is 1/6h = 9 and h=? 211. ## Algebra 2 Find each value if g(x)=x^3-x. SO, g(5)=(5^3)-5... would equal 120? Did I do this right? 212. ## college Have you ever used mnemonics (see p. 139 of your text) to remember something complicated? If so, describe the techniques you used. What ways might you use mnemonics to be a more effective college student and adult learner? 213. ## math mario used up 0.1 bottle of vanilla that held 2.5 ounces.how many ounces did he use? 214. ## math I'm very lost and frustrated about leaning the parallel and perpendicular lines like say for instance I'm looking at an arrow that is face down kind of of a right angle that has the letter K and L and the middle, can you please explained it to me y+y+2=18 216. ## math what is 7 to the 11th power equvilant to 217. ## English What is overcoming problems an integral part of life? 218. ## english Why is making a weakness a strength an important skill to have? 219. ## algebraa perimeter= 30 5cm 12cm the lenght of the third side of the triangle is? 220. ## Math 2x+2=8 Solve for x 221. ## hardware In design and modeling class we need to build a model couch but we can't find a website to help us on how to build one with a heater. plz help us, thanks. 222. ## hca230 what patient compliance issues are evident inthis scenario 16-12/4 224. ## writing english describe the effects after tsunami 225. ## algebra check add or subtract, -8-3 -8-3= -8+(-3) 3+8= -11 226. ## science what acts as the cell's control center (17-5)(6+5) 228. ## math (17-5)(6+5) Do you know what's an expression is? Can you teach me how to do it? 229. ## math we have a problem that my parents cant help with. 846*2=(...+40+...)*2=(800 over ...+...over2+6 over...)=(400+...+3)...=... ...means blank space for a number. Can you help with this problem, so that I can figure out the rest of the problems.Thanks 230. ## english 2.You will then post to the discussion board your writing strategies for developing your thesis. You will explore the features of the MyCompLab and summarize your experiences.
2020-05-28 04:34:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40016287565231323, "perplexity": 4348.898072837639}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347396495.25/warc/CC-MAIN-20200528030851-20200528060851-00500.warc.gz"}
https://zbmath.org/?q=an:724.45011
# zbMATH — the first resource for mathematics

Global solutions of Boltzmann's equation and the entropy inequality. (English) Zbl 0724.45011

This paper is to some extent an addendum to the earlier remarkable paper of the authors [Ann. Math., II. Ser. 130, No. 2, 321-366 (1989; Zbl 0698.45010)]. In that paper they use the natural (formal) conservation laws of mass, momentum, and entropy associated with the Boltzmann equation to establish global (in space-time) existence of solutions to a modified version of the Boltzmann equation for data $f_0$ satisfying $f_0\geq 0,\quad \int_{\mathbb{R}^N\times \mathbb{R}^N}f_0(1+| \xi |^2+| x|^2+| \log f_0|)\,dx\, d\xi <\infty.$ (The authors termed this form of the Boltzmann equation "renormalized".) The verification of one important stability question was left open in the original paper. If $\{f_n\}$ is a sequence of solutions of the renormalized Boltzmann equation corresponding to initial data $f_{0n}$ at $t=0$ and $\sup_{n}\int_{\mathbb{R}^N\times \mathbb{R}^N} f_{0n}\{1+| x|^2+| \xi |^2+ |\log f_{0n}| \}\,dx\, d\xi<\infty,\quad f_{0n}\geq 0,$ what can be said regarding preservation of equality regarding the rate of dissipation of total entropy? In this paper the authors prove that the sequence $\{f_n\}$ possesses a convergent subsequence, converging to a solution $f$ of the renormalized Boltzmann equation which satisfies the original entropy dissipation rate equality as an inequality. The proof is based on the weak stability theory of the authors' earlier paper.

##### MSC:
45K05 Integro-partial differential equations
82C40 Kinetic theory of gases in time-dependent statistical mechanics

##### References:
[1] L. Arkeryd, On the long time behaviour of the Boltzmann equation in a periodic box. Preprint.
[2] C. Bardos, F. Golse & D. Levermore, In preparation, personal communication.
[3] R. J. DiPerna & P. L. Lions, On the Cauchy problem for Boltzmann equations: global existence and weak stability. Ann. Math. 130 (1989), pp. 321-366. · Zbl 0698.45010
[4] R. J. DiPerna & P. L. Lions, Solutions globales de l'équation de Boltzmann. C. R. Acad. Sci. Paris 306 (1988), pp. 343-346. · Zbl 0662.35016
[5] R. J. DiPerna & P. L. Lions, Solutions globales de l'équation de Boltzmann. In Séminaire Équations aux Dérivées Partielles, École Polytechnique, Palaiseau, 1987-88.
[6] P. Gérard, In Séminaire Bourbaki, Astérisque, SMF, Paris, 1988.
[7] K. Hamdache, In preparation.
[8] J. Polewczak, Global existence in $L^1$ for the modified nonlinear Enskog equation in $\mathbb{R}^3$. Preprint. · Zbl 0719.35071
2021-10-25 08:18:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7092379927635193, "perplexity": 895.3331760624109}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587655.10/warc/CC-MAIN-20211025061300-20211025091300-00431.warc.gz"}
https://byjus.com/question-answer/find-the-point-of-tri-section-of-the-line-segment-joining-the-points-21-and/
Question

# Find the points of trisection of the line segment joining the points $$(-2,1)$$ and $$(7,4)$$.

Solution

## Given: A line segment joining the points $$A \left( -2, 1 \right)$$ and $$B \left( 7, 4 \right)$$.

Let $$P$$ and $$Q$$ be the points on $$AB$$ such that $$AP = PQ = QB$$.

Therefore, $$P$$ and $$Q$$ divide $$AB$$ internally in the ratios $$1 : 2$$ and $$2 : 1$$ respectively.

As we know, if a point $$\left( h, k \right)$$ divides a line joining the points $$\left( {x}_{1}, {y}_{1} \right)$$ and $$\left( {x}_{2}, {y}_{2} \right)$$ in the ratio $$m : n$$, then the coordinates of the point are given as

$$\left( h, k \right) = \left( \cfrac{m{x}_{2} + n{x}_{1}}{m+n}, \cfrac{m{y}_{2} + n{y}_{1}}{m+n} \right)$$

Therefore,

Coordinates of $$P = \left( \cfrac{1 \times 7 + 2 \times \left( -2 \right)}{1 + 2}, \cfrac{1 \times 4 + 2 \times 1}{1 + 2} \right) = \left( 1, 2 \right)$$

Coordinates of $$Q = \left( \cfrac{2 \times 7 + 1 \times \left( -2 \right)}{1 + 2}, \cfrac{2 \times 4 + 1 \times 1}{1 + 2} \right) = \left( 4, 3 \right)$$

Therefore, the coordinates of the points of trisection of the line segment joining $$A$$ and $$B$$ are $$\left( 1, 2 \right)$$ and $$\left( 4, 3 \right)$$.
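The section formula is easy to turn into code. Here is a short Python sketch (an illustration, not part of the original solution) that recomputes both points of trisection exactly using the fractions module:

```python
from fractions import Fraction

def section(p1, p2, m, n):
    """Point dividing the segment p1 -> p2 internally in the ratio m : n."""
    (x1, y1), (x2, y2) = p1, p2
    return (Fraction(m * x2 + n * x1, m + n), Fraction(m * y2 + n * y1, m + n))

A, B = (-2, 1), (7, 4)
print(section(A, B, 1, 2))   # P = (1, 2)
print(section(A, B, 2, 1))   # Q = (4, 3)
```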
2022-01-22 03:35:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6373550295829773, "perplexity": 202.0987026497002}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303729.69/warc/CC-MAIN-20220122012907-20220122042907-00534.warc.gz"}
http://math.stackexchange.com/questions/203001/parametrization-of-a-parabolas-evolute
# Parametrization of a Parabola's Evolute

Let $y=x^2/2$. Its parametric form is $r(t)=t\,\hat i+t^2/2\,\hat j$, and its evolute is $$c(t)=-t^3\,\hat i+\frac{3t^2+2}{2}\,\hat j.\tag{1}$$

[plot of the parabola and its evolute omitted]

When I rewrite $(1)$ as an ordinary function, by letting $x=-t^3$, I get $$y=\frac{3x^{2/3}+2}{2},$$ but the graph of this evolute is nothing like the one above. What am I doing wrong?

-

I think it would be clearer to write $|x|^{2/3}$. Other than that, your formulas are both right, but the plot is wrong – the $y$ coordinate of the evolute at $x=1$ should be $5/2$, not $3/2$, and you can see with the naked eye that the curve in your plot doesn't reflect the centres of curvature of the parabola; there are normals to the parabola that don't even cross that curve on the right side of the parabola.

@Josué: Solving $x=-t^3$ for $t$ yields $t=(-x)^{1/3}$. Substituting that into $t^2$ yields $\left((-x)^{1/3}\right)^2$. This is well-defined because the cube root of a negative number is well-defined, and using $(-x)^{1/3}=-x^{1/3}$ you can write it as $\left(-x^{1/3}\right)^2=\left(x^{1/3}\right)^2=\left(|x|^{1/3}\right)^2$. But if you now combine the exponents into $2/3$, you have to either use the version with the absolute value, or specify what you mean by taking a negative number to the power $2/3$. – joriki Sep 26 '12 at 19:58
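One way to see joriki's point is to plot both forms. The following matplotlib sketch (my own illustration of the discussion, not from the thread) overlays the parametric evolute $(1)$ with the explicit form written using $|x|^{2/3}$; the two curves coincide, whereas $x^{2/3}$ interpreted naively would lose the branch over negative $x$:

```python
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(-1.3, 1.3, 400)
plt.plot(t, t**2 / 2, label="parabola y = x^2/2")
plt.plot(-t**3, (3 * t**2 + 2) / 2, label="evolute, parametric form (1)")

# explicit form with the absolute value, as suggested in the answer
x = np.linspace(-1.3**3, 1.3**3, 400)   # same x-range the parametric form covers
plt.plot(x, (3 * np.abs(x) ** (2 / 3) + 2) / 2, "--", label="y = (3|x|^(2/3) + 2)/2")

plt.axis("equal")
plt.legend()
plt.show()
```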
2015-07-06 11:41:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9148582816123962, "perplexity": 129.18922589485078}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375098196.31/warc/CC-MAIN-20150627031818-00150-ip-10-179-60-89.ec2.internal.warc.gz"}
https://sjmgarnier.github.io/viridisLite/reference/viridis.html
This function creates a vector of n equally spaced colors along the selected color map.

viridis(n, alpha = 1, begin = 0, end = 1, direction = 1, option = "D")

viridisMap(n = 256, alpha = 1, begin = 0, end = 1, direction = 1, option = "D")

magma(n, alpha = 1, begin = 0, end = 1, direction = 1)

inferno(n, alpha = 1, begin = 0, end = 1, direction = 1)

plasma(n, alpha = 1, begin = 0, end = 1, direction = 1)

cividis(n, alpha = 1, begin = 0, end = 1, direction = 1)

rocket(n, alpha = 1, begin = 0, end = 1, direction = 1)

mako(n, alpha = 1, begin = 0, end = 1, direction = 1)

turbo(n, alpha = 1, begin = 0, end = 1, direction = 1)

## Arguments

n — The number of colors ($$\ge 1$$) to be in the palette.

alpha — The alpha transparency, a number in [0,1]; see argument alpha in hsv.

begin — The (corrected) hue in [0,1] at which the color map begins.

end — The (corrected) hue in [0,1] at which the color map ends.

direction — Sets the order of colors in the scale. If 1, the default, colors are ordered from darkest to lightest. If -1, the order of colors is reversed.

option — A character string indicating the color map option to use. Eight options are available: "magma" (or "A"), "inferno" (or "B"), "plasma" (or "C"), "viridis" (or "D"), "cividis" (or "E"), "rocket" (or "F"), "mako" (or "G"), "turbo" (or "H").

## Value

viridis returns a character vector, cv, of color hex codes. This can be used either to create a user-defined color palette for subsequent graphics by palette(cv), as a col = specification in graphics functions, or in par.

viridisMap returns an n-row data frame containing the red (R), green (G), blue (B) and alpha (alpha) channels of n equally spaced colors along the selected color map (n = 256 by default).

## Details

[figure showing swatches of the eight color scales omitted]

magma(), plasma(), inferno(), cividis(), rocket(), mako(), and turbo() are convenience functions for the other color map options, which are useful when the scale must be passed as a function name.

Semi-transparent colors ($$0 < alpha < 1$$) are supported only on some devices: see rgb.

## Author

Simon Garnier: [email protected] / @sjmgarnier

## Examples

library(ggplot2)
library(hexbin)

dat <- data.frame(x = rnorm(10000), y = rnorm(10000))

ggplot(dat, aes(x = x, y = y)) +
  geom_hex() +
  coord_fixed() +
  scale_fill_gradientn(colours = viridis(256, option = "D"))

# using code from RColorBrewer to demo the palette
n <- 200
image(
  1:n, 1, as.matrix(1:n),
  col = viridis(n, option = "D"),
  xlab = "viridis n", ylab = "",
  xaxt = "n", yaxt = "n", bty = "n"
)
2021-05-10 08:29:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3531636595726013, "perplexity": 6358.082379866378}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989115.2/warc/CC-MAIN-20210510064318-20210510094318-00200.warc.gz"}
https://library.fiveable.me/ap-calc/unit-9/ap-calc-unit-9-overview/study-guide/8BdqQaG4sFPsU5Www85A
# Unit 9 Overview: Parametric Equations, Polar Coordinates, and Vector-Valued Functions

Sumi Vora, Kashvi Panjolia

### AP Calculus AB/BC ♾️

There are many different kinds of functions in math because not everything in the world exists on a plane with two variables. So far, everything we have been doing has been on the Cartesian plane, ℝ^2, also known as the xy-plane. However, some functions that model the world around us are better graphed using other coordinate systems, which we will explore in this unit. This unit makes up 11-12% of the AP Calculus BC Exam. As you read through this guide, pay special attention to the formulas mentioned. This unit is very formula-heavy; ideally you should have all these formulas memorized, but some of them you can derive on the exam.

## 9.1 Defining and Differentiating Parametric Equations

Parametric functions are a way to express a relationship between variables in the form of equations that involve time. We will often use parametric functions to express the position of an object moving in space, or to describe the shape of a curve. A parametric equation is typically written in the form:

x = f(t)
y = g(t)

where x and y are the coordinates of a point on the curve, and t represents time. By changing the value of t, we can trace out the entire curve defined by the parametric equations. In a parametric function, both x and y are dependent variables, and time is the independent variable. To find the derivative of a parametric function, we find the derivatives of x(t) and y(t) and set y'(t) over x'(t). When we do this, the dt's cancel out and we are left with the derivative (valid wherever dx/dt ≠ 0):

dy/dx = (dy/dt) / (dx/dt)

## 9.2 Second Derivatives of Parametric Equations

As with equations in the Cartesian plane, we can take the second derivative of a parametric function. The process for finding the second derivative is a bit different from the one you are used to. We use the chain rule after finding the first derivative to arrive at this equation for the second derivative of a parametric function:

d²y/dx² = [d/dt (dy/dx)] / (dx/dt)

Notice how inside the brackets, the formula states we need to find dy/dx, not dy/dt. This means that to find the second derivative, you must first find the first derivative with respect to x, then take the derivative of that first derivative with respect to t (usually using the quotient rule), and then set all of that over the derivative of x(t). As you can see, there are quite a few steps involved, but with some practice, you will master second derivatives in no time.

## 9.3 Finding Arc Lengths of Curves Given by Parametric Equations

The arc length of a function is a measure of the distance along a curve defined by the function — more specifically, the length of the curve between two points. Remember that for Cartesian equations, the formula for the arc length of a curve was:

L = ∫ from x=a to x=b of √(1 + (dy/dx)²) dx

The same logic still applies to parametric equations, but the formula looks a bit different since x and y are both dependent variables. This is the formula for the arc length of a parametric equation:

L = ∫ from t=a to t=b of √((dx/dt)² + (dy/dt)²) dt

In this formula, you are still squaring derivatives, but since there are two dependent variables, we square the derivatives of both variables. Remember to still take the square root of the sum of the two squared derivatives and integrate across your interval. Instead of being "the integral from x=a to x=b," it is now "the integral from t=a to t=b" because t, not x, is the independent variable.
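A short SymPy sketch (added here for illustration; the curve is the same one used in section 9.6 below) shows both derivative formulas and the parametric arc-length integral in action:

```python
import sympy as sp

t = sp.symbols("t")
x, y = t**3 - 6 * t**2, 2 * t**2 - 4 * t

dydx = sp.diff(y, t) / sp.diff(x, t)         # dy/dx = (dy/dt) / (dx/dt)
d2ydx2 = sp.diff(dydx, t) / sp.diff(x, t)    # d²y/dx² = d/dt(dy/dx) over dx/dt
print(sp.simplify(dydx), sp.simplify(d2ydx2), sep="\n")

# arc length from t = 1 to t = 2, evaluated numerically
L = sp.Integral(sp.sqrt(sp.diff(x, t)**2 + sp.diff(y, t)**2), (t, 1, 2)).evalf()
print(L)
```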
## 9.4 Defining and Differentiating Vector-Valued Functions

A vector-valued function is a function that maps a real number to a vector in a vector space. It is written in the form

r(t) = <f(t), g(t)>  or  r(t) = f(t)i + g(t)j

where f(t) and g(t) are real-valued functions and i and j are the unit vectors in the x and y directions respectively. This can be represented geometrically as a point moving in the xy-plane. For parametric equations, vector-valued functions are used to represent the position, velocity and acceleration of an object moving in space. These functions are related to each other in that velocity is the first derivative of position and acceleration is the second derivative of position.

In order to differentiate a vector-valued function, you simply differentiate each of its components individually. For example, to go from this vector-valued function for position:

s(t) = <3t + 2, ln(t + 9)>

to the vector-valued function for velocity, which is the derivative of the position function, you take the derivative of the x-component and the y-component separately; you don't need to combine them:

v(t) = s'(t) = <3, 1/(t + 9)>

Remember, the derivative rule for the natural log of a function is u'/u, where u is the expression inside the parentheses.

## 9.5 Integrating Vector-Valued Functions

Integration of vector-valued functions is the process of finding an antiderivative of a vector-valued function with respect to a scalar variable. It is very similar to differentiating vector-valued functions: you simply integrate each component of the function individually. For example, let's take the integral of a vector-valued velocity function to find the displacement of the object:

v(t) = <2t, 3t²>

To take this integral, we integrate the x and y components separately, like this:

s(t) = ∫ v(t) dt = <∫ 2t dt, ∫ 3t² dt>

Answer: s(t) = <t², t³> (plus a constant vector determined by the initial conditions)

## 9.6 Solving Motion Problems Using Parametric and Vector-Valued Functions

To solve a motion problem using parametric equations, we first need to identify the parametric equations that describe the position of the object. Once we have the parametric equations, we can use them to find the velocity and acceleration of the object. The velocity vector-valued function is the first derivative of the position vector-valued function with respect to t. The acceleration vector-valued function is the second derivative of the position vector-valued function with respect to t.

Let's solve the following problem step by step:

A particle moves in the xy-plane according to the parametric equations:

x = t³ - 6t²
y = 2t² - 4t

where t is measured in seconds. Find the position and velocity of the particle at t = 2 seconds.

To solve this problem using parametric vector-valued functions, we first need to find the position vector-valued function:

r(t) = <x(t), y(t)> = <t³ - 6t², 2t² - 4t>

Next, we need to find the velocity vector-valued function, which is the derivative of the position vector-valued function with respect to t:

v(t) = dr/dt = <dx/dt, dy/dt> = <3t² - 12t, 4t - 4>

To find the position at time t = 2 seconds, we substitute t = 2 into the position vector-valued function:

r(2) = <2³ - 6(2)², 2(2)² - 4(2)> = <8 - 24, 8 - 8> = <-16, 0>

This means the particle is at x = -16 units and y = 0 units at t = 2 seconds. Similarly, we can find the velocity of the particle at any given time by substituting the time into the velocity vector-valued function.
For example, the velocity of the particle at time t = 2 seconds is:

v(2) = <3(2)² - 12(2), 4(2) - 4> = <12 - 24, 8 - 4> = <-12, 4>

This means that the particle has a velocity of -12 units/s in the x-direction and 4 units/s in the y-direction at time t = 2 seconds.

## 9.7 Defining Polar Coordinates and Differentiating in Polar Form

A polar plane is a two-dimensional coordinate system in which the position of a point is determined by its distance from the origin (r) and the angle (theta, θ) measured counterclockwise from the positive x-axis to the line connecting the point to the origin. The polar coordinates of a point (r, θ) in the polar plane are represented by an ordered pair of real numbers, where r is the distance from the origin and theta is the angle measured in radians.

A polar function is a function of the form

r = f(θ)

where r is the distance from the origin to a point on the polar plane, and theta is the angle between the positive x-axis and the line connecting the point to the origin.

To find the derivative of a polar function, we can use the chain rule to derive a formula. It is helpful to memorize the formula, but you can also derive it during the test:

dy/dx = (dr/dθ · sin θ + r cos θ) / (dr/dθ · cos θ - r sin θ)

You can also convert between a polar function and a Cartesian function:

x = r cos θ
y = r sin θ
r² = x² + y² (with tan θ = y/x)

To go from polar to Cartesian, use the first two formulas; to go from Cartesian to polar, use the third.

## 9.8 Finding the Area of a Polar Region or the Area Bounded by a Single Polar Curve

The area of the region enclosed by a polar curve is given by the definite integral

A = (1/2) ∫(a,b) r² dθ

where a and b are the limits of integration and r is the polar function. This integral is calculated by taking the product of 1/2 and the square of the polar function, and then integrating this expression with respect to theta from a to b. It's important to note that this method of finding the area under a polar curve is only valid for closed curves, meaning the curve starts and ends at the same point. If it's not a closed curve, we have to find the area enclosed by the curve and a line connecting the start and the end of the curve.

## 9.9 Finding the Area of the Region Bounded by Two Polar Curves

The area of the region enclosed by two polar curves is given by the definite integral

A = (1/2) ∫(a,b) (R² - r²) dθ

where a and b are the limits of integration, R is the equation of the outer curve and r is the equation of the inner curve. This integral is calculated by taking the difference of the square of the outer curve and the square of the inner curve and then integrating this expression with respect to theta from a to b. This is similar to finding the area between two curves in the Cartesian plane: where you subtracted the bottom curve from the top curve, you now subtract the inner curve from the outer curve.
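A quick SymPy check of the two area formulas (an added illustration using hypothetical curves, not from the guide) — one single closed curve and one region between two curves:

```python
import sympy as sp

theta = sp.symbols("theta")

# single closed curve: the cardioid r = 1 + cos(theta); A = (1/2) ∫ r² dθ = 3π/2
r = 1 + sp.cos(theta)
print(sp.integrate(sp.Rational(1, 2) * r**2, (theta, 0, 2 * sp.pi)))   # 3*pi/2

# region between two curves: outer R = 2 and inner r = 1 (an annulus of area 3π)
print(sp.Rational(1, 2) * sp.integrate(2**2 - 1**2, (theta, 0, 2 * sp.pi)))  # 3*pi
```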
2023-02-09 02:42:59
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9128793478012085, "perplexity": 279.2086441318586}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764501066.53/warc/CC-MAIN-20230209014102-20230209044102-00300.warc.gz"}
https://ltwork.net/ahigh-speed-bullet-train-accelerates-and-decelerates-at--8747574
# A high-speed bullet train accelerates and decelerates at the rate of 10 ft/s². Its maximum cruising speed

###### Question:

A high-speed bullet train accelerates and decelerates at the rate of 10 ft/s². Its maximum cruising speed is 120 mi/h. (Round your answers to three decimal places.) (a) What is the maximum distance the train can travel if it accelerates from rest until it reaches its cruising speed and then runs at that speed for 15 minutes?
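A worked sketch in Python, under one reading of part (a) — the train accelerates from rest to cruising speed and then cruises for a further 15 minutes (an added illustration, not an answer from the page):

```python
# Constant acceleration to cruising speed, then constant-speed cruising
a = 10.0                       # acceleration, ft/s^2
v_max = 120 * 5280 / 3600      # 120 mi/h = 176 ft/s
t_acc = v_max / a              # 17.6 s to reach cruising speed
d_acc = 0.5 * a * t_acc**2     # 1548.8 ft covered while accelerating
d_cruise = v_max * 15 * 60     # 158400 ft covered in 15 minutes of cruising
total_ft = d_acc + d_cruise
print(round(total_ft, 3), "ft =", round(total_ft / 5280, 3), "mi")  # 159948.8 ft ≈ 30.293 mi
```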
2022-09-28 22:11:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21464355289936066, "perplexity": 2678.485210690332}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00145.warc.gz"}
https://www.muchlearning.org/?page=62&ccourseid=1437&sectionid=1412
# Step By Step Calculus » 11.6 - Related Rates

Synopsis

Now we will consider the situation where $x$ and $y$ are related through an equation $g(x,y)=a$, but $a$ is no longer a constant; rather, it depends upon time. To find the rate of change $\frac{da}{dt}$ of one item $a$ in terms of the rates of change $\frac{dx}{dt}$, $\frac{dy}{dt}$ of other items $x$, $y$, we often derive an equation that relates these items, use the techniques of differentiation on the derived equation, and then solve for the desired rate.

In the explicit case, we have $g(x(t),y(t))=a(t)$ as well as $x(t)$, $y(t)$, $\frac{dx}{dt}$, and $\frac{dy}{dt}$, and we want $\frac{da}{dt}$. We find $\frac{da}{dt}$ by substituting $x(t)$, $y(t)$, $\frac{dx}{dt}$, and $\frac{dy}{dt}$ into the two-dimensional chain rule:

$$\frac{da}{dt}=\frac{\partial}{\partial x}g(x(t),y(t))\frac{dx}{dt}+\frac{\partial}{\partial y}g(x(t),y(t))\frac{dy}{dt}.$$
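A small SymPy sketch (an added illustration with a hypothetical $g(x,y)=x^2+y^2$, not from the source) confirms that differentiating $g(x(t),y(t))$ with respect to $t$ reproduces the two-dimensional chain rule above:

```python
import sympy as sp

t = sp.symbols("t")
x = sp.Function("x")(t)
y = sp.Function("y")(t)

g = x**2 + y**2           # hypothetical g(x, y); here a(t) = g(x(t), y(t))
da_dt = sp.diff(g, t)     # SymPy applies the chain rule term by term
print(da_dt)              # 2*x(t)*Derivative(x(t), t) + 2*y(t)*Derivative(y(t), t)
```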
2019-01-17 11:45:09
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9875418543815613, "perplexity": 5212.031831816285}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583658928.22/warc/CC-MAIN-20190117102635-20190117124635-00264.warc.gz"}
http://openstudy.com/updates/4db8e5ccb0ab8b0b0ef37a8b
anonymous 5 years ago I need help with exponential regression... I have E(x)=164.96(1.03)^x as the equation for high school graduates... I need to find the first year of the decade in which the number of graduates will reach 5 million

1. anonymous E is a function of x; E is what and x is what? I'm guessing E is graduates and x is year?
2. anonymous yes, I used the calc to get a, b and r
3. anonymous plug 5 mill into E(x) and solve for x
4. anonymous yes, got that, but do I ln or log... can't remember
5. anonymous years are in decades and graduates are in 1000s, so would I divide 5 mil by 100
6. anonymous do log base 1.03; there is probably a key on your calculator
7. anonymous is that ln
8. anonymous actually I'm sorry, you are right: take the natural log of both sides, and the exponent x will come out as a multiple of ln(1.03), so $\ln(1.03^x) = x\ln(1.03)$. ln(1.03) is just a constant, so you can divide by it on both sides
9. anonymous thank you for the confirmation
10. anonymous yeahp
11. anonymous one more thing: I have year 2050 from the linear function and year 2070 from the exponential... which one seems more reasonable?
12. anonymous based on the values of the correlation factor r, the linear function would be; but if you're looking at 5 million, wouldn't it take fewer years to get there exponentially versus linearly
13. anonymous I'm not sure from what was given; I only know of E(x) and x
14. anonymous L(x)=33.79x-135.77
15. anonymous used 5000 instead of 5 mil bc grads are in 1000s, is that right?
16. anonymous yeah
17. anonymous yeah to which question, lol?
18. anonymous lol, just agreeing with the thing you said about linear versus exponential growth. it would have to be 2050, but I would look into that more as to how to get to that answer
19. anonymous ok thanks!
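A quick numeric check of the algebra discussed in this thread; a minimal sketch assuming the thread's model E(x) = 164.96(1.03)^x with graduates in thousands, so the 5-million target is E(x) = 5000. The thread is ambiguous about whether x counts years or decades, and the base year is never stated, so the base year below is an explicitly hypothetical parameter:

public class GraduatesSolve {
    public static void main(String[] args) {
        // Model from the thread: E(x) = 164.96 * 1.03^x, E in thousands of graduates.
        double a = 164.96, b = 1.03, target = 5000.0; // 5 million graduates = 5000 thousands
        // Solve a * b^x = target  =>  x = ln(target / a) / ln(b)
        double x = Math.log(target / a) / Math.log(b);
        System.out.printf("x = %.1f (periods after the base year)%n", x); // about 115.4
        int baseYear = 1950; // hypothetical base year, for illustration only
        System.out.println("Estimated year: " + (baseYear + (int) Math.ceil(x)));
    }
}

With that hypothetical 1950 base year the estimate lands in the 2060s, which is consistent with the "year 2070 from the exponential" mentioned in the thread once you round up to the first year of the decade.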
2016-10-26 07:50:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5593996644020081, "perplexity": 2205.403941320268}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720760.76/warc/CC-MAIN-20161020183840-00221-ip-10-171-6-4.ec2.internal.warc.gz"}
https://www.techwhiff.com/issue/today-al-bought-36-hats-and-now-has-triple-the-number--519366
# Today Al bought 36 hats and now has triple the number of hats he had yesterday. If tomorrow Al buys triple the number of hats he has today, how many hats will Al have after tomorrow's purchase?

### Plssssa helppp Add + (-25) using the number line. Select the location on the number line to plot the sum.

### When Odysseus approaches the island of the Sirens, which action mesmerizes him? A. Hearing the Sirens singing B. Seeing the Sirens dancing C. Seeing the Sirens playing D. Hearing the Sirens laughing

### In a group discussion, a participant says, "I would never have picked up the diary. It's not worth the risk." How would you respond to share your viewpoint? Write three sentences. HELP PLZ

### How many times can a 9/2 yard line be divided into 3-inch segments? a. 45 times b. 54 times c. 64 times d. 84 times

### WHAT PERCENTAGE OF EUROPE'S POPULATION WAS LOST DURING THE BLACK DEATH IN THE 14TH CENTURY?

### Michael draws a diagram of the outside face of a wheel he needs to replace on his go-cart.

### How did instability in the French government create an opportunity for Napoleon to take power?

### How do reference angles work and how do you find them?

### What is web based, what is local, and what is mobile in computing?

### How many moles of MgCl2 will be produced from 65.0 g of Mg(OH)2, assuming HCl is available in excess?

### 2 + 2 = whattttt ukmkkni

### How long ago was it believed that the covenant between God and Abraham was made? A. Almost a thousand years ago B. Almost two thousand years ago C. Less than three thousand years ago D. More than four thousand years ago

### Difference between Hofstede and Trompenaars

### Lthsjdmeksnf there fuentes then fe did fue

### Plzzzzzzzzzzzzzzzzzzz
2022-10-04 03:37:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26920193433761597, "perplexity": 5762.824619819121}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00316.warc.gz"}
http://tex.stackexchange.com/questions/67898/biblatex-collaborators-field
# Biblatex: collaborator(s) field

I've been pulling my hair out the last several days trying to configure biblatex to handle single and multiple collaborators in bibliography entries. Exasperated sigh. Given this example:

\documentclass{article}
\usepackage[style=authoryear]{biblatex}
\usepackage{filecontents}
\begin{filecontents}{\jobname.bib}
@book{LadymanRoss07,
author = {Ladyman, James and Ross, Don},
collaborator = {David Spurrett and Collier, John},
title = {Every Thing Must Go: Metaphysics Naturalized},
publisher = {Oxford University Press},
year = {2007}
}
\end{filecontents}
\addbibresource{\jobname.bib}
\nocite{*}
\begin{document}
\printbibliography
\end{document}

how can I elegantly achieve:

Ladyman, James and Don Ross (2007). Every Thing Must Go: Metaphysics Naturalized. With collaborators: David Spurrett and John Collier. Oxford University Press.

in the output, and With collaborator: if only one collaborator were involved?

FYI, my current most unsatisfactory "fix" involves changing collaborator = to annotator = wherever it appears in my .bib files, and adding \DefineBibliographyStrings{english}{ } to my biblatex configuration code. (This is clearly an awful "solution" in a multitude of ways!)

-

I think that the most appropriate way to enable you to have "collaborator" in the database and still use the standard styles without fiddling with any drivers is to add a sourcemap (you will need Biber 1.0+ and biblatex 2.0+). The "magic" here is in the \DeclareSourcemap section, which basically works as follows:

1. Identify entries with a collaborator field; if the entry doesn't have such a field, process it no further: \step[fieldsource=collaborator, final=true]
2. Copy the collaborator field to the editora field: \step[fieldset=editora, origfieldval]
3. Set the editoratype field to "collaborator": \step[fieldset=editoratype, fieldvalue=collaborator]

If you were happy with the "standard" formatting ("In collab[oration] with") that would be all you needed. A bit of extra complexity creeps in because you want a different format, "With collaborator(s)". The trouble here is that in the "bytype" string, the standard biblatex styles don't attempt to distinguish between single and multiple names. At that point you have three options:

1. If you want to keep things simple, roll with the punches and accept a format which can work for one or many collaborators, such as the default. This is simplest, and unless you are wedded to the "with collaborators" formula probably best. If so, you could delete the whole of the second map step, the \NewBibliographyString and the definition of a bycollaborators string.
2. If you want to be "correct", rewrite internal macros to identify multiple names and print appropriately different introductory strings. This would be quite a bit of work, and probably more than is really justifiable.
3. Distinguish in the .bib file itself between one and multiple collaborators, by using "collaborator" for one and "collaborators" for two or more. As far as biblatex is concerned it is then dealing with quite different "types" ("collaborators" might as well be "turnipwranglers" as far as it's concerned), but by defining a suitable bibstring to handle the plural form the problem is solved. That's not perfect, but it seems acceptable, and so that's what I've done here.
The end result, given your input fractionally changed so that "collaborator" becomes "collaborators" (for reasons explained above), is as follows:

\documentclass{article}
\usepackage{filecontents}
\usepackage[style=authoryear,backend=biber]{biblatex}
\DeclareSourcemap{
  \maps[datatype=bibtex,overwrite=false]{
    \map{
      \step[fieldsource=collaborator, final=true]
      \step[fieldset=editora, origfieldval]
      \step[fieldset=editoratype, fieldvalue=collaborator]
    }
    % THIS MAP STEP IS ONLY THERE TO ENABLE US TO USE "COLLABORATORS"
    % AS WELL AS "COLLABORATOR", BECAUSE THE QUESTION WANTS TO USE THE
    % "WITH COLLABORATORS" INTRODUCTION
    \map{
      \step[fieldsource=collaborators, final=true]
      \step[fieldset=editora, origfieldval]
      \step[fieldset=editoratype, fieldvalue=collaborators]
    }
  }
}
\NewBibliographyString{bycollaborators}% ONLY FOR "WITH COLLABORATORS"
\DefineBibliographyStrings{english}{% AND ONLY FOR "WITH COLLABORATORS"
  bycollaborators = {with collaborators},% assumed wording; the original string text did not survive extraction
}
\begin{filecontents}{\jobname.bib}
@book{LadymanRoss07,
author = {Ladyman, James and Ross, Don},
collaborators = {David Spurrett and Collier, John},
title = {Every Thing Must Go: Metaphysics Naturalized},
publisher = {Oxford University Press},
year = {2007}
}
\end{filecontents}
\addbibresource{\jobname.bib}
\nocite{*}
\begin{document}
\printbibliography
\end{document}

-

Paul, your solution and @Samuel's are both wonderful. I really struggled to decide which was the best to accept. In the end, I decided that since your answer came a hair's width closer to addressing my question as asked, and since with \map directives you gave me extra insight into biblatex solutions, I leaned toward your answer ever so slightly more. FYI, in the end I got exactly what I wanted by, in conjunction with your code, adding this line of sed code: s/\(collaborator\)\([ ]*=[ ]*{[^}]* and [^}]*}\)/\1s\2/g to my already existing .bib file pre-processing script. – Nikki Aug 22 '12 at 10:58

That's a neat solution: I thought of trying to do it using \step[match=...], but I'm afraid my fortitude deserted me! – Paul Stanley Aug 22 '12 at 11:55

According to biblatex's manual, handling collaborators can be achieved by setting the list in an editor field and using editortype=collaborator. For biblatex, this says semantically that you are not giving an editor entry but a collaborator one; the developers didn't want to multiply the new keys too much, in order to keep things fairly standard. You can also specify the way you want it to refer to the collaborator.

• By default, the collaborator keyword translates into collaborator, or collab. for the short version.
• The bycollaborator keyword translates into in collaboration with or in collab. with.
• You can also use the key collaborators, which translates into collaborators or collab..

Note however that this doesn't seem to behave quite as described by the documentation on my computer, and that the keyword translations are not available for every language. Also, these keys are followed by a bunch of % FIXME: unsure in the language files, so maybe it still doesn't work quite properly. When you give biblatex a list of names, you should separate the names with the and keyword, as the comma is recognised as a surname/first-name separator. That way, biblatex interprets your entry as David Spurrett and John Collier.
So you should have something like this:

@book{LadymanRoss07,
author = {Ladyman, James and Ross, Don},
editor = {David Spurrett and Collier, John},
editortype = {collaborator},
title = {Every Thing Must Go: Metaphysics Naturalized},
publisher = {Oxford University Press},
year = {2007}
}

I think this should work for you. On my test document, using the english language, I obtain the following:

[1] James Ladyman and Don Ross. Every Thing Must Go: Metaphysics Naturalized. In collab. with David Spurrett, Collier, and John. Oxford University Press, 2007.

This seems to be pretty much what you are expecting.

-

Right, I'm sorry for that. This was supposed to be kind of a rhetorical question. I have edited by changing the formulation to a more affirmative one and giving the result of my tests with this. I also made a slight correction (I shouldn't have used braces for the keyword bycollaborator, in order to have proper translations). I hope this is better. Thank you for your direction to a better way of answering (I'm pretty new here). – Samuel Albert Aug 20 '12 at 12:24

As I said in my answer, biblatex detects and as a list separator and not as a word. For it, the comma is also a separator indicating that the surname comes first and the first name second (mainly useful for surnames with particles, e.g. {de Rabutin Chantal, Marie} for Mme de Sévigné). Therefore, even if biblatex is pretty clever about what you give as an input, in order to be sure it doesn't get mistaken, you should always use and to separate names. Then, it will replace it by a comma if appropriate. – Samuel Albert Aug 20 '12 at 12:50

@Samuel, thank-you very much for your answer. However, per my question, I would very much like a) to use the collaborator field (otherwise, if I am to use semantically inaccurate fields like editor and editortype, I might as well stay with the single annotator field, per my "awful solution"), and b) again per my question, I am seeking a solution that if possible distinguishes between With collaborator: and With collaborators: depending on the number of contributors in the list. [BTW, further to @Mico's comment, my list contains 2 not 3 people (your extra "and" is unnecessary).] – Nikki Aug 20 '12 at 13:22

@Nikki Indeed, the collaborator field you are referring to doesn't exist in biblatex, and semantically speaking, the editortype={collaborator} entry means that it is in fact a collaborator you are referring to and not an editor. Then, you can use the keys collaborator and collaborators to distinguish between singular and plural. However, the texts included in the language files don't currently distinguish between both, and you might have to overwrite the captions. @Mico Right, sorry, I thought it was 3 persons. I will edit. – Samuel Albert Aug 20 '12 at 13:27

If you want a "collaborators" field, you can add one to the datamodel with biblatex 2.0/biber 1.0. See section 4.5.3 of the current biblatex manual. Of course you would need to change the driver for @book to output the field. – PLK Aug 20 '12 at 14:10

Here is an example of another book with the same problem. This solution is from MathSciNet; they use the note field in such cases.

@BOOK{Buergisser1997, AUTHOR = {B{\"u}rgisser, Peter and Clausen, Michael and Shokrollahi, M.
Amin}, TITLE = {{A}lgebraic {C}omplexity {T}heory}, SERIES = {Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]}, VOLUME = {315}, NOTE = {With the collaboration of Thomas Lickteig}, PUBLISHER = {Springer-Verlag}, YEAR = {1997}, PAGES = {xxiv+618}, ISBN = {3-540-60582-7}, MRCLASS = {68-02 (12Y05 65Y20 68Q05 68Q15 68Q25 68Q40)}, MRNUMBER = {1440179 (99c:68002)}, MRREVIEWER = {Alexander I. Barvinok}, } -
2013-05-25 00:22:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7843416333198547, "perplexity": 2980.1892134198}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705300740/warc/CC-MAIN-20130516115500-00048-ip-10-60-113-184.ec2.internal.warc.gz"}
https://solvedlib.com/n/2-1-1-h-1-vh-3-j-1-1-1i-1-3-1-ii-x27-j-8-m-1-1-1-7-1-1-v,12949213
Question: (illegible; the original problem was a scanned image that did not survive extraction)

Similar Solved Questions

The following information is available for two samples drawn from independent normally distributed populations. Population A: $n = 11$, $s^2 = 125.1$. Population B: $n = 11$, $s^2 = 196.3$. What is the value of $F_{STAT}$ if you are testing the null hypothesis $H_0\colon \sigma_1^2 = \sigma_2^2$? The value of $F_{STAT}$ is ___ (round to two decimal places as needed).

Determine whether the following improper integral involving $\arcsin$ converges.

Suppose that the functions $f$ and $g$ are defined as follows: $f(x) = -3x + 2$, ... Find $f-g$ and $f \cdot g$. Then give their domains using interval notation. Domain of $f-g$: ... Domain of $f \cdot g$: ...

When a nonmetal on the right side of the periodic table bonds with a nonmetal on the right side of the periodic table, what is it called?

Need help. Problems (20 points each). 1. a) For the following collection of point charges, calculate the magnitude and direction of the total electric field at point P (from the garbled figure: charges of 750 µC, $-100$ µC and 100 µC; distances of 80 cm, 30 cm and 10 cm). b) What is the electric potent...

A hypothesis will be used to test that a population mean equals 5 against the alternative that the population mean is less than 5, with unknown variance. What is the critical value for the test statistic $T_0$ for the following significance levels? Round your answers to three decimal places (e.g. 98.765). (a) $\alpha = 0.01$ and $n = 15$. (b) $\alpha = 0.05$ and $n = $ ... (c) $\alpha = 0.10$ and $n = $ ...

Question 4. A natural monopoly exists when ... Select one: a. there are no close substitutes for a firm's product. b. a monopolist produces a product, the main component of which is a natural resource. c. a firm is the exclusive owner of a key resource necess...

Question 9 (1 point). If the atomic radius of a metal that has the simple cubic crystal structure is $3.40\times10^{-1}$ nm, calculate the volume of its unit cell (in nm$^3$). Use scientific notation with 3 significant figures (X.YZ $\times 10^n$). Note that Avenue automatically enters $\times 10$, so you only need to enter X...

Consider the following data set: 24, 56, 110, 192, 308, 464. By using the divided-difference smoothing technique, we would like to use a low-order polynomial as an empirical model. The following table shows the divided difference table: ... From the divided difference table we know we should use a polynomial of order ... Furthermore, by using the polynomial and Excel (use "add trendline", which shows the polynomial equation), the fitting polynomial is $P(r) = $ ... In here, ...

Na2SO4(s), KBr(s), CH3OH(l), HC2H3O2(l), K3PO4(s). 3) Rank the compounds in question 2 in order of increasing conductivity in water. Lowest - Highest.

You want to buy some candy for your birthday party. You go to two different grocery stores and see the following special offers: Salt Water Taffy, 3 lbs for \$4.50. a) Complete the table for each offer. b) Graph each offer, using a dotted line for the second offer; label your axes. c) What is the first offer's unit rate? What is the second offer's unit rate? d) Which is the better deal for Salt Water Taffy? Explain your reasoning.

31. $f(0)=3$; $f'(3)=0$; $f(6)=$ ...; $f'(x)<0$ on $(0,3)$; $f'(x)>0$ on $(3,6)$; $f''(x)>0$ on $(0,5)$; $f''(x)<0$ on $(5,6)$. 32. $f(0)=3$; $f(2)=2$; $f(6)=0$; $f'(x)<0$ on $(0,2)\cup(2,6)$; $f'(2)=0$; $f''(x)<0$ on $(0,1)\cup(2,6)$; $f''(x)>0$ on $(1,2)$. 33. $f(0)=f(4)=1$; $f(2)=2$; $f(6)=0$; $f'(x)>0$ on $(0,2)$; $f'(x)<0$ on $(2,4)\cup(4,6)$; $f'(2)=f'(4)=0$; $f''(x)>0$ on $(0,1)\cup(3,4)$; $f''(x)<0$ on $(1,3)\cup(4,6)$.

A hypodermic syringe whose cylinder has a cross-sectional area of $60 \mathrm{mm}^{2}$ is used to inject a liquid medicine into a patient's vein in which the blood pressure is 2 kPa. (a) What is the minimum force needed on the plunger of the syringe? (b) Why is the cross-sectional area of the needle irrelevant?
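For the syringe exercise at the end of this list, part (a) is a one-line application of $F = PA$; a standard computation, not taken from the source page:

$$F = P\,A = (2\ \text{kPa})(60\ \text{mm}^{2}) = (2000\ \text{Pa})(60\times 10^{-6}\ \text{m}^{2}) = 0.12\ \text{N}.$$

For part (b), the fluid transmits pressure rather than force, so only the plunger area on which the external force acts enters the calculation.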
2022-05-29 02:36:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4969528913497925, "perplexity": 3528.739127877084}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663035797.93/warc/CC-MAIN-20220529011010-20220529041010-00044.warc.gz"}
https://byjus.com/question-answer/fruit-juice-is-sold-at-a-backyard-sale-in-two-different-sized-cups-the-amount/
Question # Fruit juice is sold at a backyard sale in two different sized cups. The amount earned by selling juices from 5 such batches is ___.

A. \$5(0.6m+1.5n)
B. \$5(0.6n+1.5n)
C. \$2.1mn

Solution ## The correct option is A, \$5(0.6m+1.5n). The price of a small cup is \$0.6 and the price of a tall cup is \$1.5. There are $m$ small cups and $n$ tall cups in a batch, and there are 5 such batches. We multiply the earnings from 1 batch by 5 to find the total earnings. Hence, the total earnings are \$5(0.6m + 1.5n).
2023-02-09 12:46:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3089032471179962, "perplexity": 4493.990230485793}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499966.43/warc/CC-MAIN-20230209112510-20230209142510-00651.warc.gz"}
https://homework.cpm.org/category/CCI_CT/textbook/calc/chapter/11/lesson/11.3.1/problem/11-103
### Problem 11-103

11-103. Multiple Choice: The radius of convergence of $\displaystyle\sum _ { n = 1 } ^ { \infty } n ! x ^ { n }$ is:

(A) $-1$ (B) $0$ (C) $\frac { 1 } { 2 }$ (D) $1$ (E) $\sqrt { 2 }$

This series diverges for all $x \neq 0$, so the radius of convergence is $0$.
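The conclusion follows from the Ratio Test; a short standard derivation, not part of the original hint:

$$\lim_{n\to\infty}\left|\frac{(n+1)!\,x^{n+1}}{n!\,x^{n}}\right| = \lim_{n\to\infty}(n+1)\,|x| = \infty \quad \text{for every } x \neq 0,$$

so the series converges only at $x=0$ and the radius of convergence is $0$, choice (B).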
2022-06-29 04:00:07
{"extraction_info": {"found_math": true, "script_math_tex": 7, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9625868797302246, "perplexity": 1370.0874340129053}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103620968.33/warc/CC-MAIN-20220629024217-20220629054217-00245.warc.gz"}
http://www.mountainpathmedia.com/ven2ihd/6pd5cnl.php?tag=8354de-negative-infinity-divided-by-infinity
I.e., infinity divided by negative infinity is $-1$, not negative infinity. Actually, the quotient $\infty/(-\infty)$ is indeterminate and could be any value, including infinity.

L'Hopital's rule for (infinity over minus infinity). Asked 4 years, 2 months ago. Can I apply L'Hopital's rule to $$\lim_{x\to0}\frac{f(x)}{g(x)}$$ when $\lim_{x\to0}f(x) = \infty$ and $\lim_{x\to0}g(x) = -\infty$? Is this an indeterminate form?

And if we have an infinity divided by another half-as-big infinity, would we get 2? For example, is $\frac{1+1+1+\ldots}{2+2+2+\ldots}=\frac12$? Positive infinity divided by negative infinity = negative infinity divided by positive infinity, is it not?

There are infinitely many real numbers, all of which are finite. In calculus, one considers limits; to say "X approaches infinity" means to consider what happens as X takes on ever larger finite values. It depends on the number of zeros in the googolplex number you're considering; accordingly, the decimal shifts to the left on the infinite number you're taking.
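For the L'Hopital question quoted above, the rule does apply to $\pm\infty/\pm\infty$ forms; a standard worked example, not from the original page:

$$\lim_{x\to 0^{+}} \frac{\ln x}{1/x} = \lim_{x\to 0^{+}} \frac{1/x}{-1/x^{2}} = \lim_{x\to 0^{+}} (-x) = 0,$$

a $\frac{-\infty}{+\infty}$ form handled directly by the rule.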
2021-01-18 04:03:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6342887282371521, "perplexity": 2048.298537540902}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703514121.8/warc/CC-MAIN-20210118030549-20210118060549-00686.warc.gz"}
https://socratic.org/questions/5867d3f27c01493d0cf1b4c7
# Which of the following functions has a domain of all the real numbers?

## a) $y = \cot x$ b) $y = \sec x$ c) $y = \sin x$ d) $y = \tan x$

Jan 22, 2017

C. $y = \sin x$

#### Explanation:

We need to look for asymptotes here. Whenever there are asymptotes, the domain will have restrictions.

A: $y = \cot x$ can be written as $y = \frac{\cos x}{\sin x}$ by the quotient identity. There are vertical asymptotes whenever the denominator equals $0$, so if $\sin x = 0$, then $x = 0, \pi$. These will be the asymptotes in $0 \le x < 2\pi$. Therefore, $y = \cot x$ is not defined for all the real numbers.

B: $y = \sec x$ can be written as $y = \frac{1}{\cos x}$. Vertical asymptotes in $0 \le x < 2\pi$ will be where $\cos x = 0$, i.e. $x = \frac{\pi}{2}, \frac{3 \pi}{2}$. Therefore, $y = \sec x$ does not have a domain of all the real numbers.

C: $y = \sin x$ has a denominator of $1$, so it will never have a vertical asymptote. It is also continuous, so this is the function we're looking for.

D: $y = \tan x$ can be written as $y = \frac{\sin x}{\cos x}$, which will have asymptotes at $x = \frac{\pi}{2}$ and $x = \frac{3 \pi}{2}$ in $0 \le x < 2\pi$. It does not have a domain of all real numbers.

Hopefully this helps!
2021-09-16 22:51:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 24, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8485168218612671, "perplexity": 345.9066337533475}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780053759.24/warc/CC-MAIN-20210916204111-20210916234111-00169.warc.gz"}
https://math.stackexchange.com/questions/3061755/non-trivial-solution-of-fredholm-integral-equation-of-second-kind-with-constant
# Non-trivial solution of a Fredholm integral equation of the second kind with constant kernel

Let us consider the following integral equation: $$f(x) + \lambda \int_0^1 K(s,x)f(s)\,ds = 0,\quad x \in (0,1).$$ I'm looking for the values of $\lambda$ for which the above equation has only $f=0$ as a solution when the kernel is constant. Suppose that $K(s,x)=K$; we obtain $$f(x) + \lambda K\int_0^1 f(s)\,ds = 0,\quad x \in (0,1).$$ By taking the integral over $(0,1)$, we get $$(1 + \lambda K)\int_0^1 f(s)\,ds = 0,$$ which holds for any solution $f$. Now, if $\lambda$ is different from $-1/K$, then $\int_0^1 f(s)\,ds = 0$. I don't see how this can be helpful. Any suggestions? Thank you.

• The function has mean value $0$ in the integral sense over the interval $0$ to $1$. It removes one degree of freedom. This means you have an infinite set of solutions. Any function whose mean value is $0$ will do. – mathreadler Jan 4 at 15:48
• So if $\lambda= - 1/K$ we have only one solution? – Gustave Jan 4 at 15:54
• If the other factor is $0$ then it does not matter what $f$ is, since the product will always be $0$, so then all functions $f$ will satisfy it. – mathreadler Jan 4 at 16:01
• Thanks. I understand, but what can I say about the uniqueness of the trivial solution with respect to $\lambda$? – Gustave Jan 4 at 16:09

Your step of taking the integral is too crude, at least initially. When you have $$f(x)+\lambda K\int_0^1f(s)\,ds=0,$$ you can write this as $$f(x)=-\lambda K\int_0^1f(s)\,ds$$ to conclude that $f$ is constant. If $\lambda=0$, you get $f=0$. If $\lambda\ne0$ and $\lambda\ne-1/K$, your trick of integrating again gives you that $\int_0^1 f=0$, so $f=0$. When $\lambda=-1/K$ the solution is not unique, as any constant $f$ will be a solution.
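To make the final remark explicit, here is a one-line check, consistent with the answer above, that constants solve the equation when $\lambda = -1/K$:

$$f(x) \equiv c:\qquad c + \left(-\frac{1}{K}\right) K \int_{0}^{1} c \, ds = c - c = 0,$$

so every constant function is a solution at $\lambda = -1/K$, confirming the non-uniqueness.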
2019-05-19 12:47:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 22, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9693503379821777, "perplexity": 116.72715471752099}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232254882.18/warc/CC-MAIN-20190519121502-20190519143502-00128.warc.gz"}
https://mathoverflow.net/questions/43282/invertible-elements-in-monoid-rings-of-unital-monoids-without-non-trivial-invert
Invertible elements in monoid rings of unital monoids without non-trivial invertible elements

This question is somewhat related to Tilman's notorious problem in this post. Let $(M,\cdot)$ be a monoid with unit $1$ and set $$(M,\cdot)^{\times} := \lbrace x \in M \mid \exists y \in M : xy=yx=1 \rbrace.$$ Let $k$ be a field (say $\mathbb Z/ 2 \mathbb Z$) and let $k[M]$ be the monoid ring of $M$ with coefficients in $k$. Consider now $$GL(k[M]) := (k[M],\cdot)^{\times}.$$ Question: (answered by Torsten Ekedahl) Can it happen that $(M,\cdot)^{\times} = \lbrace 1 \rbrace$ but $\lbrace 1 \rbrace \subsetneq GL(k[M])$? EDIT: Torsten Ekedahl has nicely answered the above question. However, since I was really missing a condition, I will take the opportunity to change it slightly. Question: If $(M,\cdot)^{\times} = \lbrace 1 \rbrace$, can $k[M]$ contain an invertible element $z \in GL(k[M])$ such that the coefficient of $z$ at $1$ is zero?

-

Let $R$ be a finite dimensional algebra over $\mathbb Z/2$. Then $\{1\}\neq R^\times$ unless $R=(\mathbb Z/2)^n$. Indeed, if $N$ is the radical of $R$, then $1+N\subseteq R^\times$, so we may assume $R$ is semi-simple. Then $R=\prod_iR_i$ where the $R_i$ are simple algebras and $R^\times=\prod_iR_i^\times$, so we may assume that $R$ is simple and hence a matrix algebra over some extension field of $\mathbb Z/2$. The only such algebra with only the trivial unit is $\mathbb Z/2$. Now, pick any finite monoid $M$ with $M^\times=\{1\}$ and apply the above to $R=\mathbb Z/2[M]$. This gives that $R^\times=\{1\}$ precisely when $M$ is a commutative monoid in which every element is idempotent. As an explicit example where this is not the case, we may let $M$ consist of the identity matrix together with all non-invertible matrices of fixed size $>1$ over some finite field.

-

I see. This also implies that every unit in such a monoid ring contains $1$ in its support. That is where I went wrong. I thought the existence of non-trivial left-invertible elements would be a consequence of the existence of units in the monoid ring, but that is wrong. I hope you allow me to change the question slightly. – Andreas Thom Oct 23 '10 at 12:17

If your monoid $M$ is finite, the non-invertible elements form an ideal of the monoid and hence they span an ideal of $kM$. So any invertible element of the algebra must have an invertible element of the monoid in its support. So the answer to your second question is no. Sorry, I had the impression from the previous answers that you were looking at finite monoids. More generally, the set $L$ of elements of the monoid $M$ that are not left invertible is a proper left ideal. Therefore, the span $kL$ is a proper left ideal of $kM$. It follows that any invertible element must contain a left-invertible element in its support, and dually a right-invertible element. I will try to think whether this situation can come up without having an invertible element. – Benjamin Steinberg Jun 25 '11 at 14:00

I checked in the literature and it seems that every unit in the algebra of the monoid defined by the presentation $\langle a,b\mid ab=1\rangle$ does have $1$ in its support. – Benjamin Steinberg Jun 27 '11 at 20:55
2014-04-19 12:15:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9537104368209839, "perplexity": 119.49058975566616}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00155-ip-10-147-4-33.ec2.internal.warc.gz"}
http://specialfunctionswiki.org/index.php/Hacoversine
# Hacoversine The hacoversine function $\mathrm{hacoversin} \colon \mathbb{C} \rightarrow \mathbb{C}$ is defined by $$\mathrm{hacoversin}(z) = \dfrac{1-\sin(z)}{2},$$ where $\sin$ denotes sine.
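A tiny numerical illustration of the definition. This sketch restricts to real arguments because plain Java has no built-in complex type; the wiki defines the function on all of $\mathbb{C}$:

public class Hacoversine {
    // hacoversin(z) = (1 - sin z) / 2, restricted here to real z.
    static double hacoversin(double z) {
        return (1.0 - Math.sin(z)) / 2.0;
    }

    public static void main(String[] args) {
        System.out.println(hacoversin(0.0));          // 0.5, since sin(0) = 0
        System.out.println(hacoversin(Math.PI / 2));  // 0.0, since sin(pi/2) = 1
    }
}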
2018-04-22 04:40:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000017881393433, "perplexity": 1110.0510677117177}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945493.69/warc/CC-MAIN-20180422041610-20180422061610-00290.warc.gz"}
https://access.openupresources.org/curricula/our6-8math-v1/8/families/5.html
## Inputs and Outputs

This week, your student will be working with functions. A function is a rule that produces a single output for a given input. Not all rules are functions. For example, here's a rule: the input is "first letter of the month" and the output is "the month." If the input is J, what is the output? A function must give a single output, but in this case the output of this rule could be January, June, or July, so the rule is not a function. Here is an example of a rule that is a function: input a number, square it, then multiply the result by $\pi$. Using $r$ for the input and $A$ for the output, we can draw a diagram to represent the function:

We could also represent this function with an equation, $A=\pi r^2$. We say that the input of the function, $r$, is the independent variable and the output of the function, $A$, is the dependent variable. We can choose any value for $r$, and then the value of $A$ depends on the value of $r$. We could also represent this function with a table or as a graph. Depending on the question we investigate, different representations have different advantages. You may recognize this rule and know that the area of a circle depends on its radius.

Jada can buy peanuts for \$0.20 per ounce and raisins for \$0.25 per ounce. She has \$12 to spend on peanuts and raisins to make trail mix for her hiking group.

1. How much would 10 ounces of peanuts and 16 ounces of raisins cost? How much money would Jada have left?
2. Using $p$ for ounces of peanuts and $r$ for ounces of raisins, an equation relating how much of each she buys for a total of \$12 is $0.2p+0.25r=12$. If Jada wants 20 ounces of raisins, how many ounces of peanuts can she afford?
3. Jada knows she can rewrite the equation as $r=48-0.8p$. In Jada's equation, which is the independent variable? Which is the dependent variable?

Solution:

1. 10 ounces of peanuts would cost \$2 since $0.2\boldcdot 10=2$. 16 ounces of raisins would cost \$4 since $0.25\boldcdot 16=4$. Together, they would cost Jada \$6, leaving her with \$6.
2. 35 ounces of peanuts. If Jada wants 20 ounces of raisins, then $0.2p+0.25 \boldcdot 20=12$ must be true, which means $p=35$.
3. $p$ is the independent variable and $r$ is the dependent variable for Jada's equation.

## Linear Functions and Rates of Change

This week, your student will be working with graphs of functions. The graph of a function is all the pairs (input, output), plotted in the coordinate plane. By convention, we always put the input first, which means the inputs are represented on the horizontal axis and the outputs on the vertical axis. For a graph representing a context, it is important to specify the quantities represented on each axis. For example, this graph shows Elena's distance as a function of time. If it is distance from home, then Elena starts at some distance from home (maybe at her friend's house), moves further away from her home (maybe to a park), stays there a while, and then returns home. If it is distance from school, the story is different. The story also changes depending on the scale on the axes: is distance measured in miles and time in hours, or is distance measured in meters and time in seconds?

Match each of the following situations with a graph (you can use a graph multiple times). Define possible inputs and outputs, and label the axes.

1. Noah pours the same amount of milk from a bottle every morning.
2. A plant grows the same amount every week.
3. The day started very warm but then it got colder.
4.
A cylindrical glass contains some partially melted ice. The more water you pour in, the higher the water level. Solution: 1. Graph B, input is time in days, output is amount of milk in the bottle 2. Graph A, input is time in weeks, output is height of plant 3. Graph C, input is time in hours, output is temperature 4. Graph A, input is volume of water, output is height of water In each case, the horizontal axis is labeled with the input, and the vertical axis is labeled with the output. ## Cylinders and Cones This week your student will be working with volumes of three-dimensional objects. We can determine the volume of a cylinder with radius $r$ and height $h$ using two ideas that we’ve seen before: • The volume of a rectangular prism is a result of multiplying the area of its base by its height. • The base of the cylinder is a circle with radius $r$, so the base area is $\pi r^2$. Just like a rectangular prism, the volume of a cylinder is the area of the base times the height. For example, let’s say we have a cylinder whose radius is 2 cm and whose height is 5 cm like the one shown here:
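For the cylinder described (radius 2 cm, height 5 cm; the accompanying figure does not survive in this text), the computation is:

$$V = \pi r^{2} h = \pi \,(2\ \text{cm})^{2}\,(5\ \text{cm}) = 20\pi\ \text{cm}^{3} \approx 62.8\ \text{cm}^{3}.$$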
2021-09-16 11:52:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6224777698516846, "perplexity": 552.2036474760832}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780053493.41/warc/CC-MAIN-20210916094919-20210916124919-00194.warc.gz"}
https://codereview.stackexchange.com/questions/2123/parsing-random-phrases-from-a-file
# Parsing random phrases from a file

This essentially reads a specially-structured file from the scanner and then parses random phrases and prints them:

{
<start>
<greeting> <object>
}
{
<greeting>
hello
bonjour
aloha
}
{
<object>
world
universe
multiverse
}

Possible random phrases could be: "hello world", "bonjour universe", etc... Does anyone have some ideas as to how I might decrease the runtime? I heard you could run your code in parallel using multiple threads but was unsure both exactly how to do this and whether it would help at all. The slightest decrease in runtime would be beneficial (having a little competition to see who can do it the fastest). It is not considered cheating so long as I do the majority of the programming, so please just point me in the right direction and provide suggestions.

package comprehensive;

import java.io.File;
import java.io.FileNotFoundException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Random;
import java.util.Scanner;

/**
 * Generates random phrases by reading the specified grammar file
 * and outputting the number of phrases specified by the user.
 * @author Jun Tang and John Newsome
 */
public class RandomPhraseGenerator {

    public static HashMap<String,ArrayList<String>> map;

    public static void main(String[] args) throws FileNotFoundException{
        Scanner k = new Scanner(new File(args[0]));
        int numPhrases = Integer.parseInt(args[1]);
        addKeys(k);
        while(numPhrases > 0){
            parseStrings();
            numPhrases--;
        }
    }

    /**
     * Assimilates all of the keys and their associated values into map (a public static hashmap).
     * @param scan The scanner that is reading from the specified file.
     */
    public static void addKeys(Scanner scan){
        map = new HashMap<String, ArrayList<String>>();
        while(scan.hasNextLine()){
            if(scan.nextLine().equals("{")){
                String key = scan.nextLine();
                map.put(key, new ArrayList<String>());
                String next = scan.nextLine();
                while(!next.equals("}")){
                    map.get(key).add(next);
                    next = scan.nextLine();
                }
            }
        }
    }

    /**
     * Parses random String phrases from map and prints them.
     * Must already have assimilated keys and values prior by using the addKeys() method.
     */
    public static void parseStrings(){
        Random random = new Random();
        StringBuilder strBuf = new StringBuilder(map.get("<start>").get(0));
        int firstInstance = strBuf.indexOf("<", 0);
        int secondInstance = strBuf.indexOf(">", firstInstance);
        while(firstInstance >= 0 && secondInstance >= 0){
            String nonTerminal = strBuf.substring(firstInstance, secondInstance+1);
            strBuf.replace(firstInstance, secondInstance+1, map.get(nonTerminal).get(random.nextInt(map.get(nonTerminal).size())));
            firstInstance = strBuf.indexOf("<", firstInstance);
            secondInstance = strBuf.indexOf(">", firstInstance);
        }
        System.out.println(strBuf);
    }
}

• If you ask performance questions, you should mention a) what machine you're running on, b) how fast it is now and how fast it has to be, c) what the sample size is. Solutions which are fast with 1000 elements might perform poorly with 1000000 elements. d) Then you have to do some measurements with a profiler, which tells you where most of the time is spent. – user unknown Apr 27 '11 at 4:29
• Yes, unfortunately my teacher did not disclose the sample size or the grammar file that is to be run... Thanks for trying to help though, appreciate it! :) – Mr_CryptoPrime Apr 27 '11 at 4:52
• Your sample file doesn't correspond with what the program is parsing. The program is obviously looking for curly brackets, which are missing.
– RoToRa Apr 27 '11 at 7:13

I'm no expert in performance optimization or even threads, so my code review will mostly be some style suggestions, but I do have suggestions for how I would implement the phrase building to be faster, purely from my gut :-)

• A nitpick at the beginning: You should try and clean up your indentation. It is all over the place and makes reading the code a bit more difficult.
• You should declare variables to use interfaces where appropriate instead of concrete classes. That makes the code more flexible, for example in case you get the keys from a different source. In your case, declare your map as Map<String,List<String>>.
• Avoid global variables. It would be better to have addKeys return the map (and thus be renamed readKeys) and pass it on to parseStrings as an argument.
• In the main function a for loop is probably more appropriate than the while.
• Try to get rid of the duplicate scan.nextLine(); in the inner while loop, by moving the exiting condition inside the loop.

Now to optimizing the phrase building. There are two main points I would consider here:

• Avoid copying data. Currently you convert your string with the placeholders into a StringBuilder and insert the text in place of the placeholder. Both creating the StringBuilder and inserting into it require a lot of data copying internally.
• Don't repeat the parsing of the placeholder text. Instead I would parse the text once into a data structure and then generate the random phrases using that structure.

I could write some (pseudo) code to demonstrate what I mean, but you should be writing it yourself, so I'll wait and see if you can come up with an implementation based on those two points.

• Thanks, that is very helpful! Turns out the competition was actually yesterday, but I will post my updated code and possibly another updated version later. – Mr_CryptoPrime Apr 27 '11 at 20:20
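For readers of this exchange, here is one possible shape of the answer's "parse the text once into a data structure" idea; a minimal sketch handling a single level of substitution, with all class and method names invented for illustration (this is not the answerer's withheld code):

import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

// Placeholder positions are found a single time at load time,
// so generation never calls indexOf on the template string.
public class TemplateSketch {
    private final List<String> literals;      // n + 1 literal fragments
    private final List<String> nonTerminals;  // n keys such as "<greeting>"

    TemplateSketch(List<String> literals, List<String> nonTerminals) {
        this.literals = literals;
        this.nonTerminals = nonTerminals;
    }

    // Alternate literal fragments with randomly chosen productions.
    String generate(Map<String, List<String>> grammar, Random random) {
        StringBuilder sb = new StringBuilder(literals.get(0));
        for (int i = 0; i < nonTerminals.size(); i++) {
            List<String> options = grammar.get(nonTerminals.get(i));
            sb.append(options.get(random.nextInt(options.size())));
            sb.append(literals.get(i + 1));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, List<String>> grammar = new HashMap<>();
        grammar.put("<greeting>", Arrays.asList("hello", "bonjour", "aloha"));
        grammar.put("<object>", Arrays.asList("world", "universe", "multiverse"));
        // Pre-parsed form of "<greeting> <object>": literals are "", " ", "".
        TemplateSketch start = new TemplateSketch(
                Arrays.asList("", " ", ""),
                Arrays.asList("<greeting>", "<object>"));
        Random random = new Random();
        for (int i = 0; i < 3; i++) {
            System.out.println(start.generate(grammar, random));
        }
    }
}

Recursive non-terminals would be handled by pre-parsing every production into such a template as well; the point is simply that the placeholder scanning happens once at load time instead of once per generated phrase.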
https://astronomy.stackexchange.com/questions/20729/what-is-omega-and-tau-in-this-celestial-sphere
# What is $\omega$ and $\tau$ in this celestial sphere

In the celestial sphere below, what are $\omega$ and $\tau$ physically? I mean, in the normal world, how do we see them? For example, we can see the altitude of the sun ($a$) by examining the shadow of a pillar and comparing it with its height. How do we see $\omega$ and $\tau$ in the real world? S is the sun and W is the west. Also, I want to know how they affect the shadow of a wall built in the direction of SCP to NCP.

• What is the source of this image? To remind you, homework (or similar) problems must be acknowledged as such. – James K Apr 13 '17 at 21:00
• @JamesK It is from the answer to the National Olympiad of an Asian country, about the shadow of a wall built in the direction of SCP to NCP at an arbitrary point north of the equator. – titansarus Apr 14 '17 at 5:45

There seems to be very little to go on here. "Zenith" and "Horizon" are easy to understand; it suggests that this is a projection of the sky (and ground). In that context "W" would appear to be "West", and you give that "S" is "sun". The horizon is curved. If you projected the entire hemisphere, the horizon would go straight across, so this isn't a complete hemispheric projection. If we are looking West, then North would be to the right, and South to the left. The point P seems to be identified with NCP. The first guess would be that the N of NCP means North, but this contradicts the reasoning that North is on the right. You mention SCP in the text, but it isn't in the diagram. The sun appears to be West, or just North of West, which is an unusual but not impossible position.

The lines SW, WP and PS are curved. This may be to indicate that they are great circles on the celestial sphere. If so, then $\omega$ is the spherical angle SWP and $\tau$ is the great circle distance SW, perhaps given as an angle with vertex at the centre of the sphere. Given three points on a sphere (in alt-az coordinates), it is a small exercise in spherical geometry to find the spherical distances and angles between them.

I'm not sure why you are being coy about the source of this. If you could link to the question from which this is taken, the meanings of $\delta$ and H could be clearer. I guess the solution is an exercise in spherical angles, and projections from a circle to the plane.
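One concrete instance of that spherical-geometry exercise (my notation, not from the original diagram): for two points with altitudes $a_1, a_2$ and azimuths $A_1, A_2$, the great-circle separation $\tau$ satisfies the spherical law of cosines
$$\cos \tau = \sin a_1 \sin a_2 + \cos a_1 \cos a_2 \cos(A_1 - A_2),$$
so $\tau$ between S and W follows from their alt-az coordinates, and the spherical angle $\omega$ at W then follows from the same identity applied to the three sides of the triangle SWP.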
https://buildingenergygeeks.org/gaussian-process-models.html
# Chapter 13 Gaussian Process models

## 13.1 Principle

In machine learning, Gaussian Process (GP) regression is a widely used tool for solving modelling problems (Rasmussen (2003)). The appeal of GP models comes from their flexibility and ease of encoding prior information into the model. A GP is a generalization of the Gaussian probability distribution to infinite dimensions. Instead of having a mean vector and a covariance matrix, the Gaussian process $$f(\mathbf{x})$$ is a random function in a d-dimensional input space, characterized by a mean function $$\mu: \mathbb{R}^d \rightarrow \mathbb{R}$$ and a covariance function $$\kappa: \mathbb{R}^{d} \times \mathbb{R}^{d} \rightarrow \mathbb{R}$$ $$$f\left(\mathbf{x}\right) \sim \mathrm{GP}(\mu(\mathbf{x}),\,\kappa(\mathbf{x}, \mathbf{x}^\prime)) \tag{13.1}$$$ The variable $$\mathbf{x}$$ is the input of the Gaussian process and not the state vector defined in the previous section. The notation of equation (13.1) implies that any finite collection of random variables $$\{f(\mathbf{x}_i)\}^n_{i=1}$$ has a multidimensional Gaussian distribution (Gaussian process prior) $$$\left\{f(\mathbf{x}_1), f(\mathbf{x}_2), \ldots, f(\mathbf{x}_n)\right\} \sim \mathcal{N}(\mathbf{\mu}, \mathbf{K}) \tag{13.2}$$$ where $$\mathbf{K}_{i,\,j} = \kappa(\mathbf{x}_i, \mathbf{x}_j)$$ defines the covariance matrix and $$\mathbf{\mu}_i = \mu(\mathbf{x}_i)$$ the mean vector, for $$i,j = 1,2,\ldots,n$$. The mean function is often, without loss of generality, fixed to zero (e.g. $$\mu(\mathbf{x}) = \mathbf{0}$$) if no prior information is available; assumptions regarding the mean behavior of the process can be encoded into the covariance function instead (Solin and others (2016)). Indeed, the choice of covariance function allows encoding any prior belief about the properties of the stochastic process $$f(\mathbf{x})$$, e.g. linearity, smoothness, periodicity, etc. New covariance functions can be formulated by combining existing covariance functions: the sum $$\kappa(\mathbf{x}, \mathbf{x}^\prime) = \kappa_1(\mathbf{x}, \mathbf{x}^\prime) + \kappa_2(\mathbf{x}, \mathbf{x}^\prime)$$ or the product $$\kappa(\mathbf{x}, \mathbf{x}^\prime) = \kappa_1(\mathbf{x}, \mathbf{x}^\prime) \times \kappa_2(\mathbf{x}, \mathbf{x}^\prime)$$ of two covariance functions is a valid covariance function. Gaussian process regression is concerned with the problem of estimating the value of an unknown function $$f(t)$$ at an arbitrary time instant $$t$$ (i.e. a test point) based on noisy training data $$\mathcal{D} = \left\{t_k, y_k\right\}^n_{k=1}$$ \begin{align} f(t) & \sim \mathrm{GP}(0, \kappa(t, t^\prime)) \\ y_k & = f(t_k) + v_k \tag{13.3} \end{align} The joint distribution between the test point $$f(t)$$ and the training points $$\left(f(t_1),\,f(t_2),\,\ldots,\,f(t_n)\right)$$ is Gaussian with known statistics. Because the measurement model in equation (13.3) is linear and Gaussian, the joint distribution between the test point $$f(t)$$ and the measurements $$\left(y_1,\,y_2,\,\ldots,\,y_n\right)$$ is Gaussian with known statistics as well. From the properties of the Gaussian distribution, the conditional distribution of $$f(t)$$ given the measurements has an analytical solution (Särkkä and Solin (2019)).
$$$p\left(f(t) \mid \mathbf{y}\right) = \mathcal{N}\left(\mathbb{E}[f(t)],\, \mathbb{V}[f(t)]\right) \tag{13.4}$$$ with mean and variance \begin{align} \mathbb{E}[f(t)] &= \mathbf{k}^\text{T}\,\left( \mathbf{K} + \sigma^2_\varepsilon\,\mathbf{I}\right)^{-1}\,\mathbf{y} \\ \mathbb{V}[f(t)] &= \kappa(t, t) - \mathbf{k}^\text{T}\, \left(\mathbf{K} + \sigma^2_\varepsilon\,\mathbf{I}\right)^{-1}\,\mathbf{k} \tag{13.5} \end{align} where $$\mathbf{K}_{i,\,j} = \kappa(t_i, t_j)$$, $$\mathbf{k} = \kappa(t, \mathbf{t})$$, and $$\mathbf{t}$$ and $$\mathbf{y}$$ are the time and measurement vectors from the training data $$\mathcal{D}$$. The estimated function model represents dependencies between function values at different inputs through the correlation structure given by the covariance function. Thus, the function values at the observed points also give information about the unobserved points.

## 13.2 Gaussian Processes for prediction of energy use

The first application of Gaussian Processes in building energy modelling is based on the developments of Kennedy and O’Hagan (Kennedy and O’Hagan (2001)), which they called Bayesian calibration. Bayesian model calibration refers to using a GP as a surrogate model to reproduce a reference model, training a second GP as the discrepancy function between this model and observations, and then evaluating the posterior distribution of calibration parameters. In this context GPs have static inputs and are not dynamic models. $$$z_i = \zeta(\mathbf{x}_i) + e_i = \rho \, \eta(\mathbf{x}_i,\theta) + \delta(\mathbf{x}_i) + e_i \tag{13.6}$$$ where $$\mathbf{x}_i$$ is a series of known model inputs, $$z_i$$ are observations, $$\zeta(\mathbf{x}_i)$$ is the true value of the real process, $$\eta(\mathbf{x}_i,\theta)$$ is a computer model output with parameter $$\theta$$, $$\delta(\mathbf{x}_i)$$ is the discrepancy function and $$e_i \sim N(0,\lambda)$$ are the observation errors. In Kennedy and O’Hagan’s work, GPs are used to represent prior information about both $$\eta(\cdot,\cdot)$$ and $$\delta(\cdot)$$. $$\rho$$ and $$\lambda$$ are hyperparameters, to be added to the list of hyperparameters of the covariance functions into a global hyperparameter vector $$\phi$$. Before attempting prediction of the true phenomenon using the calibrated code, the first step is to derive the posterior distribution of the parameters $$\theta$$, $$\beta$$ (parameters of the GP mean functions) and $$\phi$$. Hyperparameters are estimated in two stages: $$\eta(\cdot,\cdot)$$ is estimated from a series of code outputs, and $$\delta(\cdot)$$ is estimated from observations. The authors restrict their study to analytical, tractable posterior distributions that do not require methods such as MCMC. Therefore they fix the value of some hyperparameters to make these functions tractable, and have to resort to some simplifications. The first application of this method to building energy modelling was the work of Heo et al. (Heo, Choudhary, and Augenbroe (2012)). They followed the formulation of Bayesian calibration developed by Kennedy and O’Hagan, and used three sets of data as input: (1) monthly gas consumption values as observations $$y(x)$$, (2) computer outputs from exploring the space of calibration parameters $$\eta(x,\theta)$$, and (3) the prior PDF of calibration parameters $$p(\theta)$$. The model outputs $$\eta(x,\theta)$$ and the bias term $$\delta(x)$$ are both modeled as GPs. Calibration parameters are for instance: infiltration rate, indoor temperature, $$U$$-values, etc.
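Stepping back to the regression equations (13.5), they translate directly into a few lines of linear algebra. The following is a minimal sketch in Python, with a squared-exponential covariance chosen purely for illustration (the text does not prescribe one):

```python
import numpy as np

def kappa(t1, t2, sigma=1.0, ell=1.0):
    # Squared-exponential covariance, one possible choice of kernel.
    d = t1[:, None] - t2[None, :]
    return sigma**2 * np.exp(-0.5 * (d / ell)**2)

def gp_posterior(t_train, y, t_test, noise_std=0.1):
    # Posterior mean and variance of f at the test points, cf. equations (13.5).
    K = kappa(t_train, t_train)
    k = kappa(t_train, t_test)                       # one column per test point
    A = K + noise_std**2 * np.eye(len(t_train))
    mean = k.T @ np.linalg.solve(A, y)               # E[f(t)]
    var = kappa(t_test, t_test).diagonal() \
        - np.einsum('ij,ij->j', k, np.linalg.solve(A, k))   # V[f(t)]
    return mean, var
```

For $$n$$ training points the linear solve costs $$\mathcal{O}(n^3)$$, which is precisely the limitation addressed by the sequential methods of Section 13.3.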
With very little data, the results are posterior PDFs which are very close to the priors. GP learning scales poorly with the amount of data, which restricts its applicability to coarser observation time steps. (Kristensen, Choudhary, and Petersen (2017)) studied the influence of time resolution on the predictive accuracy and showed the advantage of higher resolutions. More recently, (Chong et al. (2017)) used the NUTS algorithm for the MCMC sampling in order to accelerate learning. Later, (Chong and Menberg (2018)) gave a summary of publications using Bayesian calibration in building energy. In (Gray and Schmidt (2018)), a hybrid model was implemented. A zero-mean GP is trained to learn the error between the grey-box model and the reference data. As in the previous references, both models are added to obtain the final predicted output. They are trained in sequence: the GB model has some inputs $$\mathbf{u}_\mathrm{GB}$$ and is trained first; then the GP has some other inputs $$\mathbf{u}_\mathrm{GP}$$ and is trained on the GB model’s prediction error. The results are the hyperparameters of the GP. Models trained by this method are said to have very good prediction performance, since the GP predicts the inadequacy of the GB model as a function of new inputs not included in the physical model. However, the method may not be fit for the interpretation of physical parameters. Indeed, since the GB model is first trained independently from the GP, it is biased and its parameter estimates are not interpretable.

## 13.3 Gaussian Processes for time series data

Gaussian processes are non-parametric models, which means that the latent function $$f(t)$$ is represented by an infinite-dimensional parameter space. Unlike parametric methods, the number of parameters is not fixed, but grows with the size of the dataset $$\mathcal{D}$$, which is both an advantage and a limitation. Non-parametric models are memory-based, which means that they can represent more complex mappings as the data set grows, but in order to make predictions they have to “remember” the full dataset (Frigola (2015)). The computational complexity of the analytical regression equations (13.5) is cubic $$\mathcal{O}(N^3)$$ in the number of measurements $$N$$, which is not suited for long time series. However, for a certain class of covariance functions, temporal Gaussian process regression is equivalent to a state inference problem which can be solved with the Kalman filter and Rauch-Tung-Striebel smoother (Hartikainen and Särkkä (2010)). The computational complexity of these sequential methods is linear $$\mathcal{O}(N)$$ instead of cubic in the number of measurements $$N$$. A stationary Gaussian process (i.e. the covariance function depends only on the time difference $$\kappa(t, t^\prime) = \kappa(\tau)$$, with $$\tau=\lvert t - t^\prime \rvert$$) can be exactly represented or well approximated by a stochastic state-space model: \begin{align} \mathrm{d}\mathbf{f} & = \mathbf{A_{gp}} \, \mathbf{f} \, \mathrm{d}t + \mathbf{\sigma}_{\mathbf{gp}} \, \mathrm{d}\mathbf{w} \\ y_k & = \mathbf{C}_{\mathbf{gp}} \, \mathbf{f}(t_k) + v_k \tag{13.7} \end{align} where the matrices of the system are defined by the choice of covariance function. A list of widely used covariance functions with this dual representation is given in (Solin and others (2016)) and (Särkkä and Solin (2019)).
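Before the Matérn example that follows, here is a rough sketch of how this duality is used in practice: the continuous-time model (13.7) is discretized exactly at the measurement times, and a standard Kalman filter then runs over the data in linear time. The matrix names mirror (13.7); the concrete $$\mathbf{A_{gp}}$$ and $$\mathbf{\sigma}_{\mathbf{gp}}$$ for a Matérn covariance are given just below.

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

def discretize(A, sigma, dt):
    # Exact discretization of df = A f dt + sigma dw over a step dt:
    # F = exp(A dt), and Q follows from the stationary covariance P_inf,
    # which solves A P_inf + P_inf A^T + sigma sigma^T = 0.
    F = expm(A * dt)
    P_inf = solve_continuous_lyapunov(A, -sigma @ sigma.T)
    Q = P_inf - F @ P_inf @ F.T
    return F, Q, P_inf

def kalman_filter(A, sigma, c, noise_var, times, ys):
    # O(N) filtering pass replacing the O(N^3) batch solve of (13.5);
    # c is the (1-D) measurement vector, noise_var the variance of v_k.
    m = np.zeros(A.shape[0])
    _, _, P = discretize(A, sigma, 1.0)      # start from the stationary covariance
    means = []
    for k, (t, y) in enumerate(zip(times, ys)):
        if k > 0:
            F, Q, _ = discretize(A, sigma, t - times[k - 1])
            m, P = F @ m, F @ P @ F.T + Q                     # predict
        S = c @ P @ c + noise_var                             # innovation variance
        K = P @ c / S                                         # Kalman gain
        m, P = m + K * (y - c @ m), P - np.outer(K, c @ P)    # update
        means.append(c @ m)
    return np.array(means)
```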
As an example, consider the Matérn covariance function with smoothness parameter $$\nu=3/2$$ $$$\kappa \left(\tau\right) = \sigma^2 \, \left(1 + \frac{\sqrt{3}\tau}{\ell}\right) \, \exp\left(-\frac{\sqrt{3}\tau}{\ell}\right) \tag{13.8}$$$ which has the following equivalent state-space representation $$$\mathbf{A_{gp}} = \begin{pmatrix} 0 & 1 \\[0.5em] -\lambda^2 & -2\lambda \end{pmatrix} \quad \mathbf{\sigma}_{\mathbf{gp}} = \begin{pmatrix} 0 & 0 \\[0.5em] 0 & 2\lambda^{3/2}\sigma \end{pmatrix} \quad \mathbf{C}_{\mathbf{gp}} = \begin{pmatrix} 1 & 0 \end{pmatrix} \tag{13.9}$$$ with $$\lambda=\sqrt{2\,\nu} / \ell$$ and where $$\sigma, \ell > 0$$ are the magnitude and length-scale parameters. The parameter $$\ell$$ controls the smoothness (i.e. how much time difference $$\tau$$ is required to observe a significant change in the function value) and the parameter $$\sigma$$ controls the overall variance of the function (i.e. the expected magnitude of function values).

## 13.4 Latent Force Models

The stochastic part of the state-space model can accommodate unmodelled disturbances which do not have a significant influence on the thermal dynamics. This assumption holds if the disturbances have white noise properties and are uncorrelated across time lags, which is seldom the case in practice (Ghosh et al. (2015)). Usually, the model complexity is increased to erase the structure in the model residuals. However, this strategy may lead to unnecessarily complex models because non-linear dynamics are often modelled by linear approximations. Increasing the model complexity often requires more prior knowledge about the underlying physical systems and additional measurements, which may not be available in practice. Another strategy is to model these unknown disturbances as Gaussian processes with certain parametrized covariance structures (Särkkä, Álvarez, and Lawrence (2018)). The resulting latent force model (Alvarez, Luengo, and Lawrence (2009)) is a combination of a parametric grey-box model and a non-parametric Gaussian process model. \begin{align} \mathrm{d}\mathbf{x} & = \left(\mathbf{A_{rc}} \, \mathbf{x} + \mathbf{M_{rc}} \, \mathbf{C_{gp}} \, \mathbf{f} + \mathbf{B_{rc}} \, \mathbf{u} \right) \, \mathrm{d}t + \mathbf{\sigma}_{\mathbf{rc}} \, \mathrm{d}\mathbf{w} \\ \mathrm{d}\mathbf{f} &= \mathbf{A_{gp}} \, \mathbf{f} \, \mathrm{d}t + \mathbf{\sigma}_{\mathbf{gp}} \, \mathrm{d}\mathbf{w} \\ y_k & = \mathbf{C}_{\mathbf{rc}} \, \mathbf{x}(t_k) + v_k \tag{13.10} \end{align} where $$\mathbf{M_{rc}}$$ is the input matrix corresponding to the unknown latent forces.
The augmented state-space representation of the latent force model \begin{align} \mathrm{d}\mathbf{z} &= \mathbf{A} \, \mathbf{z} \, \mathrm{d}t + \mathbf{B} \, \mathbf{u} \, \mathrm{d}t + \mathbf{\sigma} \, \mathrm{d}\mathbf{w} \\ y_k &= \mathbf{C} \, \mathbf{z}(t_k) + v_k \tag{13.11} \end{align} is obtained by combining the grey-box model and the Gaussian process model (13.7), such that \begin{alignedat}{3} \mathbf{z}&=\begin{pmatrix} \mathbf{x} \\ \mathbf{f} \end{pmatrix} \quad & \mathbf{A}&=\begin{pmatrix} \mathbf{A_{rc}} & \mathbf{M_{rc}} \, \mathbf{C_{gp}} \\ \mathbf{0} & \mathbf{A_{gp}} \end{pmatrix} \quad & \mathbf{B}&=\begin{pmatrix} \mathbf{B_{rc}} \\ \mathbf{0} \end{pmatrix} \\ \mathbf{C}&=\begin{pmatrix} \mathbf{C}_{\mathbf{rc}} & \mathbf{0} \end{pmatrix} \quad & \mathbf{\sigma}&=\begin{pmatrix} \mathbf{\sigma}_{\mathbf{rc}} & \mathbf{0} \\ \mathbf{0} & \mathbf{\sigma}_{\mathbf{gp}} \end{pmatrix} \end{alignedat} \tag{13.12} The latent force model representation makes it possible to incorporate prior information about the overall dynamics of the physical system, but also about the behavior of the unknown inputs.

### References

Alvarez, Mauricio, David Luengo, and Neil D Lawrence. 2009. “Latent Force Models.” In Artificial Intelligence and Statistics, 9–16. PMLR.

Chong, Adrian, Khee Poh Lam, Matteo Pozzi, and Junjing Yang. 2017. “Bayesian Calibration of Building Energy Models with Large Datasets.” Energy and Buildings 154: 343–55.

Chong, Adrian, and Kathrin Menberg. 2018. “Guidelines for the Bayesian Calibration of Building Energy Models.” Energy and Buildings 174: 527–47.

Frigola, Roger. 2015. “Bayesian Time Series Learning with Gaussian Processes.” PhD thesis, University of Cambridge.

Ghosh, Siddhartha, Steve Reece, Alex Rogers, Stephen Roberts, Areej Malibari, and Nicholas R Jennings. 2015. “Modeling the Thermal Dynamics of Buildings: A Latent-Force-Model-Based Approach.” ACM Transactions on Intelligent Systems and Technology (TIST) 6 (1): 1–27.

Gray, Francesco Massa, and Michael Schmidt. 2018. “A Hybrid Approach to Thermal Building Modelling Using a Combination of Gaussian Processes and Grey-Box Models.” Energy and Buildings 165: 56–63.

Hartikainen, Jouni, and Simo Särkkä. 2010. “Kalman Filtering and Smoothing Solutions to Temporal Gaussian Process Regression Models.” In 2010 IEEE International Workshop on Machine Learning for Signal Processing, 379–84. IEEE.

Heo, Yeonsook, Ruchi Choudhary, and GA Augenbroe. 2012. “Calibration of Building Energy Models for Retrofit Analysis Under Uncertainty.” Energy and Buildings 47: 550–60.

Kennedy, Marc C, and Anthony O’Hagan. 2001. “Bayesian Calibration of Computer Models.” Journal of the Royal Statistical Society: Series B (Statistical Methodology) 63 (3): 425–64.

Kristensen, Martin Heine, Ruchi Choudhary, and Steffen Petersen. 2017. “Bayesian Calibration of Building Energy Models: Comparison of Predictive Accuracy Using Metered Utility Data of Different Temporal Resolution.” Energy Procedia 122: 277–82.

Rasmussen, Carl Edward. 2003. “Gaussian Processes in Machine Learning.” In, 63–71. Springer.

Särkkä, Simo, Mauricio A Álvarez, and Neil D Lawrence. 2018. “Gaussian Process Latent Force Models for Learning and Stochastic Control of Physical Systems.” IEEE Transactions on Automatic Control 64 (7): 2953–60.

Särkkä, Simo, and Arno Solin. 2019. Applied Stochastic Differential Equations. Vol. 10. Cambridge University Press.

Solin, Arno, and others. 2016.
“Stochastic Differential Equation Methods for Spatio-Temporal Gaussian Process Regression.”
https://brilliant.org/problems/locus-of-orthocentre/
# Locus of orthocentre

Geometry Level pending

Find the locus of the orthocentre of the triangle formed by the lines $(1+p)x - py + p(1+p) = 0$, $(1+q)x - qy + q(1+q) = 0$ and $y = 0$, where $p \neq q$.
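For reference, the locus can be computed directly (working mine, not part of the original problem). Setting $y=0$ in each line gives the vertices $(-p, 0)$ and $(-q, 0)$, and solving the two lines simultaneously gives the third vertex $(pq, (1+p)(1+q))$. The altitude through the third vertex is the vertical line $x = pq$; the altitude from $(-p, 0)$, perpendicular to the opposite side of slope $\frac{1+q}{q}$, is
$$y = -\frac{q}{1+q}(x + p),$$
which at $x = pq$ gives $y = -pq$. The orthocentre is therefore $(pq, -pq)$, so the locus is the line $x + y = 0$.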
http://thawom.com/q-two_dice.html
##### Question 14.1.2

If I roll two dice, what is the probability that the two numbers rolled add up to $8$?
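For reference (this worked count is an addition, not part of the original page): the $36$ ordered outcomes of two dice are equally likely, and exactly five of them sum to $8$, namely $(2,6), (3,5), (4,4), (5,3), (6,2)$, so the probability is $\frac{5}{36}$.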
https://www.cse.chalmers.se/~mista/blog/dragen.html
# DRAGEN or: How I Learned to Stop Worrying About Writing Random Generators

Posted on November 5, 2018 by Agustín Mista

Random property-based testing (or random testing for short) is nothing new, yet it is still a quite hot research field with many open problems to dig into. In this post I will try to give a brief explanation of our latest contribution on random testing published in the proceedings of the 2018 Haskell Symposium, as well as some use cases of DRAGEN, the tool we implemented to automate the boring stuff of using random testing in Haskell.

The key idea of random testing is pretty simple: you write the desired properties of your system as predicates, and try to falsify them via randomly generated test cases. Although there are many, many libraries that provide this functionality, implemented for a bunch of programming languages, here we will put the spotlight on QuickCheck in Haskell. QuickCheck is the most prominent tool around, originally conceived by Claessen and Hughes almost twenty years ago. If you haven't heard of it, I strongly recommend checking the original paper but, being more realistic, if you're reading this you likely know about QuickCheck already.

In this post I will focus on using QuickCheck as a fuzzing tool. Essentially, fuzzing is a penetration testing technique that involves running a target program against broken or unexpected inputs, asserting that they're always handled properly. In particular, I'll show you how to use existing Haskell data types as "lightweight specifications" of the input format of external target programs, and how we can rely on meta-programming to obtain random data generators for those formats automatically. In the future, the Octopi project will apply this technique to evaluate and improve the security of IoT devices.

## QuickCheck + fuzzing = 🎔

Software fuzzing is a technique that tries to reduce the testing bias by considering the target program as a black box, meaning that we can only evaluate its correctness based on the outputs we obtain from the inputs we use to test it. This black-box fashion of testing forces us to express our testing properties using a higher level of abstraction. For instance, one of the most general properties that we can state over any external program consists in checking for successful termination, regardless of the inputs we provide it. We can express such a property in QuickCheck as follows:

```haskell
prop_run :: Arbitrary t => String -> (t -> ByteString) -> t -> Property
prop_run target encode testcase = monadicIO $ do
  exitCode <- run $ shell target $ encode testcase
  assert (exitCode == ExitSuccess)

shell :: String -> ByteString -> IO ExitCode
shell cmd stdin = do
  (exitCode, _, _) <- readProcessWithExitCode cmd [] stdin
  return exitCode
```

From the previous property, we can observe that we need a couple of things in order to test it:

1. A shell command target to execute.
2. A data type t describing the structure of the inputs of our target program.
3. A random generator for t, here satisfied by the Arbitrary t constraint.
4. A function encode :: t -> ByteString to transform our randomly generated Haskell values into the syntactic representation of the standard input of our target program.

Additionally, note that here we decided to use the standard input of our target program as interface, but nothing stops us from saving the test case into a file and then running the target program using its filepath. In both cases, the idea is essentially the same.
Then, we can test for instance that the unix sort utility always terminates successfully when we run it with randomly generated Ints. The first step is to define an encoding function from [Int] to ByteString:

```haskell
-- We simply put every number in a different line
encode :: [Int] -> ByteString
encode = ByteString.fromString . unlines . map show
```

The next requirement is to have a random generator for [Int]. Fortunately, this is already provided by QuickCheck in the following Arbitrary instances:

```haskell
instance Arbitrary Int
instance Arbitrary a => Arbitrary [a]
```

With both things in place, we can check that we generate and encode our data in the right way in order to call sort:

```
ghci> encode <$> generate arbitrary >>= ByteString.putStr
-16
19
-28
-16
9
26
2
9
9
```

Finally, we can simply test sort by calling:

```
ghci> quickCheck $ prop_run "sort" encode
+++ OK, passed 100 tests.
```

In many scenarios, it might also be interesting to use an external fuzzer to corrupt the generated ByteStrings before piping them to our target program, looking for bugs related to the syntactic representation of its inputs. For instance, we could consider using the deterministic bit-level fuzzer zzuf from caca labs:

```
$ echo "hello world" | zzuf
henlo world
$ echo "int foo() { retun 42; }" | zzuf --ratio 0.02
int goo() { return 22; }
```

Then, we can modify our original property prop_run in order to run zzuf in the middle to corrupt the randomly generated test cases:

```haskell
zzuf :: ByteString -> IO ByteString
zzuf stdin = do
  (_, stdout, _) <- readProcessWithExitCode "zzuf" [] stdin
  return stdout

prop_run_zzuf :: Arbitrary t => String -> (t -> ByteString) -> t -> Property
prop_run_zzuf target encode testcase = monadicIO $ do
  exitCode <- run $ shell target <=< zzuf $ encode testcase
  assert (exitCode == ExitSuccess)
```

Which will produce corrupted outputs like:

```
ghci> encode <$> generate arbitrary >>= zzuf >>= ByteString.putStr
-18
-19
3:�
-02*39$34
30
22
-:)
"9
;y
```

As simple as this testing method sounds, it turns out it can be quite powerful in practice, and it's actually the main idea behind QuickFuzz, a random fuzzer that uses existing Haskell libraries under the hood to find bugs in complex software, spanning a wide variety of file formats. Moreover, given that QuickCheck uses a type-driven generational approach, we can exploit Haskell's powerful type system in order to define abstract data types encoding highly-structured data like, for instance, well-scoped source code, finite state machines, stateless communication protocols, etc. Such data types essentially act as a lightweight grammar of the input domain of our target program. Then, we are required to provide random generators for such data types in order to use them with QuickCheck, which is the topic of the next section.

## Random generators for custom data types

In the previous example, I've shown you how to test an external program easily, provided that we had a QuickCheck random generator for the data type encoding the structure of its inputs. However, even if we are lucky enough to find an existing library providing a suitable representation for the inputs of our particular target program, as well as the encoding functions required, it's rarely the case that such a library also provides a random generator for this representation that we could use to randomly test our target program. The only solution in this case is, as you might imagine, to provide a random generator by ourselves or, as I'm going to show you by the end of this post, to derive it automatically!
As I've introduced before, whenever we want to use QuickCheck with user-defined data types, we need to provide a random generator for each such data type. For the rest of this post I will use the following data type as a motivating example, representing 2-3 trees with two different kinds of leaves:

```haskell
data Tree = LeafA | LeafB | Node Tree Tree | Fork Tree Tree Tree
```

The easiest way of generating values of Tree is by providing an instance of QuickCheck's Arbitrary type class for it:

```haskell
class Arbitrary a where
  arbitrary :: Gen a
  ...
```

This ubiquitous type class essentially abstracts the overhead of randomly generating values of different types by overloading a single value arbitrary that represents a monadic generator Gen a for every type a we want to generate. That said, we can instantiate this type class for Tree very easily:

```haskell
instance Arbitrary Tree where
  arbitrary = oneof
    [ pure LeafA
    , pure LeafB
    , Node <$> arbitrary <*> arbitrary
    , Fork <$> arbitrary <*> arbitrary <*> arbitrary
    ]
```

This last definition turns out to be quite idiomatic using the Applicative interface of Gen. In essence, we specify that every time we generate a random Tree value, we do it by picking with uniform probability from a list of random generators using QuickCheck's primitive function oneof :: [Gen a] -> Gen a. Each one of these sub-generators is specialized in generating a single constructor of our Tree data type. For instance, pure LeafA is a generator that always generates LeafAs, while Node <$> arbitrary <*> arbitrary is a generator that always produces Nodes, "filling" their recursive fields with random Tree values obtained by recursively calling our top-level generator.

As simple as this sounds, our Arbitrary Tree instance is able to produce the whole space of Tree values, which is really good, but also really bad! The problem is that Tree is a data type with an infinite number of values. Imagine picking Node constructors for every subterm, forever. You end up being stuck in an infinite generation loop, which is something we strongly want to avoid when using QuickCheck since, in principle, we want to test finite properties.

The "standard" solution to this problem is to define a "sized" generation process which ensures that we only generate finite values. Again, QuickCheck has a primitive for this called sized :: (Int -> Gen a) -> Gen a that lets us define random generators parametrized over an Int value known as the generation size, which is an internal parameter of the Gen monad that is threaded on every recursive call to arbitrary, and that can be set by the user. Let's see how to use it to improve our previous definition, switching from oneof to frequency :: [(Int, Gen a)] -> Gen a so that each constructor also carries an explicit generation frequency:

```haskell
instance Arbitrary Tree where
  arbitrary = sized gen
    where
      gen 0 = frequency
        [ (fLeafA, pure LeafA)
        , (fLeafB, pure LeafB) ]
      gen n = frequency
        [ (fLeafA, pure LeafA)
        , (fLeafB, pure LeafB)
        , (fNode, Node <$> gen (n-1) <*> gen (n-1))
        , (fFork, Fork <$> gen (n-1) <*> gen (n-1) <*> gen (n-1)) ]
```

This last definition enables us to tweak the generation frequencies for each constructor and obtain different distributions of values in practice. So, the big question of this work is, how do we know how much the frequency of each constructor by itself affects the average distribution of values as a whole? Fortunately, there is an answer for this.

## Branching processes

The key contribution of this work is to show that, if our generator follows some simple guidelines, then it's possible to predict its average distribution of generated constructors very easily. To achieve this, we used a mathematical framework known as branching processes.
A branching process is a special kind of stochastic model, and in particular, a special kind of Markov chain. They were originally conceived in the Victorian era to predict the growth and extinction of royal family names, and later spread to many other research fields like biology, physics, and, why not, random testing. Essentially, a branching process models the reproduction of individuals of different kinds across different time steps called generations, where it is assumed that the probability of each individual procreating a certain individual in the next generation is fixed over time (this assumption is satisfied by our generator, since the generation frequencies for each constructor are hardcoded into the generator).

In our particular setting, we consider that each different data constructor constitutes an individual of its own kind. Then, during the generation process, each constructor will "produce" a certain average number of offspring of possibly different kinds from one generation $G_i$ to the next one ($G_{i+1}$), i.e. from one level of the generated tree to the next one. Each generation $G_i$ can be thought of as a vector of natural numbers that groups the number of generated constructors of each kind. Then, by using branching processes theory, we can predict the expected distribution of constructors $E[\cdot]$ on each level of the generated tree or, in other words, the average number of constructors of each kind at every level of a generated value. Then, $E[G_i]$ is a vector of real numbers that groups the average number of generated constructors of each kind at the $i$-th level.

On the other hand, given a generation size $n$, we know that our generation process will produce values of up to $n$ levels of depth. Therefore we can ensure that the generation process encoded by a branching process will take place from the first generation ($G_0$) up to the $(n-1)$-th generation ($G_{n-1}$), while the last generation ($G_n$) is only intended to fill the recursive holes produced by the recursive constructors generated in the previous generation $G_{n-1}$, and needs to be considered separately.

With these considerations, we can characterize the expected distribution of constructors of any value generated using a QuickCheck size $n$. We only need to add the expected distribution of constructors at every level of the generated value, plus the terminal constructors needed to terminate the generation process at the last level. Hopefully, the next figure gives some insights on how to predict the expected distribution of constructors of a Tree value randomly generated using our previous generator. There you can see that the generation process consists of two different random processes, one corresponding to each clause of the auxiliary function gen that we defined before.

### What about complex data types?

The example I have shown you may not convince you at all about our prediction mechanism given that it's fairly simple. However, in our paper we show that it is powerful enough to deal with complex data types comprising, for instance, composite types, i.e. data types defined using other types internally, as well as mutually recursive data types. For simplicity, I will not explain the details about them in this post, but you can find them in the paper if you're still unconvinced!
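Returning to the Tree example, the prediction itself is easy to reproduce outside Haskell as a sanity check. The sketch below (Python, with the constructor arities of Tree and uniform weights of 100 used as a running assumption) sums the expected constructor counts over generations $G_0, \ldots, G_{n-1}$ and then fills the last level's holes using terminal constructors only:

```python
# (name, arity) for each Tree constructor; frequencies are assumed weights.
KINDS = [("LeafA", 0), ("LeafB", 0), ("Node", 2), ("Fork", 3)]
FREQS = {"LeafA": 100, "LeafB": 100, "Node": 100, "Fork": 100}

def predict(size):
    total = sum(FREQS.values())
    p = [FREQS[name] / total for name, _ in KINDS]           # gen n, for n > 0
    terminal = sum(FREQS[name] for name, arity in KINDS if arity == 0)
    p0 = [FREQS[name] / terminal if arity == 0 else 0.0      # gen 0: leaves only
          for name, arity in KINDS]
    expected = [0.0] * len(KINDS)
    holes = 1.0                                              # the root of the value
    for _ in range(size):                                    # generations G_0 .. G_{n-1}
        for i in range(len(KINDS)):
            expected[i] += holes * p[i]
        holes *= sum(p[i] * arity for i, (_, arity) in enumerate(KINDS))
    for i in range(len(KINDS)):                              # terminal generation G_n
        expected[i] += holes * p0[i]
    return {name: e for (name, _), e in zip(KINDS, expected)}

print(predict(10))
```

With all weights equal, this reproduces the numbers DRAGEN prints below for the initial frequencies map: roughly 12.97 of each leaf and 8.31 of each recursive constructor at size 10.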
## DRAGEN: automatic Derivation of RAndom GENerators

One of the cool things about being able to predict the distribution of data constructors is that we can use this prediction as optimization feedback, allowing us to tune the generation probabilities of each constructor without actually generating a single value. To do so, we implemented a Haskell tool called DRAGEN that automatically derives random generators at compile time for the data types we want, using the branching processes model I've previously introduced to predict and tune the generation probabilities of each constructor. This way, the user expresses a desired distribution of constructors, and DRAGEN tries to satisfy it as much as possible while deriving a random generator.

DRAGEN works at compile time exploiting Template Haskell meta-programming capabilities, so the first step to use it is to enable the Template Haskell language extension and import it:

```haskell
{-# LANGUAGE TemplateHaskell #-}
import Dragen
```

Then, we can use DRAGEN to automatically derive a generator for our Tree data type very easily with the following Template Haskell function:

```haskell
dragenArbitrary :: Name -> Size -> DistFunction -> Q [Dec]
```

Where Name is the name of the data type you want to derive a generator for, Size is the maximum depth you want for the generated values (Size is a type synonym of Int), and DistFunction is a function that encodes the desired distribution of constructors as a "distance" to a certain target distribution. Let's pay some attention to its definition:

```haskell
type DistFunction = Size -> FreqMap -> Double
type FreqMap = Map Name Int
```

That is, for every generation size and mapping between constructor names and generation frequencies, we obtain a real number that encodes the distance between the distribution predicted using such values and the distribution that we ideally want. Hence, our optimization process works by minimizing the output of the provided distance function. On each step, it varies the generation frequencies of each constructor independently, following the shortest path to the desired distribution. This process is repeated recursively until it reaches a local minimum, where we finally synthesize the required generator code using the frequencies it found.

Fortunately, you don't have to worry too much about distance functions in practice. For this, DRAGEN provides a minimal set of distance functions that can be used out of the box. All of them are built around the Chi-Squared goodness-of-fit test, a statistical test useful to quantify how much a set of observed frequencies differs from an expected one. In our case, the observed frequencies correspond to the predicted distributions of constructors, while the expected ones correspond to the target distribution of constructors. Let's see some of them in detail!

### Uniform generation

The simplest distance function provided by DRAGEN is uniform :: DistFunction, which guides the frequencies optimization process towards a distribution of constructors where the number of generated constructors of each kind is (ideally) equal to the generation size. In mathematical jargon, it looks a bit like:

$$uniform(size, freqs) = \sum_{C_i} \frac{(predict(C_i, freqs, size) - size)^2}{size}$$

where $C_i$ varies among all the data constructors involved in the generation process.
For instance, if we write this declaration at the top level of our code:

```haskell
dragenArbitrary ''Tree 10 uniform
```

Then DRAGEN will produce the following code at compile time:

```
Reifiying: Tree
Types involved with Tree: [Base Tree]
Initial frequencies map:
* (Fork,100)
* (LeafA,100)
* (LeafB,100)
* (Node,100)
Predicted distribution for the initial frequencies map:
* (Fork,8.313225746154785)
* (LeafA,12.969838619232178)
* (LeafB,12.969838619232178)
* (Node,8.313225746154785)
Optimizing the frequencies map:
********************************************************************************
********************************************************************************
*******************************************************************
Optimized frequencies map:
* (Fork,152)
* (LeafA,165)
* (LeafB,162)
* (Node,175)
Predicted distribution for the optimized frequencies map:
* (Fork,7.0830066259820645)
* (LeafA,11.767371412451563)
* (LeafB,11.553419204952444)
* (Node,8.154777365439879)
Initial distance: 2.3330297615435938
Final distance: 1.7450409851023654
Optimization ratio: 1.3369484049148201
Deriving optimized generator...
Splicing declarations
  dragenArbitrary ''Tree 10 uniform
======>
instance Arbitrary Tree where
  arbitrary = sized go_arOq
    where
      go_arOq n_arOr =
        if (n_arOr == 0)
          then frequency [(165, return LeafA), (162, return LeafB)]
          else frequency
                 [ (165, return LeafA)
                 , (162, return LeafB)
                 , (175, Node <$> go_arOq ((max 0) (n_arOr - 1))
                              <*> go_arOq ((max 0) (n_arOr - 1)))
                 , (152, Fork <$> go_arOq ((max 0) (n_arOr - 1))
                              <*> go_arOq ((max 0) (n_arOr - 1))
                              <*> go_arOq ((max 0) (n_arOr - 1))) ]
```

As you can see, the optimization process tries to reduce the difference between the predicted number of generated constructors of each kind and the generation size (10 in this case). Note that this process is far from perfect in this case and, in fact, we cannot expect exact results in most cases. The reason for the observable differences between the obtained distribution and the desired one is the implicit invariants of our Tree data type. So, it's important to be aware that most data types carry implicit invariants with them that we can't break while generating random values. For example, trying to obtain a uniform distribution of constructors for a list data type [] makes no sense, since we will always generate only one "nil" per list.

After deriving a random generator using our tool, you'd likely be interested in confirming that the predictions we made over the constructor distributions are sound. For this, our tool provides a function confirm :: Countable a => Size -> Gen a -> IO () to do so:

```
ghci> confirm 10 (arbitrary :: Gen Tree)
* ("Fork",7.077544)
* ("LeafA",11.757322)
* ("LeafB",11.546111)
* ("Node",8.148345)
```

Where the constraint Countable a can be automatically satisfied by providing a Generic instance of a, and in our case we can simply use standalone deriving to obtain it:

```haskell
{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE StandaloneDeriving #-}

deriving instance Generic Tree
instance Countable Tree
```

### Weighted generation

There may be some scenarios where we know that some constructors are more important than others while testing. In consequence, our tool provides the distance function weighted :: [(Name, Int)] -> DistFunction to guide the optimization process towards a target distribution of constructors where some of them can be generated in different proportion than others.
Using this distance function, the user lists the constructors and proportions of interest, and the optimization process will try to minimize the following function:

$$weighted(weights, size, freqs) = \sum_{C_i\ \in\ weights} \frac{(predict(C_i, freqs, size) - weight(C_i) \cdot size)^2}{weight(C_i) \cdot size}$$

Note that we only consider the listed constructors to be relevant while calculating the distance to the target distribution, meaning that the optimizer can freely adjust the rest of them in order to satisfy the constraints imposed in the weights list. For instance, say that we want to generate Tree values with two LeafAs for every three Nodes. We can express this in DRAGEN as follows:

```haskell
dragenArbitrary ''Tree 10 (weighted [(2, 'LeafA), (3, 'Node)])
```

Obtaining:

```
Reifiying: Tree
...
Optimizing the frequencies map:
********************************************************************************
***************************************
Optimized frequencies map:
* (Fork,98)
* (LeafA,32)
* (LeafB,107)
* (Node,103)
Predicted distribution for the optimized frequencies map:
* (Fork,28.36206776375821)
* (LeafA,20.15153900775499)
* (LeafB,67.38170855718076)
* (Node,29.809112037419343)
Initial distance: 18.14836436989009
Final distance: 2.3628106855081705e-3
Optimization ratio: 7680.837267750427
Deriving optimized generator...
...
```

Note in the previous example how the generation frequencies for LeafB and Fork are adjusted in a way that the specified proportion for LeafA and Node can be satisfied.

### Whitelisting/blacklisting constructors

Many testing scenarios require restricting the set of generated constructors to some subset of the available ones. We can express this in DRAGEN using the functions only :: [Name] -> DistFunction and without :: [Name] -> DistFunction to whitelist and blacklist some constructors in the derived generator, respectively. Mathematically:

$$only(whitelist, size, freqs) = \sum_{C_i\ \in\ whitelist} \frac{(predict(C_i, freqs, size) - size)^2}{size}$$

$$without(blacklist, size, freqs) = \sum_{C_i\ \notin\ blacklist} \frac{(predict(C_i, freqs, size) - size)^2}{size}$$

It is worth noticing that, in both distance functions, the restricted subset of constructors is then generated following a uniform fashion. Let's see an example of this:

```haskell
dragenArbitrary ''Tree 10 (only ['LeafA, 'Node])
```

Which produces:

```
Reifiying: Tree
...
Optimizing the frequencies map:
********************************************************************************
****************************************************************
Optimized frequencies map:
* (Fork,0)
* (LeafA,158)
* (LeafB,0)
* (Node,199)
Predicted distribution for the optimized frequencies map:
* (Fork,0.0)
* (LeafA,10.54154398815233)
* (LeafB,0.0)
* (Node,9.541543988152332)
Initial distance: 2.3732602211951874e7
Final distance: 5.036518059032003e-2
Optimization ratio: 4.7121050562684125e8
Deriving optimized generator...
...
```

In this last example we can easily note an invariant constraining the optimization process: every binary tree (which is how we have restricted our original Tree) with $n$ nodes has exactly $n + 1$ leaves. Considering this, the best result we can obtain while optimizing the generation frequencies consists of generating an average number of nodes and leaves that is symmetric in its distance to the generation size.

## Try DRAGEN!

DRAGEN is now on Hackage! I would love to hear some feedback about it, so feel free to open an issue on GitHub or to reach me by email whenever you want!
https://math.stackexchange.com/questions/744791/let-a-be-an-n-times-n-matrix-show-that-deta-1-frac1-deta
# let $A$ be an $n\times n$ matrix. Show that $\det(A^{-1}) = \frac{1}{\det(A)}$

Let $A$ be an $n \times n$ matrix, and show that $$\det(A^{-1}) = \frac{1}{\det(A)}.$$ Any tips on this one? Basically I don't have a clue.

• If you have proved that for square matrices $A$ and $B$ of the same size, we have $\text{det}(AB)=\text{det}(A)\text{det}(B)$, it will be easy. If you have to prove the above multiplication law, not easy. – André Nicolas Apr 8 '14 at 7:16
• I guess I can just assume that det(AB) = det(A)det(B) (aka don't have to prove it) :) but I'm not sure how this helps me? – yyzzer1234 Apr 8 '14 at 7:20
• $A\cdot A^{-1} = I$. – 5xum Apr 8 '14 at 7:20

Hint: We know that $AA^{-1} = I$. We also have the fact that, in general, $\det(AB) = \det(A)\det(B)$. Can you see where to go from here?

• Nope, not really, and I don't know what you mean with $AA^{-1} = I$? Just matrix $A$ times inverse matrix $A$ equals $I$? :/ – yyzzer1234 Apr 8 '14 at 7:26
• @yyzzer1234, try to combine the two facts that Kaj_H wrote down. In other words, try to calculate $\det(AA^{-1})$ – 5xum Apr 8 '14 at 7:38
• Yep, you're on the right track. – Kaj Hansen Apr 8 '14 at 7:39
• It's just 1, isn't it? – yyzzer1234 Apr 8 '14 at 7:44
• Certainly. Now bring home the proof: why is $\det(A) = 1/\det(A^{-1})$? – Kaj Hansen Apr 8 '14 at 7:46

From the properties of the determinant, for square matrices $A$ and $B$ of equal size we have $$|AB|=|A||B|,$$ which means the determinant is multiplicative. This means that the determinant of a matrix inverse can be found as follows: \begin{align} |I|&=\left|AA^{-1}\right|\\ 1&=|A|\left|A^{-1}\right|\\ \left|A^{-1}\right|&=\frac{1}{|A|}, \end{align} where $I$ is the identity matrix. $\blacksquare$

If $A$ is not defective, there exists an invertible matrix $P$ such that $D=P^{−1}AP$ is diagonal, i.e. $P$ diagonalizes $A$. The diagonal entries of $D^{-1}$ are the reciprocals of the entries of $D$, and the determinant of a diagonal matrix is the product of all diagonal entries. Since similar matrices have equal determinants, $\det(D)=\det(A)$ and $\det(D^{-1})=\det(A^{-1})$, so it follows that: $$\det(A^{-1}) = \frac{1}{\det(A)}.$$
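As a quick numerical sanity check of the identity (my addition, not part of the original thread):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))           # a generic A is invertible almost surely
lhs = np.linalg.det(np.linalg.inv(A))
rhs = 1.0 / np.linalg.det(A)
print(np.isclose(lhs, rhs))               # True, up to floating-point error
```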
http://mathcenter.oxford.emory.edu/site/math108/probSetReviewB/
## Exercises - Review Set B

1. What is the implicit domain (in set-builder notation) of $\displaystyle{f(x) = \frac{(x-7)^2\log_2(x+5)}{x-7}}$?

The domain of $\log_2 x$ is $\mathbb{R}_{>0}$, so we need $x \gt -5$. Further, we need $x \neq 7$ so that we don't divide by zero. Thus, the implicit domain of $f$ above is $\{x \in \mathbb{R} \ | \ x \gt -5 \textrm{ and } x \neq 7\}$.

2. Does the graph below represent a function? Explain how you know.

No, it does not pass the vertical line test.

3. Given the graph of the function $f(x)$ below:

   1. Determine if $f$ has an inverse, explaining how you know.
   2. Find what appears to be the domain of $f$.
   3. Find what appears to be the image/range of $f$.

   1. $f$ does not have an inverse as it fails the horizontal line test.
   2. The domain of $f$ appears to be $(-6,0] \cup (2,8]$.
   3. The image/range of $f$ appears to be $[-4,6)$.

4. Find the following sum, expressing your answer in base $5$, doing so without converting to another base first. $$(4213)_5 + (2334)_5$$

$(12102)_5$

5. Express the following combination of polynomials (of two variables) as a single polynomial of two variables: $$(3a - 2b)(5a + ab - 4b) + (22ab + 2ab^2)$$

$\displaystyle{15a^2 + 8b^2 + 3a^2b}$

6. Factor completely: $x^5 - 9x^3 - 8x^2 + 72$

Use factoring by grouping first, and then use difference of squares and difference of cubes to factor what results, obtaining: $$(x-3)(x+3)(x-2)(x^2+2x+4)$$

7. Decide if Eisenstein's Criterion can be used to prove the following polynomial is irreducible. If it can, find the prime $p$ in question. If not, explain why not. $$x^5 + 9x^4 - 15x^3 + 3x^2 + 21$$

Eisenstein's Criterion applies with $p=3$ (note: $p$ does not divide the coefficient on $x^5$, but divides all other coefficients (i.e., $9$, $-15$, $3$, and $21$), and $p^2$ does not divide the constant term, $21$). The polynomial given is irreducible -- it does not factor into a product of polynomials with integer coefficients.

8. Divide $f(x)$ by $g(x)$, expressing your answer in quotient-remainder form: $$f(x) = x^4 - 3x^3 + 2x^2 - 5 \quad \quad g(x) = x^2 - 2x + 1$$

$\displaystyle{x^4 - 3x^3 + 2x^2 - 5 = (x^2 - 2x + 1)(x^2 -x -1) + (-x-4)}$

9. Find both the additive and the multiplicative inverses of $4$ in $13$-hour clock arithmetic.

$9$ is the additive inverse; $10$ is the multiplicative inverse.

10. The set $S = \{A,B,C,D\}$, along with addition and multiplication as described by the tables below, is a commutative ring. Explain why it is not also a field. $$\begin{array}{c|cccc} + & A & B & C & D\\\hline A & A & B & C & D\\ B & B & C & D & C\\ C & C & D & C & B\\ D & D & C & B & A \end{array} \quad \quad \begin{array}{c|cccc} \times & A & B & C & D\\\hline A & A & A & A & A\\ B & A & B & C & D\\ C & A & C & A & C\\ D & A & D & C & B \end{array}$$

Note that $A$ plays the role of zero (i.e., the additive identity), as $A+x = x+A = x$ for any $x \in S$. Further, $B$ plays the role of one (i.e., the multiplicative identity), as $Bx = xB = x$ for the same. However, this makes $C$ a non-zero element which does not have a multiplicative inverse.

11. Simplify each expression below, assuming it is defined. You may leave your answer in factored form.

    1. $\displaystyle{\frac{x^2 + 6x + 9}{4x+12} \div \frac{2x^2+5x-3}{6x}}$
    2. $\displaystyle{\frac{x-2}{x^3 + 6x^2 + 5x} - \frac{1}{x^2 + 3x + 2}}$

    1. $\displaystyle{\frac{3x}{2(2x-1)}}$
    2. $\displaystyle{\frac{-(5x+4)}{x(x+1)(x+2)(x+5)}}$

12.
Find the simplified difference quotient $\cfrac{f(x+h)-f(x)}{h}$ for $f(x) = x^3$, assuming $h \neq 0$.

$\displaystyle{3x^2 + 3xh + h^2}$

13. Recalling $\mathbb{Q}(\sqrt{5})$ is a field, find rational values $c$ and $d$ so that $$\frac{1}{7 + 3\sqrt{5}} = c + d\sqrt{5}$$

$c = \frac{7}{4}$ and $d = -\frac{3}{4}$

14. Express in interval notation where the graph of $f(x) = x^3 + 2x^2$ is above the graph of $g(x) = 25x+50$.

$\displaystyle{(-5,-2) \cup (5,\infty)}$
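For instance, the computation behind the answer to problem 13 is the usual conjugate trick: $$\frac{1}{7 + 3\sqrt{5}} = \frac{7 - 3\sqrt{5}}{(7 + 3\sqrt{5})(7 - 3\sqrt{5})} = \frac{7 - 3\sqrt{5}}{49 - 45} = \frac{7}{4} - \frac{3}{4}\sqrt{5}$$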
2022-11-27 05:57:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8665273785591125, "perplexity": 272.21482639183836}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710192.90/warc/CC-MAIN-20221127041342-20221127071342-00031.warc.gz"}
https://stacks.math.columbia.edu/tag/0BK6
Lemma 13.29.5. Let $\mathcal{A}$ be an abelian category. Let $T$ be a set and for each $t \in T$ let $I_t^\bullet$ be a K-injective complex. If $I^n = \prod_t I_t^n$ exists for all $n$, then $I^\bullet$ is a K-injective complex. Moreover, $I^\bullet$ represents the product of the objects $I_t^\bullet$ in $D(\mathcal{A})$.

Proof. Let $K^\bullet$ be a complex. Then we have $\mathop{\mathrm{Hom}}\nolimits_{K(\mathcal{A})}(K^\bullet, I^\bullet) = \prod\nolimits_{t \in T} \mathop{\mathrm{Hom}}\nolimits_{K(\mathcal{A})}(K^\bullet, I_t^\bullet).$ Since taking products is an exact functor on the category of abelian groups, we see that if $K^\bullet$ is acyclic, then $\mathop{\mathrm{Hom}}\nolimits_{K(\mathcal{A})}(K^\bullet, I^\bullet)$ is acyclic, because this is true for each of the complexes $\mathop{\mathrm{Hom}}\nolimits_{K(\mathcal{A})}(K^\bullet, I_t^\bullet)$. Having said this, we can use Lemma 13.29.2 to conclude that $\mathop{\mathrm{Hom}}\nolimits_{D(\mathcal{A})}(K^\bullet, I^\bullet) = \prod\nolimits_{t \in T} \mathop{\mathrm{Hom}}\nolimits_{D(\mathcal{A})}(K^\bullet, I_t^\bullet)$ and indeed $I^\bullet$ represents the product in the derived category. $\square$
2019-07-17 16:42:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.978632390499115, "perplexity": 176.02993148110383}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525355.54/warc/CC-MAIN-20190717161703-20190717183703-00148.warc.gz"}
https://questions.examside.com/past-years/gate/gate-ece/engineering-mathematics/transform-theory
GATE ECE Engineering Mathematics Transform Theory Previous Years Questions

## Marks 1

- Consider the function $g(t) = e^{-t}\sin(2\pi t)\,u(t)$, where $u(t)$ is the unit step function. The...
- The unilateral Laplace transform of $f(t)$ is $\frac{1}{s^2 + s + 1}$. Which one of the following is the unilateral Laplace transform of $g$...
- If $x[n] = (1/3)^{|n|} - (1/2)^n\,u[n]$, then the region of convergence...
- The unilateral Laplace transform of $f(t)$ is $\frac{1}{s^2 + s + 1}$. The unilateral Laplace transform of $t\,f(t)$ is
- Given that $F(s)$ is the one-sided Laplace transform of $f(t)$, the Laplace transform of $\int_0^t f(\tau)\,\mathrm{d}\tau$ is
- Consider the function $f(t)$ having Laplace transform $F(s) = \frac{\omega_0}{s^2 + \omega_0^2}$, $\operatorname{Re}$...
- In what range should $\operatorname{Re}(s)$ remain so that the Laplace transform of the function $e^{(a+2)t+5}$ exists?
- The Laplace transform of $i(t)$ is given by $I(s) = \frac{2}{s(1+s)}$. As $t \to \infty$, the value of $i(t)$...
- If $L\{f(t)\} = F(s)$ then $L\{f(t-T)\}$ is equal to
- If $L\{f(t)\} = \frac{w}{s^2 + w^2}$ then the value of $\lim_{t \to \infty} f(t)$...
- The Laplace transform of $e^{\alpha t}\cos\alpha t$ is equal to ____________.
- The inverse Laplace transform of the function $\frac{s+5}{(s+1)(s+3)}$ is _______________.
- If $L\{f(t)\} = \frac{2(s+1)}{s^2 + 2s + 5}$ then $f(0^+)$ and $f($...

## Marks 2

- The bilateral Laplace transform of a function $f(t) = \begin{cases} 1 & \text{if } a \le t \le b \\ 0 & \text{otherwise} \end{cases}$...
- A system is described by the following differential equation, where $u(t)$ is the input to the system and $y(t)$ is the output of the system. ...
- Consider the differential equation $\frac{\mathrm{d}^2 y(t)}{\mathrm{d}t^2} + 2\frac{\mathrm{d}y(t)}{\mathrm{d}t} + y(t) = \delta($...
- The Dirac delta function $\delta(t)$ is defined as
- If $L\{f(t)\} = \frac{s+2}{s^2+1}$ and $L\{g(t)\} = \frac{s^2+1}{\cdots}$...
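One of the one-mark questions above has a quick worked answer via the standard final value theorem (valid here since $s\,I(s)$ has no poles in the closed right half-plane): for $I(s) = \frac{2}{s(1+s)}$,

$$\lim_{t \to \infty} i(t) \;=\; \lim_{s \to 0} s\,I(s) \;=\; \lim_{s \to 0} \frac{2}{1+s} \;=\; 2.$$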
2023-03-28 06:10:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9995998740196228, "perplexity": 2381.5546049286977}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948765.13/warc/CC-MAIN-20230328042424-20230328072424-00222.warc.gz"}
https://motls.blogspot.com/2008/08/strings-2008-tuesday.html
## Tuesday, August 19, 2008 ... // ### Strings 2008: Tuesday First, great news from the world of awards. Wikipedia pictures above were taken by your humble correspondent Joe Polchinski (KITP, UCSB), Juan Maldacena (IAS Princeton), and Cumrun Vafa (Harvard University) joined other well-known physicists and won the 2008 Dirac medal for their stringy discoveries. Congratulations! But back to Strings 2008. See also the main page about Strings 2008 on this blog... The PDF files are also available at the Strings 2008 website Luis Ibáňez started the Tuesday morning session by a talk about string phenomenology (PDF here). We all believe that string theory unifies gravity and particle physics but can the SM be embedded and can we predict new things? We will have to use the data (LHC, cosmology) to restrict the possible compactifications. He shows the 1995 duality hexagon of M-theory and adds some structure in it, insights until 2008 (D3-branes, G2 holonomy, RCFTs etc.). The region outside is now called swampland. ;-) He distinguishes global and local models - global ones are complete, local ones only care about a vicinity of some point in the extra dimensions. The latter incomplete approach is useful and is pursued by many big shots. (At this moment, the Mac starts to misbehave. Beep beep beep and Ibáňez, as an anti-Ellen Feiss, tries to switch to Windows.) That includes D-branes at singularities. In mapping the MSSM landscape, he begins with the E8 heterotic orbifolds. Pure MSSM can be obtained, gauge coupling unification is likely. Heterotic Calabi-Yaus follow: Wilson lines needed to break to the Standard Model. It's even simpler to eliminate non-MSSM matter. In type IIA, one combines intersecting D6-branes with orientifolds. The well-studied orbifold constructions involve Z2 x Z2 but recently people found Z6 examples, too. A problem is SM adjoint matter. Mirrors of these models involve magnetized IIB branes. About 210,000 type IIB Gepner-like RCFTs have been found to resemble the MSSM. Pure MSSM with no exotics can be found. These models probably correspond to too special points in the space of vacua. In type IIB, one can also consider D3-branes (and probably also D7-branes) at singularities. Finally, GUTs may be found in local F-theory (type IIB-based) compactifications, following Vafa et al. (as discussed right below Ibáňez's talk). New spectrum absent in normal IIB is possible, including spinorial matter and exceptional gauge groups. The GUT is broken to SM by magnetic fluxes. The picture seems to be rather unique. A table summarizes the successes of the classes - B-L, absence of exotics, gauge coupling unification, fixed moduli. None of them gets an "A" in realistic Yukawa couplings as of now. So we're "not there yet" in getting the complete SM. He looks at some landscape statistics - which doesn't mean that he adopts the anthropic selection criteria (calm down, please). He believes that some adjoint matter etc. is only light because we're looking at the orbifold points. We don't know whether low-energy SUSY is "generic". (Well, "generic" is something different than "predicted" by the theory, but OK.) He looks at the Yukawa couplings, stressing that non-perturbative contributions may be crucial. Examples of brane instantons in intersecting braneworlds follow. Fluxes have been known to fix the moduli for 5 years or so (somewhat bizarre references for this fixing). A better control is obtained in large-volume models with multiple separated Kähler moduli. 
In type IIA, one can stabilize the moduli without instantons (Kähler and complex structure moduli co-operate). And the bulk of the landscape could be non-geometric. What is the string scale? When it's 1 TeV, it's cool with all the Kaluza-Klein, stringy, black hole signatures at the LHC. More likely, when it's at the GUT scale, SUSY can be at 1 TeV. SUSY breaking has to be calculated and is not easy. In string theory, it can arise from closed string fluxes, dynamical breaking in a gauge sector. Also from gravity, gauge, anomaly mediation (and mirage - perhaps natural in KKLT...). They each have advantages and disadvantages. The LHC should tell us something about it here. Type IIB has no Kähler moduli dependence of the superpotential, unlike type IIA. There are three different very predictive types of SUSY breaking of some kind where all the superpartner masses are determined by one dimensionful (and a few known dimensionless) parameters. Intersecting 7-branes give us a very clear pattern. Stau tends to be the LSP but it can be fixed. The LHC will tell us something about the string theory vacuum. If low-energy gravity works, great. If SUSY is found, extremely good. The only pessimistic scenario is that only the Higgs is found: the anthropic explanation of the electroweak-Planck gap will gain power. Also, unexpected surprises are possible. In a few years, the hexagon of M-theory will be covered by overlapping new circles of LHC and cosmology constraints - the right class but probably not the right exact vacuum may be located. A very good talk! Cumrun Vafa mostly uses colorful tablet-PC, partially hand-written (maths and pictures) slides (PDF here). Very readable. (I was fixing his tablet PC as well as laptop once haha.) He starts his talk about F-theoretical phenomenology by our goal to find the theory of everything. He finds anthropic explanations unsatisfactory while the goal to find the full exact theory hard. To solve the first (anthropic) problem, he prefers to search for the keys under the lamppost ;-). To solve the second part, he has to look for parts: a justification of the "local models" follows. Cumrun refers to the SM-like sector as "open strings" and the gravity sector as "closed strings". So we focus on the vicinity of the place where the SM lives. One must assume that gravity decouples from the SM: that can be false but it's healthy to try. This assumption implies, for example, that the GUT must be asymptotically free so that gravity may have been postponed to higher-than-Planckian energies by Nature. Interesting matter-carrying branes must be SUSY-like, i.e. wrapped on 2, 3, or 4-dimensional cycles. He thinks that the higher-dimensional branes are more flexible which is why he chooses 3+4 = 7-dimensional branes, leading him to type IIB. Another input is a SUSY GUT-like unification. He views the pretty and natural representation theory of GUT to be stronger evidence supporting GUT than gauge coupling unification. Now, gauge groups like SO(10) are easy in type IIB but the spinor seems impossible (much like the top quark Yukawa coupling) so he must go to (va)F(a)-theory, non-perturbative IIB (his brainchild), where all problems are solved. Cumrun is shocked that his cell phone is able to interfere with the microphone or speakers (noise!). I've learned this thing a few months ago (experimentally). 
;-) There's a nice even-dimensional hierarchical structure here: gravity lives in 10 dimensions, gauge fields in 8 dimensions, matter fields in 6 dimensions, and interactions in 4 dimensions (the intersections). The SO(10) spinor arises from a decomposition of the E6 reps: E6 singularity is needed, requiring F-theory and the "5.10.10" coupling in SU(5) is generated from the E6 structure, too. Now, one can show that the 7-branes supporting the gauge fields must be del Pezzo surfaces because they must be able to shrink, giving you a positive curvature. The surface is essentially unique. The Wilson lines can't be used to break the GUT symmetry here since the del Pezzo has no cycles. The right Higgs can't exist either because that would correspond to a non-existent deformation of the local geometry. One is forced to use the fluxes. The cycle is determined! It must be mapped to a root of E8. Geometrically, he has to solve the doublet-triplet splitting problem and the solution automatically solves the proton decay problems, too: quartic terms in the superpotential (from 4-fold intersections) are absent. Predictions for light and heavy neutrinos seem reasonable, plus or minus an order of magnitude or so. The mu-terms and SUSY breaking will follow. The SUSY breaking is very predictive in this setup. Vafa reviews gauge and gravity mediation of SUSY breaking. The Goldstino chiral multiplet (X + theta^2 F) has the F-term. The dimension of F is squared mass. Depending on the value, one can distinguish the types of mediation. By his philosophy, he wants gauge mediation because gravity is decoupled. But now, B mu term can't be made small if the mu-term is large enough. So the mu-term must come from a D-term (Giudice-Masiero mechanism), like in gravity mediation. Tan beta is then naturally large, and the small bottom/top mass ratio is thus natural without fine-tuned Yukawa couplings. All scales are then fixed, close to the sweet spot SUSY, and the Peccei-Quinn 7-brane is paramount for SUSY breaking. The PQ symmetry is anomalous and Higgsed by a GS mechanism. String theory allows them a hybrid of Fayet and Polonyi models. The QCD axion arises automatically with a marginally tolerable decay constant, 10^{12} GeV. The good things, like the correct U(1)_{PQ} charges, are obtained from the E6 symmetry, without the extra field-theoretical E6 baggage. Cumrun is finally able to make extremely accurate predictions for the LHC. The Bino is the lightest superpartner, followed by stau. Tan beta is between 20 and 30 (an unusual value, a bold prediction, indeed). A brilliant talk. Question: is there a IIA mirror dual? Cumrun is not sure whether it exists at all. Question: how can the instantons be suppressed if you're in a non-perturbative regime? Cumrun says that non-perturbative is about "tau" but the suppression is due to large volumes. Another question is answered by "F wedge H is zero".

Cumrun spends his coffee break in off-camera, on-microphone discussions, mostly with Andy Strominger. Those 6 meters in between their offices in Cambridge are probably too many so these questions haven't yet been answered. ;-)

After the coffee break, Alessandro Tomasiello continues with a talk about AdS4 flux vacua (PDF here). Some people are motivated by AdS4 as the starting point for realistic vacua. He is motivated by knowledge of some stringy geometry (theoretical motivation). He will look at AdS4 x CP3. At some point, SUSY may become N=6, an old solution whose CFT3 dual was found recently.
He explains how generalized half-flat (generalized complex) manifolds are defined, by the amount of SUSY. The SU(3) structure manifolds - a subclass - are better known. The wedge products still vanish, like in Calabi-Yaus, but the exterior derivatives of J, Re(Omega) don't: they're proportional to the other form. Some bad news about the vacua are mentioned. But there are many of them. ;-) So he's listing various manifolds with N=3 (and N=2, N=1) SUSY, without explaining too clearly what (how complete) the list exactly is. A double-U(1) quotient of SU(3) has a known CFT3 (quiver gauge theory) dual. A list of allowed topologies (sometimes with several metrics per topology) increases a bit when some masses are allowed. It looks somewhat disorganized to me. A moduli space is found to be a line interval but that's an inaccurate artifact of SUGRA because 1) flux quantization, 2) string corrections. Some pictures with the angle whose meaning I missed are shown. Are there many animals of this kind, he asks? Answer: a question mark. Conclusions: even for simple topologies, there are often infinitely many vacua (with N=3 Chern-Simons CFT3 duals). Question: Michael Douglas wants to defend his statements about the finiteness of the number of vacua, so he points out that if one restricts the size of hidden dimensions, the number is finite. Answer: confirmed.

Timo Weigand talks about D-brane instantons in type II orientifolds, a technical topic that was investigated in a lot of papers during the last year (PDF here). The motivation seemed confusing. But the technicalities have content. D-brane instantons are divided into two groups - whether or not their cycles are inside existing physical D-branes. If they are, they can be interpreted as stringy realizations of gauge instantons. If they are not, they are exotic stringy instantons. A lot of work has been done and won't be mentioned. First, he counts zero modes on the instantons. They come from open strings that can either end on the same D-brane instanton or on two different ones. The first group has some universal modes; the second is typically found near intersections and has phenomenologically interesting couplings. Superpotential can only be generated by BPS instantons, and not even all of them: two zero modes must be lifted. By picking a transverse geometry or e.g. a flux or ... by interactions in the E-E' sector. (D-terms are contributed to by non-BPS, off-calibrated instantons.) Concerning the latter, the goldstinos are lifted by this E-E' stuff. Now he, somewhat repetitively and off-topic, jumps to the other ways of lifting the zero modes. He talks about the invariance of the instantons under the orientifold transformation. At some points in the closed-string moduli space, you're forced to choose bound states of instantons. A rather complicated discussion of which terms are generated by various bound states of the instantons appears here. Chiral intersections can prevent the instanton action from having any method to lift the zero modes (global constraints, related to index theorems etc.). By looking at lines of marginal (or, later, threshold) stability, one can see that the instantons should be allowed to split etc. He argues that only certain superpotentials can occur from the D-brane instantons: they should satisfy similar charge constraints as the perturbative terms, except that the balance may be shifted by the charges from the additional zero modes. This stuff has various applications.
One of them is SUSY breaking by F-terms: production of Polonyi terms. He tries to construct a full-fledged SUSY breaking scenario. The context is somewhat unclear to me. Question/complaint: Cumrun says that the U(1) and SU(5) couplings should be naturally identified which makes it unnatural to produce the "5.10.10" coupling in Timo's way. Yes.

Stephan Stieberger titled his talk "Superstring amplitudes and implications for the LHC" (PDF here). It's focusing on tree-level multi-point amplitudes, their compact form (e.g. six-gluon disk amplitude), and possible stringy signals at the LHC relevant for QCD jets. Now, he reviews the MHV-like QCD amplitudes (we know from the twistor industry). The next slides are about SUSY variations of vertex operators. He argues that certain recursive relations for multi-point MHV QCD amplitudes hold to all orders in alpha' in string theory (universal for all compactifications). SUSY Ward identities reduce 6-point amplitudes to simpler ones. Then he wants to get the full n-gluon amplitude in string theory from the "first principles", namely from the correct soft boson limit, collinear limit (factorization), and permutation symmetries. He looks at various arrangements, e.g. 2 gluons and 2 chiral fermions. The results so far are universal for type I and type II theories. To see some stringy stuff of this simple kind at the LHC, he needs to assume ADD large dimensions. To see the strings, he would look at dijet events and Regge excitation resonances in the s-channel. Well, it would indeed be easy to see the strings if they existed there. Now, the discussion almost looks like Chapter 1 of the Green-Schwarz-Witten textbook. High-precision tests would tell us about the internal shape but he doesn't specify how the reverse engineering is made. A question: what are you doing with background? Answer: Yes (not clear what he exactly means). ;-) Another question from Kiritsis: why haven't you seen the Z'-like particles at 100s of GeV that would exist for a TeV string scale? Answer: Z' are irrelevant. A small argument explodes. At any rate, I agree with the guy who asks that these models are already excluded.

Ron Donagi started the afternoon session with Heterotic Standard Models, a topic that was repeatedly covered on this blog. The talk began with a technical interlude, namely a struggle involving the screen size of the Apple's PowerPoint (or replacement). The Apple devoured his paper. It was a really good paper. A kind of a bummer. Applause. :-)

Juan Maldacena was ready to jump onto the scene and speak instead about the membrane minirevolution, namely their "ABJM" N=6 supersymmetric U(N) x U(N) Chern-Simons SCFT in three dimensions, generalizing the Bagger-Lambert-Gustavsson theory (PDF here). He wrote the action and demonstrated its classical scale invariance. Then he mentioned that N=3 CS-like (with Klebanov-Witten quartic superpotential) theories are common in 3D. He doubles the supercharges by looking at some R-symmetries. The theory describes M2-branes probing an 8-manifold with an R8 / Z_k singularity. In detail, two NS5-branes with N D3-branes give Yang-Mills plus bifundamental matter. One NS5-brane is rotated, we get N=3 YM CS plus bifundamental hypers. Some dualities lead to M-theory with two circles. Two KK monopoles are possible and their intersection is a special kind of hyperKähler singularity. Close to the R8/Z_k singularity, SUSY is enhanced to N=6. 1/k plays the role of the coupling constant: the theory is free for large "k".
There is another parameter N, the number of M2-branes, and a 't Hooft limit is possible for N/k=lambda fixed and N large. For N=2 and U(2)'s replaced by SU(2)'s, one gets the Bagger-Lambert-Gustavsson theory. The gravity dual involves AdS4 x S7/Z_k, with a free action. For large k, Z_k "becomes" U(1) and S^7 becomes CP_3 - Tomasiello's talk... When he calculates the thermal free energy, the 3/4 from YM is replaced by 1/sqrt(lambda). He discusses operators - some BMN-like traces as well as 't Hooft operators (postulating a unit of magnetic flux around one point). A bifundamental operator must be added (k of them). The BMN-traces are simply type IIA strings, with no KK momentum along the Z_k orbifolded direction. The others are D0-branes, with a D0 momentum. For k=1,2 he gets enhanced symmetries, analogous to SU(2)'s at the self-dual radius, in this case ordinary SU(4) and/or an extra center-of-mass symmetry for k=1. Similarly to AdS5 x S5, it seems integrable (classically) and you wonder whether it is an exact statement. Changing U(N) x U(N) to two different ranks is like adding torsion F4 flux in M-theory. You can't find a Lagrangian that would flow to it. One can try to orientifold the theory, squash the 7-sphere, take more complex quivers, etc. So in conclusion, they have presented a surely interesting theory. He wants to master the 't Hooft operators, decide the integrability, maybe find duals of more general AdS4 vacua, and study the condensed-matter applications (which is more likely for their theory than describing the Universe). A question why it is a gauge theory or something like that - hard to hear through the noise. Juan didn't quite know the answer. Another question: why would you expect conformal invariance? Answer: SUSY, presence of singularity in the moduli space. Third question: what condensed-matter applications? Answer - two: either 2+1-dimensional systems; or the Euclidean version may be good for critical phenomena. Another question: can you get the Yang-Mills limit (for k=1)? Answer: repeating some BL-G wisdom plus no answer about k=1.

Ron Donagi has another attempt (PDF here). Everything works now (except for the letter "B" at the end of every line). Heterotic Standard Models are the High Country of the landscape (anti-swampland): only 1 item is known right now. They're looking for full global models only. He plans to cover 7 papers, 6 of which included him, one of which is in preparation (with a female co-author). The playing field is a Calabi-Yau with an SU(4) or SU(5) polystable bundle. Anomalies must be canceled: c2(X)-c2(V)=[M5 branes]. Commutant H in G is the low-energy group, Wilson lines (Z2 for SU(5) or, for SU(4), Z3 squared or Z6) get you to MSSM, 3 generations must exist. For his favorite SU(5) case with Z2 Wilson lines, he needs a manifold with a freely acting Z2. Xtilde, the larger manifold, is either his favorite fiber product of two del Pezzo surfaces. Or a complete intersection of 4 quadrics in CP7. ;-) His way is the only one close to MSSM so he explains the fiber product. It's like a Cartesian product of two elliptic fibrations except that you only take the points with the same location on the two fibers, effectively removing one of them. The manifold has h12, h11 equal to 19, 19, superficially a self-mirror. Fourier-Mukai transform is used to construct the (Z2-invariant) bundle. Sometimes, monads are helpful etc. The anomaly is canceled either by M5-branes or, preferably, by bundles in the hidden sectors.
Years ago, he expected the model to be the first example among zillions. It unexpectedly remains the only one. So he still finds it ludicrous for him to successfully describe the Universe by his first algebraic geometry construction but the audience is clearly expected to be more optimistic. ;-) My estimated probability that their precise model is right is comparable to 1%. Phenomenological properties seem OK - pure MSSM, R-symmetry preserved classically (stable proton), semi-realistic Yukawa couplings and mu-terms. There are other models which don't have stable V (Braun et al.). NAHE by Faraggi et al. are mentioned, too. Relaxing one of the conditions expands the landscape hugely. Now he talks about many not-quite-realistic models, including the (51,3) Vafa-Witten model, classified by various groups etc.: large tables with discrete data. A (2-9) free fermionic model is connected to their geometric compactification. In the new paper, they have 1 construction that may generate a couple of new examples (or not). To summarize, the High Country is small and only has 1 fine representative right now. His plan involves strategies to look for new geometries and bundles. I think they should pay much more attention to detailed investigation of their best model. Stabilization & F-theory duals should be looked at. In the question period, a participant claims that you can use fluxes to break the group. Another question is answered by Donagi's absent taste to study asymmetric orbifolds and nongeometric models. Another question is what they do with the hidden E8. Initially nothing. Later, it has a bundle on it. Addition to the question: he thinks that if both E8 can be used nontrivially, the High Country expands dramatically, he says. Donagi would like to know details. Neil Lambert - now a part of Bagger-Lambert - will unsurprisingly talk about multiple M2-brane Lagrangians, the membrane minirevolution he helped to spark (his PDF is here). He can't enumerate all the work here - there has been too much. M-branes are hard, there's no dilaton to make it weakly coupled. The Lagrangian description is not known - a point to be challenged (although Juan's challenge has probably been superior by now). For a stack of M2-branes, the SUSY variation of X is universal - schematically epsilon times psi. The variation of psi is epsilon times partial(X) plus a cubic term in X, in this case, times epsilon. So he's led to a 3-algebra (something with a triple product). Historically, he reviewed his steps to construct the Lagrangian. Click at "membrane minirevolution" above to see more comments about this construction; I won't repeat it here. The algebra closes if the mutated Jacobi ("fundamental") identity holds. The Lagrangian eventually has the right symmetries, including parity (that was hard). The SU(2) x SU(2)-based 3-algebra, the simplest example, is explained. There are infinite-dimensional examples (equivalent to an M5-brane?). Their simple theory has R8 x R8 / D_{2k} for the two membranes. For k=1, it only differs by a O(4) vs SO(4) difference. For k=2, it works. For higher k, the orbifold action looks weird: the coordinates of branes are nontrivially mixed/rotated together as a doublet. ;-) The origin of N^3 is hinted. Enhanced symmetry (classically) appears when the branes are collinear, not necessarily coincident. When the 4-index structure constants are non-antisymmetric, there are infinitely many examples but there are no gauge-invariant observables. 
The status of the non-unitary, indefinite algebras is not yet settled while "ABJM" (see Maldacena above) is where the field has gone. Various other modifications - like "ABJ" with torsion - are mentioned. For SU(4) x U(1) smaller R-symmetry replacing SO(8), they're led to new symmetry conditions for the structure constants. They're Riemann-tensor-like symmetries, with an extra complex conjugation for the exchange of the pairs of indices. You find an infinite class of 3-algebras here, with explicit "XZ*Y - YZ*X" formulae for the 3-product. Many more papers with new groups, classifications of models etc. To conclude, they constructed a unique (but k-labeled) theory for multiple M2-branes. It is the only example of a maximally supersymmetric gauge theory without gauge bosons. ABJM is the interesting broader class. He bets - but can't prove - that the N=8 theory is relevant for M-theory even above k=2. Can we see the 3/2 power in the entropy? Vague proposals. Do we really need the 3-algebras? You're right, we don't. ;-) But they have the same classification. But the physical fields, scalars and fermions, don't directly see the 3-bracket. His mother is one of 2 people who believe that something here is interesting ;-), thank you. Some questions. The first had a vague answer. The second, about coupling to SUGRA backgrounds, is also unclear. Many other questions are asked (Neil is a great person to answer questions), for example: why can you only describe 2 branes? Neil thinks that it just seems to be the only number for which this theory works (some special features of the orbifold).

To make the topics diverse, Sunil Mukhi - who is also a blogger ;-) and who is blogging from the coolest place in the Universe - speaks about the membrane minirevolution, too (PDF here). He will try to minimize the overlaps. He will describe roughly 3 papers, including the D2-branes from M2-branes that we reported at the beginning of the minirevolution. Sunil is funny. There was an agreement that it (M2-brane Lagrangian) couldn't be done because it was not done. But now, once it's been done, we agree it can be done but it should be done better. In France, they have "brane" wines - a bottle shown on a picture. ;-) His interest is in the extension of SO(7) to SO(8) and his classification of the known algebras is from a somewhat different angle. Unlike other speakers, he finds the indefinite 3-algebras interesting and he will focus on them. A new gauge symmetry manifestly removes the bad ghosts (and some good things, too). Some overlap with "ABJM" and "ABJ" is mentioned. New excuses why the theory is not known for N above 2: it would be strongly coupled, anyway (and the classical Lagrangian not overly useful). As explained in the "D2 from M2" article linked above, Sunil tells us how the gauge field becomes dynamical - a new kind of Higgs mechanism. How is it possible that Higgsing makes a compactification? Because of higher corrections in 1/vev. The decoupling is only for infinite vev (like in our deconstruction paper with Nima et al. that Sunil mentions: yes, the derivation of the cylinder limit from the cone, and the stringy duality derivation from the quiver, was my work in the paper). In this present setup, the large vev can be replaced by a high level (order of the orbifolding group). Finally, something that Sunil found pretty, then ugly, and now again pretty. ;-) The Lorentzian algebras. He adds some B-wedge-F terms to the Lagrangian.
These theories violate Juan's wisdom that one can make a theory classical by adding a large classical prefactor to the action: if you add one, you can get rid of it by a field redefinition. The Higgs mechanism works in their picture but it works too well. ;-) More precisely, one gets the exact Yang-Mills (a reformulation? that seems disappointing). To show the equivalence, non-Abelian dNS symmetry produces a non-dynamical gauge field: harmless. The duality works, by integrating out something (B?), and he explicitly constructs a Lagrangian where the SO(8) symmetry emerges except that it should also act on the coupling constants. To summarize, after some exercises, one can rewrite the N=8 Yang-Mills in a (Lorentzian) 3-algebra friendly way. But the superconformal and SO(8) symmetry is broken immediately when the vevs etc. are added. Last two minutes dedicated to extra topics about the Lorentzian algebras: can one generalize the steps above with alpha' corrections added? Will the 3-algebra structure survive the stringy additions? So he adds a lot of F^4 terms and those of the same order. After the procedure, the result is still SO(8)-invariant! The enhancement works to all orders. To conclude, there's been much progress for multiple M2-branes but not a complete progress. A funny picture at the end. For the third talk about the same topic, it was an extremely refreshing and original talk! ;-) Question: is the equivalence classical or quantum? Answer: it was done classically. Juan: what's the Goldstone boson for the broken conformal symmetry? It's not there - the field must be constant. New question: make D2-branes in a varying dilaton. Will the X8 vary? Sunil sees no problems but warns that the variations of other fields can't be forgotten. Tuesday talks are over. The text above is too long, too few people will read it, and I won't be fixing the typos, sorry. Monday, Tuesday, Wednesday, Thursday, Friday
2019-07-18 03:14:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6525246500968933, "perplexity": 2244.141386123417}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525483.64/warc/CC-MAIN-20190718022001-20190718044001-00487.warc.gz"}
https://www.learncram.com/ml-aggarwal/ml-aggarwal-class-8-solutions-for-icse-maths-chapter-2-ex-2-2/
# ML Aggarwal Class 8 Solutions for ICSE Maths Chapter 2 Exponents and Powers Ex 2.2

Question 1. Express the following numbers in standard form:
(i) 0.0000000000085 (ii) 0.000000000000942 (iii) 6020000000000000 (iv) 0.00000000837
Solution:
(i) $8.5 \times 10^{-12}$ (ii) $9.42 \times 10^{-13}$ (iii) $6.02 \times 10^{15}$ (iv) $8.37 \times 10^{-9}$

Question 2. Express the following numbers in the usual form:
(i) $3.02 \times 10^{-6}$ (ii) $1.007 \times 10^{11}$ (iii) $5.375 \times 10^{14}$ (iv) $7.579 \times 10^{-14}$
Solution:
(i) 0.00000302 (ii) 100,700,000,000 (iii) 537,500,000,000,000 (iv) 0.00000000000007579

Question 3. Express the number appearing in the following statements in standard form:
(i) The mass of a proton is 0.000000000000000000000001673 gram.
(ii) The thickness of a piece of paper is 0.0016 cm.
(iii) The diameter of a wire on a computer chip is 0.000003 m.
(iv) A helium atom has a diameter of $\frac{22}{100000000000}$ m.
(v) Mass of a molecule of hydrogen gas is about 0.00000000000000000000334 tons.
(vi) The human body has 1 trillion cells which vary in shapes and sizes.
(vii) The distance from the Earth to the Sun is 149,600,000,000 m.
(viii) The speed of light is 300,000,000 m/sec.
(ix) Mass of the Earth is 5,970,000,000,000,000,000,000,000 kg.
(x) Express 3 years in seconds.
(xi) Express 7 hectares in cm².
(xii) A sugar factory has annual sales of 3 billion 720 million kilograms of sugar.
Solution:
(i) $1.673 \times 10^{-24}$ gram (ii) $1.6 \times 10^{-3}$ cm (iii) $3 \times 10^{-6}$ m (iv) $2.2 \times 10^{-10}$ m (v) $3.34 \times 10^{-21}$ tons (vi) $1 \times 10^{12}$ cells (vii) $1.496 \times 10^{11}$ m (viii) $3 \times 10^{8}$ m/sec (ix) $5.97 \times 10^{24}$ kg (x) $3 \times 365 \times 24 \times 60 \times 60 = 94{,}608{,}000 = 9.4608 \times 10^{7}$ seconds (xi) $7 \times 10^{4}$ m² $= 7 \times 10^{8}$ cm² (xii) $3{,}720{,}000{,}000 = 3.72 \times 10^{9}$ kg

Question 4. Compare the following:
(i) Size of a plant cell to the thickness of a piece of paper.
(ii) Size of a plant cell to the diameter of a wire on a computer chip.
(iii) The thickness of a piece of paper to the diameter of a wire on a computer chip.
Given: size of plant cell = 0.00001275 m, thickness of a piece of paper = 0.0016 cm, diameter of a wire on a computer chip = 0.000003 m.
Solution:
In metres: cell $= 1.275 \times 10^{-5}$, paper $= 1.6 \times 10^{-5}$, wire $= 3 \times 10^{-6}$.
(i) cell : paper $= \frac{1.275 \times 10^{-5}}{1.6 \times 10^{-5}} \approx 0.8$, so the plant cell is about $0.8$ times the thickness of the paper.
(ii) cell : wire $= \frac{1.275 \times 10^{-5}}{3 \times 10^{-6}} = 4.25$, so the plant cell is $4.25$ times the diameter of the wire.
(iii) paper : wire $= \frac{1.6 \times 10^{-5}}{3 \times 10^{-6}} \approx 5.3$, so the paper is about $5.3$ times the diameter of the wire.

Question 5. The number of red blood cells per cubic millimetre of blood is approximately 5.5 million. If the average body contains 5 litres of blood, what is the total number of red cells in the body? (1 litre = 1,00,000 mm³)
Solution:
$5$ litres $= 5 \times 10^{5}$ mm³, so the total number of red cells $= 5.5 \times 10^{6} \times 5 \times 10^{5} = 2.75 \times 10^{12}$.

Question 6. Mass of Mars is $6.42 \times 10^{29}$ kg and the mass of the sun is $1.99 \times 10^{30}$ kg. What is the total mass?
Solution:
$6.42 \times 10^{29} + 1.99 \times 10^{30} = (0.642 + 1.99) \times 10^{30} = 2.632 \times 10^{30}$ kg.

Question 7. A particular star is at a distance of about $8.1 \times 10^{13}$ km from the Earth. Assuming that the light travels at $3 \times 10^{8}$ m/sec, find how long light takes from that star to reach the Earth.
Solution:
$8.1 \times 10^{13}$ km $= 8.1 \times 10^{16}$ m, so the time taken $= \frac{8.1 \times 10^{16}}{3 \times 10^{8}} = 2.7 \times 10^{8}$ seconds.
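As an aside (not part of the textbook's solutions), JavaScript's built-in number formatting can be used to check conversions like these:

```javascript
// toExponential() gives the standard form; Number() recovers the usual form.
console.log((0.0000000000085).toExponential());  // "8.5e-12", i.e. 8.5 × 10^-12
console.log((6020000000000000).toExponential()); // "6.02e+15"
console.log(Number("3.02e-6"));                  // 0.00000302
```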
2023-04-01 13:51:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5588470697402954, "perplexity": 1439.5904878093652}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950030.57/warc/CC-MAIN-20230401125552-20230401155552-00505.warc.gz"}
https://www.physicsforums.com/threads/javascript-function-question.639256/
# [JavaScript] function question

write an if statement that checks if the parameter exponent is 0. If it is, return 1 (a base case).

Code:
var power = function(exponent, base){
    if (exponent === 0){
        return 1;
    }
};
power();

It does not work, can anyone tell why?

Mark44 Mentor

You are not calling your function correctly.

1. The power function has two parameters. You are calling it with no parameters.
2. The power function returns a value, so you need to store or otherwise use the return value.

Code:
var retValue = power(0, 10);

After the code above runs, retValue should be set to 1.

Also, you don't need to use === in your comparison, since you're just comparing numbers, not objects. The == operator should work just fine.
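For completeness, here is a sketch of the full recursive function the exercise appears to be building toward; the recursive case is an assumption on my part, not something given in the thread:

Code:
var power = function(exponent, base){
    // base case: anything to the 0th power is 1
    // (assumes exponent is a non-negative integer)
    if (exponent === 0){
        return 1;
    }
    // assumed recursive case: multiply by base and reduce the exponent
    return base * power(exponent - 1, base);
};

var retValue = power(3, 2); // 2 to the 3rd power, given the (exponent, base) order
console.log(retValue);      // 8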
2021-02-28 01:43:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5315659642219543, "perplexity": 3404.0129287405043}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178359624.36/warc/CC-MAIN-20210227234501-20210228024501-00021.warc.gz"}
https://golem.ph.utexas.edu/category/2012/09/the_spread_of_a_metric_space.html
## September 5, 2012

### The Spread of a Metric Space

#### Posted by Simon Willerton

Given a finite metric space $X$ we can define the spread $E_0(X)$ by $E_0(X)\coloneqq \sum_x \frac{1}{\sum_{y} e^{-d(x,y)}}.$ This turns out to be a nice measure of the 'size' of the metric space. I've just finished a paper on this: In this post I'll give a quick overview of the paper, mentioning connections to biodiversity; magnitude; volume and total scalar curvature; and Hausdorff dimension. A few of these ideas are looked at in a slightly more dynamic way in my recent talk at the CRM Exploratory programme on the mathematics of biodiversity: Magnitude and other measures of metric spaces.

I should say that Tom Leinster convinced me to switch to using the word 'spread' as prior to that I had been using the much more uncouth word 'bigness'. The following five snapshots of the spread are more-or-less independent of each other.

#### Diversity

The definition of spread was motivated by Tom Leinster and Christina Cobbold's definition of diversity measures for ecosystems in which they take into account the relative abundances of the species and the relatedness of each pair of species, or in math-speak, we can use these to assign a number to each finite metric space with a probability measure on it – the points represent the species, the metric describes the relatedness and the probability measure describes the relative abundances. Tom and Christina give us a diversity measure for each number $q$ with $0\le q \le\infty$, so for a finite metric space we can use the uniform probability measure on it and take the diversity measure for $q=0$: that is precisely the spread.

The more classical version of Tom and Christina's measures – the Hill numbers – only takes into account relative abundances of species and not the relatedness of species, so these are defined for probability spaces with no metric on them (or with the discrete metric on them, if you prefer). In that case, the $q=0$ diversity measure is just the number of species present. So from that point of view, the spread can be seen as an analogue of the number of species in an ecosystem.

#### Basic behaviour

If $X$ is a finite metric space with $N$ points, $t \gt 0$ and $t X$ denotes $X$ with the metric scaled by a factor of $t$ then we get the following basic properties of the spread.

• $1\le E_0(X)\le N$;
• $E_0(t X)$ is increasing in $t$;
• $E_0(t X)\to 1$ as $t\to 0$;
• $E_0(t X)\to N$ as $t\to \infty$;
• $E_0(X)\le e^{diam(X)}$.

For instance, if $t R$ is the following three-point metric space then we get the following plot of the spread as we scale $t$. So when the space is scaled very small it looks like there is one point, at medium scales it looks like there are two points, and at large scales all three points are distinctly visible.

#### Magnitude

Long-time readers of this blog will not be surprised to hear that spread is connected to Tom Leinster's notion of magnitude (called cardinality in some early blog posts). The magnitude $|X|$ of a metric space $X$ is not always defined; for instance, Tom showed that there is no magnitude defined for a certain scaling of the five-point metric space coming from the complete bipartite graph $K_{3,2}$. If we plot the magnitude of the scalings of this space together with the spread of these scalings then we see that the spread is rather more nicely behaved! Tom Leinster said he thinks this shows that the spread is more suave than the magnitude.
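To make the definition at the top concrete, here is a minimal computational sketch (not from the paper, and with a made-up three-point distance matrix standing in for the space $R$ in the figure):

```javascript
// Minimal sketch: the spread E_0(tX) of a finite metric space,
// given its distance matrix d and a scale factor t.
function spread(d, t = 1) {
  return d.reduce((total, row) =>
    total + 1 / row.reduce((s, dist) => s + Math.exp(-t * dist), 0), 0);
}

// An invented three-point space: two points close together, one far away.
const R = [[0, 1, 10],
           [1, 0, 10],
           [10, 10, 0]];
console.log(spread(R, 0.01)); // ≈ 1: at tiny scales it looks like one point
console.log(spread(R, 1));    // ≈ 2.46: at medium scales, roughly two points
console.log(spread(R, 100));  // ≈ 3: at large scales all three points show
```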
For well-behaved metric spaces the magnitude is an upper bound for the spread – in this case 'well behaved' means 'positive definite' and such spaces include subspaces of Euclidean space. For homogeneous metric spaces the magnitude and the spread actually coincide.

#### Dimension

As we have a notion of size, we can associate a notion of dimension.

• If we scale a line by a factor of $t$ then its length (its size) scales by a factor of $t^1$.
• If we scale a rectangle by a factor of $t$ then its area (its size) scales by a factor of $t^2$.
• If we scale a cuboid by a factor of $t$ then its volume (its size) scales by a factor of $t^3$.

Of course, the $1$, $2$ and $3$ appearing there are our usual concept of dimension of those three spaces. So given a notion of size of metric spaces, the associated dimension of a metric space is the growth rate of the size. One has to be precise about what growth rate is here, but you can think of it as the (instantaneous) slope of the log-log plot of size versus scale. In particular, the spread dimension $dim_0(X)$ of a metric space $X$ is defined by $dim_0(X)\,\coloneqq\, \left.\frac{\mathrm{d} \,log(E_0(t X))}{\mathrm{d} \,log(t)}\right|_{t=1} \, =\, \left.\frac{t}{E_0(t X)}\frac{\mathrm{d}E_0(tX)}{\mathrm{d}t}\right|_{t=1}.$

This notion of spread dimension is scale dependent, but that is not unreasonable. Think of a very long and very thin rectangle. If it is scaled very, very small you might think it looks like a point, so is zero-dimensional. As it is scaled up you start to notice its length but not its width, so it seems one-dimensional. As it is scaled even more it finally looks two-dimensional. If this rectangle is actually made from, say, atoms, then if it is scaled even more then you start to see the individual points and it begins to look zero-dimensional again. We can numerically calculate the spread dimension for a rectangle of $10$ by $4900$ points at various scales, and the above describes exactly the kind of phenomena that we see. If my computer could handle much bigger matrices then I could calculate the dimension profile for say $1000$ by $100000000$ points and see a much more pronounced step-like behaviour from zero to one to two to zero dimensions.

We don't have to stick to boring old shapes like rectangles. We can try fractals! We can take a finite metric space with lots of points which approximates say the Koch curve, get Maple to calculate the spread dimension at various scales and plot the result. At medium scales the spread dimension is shockingly close to the Hausdorff dimension of the Koch curve, namely $\ln 4/ \ln 3$. So when it's not too small and it's not too large, then the finite approximation looks 'Koch-like'. The same phenomenon is seen for finite approximations to Cantor sets and Sierpinski triangles, so there seems to be some geometric content to the spread dimension.

#### Non-finite metric spaces

There is an obvious way to generalize the notion of spread to a non-finite metric space $X$ provided that the metric space comes equipped with a canonical measure $\mu$ such that the total measure of the space is finite. In that case we just define the spread as follows. $E_0(X)\coloneqq\int_{x\in X} \frac{\mathrm{d}\mu(x)}{\int_{y\in X} e^{-d(x,y)} \,\mathrm{d}\mu(y)}.$ From that one can calculate the spread of a length $\ell$ straight line segment and of an $n$-sphere with radius $R$. As it's getting late, I won't put the details here, but let the interested reader find out the answers by looking in the paper.
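Back in the finite setting, the spread dimension above can be estimated numerically as the slope of that log-log plot; this sketch reuses the spread function from the earlier snippet (again mine, not from the paper, and the step size is arbitrary):

```javascript
// Numerical estimate of the spread dimension of d at scale t, as the
// slope of log E_0 against log(scale), via a central difference.
function spreadDimension(d, t, h = 1e-4) {
  const up = Math.log(spread(d, t * (1 + h)));
  const down = Math.log(spread(d, t * (1 - h)));
  return (up - down) / (Math.log(1 + h) - Math.log(1 - h));
}

console.log(spreadDimension(R, 1)); // dimension of the three-point space at t = 1
```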
One class of metric spaces with a canonical measure is the class of compact Riemannian manifolds. For this class of spaces it is possible to calculate the leading order terms in the asymptotic behaviour of the spread as a space $M$ is scaled up. These leading order terms depend just on the volume $vol(M)$ and the total scalar curvature $tsc(M)$; in particular we have the following. $E_0(t M) = \frac{1}{n!\,\omega_n}\left(t^n vol(M) + \frac{n+1}{6}t^{n-2} tsc(M) +O(t^{n-4})\right) \quad \text{as }\quad t\to\infty,$ where $\omega_n$ is the volume of the unit $n$-ball. In particular, for a Riemannian surface $\Sigma$ we have: $E_0(t\Sigma)=\frac{area(\Sigma)}{2\pi}t^2+\chi(\Sigma)+O(t^{-2})\quad \text{as } \quad t\to\infty,$ where $\chi$ is the Euler characteristic.

In conclusion, it seems that this easy-to-write-down measure of the size of a metric space has many interesting properties.

Posted at September 5, 2012 5:09 PM UTC

### Re: The Spread of a Metric Space

This looks quite neat. I gather from your graphs and examples that the spread has an "intrinsic length scale" of about 1 — in the sense that it considers distinct points to "cohere together" roughly when they are less than distance 1 apart. Is that a correct intuition? Is it possible to write down variants of the spread for which this intrinsic scale is some other number?

Posted by: Mike Shulman on September 5, 2012 10:02 PM | Permalink | Reply to this

### Re: The Spread of a Metric Space

There does seem to be a natural length at that kind of scale you mention. I tend to think of the units as being centimeters with the metric space being held in my hands; that seems to give a reasonable intuition, but I've not got anything much stronger than that to say. If you change the $e$ in the definition to another positive constant $a$, then the natural scale should go from around $1$ to around $1/\ln(a)$.

Posted by: Simon Willerton on September 5, 2012 10:50 PM | Permalink | Reply to this

### Re: The Spread of a Metric Space

Hooray!

Posted by: Tom Leinster on September 6, 2012 1:58 AM | Permalink | Reply to this

### Re: The Spread of a Metric Space

> I should say that Tom Leinster convinced me to switch to using the word 'spread' as prior to that I had been using the much more uncouth word 'bigness'.

Part of me is really sad that you switched. Would've loved 'bigness'.

Posted by: Todd Trimble on September 6, 2012 6:51 AM | Permalink | Reply to this

### Re: The Spread of a Metric Space

'Bigness' is a nice word, but I like 'spread' better here. This is partly for the boring, pedantic reason that 'bigness' sounds too general for any one quantity — why should this be called 'bigness' instead of any other quantity? But it's also because I think what $E_0$ measures really is how 'spread out' the $\mu$-mass of $X$ is.

Posted by: Mark Meckes on September 6, 2012 3:43 PM | Permalink | Reply to this

### Re: The Spread of a Metric Space

Figure 3 on page 7 suggests why spread might not be a bad name. The two spaces shown there have the same magnitude, simply because they're trees with the same number of vertices. (That's Theorem 4.) But the spread of the spread-out one is greater than that of the clustered-up one.
Posted by: Tom Leinster on September 6, 2012 4:16 PM | Permalink | Reply to this

### Re: The Spread of a Metric Space

A minor terminological note: in recent years, "metric measure space" has become a standard term for a metric space equipped with a canonical finite measure.

Posted by: Mark Meckes on September 6, 2012 3:49 PM | Permalink | Reply to this

### Re: The Spread of a Metric Space

Thanks Mark. Talking of terminological things, Tom said that you'd found the 'maximum diversity' in the literature under a somewhat different name. What was it called?

Posted by: Simon Willerton on September 6, 2012 9:31 PM | Permalink | Reply to this

### Re: The Spread of a Metric Space

Get ready for a mouthful. For a subset $A$ of $N$-dimensional Euclidean space, up to a factor of $N!$ times the volume of the unit ball, the maximum diversity of $A$ is the same as the "Bessel capacity of $A$ of order $(N+1)/2$ and index $2$". Or is that index $(N+1)/2$ and order $2$? In any case, it's sometimes denoted $B_{(N+1)/2,2}(A)$. As far as I can tell, it's not very well-studied as such — the main interest seems to be on capacities $B_{\alpha, p}$ for which $\alpha p \le N$. When $\alpha p$ is bigger than $N$, the capacity has some behaviors which seem to be pathological from the point of view of the usual applications of capacities.

Posted by: Mark Meckes on September 7, 2012 1:16 AM | Permalink | Reply to this

### Re: The Spread of a Metric Space

Hi, The idea of defining dimension via a notion of 'size' of a metric space is very interesting. I tried to calculate the spread dimension of a linear tree graph defined at the end of section 4.1 of your paper: $dim_0 = \frac{1}{E_0(1)} \frac{dE_0(1)}{dt}$. Intuitively, I would expect the dimension to be 1, but I am finding $dim_0$ seems to asymptote around 0.851 (I calculated up to $10^6$ points using the definition of $E_0$ for a linear tree graph via the summation in your paper and the equation above). Is there a reason that a tree graph would not have a dimensionality of 1 (even after $10^6$ points)?

Posted by: Bob on July 16, 2015 7:56 PM | Permalink | Reply to this

### Re: The Spread of a Metric Space

Hi Bob, you don't say what metric you've put on the linear tree graph. The dimension is very much scale dependent. A very long line interval should have dimension close to one, but a very short line should have dimension close to zero: intermediate length lines should have intermediate dimensions. What length was your linear graph?

Posted by: Simon Willerton on July 22, 2015 9:22 AM | Permalink | Reply to this

### Re: The Spread of a Metric Space

Hi Simon, Thanks for the reply. I agree that long intervals should approach dim = 1, and short intervals should approach dim = 0. I calculated dim$_0$ for up to $10^6$ points in mathematica using the summation formula from section 2: $E_0=\sum_{j=1}^{j=N} \frac{e^t-1}{1+e^t-e^{-t(j-1)}-e^{-t(N-j)}}$ and the equation for dim$_0$ in my post above. This is a plot for the dimension dim$_0$ vs number of points in my linear tree. file:///Users/Bob/Desktop/dim%20E0%20line.jpg Best, Bob

Posted by: Bob on July 25, 2015 6:28 AM | Permalink | Reply to this

### Re: The Spread of a Metric Space

Hi Simon, I'm not sure why the image is not loading. I can't see any of the svg image edit options on the preview screen.
I'll email the plot of dimension vs points on a linear tree. Basically it starts at 0, grows, but then asymptotes around 0.85. [Edit by Simon: Here is the plot]

Posted by: Bob on July 25, 2015 6:50 AM | Permalink | Reply to this

### Re: The Spread of a Metric Space

Hi Bob, Ah. You're doing something completely different to what I thought you were doing, and it's something that I'd not thought about. You're calculating the dimension of a large interval in the integers, i.e. $[1,N]\cap \mathbb{N}$, with the obvious metric, so $n$ is distance $d$ from $n+d$. For large $N$ this looks kind of linear, so it goes off in both directions (if you're looking at a typical point), but it's very definitely discrete, as there's a very visible distance between the points. This means that the dimension is neither zero nor one, but somewhere between. If we make the gap between the points smaller then the dimension gets closer to one. So with 8192 points and 0.1 units between adjacent points we get an approximate dimension of 0.99664085486.

It took me a while to code this up as I'm trying to wean myself off Maple and onto Sage. If anyone is interested in playing with the very basic code, it's available on SageMathCloud:

It would be interesting to see exactly what the limiting behaviour is, but at the moment it looks like you could argue that "The spread-dimension of the integers is about $0.85$."

Posted by: Simon Willerton on July 30, 2015 5:34 PM | Permalink | Reply to this

### Re: The Spread of a Metric Space

Hi Simon, I agree that it makes sense that you want the points to lie close together so that the figure approximates a line. The way I read your paper, the equation just before section 4.2 reads "$dim_0(X) := \frac{t}{E_0(tX)}\frac{dE_0(tX)}{dt}$ evaluated at $t=1$". Am I reading this wrong? If not, why are you allowed to use $t=0.1$?

Best, Bob

Posted by: Bob on July 31, 2015 4:29 PM | Permalink | Reply to this

### Re: The Spread of a Metric Space

The formula gives the dimension for a metric space $X$. If we scale the metric by a factor of $\tau$ to get the metric space $\tau X$, we can calculate its dimension with an application of the chain rule:

\begin{aligned} dim_0(\tau X) &\coloneqq \frac{t}{E_0(t(\tau X))}\frac{d E_0(t(\tau X))}{d t}\biggr |_{t=1}\\ &=\frac{t}{E_0((t\tau) X)}\frac{d E_0((t\tau) X)}{d t}\biggr |_{t=1}\\ &= \frac{t}{E_0(t X)}\frac{d E_0(t X)}{d t}\biggr |_{t=\tau}. \end{aligned}

So putting $t=0.1$ in the formula, we are calculating the dimension of the metric space associated to the linear graph but with a distance of $0.1$ between adjacent points.

Posted by: Simon Willerton on August 3, 2015 3:32 PM | Permalink | Reply to this

### Re: The Spread of a Metric Space

Hmm. We can work out the magnitude dimension of the integers in the same way. We have an exact formula for the magnitude of the linear graph with $N$ points with adjacent points $t$ units apart, namely $1+(N-1)\tanh(t/2)$. This means we can calculate the dimension of the linear graph with $N$ points, with adjacent points $1$ unit apart.
We get

$-\frac{{\left(\tanh\left(\frac{1}{2}\right)^{2} - 1\right)} {\left(N - 1\right)}}{2 \, {\left({\left(N - 1\right)} \tanh\left(\frac{1}{2}\right) + 1\right)}}$

Taking the limit as $N\to\infty$ gives

$-\frac{\tanh\left(\frac{1}{2}\right)^{2} - 1}{2 \, \tanh\left(\frac{1}{2}\right)}\, \simeq\, 0.850918.$

So that gives us an argument that "The magnitude dimension of the integers is about $0.85$."

Posted by: Simon Willerton on July 30, 2015 6:04 PM | Permalink | Reply to this

### Re: The Spread of a Metric Space

To be clear, it should be pointed out that what you're calling "magnitude dimension" here is different from what we've called "magnitude dimension" before. Here you're dealing with an "instantaneous" dimension, whereas earlier we've considered an "asymptotic" dimension, which is 0 for any discrete space.

Posted by: Mark Meckes on July 31, 2015 1:41 PM | Permalink | Reply to this

### Re: The Spread of a Metric Space

Your formula

$-\frac{\tanh\left(\frac{1}{2}\right)^{2} - 1}{2 \, \tanh\left(\frac{1}{2}\right)}$

simplifies to

$\frac{2}{e - e^{-1}} = \operatorname{cosech}(1).$

Posted by: Tom Leinster on July 31, 2015 1:43 PM | Permalink | Reply to this

### Re: The Spread of a Metric Space

Mark said:

what you're calling "magnitude dimension" here is different from what we've called "magnitude dimension" before

Yes, indeed. I was thinking that I was replying to my other post, in which it is perhaps clearer what I was referring to.

Posted by: Simon Willerton on July 31, 2015 3:12 PM | Permalink | Reply to this
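Since several people in the thread computed these numbers, here is a rough numerical sketch of the same calculation (my own illustration, not the SageMathCloud code mentioned above). It uses the definition of the spread of a finite metric space, $E_0(X)=\sum_i 1/\sum_j e^{-d(i,j)}$, and a central difference for the derivative:

```python
import numpy as np

def spread(D, t=1.0):
    # Spread of a finite metric space with distance matrix D, scaled by t:
    # E_0(tX) = sum_i 1 / (sum_j exp(-t * d(i, j))).
    return float(np.sum(1.0 / np.exp(-t * D).sum(axis=1)))

def spread_dimension(D, t=1.0, h=1e-5):
    # Instantaneous dimension (t / E_0(tX)) * dE_0(tX)/dt, approximated
    # with a central difference.
    dE = (spread(D, t + h) - spread(D, t - h)) / (2 * h)
    return t * dE / spread(D, t)

# Linear graph with N points, adjacent points 1 unit apart.
N = 2000
idx = np.arange(N)
D = np.abs(idx[:, None] - idx[None, :]).astype(float)

print(spread_dimension(D))         # tends towards ~0.85 as N grows
print(spread_dimension(D, t=0.1))  # spacing 0.1: much closer to 1
```

By the chain-rule identity above, evaluating at $t=0.1$ is the same as computing the dimension of the graph with spacing $0.1$ between adjacent points.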
2017-10-23 04:18:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 115, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.892583966255188, "perplexity": 540.7846909877679}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825575.93/warc/CC-MAIN-20171023035656-20171023055656-00166.warc.gz"}
https://chem.libretexts.org/Courses/University_of_California_Davis/UCD_Chem_002C/UCD_Chem_2C%3A_Larsen/Student_Academic_Success_Center_Workshops/Solving_for_the_Cell_Potential
# Solving for the Cell Potential

A method used by many professors and LSC specialists to calculate $E^o_{cell}$:

1. Write out the reduction half reaction and the oxidation half reaction, stacking one over the other.
2. Balance the half reactions and multiply each by any necessary numbers so that the number of electrons lost is equal to the number of electrons gained.
3. Add the two half reactions and their corresponding voltages: the standard reduction voltage for the reduction half reaction, and the voltage for the oxidation half reaction (the standard reduction potential of the species in the oxidation half reaction with its sign reversed).
4. The superscript zeros on the $E^o$ voltage symbols indicate standard conditions, i.e., all solutions at 1.0 M, all gases at 1.0 atm, and everything at 298 K.

Remember: When comparing two reduction half reactions, the one with the larger (more positive) reduction potential is the more easily reduced.

Example: The Logical Approach

From the Standard Reduction Table, the following values are obtained:

| Species Pair | Reduction Half Reaction | $E^o$ | $\Delta G^o = -nFE^o$ |
|---|---|---|---|
| $F_2(g)/F^-$ | $F_2(g) + 2e^- \rightarrow 2F^-(aq)$ | $2.87\ \text{V}$ | Spontaneous (as drawn) |
| $Li(s)/Li^+$ | $2Li^+(aq) + 2e^- \rightarrow 2Li(s)$ | $-3.04\ \text{V}$ | Not spontaneous (as drawn) |

The lithium half reaction has the lower voltage, so it will be the oxidation half reaction. Reverse it and change the sign of its potential. Thus,

$F_2(g) + 2e^- \rightarrow 2F^-(aq) \qquad E^o_{red} = 2.87\ \text{V}$

$2Li(s) \rightarrow 2Li^+(aq) + 2e^- \qquad E^o_{ox} = +3.04\ \text{V}$ (sign of the standard reduction voltage reversed)

Adding the half reactions and their voltages:

$F_2(g) + 2Li(s) \rightarrow 2F^-(aq) + 2Li^+(aq) \qquad E^o_{cell} = 5.91\ \text{V}$

The overall formula for the above process is

$E^o_{cell} = E^o_{red} + E^o_{ox}$

Example: The Alternative Memorization Approach

To solve for the standard cell voltage of an electrochemical cell, the book uses:

1. $E^o_{cell} = E^o(\text{cathode}) - E^o(\text{anode})$

Equivalent forms of this equation are

$E^o_{cell} = E^o(\text{right}) - E^o(\text{left}) = E^o(\text{reduction half cell}) - E^o(\text{oxidation half cell})$

For all of the formulas above, standard REDUCTION values are plugged into the proper positions without your changing the sign: the negative sign in the formula changes the sign of $E^o(\text{anode})$ (or $E^o(\text{left})$) for you. So remember, the text's formulas (any formula in number 1 above) require that you use standard REDUCTION values for BOTH half cells in a cell; the negative sign in the formula reverses the reduction value of the species in the oxidation half reaction so that it really contributes an oxidation half-cell voltage. Thus,

$E^o_{cell} = E^o_{cathode} - E^o_{anode} = 2.87\ \text{V} - (-3.04\ \text{V}) = 5.91\ \text{V}$

By contrast, for the equation $E^o_{cell} = E^o_{red} + E^o_{ox}$, you change the sign of the standard reduction value from the table yourself (it becomes $E^o_{ox}$), and then use this value in the equation.
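The arithmetic of both approaches is easy to check in a few lines of code. This sketch is not part of the workshop material; it just encodes the two sign conventions described above and confirms they agree for the F2/Li cell:

```python
def cell_potential_red_plus_ox(e_red_cathode, e_red_anode):
    # E_cell = E_red + E_ox: flip the sign of the anode's *reduction*
    # potential to turn it into an oxidation potential, then add.
    return e_red_cathode + (-e_red_anode)

def cell_potential_cathode_minus_anode(e_red_cathode, e_red_anode):
    # E_cell = E(cathode) - E(anode): both values stay as tabulated
    # reduction potentials; the minus sign does the flipping for you.
    return e_red_cathode - e_red_anode

E_F2, E_Li = 2.87, -3.04  # standard reduction potentials in volts
print(cell_potential_red_plus_ox(E_F2, E_Li))          # 5.91
print(cell_potential_cathode_minus_anode(E_F2, E_Li))  # 5.91
```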
2019-06-26 22:47:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9008437395095825, "perplexity": 2000.7542221443573}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000575.75/warc/CC-MAIN-20190626214837-20190627000837-00495.warc.gz"}
https://socratic.org/questions/lena-picked-apples-at-the-orchard-she-picked-5-red-10-graen-and-13-yellow-what-i
# Lena picked apples at the orchard. She picked 5 red, 10 green and 13 yellow. What is the ratio of green apples to yellow?

There are 10 green apples and 13 yellow apples, so

$\text{green} : \text{yellow} = 10 : 13$

Since 10 and 13 share no common factor, the ratio is already in lowest terms.
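If you want to check the lowest-terms reduction mechanically, a trivial sketch (not part of the original answer):

```python
from math import gcd

green, yellow = 10, 13
g = gcd(green, yellow)  # gcd(10, 13) = 1, so the ratio cannot be reduced
print(f"{green // g}:{yellow // g}")  # -> 10:13
```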
2020-09-27 09:31:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 1, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3785507082939148, "perplexity": 3303.8922781035253}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400274441.60/warc/CC-MAIN-20200927085848-20200927115848-00479.warc.gz"}
https://www.ques10.com/p/47358/explain-collector-to-base-ckt-with-its-stability-f/
Explain the collector-to-base bias circuit with its stability factor.

• The collector-to-base bias circuit is an improvement over the fixed bias circuit.
• The base resistance $R_B$ is now connected to the collector instead of to $V_{CC}$.
• The current flowing through $R_C$ is the sum of $I_C$ and $I_B$.
• The resistor $R_B$ acts as a feedback element from the output terminal (collector) to the input terminal (base).

Analysis of the collector-to-base circuit:

Apply KVL to the input path:

$V_{CC} - (I_C + I_B) R_C - I_B R_B - V_{BE} = 0$

$V_{CC} = (R_B + R_C) I_B + I_C R_C + V_{BE}$

Substituting $I_C = \beta_{dc} I_B$,

$\therefore \quad I_B = \frac{V_{CC} - V_{BE}}{R_B + R_C (1 + \beta_{dc})}$

$I_C = \beta_{dc} I_B$

Apply KVL to the output path:

$V_{CC} - (I_C + I_B) R_C - V_{CE} = 0$

$V_{CE} = V_{CC} - (I_C + I_B) R_C$

# Stability factor:

Due to changes in $\beta_{dc}$, $I_{CO}$ and $V_{BE}$ with temperature, the Q-point of the transistor gets affected. Hence we need to stabilize the Q-point.

$S = \frac{\Delta I_C}{\Delta I_{CO}}$

$S = \frac{1 + \beta_{dc}}{1 - \beta_{dc} \left[ \Delta I_B / \Delta I_C \right]}$

Substituting the value of $\Delta I_B / \Delta I_C$ in the above equation gives the final expression for $S$.

• To obtain the value of $\Delta I_B / \Delta I_C$, apply KVL to the input path:

$V_{CC} = R_C (I_C + I_B) + I_B R_B + V_{BE}$

$\therefore \quad V_{CC} = I_C R_C + I_B (R_B + R_C) + V_{BE} \quad \rightarrow (1)$

To find the stability factor $S$, we take into account the change in $I_C$ due to the change in $I_{CBO}$; the other two parameters, $V_{BE}$ and $\beta_{dc}$, are assumed constant. Since $V_{CC}$ and $V_{BE}$ are constant, writing equation (1) for the increments $\Delta I_C$ and $\Delta I_B$ and subtracting gives

$0 = \Delta I_C R_C + \Delta I_B (R_B + R_C) \quad \rightarrow (2)$

Dividing equation (2) by $\Delta I_C$, we get

$0 = R_C + \frac{\Delta I_B}{\Delta I_C} (R_B + R_C)$

$\frac{\Delta I_B}{\Delta I_C} (R_B + R_C) = -R_C$

$\frac{\Delta I_B}{\Delta I_C} = \frac{-R_C}{R_B + R_C} \quad \rightarrow (3)$

Substituting (3):

$S = \frac{1 + \beta_{dc}}{1 - \beta_{dc} \left[ \frac{-R_C}{R_B + R_C} \right]}$

$S = \frac{1 + \beta_{dc}}{1 + \beta_{dc} \left( \frac{R_C}{R_B + R_C} \right)}$

As compared to the fixed bias circuit, the collector-to-base bias circuit has a much smaller value of $S$. This indicates that the Q-point stability is better for the collector-to-base bias circuit.
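To make the formulas concrete, here is a small numerical sketch. It is not from the original notes, and the component values are made up for illustration:

```python
# Bias point and stability factor of a collector-to-base bias circuit,
# using the formulas derived above.
def collector_to_base_bias(vcc, v_be, r_b, r_c, beta):
    i_b = (vcc - v_be) / (r_b + r_c * (1 + beta))    # base current
    i_c = beta * i_b                                 # collector current
    v_ce = vcc - (i_c + i_b) * r_c                   # collector-emitter voltage
    s = (1 + beta) / (1 + beta * r_c / (r_b + r_c))  # stability factor
    return i_b, i_c, v_ce, s

# Illustrative values: Vcc = 12 V, VBE = 0.7 V, RB = 100 kOhm, RC = 2.2 kOhm.
i_b, i_c, v_ce, s = collector_to_base_bias(12.0, 0.7, 100e3, 2.2e3, beta=100)
print(f"IB = {i_b * 1e6:.1f} uA, IC = {i_c * 1e3:.2f} mA, "
      f"VCE = {v_ce:.2f} V, S = {s:.1f}")
# S comes out near 32 here, far below the fixed-bias value 1 + beta = 101.
```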
2023-02-01 12:39:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7201640605926514, "perplexity": 2619.174679875315}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499934.48/warc/CC-MAIN-20230201112816-20230201142816-00393.warc.gz"}
https://www.freemathhelp.com/forum/threads/please-help-solving-this-problem.129280/
#### bagofchips123
##### New member
Hi, help solving this equation would be great. I've proven that AC is an inverse matrix; I'm just not sure how it relates to C being the inverse matrix of A.

#### Attachments
(two image attachments showing the problem and the worked matrix multiplication)

#### AmandasMathHelp
##### New member
Nice job multiplying those matrices. But you did not prove that AC is "an inverse matrix". That doesn't make sense, because what matrix would AC be the inverse of? Notice it has only 1s on the diagonal, which makes it an IDENTITY matrix. The definition of the inverse matrix is a matrix that turns another matrix into an identity matrix (when you multiply them). Thus, by showing that A*C = an identity matrix, you proved that C is the inverse matrix of A. (C turns A into an identity matrix.)

This is similar to the idea of a "multiplicative inverse", which turns a number into 1 when you multiply. For example, the "multiplicative inverse" of 2 is 1/2 because 2 * (1/2) = 1.

#### Harry_the_cat
##### Elite Member
If AC = I (the identity matrix), which it does, then C is the inverse of A.

#### pka
##### Elite Member
> Hi, help solving this equation would be great. I've proven that AC is an inverse matrix; I'm just not sure how it relates to C being the inverse matrix of A.

(b) $$X=C\cdot B$$ How and/or why?

#### bagofchips123
##### New member
Sorry, not too sure what you mean by that.

#### Subhotosh Khan
##### Super Moderator
Staff member
> Hi, help solving this equation would be great. I've proven that AC is an inverse matrix; I'm just not sure how it relates to C being the inverse matrix of A.

{X} = [C]{B}

Continue......
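For anyone wanting to verify the same logic numerically, here is a small numpy sketch; the matrices are made up, since the originals are in the attachments:

```python
import numpy as np

# A stand-in for the problem's matrix A; any invertible matrix works.
A = np.array([[2.0, 1.0],
              [5.0, 3.0]])
C = np.linalg.inv(A)  # plays the role of C in the problem

# A @ C gives the identity matrix, which is exactly what
# "C is the inverse of A" means.
assert np.allclose(A @ C, np.eye(2))

# Part (b): multiplying both sides of A X = B on the left by C gives
# X = C B, because C A = I.
B = np.array([4.0, 7.0])
X = C @ B
assert np.allclose(A @ X, B)
print(X)
```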
2021-06-12 17:05:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7601931095123291, "perplexity": 946.5830619087362}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487586239.2/warc/CC-MAIN-20210612162957-20210612192957-00301.warc.gz"}
https://socratic.org/questions/what-is-the-simple-subject-and-simple-predicate-in-this-sentence
# What is the simple subject and simple predicate in this sentence?

## Two days of rain and fog can make a whole week dreary and disagreeable. My answer was: s.s. rain/fog; s.p. dreary/disagreeable. Is that right?

May 11, 2018

"Days" is the simple subject. "Can make" is the simple predicate.

#### Explanation:

First, let's find the simple predicate. What is the action in this sentence? The action is the verb phrase "can make":

Two days of rain and fog **can make** a whole week dreary and disagreeable.

Now comes the tricky part. What is doing the action of the sentence? At first glance, it looks like it should be "rain" and "fog". Rain and fog make a week dreary, right? That would be right, but they are actually part of the prepositional phrase "of rain and fog", so they cannot be the simple subjects. They are modifying the real subject, "days". Days can make a week disagreeable.

Putting it together, with the simple subject and simple predicate marked:

Two **days** [simple subject] of rain and fog **can make** [simple predicate] a whole week dreary and disagreeable.
2021-09-26 03:32:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 7, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28564247488975525, "perplexity": 2808.876661468907}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057796.87/warc/CC-MAIN-20210926022920-20210926052920-00546.warc.gz"}
https://docs.pylonsproject.org/projects/pyramid/en/latest/tutorials/wiki2/authorization.html
In the last chapter we built authentication into our wiki. We also went one step further and used the request.user object to perform some explicit authorization checks. This is fine for a lot of applications, but Pyramid provides some facilities for cleaning this up and decoupling the constraints from the view function itself.

We will implement access control with the following steps:

• Update the authentication policy to break down the userid into a list of principals (security.py).
• Define an authorization policy for mapping users, resources and permissions (security.py).
• Add new resource definitions that will be used as the context for the wiki pages (routes.py).
• Add an ACL to each resource (routes.py).
• Replace the inline checks on the views with permission declarations (views/default.py).

A principal is a level of abstraction on top of the raw userid that describes the user in terms of its capabilities, roles, or other identifiers that are easier to generalize. The permissions are then written against the principals without focusing on the exact user involved.

Pyramid defines two built-in principals used in every application: pyramid.security.Everyone and pyramid.security.Authenticated. On top of these, we have already mentioned the required principals for this application in the original design. The user has two possible roles: editor or basic. These will be prefixed by the string role: to avoid clashing with any other types of principals.

Open the file tutorial/security.py and edit it as follows:

```python
from pyramid.authentication import AuthTktAuthenticationPolicy
from pyramid.authorization import ACLAuthorizationPolicy
from pyramid.security import (
    Authenticated,
    Everyone,
)

from . import models


class MyAuthenticationPolicy(AuthTktAuthenticationPolicy):
    def authenticated_userid(self, request):
        user = request.user
        if user is not None:
            return user.id

    def effective_principals(self, request):
        principals = [Everyone]
        user = request.user
        if user is not None:
            principals.append(Authenticated)
            principals.append(str(user.id))
            principals.append('role:' + user.role)
        return principals

def get_user(request):
    user_id = request.unauthenticated_userid
    if user_id is not None:
        user = request.dbsession.query(models.User).get(user_id)
        return user

def includeme(config):
    settings = config.get_settings()
    authn_policy = MyAuthenticationPolicy(
        settings['auth.secret'],
        hashalg='sha512',
    )
    config.set_authentication_policy(authn_policy)
    config.set_authorization_policy(ACLAuthorizationPolicy())
    config.add_request_method(get_user, 'user', reify=True)
```

Only the highlighted lines need to be added (here, the imports from pyramid.security and the effective_principals method). Note that the role comes from the User object. We also add the user.id as a principal for when we want to allow that exact user to edit pages which they have created.

We already added the authorization policy in the previous chapter because Pyramid requires one when adding an authentication policy. However, it was not used anywhere, so we'll mention it now. In the file tutorial/security.py, notice the following lines:

```python
config.set_authentication_policy(authn_policy)
config.set_authorization_policy(ACLAuthorizationPolicy())
config.add_request_method(get_user, 'user', reify=True)
```

We're using the pyramid.authorization.ACLAuthorizationPolicy, which will suffice for most applications. It uses the context to define the mapping between a principal and permission for the current request via the __acl__.
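To see what this policy actually does, here is a small standalone sketch. It is not part of the tutorial's code, and the context class and principal values are made up for illustration:

```python
from pyramid.authorization import ACLAuthorizationPolicy
from pyramid.security import Allow, Authenticated, Everyone

class DummyContext:
    # A stand-in resource; in the tutorial, contexts come from the
    # route factories defined in routes.py below.
    __acl__ = [(Allow, 'role:editor', 'edit')]

policy = ACLAuthorizationPolicy()

# Per effective_principals() above, a logged-in editor with id 7 would
# carry these principals.
editor = [Everyone, Authenticated, '7', 'role:editor']
print(policy.permits(DummyContext(), editor, 'edit'))      # ACLAllowed

# An anonymous request carries only Everyone and is denied.
print(policy.permits(DummyContext(), [Everyone], 'edit'))  # ACLDenied
```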
Resources are the hidden gem of Pyramid. You've made it! Every URL in a web application represents a resource (the "R" in Uniform Resource Locator). Often the resource is something in your data model, but it could also be an abstraction over many models.

Our wiki has two resources:

1. A NewPage. Represents a potential Page that does not exist. Any logged-in user, having either role of basic or editor, can create pages.
2. A PageResource. Represents a Page that is to be viewed or edited. editor users, as well as the original creator of the Page, may edit the PageResource. Anyone may view it.

Note: The wiki data model is simple enough that the PageResource is mostly redundant with our models.Page SQLAlchemy class. It is completely valid to combine these into one class. However, for this tutorial, they are explicitly separated to make clear the distinction between the parts about which Pyramid cares versus application-defined objects.

There are many ways to define these resources, and they can even be grouped into collections with a hierarchy. However, we're keeping it simple here!

Open the file tutorial/routes.py and edit the following lines:

```python
from pyramid.httpexceptions import (
    HTTPNotFound,
    HTTPFound,
)
from pyramid.security import (
    Allow,
    Everyone,
)

from . import models


def includeme(config):
    config.add_static_view('static', 'static', cache_max_age=3600)
    config.add_route('view_wiki', '/')
    config.add_route('login', '/login')
    config.add_route('logout', '/logout')
    config.add_route('view_page', '/{pagename}',
                     factory=page_factory)
    config.add_route('add_page', '/add_page/{pagename}',
                     factory=new_page_factory)
    config.add_route('edit_page', '/{pagename}/edit_page',
                     factory=page_factory)

def new_page_factory(request):
    pagename = request.matchdict['pagename']
    if request.dbsession.query(models.Page).filter_by(name=pagename).count() > 0:
        next_url = request.route_url('edit_page', pagename=pagename)
        raise HTTPFound(location=next_url)
    return NewPage(pagename)

class NewPage(object):
    def __init__(self, pagename):
        self.pagename = pagename

    def __acl__(self):
        return [
            (Allow, 'role:editor', 'create'),
            (Allow, 'role:basic', 'create'),
        ]

def page_factory(request):
    pagename = request.matchdict['pagename']
    page = request.dbsession.query(models.Page).filter_by(name=pagename).first()
    if page is None:
        raise HTTPNotFound
    return PageResource(page)

class PageResource(object):
    def __init__(self, page):
        self.page = page

    def __acl__(self):
        return [
            (Allow, Everyone, 'view'),
            (Allow, 'role:editor', 'edit'),
            (Allow, str(self.page.creator_id), 'edit'),
        ]
```

The highlighted lines need to be edited or added.

The NewPage class has an __acl__ on it that returns a list of mappings from principal to permission. This defines who can do what with that resource. In our case we want to allow only those users with the principals of either role:editor or role:basic to have the create permission:

```python
class NewPage(object):
    def __init__(self, pagename):
        self.pagename = pagename

    def __acl__(self):
        return [
            (Allow, 'role:editor', 'create'),
            (Allow, 'role:basic', 'create'),
        ]
```

The NewPage is loaded as the context of the add_page route by declaring a factory on the route:

```python
config.add_route('add_page', '/add_page/{pagename}',
                 factory=new_page_factory)
```

The PageResource class defines the ACL for a Page.
It uses an actual Page object to determine who can do what to the page.

```python
class PageResource(object):
    def __init__(self, page):
        self.page = page

    def __acl__(self):
        return [
            (Allow, Everyone, 'view'),
            (Allow, 'role:editor', 'edit'),
            (Allow, str(self.page.creator_id), 'edit'),
        ]
```

The PageResource is loaded as the context of the view_page and edit_page routes by declaring a factory on the routes:

```python
config.add_route('view_page', '/{pagename}',
                 factory=page_factory)
config.add_route('add_page', '/add_page/{pagename}',
                 factory=new_page_factory)
config.add_route('edit_page', '/{pagename}/edit_page',
                 factory=page_factory)
```

At this point we've modified our application to load the PageResource, including the actual Page model, in the page_factory. The PageResource is now the context for all view_page and edit_page views. Similarly, the NewPage will be the context for the add_page view.

Open the file tutorial/views/default.py.

First, you can drop a few imports that are no longer necessary, leaving:

```python
from pyramid.httpexceptions import HTTPFound
from pyramid.view import view_config
```

Edit the view_page view to declare the view permission, and remove the explicit checks within the view:

```python
@view_config(route_name='view_page', renderer='../templates/view.jinja2',
             permission='view')
def view_page(request):
    page = request.context.page

    def add_link(match):
```

The work of loading the page has already been done in the factory, so we can just pull the page object out of the PageResource, loaded as request.context. Our factory also guarantees we will have a Page, as it raises the HTTPNotFound exception if no Page exists, again simplifying the view logic.

Edit the edit_page view to declare the edit permission:

```python
@view_config(route_name='edit_page', renderer='../templates/edit.jinja2',
             permission='edit')
def edit_page(request):
    page = request.context.page
    if 'form.submitted' in request.params:
```

Edit the add_page view to declare the create permission:

```python
@view_config(route_name='add_page', renderer='../templates/edit.jinja2',
             permission='create')
def add_page(request):
    pagename = request.context.pagename
    if 'form.submitted' in request.params:
```

Note the pagename here is pulled off of the context instead of request.matchdict. The factory has done a lot of work for us to hide the actual route pattern.

The ACLs defined on each resource are used by the authorization policy to determine if any principal is allowed to have some permission. If this check fails (for example, the user is not logged in) then an HTTPForbidden exception will be raised automatically. Thus we're able to drop those exceptions and checks from the views themselves. Rather, we've defined them in terms of operations on a resource.

The final tutorial/views/default.py should look like the following:

```python
from pyramid.compat import escape
import re

from docutils.core import publish_parts

from pyramid.httpexceptions import HTTPFound
from pyramid.view import view_config

from .. import models

# regular expression used to find WikiWords
wikiwords = re.compile(r"\b([A-Z]\w+[A-Z]+\w+)")

@view_config(route_name='view_wiki')
def view_wiki(request):
    next_url = request.route_url('view_page', pagename='FrontPage')
    return HTTPFound(location=next_url)

@view_config(route_name='view_page', renderer='../templates/view.jinja2',
             permission='view')
def view_page(request):
    page = request.context.page

    def add_link(match):
        word = match.group(1)
        exists = request.dbsession.query(models.Page).filter_by(name=word).all()
        if exists:
            view_url = request.route_url('view_page', pagename=word)
            return '<a href="%s">%s</a>' % (view_url, escape(word))
        else:
            add_url = request.route_url('add_page', pagename=word)
            return '<a href="%s">%s</a>' % (add_url, escape(word))

    content = publish_parts(page.data, writer_name='html')['html_body']
    content = wikiwords.sub(add_link, content)
    edit_url = request.route_url('edit_page', pagename=page.name)
    return dict(page=page, content=content, edit_url=edit_url)

@view_config(route_name='edit_page', renderer='../templates/edit.jinja2',
             permission='edit')
def edit_page(request):
    page = request.context.page
    if 'form.submitted' in request.params:
        page.data = request.params['body']
        next_url = request.route_url('view_page', pagename=page.name)
        return HTTPFound(location=next_url)
    return dict(
        pagename=page.name,
        pagedata=page.data,
        save_url=request.route_url('edit_page', pagename=page.name),
    )

@view_config(route_name='add_page', renderer='../templates/edit.jinja2',
             permission='create')
def add_page(request):
    pagename = request.context.pagename
    if 'form.submitted' in request.params:
        body = request.params['body']
        page = models.Page(name=pagename, data=body)
        page.creator = request.user
        request.dbsession.add(page)
        next_url = request.route_url('view_page', pagename=pagename)
        return HTTPFound(location=next_url)
    save_url = request.route_url('add_page', pagename=pagename)
    return dict(pagename=pagename, pagedata='', save_url=save_url)
```

## Viewing the application in a browser

We can finally examine our application in a browser (see "Start the application"). Launch a browser and visit each of the following URLs, checking that the result is as expected:

• http://localhost:6543/ invokes the view_wiki view. This always redirects to the view_page view of the FrontPage page object. It is executable by any user.
• http://localhost:6543/FrontPage invokes the view_page view of the FrontPage page object. There is a "Login" link in the upper right corner while the user is not authenticated, else it is a "Logout" link when the user is authenticated.
• http://localhost:6543/FrontPage/edit_page invokes the edit_page view for the FrontPage page object. It is executable by only the editor user. If a different user (or the anonymous user) invokes it, then a login form will be displayed. Supplying the credentials with the username editor and password editor will display the edit page form.
• http://localhost:6543/add_page/SomePageName invokes the add_page view for a page. If the page already exists, then it redirects the user to the edit_page view for the page object. It is executable by either the editor or basic user. If a different user (or the anonymous user) invokes it, then a login form will be displayed. Supplying the credentials with either the username editor and password editor, or username basic and password basic, will display the edit page form.
• http://localhost:6543/SomePageName/edit_page invokes the edit_page view for an existing page, or generates an error if the page does not exist.
It is editable by the basic user if the page was created by that user in the previous step. If, instead, the page was created by the editor user, then the login page should be shown for the basic user.
• After logging in (as a result of hitting an edit or add page and submitting the login form with the editor credentials), we'll see a "Logout" link in the upper right hand corner. When we click it, we're logged out, redirected back to the front page, and a "Login" link is shown in the upper right hand corner.
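If you prefer to automate these checks, a rough functional-test sketch along the following lines could work. This is not part of the tutorial, and the ini filename and the exact response (redirect versus forbidden page) depend on how the forbidden view from the previous chapter was wired up:

```python
from pyramid.paster import get_app
from webtest import TestApp

app = TestApp(get_app('development.ini'))

# 'view' is granted to Everyone, so an anonymous request succeeds.
app.get('/FrontPage', status=200)

# 'edit' requires the editor role (or the page's creator), so an
# anonymous request should not see the edit form; typically it is
# redirected to the login page instead.
res = app.get('/FrontPage/edit_page', expect_errors=True)
assert res.status_code in (302, 403)
```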
2019-01-20 14:00:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19630567729473114, "perplexity": 4042.3078857049704}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583716358.66/warc/CC-MAIN-20190120123138-20190120145138-00585.warc.gz"}