| column | type | min length | max length |
|---|---|---|---|
| url | stringlengths | 14 | 2.42k |
| text | stringlengths | 100 | 1.02M |
| date | stringlengths | 19 | 19 |
| metadata | stringlengths | 1.06k | 1.1k |
http://planetmath.org/SquareRootOf5
# square root of 5

The square root of 5 is an irrational number involved in the formula for the golden ratio. It is also used in statistics when dealing with 5-business-day weeks. Its decimal expansion begins 2.2360679774997896964... (see sequence A002163 in Sloane's OEIS, http://www.research.att.com/~njas/sequences/A002163). Its simple continued fraction is [2; 4, 4, 4, 4, ...].

One formula for the square root of 5 involves some of the same numbers as Euler's identity (but with a 2 instead of the 1): $\sqrt{5}=e^{i\pi}+2\phi$, where $\phi$ is the golden ratio.

The square root of 5 modulo a prime number is employed in some ECM (elliptic curve method) algorithms. A rectangle with unit height and width $\sqrt{5}$ can be split into two golden rectangles of the same size and a square, or into two golden rectangles of different sizes.

The conjecture stating "that any abelian surface with RM (real multiplication) by $\mathbb{Q}(\sqrt{5})$ is isogenous to a simple factor of the Jacobian of a modular curve $X_{0}(N)$ for some $N$" still stands. John Wilson has produced equations for curves of genus 2 with Jacobians having the specified RM.

## References

1. François Morain. Primality Proving Using Elliptic Curves: An Update. Springer: Berlin (2004).
2. Robert Nemiroff and Jerry Bonnell. A million digits of sqrt(5) at Project Gutenberg: http://www.gutenberg.org/dirs/etext96/5sqrt10.txt
3. Clifford Pickover. Wonders of Numbers. Oxford: Oxford University Press (2001), p. 106.
4. John Wilson. "Curves of genus 2 with real multiplication by a square root of 5", p. i. Dissertation, Oxford University, Oxford (1998): http://eprints.maths.ox.ac.uk/32/01/wilson.pdf

Title: square root of 5 (SquareRootOf5) | Last modified: 2013-03-22 17:28:12 | Owner: MathNerd (17818) | Type: Definition | MSC: 11A25
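A quick numerical check of the golden-ratio formula and the continued fraction above (a sketch using only Python's standard library; not part of the original entry):

```python
import cmath
from math import sqrt, floor

phi = (1 + sqrt(5)) / 2  # the golden ratio
# e^{i*pi} + 2*phi = -1 + (1 + sqrt(5)) = sqrt(5)
val = cmath.exp(1j * cmath.pi) + 2 * phi
print(abs(val.real - sqrt(5)) < 1e-12, abs(val.imag) < 1e-12)  # True True

# First terms of the simple continued fraction of sqrt(5)
x, terms = sqrt(5), []
for _ in range(8):
    a = floor(x)
    terms.append(a)
    x = 1 / (x - a)
print(terms)  # [2, 4, 4, 4, 4, 4, 4, 4]
```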
2018-03-20 04:21:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 5, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.672029972076416, "perplexity": 839.2280322912563}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647280.40/warc/CC-MAIN-20180320033158-20180320053158-00343.warc.gz"}
http://math.stackexchange.com/questions/596864/solving-trigonometry-identities-by-simplifying-terms
# Solving trigonometry identities by simplifying terms

Verify the identity by simplifying the left side: $\sin^2x-\sin^2y=\cos^2y-\cos^2x$

## migrated from mathematica.stackexchange.com Dec 7 '13 at 14:14

This question came from our site for users of Mathematica.

p.s. trigonometric identity: this is the way, but I wouldn't call it "simplifying". – Kuba Dec 7 '13 at 14:00

Add $(\sin (y))^2+(\cos(x))^2$ to both sides of the equality. – Git Gud Dec 7 '13 at 14:15

We have $$\sin^2x+\cos^2x=\sin^2y+\cos^2y,$$ as both sides are equal to $1$. Now move $\sin^2y$ and $\cos^2x$ to the opposite sides. – lab bhattacharjee Dec 7 '13 at 15:16

This can be verified by using $\cos^2(x)=1-\sin^2(x)$ and $\cos^2(y)=1-\sin^2(y)$:

$$\cos^2 y-\cos^2 x=1-\sin^2 y-(1-\sin^2 x)=\sin^2 x-\sin^2 y,$$

because $$\cos^2 x=1-\sin^2 x$$ and $$\cos^2 y=1-\sin^2 y.$$
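A quick symbolic check of the identity (my sketch, not part of the original thread; uses sympy):

```python
import sympy as sp

x, y = sp.symbols('x y')
lhs = sp.sin(x)**2 - sp.sin(y)**2
rhs = sp.cos(y)**2 - sp.cos(x)**2
print(sp.simplify(lhs - rhs))  # 0, so the two sides agree for all x, y
```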
2015-01-30 20:40:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8873092532157898, "perplexity": 2157.744983724997}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115861872.41/warc/CC-MAIN-20150124161101-00210-ip-10-180-212-252.ec2.internal.warc.gz"}
http://www.msri.org/workshops/654/schedules/17304
# Convergence of Riemannian manifolds and Metric Measure Spaces

## Connections for Women on Optimal Transport: Geometry and Dynamics (August 22, 2013 - August 23, 2013)

August 22, 2013 (09:30 AM PDT - 10:30 AM PDT)
Speaker(s): Christina Sormani (CUNY, Graduate Center)
Location: MSRI: Simons Auditorium

## Abstract

Recent advances in Geometric Analysis and in Optimal Transport have provided new insight into both fields. In Geometric Analysis we study the limits of sequences of Riemannian manifolds and produce limit spaces with a variety of structures. With lower bounds on Ricci curvature, Cheeger-Colding combined Gromov's Compactness Theorem and ideas of Fukaya to show that sequences of Riemannian manifolds with uniform lower Ricci curvature bounds have metric measure limits. They show these limits are metric measure spaces with a doubling measure that has many of the properties of a Riemannian manifold with a lower Ricci curvature bound. Sturm and Lott-Villani then generalized the notion of a lower Ricci curvature bound to metric measure spaces using notions from Optimal Transport. Sturm also defined a new notion of metric measure convergence of metric measure spaces based upon the Optimal Transport notion of a Wasserstein distance between probability measures.

Other notions of convergence provide more structure on the limit spaces and do not require the lower Ricci curvature bound. One notion, with applications in General Relativity, is the intrinsic flat distance, introduced in joint work with Stefan Wenger (building upon work of Ambrosio-Kirchheim), which produces integer-weighted countably $\mathcal{H}^m$-rectifiable limit spaces. A newer notion, soon to be introduced in joint work with Guofang Wei (building upon work of Solorzano and on Sturm's metric measure convergence), preserves the structure of the tangent bundle. We provide a brief survey of these notions with examples. The details in the papers can be understood after the audience has attended the introductory workshop next week.
2017-08-23 06:22:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6666148900985718, "perplexity": 529.6504748464223}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886117874.26/warc/CC-MAIN-20170823055231-20170823075231-00333.warc.gz"}
https://brilliant.org/problems/daniels-integer-solutions/
# Daniel's integer solutions

Find the number of ordered quadruples of positive integers $(x,y,p,q)$ satisfying $x^3y-xy^3=pq,$ where $p$ and $q$ are prime numbers.

This problem is posed by Daniel C.
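Not part of the problem page, but a brute-force sketch one could use to explore it; it relies on the factorization $x^3y-xy^3=xy(x-y)(x+y)$, uses sympy, and the search bound of 40 is an arbitrary assumption:

```python
from sympy import factorint

count = 0
for x in range(2, 40):
    for y in range(1, x):          # x > y is forced, since pq > 0
        n = x**3 * y - x * y**3    # = x*y*(x-y)*(x+y)
        f = factorint(n)           # prime factorization as {prime: exponent}
        if sum(f.values()) == 2:   # n is a product of exactly two primes
            count += 2 if len(f) == 2 else 1  # ordered (p, q) pairs
print(count)  # ordered quadruples (x, y, p, q) found in the search range
```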
2017-07-22 16:58:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4976808428764343, "perplexity": 3770.2213499772943}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424088.27/warc/CC-MAIN-20170722162708-20170722182708-00214.warc.gz"}
http://wordpress.mrreid.org/tag/space/
# How to Look at the Back of Your Head Using a Black Hole

A black hole is created when matter is compressed into such a small volume that the gravitational field it creates becomes so strong that light cannot escape. The size that an object needs to be compressed to is called the Schwarzschild radius, and is given by a simple equation:

$r_S = \frac{2GM}{c^2}$

where $r_S$ is the Schwarzschild radius, $G$ is the constant of universal gravitation, $M$ is the mass of the object, and $c$ is the speed of light. There is no lower limit to the mass $M$ involved: even the Earth could become a black hole if it were compressed into a ball less than eighteen millimetres across.

A black hole has no physical size (it is a singularity), but the "edge" of a black hole is usually taken to be its event horizon: the point beyond which even light cannot escape. The distance of the event horizon from the "centre" of a black hole is equal to the black hole's Schwarzschild radius. If some familiar objects from our solar system were to become black holes, their event horizons would be as follows:

| Object | Event horizon |
|---|---|
| Earth | 8.87 mm |
| Saturn | 84.4 cm |
| Jupiter | 2.82 m |
| Sun | 2.95 km |

Objects will orbit a black hole just as they would orbit any other object with mass. If our Sun were to spontaneously become a black hole, the orbits of the planets in the solar system would be unaffected.

The closer an orbiting object is to the object it is orbiting, the faster it has to be travelling. That sentence is a bit difficult to understand, so I'll explain it with an example: moving the Earth closer and closer to the Sun.

| Distance Sun to Earth /AU | Required orbital speed /(km/s) |
|---|---|
| 1.00 | 29.8 |
| 0.75 | 34.4 |
| 0.50 | 42.2 |
| 0.25 | 59.6 |

Eventually we get so close to the Sun that the speed required to remain in orbit becomes equal to the speed of light. This means that only photons (which travel at the speed of light) could orbit, and in fact, would orbit. Any photons that bounced off the back of your head would travel in a circular orbit around the black hole and end up at your eyes – you'd be able to look directly at the back of your head without using mirrors! This effect is called a photon sphere, and can definitely exist around black holes.* It is also possible that a neutron star could be so compact that a photon sphere could exist, but this has never been observed.

* Around a rotating black hole there would be two photon spheres due to frame dragging: one would be closer, with the direction of orbit being the same as the direction of rotation, and one would be further away, with the direction of orbit being opposite to the direction of rotation.
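Both tables above can be reproduced with a few lines of Python (my sketch; the physical constants are approximate reference values I'm supplying, not values from the post):

```python
from math import sqrt

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8       # speed of light, m/s
M_sun = 1.989e30  # kg

def schwarzschild_radius(mass_kg):
    """r_S = 2 G M / c**2"""
    return 2 * G * mass_kg / c**2

for name, mass in [("Earth", 5.972e24), ("Saturn", 5.683e26),
                   ("Jupiter", 1.898e27), ("Sun", M_sun)]:
    print(f"{name}: {schwarzschild_radius(mass):.3g} m")

# Newtonian circular orbital speed v = sqrt(G M / r) around the Sun
AU = 1.496e11  # m
for d in (1.00, 0.75, 0.50, 0.25):
    print(f"{d:.2f} AU: {sqrt(G * M_sun / (d * AU)) / 1e3:.1f} km/s")
```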
# Hohmann Transfers

The Hohmann transfer is an orbital manoeuvre used to transfer a satellite between two different circular orbits.

On the left, the two circular orbits between which the transfer will take place. On the right, the elliptical Hohmann transfer orbit.

The lower (blue) orbit has the lowest energy (i.e. the specific orbital energy) of the three, the Hohmann transfer orbit has a higher energy than that, and the higher (orange) orbit has the greatest energy of the three. The gravitational potential and kinetic energies of the initial and final circular orbits are fairly constant, but the gravitational potential and kinetic energies of the Hohmann transfer orbit vary substantially, as the orbiting object transfers gravitational potential energy to kinetic energy as it approaches Earth and vice versa.

The Hohmann transfer takes place along (half of) an elliptical orbit, with one half of the ellipse touching the lower orbit and the other half touching the higher orbit. Two different thruster impulses are used: one to move the satellite onto the elliptical orbit, and then a second one to move it onto the higher orbit. Each time the thruster is fired, the kinetic energy of the satellite increases and is then transferred into the gravitational potential energy of its new orbit. Because orbits are reversible, moving from a higher orbit to a lower orbit still involves two impulses, but they are in the direction opposite to the motion of the satellite, causing it to decrease in speed and "fall" into the lower orbit.

# Real and Apparent Weightlessness

The strength of the gravitational field at Earth's surface g is 9.81 newtons per kilogram. This means that every kilogram of mass feels a force of 9.81 newtons pulling it downwards towards the centre of Earth. As you climb higher and higher, the value of g becomes smaller and smaller. At the peak of Mount Everest, g = 9.79 N/kg, and at the summit of Chimborazo, the farthest point from Earth's centre, g = 9.78 N/kg.

So what is the value of g aboard the International Space Station, in orbit around Earth? You would be forgiven for thinking that the answer is 0 N/kg, because the environment the astronauts are in is often described as "zero-g", but this is not the case. The value of g aboard the ISS is actually 8.65 N/kg, only 12% less than on Earth's surface.

An apple floats in mid-air aboard the ISS.

The astronauts aboard the ISS experience apparent weightlessness, not true weightlessness. The reason they appear to be in a zero-g environment is only because they are in orbit around the Earth – if the ISS were to slow to a halt it would fall towards Earth just like an object on Earth's surface (though it'd fall a little bit slower at first). In orbit the ISS is falling towards Earth at just the same rate as Earth is curving away, keeping it at a constant distance from Earth's surface.* Because astronauts are in constant freefall they don't push against the "floor" and the "floor" doesn't push against them, and therefore they feel weightless. If you've ever felt a little bit lighter in a lift as it started to move downwards, just imagine that effect taken to an extreme in a lift shaft where you can never hit the bottom.

Even geostationary satellites, like those that supply satellite TV, at a distance of 36 000 km from Earth, aren't in zero-g. If they weren't within Earth's gravitational field they wouldn't orbit and would fly off into space; for geostationary satellites g = 0.224 N/kg.

The closest human beings have ever been to true zero-g is on the way to the Moon, at the point at which the gravitational pull of the Earth in one direction was equal to the gravitational pull of the Moon in the other direction. Because the mass of the Moon is about 1/81 of the mass of the Earth, and the strength of a body's gravitational field depends on the square of the distance from the body, this point is about nine-tenths of the distance between the two: 346 000 km from Earth. At this point the pull from the Earth is 0.0032 g and the pull from the Moon is the same, but in the other direction.

* Imagine a giant cannon firing horizontally: too slow and the cannon ball will hit the ground; too quick and the cannon ball will fly off into space. But if the cannon fires at just the right speed, the cannon ball will drop towards the ground at just the same rate as the ground curves away.
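A sketch checking the numbers quoted in this section (my code; the masses, radii and distances are rounded reference values I'm supplying, not values from the post):

```python
from math import sqrt

G = 6.674e-11       # m^3 kg^-1 s^-2
M_earth = 5.972e24  # kg
R_earth = 6.371e6   # mean radius of Earth, m

def g(r):
    """Gravitational field strength (N/kg) at distance r from Earth's centre."""
    return G * M_earth / r**2

print(f"ISS (~420 km up):    {g(R_earth + 4.2e5):.2f} N/kg")  # ~8.65
print(f"Geostationary orbit: {g(4.216e7):.3f} N/kg")          # ~0.224

# Earth-Moon zero-g point: G M_e / d^2 = G M_m / (D - d)^2 with M_m ~ M_e / 81
D = 3.844e8                # Earth-Moon distance, m
d = D / (1 + sqrt(1/81))   # = 0.9 * D, i.e. nine-tenths of the way
print(f"Zero-g point: {d/1e6:.0f} thousand km from Earth")    # ~346
```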
# Lagrange points

In a system of two bodies, where one mass is much larger than the other (such as the Sun and Earth, or the Earth and Moon), there are five points, known as the Lagrange points, where the resultant gravitational field is such that it provides the centripetal force for an object to remain in a constant orbit with those two bodies.* What this means is that an object – such as a satellite – placed at a Lagrange point will remain at that point (relative to the two bodies) without the need for any external force such as that provided by rocket or ion thrusters. For example, an object placed at point L1 in the diagram below will always remain between the Earth and the Sun, and an object at L2 will always remain in the Earth's shadow.

The five Lagrange points of the Sun-Earth system. The white lines are gravitational contour lines: an object in free-fall would trace out a path along one of these contours.

## L1

L1 is probably the easiest Lagrange point to understand. In the diagram above the Sun pulls an object at L1 towards the left, the Earth pulls it to the right, and the resultant force is just right to keep it in orbit between the Sun and Earth. The L1 point is ideal for making observations of the Sun-Earth system, as satellites placed here are never in the shadow of either the Sun or the Earth. The SOHO, ACE and WIND satellites are all positioned at L1.

## L2

L2 is a good site for space-based observatories, as it is almost perfectly protected from solar radiation and doesn't receive any light pollution from Earth. The Planck telescope is currently at L2, the James Webb Space Telescope is scheduled to be placed there in 2018, and the WMAP and Herschel satellites were also previously positioned there.

## L3

No satellites are currently positioned at L3, as it lies about 300 million kilometres from Earth. It has been suggested as a useful site for solar activity monitoring, as it would be able to spot any activity likely to affect Earth (sunspots, etc.) before the side of the Sun containing that activity was pointed towards Earth. L3 is also popular in science fiction as a place to "hide" something from Earth. After the development of interplanetary probes we checked, and unfortunately there's nothing there (yet).

## L4 and L5

The L4 and L5 points lie at a 60° angle from the Sun-Earth axis, at the far vertex of an equilateral triangle formed with the Sun and the Earth. Because this point is the same distance from both the Sun and Earth, the ratio of their gravitational pulls is the same as at the barycentre of the Sun-Earth system, and objects placed there remain stationary. No satellites are currently placed at either point. The Trojan asteroids are located at the L4 and L5 points of the Jupiter-Sun system. Asteroids are constantly being transferred between the asteroid belt and these two points.

* Lagrange points exist for any two-body system in which one object is much more massive than the other, but I'll only be looking at the Sun-Earth system.
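The post gives a distance for L3 but not for L1 and L2; the standard Hill-radius approximation (my addition, not from the post) puts the Sun-Earth L1 and L2 points about 1.5 million km from Earth:

```python
# Approximate distance of the Sun-Earth L1/L2 points from Earth, using the
# standard approximation r ~ R * (m / (3 M))**(1/3), valid for m << M.
R = 1.496e11        # Sun-Earth distance, m (1 AU)
m_earth = 5.972e24  # kg
M_sun = 1.989e30    # kg

r = R * (m_earth / (3 * M_sun)) ** (1 / 3)
print(f"L1/L2 distance from Earth: {r / 1e9:.2f} million km")  # ~1.50
```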
# Solar eclipse types

The word "eclipse" is a general astronomical term that applies to any situation in which the view of one object is blocked by another, caused either by the second object's shadow falling over the first, or by the second object coming in between the first object and the observer.

In general use the term usually applies to one of two situations: a solar eclipse, in which the Moon obscures our view of the Sun; or a lunar eclipse, in which the Earth's shadow falls over the Moon. The planes in which the Earth orbits the Sun and in which the Moon orbits the Earth are at an angle to each other, which is why there is not a total solar eclipse once every month.

There are three types of solar eclipse:

• A total eclipse, in which the Moon appears the same size as the Sun and blocks light from it completely.
• An annular eclipse, in which the Moon passes between the Sun and the Earth but, because of the relative distances of each, appears slightly smaller than the Sun, causing a ring of light to appear.
• A partial eclipse, in which the orbits of the Moon and Sun are such that the Moon covers only part of the Sun.

A comparison of a total solar eclipse (left) and an annular eclipse (right). Only in the case of a total solar eclipse is the Sun's corona (the white "cloud") visible.

A diagram showing how the three different types of eclipse are formed.

There are also hybrid eclipses, which appear as total eclipses from some parts of the Earth and as annular eclipses from other parts. These are very rare: the most recent was in April 2005, visible mainly from the Pacific Ocean and also Costa Rica, Panama, Colombia and Venezuela; the next will occur in November 2013, visible from Central Africa (Gabon, Congo, Democratic Republic of the Congo, Uganda, Kenya, Ethiopia and Somalia).
2018-12-12 17:29:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 6, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6197937726974487, "perplexity": 423.4615231952599}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376824059.7/warc/CC-MAIN-20181212155747-20181212181247-00094.warc.gz"}
https://www.wisdomjobs.com/e-university/oracle-10g-tutorial-160/configuring-the-components-of-automatic-storage-management-5443.html
# Configuring the Components of Automatic Storage Management - Oracle 10g

This section starts by presenting a brief overview of the components of Automatic Storage Management and discusses some considerations and guidelines that you should be aware of as you configure your ASM instance. Then, specific operations that you use to configure and maintain the ASM instance are discussed.

If you have a database instance running and are actively using Automatic Storage Management, you can keep the database open and running while you reconfigure disk groups. The SQL statements introduced in this section are only available in an ASM instance. You must first start the ASM instance.

The following topics are contained in this section:

• Considerations and Guidelines for Configuring an ASM Instance
• Creating a Disk Group
• Altering the Disk Membership of a Disk Group
• Mounting and Dismounting Disk Groups
• Managing Disk Group Templates
• Managing Disk Group Directories
• Managing Alias Names for ASM Filenames
• Dropping Files and Associated Aliases from a Disk Group
• Checking Internal Consistency of Disk Group Metadata
• Dropping Disk Groups

## Considerations and Guidelines for Configuring an ASM Instance

The following are some considerations and guidelines to be aware of as you configure Automatic Storage Management.

### Determining the Number of Disk Groups

The following criteria can help you determine the number of disk groups that you create:

• Disks in a given disk group should have similar size and performance characteristics. If you have several different types of disks in terms of size and performance, then it would be better to form several disk groups accordingly.
• For recovery reasons, you might feel more comfortable having separate disk groups for your database files and flash recovery area files. Using this approach, even with the loss of one disk group, the database would still be intact.

### Storage Arrays and Automatic Storage Management

With Automatic Storage Management, the definition of the logical volumes of a storage array is critical to database performance. Automatic Storage Management cannot optimize database data placement when the storage array disks are subdivided or aggregated. Aggregating and subdividing the physical volumes of an array into logical volumes can hide the physical disk boundaries from Automatic Storage Management. Consequently, careful consideration of storage array configuration is required.

### Consider Performance Characteristics when Grouping Disks

Automatic Storage Management eliminates the need for manual disk tuning. However, to ensure consistent performance, you should avoid placing dissimilar disks in the same disk group. For example, the newest and fastest disks might reside in a disk group reserved for the database work area, and slower drives could reside in a disk group reserved for the flash recovery area.

Automatic Storage Management load balances file activity by uniformly distributing file extents across all disks in a disk group. For this technique to be effective it is important that the disks used in a disk group be of similar performance characteristics. There may be situations where it is acceptable to temporarily have disks of different sizes and performance co-existing in a disk group. This would be the case when migrating from an old set of disks to a new set of disks. The new disks would be added and the old disks dropped. As the old disks are dropped, their storage is migrated to the new disks while the disk group is online.
### Effects of Adding and Dropping Disks from a Disk Group

Automatic Storage Management automatically rebalances (distributes file data evenly across all the disks of a disk group) whenever disks are added or dropped. ASM allocates files in such a way that rebalancing is not required when the number of disks is static. A disk is not released from a disk group until data is moved off of the disk through rebalancing. Likewise, a newly added disk cannot support its share of the I/O workload until rebalancing completes. It is more efficient to add or drop multiple disks at the same time so that they are rebalanced as a single operation. This avoids unnecessary movement of data. You can add or drop disks without shutting down the database. However, a performance impact on I/O activity may result.

### Failure Groups and Mirroring

Mirroring of metadata and user data is achieved through failure groups. ASM requires at least two failure groups for normal redundancy disk groups and at least three failure groups for high redundancy disk groups. System reliability can be hampered if an insufficient number of failure groups is provided. Consequently, failure group configuration is very important to creating a highly reliable system.

### Scalability

ASM imposes the following limits:

• 63 disk groups in a storage system
• 10,000 ASM disks in a storage system
• 4 petabyte maximum storage for each ASM disk
• 40 exabyte maximum storage for each storage system
• 1 million files for each disk group
• 2.4 terabyte maximum storage for each file

## Creating a Disk Group

You use the CREATE DISKGROUP statement to create disk groups. This statement enables you to assign a name to the disk group and to specify the disks that are to be formatted as ASM disks belonging to the disk group. You specify the disks as one or more operating system dependent search strings that Automatic Storage Management then uses to find the disks. You can specify the disks as belonging to specific failure groups, and you can specify the redundancy level for the disk group. If you do not specify a disk as belonging to a failure group, that disk comprises its own failure group.

The redundancy level can be specified as NORMAL REDUNDANCY or HIGH REDUNDANCY, as defined by the disk group templates. You can also specify EXTERNAL REDUNDANCY for external redundancy disk groups, which do not have failure groups. You might do this if you want to use storage array protection features instead.

Automatic Storage Management programmatically determines the size of each disk. If for some reason this is not possible, or if you want to restrict the amount of space used on a disk, you are able to specify a SIZE clause for each disk. Automatic Storage Management creates operating system independent names for the disks in a disk group that you can use to reference the disks in other SQL statements. Optionally, you can provide your own name for a disk using the NAME clause.

The ASM instance ensures that any disk being included in a disk group is addressable and usable. This requires reading the first block of the disk to determine if it already belongs in a disk group. If not, a header is written. It is not possible for a disk to be a member of multiple disk groups. However, you can force a disk that is already a member of another disk group to become a member of the disk group you are creating by specifying the FORCE clause. For example, a disk with an ASM header might have failed temporarily, so that its header could not be cleared when it was dropped from its disk group.
Once the disk is repaired, it is no longer part of any disk group, but it still has an ASM header. The FORCE flag is required to use the disk in a new disk group. The original disk group must not be mounted, and the disk must have a disk group header, otherwise the operation fails. Note that if you do this, you may cause another disk group to become unusable. If you specify NOFORCE, which is the default, you receive an error if you attempt to include a disk that already belongs to another disk group.

The CREATE DISKGROUP statement mounts the disk group for the first time, and adds the disk group name to the ASM_DISKGROUPS initialization parameter if a server parameter file is being used. If a text initialization parameter file is being used and you want the disk group to be automatically mounted at instance startup, then you must remember to add the disk group name to the ASM_DISKGROUPS initialization parameter before the next time that you shut down and restart the ASM instance.

### Creating a Disk Group: Example

The following example assumes that ASM_DISKSTRING is set to '/devices/*'. Assume the following:

• ASM disk discovery identifies the disks /devices/diska1 - /devices/diska4 and /devices/diskb1 - /devices/diskb4 in directory /devices.
• The disks diska1 - diska4 are on a separate SCSI controller from disks diskb1 - diskb4.

The SQL*Plus session (reconstructed after this section) illustrates starting an ASM instance and creating a disk group named dgroup1. In this example, dgroup1 is composed of eight disks that are defined as belonging to either failure group controller1 or controller2. Since NORMAL REDUNDANCY level is specified for the disk group, Automatic Storage Management provides redundancy for all files created in dgroup1 according to the attributes specified in the disk group templates. For example, in the system default templates described in "Managing Disk Group Templates", normal redundancy for the online redo log files (ONLINELOG template) is two-way mirroring. This means that when one copy of a redo log file extent is written to a disk in failure group controller1, a mirrored copy of the file extent is written to a disk in failure group controller2. You can see that to support the normal redundancy level, at least two failure groups must be defined. Since no NAME clauses are provided for any of the disks being included in the disk group, the disks are assigned the names dgroup1_0001, dgroup1_0002, ..., dgroup1_0008.

## Altering the Disk Membership of a Disk Group

At a later time after the creation of a disk group, you can change its composition by adding more disks, resizing disks, or dropping disks. You use clauses of the ALTER DISKGROUP statement to perform these actions. You can perform multiple operations with one ALTER DISKGROUP statement. Automatic Storage Management automatically rebalances file extents when the composition of a disk group changes. Because rebalancing can be a long running operation, the ALTER DISKGROUP statement does not wait until the operation is complete before returning. To monitor the progress of these long running operations, query the V$ASM_OPERATION dynamic performance view.

### Adding Disks to a Disk Group

The ADD clause of the ALTER DISKGROUP statement lets you add disks to a disk group, or add a failure group to the disk group. The ALTER DISKGROUP clauses that you can use when adding disks to a disk group are similar to those that can be used when specifying the disks to be included when initially creating a disk group.
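The SQL for this example did not survive extraction. A plausible reconstruction, following the standard Oracle 10g CREATE DISKGROUP syntax and the disk and failure group names described above (treat it as illustrative rather than the page's original listing):

```sql
-- Create a normal-redundancy disk group from the eight discovered disks,
-- split into two failure groups that mirror each other.
CREATE DISKGROUP dgroup1 NORMAL REDUNDANCY
  FAILGROUP controller1 DISK
    '/devices/diska1', '/devices/diska2', '/devices/diska3', '/devices/diska4'
  FAILGROUP controller2 DISK
    '/devices/diskb1', '/devices/diskb2', '/devices/diskb3', '/devices/diskb4';
```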
The new disks will gradually start to carry their share of the workload as rebalancing progresses. Automatic Storage Management behavior when adding disks to a disk group is best illustrated through examples.

### Adding Disks to a Disk Group: Example 1

The first statement (reconstructed after this section) adds disks to dgroup1. Since no FAILGROUP clauses are included in the ALTER DISKGROUP statement, each disk is assigned to its own failure group. The NAME clauses assign names to the disks; otherwise they would have been assigned system generated names.

### Adding Disks to a Disk Group: Example 2

The statements presented in this example demonstrate the interactions of disk discovery with the ADD DISK operation. Assume that disk discovery now identifies additional disks in directory /devices. A statement with the appropriate search strings (reconstructed after this section) adds disks /devices/diska5 - /devices/diska8 to dgroup1. It ignores /devices/diska1 - /devices/diska4 because they already belong to dgroup1, but it does not fail, because the other disks that match the search string are not already members of any other disk group.

The following statement will fail, since the /devices/diska2 search string only matches a disk that already belongs to dgroup1:

ALTER DISKGROUP dgroup1 ADD DISK '/devices/diska2', '/devices/diskd4';

A statement to add disks to dgroup1 will also fail if one of its search strings matches a disk that is contained in another disk group. By contrast, a statement whose search strings match /devices/diskc4 and /devices/diskd1 - /devices/diskd4 succeeds in adding those disks to disk group dgroup1. It does not matter that /devices/diskd4 is included in both search strings (or that /devices/diska4 and /devices/diskb4 are already members of dgroup1). Using the FORCE clause allows /devices/diskc3 to be added to dgroup2, even though it is a current member of dgroup3 (see the reconstruction after this section). For this statement to succeed, dgroup3 cannot be mounted.

### Dropping Disks from Disk Groups

To drop disks from a disk group, use the DROP DISK clause of the ALTER DISKGROUP statement. You can also drop all of the disks in specified failure groups using the DROP DISKS IN FAILGROUP clause. When a disk is dropped, the disk group is rebalanced by moving all of the file extents from the dropped disk to other disks in the disk group. The header on the dropped disk is cleared. If you specify the FORCE clause for the drop operation, the disk is dropped even if Automatic Storage Management cannot read or write to the disk. You cannot use the FORCE flag when dropping a disk from an external redundancy disk group.

### Dropping Disks from Disk Groups: Example 1

This example drops diska5 (the operating system independent name assigned to /devices/c0t4d0s2 in "Adding Disks to a Disk Group: Example 1") from disk group dgroup1:

ALTER DISKGROUP dgroup1 DROP DISK diska5;

### Dropping Disks from Disk Groups: Example 2

This example also shows dropping diska5 from disk group dgroup1, but it illustrates how multiple actions are possible with one ALTER DISKGROUP statement.

### Resizing Disks in Disk Groups

The RESIZE clause of ALTER DISKGROUP enables you to perform the following operations:

• Resize all disks in the disk group
• Resize specific disks
• Resize all of the disks in a specified failure group

If you do not specify a new size in the SIZE clause, then Automatic Storage Management uses the size of the disk as returned by the operating system. This can be a means of recovering disk space when you had previously restricted the size of the disk by specifying a size smaller than disk capacity.
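The original listings for Examples 1 and 2 were not preserved. Plausible reconstructions in standard Oracle 10g syntax, using the disk names referenced above (illustrative, not the page's exact statements):

```sql
-- Example 1: add four disks, each in its own failure group,
-- with explicit operating-system-independent names.
ALTER DISKGROUP dgroup1 ADD DISK
  '/devices/c0t4d0s2' NAME diska5,
  '/devices/c0t5d0s2' NAME diska6,
  '/devices/c0t6d0s2' NAME diska7,
  '/devices/c0t7d0s2' NAME diska8;

-- Example 2: a search string that adds only the disks not yet in any group.
ALTER DISKGROUP dgroup1 ADD DISK '/devices/diska*';

-- FORCE lets a disk with a stale ASM header join dgroup2
-- (its old disk group, dgroup3, must not be mounted).
ALTER DISKGROUP dgroup2 ADD DISK '/devices/diskc3' FORCE;
```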
The new size is written to the ASM disk header record, and if the size of the disk is increasing, then the new space is immediately available for allocation. If the size is decreasing, rebalancing must relocate file extents beyond the new size limit to available space below the limit. If the rebalance operation can successfully relocate all extents, then the new size is made permanent; otherwise the rebalance fails.

### Resizing Disks in Disk Groups: Example

The following example (reconstructed after this section) resizes all of the disks in failure group failgrp1 of disk group dgroup1. If the new size is greater than disk capacity, the statement will fail.

### Undropping Disks in Disk Groups

The UNDROP DISKS clause of the ALTER DISKGROUP statement enables you to cancel all pending drops of disks within disk groups. If a drop disk operation has already completed, then this statement cannot be used to restore it. This statement cannot be used to restore disks that are being dropped as the result of a DROP DISKGROUP statement, or for disks that are being dropped using the FORCE clause.

### Undropping Disks in Disk Groups: Example

The following example cancels the dropping of disks from disk group dgroup1:

ALTER DISKGROUP dgroup1 UNDROP DISKS;

### Manually Rebalancing a Disk Group

You can manually rebalance the files in a disk group using the REBALANCE clause of the ALTER DISKGROUP statement. This would normally not be required, since Automatic Storage Management automatically rebalances disk groups when their composition changes. You might want to do a manual rebalance operation if you want to control the speed of what would otherwise be an automatic rebalance operation.

The POWER clause of the ALTER DISKGROUP ... REBALANCE statement specifies the degree of parallelization, and thus the speed of the rebalance operation. It can be set to a value from 0 to 11, where a value of 0 stops rebalancing and a value of 11 causes rebalancing to occur as fast as possible. The power level of an ongoing rebalance operation can be changed by entering the rebalance statement with a new level. If you specify a power level of 0, rebalancing is halted until the statement is either implicitly or explicitly reinvoked.

The dynamic initialization parameter ASM_POWER_LIMIT specifies a limit on the degree of parallelization for rebalance operations. Even if you specify a higher value in the POWER clause, the degree of parallelization will not exceed the value specified by the ASM_POWER_LIMIT parameter.

The ALTER DISKGROUP ... REBALANCE statement uses the resources of the single node upon which it is started. ASM can perform one rebalance at a time on a given instance. The rebalance power is constrained by the value of the ASM_POWER_LIMIT initialization parameter. You can query the V$ASM_OPERATION view for help adjusting ASM_POWER_LIMIT and the resulting power of rebalance operations. If the ACTUAL_POWER column value is less than the DESIRED_POWER column value for a given rebalance operation, then ASM_POWER_LIMIT is constraining the rebalance. The EST_MINUTES column indicates the amount of time remaining for the operation to complete. You can see the effect of changing the power of a rebalance by observing the change in the time estimate. Rebalancing continues across a failure of the ASM instance performing the rebalance.
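The RESIZE listing was not preserved; a plausible reconstruction, with an assumed size of 100G (illustrative):

```sql
-- Resize every disk in failure group failgrp1 of dgroup1.
-- Fails if 100G exceeds the capacity of any disk in the failure group.
ALTER DISKGROUP dgroup1
  RESIZE DISKS IN FAILGROUP failgrp1 SIZE 100G;
```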
### Manually Rebalancing a Disk Group: Example

The following example manually rebalances the disk group dgroup2:

ALTER DISKGROUP dgroup2 REBALANCE POWER 5;

## Mounting and Dismounting Disk Groups

Disk groups that are specified in the ASM_DISKGROUPS initialization parameter are mounted automatically at ASM instance startup. This makes them available to all database instances running on the same node as Automatic Storage Management. The disk groups are dismounted at ASM instance shutdown. Automatic Storage Management also automatically mounts a disk group when you initially create it, and dismounts a disk group if you drop it.

There may be times that you want to mount or dismount disk groups manually. For these actions use the ALTER DISKGROUP ... MOUNT or ALTER DISKGROUP ... DISMOUNT statement. You can mount or dismount disk groups by name, or specify ALL. If you try to dismount a disk group that contains open files, the statement will fail unless you also specify the FORCE clause.

### Dismounting Disk Groups: Example

The following statement dismounts all disk groups that are currently mounted to the ASM instance:

ALTER DISKGROUP ALL DISMOUNT;

### Mounting Disk Groups: Example

The following statement mounts disk group dgroup1:

ALTER DISKGROUP dgroup1 MOUNT;

## Managing Disk Group Templates

A template is a named collection of attributes that are applied to files created within a disk group. Oracle provides a set of initial system default templates that Automatic Storage Management associates with a disk group when it is created. The table "Automatic Storage Management System Default File Group Templates" lists the system default templates and the attributes they apply to the various file types that Automatic Storage Management supports. You can add new templates to a disk group, change existing ones, or drop templates using clauses of the ALTER DISKGROUP statement. The V$ASM_TEMPLATE view lists all of the templates known to the ASM instance.

### Adding Templates to a Disk Group

To add a new template for a disk group, use the ADD TEMPLATE clause of the ALTER DISKGROUP statement. You specify the name of the template, its redundancy attribute, and its striping attribute.

### Adding Templates to a Disk Group: Example 1

The following statement creates a new template named reliable:

ALTER DISKGROUP dgroup2 ADD TEMPLATE reliable ATTRIBUTES (MIRROR FINE);

This statement creates a template that applies mirrored redundancy (MIRROR) and fine-grained striping (FINE) to files created with it.

### Modifying a Disk Group Template

The ALTER TEMPLATE clause of the ALTER DISKGROUP statement enables you to modify the attribute specifications of an existing system default or user-defined disk group template. Only specified template properties are changed. Unspecified properties retain their current values. When you modify an existing template, only new files created with the template reflect the attribute changes. Existing files maintain their attributes.

### Modifying a Disk Group Template: Example

The following example changes the striping attribute specification of the reliable template for disk group dgroup2:

ALTER DISKGROUP dgroup2 ALTER TEMPLATE reliable ATTRIBUTES (COARSE);

### Dropping Templates from a Disk Group

Use the DROP TEMPLATE clause of the ALTER DISKGROUP statement to drop one or more templates from a disk group. You can only drop templates that are user-defined; you cannot drop system default templates.
### Dropping Templates from a Disk Group: Example

This example drops a previously created unprotected template, named unreliable, from dgroup2:

ALTER DISKGROUP dgroup2 DROP TEMPLATE unreliable;

## Managing Disk Group Directories

Every ASM disk group contains a hierarchical directory structure consisting of the fully qualified names of the files in the disk group, along with alias filenames. A fully qualified filename, also called a system alias, is always generated automatically by Automatic Storage Management when a file is created. You can create an additional (more user-friendly) alias for each ASM filename during file creation. You can also create an alias for an existing filename using clauses of the ALTER DISKGROUP statement, as described in "Managing Alias Names for ASM Filenames". But you must first create a directory structure to support whatever alias file naming convention you choose to use.

This section describes how to use the ALTER DISKGROUP statement to create a directory structure for alias filenames. It also describes how you can rename or drop a directory.

### Creating a New Directory

Use the ADD DIRECTORY clause of the ALTER DISKGROUP statement to create a hierarchical directory structure for alias names for ASM files. Use the slash character (/) to separate components of the directory path. The directory path must start with the disk group name, preceded by a plus sign (+), followed by any subdirectory names of your choice. The parent directory must exist before attempting to create a subdirectory or alias in that directory.

### Creating a New Directory: Example 1

The following statement creates a hierarchical directory for disk group dgroup1, which can contain, for example, the alias name +dgroup1/mydir/control_file1:

ALTER DISKGROUP dgroup1 ADD DIRECTORY '+dgroup1/mydir';

### Creating a New Directory: Example 2

Assume no subdirectory exists under the directory +dgroup1/mydir; then a statement that attempts to create a directory two levels deeper in one step will fail (see the reconstruction after this section).

### Renaming a Directory

The RENAME DIRECTORY clause of the ALTER DISKGROUP statement enables you to rename a directory. System created directories (those containing system alias names) cannot be renamed.

### Renaming a Directory: Example

The following statement renames a directory:

ALTER DISKGROUP dgroup1 RENAME DIRECTORY '+dgroup1/mydir' TO '+dgroup1/yourdir';

### Dropping a Directory

You can delete a directory using the DROP DIRECTORY clause of the ALTER DISKGROUP statement. You cannot drop a system created directory. You cannot drop a directory containing alias names unless you also specify the FORCE clause.

### Dropping a Directory: Example

This statement deletes a directory along with its contents:

ALTER DISKGROUP dgroup1 DROP DIRECTORY '+dgroup1/yourdir' FORCE;

## Managing Alias Names for ASM Filenames

After you have created the hierarchical directory structure for alias names, you can create alias names in the disk group. Alias names are intended to provide a more user-friendly means of referring to ASM files, rather than using the fully qualified names (system aliases) that Automatic Storage Management always generates when it creates a file. As mentioned earlier, alias names can be created when the file is created in the database, or by adding an alias or renaming existing alias names using the ADD ALIAS or RENAME ALIAS clauses of the ALTER DISKGROUP statement. You delete an alias using the DROP ALIAS clause. You cannot delete or rename a system alias. The V$ASM_ALIAS view contains a row for every alias name known to the ASM instance.
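The failing statement from Example 2 was not preserved; a plausible reconstruction in the standard syntax (the subdirectory names are illustrative):

```sql
-- Fails: the parent directory +dgroup1/mydir/first_dir does not exist yet,
-- so the nested subdirectory cannot be created in a single step.
ALTER DISKGROUP dgroup1 ADD DIRECTORY '+dgroup1/mydir/first_dir/second_dir';
```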
It contains a column, SYSTEM_CREATED, that specifies whether the alias is system generated.

### Adding an Alias Name for an ASM Filename

Use the ADD ALIAS clause of the ALTER DISKGROUP statement to create an alias name for an ASM filename. The alias name must consist of the full directory path and the alias itself.

### Adding an Alias Name for an ASM Filename: Example 1

The first statement adds a new alias name for a system generated file name (see the reconstructed examples after this section).

### Adding an Alias Name for an ASM Filename: Example 2

The second statement illustrates another means of specifying the ASM filename for which the alias is to be created. It uses the numeric form of the ASM filename, which is an abbreviated and derived form of the system generated filename.

### Renaming an Alias Name for an ASM Filename

Use the RENAME ALIAS clause of the ALTER DISKGROUP statement to rename an alias for an ASM filename. The old and the new alias names must consist of the full directory paths of the alias names. An example appears in the reconstructions after this section.

### Dropping an Alias Name for an ASM Filename

Use the DROP ALIAS clause of the ALTER DISKGROUP statement to drop an alias for an ASM filename. The alias name must consist of the full directory path and the alias itself. The underlying file to which the alias refers is unchanged.

### Dropping an Alias Name for an ASM Filename: Example 1

The following statement deletes an alias:

ALTER DISKGROUP dgroup1 DELETE ALIAS '+dgroup1/payroll/compensation.dbf';

### Dropping an Alias Name for an ASM Filename: Example 2

A statement that attempts to delete a system alias will fail, because this is not allowed.

## Dropping Files and Associated Aliases from a Disk Group

You can delete ASM files and their associated alias names from a disk group using the DROP FILE clause of the ALTER DISKGROUP statement. You must use a fully qualified filename, a numeric filename, or an alias name when specifying the file that you want to delete. Some reasons why you might need to delete files include:

• Files created using aliases are not Oracle-managed files. Consequently, they are not automatically deleted.
• A point-in-time recovery of a database might restore the database to a time before a tablespace was created. The restore does not delete the tablespace, but there is no reference to the tablespace (or its datafile) in the restored database. You can manually delete the datafile.

Dropping an alias does not drop the underlying file on the file system.

### Dropping Files and Associated Aliases from a Disk Group: Example 1

The following statement uses the alias name for the file to delete both the file and the alias:

ALTER DISKGROUP dgroup1 DROP FILE '+dgroup1/payroll/compensation.dbf';

### Dropping Files and Associated Aliases from a Disk Group: Example 2

In the second example, the system generated filename is used to drop the file and any associated alias (see the reconstructions after this section).

## Checking Internal Consistency of Disk Group Metadata

You can check the internal consistency of disk group metadata using the ALTER DISKGROUP ... CHECK statement. Checking can be specified for specific files in a disk group, specific disks or all disks in a disk group, or specific failure groups within a disk group. The disk group must be mounted in order to perform these checks. If any errors are detected, an error message is displayed and details of the errors are written to the alert log. Automatic Storage Management attempts to correct any errors, unless you specify the NOREPAIR clause in your ALTER DISKGROUP ... CHECK statement.
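The alias and DROP FILE listings referenced above were not preserved; plausible reconstructions in standard Oracle 10g syntax (the system generated and numeric filenames here are invented placeholders, not values from the page):

```sql
-- Example 1: alias for a fully qualified (system generated) filename.
ALTER DISKGROUP dgroup1
  ADD ALIAS '+dgroup1/payroll/compensation.dbf'
  FOR '+dgroup1/sample/datafile/mytable.342.3';

-- Example 2: the same alias created against the numeric form of the name.
ALTER DISKGROUP dgroup1
  ADD ALIAS '+dgroup1/payroll/compensation.dbf'
  FOR '+dgroup1.342.3';

-- Renaming an alias: both names carry the full directory path.
ALTER DISKGROUP dgroup1
  RENAME ALIAS '+dgroup1/payroll/compensation.dbf'
  TO '+dgroup1/payroll/new_compensation.dbf';

-- Dropping a file by its system generated filename drops any alias too.
ALTER DISKGROUP dgroup1 DROP FILE '+dgroup1.342.3';
```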
The following statement checks for consistency in the metadata for all disks in the dgroup1 disk group:

ALTER DISKGROUP dgroup1 CHECK ALL;

## Dropping Disk Groups

The DROP DISKGROUP statement enables you to delete an ASM disk group and, optionally, all of its files. You can specify the INCLUDING CONTENTS clause if you want any files that may still be contained in the disk group also to be deleted. The default is EXCLUDING CONTENTS, which provides syntactic consistency and prevents you from dropping the disk group if it has any contents.

The ASM instance must be started, and the disk group must be mounted with none of the disk group files open, in order for the DROP DISKGROUP statement to succeed. The statement does not return until the disk group has been dropped.

When you drop a disk group, Automatic Storage Management dismounts the disk group and removes the disk group name from the ASM_DISKGROUPS initialization parameter if a server parameter file is being used. If a text initialization parameter file is being used, and the disk group is mentioned in the ASM_DISKGROUPS initialization parameter, then you must remember to remove the disk group name from the ASM_DISKGROUPS initialization parameter before the next time that you shut down and restart the ASM instance.

The following statement deletes dgroup1:

DROP DISKGROUP dgroup1;

After ensuring that none of the files contained in dgroup1 are open, Automatic Storage Management rewrites the header of each disk in the disk group to remove ASM formatting information. The statement does not specify INCLUDING CONTENTS, so the drop operation will fail if the disk group contains any files.
2019-12-10 20:07:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.221926748752594, "perplexity": 2573.1351375828594}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540528490.48/warc/CC-MAIN-20191210180555-20191210204555-00116.warc.gz"}
https://forum.math.toronto.edu/index.php?PHPSESSID=291po1gd73nou15olrsn39gpq7&topic=341.5;wap2
MAT244-2013F > MidTerm > MT, P4

Victor Ivrii:

--- Quote from: Xiaozeng Yu on October 10, 2013, 09:43:00 AM ---
ahh...omg, because the rectangle must contain the initial value point (0,1) in order to have a unique solution of the initial value problem. sin 1 > 0 makes the function increasing, so the rectangle (a<0<b, 0<y<pi) containing the initial point contains the unique solution function of (0,1), which is increasing?
--- End quote ---

You need to prove that the solution, while increasing as $t$ increases, never goes above $\pi$, and also that the solution, while decreasing as $t$ decreases, never goes below $0$. Nuff said. The solution remains pending.

Victor Ivrii:

Since nobody posted a correct solution. Observe first that the solution is unique, since $f(t,y)= \frac{\sin{y}}{1+\sin^2(t)}$ satisfies the conditions of Theorem 2.4.2 in the textbook. Since $|f(t,y)|\le 1$, solutions exist for $t\in (-\infty,+\infty)$.

Observe also that $f(t,y)=0$ iff $\sin(y)=0 \iff y=n\pi$ with $n=0,\pm 1, \pm 2,\ldots$. Therefore all other solutions cannot cross these values and remain confined between them; in particular, the solution with $y(0)=1$ remains confined between $y=0$ and $y=\pi$.

Since $y'=f(t,y)>0$ iff $2n\pi< y< (2n+1)\pi$, such solutions are monotone increasing. Since $y'=f(t,y)<0$ iff $(2n-1)\pi< y< 2n\pi$, such solutions are monotone decreasing. In particular, the solution with $y(0)=1$ is monotone increasing.

Additional remarks: since $|f(t,y)|\ge \frac{1}{2}|\sin (\epsilon)|$ for $y \in (n\pi+\epsilon, (n+1)\pi-\epsilon)$, solutions will cross any such line, and

* Any solution confined between $2n\pi$ and $(2n+1)\pi$ tends to $2n\pi$ as $t\to -\infty$ and to $(2n+1)\pi$ as $t\to +\infty$;
* Any solution confined between $(2n-1)\pi$ and $2n\pi$ tends to $2n\pi$ as $t\to -\infty$ and to $(2n-1)\pi$ as $t\to +\infty$.

Using language we learn in Chapter 9, $y=2n\pi$ are unstable stationary solutions; $y=(2n+1)\pi$ are asymptotically stable stationary solutions. See attached picture.
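A quick numerical sanity check of the claims above (my sketch, not part of the thread; uses scipy), showing that the solution with $y(0)=1$ increases monotonically toward $\pi$:

```python
import numpy as np
from scipy.integrate import solve_ivp

# y' = sin(y) / (1 + sin(t)^2), y(0) = 1
f = lambda t, y: np.sin(y) / (1 + np.sin(t)**2)

sol = solve_ivp(f, (0, 60), [1.0], dense_output=True, rtol=1e-9, atol=1e-12)
ys = sol.sol(np.linspace(0, 60, 7))[0]
print(ys)                        # climbs toward pi ~ 3.14159
print(np.all(np.diff(ys) > 0))   # True: monotone increasing
```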
2021-12-02 07:56:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.900450587272644, "perplexity": 836.4597803418152}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964361169.72/warc/CC-MAIN-20211202054457-20211202084457-00124.warc.gz"}
https://www.physicsoverflow.org/21766/there-treatment-double-field-theory-other-local-coordinates
# Where is there a treatment of double field theory other than in local coordinates?

The n-lab seems to lack a treatment of double field theory. Where is there a treatment other than in local coordinates? Or at least one which identifies the coordinates as local coordinates for a specified global object?

This post imported from StackExchange MathOverflow at 2014-08-07 22:19 (UCT), posted by SE-user Jim Stasheff

arxiv.org/pdf/1305.1907v2.pdf is not to your liking? This post imported from StackExchange MathOverflow at 2014-08-07 22:19 (UCT), posted by SE-user Carlo Beenakker

Thanks, Carlo. I was unaware of that paper. I don't subscribe to hep-th; wish cross-referencing were more common. Have also discovered Vaisman's 1203.0836. This post imported from StackExchange MathOverflow at 2014-08-07 22:19 (UCT), posted by SE-user Jim Stasheff

Now there is arxiv.org/abs/1406.3601 This post imported from StackExchange MathOverflow at 2014-08-07 22:19 (UCT), posted by SE-user Jim Stasheff
2019-04-26 01:57:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4479139447212219, "perplexity": 3370.1593984245906}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578747424.89/warc/CC-MAIN-20190426013652-20190426035652-00503.warc.gz"}
http://stockgame1.iktogo.com/post/sports-activities-betting--what-is-actually-the-deal/
There is nothing, it seems, quite as natural to human beings as betting. Betting has been a part of human culture on every continent on Earth. From the Natives of North America to China and everywhere in between, placing a wager on the outcome of a game has been a part of sports culture.

Sports betting continues to be a big part of our culture today. Just as in times past, there is not a single sport you can name that does not have some kind of betting audience. Proponents of sports betting say that it is a harmless way to add a little fun to the game. Even if you have never been to a bookmaker, odds are that you have made some kind of wager on a sports event. It might be a fantasy pool, it might just be a bet for a beer with a buddy, but you have been drawn by the appeal of making a correct prediction.

For some people, sports betting is more than just a way to spice up a favorite pastime; it is big business. All over the world, bets are placed on lacrosse, cricket, football, soccer, baseball, and every other sport you can name. Some people win big, some people win consistently, but it is always the books that come out on top. Let's take a deeper look at what sports betting is all about, and some of the burning questions people have on the subject.

One of the biggest questions surrounding sports betting is whether or not the activity is legal. The truth is that in many parts of the world, sports betting is legal. Most of Europe and Asia regulate sports betting quite heavily, but bettors can place their wagers without fear of legal reprisals.

North America is a different story. In Canada and the United States, wagering on sports is only actually permitted in four states: Nevada, Delaware, Montana, and Oregon. Of these, only Nevada actually allows sports gambling outfits to operate. Now, this does not necessarily mean that North Americans are out of luck if they want to wager on a game. The Internet has opened up a wide range of opportunities for residents west of the Atlantic to place bets on sports, although they must do so through books operated in an area where sports gambling is legal. Even so, the status of those operations is a little bit shady.

Official sports bets, those which take place through bookies rather than buddies, are carefully calculated odds offered by shrewd business number crunchers. Whether we are talking about Las Vegas or Beijing, you can be sure that the books are one step ahead of your average bettor when it comes to wagering. This is not to say that you don't stand a chance of winning when you place a bet, since one of the appeals of laying a wager on a sports event is that victory is equal parts knowledge and luck (as opposed to casino wagering, which is pretty much just luck no matter what Charlton Heston has to say!).

The sports books offer many different kinds of bets, all of which are designed so that the book itself makes a profit no matter the outcome of the event. That profit is known as the vigorish (vig for short). It is usually around $10, paid by the person who loses the wager.

Generally, bettors will choose one of two options when wagering on a sports event. The first is the money line, in which a straight-up win by the team picked will result in money returned to the bettor. They look like this:

Chicago White Sox -200
New York Yankees +150

That example tells us two things. First of all, the White Sox are the favorites. That is indicated by the negative sign. If you bet the Sox, then you have to put down $200 in order to win $100. That is the second thing the example shows us: the numbers indicate how much you win if the team you pick comes out on top. For the Yankees, the underdogs, you put down $100 for a shot at winning $150. But, of course, the Yankees will have to win!

The other kind of bet made on sports is the spread. Here, bookmakers will offer bettors a chance to win even if the team they bet on loses. Here's a look at how spreads are expressed:

Chicago Bulls -10
Denver Nuggets +10

Once again, the negative sign indicates that the Bulls are the favorites. However, in this case, a bettor wagers not on just who will win, but by how much. If you were to bet on the Bulls and they won, but only by 8, you would still lose the bet. The Bulls have to win by more than 10 points if a bet on them is to return money. Conversely, you could bet on the underdog Nuggets and still win if the team loses by less than 10 points.
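To make the money-line arithmetic concrete, here is a minimal sketch (added for illustration, not from the original article) converting American odds to the profit on a winning bet:

```python
def moneyline_profit(odds, stake):
    """Profit on a winning bet at American (money-line) odds.

    Negative odds (favorites): risk |odds| to win 100.
    Positive odds (underdogs): risk 100 to win odds.
    """
    if odds < 0:
        return stake * 100.0 / (-odds)
    return stake * odds / 100.0

# White Sox at -200: a $200 stake wins $100 profit.
print(moneyline_profit(-200, 200))   # 100.0
# Yankees at +150: a $100 stake wins $150 profit.
print(moneyline_profit(150, 100))    # 150.0
```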
2020-08-05 23:04:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20136629045009613, "perplexity": 1987.9573452403283}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735989.10/warc/CC-MAIN-20200805212258-20200806002258-00496.warc.gz"}
https://physics.stackexchange.com/questions/488507/throwing-something-into-orbit?noredirect=1
# Throwing something into orbit [duplicate]

Is it possible with Newtonian gravity to throw something (which has no on-board propulsion) so that it will orbit the Earth without colliding with it (friction is not present)? My intuition tells me no, because all closed Newtonian orbits are periodic, so that if I throw something it will eventually come back to Earth. But I can't seem to show it with the math. Any help would be appreciated.

If you are standing at a height $h$ above the surface, the orbital velocity you need for a circular orbit at your height is given by: $$\frac{mv^2}{R+h}=\frac{GMm}{(R+h)^2}\Rightarrow v=\sqrt{\frac{GM}{R+h}}.$$ You can simply throw the object horizontally, tangential to the surface of the planet, at this velocity and the object will orbit at your height. If you wish to throw an object into a circular orbit at a height above you, this is impossible. See here for the details. If you want an elliptical orbit, then your observation that closed Newtonian orbits are periodic is correct. This is because a unique orbit can be defined by an orbital energy and a point on the orbit. By imparting a particular kinetic energy on the object at your location, you guarantee that the object will return to your location, since it must be on the orbit you created.

Yes. You can absolutely do that in a Newtonian framework. In fact, all the thinking around the issue before Tsiolkovsky was based on the idea of launching projectiles with no on-board propulsion systems; it was he (two centuries after Newton) who started to point to rockets as a novel means of reaching orbital motion from Earth without extreme initial accelerations (thus saving the lives of any traveler inside the device). Even Newton himself explained what an orbit around Earth is in terms of shooting cannonballs from the top of a mountain! This is sometimes called Newton's cannonball, and it appears in the Principia Mathematica itself, the foundation of classical mechanics.

Newton's concept is that if you shoot a cannonball from an elevation, pointing at the horizon, the projectile will fall to the ground after travelling several kilometers at best. But if your cannon is powerful enough, then your cannonball will never touch the ground. Why? Because as the cannonball falls towards the ground, it travels such long distances that the ground itself gets farther from it, bending away under the curvature of the Earth. The cannonball is always falling but never reaches the ground, since it keeps dodging it and the ground always remains far below it as it flies. The Moon is doing the exact same thing: it is falling towards the Earth, but it never collides with it, as the planet's surface bends below its trajectory. This was the key idea presented by Newton. All knew that gravity existed before, but it was Newton who understood that the Moon is falling just as an apple from a tree does.

You can also point your cannon perpendicular to the ground. In that case the projectile will return to Earth if it has a velocity below the escape velocity of the planet (which is around $11\; km/s$ on Earth). But if you have a powerful enough cannon, you can exceed that. It wouldn't be a closed orbit, though, and it will never return to Earth. Gravity is a force that extends infinitely, but it gets weaker very rapidly, so if your projectile slows down more slowly than that, you are always going faster than needed to escape the planet (even if the force never really disappears).
The whole method was also depicted in literature when Jules Verne presented his idea of a large space cannon in "From the Earth to the Moon". Although rockets are great in comparison to cannons for a number of reasons (rockets accelerate more gradually, while cannons concentrate all the acceleration in the time spent speeding up in the barrel), cannons have also been considered for launching compact unmanned probes into orbit. These so-called space guns have been proposed and some have been built. The catch is that they have to shoot the object at very large speeds, because it needs to leave the atmosphere still moving at several kilometers per second; the friction and intense accelerations involved make these projects difficult, and the efficiency of rockets is nowadays impossible to beat by these means.

• Thanks for the detailed answer! My intent was to launch something so it will orbit at a height above my own, and I've been linked to an answer which shows it to be impossible. I see that my question was not clear, and I thank you. – Yizhar Amir Jun 28 '19 at 18:27
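To put numbers on the cannonball picture, here is a minimal sketch (added for illustration; the constants are standard values for the Earth):

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # mass of the Earth, kg
R = 6.371e6     # mean radius of the Earth, m

def circular_orbit_speed(h):
    """Speed for a circular orbit at height h above the surface: v = sqrt(GM/(R+h))."""
    return math.sqrt(G * M / (R + h))

print(circular_orbit_speed(0.0))      # ~7900 m/s at the surface (Newton's cannonball)
print(circular_orbit_speed(400e3))    # ~7700 m/s at roughly ISS altitude
print(math.sqrt(2) * circular_orbit_speed(0.0))  # escape speed, ~11200 m/s
```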
2021-01-21 15:31:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 3, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6455438137054443, "perplexity": 323.7782947947736}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703524858.74/warc/CC-MAIN-20210121132407-20210121162407-00780.warc.gz"}
https://mathoverflow.net/questions/99086/maximum-size-of-k-wise-linearly-independent-set-within-lbrace-1-2-3
# Maximum size of $k$-wise linearly independent set within $\lbrace 1, 2, 3, …, u \rbrace^k$

Given a positive integer $u$, how many $k$-dimensional vectors whose coordinates are all in $\lbrace 1, 2, 3, ..., u\rbrace$ can you choose so that any $k$ of them are linearly independent? Equivalently, what is the size of the largest subset of $\lbrace 1, 2, 3, ... u \rbrace^k$ so that each hyperplane through the origin contains at most $k-1$ of them?

If $k=2$, two vectors are linearly dependent iff they have the same slope, so the maximum number of pairwise independent vectors is the number of distinct slopes $y/x$ with $1\le x,y \le u$, $$-1 + 2\sum_{n=1}^u \phi(n),$$ since the number of slopes up to $1$ with reduced denominator $n$ is $\phi(n)$, and slopes other than $1$ come in reciprocal pairs.

Hei, this site is for research-level questions, so this question is not appropriate here. See mathoverflow.net/faq#whatnot for some places where your question will be more likely to get an answer. – MTS Jun 8 '12 at 4:23

I rewrote the main question so that it does sound research level, at least at the moment, at least to my ears. (Is it easy and I don't see it, MTS?) I couldn't make sense of the "sub problem" so I deleted that part for now. – David Feldman Jun 8 '12 at 4:57

David, thank you. Before I submitted the question, I read through mathoverflow.net/howtoask. I just wanted to make this question very clear, specific and precise. Actually, my original question is an optimization problem as you stated, with many constraints. – Xiali Hei Jun 8 '12 at 5:16

Which asymptotic region are you interested in? – Gjergji Zaimi Jun 8 '12 at 8:41

I think I misunderstood the question. Withdrawing my vote to close. – MTS Jun 8 '12 at 16:09

Tony already mentioned that the maximum size of a set of vectors that are $k$-wise linearly independent over a finite field $\mathbb F_q$ grows linearly with $q$. In our situation, however, this is no longer true, and the right order of asymptotics is $O(u^{k/(k-1)})$. That is, if you keep $k$ fixed, the maximum number of $k$-wise linearly independent vectors from $\lbrace 1,2,\dots, u\rbrace ^k$ is $\sim u^{k/(k-1)}$. One has the same order of magnitude for the minimal number of linear subspaces needed to cover the points $\lbrace 1,2,\dots, u\rbrace ^k$. These statements are proved in I. Barany, G. Harcos, J. Pach and G. Tardos, "Covering Lattice Points by Subspaces", Per. Math. Hung. 43, 2001, 93-103. Here is the arxiv link. I think that the right order of magnitude for the maximum number of such $k$-dimensional vectors so that any $r$ are linearly independent is not known for $r < k$. See this article for references on such generalizations.

I am traveling. Thank everyone here. After my traveling, I will update my ideas and goals. – Xiali Hei Jun 11 '12 at 3:58

Not an answer, but you may also be interested in the 'finite field' analogue of your question. Namely, given a finite field $\mathbb{F}_q$, what is the maximum size of a subset of vectors $S \subset \mathbb{F}_q^k$ so that every subset of $S$ of size $k$ is linearly independent. An old conjecture of Segre asserts that the maximum size of such a set is at most $q+1$ (except for some exceptional cases). This paper of Ball proves the conjecture for $q$ prime (and some other cases). He also proves that the largest examples are all essentially equivalent to the following example: $$S:=\{(1,t, t^2, \dots, t^{k-1}) : t \in \mathbb{F}_q \} \cup \{(0, \dots, 0, 1)\}$$
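A brute-force check of the $k=2$ count (a minimal sketch added here; not part of the original thread):

```python
from fractions import Fraction
from math import gcd

def phi(n):
    """Euler's totient, computed naively."""
    return sum(1 for m in range(1, n + 1) if gcd(m, n) == 1)

def distinct_slopes(u):
    # number of pairwise linearly independent vectors in {1,...,u}^2
    return len({Fraction(y, x) for x in range(1, u + 1) for y in range(1, u + 1)})

for u in (1, 2, 5, 10, 20):
    assert distinct_slopes(u) == -1 + 2 * sum(phi(n) for n in range(1, u + 1))
print("count formula verified")
```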
2014-04-24 13:59:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8876516819000244, "perplexity": 236.62867600710854}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00030-ip-10-147-4-33.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/calculus-of-variations-nature-of-the-functional.750996/
# Calculus of Variations: Nature of the Functional

1. Apr 27, 2014

### devd

Let $S[y] = \int_{a}^{b} f[y, \dot{y}, x]\, dx$ be the functional I want to minimize. Why does $f$ (inside the integral) take this specific form? Would I not be able to minimize the integral $S$ if $f$ had any other form instead of $f = f[x, y, \dot{y}]$?

2. Apr 28, 2014

### HallsofIvy

Staff Emeritus

Do you understand what "$f[x,y,y']$" means? $f$ is a function that can depend upon $x$, $y$, or the derivative of $y$, but the "dependence" on any one can be 0; that is, this includes $f(x)$, with $f$ depending on $x$ only, $f(y)$ with $f$ depending on $y$ only, or $f(y')$ with $f$ depending on the derivative of $y$ only. What more generality do you want?

3. Apr 28, 2014

### pasmith

Mathematically one can consider a functional of the form $$S[y] = \int_a^b f(x,y,y', \dots, y^{(n)})\,dx$$ for any $n \geq 1$, where the optimal solution satisfies $$\sum_{k=0}^n (-1)^k \frac{d^k}{dx^k}\left( \frac{\partial f}{\partial y^{(k)}}\right) = 0,$$ which is in principle a $2n$th-order ODE subject to boundary conditions on $y$, $y'$, ..., $y^{(n-1)}$ at both $x = a$ and $x = b$.

However, in physical applications one generally has $$\frac{\partial f}{\partial y^{(k)}} = 0$$ for $k \geq 2$, so there is no point in going beyond $n = 1$. Also, the method of deriving the above ODE does not involve any ideas which are not required for the derivation of the Euler-Lagrange equation for the case $n = 1$; it just requires more integrations by parts.

4. Apr 28, 2014

### devd

Yes, I understand what $f(x, y, y')$ means here. I was thinking about generalizations of the form that pasmith mentioned. Most of the texts are physically motivated, I guess. Probably that's why I didn't find the general form. Thanks, all! :)
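As a concrete illustration of the $n=1$ case (a minimal sketch, assuming SymPy is available; its euler_equations helper also accepts integrands with higher derivatives, matching pasmith's general formula):

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

x = sp.symbols('x')
y = sp.Function('y')

# A standard example: f = y'^2 + y^2, i.e. S[y] = integral of (y'^2 + y^2) dx
f = y(x).diff(x)**2 + y(x)**2

# Prints something equivalent to [Eq(2*y(x) - 2*Derivative(y(x), (x, 2)), 0)],
# i.e. the Euler-Lagrange equation y'' = y.
print(euler_equations(f, y(x), x))
```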
2017-11-21 03:02:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8548992276191711, "perplexity": 538.7873582795147}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806310.85/warc/CC-MAIN-20171121021058-20171121041058-00694.warc.gz"}
http://nanometrologie.cz/niget/doc/html5/A1.html
# Appendix A Data fitting

We use three types of data fitting: the Deming fit for straight lines, the least squares fit of the 3/2 power, and orthogonal distance regression [2] for power law functions.

## A.1 Orthogonal distance regression

Orthogonal distance regression, also called generalized least squares regression, errors-in-variables modelling or measurement error modelling, tries to find the best fit taking into account errors in both the x- and y-values. Assuming the relationship

$y^{*}=f(x^{*};\beta)$ (60)

where $\beta$ are parameters and $x^{*}$ and $y^{*}$ are the "true" values, without error, this leads to a minimization of the sum

$\min_{\beta,\delta}\sum_{i=1}^{n}\left[\left(y_{i}-f(x_{i}+\delta_{i};\beta)\right)^{2}+\delta_{i}^{2}\right]$ (61)

which can be interpreted as the sum of orthogonal distances from the data points $(x_{i},y_{i})$ to the curve $y=f(x,\beta)$. It can be rewritten as

$\min_{\beta,\delta,\varepsilon}\sum_{i=1}^{n}\left[\varepsilon_{i}^{2}+\delta_{i}^{2}\right]$ (62)

subject to

$y_{i}+\varepsilon_{i}=f(x_{i}+\delta_{i};\beta).$ (63)

This can be generalized to accommodate different weights for the data points and higher dimensions:

$\min_{\beta,\delta,\varepsilon}\sum_{i=1}^{n}\left[\varepsilon_{i}^{T}w^{2}_{\varepsilon}\varepsilon_{i}+\delta_{i}^{T}w^{2}_{\delta}\delta_{i}\right],$

where $\varepsilon$ and $\delta$ are $m$- and $n$-dimensional vectors and $w_{\varepsilon}$ and $w_{\delta}$ are symmetric, positive diagonal matrices. Usually the inverse uncertainties of the data points are chosen as weights. We use the implementation ODRPACK [2].

There are different estimates of the covariance matrix of the fitted parameters $\beta$. Most of them are based on the linearization method, which assumes that the nonlinear function can be adequately approximated at the solution by a linear model. Here, we use an approximation where the covariance matrix associated with the parameter estimates is based on $\left(J^{T}J\right)^{-1}$, where $J$ is the Jacobian matrix of the x and y residuals, weighted by the triangular matrix of the Cholesky factorization of the covariance matrix associated with the experimental data. ODRPACK uses the following implementation [1]:

$\hat{V}=\hat{\sigma}^{2}\left[\sum_{i=1}^{n}\frac{\partial f(x_{i}+\delta_{i};\beta)}{\partial\beta^{T}}w^{2}_{\varepsilon_{i}}\frac{\partial f(x_{i}+\delta_{i};\beta)}{\partial\beta}+\frac{\partial f(x_{i}+\delta_{i};\beta)}{\partial\delta^{T}}w^{2}_{\delta_{i}}\frac{\partial f(x_{i}+\delta_{i};\beta)}{\partial\delta}\right]^{-1}$ (64)

The residual variance $\hat{\sigma}^{2}$ is estimated as

$\hat{\sigma}^{2}=\frac{1}{n-p}\sum_{i=1}^{n}\left[\left(y_{i}-f(x_{i}+\delta_{i};\beta)\right)^{T}w^{2}_{\varepsilon_{i}}\left(y_{i}-f(x_{i}+\delta_{i};\beta)\right)+\delta_{i}^{T}w^{2}_{\delta_{i}}\delta_{i}\right]$ (65)

where $\beta\in\mathbb{R}^{p}$ and $\delta_{i}\in\mathbb{R}^{m},\ i=1,\dots,n$ are the optimized parameters.

## A.2 Total least squares - Deming fit

The Deming fit is a special case of orthogonal regression which can be solved analytically. It seeks the best fit to a linear relationship between the x- and y-values,

$y^{*}=ax^{*}+b,$ (66)

by minimizing the weighted sum of (orthogonal) distances of the data points from the curve,

$S=\sum_{i=1}^{n}\frac{1}{\sigma_{\epsilon}^{2}}(y_{i}-ax_{i}^{*}-b)^{2}+\frac{1}{\sigma_{\eta}^{2}}(x_{i}-x_{i}^{*})^{2},$

with respect to the parameters $a$, $b$, and $x_{i}^{*}$.
The weights are the reciprocals of the variances of the errors in the x-variable ($\sigma_{\eta}^{2}$) and the y-variable ($\sigma_{\epsilon}^{2}$). It is not necessary to know the variances themselves; it is sufficient to know their ratio

$\delta=\frac{\sigma_{\epsilon}^{2}}{\sigma_{\eta}^{2}}.$ (67)

The solution is

$a=\frac{1}{2s_{xy}}\left[s_{yy}-\delta s_{xx}\pm\sqrt{(s_{yy}-\delta s_{xx})^{2}+4\delta s_{xy}^{2}}\right]$ (68)

$b=\bar{y}-a\bar{x}$ (69)

$x_{i}^{*}=x_{i}+\frac{a}{\delta+a^{2}}\left(y_{i}-b-ax_{i}\right),$ (70)

where

$\bar{x}=\frac{1}{n}\sum_{i=1}^{n}x_{i}$ (71)

$\bar{y}=\frac{1}{n}\sum_{i=1}^{n}y_{i}$ (72)

$s_{xx}=\frac{1}{n}\sum_{i=1}^{n}(x_{i}-\bar{x})^{2}$ (73)

$s_{yy}=\frac{1}{n}\sum_{i=1}^{n}(y_{i}-\bar{y})^{2}$ (74)

$s_{xy}=\frac{1}{n}\sum_{i=1}^{n}(x_{i}-\bar{x})(y_{i}-\bar{y}).$ (75)

## A.3 Least squares - 3/2 power fit

We seek the best fit

$y=ax^{3/2}+b,$ (76)

by minimizing the sum of (vertical) distances of the data points from the curve,

$S=\sum_{i=1}^{n}(y_{i}-ax_{i}^{3/2}-b)^{2},$

with respect to the parameters $a$, $b$. The solution is

$a=\frac{\overline{x^{3/2}y}-\overline{x^{3/2}}\,\bar{y}}{\overline{x^{3}}-\left(\overline{x^{3/2}}\right)^{2}}$ (77)

$b=\bar{y}-a\,\overline{x^{3/2}}$ (78)

where

$\overline{x^{3/2}y}=\frac{1}{n}\sum_{i=1}^{n}x_{i}^{3/2}y_{i}$ (79)

$\overline{x^{3/2}}=\frac{1}{n}\sum_{i=1}^{n}x_{i}^{3/2}$ (80)

$\overline{x^{3}}=\frac{1}{n}\sum_{i=1}^{n}x_{i}^{3}$ (81)

$\bar{y}=\frac{1}{n}\sum_{i=1}^{n}y_{i}$ (82)
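The following sketches are added for illustration (they assume NumPy and SciPy, with illustrative data that is not from this document). SciPy exposes ODRPACK as scipy.odr; a minimal power-law fit with weights taken from the data uncertainties looks like this:

```python
import numpy as np
from scipy import odr

def power_law(beta, x):
    # f(x; beta) = beta[0] * x**beta[1]
    return beta[0] * x ** beta[1]

rng = np.random.default_rng(0)
x = np.linspace(1.0, 10.0, 40)
y = 2.0 * x**1.5 + rng.normal(0.0, 0.1, x.size)

data = odr.RealData(x, y, sx=0.05, sy=0.1)   # uncertainties become the weights
out = odr.ODR(data, odr.Model(power_law), beta0=[1.0, 1.0]).run()
print(out.beta)      # fitted parameters, roughly [2.0, 1.5]
print(out.sd_beta)   # standard errors from the covariance estimate
```

And a direct transcription of the closed-form Deming solution (68)-(70), taking the positive root in (68):

```python
import numpy as np

def deming_fit(x, y, delta=1.0):
    """Deming fit y = a*x + b; delta = sigma_eps^2 / sigma_eta^2, eqs. (68)-(70)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xbar, ybar = x.mean(), y.mean()
    sxx = ((x - xbar) ** 2).mean()
    syy = ((y - ybar) ** 2).mean()
    sxy = ((x - xbar) * (y - ybar)).mean()
    a = (syy - delta * sxx
         + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
    b = ybar - a * xbar
    xstar = x + a / (delta + a ** 2) * (y - b - a * x)
    return a, b, xstar

a, b, _ = deming_fit([1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8])
print(a, b)   # roughly 1.95 and 0.13, close to slope 2 and intercept 0
```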
2022-01-27 21:12:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 78, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7024406790733337, "perplexity": 367.20782905829407}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305288.57/warc/CC-MAIN-20220127193303-20220127223303-00387.warc.gz"}
https://wikimili.com/en/Understanding
# Understanding

Understanding is a psychological process related to an abstract or physical object, such as a person, situation, or message, whereby one is able to think about it and use concepts to deal adequately with that object. Understanding is a relation between the knower and an object of understanding. Understanding implies abilities and dispositions with respect to an object of knowledge that are sufficient to support intelligent behaviour. [1]
## Shallow and deep Someone who has a more sophisticated understanding, more predictively accurate understanding, and/or an understanding that allows them to make explanations that others commonly judge to be better, of something, is said to understand that thing "deeply". Conversely, someone who has a more limited understanding of a thing is said to have a "shallow" understanding. However, the depth of understanding required to usefully participate in an occupation or activity may vary greatly. For example, consider multiplication of integers. Starting from the most shallow level of understanding, we have (at least) the following possibilities: Multiplication is one of the four elementary mathematical operations of arithmetic; with the others being addition, subtraction and division. An integer is a number that can be written without a fractional component. For example, 21, 4, 0, and −2048 are integers, while 9.75, 5 1/2, and 2 are not. 1. A small child may not understand what multiplication is, but may understand that it is a type of mathematics that they will learn when they are older at school. This is "understanding of context"; being able to put an as-yet not-understood concept into some kind of context. Even understanding that a concept is not part of one's current knowledge is, in itself, a type of understanding (see the Dunning–Kruger effect, which is about people who do not have a good understanding of what they do not know). 2. A slightly older child may understand that multiplication of two integers can be done, at least when the numbers are between 1 and 12, by looking up the two numbers in a times table. They may also be able to memorise and recall the relevant times table in order to answer a multiplication question such as "2 times 4 is what?". This is a simple form of operational understanding; understanding a question well enough to be able to do the operations necessary to be able to find an answer. 3. A yet older child may understand that multiplication of larger numbers can be done using a different method, such as long multiplication, or using a calculator. This is a more advanced form of operational understanding because it supports answering a wider range of questions of the same type. 4. A teenager may understand that multiplication is repeated addition, but not understand the broader implications of this. For example, when their teacher refers to multiplying 6 by 3 as "adding 6 to itself 3 times", they may understand that the teacher is talking about two entirely equivalent things. However, they might not understand how to apply this knowledge to implement multiplication as an algorithm on a computer using only addition and looping as basic constructs. This level of understanding is "understanding a definition" (or "understanding the definition" when a concept only has one definition). 5. A teenager may also understand the mathematical idea of abstracting over individual whole numbers as variables, and how to efficiently (i.e. not via trial-and-error) solve algebraic equations involving multiplication by such variables, such as ${\displaystyle 2x=6}$. This is "relational understanding"; understanding how multiplication relates to division. 6. An undergraduate studying mathematics may come to learn that "the integers equipped with multiplication" is merely one example of a range of mathematical structures called monoids, and that theorems about monoids apply equally well to multiplication and other types of monoids. 
For the purpose of operating a cash register at McDonald's, a person does not need a very deep understanding of the multiplication involved in calculating the total price of two Big Macs. However, for the purpose of contributing to number theory research, a person would need to have a relatively deep understanding of multiplication — along with other relevant arithmetical concepts such as division and prime numbers. A cash register, also referred to as a till in the United Kingdom and other Commonwealth countries, is a mechanical or electronic device for registering and calculating transactions at a point of sale. It is usually attached to a drawer for storing cash and other valuables. The cash register is also usually attached to a printer, that can print out receipts for record keeping purposes. McDonald's is an American fast food company, founded in 1940 as a restaurant operated by Richard and Maurice McDonald, in San Bernardino, California, United States. They rechristened their business as a hamburger stand, and later turned the company into a franchise, with the Golden Arches logo being introduced in 1953 at a location in Phoenix, Arizona. In 1955, Ray Kroc, a businessman, joined the company as a franchise agent and proceeded to purchase the chain from the McDonald brothers. McDonald's had its original headquarters in Oak Brook, Illinois, but moved its global headquarters to Chicago in early 2018. The Big Mac is a hamburger sold by international fast food restaurant chain McDonald's. It was introduced in the Greater Pittsburgh area, United States, in 1967 and nationwide in 1968. It is one of the company's flagship products. ## Assessment It is possible for a person, or a piece of "intelligent" software, that in reality only has a shallow understanding of a topic, to appear to have a deeper understanding than they actually do, when the right questions are asked of it. The most obvious way this can happen is by memorization of correct answers to known questions, but there are other, more subtle ways that a person or computer can (intentionally or otherwise) deceive somebody about their level of understanding, too. This is particularly a risk with artificial intelligence, in which the ability of a piece of artificial intelligence software to very quickly try out millions of possibilities (attempted solutions, theories, etc.) could create a misleading impression of the real depth of its understanding. Supposed AI software could in fact come up with impressive answers to questions that were difficult for unaided humans to answer, without really understanding the concepts at all, simply by dumbly applying rules very quickly. (However, see the Chinese room argument for a controversial philosophical extension of this argument.) Memorization is the process of committing something to memory. Mental process undertaken in order to store in memory for later recall items such as experiences, names, appointments, addresses, telephone numbers, lists, stories, poems, pictures, maps, diagrams, facts, music or other visual, auditory, or tactical information. In computer science, artificial intelligence (AI), sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals. Computer science defines AI research as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals. 
More specifically, Kaplan and Haenlein define AI as “a system’s ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation”. Colloquially, the term "artificial intelligence" is used to describe machines that mimic "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving". The Chinese room argument holds that a program cannot give a computer a "mind", "understanding" or "consciousness", regardless of how intelligently or human-like the program may make the computer behave. The argument was first presented by philosopher John Searle in his paper, "Minds, Brains, and Programs", published in Behavioral and Brain Sciences in 1980. It has been widely discussed in the years since. The central point of the argument is a thought experiment known as the Chinese room. Examinations are designed to assess students' understanding (and sometimes also other things such as knowledge and writing abilities) without falling prey to these risks. They do this partly by asking multiple different questions about a topic to reduce the risk of measurement error, and partly by forbidding access to reference works and the outside world to reduce the risk of someone else's understanding being passed off as one's own. Because of the faster and more accurate computation and memorization abilities of computers, such tests would arguably often have to be modified if they were to be used to accurately assess the understanding of an artificial intelligence. A test or examination is an assessment intended to measure a test-taker's knowledge, skill, aptitude, physical fitness, or classification in many other topics. A test may be administered verbally, on paper, on a computer, or in a predetermined area that requires a test taker to demonstrate or perform a set of skills. Tests vary in style, rigor and requirements. For example, in a closed book test, a test taker is usually required to rely upon memory to respond to specific items whereas in an open book test, a test taker may use one or more supplementary tools such as a reference book or calculator when responding. A test may be administered formally or informally. An example of an informal test would be a reading test administered by a parent to a child. A formal test might be a final examination administered by a teacher in a classroom or an I.Q. test administered by a psychologist in a clinic. Formal testing often results in a grade or a test score. A test score may be interpreted with regards to a norm or criterion, or occasionally both. The norm may be established independently, or by statistical analysis of a large number of participants. An exam is meant to test a persons knowledge or willingness to give time to manipulate that subject. Conversely, it is even easier for a person or artificial intelligence to fake a shallower level of understanding than they actually have; they simply need to respond with the same kind of answers that someone with a more limited understanding, or no understanding, would respond with — such as "I don't know", or obviously wrong answers. This is relevant for judges in Turing tests; it is unlikely to be effective to simply ask the respondents to mentally calculate the answer to a very difficult arithmetical question, because the computer is likely to simply dumb itself down and pretend not to know the answer. 
## As a model Gregory Chaitin, a noted computer scientist, propounds a view that comprehension is a kind of data compression. [2] In his essay "The Limits of Reason", he argues that understanding something means being able to figure out a simple set of rules that explains it. For example, we understand why day and night exist because we have a simple model—the rotation of the earth—that explains a tremendous amount of data—changes in brightness, temperature, and atmospheric composition of the earth. We have compressed a large amount of information by using a simple model that predicts it. Similarly, we understand the number 0.33333... by thinking of it as one-third. The first way of representing the number requires five concepts ("0", "decimal point", "3", "infinity", "infinity of 3"); but the second way can produce all the data of the first representation, but uses only three concepts ("1", "division", "3"). Chaitin argues that comprehension is this ability to compress data. ## Components ### Cognition and affect Cognition is the process by which sensory inputs are transformed. Affect refers to the experience of feelings or emotions. Cognition and affect constitute understanding. ## Religious perspectives In Catholicism and Anglicanism, understanding is one of the Seven gifts of the Holy Spirit. ## Related Research Articles Discrete mathematics is the study of mathematical structures that are fundamentally discrete rather than continuous. In contrast to real numbers that have the property of varying "smoothly", the objects studied in discrete mathematics – such as integers, graphs, and statements in logic – do not vary smoothly in this way, but have distinct, separated values. Discrete mathematics therefore excludes topics in "continuous mathematics" such as calculus or Euclidean geometry. Discrete objects can often be enumerated by integers. More formally, discrete mathematics has been characterized as the branch of mathematics dealing with countable sets. However, there is no exact definition of the term "discrete mathematics." Indeed, discrete mathematics is described less by what is included than by what is excluded: continuously varying quantities and related notions. In mathematics, an identity function, also called an identity relation or identity map or identity transformation, is a function that always returns the same value that was used as its argument. In equations, the function is given by f(x) = x. Mathematics includes the study of such topics as quantity, structure, space, and change. In mathematics, a group is a set equipped with a binary operation which combines any two elements to form a third element in such a way that four conditions called group axioms are satisfied, namely closure, associativity, identity and invertibility. One of the most familiar examples of a group is the set of integers together with the addition operation, but groups are encountered in numerous areas within and outside mathematics, and help focusing on essential structural aspects, by detaching them from the concrete nature of the subject of the study. In abstract algebra, a branch of mathematics, a monoid is an algebraic structure with a single associative binary operation and an identity element. In mathematics, the natural numbers are those used for counting and ordering. In common mathematical terminology, words colloquially used for counting are "cardinal numbers" and words connected to ordering represent "ordinal numbers". 
The natural numbers can, at times, appear as a convenient set of codes ; that is, as what linguists call nominal numbers, foregoing many or all of the properties of being a number in a mathematical sense. Division is one of the four basic operations of arithmetic, the others being addition, subtraction, and multiplication. The mathematical symbols used for the division operator are the obelus (÷) and the slash (/). Addition is one of the four basic operations of arithmetic; the others are subtraction, multiplication and division. The addition of two whole numbers is the total amount of those values combined. For example, in the adjacent picture, there is a combination of three apples and two apples together, making a total of five apples. This observation is equivalent to the mathematical expression "3 + 2 = 5" i.e., "3 add 2 is equal to 5". Commonsense reasoning is one of the branches of artificial intelligence (AI) that is concerned with simulating the human ability to make presumptions about the type and essence of ordinary situations they encounter every day. These assumptions include judgments about the physical properties, purpose, intentions and behavior of people and objects, as well as possible outcomes of their actions and interactions. A device that exhibits commonsense reasoning will be capable of predicting results and drawing conclusions that are similar to humans' folk psychology and naive physics. Mathematics encompasses a growing variety and depth of subjects over history, and comprehension requires a system to categorize and organize the many subjects into more general areas of mathematics. A number of different classification schemes have arisen, and though they share some similarities, there are differences due in part to the different purposes they serve. In addition, as mathematics continues to be developed, these classification schemes must change as well to account for newly created areas or newly discovered links between different areas. Classification is made more difficult by some subjects, often the most active, which straddle the boundary between different areas. Principles and Standards for School Mathematics (PSSM) are guidelines produced by the National Council of Teachers of Mathematics (NCTM) in 2000, setting forth recommendations for mathematics educators. They form a national vision for preschool through twelfth grade mathematics education in the US and Canada. It is the primary model for standards-based mathematics. Extremal combinatorics is a field of combinatorics, which is itself a part of mathematics. Extremal combinatorics studies how large or how small a collection of finite objects can be, if it has to satisfy certain restrictions. Algorithmic information theory is a subfield of information theory and computer science that concerns itself with the relationship between computation and information. According to Gregory Chaitin, it is "the result of putting Shannon's information theory and Turing's computability theory into a cocktail shaker and shaking vigorously." A proof of impossibility, also known as negative proof, proof of an impossibility theorem, or negative result, is a proof demonstrating that a particular problem cannot be solved, or cannot be solved in general. Often proofs of impossibility have put to rest decades or centuries of work attempting to find a solution. To prove that something is impossible is usually much harder than the opposite task; it is necessary to develop a theory. 
Impossibility theorems are usually expressible as universal propositions in logic. Zero is an even number. In other words, its parity—the quality of an integer being even or odd—is even. This can be easily verified based on the definition of "even": it is an integer multiple of 2, specifically 0 × 2. As a result, zero shares all the properties that characterize even numbers: for example, 0 is neighbored on both sides by odd numbers, any decimal integer has the same parity as its last digit—so, since 10 is even 0 will be even, and if y is even then y + x has the same parity as x—and x and 0 + x always have the same parity. In computability theory and computational complexity theory, an undecidable problem is a decision problem for which it is proved to be impossible to construct an algorithm that always leads to a correct yes-or-no answer. The halting problem is an example: it can be proven that there is no algorithm that correctly determines whether arbitrary programs eventually halt when run. Algebra is one of the broad parts of mathematics, together with number theory, geometry and analysis. In its most general form, algebra is the study of mathematical symbols and the rules for manipulating these symbols; it is a unifying thread of almost all of mathematics. It includes everything from elementary equation solving to the study of abstractions such as groups, rings, and fields. The more basic parts of algebra are called elementary algebra; the more abstract parts are called abstract algebra or modern algebra. Elementary algebra is generally considered to be essential for any study of mathematics, science, or engineering, as well as such applications as medicine and economics. Abstract algebra is a major area in advanced mathematics, studied primarily by professional mathematicians. In algebra, which is a broad division of mathematics, abstract algebra is the study of algebraic structures. Algebraic structures include groups, rings, fields, modules, vector spaces, lattices, and algebras. The term abstract algebra was coined in the early 20th century to distinguish this area of study from the other parts of algebra. In arithmetic geometry, a Frobenioid is a category with some extra structure that generalizes the theory of line bundles on models of finite extensions of global fields. Frobenioids were introduced by Shinichi Mochizuki (2008). The word "Frobenioid" is a portmanteau of Frobenius and monoid, as certain Frobenius morphisms between Frobenioids are analogues of the usual Frobenius morphism, and some of the simplest examples of Frobenioids are essentially monoids. ## References 1. Bereiter, Carl. "Education and mind in the Knowledge Age". Archived from the original on 2006-02-25. 2. Chaitin, Gregory (2006), The Limits Of Reason (PDF), archived from the original (PDF) on 2016-03-04
2019-08-21 10:15:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5142683982849121, "perplexity": 828.7735725826643}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315865.44/warc/CC-MAIN-20190821085942-20190821111942-00321.warc.gz"}
https://nrich.maths.org/public/topic.php?code=32&cl=2&cldcmpid=1014
Resources tagged with: Multiplication & division

There are 135 results.

- What Two ...? (Age 7 to 11, Short Challenge Level): 56 406 is the product of two consecutive numbers. What are these two numbers?
- Tom's Number (Age 7 to 11, Challenge Level): Work out Tom's number from the answers he gives his friend. He will only answer 'yes' or 'no'.
- Clever Santa (Age 7 to 11, Challenge Level): All the girls would like a puzzle each for Christmas and all the boys would like a book each. Solve the riddle to find out how many puzzles and books Santa left.
- Surprising Split (Age 7 to 11, Challenge Level): Does this 'trick' for calculating multiples of 11 always work? Why or why not?
- What Is Ziffle? (Age 7 to 11, Challenge Level): Can you work out what a ziffle is on the planet Zargon?
- 1, 2, 3, 4, 5 (Age 7 to 11, Challenge Level): Using the numbers 1, 2, 3, 4 and 5 once and only once, and the operations x and ÷ once and only once, what is the smallest whole number you can make?
- Ducking and Dividing (Age 7 to 11, Challenge Level): Your vessel, the Starship Diophantus, has become damaged in deep space. Can you use your knowledge of times tables and some lightning reflexes to survive?
- Being Resilient - Primary Number (Age 5 to 11, Challenge Level): Number problems at primary level that may require resilience.
- Divide it Out (Age 7 to 11, Challenge Level): What is the lowest number which always leaves a remainder of 1 when divided by each of the numbers from 2 to 10?
- (Age 11 to 14, Challenge Level): Visitors to Earth from the distant planet of Zub-Zorna were amazed when they found out that when the digits in this multiplication were reversed, the answer was the same! Find a way to explain...
- Amy's Dominoes (Age 7 to 11, Challenge Level): Amy has a box containing domino pieces but she does not think it is a complete set. She has 24 dominoes in her box and there are 125 spots on them altogether. Which of her domino pieces are missing?
- One O Five (Age 11 to 14, Challenge Level): You can work out the number someone else is thinking of as follows. Ask a friend to think of any natural number less than 100. Then ask them to tell you the remainders when this number is divided by...
- One Million to Seven (Age 7 to 11, Challenge Level): Start by putting one million (1 000 000) into the display of your calculator. Can you reduce this to 7 using just the 7 key and add, subtract, multiply, divide and equals as many times as you like?
- Repeaters (Age 11 to 14, Challenge Level): Choose any 3 digits and make a 6 digit number by repeating the 3 digits in the same order (e.g. 594594). Explain why whatever digits you choose the number will always be divisible by 7, 11 and 13. (A one-line factorization is sketched after this list.)
- Super Shapes (Age 7 to 11, Short Challenge Level): The value of the circle changes in each of the following problems. Can you discover its value in each problem?
- Throw a 100 (Age 7 to 11, Challenge Level): Can you score 100 by throwing rings on this board? Is there more than one way to do it?
- Oh! Hidden Inside? (Age 11 to 14, Challenge Level): Find the number which has 8 divisors, such that the product of the divisors is 331776.
- (Age 11 to 14, Challenge Level): Powers of numbers behave in surprising ways. Take a look at some of these and try to explain why they are true.
- Napier's Bones (Age 7 to 11, Challenge Level): The Scot, John Napier, invented these strips about 400 years ago to help calculate multiplication and division. Can you work out how to use Napier's bones to find the answer to these multiplications?
- Difficulties with Division (Age 5 to 11): This article for teachers looks at how teachers can use problems from the NRICH site to help them teach division.
- Looking at Lego (Age 7 to 11, Challenge Level): This task offers an opportunity to explore all sorts of number relationships, but particularly multiplication.
- Two Many (Age 11 to 14, Challenge Level): What is the least square number which commences with six two's?
- Book Codes (Age 7 to 11, Challenge Level): Look on the back of any modern book and you will find an ISBN code. Take this code and calculate this sum in the way shown. Can you see what the answers always have in common?
- Multiples Sudoku (Age 11 to 14, Challenge Level): Each clue in this Sudoku is the product of the two numbers in adjacent cells.
- (Age 11 to 14, Challenge Level): If you take a three by three square on a 1-10 addition square and multiply the diagonally opposite numbers together, what is the difference between these products? Why?
- Learning Times Tables (Age 5 to 11, Challenge Level): In November, Liz was interviewed for an article on a parents' website about learning times tables. Read the article here.
- Jumping (Age 7 to 11, Challenge Level): After training hard, these two children have improved their results. Can you work out the length or height of their first jumps?
- Escape from the Castle (Age 7 to 11, Challenge Level): Skippy and Anna are locked in a room in a large castle. The key to that room, and all the other rooms, is a number. The numbers are locked away in a problem. Can you help them to get out?
- Remainders (Age 7 to 14, Challenge Level): I'm thinking of a number. My number is both a multiple of 5 and a multiple of 6. What could my number be?
- The Clockmaker's Birthday Cake (Age 7 to 11, Challenge Level): The clockmaker's wife cut up his birthday cake to look like a clock face. Can you work out who received each piece?
- Eminit (Age 11 to 14, Challenge Level): The number 8888...88M9999...99 is divisible by 7 and it starts with the digit 8 repeated 50 times and ends with the digit 9 repeated 50 times. What is the value of the digit M?
- Long Multiplication (Age 11 to 14, Challenge Level): A 3 digit number is multiplied by a 2 digit number and the calculation is written out as shown with a digit in place of each of the *'s. Complete the whole multiplication sum.
- Like Powers (Age 11 to 14, Challenge Level): Investigate $1^n + 19^n + 20^n + 51^n + 57^n + 80^n + 82^n$ and $2^n + 12^n + 31^n + 40^n + 69^n + 71^n + 85^n$ for different values of n.
- Countdown (Age 7 to 14, Challenge Level): Here is a chance to play a version of the classic Countdown Game.
- Fingers and Hands (Age 7 to 11, Challenge Level): How would you count the number of fingers in these pictures?
- X Is 5 Squares (Age 7 to 11, Challenge Level): Can you arrange 5 different digits (from 0 - 9) in the cross in the way described?
- Today's Date - 01/06/2009 (Age 5 to 11, Challenge Level): What do you notice about the date 03.06.09? Or 08.01.09? This challenge invites you to investigate some interesting dates yourself.
- Domino Numbers (Age 7 to 11, Challenge Level): Can you see why 2 by 2 could be 5? Can you predict what 2 by 10 will be?
- Machines (Age 7 to 11, Challenge Level): What is happening at each box in these machines?
- The Pied Piper of Hamelin (Age 7 to 11, Challenge Level): This problem is based on the story of the Pied Piper of Hamelin. Investigate the different numbers of people and rats there could have been if you know how many legs there are altogether!
- Four Go for Two (Age 7 to 11, Challenge Level): Four Go game for an adult and child. Will you be the first to have four numbers in a row on the number line?
- Playing with Number Upper Primary Teacher (Age 7 to 11, Challenge Level): Resources to support understanding of multiplication and division through playing with number.
- Square Subtraction (Age 7 to 11, Challenge Level): Look at what happens when you take a number, square it and subtract your answer. What kind of number do you get? Can you prove it?
- Penta Post (Age 7 to 11, Challenge Level): Here are the prices for 1st and 2nd class mail within the UK. You have an unlimited number of each of these stamps. Which stamps would you need to post a parcel weighing 825g?
- Next Number (Age 7 to 11, Short Challenge Level): Find the next number in this pattern: 3, 7, 19, 55 ...
- Being Resourceful - Primary Number (Age 5 to 11, Challenge Level): Number problems at primary level that require careful consideration.
- Magic Potting Sheds (Age 11 to 14, Challenge Level): Mr McGregor has a magic potting shed. Overnight, the number of plants in it doubles. He'd like to put the same number of plants in each of three gardens, planting one garden each day. Can he do it?
- X Marks the Spot (Age 11 to 14, Challenge Level): When the number x 1 x x x is multiplied by 417 this gives the answer 9 x x x 0 5 7. Find the missing digits, each of which is represented by an "x".
- Dice and Spinner Numbers (Age 7 to 11, Challenge Level): If you had any number of ordinary dice, what are the possible ways of making their totals 6? What would the product of the dice be each time?
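As a quick aside on the 'Repeaters' entry above, the divisibility claim follows from a one-line factorization (our illustration, not part of the original listing):

```latex
\overline{abcabc} = 1000 \cdot \overline{abc} + \overline{abc}
                  = 1001 \cdot \overline{abc}
                  = 7 \cdot 11 \cdot 13 \cdot \overline{abc}.
```

For example, $594594 = 594 \times 1001 = 594 \times 7 \times 11 \times 13$.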
2020-01-17 20:14:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2351791262626648, "perplexity": 1471.6054099593168}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250590107.3/warc/CC-MAIN-20200117180950-20200117204950-00290.warc.gz"}
https://www.etutorworld.com/math/pre-algebra-online-tutoring/ordering-of-rational-numbers.html
# Rational Numbers

A rational number is one that can be written as a ratio of two integers — that is, as a fraction, and hence also as a decimal or a percentage. An irrational number, by contrast, is a number which cannot be expressed as a fraction of two integers (we can say that it cannot be expressed as a ratio). For instance, consider the square root of 3. It is irrational because it cannot be written as a fraction whose numerator and denominator are both integers.

#### Integers and Rational numbers

Natural numbers are the numbers you usually count with, and they continue up to infinity. Whole numbers are all natural numbers including 0. Example: 0, 1, 2, 3, 4… Integers are all whole numbers together with their negatives. Example: …, -6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6, … Every integer is a rational number, as each integer n can be written in the form n/1. For example, 5 = 5/1, and so 5 is a rational number. Fractions whose numerator and denominator are integers, such as 1/2 or 3/4, are also rational. (A short code sketch for checking and ordering rationals follows the exercises below.)

Exercises:

1. Arrange the following integers in ascending order:
   1. -17, 1, 0, -15, 16, 8, 33, 6
   2. -44, -66, 0, 23, 41, 55, 15, 11, 10, -1, -2
2. Identify whether each of the following is a rational or an irrational number:
   1. 0.5
   2. 9.0
   3. 5
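The following minimal Python sketch (our addition for illustration, not part of the original page) shows both facts at once — that integers are rational because n = n/1, and that rational numbers can be ordered exactly:

```python
from fractions import Fraction

# Every integer n is rational: it equals the fraction n/1.
print(Fraction(5, 1) == 5)  # True

# Ordering rational numbers: Fraction compares exactly, so sorting
# a mixed list of integers and fractions gives the correct order.
numbers = [Fraction(-17), Fraction(16), Fraction(3, 4), Fraction(-1, 2), Fraction(0)]
print(sorted(numbers))
# [Fraction(-17, 1), Fraction(-1, 2), Fraction(0, 1), Fraction(3, 4), Fraction(16, 1)]
```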
2020-07-09 23:27:49
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.894982635974884, "perplexity": 625.9481456761353}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655902377.71/warc/CC-MAIN-20200709224746-20200710014746-00324.warc.gz"}
https://read.dukeupress.edu/demography/article/54/6/2181/167849/Compulsory-Schooling-Laws-and-Migration-Across
## Abstract

Educational attainment is a key factor for understanding why some individuals migrate and others do not. Compulsory schooling laws, which determine an individual's minimum level of education, can potentially affect migration. We test whether and how increasing the length of compulsory schooling influences migration of affected cohorts across European countries, a context where labor mobility is essentially free. We construct a novel database that includes information for 31 European countries on compulsory education reforms passed between 1950 and 1990. Combining this data with information on recent migration flows by cohorts, we find that an additional year of compulsory education reduces the number of individuals from affected cohorts who migrate in a given year by 9 %. Our results rely on the exogeneity of compulsory schooling laws. A variety of empirical tests indicate that European legislators did not pass compulsory education reforms as a reaction to changes in emigration rates or educational attainment.

## Introduction

Educational attainment is a key factor for understanding why some individuals migrate and others do not (see, e.g., Borjas 1987). For instance, in most European countries, individuals with secondary educational attainment display much lower emigration rates compared with both the primary- or tertiary-educated. Compulsory schooling laws determine an individual's minimum level of education, and they thus have the potential to affect migration. Our article addresses how changes in compulsory schooling laws affect migration across European countries. To the best of our knowledge, we are the first to analyze how compulsory schooling laws affect international migration in a multicountry setting. Results from our analysis are of particular interest to policymakers who might be concerned whether and how much of a country's investments in education could be lost to "brain drain."

Our analysis tests how across-country and across-time differences in compulsory schooling laws influence European migration of affected cohorts, exploiting that education laws change over time and that hence different cohorts face different lengths of compulsory schooling in each country. We construct a novel database that contains information on changes to the length of compulsory education passed between 1950 and 1990 in 31 European countries: all 28 European Union (EU) countries plus Norway, Liechtenstein, and Macedonia. We merge this database with recent cohort data on migration flows for 2008–2012. The European setting with basically unrestricted labor mobility is ideal for our analysis because it allows us to isolate the role of education policies.1 Many other countries tend to place stricter limits on the entry of low-educated compared with highly educated individuals, which makes it difficult to disentangle the effects of education policies from those of migration restrictions.2

To isolate the effect of compulsory schooling laws in place when a certain cohort was in school on that cohort's migration decisions later in life, we have to control for other determinants of migration choices that could be correlated with such laws.
Including country, age group, and year fixed effects as well as the interactions of country and age group with year dummy variables, we are able to account for any macroeconomic variables that do not vary by age group (e.g., GDP per capita) as well as cohort-specific factors that influence migration propensities (e.g., different labor market prospects of young and old individuals). Additionally, the European Union has 24 official working languages that are quite distinct.3 Aparicio Fenoll and Kuehn (2016) showed that exposure to foreign language classes during compulsory education increases migration to countries where these languages are spoken. Hence, reforms to compulsory education that also alter the particular content of school curricula related to foreign languages might have an additional and potentially different effect on migration. Thus in our estimations, we also control for the fact that certain cohorts in some countries learned foreign languages during compulsory education but others did not. Our findings, which thus result from refined comparisons of cohorts, reveal that one additional year of compulsory schooling reduces migration of affected cohorts in a given year by 9 %. One potential explanation for this finding is that increases in the length of compulsory schooling shift a significant fraction of the population from low to medium educational attainment and, as mentioned earlier, emigration rates of medium-educated individuals are lower compared with those of low-educated individuals in the majority of European countries (see Fig. S1, Online Resource 1).4 In line with this hypothesis, we find that additional years of compulsory education effectively translated into higher average educational attainment. Hence, although this article focuses on the impact of compulsory schooling laws on migration, our findings also shed light on the more general question of how educational attainment affects migration. On the other hand, our analysis has the advantage of allowing for general equilibrium effects of education policies on migration that go beyond their effect on educational attainment. For instance, increasing the length of compulsory schooling is likely to improve labor market opportunities for teachers and might reduce crime rates, given that individuals are obliged to stay in school for additional years. Such aspects might additionally affect individuals’ future migration choices. The validity of our results relies on the exogeneity of education reforms with respect to migration and educational attainment. Following Lleras-Muney (2002) and Landes and Solmon (1972), we perform two empirical tests regarding the potential relationship of compulsory schooling reforms with different socioeconomic variables, migration out-flows, and educational attainment measured around the time when reforms were passed. Similar to Aaronson and Mazumder (2011), we also add country-specific birth cohort trends to our main specification. Our results indicate that European lawmakers did not pass compulsory education reforms as a reaction to changes in migration rates or increased educational attainment. Our study is closely related to literature using education laws and related policies to estimate the causal effect of education on within-country migration.5 For instance, Machin et al. (2012) used a change in compulsory schooling laws in Norway and found education to increase internal mobility. Results for the effect of education on state-to-state migration in the United States are mixed. 
Malamud and Wozniak (2010b) used the risk of being drafted for the Vietnam War as an instrument for college-level education and estimated a positive causal effect of education on migration. They also found that when years of schooling were instrumented by quarter of birth, the estimates turned negative but not significant (Malamud and Wozniak 2010a). As the authors suggested, if the effect of education on migration depends on individuals' baseline educational attainments, such contrasting results might arise. McHenry (2013) used changes to compulsory schooling laws across U.S. states and showed that for lower levels of education, additional attainment has a negative effect on state-to-state migration, similar to our findings.6

In the context of international migration, most studies have used observed educational attainment to predict wages in the destination country. Taking the number of immigrants as given, their focus has been on the sign of the selection effect. Results from these studies, all based on Mexican-U.S. migration, have ranged from negative self-selection of immigrants with individuals from the bottom of the skill distribution being more likely to migrate (Fernández-Huertas Moraga 2011), to positive self-selection (Chiquiar and Hanson 2005), to a U-shaped relationship (Caponi 2010).7

Our analysis of how compulsory schooling laws affect international migration is important because findings regarding the relationship between education policies and internal mobility cannot simply be extrapolated to the context of international migration. Within a country, educational attainment is easily transferable, but education obtained in one country might not be fully recognized in another country (see Chiswick 2008; Greenwood and McDowell 1991). Furthermore, the degree to which human capital is transferable across countries tends to depend on foreign language proficiency. Hence, the relationship between compulsory schooling laws and international migration is likely to be governed by a different set of rules. We account for some of these differences in our estimations by controlling for cohorts' exposure to foreign languages during compulsory education and the number of years that cohorts in different countries have lived in the EU, a relatively integrated labor market that has recently tried to make at least university degrees comparable across countries.

## Estimation Strategy

Our estimation strategy makes use of all three dimensions of variation in the data, comparing individuals across time, age groups, and countries. We could have estimated the effects of compulsory schooling laws on migration by comparing individuals only across age groups, considering different cohorts from the same country who were affected by different laws. However, such a specification would be affected by differences in the propensities to migrate by age. Another alternative would have been to compare individuals of the same age from different countries. However, nationals of different countries have different propensities to migrate, independently of education policies. A third approach would be to observe individuals of a certain age and country at different points in time, using the fact that education laws change over time. However, because we have data on migration flows for only five years, variability over time is very limited. But even if we had additional data, migration patterns change over time.
We improve on these approaches by combining them all and using fixed effects to account for the potential problems mentioned earlier. Our empirical strategy compares migration decisions of (1) different cohorts from the same country who were exposed to different educational policies because of policy changes, and (2) identical cohorts from different countries who were exposed to different educational policies because of differences in legislation across countries. Using fixed effects, this strategy allows us to control for confounding factors that vary across age, time, and countries, and across pairwise combinations of age and time and countries and time. As a result, our estimated coefficients arise from refined comparisons of cohorts, and they are robust to the potential influence of a long list of unobserved factors.

To assess the effect of changes to the length of compulsory schooling on migration, we estimate the following model:

$m_{a,i,t} = \alpha_0 + \alpha_1 CS_{a,i,t} + \alpha_2 L_{a,i,t} + \alpha_3 X_{a,i,t-1} + \alpha_4 EU_{a,i,t} + \alpha_5 D_a + \alpha_6 D_i + \alpha_7 D_t + \alpha_8 D_{a,t} + \alpha_9 D_{i,t} + \varepsilon_{a,i,t}, \qquad (1)$

where $m_{a,i,t}$ is the natural logarithm of the number of individuals of age a who migrate from country i in year t. $CS_{a,i,t}$ denotes the number of years of compulsory schooling that individuals who are of age a in year t faced in country i when they were younger, and $L_{a,i,t}$ are indicator variables that take on a value of 1 if these individuals were taught foreign languages during their compulsory education. We include these last controls because exposure to foreign language classes during compulsory education has been linked to increased migration flows of affected cohorts to countries where these languages are spoken (see Aparicio Fenoll and Kuehn 2016).8 Because only certain European languages are studied during compulsory education, our set is restricted to English, German, French, Spanish, and Italian.9 $X_{a,i,t-1}$ are control variables, such as total population, the stock of migrants from country i living in other European countries, and the difference in unemployment rates between country i and other European countries. All three variables are measured in t − 1, and we include them disaggregated by age group. Although gravity models of migration typically include differences in GDP per capita across countries as explanatory variables, such differences in our specification are captured by interaction terms of year and country dummy variables. Including differences in unemployment rates by age group allows us to capture the fact that countries' relative economic attractiveness might vary by cohort. Following Bertrand et al. (2004), we cluster standard errors at the country level to allow for serial correlation in migration outflows over time.

We also include as a control variable the number of years that individuals in country i who are of age a in year t have lived in the EU ($EU_{a,i,t}$). Joining the EU implies that migration costs for citizens of new member countries are significantly reduced. How this affects their migration decisions depends on their age and on how many years of their working lives they could still spend in a different EU country. Hence, different migration patterns for different cohorts could arise. Including the variable $EU_{a,i,t}$ ensures that we are not erroneously attributing changes in migration patterns caused by a country's EU membership to changes in the length of compulsory education.
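To make the specification concrete, here is a minimal sketch of how Eq. (1) could be estimated with off-the-shelf tools. The input file and all column names (`log_outflow`, `comp_years`, and so on) are hypothetical placeholders — this is not the authors' code — with one row per country-age-year cell:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical input: one row per (country, age group, year) cell, with the
# cell-level variables described in the text. File name and column names are
# illustrative placeholders, not the authors' actual data.
df = pd.read_csv("migration_cells.csv")

formula = (
    "log_outflow ~ comp_years"                            # alpha_1: years of compulsory schooling
    " + lang_en + lang_de + lang_fr + lang_es + lang_it"  # foreign-language indicators L
    " + unemp_diff_lag + log_stock_lag + log_pop_lag"     # lagged age-specific controls X
    " + years_in_eu"                                      # EU tenure of the cohort
    " + C(age) + C(country) + C(year)"                    # D_a, D_i, D_t
    " + C(age):C(year) + C(country):C(year)"              # D_{a,t}, D_{i,t}
)

# Cluster-robust standard errors at the country level (Bertrand et al. 2004).
result = smf.ols(formula, data=df).fit(
    cov_type="cluster",
    cov_kwds={"groups": df["country"].astype("category").cat.codes},
)
print(result.params["comp_years"])  # the coefficient of interest
```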
Furthermore, the European context and our period of analysis (2008–2012) also call for the inclusion of interaction terms of country and year dummy variables, particularly because in 2009 and 2011, work restrictions in some countries for nationals of Central European countries that joined the EU in 2004 and 2007, respectively, were finally lifted.10

Our identification strategy makes use of the fact that education reforms that changed the length of compulsory schooling during the twentieth century for different cohorts generate within- and across-country variation. Figure 1 displays an example. Consider the situation of four individuals: A, B, C, and D. Individuals A and B are Portuguese citizens and were born in 1968 and 1985, respectively, while individuals C and D, also born in 1968 and 1985, are from Denmark. In Portugal in 1986, the length of compulsory schooling increased from six to nine years. Given a school entry age of 6, all individuals age 6 or younger in 1986—those born in 1980 or later—were affected by this reform. Individual B is hence assigned nine years of compulsory education, while individual A is assigned six years. In Denmark, on the other hand, the length of compulsory schooling increased from seven to nine years in 1971. Given a school entry age of 7, all individuals age 7 or younger in 1971—that is, those born in 1964 or later—were affected. Hence, both individuals from Denmark are assigned the same length of compulsory schooling of nine years. Controlling for a large variety of other potential determinants of migration, our main estimation then tests whether differences in the length of compulsory schooling have an effect on cohorts' subsequent migration decisions.

Although one expects additional years of compulsory schooling to lead to higher educational attainment, this need not be the case. Reforms might not be enforced, or even prior to reforms, individuals could already be staying in school beyond the minimum years required by law. To test whether changes to the length of compulsory schooling have an effect on actual educational attainment, we run the following regression:

$YS_{a,i} = \alpha_0 + \alpha_1 CS_{a,i} + \alpha_2 L_{a,i} + \alpha_3 X_{a,i} + \alpha_4 EU_{a,i} + \alpha_5 D_a + \alpha_6 D_i + \varepsilon_{a,i}, \qquad (2)$

where $YS_{a,i}$ denotes the average years of schooling of individuals of age a who live in country i. Variables on the right side of the equation are as defined before, but they lack the time dimension because we have information on average years of schooling for only one year (2010). Otherwise, by including the same controls as in our main specification, we make this regression as comparable to a first-stage estimation as possible.

Finally, to test for the potential endogeneity of education reforms, we run two regressions. First, Landes and Solmon (1972) proposed addressing endogeneity issues of education reforms by testing whether compulsory schooling laws can "predict" past educational attainment. If so, the authors argued, then exogeneity would not hold, and causality would be likely to run from educational attainment to compulsory schooling. Because our research question considers the effect of compulsory schooling laws on migration, our main concern is the exogeneity of education laws with respect to migration. To test for it, we estimate the following regression:

$E_{i,\tau-10} = \alpha_0 + \alpha_1 CS_{i,\tau} + \alpha_2 Y_{i,\tau} + \alpha_3 \Delta Y_{i,\tau-10} + \alpha_4 D_i + \alpha_5 D_\tau + \varepsilon_{i,\tau-10}, \qquad (3)$

where $E_{i,\tau-10}$ denotes country i's emigration rate measured 10 years prior to the reform passed in $\tau$.
$Y_{i,\tau}$ represents socioeconomic variables (GDP per capita, population growth, log of migration, share of the urban population, and average years of schooling) measured at the time of the reform, and $\Delta Y_{i,\tau-10}$ are their 10-year variation rates. Dummy variables for reform years are denoted by $D_\tau$. Although exogeneity of education reforms with respect to educational attainment in our context is of secondary importance, we also rerun the estimation with country i's average educational attainment 10 years prior to the reform as the dependent variable.

Second, in the spirit of Lleras-Muney (2002), we explore the potential determinants of compulsory schooling laws by running the following regression:

$CS_{i,\tau} = \alpha_0 + \alpha_1 Y_{i,\tau} + \alpha_2 \Delta Y_{i,\tau-5} + \alpha_3 \Delta Y_{i,\tau-10} + \alpha_4 D_i + \alpha_5 D_\tau + \varepsilon_{i,\tau}, \qquad (4)$

where $CS_{i,\tau}$ are years of compulsory schooling that were passed in year $\tau$, and we test for relationships with socioeconomic variables and their 5- and 10-year variation rates.

## Data

For our analysis, we use Eurostat data on immigration by five-year age groups and citizenship for 2008–2012, available for all 28 EU countries plus Norway, Liechtenstein, and Macedonia. In particular, we construct migrant outflows for each year by summing the number of citizens of each country who leave for any of the following destination countries: Austria, Belgium, Bulgaria, Croatia, Cyprus, Czech Republic, Denmark, Estonia, Finland, Germany, Greece, Hungary, Ireland, Italy, Latvia, Liechtenstein, Lithuania, Luxembourg, Macedonia, Malta, Netherlands, Norway, Poland, Romania, Slovakia, Slovenia, Spain, and Sweden. For arrivals in Germany and Austria, missing data for 2009–2012 and 2010, respectively, are complemented with data from the Statistisches Bundesamt and Statistik Austria. Data on migrants who arrive in the UK come from the International Passenger Survey of the Office for National Statistics (ONS). We also rely on Eurostat for data on national unemployment rates, total population, and the stock of migrants by country of origin. These three variables are considered disaggregated by five-year age groups and are measured one year prior to migration—that is, in 2007–2011.

We restrict our sample to young individuals aged 25–44, who are most likely to migrate for work-related reasons. We exclude individuals younger than age 25 because it is difficult to disentangle migration from education decisions for this group, particularly in the presence of a large-scale EU program (Erasmus) that provides subsidies for studying abroad.11 Regarding older workers, the number of years that individuals have to work to become eligible for pension payments varies widely across countries: for example, France and Spain require 15 years, while Germany requires 5 years. Hence, for older individuals, such policy aspects that are unrelated to individuals' educational attainment might influence their migration decisions.

Our database on compulsory schooling reforms that includes information on the length of compulsory education for each cohort in each country is mainly based on the following four sources: Brunello et al. (2009), Garrouste (2010), Hörner et al. (2007), and Murtin and Viarengo (2011). The assignment of years of compulsory schooling to age groups is not always straightforward. Thus, for age groups in which only some individuals were affected by changes to compulsory education, we construct a weighted average for years of compulsory schooling.12 As weights, we use the number of individuals of each exact age within the age group, which we obtain from Eurostat.
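As a concrete illustration of this weighted-average assignment, consider a five-year age group in which only the younger birth cohorts were affected by a reform; the sketch below uses made-up numbers, since the actual per-age counts come from Eurostat:

```python
# Years of compulsory schooling by exact age within a 25-29 age group:
# here, a reform raised compulsory schooling from 6 to 9 years for the
# three youngest cohorts. All numbers are illustrative, not actual data.
cs_by_age = {25: 9, 26: 9, 27: 9, 28: 6, 29: 6}
pop_by_age = {25: 1000, 26: 980, 27: 1020, 28: 990, 29: 1010}  # Eurostat-style counts

total_pop = sum(pop_by_age.values())
weighted_cs = sum(cs_by_age[a] * pop_by_age[a] for a in cs_by_age) / total_pop
print(round(weighted_cs, 2))  # 7.8 years assigned to the 25-29 cell
```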
Table S1 in Online Resource 1 displays all effective changes to the length of compulsory education for cohorts in our sample. A detailed description of these education reforms for all countries (including countries where no effective changes occurred) can be found in section A.1 of Online Resource 1. As mentioned earlier, our database also includes information on reforms regarding foreign languages in compulsory school curricula. For each cohort and country, we have information, mainly from the European Commission’s Education, Audiovisual and Culture Executive Agency (Eurydice), regarding the starting age for studying foreign languages during compulsory education as well as the specific languages studied. We summarize this information in Table S2 in Online Resource 1; in section A.1 of the online supplement, we describe these reforms in more detail. For our estimation that tests whether education reforms were effective at increasing educational attainment, we consider data on average years of education for each age group from Barro and Lee (2013) for 2010.13 For our exogeneity checks, we have data on migration outflows from various editions of the United Nations Statistical Yearbooks, available for 1950–1995 for 24 of the 31 countries in our sample.14 We use data from the OECD for GDP per capita, which are available beginning with 1960. However, for former communist countries (such as the Czech Republic, Croatia, Hungary, Poland, Slovakia, and Slovenia), this series starts only in 1990. To avoid losing observations, we hence set missing values for GDP per capita for 1960–1989 for these countries to 0, and we define an indicator variable for missing data. Finally, we again rely on data from Barro and Lee (2013) for past educational attainment and on World Bank data for population growth and the percentage of the population living in urban areas. These last three variables are available from 1960 onward for all countries except Macedonia. Thus, our final sample for the exogeneity check includes 23 countries.15 Table 1 provides summary statistics for all our variables. For the variables included in our main specification, we have observations for 620 cells defined by the combination of country, age group, and year. On average, 4,163 individuals in each age group from each country migrate each year. However, we observe significant variation in these migration flows. For instance, in 2009, only 1 individual aged 40–44 migrated from Liechtenstein, while 54,766 individuals aged 25–29 migrated from Poland. Average years of compulsory schooling are 9 years, ranging from 6 years (for older cohorts in most countries) to 13 years for younger cohorts in Germany. We observe a maximum difference of 32 percentage points in 2008 between Macedonia’s unemployment rate for individuals aged 25–29 and unemployment for the same age group in other European countries. Other control variables that we include into our main specification, disaggregated by age group and measured one year before migration, are the stock of migrants in other European countries and total population. Regarding the latter, each country has, on average, a little more than 1 million inhabitants per age group, ranging from only 2,224 individuals aged 25–29 in Liechtenstein to more than 7 million individuals aged 35–39 in Germany in 2008. Regarding the stock of migrants, each country has on average approximately 34,607 individuals of each age group living in other European countries—that is, more than eight times the average annual outflow. 
However, although only 52 individuals aged 40–44 from Liechtenstein were residing abroad in 2011, the stock of migrants from Romania aged 30–34 in other European countries was 407,107 in 2012. Finally, we also include indicator variables for cohorts who were exposed to different foreign languages during their compulsory education. More than two-thirds of individuals in our sample potentially learned English, followed by 47 % and 43 % who could have learned German or French, respectively. Only recently, individuals in most countries can also choose Spanish or Italian as compulsory foreign languages, affecting less than 10 % of individuals in our sample.

## Results

We first test whether changes to the length of compulsory schooling have an effect on actual educational attainment (see Eq. (2)). As mentioned earlier, we do not conduct a two-stage least squares (2SLS) estimation, but we make the regression as comparable to a first-stage estimation as possible.16 In particular, we regress average educational attainment for different age groups on our measure of years of compulsory schooling, and we control for the same variables as in our main specification. Table S3 in Online Resource 1 shows the results. The estimated coefficients indicate that policies that increased the length of compulsory schooling were effective at increasing average years of education of affected cohorts. Given that already prior to reforms, individuals could have been staying in school longer, the increase is less than proportional. In particular, we find that one additional year of compulsory schooling increases average educational attainment of affected cohorts by 0.26 years. Our estimate thus is close to the additional 0.2 and 0.23 years that Oreopoulos (2007) estimated for an increase in the school-leaving age from 15 to 16 years in the United States and Canada, respectively. Thus, part of the effect of compulsory schooling reforms on indirectly related outcomes, such as migration, operates through changes in educational attainment.

We then turn to our main model, as specified in Eq. (1), which estimates the effect of changes to the length of compulsory education on migration. Results are displayed in Table 2. The first column corresponds to the basic regression that includes dummy variables for year, age group, and countries; lagged control variables for unemployment differences, stock of migrants, and population by age group; dummy variables for compulsory foreign language classes; and the number of years individuals have lived in the EU. In column 2, we add the interaction term for age and year. Column 3 presents results for the most complete specification that also includes the interaction term between country and year. Our coefficient of interest is negative, significant, and very stable across all specifications. One additional year of compulsory schooling reduces the number of individuals from the affected cohorts who migrate in a given year by 9 %. Having been exposed to English, French, German, Italian, or Spanish during compulsory education affects the odds of migrating to particular European countries (see Aparicio Fenoll and Kuehn 2016) but does not significantly affect the general odds of migrating. Table S4 in Online Resource 1 displays the full set of estimated coefficients. After we control for macroeconomic and cohort-specific factors by including age-by-year and country-by-year dummy variables, age-specific controls do not play a significant role; see column 3.
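Because the dependent variable in Eq. (1) is the log of the migration outflow, the 9 % figure is the usual transformation of the estimated coefficient; the numerical value of $\hat{\alpha}_1$ below is a back-of-envelope illustration consistent with the reported magnitude, not the exact estimate from Table 2:

```latex
\%\Delta m = 100\left(e^{\hat{\alpha}_1} - 1\right), \qquad
\hat{\alpha}_1 \approx -0.094 \;\Rightarrow\; 100\left(e^{-0.094} - 1\right) \approx -9\%.
```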
Coefficients in columns 1 and 2, when significant, show the expected signs. Higher unemployment rates and larger cohorts are related to more out-migration. On the other hand, older cohorts in countries that have been in the EU longer migrate less; and as more individuals of a cohort leave, the number of emigrants falls.17

## Discussion

Why would additional years of compulsory schooling reduce migration? One potential explanation is linked to the fact that an increase in the length of compulsory schooling shifts a significant fraction of the population from low to medium educational attainment, and medium-educated individuals display lower emigration rates compared with low-educated individuals in the majority of European countries. However, this opens up a second question: Why do medium-educated individuals migrate less? Or, put differently, why would returns to migration be higher for primary- and tertiary-educated individuals (i.e., at both tails of the education distribution)?

Several aspects could explain such a pattern in returns to migration. First, the median voter in most European countries has an intermediate level of education (see Fig. S2). As a result, labor market regulations in each country are likely to be tailored to medium-educated individuals. McHenry (2013), who found compulsory schooling to reduce mobility across U.S. states, suggested that staying in school allows individuals to strengthen their community networks, which might help them find jobs. Low- and high-educated individuals, on the other hand, could find foreign labor markets relatively more appealing. For instance, low-educated individuals could migrate to countries that offer higher minimum wages, better public services, and higher social benefits, as argued by the literature on welfare magnets initiated by Borjas (1999). Highly educated individuals could be responding more strongly to demand for their skills in foreign labor markets and/or so-called brain gain policies that some countries implement to attract highly skilled and talented individuals (e.g., researchers); see Giannoccolo (2012) or Mahroum (2005) for a survey of such policies in Europe and OECD countries, respectively.

Second, the model in Stark (1991: chapter 11) proposed a different mechanism that can give rise to a U-shaped pattern of returns to migration. The model is based on the facts that educational degrees are not automatically valid across countries and that degree recognition is costly in terms of time and money. In such an environment, only highly educated individuals find it worthwhile to pay the cost of degree recognition. For medium-educated individuals, on the other hand, the costs are too high relative to the additional wage they could obtain abroad; and without a recognized degree, they are able to earn only a minimum wage in the destination country. Hence, when a minimum wage and a higher wage level exist in some destination countries but wages are increasing in education in both destination and origin countries, a U-shaped pattern for migration can arise: individuals with higher education have their degree recognized and migrate, medium-educated individuals stay, and low-educated individuals migrate to earn the minimum wage of the destination country. The model's outcome is hence in line with European data on lower emigration rates of secondary-educated compared with both primary- or tertiary-educated individuals. The model also suggests a potential mechanism for our findings.
However, reasons unrelated to individuals' educational attainment could also explain why laws that increase the length of compulsory education might lead to less migration by affected cohorts. As Eisenberg (1988) described, typical arguments in favor of compulsory schooling laws in the United States during the nineteenth century suggested that such laws would lead to improved civic societies and would thus entail reductions in crime rates. These laws made schooling compulsory, and they are therefore different from reforms considered here that change the length of compulsory education. Nevertheless, cohorts affected by additional years of compulsory education will also live surrounded by better-educated peers, which for similar reasons could potentially reduce out-migration. Finally, increases in the length of compulsory schooling require hiring additional teachers. This implies improved local labor market opportunities and incomes, mostly for the parental generation of affected cohorts, and growing up under better economic circumstances could make these cohorts less likely to migrate.

### Endogeneity Concern

Since the seminal study by Angrist and Krueger (1991), compulsory schooling laws have been used in studies of the causal relationship between education and many outcomes, such as earnings (Harmon and Walker 1995), health (Brunello et al. 2013), and citizenship (Milligan et al. 2004). More relevant to the current study and as discussed earlier, changes in the length of compulsory education have also been used in analyses of the effect of education on internal migration in Norway (Machin et al. 2012) and the United States (Malamud and Wozniak 2010a; McHenry 2013). Exogeneity with respect to educational attainment is a necessary condition for these approaches to be valid. Previous studies have found mixed results regarding the exogeneity of compulsory schooling laws. Lleras-Muney (2002) for U.S. laws passed between 1915 and 1939, and Nasif Edwards (1978) for U.S. laws passed in 1960, established that those were exogenous to educational attainment. However, studies of other periods in U.S. history found that compulsory law changes might have been endogenous to educational attainment (e.g., for 1940–1955, see Eisenberg 1988; Landes and Solmon 1972; Nasif Edwards 1978; Stigler 1950: appendix B). Given these mixed results and that, to the best of our knowledge, there are no studies regarding the exogeneity of compulsory schooling laws in Europe, we formally address this issue.

Education reforms could be endogenous to migration or educational attainment in two ways: (1) reverse causality, if education reforms were enacted because of past migration outflows or past educational attainment, and (2) omitted variables, if determinants of cohort-specific migration patterns (e.g., differences in cohort-specific labor market conditions between origin and destination countries) persisted over time and had influenced reforms that were implemented when our cohorts were in school. Regarding reverse causality, education reforms are predetermined with respect to migration patterns in 2008–2012, but migration patterns could be highly persistent over time. However, education reforms could at most be driven by aggregate migration, whereas our approach considers cohort-specific migration flows. Still, it is possible that compulsory schooling reforms took place in countries that were exhibiting certain long-term trends in educational attainment for reasons unrelated to education reforms.
Therefore, we follow Aaronson and Mazumder (2011) by including country-specific birth-cohort trends into our main specification. Results in Table S5 in Online Resource 1 show that the estimated coefficient of interest is not significantly different from the one in our main regression.

From a political economy point of view, migration flows are unlikely to influence education policies. As discussed earlier, governments design their education policies focusing on the median voter who stays, instead of targeting those who migrate. Moreover, the time between the implementation of education reforms and students' completion of compulsory education and entry into the labor market—be it at home or abroad—is likely to exceed governments' mandates. Hence, because governments might not be able to reap the potential fruits in terms of more or less migration, migration flows or brain drain concerns are very unlikely determinants of policies affecting compulsory education.18 However, as Eisenberg (1988) noted, laws introducing compulsory schooling in the United States in the 1880s were passed after opposition to such laws was limited.19

To formally test for the potential endogeneity of the passage of compulsory schooling laws, we follow Landes and Solmon (1972). Using a broader set of reforms, including those that did not imply changes to compulsory schooling for cohorts in our sample (see Table S6, Online Resource 1), we estimate Eq. (3) and check whether reforms are able to predict past migration rates or past educational attainment. Results displayed in Table 3 show that none of the estimated coefficients are significant at the 10 % level. Hence, we find no evidence that compulsory education reforms in Europe were passed as a reaction to changes in migration or educational attainment.

Regarding omitted variables, to proxy labor market conditions in our specification, we control for differences in cohort-specific unemployment rates in the year before migration. Our estimated coefficients remain unchanged, suggesting that differences in labor market conditions between countries are not driving education reforms implemented in the past. One might think that unemployment rates at the time of the reform could be a relevant omitted variable; however, those are unlikely to affect migration patterns in 2008–2012, particularly after contemporaneous unemployment is controlled for. In general, to address these concerns, one would like to know more about the determinants of education reforms.20 To this end, our exogeneity check, following Lleras-Muney (2002) and as specified in Eq. (4), tests which variables are correlated with changes in the length of compulsory schooling. Again, we use the broader set of reforms (see Table S6, Online Resource 1) and regress the length of compulsory schooling on a variety of potentially related variables measured in the reform year regarding demographics (log migration outflow, population growth), urban development (percentage urban population), education (average years of schooling), and economic development (GDP per capita). We also include 5- and 10-year variation rates for all variables, as well as year and country fixed effects. Table 4 shows the results from this estimation. We estimate a significant relationship between the timing of compulsory schooling reforms and the growth rate in urban population 10 years prior to reforms. To ensure that our main result is not altered by this relationship, we conduct two robustness checks.
First, we check which countries are driving this result and rerun our main estimation excluding them (Malta, Norway, Portugal, and Sweden). Our results remain robust (see Table S7, Online Resource 1). Second, we include as an additional control variable into our main regression the 10-year variation rate in urban population measured at the time each cohort entered compulsory schooling. Results displayed in Table S8 of Online Resource 1 show that our coefficient of interest is not very different from the one in our main specification.

Our results suggest that compulsory schooling laws that were passed during the second half of the twentieth century in Europe are exogenous to changes in emigration rates or educational attainment. However, we cannot rule out that changes in migration rates or educational attainment led to governments adopting other policies that affected returns to migration (e.g., certain labor market policies). Nevertheless, for our main result to be due to alternative policies instead of education reforms, such policies would have had to affect returns to migration of different cohorts distinctively, and they also would have had to occur simultaneously with education reforms.

### Other Robustness Checks

After the end of communism, some countries in Central and Eastern Europe split. However, they share a common past, including education reforms that occurred before 1990. To account for this, we cluster standard errors by country using the following country definitions pertinent to 1990: (1) the Soviet Union (Estonia, Latvia, Lithuania), (2) Yugoslavia (Croatia, Macedonia, and Slovenia), and (3) Czechoslovakia (Czech Republic and Slovakia). Table S9 shows that results from this estimation are almost indistinguishable from those in our main specification. In the wake of this same event, some of these countries reduced the length of compulsory schooling. Given that the end of the Cold War also led to important migration outflows, one might wonder whether such reductions in compulsory schooling could be driving our results. However, this would be possible only if differences in migration outflows by age group persisted to the present. We rerun our main estimation excluding all countries that reduced the length of compulsory schooling at some point in time (see Table S10, Online Resource 1). Our coefficient of interest is somewhat smaller, but the difference is not statistically significant.

As mentioned earlier, four countries in our sample (Croatia, Liechtenstein, Macedonia, and Norway) did not belong to the EU during 2008–2012 and hence might have faced different migration restrictions than other countries in our sample. We repeat our estimation excluding these countries. Table S11 shows that results remain robust. Finally, evidence suggests that some countries implemented reforms gradually over many years. In this case, our assignment of years of compulsory education to cohorts is somewhat inaccurate. Maybe not surprisingly, when we run our main estimation excluding those countries (Norway and Finland), our coefficients are somewhat larger, although the difference is not statistically significant (see Table S12).

## Conclusion

Previous research has used education reforms to test whether more education is associated with more or less within-country mobility, finding mixed results. We consider an international context with basically unrestricted mobility—namely, Europe—and test for the direct effect of education policies on migration.
We show that increases in the length of compulsory schooling reduce the propensity to migrate across European countries. One additional year of compulsory education reduces migration of affected cohorts in a given year by 9 %. After we perform a variety of exogeneity checks, our results show that European lawmakers did not pass education reforms as a reaction to changes in migration rates or educational attainment. We also find that additional years of compulsory education effectively translated into higher average educational attainment, and hence part of the effect of compulsory schooling reforms on migration operates through changes in educational attainment. In this context, identification comes from individuals at the low to medium part of the education distribution, whose educational attainment changes as a result of the reforms (the so-called compliers). Hence, although our results show that governments can be fairly confident that more years of compulsory schooling are unlikely to be lost to brain drain, this might not apply to reforms of higher education. How education policies that alter individuals' propensity to go into higher education affect migration choices seems an interesting question for future research.

In 2014, unemployment was 24 % to 26 % in Spain and Greece; Germany had one of the lowest unemployment rates, at 5 %. European authorities are trying to foster mobility to reduce such large differences. Machin et al. (2012) argued that lower average educational attainment in Europe compared with the United States might be a reason for the relatively low labor mobility in Europe. Our results, however, suggest that different distributions of education matter—in particular, the higher share of medium-educated individuals in Europe compared with the United States.21 In this context, if high degree recognition costs prevent medium-educated individuals from migrating (see Stark 1991), then reducing those costs could be key for increasing returns to migration and fostering mobility.

Finally, one of the top priorities of the European Union's 2020 agenda is to improve educational outcomes. Education policies that lead to a more-educated and better-prepared workforce are essential for future growth and job creation. Our results show that increasing educational attainment while increasing labor mobility across countries requires coordinated education and labor market policies.22

## Acknowledgments

A previous version of this article, titled "Education Policies and Migration across European Countries," was awarded the 4th Giorgio Rota Best Paper Award. We thank Jesús Fernández-Huertas Moraga, David McKenzie, and Jennifer Graves; and seminar participants at Collegio Carlo Alberto, the 14th IZA/SOLE Transatlantic Meeting of Labor Economists, the Workshop on Migration Barriers in Jena, the 28th SAEe 2015, the 13th IZA Annual Migration Meeting, and the workshop on "Evaluating Policies Fostering Child Development," particularly Alessandra Venturini, for their helpful comments and suggestions. Zoe Kuehn acknowledges financial support from the Ministerio de Economía y Competitividad (ECO2013-44920-P) in Spain.

## Notes

1. EU law guarantees free labor mobility, but countries can impose temporary restrictions for nationals of new member states. Prior to 2014, some EU member states required that Bulgarian and Romanian nationals obtained residence and work permits. Norway and Liechtenstein belong to the Schengen area, which has guaranteed free mobility since 2001 and 2011, respectively. Croatia joined the EU only in 2014. Macedonia is an EU candidate country, and since 2009, its residents can travel visa-free to the Schengen area.

2. For instance, Canada and Australia use a point system that grants visas to more-skilled and higher-educated individuals; see Aydemir (2011) for an analysis of the selection effects of such systems.

3. Linguists identify at least seven European language families: Celtic, Italic, Germanic, Balto-Slavic, Greek, Uralic, and Semitic (see Gray and Atkinson 2003; Harding and Sokal 1988).

4. For instance, Brunello et al. (2009) provided evidence for a number of European countries for which compulsory schooling laws significantly affect educational attainment for all but the upper part of the education distribution.

5. Using data on educational attainment entails problems of reverse causality if individuals' education choices are influenced by their desire to migrate. For instance, McKenzie and Rapoport (2011) found that Mexican boys from a household with international migration experience are more likely to drop out of school.

6. Regarding a related outcome variable—immigrant assimilation—and using an analysis similar to the current article, Lleras-Muney and Shertzer (2015) considered changes in education policies, including compulsory schooling laws and the imposition of English as language of instruction. The authors found no effect of these policies on migrant assimilation in the United States between 1910 and 1930.

7. Quinn and Rubb (2005) proposed to reconcile these different findings by looking at education-occupation matches. They found that individuals who work in occupations that require less education are more likely to migrate, while the opposite is true for individuals whose level of education is below that required for their occupation. Alternatively, McKenzie and Rapoport (2010) found that the selection effect along educational lines regarding Mexican-U.S. migration depends on the size of networks, with larger (smaller) networks attracting disproportionately more-uneducated (educated) individuals.

8. For countries where students can choose among foreign languages, we consider all options for two reasons. First, for most of these countries, we do not have information on languages chosen by each cohort. Second, we wish to avoid measuring individual choices, which would turn foreign languages into a bad control. Limited reliability of data on years of exposure to foreign language classes is the main reason why we do not consider such a refinement.

9. Russian is the most widely taught second foreign language in Latvia, Estonia, and Lithuania, but we ignore this option given that we do not have data on migration flows to Russia. Although individuals in Finland and Belgium can additionally study Swedish and Dutch, respectively, we do not include these options explicitly given that they apply to all cohorts and are hence indistinguishable from country fixed effects. The same holds true for any controls accounting for the fact that some countries—such as the UK and Ireland, or Germany and Austria—share the same official language.

10. When in place, these restrictions applied to individuals of all ages.

11. For 2007–2013, the EU allocated €3.1 billion to the Erasmus program.

12. One potential concern could be that individuals did not obtain their schooling in their country of origin. However, using data by citizenship, instead of country of previous residence, mitigates this concern substantially.

13. Unfortunately, we cannot exploit changes over time because these data are available only every five years. Barro and Lee (2013) provided data for all countries in our sample except for Liechtenstein.

14. We use the following editions: 1952, 1954, 1957, 1959, 1962, 1966, 1968, 1970, 1977, 1985, 1989, and 1996. For Croatia, Macedonia, and Slovenia, we use data for Yugoslavia that are available for 1956–1977. For the Czech Republic and Slovakia, data for Czechoslovakia are used until 1992. Because no data on emigration from the USSR are available, we cannot assign data to Estonia, Latvia, or Lithuania. Data for Bulgaria, Romania, and Liechtenstein are only sporadically available, and numbers for emigration flows from Ireland exceed those of the Irish population; hence, we do not include any of these countries. For Spain, Norway, Finland, and the UK, data before 1962, before 1960, for 1960–1966, and for 1961–1963, respectively, could not be used because they included only intercontinental migration in the case of Spain; excluded migration to other Scandinavian countries; or in the case of the UK, referred only to migration to other Commonwealth countries.

15. Aggregate migration outflow data are available for 24 countries from 1950–1995, but data for all other control variables are not available for Macedonia, and for all other countries only from 1960 onward. The final sample for the exogeneity check includes the following countries: Austria, Belgium, Croatia, Cyprus, Czech Republic, Denmark, Finland, France, Germany, Greece, Hungary, Italy, Luxembourg, Malta, Netherlands, Norway, Poland, Portugal, Slovakia, Slovenia, Spain, Sweden, and the UK.

16. Running a 2SLS IV estimation with data for 2010 only results in a negative and significant estimate of the impact of education on migration. However, because of the reduced sample size, the instrument is weak according to the Stock and Yogo (2005) criterion. Furthermore, as mentioned in the Introduction, it is not clear that compulsory schooling reforms affect migration only via educational attainment, something that would invalidate the exclusion restriction.

17. The presence of network effects could imply that as more individuals leave, more follow. However, bear in mind that unlike the literature's typical example regarding Mexican-U.S. migration, the stock of immigrants in our case is spread over 30 countries. The negative coefficient is in line with the notion that after a large fraction of an age group has migrated, those who remain have a much lower propensity to leave.

18. To our knowledge, the only example of a government explicitly providing training such that its citizens become better migrant workers is the training of nurses in the Philippines (see Lorenzo et al. 2007). However, the effect of such specialized training of adult workers on migration is much more immediate than the one resulting from education reforms regarding compulsory schooling.

19. Opposition tended to arise because of limitations to child labor and parental freedom of decision, as well as taxation for financing schools; see Eisenberg (1988) and Butts and Cremin (1953).

20. To the best of our knowledge, no established theory in political economy addresses education reforms. However, increasing the length of compulsory education requires additional resources, such as teachers and facilities. Thus, the actual implementation of those reforms depends on the availability of resources.
21. Alternative explanations for the low European mobility focus on relatively high unemployment benefits (Antolin and Bover 1997) and stronger employment protection (Belot 2007) in European countries compared with the United States.
22. Boldrin and Canova (2001) argued that EU policies aimed at achieving convergence in economic conditions across Europe seem to discourage migration.

## References

Aaronson, D., & Mazumder, B. (2011). The impact of Rosenwald schools on black achievement. Journal of Political Economy, 119, 821–888. doi:10.1086/662962
Angrist, J. D., & Krueger, A. B. (1991). Does compulsory school attendance affect schooling and earnings? Quarterly Journal of Economics, 106, 979–1014. doi:10.2307/2937954
Antolin, P., & Bover, O. (1997). Regional migration in Spain: The effect of personal characteristics and of unemployment, wage and house price differentials using pooled cross-sections. Oxford Bulletin of Economics and Statistics, 59, 215–235. doi:10.1111/1468-0084.00061
Aparicio Fenoll, A., & Kuehn, Z. (2016). Does foreign language proficiency foster migration of young individuals within the European Union? In B.-A. Wickstroem & M. Gazzola (Eds.), The economics of language policy (pp. 331–355). Cambridge, MA: MIT Press.
Aydemir, A. (2011). Immigrant selection and short-term labor market outcomes by visa category. Journal of Population Economics, 24, 451–475. doi:10.1007/s00148-009-0285-0
Barro, R., & Lee, J.-W. (2013). A new data set of educational attainment in the world, 1950–2010. Journal of Development Economics, 104, 184–198.
Belot, M. (2007). Why is employment protection stricter in Europe than in the United States? Economica, 74, 397–423. doi:10.1111/j.1468-0335.2006.00552.x
Bertrand, M., Duflo, E., & Mullainathan, S. (2004). How much should we trust differences-in-differences estimates? Quarterly Journal of Economics, 119, 249–275. doi:10.1162/003355304772839588
Boldrin, M., & Canova, F. (2001). Inequality and convergence in Europe's regions: Reconsidering European regional policies. Economic Policy, 16, 206–253. doi:10.1111/1468-0327.00074
Borjas, G. J. (1987). Self-selection and the earnings of immigrants. American Economic Review, 77, 531–553.
Borjas, G. J. (1999). Immigration and welfare magnets. Journal of Labor Economics, 17, 607–637. doi:10.1086/209933
Brunello, G., Fabbri, D., & Fort, M. (2013). The causal effect of education on body mass: Evidence from Europe. Journal of Labor Economics, 31, 195–223. doi:10.1086/667236
Brunello, G., Fort, M., & Weber, G. (2009). Changes in compulsory schooling, education and the distribution of wages in Europe. Economic Journal, 119, 516–539. doi:10.1111/j.1468-0297.2008.02244.x
Butts, R. F., & Cremin, L. A. (1953). A history of education in American culture. New York, NY: Holt, Rinehart and Winston.
Caponi, V. (2010). Heterogeneous human capital and migration: Who migrates from Mexico to the US? Annals of Economics and Statistics, 97/98, 207–234. doi:10.2307/41219116
Chiquiar, D., & Hanson, G. H. (2005). International migration, self-selection, and the distribution of wages: Evidence from Mexico and the United States. Journal of Political Economy, 113, 239–281. doi:10.1086/427464
Chiswick, B. R. (2008). The economics of language: An introduction and overview (IZA Discussion Paper No. 3568). Bonn, Germany: Institute for the Study of Labor.
Eisenberg, M. J. (1988). Compulsory attendance legislation in America, 1870 to 1915 (Doctoral dissertation). University of Pennsylvania.
Fernández-Huertas Moraga, J. (2011). New evidence on emigrant selection. Review of Economics and Statistics, 93, 72–96. doi:10.1162/REST_a_00050
Garrouste, C. (2010). 100 years of educational reforms in Europe: A contextual database (JRC Scientific and Technical Reports). Luxembourg City, Luxembourg: European Commission.
Giannoccolo, P. (2012). How European nations attract highly skilled workers: Brain drain competition policies. IUP Journal of International Relations, 6(4), 56–62.
Gray, R. D., & Atkinson, Q. D. (2003). Language-tree divergence times support the Anatolian theory of Indo-European origin. Nature, 426, 435–439. doi:10.1038/nature02029
Greenwood, M. J., & McDowell, J. M. (1991). Differential economic opportunity, transferability of skills, and immigration to the United States and Canada. Review of Economics and Statistics, 73, 612–623. doi:10.2307/2109400
Harding, R. M., & Sokal, R. R. (1988). Classification of the European language families by genetic distance. Proceedings of the National Academy of Sciences, 85, 9370–9372. doi:10.1073/pnas.85.23.9370
Harmon, C., & Walker, I. (1995). Estimates of the economic return to schooling for the United Kingdom. American Economic Review, 85, 1278–1286.
Hörner, W., Döbert, H., von Kopp, B., & Mitter, W. (Eds.). (2007). The education systems of Europe. Dordrecht, The Netherlands: Springer.
Landes, W. M., & Solmon, L. C. (1972). Compulsory schooling legislation: An economic analysis of law and social change in the nineteenth century. Journal of Economic History, 32, 54–91. doi:10.1017/S0022050700075392
Lleras-Muney, A. (2002). Were compulsory attendance and child labor laws effective? An analysis from 1915 to 1939. Journal of Law and Economics, 45, 401–435. doi:10.1086/340393
Lleras-Muney, A., & Shertzer, A. (2015). Did the Americanization movement succeed? An evaluation of the effect of English-only and compulsory schooling laws on immigrants. American Economic Journal: Economic Policy, 7(3), 258–290.
Lorenzo, F. M. E., Galvez-Tan, J., Icamina, K., & Javier, L. (2007). Nurse migration from a source country perspective: Philippine country case study. Health Services Research, 42, 1406–1418. doi:10.1111/j.1475-6773.2007.00716.x
Machin, S., Salvanes, K. G., & Pelkonen, P. (2012). Education and mobility. Journal of the European Economic Association, 10, 417–450. doi:10.1111/j.1542-4774.2011.01048.x
Mahroum, S. (2005). The international policies of brain gain: A review. Technology Analysis and Strategic Management, 17, 219–230. doi:10.1080/09537320500088906
Malamud, O., & Wozniak, A. K. (2010). The impact of college education on geographic mobility: Identifying education using multiple components of Vietnam draft risk (NBER Working Paper No. 16463). Cambridge, MA: National Bureau of Economic Research.
Malamud, O., & Wozniak, A. K. (2012). The impact of college education on migration: Evidence from the Vietnam generation. Journal of Human Resources, 47, 913–950.
McHenry, P. (2013). The relationship between schooling and migration: Evidence from compulsory schooling laws. Economics of Education Review, 35, 24–40. doi:10.1016/j.econedurev.2013.03.003
McKenzie, D., & Rapoport, H. (2010). Self-selection patterns in Mexico-U.S. migration: The role of migration networks. Review of Economics and Statistics, 92, 811–821. doi:10.1162/REST_a_00032
McKenzie, D., & Rapoport, H. (2011). Can migration reduce educational attainment? Evidence from Mexico. Journal of Population Economics, 24, 1331–1358. doi:10.1007/s00148-010-0316-x
Milligan, K., Moretti, E., & Oreopoulos, P. (2004). Does education improve citizenship? Evidence from the United States and the United Kingdom. Journal of Public Economics, 88, 1667–1695. doi:10.1016/j.jpubeco.2003.10.005
Murtin, F., & Viarengo, M. (2011). The convergence process of compulsory schooling in Western Europe: 1950–2000. Economica, 78, 501–522. doi:10.1111/j.1468-0335.2009.00840.x
Nasif Edwards, L. (1978). An empirical analysis of compulsory schooling legislation, 1940–1960. Journal of Law and Economics, 21, 203–222. doi:10.1086/466917
Oreopoulos, P. (2007). Do dropouts drop out too soon? Wealth, health and happiness from compulsory schooling. Journal of Public Economics, 91, 2213–2229. doi:10.1016/j.jpubeco.2007.02.002
Quinn, M. A., & Rubb, S. (2005). The importance of education-occupation matching in migration decisions. Demography, 42, 153–167. doi:10.1353/dem.2005.0008
Stark, O. (1991). The migration of labor. Cambridge, MA: Basil Blackwell.
Stigler, G. J. (1950). Employment and compensation in education. Cambridge, MA: National Bureau of Economic Research.
Stock, J. H., & Yogo, M. (2005). Testing for weak instruments in linear IV regression. In D. W. K. Andrews & J. H. Stock (Eds.), Identification and inference for econometric models: Essays in honor of Thomas Rothenberg (pp. 80–108). New York, NY: Cambridge University Press.
2022-08-14 13:32:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 4, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3161974847316742, "perplexity": 3833.7625738012534}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572033.91/warc/CC-MAIN-20220814113403-20220814143403-00399.warc.gz"}
https://learn.careers360.com/engineering/question-pleaseplease-help-me-chemical-thermodynamics-jee-main/
# The standard enthalpy of formation $(\Delta H^{\circ}_{f})$ at 298 K for methane, $CH_{4(g)}$, is –74.8 kJ mol$^{-1}$. The additional information required to determine the average energy for C–H bond formation would be

Option 1) the dissociation energy of $H_2$ and the enthalpy of sublimation of carbon

Option 2) the latent heat of vaporisation of methane

Option 3) the first four ionisation energies of carbon and the electron gain enthalpy of hydrogen

Option 4) the dissociation energy of the hydrogen molecule, $H_2$

As we learnt:

Bond dissociation enthalpy - the average enthalpy required to dissociate the bond in question, as present in different gaseous compounds, into free atoms in the gaseous state. For example: $N_{2}+\text{Bond Energy}\rightarrow 2N$

Enthalpy of sublimation - the enthalpy change required to sublime 1 mole of solid into 1 mole of vapour at a temperature below its melting point. For example: $H_{2}O_{(s)}\rightarrow H_{2}O_{(g)}$, $\Delta H_{sublimation}= 46.6\ \text{kJ/mol}$

To calculate the average enthalpy of the C–H bond in methane, the following information is needed:

i) the dissociation energy of $H_2$, i.e. $\frac{1}{2}H_{2}\rightarrow H_{(g)}$, $\Delta H = x$ (suppose)

ii) the sublimation energy of carbon: $C\,(graphite) \rightarrow C\,(g)$, $\Delta H = y$ (suppose)

Given: $C\,(graphite) + 2H_{2}(g)\rightarrow CH_{4}(g)$, $\Delta H = -74.8\ \text{kJ mol}^{-1}$

Option 1) the dissociation energy of $H_2$ and the enthalpy of sublimation of carbon - this is the correct option

Option 2) the latent heat of vaporisation of methane - this is an incorrect option

Option 3) the first four ionisation energies of carbon and the electron gain enthalpy of hydrogen - this is an incorrect option

Option 4) the dissociation energy of the hydrogen molecule, $H_2$ - this is an incorrect option
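As a worked illustration (added here; the original solution stops at identifying the needed data), plug in typical textbook values, $\Delta H_{sub}(C)\approx 715\ \text{kJ/mol}$ and $\Delta H_{diss}(H_2)\approx 436\ \text{kJ/mol}$, which stand in for the unknowns $y$ and $2x$ above:

$$\Delta H_{atomization}(CH_4)=\Delta H_{sub}(C)+2\,\Delta H_{diss}(H_2)-\Delta H_f(CH_4)=715+2(436)-(-74.8)\approx 1662\ \text{kJ/mol}$$

$$\bar{E}_{C-H}=\frac{1662}{4}\approx 415\ \text{kJ/mol}$$

This matches the commonly quoted average C–H bond enthalpy of roughly 414 kJ/mol.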
2020-09-22 08:18:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 14, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7405591011047363, "perplexity": 7154.645647323535}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400204410.37/warc/CC-MAIN-20200922063158-20200922093158-00663.warc.gz"}
https://www.techwhiff.com/issue/before-1879-how-long-did-it-take-to-travel-from-missouri--223233
# Before 1879, how long did it take to travel from Missouri to New Mexico along the Santa Fe Trail?

- almost a year
- a day
- more than a month
- one week
2023-01-31 19:16:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40509068965911865, "perplexity": 2564.533846583889}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499890.39/warc/CC-MAIN-20230131190543-20230131220543-00593.warc.gz"}
http://electricalacademia.com/electronics/amplitude-frequency-modulation-theory-definition/
# Amplitude & Frequency Modulation | Theory | Definition

Modulation is a process in which an audio wave is combined or superimposed on a carrier wave.

## Amplitude Modulation (AM)

Assume a radio transmitter is operating on a frequency of 1000 kHz. A musical tone of 1000 Hz is to be used for modulation. Refer to Figure 1. Using a modulation circuit, the amplitude of the carrier wave is made to vary at the audio signal rate.

Figure 1. A CW wave, an audio wave, and the resulting amplitude modulated wave.

Let's look at this process another way. Mixing a 1000 Hz wave with a 1000 kHz wave produces a sum wave and a difference wave, which are also in the radio frequency range. These two waves will be 1001 kHz and 999 kHz. They are known as sideband frequencies. The upper sideband is the higher number and the lower sideband is the lower number, Figure 2.

Figure 2. Wave mixing showing the formation of sidebands and modulation envelope.

The sum of the carrier wave and its sidebands is an amplitude modulated wave. The audio tone is present in both sidebands, as either sideband results from modulating a 1000 kHz signal with a 1000 Hz tone. The location of the waves on a frequency base is shown in Figure 3. If a 2000 Hz tone was used for modulation, then sidebands would appear at 998 kHz and 1002 kHz. In order to transmit a 5000 Hz tone of a violin using AM, sidebands of 995 kHz and 1005 kHz would be required. The frequency bandwidth to transmit the 5000 Hz musical tone will be 10 kHz.

Figure 3. Carrier and sideband locations for modulation tones of 1 kHz and 2 kHz.

There is not enough space in the spectrum for all broadcasters to transmit. And, if all broadcasts contained the same message or operated on the same frequency, the effect would be confusing. Therefore, the broadcast band for AM radio extends from 535 kHz to 1605 kHz. It is divided into 106 channels, each 10 kHz wide. Each radio station in a geographic area is licensed to operate at a frequency in one of these 106 channels. The channels are spaced far enough from each other to prevent interference. In order to improve the fidelity and quality of music within these limitations, a vestigial sideband filter is used. A vestigial sideband filter removes a large portion of one sideband. Recall that both sidebands contain the same information. This way, frequencies higher than 5 kHz can be used for modulation, and the fidelity is improved.

### Modulation Patterns

A radio transmitter is not permitted by law to exceed 100 percent modulation. This means that the modulation signal cannot cause the carrier signal to vary over 100 percent of its unmodulated value. Look at the patterns in Figure 4. Notice the amplitude of the modulated waves.

Figure 4. Patterns for 0, 100, and 50 percent modulation and for over-modulation.

The 100 percent modulation wave variation is from zero to two times the peak value of the carrier wave. Over-modulation occurs when modulation increases the carrier wave to over two times its peak value. At negative peaks, the waves cancel each other and leave a straight line of zero value. Over-modulation causes distortion and interference called splatter.
Percent of modulation can be computed using the following formula:

$\text{percent modulation}=\frac{e_{\max }-e_{\min }}{2e_{c}}\times 100$

where $e_{\max}$ is the maximum amplitude of the modulated wave, $e_{\min}$ is the minimum amplitude of the modulated wave, and $e_{c}$ is the amplitude of the unmodulated carrier wave.

### Sideband Power

The dc input power to the final amplifier of a transmitter is the product of voltage and current. To find the power required by a modulator, the following formula can be used:

$P_{audio}=\frac{m^{2}\times P_{dc}}{2}$

where $P_{audio}$ is the power of the modulator, $m$ is the percentage of modulation (expressed as a decimal), and $P_{dc}$ is the input power to the final amplifier.

Example: What power is required to modulate a transmitter having a dc power input of 500 watts to 100 percent?

$P_{audio}=\frac{1^{2}\times 500\,W}{2}=250\,W$

This represents a total input power of 750 watts (250 W + 500 W). Notice what happens under 50 percent modulation:

$P_{audio}=\frac{0.5^{2}\times 500\,W}{2}=62.5\,W$

The total input power is only 562.5 watts (62.5 W + 500 W). When the modulation percentage is reduced to 50 percent, the audio power is reduced to 25 percent. This is a severe drop in power that decreases the broadcasting range of the transmitter. It is wise to maintain transmitter modulation close to, but not exceeding, 100 percent.

The term input power has been used because any final amplifier is far from 100 percent efficient:

$\eta =\frac{P_{out}}{P_{in}}\times 100$

If a power amplifier had a 60 percent efficiency and a $P_{dc}$ input of 500 watts, its output power would approach $P_{out}$ = efficiency × $P_{in}$ = 0.6 × 500 W = 300 W.

A transmitter at 100 percent modulation has a total input power of 750 watts: 500 watts of this power is in the carrier wave and 250 watts is added to produce the sidebands. Therefore, there are 125 watts of power in each sideband, or one-sixth of the total power in each sideband. Recall that each sideband contains the same information, and each is a radio frequency wave that will radiate as well as the carrier wave. So why waste all this power? In single sideband transmission, this power is saved. The carrier and one sideband are suppressed. Only one sideband is radiated. At the receiver end, the carrier is put back in. The difference signal (the audio signal) is then detected and reproduced.

## Frequency Modulation (FM)

In frequency modulation, a constant amplitude continuous wave (the radio wave) is made to vary in frequency at the audio frequency rate, Figure 5. FM radio is a popular method of electronic communication. Frequency modulation allows a high audio sound to be transmitted while still remaining within the space legally assigned to the broadcast station. Also, FM transmits dual channels of sound (stereo) by multiplex systems. The FM band is from 88 MHz to 108 MHz. A block diagram of an FM transmitter is shown in Figure 6.

Figure 5. For FM, the frequency of the wave is varied at an audio rate.

Figure 6. The block diagram of a simplified FM transmitter.

Each FM station is assigned a center frequency in the FM band. This is the frequency to which a radio is tuned, Figure 7. The amount of frequency variation on each side of the center frequency is called the frequency deviation. Frequency deviation is set by the amplitude, or strength, of the audio modulating wave.

Figure 7. The amplitude of the modulating signal determines the frequency swing from the center frequency. A–Weak audio signal. B–Strong audio signal.
In part A of Figure 7, a weak audio signal causes the frequency of the carrier wave to vary between 100.01 MHz and 99.99 MHz. The deviation is ±10 kHz. In part B of the figure, a stronger audio signal causes a frequency swing between 100.05 MHz and 99.95 MHz, or a deviation of ±50 kHz. The stronger the modulation signal, the greater the frequency departure, and the more the band is filled. The rate of frequency deviation depends on the frequency of the audio modulating signal. See Figure 8. If the audio signal is 1000 Hz, the carrier wave goes through its greatest deviation 1000 times per second. If the audio signal is 100 Hz, the frequency changes at a rate of 100 times per second. Notice that the modulating frequency does not change the amplitude of the carrier wave.

Figure 8. The rate of frequency variation depends on the frequency of the audio modulating signal.

An FM signal forms sidebands. The number of sidebands produced depends on the frequency and amplitude of the modulating signal. Each sideband is separated from the center frequency by the amount of the frequency of the modulating signal, Figure 9. The power of the carrier frequency is reduced a great deal by the formation of sidebands, which take power from the carrier. The amount of power taken from the carrier depends on the maximum deviation and the modulating frequency. Although a station is assigned a center frequency and stays within its maximum deviation, the formation of sidebands determines the bandwidth required for transmission. In FM, the bandwidth is specified by the frequency range between the upper and lower significant sidebands. A significant sideband has an amplitude of one percent or more of the unmodulated carrier.

Figure 9. Sidebands generated by a 10 kHz modulating signal on a 100 MHz carrier wave.

## Narrow Band FM

The maximum deviation of a carrier wave can be limited so that the FM wave occupies the same space as an AM wave carrying the same message. This is called narrowband FM. Some distortion occurs in the received signal. This is satisfactory for voice communication but not for quality music sound systems.

## Modulation Index

The modulation index is the relationship between the maximum carrier deviation and the maximum modulating frequency:

$\text{Modulation Index}=\frac{\text{Maximum Carrier Deviation}}{\text{Maximum Modulating Frequency}}$

Using this index, the number of significant sidebands and the bandwidth of the FM signal can be determined. The complete index can be found in more advanced texts. Examples of the use of the modulation index are given in Figure 10.

Figure 10. Examples of modulation index use. When calculating bandwidth, f is the modulating frequency.

If the amplitude of a modulating signal causes a maximum deviation of 10 kHz and the frequency of the modulating signal was 1000 Hz, the index would be:

$\text{Modulation Index}=\frac{10{,}000}{1000}=10$

If you examine Figure 10, you can see that this FM signal would have 14 significant sidebands and occupy a bandwidth of 28 kHz.

## Percent of Modulation

The percent of modulation has been set at a maximum deviation of ±75 kHz for FM radio. The FM sound transmission in television is limited to ±25 kHz.
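The formulas above are easy to replay numerically. The sketch below (added for illustration, not from the original article) implements percent modulation, modulator power, and the FM modulation index in Python. The last function uses Carson's rule, a standard rough approximation that the article does not use, which is why its 22 kHz estimate differs from the 28 kHz read off the significant-sideband table in Figure 10.

```python
def percent_modulation(e_max, e_min, e_c):
    """AM modulation percentage from the envelope extremes."""
    return (e_max - e_min) / (2 * e_c) * 100

def audio_power(m, p_dc):
    """Modulator (audio) power for modulation fraction m and dc input p_dc."""
    return m ** 2 * p_dc / 2

def modulation_index(max_deviation_hz, mod_freq_hz):
    """FM modulation index: peak carrier deviation over modulating frequency."""
    return max_deviation_hz / mod_freq_hz

def carson_bandwidth(max_deviation_hz, mod_freq_hz):
    """Carson's rule approximation for FM bandwidth (an approximation,
    not the significant-sideband table used in the article)."""
    return 2 * (max_deviation_hz + mod_freq_hz)

print(audio_power(1.0, 500))            # 250.0 W at 100 percent modulation
print(audio_power(0.5, 500))            # 62.5 W at 50 percent modulation
print(modulation_index(10_000, 1_000))  # 10.0
print(carson_bandwidth(10_000, 1_000))  # 22000 Hz vs. 28 kHz from Figure 10
```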
2019-01-21 02:18:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5178013443946838, "perplexity": 1177.4472428555655}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583745010.63/warc/CC-MAIN-20190121005305-20190121031305-00244.warc.gz"}
https://www.beatthegmat.com/jason-can-stack-two-shelves-in-3-hours-and-maria-can-stack-t303806.html
# Jason can stack two shelves in 3 hours and Maria can stack

tagged by: AAPL

Magoosh

Jason can stack two shelves in 3 hours and Maria can stack three shelves in 2 hours. How long will it take them together, working at a constant rate, to stack thirteen shelves?

A. 5 hours
B. 6 hours
C. 6.5 hours
D. 11 hours
E. 12 hours

OA B.

### GMAT/MBA Expert

Hi All,

We're told that Jason can stack 2 shelves in 3 hours and Maria can stack 3 shelves in 2 hours. We're asked how long it will take them together, working at their constant rates, to stack thirteen shelves.

This question is a 'rate' question, so you can work through the math in a number of different ways. Here's an approach that puts both rates in terms of 'output per hour':

Jason stacks 2 shelves every 3 hours, so he stacks 2/3 of a shelf per hour.
Maria stacks 3 shelves every 2 hours, so she stacks 3/2 of a shelf per hour.
Combined, they stack 2/3 + 3/2 = 4/6 + 9/6 = 13/6 shelves per hour.

We're asked how long it would take for the two to stack 13 shelves...

(X)(13/6 shelves/hour) = 13 shelves
X = 13(6/13) = 6 hours

GMAT assassins aren't born, they're made,
Rich

Contact Rich at [email protected]

### GMAT/MBA Expert

Let each shelf = 6 pounds.
Since Jason takes 3 hours to stack 2 6-pound shelves -- for a total of 12 pounds -- Jason's rate = 12/3 = 4 pounds per hour.
Since Maria takes 2 hours to stack 3 6-pound shelves -- for a total of 18 pounds -- Maria's rate = 18/2 = 9 pounds per hour.
Total weight of 13 6-pound shelves = 13*6 = 78 pounds.
Combined rate for Jason and Maria = 4+9 = 13 pounds per hour.
Time for Jason and Maria to stack the 13 shelves = 78/13 = 6 hours.

The correct answer is B.

Mitch Hunt
Private Tutor for the GMAT and GRE
[email protected]
Available for tutoring in NYC and long-distance.
### GMAT/MBA Expert

The combined rate of Jason and Maria is 2/3 + 3/2 = 4/6 + 9/6 = 13/6.

So it takes them 13/(13/6) = 13 x 6/13 = 6 hours to stack 13 shelves.

Jeffrey Miller
Head of GMAT Instruction
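A quick sanity check of the combined-rate arithmetic with exact fractions (an added sketch, not part of the thread):

```python
from fractions import Fraction

jason = Fraction(2, 3)    # shelves per hour
maria = Fraction(3, 2)    # shelves per hour
combined = jason + maria  # 13/6 shelves per hour

print(combined)                 # 13/6
print(Fraction(13) / combined)  # 6 (hours to stack 13 shelves)
```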
2018-09-23 09:00:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20672237873077393, "perplexity": 8184.13805667981}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267159165.63/warc/CC-MAIN-20180923075529-20180923095929-00163.warc.gz"}
https://aptitude.gateoverflow.in/708/cat-2006-question-21
Answer the questions based on the information given below:

K, L, M, N, P, Q, R, S, U and W are the only ten members in the department. There is a proposal to form a team from within the members of the department, subject to the following conditions.

• A team must include exactly one among P, R and S.
• A team must include either M or Q, but not both.
• If a team includes K, then it must also include L, and vice versa.
• If a team includes one among S, U and W, then it must also include the other two.
• L and N cannot be members of the same team.
• L and U cannot be members of the same team.
• The size of the team is defined as the number of members in the team.

What could be the size of the team that includes K?

1. $2$ or $3$
2. $2$ or $4$
3. $3$ or $4$
4. only $2$
5. only $4$
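The member set is small enough to check every possibility. Below is a brute-force sketch (added here, not part of the original question page) that enumerates all subsets containing K and keeps those satisfying the stated conditions:

```python
from itertools import combinations

members = ["K", "L", "M", "N", "P", "Q", "R", "S", "U", "W"]

def valid(team):
    t = set(team)
    if len(t & {"P", "R", "S"}) != 1:       # exactly one of P, R, S
        return False
    if len(t & {"M", "Q"}) != 1:            # M or Q, but not both
        return False
    if ("K" in t) != ("L" in t):            # K and L go together
        return False
    suw = t & {"S", "U", "W"}
    if suw and suw != {"S", "U", "W"}:      # S, U, W: all or none
        return False
    if {"L", "N"} <= t or {"L", "U"} <= t:  # L excludes N and U
        return False
    return True

sizes = {len(team)
         for r in range(1, 11)
         for team in combinations(members, r)
         if "K" in team and valid(team)}
print(sorted(sizes))  # [4] -> every valid team containing K has size 4
```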
2023-01-29 08:44:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5706021785736084, "perplexity": 530.4409777075527}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499710.49/warc/CC-MAIN-20230129080341-20230129110341-00629.warc.gz"}
https://community.microfocus.com/collaboration/zenworks/zcm/f/zcm/365717/roaming-profiles---startmenu-problem-under-windows-10/1705576
# Roaming profiles - Startmenu problem under Windows 10

Hello,

we are starting a complete roll-out of Windows 10 with Zenworks. For this we used these very nice best practices: https://www.novell.com/communities/coolsolutions/windows-10-best-practices-using-zcm/

If you are using roaming profiles with Novell on non-Windows shares, you have to apply some special settings. For example, you have to copy a default profile and move it into the user profile folder, as described here: https://www.novell.com/documentation/zenworks114/zen11_cm_policies/data/bvkn1rh.html

If you don't do this, the roaming profile will not work correctly.

Now to my problem. If I log on to the computer with this default profile, the Windows 10 Start menu is not working: there is no reaction if you click on it. The roaming profile works fine on multiple PCs, except for the menu. I have already tried the "default solutions" for this problem:

1. sfc /scannow
2. In PowerShell: $manifest = (Get-AppxPackage Microsoft.WindowsStore).InstallLocation + '\AppxManifest.xml' ; Add-AppxPackage -DisableDevelopmentMode -Register $manifest
3. Copying the folder C:\Users\<user>\AppData\Local\TileDataLayer\Database from a locally working user

If I don't follow the instructions in the link above ("Assigning a Roaming Profile Policy for a User Profile ...") and let Windows create the roaming profile on the file share, I don't have a Start menu problem, but I can't log in on a different PC: "Group Policy Client Service failed the sign-in" error.

Has anybody done something like this before with Zenworks and Windows 10?

Christian

> sheinrich;2464578 wrote: Thanks! We decided to go the way of folder redirection instead of roaming profiles.

Is there a description of how to do this redirection to the user's Homedir on OES servers? In all the descriptions I found, you need an AD. I have no AD. I want to redirect like this
2021-09-28 11:03:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23209248483181, "perplexity": 4721.146341675595}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780060677.55/warc/CC-MAIN-20210928092646-20210928122646-00335.warc.gz"}
https://mbi-berlin.de/de/p/davidcasas
## MBI Staff - Personal Data

### David Casas

No longer at MBI

#### MBI Publications

1. Instantaneous charge state of uranium projectiles in fully ionized plasmas from energy loss experiments. Physics of Plasmas 24 (2017) 042703/1-11
2. Calculations on charge state and energy loss of argon ions in partially and fully ionized carbon plasmas. Physical Review E 93 (2016) 033204/1-10
3. Stopping power of a heterogeneous warm dense matter. Laser and Particle Beams 0263-0346/16 (2016) 1-9
2021-10-23 19:51:25
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8432320356369019, "perplexity": 13292.032539690304}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585768.3/warc/CC-MAIN-20211023193319-20211023223319-00453.warc.gz"}
https://support.bioconductor.org/p/9145586/
DESeq2 - Batch effect correction in 2 conditions + 2 genotypes + interaction term design

@barbaramariotti-8083, Last seen 10 days ago, Italy:

Dear all,

I am analyzing RNA-seq data with DESeq2 for a dataset that matches the "2 conditions, 2 genotypes and interaction term" design; specifically, I have healthy donors and patients for both the male and the female population. I am interested in obtaining:

1. genes modulated in male patients as compared to male controls
2. genes modulated in female patients as compared to female controls
3. genes for which the disease effect is different across sex

Here is the code used for this analysis:

dds <- DESeqDataSetFromHTSeqCount(sampleTable = sampleTable, directory = directory, design = ~SEX+DISEASE+SEX:DISEASE)
dds$DISEASE <- relevel(dds$DISEASE, ref="HD")
dds$SEX <- relevel(dds$SEX, ref="M")
dds <- DESeq(dds, fitType="local")
results(dds, name="SEXF.DISEASEDISEASE", pAdjustMethod = "fdr") # DISEASE effect is different across SEX
results(dds, list( c("DISEASE_DISEASE_vs_HD","SEXF.DISEASEDISEASE") ), pAdjustMethod = "fdr") # effect of DISEASE on F
results(dds, contrast=c("DISEASE","DISEASE","HD"), pAdjustMethod = "fdr") # effect of DISEASE on M

At this point I decided to control for some biological variables known to influence my results. I treated them as batch effects. Here is the code:

dds1 <- DESeqDataSetFromHTSeqCount(sampleTable = sampleTable, directory = directory, design = ~AGE+COMORBIDITY+SMOKING+DISEASE_SEVERITY_GOLD+SEX+DISEASE+SEX:DISEASE)
dds1$DISEASE <- relevel(dds1$DISEASE, ref="HD")
dds1$SEX <- relevel(dds1$SEX, ref="M")
dds1 <- DESeq(dds1, fitType="local")
results(dds1, name="SEXF.DISEASEDISEASE", pAdjustMethod = "fdr") # DISEASE effect is different across SEX
results(dds1, list( c("DISEASE_DISEASE_vs_HD","SEXF.DISEASEDISEASE") ), pAdjustMethod = "fdr") # effect of DISEASE on F
results(dds1, contrast=c("DISEASE","DISEASE","HD"), pAdjustMethod = "fdr") # effect of DISEASE on M

But when I compared the results from dds and dds1, I found that they are exactly the same. Considering that we know that variables such as age, disease severity, and comorbidity affect our data, I was wondering: where is the mistake in my code? Would it be better to use the LRT, with something like the following?

dds1 <- DESeqDataSetFromHTSeqCount(sampleTable = sampleTable, directory = directory, design = ~AGE+COMORBIDITY+SMOKING+DISEASE_SEVERITY_GOLD+SEX+DISEASE+SEX:DISEASE)
dds1$DISEASE <- relevel(dds1$DISEASE, ref="HD")
dds1$SEX <- relevel(dds1$SEX, ref="M")
dds1 <- DESeq(dds1, fitType="local", test="LRT", reduced=~AGE+COMORBIDITY+SMOKING+DISEASE_SEVERITY_GOLD)

Thanks a lot for all your suggestions and explanations.

Barbara

Tags: DESeq2

@mikelove, Last seen 2 days ago, United States:

For questions about statistical analysis and interpretation, I recommend working with a local statistician or someone familiar with linear models in R.
2022-08-14 15:17:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2462531179189682, "perplexity": 14336.904360927823}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572043.2/warc/CC-MAIN-20220814143522-20220814173522-00447.warc.gz"}
https://www.expii.com/t/improper-fraction-to-mixed-number-conversion-practice-9085
Expii

# Improper Fraction to Mixed Number — Conversion & Practice - Expii

To convert an improper fraction to a mixed number, divide the numerator by the denominator. The quotient is the whole-number part, the remainder is the new numerator, and the denominator stays the same.
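For example, 13/4: 13 ÷ 4 = 3 remainder 1, so 13/4 = 3 1/4. A tiny Python sketch of the same rule (added for illustration):

```python
def to_mixed(numerator, denominator):
    """Improper fraction -> (whole part, new numerator, same denominator)."""
    whole, remainder = divmod(numerator, denominator)
    return whole, remainder, denominator

print(to_mixed(13, 4))  # (3, 1, 4), i.e. 3 1/4
```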
2021-04-10 11:18:36
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9394261240959167, "perplexity": 796.9879027944257}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038056869.3/warc/CC-MAIN-20210410105831-20210410135831-00386.warc.gz"}
http://www.solipsys.co.uk/cgi-bin/sews.py?Identity
# Identity

The identity, or identity element, is a special element of a set under a binary operation: combined with any element of the set, it leaves that element unchanged. It is normally denoted by $e$.

If, for a set $A$ under a binary operation $*$, there exists an element $e$ such that $a * e = a$ for all $a \in A$, then $e$ is called the identity element.

Some care has to be taken if the operation is non-commutative, since the definition above specifically defines what is called the right identity. The left identity is defined as an element $e$ such that $e * a = a$ for all $a \in A$.

The identity element of the set of integers under the binary operation of addition is the number zero.
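The left/right distinction is not vacuous. Here is a small example, added for illustration (not from the original page): on any set $A$ with at least two elements, define the operation

$$a * b = b \quad \text{for all } a, b \in A.$$

Then every element $e \in A$ is a left identity, since $e * a = a$ for all $a$, but no element is a right identity, because $a * e = e \ne a$ whenever $a \ne e$.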
2020-01-23 19:49:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8695306181907654, "perplexity": 184.98100856843558}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250613416.54/warc/CC-MAIN-20200123191130-20200123220130-00219.warc.gz"}
https://physics.aps.org/story/v27/st2
Focus: Relativity Powers Your Car Battery

Phys. Rev. Focus 27, 2

The lead-acid battery found in most cars owes much of its voltage to relativistic effects in the lead atom, as shown by simulations.

You don't need a near-light-speed spaceship to see the effects of relativity; they can arise even in a slow-moving automobile. The lead-acid battery that starts most car engines gets about 80 percent of its voltage from relativity, according to theoretical work in the 7 January Physical Review Letters. The relativistic effect comes from fast-moving electrons in the lead atom. The computer simulations also explain why tin-acid batteries don't work, despite apparent similarities between tin and lead.

Electrons typically orbit their atoms at speeds much less than the speed of light, so relativistic effects can largely be ignored when describing atomic properties. But notable exceptions include the heaviest elements in the periodic table. Their electrons must orbit at near light speed to counter the strong attraction of their large nuclei. According to relativity, these high-energy electrons act in some ways as though they have greater mass, so their orbitals must shrink in size compared with slower electrons to maintain the same angular momentum. This contraction, which is most pronounced in the spherically symmetric s-orbitals of heavy elements, explains why gold has a yellowish hue and why mercury is liquid at room temperature [1].

Previous work has studied the relativistic effects on lead's crystal structure, but little research has been done on this heavy element's chemical properties. So Rajeev Ahuja of Uppsala University in Sweden and his colleagues decided to study the most ubiquitous form of lead chemistry: the lead-acid battery. This 150-year-old technology is based on cells consisting of two plates, made of lead and lead dioxide ($PbO_2$), immersed in sulfuric acid ($H_2SO_4$). The lead releases electrons to become lead sulfate ($PbSO_4$), while the lead dioxide gains electrons and also becomes lead sulfate. The combination of these two reactions results in a voltage difference of 2.1 volts between the two plates.

Although theoretical models of the lead-acid battery already exist, Ahuja and his collaborators are the first to derive one from fundamental physics principles. To find the cell's voltage, the team calculated the energy difference between the electron configurations of the reactants and the products. As with textbook physics problems involving balls rolling down hills, there was no need to simulate the details of intermediate states, as long as the initial and final energies could be calculated.

"The really difficult part is simulating the sulfuric acid electrolyte," says team member Pekka Pyykkö of the University of Helsinki. To avoid it, the researchers imagined that the reaction started not with the acid, but with the creation of the acid from $SO_3$, which is easier to simulate. At the end they subtracted the energy for the acid creation (known from previous measurements) from the total.

By switching relativistic parts of the models "on" and "off," the team found that relativity accounts for 1.7 volts of a single cell, which means that about 10 of the 12 volts in a car battery come from relativistic effects. Without relativity, the authors argue, lead would act more like tin, which is just above it in the periodic table and which has the same number of electrons (four) in its outermost s- and p-orbitals.
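For a rough sense of the numbers involved, here is a back-of-envelope sketch (added here, not a calculation from the paper): in a hydrogen-like picture, a 1s electron moves at about $Z\alpha c$, and its orbital radius shrinks by the relativistic factor $1/\gamma$; screening and many-electron effects are ignored.

```python
# Hydrogen-like estimate: a 1s electron moves at v = Z * alpha * c, and its
# orbital radius contracts by 1/gamma, with gamma = 1/sqrt(1 - (v/c)^2).
# This ignores screening and many-electron effects.
ALPHA = 1 / 137.036  # fine-structure constant

def gamma_1s(z):
    beta = z * ALPHA  # v/c of a hydrogen-like 1s electron
    return 1.0 / (1.0 - beta ** 2) ** 0.5

for name, z in [("Sn", 50), ("Pb", 82)]:
    g = gamma_1s(z)
    print(f"{name} (Z={z}): v/c = {z * ALPHA:.2f}, "
          f"1s radius contraction = {100 * (1 - 1 / g):.0f}%")
# Sn (Z=50): v/c = 0.36, 1s radius contraction = 7%
# Pb (Z=82): v/c = 0.60, 1s radius contraction = 20%
```

The factor-of-three gap between tin and lead in this crude estimate matches the qualitative argument in the article.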
But tin's nucleus has only 50 protons, compared with lead's 82, so the relativistic contraction of tin's outermost s-orbital is much less. Additional simulations showed that a hypothetical tin-acid battery would produce insufficient voltage to be practical, because tin dioxide does not attract electrons strongly enough. Tin's comparatively loose s-orbital does not provide as deep an energy well for electrons as lead does, the team found. In the past, researchers only had a qualitative understanding of why tin-acid batteries never worked out.

Ram Seshadri of the University of California, Santa Barbara, says that relativistic effects were expected, but he had no idea that they would be so dominant. "On the scope of the work, the ability to reliably simulate so complex a device as a lead-acid battery from (almost) first-principles, including all relativistic effects, is a triumph of modeling," Seshadri says.

–Michael Schirber

Michael Schirber is a Corresponding Editor for Physics based in Lyon, France.

References

1. P. Pyykkö, "Relativistic Quantum Chemistry," Adv. Quantum Chem. 11, 353 (1978).
2019-04-20 17:20:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 4, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42341986298561096, "perplexity": 1746.5243090106494}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578529898.48/warc/CC-MAIN-20190420160858-20190420182858-00439.warc.gz"}
http://nrich.maths.org/public/leg.php?code=32&cl=2&cldcmpid=6948
Search by Topic

Resources tagged with Multiplication & division, similar to Which Numbers? (1). There are 158 results.

- **Divide it Out** (Stage 2): What is the lowest number which always leaves a remainder of 1 when divided by each of the numbers from 2 to 10?
- **X Marks the Spot** (Stage 3): When the number x 1 x x x is multiplied by 417 this gives the answer 9 x x x 0 5 7. Find the missing digits, each of which is represented by an "x".
- **What Two ...?** (Stage 2, short): 56 406 is the product of two consecutive numbers. What are these two numbers?
- **Thirty Six Exactly** (Stage 3): The number 12 = 2^2 × 3 has 6 factors. What is the smallest natural number with exactly 36 factors?
- **Times Right** (Stage 3): Using the digits 1, 2, 3, 4, 5, 6, 7 and 8, multiply two two-digit numbers to give a four-digit number, so that the expression is correct. How many different solutions can you find?
- **Curious Number** (Stage 2): Can you order the digits from 1-6 to make a number which is divisible by 6, so that when the last digit is removed it becomes a 5-figure number divisible by 5, and so on?
- **Diggits** (Stage 3): Can you find what the last two digits of the number $4^{1999}$ are?
- **The Remainders Game** (Stages 2 and 3): A game that tests your understanding of remainders.
- **Ones Only** (Stage 3): Find the smallest whole number which, when multiplied by 7, gives a product consisting entirely of ones.
- **Long Multiplication** (Stage 3): A 3-digit number is multiplied by a 2-digit number and the calculation is written out as shown with a digit in place of each of the *'s. Complete the whole multiplication sum.
- **What Is Ziffle?** (Stage 2): Can you work out what a ziffle is on the planet Zargon?
- **Number Tracks** (Stage 2): Ben's class were cutting up number tracks. First they cut them into twos and added up the numbers on each piece. What patterns could they see?
- **Odds and Threes** (Stage 2): A game for 2 people using a pack of cards. Turn over 2 cards and try to make an odd number or a multiple of 3.
- **Eminit** (Stage 3): The number 8888...88M9999...99 is divisible by 7, and it starts with the digit 8 repeated 50 times and ends with the digit 9 repeated 50 times. What is the value of the digit M?
- **Which Is Quicker?** (Stage 2): Which is quicker, counting up to 30 in ones or counting up to 300 in tens? Why?
- **Zios and Zepts** (Stage 2): On the planet Vuv there are two sorts of creatures. The Zios have 3 legs and the Zepts have 7 legs. The great planetary explorer Nico counted 52 legs. How many Zios and how many Zepts were there?
- **Factoring Factorials** (Stage 3): Find the highest power of 11 that will divide into 1000! exactly.
- **What's in the Box?** (Stage 2): This big box multiplies anything that goes inside it by the same number. If you know the numbers that come out, what multiplication might be going on in the box?
- **Escape from the Castle** (Stage 2): Skippy and Anna are locked in a room in a large castle. The key to that room, and all the other rooms, is a number. The numbers are locked away in a problem. Can you help them to get out?
- **Tom's Number** (Stage 2): Work out Tom's number from the answers he gives his friend. He will only answer 'yes' or 'no'.
He will only answer 'yes' or 'no'. Ordering Cards Stage: 1 and 2 Challenge Level: This problem is designed to help children to learn, and to use, the two and three times tables. Remainders Stage: 3 Challenge Level: I'm thinking of a number. When my number is divided by 5 the remainder is 4. When my number is divided by 3 the remainder is 2. Can you find my number? Oh! Hidden Inside? Stage: 3 Challenge Level: Find the number which has 8 divisors, such that the product of the divisors is 331776. Mystery Matrix Stage: 2 Challenge Level: Can you fill in this table square? The numbers 2 -12 were used to generate it with just one number used twice. Factor-multiple Chains Stage: 2 Challenge Level: Can you see how these factor-multiple chains work? Find the chain which contains the smallest possible numbers. How about the largest possible numbers? Highest and Lowest Stage: 2 Challenge Level: Put operation signs between the numbers 3 4 5 6 to make the highest possible number and lowest possible number. As Easy as 1,2,3 Stage: 3 Challenge Level: When I type a sequence of letters my calculator gives the product of all the numbers in the corresponding memories. What numbers should I store so that when I type 'ONE' it returns 1, and when I type. . . . Penta Post Stage: 2 Challenge Level: Here are the prices for 1st and 2nd class mail within the UK. You have an unlimited number of each of these stamps. Which stamps would you need to post a parcel weighing 825g? Napier's Bones Stage: 2 Challenge Level: The Scot, John Napier, invented these strips about 400 years ago to help calculate multiplication and division. Can you work out how to use Napier's bones to find the answer to these multiplications? Oranges and Lemons Stage: 2 Challenge Level: On the table there is a pile of oranges and lemons that weighs exactly one kilogram. Using the information, can you work out how many lemons there are? Super Shapes Stage: 2 Short Challenge Level: The value of the circle changes in each of the following problems. Can you discover its value in each problem? Rocco's Race Stage: 2 Short Challenge Level: Rocco ran in a 200 m race for his class. Use the information to find out how many runners there were in the race and what Rocco's finishing position was. Throw a 100 Stage: 2 Challenge Level: Can you score 100 by throwing rings on this board? Is there more than one way to do it? Multiplication Squares Stage: 2 Challenge Level: Can you work out the arrangement of the digits in the square so that the given products are correct? The numbers 1 - 9 may be used once and once only. Calendar Calculations Stage: 2 Challenge Level: Try adding together the dates of all the days in one week. Now multiply the first date by 7 and add 21. Can you explain what happens? A Chance to Win? Stage: 3 Challenge Level: Imagine you were given the chance to win some money... and imagine you had nothing to lose... A Mixed-up Clock Stage: 2 Challenge Level: There is a clock-face where the numbers have become all mixed up. Can you find out where all the numbers have got to from these ten statements? Book Codes Stage: 2 Challenge Level: Look on the back of any modern book and you will find an ISBN code. Take this code and calculate this sum in the way shown. Can you see what the answers always have in common? Clock Face Stage: 2 Challenge Level: Where can you draw a line on a clock face so that the numbers on both sides have the same total?
Clever Santa Stage: 2 Challenge Level: All the girls would like a puzzle each for Christmas and all the boys would like a book each. Solve the riddle to find out how many puzzles and books Santa left. Magic Potting Sheds Stage: 3 Challenge Level: Mr McGregor has a magic potting shed. Overnight, the number of plants in it doubles. He'd like to put the same number of plants in each of three gardens, planting one garden each day. Can he do it? Make 100 Stage: 2 Challenge Level: Find at least one way to put in some operation signs (+ - x ÷) to make these digits come to 100. A First Product Sudoku Stage: 3 Challenge Level: Given the products of adjacent cells, can you complete this Sudoku? Repeaters Stage: 3 Challenge Level: Choose any 3 digits and make a 6 digit number by repeating the 3 digits in the same order (e.g. 594594). Explain why whatever digits you choose the number will always be divisible by 7, 11 and 13. Gift Stacks Stage: 2 Challenge Level: Use the information to work out how many gifts there are in each pile. The Deca Tree Stage: 2 Challenge Level: Find out what a Deca Tree is and then work out how many leaves there will be after the woodcutter has cut off a trunk, a branch, a twig and a leaf. How Do You Do It? Stage: 2 Challenge Level: This group activity will encourage you to share calculation strategies and to think about which strategy might be the most efficient. Machines Stage: 2 Challenge Level: What is happening at each box in these machines? Square Subtraction Stage: 2 Challenge Level: Look at what happens when you take a number, square it and subtract your answer. What kind of number do you get? Can you prove it? Four Goodness Sake Stage: 2 Challenge Level: Use 4 four times with simple operations so that you get the answer 12. Can you make 15, 16 and 17 too?
2014-08-22 09:53:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31785574555397034, "perplexity": 1528.4883572793444}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500823528.84/warc/CC-MAIN-20140820021343-00051-ip-10-180-136-8.ec2.internal.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/college-algebra-11th-edition/chapter-r-review-exercises-page-76/98
## College Algebra (11th Edition) $\dfrac{a^{11/8}}{b^{1/6}}$ $\bf{\text{Solution Outline:}}$ Use the laws of exponents to simplify the given expression, $(a^{3/4}b^{2/3})(a^{5/8}b^{-5/6}) .$ $\bf{\text{Solution Details:}}$ Using the Product Rule of the laws of exponents, which is given by $x^m\cdot x^n=x^{m+n},$ the expression above is equivalent to \begin{array}{l}\require{cancel} a^{\frac{3}{4}+\frac{5}{8}}b^{\frac{2}{3}+\left(-\frac{5}{6} \right)} \\\\= a^{\frac{3}{4}+\frac{5}{8}}b^{\frac{2}{3}-\frac{5}{6}} .\end{array} To add $\dfrac{3}{4}$ and $\dfrac{5}{8},$ change the fractions to similar fractions (same denominator) by using the $LCD$. The $LCD$ of the denominators $4$ and $8$ is $8$ since it is the lowest number that can be exactly divided by the denominators. Multiplying the fractions by an expression equal to $1$ that will make the denominator equal to the $LCD$ results in \begin{array}{l}\require{cancel} \dfrac{3}{4}\cdot\dfrac{2}{2}+\dfrac{5}{8} \\\\= \dfrac{6}{8}+\dfrac{5}{8} \\\\= \dfrac{11}{8} .\end{array} To simplify the expression $\dfrac{2}{3}-\dfrac{5}{6} ,$ change the fractions to similar fractions (same denominator) by using the $LCD$. The $LCD$ of the denominators $3$ and $6$ is $6$ since it is the lowest number that can be exactly divided by the denominators. Multiplying the fractions by an expression equal to $1$ that will make the denominator equal to the $LCD$ results in \begin{array}{l}\require{cancel} \dfrac{2}{3}\cdot\dfrac{2}{2}-\dfrac{5}{6} \\\\= \dfrac{4}{6}-\dfrac{5}{6} \\\\= \dfrac{4-5}{6} \\\\= \dfrac{-1}{6} \\\\= -\dfrac{1}{6} .\end{array} The expression, $a^{\frac{3}{4}+\frac{5}{8}}b^{\frac{2}{3}-\frac{5}{6}} ,$ simplifies to \begin{array}{l}\require{cancel} a^{\frac{11}{8}}b^{-\frac{1}{6}} .\end{array} Using the Negative Exponent Rule of the laws of exponents, which states that $x^{-m}=\dfrac{1}{x^m}$ or $\dfrac{1}{x^{-m}}=x^m,$ the expression above is equivalent to \begin{array}{l}\require{cancel} \dfrac{a^{\frac{11}{8}}}{b^{\frac{1}{6}}} \\\\= \dfrac{a^{11/8}}{b^{1/6}} .\end{array}
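A quick numeric spot-check of this simplification (not part of the original solution; plain Python with arbitrary sample values for a and b):

a, b = 2.0, 3.0
lhs = (a**(3/4) * b**(2/3)) * (a**(5/8) * b**(-5/6))
rhs = a**(11/8) / b**(1/6)
print(lhs, rhs)  # both print the same value (about 2.1598), confirming the simplification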
2019-07-23 10:18:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9939800500869751, "perplexity": 663.4580392437925}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195529175.83/warc/CC-MAIN-20190723085031-20190723111031-00371.warc.gz"}
http://chemistry.stackexchange.com/questions/9183/can-someone-please-explain-buffers-to-me
# Can someone please explain buffers to me? This is what I understand so far: The concept of a buffer is to minimize any swings in pH. In order to do this you need to create a solution with both acidic and basic compounds. To create one, you start with an acid: $\ce{HA + H2O -> A- + H3O+}$ [My first question is why, for buffers, the acid needs to be weak.] The buffer is then created by adding in a salt containing the anion of the acid, which allows for both the acid and its conjugate base to exist in the solution. Here, why doesn't the conjugate base react with the acid? Second, why does this help minimize pH changes? I see that since you have a weak acid the conjugate base will be strong and can react with any acid added. However, if you add base what is going to react with it? Isn't the acid too weak to do so? Third, what happens to the $\ce{H3O+}$ that exists in the solution? - Writing out the equilibrium expression for a weak acid in solution: $$[\ce{H3O+}] = K_a\frac{[HA]}{[A^-]}$$ Buffers need to keep the concentrations of the weak acid and its conjugate base similar in order to be effective. Let's say you add a small amount of base. Some of the $\ce{HA}$ molecules are converted into $\ce{A^-}$ ions. The ratio $\frac{[\ce{HA}]}{[\ce{A^-}]}$ changes but not by a large amount. If we add a small amount of acid, we generate more $\ce{HA}$. Again, the ratio $\frac{[\ce{HA}]}{[\ce{A^-}]}$ changes but not by a large amount. Why does the acid need to be weak for buffers? If you look at the expression, the concentration of hydronium ions is directly related to the ratio of the conjugate acid and its base. If you take a strong acid, for instance hydrochloric, it will almost completely deprotonate in water. Thus the ratio of $\frac{[\ce{HA}]}{[\ce{A^-}]}$ is close to 0 and is negligible, and we use the $K_a$ to directly find the concentration of hydronium ions. As strong acids have very large $K_a$ values, we can directly take the concentration and assume that will be the concentration of our hydronium ions. Why doesn't the conjugate base react with the acid? It depends on the acid. In buffer solutions, competition arises between the acid and the conjugate base. For instance, if the $K_a$ is large relative to $10^{-7}$, the acid ionization will proceed and you will have an equilibrium with the conjugate base reacting with its acid, resulting in no buffer action. If, however, the tendency of the acid in donating hydrogen ions, say, to water, is outcompeted by the tendency of the base to accept those hydrogen ions from water, this net reaction produces $\ce{OH^-}$ ions, the $K_a$ of this acid-base pair is less than $10^{-7}$, and we use $K_b$ to find equilibrium. Derivation of the Henderson-Hasselbalch Equation: Let's use acetic acid and water as an example: $$\ce{H3CCOOH + H2O <=> H3CCOO- + H3O+}$$ $$K_a = \ce{\frac{[H3CCOO-][H3O+]}{[H3CCOOH]}}$$ Taking the log of both sides: $$\log(K_a) = \log(\ce{\frac{[H3CCOO-][H3O+]}{[H3CCOOH]}})$$ Using the law of logs where $\log(xy) = \log(x) + \log(y)$: \begin{aligned} \log(K_a) &= \log\left(\ce{\frac{[H3CCOO-]}{[H3CCOOH]}}\right) + \log(\ce{[H3O+]})\\ -\log(\ce{[H3O+]}) &= \log\left(\ce{\frac{[H3CCOO-]}{[H3CCOOH]}}\right) - \log(K_a) \\ \mathrm{p}\ce{H} &= \log\left(\ce{\frac{[H3CCOO-]}{[H3CCOOH]}}\right) + \mathrm{p}K_a \\ \mathrm{p}\ce{H} &= \mathrm{p}K_a + \log(\frac{[B]}{[A]}) \\ \end{aligned} With buffer systems, you can calculate the $\mathrm{p}\ce{H}$ of any buffer solution. The HH equation has several caveats however. 1.)
you need to have both the acid and its conjugate base in solution. 2.) the extent of ionization must be small enough so that the concentrations of hydronium and hydroxide ions are small relative to $[\ce{HA}]_0$ and $[\ce{A^-}]_0$. - When you have a buffered solution and add a base, why doesn't the base react with the H3O+ ion (which would change the pH)? Why does it prefer to react with the acid instead? –  1110101001 Mar 13 at 6:12 @user2612743: The base would most probably react with $H_3O^+$ as you suggest, but very soon (nanoseconds at most), the acid will "create a new" $H_3O^+$. It is just equilibrium in zillions of molecules. –  ssavec Mar 13 at 7:22 For the part about acids not reacting with conjugate base, is it because the only way they react is $\ce{HA + A- -> A- + HA}$ resulting in no net change of concentration? –  1110101001 Mar 13 at 18:16 And why must only weak acids be used for buffers? Why can't you use a strong one? –  1110101001 Mar 13 at 18:16 @user2612743 I readdressed those issues and clarified them in my post. –  Jun-Goo Kwak Mar 13 at 22:37
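As a small numeric illustration of the Henderson-Hasselbalch equation derived above (a sketch, not code from the thread; the acetic-acid pKa of about 4.76 is a standard textbook value):

import math

def buffer_ph(pka, base_conc, acid_conc):
    # Henderson-Hasselbalch: pH = pKa + log10([A-]/[HA])
    return pka + math.log10(base_conc / acid_conc)

print(buffer_ph(4.76, 0.10, 0.10))  # equal concentrations -> pH equals pKa: 4.76
print(buffer_ph(4.76, 0.15, 0.10))  # adding a little base shifts pH only slightly: ~4.94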
2014-11-27 00:03:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9539700150489807, "perplexity": 1306.9946393768125}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931007625.83/warc/CC-MAIN-20141125155647-00069-ip-10-235-23-156.ec2.internal.warc.gz"}
https://mathematica.stackexchange.com/questions/123969/decrease-magnification-for-printing
Decrease magnification for printing I would like to decrease the magnification to a value of my choosing for printing purposes. How can I do this? • Screen magnification, set at the bottom right of the window, doesn't carry to print output. • In the option inspector we have Notebook Options -> Printing Options -> PrintingOptions -> Magnification. It seems to have no effect on printing output. Update: Below I show the exact steps I take to try to reduce the magnification. I am using Mathematica 11.0.0 on OS X 10.11.6. First I create a notebook with a lot of text in it. My printing style environment is set to "Printout". I go to File -> Print, then choose Open PDF in Preview (on OS X). I get an 11-page PDF. Now I set both the Display Options and Printing Options magnification to 0.5. I still get 11 pages when printing. Now I change the printing environment to "Working" and try again. I get 22 pages regardless of magnification settings. • Are you sure that you selected PrintingStyleEnvironment -> Printing in the advanced options for Printing? If it's set to Working, then I can change the print magnification by setting the Magnification not under Printing options but under Display Options instead. – Jens Aug 15, 2016 at 17:23 • @Jens Yes, it is set to "Printout" in File -> Printing Settings -> Printing Environment. Also in Notebook Options -> Printing Options -> PrintingStyleEnvironment in the Option Inspector, which I think is the same. I'm on OS X v11.0.0. Are you saying that it is working for you? Aug 15, 2016 at 20:02 • Yes, it's working for me on version 11.0.0. I'll see if there are any other options I have manually changed, but I believe the only relevant settings I played with were Magnification in those two sub-lists, under Printing and Display options. I then printed it to PDF (open in Preview), and the number of pages was indeed cut in half when I chose 50% magnification. I've used this a lot, in order to produce two-column PDFs from notebooks (choosing two columns and landscape in the print layout dialog). – Jens Aug 15, 2016 at 20:33 • @Jens I described how I print exactly. Could you look over it and see if there is a difference in how you do it? Aug 15, 2016 at 20:49 • Default.nb sets up the environment via Cell[StyleData[All, "Printout"], Magnification->0.72], so try altering that to see what effect it has. Aug 15, 2016 at 21:03
I can change the Working mag on screen, but not when it prints. Cannot change the Printout mag either. Have tried the printer directly, the MMA print previewer, and printing to PDF files, to no avail. No global or notebook Options Inspector Printout or Working mag settings have any effect, or any private stylesheet Printout environment mag setting. Dec 6, 2016 at 14:50 • @BillN you misunderstand me, can you list the steps you have taken when you add the above to the stylesheet? Specifically, you have to edit the cell structure directly via Cell > Show Expression, but once you've done so, you need to turn it back into normal view, i.e. run Cell > Show Expression a second time. If you forget to do that, it doesn't work. Yes, I've been bitten by this myself. Dec 6, 2016 at 14:58 • Ah...I get what you're asking now (hehe). Yes, the steps you described are exactly what I do. In fact, the resultant "Printout environment" cell displays the example word "Printout" in the reduced magnification I set, but it still prints at the default magnification (or preprint or PDF). I even rebooted the computer, reloaded MMA and the notebook, and checked its personal stylesheet, which has the reduced magnification "Printout" word example as expected. But again it prints at the default mag. FYI, I'm running Windows 10 at default settings for everything (nothing fancy). Tks again. Dec 6, 2016 at 15:20 This answer expands on @rcollyer's answer. I will be very specific as it took me a long time to figure out. 1. Once you have your notebook open in Mathematica, click on format, edit stylesheet. 2. A new window will open giving the private definitions for your notebook. In that window, you will see a link at the top called "Default.nb". 3. Click on that link. It will open a new window. In that window, look for a line: "Local definition for all style in the environment printout". 4. Click the RHS margin of that line and then go to the menu of that window and click on cell/showexpression. 5. The line will now read "Cell[StyleData[All, "Printout"], Magnification->0.72]". Copy that line and then go back to the "private definitions" notebook. 6. Paste that line in the private definitions notebook just after the line with the Default.nb link. 7. Then click on the right-hand margin of this pasted line and change 0.72 to 0.5 or whatever magnification you require. 8. Go to the menu of that window and click on cell/showexpression. It should then appear as "Local definition for all style in the environment printout". • You don't need to edit the Default.nb stylesheet... you can just do something like: SetOptions[ EvaluationNotebook[], StyleDefinitions -> Notebook[{ Cell[StyleData[StyleDefinitions -> "Default.nb"]], Cell[StyleData[All, "Printout"], Magnification -> .5] }] ]. I mean there are hundreds of ways to work with the stylesheet system, but this is an easy, unsophisticated one that is 1 step instead of 10. Oct 11, 2019 at 18:58
2022-06-26 23:56:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.301448792219162, "perplexity": 2670.5784421817257}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103322581.16/warc/CC-MAIN-20220626222503-20220627012503-00762.warc.gz"}
https://www.tutorialspoint.com/dictionary-data-type-in-python
# Dictionary Data Type in Python

Python's dictionaries are a kind of hash table. They work like associative arrays or hashes found in Perl and consist of key-value pairs. A dictionary key can be almost any Python type, but keys are usually numbers or strings. Values, on the other hand, can be any arbitrary Python object.

## Example

Dictionaries are enclosed by curly braces ({ }) and values can be assigned and accessed using square braces ([]). For example −

#!/usr/bin/python
dict = {}
dict['one'] = "This is one"
dict[2] = "This is two"
tinydict = {'name': 'john', 'code': 6734, 'dept': 'sales'}

print dict['one']       # Prints value for 'one' key
print dict[2]           # Prints value for 2 key
print tinydict          # Prints complete dictionary
print tinydict.keys()   # Prints all the keys
print tinydict.values() # Prints all the values

## Output

This produces the following result −

This is one
This is two
{'dept': 'sales', 'code': 6734, 'name': 'john'}
['dept', 'code', 'name']
['sales', 6734, 'john']

Dictionaries have no concept of order among elements. It is incorrect to say that the elements are "out of order"; they are simply unordered.

Updated on 24-Jan-2020 11:32:11
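For reference, a Python 3 version of the same example (an adaptation, not from the original page; it avoids shadowing the built-in name dict). Note that since Python 3.7 dictionaries preserve insertion order, so the "unordered" remark above applies only to older versions:

d = {}
d['one'] = "This is one"
d[2] = "This is two"
tinydict = {'name': 'john', 'code': 6734, 'dept': 'sales'}

print(d['one'])                 # This is one
print(d[2])                     # This is two
print(tinydict)                 # complete dictionary
print(list(tinydict.keys()))    # all the keys
print(list(tinydict.values()))  # all the values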
2022-09-25 10:39:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18207184970378876, "perplexity": 14071.403838283559}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00727.warc.gz"}
https://electronics.stackexchange.com/questions/539869/why-does-pin-numbering-and-placement-in-general-appears-to-be-done-so-haphazar
# Why does pin numbering and placement (in general) appear to be done so haphazardly to the uninformed person (me)? Be it an ESP development board, an IC controller, even an LED segment display itself, pin placement never seems to follow any logic when it comes to actual numbering or function. I can't find a simple general answer to why this is. I suppose there are a few parts to this question: ### 1. I assume pin placement has some design factors in it, such as trace length and EMF, am I correct on this? For instance, I would think it would make more sense to put the VIN next to the 5V, next to the 3.3V, next to the ground on the Arduino, just to keep all the power pins grouped together. Is there a technical reason they are not, or is it just a design choice? ### 2. When it comes to ESP development boards, why are the GPIO pins usually all over the place? They seem mostly out of order, and why is the SPI clock always on the opposite side of the board from the MOSI/MISO pins? ### 3. When it comes to ICs, why are pins with similar functions not always completely grouped? A great example of this is the MAX7219, not to list them all but pins 1-5 are DIN, DIG0, DIG4, GND, DIG6. Why not put the digits in the order of pins? It doesn't make sense to me, can someone explain? • for #2, that isn't true of all devices. for #3, Yeah that '7219 does seem a bit sadistic. For lines that carry power or high dI/dt, there can be actual reasons, like keeping important loops small as you suggest. For data lines? It would have to be historical, esp. something like a 7-segment display which goes back a long way. Don't know. Might have to do with internal layout of the IC, so the bonding wires don't cross over for instance. Dec 30, 2020 at 2:46 • The '7219 sounds like an example of why it's a bad idea to let the PCB layout person decide the connector pinouts. See, for example, the ancient S-100 bus. Dec 30, 2020 at 4:40 • glad this is an interesting question! hopefully someone with some historical knowledge can come in Dec 30, 2020 at 18:46 I assume pin placement has some design factors in it, such as trace length and EMF, am I correct on this? Yes, but sometimes it is more about internal routing than external signals. Whether it be tracks on a PCB, or metallization layers on the surface of a monolithic IC, the shorter the route and the fewer vias needed the better. The more compact the layout the harder it is to get signals to pins without being blocked by other tracks, so the designer may decide to change the pin layout instead. For example the Z80 has scrambled data pins for no apparent reason, but it makes sense once you understand how it is laid out internally. But wouldn't ease of use be more important than having the shortest possible route inside the IC or module? In many cases no, because the same problem occurs on the PCB the device is put on. That nice orderly pin sequence may get messed up as soon as you try to route the signals to other places. Modern PCB layout programs take care of ensuring that signals go to the correct places, so you don't need to worry about physical pin assignments, just ensure they are correct on the schematic (where you can arrange the symbol's pin order any way you like). MCU GPIO pins are often used for 'random' functions rather than arranged in groups, so the pin order (or even which port it is on) isn't an issue. Some MCUs have programmable pin assignments for special functions as well, allowing even more flexibility in pin routing.
The PCB designer can then route signals to whichever 'random' MCU pins produce the best layout, and do the assignments in software. For instance, I would think it would make more sense to put the VIN next to the 5V, next to the 3.3V, next to the ground on the Arduino. Putting all the power pins right next to each other is good for EMI, but bad for safety. The Arduino was designed for hobbyists with little electronics experience who are likely to make mistakes such as accidentally shorting adjacent pins while working on it. It might be better to have power pins separated by others that don't mind being shorted to them. For example if the Arduino's VIN pin was right next to the +5V pin, shorting them together would apply high voltage to the MCU which would probably destroy it. When it comes to ESP development boards, why are the GPIO pins usually all over the place? Most likely for some of the reasons stated above. To keep the module size down the chip pins will most likely be routed to the nearest module pins. IC pin ordering will be influenced by signal routing on the chip itself. There are several reasons for pin placement, and user convenience is evidently not at the top of the list. Within an IC package the pins have to match the pads on the die itself and that's likely to be driven by what's going on in the silicon, which may be difficult to describe, somewhat like a high density PCB with limited layers. In some cases it may be actively desirable to keep high-speed signals apart. From an end-user point of view it may or may not matter - if you're hand-wiring an Arduino to a breadboard then positioning isn't really important, you just plug the wires in as needed. If you're tracking a PCB for a wifi module then for the most part you just route the tracks wherever they need to go.
2022-07-07 03:36:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24922752380371094, "perplexity": 1260.3671268196972}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104683683.99/warc/CC-MAIN-20220707033101-20220707063101-00593.warc.gz"}
http://openstudy.com/updates/50a74745e4b082f0b852ebde
## zigzagoon2000 The velocity of sound in air is given by the equation v = 20√(273+t) where v is the velocity in meters per second and t is the temperature in degrees Celsius. Find the velocity when the temperature is 117ºC. Round to the nearest meter/sec. http://i1255.photobucket.com/albums/hh639/zigzagoon20001/Question19.jpg A. 20 meters per second B. 330 meters per second C. 395 meters per second D. 216 meters per second A, B, C, or D? Please explain (: 2 years ago 1. ganeshie8 $$v = 20\sqrt{273+ t}$$ since, u want to find the velocity when t = 117, simply replace t with 117 $$v = 20\sqrt{273+ 117}$$ = ? 2. zigzagoon2000 C. Thank you 3. ganeshie8 good work ! yw :) 4. mayankdevnani any confusion? @zigzagoon2000 5. mayankdevnani Plug the temperature of 117 into the equation for t and solve. 6. mayankdevnani and then over @ganeshie8
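A one-line check of the arithmetic (plain Python, not part of the original thread):

import math
print(round(20 * math.sqrt(273 + 117)))  # 395 meters per second, i.e. answer C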
2014-11-24 14:05:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7754210233688354, "perplexity": 1425.3538152358974}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416400380638.30/warc/CC-MAIN-20141119123300-00018-ip-10-235-23-156.ec2.internal.warc.gz"}
https://www.gamedev.net/forums/topic/577349-sse-in-assembly---wrong-output/
• Advertisement # SSE in Assembly - wrong output This topic is 2776 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic. If you intended to correct an error in the post then please contact us. ## Recommended Posts hi, I put together a small app to try using SSE in assembly, but I am getting the wrong output. these are the .cpp file and the .asm file #define WIN32_LEAN_AND_MEAN#include <iostream>using namespace std;extern "C" int *myFunc(float *a, float *b, float *result);int main(int argc, char *argv[]){ float inA[] = {1.0f, 2.0f, 3.0f, 4.0f}; float inB[] = {1.0f, 2.0f, 3.0f, 4.0f}; float ret[4]; myFunc(inA, inB, ret); cout << ret[0] << endl; cout << ret[1] << endl; cout << ret[2] << endl; cout << ret[3] << endl; system("pause"); return 0;} .586P.XMM.MODEL FLAT, C.STACK.DATA.CODEmyFunc PROC a:DWORD, b:DWORD, result:DWORDmovups xmm0, [a]movups xmm1, addps xmm0, xmm1movups [result], xmm0retmyFunc ENDPEND I was expecting to get the result Quote: 2468 but instead got Quote: -1.07374e+008-1.07374e+008-1.07374e+008-1.07374e+008 can anyone help me out? #### Share this post ##### Share on other sites Advertisement Hey there, One thing I did notice is you did not align your data. SSE can only accept 16 bit aligned data. So I think you need to do a __declspec(align(16)) (if the underscores did not get posted then there are two underscores before declspec). Ive not seen the movups command before. I would first load the data into a normal asm register (eax for example) and the do movdqa xmm0, [eax]. Hope this helps #### Share this post ##### Share on other sites You should probably try using the compiler intriniscs. That would let you write all the code inline with your C++ code, without needing the extra assember file at all. #### Share this post ##### Share on other sites Hmmmm, i think that a, being a label, is actually the address of the pointer to the floating point data (&a, not a, in c notation). So i think you are currently loading &a and 96 more garbage bytes into xmm0 and you actualy need to dereference twice. Something like this: mov rcx, [a]movups xmm0, [rcx] a and b should already be in the CPU's registers according to C's calling convention. Compiler SSE intrinsics are much more convenient and are the same at least across GCC, Microsoft and Intel compilers, but some people say that they produce suboptimal code in some cases. I hope I helped... [Edited by - D_Tr on July 20, 2010 4:42:47 PM] #### Share this post ##### Share on other sites Kazuo5000, you are right. Someone on another forum also told me that it was because you can't directly dereference a label. this is allowed: mov edi, amovups xmm0, [edi] but this is not: mov xmm0, [a] I would like to know why I have to do an extra copy for nothing though =/ #### Share this post ##### Share on other sites CPPNick, I think its because you cant directly access the SSE registers like you can with normal CPU registers, but I am not entirely sure on this. You could always try loading in the address of [a] instead using the lea command. Personally, I don't have a great deal of knowledge in SSE but have a little bit of experience in it. Again I hope this helps #### Share this post ##### Share on other sites Right, posts cleaned up. 1) If you have a question which isn't directly related to the thread please post a new thread about it. 2) but don't jump down someone elses throat when they do that however. 
Is it just me or are these proc declarations a bit confusing? They provide a level of abstraction, which should not be there when you are a beginner. The "pure" way to write a function in assembly is to write the function code under a label and jump to it using the call instruction, accessing the data according to the calling convention you are planning to use (in the OP's case the C calling convention). Doing things the "pure" way will help you understand assembly language better. A great assembly tutorial using nasm (in the form of a short book) can be found here

Just use the intrinsics.

#include <xmmintrin.h>
int main()
{
    __declspec(align(16)) float inA[] = {1.0f, 2.0f, 3.0f, 4.0f};
    __declspec(align(16)) float inB[] = {1.0f, 2.0f, 3.0f, 4.0f};
    __m128 i1 = _mm_load_ps(inA); // if you don't align it, you have to use _mm_loadu_ps
    __m128 i2 = _mm_load_ps(inB);
    i1 = _mm_add_ps(i1, i2);
    return 0;
}

thanks phantom. D_Tr: I would rather use MASM..just preference I guess. The way it looks makes more sense to me. I'm past the point of having a hard time understanding assembly, I just need to learn the details and nuances now. I will definitely take a look at the book you recommended...at first glance I am relieved that it's only 195 pages....not nearly as intimidating as "The Art of Assembly" which is over 1400...scary

clashie: I have tried that, and it did seem to help. I have a question though. In an isolated case like your example, you have to move a total of 48 bytes of data to get the data in and back out. Does it really work out to be faster? I am doing what I am doing for two reasons; One is just to learn some more assembly, and the second is to try and avoid as much overhead as possible and streamline the procedure. I made the inner embedded loop of a function in assembly using the same method as my original post, and it turned out to be slower. Once I examined the disassembly, I found that VC++ was adding another 15 lines of junk before each call to my assembly procedure to protect the contents of the registers I was overwriting. I figured that if I made the whole procedure assembly instead of only the inner embedded loop, I could avoid that.

The problem is your case is somewhat artificial in nature. Generally, when using SSE/SIMD you'll want to be running through large chunks of data which are already nicely formatted for the SSE instructions to handle. So, the source data would just be a chunk of a larger chunk of (aligned) source data already in memory. As for the assembly 'extra copy', well that is somewhat unavoidable. If we assume the function call uses the stack for all parameters then a 'pure' assembly (without the PROC helper) would have to pull the parameters from the stack anyway and place them into a register so they could be treated as an address to get the data from. The thing is, as mentioned above, generally with SSE you'll be blasting large chunks of data, so you'll perform that load once and then just increment the register(s) to get to the new source data on each run over the loop. A 'fast call' calling convention, which will pass some parameters via registers, might allow you to keep the address in a register and avoid an 'extra copy' but that's about the only way around it.
#### Share this post ##### Share on other sites Quote: Original post by CPPNickclashie: I have tried that, and it did seem to help. You've tried intrinsics but they don't help? Wut? Quote: Original post by CPPNickI have a question though. In an isolated case like your example, you have to move a total of 48 bytes of data to get the the data in and back out. Does it really work out to be faster? Dude, the intrinsics boil down to 3 movaps and an addps. i.e. Exactly the same amount of data is moved in that method as in your asm. Well, no. That's not true. Since you are using the un-aligned version, your code would be moving up to 96 bytes around.... Quote: Original post by CPPNick I am doing what I am doing for two reasons; One is just to learn some more assembly, Don't bother. Use intrinsics. Quote: Original post by CPPNickand the second is to try and avoid as much overhead as possible and streamline the procedure. Then use intrinsics. It's what they are there for. Quote: Original post by CPPNickI made the inner embedded loop of a function in assembly using the same method as my original post, and it turned out to be slower. Yeah, don't use assembler. Use intrinsics because no matter how good you *think* you are, the compiler will outperform you 99.99% of the time. Quote: Original post by CPPNickOnce I examined the disassembly, I found that VC++ was adding another 15 lines of junk before each call to my assembly procedure to protect the contents of the registers I was overwriting. I figured that if I made the whole procedure assembly instead of only the inner embedded loop, I could avoid that. And here's the thing, if you had used intrinsics, the compiler would understand you are writing SSE code and it will help you to optimise it. Using asm these days is not a good idea - you will only end up fighting with the compiler. #### Share this post ##### Share on other sites • Advertisement • Advertisement • ### Popular Tags • Advertisement • ### Popular Now • 46 • 11 • 17 • 11 • 13 • Advertisement
2018-02-25 10:10:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1993928849697113, "perplexity": 2289.8021411595214}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891816351.97/warc/CC-MAIN-20180225090753-20180225110753-00086.warc.gz"}
https://discuss.pytorch.org/t/using-autograd-to-compute-jacobian-of-partial-derivatives/62912
# Using autograd to compute Jacobian of partial derivatives

I apologize if this question is obvious or trivial. I am very new to pytorch and I am trying to understand the autograd.grad function in pytorch. I have a neural network G that takes in inputs (x,t) and outputs (u,v). Here is the code for G:

class GeneratorNet(torch.nn.Module):
    """
    A three hidden-layer generative neural network
    """
    def __init__(self):
        super(GeneratorNet, self).__init__()
        self.hidden0 = nn.Sequential(
            nn.Linear(2, 100),
            nn.LeakyReLU(0.2)
        )
        self.hidden1 = nn.Sequential(
            nn.Linear(100, 100),
            nn.LeakyReLU(0.2)
        )
        self.hidden2 = nn.Sequential(
            nn.Linear(100, 100),
            nn.LeakyReLU(0.2)
        )
        self.out = nn.Sequential(
            nn.Linear(100, 2),
            nn.Tanh()
        )

    def forward(self, x):
        x = self.hidden0(x)
        x = self.hidden1(x)
        x = self.hidden2(x)
        x = self.out(x)
        return x

Or simply G(x,t) = (u(x,t), v(x,t)) where u(x,t) and v(x,t) are scalar valued. Goal: Compute $\frac{\partial u(x,t)}{\partial x}$ and $\frac{\partial u(x,t)}{\partial t}$. At every training step, I have a minibatch of size $100$ so u(x,t) is a [100,1] tensor. Here is my attempt to compute the partial derivatives, where coords is the input (x,t) and just like below I added the requires_grad_(True) flag to the coords as well: tensor = GeneratorNet(coords)
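The post breaks off here. As a hedged sketch of one common way to get these partials with torch.autograd.grad (an assumed continuation, not the original poster's code; it reuses the GeneratorNet class above and assumes import torch and from torch import nn):

import torch

net = GeneratorNet()
coords = torch.rand(100, 2, requires_grad=True)  # each row is one (x, t) pair
out = net(coords)                                # each row is one (u, v) pair
u = out[:, 0:1]                                  # [100, 1] tensor of u values

# Since each u_i depends only on its own (x_i, t_i), backpropagating a
# vector of ones gives the per-sample partials in a single call.
grads = torch.autograd.grad(u, coords,
                            grad_outputs=torch.ones_like(u),
                            create_graph=True)[0]  # shape [100, 2]
du_dx, du_dt = grads[:, 0], grads[:, 1]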
2019-12-12 20:12:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.716500461101532, "perplexity": 4737.913834379022}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540545146.75/warc/CC-MAIN-20191212181310-20191212205310-00415.warc.gz"}
https://es.overleaf.com/latex/templates/sample-policy-memo-for-cornell-info-1200/kybzqhsxjgjk
# Sample Policy Memo for Cornell INFO 1200 Author Alice Chen, with adapted TexMemo package by Rob Oakes AbstractThis sample policy memo serves as a template for Cornell INFO 1200 students who would like to use LaTeX rather than Word. Usage of section headers, bulleted lists, and references are included. Most features including page layout and citation style can be easily tweaked.
2020-03-30 01:23:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 1, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3088817000389099, "perplexity": 7122.414641222077}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370496330.1/warc/CC-MAIN-20200329232328-20200330022328-00481.warc.gz"}
https://k12.libretexts.org/Bookshelves/Mathematics/Geometry/01%3A_Basics_of_Geometry/1.06%3A_Points_that_Partition_Line_Segments
# 1.6: Points that Partition Line Segments

The section formula finds the coordinates of a point that splits a line segment in a given ratio. Recall that a median of a triangle is a line segment that connects a vertex of the triangle to the midpoint of the side opposite the vertex. All triangles have three medians and these three medians intersect in one point called the centroid, shown below. The centroid partitions each median in a 2:1 ratio. Find the coordinates of the centroid, given the coordinates of the vertices of the triangle as shown.

## Partitions

Suppose you have a line segment $$\overline{AB}$$. A point $$P$$ divides this line segment into two parts such that $$AP=mk$$ and $$PB=nk$$. You can say that point $$P$$ partitions segment $$AB$$ in an $$m:n$$ ratio. (Note that $$\dfrac{mk}{nk}=\dfrac{m}{n}$$, a ratio of $$m:n$$.) A natural question to ask is, what are the coordinates of point $$P$$? It turns out that with the help of similar triangles and algebra, you can come up with a formula that will give you the coordinates of point $$P$$ based on the coordinates of $$A$$, the coordinates of $$B$$, and the ratio $$m:n$$. This formula is sometimes referred to as the section formula.

Section Formula: Given $$\overline{AB}$$ with $$A=(x_1, y_1)$$ and $$B=(x_2, y_2)$$, if point $$P$$ partitions $$\overline{AB}$$ in an $$m:n$$ ratio, then the coordinates of point $$P$$ are: $$P=\left (\dfrac{mx_{2}+nx_{1}}{m+n}, \dfrac{my_{2}+ny_{1}}{m+n}\right )$$

## Proving Triangle Similarity

For segment $$\overline{AB}$$ below, draw two right triangles, one with hypotenuse $$\overline{AP}$$ and one with hypotenuse $$\overline{PB}$$. Show that these triangles are similar. Start by drawing the right triangles. Below, the base and height of each triangle has been labeled in green. Clearly these triangles have one pair of congruent angles (the right angles). What other information do you have about the triangles? You know that for each triangle, the ratio $$\dfrac{height}{base}$$ is the slope of $$\overline{AB}$$. Because these two triangles are attached to the same line segment with the same slope, it means that $$\dfrac{h_1}{b_1} = \dfrac{h_2}{b_2}$$. This is equivalent to $$\dfrac{b_2}{b_1}=\dfrac{h_2}{h_1}$$. Two pairs of sides are in the same ratio. Not only is there one pair of congruent angles, but there are also two pairs of corresponding sides with the same ratio. The triangles are similar by $$SAS$$∼.

#### Finding Length and Height

Find the lengths of the bases and heights of each triangle. Use the fact that the triangles are similar to set up and solve proportions for $$x$$ and then for $$y$$ in order to find the coordinates of point $$P$$. The bases and heights can be found in terms of $$x_1$$, $$y_1$$, $$x$$, $$y$$, $$x_2$$, $$y_2$$. Because the triangles are similar, the ratios between pairs of corresponding sides are equal. In particular, you know:

1. $$\dfrac{mk}{nk}=\dfrac{x−x_1}{x_2−x}$$
2. $$\dfrac{mk}{nk}=\dfrac{y−y_1}{y_2−y}$$

You can use algebra to solve the first equation for $$x$$ and the second equation for $$y$$.

1. $$\dfrac{mk}{nk}=\dfrac{x−x_1}{x_2−x}$$ $$\rightarrow \dfrac{m}{n}=\dfrac{x−x_1}{x_2−x}$$ $$\rightarrow mx_2−mx=nx−nx_1$$ $$\rightarrow mx_2+nx_1=mx+nx$$ $$\rightarrow mx_2+nx_1=x(m+n)$$ $$\rightarrow \dfrac{mx_2+nx_1}{m+n}=\dfrac{x(m+n)}{m+n}$$ $$\rightarrow x=\dfrac{mx_2+nx_1}{m+n}$$
2.
$$\dfrac{mk}{nk}=\dfrac{y−y_1}{y_2−y}$$ $$\rightarrow\dfrac{m}{n}=\dfrac{y−y_1}{y_2−y}$$ $$\rightarrow my_2−my=ny−ny_1$$ $$\rightarrow my_2+ny_1=my+ny$$ $$\rightarrow my_2+ny_1=y(m+n)$$ $$\rightarrow \dfrac{my_2+ny_1}{m+n}=\dfrac{y(m+n)}{m+n}$$ $$\rightarrow y=\dfrac{my_2+ny_1}{m+n}$$ Point $$P$$ is at: $$P=\left(\dfrac{mx_2+nx_1}{m+n} ,\dfrac{my_2+ny_1}{m+n}\right)$$ ## Finding the Coordinates of a Point Consider $$\overline{AB}$$ with $$A=(10, 2)$$ and $$B=(4, 1)$$. $$P$$ partitions $$\overline{AB}$$ in a ratio of $$2:3$$. Find the coordinates of point $$P$$. You can use the section formula with $$(x_1,y_1)=(10, 2)$$, $$(x_2, y_2)=(4, 1)$$, $$m=2$$, $$n=3$$. $$P=\left(\dfrac{mx_2+nx_1}{m+n} ,\dfrac{my_2+ny_1}{m+n}\right)=\left(\dfrac{2 \cdot 4+3 \cdot10}{2+3}, \dfrac{2 \cdot 1+3\cdot 2}{2+3}\right)=\left(7.6,1.6\right)$$ You can plot points $$A$$, $$B$$, and $$P$$ to see if this answer is realistic. This does look like $$P$$ partitions the segment from $$A$$ to $$B$$ in a ratio of $$2:3$$. Note that the answer would be different if you were looking for the point that partitioned the segment from $$B$$ to $$A$$. The order of the letters and “direction” of the segment matters. Example $$\PageIndex{1}$$ Earlier, you were asked to find the coordinates of the centroid, given the coordinates of the vertices of the triangle as shown. Solution One way to find the coordinates of the centroid is to use the section formula. You can focus on any of the three medians. Here, look at the median from point A. First, you will need to find the coordinates of the midpoint of $$\overline{BC}$$ (the midpoint formula, a special case of the section formula, is derived in Guided Practice #1 and #2): $$\left(\dfrac{x_2+x_1}{2}, \dfrac{y_2+y_1}{2}\right)=\left (\dfrac{5+6}{2}, \dfrac{5+1}{2}\right)=\left(5.5,3\right )$$ $$\left(\dfrac{mx_2+nx_1}{m+n}, \dfrac{my_2+ny_1}{m+n}\right)=\left(\dfrac{2 \cdot 5.5+1 \cdot 2}{2+1}, \dfrac{2 \cdot 3+1 \cdot 6}{2+1}\right)=\left(\dfrac{13}{3}, 4\right)$$ Looking at the picture, these coordinates for the centroid are realistic. Example $$\PageIndex{2}$$ The midpoint of a line segment is the point exactly in the middle of the line segment. In what ratio does a midpoint partition a segment? Solution 1:1, because the segments connecting the midpoint to each endpoint will be the same length. Example $$\PageIndex{3}$$ The midpoint formula is a special case of the section formula where m=n=1. Derive a formula that calculates the midpoint of the segment connecting $$(x_1,y_1)$$ with $$(x_2, y_2)$$. Solution For a midpoint, $$m=n=1$$. The section formula becomes: $$\left(\dfrac{mx_2+nx_1}{m+n}, \dfrac{my_2+ny_1}{m+n}\right)=\left(\dfrac{x_2+x_1}{2}, \dfrac{y_2+y_1}{2}\right)$$ This is the midpoint formula. Example $$\PageIndex{4}$$ Consider $$\overline{BA}$$ with $$B=(4, 1)$$ and $$A=(10, 2)$$. $$P$$ partitions the segment in a ratio of $$2:3$$. Find the coordinates of point $$P$$. How and why is this answer different from the answer to Example $$C$$? Solution $$(x_1, y_1)=(4,1)$$ and $$(x_2, y_2)=(10,2)$$. $$m=2$$ and $$n=3$$. $$P=\left(\dfrac{mx_2+nx_1}{m+n}, \dfrac{my_2+ny_1}{m+n}\right)=\left(\dfrac{2 \cdot 10+3 \cdot 4}{2+3}, \dfrac{2 \cdot 2+3 \cdot 1}{2+3}\right)=\left(6.4,1.4\right)$$ This answer is different from the answer to Example C because in this case point P is partitioning the segment in a $$2:3$$ ratio starting from point $$B$$. In Example $$C$$, you were starting from point $$A$$.
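A short numeric check of the section formula and the worked examples above (a plain-Python sketch, not part of the original lesson):

def partition_point(A, B, m, n):
    # Section formula: P divides the segment from A to B in the ratio m:n
    (x1, y1), (x2, y2) = A, B
    return ((m * x2 + n * x1) / (m + n), (m * y2 + n * y1) / (m + n))

print(partition_point((10, 2), (4, 1), 2, 3))  # (7.6, 1.6), matching the first worked example
print(partition_point((4, 1), (10, 2), 2, 3))  # (6.4, 1.4), matching Example 4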
### Review

Find the midpoint of each of the following segments defined by the given endpoints.

1. $$(2, 6)$$ and $$(−4, 8)$$
2. $$(1, 9)$$ and $$(−2, 5)$$
3. $$(11, 24)$$ and $$(8, 12)$$
4. $$(1, 3)$$ is the midpoint of $$\overline{AB}$$ with $$A=(−2, 1)$$. Find the coordinates of $$B$$.
5. $$(2, 4)$$ is the midpoint of $$\overline{CD}$$ with $$C=(−5, 9)$$. Find the coordinates of $$D$$.
6. $$(4, 23)$$ is the midpoint of $$\overline{EF}$$ with $$E=(7, 11)$$. Find the coordinates of $$F$$.

Consider $$A=(−9, 4)$$ and $$B=(11, 17)$$.

7. Point $$P_1$$ partitions the segment from $$A$$ to $$B$$ in a $$3:5$$ ratio. Find the coordinates of point $$P_1$$.
8. Point $$P_2$$ partitions the segment from $$B$$ to $$A$$ in a $$3:5$$ ratio. Find the coordinates of point $$P_2$$.
9. Why are the answers to 7 and 8 different?
10. Find the length of $$AP_1$$ and $$P_2B$$. Why should these lengths be the same?

Consider $$C=(−6, −1)$$ and $$D=(4, 8)$$.

11. Point $$P_3$$ partitions the segment from $$C$$ to $$D$$ in a $$1:2$$ ratio. Find the coordinates of point $$P_3$$.
12. Point $$P_4$$ partitions the segment from $$D$$ to $$C$$ in a $$4:5$$ ratio. Find the coordinates of point $$P_4$$.
13. Point $$P=(1, 2)$$ partitions the segment from $$E=(9, 6)$$ to $$F$$ in a $$2:5$$ ratio. Find the coordinates of point $$F$$.
14. Point $$P=(−6, −4)$$ partitions the segment from $$G=(−4, 6)$$ to $$H$$ in a $$5:3$$ ratio. Find the coordinates of point $$H$$.
15. Point $$P=(6,8)$$ partitions the segment from $$I=(−2,1)$$ to $$J$$ in a $$6:7$$ ratio. Find the coordinates of point $$J$$.
16. A triangle is defined by the points $$(5, 6)$$, $$(9, 17)$$, and $$(−2, 1)$$. Find the coordinates of the centroid of the triangle.

## Vocabulary

centroid: The centroid is the point of intersection of the medians in a triangle.
Median: The median of a triangle is the line segment that connects a vertex to the opposite side's midpoint.
midpoint: The midpoint of a line segment is the point on the line segment that splits the segment into two congruent parts.
Midpoint Formula: The midpoint formula says that for endpoints $$(x_{1},y_{1})$$ and $$(x_{2},y_{2})$$, the midpoint is $$\left (\dfrac{x_1+x_2}{2}, \dfrac{y_1+y_2}{2}\right)$$.
Partitions: To partition is to divide into parts.
Section Formula: The section formula states how to find the coordinates of a point that partitions a line segment in a given ratio.
2021-05-12 08:35:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.903671383857727, "perplexity": 146.63146882107668}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991685.16/warc/CC-MAIN-20210512070028-20210512100028-00132.warc.gz"}
https://www.concepts-of-physics.com/pw/html/a/fa/sfa.html
IIT JEE Physics (1978-2018: 41 Years) Topic-wise Complete Solutions

# Ray Tracing and Verification of Snell's Law Using a Rectangular Glass Slab

## Introduction

Snell's laws of refraction are:

1. The incident ray, the refracted ray, and the normal lie in the same plane.
2. The angle of incidence $$(i)$$, the angle of refraction $$(r)$$ and the refractive index of the medium ($$\mu$$) are related by $${\sin i}/{\sin r}=\mu$$.

We verify Snell's law in this experiment. The glass slab produces a lateral shift of the incident ray.

## Apparatus

Rectangular glass slab, white paper, protractor, pencil, scale, pins.

## Procedure

1. Fix a sheet of white paper on a thermocole sheet. Place the slab at its middle. Draw the boundary of the slab, and draw a line RP to meet one of the longer boundaries at P, at an angle (say 30 degrees). Fix two pins A, B vertically on this line about 10 cm apart. Look at the image of the pins from the other side of the slab. Now fix a pin C such that it appears to be in a straight line with the images of A and B. Fix another pin D (at least 10 cm from C) such that all four pins appear to be in a straight line.
2. Remove the pins and join by a straight line the points where the pins C and D were inserted. Extend this line to meet the boundary of the slab at Q. Join PQ. The lines RP, PQ, and QD represent the directions of the incident ray, the refracted ray within the slab, and the emergent ray after the second refraction, respectively.
3. You will find that QD is parallel to RP. Also, it is shifted sideways from the direction of RP. Note that the incident ray bent towards the normal at P, as it moved from the optically rarer medium (air) to the optically denser medium (glass). At Q, the ray going from the optically denser medium (glass) to the optically rarer medium (air) bent away from the normal.
4. Measure the angle of incidence and the angle of refraction and calculate $$\mu$$.
5. Repeat the above steps for incident angles of 45 degrees and 60 degrees.

## Discussion

Write the results in tabular form.
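Step 4 amounts to computing $$\mu = {\sin i}/{\sin r}$$ for each trial. A short script like the sketch below tabulates the results; the angle pairs here are hypothetical readings consistent with glass of $$\mu \approx 1.5$$, not real measurements:

    # sketch of step 4: compute mu = sin(i)/sin(r) for each trial
    # (the angle pairs are hypothetical measurements, not real data)
    import math

    trials = [(30, 19.5), (45, 28.1), (60, 35.3)]   # (i, r) in degrees
    for i, r in trials:
        mu = math.sin(math.radians(i)) / math.sin(math.radians(r))
        print(f"i={i:2d}  r={r:4.1f}  mu={mu:.2f}")   # mu ~ 1.5 each time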
2020-08-07 18:50:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5809724926948547, "perplexity": 949.4162407217152}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737206.16/warc/CC-MAIN-20200807172851-20200807202851-00309.warc.gz"}
https://byjus.com/ap-board/ssc-class-10-maths-chapter-12-applications-of-trigonometry/
# AP Class 10 Maths Chapter 12 Applications of Trigonometry

Trigonometry was first developed for Astronomy and Geography, but now it is used in Engineering, Physics and Chemistry. In the AP Class 10 Maths Chapter 12 Applications of Trigonometry, we learn how to use trigonometric ratios to determine the height and length of an object. Referring to these AP Board Class 10 Maths Chapter 12 Applications of Trigonometry Notes and Solutions is a good way to master the concepts and prepare for the board exams. Students can see from here how the questions are solved step-wise.

## Using Figures To Solve Problems

We should consider the following while we solve problems of height and distance.

• Objects such as trees, towers and buildings should be considered linear for mathematical convenience.
• The angle of elevation should be considered with reference to the horizontal line.
• If the height of the observer is not given in the problem, it should be neglected.

Let us look at a chapter question in the next section, to better understand.

### Class 10 Maths Chapter 12 Applications of Trigonometry Questions

1. Two men on either side of a temple of 50-metre height look at its top at angles of elevation of 30º and 60º respectively. Find the distance between the two men.

Solution: Let BD be the temple with base D. Take the distance between the first person and the temple as AD = x and the distance between the second person and the temple as CD = d.

From ∆ABD:

$$\begin{array}{l}\tan 30^{\circ}=\frac{BD}{AD}\end{array}$$

Substituting the values, we get

$$\begin{array}{l}\frac{1}{\sqrt{3}}=\frac{50}{x}\end{array}$$

$$\begin{array}{l}x=50\sqrt{3}\end{array}$$ ……..(1)

From ∆BCD:

$$\begin{array}{l}\tan 60^{\circ}=\frac{BD}{CD}\end{array}$$

Substituting the values, we get

$$\begin{array}{l}\sqrt{3}=\frac{50}{d}\end{array}$$

$$\begin{array}{l}d=\frac{50}{\sqrt{3}}\end{array}$$ …….(2)

From (1) and (2), the distance between the two men is

$$\begin{array}{l}x+d=50\sqrt{3}+\frac{50}{\sqrt{3}}=\frac{(50\times 3)+50}{\sqrt{3}}=\frac{200}{\sqrt{3}}\end{array}$$

Rationalising the denominator, we get

$$\begin{array}{l}\frac{200}{\sqrt{3}}\times \frac{\sqrt{3}}{\sqrt{3}}=\frac{200\sqrt{3}}{3}\approx 115.47\ \text{metres}\end{array}$$
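A quick numerical check of the arithmetic above (an added sketch, not from the original solution):

    # verify: x = 50/tan(30 deg), d = 50/tan(60 deg), total = 200/sqrt(3)
    import math

    height = 50
    x = height / math.tan(math.radians(30))   # 50*sqrt(3) ~ 86.60 m
    d = height / math.tan(math.radians(60))   # 50/sqrt(3) ~ 28.87 m
    print(x + d)                              # ~ 115.47 m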
2022-09-24 22:49:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.469137042760849, "perplexity": 1384.7086740460177}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00653.warc.gz"}
https://www.aspicteramo.it/classifying-and-balancing-chemical-reactions-calculator/
# classifying and balancing chemical reactions calculator

Chemical reactions occur when two molecules collide with each other in a certain orientation and with enough force, causing a chemical change through the breaking and forming of bonds between atoms. A chemical reaction is a process in which one or more reactants are converted into one or more products: the molecules of one reactant combine with those of another to form a new substance. Chemical reactions, the transformation from one molecular structure to another, are ubiquitous in the world around us.

A chemical equation balancer is an online tool that auto-calculates the coefficients of a chemical equation and provides a detailed solution quickly. To use one:

1. Enter the chemical equation (or build it by clicking on the elements in the periodic table).
2. Click the 'Balance' (or 'Calculate') button.
3. Read off the balanced equation, which is displayed in a few seconds.

Balancing a chemical equation means writing it so that the same number of atoms of each element, and hence the same mass, appears on each side. Consider the reaction

$$2A + 2B \rightarrow 2AB$$

This is balanced, since the numbers of atoms on the left and right sides are exactly equal. Familiar balanced examples:

$$2H_2(g) + O_2(g) \rightarrow 2H_2O(g)$$

$$CaCl_2 + 2AgNO_3 \rightarrow Ca(NO_3)_2 + 2AgCl$$

The usual stoichiometry workflow is: 1) write an equation for the chemical reaction and balance it; 2) convert the problem data (mass, volume) to the amount of substance (moles). Balanced chemical equations are then used to determine the amount of one reactant required to react with a given amount of another.

It is also important to be able to recognize the major types of chemical reactions: synthesis, decomposition, single-replacement, and double-replacement. A redox reaction is a chemical reaction in which oxidation and reduction occur simultaneously; the substance which gains electrons is termed the oxidizing agent. Redox balancers accept half-reaction inputs such as Cr2O7^2- + H^+ + e^- = Cr^3+ + H2O or S^2- + I2 = I^- + S.

Examples of complete chemical equations to balance:

- Fe + Cl2 = FeCl3
- KMnO4 + HCl = KCl + MnCl2 + H2O + Cl2
- K4Fe(CN)6 + H2SO4 + H2O = K2SO4 + FeSO4 + (NH4)2SO4 + CO
- C6H5COOH + O2 = CO2 + H2O
- K4Fe(CN)6 + KMnO4 + H2SO4 = KHSO4 + Fe2(SO4)3 + MnSO4 + HNO3 + CO2 + H2O

Simple chemical equations can be balanced by inspection, that is, by trial and error. More systematic balancers use an algebraic method: the capability to perform standard matrix inversion is a widely available, built-in function of scientific calculators and basic spreadsheets, and online calculators balance equations of chemical reactions this way, using the greatest common divisor and the lowest common multiple of two integers to clear fractional coefficients.
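As a concrete illustration of the algebraic (matrix) method mentioned above, here is a sketch using SymPy; the matrix setup is our own assumption about how such a balancer can be built, not the calculator's actual code:

    # algebraic balancing of Fe + Cl2 = FeCl3:
    # columns are species (Fe, Cl2, FeCl3), rows are elements (Fe, Cl),
    # with product atom counts negated so a null-space vector balances both sides
    from sympy import Matrix, lcm

    A = Matrix([[1, 0, -1],    # Fe balance
                [0, 2, -3]])   # Cl balance
    v = A.nullspace()[0]                      # rational solution [1, 3/2, 1]
    scale = lcm([term.q for term in v])       # clear the denominators
    print([int(term * scale) for term in v])  # [2, 3, 2] -> 2Fe + 3Cl2 = 2FeCl3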
2023-03-31 02:54:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49461647868156433, "perplexity": 2685.235453337597}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949533.16/warc/CC-MAIN-20230331020535-20230331050535-00145.warc.gz"}
https://math.stackexchange.com/questions/1602188/closed-form-for-the-integral-int-01-tn-log-gammatadt
# Closed Form for the Integral: $\int_0^1 t^n\log\Gamma(t+a)dt$

I am wondering if someone could tell me whether or not the following integral has a closed form representation: $$\int_0^1 t^n\log\Gamma(t+a)dt$$ In Srivastava's and Choi's wonderful book Zeta and q-Zeta Functions and Associated Series and Integrals, they give a closed form for the following Digamma integral: $$\int_0^1 t^n\psi(t)dt$$ and they also list specific cases for the integral I am searching for, but not the general case. The formulae for the integrals containing $n>2$ become extremely complex, which makes me wonder whether a closed form exists, but I was hoping someone could settle this question for me.

• Almost certainly not. – Alex M. Jan 6 '16 at 16:34
• The 1998 paper by Adamchik "Polygamma functions of negative order" may interest you. The answer is given in terms of $\psi^{(-n)}(t)$ (integrals of polygamma, or "Negapolygamma" in Adamchik 'parlance'). An example using Alpha. Another paper by Espinosa and Moll. – Raymond Manzoni Jan 6 '16 at 17:02
• Raymond: Thank you so much for the references, I will take a look! – FofX Jan 6 '16 at 17:18
• Glad it interested you @FofX. Note that Gosper's 'negapolygamma' is in fact defined by $$\psi^{(-n-2)}(z):=\frac 1{n!}\int_0^z (z-t)^{n}\log\Gamma(t)\,dt$$ – Raymond Manzoni Jan 6 '16 at 17:26

This is purely guessing. For integer $n$, we seem to have something like $$\int_0^1 t^n\log\Gamma(t+a)dt = -n! \psi^{(-n-2)}(a)+n!\psi^{(-n-2)}(1+a)-\frac{n!}{1!}\psi^{(-n-1)}(1+a)+\frac{n!}{2!}\psi^{(-n)}(1+a)-\frac{n!}{3!}\psi^{(-n+1)}(1+a)+\frac{n!}{4!}\psi^{(-n+2)}(1+a)-\cdots$$ so I suspect that $$I(a,n)=\int_0^1 t^n\log\Gamma(t+a)dt = n!(-1)^{n+1}\left( \psi^{(-n-2)}(a) -\sum_{k=0}^n \frac{(-1)^k}{k!}\psi^{(-n-2+k)}(1+a) \right)$$ this seems to work numerically for integer $n\ge 0$ and any $a$ I have tried. This gives some sort of closed forms, for example $$I(1,1) = -\frac{1}{4} - 2 \ln A + \psi^{(-3)}(1)\\ I(2,1) = \frac{1}{4}\ln\left(\frac{2\pi}{A^4}\right)\\$$ with $A$ the Glaisher–Kinkelin constant. (The second example is for $a=2$, $n=1$; with $a=1$, $n=2$ the integral is a small negative number, so the arguments must be taken in this order.)
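For what it's worth, the second example can be sanity-checked numerically; the sketch below assumes only mpmath's loggamma, quad, and built-in Glaisher-Kinkelin constant:

    # check I(2,1) = int_0^1 t*log(Gamma(t+2)) dt  against  (1/4)*log(2*pi/A^4)
    from mpmath import mp, quad, loggamma, log, pi, glaisher

    mp.dps = 30
    lhs = quad(lambda t: t * loggamma(t + 2), [0, 1])
    rhs = log(2 * pi / glaisher**4) / 4
    print(lhs, rhs)   # both ~ 0.2107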
2019-08-19 22:52:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8375381231307983, "perplexity": 648.2394059645475}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315132.71/warc/CC-MAIN-20190819221806-20190820003806-00491.warc.gz"}
http://exxamm.com/QuestionSolution12/3D+Geometry/If+2+1+2+and+K+3+5+are+the+triads+of+direction+ratios+of+two+lines+and+the+angle+between+them+is+45+circ/1565880765
If (2, -1, 2) and (K, 3, 5) are the triads of direction ratios of two lines and the angle between them is $45^{\circ}$

# Ask a Question

### Question Asked by a Student from EXXAMM.com Team

Q 1565880765. If (2, -1, 2) and (K, 3, 5) are the triads of direction ratios of two lines and the angle between them is $45^{\circ}$, then a value of K is

EAMCET 2015

A 2
B 3
C 4
D 6

#### HINT (Provided By a Student and Checked/Corrected by EXXAMM.com Team)
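The HINT field is blank in the source; a worked path (added here, using the standard angle formula for direction ratios) is:

$$\cos 45^{\circ}=\frac{2K+(-1)(3)+2(5)}{\sqrt{2^2+(-1)^2+2^2}\,\sqrt{K^2+3^2+5^2}}=\frac{2K+7}{3\sqrt{K^2+34}}=\frac{1}{\sqrt{2}}$$

Squaring and simplifying:

$$2(2K+7)^2=9(K^2+34)\ \Rightarrow\ K^2-56K+208=0\ \Rightarrow\ K=\frac{56\pm\sqrt{3136-832}}{2}=\frac{56\pm 48}{2}$$

So $$K=52$$ or $$K=4$$; of the listed options, $$K=4$$, i.e. option C.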
2019-03-22 00:18:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3060075640678406, "perplexity": 3649.3313900275652}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202588.97/warc/CC-MAIN-20190321234128-20190322020128-00546.warc.gz"}
https://dsp.stackexchange.com/questions/55378/nyquist-sampling/55396
# Nyquist sampling

I know that if $$f_\mathrm{m}$$ is the "Nyquist frequency" (max frequency) and $$f_\mathrm{s}$$ the sampling rate, then $$f_\mathrm{s}>2f_\mathrm{m}$$.

• Am I correct so far?

I have a signal $$x(t)$$ with max frequency $$f_1$$ and $$h(t)$$ with $$f_2$$, and we define $$y(t)= h(t)*x(t)$$ ($$*$$ for convolution), and we need to find the sampling frequency/Nyquist frequency of this function.

So the Nyquist frequency of $$x(t)$$ is $$f_{\mathrm{s}_{x}} >2f_1$$ and of $$h(t)$$ is $$f_{\mathrm{s}_{h}} >2f_2$$.

Now I saw that someone wrote that using the convolution theorem we get $$Y(f)=H(f)X(f)$$, so there must be that $$f_\mathrm{s} \leq \min\{2f_1,2f_2 \}$$, stating that this is an upper bound because frequencies may cancel each other.

• Why is that true?

I must mention that he wrote $$Nq(x)$$ instead of $$f_{\mathrm{s}_{x}}$$ (I just understood that he meant the same); also, isn't it supposed to be $$f_\mathrm{s} \geq \min\{2f_1,2f_2 \}$$?

I'm also not sure about the claim that $$X(f)=0$$ if $$|f|>\frac{Nq(x)}{2}$$.

• Why is that?

• "someone wrote": Nope, not arguing with some unknown source about something that we both suspect is wrong. Cite, or find a better source. – Marcus Müller Feb 8 '19 at 18:21
• @MarcusMüller if he is wrong can you answer me what is the correct answer? Thanks – user2323232 Feb 8 '19 at 18:54
• you also need to fix your symbols a little bit. i would change "$x$" to "$t$". then change "$f(x)$" to "$x(t)$" and "$g(x)$" to "$y(t)$". then change "$F$" and "$G$" to "$X$" and "$Y$". then use "$\omega$" only for angular frequency and use "$f$" for ordinary frequency for the arguments of $F(\cdot)$ and $G(\cdot)$ and $H(\cdot)$. – robert bristow-johnson Feb 8 '19 at 20:30
• i don't want to think about the question until i am comfortable about the nomenclature. – robert bristow-johnson Feb 8 '19 at 20:34
• i didn't fix it yet, but Nyquist frequency (as ordinary, not angular frequency) is simply half of the sample rate, $f_\mathrm{s}/2$. it is not the sample rate in any decent modern textbook or lit. that abuse of notation needs to be corrected. – robert bristow-johnson Feb 8 '19 at 20:50

Since nobody answered me, I will answer using what I read on the internet: "Nyquist frequency" is the max frequency we can get using a given sampling rate. The $$f_\mathrm{s}$$ of a convolution between two functions is indeed the min of their max frequencies, but I'm not sure yet about the part which stated that this is an upper bound because frequencies may cancel each other. Since nobody here could answer it but myself, I'm marking this as an answer until someone is able to, thanks.

I'll give it a try.

I know that if fm is the "Nyquist frequency" (max frequency) and fs sampling rate then fs>2fm. Am I correct so far?

No. The Nyquist frequency is simply defined as half the sample rate. If your sample rate is 48kHz, your Nyquist frequency is 24kHz no matter what and how you sample anything.

You may have signals that have frequency content above the Nyquist frequency. If that's the case, you will get aliasing when you sample. However the Nyquist frequency is completely independent from the frequency content of the signal. The frequency content only determines whether you get aliasing or not.

Since you already started with a wrong assumption, the rest of your question as written doesn't make a lot of sense.
I think you are just confusing the terms "bandwidth" and "Nyquist frequency", and what you are asking is "I convolve a signal of bandwidth $$f_1$$ with a signal of bandwidth $$f_2$$: what's the bandwidth of the result?". If that's your question, please re-phrase as such.

That answer would be $$\min(f_1,f_2)$$

• Thank you, but isn't it supposed to be the min? – user2323232 Feb 9 '19 at 13:46
• Yes, of course. Corrected – Hilmar Feb 9 '19 at 15:09
• also you didn't refer to one of my questions: the claim that it is less than or equal, since frequencies may cancel each other.. – user2323232 Feb 9 '19 at 16:28
• Well, it's more complicated than that. For example, if you convolve two sine waves of different frequencies, the result will be zero. If your question is really about bandwidth, then please rephrase or ask a new one so it is clear what specifically you want to know. – Hilmar Feb 10 '19 at 14:05
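To make the bandwidth point concrete, here is a small numerical illustration (an added sketch, not from the thread); since the spectrum of a convolution is the product of the two spectra, its support is the intersection of the two supports:

    # convolve a ~100 Hz band-limited sinc with a ~40 Hz one:
    # the result is band-limited to min(100, 40) = 40 Hz
    import numpy as np

    fs = 1000                            # sample rate, Hz (arbitrary)
    t = np.arange(0, 1, 1 / fs)
    x = np.sinc(2 * 100 * (t - 0.5))     # bandwidth ~ 100 Hz
    h = np.sinc(2 * 40 * (t - 0.5))      # bandwidth ~ 40 Hz
    y = np.convolve(x, h, mode="same") / fs

    f = np.fft.rfftfreq(len(t), 1 / fs)  # 1 Hz bins
    Y = np.abs(np.fft.rfft(y))
    print(Y[30] / Y.max(), Y[60] / Y.max())  # ~1 inside the 40 Hz band, ~0 outside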
2021-02-27 22:38:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 24, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8613054156303406, "perplexity": 433.94550769849064}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178359497.20/warc/CC-MAIN-20210227204637-20210227234637-00101.warc.gz"}
https://codereview.stackexchange.com/questions/149322/using-dynamic-to-create-and-then-flatten-an-arbitrarily-nested-array
Using Dynamic to create and then flatten an arbitrarily-nested array

I have some code that I posted as a possible answer on StackOverflow (and the code I post here is almost an exact duplicate of that, but I'd rather include complete information in this post as well so people don't have to jump back and forth between posts to understand the question). I truthfully haven't quite decided whether it's terrible, really clever, or something in between (which is why I'm asking for reviews here). I'm perfectly aware that the use of dynamic in C# is somewhat controversial. That aside, though, I'll freely admit that I rarely use it and am curious as to whether this is what would normally be considered a "valid" use of it.

Basically, the problem is to create and then "flatten" an arbitrarily-nested array. At any given index, the array could contain either an integer or another array of integers, so you could have something like this:

    [ 1 2 3 4 5 [6 7 [8 9 [10 11 12] 13 14 15] 16 17] 18 19 20]

The resulting "flattened" array would be 1, 2, 3, 4, 5, 6, 7...20. I use the following code to create the array:

    Random random = new Random();
    dynamic array = new dynamic[random.Next(3, 10)];
    for (int i = 0; i < array.Length; i++)
    {
        if (random.NextBool())
        {
            array[i] = new dynamic[random.Next(3, 10)];
            for (int j = 0; j < array[i].Length; j++)
            {
                if (random.NextBool())
                {
                    array[i][j] = random.Next(1, 100);
                }
                else
                {
                    array[i][j] = new int[random.Next(3, 10)];
                    for (int k = 0; k < array[i][j].Length; k++)
                    {
                        array[i][j][k] = random.Next(1000);
                    }
                }
            }
        }
        else
        {
            array[i] = random.Next(1, 100);
        }
    }

where NextBool is an extension method on the Random class:

    public static bool NextBool(this Random random)
    {
        return random.Next(0, 1) == 0;
    }

Once I've created this, I can use recursion to "flatten" the list. Essentially, at any point I determine if this is a "base" case (it's just an integer) or if it's a (potentially arbitrarily-nested) array. If it's an array, I just loop over it, add the integers to the final result, and do a recursive call on any arrays I find.

    private void Flatten(dynamic item, List<int> flattened)
    {
        // This is the base case - the item's just an integer
        if (item.GetType() == typeof(int))
        {
            flattened.Add(item);
        }
        else
        {
            foreach (dynamic itm in item)
            {
                // Handle the case where the current item's an int
                if (itm.GetType() == typeof(int))
                {
                    flattened.Add(itm);
                }
                // If it's not an int, it must be an array - recursively process it
                else
                {
                    Flatten(itm, flattened);
                }
            }
        }
    }

I could probably simplify this a little by simply making everything arrays - i.e. instead of "plain" integers I have an array of length 1 - but I'm not sure that this is a dramatic improvement. Anyone have opinions on whether this is a good solution or not? (I honestly won't be offended if people think it's a terrible idea, I'd rather have the honest feedback). Are there ways I could improve this or solve it in a better way?

Edit: I'm using arrays here (rather than some kind of a Tree structure, which I think would be preferable for data structured like this) because of constraints in the post I was responding to on SO.

• Are you looking for critique on the Flatten method, the creation of the underlying structure, the idea of storing the data in that form, or what? Dec 8 '16 at 16:44

There's never any reason to use dynamic here at all. You never use any of its features. You could literally replace all instances of it with object and the code would function identically.

Now, that's not a particularly good design for storing data like this.
If you actually want to store a tree of data you're better off with a more traditional tree setup. Create a Node type, and let those nodes have both values and child nodes, which can themselves have child nodes, and so on. You can then traverse that tree structure. That makes it much clearer what's going on, rather than just seeing an object that you happen to just know is a tree structure where it contains either a sequence of other objects, or a value, and where each object in the sequence may itself be a sequence or a value. It also means that when you create operations that work with these trees they can accept instances of those types, so people don't do weird things like pass in a string to your method here, only to have it crash, instead of just failing to compile.

As for the actual Flatten method, personally I'd prefer a non-recursive implementation. Most significantly because the stack in C# just isn't that big, and it's not really all that uncommon to have tree structures that are deeper than the stack can support, resulting in you getting stack overflow exceptions. It's a little bit easier code to write the recursive version for many people, but you're constraining yourself to only working with shallow trees by using it.

I'd also suggest making the Flatten method an iterator block. There's no sense computing the entire value right from the start, when the underlying algorithm itself is inherently computing one value at a time. Using an iterator block lets the consumer use each value as you compute it, and removes the need for you to do the work of computing values that may not be needed, and also prevents the entire result set from needing to be in memory all at once, assuming the resulting values can be processed one at a time (a common enough situation).

• Fair points. For what it's worth I did point out in my SO post that a tree structure would be preferable here, the main reason I used an array was that the OP there was working with a constrained problem so I'm definitely in agreement with you on that point too. Dec 8 '16 at 16:51

You have a bug in this method:

    public static bool NextBool(this Random random)
    {
        return random.Next(0, 1) == 0;
    }

Random.Next uses an inclusive lower bound and an exclusive upper bound. That means your code will always return true. It should instead be:

    public static bool NextBool(this Random random)
    {
        return random.Next(0, 2) == 0;
    }

    if(item.GetType() == typeof(int))

You can simplify this by using the is operator:

    if(item is int)

This might be slightly slower but the difference is barely noticeable even with 100.000.000 iterations. If you have a tree of this length/depth you'll run into StackOverflowException long before you notice how slow it is anyway.

• Oh, and if T implements IEnumerable in your case...bad things will happen. Dec 9 '16 at 16:02
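For comparison, the non-recursive, lazy Flatten suggested above can be sketched as a generator; this added illustration is in Python (using plain nested lists), since the thread's C# is only being reviewed, not rewritten:

    # explicit-stack, lazy flatten: no recursion depth limit, and values
    # are produced one at a time instead of building the whole list up front
    def flatten(root):
        stack = [root]
        while stack:
            item = stack.pop()
            if isinstance(item, int):
                yield item
            else:
                stack.extend(reversed(item))  # keep left-to-right order

    print(list(flatten([1, [2, [3, [4, 5]]], 6])))  # [1, 2, 3, 4, 5, 6]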
2021-10-22 23:41:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3203231990337372, "perplexity": 906.800601119833}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585522.78/warc/CC-MAIN-20211022212051-20211023002051-00714.warc.gz"}
http://forum.vhugot.com/viewtopic.php?f=19&t=661&p=4155
## Curriculum Vitae with LaTeX

Random questions or observations about and around computers

Mathias COQBLIN Membre projet CRT Posts: 130 Joined: Mon Apr 13, 2009 10:20 pm Location: 127.0.0.1

### Curriculum Vitae with LaTeX

Looking for ways to make CVs with LaTeX, I came across this little package: moderncv

Can be rendered with different styles (2 samples of my own CV linked with this post)

Sources can be found here: http://johmathe.nonutc.fr/ressources/cv_uvs.tar.bz2 (self-explanatory)

Attachments cv_classic.pdf cv_casual.pdf

Your mom is so fat she sat on a binary tree and turned it into a linked list in constant time!

Olivier FINOT Membre projet CRT Posts: 136 Joined: Mon Sep 22, 2008 7:22 pm

### Re: Curriculum Vitae with LaTeX

Here is a more recent version of the package proposing other colours than blue and also the possibility to add a photo. http://www.ctan.org/tex-archive/macros/ ... /moderncv/

Indeed, laughter is never free: man gives us cause to weep, but only lends us cause to laugh. (Desproges)

gdesvignes Posts: 12 Joined: Mon Dec 28, 2009 12:04 am

### Re: Curriculum Vitae with LaTeX

I already use this package. It's pretty useful! But! There is always a "but". I have so much information to write... And I have no more space. I looked for a way to change marging of \section and \subsection. If anyone has a solution?!

Vincent Posts: 3077 Joined: Fri Apr 07, 2006 12:10 pm Location: Schtroumpf Contact:

### Re: Curriculum Vitae with LaTeX

Hello,

gdesvignes wrote: I looked for a way to change marging

I suppose you mean "margin"? If I understand correctly what you are setting out to achieve, I think the proper solution would be to change the page layout either directly with \hoffset and/or \oddsidemargin (package layout), or indirectly using the geometry package.

However if you were thinking of the vertical space surrounding sections and subsections (which is not a margin at all, but my crystal ball tells me that's what you really had in mind) then the simplest solution -- which I used in my own CV -- is to use negative vertical space if and when needed: \vspace{-1cm} for instance.

To change the spacing consistently across the whole document would require editing the class file, which is not necessarily a good idea. If you want to try it, look into the CLS file and find

Code: Select all
    % usage: \section{<title>}
    \newcommand*{\section}[1]{%
    \vspace*{2.5ex \@plus 1ex \@minus .2ex}%

Then change the value to whatever floats your boat.

{ Vincent Hugot }

gdesvignes Posts: 12 Joined: Mon Dec 28, 2009 12:04 am

### Re: Curriculum Vitae with LaTeX

You're right, this is exactly what I meant!!!! How does this macro work? I try something:

Code: Select all
    \vspace{-1cm}
    \section{\textbf{Expériences}}
    \subsection{Stages informatiques}
    ....my boring works...

But it seems this is the \subsection has been moved, not the \section...

Vincent Posts: 3077 Joined: Fri Apr 07, 2006 12:10 pm Location: Schtroumpf Contact:

### Re: Curriculum Vitae with LaTeX

gdesvignes wrote: You're right,

Always.

gdesvignes wrote: I try something [...] But it seems this is the \subsection has been moved, not the \section...

I cannot reproduce this problem. Blind guesses: Try adding \\ after the vspace command to close the line, or using a double carriage return. If this fails, try the starred version: \vspace*{...} -- especially if you are at the beginning of a page -- with and without carriage returns.
If all fails, post your tex file along with the exact moderncv .cls and .sty which you are using (one archive plz, use the forum's upload attachment feature), and I'll try and figure it out.

{ Vincent Hugot }

gdesvignes Posts: 12 Joined: Mon Dec 28, 2009 12:04 am

### Re: Curriculum Vitae with LaTeX

Youhou! I did it! (Yes, I'm singing Dora...) I still don't understand how it works, but I succeeded in doing something interesting! Now, I just have to pray that my CV will be beautiful enough for an application ;-) Thanks ^^
2019-01-21 08:13:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8495972156524658, "perplexity": 7139.411908258058}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583763839.28/warc/CC-MAIN-20190121070334-20190121092334-00175.warc.gz"}
https://cracku.in/65-a-man-borrows-6000-at-5-interest-on-reducing-balan-x-xat-2012
Question 65

# A man borrows ₹6000 at 5% interest, on reducing balance, at the start of the year. If he repays ₹1200 at the end of each year, find the amount of loan outstanding, in ₹, at the beginning of the third year.

Solution

Amount outstanding after 1 year = $$6000 + (\frac{6000 \times 5 \times 1}{100}) - 1200$$ = $$6000 + 300 - 1200 = 5100$$

$$\therefore$$ Amount at the beginning of the third year, i.e. after 2 years = $$5100 + (\frac{5100 \times 5 \times 1}{100}) - 1200$$ = $$5100 + 255 - 1200 = 4155$$
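The same reducing-balance arithmetic as a tiny loop (an added sketch):

    # reducing balance: add one year's interest, then subtract the repayment
    balance = 6000
    for year in (1, 2):
        balance = balance * 1.05 - 1200
        print(year, balance)   # 5100.0 after year 1, 4155.0 after year 2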
2023-02-09 06:08:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6537486910820007, "perplexity": 1049.254414593288}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764501407.6/warc/CC-MAIN-20230209045525-20230209075525-00356.warc.gz"}
https://techcommunity.microsoft.com/t5/excel/mutual-fund-quot-rating-quot-as-part-of-stock-data-type/td-p/3154641
New Contributor # mutual fund "rating" as part of stock data type Hey All, Regarding stock data types, what is meant by "rating" for mutual funds? Is this a Morningstar rating? If so, why isn't it a whole number (why are there decimals)? 2 Replies # Re: mutual fund "rating" as part of stock data type Good question. I couldn't find an answer, so posted the same question over on this alternate site (also a Microsoft page).
2022-05-21 10:38:41
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8093551397323608, "perplexity": 4133.905234134046}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662539049.32/warc/CC-MAIN-20220521080921-20220521110921-00798.warc.gz"}
https://math.stackexchange.com/questions/1122097/how-do-i-solve-this-infinitely-nested-radical
# How do I solve this infinitely nested radical? [duplicate] $\sqrt{1+2\sqrt{1+3\sqrt{1+4\sqrt{1+...}}}}$ Hint: Observe $$(x+1)^2 - 1 = x(x+2),$$ or $$x+1 = \sqrt{1 + x(x+2)}.$$ Now recursively substitute: \begin{align*} x+1 &= \sqrt{1 + x\sqrt{1 + (x+1)(x+3)}} , \\ &= \sqrt{1 + x \sqrt{1 + (x+1)\sqrt{1 + (x+2)(x+4)}}}, \\ &= \sqrt{1 + x \sqrt{1 + (x+1)\sqrt{1 + (x+2)\sqrt{1 + (x+3)(x+5)}}}}, \ldots \end{align*} Of course, you need to do something much more rigorous to complete the proof. I leave that to you.
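Following the hint with $x=2$ suggests the posted radical equals $3$; a quick numerical check (added here, not part of the answer):

    # unwind the radical from an arbitrary truncation depth outward;
    # the value converges to x + 1 = 3
    import math

    v = 1.0
    for k in range(60, 1, -1):   # 60 is an arbitrary cutoff depth
        v = math.sqrt(1 + k * v)
    print(v)                     # -> 3.0 (to machine precision)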
2019-08-18 15:01:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000100135803223, "perplexity": 946.8963383994726}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313936.42/warc/CC-MAIN-20190818145013-20190818171013-00414.warc.gz"}
http://search.ndltd.org/search.php?q=subject%3A%22convex+optimization%22&start=0
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

1

#### Greedy Strategies for Convex Minimization

Nguyen, Hao Thanh 16 December 2013 (has links)

We have investigated two greedy strategies for finding an approximation to the minimum of a convex function E, defined on a Hilbert space H. We have proved convergence rates for a modification of the orthogonal matching pursuit and its weak version under suitable conditions on the objective function E. These conditions involve the behavior of the moduli of smoothness and the modulus of uniform convexity of E.

2

#### Accelerating convex optimization in machine learning by leveraging functional growth conditions

Xu, Yi 01 August 2019 (has links)

In recent years, unprecedented growths in scale and dimensionality of data raise big computational challenges for traditional optimization algorithms; thus it becomes very important to develop efficient and effective optimization algorithms for solving numerous machine learning problems. Many traditional algorithms (e.g., gradient descent method) are black-box algorithms, which are simple to implement but ignore the underlying geometrical property of the objective function. Recent trend in accelerating these traditional black-box algorithms is to leverage geometrical properties of the objective function such as strong convexity. However, most existing methods rely too much on the knowledge of strong convexity, which makes them not applicable to problems without strong convexity or without knowledge of strong convexity. To bridge the gap between traditional black-box algorithms without knowing the problem's geometrical property and accelerated algorithms under strong convexity, how can we develop adaptive algorithms that can be adaptive to the objective function's underlying geometrical property? To answer this question, in this dissertation we focus on convex optimization problems and propose to explore an error bound condition that characterizes the functional growth condition of the objective function around a global minimum. Under this error bound condition, we develop algorithms that (1) can adapt to the problem's geometrical property to enjoy faster convergence in stochastic optimization; (2) can leverage the problem's structural regularizer to further improve the convergence speed; (3) can address both deterministic and stochastic optimization problems with explicit max-structural loss; (4) can leverage the objective function's smoothness property to improve the convergence rate for stochastic optimization. We first considered stochastic optimization problems with general stochastic loss. We proposed two accelerated stochastic subgradient (ASSG) methods with theoretical guarantees by iteratively solving the original problem approximately in a local region around a historical solution with the size of the local region gradually decreasing as the solution approaches the optimal set.
Second, we developed a new theory of alternating direction method of multipliers (ADMM) with a new adaptive scheme of the penalty parameter for both deterministic and stochastic optimization problems with structured regularizers. With LEB condition, the proposed deterministic and stochastic ADMM enjoy improved iteration complexities of $\widetilde O(1/\epsilon^{1-\theta})$ and $\widetilde O(1/\epsilon^{2(1-\theta)})$ respectively. Then, we considered a family of optimization problems with an explicit max-structural loss. We developed a novel homotopy smoothing (HOPS) algorithm that employs Nesterov's smoothing technique and accelerated gradient method and runs in stages. Under a generic condition, the so-called local error bound (LEB) condition, it can improve the iteration complexity of $O(1/\epsilon)$ to $\widetilde O(1/\epsilon^{1-\theta})$ omitting a logarithmic factor with $\theta\in(0,1]$. Next, we proposed new restarted stochastic primal-dual (RSPD) algorithms for solving the problem with stochastic explicit max-structural loss. We successfully got a better iteration complexity than $O(1/\epsilon^2)$ without bilinear structure assumption, which is a big challenge of obtaining faster convergence for the considered problem. Finally, we consider finite-sum optimization problems with smooth loss and simple regularizer. We proposed novel techniques to automatically search for the unknown parameter on the fly of optimization while maintaining almost the same convergence rate as an oracle setting assuming the involved parameter is given. Under the Holderian error bound (HEB) condition with $\theta\in(0,1/2)$, the proposed algorithm also enjoys intermediate faster convergence rates than its standard counterparts with only the smoothness assumption.

3

#### Linear phase filter bank design by convex programming

Ha, Hoang Kha, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW January 2008 (has links)

Digital filter banks have found a wide variety of applications in data compression, digital communications, and adaptive signal processing. The common objectives of the filter bank design consist of frequency selectivity of the individual filters and perfect reconstruction of the filter banks. The design problems of filter banks are intrinsically challenging because their natural formulations are nonconvex constrained optimization problems. Therefore, there is a strong motivation to cast the design problems into convex optimization problems whose globally optimal solutions can be efficiently obtained. The main contributions of this dissertation are to exploit the convex optimization algorithms to design several classes of the filter banks. First, the two-channel orthogonal symmetric complex-valued filter banks are investigated. A key contribution is to derive the necessary and sufficient condition for the existence of complex-valued symmetric spectral factors. Moreover, this condition can be expressed as linear matrix inequalities (LMIs), and hence semi-definite programming (SDP) is applicable. Secondly, for two-channel symmetric real-valued filter banks, a more general and efficient method for designing the optimal triplet halfband filter banks with regularity is developed. By exploiting the LMI characterization of nonnegative cosine polynomials, the semi-infinite constraints can be efficiently handled. Consequently, the filter bank design is cast as an SDP problem.
Furthermore, it is demonstrated that the resulting filter banks are applied to image coding with improved performance. It is not straightforward to extend the proposed design methods for two-channel filter banks to M-channel filter banks. However, it is investigated that the design problem of M-channel cosine-modulated filter banks is a nonconvex optimization problem with the low degree of nonconvexity. Therefore, the efficient semidefinite relaxation technique is proposed to design optimal prototype filters. Additionally, a cheap iterative algorithm is developed to further improve the performance of the filter banks. Finally, the application of filter banks to multicarrier systems is considered. The condition on the transmit filter bank and channel for the existence of zero-forcing filter bank equalizers is obtained. A closed-form expression of the optimal equalizer is then derived. The proposed filter bank transceivers are shown to outperform the orthogonal frequency-division multiplexing (OFDM) systems.

4

#### Spectral functions and smoothing techniques on Jordan algebras

Baes, Michel 22 September 2006 (has links)

Successful methods for a large class of nonlinear convex optimization problems have recently been developed. This class, known as self-scaled optimization problems, has been defined by Nesterov and Todd in 1994. As noticed by Guler in 1996, this class is best described using an algebraic structure known as Euclidean Jordan algebra, which provides an elegant and powerful unifying framework for its study. Euclidean Jordan algebras are now a popular setting for the analysis of algorithms designed for self-scaled optimization problems: dozens of research papers in optimization deal explicitly with them. This thesis proposes an extensive and self-contained description of Euclidean Jordan algebras, and shows how they can be used to design and analyze new algorithms for self-scaled optimization. Our work focuses on the so-called spectral functions on Euclidean Jordan algebras, a natural generalization of spectral functions of symmetric matrices. Based on an original variational analysis technique for Euclidean Jordan algebras, we discuss their most important properties, such as differentiability and convexity. We show how these results can be applied in order to extend several algorithms existing for linear or second-order programming to the general class of self-scaled problems. In particular, our methods allowed us to extend to some nonlinear convex problems the powerful smoothing techniques of Nesterov, and to demonstrate its excellent theoretical and practical efficiency.

5

#### Optimization algorithms in compressive sensing (CS) sparse magnetic resonance imaging (MRI)

Takeva-Velkova, Viliyana 01 June 2010 (has links)

Magnetic Resonance Imaging (MRI) is an essential instrument in clinical diagnosis; however, it is burdened by a slow data acquisition process due to physical limitations. Compressive Sensing (CS) is a recently developed mathematical framework that offers significant benefits in MRI image speed by reducing the amount of acquired data without degrading the image quality. The process of image reconstruction involves solving a nonlinear constrained optimization problem. The reduction of reconstruction time in MRI is of significant benefit. We reformulate sparse MRI reconstruction as a Second Order Cone Program (SOCP). We also explore two alternative techniques to solving the SOCP problem directly: NESTA and specifically designed SOCP-LB.
/ UOIT

6
#### Novel Convex Optimization Approaches for VLSI Floorplanning
Luo, Chaomin
January 2008 (has links)

The floorplanning problem aims to arrange a set of rectangular modules on a rectangular chip area so as to optimize an appropriate measure of performance. This problem is known to be NP-hard, and is particularly challenging if the chip dimensions are fixed. Fixed-outline floorplanning is becoming increasingly important as a tool to design flows in the hierarchical design of Application Specific Integrated Circuits and System-On-Chip. Therefore, it has recently received much attention. A two-stage convex optimization methodology is proposed to solve the fixed-outline floorplanning problem. It is a global optimization problem for wirelength minimization. In the first stage, an attractor-repeller convex optimization model provides the relative positions of the modules on the floorplan. The second stage places and sizes the modules using convex optimization. Given the relative positions of the modules from the first stage, a Voronoi diagram and Delaunay triangulation method is used to obtain a planar graph and hence a relative position matrix connecting the two stages. An efficient method for generating sparse relative position matrices and an interchange-free algorithm for local improvement of the floorplan are also presented. Experimental results on the standard benchmarks MCNC and GSRC demonstrate that we obtain significant improvements on the best results in the literature. Overlap-free and deadspace-free floorplans are achieved in a fixed outline and floorplans with any specified percentage of whitespace can be produced. Most important, our method provides a greater improvement as the number of modules increases. A very important feature of our methodology is that not only do the dimensions of the floorplans in our experiments comply with the original ones provided in the GSRC benchmark, but also zero-deadspace floorplans can be obtained. Thus, our approach is able to guarantee complete area utilization in a fixed-outline situation. Our method is also applicable to area minimization in classical floorplanning.
8
#### Lossless convexification of optimal control problems
Harris, Matthew Wade
30 June 2014 (has links)

This dissertation begins with an introduction to finite-dimensional optimization and optimal control theory. It then proves lossless convexification for three problems: 1) a minimum time rendezvous using differential drag, 2) a maximum divert and landing, and 3) a general optimal control problem with linear state constraints and mixed convex and non-convex control constraints. Each is a unique contribution to the theory of lossless convexification. The first proves lossless convexification in the presence of singular controls and specifies a procedure for converting singular controls to the bang-bang type. The second is the first example of lossless convexification with state constraints. The third is the most general result to date. It says that lossless convexification holds when the state space is a strongly controllable subspace. This extends the controllability concepts used previously, and it recovers earlier results as a special case. Lastly, a few of the remaining research challenges are discussed. / text
10
#### Identification using Convexification and Recursion
Dai, Liang
January 2016 (has links)

System identification studies how to construct mathematical models for dynamical systems from the input and output data, which finds applications in many scenarios, such as predicting future output of the system or building model-based controllers for regulating the output of the system. Among many other methods, convex optimization is becoming an increasingly useful tool for solving system identification problems. The reason is that many identification problems can be formulated as, or transformed into, convex optimization problems. This transformation is commonly referred to as the convexification technique. The first theme of the thesis is to understand the efficacy of the convexification idea by examining two specific examples. We first establish that an l1-norm based approach can indeed help in exploiting the sparsity information of the underlying parameter vector under certain persistent excitation assumptions. After that, we analyze how the nuclear norm minimization heuristic performs on a low-rank Hankel matrix completion problem. The underlying key is to construct the dual certificate based on the structure information that is available in the problem setting. Recursive algorithms are ubiquitous in system identification. The second theme of the thesis is the study of some existing recursive algorithms, by establishing new connections and giving new insights or interpretations to them. We first establish a connection between a basic property of the convolution operator and score function estimation. Based on this relationship, we show how certain recursive Bayesian algorithms can be exploited to estimate the score function for systems with intractable transition densities. We also provide a new derivation and interpretation of the recursive direct weight optimization method, by exploiting certain structural information that is present in the algorithm. Finally, we study how an improved randomization strategy can be found for the randomized Kaczmarz algorithm, and how the convergence rate of the classical Kaczmarz algorithm can be studied by the stability analysis of a related time-varying linear dynamical system.
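The closing lines of the last abstract mention the randomized Kaczmarz algorithm. As a point of reference, here is a minimal sketch of the classical randomized variant with the row-norm-squared sampling rule of Strohmer and Vershynin; this is generic illustration code, not material from the thesis, and the improved randomization strategy the abstract alludes to is not reproduced.

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=5000, seed=0):
    """Solve a consistent linear system Ax = b by randomized Kaczmarz.

    Each step picks row i with probability proportional to ||a_i||^2
    (Strohmer-Vershynin sampling) and orthogonally projects the current
    iterate onto the hyperplane {x : <a_i, x> = b_i}.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms2 = np.einsum("ij,ij->i", A, A)  # squared row norms
    probs = row_norms2 / row_norms2.sum()
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        # Projection onto the i-th hyperplane:
        x += (b[i] - A[i] @ x) / row_norms2[i] * A[i]
    return x

# Tiny demo on a consistent overdetermined system.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 20))
x_true = rng.standard_normal(20)
x_hat = randomized_kaczmarz(A, A @ x_true)
print(np.linalg.norm(x_hat - x_true))  # convergence is linear, so this is tiny
```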
2023-03-28 15:22:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45636412501335144, "perplexity": 640.0694001264375}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948867.32/warc/CC-MAIN-20230328135732-20230328165732-00432.warc.gz"}
http://openstudy.com/updates/50fdda37e4b0426c636796d6
## anonymous 3 years ago .

1. anonymous
Solving for t simply means that we want to get t by itself on one side. Lets keep it on the left side since it's already there. Then we would want to get rid of that 9

2. anonymous
To get rid of that 9 we would subtract both sides by 9 Can you tell me what we then get?

3. anonymous
1. Square it. t + 9^2 = 16^2 = t + 81 = 256 2. Take the 81 from both sides. t = 175 DONE! XD

4. anonymous
Sorry Ryan D: Can you confirm my method and answer though

5. anonymous
great now we have $\sqrt{t}=7$ now we have to square both sides $\sqrt{t}^2=7^2$ $t=7^2$ Can you tell me what that is?

6. anonymous
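The thread never quotes the original equation, but the $\sqrt{t}=7$ step implies it was $\sqrt{t}+9=16$; on that assumption, the consistent finish to the last question is $t=7^2=49$. The earlier answer $t=175$ came from squaring term by term, but $(\sqrt{t}+9)^2=t+18\sqrt{t}+81$ rather than $t+81$, which is why isolating $\sqrt{t}$ first is the cleaner route.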
2016-10-01 22:22:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.630835771560669, "perplexity": 499.5045214404973}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738663308.86/warc/CC-MAIN-20160924173743-00058-ip-10-143-35-109.ec2.internal.warc.gz"}
http://compgroups.net/comp.text.tex/changing-figure-1-to-scheme-1/1915308
#### Changing "Figure 1" to "Scheme 1"

Hi

I'm trying to complete a 1-page abstract and am using floatflt to wrap the figures. I have tried to find the answers to the following, but time is of the essence at the moment. My question is regarding one of the captions: I want "Scheme 1" to replace "Figure 2" in the 2nd image I have included. Advice on this is much appreciated and also how to remove the colon after "Figure X:"/"Scheme X:".

This is what is written:

\usepackage[rflt]{floatflt}
\setlength\figgutter{25pt}
\newcommand{\incffig}[1]{\includegraphics[width=\floatfltwidth]{#1}}

\begin{document}

some text

\begin{floatingfigure}[l]{0.55\linewidth}
\incffig{thefigure}
\def\fnum@figure{thecaption}
\end{floatingfigure}

some more text
So I am using Perl to replace simple "RADIO" with "RADIO" NAME="1" VALUE="1", "2", "3", "4", and "5" and for the choices for the problem number 2, "RADIO" NAME="2" VALUE="1", &q... a = [ "1", "2", "3" ] v/s a = new Array ( "1", "2", "3" ) identical in all ways? Do these result in identical objects? a = [ "1", "2", "3" ] a = new Array ( "1", "2", "3" ) On Aug 25, 1:30=A0pm, okey wrote: > Do these result in identical objects? > > a =3D [ "1", "2", "3" ] > a =3D new Array ( "1", "2", "3" ) Yes, but:- a =3D [ 2 ]; a =3D new Array( 2 ); - do not. Richard. okey wrote: > Do these result in identical objects? > > a = [ "1", "2", "3" ] > a = new Array ( "1", "2"... Urgent Requirement in """""""""""""NEW YORK"""""""""""""""" Hello Partners, Please find the requirement below. Please send the updated resume along with rate and contact no. REQ#1: Title : Java Developer ( Rating Project) Duration : 6 months Rate : open Location : NY strong java, WebLogic 9.2, Web Services, Oracle REQ#2: Title : Java Developer Duration : 4 months Rate : open Location : NY Strong java, SQL REQ#3: Title : VB.Net Consultant Location : NY Duration : 4 months Rate : open Primarily looking at someone who has Excel, VB.net and Oracle (good to have). Req #4: Title : Java Developer (MSA Project) Duration : 6+ months Rate : open Location : NY Note : Please send your updated resume along with contact no [email protected] : No phone calls please. Thanks & Regards Karthik BhanInfo [email protected] ... "out" and "in out" Hi i found the following explaination: In Ada, "in" parameters are similar to C++ const parameters. They are effectively read-only within the scope of the called subprogram. Ada "in out" parameters have a reliable initial value (that passed in from the calling subprogram) and may be modified within the scope of the called procedure. Ada "out" parameters have no reliable initial value, but are expected to be assigned a value within the called procedure. What does "have no reliable initial value" mean when considering the "out" parameter? By c... "/a" is not "/a" ? Hi everybody, while testing a module today I stumbled on something that I can work around but I don't quite understand. >>> a = "a" >>> b = "a" >>> a == b True >>> a is b True >>> c = "/a" >>> d = "/a" >>> c == d True # all good so far >>> c is d False # eeeeek! Why c and d point to two different objects with an identical string content rather than the same object? Manu Emanuele D'Arrigo wrote: >>>> c = "/a" >>>&... "or" and "and" Hi, I'm just getting to discover ruby, but I find it very nice programming language. I just still don't understand how the "or" and "and" in ruby... I was playing with ruby and for example made a def to print Stem and Leaf plot (for those who didn't have a statistics course or slept on it, e.g. http://cnx.org/content/m10157/latest/) Here is the Beta version of it: class Array def n ; self.size ; end def stem_and_leaf(st = 1) # if st != (2 or 5 or 10) then ; st = 1 ; end k = Hash.new(0) self.each {|x| k[x.to_f] += 1 } k = k.sort{|a, b| a[0].to_f <=&g... why this program snippet display "8,7,7,8,-7,-8" the program is: main() { int i=8; printf("%d\n%d\n%d\n%d\n%d\n%d\n",++i,--i,i++,i--,-i++,-i--); } > why this program snippet display "8,7,7,8,-7,-8" Ask your compiler-vendor because this result is IMHO implementation-defined. 
Check this out: http://www.parashift.com/c++-faq-lite/misc-technical-issues.html#faq-39.15 http://www.parashift.com/c++-faq-lite/misc-technical-issues.html#faq-39.16 Regards, Irina Marudina [email protected] wrote: > why this program snippet display "8,7,7,8,-7,-8&q... "my" and "our" Hi, while testing a program, I erroneously declared the same variable twice within a block, the first time with "my", the second time with "our": { my $fz = 'VTX_Link'; .... ( around 200 lines of code, all in the same block) our$fz = 'VTX_Linkset'; ... } So the initial contents of the $fz declared with "my" is lost, because "our" creates a lexical alias for the global$fz, thus overwriting the previous "my" declaration. It was my error, no question. But I wonder why Perl doesn't mention this - even with "use s... why "::", not "." Why does the method of modules use a dot, and the constants a double colon? e.g. Math::PI and Math.cos -- Posted via http://www.ruby-forum.com/. On Oct 26, 2010, at 01:48 , Oleg Igor wrote: > Why does the method of modules use a dot, and the constants a double > colon? > e.g. > Math::PI and Math.cos For the same reason why inner-classes/modules use double colon, because = they're constants and that's how you look up via constant namespace. Math::PI and ActiveRecord::Base are the same type of lookup... it is = just that Base is a module and PI is a float.... "If then; if then;" and "If then; if;" I have a raw data set which is a hierarchical file: H 321 s. main st P Mary E 21 F P william m 23 M P Susan K 3 F H 324 S. Main St I use the folowing code to read the data to creat one observation per detail(P) record including hearder record(H): data test; infile 'C:\Documents and Settings\retain.txt'; retain Address; input type $1. @; if type='H' then input @3 Address$12.; if type='P' then input @3 Name $10. @13 Age 3. @16 Gender$1.; run; but the output is not what I want: 1 321 s. main H 2 321 s. main P Mary E 21 F 3 321 s... How to change "/" to "\" Now, the output as below /a/b/c.txt b/c.txt x:/b/c.txt , Need to change x:\b\c.txt #!/bin/ksh # echo.ksh a=/a/b/c.txt echo $a echo${a#/*/} y=echo x:/${a#/*/} | tr -s '/' '\' echo$y moon wrote: > Now, the output as below > > /a/b/c.txt > b/c.txt > x:/b/c.txt , Need to change x:\b\c.txt > > > #!/bin/ksh > # echo.ksh > a=/a/b/c.txt > echo $a > echo${a#/*/} > y=echo x:/${a#/*/} | tr -s '/' '\' > echo$y > > One escape necessary: echo x:/${a#/*/} | tr -s '/' '\\' Two escapes necessar... how to change "/" to "\" iam new to shell scripting and i have plz can anyone help in changing the pattrern "/" to "\" using the sed command. [email protected] wrote: > iam new to shell scripting and i have plz can anyone help in changing > the pattrern "/" to "\" using the sed command. sed 's/\//\\/g' will replace all '/' with '\' srp -- http://saju.net.in Saju Pillai <[email protected]> wrote: >> iam new to shell scripting and i have plz can anyone help in changing >> the pattrern "/" to "\" using ... A problem about "[ ]" "( )" "=" I want to read several images saved in a director,and give them to I1,I2 ,I3....,using the following codes: filelist=dir(['c:\MATLAB701\work\...\*.jpg']); for i=1 :length(filelist) I=imread(fullfile('c:\MATLAB701\work\...',filelist(i).name)); end; but failed. Then I used I(i)=imread... ,still failed. How could I do? "John" <[email protected]> wrote in message news:[email protected]... >I want to read several images saved in a director,and give them to > I1,I2 ,I3....,using the following codes: > filelist=dir(['c:\MATLAB701\work\..... 
"In" "Out" and "Trash" I just bought a new computer and I re-installed Eudora Light on my new computer. But when I open Eudora, the "In", "Out" and "Trash" links are not on the left side of the screen the way they were on my old computer. How can I get these links back on the left side of the screen? Thank you. On 25 Mar 2007 09:49:22 -0700, "abx" <[email protected]> wrote: >I just bought a new computer and I re-installed Eudora Light on my new >computer. But when I open Eudora, the "In", "Out" and "Trash" links >are ... Does it need a ";" at the very after of "if" and "for" write code like: int main(void) { int a=10; if(a<20) {} } Compiler ok on dev-cpp . don't we have to add a ";" after if statement? marsarden said: > write code like: > > int main(void) > { > int a=10; > if(a<20) > {} > } > > Compiler ok on dev-cpp . don't we have to add a ";" after if > statement? The syntax for 'if' is: if(expression) statement There is no semicolon after the ) but before the statement. The statement is either a normal statement (which can be empty), ending in a semicolon:- if(expr) ... (mapcar 'quote '("1" "2" "3" "4")) (mapcar 'quote '("1" "2" "3" "4")) returns ((quote "4") (quote "4") (quote "4") (quote "4")) Interesting and (for me) unexpected. Because (mapcar 'read '("1" "2" "3" "4")) returns (1 2 3 4) and (mapcar 'princ '("1" "2" "3" "4")) gives 1234("1" "2" "3" "4") Why isn't (mapcar 'quote '("1" "2" "3" "4")) returning ((quote "1") (quote "2") (quote "3") (quote "4")) Tom Haws www.hawsedc.com Probably has to do with the fact that 'arg and (quote arg) are equivalent, and LISP gets confused by the construct 'quote, which is about the same as (quote (quote arg)). But I don't pretend to know all of the mechanics of the error; the results are somewhat different in R14, BTW: Command: (mapcar 'quote '("1" "2" "3" "4")) ((<Subr: #22e3e40> "4") (<Subr: #22e3e40> "4") (<Subr: #22e3e40> "4") (<Subr: #22e3e40> "4")) ___ "Tom Haws" <[email protected]> wrote in message news:[email protected]... > (mapcar 'quote '("1" "2" "3"... "Correct" term for a 1:1 relationship between a "database" and an "instance" where > 1 such things are on the same physical server? What is the "correct" term for a 1:1 relationship between a "database" and an "instance" where there are at least two such "things" on the same physical server? Nearly all the Oracle docs and books define a database something like the following: DATABASE - a collection of datafiles and oracle config files; useless without an "instance" to access the database. INSTANCE - a collection of background procs and memory structures, used to access a "database." Where I work, people typically call each "thing" mentioned in my Sub... elementvise division "1./a" versus "1 ./ a" Hi, I wonder why there is a difference in the way scilab interprets "1./a" and "1 ./ a". Is this intentionally or is it a bug? -->a=[2 3 4] a = ! 2. 3. 4. ! -->1./a ans = ! 0.0689655 ! ! 0.1034483 ! ! 0.1379310 ! -->1 ./ a ans = ! 0.5 0.3333333 0.25 ! --> -Torbj�rn. Torbj�rn Pettersen wrote: > Hi, > I wonder why there is a difference in the way scilab interprets "1./a" > and "1 ./ a". Is this intentionally or is it a bug? > > -->a=[2 3 4] > a = > > ! 2. 3. 4. ! > > -->1./a > ans = > > ! 0.0689655 ! > ! 0.1034483 ! > ! 0.1379310 ! > > -->1 ./ a > ans = > > ! 0.5 0.3333333 0.25 ! > It's feature plus an unlucky situation. 1./a is intepreted as (1.0)/a which by definition computes the pseudo-inverse of a . Contrary to that 1 ./ a is the (./) operator. 
It's expanded into ones(a) ./ a which gives elementwise devision. So it's just that 1. is a floating point constant. There is no problem with say a./b ! Helmut Jarausch Lehrstuhl fuer Numerische Mathematik RWTH - Aachen University D 52056 Aachen, Germany Hm yes you are right... but I'm not sure I like that feature - It took to long to figure out where the error was :-) -Torbj�rn. Helmut Jarausch wrote: > Torbj�rn Pettersen wr... Function ( "TxtA TxtA TxtA TxtB TxtC TxtA"; "TxtC" ; 1 ; "TxtA" ; 1 ) -> "TxtB" ? Hi How would you to extract a string between "TxtC" an "TxtA" ("TxtB") with a function like this one : Function ( "TxtA TxtA TxtA TxtB TxtC TxtA"; "TxtC" ; 1 ; "TxtA" ; 1 ) -> "TxtB" ? On Dec 4, 7:51=A0am, "Grolo" <[email protected]> wrote: > Hi > > How would you to extract a string between "TxtC" an "TxtA" ("TxtB") with = a > function like this one : > > Function ( "TxtA TxtA TxtA TxtB TxtC TxtA"; "TxtC" ; 1 ; "TxtA"... "bb = bb + 1" vs "bb += 1" Hi ! The following code does not compile (as expected): class SimpleJava { public static void main(String args[]) { System.out.println("SimpleJava ...."); byte bb = 0; bb = bb + 1; System.out.println("bb = " + bb); } } While the following one is compiling: class SimpleJava { public static void main(String args[]) { System.out.println("SimpleJava ...."); byte bb = 0; bb += 1; System.out.println("bb = " + bb); } } The += operator is specific to each primitive, so it "know" how to work with integers. The ope... "chapter 1." vs "1. chapter" (docbook) Hi! Could anyone tell me where it is defined that openjade renders chapter titles as "chapter 1. TITLE" and not "1. chapter TITLE"? I am using dsssl 1.77 TIA Igor2 At 2005-12-16T13:11:34+01:00, Igor2 wrote: > Could anyone tell me where it is defined that openjade renders > chapter titles as "chapter 1. TITLE" and not "1. chapter TITLE"? I > am using dsssl 1.77 OpenJade does not specify that. It is done by DSSSL stylesheets, e.g., the DocBook DSSSL stylesheets, that you seem to be using. The function `$component-title$&... How to alias "n1" to "n 1" and "n2" to "n 2" and so on (with style) I have the following bash function: function ..() { counter=0 case "$1" in [0-9][0-9] ) while [ $counter -lt$1 ] do cd .. counter=$(($counter+1)) done ;; * ) echo "staying where I am, give me a number (<99) next time :)" ;; esac } The allows me to move up, say 4 levels, using ".. 4" Now, typing the space annoys me so I define the following aliases: alias ..='.. 1' alias ..2='.. 2' alias ..3='.. 3' alias ..4='.. 4' alias ..5='.. 5' Not very elegant I con... Web resources about - Changing "Figure 1" to "Scheme 1" - comp.text.tex Resources last updated: 3/11/2016 3:28:02 PM
2020-02-18 01:26:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24997588992118835, "perplexity": 7130.404779951625}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875143455.25/warc/CC-MAIN-20200217235417-20200218025417-00288.warc.gz"}
https://cds.ismrm.org/protected/18MProceedings/PDFfiles/1712.html
### 1712 Using Noise Waves for Simulation and Measurement of Array SNR Penalty due to Passive Impedance Match

Arne Reykowski1, Christian Findeklee2, Paul Redder1, Tracy Wynn1, Tim Ortiz1, Randy Duensing2, and Scott B King1
1Invivo Corporation, Gainesville, FL, United States, 2Philips Research, Hamburg, Germany

### Synopsis

Active impedance matching versus passive impedance matching of array coils is a concept well understood when designing transmit arrays. Less well known, however, is that this concept also applies to receive arrays. Even though it appears that preamplifiers are noise matched to the passive port impedance (usually 50 Ohms), preamplifier noise coupling creates active noise match impedances which are mode dependent. In this context, a mode is defined by a signal vector and the corresponding weighting factors for optimum combined SNR. We use coupled noise waves to explain, with simple concepts, how the weighted and combined coupled noise changes the active noise match impedance.

### Introduction

Receive array noise matching is a well-studied field [1-13], but the implications for MRI array coil design are often overlooked. We present a novel approach using noise waves and scattering parameters to explain matching to active array impedances. Active noise matching takes preamplifier noise into account via a modified noise covariance matrix. When equal preamplifiers are used at all channels of a reciprocal antenna, the resulting noise match impedances are the equivalent of the active array impedances of TX arrays. This complete noise model can also be used to demonstrate that preamplifier decoupling is mainly a convenience for tuning arrays, but has no impact on maximizing array SNR [6,7,10]. In cases of high-Q coils and strong mutual coupling, coil designers can achieve significant SNR gains by matching to the active impedance of the dominant coil mode (typically the birdcage mode of volume arrays). Our formulation provides solutions for active impedance match with LNAs having different input impedances Zin and different noise figures Fmin, and we assume that the optimum noise match impedance Zopt can be adjusted individually.

### Methods

We use a preamplifier noise model involving noise waves to simulate the noise coupling in the receive array [3, 13, 14, 15, 16]. This model places two noise waves at the input of a noise-free amplifier. One wave travels toward the LNA, while the other travels into the coil array, represented by a scattering matrix S. The noise waves are uncorrelated when normalized to the optimum noise match impedance. It is also possible to normalize the noise waves to any other impedance, in our case the LNA input impedance. This formulation results in correlated noise waves, but has the advantage of avoiding reflection at the LNA inputs, therefore leading to a more compact formulation of the problem. Figure 1 introduces the concept of active noise match for the example of two LNAs connected to an array. For the sake of the argument, we focus only on the noise emanating from LNA 1 and normalize the LNA gains to unity by including them in the weighting factors wi. The weighted and combined noise from LNA 1 can be written as

$$\frac{1}{w_{1} } c^{n} =a_{1}^{n} +\underbrace{\left(S_{11} +\frac{w_{2} }{w_{1} } S_{21} \right)}_{\Gamma _{act} } b_{1}^{n}$$

where w1 and w2 are the weighting factors used for the signal combination.
It can be easily shown that the noise contribution from LNA 1 is minimized for

$$\Gamma _{act} =\Gamma _{opt} =S_{11} +\frac{w_{2} }{w_{1} } S_{21}$$

This means that optimum noise match for LNA 1 is achieved if the active reflection coefficient is equal to the optimum reflection coefficient. This is in contrast to traditional noise matching to the passive reflection coefficient S11. Extending this concept to large arrays, the active reflection coefficients become:

$$\Gamma _{opt}^{i} =\frac{1}{w_{i} } \sum _{k=1}^{N}w_{k} \cdot S_{ki} =\frac{1}{w_{i} } S_{i} w^{T}$$

Optimum weighting factors for a given signal vector Vcoil can be found by using the formula given by [17, 18]

$$w_{opt} =\left(\left[R_{coil} +R_{LNA}^{min} \right]^{-1} \cdot V_{coil} \right)^{*}$$

where Rcoil and RLNAmin are the noise covariance matrices of the coil array and the LNAs, with

$$R_{LNA}^{min} =4k_{B} R_{in} \left(T_{min} -S\cdot T_{min} \cdot S^{H} \right)$$

Here, Tmin is a diagonal matrix containing the LNA minimum noise temperatures and Rin is the LNA input resistance. The optimum noise match coefficients are

$$\Gamma _{opt} =diag\left(w_{opt} \right)^{-1} S\cdot w_{opt}$$

The noise figure for a set of signals Vcoil can be calculated as

$$F_{dB} \left(V_{coil} \right)=10\cdot \log _{10} \left(\frac{V_{coil}^{H} \left(R_{coil} \right)^{-1} V_{coil} }{V_{coil}^{H} \left(R_{coil} +R_{LNA} \right)^{-1} V_{coil} } \right)$$

where RLNA is the complete LNA noise covariance matrix.

### Results

For the initial simulation we used a two-channel array with both elements tuned to 50 Ohms, quality factor Q=50, resistive coupling kr=20%, magnetic coupling km=2%, minimum LNA noise figure NF_min=0.4dB and impedance matrix:

$$Z_{coil} =\left[\begin{array}{cc} {50\Omega } & {\left(10+j50\right)\Omega } \\ {\left(10+j50\right)\Omega } & {50\Omega } \end{array}\right]$$

Maps were created showing the combined array noise figure for different signal ratios S2/S1 and different matching conditions. Fig. 2 shows a noise figure map for traditional passive match to 50Ω. The average NF for passive noise match was 0.8dB and the minimum NF = 0.6dB. Figs 3 and 4 show active match to the signal mode (+1,+1) with Zopt=60Ω+j50Ω and to the signal mode (+1,-1) with Zopt=40Ω-j50Ω. For these two cases the average NF was 0.84dB and the minimum NF = 0.4dB, the actual minimum NF of the preamps.

### Conclusions

Coil coupling can have severe consequences for combined array SNR. If decoupling is not an option, active matching to the dominant mode can improve SNR. The preamplifier input impedance has no effect on combined SNR when all active channels and proper noise statistics are applied.

### Acknowledgements

In memoriam Dr. Charles Saylor, PhD. A great scientist and good friend. Gone too soon.

### References

1. Reykowski A and Wang J, Rigid signal-to-noise analysis of coupled MRI coils connected to noisy preamplifiers and the effect of coil decoupling on combined SNR, Proc 8th ISMRM, Denver, Colorado, USA, April 2000, p. 1402.
2. Maaskant R, Woestenburg E, Arts M, A generalized method of modeling the sensitivity of array antennas at system level, Proc 34th EuMC, Amsterdam, Netherlands, October 2004, pp. 1541–1544.
3. Maaskant R, Woestenburg EM, Applying the Active Antenna Impedance to Achieve Noise Match in Receiving Array Antennas, Antennas and Propagation Society International Symposium, IEEE, Honolulu, HI, USA, July 2007, pp. 5889–5892.
4. Warnick KF and Jensen MA, Optimal noise matching for mutually coupled arrays, IEEE Trans Ant Prop, vol. 55, no. 6.2, pp. 1726–1731, June 2007.
5. Warnick KF, Woestenburg B, Belostotski L, Russer P, Minimizing the noise penalty due to mutual coupling for a receiving array, IEEE Trans Ant Prop, vol. 57, no. 6, pp. 1634–1644, June 2009.
6. Findeklee C, Improving SNR by generalizing noise matching for array coils, Proc 17th ISMRM, Honolulu, HI, USA, April 2009, p. 507.
7. Findeklee C, Array Noise Matching—Generalization Proof and Analogy to Power Matching, IEEE Trans Ant Prop, vol. 59, pp. 452–459, 2011.
8. Warnick KF, Ivashina MV, Maaskant R, Woestenburg B, Unified Definitions of Efficiencies and System Noise Temperature for Receiving Antenna Arrays, IEEE Trans Ant Prop, vol. 58, pp. 2121–2125, 2010.
9. Findeklee C, Duensing R, Reykowski A, Simulating Array SNR and Effective Noise Figure In Dependence of Noise Coupling, Proc. Intl. Soc. Mag. Reson. Med. 19 (2011), 1883.
10. Reykowski A, Saylor C, Duensing GR, Do We Need Preamplifier Decoupling?, Proc. Intl. Soc. Mag. Reson. Med. 19 (2011), 3883.
11. Vester M, Biber S, Rehner R, Wiggins G, Brown R, Sodickson D, Mitigation of inductive coupling in array coils by wideband port matching, Proc. Intl. Soc. Mag. Reson. Med. 20 (2012), 2690.
12. Wiggins GC, Brown R, Zhang B, Vester M, Popescu S, Rehner R, Sodickson D, SNR Degradation in Receive Arrays Due to Preamplifier Noise Coupling and A Method for Mitigation, Proc. Intl. Soc. Mag. Reson. Med. 20 (2012), 2689.
13. Malzacher M, Vester M, Rehner R, Stumpf C, and Korf P, SNR simulations including coupled preamplifier noise, Proc. Intl. Soc. Mag. Reson. Med. 24 (2016), 2157.
14. Penfield P, Wave representation of amplifier noise, IRE Trans. Circuit Theory, vol. CT-9, pp. 84–86, Mar. 1962.
15. Meys RP, A wave approach to the noise properties of linear microwave devices, IEEE Trans. Microw Theory Techn., vol. MTT-26, pp. 34–37, Jan. 1978.
16. Wedge SW, Rutledge DB, Wave Techniques for Noise Modeling and Measurement, IEEE Trans. Ant Prop, vol. 40, no. 11, pp. 2004–2012, 1992.
17. Appelbaum SP, Adaptive arrays, IEEE Trans Ant Prop, vol. 24, no. 5, pp. 585–598, September 1976.
18. Roemer PB, Edelstein WA, Hayes CE, Souza SP, Mueller OM, The NMR Phased Array, MRM 16, 192–225 (1990).

### Figures

Fig 1: Noise waves a1n and b1n emanating from LNA 1 are scattered across the two-channel array and result in output noise waves c1n and c2n. The weighted combination cn of these output noise waves gives rise to the active mode impedance.

Fig 2: Noise figure map for traditional passive noise match to Zopt=50 Ohms. The average NF for passive noise match was 0.8dB and minimum NF = 0.6dB.

Fig 3: Noise figure map for active match to the signal mode (+1,+1) with Zopt=60+j50 Ohms. Average NF was 0.84dB and minimum NF was 0.4dB, the actual minimum NF for the LNAs.

Fig 4: Noise figure map for active match to the signal mode (+1,-1) with Zopt=40-j50 Ohms. Average NF was 0.84dB and minimum NF was 0.4dB, the actual minimum NF for the LNAs.

Proc. Intl. Soc. Mag. Reson. Med. 26 (2018) 1712
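As a numerical companion to the abstract: the optimum-weight and noise-figure expressions in the Methods section are plain linear algebra and can be transcribed directly. The sketch below is ours, not the authors' code; R_coil is taken as the real part of the quoted impedance matrix and R_lna is a simple diagonal placeholder rather than the R_LNA model of the abstract, so the printed numbers illustrate the mechanics, not the quoted 0.4-0.8 dB values.

```python
import numpy as np

def optimal_weights(R_coil, R_lna, V_coil):
    # w_opt = ([R_coil + R_lna]^{-1} V_coil)^*  (Appelbaum/Roemer combining)
    return np.conj(np.linalg.solve(R_coil + R_lna, V_coil))

def noise_figure_dB(V_coil, R_coil, R_lna):
    # F_dB = 10 log10( V^H R_coil^{-1} V / V^H (R_coil + R_lna)^{-1} V )
    num = np.vdot(V_coil, np.linalg.solve(R_coil, V_coil)).real
    den = np.vdot(V_coil, np.linalg.solve(R_coil + R_lna, V_coil)).real
    return 10.0 * np.log10(num / den)

# Two-channel example of the Results section (units/normalization arbitrary).
Z_coil = np.array([[50.0, 10.0 + 50.0j],
                   [10.0 + 50.0j, 50.0]])
R_coil = Z_coil.real            # coil thermal-noise covariance taken ~ Re(Z)
R_lna = 0.1 * np.eye(2)         # placeholder, NOT the paper's R_LNA model
for mode in (np.array([1.0, 1.0]), np.array([1.0, -1.0])):
    w = optimal_weights(R_coil, R_lna, mode)
    print(mode, w, noise_figure_dB(mode, R_coil, R_lna))
```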
2021-06-16 11:19:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6079183220863342, "perplexity": 9144.394105894966}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487623596.16/warc/CC-MAIN-20210616093937-20210616123937-00054.warc.gz"}
http://mathhelpforum.com/calculus/29936-derivative-correct.html
# Thread: is this derivative correct?

1. ## is this derivative correct?

y = e^-0.4x(C cos0.2x + Dsin0.2x)

y' = -0.4e^-0.4x(-0.2C sin0.2x + 0.2 D cos0.2x)

This is from trying to work out a particular solution to an inhomogeneous equation.

Paul B

2. Originally Posted by poundedintodust
> y = e^-0.4x(C cos0.2x + Dsin0.2x)
> y' = -0.4e^-0.4x(-0.2C sin0.2x + 0.2 D cos0.2x)
> This is from trying to work out a particular solution to an inhomogeneous equation.
> Paul B

You have to follow the product rule. $y = e^{-0.4x}[C cos(0.2x) + Dsin(0.2x)]$ $y'=-0.4e^{-0.4x}[C cos(0.2x) + Dsin(0.2x)]+e^{-0.4x}[0.2Dcos(0.2x)-0.2Csin(0.2x)]$

3. if y = 3 and x=0 should i be left with 0.4C + 0.2D = 3?

4. Originally Posted by poundedintodust
> if y = 3 and x=0 should i be left with 0.4C + 0.2D = 3?

If $y(0)=3$, then $C=3$

5. I am a bit confused now so i think Ill post the whole problem and maybe you could tell me where I am going wrong. Starting with: 5(d^2y/dx^2) + 4(dy/dx) + y = 0 I have to find a particular solution to the initial value problem where y(0) = -2, y'(0) = 3. I know a general solution is y = e^-0.4x(C cos0.2x + Dsin0.2x) after finding the roots. So substituting the values into the derivative y' = -0.4e^-0.4x(-0.2C sin0.2x + 0.2 D cos0.2x) ends with 0.4C+0.2D = 3. So C = 3 and D = 9. Doing the same with the general solution with x=-2 y = e^-0.4x(C cos0.2x + Dsin0.2x) I end up with 1( C x 1 + D x 0) in other words C = -2. My confusion comes from the fact that C should be the same in both instances?? Cheers for the help
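The thread ends on the poster's confusion, so for completeness (a short worked finish, using the corrected product-rule derivative from post 2): the stated initial data are $y(0)=-2$ and $y'(0)=3$, and each condition pins down a different combination of constants. From $y(0)=-2$ we get $C=-2$. From the corrected derivative, $y'(0)=-0.4C+0.2D=3$, so $0.8+0.2D=3$ and $D=11$. There is no clash: $y(0)$ fixes $C$ alone, while $y'(0)$ relates $C$ and $D$; the earlier "$C=3$" arose from attaching the value $3$ (which belongs to $y'(0)$) to $y(0)$.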
2017-01-20 02:08:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8097255229949951, "perplexity": 2657.6829080079156}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280763.38/warc/CC-MAIN-20170116095120-00265-ip-10-171-10-70.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/1604415/prove-that-a-straight-line-is-the-shortest-distance-between-two-points
# Prove that a straight line is the shortest distance between two points?

Prove that a straight line is the shortest distance between two points in $E_3$. Use the following scheme; let $\alpha: [a,b]\to E_3$ be an arbitrary curve segment from $p = \alpha(a) , q = \alpha(b)$. Let $u = (q - p)/||q - p||$.

(a) If $\sigma$ is a straight-line segment from $p$ to $q$, say $$\sigma (t) = (1 - t)p + tq ,\quad 0\leq t\leq1$$ show that $L(\sigma ) = d(p,q)$.

What I have done: $$L(\sigma)=\int_{0}^{1}||\sigma'(t)||dt=\int_{0}^{1}(p^2+q^2)^{1/2}dt=\sqrt{(p^2+q^2)}(1),$$ $d(p,q)=\sqrt{(b-a)^2+(q-p)^2}$. Where am I doing wrong? It's a problem from O'Neill Elementary Differential Geometry.

• How did you get your expression for $\vert\vert \sigma ' (t) \vert\vert$? – πr8 Jan 8 '16 at 14:38
• You should integrate from 0 to 1. – Paul K Jan 8 '16 at 14:41
• integrating the speed of curve from a to b,yes t should be from 0 to 1 – Nebo Alex Jan 8 '16 at 14:42
• But he's integrating the special curve $\sigma$ which is only defined on $[0,1]$. – Paul K Jan 8 '16 at 14:43
• It should be $\int_{0}^1$ not $\int_{a}^b$ – Thomas Andrews Jan 8 '16 at 14:44

It is correct that $\sigma'(t)=q-p,$ what is wrong is the norm of this vector, that is not $\sqrt{p^2+q^2},$ but $||q-p||=d(p,q).$ In coordinates, if $p=(x_1,y_1,z_1)$ and $q=(x_2,y_2,z_2)$, then $q-p$ is the vector of components $(x_2-x_1,y_2-y_1,z_2-z_1)$ whose norm is $$\sqrt{(x_1-x_2)^2+(y_1-y_2)^2+(z_1-z_2)^2}=d(p,q).$$

• i am confused by these $p=α(a),q=α(b)$ – Nebo Alex Jan 8 '16 at 16:40
• @Cielo: to visualize, imagine $t\in[a,b]$ is the time, so $p=\alpha(a)$ is the position at the initial time $a$, it is a point in space $\mathbb{E}_3$, while $q=\alpha(b)$ is the position at the final time $b$. – enzotib Jan 8 '16 at 16:43
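Spelling out the last step of part (a) with the corrected norm (a one-line completion of the answer above): since $\sigma'(t)=q-p$ is constant, $$L(\sigma)=\int_0^1||\sigma'(t)||\,dt=\int_0^1||q-p||\,dt=||q-p||=d(p,q),$$ which is exactly the identity $L(\sigma)=d(p,q)$ that part (a) asks for.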
2019-04-20 22:50:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.849788248538971, "perplexity": 230.6303590846687}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578530060.34/warc/CC-MAIN-20190420220657-20190421002657-00239.warc.gz"}
https://socratic.org/questions/how-do-you-simplify-2sqrt3-6sqrt3
How do you simplify 2sqrt3+6sqrt3? $8 \sqrt{3}$ $2 x + 6 x = 8 x$ $2 \sqrt{3} + 6 \sqrt{3} = 8 \sqrt{3}$
2022-10-04 01:31:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 3, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.770963728427887, "perplexity": 3475.2790719932273}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00325.warc.gz"}
https://gateoverflow.in/309359/%23self-doubt-weak-entity-in-dbms
# #SELF DOUBT (WEAK ENTITY IN DBMS)

Can a weak entity depend on more than one strong entity? If yes, then how exactly does that work?

## Related questions

1. If the identifying relationship set has a descriptive attribute, will the attribute be moved to the weak entity side? Please explain with proper reference.
2. Why is there always a $1:M$ relationship between a strong entity set and a weak entity set, and why not $M:1$ or $M:N$?
2020-08-12 10:07:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1903257817029953, "perplexity": 5021.01682706445}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738888.13/warc/CC-MAIN-20200812083025-20200812113025-00208.warc.gz"}
https://www.nature.com/articles/ncomms6153?error=cookies_not_supported&code=17c50b52-3745-401e-af27-4a71053b8007
Introduction

Ever since their discovery in the 1950s (ref. 1), neutrinos have continued to surprise us. In the Standard Model (SM) of elementary particle physics, neutrinos are massless particles. However, since the results from the Super-Kamiokande experiment in 1998 (ref. 2), the phenomenon of neutrino oscillations has been well established, indicating that neutrinos do have nonzero and non-degenerate masses and that they can convert from one flavour to another3. This important result was followed by a boom of results from several international collaborations. Certainly, these results have pinned down the values of the various neutrino parameters to an incredible precision, especially considering that neutrinos are extremely elusive particles and the corresponding experiments are extraordinarily complex4. Currently operating experiments and future investigations under construction are aimed at determining the missing neutrino parameters, such as the CP-violating phase (which can be important for understanding the matter–antimatter asymmetry in the Universe), the sign of the large mass-squared difference for neutrinos, and the absolute neutrino mass scale. In addition, the cubic kilometre scale neutrino telescope at the South Pole, IceCube5, has been successfully constructed to search for ultra-high energy astrophysical neutrinos, while a number of underground experiments are looking for neutrinoless double-beta decay (see refs 6, 7, 8, 9, 10) and others are waiting for neutrino bursts from galactic supernova explosions (see refs 11, 12). However, the origin of neutrino masses and lepton flavour mixing remains a mystery, and calls for new physics beyond the SM. It is believed that new physics should appear somewhere above the electroweak scale (that is, ΛEW~10^2 GeV) but below the Planck scale (that is, ΛP~10^19 GeV) for the following reasons. First, the smallness of neutrino masses can be ascribed to the existence of superheavy particles, whose masses are close to the grand unified theory (GUT) scale (for example, ΛGUT~10^16 GeV), such as right-handed neutrinos in the canonical seesaw models13,14,15,16,17. Moreover, the out-of-equilibrium and CP-violating decays of heavy right-handed neutrinos in the early Universe can produce a lepton number asymmetry, which will be further converted into a baryon number asymmetry18. Therefore, the canonical seesaw mechanism combined with so-called leptogenesis provides an elegant solution to the generation of tiny neutrino masses and the matter–antimatter asymmetry in our Universe. Second, the strong hierarchy in charged-fermion masses (that is, mu≪mc≪mt, md≪ms≪mb and me≪mμ≪mτ) and the significant difference between quark and lepton mixing patterns (that is, three small quark mixing angles, while two large and one small leptonic mixing angles) could find their solutions in the framework of GUTs extended by a flavour symmetry19,20. Therefore, an attractive and successful flavour model usually works at a superhigh-energy scale, where quarks and leptons are unified into the same multiplets of the gauge group but assigned into different representations of the flavour symmetry group. Third, the SM Higgs particle with a mass of 126 GeV has recently been discovered at the Large Hadron Collider at CERN in Geneva, Switzerland21,22. If this is further confirmed by future precision measurements and the top-quark mass happens to be large, the SM vacuum will become unstable around the energy scale 10^12 GeV. In this case, new physics has to show up to stabilize the SM vacuum23.
In the canonical seesaw model with heavy right-handed neutrinos, the SM vacuum is actually further destabilized. However, if an extra scalar singlet is introduced to generate right-handed neutrino masses, the SM vacuum can be stabilized and the tiny neutrino masses are explained via the seesaw mechanism24. Hence, the assumption that neutrino masses and lepton flavour mixing are governed by new physics at a superhigh-energy scale is well motivated. The experimental results will guide us to the true theory of neutrino masses, lepton flavour mixing and CP violation. At the same time, they will also rule out quite a large number of currently viable flavour models. However, this will only be possible if the renormalization group running of neutrino parameters, which describes their physical evolution with respect to energy scale, is properly taken into account. Thus, it may help to elucidate the mechanism for neutrino mass generation. The aim of this Review is to examine neutrino renormalization group running in more detail. First, we briefly summarize the current status of neutrino parameters and the primary goals of future neutrino experiments, and present a general discussion about the effective theory approach and renormalization group equations (RGEs) in particle physics. Then, we consider several typical neutrino mass models in the framework of supersymmetric and extra-dimensional theories, and the running behaviour of neutrino parameters is described and explained. Finally, the impact of renormalization group running on flavour model building and leptogenesis is illustrated and emphasized.

Neutrino parameters at low energies

Neutrinos are produced in beta decay of radioactive nuclei, nuclear fusion in the Sun, collisions between nucleons in the Earth atmosphere and cosmic-ray particles and in the man-made high-energy accelerators. Since they are always accompanied by the charged leptons e, μ and τ in production, it is convenient to define the neutrino flavour eigenstates {|νe›,|νμ›,|ντ›} and discriminate them according to the corresponding charged leptons. The neutrino flavour eigenstates |να› (for α=e,μ,τ) are related to three neutrino mass eigenstates {|ν1›,|ν2›,|ν3›} with definite masses {m1,m2,m3} by the superposition |να›=Uα1|ν1›+Uα2|ν2›+Uα3|ν3›, where the 3 × 3 unitary matrix U is the so-called lepton flavour mixing matrix25,26,27. It is conventional to parameterize U by three Euler-like mixing angles {θ12,θ13,θ23} and three CP-violating phases {δ,ρ,σ}, namely3

$$U=\begin{pmatrix} c_{12}c_{13} & s_{12}c_{13} & s_{13}e^{-i\delta} \\ -s_{12}c_{23}-c_{12}s_{13}s_{23}e^{i\delta} & c_{12}c_{23}-s_{12}s_{13}s_{23}e^{i\delta} & c_{13}s_{23} \\ s_{12}s_{23}-c_{12}s_{13}c_{23}e^{i\delta} & -c_{12}s_{23}-s_{12}s_{13}c_{23}e^{i\delta} & c_{13}c_{23} \end{pmatrix}\cdot\mathrm{diag}\left(e^{i\rho},e^{i\sigma},1\right)$$

with cij≡cos θij and sij≡sin θij for ij=12, 13, 23. As a consequence of quantum interference among the three neutrino mass eigenstates, neutrinos can transform from one flavour to another, when propagating from the sources to the detectors. This phenomenon of neutrino flavour oscillations will be absent if either the two independent neutrino mass-squared differences Δm21^2≡m2^2−m1^2 and Δm31^2≡m3^2−m1^2 (or Δm32^2≡m3^2−m2^2) or the three leptonic mixing angles {θ12, θ13, θ23} are vanishing. Note that we will use Δm31^2 instead of Δm32^2. Thanks to a number of elegant experiments in the past two decades3, the phenomenon of neutrino flavour oscillations has now been firmly established. The latest global analysis of data from all existing past and present neutrino oscillation experiments provides our best knowledge on the neutrino mixing parameters, as shown in Table 1. Note that Δm^2≡m3^2−(m1^2+m2^2)/2 has been used in ref. 28 to fit the oscillation data in both cases of normal neutrino mass hierarchy (that is, m1<m2<m3) and inverted neutrino mass hierarchy (that is, m3<m1<m2); only the results from ref.
Only the results from ref. 28 are listed in this table, to get a ballpark feeling for the values of the neutrino parameters. Two other independent global-fit analyses, in refs 29, 30, yield different best-fit values. However, the 3σ confidence intervals of the neutrino parameters from all three groups are indeed consistent. At present, although there are weak hints for a nonzero Dirac CP-violating phase δ (see the last row of Table 1), it is fair to say that no direct and significant experimental constraints exist for the leptonic CP-violating phases. Furthermore, since neutrino oscillation experiments are blind to the Dirac or Majorana nature of neutrinos and to the Majorana CP-violating phases {ρ,σ}, it is still an open question whether neutrinos are Dirac or Majorana particles. In the latter case, neutrinos are their own antiparticles, which would lead to neutrinoless double-beta decay of some nuclear isotopes and can hopefully be confirmed with this kind of experiment31. The primary goals of ongoing and forthcoming neutrino oscillation experiments are to precisely measure the three leptonic mixing angles, to determine the neutrino mass hierarchy, and to discover the leptonic Dirac CP-violating phase. In addition, non-oscillation neutrino experiments aim to pin down the absolute neutrino masses and to probe the Majorana nature of neutrinos.

Confronting theories with experiments

Although most neutrino parameters have already been measured with a reasonably good precision, the origin of tiny neutrino masses and bi-large lepton flavour mixing remains elusive. To accommodate tiny neutrino masses, one may have to go beyond the SM at the electroweak scale and explore new physics at a superhigh-energy scale. In this case, an immediate question is how to compare theoretical predictions at a high-energy scale with the observables at a low-energy scale. With this question in mind, we present a brief account of effective theories and renormalization group running, and describe how neutrinos fit into this framework.

Effective theory approach

The effective theory approach is very useful, and sometimes indispensable, in particle physics, where interesting phenomena appear at various energy scales. The basic premise for this approach to work well is that the dynamics at low-energy scales (or large distances) does not depend on the details of the dynamics at high-energy scales (or short distances). For instance, the energy levels of a hydrogen atom are essentially determined by the fine-structure constant of the electromagnetic interaction α≈1/137 and the electron mass $m_e\approx0.511$ MeV. At this point, we do not need to know the inner structure of the proton, or the existence of the top quark and the weak gauge bosons. That is to say, the energy levels of a hydrogen atom can be calculated by neglecting all dynamics above an energy or momentum scale Λ much higher than $\alpha m_e$, and the corresponding error in the calculation can be estimated as $\alpha m_e/\Lambda$. If a higher accuracy is required, Λ will increase and the dynamics at a higher-energy scale may be needed. See refs 32, 33 for general reviews on effective field theories. Now, consider a toy model with a light particle φ and a heavy one Φ, whose masses are denoted by m and M, respectively. Since $m\ll M$, there exist two widely separated energy scales. The Lagrangian for the full theory can be written as the sum of a term involving only the light particle and a second term, in which the interaction between the light and heavy particles has been included.
Since we are interested in physical phenomena at a low-energy scale $\mu\ll M$, where the experiments are carried out, we can integrate out the heavy particle as in the path-integral formalism. Hence, an effective Lagrangian involving only the light particle is derived, and higher-dimensional operators $\mathcal{O}_i$ appear in $\mathcal{L}_{\rm eff}=\mathcal{L}_{\rm light}+\sum_i c_i\,\mathcal{O}_i/M^{d_i-4}$, where $c_i$ is a coefficient and $d_i$ stands for the mass dimension of $\mathcal{O}_i$. It is evident that the dynamics at the high-energy scale can affect the low-energy physics by modifying the coupling constants and imposing symmetry constraints, but the overall effects are suppressed by the heavy particle mass M. The method of effective field theories becomes indispensable when we do not even know whether a complete theory with the heavy particles exists or not.

Matching and threshold effects

Ultraviolet divergences appear in quantum field theories if radiative corrections are taken into account. In the presence of higher-dimensional operators, effective theories are nonrenormalizable in the sense that an ultraviolet divergence cannot be removed by a finite number of counter terms in the original Lagrangian. However, since there is an infinite number of higher-dimensional operators in the Lagrangian $\mathcal{L}_{\rm eff}$, it is always possible to absorb all divergences and obtain finite results with a desired accuracy. Although any physical observables should be independent of the renormalization scheme used, it is a nontrivial task to choose a convenient renormalization scheme such that perturbative calculations are valid and simple. According to the Appelquist–Carazzone theorem34, heavy particles decouple automatically in a mass-dependent scheme, and their impact on the effective theory will be inversely proportional to the heavy particle mass M and disappear in the limit of an infinitely large mass. Nevertheless, higher-order calculations in this scheme become quite involved. The mass-independent schemes, such as the modified minimal subtraction ($\overline{\rm MS}$) scheme (refs 35, 36, 37), have been suggested for practical computations in effective theories38, where the strategy to construct a self-consistent effective theory in the $\overline{\rm MS}$ scheme is outlined and applied to the determination of the heavy gauge boson mass M_G in the SU(5) GUT39. One problem with the mass-independent schemes is that the heavy particles contribute equally to the so-called beta functions for the gauge coupling constants, leading to an incorrect evolution at a low-energy scale $\mu\ll M_G$. The solution to this problem is to decouple the heavy particles by hand and match the effective theory with the full theory at μ=M_G, so that the same physical results are produced in the effective theory as in the full theory. At any other energy scale below M_G, the gauge coupling constants are governed by renormalization group running, which will be discussed in the following subsection. Therefore, if there are several heavy particles with very different masses, we should decouple them one by one to obtain a series of effective theories. The matching conditions (that is, the boundary conditions) at each mass scale are crucial for the effective theory to work below this scale. As a consequence, physical quantities (such as coupling constants and masses) may change dramatically at a decoupling scale or mass threshold. To figure out threshold effects, one has first to start with the full theory and construct the effective theories following the above strategy.

Renormalization group running

The renormalization group was invented in 1953 by Stückelberg and Petermann40.
However, it was Gell-Mann and Low41 who studied the short-distance behaviour of the photon propagator in quantum electrodynamics in 1954 by using the renormalization group approach. The important role played by the renormalization group in Gell-Mann and Low's work was clarified in 1956 by Bogoliubov and Shirkov42. The same approach was applied by Wilson to study critical phenomena and explain how phase transitions take place43,44,45. The essential idea of the renormalization group stems from the fact that the theory is invariant under a change of renormalization prescription. More explicitly, if the theory is renormalized at a mass scale μ, any change of μ will be compensated by changes in the renormalized coupling constant g(μ) and the mass m(μ) such that the theory remains the same. By requiring that physical quantities, for example the S-matrix element S[μ,g(μ),m(μ)], are invariant under this transformation, namely μ dS/dμ=0, one can derive

$$\left[\mu\frac{\partial}{\partial\mu}+\beta(g)\frac{\partial}{\partial g}+\gamma_m(g)\,m\frac{\partial}{\partial m}\right]S=0,$$

which is a specific form of the Callan–Symanzik equation46,47. Note that we have introduced the RGEs for the coupling constant and the mass, $\mu\,{\rm d}g/{\rm d}\mu=\beta(g)$ and $\mu\,{\rm d}m/{\rm d}\mu=\gamma_m(g)\,m$, where β(g) and γ_m(g) are the beta function and the anomalous dimension, respectively, depending only on the coupling constant g in the $\overline{\rm MS}$ scheme. As pointed out by Weinberg48 a long time ago, the standard electroweak model can be regarded as an effective theory at low energies, and the impact of new physics at high-energy scales can be described by higher-dimensional operators, which are composed of the already known SM fields. If the SM gauge symmetry is preserved, but the accidental symmetry of lepton number is violated, there will be a unique dimension-five operator $(\overline{\ell_{\rm L}}\,\tilde{H})(\tilde{H}^{\rm T}\,\ell^{\rm c}_{\rm L})$, where $\ell_{\rm L}$ and H stand for the SM lepton and Higgs doublets, respectively. After spontaneous breakdown of the electroweak gauge symmetry, neutrinos acquire finite masses from this so-called Weinberg operator. Therefore, neutrinos are assumed to be Majorana particles in this case. It is expected that the lightness of neutrinos can be ascribed to the existence of a superhigh-energy scale. Now, it becomes clear that if neutrino masses originate from some dynamics at a high-energy scale, such as the GUT scale, neutrino parameters, including the leptonic mixing parameters and neutrino masses, will evolve according to their RGEs as the energy scale goes down to where the parameters are actually measured in low-energy experiments.

Neutrino mass models

To generate tiny neutrino masses, one has to go beyond the SM and extend its particle content, or its symmetry structure, or both. In this section, we summarize several typical neutrino mass models, which are natural extensions of the SM that have attracted a lot of attention in the past decades. In Fig. 1, the Feynman diagrams for neutrino mass generation in those models are shown.

Canonical seesaw models

As the Higgs particle has recently been discovered in the ATLAS21 and CMS22 experiments at the Large Hadron Collider, the SM gauge symmetry SU(2)_L × U(1)_Y and its spontaneous breaking via the Higgs mechanism seem to work perfectly in describing the electromagnetic and weak interactions. On the other hand, nonzero neutrino masses indicate that the SM may just be an effective theory below and around the electroweak scale Λ_EW = 10^2 GeV. Thus, one can preserve the SM gauge symmetry structure and take into account all higher-dimensional operators that are relevant for neutrino masses, as pointed out by Weinberg48.
The total Lagrangian is $\mathcal{L}=\mathcal{L}_{\rm SM}+\frac{1}{2}\kappa_{\alpha\beta}\,(\overline{\ell_{{\rm L}\alpha}}\,\tilde{H})(\tilde{H}^{\rm T}\,\ell^{\rm c}_{{\rm L}\beta})+{\rm h.c.}$, where $\mathcal{L}_{\rm SM}$ denotes the SM Lagrangian, and $\ell_{\rm L}$ and H stand for the SM lepton and Higgs doublets, respectively. The coefficients καβ (α,β=e,μ,τ) are of mass dimension −1 and are related to the Majorana neutrino mass matrix as $M_\nu=\kappa\,\langle H\rangle^2$, where ‹H›≈174 GeV is the vacuum expectation value of H. One of the simplest extensions of the SM leading to the Weinberg operator is the so-called type-I seesaw model, in which three right-handed singlet neutrinos νR are introduced. Since the νR's are neutral under transformations of the SM gauge symmetry, they can have Majorana mass terms; namely, their masses are the eigenvalues of a complex and symmetric mass matrix M_R. On the other hand, they are coupled to the lepton and Higgs doublets via a Yukawa-type interaction with a coupling matrix Y_ν. Since the masses of the right-handed neutrinos are not subject to electroweak symmetry breaking, we can assume that M_R lies far above the electroweak scale and integrate out the three νR's. At a lower-energy scale, one obtains the Weinberg operator with $\kappa=Y_\nu M_{\rm R}^{-1}Y_\nu^{\rm T}$. Therefore, the smallness of neutrino masses can be attributed to the heaviness of the νR's (refs 13, 14, 15, 16, 17). In the type-II seesaw model49,50,51,52,53,54, the scalar sector of the SM is enlarged with a Higgs triplet Δ. To avoid an unwanted Goldstone boson associated with the spontaneous breakdown of the global U(1) lepton number symmetry, one can couple the Higgs triplet to the lepton doublet with a Yukawa coupling matrix Y_Δ, and simultaneously to the Higgs doublet with a mass parameter μ_Δ. Assuming that the Higgs triplet mass M_Δ is well above the electroweak scale, that is, $M_\Delta\gg\Lambda_{\rm EW}$, we can integrate out Δ to obtain the Weinberg operator with $\kappa=Y_\Delta\,\mu_\Delta/M_\Delta^2$, indicating that the neutrino masses are suppressed by M_Δ. In the type-III seesaw model55, one introduces three fermion triplets Σi (i=1,2,3) and couples them to the lepton and Higgs doublets with a Yukawa coupling matrix Y_Σ. In each Σi, there are three heavy fermions: two charged fermions Σ± and one neutral fermion Σ⁰. Given a Majorana mass matrix M_Σ of the fermion triplets, with $M_\Sigma\gg\Lambda_{\rm EW}$, we can construct an effective theory without the heavy Σi's at a lower-energy scale. In this effective theory, the same Weinberg operator for neutrino masses is obtained, and the coefficient is identified as $\kappa=Y_\Sigma M_\Sigma^{-1}Y_\Sigma^{\rm T}$. One can observe that the Σ⁰'s play the same role in generating neutrino masses as the νR's in the type-I seesaw model. However, due to their gauge interactions, the fermion triplets are subject to more restrictive constraints from lepton-flavour-violating decays of charged leptons and direct collider searches. A common feature of the above three seesaw models is the existence of superheavy particles. Given neutrino masses $m_i\sim0.1$ eV and ‹H›~100 GeV, one can estimate the seesaw scale Λ_SS ~ 10^14 GeV. Therefore, an effective theory with the same Weinberg operator is justified at any scale between Λ_EW and Λ_SS. Although the leptogenesis mechanism for the matter–antimatter asymmetry can be perfectly implemented in the seesaw framework, the heaviness of the new particles renders the seesaw models difficult to test in low-energy and collider experiments.

Inverse seesaw model

To lower the typical seesaw scale Λ_SS in a natural way, one can extend the type-I seesaw model by adding three right-handed singlet fermions S_R and one Higgs singlet Φ, both of which are coupled to the νR's by a Yukawa coupling matrix Y_S. A proper assignment of quantum numbers under a specific global symmetry can be used to forbid the Majorana mass term of νR and the νR–Φ Yukawa interaction.
However, the mixing between νR and S_R is allowed through a Dirac mass term M_S=Y_S‹Φ›, and so is the Majorana mass term μ_S of S_R. In this setup, the Majorana mass matrix for the three light neutrinos is given by $M_\nu\approx M_{\rm D}(M_{\rm S}^{\rm T})^{-1}\mu_{\rm S}\,M_{\rm S}^{-1}M_{\rm D}^{\rm T}$, where M_D≡Y_ν‹H› is the Dirac neutrino mass matrix, as in the type-I seesaw model. Given $M_{\rm D}\sim\Lambda_{\rm EW}$ and $M_{\rm D}/M_{\rm S}\sim10^{-2}$, sub-eV neutrino masses can be achieved by assuming μ_S ~ 1 keV. In this inverse seesaw model56, the neutrino masses are not only suppressed by the ratio of the electroweak and seesaw energy scales, that is, Λ_EW/Λ_SS=M_D/M_S ~ 10⁻², but also by the tiny lepton-number-violating mass parameter μ_S, compared with the ordinary seesaw scale. The smallness of μ_S is natural in the sense that the model preserves the lepton number symmetry in the limit μ_S→0 (ref. 57). In contrast to the ordinary seesaw models, the inverse seesaw model is testable through non-unitarity effects in neutrino oscillation experiments58, lepton-flavour-violating decays of charged leptons59,60,61 and collider experiments62,63.

Scotogenic model

A radiative mechanism for neutrino mass generation is to attribute the smallness of neutrino masses to loop suppression instead of the existence of superheavy particles64,65,66,67,68. One interesting model of this type is the so-called scotogenic model67, where three νR's and one extra Higgs doublet η are added to the SM. Furthermore, a Z2 symmetry is imposed on the model such that all SM fields are even, while νR and η are odd. Even though the SU(2)_L × U(1)_Y quantum numbers of η are the same as those of the SM Higgs doublet and the νR's have a Majorana mass term, the Dirac neutrino mass term is forbidden by the Z2 symmetry and the neutrino masses vanish at tree level. In the scotogenic model, neutrino masses first appear at one-loop level, and the exact Z2 symmetry guarantees the stability of one neutral scalar boson (from the Higgs doublet η), which would be a good candidate for a dark-matter particle67. Due to loop suppression, sub-eV neutrino masses can be obtained even when the νR's and the scalar particles are at the TeV scale. Therefore, this model has observable effects in lepton-flavour-violating processes, the relic density of dark matter and collider phenomenology67.

Dirac neutrino model

Finally, we consider the Dirac neutrino model. In the SM, both quarks and charged leptons acquire their masses through Yukawa interactions with the Higgs doublet. After introducing three νR's, one can do exactly the same thing for neutrinos, and thus tiny neutrino masses can be ascribed to the smallness of the neutrino Yukawa couplings. One difficulty with the Dirac neutrino model is that the fermion masses then span 12 orders of magnitude, exacerbating the strong hierarchy problem of fermion masses in the SM. Solutions to this problem can be found in extra-dimensional models69, where the SM particles are confined to a three-dimensional brane and the νR's are allowed to feel one or more extra dimensions70. In this case, the neutrino Yukawa couplings are highly suppressed by the large volume of the extra dimensions. Another solution is to implement a radiative mechanism, as in the scotogenic model, such that light neutrino masses are due to loop suppression71,72,73,74. See Fig. 1 for an illustration. However, in both kinds of models, an additional U(1) symmetry (that is, lepton number conservation) has to be enforced to forbid a Majorana mass term.

Running behaviour of neutrino parameters

Now, we proceed to discuss the running behaviour of neutrino parameters.
First of all, we note that there are two different ways to study the renormalization group running. In the top–down scenario, a full theory is known at the high-energy scale, and the theoretical predictions for the neutrino parameters are given as initial conditions. At the threshold of heavy-particle decoupling, one has to match the resulting effective theory with the full theory, so that the unknown parameters in the effective theory can be determined and used to reproduce the same physical results as in the full theory. Then, the running is continued in the effective theory. This procedure should be repeated in the case of multiple particle thresholds, until a low-energy scale where the neutrino parameters are measured. In the bottom–up scenario, we start with the experimental values of the neutrino parameters at a low-energy scale and evolve them, by using the RGEs in the effective theory, up to the first particle threshold. At this moment, more input or assumptions about the dynamics above the threshold are needed for the running to continue. Otherwise, the running is terminated and some useful information on the full theory cannot be obtained. In the following, we will focus on the bottom–up approach and explore the implications of measurements of neutrino parameters for the dynamics at a high-energy scale, where a full theory of neutrino masses and lepton flavour mixing may exist. However, we shall also comment on the threshold effects in the top–down scenario once a specific flavour model is assumed. In the effective theory, where the SM is extended by the Weinberg operator, the RGE for the effective neutrino mass parameter κ was first derived in refs 75, 76, and revised in ref. 77. In general, the RGE for κ is given by

$$16\pi^2\frac{{\rm d}\kappa}{{\rm d}t}=C_\kappa\left[\left(Y_l^\dagger Y_l\right)^{\rm T}\kappa+\kappa\left(Y_l^\dagger Y_l\right)\right]+\alpha_\kappa\,\kappa,\qquad(5)$$

where t=ln(μ/Λ_EW) and Y_l stands for the charged-lepton Yukawa coupling matrix. In equation (5), C_κ is a constant, while α_κ depends on the gauge couplings and all the Yukawa coupling matrices of the charged fermions. Given the initial values of all relevant coupling constants and masses at Λ_EW, one can evaluate the neutrino parameters at any energy scale between Λ_EW and a cutoff scale Λ, after solving equation (5) together with the RGEs of the other model parameters and diagonalizing κ. Since κ is diagonalized by the lepton flavour mixing matrix U(θ12,θ13,θ23,δ,ρ,σ) in the basis where Y_l is diagonal, one can derive, using equation (5), the individual RGEs for the leptonic mixing angles {θ12,θ13,θ23}, the CP-violating phases {δ,ρ,σ} and the neutrino mass eigenvalues {m1,m2,m3}, which can be found in refs 78, 79, 80.

The Standard Model

In the framework of the SM, the relevant coefficients in equation (5) are given by $C_\kappa=-3/2$ and $\alpha_\kappa\simeq-3g_2^2+6y_t^2+2y_\tau^2+\lambda$, where only the Yukawa couplings of the heaviest charged lepton and quark are retained, and λ is the quartic Higgs self-coupling constant. Since the Yukawa couplings of the charged leptons are small compared with the gauge couplings, the evolution of the neutrino masses can be essentially described by a common scaling factor. For the running of the leptonic mixing angles, the contribution from the tau Yukawa coupling y_τ=m_τ/‹H›~0.01 is dominant. However, y_τ itself is already a very small number, so one expects that the running effects for all three leptonic mixing angles are generally insignificant. On the other hand, the evolution of the leptonic mixing angles can be enhanced if the neutrino mass spectrum is quasi-degenerate, that is, m1≈m2≈m3≡m. In particular, the leptonic mixing angle θ12 has the strongest running effects, partly due to the enhancement factor $m^2/\Delta m_{21}^2$.
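As a quick back-of-the-envelope check (a sketch, not part of the original analysis), the size of this enhancement factor can be evaluated with the approximate global-fit value $\Delta m_{21}^2\approx7.5\times10^{-5}$ eV² and the quasi-degenerate benchmark m = 0.2 eV that is used below:

```python
# Size of the enhancement factor m^2 / dm2_21 that drives the running of
# theta_12 in the quasi-degenerate limit. Both inputs are assumptions for
# illustration: dm2_21 from the global fits, m from the benchmark used below.
dm2_21 = 7.5e-5   # eV^2, solar mass-squared difference (approximate)
m = 0.2           # eV, common neutrino mass in the quasi-degenerate limit

print(f"m^2 / dm2_21 ~ {m**2 / dm2_21:.0f}")   # ~ 530, a sizeable enhancement
```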
In the limit of a quasi-degenerate mass spectrum and CP conservation, the RGEs for the two neutrino mass-squared differences and the three leptonic mixing angles take a particularly simple form, with the relevant coefficients presented in Table 2. It is then straightforward to observe that the evolution of θ12 is enhanced by a factor of $\Delta m_{31}^2/\Delta m_{21}^2\approx30$ compared with that of θ13 and θ23. For illustration, the evolution of θ12 from M_Z=91.19 GeV to Λ=10^10 GeV is shown in Fig. 2. At M_Z, the gauge coupling constants and the quark mixing parameters are taken from the Particle Data Group3, the quark and charged-lepton masses from refs 81, 82, and the leptonic mixing parameters are set to the best-fit values from the NuFit group30. The Higgs mass M_H=126 GeV is assumed, consistent with the latest measurements by the ATLAS21 and CMS22 experiments. It is worthwhile to mention that M_H, or equivalently the Higgs self-coupling constant, affects the running of the neutrino masses, and also the SM vacuum stability23. Finally, a quasi-degenerate neutrino mass spectrum is adopted, with the lightest neutrino mass m1=0.2 eV, and the Majorana CP-violating phases {ρ,σ} are set to zero. Even with these extremely optimistic assumptions, the value of θ12 turns out to be larger by only 1° at Λ=10^10 GeV than at M_Z. The previous observations apply well to seesaw models with Λ=10^10 GeV identified as the mass of the lightest new particle. Above the seesaw threshold, the running of neutrino parameters has also been studied in the complete type-I83,84,85, type-II86,87,88 and type-III89 seesaw models. However, in low-scale neutrino mass models there exist new particles at the TeV scale. Therefore, the running behaviour of neutrino parameters can be significantly changed by threshold effects in the inverse seesaw model90,91 and the scotogenic model92. In the Dirac neutrino model, the RGEs of the neutrino parameters have also been derived and investigated in detail93.

Supersymmetric models

In the minimal supersymmetric extension of the SM (MSSM), all fermions have bosonic partners, and vice versa94. Although there is so far no direct hint of supersymmetry, the MSSM is regarded as one of the most natural alternatives to the SM for its three salient features: (1) elimination of the fine-tuning or hierarchy problem; (2) implication for grand unification of the gauge coupling constants; (3) candidates for the dark matter. Hence, neutrino mass models in the supersymmetric framework are extensively studied in the literature78. In the MSSM extended with the Weinberg operator, the corresponding coefficients in equation (5) are $C_\kappa=1$ and $\alpha_\kappa\simeq-\frac{6}{5}g_1^2-6g_2^2+6y_t^2$. The neutrino mass matrix is then given by $M_\nu=\kappa\,(\langle H\rangle\sin\beta)^2$, with tan β being the ratio of the vacuum expectation values of the two Higgs doublets in the MSSM. Similarly to the SM, the running of the leptonic mixing angles is dominated by the tau Yukawa coupling, which in the MSSM reads $y_\tau=m_\tau\sqrt{1+\tan^2\beta}/\langle H\rangle$. However, now y_τ can be remarkably larger than its value in the SM if a large value of tan β is chosen. Consequently, apart from the enhancement due to a quasi-degenerate neutrino mass spectrum, the running effects of the leptonic mixing angles can be enlarged by tan β. In Fig. 2, we show the evolution of θ12 in the MSSM with tan β=10, where the input values at M_Z are the same as in the SM. In addition, the supersymmetry breaking scale is assumed to be 1 TeV, below which the SM works well as an effective theory. The value of θ12 decreases with respect to an increasing energy scale, whereas it increases in the SM. This is due to the opposite signs of C_κ in the SM (−3/2) and in the MSSM (+1).
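To make the opposite running directions and the tan β enhancement concrete, here is a minimal numerical sketch. It Euler-integrates a simplified, CP-conserving, quasi-degenerate form of the θ12 RGE with all inputs frozen except θ12 itself; the starting angles, mass values and the overall sign C are assumptions chosen only to reproduce the qualitative behaviour described above, so the output is illustrative rather than a substitute for the full computation:

```python
import numpy as np

def run_theta12(C, ytau, mu_lo=91.19, mu_hi=1e10, m=0.2, dm2=7.5e-5,
                th12=np.radians(33.5), th23=np.radians(45.0), steps=2000):
    """Euler-integrate a toy theta_12 RGE from mu_lo to mu_hi (in GeV)."""
    dt = (np.log(mu_hi) - np.log(mu_lo)) / steps
    for _ in range(steps):
        # d(theta_12)/d ln(mu) in the CP-conserving, quasi-degenerate limit
        rhs = C * ytau**2 / (32 * np.pi**2) * np.sin(2 * th12) \
              * np.sin(th23)**2 * (2 * m)**2 / dm2
        th12 += rhs * dt
    return np.degrees(th12)

tan_beta = 10.0
ytau_sm = 0.0102                                # ~ m_tau / <H>
ytau_mssm = ytau_sm * np.sqrt(1 + tan_beta**2)  # tan(beta)-enhanced

print("SM-like   (C = +1):", round(run_theta12(+1, ytau_sm), 2), "deg")
print("MSSM-like (C = -1):", round(run_theta12(-1, ytau_mssm), 2), "deg")
```

Even this crude integration shows the two regimes: a sub-degree upward shift of θ12 for SM-like inputs, and a much larger downward shift once y_τ is enhanced by tan β.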
As an example of the top–down approach, one can consider a bimaximal-mixing pattern (that is, θ12=θ23=45° and θ13=0) at the GUT scale Λ_GUT=2 × 10^16 GeV (refs 84, 95). It is worthwhile to mention that the leptonic mixing angles above the seesaw scale arise from the diagonalization of the effective coupling matrix $\kappa=Y_\nu M_{\rm R}^{-1}Y_\nu^{\rm T}$, so the leptonic mixing angles and the neutrino masses at this scale can be viewed as a convenient parametrization of κ, which is a combination of the fundamental model parameters Y_ν and M_R. Therefore, a bimaximal-mixing pattern may result from a flavour symmetry at the GUT scale. For a complete type-I seesaw model at Λ_GUT, the full flavour structure of the neutrino Yukawa coupling matrix Y_ν should be specified, and the mass matrix of the right-handed neutrinos is reconstructed from the light neutrino mass matrix and Y_ν via the seesaw formula. See ref. 84 for the other input parameters. In Fig. 3, the running behaviour of the three leptonic mixing angles is depicted, where the gray-shaded areas stand for the decoupling of the three right-handed neutrinos at M3=8.1 × 10^13 GeV, M2=2.1 × 10^10 GeV and M1=5.5 × 10^8 GeV. As one can observe from Fig. 3, the decoupling of the heaviest right-handed neutrino and the matching between the first effective theory and the full theory have a remarkable impact on the running of θ12 and θ13. This impact depends on the presumed flavour structure in the lepton sector, indicating that the running of neutrino parameters has to be taken into account in a flavour model at a superhigh-energy scale. In the MSSM, it is in general expected that the running effects of neutrino parameters are significant, in particular for large values of tan β and a quasi-degenerate neutrino mass spectrum. This generic feature should also be applicable to supersymmetric versions of the neutrino mass models discussed in the previous section.

Extra-dimensional models

The existence of one or more extra spatial dimensions was first considered by Kaluza96 and Klein97 in the 1920s. The recent interest in extra dimensions and their implications for particle physics was revived by the seminal works in refs 98, 99, 100. In extra-dimensional models, the fundamental energy scale for gravity can be as low as a few TeV, solving the gauge hierarchy problem of the SM. Furthermore, the excited Kaluza–Klein (KK) modes of the SM fields serve as promising candidates for cold dark matter. See ref. 101 for a brief review. As an interesting example of the running of neutrino parameters in extra-dimensional models, we consider the so-called universal extra-dimensional model (UEDM), first introduced in ref. 102, in which all SM fields are allowed to propagate in one or more compact extra dimensions. Since the KK number is conserved and the excited KK modes manifest themselves only at loop level, the current mass bound on the first KK excitation from electroweak precision measurements and direct collider searches is only about a few hundred GeV (ref. 102). In the five-dimensional UEDM, the coefficients in equation (5) acquire an explicit dependence on s=μ/μ0, the number of excited KK modes at the energy scale μ. Note that μ0 denotes the mass of the first KK excitation, or equivalently $R=\mu_0^{-1}$ is the radius of the compact extra dimension. In contrast to the SM and the MSSM, the running of κ in the UEDM obeys a power law, due to the increasing number of excited KK modes, implying a significant boost in the running103,104,105.
The reason is simply that, at a given energy scale μ, we have an effective theory with s=μ/μ0 new particles, which run in the loops and contribute to the RGEs of the neutrino parameters. In Fig. 2, the evolution of θ12 in the five-dimensional UEDM is shown, where the input parameters at M_Z are the same as in the SM, and a cutoff scale Λ=40 TeV has been chosen to guarantee that a perturbative effective theory is valid. One can observe that the running effect is significant even in such a narrow energy range. It is worthwhile to mention that θ12 increases with respect to an increasing energy scale in both the SM and the UEDM, whereas it decreases in the MSSM.

Generic features

Now, we summarize the generic features of the running of neutrino parameters in the SM, the MSSM and the UEDM. First, due to the small Yukawa couplings of the charged leptons in the SM, the evolution of the leptonic mixing angles is insignificant, even in the case of a quasi-degenerate neutrino mass spectrum. The running effects can be remarkably enhanced in the MSSM through a relatively large value of tan β, and in the UEDM through the number of excited KK modes. Second, among the three leptonic mixing angles, θ12 has the strongest running effect, due to the enhancement factor $\Delta m_{31}^2/\Delta m_{21}^2$. The running of θ12 in the SM and the UEDM is in the opposite direction to that in the MSSM. However, the actual running behaviour also depends crucially on the choice of the currently unconstrained leptonic CP-violating phases78. The running neutrino masses at high-energy scales can be approximately obtained by multiplying by a common scaling factor, which depends on the evolution of the gauge couplings. Third, the running effects of the leptonic CP-violating phases have been studied in detail in refs 78, 106, 107, 108, where the evolution of the three CP-violating phases has been found to be entangled. Consequently, a nonzero Dirac CP-violating phase can be radiatively generated even if it is assumed to be zero at a high-energy scale, and vice versa. Finally, it is worth mentioning that threshold effects may significantly change the running behaviour of different neutrino parameters. However, an accurate description of threshold effects is only possible if the full theory is exactly known.

Phenomenological implications

The running of neutrino parameters has important implications for flavour model building, for the matter–antimatter asymmetry via the leptogenesis mechanism, and for extra-dimensional models. We now sketch the essential points and refer interested readers to the relevant references.

Flavour model building

In connection with flavour mixing in the quark sector, flavour models are usually built at a high-energy scale, for example, the GUT scale. In flavour model building, the running effects should be taken into account in general, and in the case of quasi-degenerate neutrino masses in particular. The running effects of the mixing parameters can be used to interpret the discrepancy between quark and lepton flavour mixing109,110. As a possible symmetry between quarks and leptons, quark–lepton complementarity relations, such as $\theta_{12}^{\rm l}+\theta_{12}^{\rm q}=45^\circ$ and $\theta_{23}^{\rm l}+\theta_{23}^{\rm q}=45^\circ$, where the superscripts specify the mixing angles in the lepton and quark sectors, have been conjectured111,112. Radiative corrections to these relations have been calculated in the type-I seesaw model113. To describe the observed lepton mixing pattern, one may impose a discrete flavour symmetry on the generic Lagrangian19,20.
As discussed in the previous section, a bimaximal-mixing pattern (that is, θ12=θ23=45° and θ13=0) at Λ_GUT turns out to be compatible with current neutrino oscillation data if running effects are taken into account95. In addition to bimaximal mixing114, tri-bimaximal115,116,117, democratic118 and tetramaximal119 mixing patterns have been proposed to describe lepton flavour mixing, and their radiative corrections have also been examined120,121,122,123,124,125,126.

Matter–antimatter asymmetry

It remains an unanswered question why our visible world is made of matter rather than antimatter. From cosmological observations, the ratio between the baryon number density and the photon number density, η_b=(6.19±0.15) × 10⁻¹⁰, has been precisely determined3. One of the most attractive mechanisms for a dynamic generation of the baryon asymmetry is leptogenesis18, which works perfectly in various seesaw models for neutrino mass generation. Take the type-I seesaw model, for example, where three heavy right-handed neutrinos are introduced. In the early Universe, when the temperature is as high as the masses of the heavy neutrinos, they can be thermally produced and decay into the SM particles, mainly lepton and Higgs doublets. If the neutrino Yukawa couplings are complex, the heavy neutrinos decay into leptons and antileptons in different ways. When the Universe cools down, the CP-violating decays go out of thermal equilibrium and a lepton asymmetry can be generated, which will be further converted into a baryon asymmetry. The final baryon asymmetry η_b≈0.96 × 10⁻² ε1 κ_f depends on the CP asymmetry ε1 from the decays of the lightest heavy neutrino, and on the efficiency factor κ_f from the solution to a set of Boltzmann equations127. Moreover, the maximal value of ε1 can be derived, $\varepsilon_1^{\rm max}=\frac{3}{8\pi}\frac{M_1\,m}{\langle H\rangle^2}$, where m denotes the mass of the heaviest ordinary neutrino128. Now, it is evident that the running of the neutrino masses from the low-energy scale to M1 (that is, the mass of ν1R) should be taken into account84,129. As the evolution of the neutrino masses can be described by a common scaling factor, and they become larger at a higher-energy scale, the maximum of the CP asymmetry scales upwards with the neutrino masses. However, larger values of the neutrino masses at M1 imply larger Yukawa couplings, which enhance the washout of the lepton asymmetry and thus reduce κ_f. The outcome of the competition between the enhancement of ε1 and the reduction of κ_f depends on the neutrino mass spectrum, and also on the value of tan β in the MSSM78. See ref. 130 for a review of the recent development of leptogenesis in seesaw models.

Bounds on extra dimensions

A general feature of quantum field theories with extra spatial dimensions is that they are non-renormalizable131, since there exist infinite towers of KK states appearing in the loops of quantum processes. As pointed out in ref. 131, higher-dimensional theories can preserve renormalizability if they are truncated at a certain energy scale Λ (see Fig. 2), below which only a finite number of KK modes is present. In the UEDM, Λ is usually taken to be the energy scale where the gauge couplings become non-perturbative132, but it could also be related to a unification scale for the gauge couplings131. The recent discovery of a Higgs particle with M_H=126 GeV has led to a reconsideration of the stability of the SM vacuum82,23. The instability is essentially induced by the fact that the Higgs self-coupling constant λ runs into a negative value at a high-energy scale.
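The statement that λ turns negative can be illustrated with a deliberately crude one-loop sketch (an assumption-laden toy, not the full analysis): only the λ and top-Yukawa pieces of the SM beta function are kept, and y_t is frozen at a rough weak-scale value. Because y_t and the gauge couplings are not allowed to run here, the crossing scale comes out far lower than in the full computation, which for the inputs quoted in the text pushes the instability up to around 10^12 GeV:

```python
import numpy as np

# Toy one-loop running of the Higgs quartic coupling, keeping only the
# lambda and top-Yukawa terms:
#   16 pi^2 dlambda/dln(mu) = 24 lambda^2 + 12 lambda yt^2 - 6 yt^4,
# with y_t frozen. Initial values are rough weak-scale assumptions.
lam, yt = 0.13, 0.94
mu, mu_max, dlnmu = 173.0, 1e19, 0.01   # start near the top-quark mass (GeV)

while lam > 0 and mu < mu_max:
    beta = (24 * lam**2 + 12 * lam * yt**2 - 6 * yt**4) / (16 * np.pi**2)
    lam += beta * dlnmu
    mu *= np.exp(dlnmu)

print(f"toy lambda < 0 crossing: mu ~ {mu:.1e} GeV (full result: much higher)")
```

The point of the sketch is only the mechanism: the negative −6y_t⁴ term drags λ down as the scale increases, which is exactly the instability the UEDM bounds below are designed to avoid.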
Since the model parameters have a power-law running in the UEDM, in contrast to the logarithmic running in ordinary four-dimensional theories, the requirement of vacuum stability places a restrictive bound on the cutoff scale Λ and on the radius R of the extra dimensions. It has been found that ΛR<5 for R⁻¹=1 TeV in the five-dimensional UEDM133, while this bound becomes more stringent, ΛR<2.5, in the six-dimensional UEDM104, which can be translated into the maximal number of KK modes being five and two, respectively. As a consequence, the running of neutrino parameters in these models will be limited to a narrow energy range.

Outlook

Our knowledge about neutrinos has been greatly extended in the past decades, especially due to a number of elegant neutrino oscillation experiments. As for the leptonic mixing parameters, we are entering the era of precision measurements of the three leptonic mixing angles and the two neutrino mass-squared differences. The determination of the neutrino mass hierarchy and the discovery of leptonic CP violation are now the primary goals of the ongoing and upcoming neutrino oscillation experiments. On the other hand, tritium beta decay and neutrinoless double-beta decay experiments, together with cosmological observations, will probe the absolute scale of neutrino masses. Whether or not neutrinos are their own antiparticles will also be clarified if neutrinoless double-beta decay is observed. Therefore, we will obtain more information about the neutrino parameters at the low-energy scale. However, the origin of neutrino masses and lepton flavour mixing remains a big puzzle in particle physics. In this Review, we have elaborated on the evolution of neutrino parameters from the low-energy scale to a superhigh-energy scale, where new physics may appear and take responsibility for generating neutrino masses. The running effects of neutrino parameters can be very significant and should be taken into account in searching for a true theory of neutrino masses and lepton flavour mixing. On the other hand, the successful applications of renormalization group running in neutrino physics, and more generally in elementary particle physics and condensed matter physics, demonstrate the deep connection between different branches of the physical sciences and the amazing power of quantum field theories in describing nature. In the foreseeable future, with direct searches at colliders and precision measurements of the quark and lepton flavour mixing parameters, we hope that all observations will finally converge into hints for new physics beyond the SM and lead us to a complete theory of fermion masses, flavour mixing and CP violation.
2023-02-03 15:08:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.831385612487793, "perplexity": 706.3741453191833}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500056.55/warc/CC-MAIN-20230203122526-20230203152526-00544.warc.gz"}
https://forum.zettelkasten.de/discussion/1878/is-there-a-way-to-hide-the-image-reference-link
# Is there a way to hide the Image Reference Link?

Hi everyone,

Is there a way to hide the reference link to images? Or should I just relocate my Archive higher up in the folder hierarchy?

Thank you, amazing Zettlers!

• Woodruff

• Will Simpson: I'm a zettelnant. Research areas: Attention Horizon, Productive Procrastination, Dzogchen, Non-fiction Creative Writing. kestrelcreek.com

• @Will I think it is the link they want to hide, not the image. I wouldn't want anything buried that deep in my file system. The flatter I can keep things, the better I like it. I would also want to keep material in places that will be backed up by various applications or services, and sometimes the Library folder is excluded by default. Might also affect Spotlight, though I can't remember.

• @MartinBB You can't get flatter and sync via iCloud from apps (here: NotePlan). That's Apple's way to sandbox each app's documents. @WoodruffCoates Good news: use relative paths! `![](media/foobar.png)` ... will work as well, provided that you visit a file in the NotePlan3 subfolder that contains the media folder, of course. (Author at Zettelkasten.de • https://christiantietze.de/)

• @ctietze Thank you! I may just take the files out of NP then. I'd rather have the flatter folder architecture.

• @MartinBB Thank you!

• @ctietze said: "You can't get flatter and sync via iCloud from apps". Sync via iCloud is something I'm trying to avoid at the moment. Or perhaps I should say that iCloud is trying to avoid synchronising at the moment. As I posted elsewhere this morning, I spent a good couple of hours waiting for this situation to resolve itself. This is on an iMac that had been running all night, and is connected to the internet via a cable. So it had been trying to download for the previous ten hours, roughly. I gave up and decided that it really is time to move everything into OneDrive (where I can).
2021-10-24 22:36:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6259473562240601, "perplexity": 3447.9671794188207}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587606.8/warc/CC-MAIN-20211024204628-20211024234628-00186.warc.gz"}
https://www.gamedev.net/forums/topic/495878-aspnet-unmanaged-dll-problem/
# [web] ASP.Net Unmanaged DLL problem

## Recommended Posts

I wrote a Win32 DLL to do some custom data storage for a project. I then used DllImport to access the functions in this DLL from my ASP.NET pages. This works perfectly on my test setup of Windows XP Pro SP2 using Visual Web Developer Express 2008's built-in development server. It also works correctly using IIS 5 with .NET Framework 3.5 installed. However, when I move the code to a Windows 2003 Server Enterprise server, which runs IIS 6 (also with .NET Framework 3.5), the DLL function call throws an exception:

    System.DllNotFoundException: Unable to load DLL 'DataAPI.dll': The specified module could not be found. (Exception from HRESULT: 0x8007007E)
    at DataAPIDLL.CloseNewStorage(Int32 iStorageID)
    at TestDLL.Page_Load(Object sender, EventArgs e) in e:\webroot\TestDLL.aspx.cs:line 27

DataAPI.dll is currently in the bin directory of my site. Here is a sample code-behind file of a page that I am using to do this test:

```csharp
using System;
using System.Runtime.InteropServices;

public class DataAPIDLL
{
    [DllImport("DataAPI.dll")]
    public static extern void CloseNewStorage(int iStorageID);
}

public partial class TestDLL : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        try
        {
            DataAPIDLL.CloseNewStorage(-1);
        }
        catch (Exception ex)
        {
            lblResult.Text = ex.ToString();
        }
    }
}
```

It seems to make no difference whether I specify an entry point or charset in the DllImport attribute. It works on the development machine's dev server and IIS, but not on the Win2k3 machine. There were also some short periods of time (minutes) where it did not work on the development server for no apparent reason. That problem seemed to fix itself. Any suggestions? I've been screwing around with this for almost two full days and have gotten nowhere. The passive internet usually has answers to these obscure technical problems, but not this time. I have tried a few other things, but don't want to list them since they might dissuade suggestions. Thanks. If you think I could learn something useful from it, I could install and try the Visual WebDev development server on the Win2k3 Server system and see how that behaves.
2018-03-19 11:08:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3780670166015625, "perplexity": 5264.60622457616}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257646875.28/warc/CC-MAIN-20180319101207-20180319121207-00227.warc.gz"}
http://clay6.com/qa/26684/if-ax-y-1-0-x-ay-0-x-y-a-0-are-concurrent-then-the-value-of-a-3-1-is
# If $ax+y-1=0$, $x+ay=0$, $x+y+a=0$ are concurrent, then the value of $a^3-1$ is

$(a)\;0\qquad(b)\;3\qquad(c)\;5\qquad(d)\;6$

Toolbox:
• Three lines $a_1x+b_1y+c_1=0$, $a_2x+b_2y+c_2=0$ and $a_3x+b_3y+c_3=0$ are concurrent if $\begin{vmatrix}a_1&b_1&c_1\\a_2&b_2&c_2\\a_3&b_3&c_3\end{vmatrix}=0$

Given that the lines $ax+y-1=0,\:\:x+ay=0\:\:and\:\:x+y+a=0$ are concurrent,

$\therefore\:\begin{vmatrix}a&1&-1\\1&a&0\\1&1&a\end{vmatrix}=0$

$\Rightarrow\:a(a^2-0)-1(a-0)-1(1-a)=0$

$\Rightarrow a^3-a+a-1=0$

$\Rightarrow a^3-1=0$

Hence (a) is the correct answer.
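For readers who want to double-check the determinant expansion, here is a short symbolic verification (a sketch using the sympy library):

```python
import sympy as sp

a = sp.symbols('a')
# Coefficient matrix of the three lines ax+y-1=0, x+ay+0=0, x+y+a=0
M = sp.Matrix([[a, 1, -1],
               [1, a, 0],
               [1, 1, a]])
print(sp.expand(M.det()))  # a**3 - 1, so concurrency forces a**3 - 1 = 0
```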
2016-10-28 17:52:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9567562937736511, "perplexity": 1316.0636514940109}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988725451.13/warc/CC-MAIN-20161020183845-00176-ip-10-171-6-4.ec2.internal.warc.gz"}
https://physics.stackexchange.com/questions/493935/expected-value-of-momentum-in-a-square-infinite-well
# Expected value of Momentum in a square infinite well [closed]

Say I have a particle with mass $m$, in an infinite potential well centered at $x=0$ with length $d$, whose wave function at $t=0$ is represented by:

$$\Psi(x)=\begin{cases} \frac{1}{\sqrt{2}}\left[\sqrt{\frac{2}{d}}\cos\left(\frac{\pi x}{d}\right) +i\sqrt{\frac{2}{d}}\sin\left(\frac{\pi x}{d}\right)\right] &, |x|\le\frac{d}{2} \\ 0 &, |x|\ge\frac{d}{2} \end{cases}$$

We can clearly see from here that the wave function is represented by a superposition of 2 eigenfunctions of the Hamiltonian for this particular problem. And we can also see that we can get the first eigenfunction with probability 0.5 and the second eigenfunction with probability 0.5 as well. My question is: is the following way of calculating the expected value of momentum for this problem correct? Since we know that the energy levels are given by:

$$E_n = n^2\frac{\pi^2\hbar^2}{2mL^2}$$

what we can deduce is the following: the particle is going to have energy $E_1$ with probability 0.5, or energy $E_2$ with probability 0.5. Since we know the relation $p=\pm\sqrt{2mE}$, we can deduce that the particle is going to have momentum $p_1$ with probability 0.5, or $p_2$ with probability 0.5, and therefore we can say that the expected value of the momentum is:

$$\langle p\rangle = \tfrac{1}{2}\,p_1 + \tfrac{1}{2}\,p_2$$

Is this correct?

PS: I know that the other way is to calculate it by definition, which is:

$$\langle p\rangle=\int_{-\infty}^{\infty}\Psi^*(x)\left(-i\hbar\frac{\partial}{\partial x}\right)\Psi(x)\,\mathrm{d}x$$

• Please note that the preferred way to present equations here is MathJax, instead of images. I have partially replaced your images by MathJax. You can do the rest. – Thomas Fritsch Jul 27 '19 at 20:46
• Hi and welcome to physics.SE! Please note that homework-like questions and check-my-work questions are generally considered off-topic here. We intend our questions to be potentially useful to a broader set of users than just the one asking, and prefer conceptual questions over those just asking for a specific computation. – ACuriousMind Jul 28 '19 at 10:12

The relation you used, $p = \pm \sqrt{2m E}$, works iff the momentum operator and the (presumably time-independent) Hamiltonian operator are well-defined and share a set of common eigenstates. This is clearly not the case here, so you cannot use this formula for any average-of-momentum calculation. So only the integral calculation is valid. A kind piece of advice: do not use screenshots when writing here, but rather type the math (LaTeX) yourself.

The expectation value of the momentum is zero. It is the expectation value of $\sqrt{p^2}$ that is not zero.
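As a supplement to the answers above, the defining integral from the question's PS can be evaluated symbolically. The sketch below (using sympy) takes the wavefunction exactly as typed in the question; note that, as written, it is proportional to $e^{i\pi x/d}$ inside the well, so whether it really is the intended superposition of two well eigenstates is part of the discussion above:

```python
import sympy as sp

x = sp.symbols('x', real=True)
d, hbar = sp.symbols('d hbar', positive=True)

# Wavefunction exactly as written in the question (at t = 0); as typed it is
# proportional to exp(i*pi*x/d) inside the well.
psi = sp.sqrt(sp.Rational(1, 2)) * sp.sqrt(2 / d) * (
    sp.cos(sp.pi * x / d) + sp.I * sp.sin(sp.pi * x / d))

# <p> = integral of psi* (-i hbar d/dx) psi over the well
expect_p = sp.integrate(sp.conjugate(psi) * (-sp.I * hbar) * sp.diff(psi, x),
                        (x, -d / 2, d / 2))
print(sp.simplify(expect_p))  # pi*hbar/d for this expression as typed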
2020-01-19 22:44:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 10, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9223371744155884, "perplexity": 244.61792907079024}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250595282.35/warc/CC-MAIN-20200119205448-20200119233448-00163.warc.gz"}
https://freakyhealer.com/cacao-powder-zqd/s9cth8.php?tag=f58862-uniform-magnetic-field
# uniform magnetic field

The hemispherical magnetic poles produce a radial magnetic field in which the plane of the coil is parallel to the magnetic field in all its positions (Fig 3.27). A small plane mirror (m) attached to the suspension wire is used, along with a lamp and scale arrangement, to measure the deflection of the coil. A = area of coil carrying current I; N = number of turns of the coil. (a) Derive the expression for the torque on a rectangular current-carrying loop suspended in a uniform magnetic field.

A multilayer solenoid can increase the magnetic field strength; a stronger magnetic field requires more power. It is very suitable for magnetizing straight and long samples. The above equation also tells us that the magnetic field is uniform over the cross-section of the solenoid. Here we consider a solenoid in which a wire is wound to create loops in the form of a toroid (a doughnut-shaped object with a hole at the center). The parameters of a system of two pairs of circular current loops are calculated for the arrangements which produce the most uniform field in the central region; formulas are given for the magnitude of the non-zero eighth-order terms.

Following are the properties of the magnetic field: the magnetic field lines do not cross over each other, and lines of magnetic flux are continuous, forming closed loops from the north pole to the south pole. If we place iron filings around a bar magnet on a sheet of paper and tap the sheet, the filings rearrange themselves to form a specific pattern. Circles become larger and larger as we move away. Farther away, where the magnetic field is weak, they fan out, becoming less dense. The magnetic field resulting from the earth is not uniform. A uniform magnetic field is represented by parallel, equally spaced field lines, whereas a non-uniform magnetic field changes from one position to another. Uniform electric fields, by comparison, are created by setting up a potential difference between two conducting plates placed at a certain distance from each other.

If the velocity is not perpendicular to the magnetic field, then v is the component of the velocity perpendicular to the field. In the case of θ = 90°, a circular motion is created; in other words, it is the radius of the circular motion of a charged particle in the presence of a uniform magnetic field. A helical path is formed when a charged particle enters a uniform magnetic field at an angle θ other than 90°. If the particle's velocity has components parallel and perpendicular to the uniform magnetic field, then it moves in a helical path; the component of the velocity parallel to the field is unaffected, since the magnetic force is zero for motion parallel to the field.

A uniform magnetic field acts at right angles to the direction of motion of electrons (e = 1.6 × 10^-19 C, m_e = 9.1 × 10^-31 kg). An electron is shot into the field with a speed of 4.8 × 10^6 m s^-1, normal to the field. Determine the radius of the circular orbit. As a result, the electron moves in a circular path of radius 2 cm. If the speed of the electrons is doubled, then the radius of the circular path will be doubled. In a uniform magnetic field, the net force exerted on a bar magnet is zero, so there is no translatory motion.

Consider the above cylindrical region containing a time-varying uniform magnetic field. The magnetic field decreases at a constant rate inside the region. If r is the distance from the axis of the cylindrical region, then match column I with column II. I have noticed that while calculating the magnetic field at any distance r from the center, we take up a concentric circle of radius r and use Faraday's law as follows: $$\int E.dl = - \pi r^2 \frac{dB}{dt}$$ over a circular loop. (Measurements are in centimeter.)

The field is directed along the length of the prism, perpendicular to the cross section. However, it … Calculate the total magnetic force on the current loop. A uniform magnetic field of magnitude $$B_0$$, pointing perpendicular to the rod and spring (coming out of the page in the figure), exists in a region of space covering a length w of the copper rod. Homework Statement: A wire loop with 60 turns is formed into a square with sides of length . The loop is in the presence of a 1.00 T uniform magnetic field that points in the negative direction. A current loop I = 5 A lies in the x-y plane.

By a suitable change in frame of reference, all of our analysis also applies to an electron in a quantum Hall system with a time-dependent magnetic field. Magnetic droplets are versatile tools for a range of lab-on-a-chip (LoC) applications, and the dynamic behavior of flowing ferrofluid … Objects in a Uniform Magnetic Field via IABCs (David Meeker): several problems of interest in magnetics involve the modeling of an object in a uniform, unbounded magnetic field. It is the levitation in weak uniform permanent magnetic fields that holds the great interest today; in particular, levitation in Earth's and cosmic magnetic fields could open the way for a new generation of thrusters and propellers.
Very suitable for magnetizing straight and long samples a speed of 4.8 × 10 –19 C m... Uniform over the cross-section of the flux is the component of the solenoid and represented by parallel field lines always! Animation portrays the motion of electrons an electron is a shape bounded by moving. Are versatile tools for a new generation of thrusters and propellers equally spaced parallel straight..
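The chamber example above is simple enough to check by machine. The following short Python sketch (my own, not from the page; the function name gyroradius is invented) evaluates $r = mv/(qB)$ with the stated values:

    def gyroradius(m, v, q, B):
        # Radius of the circular motion of a charge q moving at speed v
        # perpendicular to a uniform magnetic field B: r = m*v/(q*B).
        return m * v / (q * B)

    # Chamber example: B = 6.5e-4 T, electron shot in at 4.8e6 m/s.
    r = gyroradius(m=9.1e-31, v=4.8e6, q=1.6e-19, B=6.5e-4)
    print(r)  # ~0.042 m, i.e. a radius of about 4.2 cm

    # Doubling the speed doubles the radius, since r is proportional to v;
    # this is the point of the 2 cm -> 4 cm example above.
    assert abs(gyroradius(9.1e-31, 2 * 4.8e6, 1.6e-19, 6.5e-4) - 2 * r) < 1e-12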
2021-01-28 04:58:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6624866127967834, "perplexity": 417.8274396107989}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704835901.90/warc/CC-MAIN-20210128040619-20210128070619-00598.warc.gz"}
https://quivergeometry.net/vertex-colorings/
# Vertex colorings #

## Introduction #

Returning to colorings, we are now better equipped to understand how colorings correspond to coverings. We say a quiver covering $\pi : G \to H$ induces a coloring of $G$, where the "colors" are just the identities of vertices in $H$. In other words, the color of a vertex $g \in V_G$ is simply which vertex $h \in V_H$ it is sent to: specifically $h = \pi_V(g)$, where $\pi_V$ is the vertex component of the covering map. We can of course only use visible colors to illustrate this idea when $H$ has a finite number of vertices.

What makes this covering perspective on colorings interesting is that there is a fairly mechanical process we can use to construct finite coverings of any fundamental quiver. These coverings will continue to generate the same lattice quiver as the original fundamental quiver, but will yield colorings of the lattice quiver containing more colors. We'll see later that the families of colorings given by fundamental coverings are in a sense discrete harmonics of a particular lattice quiver, generated by discrete partial difference equations.
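To make the induced-coloring idea concrete before the examples, here is a small Python sketch (my own, not from the article; all names are invented). It expresses the colorings described below as explicit functions of lattice coordinates: cycle-quiver colorings of the line lattice, stripe colorings of the square lattice, and the n-decimations, whose $(n+1)^2$ color count is checked at the end.

    def line_color(v, k):
        # k-coloring of the line lattice induced by the k-vertex cycle
        # quiver: the single cardinal cycles through the k colors.
        return v % k

    def square_stripe_color(x, y, k, a=1, b=0):
        # Stripe colorings of the square lattice: (a, b) = (1, 0) gives
        # vertical stripes, (0, 1) horizontal, (1, 1) or (1, -1) diagonal.
        return (a * x + b * y) % k

    def square_decimation_color(x, y, n):
        # n-decimation: the path words x^(n+1) and y^(n+1) act as primitive
        # cardinals, so the color is the pair of residues mod n + 1.
        return (x % (n + 1), y % (n + 1))

    # An n-decimation uses (n + 1)**2 colors:
    for n in (1, 2, 3):
        colors = {square_decimation_color(x, y, n)
                  for x in range(10) for y in range(10)}
        assert len(colors) == (n + 1) ** 2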
## Line lattice #

Let's look first at the 1-dimensional line lattice. The fundamental coloring given by the simplest fundamental quiver, the bouquet quiver, is a 1-coloring (or constant coloring), since the bouquet only has one vertex:

Here's the 2-coloring, given by the fundamental quiver that simply alternates between the two colors:

Higher colorings are given by "circle quivers" with more vertices, where the cardinal simply cycles among the colors (the 1- and 2-colorings can be seen as instances of these):

These are the only fundamental colorings of the line lattice, since there is no remaining freedom for how to choose cardinals once the number of vertices in the fundamental quiver is decided.

## Square lattice #

The square lattice brings with it more interesting behavior, since the two cardinals can interact in various ways. But let's start with the 1-coloring given by the simplest fundamental quiver:

With two colors, we obtain "stripes" of constant color that are horizontally, vertically, or diagonally oriented:

3-colorings display a similar pattern, except the diagonal stripes can now be oriented in two different ways (whereas in the case of 2 colors, both of these orientations yield the same coloring):

4-colorings repeat these motifs, but include a third kind of behavior we haven't encountered before, in which fixing a color yields a "decimated" form of the original square lattice at one-quarter resolution:

One way we can view these colorings is as embedding sub-lattices in the quiver lattice. The idea is to choose a color in the fundamental quiver and consider the set of (shortest) paths that begin and end at this vertex. The words for these paths then act as the primitive cardinals for the sub-lattice. The first four colorings above yield the line lattice by choosing e.g. blue, and the path words $x$, $y$, and $x\underline{y}$ respectively. The fifth coloring yields the square lattice, via blue and the path words $\{xx, yy\}$. Let's call this fifth coloring a 1-decimation, since it is the smallest possible decimation. A 2-decimation is shown below, with path words $\{xxx, yyy\}$:

## Triangular lattice #

Let's now turn to colorings of the triangular lattice. As before, we'll start with the 1-coloring induced by the simplest fundamental quiver:

The two-colorings are the familiar oriented stripes, which can now occur along the three cardinal axes. Interestingly, there is no equivalent of the diagonal stripes we saw in the square case:

The 3-colorings do include a new motif that can be seen either as a 1-decimation of the triangular lattice, or as a diagonal stripe that is half-way between the orientations of the 3 cardinals.
Again, as in the square case, this single coloring simultaneously yields all 3 orientations of this diagonal in a "degenerate" way:

As we saw before in the square case, moving to 4-colorings causes the single degenerate diagonal coloring to split into oriented versions:

With 5-colorings, this situation repeats unchanged:

Jumping ahead to 9-colorings, we obtain a 2-decimation of the triangular lattice to itself:

This 2-decimation is in fact almost identical to the 2-decimation of the square lattice we saw earlier, thanks to the fact that a triangular lattice can be obtained from a square lattice by introducing additional edges that connect each vertex $v$ to the vertex $w$ reached from $v$ by the path word $xy$. It's not hard to see that for both triangular and square lattices, an $n$-decimation has $(n+1)^2$ colors.

## Hexagonal lattice #

The hexagonal lattice, being the first lattice with a non-bouquet fundamental quiver, generally requires many more colors in its fundamental quivers, making its analysis more cumbersome. There is a simple connection between $n$-colorings of the hexagonal lattice and $(n+1)$-colorings of the triangular lattice: observe that the hexagonal lattice can be obtained from the triangular lattice by deleting a certain set of vertices, leaving a triangular pattern of "holes". We can achieve this by deleting one or more vertices from the fundamental quiver of a triangular lattice if those vertices yield the right triangular pattern of holes. This will give a fundamental quiver that generates the hexagonal lattice, and hence a hexagonal lattice coloring. For example, the 2-coloring shown above can be obtained from the 1-decimation of the square lattice by deleting the green fundamental vertex:
2022-12-06 20:57:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8192667365074158, "perplexity": 802.2478140533177}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711114.3/warc/CC-MAIN-20221206192947-20221206222947-00628.warc.gz"}
https://paperswithcode.com/paper/approximation-and-parameterized-complexity-of/review/
### Approximation and Parameterized Complexity of Minimax Approval Voting

We present three results on the complexity of Minimax Approval Voting. First, we study Minimax Approval Voting parameterized by the Hamming distance $d$ from the solution to the votes...
2020-09-19 13:10:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42782318592071533, "perplexity": 3255.3955071401656}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400191780.21/warc/CC-MAIN-20200919110805-20200919140805-00089.warc.gz"}
https://indico.cern.ch/event/299001/
TH BSM Forum

# Fine-tuning in SUSY

## by Anibal Medina (The University of Melbourne)

Thursday, 20 March 2014 (Europe/Zurich) at CERN (4-2-011 - TH common room)

Description: We analysed, from a bottom-up approach, the level of tuning in the MSSM and in the NMSSM associated with the measured 126 GeV Higgs mass, direct searches for superpartners, and collider measurements of the Higgs couplings to fermions and gauge bosons. In particular, we show that in the scale-invariant NMSSM, TeV-scale stop masses are still allowed in much of the parameter space with $5\%$ tuning for a low messenger scale, split families and a Higgs-singlet coupling $\lambda$ of order one. In the absence of deviations of the Higgs couplings to fermions and massive gauge bosons from SM values at the LHC and in future colliders, there is an additional independent tuning due to the "SM-likeness" requirement; this can be naturally suppressed in the MSSM for large values of $\tan\beta$, but in the scale-invariant NMSSM with a coupling $\lambda$ of order one it can be the dominant source of tuning.
2015-08-31 04:45:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7465498447418213, "perplexity": 1201.6951125589703}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644065534.46/warc/CC-MAIN-20150827025425-00347-ip-10-171-96-226.ec2.internal.warc.gz"}
https://brilliant.org/discussions/thread/please-help-me-out/
I am a class 9 CBSE student. My friends say that class 9 is really tough. Could you please tell me about the competitive exams to be faced, and also recommend some ways to improve my performance in all subjects?

Note by Vishwathiga Jayasankar 5 years, 4 months ago

It is not at all difficult. Just one thing: take care that you make your concepts clear. - 5 years, 4 months ago

All the best.... - 5 years, 4 months ago

Hello! I am too a ninth grader, but I think the toughest part (to me) is Biology, Chemistry and Social Studies. Well, Maths and Physics must be your favourite, as you are a Brilliantian! Well, exams?

• NTSE (prepare from today itself!)

• Maths and Science Olympiads, if you want to give them!

Cheers! - 5 years, 4 months ago

Thanks for the reply. What is RMO, and how does one prepare for all these exams? Please recommend some books for preparation. - 5 years, 4 months ago

RMO is the acronym for Regional Mathematics Olympiad. - 5 years, 4 months ago

Thank you once again, and how are you preparing for NTSE and RMO? - 5 years, 4 months ago

They have their own books; you can prefer them. Especially for RMO, you can use the one by Rajeev Manocha. - 5 years, 4 months ago

Thank you. - 5 years, 4 months ago

Feel free to ask any question! :) - 5 years, 4 months ago

Don't keep doubts in your mind; you have to be very clear conceptually. - 5 years, 4 months ago

Be confident with the concepts you have learnt. - 5 years, 4 months ago

Thanxxx Rohith. How is your new school? - 5 years, 4 months ago
2020-10-24 11:26:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 8, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.980222225189209, "perplexity": 5217.199417555347}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107882581.13/warc/CC-MAIN-20201024110118-20201024140118-00718.warc.gz"}
http://math.tutorcircle.com/number-sense/perfect-numbers.html
# Perfect Numbers

There are different types of numbers in the number system, and perfect numbers are one such type. A perfect number is a positive whole number whose proper divisors (its factors less than the number itself) add up to the number. Equivalently, a perfect number is half the sum of all of its positive divisors. The first four perfect numbers are 6, 28, 496 and 8128; the smallest, 6, is the sum of 1, 2 and 3. The Pythagoreans coined the term "perfect numbers", and the first four perfect numbers were known over 2,000 years ago. Perfect numbers are pretty rare: the gap between the third and fourth perfect numbers is already large, and the gap between the fourth and fifth is larger still. It is not known whether there are infinitely many perfect numbers.

## Even Perfect Numbers

Euclid proved that whenever $2^{p} - 1$ is prime, $2^{p-1}(2^{p} - 1)$ is an even perfect number. It is necessary that $p$ itself be prime for $2^{p} - 1$ to be prime. Primes of the form $2^{p} - 1$, where $p$ is a prime number, are known as Mersenne primes. There is a one-to-one relationship between even perfect numbers and Mersenne primes: each Mersenne prime generates one even perfect number and vice versa. This is known as the Euclid-Euler theorem. 48 Mersenne primes, and hence 48 even perfect numbers, are known; the largest of these is $2^{57885160} \times (2^{57885161} - 1)$. Every even perfect number is therefore calculated from a Mersenne prime.

## Odd Perfect Numbers

It is conjectured that there are no odd perfect numbers. If any exist, they are quite large, with over 1500 digits, and have numerous prime factors. By Euler's result, an odd perfect number must have the form $q^{\alpha}\, P_{1}^{2e_{1}} P_{2}^{2e_{2}} \cdots P_{n}^{2e_{n}}$, where $q, P_{1}, P_{2}, \ldots, P_{n}$ are distinct primes and $q \equiv \alpha \equiv 1 \pmod{4}$. The three largest prime factors of an odd perfect number must be at least 100000007, 10007 and 101, respectively. For odd perfect numbers with all even exponents less than six, an upper bound of $\exp(4.97401 \times 10^{10})$ has been determined.

Any odd perfect number $N$ must satisfy the following conditions:

1. $N > 10^{1500}$.
2. $N$ is not divisible by 105.
3. The smallest prime factor of $N$ is less than $\frac{2p+8}{3}$.
4. $N < 2^{4^{p + 1}}$.
5. $N$ has at least 101 prime factors (counted with multiplicity) and at least 9 distinct prime factors. If 3 is not one of the factors of $N$, then $N$ has at least 12 distinct prime factors.

## How to Find Perfect Numbers?

Even perfect numbers can be found using the formula given by Euclid, $(2^{p-1})(2^{p} - 1)$, where $p$ is an exponent for which $2^{p} - 1$ is prime. Given below are solved examples.

### Solved Examples

Question 1: Find the fifth perfect number.

Solution: The exponents $p$ for which $2^{p} - 1$ is prime begin 2, 3, 5, 7, 13. (Note that $p = 11$ is skipped: although 11 is prime, $2^{11} - 1 = 2047 = 23 \times 89$ is not.) The fifth such exponent is $p = 13$. Plugging 13 into the formula $(2^{p-1})(2^{p} - 1)$, we get

$(2^{13-1})(2^{13} - 1) = 4096 \times 8191 = 33550336$

Question 2: List the first 10 perfect numbers.

Solution: The first ten exponents $p$ for which $2^{p} - 1$ is prime are 2, 3, 5, 7, 13, 17, 19, 31, 61, 89.
Putting $p = 2$ into the formula $(2^{p-1})(2^{p} - 1)$, we get $(2^{2-1})(2^{2} - 1) = 2 \times 3 = 6$. Similarly, substituting each of the remaining exponents into the formula, we get the first ten perfect numbers, which are given below:

6, 28, 496, 8128, 33550336, 8589869056, 137438691328, 2305843008139952128, 2658455991569831744654692615953842176, 191561942608236107294793378084303638130997321548169216.
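This generation procedure is easy to run by machine. Here is a short Python sketch (my own, not from the original page) that reproduces the list above; it uses the standard Lucas-Lehmer test to recognise Mersenne primes:

    def is_prime(n):
        # Trial division; adequate for the small exponents used here.
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    def lucas_lehmer(p):
        # Lucas-Lehmer test: for a prime p > 2, the Mersenne number
        # 2**p - 1 is prime iff s_{p-2} == 0, where s_0 = 4 and
        # s_{k+1} = s_k**2 - 2, taken mod 2**p - 1.
        if p == 2:
            return True  # 2**2 - 1 = 3 is prime
        m = (1 << p) - 1
        s = 4
        for _ in range(p - 2):
            s = (s * s - 2) % m
        return s == 0

    perfect, p = [], 2
    while len(perfect) < 10:
        if is_prime(p) and lucas_lehmer(p):
            # Euclid's formula: 2**(p-1) * (2**p - 1)
            perfect.append((1 << (p - 1)) * ((1 << p) - 1))
        p += 1

    print(perfect[:5])  # [6, 28, 496, 8128, 33550336]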
2018-10-21 14:43:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6322990655899048, "perplexity": 635.550981705192}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583514030.89/warc/CC-MAIN-20181021140037-20181021161537-00553.warc.gz"}
https://mathoverflow.net/questions/383327/de-bruijns-sequence-is-odd-iff-n-2m%E2%88%921-part-ii
# De Bruijn's sequence is odd iff $n=2^m-1$: Part II

Assume $a\in\mathbb{N}$. Among the families of sequences studied by Nicolaas de Bruijn (Asymptotic Methods in Analysis, 1958), let's focus on the (modified)
$$\hat{S}(2a,n)=\frac1{n+1}\sum_{k=0}^{2n}(-1)^{n+k}\binom{2n}k^{2a}.$$
An all-familiar fact states: the Catalan number $C_n=\frac1{n+1}\binom{2n}n$ is odd iff $n=2^m-1$. Encouraged by a positive response of Fedor Petrov to my earlier MO question regarding the case $a=2$, I decided to beef up the quest into a generalization.

QUESTION. Is this true? $\hat{S}(2a,n)$ is odd iff $n=2^m-1$ for some $m\in\mathbb{Z}_{\geq0}$.

Remark. Notice that in the case $a=1$, we have $\hat{S}(2,n)=C_n$.

• @FedorPetrov: the sum runs through $0$ to $2n$. Why do you expect it to drop to $n$? Feb 7 at 14:22
• Because I am confused: the sum in the RHS in the main identity for $a=2$ was for $k$ from 0 to $n$. Feb 7 at 14:43
• @FedorPetrov: no, it was up to $2n$. Feb 7 at 15:22
• Calkin showed (see theorem 1 here matwbn.icm.edu.pl/ksiazki/aa/aa86/aa8612.pdf) that $C_n\mid\hat{S}(2a,n)$, so it only remains to check that for $n=2^m-1$ the sum is indeed odd. Feb 7 at 19:25
2021-09-16 20:00:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 37, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9264238476753235, "perplexity": 214.03912360468377}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780053717.37/warc/CC-MAIN-20210916174455-20210916204455-00647.warc.gz"}
https://stats.stackexchange.com/questions/304860/keras-val-loss-decreases-while-loss-increases
# Keras: val_loss decreases while loss increases

I set up a model in keras (in python 2.7) to predict the next stock price in a particular sequence. The model I used is shown below (edited to fit this page):

model = Sequential()

predict = model.fit(X, Y,
                    epochs=epochs,
                    verbose=1,
                    validation_split=0.2,
                    callbacks=[checkpoint_maker],
                    shuffle=True,
                    batch_size=count / 10 * 8)

However, when I ran the model, I found that val_mean_absolute_percentage_error decreases while the mean_absolute_percentage_error increases. Here is the graph I managed to generate after 1000 epochs. Notice that the blue line is going up while the orange line is going down. I have no idea why.

I've read on some sources that if the loss is decreasing and the val_loss is increasing it means that:

(the) model is over fitting, that (it) is just memorizing the training data - https://stats.stackexchange.com/a/260346

So does that mean that, in my case, the model is "underfitting"?

P.S. Code and files
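Not part of the original post: the two curves can be regenerated directly from the History object that fit returns (assigned to predict above). A minimal sketch, assuming matplotlib is installed and that the metric was compiled under the key names the question itself reports:

```python
import matplotlib.pyplot as plt

# 'predict' is the History object returned by model.fit(...) above
hist = predict.history
plt.plot(hist['mean_absolute_percentage_error'], label='training MAPE')
plt.plot(hist['val_mean_absolute_percentage_error'], label='validation MAPE')
plt.xlabel('epoch')
plt.ylabel('mean absolute percentage error')
plt.legend()
plt.show()
```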
2020-02-24 00:00:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4120714068412781, "perplexity": 2371.006897669981}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145859.65/warc/CC-MAIN-20200223215635-20200224005635-00372.warc.gz"}
http://mathoverflow.net/questions/97006/reference-for-cm-hilbert-modular-forms-arise-from-hecke-characters
# Reference for: CM Hilbert Modular forms arise from Hecke characters

For classical modular forms, the correspondence between the form having CM by an imaginary quadratic field $K$ and it being induced from a Hecke character on $K$ is well-known. (Ribet's paper is a standard reference.) I am looking for a reference for the analogous result for Hilbert modular forms over a totally real field $F$. In particular, if the form has CM then it arises from a Hecke character on a quadratic imaginary extension $K$ (over $F$). I believe, for the converse, Yoshida/Hida is the reference. Thanks

- Just to be clear, is your definition of CM that there is some totally imaginary quadratic extension $K$ of $F$ such that for a set of prime ideals of $F$ of density 1 the coefficient $a_\mathfrak{p}$ is zero if and only if $\mathfrak{p}$ is inert in $K/F$? –  Rob Harron May 15 '12 at 15:14
- @Rob: Yes, this is the definition. –  unramified May 15 '12 at 15:20
- I don't know a reference for this result. Personally, I'd take the fact that a modular form is induced from a Hecke character of $K$ to be the definition of CM, but that doesn't help answer your question. Have you tried generalizing Ribet's argument? Otherwise, this seems like the type of thing that would show up somewhere in a paper/book of Hida's. –  Rob Harron May 15 '12 at 16:12
- FWIW non-algebraic Hecke characters can give rise to HMFs with weights which aren't congruent mod 2 and hence don't have associated Galois representations. This presents an obstruction to proving the result using arguments on the Galois side in this generality, which presumably you can try to get around by using some symmetric square argument. –  Kevin Buzzard May 15 '12 at 21:14
2013-12-18 15:28:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8826375603675842, "perplexity": 324.20114830661515}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1387345758904/warc/CC-MAIN-20131218054918-00030-ip-10-33-133-15.ec2.internal.warc.gz"}
https://scipost.org/SciPostPhys.12.3.091
## Statistical mechanics of coupled supercooled liquids in finite dimensions

Benjamin Guiselin, Ludovic Berthier, Gilles Tarjus

SciPost Phys. 12, 091 (2022) · published 14 March 2022

### Abstract

We study the statistical mechanics of supercooled liquids when the system evolves at a temperature $T$ with a field $\epsilon$ linearly coupled to its overlap with a reference configuration of the same liquid sampled at a temperature $T_0$. We use mean-field theory to fully characterize the influence of the reference temperature $T_0$, and we mainly study the case of a fixed, low-$T_0$ value in computer simulations. We numerically investigate the extended phase diagram in the $(\epsilon,T)$ plane of model glass-forming liquids in spatial dimensions $d=2$ and $d=3$, relying on umbrella sampling and reweighting techniques. For both $2d$ and $3d$ cases, a similar phenomenology with nontrivial thermodynamic fluctuations of the overlap is observed at low temperatures, but a detailed finite-size analysis reveals qualitatively distinct behaviors. We establish the existence of a first-order transition line for nonzero $\epsilon$ ending in a critical point in the universality class of the random-field Ising model (RFIM) in $d=3$. In $d=2$ instead, no phase transition is found in large enough systems at least down to temperatures below the extrapolated calorimetric glass transition temperature $T_g$. Our results confirm that glass-forming liquid samples of limited size display the thermodynamic fluctuations expected for finite systems undergoing a random first-order transition. They also support the relevance of the physics of the RFIM for supercooled liquids, which may then explain the qualitative difference between $2d$ and $3d$ glass-formers.
2022-09-29 17:01:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35445335507392883, "perplexity": 879.2772394183213}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335362.18/warc/CC-MAIN-20220929163117-20220929193117-00595.warc.gz"}
http://www.ni.com/documentation/en/labview/latest/analysis-node-ref/ode-linear-n-order-symbolic/
# ODE Linear (N-Order » Symbolic) (G Dataflow)

Solves an nth order, homogeneous linear differential equation with solutions in symbolic form.

## A

Vector of coefficients of the different derivatives of a function x(t), starting with the coefficient of the lowest order term. The node assumes the coefficient of the highest order derivative to be equal to 1.

## initial values

Vector of the initial values of the variables.

## error in

Error conditions that occur before this node runs. The node responds to this input according to standard error behavior.

Standard Error Behavior: Many nodes provide an error in input and an error out output so that the node can respond to and communicate errors that occur while code is running. The value of error in specifies whether an error occurred before the node runs. Most nodes respond to values of error in in a standard, predictable way.

- If error in does not contain an error: the node begins execution normally. If no error occurs while the node runs, it returns no error; if an error does occur while the node runs, it returns that error information as error out.
- If error in contains an error: the node does not execute. Instead, it returns the error in value as error out.

Default: No error

## formula

Symbolic solution of the differential equation.

## error out

Error information. The node produces this output according to the same standard error behavior described for error in above.

## Algorithm for Solving an N-Order Linear Differential Equation

Consider the n-order linear homogeneous differential equation

$x^{(n)} + a_{n-1}x^{(n-1)} + \dots + a_{1}x^{(1)} + a_{0}x = 0$

with the following initial conditions:

$x(0) = x_{00},\quad x^{(1)}(0) = x_{10},\quad \ldots,\quad x^{(n-1)}(0) = x_{(n-1)0}$

where

- a is the constant coefficient of the differential equation
- n is the highest order of the differential equation
- 0 is the start time of the ODE solver. x00 represents the value of x(t) when t = 0. x(n-1)0 represents the (n-1)th derivative of x(t) when t = 0.

To solve the differential equation, let $x = e^{\lambda t}$, leading to:

$\lambda^{n} + a_{n-1}\lambda^{n-1} + \dots + a_{1}\lambda + a_{0} = 0$

The n zeros of the above equation determine the structure of the solution of the ODE.
If we have n distinct complex zeros $\lambda_{1},\dots,\lambda_{n}$, the general solution of the n-order differential equation can be expressed by

$x(t) = \beta_{1}e^{\lambda_{1}t} + \dots + \beta_{n}e^{\lambda_{n}t}$

where $\beta_{1},\dots,\beta_{n}$ are arbitrary constants that can be determined from the initial conditions (t = 0). When t = 0,

$x(0) = \beta_{1} + \dots + \beta_{n}$
$x^{(1)}(0) = \beta_{1}\lambda_{1} + \dots + \beta_{n}\lambda_{n}$
$\vdots$
$x^{(n-1)}(0) = \beta_{1}\lambda_{1}^{n-1} + \dots + \beta_{n}\lambda_{n}^{n-1}$

Note: If $\lambda_{1},\dots,\lambda_{n}$ are repeated eigenvalues, this node returns an error code of -23017.

To solve the differential equation x'' - 3x' + 2x = 0 with the initial conditions of x(0) = 2 and x'(0) = 3, enter A = [2, -3] and initial values = [2, 3].

Where This Node Can Run:

Desktop OS: Windows

FPGA: Not supported

Web Server: Not supported in VIs that run in a web application
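As a cross-check of the worked example above (my addition; this is a SymPy sketch, not NI code): the characteristic roots of x'' - 3x' + 2x = 0 are λ = 1 and λ = 2, which are distinct, and solving the linear system for the constants gives β1 = β2 = 1, so x(t) = e^t + e^(2t).

```python
import sympy as sp

t = sp.symbols('t')
x = sp.Function('x')

# x'' - 3x' + 2x = 0, i.e. A = [2, -3] with leading coefficient 1
ode = sp.Eq(x(t).diff(t, 2) - 3 * x(t).diff(t) + 2 * x(t), 0)

# initial values = [2, 3]: x(0) = 2, x'(0) = 3
sol = sp.dsolve(ode, x(t), ics={x(0): 2, x(t).diff(t).subs(t, 0): 3})
print(sol)  # Eq(x(t), exp(t) + exp(2*t))
```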
2018-09-25 23:17:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 9, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8549803495407104, "perplexity": 1281.8161771263115}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267162563.97/warc/CC-MAIN-20180925222545-20180926002945-00333.warc.gz"}
https://www.baryonbib.org/bib/9438675b-1d35-479d-b391-857405d94f29
PREPRINT 9438675B-1D35-479D-B391-857405D94F29

# Gravitation as a Many Body Problem

Pawel O. Mazur

arXiv:hep-th/9708133

Submitted on 25 August 1997

## Abstract

The idea of viewing gravitation as a many-body phenomenon is put forward here. Physical arguments supporting this idea are briefly reviewed. The basic mathematical object of the new gravitational mechanics is a matrix of operators. The striking similarity of the method of R-matrix (QISM) to the mathematical formulation of the new gravitational mechanics is pointed out. The s-wave difference Schrödinger equation describing a process of emission of radiation by a gravitating particle is shown to be analogous to the Baxter equation of the QISM.

## Preprint

Comment: RevTeX file, 7 pp., Talk given at the Conference "Beyond the Standard Model V", April 29-May 4, 1997, Balholm, Norway

Subjects: High Energy Physics - Theory; Astrophysics; General Relativity and Quantum Cosmology
2022-07-05 03:19:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3880942463874817, "perplexity": 2352.009885481443}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104512702.80/warc/CC-MAIN-20220705022909-20220705052909-00617.warc.gz"}
https://stacks.math.columbia.edu/tag/0CTN
Lemma 73.25.5. Let $S$ be a scheme. Let $f : X \to Y$ be a flat proper morphism of finite presentation of algebraic spaces over $S$. Let $E \in D(\mathcal{O}_ X)$ be pseudo-coherent. Then $Rf_*E$ is a pseudo-coherent object of $D(\mathcal{O}_ Y)$ and its formation commutes with arbitrary base change.

Proof. Special case of Lemma 73.25.3 applied with $\mathcal{G} = \mathcal{O}_ X$. $\square$
2020-11-30 17:45:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9833734035491943, "perplexity": 344.94934958410647}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141216897.58/warc/CC-MAIN-20201130161537-20201130191537-00132.warc.gz"}
https://lists.gnu.org/archive/html/lilypond-user/2018-03/msg00169.html
lilypond-user

## Re: Change the duration of a chord defined as a variable

From: paolo prete
Subject: Re: Change the duration of a chord defined as a variable
Date: Tue, 6 Mar 2018 14:33:54 +0100

David,

I wonder if there is another solution for obtaining the same result, because the discussed one is too unstable. Here's the template, below. I googled how to change a chord duration, but I found nothing.

%%%%%%%%%%%%

chord = <f' a'>4 -> \mp

newChord = {
  % .... add \chord with a new fixed duration and with its articulations
}

....

I was thinking about taking the articulations from each NoteEvent of the chord, iterating with a (map (lambda (x) ....), and putting them into a new list, but it seems overkill to me... What do you think?

2018-03-06 14:04 GMT+01:00 David Kastrup:

It works in master in that \mp ends up in the expression, but indeed it does so as a note articulation rather than a chord articulation, and LilyPond just drops \mf on the floor in that position. Sorry for that.

-- David Kastrup
2019-06-20 07:54:16
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8208503127098083, "perplexity": 5203.000253604339}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999163.73/warc/CC-MAIN-20190620065141-20190620091141-00393.warc.gz"}
https://jwcarr.github.io/eyekit/io.html
# Module eyekit.io Functions for reading and writing data. ## Functions def read(file_path) Read in a JSON file. FixationSequence and TextBlock objects are automatically decoded and instantiated. def write(data, file_path, compress=False) Write arbitrary data to a JSON file. If compress is True, the file is written in the most compact way; if False, the file will be more human readable. FixationSequence and TextBlock objects are automatically encoded. def import_asc(file_path, variables=[], placement_of_variables='after_end') Import data from an ASC file produced from an SR Research EyeLink device (you will first need to use SR Research's Edf2asc tool to convert your original EDF files to ASC). The importer will extract all trials from the ASC file, where a trial is defined as a sequence of fixations (EFIX lines) that occur inside a START–END block. Optionally, the importer can extract user-defined variables from the ASC file and associate them with the appropriate trial. For example, if your ASC file contains messages like this: MSG 4244101 !V TRIAL_VAR trial_type practice MSG 4244101 !V TRIAL_VAR passage_id 1 then you could extract the variables "trial_type" and "passage_id". A variable is some string that is followed by a space; anything that follows this space is the variable's value. By default, the importer looks for variables that follow the END tag. However, if your variables are placed before the START tag, then set the placement_of_variables argument to "before_start". If unsure, you should first inspect your ASC file to see what messages you wrote to the data stream and where they are placed. The importer will return a list of dictionaries, where each dictionary represents a single trial and contains the fixations along with any other extracted variables. For example: [ { "trial_type" : "practice", "passage_id" : "1", "fixations" : FixationSequence[...] }, { "trial_type" : "test", "passage_id" : "2", "fixations" : FixationSequence[...] } ] def import_csv(file_path, x_header='x', y_header='y', start_header='start', end_header='end', trial_header=None) Import data from a CSV file. By default, the importer expects the CSV file to contain the column headers, x, y, start, and end, but this can be customized by setting the relevant arguments to whatever column headers your CSV file contains. Each row of the CSV file is expected to represent a single fixation. If your CSV file contains data from multiple trials, you should also specify the column header of a trial identifier, so that the data can be segmented into trials. The importer will return a list of dictionaries, where each dictionary represents a single trial and contains the fixations along with the trial identifier (if specified). For example: [ { "trial_id" : 1, "fixations" : FixationSequence[...] }, { "trial_id" : 2, "fixations" : FixationSequence[...] } ]
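A hypothetical end-to-end sketch of the functions documented above (the file names are invented for illustration; the function signatures are as given in this module, and the io submodule is assumed to be importable as shown):

```python
import eyekit

# Extract all trials, plus two user-defined variables, from an EyeLink ASC file
trials = eyekit.io.import_asc('experiment.asc',
                              variables=['trial_type', 'passage_id'])

# Persist the trials; FixationSequence objects are encoded automatically
eyekit.io.write(trials, 'experiment.json', compress=True)

# ...and decode them back later
trials = eyekit.io.read('experiment.json')
```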
2021-09-22 23:43:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27953314781188965, "perplexity": 4766.928270604637}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057403.84/warc/CC-MAIN-20210922223752-20210923013752-00586.warc.gz"}
https://zxi.mytechroad.com/blog/category/tree/
# Posts published in “Tree”

Given the root of a binary tree and an integer distance. A pair of two different leaf nodes of a binary tree is said to be good if the length of the shortest path between them is less than or equal to distance.

Return the number of good leaf node pairs in the tree.

Example 1:
Input: root = [1,2,3,null,4], distance = 3
Output: 1
Explanation: The leaf nodes of the tree are 3 and 4 and the length of the shortest path between them is 3. This is the only good pair.

Example 2:
Input: root = [1,2,3,4,5,6,7], distance = 3
Output: 2
Explanation: The good pairs are [4,5] and [6,7] with shortest path = 2. The pair [4,6] is not good because the length of the shortest path between them is 4.

Example 3:
Input: root = [7,1,4,6,null,5,3,null,null,null,null,null,2], distance = 3
Output: 1
Explanation: The only good pair is [2,5].

Example 4:
Input: root = [100], distance = 1
Output: 0

Example 5:
Input: root = [1,1,1], distance = 2
Output: 1

Constraints:
• The number of nodes in the tree is in the range [1, 2^10].
• Each node’s value is between [1, 100].
• 1 <= distance <= 10

## Solution: Brute Force

Since n <= 1024, and distance <= 10, we can collect all leaf nodes and try all pairs.

Time complexity: O(|leaves|^2)
Space complexity: O(n)

## Solution 2: Post order traversal

For each node, compute the # of good leaf pairs under itself.
1. count the frequency of leaf nodes at distance 1, 2, …, d for both the left and right child.
2. ans += l[i] * r[j] (i + j <= distance), a Cartesian product.
3. increase the distance by 1 for each leaf node when popping.

Time complexity: O(n*D^2)
Space complexity: O(n)

## Python3

Given a tree (i.e. a connected, undirected graph that has no cycles) consisting of n nodes numbered from 0 to n - 1 and exactly n - 1 edges. The root of the tree is the node 0, and each node of the tree has a label which is a lower-case character given in the string labels (i.e. The node with the number i has the label labels[i]).

The edges array is given on the form edges[i] = [ai, bi], which means there is an edge between nodes ai and bi in the tree.

Return an array of size n where ans[i] is the number of nodes in the subtree of the ith node which have the same label as node i.

A subtree of a tree T is the tree consisting of a node in T and all of its descendant nodes.

Example 1:
Input: n = 7, edges = [[0,1],[0,2],[1,4],[1,5],[2,3],[2,6]], labels = "abaedcd"
Output: [2,1,1,1,1,1,1]
Explanation: Node 0 has label 'a' and its sub-tree has node 2 with label 'a' as well, thus the answer is 2. Notice that any node is part of its sub-tree. Node 1 has a label 'b'. The sub-tree of node 1 contains nodes 1, 4 and 5; as nodes 4 and 5 have different labels than node 1, the answer is just 1 (the node itself).

Example 2:
Input: n = 4, edges = [[0,1],[1,2],[0,3]], labels = "bbbb"
Output: [4,2,1,1]
Explanation: The sub-tree of node 2 contains only node 2, so the answer is 1. The sub-tree of node 3 contains only node 3, so the answer is 1. The sub-tree of node 1 contains nodes 1 and 2, both have label 'b', thus the answer is 2. The sub-tree of node 0 contains nodes 0, 1, 2 and 3, all with label 'b', thus the answer is 4.
Example 3:
Input: n = 5, edges = [[0,1],[0,2],[1,3],[0,4]], labels = "aabab"
Output: [3,2,1,1,1]

Example 4:

Example 5:
Input: n = 7, edges = [[0,1],[1,2],[2,3],[3,4],[4,5],[5,6]], labels = "aaabaaa"
Output: [6,5,4,1,3,2,1]

Constraints:
• 1 <= n <= 10^5
• edges.length == n - 1
• edges[i].length == 2
• 0 <= ai, bi < n
• ai != bi
• labels.length == n
• labels consists of only lower-case English letters.

## Solution: Post order traversal + hashtable

For each label, record the count. When visiting a node, we first record the current count of its label as before, then traverse its children; when done, increment the current count, and ans[i] = current - before.

Time complexity: O(n)
Space complexity: O(n)

## Python3

Given a binary tree, write a function to get the maximum width of the given tree. The width of a tree is the maximum width among all levels. The binary tree has the same structure as a full binary tree, but some nodes are null.

The width of one level is defined as the length between the end-nodes (the leftmost and rightmost non-null nodes in the level, where the null nodes between the end-nodes are also counted into the length calculation).

Example 1:
Input:
1
/ \
3 2
/ \ \
5 3 9
Output: 4
Explanation: The maximum width existing in the third level with the length 4 (5,3,null,9).

Example 2:
Input:
1
/
3
/ \
5 3
Output: 2
Explanation: The maximum width existing in the third level with the length 2 (5,3).

Example 3:
Input:
1
/ \
3 2
/
5
Output: 2
Explanation: The maximum width existing in the second level with the length 2 (3,2).

Example 4:
Input:
1
/ \
3 2
/ \
5 9
/ \
6 7
Output: 8
Explanation: The maximum width existing in the fourth level with the length 8 (6,null,null,null,null,null,null,7).

## Solution: DFS

Let us assign an id to each node, similar to the index of a heap: root is 1, left child = parent * 2, right child = parent * 2 + 1. Width = id(rightmost child) - id(leftmost child) + 1, so far so good. However, this kind of id system grows exponentially; it overflows even with long type with just 64 levels. To avoid that, we can remap the id with id - id(leftmost child of each level).

Time complexity: O(n)
Space complexity: O(h)

## Python3

You are given a tree with n nodes numbered from 0 to n-1 in the form of a parent array where parent[i] is the parent of node i. The root of the tree is node 0.

Implement the function getKthAncestor(int node, int k) to return the k-th ancestor of the given node. If there is no such ancestor, return -1.

The k-th ancestor of a tree node is the k-th node in the path from that node to the root.

Example:
Input: ["TreeAncestor","getKthAncestor","getKthAncestor","getKthAncestor"] [[7,[-1,0,0,1,1,2,2]],[3,1],[5,2],[6,3]]
Output: [null,1,0,-1]
Explanation:
TreeAncestor treeAncestor = new TreeAncestor(7, [-1, 0, 0, 1, 1, 2, 2]);
treeAncestor.getKthAncestor(3, 1); // returns 1 which is the parent of 3
treeAncestor.getKthAncestor(5, 2); // returns 0 which is the grandparent of 5
treeAncestor.getKthAncestor(6, 3); // returns -1 because there is no such ancestor

Constraints:
• 1 <= k <= n <= 5*10^4
• parent[0] == -1 indicating that 0 is the root node.
• 0 <= parent[i] < n for all 0 < i < n
• 0 <= node < n
• There will be at most 5*10^4 queries.

## Solution: LogN ancestors

1. Build the tree from the parent array
2. Traverse the tree
3. For each node, store up to logn ancestors: the 2^0-th, 2^1-th, 2^2-th, …

When k comes in, each node takes the highest bit h out and queries its 2^h-th ancestor with k <- (k - 2^h). There will be at most logk recursive queries.
When does it end? If k == 0, we have found the ancestor, which is the current node. Or if node == 0 and k > 0, we are already at the root, which doesn't have any ancestors, so return -1.

Time complexity: Construction: O(nlogn), Query: O(logk)
Space complexity: O(nlogn)

DP method

## Solution 2: Binary Search

credit: Ziwu Zhou

Construction: O(n)
Traverse the tree in post order; for each node, record its depth and id (visiting order). For each depth, store all the nodes and their ids.

Query: O(logn)
Get the depth and id of the node; if k > d, return -1. Use binary search to find the first node at depth[d - k] that has an id greater than the query's. That node is the k-th ancestor of the node.

## C++

Given a binary tree root, a node X in the tree is named good if in the path from root to X there are no nodes with a value greater than X.

Return the number of good nodes in the binary tree.

Example 1:
Input: root = [3,1,4,3,null,1,5]
Output: 4
Explanation: Nodes in blue are good. Root Node (3) is always a good node. Node 4 -> (3,4) is the maximum value in the path starting from the root. Node 5 -> (3,4,5) is the maximum value in the path. Node 3 -> (3,1,3) is the maximum value in the path.

Example 2:
Input: root = [3,3,null,4,2]
Output: 3
Explanation: Node 2 -> (3, 3, 2) is not good, because "3" is higher than it.

Example 3:
Input: root = [1]
Output: 1
Explanation: Root is considered as good.

Constraints:
• The number of nodes in the binary tree is in the range [1, 10^5].
• Each node’s value is between [-10^4, 10^4].

## Solution: Recursion

Time complexity: O(n)
Space complexity: O(n)

## C++
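The code blocks under the "C++"/"Python3" headings are not reproduced in this copy; the following Python sketches are my reconstructions of two of the approaches described above, not the author's originals.

First, the binary-lifting table for the k-th ancestor problem (O(nlogn) construction, O(logk) query, matching the complexities stated above):

```python
class TreeAncestor:
    def __init__(self, n, parent):
        LOG = 16  # 2^16 > 5 * 10^4, enough levels for the stated constraints
        self.up = [parent[:]]  # up[j][v] = 2^j-th ancestor of v (-1 if none)
        for j in range(1, LOG):
            prev = self.up[j - 1]
            self.up.append([-1 if prev[v] == -1 else prev[prev[v]]
                            for v in range(n)])

    def getKthAncestor(self, node, k):
        j = 0
        while k and node != -1:
            if k & 1:            # peel off one set bit of k at a time
                node = self.up[j][node]
            k >>= 1
            j += 1
        return node
```

And the recursion for counting good nodes, carrying the maximum value seen on the path down the tree:

```python
def good_nodes(root):
    def dfs(node, path_max):
        if not node:
            return 0
        good = 1 if node.val >= path_max else 0
        path_max = max(path_max, node.val)
        return good + dfs(node.left, path_max) + dfs(node.right, path_max)
    return dfs(root, float('-inf'))
```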
2020-09-18 22:48:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44943487644195557, "perplexity": 2211.832257821185}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400189264.5/warc/CC-MAIN-20200918221856-20200919011856-00614.warc.gz"}
http://www.koreascience.or.kr/article/ArticleFullRecord.jsp?cn=E1BMAX_2013_v50n1_97
RIGIDITY THEOREMS IN THE HYPERBOLIC SPACE

De Lima, Henrique Fernandes

Abstract
As a suitable application of the well known generalized maximum principle of Omori-Yau, we obtain rigidity results concerning a complete hypersurface immersed with bounded mean curvature in the $(n+1)$-dimensional hyperbolic space $\mathbb{H}^{n+1}$. In our approach, we explore the existence of a natural duality between $\mathbb{H}^{n+1}$ and the half $\mathcal{H}^{n+1}$ of the de Sitter space $\mathbb{S}_1^{n+1}$, which models the so-called steady state space.

Keywords
hyperbolic space; complete hypersurfaces; mean curvature; Gauss map

Language
English

Cited by
1. On Bernstein-Type Theorems in Semi-Riemannian Warped Products, Advances in Mathematical Physics, 2013, 2013, 1

References
1. L. J. Alias and M. Dajczer, Uniqueness of constant mean curvature surfaces properly immersed in a slab, Comment. Math. Helv. 81 (2006), no. 3, 653-663.
2. F. E. C. Camargo, A. Caminha, and H. F. de Lima, Bernstein-type Theorems in Semi-Riemannian Warped Products, Proc. Amer. Math. Soc. 139 (2011), no. 5, 1841-1850.
3. A. Caminha and H. F. de Lima, Complete vertical graphs with constant mean curvature in semi-Riemannian warped products, Bull. Belg. Math. Soc. Simon Stevin 16 (2009), no. 1, 91-105.
4. A. Huber, On subharmonic functions and differential geometry in the large, Comment. Math. Helv. 32 (1957), 13-72.
5. H. F. de Lima, Spacelike hypersurfaces with constant higher order mean curvature in de Sitter space, J. Geom. Phys. 57 (2007), no. 3, 967-975.
6. R. Lopez and S. Montiel, Existence of constant mean curvature graphs in hyperbolic space, Calc. Var. Partial Differential Equations 8 (1999), no. 2, 177-190.
7. S. Montiel, Complete non-compact spacelike hypersurfaces of constant mean curvature in de Sitter spaces, J. Math. Soc. Japan 55 (2003), no. 4, 915-938.
8. S. Montiel, Unicity of constant mean curvature hypersurfaces in some Riemannian manifolds, Indiana Univ. Math. J. 48 (1999), no. 2, 711-748.
9. S. Montiel, Uniqueness of spacelike hypersurfaces of constant mean curvature in foliated spacetimes, Math. Ann. 314 (1999), no. 3, 529-553.
10. S. Montiel, An integral inequality for compact spacelike hypersurfaces in De Sitter space and applications to the case of constant mean curvature, Indiana Univ. Math. J. 37 (1988), no. 4, 909-917.
11. H. Omori, Isometric immersions of Riemannian manifolds, J. Math. Soc. Japan 19 (1967), 205-214.
12. S. T. Yau, Harmonic functions on complete Riemannian manifolds, Comm. Pure Appl. Math. 28 (1975), 201-228.
13. S. T. Yau, Some function-theoretic properties of complete Riemannian manifolds and their applications to geometry, Indiana Univ. Math. J. 25 (1976), no. 7, 659-670.
2018-04-20 12:19:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 5, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8065268993377686, "perplexity": 456.1411201289647}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125937780.9/warc/CC-MAIN-20180420120351-20180420140351-00503.warc.gz"}
https://nsrc.org/workshops/2015/apricot2015/raw-attachment/wiki/Track4Agenda/4.4.1.rancid-exercise.htm
# 1 Introduction

## 1.1 Goals

• Gain experience with RANCID
• Commands preceded with "$" imply that you should execute the command as a general user - not as root.
• Commands preceded with "#" imply that you should be working as root.
• Commands with more specific command lines (e.g. "rtrX>" or "mysql>") imply that you are executing commands on remote equipment, or within another program.

# 2 Exercises

# 3 Connect to your PC using ssh

# 4 Become root, and install the Subversion Version Control System:

In addition to Subversion we will specify to install telnet and the mutt email client. Both these packages may already be installed from prior exercises. If so, don't worry - the apt-get command will not reinstall them.

$ sudo -s
# apt-get install subversion telnet mutt

# 5 Install Rancid itself

# apt-get install rancid

• It will prompt with a warning - Select and press ENTER to continue.
• It will give you another warning about making a backup copy of your rancid data. We have no data, so select and press ENTER to continue.

# 6 Add an alias for the rancid user in /etc/aliases file

• RANCID by default sends emails to the users rancid-groupname and rancid-admin-groupname. We want them to be sent to the sysadm user instead and use the alias function for this.

# editor /etc/aliases

rancid-routers: sysadm
rancid-admin-routers: sysadm

Save the file, then run:

# newaliases

# 7 Edit /etc/rancid/rancid.conf

# editor /etc/rancid/rancid.conf

Find these lines in rancid.conf:

# list of rancid groups
#LIST_OF_GROUPS="sl joebobisp"; export LIST_OF_GROUPS

And, underneath them add the following line:

LIST_OF_GROUPS="routers"

(with no '#' at the front of the line, and aligned to the left)

Find the line with CVSROOT:

CVSROOT=$BASEDIR/CVS; export CVSROOT

And, change it to:

CVSROOT=$BASEDIR/svn; export CVSROOT

Note the lowercase "svn". We want to use Subversion for our Version Control System, and not CVS, so find the line with the parameter RCSSYS:

RCSSYS=cvs; export RCSSYS

And, change it to:

RCSSYS=svn; export RCSSYS

Now exit and save the file.

# 11 CRITICAL! CRITICAL! CRITICAL!

Pay very close attention to what userid you are using during the rest of these exercises. If you are not sure simply type "id" on the command line at any time.

From a root prompt ("#"), switch identity to become the 'rancid' user:

# su -s /bin/bash rancid

Check that you ARE the rancid user:

$ id

You should see something similar (numbers may be different):

uid=104(rancid) gid=109(rancid) groups=109(rancid)

# 12 IF YOU ARE NOT USER RANCID NOW, do NOT continue

# 13 Create /var/lib/rancid/.cloginrc

$ editor /var/lib/rancid/.cloginrc

Add the following two lines to the file:

add user *.ws.nsrc.org cisco
add password *.ws.nsrc.org nsrc+ws nsrc+ws

(The first 'cisco' is the username, the first and second 'nsrc+ws' are the password and enable password used to log in to your router. The star in the name means that it will try to use this username and password for all routers whose names end .ws.nsrc.org)

Exit and save the file. Now protect this file so that it cannot be read by other users:

$ chmod 600 /var/lib/rancid/.cloginrc

# 14 Test login to the router of your group

Login to your router with clogin. You might have to type yes to the first warning, but should not need to enter a password; this should be automatic.

$ /var/lib/rancid/bin/clogin rtrX.ws.nsrc.org

(replace X with your group number.
So, group 1 is rtr1.ws.nsrc.org)

You should get something like:

spawn ssh -c 3des -x -l cisco rtrX.ws.nsrc.org
The authenticity of host 'rtrX.ws.nsrc.org (10.10.X.254)' can't be established.
RSA key fingerprint is 73:f3:f0:e8:78:ab:49:1c:d9:5d:49:01:a4:e1:2a:83.
Are you sure you want to continue connecting (yes/no)?
Host rtrX.ws.nsrc.org added to the list of known hosts.
yes
Warning: Permanently added 'rtrX.ws.nsrc.org' (RSA) to the list of known hosts.
rtrX>enable
rtrX#

Exit from the router login:

rtrX#exit

# 15 Initialize the SVN repository for rancid:

Make sure you are the rancid user before doing this:

$ id

If you do not see something like uid=108(rancid) gid=113(rancid) groups=113(rancid) then DO NOT CONTINUE until you have become the rancid user. See exercise 6 for details.

Now initialize the Version Control repository (it will use Subversion):

$ /usr/lib/rancid/bin/rancid-cvs

You should see something similar to this:

Committed revision 1.
Checked out revision 1.
At revision 1.
A configs
Committed revision 2.
A router.db
Transmitting file data .
Committed revision 3.

# 16 Do this ONLY if you have problems

If this does not work, then either you are missing the subversion package, or something was not properly configured during the previous steps. You should verify that subversion is installed, and then before running the rancid-cvs command again do the following:

$ exit
# apt-get install subversion
# su -s /bin/bash rancid
$ cd /var/lib/rancid
$ rm -rf routers
$ rm -rf svn

Now try running the rancid-cvs command again:

$ /usr/lib/rancid/bin/rancid-cvs

# 17 Create the router.db file

$ editor /var/lib/rancid/routers/router.db

rtrX.ws.nsrc.org:cisco:up

(remember to replace X as appropriate)

Exit and save the file.

# 18 Let's run rancid!

$ /usr/lib/rancid/bin/rancid-run

This may take some time so be patient. Run it again, since the first time it might not commit correctly:

$ /usr/lib/rancid/bin/rancid-run

# 19 Check the rancid log files:

$ cd /var/lib/rancid/logs
$ ls -l
...

View the contents of the file(s):

$ less rtrX.ws.nsrc.org

Where you should replace "X" with your group number. If all went well, you can see the config of the router.

# 21 Let's change an interface description on the router

$ /usr/lib/rancid/bin/clogin rtrX.ws.nsrc.org

Where you should replace "X" with your group number. At the "rtrX#" prompt, enter the command:

rtrX# conf term

You should see:

Enter configuration commands, one per line. End with CNTL/Z.
rtrX(config)#

Enter:

rtrX(config)# interface LoopbackXX

(replace XX with your PC no)

You should get this prompt:

rtrX(config-if)#

Enter:

rtrX(config-if)# description <put your name here>
rtrX(config-if)# end

You should now have this prompt:

rtrX#

To save the config to memory:

rtrX# write memory

You should see:

Building configuration...
[OK]

To exit type:

rtrX# exit

Now you should be back at your rancid user prompt on your system:

# 22 Let's run rancid again:

$ /usr/lib/rancid/bin/rancid-run

Look at the rancid logs:

$ ls /var/lib/rancid/logs/

You should see the latest rancid execution as a new log file with the date and time in the name.

# 23 Let's see the differences

$ cd /var/lib/rancid/routers/configs
$ ls -l

You should see the router config file for your group:

$ svn log rtrX.ws.nsrc.org

(where X is the number of your router)

Notice the revisions. You should see different no's such as r5 and r7. Choose the lowest and the highest one. Let's view the difference between two versions:

$ svn diff -r 5:7 rtrX.ws.nsrc.org | less
$ svn diff -r 6:7 rtrX.ws.nsrc.org | less
...
can you find your changes?

Notice that svn is the Subversion Version Control system command line tool for viewing Subversion repositories of information. If you type:

$ ls -lah

You will see a hidden directory called ".svn" - this actually contains all the information about the changes between router configurations from each time you run rancid using /usr/lib/rancid/bin/rancid-run. Whatever you do, don't edit or touch the .svn directory by hand!

Now we will exit from the rancid user shell and the root user shell to go back to being the "sysadm" user. Then we'll use the "mutt" email client to see if rancid has been sending emails to the sysadm user.

$ exit (takes you from rancid to root user)
# exit (takes you from root to sysadm user)
$ id

... check that you are now the 'sysadm' user again;
... if not, log out and in again as sysadm to your virtual host

$ mutt

(When asked to create the Mail directory, say Yes)

If everything goes as planned, you should be able to read the mails sent by Rancid. You can select an email sent by "[email protected]" and see what it looks like. Notice that it is your router description and any differences from the last time it was obtained using the rancid-run command.

Now exit from mutt. (use 'q' to return to the mail index, and 'q' again to quit mutt)

# 25 Let's make rancid run automatically every 30 minutes using cron

cron is a system available in Linux to automate the running of jobs. First we need to become the root user again:

$ sudo -s

Now we will create a new job to run for the rancid user:

# crontab -e -u rancid

It will ask you for your favorite editor. Select whichever editor you have been using in class. Add this line at the bottom of the file (COPY and PASTE):

*/30 * * * * /usr/lib/rancid/bin/rancid-run

... then save and quit from the file. That's it. The command "rancid-run" will execute automatically from now on every 30 minutes all the time (every day, week and month).

# 26 Now add all the other routers

Note the hostnames for the routers rtrX.ws.nsrc.org where X goes from 1 to 9. If you have fewer routers in your class, then only include the actual, available routers.

Become the rancid user and update the router.db file:

# su -s /bin/bash rancid
$ editor /var/lib/rancid/routers/router.db

Add the other classroom routers to the file. You should end up with something like (COPY and PASTE):

rtr1.ws.nsrc.org:cisco:up
rtr2.ws.nsrc.org:cisco:up
rtr3.ws.nsrc.org:cisco:up
rtr4.ws.nsrc.org:cisco:up
rtr5.ws.nsrc.org:cisco:up
rtr6.ws.nsrc.org:cisco:up
rtr7.ws.nsrc.org:cisco:up
rtr8.ws.nsrc.org:cisco:up
rtr9.ws.nsrc.org:cisco:up

(Note that "cisco" means this is Cisco equipment -- it tells Rancid that we are expecting to talk to a Cisco device here. You can also talk to Juniper, HP, ...). Be sure the entries are aligned to the left of the file.

# 27 Run rancid again:

$ /usr/lib/rancid/bin/rancid-run

This should take a minute or more now, be patient.

# 28 Check out the logs:

$ cd /var/lib/rancid/logs
$ ls -l
...

Pick the latest file and view it:

$ less routers.YYYYMMDD.HHMMSS

This should be the last file listed in the output from "ls -l". You should notice a bunch of statements indicating that routers have been added to the Subversion version control repository, and much more.

# 29 Look at the configs

$ cd /var/lib/rancid/routers/configs
$ more *.ws.nsrc.org

Press the SPACE bar to continue through each file. Or, you could do:

$ less *.ws.nsrc.org

And press the SPACE bar to scroll through each file and then press ":n" to view the next file.
Remember, in both cases you can press "q" to quit at any time. If all went well, you can see the configs of ALL routers.

$ /usr/lib/rancid/bin/rancid-run

This could take a few moments, so be patient....

# 31 Play with clogin:

$ /usr/lib/rancid/bin/clogin -c "show clock" rtrX.ws.nsrc.org

Where "X" is the number of your group. What do you notice ?

Even better, we can show the power of using a simple script to make changes to multiple devices quickly:

$ editor /tmp/newuser

... in this file, add the following commands (COPY and PASTE):

configure terminal
username NewUser secret 0 NewPassword
exit
write

Save the file, exit, and run the following commands from the command line:

$ for r in 1 2 3 4

Your prompt will now change to be ">". Continue by typing:

> do
> /var/lib/rancid/bin/clogin -x /tmp/newuser rtr$r.ws.nsrc.org
> done

Now your prompt will go back to "$" and the rancid clogin command will run and execute the commands you just typed above on routers rtr1, rtr2, rtr3 and rtr4. This is simple shell scripting in Linux, but it's very powerful.

Q. How would you verify that this has executed correctly ? Hint: "show run | inc"

A. Connect to rtr1, rtr2, rtr3 and rtr4. Type "enable" and then type "show run | inc username" to verify that the NewUser username now exists. Type exit to leave each router. Naturally you could automate this like we just did above.

# 32 Add the RANCID SVN (Subversion) repository into WebSVN

If you are still logged in as user rancid, get back to root. Remember you can type "id" to check what userid you are.

$ exit

# Install WebSVN:

# apt-get install websvn

During the installation, follow these instructions:

• Select to the question if you want to configure WebSVN now and press ENTER
• Select for the next question about supporting various web servers and press ENTER
• When asked for the "svn parent repositories" change the path to be: /var/lib/rancid/svn - Select and press ENTER. Do the same when asked about "svn repositories" on the next screen. That is, use the path: /var/lib/rancid/svn and not what is shown by default. Select and press ENTER. Select for the next screen talking about permissions and press ENTER.

Note: if you are installing under Ubuntu 14.04, you may get an error about the conf.d directory not being present. If so, work around the problem like this (including creating a dummy conf.d directory):

# ln -s /etc/websvn/apache.conf /etc/apache2/conf-available/websvn.conf
# mkdir /etc/apache2/conf.d
# a2enconf websvn.conf
# service apache2 reload

# 33 Fix permissions. The web server must be able to read the SVN (Subversion) folder

# chgrp -R www-data /var/lib/rancid/svn
# chmod g+w -R /var/lib/rancid/svn

http://pcX.ws.nsrc.org/websvn

Browse the files under the 'routers/configs' directory. You can see all your router configuration files here.

# 35 Review revisions

WebSVN lets you easily see the changes between versions.

• Browse to http://pcX.ws.nsrc.org/websvn again, go to routers/ then configs/
• Click on your router file (rtrX.ws.nsrc.org) name. You will get a new screen
• Click "Compare with Previous" at the top of the screen.
• You should now see the latest changes highlighted.
• Click on "REPOS 1" to go back to the main WebSVN page:
• Click on "routers/" under "Path"
• Click on "configs/"
• Select two of the routers that are next to each other. I.E. rtr1 and rtr2, rtr3 and rtr4.
• Click on Compare Paths

This will show you the differences between two separate router configurations.
WebSVN is a convenient way to quickly see differences via a GUI between multiple configuration files. Note: this is a potential security hole, so you should limit access to the URL http://host/websvn using passwords (and SSL) or appropriate access control lists.

# 36 Optional: Fetching configs with a non-privileged rancid user

In a production environment, we'd probably want to add a "rancid" user on the devices, without config privileges, but able to do a show running-config. One way to do this is to add a user in config mode:

rtrX# conf term
Enter configuration commands, one per line. End with CNTL/Z.
rtrX(config)# username rancid privilege 4 secret 0 <password>
rtrX(config)# privilege exec level 4 show running-config view full

This creates a rancid user with privilege level 4. On the next line, we allow that user to execute show running-config.

add user *.ws.nsrc.org rancid
add autoenable *.ws.nsrc.org 1

The autoenable means the user will be in the right privilege level immediately after login and no enable is needed to run show running-config.

Note: try and look at the clogin manpage to find out how you can specify another user (for example: cisco) when using clogin interactively, to make changes with -c or -x (as shown above).

See more at: http://www.toms-blog.com/backup-cisco-config-with-rancid-and-an-un-priviledged-user/

# 37 On the use of hostnames in RANCID vs. IP Addresses

(Note: it is also allowed to use IP addresses, and one could also write:

add user 10.10.* cisco
add password rtr*.ws.nsrc.org nsrc+ws nsrc+ws
2018-06-18 19:17:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.191408172249794, "perplexity": 9846.180680496653}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267860776.63/warc/CC-MAIN-20180618183714-20180618203714-00104.warc.gz"}
https://docs.microsoft.com/en-us/dotnet/visual-basic/programming-guide/concepts/async/index
# Asynchronous Programming with Async and Await (Visual Basic) You can avoid performance bottlenecks and enhance the overall responsiveness of your application by using asynchronous programming. However, traditional techniques for writing asynchronous applications can be complicated, making them difficult to write, debug, and maintain. Visual Studio 2012 introduced a simplified approach, async programming, that leverages asynchronous support in the .NET Framework 4.5 and higher as well as in the Windows Runtime. The compiler does the difficult work that the developer used to do, and your application retains a logical structure that resembles synchronous code. As a result, you get all the advantages of asynchronous programming with a fraction of the effort. This topic provides an overview of when and how to use async programming and includes links to support topics that contain details and examples. ## Async Improves Responsiveness Asynchrony is essential for activities that are potentially blocking, such as when your application accesses the web. Access to a web resource sometimes is slow or delayed. If such an activity is blocked within a synchronous process, the entire application must wait. In an asynchronous process, the application can continue with other work that doesn't depend on the web resource until the potentially blocking task finishes. The following table shows typical areas where asynchronous programming improves responsiveness. The listed APIs from the .NET Framework 4.5 and the Windows Runtime contain methods that support async programming. Application area Supporting APIs that contain async methods Web access HttpClient, SyndicationClient Working with images MediaCapture, BitmapEncoder, BitmapDecoder WCF programming Synchronous and Asynchronous Operations Asynchrony proves especially valuable for applications that access the UI thread because all UI-related activity usually shares one thread. If any process is blocked in a synchronous application, all are blocked. Your application stops responding, and you might conclude that it has failed when instead it's just waiting. When you use asynchronous methods, the application continues to respond to the UI. You can resize or minimize a window, for example, or you can close the application if you don't want to wait for it to finish. The async-based approach adds the equivalent of an automatic transmission to the list of options that you can choose from when designing asynchronous operations. That is, you get all the benefits of traditional asynchronous programming but with much less effort from the developer. ## Async Methods Are Easier to Write The Async and Await keywords in Visual Basic are the heart of async programming. By using those two keywords, you can use resources in the .NET Framework or the Windows Runtime to create an asynchronous method almost as easily as you create a synchronous method. Asynchronous methods that you define by using Async and Await are referred to as async methods. The following example shows an async method. Almost everything in the code should look completely familiar to you. The comments call out the features that you add to create the asynchrony. You can find a complete Windows Presentation Foundation (WPF) example file at the end of this topic, and you can download the sample from Async Sample: Example from "Asynchronous Programming with Async and Await". ' Three things to note in the signature: ' - The method has an Async modifier. ' - The return type is Task or Task(Of T). 
(See "Return Types" section.) ' Here, it is Task(Of Integer) because the return statement returns an integer. ' - The method name ends in "Async." Async Function AccessTheWebAsync() As Task(Of Integer) ' You need to add a reference to System.Net.Http to declare client. Dim client As HttpClient = New HttpClient() ' GetStringAsync returns a Task(Of String). That means that when you await the ' task you'll get a string (urlContents). ' You can do work here that doesn't rely on the string from GetStringAsync. DoIndependentWork() ' The Await operator suspends AccessTheWebAsync. ' - AccessTheWebAsync can't continue until getStringTask is complete. ' - Meanwhile, control returns to the caller of AccessTheWebAsync. ' - Control resumes here when getStringTask is complete. ' - The Await operator then retrieves the string result from getStringTask. Dim urlContents As String = Await getStringTask ' The return statement specifies an integer result. ' Any methods that are awaiting AccessTheWebAsync retrieve the length value. Return urlContents.Length End Function If AccessTheWebAsync doesn't have any work that it can do between calling GetStringAsync and awaiting its completion, you can simplify your code by calling and awaiting in the following single statement. Dim urlContents As String = Await client.GetStringAsync() The following characteristics summarize what makes the previous example an async method. • The method signature includes an Async modifier. • The name of an async method, by convention, ends with an "Async" suffix. • The return type is one of the following types: • Task<TResult> if your method has a return statement in which the operand has type TResult. • Task if your method has no return statement or has a return statement with no operand. • Sub if you're writing an async event handler. For more information, see "Return Types and Parameters" later in this topic. • The method usually includes at least one await expression, which marks a point where the method can't continue until the awaited asynchronous operation is complete. In the meantime, the method is suspended, and control returns to the method's caller. The next section of this topic illustrates what happens at the suspension point. In async methods, you use the provided keywords and types to indicate what you want to do, and the compiler does the rest, including keeping track of what must happen when control returns to an await point in a suspended method. Some routine processes, such as loops and exception handling, can be difficult to handle in traditional asynchronous code. In an async method, you write these elements much as you would in a synchronous solution, and the problem is solved. ## What Happens in an Async Method The most important thing to understand in asynchronous programming is how the control flow moves from method to method. The following diagram leads you through the process. The numbers in the diagram correspond to the following steps. 1. An event handler calls and awaits the AccessTheWebAsync async method. 2. AccessTheWebAsync creates an HttpClient instance and calls the GetStringAsync asynchronous method to download the contents of a website as a string. 3. Something happens in GetStringAsync that suspends its progress. Perhaps it must wait for a website to download or some other blocking activity. To avoid blocking resources, GetStringAsync yields control to its caller, AccessTheWebAsync. 
   GetStringAsync returns a Task<TResult> where TResult is a string, and AccessTheWebAsync assigns the task to the getStringTask variable. The task represents the ongoing process for the call to GetStringAsync, with a commitment to produce an actual string value when the work is complete.
4. Because getStringTask hasn't been awaited yet, AccessTheWebAsync can continue with other work that doesn't depend on the final result from GetStringAsync. That work is represented by a call to the synchronous method DoIndependentWork.
5. DoIndependentWork is a synchronous method that does its work and returns to its caller.
6. AccessTheWebAsync has run out of work that it can do without a result from getStringTask. AccessTheWebAsync next wants to calculate and return the length of the downloaded string, but the method can't calculate that value until the method has the string.
   Therefore, AccessTheWebAsync uses an await operator to suspend its progress and to yield control to the method that called AccessTheWebAsync. AccessTheWebAsync returns a Task<int> (Task(Of Integer) in Visual Basic) to the caller. The task represents a promise to produce an integer result that's the length of the downloaded string.
   Note: If GetStringAsync (and therefore getStringTask) is complete before AccessTheWebAsync awaits it, control remains in AccessTheWebAsync. The expense of suspending and then returning to AccessTheWebAsync would be wasted if the called asynchronous process (getStringTask) has already completed and AccessTheWebAsync doesn't have to wait for the final result.
   Inside the caller (the event handler in this example), the processing pattern continues. The caller might do other work that doesn't depend on the result from AccessTheWebAsync before awaiting that result, or the caller might await immediately. The event handler is waiting for AccessTheWebAsync, and AccessTheWebAsync is waiting for GetStringAsync.
7. GetStringAsync completes and produces a string result. The string result isn't returned by the call to GetStringAsync in the way that you might expect. (Remember that the method already returned a task in step 3.) Instead, the string result is stored in the task that represents the completion of the method, getStringTask. The await operator retrieves the result from getStringTask. The assignment statement assigns the retrieved result to urlContents.
8. When AccessTheWebAsync has the string result, the method can calculate the length of the string. Then the work of AccessTheWebAsync is also complete, and the waiting event handler can resume. In the full example at the end of the topic, you can confirm that the event handler retrieves and prints the value of the length result.

If you are new to asynchronous programming, take a minute to consider the difference between synchronous and asynchronous behavior. A synchronous method returns when its work is complete (step 5), but an async method returns a task value when its work is suspended (steps 3 and 6). When the async method eventually completes its work, the task is marked as completed and the result, if any, is stored in the task.

## API Async Methods

You might be wondering where to find methods such as GetStringAsync that support async programming. The .NET Framework 4.5 or higher contains many members that work with Async and Await. You can recognize these members by the "Async" suffix that's attached to the member name and a return type of Task or Task<TResult>.
For example, the System.IO.Stream class contains methods such as CopyToAsync, ReadAsync, and WriteAsync alongside the synchronous methods CopyTo, Read, and Write.

The Windows Runtime also contains many methods that you can use with Async and Await in Windows apps. For more information and example methods, see Call asynchronous APIs in C# or Visual Basic, Asynchronous programming (Windows Runtime apps), and WhenAny: Bridging between the .NET Framework and the Windows Runtime.

Async methods are intended to be non-blocking operations. An Await expression in an async method doesn't block the current thread while the awaited task is running. Instead, the expression signs up the rest of the method as a continuation and returns control to the caller of the async method.

The Async and Await keywords don't cause additional threads to be created. Async methods don't require multithreading because an async method doesn't run on its own thread. The method runs on the current synchronization context and uses time on the thread only when the method is active. You can use Task.Run to move CPU-bound work to a background thread, but a background thread doesn't help with a process that's just waiting for results to become available.

The async-based approach to asynchronous programming is preferable to existing approaches in almost every case. In particular, this approach is better than BackgroundWorker for I/O-bound operations because the code is simpler and you don't have to guard against race conditions. In combination with Task.Run, async programming is better than BackgroundWorker for CPU-bound operations because async programming separates the coordination details of running your code from the work that Task.Run transfers to the threadpool.
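To make the Task.Run point concrete, here is a minimal sketch that is not from this article; HeavyComputation is a placeholder for your own CPU-bound routine:

    Imports System.Threading.Tasks

    Module CpuBoundSketch
        ' Placeholder CPU-bound routine (illustrative only).
        Function HeavyComputation() As Integer
            Dim total As Integer = 0
            For i As Integer = 1 To 1000000
                total += i Mod 7
            Next
            Return total
        End Function

        ' Await Task.Run so the loop runs on a thread-pool thread while the
        ' calling thread (for example, the UI thread) stays responsive.
        Async Function ComputeAsync() As Task(Of Integer)
            Return Await Task.Run(Function() HeavyComputation())
        End Function
    End Module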
## Async and Await

If you specify that a method is an async method by using an Async modifier, you enable the following two capabilities.

• The marked async method can use Await to designate suspension points. The await operator tells the compiler that the async method can't continue past that point until the awaited asynchronous process is complete. In the meantime, control returns to the caller of the async method. The suspension of an async method at an Await expression doesn't constitute an exit from the method, and Finally blocks don't run.
• The marked async method can itself be awaited by methods that call it.

An async method typically contains one or more occurrences of an Await operator, but the absence of Await expressions doesn't cause a compiler error. If an async method doesn't use an Await operator to mark a suspension point, the method executes as a synchronous method does, despite the Async modifier. The compiler issues a warning for such methods.

Async and Await are contextual keywords. For more information and examples, see the related topics at the end of this article.

## Return Types and Parameters

In .NET Framework programming, an async method typically returns a Task or a Task<TResult>. Inside an async method, an Await operator is applied to a task that's returned from a call to another async method.

You specify Task<TResult> as the return type if the method contains a Return statement that specifies an operand of type TResult. You use Task as the return type if the method has no return statement or has a return statement that doesn't return an operand.

The following example shows how you declare and call a method that returns a Task<TResult> or a Task. (The function headers and call sites, which were lost when this page was captured, are restored here to match the inline comments; the name Task_MethodAsync in the second function is illustrative.)

    ' Signature specifies Task(Of Integer)
    Async Function TaskOfTResult_MethodAsync() As Task(Of Integer)

        Dim hours As Integer
        ' . . .
        ' Return statement specifies an integer result.
        Return hours
    End Function

    ' Calls to TaskOfTResult_MethodAsync
    Dim returnedTaskTResult As Task(Of Integer) = TaskOfTResult_MethodAsync()
    Dim intResult As Integer = Await returnedTaskTResult
    ' or, in a single statement
    Dim intResult As Integer = Await TaskOfTResult_MethodAsync()

    ' Signature specifies Task
    Async Function Task_MethodAsync() As Task

        ' . . .
        ' The method has no return statement.
    End Function

    ' Calls to Task_MethodAsync
    Dim returnedTask As Task = Task_MethodAsync()
    Await returnedTask
    ' or, in a single statement
    Await Task_MethodAsync()

Each returned task represents ongoing work. A task encapsulates information about the state of the asynchronous process and, eventually, either the final result from the process or the exception that the process raises if it doesn't succeed.

An async method can also be a Sub method. This return type is used primarily to define event handlers, where a return type is required. Async event handlers often serve as the starting point for async programs.

An async method that's a Sub procedure can't be awaited, and the caller can't catch any exceptions that the method throws.

An async method can't declare ByRef parameters, but the method can call methods that have such parameters.

Asynchronous APIs in Windows Runtime programming have return types that are similar to tasks. For more information and an example, see Call asynchronous APIs in C# or Visual Basic.

## Naming Convention

By convention, you append "Async" to the names of methods that have an Async modifier. You can ignore the convention where an event, base class, or interface contract suggests a different name. For example, you shouldn't rename common event handlers, such as Button1_Click.

## Related Topics and Samples (Visual Studio)

| Title | Description | Sample |
| --- | --- | --- |
| Walkthrough: Accessing the Web by Using Async and Await (Visual Basic) | Shows how to convert a synchronous WPF solution to an asynchronous WPF solution. The application downloads a series of websites. | Async Sample: Accessing the Web Walkthrough |
| How to: Extend the Async Walkthrough by Using Task.WhenAll (Visual Basic) | Adds Task.WhenAll to the previous walkthrough. The use of WhenAll starts all the downloads at the same time. | |
| How to: Make Multiple Web Requests in Parallel by Using Async and Await (Visual Basic) | Demonstrates how to start several tasks at the same time. | Async Sample: Make Multiple Web Requests in Parallel |
| Async Return Types (Visual Basic) | Illustrates the types that async methods can return and explains when each type is appropriate. | |
| Control Flow in Async Programs (Visual Basic) | Traces in detail the flow of control through a succession of await expressions in an asynchronous program. | Async Sample: Control Flow in Async Programs |
| Fine-Tuning Your Async Application (Visual Basic) | Shows how to add the following functionality to your async solution: Cancel an Async Task or a List of Tasks; Cancel Async Tasks after a Period of Time; Cancel Remaining Async Tasks after One Is Complete; Start Multiple Async Tasks and Process Them As They Complete. | Async Sample: Fine Tuning Your Application |
| Handling Reentrancy in Async Apps (Visual Basic) | Shows how to handle cases in which an active asynchronous operation is restarted while it's running. | |
| WhenAny: Bridging between the .NET Framework and the Windows Runtime | Shows how to bridge between Task types in the .NET Framework and IAsyncOperations in the Windows Runtime so that you can use WhenAny with a Windows Runtime method. | Async Sample: Bridging between .NET and Windows Runtime (AsTask and WhenAny) |
| Async Cancellation: Bridging between the .NET Framework and the Windows Runtime | Shows how to bridge between Task types in the .NET Framework and IAsyncOperations in the Windows Runtime so that you can use CancellationTokenSource with a Windows Runtime method. | Async Sample: Bridging between .NET and Windows Runtime (AsTask & Cancellation) |
| Using Async for File Access (Visual Basic) | Lists and demonstrates the benefits of using async and await to access files. | |
| Task-based Asynchronous Pattern (TAP) | Describes a new pattern for asynchrony in the .NET Framework. The pattern is based on the Task and Task<TResult> types. | |
| Async Videos on Channel 9 | Provides links to a variety of videos about async programming. | |
## Complete Example

The following code is the MainWindow.xaml.vb file from the Windows Presentation Foundation (WPF) application that this topic discusses. You can download the sample from Async Sample: Example from "Asynchronous Programming with Async and Await". (The operand of the ResultsTextBox assignment in the event handler was lost when this page was captured; the String.Format line below restores the output the sample prints.)

    ' Add an Imports statement and a reference for System.Net.Http
    Imports System.Net.Http

    Class MainWindow

        ' Mark the event handler with async so you can use Await in it.
        Private Async Sub StartButton_Click(sender As Object, e As RoutedEventArgs)

            ' Call and await separately.
            '' You can do independent work here.
            Dim contentLength As Integer = Await AccessTheWebAsync()

            ResultsTextBox.Text &=
                String.Format(vbCrLf & "Length of the downloaded string: {0}." & vbCrLf, contentLength)
        End Sub

        ' Three things to note in the signature:
        ' - The method has an Async modifier.
        ' - The return type is Task or Task(Of T). (See "Return Types" section.)
        '   Here, it is Task(Of Integer) because the return statement returns an integer.
        ' - The method name ends in "Async."
        Async Function AccessTheWebAsync() As Task(Of Integer)

            ' You need to add a reference to System.Net.Http to declare client.
            Dim client As HttpClient = New HttpClient()

            ' GetStringAsync returns a Task(Of String). That means that when you await the
            ' task you'll get a string (urlContents).
            Dim getStringTask As Task(Of String) = client.GetStringAsync("https://msdn.microsoft.com")

            ' You can do work here that doesn't rely on the string from GetStringAsync.
            DoIndependentWork()

            ' The Await operator suspends AccessTheWebAsync.
            ' - AccessTheWebAsync can't continue until getStringTask is complete.
            ' - Meanwhile, control returns to the caller of AccessTheWebAsync.
            ' - Control resumes here when getStringTask is complete.
            ' - The Await operator then retrieves the string result from getStringTask.
            Dim urlContents As String = Await getStringTask

            ' The return statement specifies an integer result.
            ' Any methods that are awaiting AccessTheWebAsync retrieve the length value.
            Return urlContents.Length
        End Function

        Sub DoIndependentWork()
            ResultsTextBox.Text &= "Working . . . . . . ." & vbCrLf
        End Sub
    End Class

    ' Sample Output:
    ' Working . . . . . . .
2019-08-18 01:27:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19115488231182098, "perplexity": 3889.125241120065}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313536.31/warc/CC-MAIN-20190818002820-20190818024820-00547.warc.gz"}
https://riverml.xyz/latest/examples/matrix-factorization-for-recommender-systems-part-2/
# Matrix Factorization for Recommender Systems - Part 2

As seen in Part 1, the strength of Matrix Factorization (MF) lies in its ability to deal with sparse and high-cardinality categorical variables. In this second tutorial we will have a look at the Factorization Machines (FM) algorithm and study how it generalizes the power of MF.

## Factorization Machines

Steffen Rendle came up in 2010 with Factorization Machines, an algorithm able to handle any real-valued feature vector, combining the advantages of general predictors with factorization models. It became quite popular in the field of online advertising, notably after winning several Kaggle competitions. The modeling technique starts with a linear regression to capture the effects of each variable individually:

$\normalsize \hat{y}(x) = w_{0} + \sum_{j=1}^{p} w_{j} x_{j}$

Interaction terms are then added to learn relations between features. Instead of learning a single, specific weight per interaction (as in polynomial regression), a set of latent factors is learnt per feature (as in MF). An interaction is computed by multiplying the product of the involved features by the dot product of their latent vectors. The degree of factorization — or model order — represents the maximum number of features per interaction considered. The model equation for a factorization machine of degree $$d$$ = 2 is defined as:

$\normalsize \hat{y}(x) = w_{0} + \sum_{j=1}^{p} w_{j} x_{j} + \sum_{j=1}^{p} \sum_{j'=j+1}^{p} \langle \mathbf{v}_j, \mathbf{v}_{j'} \rangle x_{j} x_{j'}$

Where $$\normalsize \langle \mathbf{v}_j, \mathbf{v}_{j'} \rangle$$ is the dot product of the $$j$$ and $$j'$$ latent vectors:

$\normalsize \langle \mathbf{v}_j, \mathbf{v}_{j'} \rangle = \sum_{f=1}^{k} \mathbf{v}_{j, f} \cdot \mathbf{v}_{j', f}$

Higher-order FM will be covered in a following section; just note that factorization models express their power in sparse settings, which is also where higher-order interactions are hard to estimate.
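To make the degree-2 equation concrete, here is a minimal NumPy sketch (not river's implementation) that evaluates the prediction with the standard reformulation of the pairwise sum, which reduces the cost from quadratic in the number of features to linear per latent factor:

    # sum_{j<j'} <v_j, v_j'> x_j x_j'
    #   = 0.5 * sum_f [ (sum_j v_{jf} x_j)^2 - sum_j v_{jf}^2 x_j^2 ]
    import numpy as np

    def fm_predict(x, w0, w, V):
        """Degree-2 FM prediction. x: (p,), w0: float, w: (p,), V: (p, k)."""
        linear = w0 + w @ x
        s = V.T @ x                              # per-factor weighted sums, shape (k,)
        pairwise = 0.5 * np.sum(s ** 2 - (V ** 2).T @ (x ** 2))
        return linear + pairwise

    rng = np.random.default_rng(73)
    p, k = 5, 3
    x = rng.random(p)
    print(fm_predict(x, 0.1, rng.normal(size=p), rng.normal(scale=0.1, size=(p, k))))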
Strong emphasis must be placed on feature engineering, as it allows FM to mimic most factorization models and significantly impacts its performance. One-hot encoding of high-cardinality categorical variables is the most frequent step before feeding the model with data. For more efficiency, river's FM implementation treats string values as categorical variables and automatically one-hot encodes them. FM models have their own module: river.facto.

## Mimic Biased Matrix Factorization (BiasedMF)

Let's start with a simple example where we want to reproduce the Biased Matrix Factorization model we trained in the previous tutorial. For a fair comparison with the Part 1 example, let's set the same evaluation framework:

    from river import datasets
    from river import metrics
    from river.evaluate import progressive_val_score

    def evaluate(model):
        X_y = datasets.MovieLens100K()
        metric = metrics.MAE() + metrics.RMSE()
        _ = progressive_val_score(X_y, model, metric, print_every=25_000, show_time=True, show_memory=True)

In order to build an equivalent model we need to use the same hyper-parameters. Because we can't replace the FM intercept with the global running mean, we won't be able to build the exact same model:

    from river import compose
    from river import facto
    from river import meta
    from river import optim
    from river import stats

    fm_params = {
        'n_factors': 10,
        'weight_optimizer': optim.SGD(0.025),
        'latent_optimizer': optim.SGD(0.05),
        'sample_normalization': False,
        'l1_weight': 0.,
        'l2_weight': 0.,
        'l1_latent': 0.,
        'l2_latent': 0.,
        'intercept': 3,
        'intercept_lr': .01,
        'weight_initializer': optim.initializers.Zeros(),
        'latent_initializer': optim.initializers.Normal(mu=0., sigma=0.1, seed=73),
    }

    regressor = compose.Select('user', 'item')
    regressor |= facto.FMRegressor(**fm_params)

    model = meta.PredClipper(
        regressor=regressor,
        y_min=1,
        y_max=5
    )

    evaluate(model)

    [25,000] MAE: 0.761761, RMSE: 0.960662 – 0:00:05.452919 – 1.16 MB
    [50,000] MAE: 0.751922, RMSE: 0.949783 – 0:00:10.885025 – 1.36 MB
    [75,000] MAE: 0.749822, RMSE: 0.948634 – 0:00:15.870338 – 1.58 MB
    [100,000] MAE: 0.748393, RMSE: 0.94776 – 0:00:20.830649 – 1.77 MB

Both MAEs are very close to each other (0.7486 vs. 0.7485), showing that we have almost reproduced the reco.BiasedMF algorithm. The cost is a naturally slower running time, as the FM implementation offers more flexibility.

## Feature engineering for FM models

Let's study the basics of how to properly encode data for FM models. We are going to keep using MovieLens 100K, as it provides various feature types:

    import json

    for x, y in datasets.MovieLens100K():
        print(f'x = {json.dumps(x, indent=4)}\ny = {y}')
        break

    x = {
        "user": "259",
        "item": "255",
        "timestamp": 874731910000000000,
        "title": "My Best Friend's Wedding (1997)",
        "release_date": 866764800000000000,
        "genres": "comedy, romance",
        "age": 21.0,
        "gender": "M",
        "occupation": "student",
        "zip_code": "48823"
    }
    y = 4.0

The features we are going to add to our model don't improve its predictive power. Nevertheless, they are useful to illustrate different methods of data encoding:

1. Set-categorical variables. We have seen that categorical variables are one-hot encoded automatically if set to strings; on the other hand, set-categorical variables must be encoded explicitly by the user. A good way of doing so is to assign them a value of $$1/m$$, where $$m$$ is the number of elements of the sample set. It gives the feature a constant "weight" across all samples, preserving the model's stability. Let's create a routine to encode movies genres this way:

    def split_genres(x):
        genres = x['genres'].split(', ')
        return {f'genre_{genre}': 1 / len(genres) for genre in genres}
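As a quick illustration (not part of the original tutorial), applying this routine to the sample record shown above gives each of its two genres a weight of one half:

    # Illustrative check of split_genres on the sample record above.
    print(split_genres({'genres': 'comedy, romance'}))
    # {'genre_comedy': 0.5, 'genre_romance': 0.5}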
2. Numerical variables. In practice, transforming numerical features into categorical ones works better in most cases. Feature binning is the natural way, but finding good bins is sometimes more an art than a science. Let's encode users' age with something simple:

    def bin_age(x):
        if x['age'] <= 18:
            return {'age_0-18': 1}
        elif x['age'] <= 32:
            return {'age_19-32': 1}
        elif x['age'] < 55:
            return {'age_33-54': 1}
        else:
            return {'age_55-100': 1}

Let's put everything together:

    fm_params = {
        'n_factors': 14,
        'weight_optimizer': optim.SGD(0.01),
        'latent_optimizer': optim.SGD(0.025),
        'intercept': 3,
        'latent_initializer': optim.initializers.Normal(mu=0., sigma=0.05, seed=73),
    }

    regressor = compose.Select('user', 'item')
    regressor += (
        compose.Select('genres') |
        compose.FuncTransformer(split_genres)
    )
    regressor += (
        compose.Select('age') |
        compose.FuncTransformer(bin_age)
    )
    regressor |= facto.FMRegressor(**fm_params)

    model = meta.PredClipper(
        regressor=regressor,
        y_min=1,
        y_max=5
    )

    evaluate(model)

    [25,000] MAE: 0.760059, RMSE: 0.961415 – 0:00:12.030449 – 1.43 MB
    [50,000] MAE: 0.751429, RMSE: 0.951504 – 0:00:24.230797 – 1.68 MB
    [75,000] MAE: 0.750568, RMSE: 0.951592 – 0:00:37.786411 – 1.95 MB
    [100,000] MAE: 0.75018, RMSE: 0.951622 – 0:00:51.120835 – 2.2 MB

Note that using more variables involves factorizing a larger latent space; increasing the number of latent factors $$k$$ then often helps capture more information.

Some other feature engineering tips from the "3 idiots" winning solution for the Kaggle Criteo display ads competition in 2014:

• Infrequent modalities often bring noise and little information; transforming them into a special tag can help
• In some cases, sample-wise normalization seems to make the optimization problem easier to solve

## Higher-Order Factorization Machines (HOFM)

The model equation generalized to any order $$d \geq 2$$ is defined as:

$\normalsize \hat{y}(x) = w_{0} + \sum_{j=1}^{p} w_{j} x_{j} + \sum_{l=2}^{d} \sum_{j_1=1}^{p} \cdots \sum_{j_l=j_{l-1}+1}^{p} \left(\prod_{j'=1}^{l} x_{j_{j'}} \right) \left(\sum_{f=1}^{k_l} \prod_{j'=1}^{l} v_{j_{j'}, f}^{(l)} \right)$

    hofm_params = {
        'degree': 3,
        'n_factors': 12,
        'weight_optimizer': optim.SGD(0.01),
        'latent_optimizer': optim.SGD(0.025),
        'intercept': 3,
        'latent_initializer': optim.initializers.Normal(mu=0., sigma=0.05, seed=73),
    }

    regressor = compose.Select('user', 'item')
    regressor += (
        compose.Select('genres') |
        compose.FuncTransformer(split_genres)
    )
    regressor += (
        compose.Select('age') |
        compose.FuncTransformer(bin_age)
    )
    regressor |= facto.HOFMRegressor(**hofm_params)

    model = meta.PredClipper(
        regressor=regressor,
        y_min=1,
        y_max=5
    )

    evaluate(model)

    [25,000] MAE: 0.761379, RMSE: 0.96214 – 0:01:00.299318 – 2.61 MB
    [50,000] MAE: 0.751998, RMSE: 0.951589 – 0:02:02.140861 – 3.08 MB
    [75,000] MAE: 0.750994, RMSE: 0.951616 – 0:03:07.688028 – 3.6 MB
    [100,000] MAE: 0.750849, RMSE: 0.952142 – 0:04:14.242942 – 4.07 MB

As said previously, higher-order interactions are often hard to estimate because of sparsity; that's why we won't spend too much time on them here.

## Field-aware Factorization Machines (FFM)

The field-aware variant of FM (FFM) improved the original method by adding the notion of "fields". A "field" is a group of features that belong to a specific domain (e.g. the "users" field, the "items" field, or the "movie genres" field). FFM restricts itself to pairwise interactions and factorizes separate latent spaces — one per combination of fields (e.g. users/items, users/movie genres, or items/movie genres) — instead of a common one shared by all fields. Therefore, each feature has one latent vector per field it can interact with, so that it can learn the specific effect with each different field.
The model equation is defined by:

$\normalsize \hat{y}(x) = w_{0} + \sum_{j=1}^{p} w_{j} x_{j} + \sum_{j=1}^{p} \sum_{j'=j+1}^{p} \langle \mathbf{v}_{j, f_{j'}}, \mathbf{v}_{j', f_{j}} \rangle x_{j} x_{j'}$

Where $$f_j$$ and $$f_{j'}$$ are the fields corresponding to the $$j$$ and $$j'$$ features, respectively.

    ffm_params = {
        'n_factors': 8,
        'weight_optimizer': optim.SGD(0.01),
        'latent_optimizer': optim.SGD(0.025),
        'intercept': 3,
        'latent_initializer': optim.initializers.Normal(mu=0., sigma=0.05, seed=73),
    }

    regressor = compose.Select('user', 'item')
    regressor += (
        compose.Select('genres') |
        compose.FuncTransformer(split_genres)
    )
    regressor += (
        compose.Select('age') |
        compose.FuncTransformer(bin_age)
    )
    regressor |= facto.FFMRegressor(**ffm_params)

    model = meta.PredClipper(
        regressor=regressor,
        y_min=1,
        y_max=5
    )

    evaluate(model)

    [25,000] MAE: 0.758339, RMSE: 0.959047 – 0:00:19.574694 – 3.04 MB
    [50,000] MAE: 0.749833, RMSE: 0.948531 – 0:00:40.223609 – 3.59 MB
    [75,000] MAE: 0.749631, RMSE: 0.949418 – 0:01:00.277665 – 4.19 MB
    [100,000] MAE: 0.749776, RMSE: 0.950131 – 0:01:19.753804 – 4.75 MB

Note that FFM usually needs to learn a smaller number of latent factors $$k$$ than FM, as each latent vector only deals with one field.

## Field-weighted Factorization Machines (FwFM)

Field-weighted Factorization Machines (FwFM) address FFM's memory issues caused by its large number of parameters, which is on the order of the number of features times the number of fields. Like FFM, FwFM is an extension of FM restricted to pairwise interactions, but instead of factorizing separate latent spaces, it learns a specific weight $$r_{f_j, f_{j'}}$$ for each field combination, modelling the interaction strength. The model equation is defined as:

$\normalsize \hat{y}(x) = w_{0} + \sum_{j=1}^{p} w_{j} x_{j} + \sum_{j=1}^{p} \sum_{j'=j+1}^{p} r_{f_j, f_{j'}} \langle \mathbf{v}_j, \mathbf{v}_{j'} \rangle x_{j} x_{j'}$

    fwfm_params = {
        'n_factors': 10,
        'weight_optimizer': optim.SGD(0.01),
        'latent_optimizer': optim.SGD(0.025),
        'intercept': 3,
        'seed': 73,
    }

    regressor = compose.Select('user', 'item')
    regressor += (
        compose.Select('genres') |
        compose.FuncTransformer(split_genres)
    )
    regressor += (
        compose.Select('age') |
        compose.FuncTransformer(bin_age)
    )
    regressor |= facto.FwFMRegressor(**fwfm_params)

    model = meta.PredClipper(
        regressor=regressor,
        y_min=1,
        y_max=5
    )

    evaluate(model)

    [25,000] MAE: 0.761435, RMSE: 0.962211 – 0:00:27.561068 – 1.18 MB
    [50,000] MAE: 0.754063, RMSE: 0.953248 – 0:00:53.201578 – 1.38 MB
    [75,000] MAE: 0.754729, RMSE: 0.95507 – 0:01:20.259681 – 1.6 MB
    [100,000] MAE: 0.755697, RMSE: 0.956542 – 0:01:46.395669 – 1.79 MB
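Once trained this way, any of the pipelines above can also score a single interaction on demand. A minimal sketch, assuming the FwFM model just trained in place by evaluate and river's predict_one API:

    # Score one (user, item) record with the trained pipeline.
    for x, y in datasets.MovieLens100K():
        print(model.predict_one(x), y)
        break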
2021-07-26 05:17:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34995633363723755, "perplexity": 9812.054892121783}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046152000.25/warc/CC-MAIN-20210726031942-20210726061942-00665.warc.gz"}
https://brilliant.org/problems/log-probability/
# Log probability

Discrete Mathematics, Level 4

Let $$x$$ be chosen at random from the interval $$(0,1)$$. What is the probability that $$\lfloor \log 4x \rfloor-\lfloor \log x \rfloor=0?$$

Notation: $$\lfloor \cdot \rfloor$$ denotes the floor function.
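A sketch of one way to evaluate this, assuming $$\log$$ denotes the base-10 logarithm: for $$x \in [10^{k}, 10^{k+1})$$ with $$k \le -1$$ we have $$\lfloor \log x \rfloor = k$$, and $$\lfloor \log 4x \rfloor = k$$ exactly when $$4x < 10^{k+1}$$, i.e. when $$x < 2.5 \cdot 10^{k}$$. Each such interval therefore contributes length $$2.5 \cdot 10^{k} - 10^{k} = 1.5 \cdot 10^{k}$$, and since $$x$$ is uniform on $$(0,1)$$, the probability is $$\sum_{k=-\infty}^{-1} 1.5 \cdot 10^{k} = 1.5 \cdot \tfrac{1}{9} = \tfrac{1}{6}.$$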
2016-10-28 14:05:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9942228198051453, "perplexity": 223.63842884554452}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988722653.96/warc/CC-MAIN-20161020183842-00254-ip-10-171-6-4.ec2.internal.warc.gz"}
http://mathematica.stackexchange.com/questions/6009/restrict-locator-to-a-certain-graphic-inside-manipulate
# Restrict Locator to a certain graphic inside Manipulate

I would like to restrict the Locator inside Manipulate to a certain graphic. For example, if I display two graphs, I would like to be able to click on the right one and use this information in other places (for example in the title and the left graph). By design, Manipulate assigns the Locator to the first graphic object that it displays (see under Details and Options). How can this be overcome? Here is an example snippet:

    Manipulate[
     sin = Plot[Sin[x], {x, -5, 5}, ImageSize -> Medium,
       Epilog -> {Red, PointSize[Large], Point[p]}];
     cos = Plot[Cos[x], {x, -5, 5},
       PlotLabel -> "I would like to have the locator on this graph!",
       ImageSize -> Medium, Epilog -> {Green, PointSize[Large], Point[p]}];
     Column[{Style[StringForm["Title 1", p], Large], Row[{sin, cos}]}, Center],
     {{p, {0, 0}}, {-5, -5}, {5, 5}, ControlType -> Locator}]

I know this can be done using Dynamic, but then I lose some features of Manipulate, such as "Paste Snapshot", "Make Bookmark" and SaveDefinitions.

-

Edit: I am replacing my first attempt with another. The locator-point in the right graph controls the information in the title as well as the point in the left graph. The point on the right could be made to appear like a locator by replacing Point with an appropriate-looking graphics object.

    Manipulate[
     Column[{Style[StringForm["Title 1", Dynamic@pt], Large],
       Row[{Plot[Cos[x], {x, -5, 5},
          Epilog -> {PointSize[Large],
            Point[Dynamic[{First[pt], Cos[First[pt]]}]]},
          ImageSize -> 300],
         LocatorPane[Dynamic[pt],
          Plot[Sin[x], {x, -5, 5},
           Epilog -> {PointSize[Large],
             Point[Dynamic[{First[pt], Sin[First[pt]]}]]},
           ImageSize -> 300,
           PlotLabel -> "I would like to have the locator on this graph!"],
          Appearance -> None]}]}, Center],
     {{pt, {0, 1}}, None}]

Note: Normally one confines a Locator to a particular region using optional parameters in LocatorPane. However, it is not necessary to do this above because the LocatorPane sits in a single cell in the table. The locator cannot exceed the bounds of that cell.

- I switched the sin and cos graphs, but the intent should be clear. – David Carraher May 24 '12 at 21:52
- +1 Thanks! Do you perhaps have an idea why the title is messed up when you do "Paste snapshot"? (And why it works if I put the Dynamic around Style instead of pt?) – Ajasja May 24 '12 at 21:55
- @Ajasja I'm not sure why you want to use "paste snapshot". What are you trying to paste, and where? – David Carraher May 24 '12 at 22:02
- The paste snapshot is available when you click the + in the upper right corner of the manipulate pane. If I try to evaluate the new cell, then the title does not seem to get the value. I want to use this to bookmark interesting parameter combinations, as shown here under "Bookmarking Combinations of Parameter Values". – Ajasja May 24 '12 at 22:10
- Apparently, the rendering of the PlotLabel depends on activation of Style. Dynamic apparently ensures that it is up to date.
  – David Carraher May 24 '12 at 22:57

-

You could use Overlay:

    Manipulate[
     Column[{Style[StringForm["Title 1", Dynamic[p]], Large],
       Overlay[{
         Dynamic@
          Plot[Cos[x], {x, -5, 5},
           PlotLabel -> "I would like to have the locator on this graph!",
           ImageSize -> Large,
           Epilog -> {Green, PointSize[Large], Point[p]},
           PlotRegion -> {{0.5, 1}, {0, 1}}],
         Dynamic@
          Plot[Sin[x], {x, -5, 5}, ImageSize -> Large,
           Epilog -> {Red, PointSize[Large], Point[p]},
           PlotRegion -> {{0, .5}, {0, 1}},
           PlotLabel -> "Not on this graph!"]
         }, All, 1]}, Center],
     {{p, {0, 0}}, {-5, -1}, {5, 1}, ControlType -> Locator}]

Edit: I added explicit Dynamic around objects depending on p to prevent sporadic loss of mouse tracking, and to hopefully speed up the response.

The documentation for Manipulate states: "In the form {u, Locator}, the value of u is a list giving x and y coordinates. The coordinates refer either to the first graphic in expr, or range from 0 to 1 in each direction across expr." Here I'm relying on the coordinates of the first graphic in the Overlay expression. What I did to arrange the graphics horizontally is simply to set their PlotRegion option to leave either the left or right half of the surrounding space empty. In the Overlay, they then fill the two halves to form what looks like a row.

- Due to the way my other data is laid out, I would like the locator to be on the right. This works by setting Overlay[{sin, cos}], although I have no idea why. Also, with Overlay, Manipulate seems to respond more slowly than with LocatorPane. – Ajasja May 24 '12 at 21:57
- @Ajasja Yes, you're right about the speed. Honestly, I don't like Manipulate. – Jens May 24 '12 at 22:03
- Yes, sadly I had a lot of problems as well. Although with a bit of tweaking and hacking and workarounds I get there eventually. I hope this will improve in mma 9 :) – Ajasja May 24 '12 at 22:07
2014-07-24 21:38:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2741888463497162, "perplexity": 2701.489392275407}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997891953.98/warc/CC-MAIN-20140722025811-00219-ip-10-33-131-23.ec2.internal.warc.gz"}
https://crypto.stackexchange.com/questions/67942/difference-on-montgomery-curve-equation-between-efd-and-rfc7748/67949
# Difference on Montgomery curve equation between EFD and RFC 7748

There is a subtle difference between the two implementations for a Montgomery curve defined in the following two links.

https://hyperelliptic.org/EFD/g1p/auto-montgom-xz.html

    A = X2+Z2
    AA = A^2
    B = X2-Z2
    BB = B^2
    E = AA-BB
    C = X3+Z3
    D = X3-Z3
    DA = D*A
    CB = C*B
    X5 = (DA+CB)^2
    Z5 = X1*(DA-CB)^2
    X4 = AA*BB
    Z4 = E*(BB+a24*E)

https://tools.ietf.org/html/rfc7748

    A = x_2 + z_2
    AA = A^2
    B = x_2 - z_2
    BB = B^2
    E = AA - BB
    C = x_3 + z_3
    D = x_3 - z_3
    DA = D * A
    CB = C * B
    x_3 = (DA + CB)^2
    z_3 = x_1 * (DA - CB)^2
    x_2 = AA * BB
    z_2 = E * (AA + a24 * E)

This AA / BB change on the last line does affect the result of a point multiplication with the same input parameters. Is there a reason for that difference?

- It looks to be a typo in the RFC. When BB is used (as in the EFD and the original P. L. Montgomery paper), the test vectors can be reproduced. Submitted a review comment to the RFC. Errare humanum est. How many existing implementations will fail to inter-operate? – Pierre Mar 11 '19 at 19:47

This is not a bug: it arises from a different choice of sign in the definition of a24 := (a ± 2)/4; the RFC uses -, while the EFD uses +.

The RFC, following the Curve25519 paper: "The constant a24 is (486662 - 2) / 4 = 121665 for curve25519/X25519 and (156326 - 2) / 4 = 39081 for curve448/X448."

The EFD, following Montgomery's paper (paywall-free): "Assumptions: 4*a24=a+2."

This apparent discrepancy was raised by Paul Lambert on the CFRG mailing list during discussion on the draft. It doesn't really matter which one you choose, as long as you're consistent about it!

- Thanks for the explanation. I didn't spot the little difference in the a24 definition between the RFC and the EFD. – Pierre Mar 11 '19 at 22:09

This is not a typo; it is a difference in how the Montgomery doubling formula was derived between the original paper and the curve25519 paper. Both are correct.

To double a point on a Montgomery curve $$y^2 = x^3 + Ax^2 + x\,,$$ one has the identity relating the doubled point $$(x_3, \cdot)$$ and the source point $$(x_1, \cdot)$$: $$x_3 \cdot 4x_1(x_1^2 + Ax_1 + 1) = (x_1^2 - 1)^2\,.$$ The doubled point $$x_3$$ can thus be computed as the fraction $$\frac{(x_1^2 - 1)^2}{4x_1(x_1^2 + Ax_1 + 1)}\,.$$

But to minimize the operation count, and obtain several common subexpressions, we can write $$(x_1^2 - 1)^2$$ as $$(x_1+1)^2(x_1-1)^2$$, $$4x_1$$ as $$(x_1 + 1)^2 - (x_1 - 1)^2$$, and $$x_1^2 + Ax_1 + 1$$ as either $$(x_1-1)^2 + ((A+2)/4)4x_1$$ or $$(x_1+1)^2 + ((A-2)/4)4x_1$$. It is this latter somewhat arbitrary choice that results in there being two almost identical Montgomery doubling formulas.
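A quick sanity check of that last point in Python (illustrative, not from either source): with E = 4XZ, both BB + ((A+2)/4)E and AA + ((A-2)/4)E expand to X^2 + AXZ + Z^2, so the two Z-doubling lines agree when each is paired with its own a24 convention.

    # Verify that the EFD and RFC 7748 doubling variants produce the same Z4.
    p = 2**255 - 19            # curve25519 prime field
    A = 486662                 # curve25519 Montgomery coefficient
    inv4 = pow(4, -1, p)
    a24_plus = (A + 2) * inv4 % p    # EFD / Montgomery convention
    a24_minus = (A - 2) * inv4 % p   # RFC 7748 / Curve25519 convention

    X, Z = 9, 1                # any projective x-coordinate works here
    AA = (X + Z) * (X + Z) % p
    BB = (X - Z) * (X - Z) % p
    E = (AA - BB) % p          # E = 4*X*Z
    assert E * (BB + a24_plus * E) % p == E * (AA + a24_minus * E) % p
    print("both doubling variants agree")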
2020-04-01 15:39:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 13, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7861325740814209, "perplexity": 1806.0244312195587}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370505731.37/warc/CC-MAIN-20200401130837-20200401160837-00128.warc.gz"}
https://www.studypug.com/university-statistics/set-builder-notation
Set builder notation

Intro Lesson: Introduction to Set Builder Notation
i. What are sets?
ii. Why do we need set builder notations?

Example Lessons:
1. Translating Intervals on Number Lines into Set Builder Notation Form. Translate the following intervals into set builder notation form.
2. Evaluating the Domains of Expressions in Set Builder Notation Form. What are the domains for the following expressions? Write the answers in set builder notation form.
   1. $\frac{1}{x}$
   2. $\sqrt x$
   3. $\frac{2}{x^{2} - 4}$

Topic Notes

A set is a collection of elements (usually numbers). E.g. {$x \in R | x$ > 0} should be read as "the set of all x's that are an element of the real numbers such that x is greater than 0."

Special symbols:
- $R$ = real numbers
- $Z$ = integers
- $N$ = natural numbers
- $Q$ = rational numbers
- $C$ = complex numbers
- $I$ = imaginary numbers
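As a worked illustration of the second lesson (not part of the original notes): the domain of $\frac{1}{x}$ written in set builder notation is {$x \in R | x \neq 0$}, since the expression is defined for every real number except where the denominator is zero.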
2023-03-24 10:07:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 10, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5455922484397888, "perplexity": 2521.804835779385}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945279.63/warc/CC-MAIN-20230324082226-20230324112226-00104.warc.gz"}
https://math.stackexchange.com/questions/3032357/how-to-prove-the-complement-of-the-domain-is-complement-of-the-image-if-f-is-bij
# How to Prove The Complement Of The Domain Is Complement Of The Image If f Is Bijective

It seems true that $$f(\overline{X}) = \overline{f(X)}$$ for $$f:A\rightarrow B$$ and $$X$$ any subset of $$A$$ if and only if $$f$$ is bijective. But I couldn't write it in a formal way, like an epsilon argument. It makes sense to me, but the trouble I have is with the formal proof.

- Of course not! Take $f(x)=x\boldsymbol 1_{\mathbb Q\cap [0,1]}$. It is surjective: $[0,1]\to \mathbb Q\cap [0,1]$, but $f\left(\overline{[0,1]\cap \mathbb Q}\right)\neq \overline{f([0,1]\cap \mathbb Q)}$ – Surb Dec 9 '18 at 13:02
- What if it is bijective? Then it is surely true. Let me change the question then – selman özlyn Dec 9 '18 at 13:12
- Now it's not: $f(x)=x\boldsymbol 1_{\mathbb Q\cap [0,1]}-x\boldsymbol 1_{\mathbb R\setminus \mathbb Q\cap [0,1]}$. It's bijective $[0,1]\to (\mathbb Q\cap [0,1])\cup((\mathbb R\setminus \mathbb Q)\cap (0,-1])$, but $f(\overline{[0,1]\cap \mathbb Q})\neq \overline{f([0,1]\cap \mathbb Q)}$. What exactly is your exercise? – Surb Dec 9 '18 at 13:33

Let us formulate matters more precisely: consider two arbitrary sets $$A, B$$ and a map $$f: A \rightarrow B$$. We have the following elementary propositions:

1. $$f$$ is injective if and only if $$(\forall X)(X \subseteq A \implies f(\complement_{A}X) \subseteq \complement_{B}f(X))$$

Proof: Assuming first the injectivity of $$f$$, consider arbitrary $$X \subseteq A$$ and $$y \in f(\complement_{A}X)$$, such that $$y=f(x)$$ with $$x \in A \setminus X$$. If we were to assume by contradiction that $$y \in f(X)$$, it would entail that $$y=f(t)$$ for a certain $$t \in X$$; as $$y=f(x)=f(t)$$ and $$f$$ is injective, we could conclude $$x=t \in X$$, in contradiction to $$x \notin X$$. Hence $$y \in B \setminus f(X)$$ and the inclusion is established.

Assuming conversely that the stated inclusion holds for any subset $$X$$, let us consider arbitrary $$x, y \in A$$ with $$x \neq y$$. This means that $$y \in A \setminus \{x\}$$ and thus by our assumption $$f(\{y\})=\{f(y)\} \subseteq f(A \setminus \{x\})\subseteq B \setminus f(\{x\})=B \setminus \{f(x)\}$$ from which we infer that $$f(x) \neq f(y)$$ and conclude $$f$$ is indeed injective. $$\Box$$

2. $$f$$ is bijective if and only if: $$(\forall X)(X \subseteq A \implies f(\complement_{A}X)=\complement_{B}f(X))$$

Proof: Assume first $$f$$ is bijective and consider arbitrary $$X \subseteq A$$. Bijectivity comprises injectivity, and thus the previous result entails that we have a valid inclusion in the direction specified above. As for the reverse inclusion, we quote the following general result, valid for any function without any special hypotheses: $$(\forall X)(X \subseteq A \implies f(A) \setminus f(X) \subseteq f(A \setminus X))$$ and we bear in mind that since $$f$$ is also assumed to be surjective, we automatically have $$f(A)=B$$.

To establish the reverse implication, assuming the given relation of equality for any subset $$X$$, we first infer by 1. that $$f$$ is injective; its surjectivity we derive by considering the particular case $$X= \emptyset$$, which yields: $$f(A \setminus \emptyset)=f(A)=B \setminus f(\emptyset)=B \setminus \emptyset =B$$ This concludes the proof. $$\Box$$
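As a small brute-force sanity check of proposition 2 on finite sets (illustrative, not part of the proof), the following Python snippet verifies the equality for every bijection between two three-element sets and every subset of the domain:

    from itertools import combinations, permutations

    A = {0, 1, 2}
    B = {'a', 'b', 'c'}

    def image(f, X):
        return {f[x] for x in X}

    def subsets(S):
        return [set(c) for r in range(len(S) + 1) for c in combinations(S, r)]

    for perm in permutations(sorted(B)):      # every bijection A -> B
        f = dict(zip(sorted(A), perm))
        for X in subsets(A):
            assert image(f, A - X) == B - image(f, X)
    print("f(A \\ X) = B \\ f(X) holds for every bijection f and every subset X")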
2019-04-22 20:24:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 42, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9550595879554749, "perplexity": 164.71936302592027}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578582584.59/warc/CC-MAIN-20190422195208-20190422220140-00018.warc.gz"}
https://www.investopedia.com/ask/answers/032415/what-does-cash-conversion-cycle-ccc-tell-us-about-companys-management.asp
The cash conversion cycle (CCC) is a formula in management accounting that measures how efficiently a company's managers are managing its working capital. The CCC measures the length of time between a company's purchase of inventory and the receipt of cash from its accounts receivable. The CCC is used by management to see how long a company's cash remains tied up in its operations.

$$\text{Cash Conversion Cycle} = \text{Days Inventory Outstanding} + \text{Days Sales Outstanding} - \text{Days Payable Outstanding}$$

where:

- Days Inventory Outstanding = average number of days the company holds its inventory before selling it
- Days Sales Outstanding = number of days of average sales the company currently has outstanding
- Days Payable Outstanding = ratio indicating the average number of days the company takes to pay its bills

When a company – or its management – takes an extended period of time to collect outstanding accounts receivable, has too much inventory on hand, or pays its expenses too quickly, it lengthens the CCC. A longer CCC means it takes a longer time to generate cash, which can mean insolvency for small companies.

When a company collects outstanding payments quickly, correctly forecasts inventory needs, or pays its bills slowly, it shortens the CCC. A shorter CCC means the company is healthier. Additional money can then be used to make additional purchases or pay down outstanding debt.

When a manager has to pay its suppliers quickly, it's known as a pull on liquidity, which is bad for the company. When a manager cannot collect payments quickly enough, it's known as a drag on liquidity, which is also bad for the company.
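The arithmetic is straightforward; here is a short Python illustration using hypothetical figures (the numbers below are not from any real company):

    # Illustrative CCC computation with hypothetical inputs.
    days_inventory_outstanding = 60   # avg days inventory is held before sale
    days_sales_outstanding = 40       # avg days to collect receivables
    days_payable_outstanding = 30     # avg days taken to pay suppliers

    ccc = days_inventory_outstanding + days_sales_outstanding - days_payable_outstanding
    print(f"Cash conversion cycle: {ccc} days")   # 70 days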
2019-06-25 06:51:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8533207774162292, "perplexity": 7155.379264815893}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999800.5/warc/CC-MAIN-20190625051950-20190625073950-00032.warc.gz"}
http://dave.thehorners.com/aboutme/80-random-thoughts/453-posthuman
Dave Horner's Website - Yet another perspective on things...

# posthuman

Monday, 17 December 2012 21:44

Are You Living in a Computer Simulation?
Elsewhere online | anthropic-principle.com
Kurzweil Accelerating Intelligence
Transhumanism - Wikipedia, the free encyclopedia

Last Updated on Tuesday, 02 September 2014 07:58
2017-03-25 23:40:24
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9923268556594849, "perplexity": 2570.5665985911005}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189088.29/warc/CC-MAIN-20170322212949-00378-ip-10-233-31-227.ec2.internal.warc.gz"}
https://people.maths.bris.ac.uk/~matyd/GroupNames/163/C9xC7sC3.html
## G = C9×C7⋊C3, order 189 = 3³·7

### Direct product of C9 and C7⋊C3

Aliases: C9×C7⋊C3, C63⋊1C3, C21.1C3², C7⋊C9⋊4C3, C7⋊1(C3×C9), C3.1(C3×C7⋊C3), (C3×C7⋊C3).3C3, SmallGroup(189,3)

Series:
- Derived series: C1 — C7 — C9×C7⋊C3
- Chief series: C1 — C7 — C21 — C3×C7⋊C3 — C9×C7⋊C3
- Lower central: C7 — C9×C7⋊C3
- Upper central: C1 — C9

Generators and relations for C9×C7⋊C3:
G = < a,b,c | a^9=b^7=c^3=1, ab=ba, ac=ca, cbc^-1=b^4 >

Smallest permutation representation of C9×C7⋊C3: on 63 points. Generators in S63:

    (1 2 3 4 5 6 7 8 9)(10 11 12 13 14 15 16 17 18)(19 20 21 22 23 24 25 26 27)(28 29 30 31 32 33 34 35 36)(37 38 39 40 41 42 43 44 45)(46 47 48 49 50 51 52 53 54)(55 56 57 58 59 60 61 62 63)
    (1 48 24 12 34 44 56)(2 49 25 13 35 45 57)(3 50 26 14 36 37 58)(4 51 27 15 28 38 59)(5 52 19 16 29 39 60)(6 53 20 17 30 40 61)(7 54 21 18 31 41 62)(8 46 22 10 32 42 63)(9 47 23 11 33 43 55)
    (1 7 4)(2 8 5)(3 9 6)(10 60 45)(11 61 37)(12 62 38)(13 63 39)(14 55 40)(15 56 41)(16 57 42)(17 58 43)(18 59 44)(19 35 46)(20 36 47)(21 28 48)(22 29 49)(23 30 50)(24 31 51)(25 32 52)(26 33 53)(27 34 54)

Magma / GAP definitions of the permutation group:

    G:=sub<Sym(63)| (1,2,3,4,5,6,7,8,9)(10,11,12,13,14,15,16,17,18)(19,20,21,22,23,24,25,26,27)(28,29,30,31,32,33,34,35,36)(37,38,39,40,41,42,43,44,45)(46,47,48,49,50,51,52,53,54)(55,56,57,58,59,60,61,62,63), (1,48,24,12,34,44,56)(2,49,25,13,35,45,57)(3,50,26,14,36,37,58)(4,51,27,15,28,38,59)(5,52,19,16,29,39,60)(6,53,20,17,30,40,61)(7,54,21,18,31,41,62)(8,46,22,10,32,42,63)(9,47,23,11,33,43,55), (1,7,4)(2,8,5)(3,9,6)(10,60,45)(11,61,37)(12,62,38)(13,63,39)(14,55,40)(15,56,41)(16,57,42)(17,58,43)(18,59,44)(19,35,46)(20,36,47)(21,28,48)(22,29,49)(23,30,50)(24,31,51)(25,32,52)(26,33,53)(27,34,54)>;

    G:=Group( (1,2,3,4,5,6,7,8,9)(10,11,12,13,14,15,16,17,18)(19,20,21,22,23,24,25,26,27)(28,29,30,31,32,33,34,35,36)(37,38,39,40,41,42,43,44,45)(46,47,48,49,50,51,52,53,54)(55,56,57,58,59,60,61,62,63), (1,48,24,12,34,44,56)(2,49,25,13,35,45,57)(3,50,26,14,36,37,58)(4,51,27,15,28,38,59)(5,52,19,16,29,39,60)(6,53,20,17,30,40,61)(7,54,21,18,31,41,62)(8,46,22,10,32,42,63)(9,47,23,11,33,43,55), (1,7,4)(2,8,5)(3,9,6)(10,60,45)(11,61,37)(12,62,38)(13,63,39)(14,55,40)(15,56,41)(16,57,42)(17,58,43)(18,59,44)(19,35,46)(20,36,47)(21,28,48)(22,29,49)(23,30,50)(24,31,51)(25,32,52)(26,33,53)(27,34,54) );

    G=PermutationGroup([(1,2,3,4,5,6,7,8,9),(10,11,12,13,14,15,16,17,18),(19,20,21,22,23,24,25,26,27),(28,29,30,31,32,33,34,35,36),(37,38,39,40,41,42,43,44,45),(46,47,48,49,50,51,52,53,54),(55,56,57,58,59,60,61,62,63)], [(1,48,24,12,34,44,56),(2,49,25,13,35,45,57),(3,50,26,14,36,37,58),(4,51,27,15,28,38,59),(5,52,19,16,29,39,60),(6,53,20,17,30,40,61),(7,54,21,18,31,41,62),(8,46,22,10,32,42,63),(9,47,23,11,33,43,55)], [(1,7,4),(2,8,5),(3,9,6),(10,60,45),(11,61,37),(12,62,38),(13,63,39),(14,55,40),(15,56,41),(16,57,42),(17,58,43),(18,59,44),(19,35,46),(20,36,47),(21,28,48),(22,29,49),(23,30,50),(24,31,51),(25,32,52),(26,33,53),(27,34,54)])

C9×C7⋊C3 is a maximal subgroup of C9⋊5F7

45 conjugacy classes

| class | 1 | 3A | 3B | 3C···3H | 7A | 7B | 9A···9F | 9G···9R | 21A | 21B | 21C | 21D | 63A···63L |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| order | 1 | 3 | 3 | 3···3 | 7 | 7 | 9···9 | 9···9 | 21 | 21 | 21 | 21 | 63···63 |
| size | 1 | 1 | 1 | 7···7 | 3 | 3 | 1···1 | 7···7 | 3 | 3 | 3 | 3 | 3···3 |

45 irreducible representations

| dim | 1 | 1 | 1 | 1 | 1 | 3 | 3 | 3 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| type | + | | | | | | | |
| image | C1 | C3 | C3 | C3 | C9 | C7⋊C3 | C3×C7⋊C3 | C9×C7⋊C3 |
| kernel | C9×C7⋊C3 | C7⋊C9 | C63 | C3×C7⋊C3 | C7⋊C3 | C9 | C3 | C1 |
| # reps | 1 | 4 | 2 | 2 | 18 | 2 | 4 | 12 |

Matrix representation of C9×C7⋊C3 in GL(3,𝔽127), generated by:

    [ 22   0   0 ]   [ 104 105   1 ]   [ 19   0   0 ]
    [  0  22   0 ]   [   1   0   0 ]   [ 37 108 108 ]
    [  0   0  22 ]   [   0   1   0 ]   [  0  19   0 ]
[22,0,0,0,22,0,0,0,22],[104,1,0,105,0,1,1,0,0],[19,37,0,0,108,19,0,108,0] >; C9×C7⋊C3 in GAP, Magma, Sage, TeX C_9\times C_7\rtimes C_3 % in TeX G:=Group("C9xC7:C3"); // GroupNames label G:=SmallGroup(189,3); // by ID G=gap.SmallGroup(189,3); # by ID G:=PCGroup([4,-3,-3,-3,-7,29,867]); // Polycyclic G:=Group<a,b,c|a^9=b^7=c^3=1,a*b=b*a,a*c=c*a,c*b*c^-1=b^4>; // generators/relations Export ׿ × 𝔽
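As a quick cross-check of the presentation outside GAP/Magma/Sage (my addition; sympy is an assumed dependency, not something the page itself uses), coset enumeration recovers the order 189 from the relators alone:

    # Recover |G| = 189 from the presentation
    # G = < a,b,c | a^9 = b^7 = c^3 = 1, ab = ba, ac = ca, c b c^-1 = b^4 >.
    from sympy.combinatorics.free_groups import free_group
    from sympy.combinatorics.fp_groups import FpGroup

    F, a, b, c = free_group("a b c")
    G = FpGroup(F, [
        a**9, b**7, c**3,        # orders of the generators
        a*b*a**-1*b**-1,         # ab = ba
        a*c*a**-1*c**-1,         # ac = ca
        c*b*c**-1*b**-4,         # c b c^-1 = b^4
    ])
    print(G.order())  # expected: 189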
https://conferences.iaea.org/event/239/
# Technical Meeting on State-of-the-art Thermal Hydraulics of Fast Reactors

26-30 September 2022, C.R. ENEA, Camugnano, Italy (Europe/Vienna timezone)

Meeting POSTPONED. Call for Papers and Extended Abstracts is now open. DEADLINE: TBC

The purpose of the event is to discuss experiences and the latest innovations and technological challenges related to the thermal hydraulics of fast reactors. The main objectives of the meeting are to:

• Promote and facilitate the exchange of information on the thermal hydraulics of fast reactors at the national and international levels;
• Present and discuss the current status of R&D in this field;
• Discuss and identify R&D needs and gaps to assess the future requirements in the field, which should eventually lead to efforts being concentrated in the key lacking areas;
• Enable the integration of research on thermal hydraulics in Member States to support the development of new technologies that have a higher level of technological readiness;
• Provide recommendations to the IAEA for future joint efforts and coordinated research activities (if required) in the field; and
• Prepare a reference document summarizing the work presented by the participants, including the findings of the study in the standard IAEA publications format.

IMPORTANT: The Call for Papers is now open. Contributions selected for oral presentation may now submit full papers as specified in the Event Information Sheet. Contributions selected for poster presentation may now submit extended abstracts (2-3 pages) using the same full paper template.

Banner image reference: INTERNATIONAL ATOMIC ENERGY AGENCY, Benchmark Analysis of EBR-II Shutdown Heat Removal Tests, IAEA-TECDOC-1819, IAEA, Vienna (2017).

Attachments: Information-Sheet.pdf, EVT2004020_Form-A_final.docx, EVT2004020_Form-B_final.docx, EVT2004020_Form-C_final.docx, TH-TM_PaperTemplate.docx

# IAEA Contacts

### Scientific Secretaries:

Nuclear Power Technology Development Section | Division of Nuclear Power
Department of Nuclear Energy | International Atomic Energy Agency

Mr Chirayu Batra
Nuclear Power Technology Development Section | Division of Nuclear Power
Department of Nuclear Energy | International Atomic Energy Agency
https://modelingwithdata.org/arch/00000083.htm
### Tip 33: Replace shell commands with their outputs

level: you want something more than pipes
purpose: use outputs as inputs to the next step

Part of a series of tips on POSIX and C. Start from the tip intro page, or get 21st Century C, the book based on this series.

Last time, I gave you a four-item list of things your shell can do. Number three was expansions: replacing certain blobs of text with other text.

Variables are a simple expansion. If you set a variable like onething="another thing" on the command line [C shell users: set onething="another thing"], then when you later type echo $onething, then another thing will print to screen.

Shell variables are a convenience for you to use while working at the command prompt or throwing together a quick script. They are stupendously easy to confuse with environment variables, which are sent to new processes and read via a simple set of C functions. Have a look at Appendix A of Modeling with Data for details on turning shell variables into environment variables. Also, your shell will require that there be no spaces on either side of the =, which will annoy you at some point. This rule is for the purposes of supporting a feature that is mostly useful for makefiles. But there you have it: our easiest and most basic substitution of one thing for another.

Isn't it conveniently nifty that the $ is so heavily used in the shell, and yet is entirely absent from C code, so that it's easy to write shell scripts that act on C code (like in Tip #9), and C code to produce shell scripts? It's as if the UNIX shell and C were written by the same people to work together.

For our next expansion, how about the backtick, which on a typical keyboard shares a key with the ~ and is not the more vertical-looking single tick '. The vertical tick indicates that you don't want expansions done: echo '$onething' will actually print $onething.

The backtick replaces the command you give with the output from the command, doing so macro-style, where the command text is replaced in place with the output text. Here's an example in which we count lines of C code by how many lines have a ;, ), or } on them; given that lines of source code is a lousy metric for most purposes anyway, this is as good a means as any, and has the bonus of being one line of shell code:

    #count lines with a ), }, or ;, and let that count be named Lines.
    Lines=`grep '[)};]' *.c | wc -l`

    #count how many lines there are in a directory listing; name it Files.
    Files=`ls *.c | wc -l`

    echo files=$Files and lines=$Lines

    #Arithmetic expansion is a double-paren.
    #In bash, the remainder is truncated; more on this later.
    echo lines/file = $(($Lines/$Files))

    #Or, use those variables in a here script.
    #By setting scale=3, answers are printed to 3 decimal places.
    bc << ---
    scale=3
    $Lines/$Files
    ---

OK, so now you've met variable substitution, command substitution, and in the sample code I touched on arithmetic substitution for quick desk calculator math. That's what I deem to be the low-hanging fruit; I leave you to read the manual on history expansion, brace expansion, tilde expansion, parameter expansion, word splitting, pathname expansion, glob expansion, and the difference between " " and ' '.
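An editor's aside, not from the original tip: POSIX shells also accept the $(command) form of command substitution, which does the same macro-style replacement as the backtick but nests cleanly. A sketch of the same line count written that way:

    #Same counts as above, using the nestable $( ) form.
    Lines=$(grep '[)};]' *.c | wc -l)
    Files=$(ls *.c | wc -l)
    echo lines/file = $(($Lines/$Files))

    #Nesting works without backslash gymnastics:
    echo "newest C file: $(ls -t $(pwd)/*.c | head -1)"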
https://zbmath.org/?q=ut%3Afundamental+soloution
## Cauchy problem for fractional diffusion equations. (English) Zbl 1068.35037

Equations of the form

$(D^{(\alpha)}_t u)(t,x) - Bu(t,x) = f(t,x), \quad t\in[0,\tau], \quad 0<\alpha<1, \; x\in\mathbb{R}^n$

where

$(D^{(\alpha)}_t u)(t,x) = \frac{1}{\Gamma(1-\alpha)}\left[\frac{\partial}{\partial t}\int_0^t (t-\zeta)^{-\alpha} u(\zeta,x)\,d\zeta - t^{-\alpha} u(0,x)\right]$

$B = \sum_{i,j=1}^{n} a_{ij}(x)\frac{\partial^2}{\partial x_i\,\partial x_j} + \sum_{j=1}^{n} b_j(x)\frac{\partial}{\partial x_j} + c(x)$

are considered here. The fundamental solution is studied via a Green matrix. The elements of the Green matrix are expressed in terms of Fox's $H$-functions. Estimates of the elements of the Green matrix are also presented.

### MSC:

35K15 Initial value problems for second-order parabolic equations
26A33 Fractional derivatives and integrals

### Keywords:

Fox's $H$-function; fundamental solution
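One clarifying remark appended here (not part of the zbMATH review): when $u(\cdot,x)$ is smooth in $t$, the regularized derivative above reduces to the Caputo (Dzhrbashyan-Caputo) fractional derivative. A short LaTeX derivation under that smoothness assumption:

    % Substitute s = t - \zeta and differentiate under the integral sign:
    \[
    \frac{\partial}{\partial t}\int_0^t (t-\zeta)^{-\alpha}\,u(\zeta,x)\,d\zeta
      = t^{-\alpha}u(0,x)
      + \int_0^t (t-\zeta)^{-\alpha}\,
        \frac{\partial u}{\partial \zeta}(\zeta,x)\,d\zeta ,
    \]
    % so the bracketed expression collapses to the Caputo form:
    \[
    (D^{(\alpha)}_t u)(t,x)
      = \frac{1}{\Gamma(1-\alpha)}\int_0^t (t-\zeta)^{-\alpha}\,
        \frac{\partial u}{\partial \zeta}(\zeta,x)\,d\zeta .
    \]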
https://m.hanspub.org/journal/paper/29297
# Development of a Field Verification System for Online Monitoring Devices of Metal Oxide Arresters

Abstract: In order to solve the problem of field verification of online monitoring devices for metal oxide arresters (MOAs), verification principles for the resistive current, capacitive current and total current are proposed based on the incremental injection method, and a verification system is developed accordingly. The results of laboratory tests and field measurements show that the output current error of the verification system is within 0.5% and the phase error is within 0.1°, which meets the accuracy requirements of field verification. The verification system also solves the difficulty of keeping the output current required in the field at the same frequency and phase as the PT secondary-side voltage, which provides a reference for the field verification of online monitoring devices on capacitive equipment.

1. Overview

The performance of an MOA online monitoring device determines whether the condition of the MOA is assessed correctly; both false alarms and missed alarms occur from time to time and affect the safe operation of the power system [10] [11] [12] [13]. In routine maintenance work, considering the disassembly involved, safety, transport cost and the time required [14] [15] [16], the microammeter is usually calibrated under offline conditions, while the MOA online monitoring devices that have emerged in recent years have had no effective means of field verification once placed in service. Against this background, it is of great significance to study techniques for live field verification of MOA online monitoring devices and to develop the corresponding instruments.

2. The Incremental Injection Method and Its Verification Principle

$I=(I_R,I_C)=(I_0\cos\theta,\ I_0\sin\theta)$ (1)

$I'=(I'_R,I'_C)=(I'_0\cos(\beta_0+\beta),\ I'_0\sin(\beta_0+\beta))$ (2)

2.1. Verification Principle for the Resistive Current

$I'=(I'_0\cos\beta,\ I'_0\sin\beta)$ (3)

$I+I'=(I_0\cos\theta+I'_0\cos\beta,\ I_0\sin\theta+I'_0\sin\beta)$ (4)

$\begin{cases}\cos 0.1^\circ\approx 0.9999985\\ \sin 0.1^\circ\approx 0.00174\end{cases}$ (5)

$\begin{cases}\cos 0.1^\circ\approx 1\\ \sin 0.1^\circ\approx 0\end{cases}$ (6)

$I+I'=(I_0\cos\theta+I'_0,\ I_0\sin\theta)$ (7)

2.2. Verification Principle for the Capacitive Current

$I'=(I'_0\cos(90^\circ+\beta),\ I'_0\sin(90^\circ+\beta))$ (8)

$I'=(-I'_0\sin\beta,\ I'_0\cos\beta)$ (9)

$I+I'=(I_0\cos\theta-I'_0\sin\beta,\ I_0\sin\theta+I'_0\cos\beta)$ (10)

$I+I'=(I_0\cos\theta,\ I_0\sin\theta+I'_0)$ (11)

2.3. Verification Principle for the Total Current

Figure 1. A vector plot of the injected current and the linkage current

$I_1=\sqrt{I_0^2+I_0'^2+2I_0 I_0'\cos(\theta-\alpha)}$ (12)

$\beta=\theta-\alpha$ (13)

$I_1=\sqrt{I_0^2+I_0'^2+2I_0 I_0'\cos\beta}$ (14)

$\beta\le 0.1^\circ,\quad \cos\beta\approx 1$ (15)

$I_1=I_0+I_0'$ (16)

3. Overall Design of the Verification System

3.1. System Composition

Figure 2. Schematic diagram of the system

3.2. Design of the Frequency-Tracking Unit

1) Phase-locked tracking

Figure 3. Diagram of phase-lock tracking

2) Current source

Figure 4. Diagram of the current output

4. Laboratory Function Tests and Field Application

4.1. Laboratory Verification

Figure 5. Diagram of accuracy tests of resistive current and capacitive current
Table 1. Test data of the resistive current
Table 2. Test data of the capacitive current
Figure 6. Diagram of accuracy tests of full current
Table 3. Test data of the full current

4.2. Field Verification

Figure 7. The wiring connection and test loop of the field verification of the MOA online monitoring devices
Figure 8. Field verification: (a) the verification platform; (b) MOA linkage current transducer
Table 4. Increment analysis and data verification of the resistive current
Table 5. Increment analysis and data verification of the full current

5. Conclusions

References:
[1] Shirakawa, S. (1988) Maintenance of Surge Arrester by a Portable Arrester Leakage Current Detector. IEEE Transactions on Power Delivery, 3, 998-1003. https://doi.org/10.1109/61.193879
[2] Laurentys, C.A. and Almeida (2009) Intelligent Thermographic Diagnostic Applied to Surge Arresters: A New Approach. IEEE Transactions on Power Delivery, 24, 751-757. https://doi.org/10.1109/TPWRD.2009.2013375
[3] Wong, K.L. (2006) Electromagnetic Emission Based Monitoring Technique for Polymer ZnO Surge Arresters. IEEE Transactions on Dielectrics and Electrical Insulation, 13, 181-190. https://doi.org/10.1109/TDEI.2006.1593416
[4] Khodsuz, M. and Mirzaie, M. (2015) Harmonics Ratios of Resistive Leakage Current as Metal Oxide Surge Arresters Diagnostic Tools. Measurement, 70, 148-155. https://doi.org/10.1016/j.measurement.2015.03.048
[5] Zhu, H.X. and Raghuveer, M.R. (2001) Influence of Representation Model and Voltage Harmonics on Metal Oxide Surge Arrester Diagnostics. IEEE Transactions on Power Delivery, 16, 599-603. https://doi.org/10.1109/61.956743
[6] Xu, Z., Zhao, L., Ding, A., et al. (2013) A Current Orthogonality Method to Extract Resistive Leakage Current of MOSA. IEEE Transactions on Power Delivery, 28, 93-101. https://doi.org/10.1109/TPWRD.2012.2221145
[7] Han, Y., Li, Z., Zheng, H. and Guo, W. (2016) A Decomposition Method for the Total Leakage Current of MOA Based on Multiple Linear Regression. IEEE Transactions on Power Delivery, 31, 1422-1428. https://doi.org/10.1109/TPWRD.2015.2462071
[8] Khodsuz, M. and Mirzaie, M. (2016) An Improved Time-Delay Addition Method for MOSA Resistive Leakage Current Extraction under Applied Harmonic Voltage. Measurement, 77, 327-334. https://doi.org/10.1016/j.measurement.2015.09.027
[9] Heinrich, C. and Hinrichsen, V. (2001) Diagnostics and Monitoring of Metal-Oxide Surge Arresters in High-Voltage Networks: Comparison of Existing and Newly Developed Procedures. IEEE Transactions on Power Delivery, 16, 138-143. https://doi.org/10.1109/61.905619
[10] Liu, H., Zhang, L.J., Ou, C.L., et al. (2009) Development of a Traceable Calibration Device for Zinc Oxide Arrester Testers. Hunan Electric Power, 29(6), 1-3. (In Chinese)
[11] Khodsuz, M., Mirzaie, M. and Seyyedbarzegar, S. (2015) Metal Oxide Surge Arrester Condition Monitoring Based on Analysis of Leakage Current Components. International Journal of Electrical Power & Energy Systems, 66, 188-193. https://doi.org/10.1016/j.ijepes.2014.10.052
[12] Stojanovic, Z.N. and Stojkovic, Z.M. (2013) Evaluation of MOSA Condition Using Leakage Current Method. International Journal of Electrical Power & Energy Systems, 52, 87-95. https://doi.org/10.1016/j.ijepes.2013.03.027
[13] Christodoulou, C.A., Avgerinos, M.V., Ekonomou, L., et al. (2009) Measurement of the Resistive Leakage Current in Surge Arresters under Artificial Rain Test and Impulse Voltage Subjection. IET Science, Measurement & Technology, 3, 256-262. https://doi.org/10.1049/iet-smt:20080123
[14] Coffeen, L.T. and McBride, J.E. (1991) High-Voltage AC Resistive Current Measurements Using a Computer-Based Digital Watts Technique. IEEE Transactions on Power Delivery, 6, 550-556. https://doi.org/10.1109/61.131111
[15] Lundquist, J., Stenstrom, L., Schej, A., et al. (1990) New Method for Measurement of the Resistive Leakage Currents of Metal-Oxide Surge Arresters in Service. IEEE Transactions on Power Delivery, 5, 1811-1822. https://doi.org/10.1109/61.103677
[16] Karawita, C. and Raghuveer, M.R. (2006) On Site MOSA Condition Assessment—A New Approach. IEEE Transactions on Power Delivery, 21, 1273-1277. https://doi.org/10.1109/TPWRD.2005.860264
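To make the small-angle step in Section 2.3 (Eqs. (14)-(16)) concrete, here is a quick numerical sketch. It is my illustration, not from the paper, and the example amplitudes are hypothetical (the paper's measured values are in Tables 4-5):

    import numpy as np

    # Small-angle check of Eq. (14) vs. Eq. (16): with |beta| <= 0.1 deg,
    # I1 = sqrt(I0^2 + I0p^2 + 2*I0*I0p*cos(beta)) is essentially I0 + I0p.
    I0, I0p = 1.0e-3, 0.5e-3       # linkage and injected current amplitudes, A (hypothetical)
    beta = np.deg2rad(0.1)         # worst-case phase error allowed by Eq. (15)

    I1 = np.sqrt(I0**2 + I0p**2 + 2 * I0 * I0p * np.cos(beta))
    rel_err = abs(I1 - (I0 + I0p)) / (I0 + I0p)
    print(f"relative error of the I1 = I0 + I0' approximation: {rel_err:.2e}")
    # prints roughly 3e-07, i.e. far below the 0.5% amplitude accuracy target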
https://solvedlib.com/n/simple-model-for-time-dependent-population-variations-assumes,2183207
# Simple model for time-dependent population variations

###### Question:

A simple model for time-dependent population variations assumes that an organism will have an age-dependent death rate μ(a) ≥ 0, where a ≥ 0 is the current age of each organism. If the population density is defined as P(a, t), so that the number of organisms with ages in the range [a, a + Δa] at time t is given by P(a, t)Δa, then it can be shown that P(a, t) is governed by the differential equation

∂P/∂t + ∂P/∂a = −μ(a)P    when t > 0.    (1)

To determine how the population age-profile evolves, we need to specify an initial age distribution P(a, 0) at t = 0 and also a birth rate P(0, t) for all t > 0.

1. The first step towards solving this problem is to simplify the analysis by nondimensionalising the variables. Introduce scaled variables α = a/a_max, τ = t/T and ρ = P/P_max, where T is a typical timescale of a lifespan, and hence change variables in equation (1) to derive the partial differential equation satisfied by ρ(α, τ), writing λ(α) for the resulting dimensionless death rate. Comment on why, so long as μ is nonzero, it can be assumed that the maximum value of λ is one. What is the effect of changing the maximum value of λ?
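A minimal sketch of the requested change of variables (my working; the problem statement is partly garbled in the source, so the scalings α = a/a_max, τ = t/T, ρ = P/P_max and the name λ are assumptions):

    % Chain rule under a = a_max*alpha, t = T*tau, P = P_max*rho:
    %   dP/dt = (P_max/T) * drho/dtau,   dP/da = (P_max/a_max) * drho/dalpha.
    % Substituting into (1) and multiplying through by T/P_max gives
    \[
      \frac{\partial\rho}{\partial\tau}
      + \frac{T}{a_{\max}}\,\frac{\partial\rho}{\partial\alpha}
      = -\lambda(\alpha)\,\rho,
      \qquad \lambda(\alpha) := T\,\mu(a_{\max}\alpha).
    \]
    % Rescaling T rescales lambda by the same constant, so whenever mu is
    % nonzero, T can be chosen to make max(lambda) = 1; changing the maximum
    % of lambda therefore just changes the unit in which time is measured.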
https://www.nengo.ai/nengo/frontend_api.html
# Nengo frontend API

## Nengo Objects

nengo.Network
    A network contains ensembles, nodes, connections, and other networks.
nengo.Ensemble
    A group of neurons that collectively represent a vector.
nengo.ensemble.Neurons
    An interface for making connections directly to an ensemble's neurons.
nengo.Node
    Provide non-neural inputs to Nengo objects and process outputs.
nengo.Connection
    Connects two objects together.
nengo.connection.LearningRule
    An interface for making connections to a learning rule.
nengo.Probe
    A probe is an object that collects data from the simulation.

class nengo.Network(label=None, seed=None, add_to_container=None)

A network contains ensembles, nodes, connections, and other networks.

A network is primarily used for grouping together related objects and connections for visualization purposes. However, you can also use networks as a nice way to reuse network creation code.

To group together related objects that you do not need to reuse, you can create a new Network and add objects in a with block. For example:

    network = nengo.Network()
    with network:
        with nengo.Network(label="Vision"):
            v1 = nengo.Ensemble(nengo.LIF(100), dimensions=2)
        with nengo.Network(label="Motor"):
            sma = nengo.Ensemble(nengo.LIF(100), dimensions=2)
        nengo.Connection(v1, sma)

To reuse a group of related objects, you can create a new subclass of Network, and add objects in the __init__ method. For example:

    class OcularDominance(nengo.Network):
        def __init__(self):
            self.column = nengo.Ensemble(nengo.LIF(100), dimensions=2)

    network = nengo.Network()
    with network:
        left_eye = OcularDominance()
        right_eye = OcularDominance()
        nengo.Connection(left_eye.column, right_eye.column)

Parameters:
label : str, optional (Default: None)
    Name of the network.
seed : int, optional (Default: None)
    Random number seed that will be fed to the random number generator. Setting the seed makes the network's build process deterministic.
add_to_container : bool, optional (Default: None)
    Determines if this network will be added to the current container. If None, this network will be added to the network at the top of the Network.context stack unless the stack is empty.

Attributes:
connections : list
    Connection instances in this network.
ensembles : list
    Ensemble instances in this network.
label : str
    Name of this network.
networks : list
    Network instances in this network.
nodes : list
    Node instances in this network.
probes : list
    Probe instances in this network.
seed : int
    Random seed used by this network.

static add(obj)
    Add the passed object to Network.context.
static default_config()
    Constructs a Config object for setting defaults.

Properties:
all_objects (list)
    All objects in this network and its subnetworks.
all_ensembles (list)
    All ensembles in this network and its subnetworks.
all_nodes (list)
    All nodes in this network and its subnetworks.
all_networks (list)
    All networks in this network and its subnetworks.
all_connections (list)
    All connections in this network and its subnetworks.
all_probes (list)
    All probes in this network and its subnetworks.
config (Config)
    Configuration for this network.
n_neurons (int)
    Number of neurons in this network, including subnetworks.

class nengo.Ensemble(n_neurons, dimensions, radius=Default, encoders=Default, intercepts=Default, max_rates=Default, eval_points=Default, n_eval_points=Default, neuron_type=Default, gain=Default, bias=Default, noise=Default, normalize_encoders=Default, label=Default, seed=Default)

A group of neurons that collectively represent a vector.
Parameters:
n_neurons : int
    The number of neurons.
dimensions : int
    The number of representational dimensions.
radius : int, optional (Default: 1.0)
    The representational radius of the ensemble.
encoders : Distribution or (n_neurons, dimensions) array_like, optional (Default: UniformHypersphere(surface=True))
    The encoders used to transform from representational space to neuron space. Each row is a neuron's encoder; each column is a representational dimension.
intercepts : Distribution or (n_neurons,) array_like, optional (Default: nengo.dists.Uniform(-1.0, 1.0))
    The point along each neuron's encoder where its activity is zero. If e is the neuron's encoder, then the activity will be zero when dot(x, e) <= c, where c is the given intercept.
max_rates : Distribution or (n_neurons,) array_like, optional (Default: nengo.dists.Uniform(200, 400))
    The activity of each neuron when the input signal x is magnitude 1 and aligned with that neuron's encoder e; i.e., when dot(x, e) = 1.
eval_points : Distribution or (n_eval_points, dims) array_like, optional (Default: nengo.dists.UniformHypersphere())
    The evaluation points used for decoder solving, spanning the interval (-radius, radius) in each dimension, or a distribution from which to choose evaluation points.
n_eval_points : int, optional (Default: None)
    The number of evaluation points to be drawn from the eval_points distribution. If None, then a heuristic is used to determine the number of evaluation points.
neuron_type : NeuronType, optional (Default: nengo.LIF())
    The model that simulates all neurons in the ensemble (see NeuronType).
gain : Distribution or (n_neurons,) array_like (Default: None)
    The gains associated with each neuron in the ensemble. If None, then the gain will be solved for using max_rates and intercepts.
bias : Distribution or (n_neurons,) array_like (Default: None)
    The biases associated with each neuron in the ensemble. If None, then the bias will be solved for using max_rates and intercepts.
noise : Process, optional (Default: None)
    Random noise injected directly into each neuron in the ensemble as current. A sample is drawn for each individual neuron on every simulation step.
normalize_encoders : bool, optional (Default: True)
    Indicates whether the encoders should be normalized.
label : str, optional (Default: None)
    A name for the ensemble. Used for debugging and visualization.
seed : int, optional (Default: None)
    The seed used for random number generation.

Attributes:
bias : Distribution or (n_neurons,) array_like or None
    The biases associated with each neuron in the ensemble.
dimensions : int
    The number of representational dimensions.
encoders : Distribution or (n_neurons, dimensions) array_like
    The encoders, used to transform from representational space to neuron space. Each row is a neuron's encoder, each column is a representational dimension.
eval_points : Distribution or (n_eval_points, dims) array_like
    The evaluation points used for decoder solving, spanning the interval (-radius, radius) in each dimension, or a distribution from which to choose evaluation points.
gain : Distribution or (n_neurons,) array_like or None
    The gains associated with each neuron in the ensemble.
intercepts : Distribution or (n_neurons) array_like or None
    The point along each neuron's encoder where its activity is zero. If e is the neuron's encoder, then the activity will be zero when dot(x, e) <= c, where c is the given intercept.
label : str or None
    A name for the ensemble. Used for debugging and visualization.
max_rates : Distribution or (n_neurons,) array_like or None
    The activity of each neuron when dot(x, e) = 1, where e is the neuron's encoder.
n_eval_points : int or None
    The number of evaluation points to be drawn from the eval_points distribution. If None, then a heuristic is used to determine the number of evaluation points.
n_neurons : int or None
    The number of neurons.
neuron_type : NeuronType
    The model that simulates all neurons in the ensemble (see nengo.neurons).
noise : Process or None
    Random noise injected directly into each neuron in the ensemble as current. A sample is drawn for each individual neuron on every simulation step.
radius : int
    The representational radius of the ensemble.
seed : int or None
    The seed used for random number generation.

Properties:
neurons
    A direct interface to the neurons in the ensemble.
size_in
    The dimensionality of the ensemble.
size_out
    The dimensionality of the ensemble.

class nengo.ensemble.Neurons(ensemble)

An interface for making connections directly to an ensemble's neurons.

This should only ever be accessed through the neurons attribute of an ensemble, as a way to signal to Connection that the connection should be made directly to the neurons rather than to the ensemble's decoded value, e.g.:

    nengo.Connection(a.neurons, b.neurons)

Attributes:
ensemble (Ensemble)
    The ensemble these neurons are part of.
probeable (tuple)
    Signals that can be probed in the neuron population.
size_in (int)
    The number of neurons in the population.
size_out (int)
    The number of neurons in the population.

class nengo.Node(output=Default, size_in=Default, size_out=Default, label=Default, seed=Default)

Provide non-neural inputs to Nengo objects and process outputs.

Nodes can accept input, and perform arbitrary computations for the purpose of controlling a Nengo simulation. Nodes are typically not part of a brain model per se, but serve to summarize the assumptions being made about sensory data or other environment variables that cannot be generated by a brain model alone.

Nodes can also be used to test models by providing specific input signals to parts of the model, and can simplify the input/output interface of a Network when used as a relay to/from its internal ensembles (see EnsembleArray for an example).

Parameters:
output : callable, array_like, or None
    Function that transforms the Node inputs into outputs, a constant output value, or None to transmit signals unchanged.
size_in : int, optional (Default: 0)
    The number of dimensions of the input data parameter.
size_out : int, optional (Default: None)
    The size of the output signal. If None, it will be determined based on the values of output and size_in.
label : str, optional (Default: None)
    A name for the node. Used for debugging and visualization.
seed : int, optional (Default: None)
    The seed used for random number generation. Note: no aspects of the node are random, so currently setting this seed has no effect.

Attributes:
label : str
    The name of the node.
output : callable, array_like, or None
    The given output.
size_in : int
    The number of dimensions for incoming connection.
size_out : int
    The number of output dimensions.

class nengo.Connection(pre, post, synapse=Default, function=Default, transform=Default, solver=Default, learning_rule_type=Default, eval_points=Default, scale_eval_points=Default, label=Default, seed=Default, modulatory=Unconfigurable)

Connects two objects together.

The connection between the two objects is unidirectional, transmitting information from the first argument, pre, to the second argument, post.
Almost any Nengo object can act as the pre or post side of a connection. Additionally, you can use Python slice syntax to access only some of the dimensions of the pre or post object.

For example, if node has size_out=2 and ensemble has size_in=1, we could not create the following connection:

    nengo.Connection(node, ensemble)

But, we could create either of these two connections:

    nengo.Connection(node[0], ensemble)
    nengo.Connection(node[1], ensemble)

Parameters:
pre : Ensemble or Neurons or Node
    The source Nengo object for the connection.
post : Ensemble or Neurons or Node or Probe
    The destination object for the connection.
synapse : Synapse or None, optional (Default: nengo.synapses.Lowpass(tau=0.005))
    Synapse model to use for filtering (see Synapse). If None, no synapse will be used and information will be transmitted without any delay (if supported by the backend; some backends may introduce a single time step delay). Note that at least one connection must have a synapse that is not None if components are connected in a cycle. Furthermore, a synaptic filter with a zero time constant is different from a None synapse as a synaptic filter will always add a delay of at least one time step.
function : callable or (n_eval_points, size_mid) array_like, optional (Default: None)
    Function to compute across the connection. Note that pre must be an ensemble to apply a function across the connection. If an array is passed, the function is implicitly defined by the points in the array and the provided eval_points, which have a one-to-one correspondence.
transform : (size_out, size_mid) array_like, optional (Default: np.array(1.0))
    Linear transform mapping the pre output to the post input. This transform is in terms of the sliced size; if either pre or post is a slice, the transform must be shaped according to the sliced dimensionality. Additionally, the function is applied before the transform, so if a function is computed across the connection, the transform must be of shape (size_out, size_mid).
solver : Solver, optional (Default: nengo.solvers.LstsqL2())
    Solver instance to compute decoders or weights (see Solver). If solver.weights is True, a full connection weight matrix is computed instead of decoders.
learning_rule_type : LearningRuleType or iterable of LearningRuleType, optional (Default: None)
    Modifies the decoders or connection weights during simulation.
eval_points : (n_eval_points, size_in) array_like or int, optional (Default: None)
    Points at which to evaluate function when computing decoders, spanning the interval (-pre.radius, pre.radius) in each dimension. If None, will use the eval_points associated with pre.
scale_eval_points : bool, optional (Default: True)
    Indicates whether the evaluation points should be scaled by the radius of the pre Ensemble.
label : str, optional (Default: None)
    A descriptive label for the connection.
seed : int, optional (Default: None)
    The seed used for random number generation.

Attributes:
is_decoded : bool
    True if and only if the connection is decoded. This will not occur when solver.weights is True or both pre and post are Neurons.
function : callable
    The given function.
function_size : int
    The output dimensionality of the given function. If no function is specified, function_size will be 0.
label : str
    A human-readable connection label for debugging and visualization. If not overridden, incorporates the labels of the pre and post objects.
learning_rule_type : instance or list or dict of LearningRuleType, optional
    The learning rule types.
post : Ensemble or Neurons or Node or Probe or ObjView
    The given post object.
post_obj : Ensemble or Neurons or Node or Probe
    The underlying post object, even if post is an ObjView.
post_slice : slice or list or None
    The slice associated with post if it is an ObjView, or None.
pre : Ensemble or Neurons or Node or ObjView
    The given pre object.
pre_obj : Ensemble or Neurons or Node
    The underlying pre object, even if pre is an ObjView.
pre_slice : slice or list or None
    The slice associated with pre if it is an ObjView, or None.
seed : int
    The seed used for random number generation.
solver : Solver
    The Solver instance that will be used to compute decoders or weights (see nengo.solvers).
synapse : Synapse
    The Synapse model used for filtering across the connection (see nengo.synapses).
transform : (size_out, size_mid) array_like
    Linear transform mapping the pre function output to the post input.

Properties:
learning_rule (LearningRule or iterable)
    Connectable learning rule object(s).
size_in (int)
    The number of output dimensions of the pre object. Also the input size of the function, if one is specified.
size_mid (int)
    The number of output dimensions of the function, if specified. If the function is not specified, then size_in == size_mid.
size_out (int)
    The number of input dimensions of the post object. Also the number of output dimensions of the transform.

class nengo.connection.LearningRule(connection, learning_rule_type)

An interface for making connections to a learning rule.

Connections to a learning rule are to allow elements of the network to affect the learning rule. For example, learning rules that use error information can obtain that information through a connection.

Learning rule objects should only ever be accessed through the learning_rule attribute of a connection.

Attributes:
connection (Connection)
    The connection modified by the learning rule.
modifies (str)
    The variable modified by the learning rule.
probeable (tuple)
    Signals that can be probed in the learning rule.
size_out (int)
    Cannot connect from learning rules, so always 0.

class nengo.Probe(target, attr=None, sample_every=Default, synapse=Default, solver=Default, label=Default, seed=Default)

A probe is an object that collects data from the simulation.

This is to be used in any situation where you wish to gather simulation data (spike data, represented values, neuron voltages, etc.) for analysis. Probes do not directly affect the simulation.

All Nengo objects can be probed (except Probes themselves). Each object has different attributes that can be probed. To see what is probeable for each object, print its probeable attribute.

    >>> with nengo.Network():
    ...     ens = nengo.Ensemble(10, 1)
    >>> print(ens.probeable)
    ['decoded_output', 'input']

Parameters:
target : Ensemble, Neurons, Node, or Connection
    The object to probe.
attr : str, optional (Default: None)
    The signal to probe. Refer to the target's probeable list for details. If None, the first element in the probeable list will be used.
sample_every : float, optional (Default: None)
    Sampling period in seconds. If None, the dt of the simulation will be used.
synapse : Synapse, optional (Default: None)
    A synaptic model to filter the probed signal.
solver : Solver, optional (Default: ConnectionDefault)
    Solver to compute decoders for probes that require them.
label : str, optional (Default: None)
    A name for the probe. Used for debugging and visualization.
seed : int, optional (Default: None)
    The seed used for random number generation.

Attributes:
attr : str or None
    The signal that will be probed.
## Distributions¶

nengo.dists.Distribution A base class for probability distributions. nengo.dists.get_samples Convenience function to sample a distribution or return samples. nengo.dists.Uniform A uniform distribution. nengo.dists.Gaussian A Gaussian distribution. nengo.dists.Exponential An exponential distribution (optionally with high values clipped). nengo.dists.UniformHypersphere Uniform distribution on or in an n-dimensional unit hypersphere. nengo.dists.Choice Discrete distribution across a set of possible values. nengo.dists.Samples A set of samples. nengo.dists.PDF An arbitrary distribution from a PDF. nengo.dists.SqrtBeta Distribution of the square root of a Beta distributed random variable. nengo.dists.SubvectorLength Distribution of the length of subvectors of a unit vector. nengo.dists.CosineSimilarity Distribution of the cosine of the angle between two random vectors.

class nengo.dists.Distribution[source]
A base class for probability distributions. The only thing that a probability distribution needs to define is a sample method. This base class ensures that all distributions accept the same arguments for the sample function.

sample(n, d=None, rng=np.random)[source]
Samples the distribution.
Parameters: n : int Number of samples to take. d : int or None, optional (Default: None) The number of dimensions to return. If this is an int, the return value will be of shape (n, d). If None, the return value will be of shape (n,). rng : numpy.random.RandomState, optional Random number generator state.
Returns: samples : (n,) or (n, d) array_like Samples as a 1d or 2d array depending on d. The second dimension enumerates the dimensions of the process.

nengo.dists.get_samples(dist_or_samples, n, d=None, rng=np.random)[source]
Convenience function to sample a distribution or return samples. Use this function in situations where you accept an argument that could be a distribution, or could be an array_like of samples.
Parameters: dist_or_samples : Distribution or (n, d) array_like Source of the samples to be returned. n : int Number of samples to take. d : int or None, optional (Default: None) The number of dimensions to return. rng : RandomState, optional (Default: np.random) Random number generator.
Returns: samples : (n, d) array_like

Examples
>>> def mean(values, n=100):
...     samples = get_samples(values, n=n)
...     return np.mean(samples)
>>> mean([1, 2, 3, 4])
2.5
>>> mean(nengo.dists.Gaussian(0, 1))
0.057277898442269548
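Since sample is the only required method, custom distributions are small to write; a hypothetical sketch:

import numpy as np
import nengo

class Bimodal(nengo.dists.Distribution):
    """Hypothetical distribution: samples cluster near -1 and +1."""
    def sample(self, n, d=None, rng=np.random):
        shape = (n,) if d is None else (n, d)
        signs = rng.choice([-1.0, 1.0], size=shape)   # pick a cluster
        return signs + 0.1 * rng.standard_normal(shape)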
class nengo.dists.Uniform(low, high, integer=False)[source]
A uniform distribution. It's equally likely to get any scalar between low and high. Note that the order of low and high doesn't matter; if high < low this will still work, and low will still be a closed interval while high is open.
Parameters: low : Number The closed lower bound of the uniform distribution; samples >= low high : Number The open upper bound of the uniform distribution; samples < high integer : boolean, optional (Default: False) If true, sample from a uniform distribution of integers. In this case, low and high should be integers.

class nengo.dists.Gaussian(mean, std)[source]
A Gaussian distribution. This represents a bell-curve centred at mean and with spread represented by the standard deviation, std.
Parameters: mean : Number The mean of the Gaussian. std : Number The standard deviation of the Gaussian.
Raises: ValidationError if std is <= 0

class nengo.dists.Exponential(scale, shift=0.0, high=inf)[source]
An exponential distribution (optionally with high values clipped). If high is left to its default value of infinity, this is a standard exponential distribution. If high is set, then any sampled values at or above high will be clipped so they are slightly below high. This is useful for thresholding and, by extension, networks.AssociativeMemory. The probability distribution function (PDF) is given by:

       | 0                                  if x < shift
p(x) = | 1/scale * exp(-(x - shift)/scale)  if x >= shift and x < high
       | n                                  if x == high - eps
       | 0                                  if x >= high

where n is such that the PDF integrates to one, and eps is an infinitesimally small number such that samples of x are strictly less than high (in practice, eps depends on the floating point precision).
Parameters: scale : float The scale parameter (inverse of the rate parameter lambda). Larger values make the distribution broader (more spread out). shift : float, optional (Default: 0) Amount to shift the distribution by. There will be no values smaller than this shift when sampling from the distribution. high : float, optional (Default: np.inf) All values larger than or equal to this value will be clipped to slightly less than this value.

class nengo.dists.UniformHypersphere(surface=False, min_magnitude=0)[source]
Uniform distribution on or in an n-dimensional unit hypersphere. Sample points are uniformly distributed across the volume (default) or surface of an n-dimensional unit hypersphere.
Parameters: surface : bool, optional (Default: False) Whether sample points should be distributed uniformly over the surface of the hypersphere (True), or within the hypersphere (False). min_magnitude : Number, optional (Default: 0) Lower bound on the returned vector magnitudes (such that they are in the range [min_magnitude, 1]). Must be in the range [0, 1). Ignored if surface is True.

class nengo.dists.Choice(options, weights=None)[source]
Discrete distribution across a set of possible values. The same as numpy.random.choice, except can take vector or matrix values for the choices.
Parameters: options : (N, …) array_like The options (choices) to choose between. The choice is always done along the first axis, so if options is a matrix, the options are the rows of that matrix. weights : (N,) array_like, optional (Default: None) Weights controlling the probability of selecting each option. Will automatically be normalized. If None, weights will be uniformly distributed.

class nengo.dists.Samples(samples)[source]
A set of samples. This class is a subclass of Distribution so that it can be used in any situation that calls for a Distribution. However, the call to sample must match the dimensions of the samples or a ValidationError will be raised.
Parameters: samples : (n, d) array_like n and d must match what is eventually passed to sample.
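Distributions are most often passed directly as ensemble parameters, e.g. (a sketch):

import nengo
from nengo.dists import Choice, Uniform, UniformHypersphere

with nengo.Network():
    ens = nengo.Ensemble(
        100, dimensions=2,
        encoders=UniformHypersphere(surface=True),   # unit-length encoders
        intercepts=Uniform(-0.5, 0.5),
        max_rates=Choice([200]),                     # every neuron at 200 Hz
    )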
class nengo.dists.PDF(x, p)[source]
An arbitrary distribution from a PDF.
Parameters: x : vector_like (n,) Values of the points to sample from (interpolated). p : vector_like (n,) Probabilities of the x points.

class nengo.dists.SqrtBeta(n, m=1)[source]
Distribution of the square root of a Beta distributed random variable. Given n + m dimensional random unit vectors, the length of subvectors with m elements will be distributed according to this distribution.
Parameters: n : int Number of subvectors. m : int, optional (Default: 1) Length of each subvector.

cdf(x)[source]
Cumulative distribution function. Note Requires SciPy.
Parameters: x : array_like Evaluation points in [0, 1].
Returns: cdf : array_like Probability that X <= x.

pdf(x)[source]
Probability distribution function. Note Requires SciPy.
Parameters: x : array_like Evaluation points in [0, 1].
Returns: pdf : array_like Probability density at x.

ppf(y)[source]
Percent point function (inverse cumulative distribution). Note Requires SciPy.
Parameters: y : array_like Cumulative probabilities in [0, 1].
Returns: ppf : array_like Evaluation points x in [0, 1] such that P(X <= x) = y.

class nengo.dists.SubvectorLength(dimensions, subdimensions=1)[source]
Distribution of the length of subvectors of a unit vector.
Parameters: dimensions : int Dimensionality of the complete unit vector. subdimensions : int, optional (Default: 1) Dimensionality of the subvector.

class nengo.dists.CosineSimilarity(dimensions)[source]
Distribution of the cosine of the angle between two random vectors. The “cosine similarity” is the cosine of the angle between two vectors, which is equal to the dot product of the vectors, divided by the L2-norms of the individual vectors. When these vectors are unit length, this is then simply the distribution of their dot product. This is also equivalent to the distribution of a single coefficient from a unit vector (a single dimension of UniformHypersphere(surface=True)). Furthermore, CosineSimilarity(d+2) is equivalent to the distribution of a single coordinate from points uniformly sampled from the d-dimensional unit ball (a single dimension of UniformHypersphere(surface=False).sample(n, d)). These relationships have been detailed in [Voelker2017]. This can be used to calculate an intercept c = ppf(1 - p) such that dot(u, v) >= c with probability p, for random unit vectors u and v. In other words, a neuron with intercept ppf(1 - p) will fire with probability p for a random unit length input.
Parameters: dimensions : int Dimensionality of the complete unit vector.
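As described above, CosineSimilarity.ppf can be used to choose intercepts so that each neuron fires for a desired fraction of random unit-vector inputs (a sketch; ppf requires SciPy):

import nengo
from nengo.dists import Choice, CosineSimilarity

d = 16
p = 0.1                                  # desired firing probability
c = CosineSimilarity(d).ppf(1 - p)       # intercept threshold
with nengo.Network():
    ens = nengo.Ensemble(100, dimensions=d, intercepts=Choice([c]))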
## Neuron types¶

nengo.neurons.NeuronType Base class for Nengo neuron models. nengo.Direct Signifies that an ensemble should simulate in direct mode. nengo.RectifiedLinear A rectified linear neuron model. nengo.SpikingRectifiedLinear A rectified integrate and fire neuron model. nengo.Sigmoid A neuron model whose response curve is a sigmoid. nengo.LIF Spiking version of the leaky integrate-and-fire (LIF) neuron model. nengo.LIFRate Non-spiking version of the leaky integrate-and-fire (LIF) neuron model. nengo.AdaptiveLIF Adaptive spiking version of the LIF neuron model. nengo.AdaptiveLIFRate Adaptive non-spiking version of the LIF neuron model. nengo.Izhikevich Izhikevich neuron model.

class nengo.neurons.NeuronType[source]
Base class for Nengo neuron models.
Attributes: probeable : tuple Signals that can be probed in the neuron population.

current(x, gain, bias)[source]
Compute current injected in each neuron given input, gain and bias.
Parameters: x : (n_neurons,) array_like Vector-space input. gain : (n_neurons,) array_like Gains associated with each neuron. bias : (n_neurons,) array_like Bias current associated with each neuron.

gain_bias(max_rates, intercepts)[source]
Compute the gain and bias needed to satisfy max_rates, intercepts. This takes the neurons, approximates their response function, and then uses that approximation to find the gain and bias value that will give the requested intercepts and max_rates. Note that this default implementation is very slow! Whenever possible, subclasses should override this with a neuron-specific implementation.
Parameters: max_rates : (n_neurons,) array_like Maximum firing rates of neurons. intercepts : (n_neurons,) array_like X-intercepts of neurons.
Returns: gain : (n_neurons,) array_like Gain associated with each neuron. Sometimes denoted alpha. bias : (n_neurons,) array_like Bias current associated with each neuron.

max_rates_intercepts(gain, bias)[source]
Compute the max_rates and intercepts given gain and bias. Note that this default implementation is very slow! Whenever possible, subclasses should override this with a neuron-specific implementation.
Parameters: gain : (n_neurons,) array_like Gain associated with each neuron. Sometimes denoted alpha. bias : (n_neurons,) array_like Bias current associated with each neuron.
Returns: max_rates : (n_neurons,) array_like Maximum firing rates of neurons. intercepts : (n_neurons,) array_like X-intercepts of neurons.

rates(x, gain, bias)[source]
Compute firing rates (in Hz) for given vector input, x. This default implementation takes the naive approach of running the step function for a second. This should suffice for most rate-based neuron types; for spiking neurons it will likely fail (those models should override this function).
Parameters: x : (n_neurons,) array_like Vector-space input. gain : (n_neurons,) array_like Gains associated with each neuron. bias : (n_neurons,) array_like Bias current associated with each neuron.
Returns: rates : (n_neurons,) ndarray The firing rates at each given value of x.

step_math(dt, J, output)[source]
Implements the differential equation for this neuron type. At a minimum, NeuronType subclasses must implement this method. That implementation should modify the output parameter rather than returning anything, for efficiency reasons.
Parameters: dt : float Simulation timestep. J : (n_neurons,) array_like Input currents associated with each neuron. output : (n_neurons,) array_like Output activities associated with each neuron.

class nengo.Direct[source]
Signifies that an ensemble should simulate in direct mode. In direct mode, the ensemble represents and transforms signals perfectly, rather than through a neural approximation. Note that direct mode ensembles with recurrent connections can easily diverge; most other neuron types will instead saturate at a certain high firing rate.
gain_bias(max_rates, intercepts)[source] Always returns None, None.
max_rates_intercepts(gain, bias)[source] Always returns None, None.
rates(x, gain, bias)[source] Always returns x.
step_math(dt, J, output)[source] Raises an error if called. Rather than calling this function, the simulator will detect that the ensemble is in direct mode, and bypass the neural approximation.
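These methods also work outside a simulation, e.g. to inspect a response curve; a sketch using the LIFRate model documented below:

import numpy as np
import nengo

neuron_type = nengo.LIFRate()
gain, bias = neuron_type.gain_bias(max_rates=np.array([200.0]),
                                   intercepts=np.array([-0.5]))
for x in np.linspace(-1, 1, 5):
    rate = neuron_type.rates(np.array([x]), gain, bias)   # firing rate in Hz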
class nengo.RectifiedLinear(amplitude=1)[source]
A rectified linear neuron model. Each neuron is modeled as a rectified line. That is, the neuron's activity scales linearly with current, unless it passes below zero, at which point the neural activity will stay at zero.
Parameters: amplitude : float Scaling factor on the neuron output. Corresponds to the relative amplitude of the output of the neuron.
gain_bias(max_rates, intercepts)[source] Determine gain and bias by shifting and scaling the lines.
max_rates_intercepts(gain, bias)[source] Compute the inverse of gain_bias.
step_math(dt, J, output)[source] Implement the rectification nonlinearity.

class nengo.SpikingRectifiedLinear(amplitude=1)[source]
A rectified integrate and fire neuron model. Each neuron is modeled as a rectified line. That is, the neuron's activity scales linearly with current, unless the current is less than zero, at which point the neural activity will stay at zero. This is a spiking version of the RectifiedLinear neuron model.
Parameters: amplitude : float Scaling factor on the neuron output. Corresponds to the relative amplitude of the output spikes of the neuron.
rates(x, gain, bias)[source] Use RectifiedLinear to determine rates.
step_math(dt, J, spiked, voltage)[source] Implement the integrate and fire nonlinearity.

class nengo.Sigmoid(tau_ref=0.0025)[source]
A neuron model whose response curve is a sigmoid. Since the tuning curves are strictly positive, the intercepts correspond to the inflection point of each sigmoid. That is, f(intercept) = 0.5 where f is the pure sigmoid function.
gain_bias(max_rates, intercepts)[source] Analytically determine gain, bias.
max_rates_intercepts(gain, bias)[source] Compute the inverse of gain_bias.
step_math(dt, J, output)[source] Implement the sigmoid nonlinearity.

class nengo.LIF(tau_rc=0.02, tau_ref=0.002, min_voltage=0, amplitude=1)[source]
Spiking version of the leaky integrate-and-fire (LIF) neuron model.
Parameters: tau_rc : float Membrane RC time constant, in seconds. Affects how quickly the membrane voltage decays to zero in the absence of input (larger = slower decay). tau_ref : float Absolute refractory period, in seconds. This is how long the membrane voltage is held at zero after a spike. min_voltage : float Minimum value for the membrane voltage. If -np.inf, the voltage is never clipped. amplitude : float Scaling factor on the neuron output. Corresponds to the relative amplitude of the output spikes of the neuron.

class nengo.LIFRate(tau_rc=0.02, tau_ref=0.002, amplitude=1)[source]
Non-spiking version of the leaky integrate-and-fire (LIF) neuron model.
Parameters: tau_rc : float Membrane RC time constant, in seconds. Affects how quickly the membrane voltage decays to zero in the absence of input (larger = slower decay). tau_ref : float Absolute refractory period, in seconds. This is how long the membrane voltage is held at zero after a spike. amplitude : float Scaling factor on the neuron output. Corresponds to the relative amplitude of the output spikes of the neuron.
gain_bias(max_rates, intercepts)[source] Analytically determine gain, bias.
max_rates_intercepts(gain, bias)[source] Compute the inverse of gain_bias.
rates(x, gain, bias)[source] Always use LIFRate to determine rates.
step_math(dt, J, output)[source] Implement the LIFRate nonlinearity.

class nengo.AdaptiveLIF(tau_n=1, inc_n=0.01, **lif_args)[source]
Adaptive spiking version of the LIF neuron model. Works as the LIF model, except with adaptation state n, which is subtracted from the input current. Its dynamics are:

tau_n dn/dt = -n

where n is incremented by inc_n when the neuron spikes.
Parameters: tau_n : float Adaptation time constant. Affects how quickly the adaptation state decays to zero in the absence of spikes (larger = slower decay).
inc_n : float Adaptation increment. How much the adaptation state is increased after each spike. tau_rc : float Membrane RC time constant, in seconds. Affects how quickly the membrane voltage decays to zero in the absence of input (larger = slower decay). tau_ref : float Absolute refractory period, in seconds. This is how long the membrane voltage is held at zero after a spike.
References [1] Koch, Christof. Biophysics of Computation: Information Processing in Single Neurons. Oxford University Press, 1999. p. 339
step_math(dt, J, output, voltage, ref, adaptation)[source]

class nengo.AdaptiveLIFRate(tau_n=1, inc_n=0.01, **lif_args)[source]
Adaptive non-spiking version of the LIF neuron model. Works as the LIF model, except with adaptation state n, which is subtracted from the input current. Its dynamics are:

tau_n dn/dt = -n

where n is incremented by inc_n when the neuron spikes.
Parameters: tau_n : float Adaptation time constant. Affects how quickly the adaptation state decays to zero in the absence of spikes (larger = slower decay). inc_n : float Adaptation increment. How much the adaptation state is increased after each spike. tau_rc : float Membrane RC time constant, in seconds. Affects how quickly the membrane voltage decays to zero in the absence of input (larger = slower decay). tau_ref : float Absolute refractory period, in seconds. This is how long the membrane voltage is held at zero after a spike.
References [1] Koch, Christof. Biophysics of Computation: Information Processing in Single Neurons. Oxford University Press, 1999. p. 339
step_math(dt, J, output, adaptation)[source]

class nengo.Izhikevich(tau_recovery=0.02, coupling=0.2, reset_voltage=-65.0, reset_recovery=8.0)[source]
Izhikevich neuron model. This implementation is based on the original paper [1]; however, we rename some variables for clarity. What was originally ‘v’ we term ‘voltage’, which represents the membrane potential of each neuron. What was originally ‘u’ we term ‘recovery’, which represents membrane recovery, “which accounts for the activation of K+ ionic currents and inactivation of Na+ ionic currents.” The ‘a’, ‘b’, ‘c’, and ‘d’ parameters are also renamed (see the parameters below). We use default values that correspond to regular spiking (‘RS’) neurons. For other classes of neurons, set the parameters as follows.
• Intrinsically bursting (IB): reset_voltage=-55, reset_recovery=4
• Chattering (CH): reset_voltage=-50, reset_recovery=2
• Fast spiking (FS): tau_recovery=0.1
• Low-threshold spiking (LTS): coupling=0.25
• Resonator (RZ): tau_recovery=0.1, coupling=0.26
Parameters: tau_recovery : float, optional (Default: 0.02) (Originally ‘a’) Time scale of the recovery variable. coupling : float, optional (Default: 0.2) (Originally ‘b’) How sensitive recovery is to subthreshold fluctuations of voltage. reset_voltage : float, optional (Default: -65.) (Originally ‘c’) The voltage to reset to after a spike, in millivolts. reset_recovery : float, optional (Default: 8.) (Originally ‘d’) The recovery value to reset to after a spike.
References [1] (1, 2) E. M. Izhikevich, “Simple model of spiking neurons.” IEEE Transactions on Neural Networks, vol. 14, no. 6, pp. 1569-1572. (http://www.izhikevich.org/publications/spikes.pdf)
rates(x, gain, bias)[source] Estimates steady-state firing rate given gain and bias. Uses the settled_firingrate helper function.
step_math(dt, J, spiked, voltage, recovery)[source] Implement the Izhikevich nonlinearity.
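A neuron model is chosen per ensemble through the neuron_type argument; for example (a sketch):

import nengo

with nengo.Network():
    default = nengo.Ensemble(50, dimensions=1)    # LIF by default
    bursting = nengo.Ensemble(
        50, dimensions=1,
        neuron_type=nengo.Izhikevich(reset_voltage=-55, reset_recovery=4))
    adaptive = nengo.Ensemble(
        50, dimensions=1,
        neuron_type=nengo.AdaptiveLIF(tau_n=0.5))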
## Learning rule types¶

nengo.learning_rules.LearningRuleType Base class for all learning rule objects. nengo.PES Prescribed Error Sensitivity learning rule. nengo.BCM Bienenstock-Cooper-Munroe learning rule. nengo.Oja Oja learning rule. nengo.Voja Vector Oja learning rule.

class nengo.learning_rules.LearningRuleType(learning_rate=Default, size_in=0)[source]
Base class for all learning rule objects. To use a learning rule, pass it as a learning_rule_type keyword argument to the Connection on which you want to do learning. Each learning rule exposes two important pieces of metadata that the builder uses to determine what information should be stored. The size_in is the dimensionality of the incoming error signal. It can either take an integer or one of the following string values:
• 'pre': vector error signal in pre-object space
• 'post': vector error signal in post-object space
• 'mid': vector error signal in the conn.size_mid space
• 'pre_state': vector error signal in pre-synaptic ensemble space
• 'post_state': vector error signal in post-synaptic ensemble space
The difference between 'post_state' and 'post' is that with the former, if a Neurons object is passed, it will use the dimensionality of the corresponding Ensemble, whereas the latter simply uses the post object size_in. Similarly with 'pre_state' and 'pre'. The modifies attribute denotes the signal targeted by the rule. Options are:
• 'encoders'
• 'decoders'
• 'weights'
Parameters: learning_rate : float, optional (Default: 1e-6) A scalar indicating the rate at which modifies will be adjusted. size_in : int, str, optional (Default: 0) Dimensionality of the error signal (see above).
Attributes: learning_rate : float A scalar indicating the rate at which modifies will be adjusted. size_in : int, str Dimensionality of the error signal. modifies : str The signal targeted by the learning rule.

class nengo.PES(learning_rate=Default, pre_synapse=Default, pre_tau=Unconfigurable)[source]
Prescribed Error Sensitivity learning rule. Modifies a connection's decoders to minimize an error signal provided through a connection to the connection's learning rule.
Parameters: learning_rate : float, optional (Default: 1e-4) A scalar indicating the rate at which weights will be adjusted. pre_synapse : Synapse, optional (Default: nengo.synapses.Lowpass(tau=0.005)) Synapse model used to filter the pre-synaptic activities.
Attributes: learning_rate : float A scalar indicating the rate at which weights will be adjusted. pre_synapse : Synapse Synapse model used to filter the pre-synaptic activities.

class nengo.BCM(learning_rate=Default, pre_synapse=Default, post_synapse=Default, theta_synapse=Default, pre_tau=Unconfigurable, post_tau=Unconfigurable, theta_tau=Unconfigurable)[source]
Bienenstock-Cooper-Munroe learning rule. Modifies connection weights as a function of the presynaptic activity and the difference between the postsynaptic activity and the average postsynaptic activity.
Parameters: learning_rate : float, optional (Default: 1e-9) A scalar indicating the rate at which weights will be adjusted. pre_synapse : Synapse, optional (Default: nengo.synapses.Lowpass(tau=0.005)) Synapse model used to filter the pre-synaptic activities. post_synapse : Synapse, optional (Default: None) Synapse model used to filter the post-synaptic activities. If None, post_synapse will be the same as pre_synapse. theta_synapse : Synapse, optional (Default: nengo.synapses.Lowpass(tau=1.0)) Synapse model used to filter the theta signal.
Notes The BCM rule is dependent on pre and post neural activities, not decoded values, and so is not affected by changes in the size of pre and post ensembles. However, if you are decoding from the post ensemble, the BCM rule will have an increased effect on larger post ensembles because more connection weights are changing. In these cases, it may be advantageous to scale the learning rate on the BCM rule by 1 / post.n_neurons.
Attributes: learning_rate : float A scalar indicating the rate at which weights will be adjusted. post_synapse : Synapse Synapse model used to filter the post-synaptic activities. pre_synapse : Synapse Synapse model used to filter the pre-synaptic activities. theta_synapse : Synapse Synapse model used to filter the theta signal.
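A typical PES setup connects an error population to the learning rule; a minimal sketch (names assumed):

import nengo

with nengo.Network():
    pre = nengo.Ensemble(60, dimensions=1)
    post = nengo.Ensemble(60, dimensions=1)
    error = nengo.Ensemble(60, dimensions=1)

    conn = nengo.Connection(pre, post, learning_rule_type=nengo.PES())

    # error = actual - target; here the target is simply pre's value
    nengo.Connection(post, error)
    nengo.Connection(pre, error, transform=-1)
    nengo.Connection(error, conn.learning_rule)   # feed error into the rule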
class nengo.Oja(learning_rate=Default, pre_synapse=Default, post_synapse=Default, beta=Default, pre_tau=Unconfigurable, post_tau=Unconfigurable)[source]
Oja learning rule. Modifies connection weights according to the Hebbian Oja rule, which augments typically Hebbian coactivity with a “forgetting” term that is proportional to the weight of the connection and the square of the postsynaptic activity.
Parameters: learning_rate : float, optional (Default: 1e-6) A scalar indicating the rate at which weights will be adjusted. pre_synapse : Synapse, optional (Default: nengo.synapses.Lowpass(tau=0.005)) Synapse model used to filter the pre-synaptic activities. post_synapse : Synapse, optional (Default: None) Synapse model used to filter the post-synaptic activities. If None, post_synapse will be the same as pre_synapse. beta : float, optional (Default: 1.0) A scalar weight on the forgetting term.
Notes The Oja rule is dependent on pre and post neural activities, not decoded values, and so is not affected by changes in the size of pre and post ensembles. However, if you are decoding from the post ensemble, the Oja rule will have an increased effect on larger post ensembles because more connection weights are changing. In these cases, it may be advantageous to scale the learning rate on the Oja rule by 1 / post.n_neurons.
Attributes: beta : float A scalar weight on the forgetting term. learning_rate : float A scalar indicating the rate at which weights will be adjusted. post_synapse : Synapse Synapse model used to filter the post-synaptic activities. pre_synapse : Synapse Synapse model used to filter the pre-synaptic activities.

class nengo.Voja(learning_rate=Default, post_synapse=Default, post_tau=Unconfigurable)[source]
Vector Oja learning rule. Modifies an ensemble's encoders to be selective to its inputs. A connection to the learning rule will provide a scalar weight for the learning rate, minus 1. For instance, 0 is normal learning, -1 is no learning, and less than -1 causes anti-learning or “forgetting”.
Parameters: learning_rate : float, optional (Default: 1e-2) A scalar indicating the rate at which encoders will be adjusted. post_synapse : Synapse, optional (Default: nengo.synapses.Lowpass(tau=0.005)) Synapse model used to filter the post-synaptic activities.
Attributes: learning_rate : float A scalar indicating the rate at which encoders will be adjusted. post_synapse : Synapse Synapse model used to filter the post-synaptic activities.
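Because the scalar input to Voja's learning rule shifts the learning rate, learning can be gated on and off; a sketch (names assumed):

import nengo

with nengo.Network():
    stim = nengo.Node([0.5, 0.5])
    memory = nengo.Ensemble(100, dimensions=2)
    conn = nengo.Connection(stim, memory, learning_rule_type=nengo.Voja())

    # 0 enables normal learning; -1 disables it (see above)
    learning_control = nengo.Node(0)
    nengo.Connection(learning_control, conn.learning_rule, synapse=None)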
## Processes¶

nengo.Process A general system with input, output, and state. nengo.processes.PresentInput Present a series of inputs, each for the same fixed length of time. nengo.processes.FilteredNoise Filtered white noise process. nengo.processes.BrownNoise Brown noise process (aka Brownian noise, red noise, Wiener process). nengo.processes.WhiteNoise Full-spectrum white noise process. nengo.processes.WhiteSignal An ideal low-pass filtered white noise process. nengo.processes.Piecewise A piecewise function with different options for interpolation.

class nengo.Process(default_size_in=0, default_size_out=1, default_dt=0.001, seed=None)[source]
A general system with input, output, and state. For more details on how to use processes and make custom process subclasses, see Processes and how to use them.
Parameters: default_size_in : int (Default: 0) Sets the default size in for nodes using this process. default_size_out : int (Default: 1) Sets the default size out for nodes running this process. Also, if d is not specified in run or run_steps, this will be used. default_dt : float (Default: 0.001 (1 millisecond)) If dt is not specified in run, run_steps, ntrange, or trange, this will be used. seed : int, optional (Default: None) Random number seed. Ensures random factors will be the same each run.
Attributes: default_dt : float If dt is not specified in run, run_steps, ntrange, or trange, this will be used. default_size_in : int The default size in for nodes using this process. default_size_out : int The default size out for nodes running this process. Also, if d is not specified in run or run_steps, this will be used. seed : int or None Random number seed. Ensures random factors will be the same each run.

apply(x, d=None, dt=None, rng=np.random, copy=True, **kwargs)[source]
Run process on a given input. Keyword arguments that do not appear in the parameter list below will be passed to the make_step function of this process.
Parameters: x : ndarray The input signal given to the process. d : int, optional (Default: None) Output dimensionality. If None, default_size_out will be used. dt : float, optional (Default: None) Simulation timestep. If None, default_dt will be used. rng : numpy.random.RandomState (Default: numpy.random) Random number generator used for stochastic processes. copy : bool, optional (Default: True) If True, a new output array will be created for output. If False, the input signal x will be overwritten.

get_rng(rng)[source]
Get a properly seeded independent RNG for the process step.
Parameters: rng : numpy.random.RandomState The parent random number generator to use if the seed is not set.

make_step(shape_in, shape_out, dt, rng)[source]
Create function that advances the process forward one time step. This must be implemented by all custom processes.
Parameters: shape_in : tuple The shape of the input signal. shape_out : tuple The shape of the output signal. dt : float The simulation timestep. rng : numpy.random.RandomState A random number generator.

run(t, d=None, dt=None, rng=np.random, **kwargs)[source]
Run process without input for given length of time. Keyword arguments that do not appear in the parameter list below will be passed to the make_step function of this process.
Parameters: t : float The length of time to run. d : int, optional (Default: None) Output dimensionality. If None, default_size_out will be used. dt : float, optional (Default: None) Simulation timestep. If None, default_dt will be used.
rng : numpy.random.RandomState (Default: numpy.random) Random number generator used for stochastic processes.

run_steps(n_steps, d=None, dt=None, rng=np.random, **kwargs)[source]
Run process without input for given number of steps. Keyword arguments that do not appear in the parameter list below will be passed to the make_step function of this process.
Parameters: n_steps : int The number of steps to run. d : int, optional (Default: None) Output dimensionality. If None, default_size_out will be used. dt : float, optional (Default: None) Simulation timestep. If None, default_dt will be used. rng : numpy.random.RandomState (Default: numpy.random) Random number generator used for stochastic processes.

ntrange(n_steps, dt=None)[source]
Create time points corresponding to a given number of steps.
Parameters: n_steps : int The given number of steps. dt : float, optional (Default: None) Simulation timestep. If None, default_dt will be used.

trange(t, dt=None)[source]
Create time points corresponding to a given length of time.
Parameters: t : float The given length of time. dt : float, optional (Default: None) Simulation timestep. If None, default_dt will be used.

class nengo.processes.PresentInput(inputs, presentation_time, **kwargs)[source]
Present a series of inputs, each for the same fixed length of time.
Parameters: inputs : array_like Inputs to present, where each row is an input. Rows will be flattened. presentation_time : float Show each input for this amount of time (in seconds).

class nengo.processes.FilteredNoise(synapse=Lowpass(0.005), dist=Gaussian(mean=0, std=1), scale=True, synapse_kwargs=None, **kwargs)[source]
Filtered white noise process. This process takes white noise and filters it using the provided synapse.
Parameters: synapse : Synapse, optional (Default: Lowpass(tau=0.005)) The synapse to use to filter the noise. dist : Distribution, optional (Default: Gaussian(mean=0, std=1)) The distribution used to generate the white noise. scale : bool, optional (Default: True) Whether to scale the white noise for integration, making the output signal invariant to dt. synapse_kwargs : dict, optional (Default: None) Arguments to pass to synapse.make_step. seed : int, optional (Default: None) Random number seed. Ensures noise will be the same each run.

class nengo.processes.BrownNoise(dist=Gaussian(mean=0, std=1), **kwargs)[source]
Brown noise process (aka Brownian noise, red noise, Wiener process). This process is the integral of white noise.
Parameters: dist : Distribution, optional (Default: Gaussian(mean=0, std=1)) The distribution used to generate the white noise. seed : int, optional (Default: None) Random number seed. Ensures noise will be the same each run.

class nengo.processes.WhiteNoise(dist=Gaussian(mean=0, std=1), scale=True, **kwargs)[source]
Full-spectrum white noise process.
Parameters: dist : Distribution, optional (Default: Gaussian(mean=0, std=1)) The distribution from which to draw samples. scale : bool, optional (Default: True) Whether to scale the white noise for integration. Integrating white noise requires using a time constant of sqrt(dt) instead of dt on the noise term [1], to ensure the magnitude of the integrated noise does not change with dt. seed : int, optional (Default: None) Random number seed. Ensures noise will be the same each run.
References [1] (1, 2) Gillespie, D.T. (1996) Exact numerical simulation of the Ornstein-Uhlenbeck process and its integral. Phys. Rev. E 54, pp. 2084-91.
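Processes can also be run standalone, without a model; a sketch:

import nengo
from nengo.processes import WhiteNoise

process = WhiteNoise(dist=nengo.dists.Gaussian(mean=0, std=0.1), seed=1)
samples = process.run_steps(1000, d=2, dt=0.001)   # shape (1000, 2)
t = process.ntrange(1000, dt=0.001)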
class nengo.processes.WhiteSignal(period, high, rms=0.5, y0=None, **kwargs)[source]
An ideal low-pass filtered white noise process. This signal is created in the frequency domain, and designed to have exactly equal power at all frequencies below the cut-off frequency, and no power above the cut-off. The signal is naturally periodic, so it can be used beyond its period while still being continuous with continuous derivatives.
Parameters: period : float A white noise signal with this period will be generated. Samples will repeat after this duration. high : float The cut-off frequency of the low-pass filter, in Hz. Must not exceed the Nyquist frequency for the simulation timestep, which is 0.5 / dt. rms : float, optional (Default: 0.5) The root mean square power of the filtered signal y0 : float, optional (Default: None) Align the phase of each output dimension to begin at the value that is closest (in absolute value) to y0. seed : int, optional (Default: None) Random number seed. Ensures noise will be the same each run.

class nengo.processes.Piecewise(data, interpolation='zero', **kwargs)[source]
A piecewise function with different options for interpolation. Given an input dictionary of {0: 0, 0.5: -1, 0.75: 0.5, 1: 0}, this process will emit the numerical values (0, -1, 0.5, 0) starting at the corresponding time points (0, 0.5, 0.75, 1). The keys in the input dictionary must be times (float or int). The values in the dictionary can be floats, lists of floats, or numpy arrays. All lists or numpy arrays must be of the same length, as the output shape of the process will be determined by the shape of the values. Interpolation on the data points using scipy.interpolate is also supported. The default interpolation is ‘zero’, which creates a piecewise function whose values change at the specified time points. So the above example would be a shortcut for:

def function(t):
    if t < 0.5:
        return 0
    elif t < 0.75:
        return -1
    elif t < 1:
        return 0.5
    else:
        return 0

For times before the first specified time, an array of zeros (of the correct length) will be emitted. This means that the above can be simplified to:

Piecewise({0.5: -1, 0.75: 0.5, 1: 0})

Parameters: data : dict A dictionary mapping times to the values that should be emitted at those times. Times must be numbers (ints or floats), while values can be numbers, lists of numbers, numpy arrays of numbers, or callables that return any of those options. interpolation : str, optional (Default: ‘zero’) One of ‘linear’, ‘nearest’, ‘slinear’, ‘quadratic’, ‘cubic’, or ‘zero’. Specifies how to interpolate between times with specified value. ‘zero’ creates a plain piecewise function whose values begin at corresponding time points, while all other options interpolate as described in scipy.interpolate.

Examples
>>> from nengo.processes import Piecewise
>>> process = Piecewise({0.5: 1, 0.75: -1, 1: 0})
>>> with nengo.Network() as model:
...     u = nengo.Node(process, size_out=process.default_size_out)
...     up = nengo.Probe(u)
>>> with nengo.Simulator(model) as sim:
...     sim.run(1.5)
>>> f = sim.data[up]
>>> t = sim.trange()
>>> f[t == 0.2]
array([[ 0.]])
>>> f[t == 0.58]
array([[ 1.]])

Attributes: data : dict A dictionary mapping times to the values that should be emitted at those times. Times are numbers (ints or floats), while values can be numbers, lists of numbers, numpy arrays of numbers, or callables that return any of those options. interpolation : str One of ‘linear’, ‘nearest’, ‘slinear’, ‘quadratic’, ‘cubic’, or ‘zero’. Specifies how to interpolate between times with specified value. ‘zero’ creates a plain piecewise function whose values change at corresponding time points, while all other options interpolate as described in scipy.interpolate.
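Processes are commonly used as node outputs; for instance (a sketch):

import nengo
from nengo.processes import WhiteSignal

with nengo.Network():
    # Band-limited noise: 1-second period, 5 Hz cut-off
    u = nengo.Node(WhiteSignal(period=1.0, high=5, rms=0.3), size_out=2)
    ens = nengo.Ensemble(100, dimensions=2)
    nengo.Connection(u, ens)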
## Synapse models¶

nengo.synapses.Synapse Abstract base class for synapse models. nengo.synapses.filt Filter signal with synapse. nengo.synapses.filtfilt Zero-phase filtering of signal using the synapse filter. nengo.LinearFilter General linear time-invariant (LTI) system synapse. nengo.Lowpass Standard first-order lowpass filter synapse. nengo.Alpha Alpha-function filter synapse. nengo.synapses.Triangle Triangular finite impulse response (FIR) synapse.

class nengo.synapses.Synapse(default_size_in=1, default_size_out=None, default_dt=0.001, seed=None)[source]
Abstract base class for synapse models. Conceptually, a synapse model emulates a biological synapse, taking in input in the form of released neurotransmitter and opening ion channels to allow more or less current to flow into the neuron. In Nengo, the implementation of a synapse is as a specific case of a Process in which the input and output shapes are the same. The input is the current across the synapse, and the output is the current that will be induced in the postsynaptic neuron. Synapses also contain the Synapse.filt and Synapse.filtfilt methods, which make it easy to use Nengo's synapse models outside of Nengo simulations.
Parameters: default_size_in : int, optional (Default: 1) The size_in used if not specified. default_size_out : int (Default: None) The size_out used if not specified. If None, will be the same as default_size_in. default_dt : float (Default: 0.001 (1 millisecond)) The simulation timestep used if not specified. seed : int, optional (Default: None) Random number seed. Ensures random factors will be the same each run.
Attributes: default_dt : float (Default: 0.001 (1 millisecond)) The simulation timestep used if not specified. default_size_in : int (Default: 0) The size_in used if not specified. default_size_out : int (Default: 1) The size_out used if not specified. seed : int, optional (Default: None) Random number seed. Ensures random factors will be the same each run.

filt(x, dt=None, axis=0, y0=None, copy=True, filtfilt=False)[source]
Filter x with this synapse model.
Parameters: x : array_like The signal to filter. dt : float, optional (Default: None) The timestep of the input signal. If None, default_dt will be used. axis : int, optional (Default: 0) The axis along which to filter. y0 : array_like, optional (Default: None) The starting state of the filter output. If None, the initial value of the input signal along the axis filtered will be used. copy : bool, optional (Default: True) Whether to copy the input data, or simply work in-place. filtfilt : bool, optional (Default: False) If True, runs the process forward then backward on the signal, for zero-phase filtering (like Matlab's filtfilt).

filtfilt(x, **kwargs)[source]
Zero-phase filtering of x using this filter. Equivalent to filt(x, filtfilt=True, **kwargs).

make_step(shape_in, shape_out, dt, rng, y0=None, dtype=np.float64)[source]
Create function that advances the synapse forward one time step. At a minimum, Synapse subclasses must implement this method. That implementation should return a callable that will perform the synaptic filtering operation.
Parameters: shape_in : tuple Shape of the input signal to be filtered. shape_out : tuple Shape of the output filtered signal. dt : float The timestep of the simulation. rng : numpy.random.RandomState Random number generator. y0 : array_like, optional (Default: None) The starting state of the filter output. If None, each dimension of the state will start at zero. dtype : numpy.dtype (Default: np.float64) Type of data used by the synapse model. This is important for ensuring that certain synapses avoid or force integer division.
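As noted above, filt and filtfilt make the synapse models usable outside a simulation; a sketch:

import numpy as np
import nengo

x = np.random.RandomState(0).standard_normal(1000)   # raw signal
y = nengo.Lowpass(tau=0.01).filt(x, dt=0.001)        # causal filtering
z = nengo.Alpha(tau=0.01).filtfilt(x, dt=0.001)      # zero-phase filtering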
nengo.synapses.filt(signal, synapse, dt, axis=0, x0=None, copy=True)[source]
Filter signal with synapse. Note Deprecated in Nengo 2.1.0. Use Synapse.filt method instead.

nengo.synapses.filtfilt(signal, synapse, dt, axis=0, x0=None, copy=True)[source]
Zero-phase filtering of signal using the synapse filter. Note Deprecated in Nengo 2.1.0. Use Synapse.filtfilt method instead.

class nengo.LinearFilter(num, den, analog=True, **kwargs)[source]
General linear time-invariant (LTI) system synapse. This class can be used to implement any linear filter, given the filter's transfer function. [1]
Parameters: num : array_like Numerator coefficients of transfer function. den : array_like Denominator coefficients of transfer function. analog : boolean, optional (Default: True) Whether the synapse coefficients are analog (i.e. continuous-time), or discrete. Analog coefficients will be converted to discrete for simulation using the simulator dt.
References
Attributes: analog : boolean Whether the synapse coefficients are analog (i.e. continuous-time), or discrete. Analog coefficients will be converted to discrete for simulation using the simulator dt. den : ndarray Denominator coefficients of transfer function. num : ndarray Numerator coefficients of transfer function.

combine(obj)[source] Combine in series with another LinearFilter.

evaluate(frequencies)[source]
Evaluate the transfer function at the given frequencies.
Examples Using the evaluate function to make a Bode plot:

synapse = nengo.synapses.LinearFilter([1], [0.02, 1])
f = numpy.logspace(-1, 3, 100)
y = synapse.evaluate(f)
plt.subplot(211); plt.semilogx(f, 20*np.log10(np.abs(y)))
plt.xlabel('frequency [Hz]'); plt.ylabel('magnitude [dB]')
plt.subplot(212); plt.semilogx(f, np.angle(y))

make_step(shape_in, shape_out, dt, rng, y0=None, dtype=np.float64, method='zoh')[source]
Returns a Step instance that implements the linear filter.

class Step(num, den, output)[source]
Abstract base class for LTI filtering step functions.

class NoDen(num, den, output)[source]
An LTI step function for transfer functions with no denominator. This step function should be much faster than the equivalent general step function.

class Simple(num, den, output, y0=None)[source]
An LTI step function for transfer functions with one num and den. This step function should be much faster than the equivalent general step function.

class General(num, den, output, y0=None)[source]
An LTI step function for any given transfer function. Implements a discrete-time LTI system using the difference equation [1] for the given transfer function (num, den).
References

class nengo.Lowpass(tau, **kwargs)[source]
Standard first-order lowpass filter synapse. The impulse-response function is given by:

f(t) = (1 / tau) * exp(-t / tau)

Parameters: tau : float The time constant of the filter in seconds.
Attributes: tau : float The time constant of the filter in seconds.
make_step(shape_in, shape_out, dt, rng, y0=None, dtype=np.float64, **kwargs)[source] Returns an optimized LinearFilter.Step subclass.
class nengo.Alpha(tau, **kwargs)[source]
Alpha-function filter synapse. The impulse-response function is given by:

alpha(t) = (t / tau**2) * exp(-t / tau)

and was found by [1] to be a good basic model for synapses.
Parameters: tau : float The time constant of the filter in seconds.
References [1] (1, 2) Mainen, Z.F. and Sejnowski, T.J. (1995). Reliability of spike timing in neocortical neurons. Science (New York, NY), 268(5216):1503-6.
Attributes: tau : float The time constant of the filter in seconds.
make_step(shape_in, shape_out, dt, rng, y0=None, dtype=np.float64, **kwargs)[source] Returns an optimized LinearFilter.Step subclass.

class nengo.synapses.Triangle(t, **kwargs)[source]
Triangular finite impulse response (FIR) synapse. This synapse has a triangular and finite impulse response. The length of the triangle is t seconds; thus the digital filter will have t / dt + 1 taps.
Parameters: t : float Length of the triangle, in seconds.
Attributes: t : float Length of the triangle, in seconds.
make_step(shape_in, shape_out, dt, rng, y0=None, dtype=np.float64)[source] Returns a custom step function.

## Decoder and connection weight solvers¶

nengo.solvers.Solver Decoder or weight solver. nengo.solvers.Lstsq Unregularized least-squares solver. nengo.solvers.LstsqNoise Least-squares solver with additive Gaussian white noise. nengo.solvers.LstsqMultNoise Least-squares solver with multiplicative white noise. nengo.solvers.LstsqL2 Least-squares solver with L2 regularization. nengo.solvers.LstsqL2nz Least-squares solver with L2 regularization on non-zero components. nengo.solvers.LstsqL1 Least-squares solver with L1 and L2 regularization (elastic net). nengo.solvers.LstsqDrop Find sparser decoders/weights by dropping small values. nengo.solvers.Nnls Non-negative least-squares solver without regularization. nengo.solvers.NnlsL2 Non-negative least-squares solver with L2 regularization. nengo.solvers.NnlsL2nz Non-negative least-squares with L2 regularization on nonzero components. nengo.solvers.NoSolver Manually pass in weights, bypassing the decoder solver.

class nengo.solvers.Solver(weights=False)[source]
Decoder or weight solver.

__call__(A, Y, rng=np.random, E=None)[source]
Call the solver.
Parameters: A : (n_eval_points, n_neurons) array_like Matrix of the neurons' activities at the evaluation points Y : (n_eval_points, dimensions) array_like Matrix of the target decoded values for each of the D dimensions, at each of the evaluation points. rng : numpy.random.RandomState, optional (Default: np.random) A random number generator to use as required. E : (dimensions, post.n_neurons) array_like, optional (Default: None) Array of post-population encoders. Providing this tells the solver to return an array of connection weights rather than decoders.
Returns: X : (n_neurons, dimensions) or (n_neurons, post.n_neurons) ndarray (n_neurons, dimensions) array of decoders (if solver.weights is False) or (n_neurons, post.n_neurons) array of weights (if solver.weights is True). info : dict A dictionary of information about the solver. All dictionaries have an 'rmses' key that contains RMS errors of the solve. Other keys are unique to particular solvers.

mul_encoders(Y, E, copy=False)[source]
Helper function that projects signal Y onto encoders E.
Parameters: Y : ndarray The signal of interest. E : (dimensions, n_neurons) array_like or None Array of encoders. If None, Y will be returned unchanged. copy : bool, optional (Default: False) Whether a copy of Y should be returned if E is None.
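Solvers can be called directly on activity and target matrices; a sketch with random stand-in data:

import numpy as np
import nengo

rng = np.random.RandomState(0)
A = rng.uniform(0, 100, size=(750, 50))      # activities at eval points
Y = rng.uniform(-1, 1, size=(750, 1))        # target decoded values
decoders, info = nengo.solvers.LstsqL2(reg=0.1)(A, Y, rng=rng)
print(info['rmses'])                         # RMS error per dimension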
class nengo.solvers.Lstsq(weights=False, rcond=0.01)[source]
Unregularized least-squares solver.
Parameters: weights : bool, optional (Default: False) If False, solve for decoders. If True, solve for weights. rcond : float, optional (Default: 0.01) Cut-off ratio for small singular values (see numpy.linalg.lstsq).
Attributes: rcond : float Cut-off ratio for small singular values (see numpy.linalg.lstsq). weights : bool If False, solve for decoders. If True, solve for weights.

class nengo.solvers.LstsqNoise(weights=False, noise=0.1, solver=Cholesky(transpose=None))[source]
Least-squares solver with additive Gaussian white noise.
Parameters: weights : bool, optional (Default: False) If False, solve for decoders. If True, solve for weights. noise : float, optional (Default: 0.1) Amount of noise, as a fraction of the neuron activity. solver : LeastSquaresSolver, optional (Default: Cholesky()) Subsolver to use for solving the least squares problem.
Attributes: noise : float Amount of noise, as a fraction of the neuron activity. solver : LeastSquaresSolver Subsolver to use for solving the least squares problem. weights : bool If False, solve for decoders. If True, solve for weights.

class nengo.solvers.LstsqMultNoise(weights=False, noise=0.1, solver=Cholesky(transpose=None))[source]
Least-squares solver with multiplicative white noise.
Parameters: weights : bool, optional (Default: False) If False, solve for decoders. If True, solve for weights. noise : float, optional (Default: 0.1) Amount of noise, as a fraction of the neuron activity. solver : LeastSquaresSolver, optional (Default: Cholesky()) Subsolver to use for solving the least squares problem.
Attributes: noise : float Amount of noise, as a fraction of the neuron activity. solver : LeastSquaresSolver Subsolver to use for solving the least squares problem. weights : bool If False, solve for decoders. If True, solve for weights.

class nengo.solvers.LstsqL2(weights=False, reg=0.1, solver=Cholesky(transpose=None))[source]
Least-squares solver with L2 regularization.
Parameters: weights : bool, optional (Default: False) If False, solve for decoders. If True, solve for weights. reg : float, optional (Default: 0.1) Amount of regularization, as a fraction of the neuron activity. solver : LeastSquaresSolver, optional (Default: Cholesky()) Subsolver to use for solving the least squares problem.
Attributes: reg : float Amount of regularization, as a fraction of the neuron activity. solver : LeastSquaresSolver Subsolver to use for solving the least squares problem. weights : bool If False, solve for decoders. If True, solve for weights.

class nengo.solvers.LstsqL2nz(weights=False, reg=0.1, solver=Cholesky(transpose=None))[source]
Least-squares solver with L2 regularization on non-zero components.
Parameters: weights : bool, optional (Default: False) If False, solve for decoders. If True, solve for weights. reg : float, optional (Default: 0.1) Amount of regularization, as a fraction of the neuron activity. solver : LeastSquaresSolver, optional (Default: Cholesky()) Subsolver to use for solving the least squares problem.
Attributes: reg : float Amount of regularization, as a fraction of the neuron activity. solver : LeastSquaresSolver Subsolver to use for solving the least squares problem. weights : bool If False, solve for decoders. If True, solve for weights.

class nengo.solvers.LstsqL1(weights=False, l1=0.0001, l2=1e-06, max_iter=1000)[source]
Least-squares solver with L1 and L2 regularization (elastic net).
This method is well suited for creating sparse decoders or weight matrices. Note Requires scikit-learn.
Parameters: weights : bool, optional (Default: False) If False, solve for decoders. If True, solve for weights. l1 : float, optional (Default: 1e-4) Amount of L1 regularization. l2 : float, optional (Default: 1e-6) Amount of L2 regularization. max_iter : int, optional Maximum number of iterations for the underlying elastic net.
Attributes: l1 : float Amount of L1 regularization. l2 : float Amount of L2 regularization. weights : bool If False, solve for decoders. If True, solve for weights. max_iter : int Maximum number of iterations for the underlying elastic net.

class nengo.solvers.LstsqDrop(weights=False, drop=0.25, solver1=LstsqL2(reg=0.001, solver=Cholesky(transpose=None), weights=False), solver2=LstsqL2(reg=0.1, solver=Cholesky(transpose=None), weights=False))[source]
Find sparser decoders/weights by dropping small values. This solver first solves for coefficients (decoders/weights) with L2 regularization, drops those nearest to zero, and retrains the remaining ones.
Parameters: weights : bool, optional (Default: False) If False, solve for decoders. If True, solve for weights. drop : float, optional (Default: 0.25) Fraction of decoders or weights to set to zero. solver1 : Solver, optional (Default: LstsqL2(reg=0.001)) Solver for finding the initial decoders. solver2 : Solver, optional (Default: LstsqL2(reg=0.1)) Used for re-solving for the decoders after dropout.
Attributes: drop : float Fraction of decoders or weights to set to zero. solver1 : Solver Solver for finding the initial decoders. solver2 : Solver Used for re-solving for the decoders after dropout. weights : bool If False, solve for decoders. If True, solve for weights.

class nengo.solvers.Nnls(weights=False)[source]
Non-negative least-squares solver without regularization. Similar to Lstsq, except the output values are non-negative. If solving for non-negative weights, it is important that the intercepts of the post-population are also non-negative, since neurons with negative intercepts will never be silent, affecting output accuracy. Note Requires SciPy.
Parameters: weights : bool, optional (Default: False) If False, solve for decoders. If True, solve for weights.
Attributes: weights : bool If False, solve for decoders. If True, solve for weights.

class nengo.solvers.NnlsL2(weights=False, reg=0.1)[source]
Non-negative least-squares solver with L2 regularization. Similar to LstsqL2, except the output values are non-negative. If solving for non-negative weights, it is important that the intercepts of the post-population are also non-negative, since neurons with negative intercepts will never be silent, affecting output accuracy. Note Requires SciPy.
Parameters: weights : bool, optional (Default: False) If False, solve for decoders. If True, solve for weights. reg : float, optional (Default: 0.1) Amount of regularization, as a fraction of the neuron activity.
Attributes: reg : float Amount of regularization, as a fraction of the neuron activity. weights : bool If False, solve for decoders. If True, solve for weights.

class nengo.solvers.NnlsL2nz(weights=False, reg=0.1)[source]
Non-negative least-squares with L2 regularization on nonzero components. Similar to LstsqL2nz, except the output values are non-negative. If solving for non-negative weights, it is important that the intercepts of the post-population are also non-negative, since neurons with negative intercepts will never be silent, affecting output accuracy. Note Requires SciPy.
Parameters: weights : bool, optional (Default: False) If False, solve for decoders. If True, solve for weights. reg : float, optional (Default: 0.1) Amount of regularization, as a fraction of the neuron activity. reg : float Amount of regularization, as a fraction of the neuron activity. weights : bool If False, solve for decoders. If True, solve for weights. class nengo.solvers.NoSolver(values=None, weights=False)[source] Manually pass in weights, bypassing the decoder solver. Parameters: values : (n_neurons, n_weights) array_like, optional (Default: None) The array of decoders or weights to use. If weights is False, n_weights is the expected output dimensionality. If weights is True, n_weights is the number of neurons in the post ensemble. If None, which is the default, the solver will return an appropriately sized array of zeros. weights : bool, optional (Default: False) If False, values is interpreted as decoders. If True, values is interpreted as weights. values : (n_neurons, n_weights) array_like, optional (Default: None) The array of decoders or weights to use. If weights is False, n_weights is the expected output dimensionality. If weights is True, n_weights is the number of neurons in the post ensemble. If None, which is the default, the solver will return an appropriately sized array of zeros. weights : bool, optional (Default: False) If False, values is interpreted as decoders. If True, values is interpreted as weights.
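NoSolver is handy for supplying decoders by hand, e.g. starting a learned connection at zero; a sketch (names assumed):

import numpy as np
import nengo

with nengo.Network():
    a = nengo.Ensemble(50, dimensions=1)
    b = nengo.Ensemble(50, dimensions=1)
    zero_decoders = np.zeros((a.n_neurons, 1))     # (n_neurons, dimensions)
    nengo.Connection(a, b,
                     solver=nengo.solvers.NoSolver(zero_decoders),
                     learning_rule_type=nengo.PES())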
2018-12-13 09:14:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2114032506942749, "perplexity": 6320.228368752453}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376824601.32/warc/CC-MAIN-20181213080138-20181213101638-00427.warc.gz"}
http://mathhelpforum.com/calculus/103115-curve-sketching-clarification-print.html
# curve sketching clarification

• September 19th 2009, 09:28 AM
linearalgebra
curve sketching clarification
My equation is f(x) = x/(x^2 - 4). From that, here's what I have gotten so far:
- domain: x is any real number except 2 and -2
- x-intercept is 0
- y-intercept is 0
- there is a horizontal asymptote at y = 0
- there are vertical asymptotes at x = 2 and x = -2
- to find the critical numbers, I have to take the derivative; using the quotient rule I came up with f'(x) = -(x^2 + 4)/(x^2 - 4)^2
1) How do I find the critical numbers by setting the derivative equal to 0? Am I dealing with the numerator? The denominator?
• September 19th 2009, 10:29 AM
Nacho
There are several kinds of critical numbers, for example where the graph changes concavity or where it changes from increasing to decreasing. Which are you looking for?
• September 19th 2009, 10:43 AM
linearalgebra
Quote:

Originally Posted by Nacho
There are several kinds of critical numbers, for example where the graph changes concavity or where it changes from increasing to decreasing. Which are you looking for?

Because it's the first derivative, I'm looking for intervals of increase or decrease.
• September 19th 2009, 10:52 AM
Krizalid
Then solve $f'(x)<0$ and $f'(x)>0.$
• September 19th 2009, 11:01 AM
Nacho
Quote:

Originally Posted by linearalgebra
Because it's the first derivative, I'm looking for intervals of increase or decrease.

OK, first you must split the analysis by interval, because there are points where the function is undefined:

$f'(x) = -\frac{x^2 + 4}{(x^2 - 4)^2}$

The numerator $x^2 + 4$ is never zero and the denominator is a square, so $f'(x) < 0$ everywhere on the domain. Therefore $f$ is decreasing on each of $(-\infty, -2)$, $(-2, 2)$ and $(2, \infty)$, and there are no critical numbers.

Here I upload the graph so you can get an idea.

Attachment 12967
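For readers who want to double-check the derivative and the sign analysis, here is a minimal SymPy sketch (assuming SymPy is available; variable names are illustrative):

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = x / (x**2 - 4)
fprime = sp.simplify(sp.diff(f, x))

print(fprime)  # -(x**2 + 4)/(x**2 - 4)**2, up to an equivalent form
# The numerator x**2 + 4 has no real zeros, so f has no critical numbers:
print(sp.solveset(sp.Eq(fprime, 0), x, domain=sp.S.Reals))  # EmptySet
```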
2016-08-29 15:14:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8851088881492615, "perplexity": 1952.7254549359143}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982964275.89/warc/CC-MAIN-20160823200924-00214-ip-10-153-172-175.ec2.internal.warc.gz"}
https://stats.stackexchange.com/questions/433224/number-of-causal-assumptions-in-an-overview-by-pearl/437699
# Number of Causal Assumptions in an Overview by Pearl

In the paper Causal Inference in Statistics: an Overview by Pearl, on page 11 (106 if you go by the journal's indexing), a graphical model is presented in figure 2(a). The text reads (picture below):

The chain model of Fig. 2(a), for example, encodes seven causal assumptions, each corresponding to a missing arrow or a missing double-arrow between a pair of variables.

How did the author conclude there are seven missing arrows?

None of the causal arrows below appear in Fig. 2(a). I am assuming time flows from top left to bottom right (i.e., so that $$Y \to X$$ cannot be a causal assumption, because causes must precede effects).

1. $$U_{Z} \to U_{X}$$
2. $$U_{Z} \to U_{Y}$$
3. $$U_{Z} \to X$$
4. $$U_{Z} \to Y$$
5. $$U_{X} \to U_{Y}$$
6. $$U_{X} \to Y$$
7. $$Z \to Y$$

This means that the causal world in Fig. 2(a) assumes there are none of the above seven direct causal effects. By contrast, each of the arrows actually appearing in the graph (e.g., $$U_{Z} \to Z$$, etc.) is an assumption of a direct causal effect.

EDIT: Based on correspondence with Judea Pearl. [Judea's quote is edited for the grammar/typos common in a brief email exchange.]

I had in mind the following:

$$U_{Z} \longleftrightarrow U_{X}$$
$$U_{Z} \longleftrightarrow U_{Y}$$
$$U_{X} \longleftrightarrow U_{Y}$$
$$Z \to Y$$
$$X \to Z$$
$$Y \to Z$$
$$Y \to X$$

The missing arrows you listed, e.g., $$U_{X} \to Y$$, are implied by the above, because $$U_{Y}$$ is defined as everything that affects $$Y$$ when $$X$$ is held constant.

• Why don't we count $Y \to X$? Is the time assumption not worthy like the other causal assumptions? And why don't we count $Z \to U_Y$? – Yair Daon Oct 26 '19 at 18:45
• @YairDaon $Z \to U_{Y}$ is a good question... gonna mull, and may edit my off-the-cuff answer. However, $Y \to X$ is forbidden as an assumption given the temporality of the variables: causes cannot follow effects in time (see the parenthetical). – Alexis Oct 28 '19 at 2:18
• Since the $U$ are unobserved causes of a variable, $Z \rightarrow U_Y$ is not distinguishable from $Z \rightarrow Y$. A relation like $Z \rightarrow Y$ always masks that there are many other variables along that path through which the effect runs. $Z \rightarrow Y$ means "Z affects the value of Y by means other than affecting $X$" in this model. – CloseToC Oct 28 '19 at 8:56
• @Alexis: My reading of the comments was that it's not clear what the 7 assumptions exactly are, in particular why $Z \rightarrow U_Y$, which would be an 8th, isn't counted. I believe it is because it would amount to double-counting $Z \rightarrow Y$, for the reason I mentioned. Have you thought of a different explanation? – CloseToC Oct 30 '19 at 9:44
• @CloseToC Judea Pearl clarified the assumptions and I have edited my answer to incorporate them. – Alexis Oct 31 '19 at 3:51

An exchange of comments with @Alexis (and their correspondence with Pearl himself) cleared things up for me. I can summarize as follows:

1. For the exogenous variables $$U_X, U_Y, U_Z$$ we only allow/count double arrows (just... because?). For these variables we have three missing (double) arrows, which are $$U_X \leftrightarrow U_Y$$, $$U_Z \leftrightarrow U_Y$$ and $$U_X \leftrightarrow U_Z$$.
2. For the endogenous variables $$X, Y, Z$$, we count only directed arrows (again, just because) and we have four missing such arrows, which are $$X \to Z$$, $$Y \to Z$$, $$Y \to X$$ and $$Z \to Y$$.
3. We do not count arrows such as $$U_X \to Z$$: $$U_Z$$ is defined as everything that affects $$Z$$ apart from the other endogenous variables ($$X, Y$$ in this case), so no other influence, specifically not $$U_X$$, is allowed.

This count gives seven missing arrows in total, as the text suggests.
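The count is easy to reproduce mechanically. A small Python sketch (the graph encoding is mine, assuming the chain $$Z \to X \to Y$$ from Fig. 2(a)):

```python
from itertools import combinations, permutations

endogenous = ["Z", "X", "Y"]
exogenous = ["U_Z", "U_X", "U_Y"]
present = {("Z", "X"), ("X", "Y")}  # the chain Z -> X -> Y

# Missing directed arrows among the endogenous variables:
missing_directed = [p for p in permutations(endogenous, 2) if p not in present]
# Missing double arrows (latent confounding) among the exogenous variables:
missing_double = list(combinations(exogenous, 2))

print(missing_directed)  # [('Z', 'Y'), ('X', 'Z'), ('Y', 'Z'), ('Y', 'X')]
print(missing_double)    # [('U_Z', 'U_X'), ('U_Z', 'U_Y'), ('U_X', 'U_Y')]
print(len(missing_directed) + len(missing_double))  # 7
```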
2020-01-17 12:50:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 31, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7967646718025208, "perplexity": 994.5500938235266}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250589560.16/warc/CC-MAIN-20200117123339-20200117151339-00269.warc.gz"}
https://eprint.iacr.org/2010/146
### Some Applications of Lattice Based Root Finding Techniques

Santanu Sarkar and Subhamoy Maitra

##### Abstract

In this paper we present some problems and their solutions exploiting lattice based root finding techniques. In CaLC 2001, Howgrave-Graham proposed a method to find the Greatest Common Divisor (GCD) of two large integers when one of the integers is exactly known and the other one is known approximately. In this paper, we present three applications of the technique. The first one is to show deterministic polynomial time equivalence between factoring $N$ ($N = pq$, where $p > q$ or $p, q$ are of the same bit size) and knowledge of $q^{-1} \bmod p$. Next, we consider the problem of finding smooth integers in a short interval. The third one is to factorize $N$ given a multiple of the decryption exponent in RSA.

In Asiacrypt 2006, Jochemsz and May presented a general strategy for finding roots of a polynomial. We apply that technique to solve the following two problems. The first one is to factorize $N$ given an approximation of a multiple of the decryption exponent in RSA. The second one is to solve the implicit factorization problem given three RSA moduli, assuming that certain portions of the LSBs as well as the MSBs of one set of three secret primes are the same.

Note: Substantial extension to earlier version.

Category: Public-key cryptography
Publication info: Published elsewhere. Unknown where it was published
Keywords: CRT-RSA, Greatest Common Divisor, Factorization, Integer Approximations, Lattice, LLL, RSA, Smooth Integers
Contact author(s): subho @ isical ac in
History: 2010-04-07: revised
Short URL: https://ia.cr/2010/146
License: CC BY

BibTeX

@misc{cryptoeprint:2010/146, author = {Santanu Sarkar and Subhamoy Maitra}, title = {Some Applications of Lattice Based Root Finding Techniques}, howpublished = {Cryptology ePrint Archive, Paper 2010/146}, year = {2010}, note = {\url{https://eprint.iacr.org/2010/146}}, url = {https://eprint.iacr.org/2010/146} }
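Of the problems listed, finding smooth integers in a short interval is the easiest to state concretely. A naive trial-division baseline (emphatically not the lattice technique of the paper, just an illustration of the problem statement):

```python
def is_smooth(n, B):
    """Return True if every prime factor of n is at most B."""
    for d in range(2, B + 1):  # dividing by composites is harmless here
        while n % d == 0:
            n //= d
    return n == 1

# All 20-smooth integers in the short interval [10**6, 10**6 + 200].
# (10**6 = 2**6 * 5**6 itself is 20-smooth, so the list is non-empty.)
print([n for n in range(10**6, 10**6 + 201) if is_smooth(n, 20)])
```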
2022-07-05 04:05:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41948989033699036, "perplexity": 1414.9175990875046}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104512702.80/warc/CC-MAIN-20220705022909-20220705052909-00605.warc.gz"}