## Gaussian binomials and the number of sublattices. (English) Zbl 1370.11075

Summary: The purpose of this short communication is to make some observations on the connections between various existing formulas for counting the number of sublattices of a fixed index in an $$n$$-dimensional lattice, and on how those formulas relate to the Gaussian binomials.

### MSC:

11H06 Lattices and convex bodies (number-theoretic aspects)
11B65 Binomial coefficients; factorials; $$q$$-identities

### Keywords:

Gaussian binomials; sublattices
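For a concrete handle on the counts in question, here is a short sketch (our own illustration, not from the paper) that counts sublattices of index $$m$$ in $$\mathbb{Z}^n$$ by enumerating Hermite normal forms, a standard counting device; the index-$$p$$ count recovers the Gaussian binomial $$\binom{n}{1}_p = 1 + p + \cdots + p^{n-1}$$.

```python
def sublattice_count(n, m):
    """Count sublattices of index m in Z^n via upper-triangular Hermite
    normal forms: positive diagonal d_1..d_n with product m, and the j-th
    column's off-diagonal entries reduced mod d_j.  Each HNF corresponds to
    exactly one sublattice, and a diagonal (d_1,...,d_n) admits
    prod_j d_j**(j-1) distinct HNFs."""
    def divisor_tuples(k, r):
        if k == 1:
            yield (r,)
            return
        for d in range(1, r + 1):
            if r % d == 0:
                for rest in divisor_tuples(k - 1, r // d):
                    yield (d,) + rest

    total = 0
    for diag in divisor_tuples(n, m):
        count = 1
        for j, d in enumerate(diag):   # column j has j free entries mod d
            count *= d ** j
        total += count
    return total

# Index p in Z^n gives 1 + p + ... + p^(n-1):
print(sublattice_count(3, 5))   # 1 + 5 + 25 = 31
```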
# Find the largest possible value of N such that N! can be expressed as the product of (N-4) consecutive positive integers

Find the largest possible value of the positive integer N such that N! can be expressed as the product of (N-4) consecutive positive integers.

Answer: 119

The largest positive integer n for which n! can be expressed as the product of n-a consecutive positive integers is (a+1)! - 1.

Proof: Let the largest of the n-a consecutive positive integers be k. Clearly k cannot be less than or equal to n, else the product of the n-a consecutive positive integers would be less than n!. Now, observe that for n to be maximum, the smallest (starting) number of the n-a consecutive positive integers must be minimum, implying that k needs to be minimum. But the least k > n is n+1. So the n-a consecutive positive integers are a+2, a+3, ..., n+1, and we have

(n+1)!/(a+1)! = n! => n+1 = (a+1)! => n = (a+1)! - 1

For a = 4 we have n = 5! - 1 = 119.
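As a quick numeric check of the a = 4 case (our own, standard library only): 119! should equal the product of the 115 consecutive integers 6 through 120, i.e. 120!/5!.

```python
from math import factorial, prod

# Verify: 119! = 6 * 7 * ... * 120, a product of 115 = 119 - 4 consecutive integers.
N = 119
assert factorial(N) == prod(range(6, 121))
assert factorial(N) == factorial(120) // factorial(5)
print("checks pass")
```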
# The expected number of zeros of a random system of $p$-adic polynomials

Report Number: 699
Authors: Steven N. Evans

Abstract: We study the simultaneous zeros of a random family of $d$ polynomials in $d$ variables over the $p$-adic numbers. For a family of natural models, we obtain an explicit constant for the expected number of zeros that lie in the $d$-fold Cartesian product of the $p$-adic integers. This expected value, which is $\left(1 + p^{-1} + p^{-2} + \cdots + p^{-d}\right)^{-1}$ for the simplest model, is independent of the degree of the polynomials.
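The constant from the abstract is easy to evaluate; the snippet below (our own illustration, not from the report) tabulates it exactly for a few small $p$ and $d$.

```python
# Expected number of zeros in Z_p^d for the simplest model:
# (1 + p^{-1} + ... + p^{-d})^{-1}, per the abstract above.
from fractions import Fraction

def expected_zeros(p, d):
    return 1 / sum(Fraction(1, p**k) for k in range(d + 1))

print(expected_zeros(2, 1))          # 2/3
print(expected_zeros(3, 2))          # 9/13
print(float(expected_zeros(2, 50)))  # approaches 1/2 as d grows
```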
# What factors relate the number of protons in the nucleus with the number of electrons in the orbitals?

Atoms always "want" to have a closed shell, because it requires little energy compared to the lattice enthalpy. How does this always match up, throughout the periodic table, between the number of protons and electrons? For example, magnesium has 12 protons and its electron configuration ends in 3s^2, so it "wants" to lose 2 electrons. Barium has 56 protons and its electron configuration ends in 6s^2, so it easily loses 2. Why, in bigger atoms like barium, where the electrons are at a greater distance from the nucleus, wouldn't the atom be more stable losing 3 electrons, say, so that it had 56 protons and 53 electrons instead of 54? Or why are the noble gases in the last group not stable with one electron more or less? It's as if the orbital rules described by the Schrödinger equation are always respected, but so is the energy of attraction between proton and electron, and the two always match perfectly. I think what I mean is: +1 proton attracts +1 electron, and it fits in some orbital given the Pauli exclusion principle and the Schrödinger equation. Why does the relation between the strength of the electromagnetic force and the Schrödinger equation always create this stability between protons and electrons throughout the periodic table?

• Tagging a question about the electron configuration of high $Z$ atoms with [schroedinger-equation] is a little optimistic if you are imagining that someone can write down a solution in terms of a mathematical expression. Last I heard you still had to switch to variational solutions by the end of the third period. Now I suppose that the state of the art is ever improving, but I doubt there exist closed-form solutions for atoms in the sixth period. – dmckee --- ex-moderator kitten Jun 16 '16 at 22:18
# Implement RFC-534: Update naming of base_Blendedness fields

## Details

- Type: Story
- Status: Done
- Resolution: Done
- Fix Version/s: None
- Component/s:
- Labels: None
- Story Points: 3
- Sprint: DRP F18-5
- Team: Data Release Production

## Description

The implementation of this RFC includes:

1. strip "_instFlux" from the base_Blendedness_raw_instFlux and base_Blendedness_abs_instFlux names
2. move "_instFlux" to the end of the name in the fields with actual flux units, thus:
   - base_Blendedness_raw_instFlux_child --> base_Blendedness_raw_child_instFlux
   - base_Blendedness_raw_instFlux_parent --> base_Blendedness_raw_parent_instFlux
   - base_Blendedness_abs_instFlux_child --> base_Blendedness_abs_child_instFlux
   - base_Blendedness_abs_instFlux_parent --> base_Blendedness_abs_parent_instFlux
3. an update to the doc strings to make a clear distinction/description between "raw" and "abs"

## Activity

Lauren MacArthur added a comment:
As per discussion on DM-16068, this will also include the following change for the field
name="deblend_psfFlux", doc="If deblended-as-psf, the PSF flux":
deblend_psfFlux --> deblend_psf_instFlux

Lauren MacArthur added a comment (edited):
I'm thinking I should also change these:
name="deblend_psfCenter_x", doc="If deblended-as-psf, the PSF centroid", units="pixel"
name="deblend_psfCenter_y", doc="If deblended-as-psf, the PSF centroid", units="pixel"
to
name="deblend_PsfCentroid_x"
name="deblend_PsfCentroid_y"
to match, e.g., name="base_SdssCentroid_x", and that the flux name above should actually be
name="deblend_PsfFlux_instFlux"
to match, e.g., name="base_PsfFlux_instFlux".
Jim Bosch, do you agree?

Jim Bosch added a comment:
The convention we use for measurement plugin fields isn't as clearly applicable here, as the deblender isn't a measurement plugin. In particular, while plugin names are supposed to be distinct and achieve this by embedding the class name in the field names (because they can all be run together at once), different deblenders are mutually exclusive in the same catalog (at least the way things are structured now), so it's more desirable that their fields have the same names. I think it's probably best to minimize changes to those except when necessary to make sure:

- instrumental fluxes end in _instFlux
- centroids end in _x, _y
- shapes end in _xx, _yy, _xy

In other words, as long as the last bit of the suffix is ok, I think the names can mostly stay as-is.
Lauren MacArthur added a comment:
Awesome, thanks... I will stick to what was on tap before my previous comment!

Lauren MacArthur added a comment (edited):
Jim, could you give this a quick look to make sure I've made the changes as proposed? I ran multiBandDriver.py (pointing at the outputs from the w_2018_41 processing at /datasets/hsc/repo/rerun/RC/w_2018_41/DM-16011/ on lsst-dev) as input; the output directory is datasets/hsc/repo/rerun/private/lauren/DM-16070multi/. The new fields in the deepCoadd_meas schema look like this:

(Field['D'](name="base_Blendedness_raw", doc="Measure of how much instFlux is affected by neighbors: (1 - child_instFlux/parent_instFlux). Operates on the "raw" pixel values."), Key<D>(offset=120, nElements=1)),
(Field['D'](name="base_Blendedness_raw_child_instFlux", doc="instFlux of the child, measured with a Gaussian weight matched to the child. Operates on the "raw" pixel values.", units="count"), Key<D>(offset=128, nElements=1)),
(Field['D'](name="base_Blendedness_raw_parent_instFlux", doc="instFlux of the parent, measured with a Gaussian weight matched to the child. Operates on the "raw" pixel values.", units="count"), Key<D>(offset=136, nElements=1)),
(Field['D'](name="base_Blendedness_abs", doc="Measure of how much instFlux is affected by neighbors: (1 - child_instFlux/parent_instFlux). Operates on the absolute value of the pixels to try to obtain a "de-noised" value. See section 4.9.11 of Bosch et al. 2018, PASJ, 70, S5 for details."), Key<D>(offset=144, nElements=1)),
(Field['D'](name="base_Blendedness_abs_child_instFlux", doc="instFlux of the child, measured with a Gaussian weight matched to the child. Operates on the absolute value of the pixels to try to obtain a "de-noised" value. See section 4.9.11 of Bosch et al. 2018, PASJ, 70, S5 for details.", units="count"), Key<D>(offset=152, nElements=1)),
(Field['D'](name="base_Blendedness_abs_parent_instFlux", doc="instFlux of the parent, measured with a Gaussian weight matched to the child. Operates on the absolute value of the pixels to try to obtain a "de-noised" value. See section 4.9.11 of Bosch et al. 2018, PASJ, 70, S5 for details.", units="count"), Key<D>(offset=160, nElements=1)),
(Field['D'](name="base_Blendedness_raw_child_xx", doc="Shape of the child, measured with a Gaussian weight matched to the child. Operates on the "raw" pixel values.", units="pixel^2"), Key<D>(offset=168, nElements=1)),
(Field['D'](name="base_Blendedness_raw_child_yy", doc="Shape of the child, measured with a Gaussian weight matched to the child. Operates on the "raw" pixel values.", units="pixel^2"), Key<D>(offset=176, nElements=1)),
(Field['D'](name="base_Blendedness_raw_child_xy", doc="Shape of the child, measured with a Gaussian weight matched to the child. Operates on the "raw" pixel values.", units="pixel^2"), Key<D>(offset=184, nElements=1)),
Operates on the "raw" pixel values.", units="pixel^2"), Key(offset=184, nElements=1)), (Field['D'](name="base_Blendedness_raw_parent_xx", doc="Shape of the parent, measured with a Gaussian weight matched to the child. Operates on the "raw" pixel values.", units="pixel^2"), Key(offset=192, nElements=1)), (Field['D'](name="base_Blendedness_raw_parent_yy", doc="Shape of the parent, measured with a Gaussian weight matched to the child. Operates on the "raw" pixel values.", units="pixel^2"), Key(offset=200, nElements=1)), (Field['D'](name="base_Blendedness_raw_parent_xy", doc="Shape of the parent, measured with a Gaussian weight matched to the child. Operates on the "raw" pixel values.", units="pixel^2"), Key(offset=208, nElements=1)), (Field['D'](name="base_Blendedness_abs_child_xx", doc="Shape of the child, measured with a Gaussian weight matched to the child. Operates on the absolute value of the pixels to try to obtain a "de-noised" value. See section 4.9.11 of Bosch et al. 2018, PASJ, 70, S5 for details.", units="pixel^2"), Key(offset=216, nElements=1)), (Field['D'](name="base_Blendedness_abs_child_yy", doc="Shape of the child, measured with a Gaussian weight matched to the child. Operates on the absolute value of the pixels to try to obtain a "de-noised" value. See section 4.9.11 of Bosch et al. 2018, PASJ, 70, S5 for details.", units="pixel^2"), Key(offset=224, nElements=1)), (Field['D'](name="base_Blendedness_abs_child_xy", doc="Shape of the child, measured with a Gaussian weight matched to the child. Operates on the absolute value of the pixels to try to obtain a "de-noised" value. See section 4.9.11 of Bosch et al. 2018, PASJ, 70, S5 for details.", units="pixel^2"), Key(offset=232, nElements=1)), (Field['D'](name="base_Blendedness_abs_parent_xx", doc="Shape of the parent, measured with a Gaussian weight matched to the child. Operates on the absolute value of the pixels to try to obtain a "de-noised" value. See section 4.9.11 of Bosch et al. 2018, PASJ, 70, S5 for details.", units="pixel^2"), Key(offset=240, nElements=1)), (Field['D'](name="base_Blendedness_abs_parent_yy", doc="Shape of the parent, measured with a Gaussian weight matched to the child. Operates on the absolute value of the pixels to try to obtain a "de-noised" value. See section 4.9.11 of Bosch et al. 2018, PASJ, 70, S5 for details.", units="pixel^2"), Key(offset=248, nElements=1)), (Field['D'](name="base_Blendedness_abs_parent_xy", doc="Shape of the parent, measured with a Gaussian weight matched to the child. Operates on the absolute value of the pixels to try to obtain a "de-noised" value. See section 4.9.11 of Bosch et al. 2018, PASJ, 70, S5 for details.", units="pixel^2"), Key(offset=256, nElements=1)) I ran a successful lsst_distrib lsst_ci ci_hsc Jenkins over the weekend, but made a few updates today (and added COPYRIGHT & LICENSE files), so another one is running (it has PASSED). I will also write a brief community post outlining the changes once this gets merged. Show Lauren MacArthur added a comment - - edited Jim, could you give this a quick look to make sure I've made the changes as proposed. I ran multiBandDriver.py (pointing at the outputs from the w_2018_41 processing at /datasets/hsc/repo/rerun/RC/w_2018_41/ DM-16011 / on lsst-dev as input: output directory is datasets/hsc/repo/rerun/private/lauren/ DM-16070 multi/. 
Jim Bosch added a comment:
Made a few suggestions about docstrings on the PRs. Feel free to merge after you think you've addressed them; no need to send it back to me.

Lauren MacArthur added a comment (edited):
Thanks for getting to this so quickly. I made all your suggested changes and am rerunning Jenkins just to be safe.

## People

- Assignee: Lauren MacArthur
- Reporter: Lauren MacArthur
- Reviewers: Jim Bosch
- Watchers: Jim Bosch, John Parejko, Lauren MacArthur, Yusra AlSayyad
# In the case of a simple pendulum, a graph is drawn between the displacement and the time taken for its oscillations, as shown in the figure. Then the number of oscillations completed by the bob in one second is _____.

A. 1/3
B. 1/3
C. 1/9
D. 1
# Browsing category: ranting

## The insidious maths of payday loans (with added logarithms)

Compared to regular small print, it's pretty big, but compared to the four-panel, faux-war-era comic story (disaster strikes, protagonist calls Rent-A-Loan*, cash magically appears and they all live happily ever after), you could easily overlook it. You might even look at the bottom of the poster and think "that can't…

## Where to start when you don't know where to start

Right! That's it. I'm mad as hell and I'm not going to take it any more. If one more student looks at a question and says "I don't know where to start!" I'm going to... I'm going to... I'm going to FROWN and look at them VERY CROSSLY. Because I…

## Happy $\pi$ day!

Happy Pi Day! mu-tant cow pi by janemariejett In the American date format, today is 3.14 - a pretty good approximation to $\pi$, which (as any fule kno) is the ratio between a circle's circumference and diameter. It's also the focus of most of the jokes about maths, because -…

## Uswitch, units and unitwit

A couple of weeks ago, a news story flashed by me on twitter: 49% of the UK have slower-than-average broadband. It was followed more or less immediately by a swathe of snipey comments saying 'isn't that the definition of an average?' To which the smart-arse answer is, no, there is…

## Questions with only one answer

There are many questions with only one answer. Or at least, one sensible answer. If someone asks you "Are you dead?", the only possible answer you can give is "no". "Does my bum look big in this?" "Of course not, dear, it looks lovely". And if you say "Is there…

## Why radians rock (and degrees don't)

If I could wave a magic wand and overhaul just one thing to make the world a better place, I'd have a tough choice. Would I get rid of the QWERTY keyboard in favour of a more sensible layout? Would I make the English language fonetik? Would I take maths…

## In praise of… watching football tournaments at exam time

or, why we need to show exams the red card Oh, Jesus wept. A study shows that in even years - when there's meaningful international football in the summer - boys' exam performances drop. The authors suggest starting the exams earlier so as not to disadvantage football-watchers. Let's leave aside…

## Everyday maths: Crossing the Road

Here, let me get on this soapbox a moment. Things that irritate me beyond end, number two in a series of, well, lots: grumbles about how "oh, it's pointless teaching maths, you only ever use it to work out discounts in the supermarket and there are calculators on your phone…

## How to give your kids help with maths homework

... or, three easy steps to talking with your kids about $\sec(x)$. If there's one question parents dread more than "Where do babies come from?", it's "Can you give me some help with maths homework?" There are many possible reasons for this. The most common one I hear is "they…
# Irreducible homogeneous polynomials of arbitrary degree

Suppose we have an algebraically closed field $F$ and $n+1$ variables $X_0, \dots, X_n$, where $n > 1$. Does there exist an irreducible homogeneous polynomial in these variables of degree $d$ for any positive integer $d > 1$? In other words, does there always exist an irreducible hypersurface of arbitrary degree? Of course, I am also interested in constructions of these polynomials. Thank you.

Yes, this is straightforward. First note that a homogeneous polynomial $f(X_0, \ldots, X_n)$ which is not divisible by $X_0$ is irreducible if and only if $f(1, X_1, \ldots, X_n)$ is irreducible, so the problem reduces to constructing irreducible polynomials in $k[x_1, \ldots, x_n]$ of degree $d$. To do this we can take $x_1^2 - x_2 h(x_3, \ldots, x_n)$ for any polynomial $h$ of degree $d-1$. (If $n = 2$ then take $h = 1$.)
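As a quick sanity check of the construction (our own, not from the answer), one can factor an instance with SymPy. SymPy factors over $\mathbb{Q}$, so an irreducible result here is only evidence, not a proof, of irreducibility over the algebraically closed field; the particular choice of $h$ below is arbitrary.

```python
from sympy import symbols, factor_list

x1, x2, x3 = symbols('x1 x2 x3')
d = 5
h = x3**(d - 1)              # any h of degree d-1; this choice is just an example
f = x1**2 - x2 * h           # total degree d = max(2, 1 + deg h)

coeff, factors = factor_list(f)
print(factors)               # [(x1**2 - x2*x3**4, 1)] -> no nontrivial factor over Q
```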
# Palm-Size Spy Planes

Up-to-date intelligence is a scarce commodity on the battlefield. Small semiautonomous surveillance aircraft now being developed could enable combat troops to see what lies beyond the next tree line or over the next hill.

Mechanical Engineering 120(02), 74-78 (Feb 01, 1998) (5 pages) doi:10.1115/1.1998-FEB-3

## Abstract

This article discusses how today's squad leader must still risk troops to scout out what lies over the next hill, beyond the next tree line, or inside the next building. The Department of Defense is trying to help ground troops at the platoon, company, or brigade level with this crucial task by giving them tiny spy planes, called micro aerial vehicles (MAVs), to search the local terrain. Planners at the Defense Advanced Research Projects Agency (DARPA) envision equipping small combat units with their own "organic" intelligence assets that can locate and monitor possible threats. DARPA planners define a MAV as a semiautonomous airborne vehicle, measuring less than 6 inches in any dimension and weighing about 4 ounces, that can accomplish a useful military mission at an affordable cost. The most likely parameters to sense for MAV stabilization are inertial angular rate, differential and absolute pressure, acceleration, and the Earth's magnetic and electrostatic fields; optical sensing could be used for angular position and rate stabilization. Developing useful micro aerial vehicles is going to be a severe design engineering challenge.

## Article

Keeping aware of situations amid the chaos of combat is one of the most critical but troublesome tasks battlefield commanders must face. Since the airplane was developed, the upper echelons of the armed forces have benefited from ever-greater access to aerial reconnaissance data with which to plan their battles. In recent years, portable satellite data links have started to bring theater-level surveillance information to the lower levels of the military hierarchy nearly in real time. Large-area intelligence assets like spy planes, unmanned drones, and satellites are not always able to provide detailed small-area information to frontline commanders in a timely manner, however. Today's squad leader must still risk troops to scout out what lies over the next hill, beyond the next tree line, or inside the next building.

The Department of Defense is trying to help ground troops at the platoon, company, or brigade level with this crucial task by giving them tiny spy planes, called micro aerial vehicles (MAVs), to search the local terrain. Planners at the Defense Advanced Research Projects Agency (DARPA) envision equipping small combat units with their own "organic" intelligence assets that can locate and monitor possible threats. Technical evaluations conducted at the Massachusetts Institute of Technology's Lincoln Laboratories in Lexington and the Naval Research Laboratory in Washington, D.C., have concluded that the concept is workable. DARPA is currently launching a three-year, $35 million program to develop MAVs. Negotiations are now being conducted that will lead to Small Business Innovation Research grants and other types of research and development awards to a range of organizations, including university laboratories, aerospace firms, and small businesses. The agency also plans to select a number of efforts for MAV system development and demonstration. Several prototype MAV technologies have already shown some promise.
Engineers at AeroVironment Inc. in Simi Valley, Calif., have flown a palm-size disk-wing airplane for 16 minutes on lithium battery power. The small Black Widow MAV prototype, which looks like a discus with a propeller, tail, and flaps, awaits completion of its miniaturized computer flight-control, navigation, and communications systems.

Progress is also being made in addressing the need for substantially longer-lasting power sources. IGR Enterprises Inc., a small technology company in Beachwood, Ohio, is developing very lightweight, one-time-use solid-oxide fuel cells that have several times the energy density found in lithium batteries. M-DOT, a technology firm in Phoenix, is working on a diminutive gas-turbine engine that will produce approximately 1.4 pounds of thrust. Another company currently receiving government support is Aerodyne Corp. in Billerica, Mass. Engineers there are working on a radical hover-vehicle design, a "fin-stabilized oblate spheroid" that flies as a lifting body. The football-like aircraft will use eight ventrally located microturbofans such as the miniature-scale turbine engine M-DOT is developing.

At the same time, even more unconventional flight technologies are being pursued. Engineers at several locations, including the Georgia Tech Research Institute (GTRI) in Atlanta; SRI International in Menlo Park, Calif.; Vanderbilt University in Nashville, Tenn.; and the California Institute of Technology in Pasadena, are investigating the wing-flapping technology that would make bird-, bat-, or insect-like "ornithopters" possible.

DARPA planners define a MAV as a semiautonomous airborne vehicle, measuring less than 6 inches in any dimension and weighing about 4 ounces, that can accomplish a useful military mission at an affordable cost (less than $1,000 if it is to be a throwaway system). Nominal performance goals include real-time imaging, navigation, and communications capabilities, a range of up to 6 miles, and a top speed of up to 30 miles per hour, during missions lasting 20 minutes to 2 hours. "These systems are at least 10 times smaller than any current flying system," said James McMichael, DARPA's MAV program manager. "They will be uniquely suited to the challenges of small unit operations and operations in urban terrain. For the first time, they will give individual soldiers and Marines an asset they own and control that can provide real-time situational awareness and reconnaissance information."

Micro aerial vehicles may be regarded as "six-degree-of-freedom" sensor platforms that will enable a broad spectrum of small-unit and special operations. Missions might include video and multispectral (infrared) reconnaissance and surveillance, battle-damage assessment, targeting of weapons on key installations, placement of autonomous sensors, a communications relay, or the detection of hazardous substances or land mines. Other uses are also under consideration, such as monitoring hostage situations or weapons-ban treaties, patrolling national borders, and searching for disaster survivors.

Work on the MAV concept began in the early 1990s, when a government-funded Rand Corp. study stated that extremely small reconnaissance vehicles with tiny sensors should be feasible. By 1994, researchers at Lincoln Labs had begun considering the issue, said William R. Davis, leader of the labs' Optical Systems Engineering Group, which he said was involved early on for its expertise in advanced sensors, communications, and aerodynamics.
After consulting with DARPA and potential field users, the Lincoln Labs team came to some basic conclusions regarding the vehicles' core mission. To be truly useful, MAVs need to carry a short-range day/night area imaging system with enough resolution for operators to discern important details in the transmitted scene. The system must feature an accurate geolocation capability so users will know where the images come from. Sufficient vehicle range and real-time communications are also key. Moreover, MAVs have to be lightweight and robust enough to be carried in a backpack. And, if possible, the systems should be sufficiently inexpensive to be expendable. Another crucial requirement is for the craft to be "covert: difficult to see, hear, and otherwise detect, so it doesn't give its presence away nor compromise the operator's location," Davis said. "We asked ourselves: Looking at it from a systems point of view, what's the smallest vehicle we can get by with?" By and large, the answer was that an optimal MAV should be as close as possible to a flying sensor chip.

An airplane, the saying goes, is nothing more than a series of compromises flying in close formation, so imagine the severe compromises that were needed to design a tiny unmanned plane like a MAV. According to the team at Lincoln Labs, the MAVs will require high degrees of system integration with unprecedented levels of multifunctionality, component integration, payload integration, and minimization of interfaces among functional elements. One key engineering issue will arise from close-coupled, dynamic electromagnetic and thermal interactions that are brought about by close proximity. Among the specific significant engineering challenges to successful MAV deployment are ultra-compact, lightweight, high-power- and high-energy-density propulsion and power sources; novel concepts for lift generation; flight stabilization and control for aerodynamic environments with very low Reynolds numbers; lightweight, secure, low-power onboard electronic processing and communications with sufficient bandwidth for real-time imaging; microgyroscopes and very small onboard guidance, navigation, and geolocation systems; a high degree of functional/physical design synergy achieved through highly integrated electromechanical multifunctional modules (for example, combined flight-control, collision-avoidance, navigation, and communications systems); advanced lightweight, strong structures; high g-hardening and special packaging for projectile-release systems; and last but not least, the development or modification of a variety of advanced MAV-tailored sensors.

The Black Widow, AeroVironment's prototype micro aerial vehicle, has flown for 16 minutes using an electric motor powered by a lithium battery. Engineers at M-DOT have demonstrated an egg-sized gas-turbine engine that develops approximately 1.4 pounds of thrust. This mock-up illustrates the Black Widow's flex circuit, which will incorporate all of the tiny plane's electrical connections and antennas (in the tail). Internal subsystems include linear actuators for the elevons, payload camera, three-axis magnetometers, piezoelectric gyros, Global Positioning System receiver, a pressure sensor, a central processor, solar cells, and lithium and nickel-cadmium batteries.

## Propulsion Defines Aerodynamics

Davis said that "the most challenging near-term technical development item for MAVs is the propulsion system and the related aerodynamic issues.
[However,] if you have a good propulsion system, you can overcome most problems with aerodynamics."

"Propulsion is definitely the long pole in the tent," said Richard Foch, head of the vehicle research section in the off-board countermeasures branch of the Naval Research Laboratory's Tactical Electronic Warfare Division. "These systems require a method to generate enough aerodynamic thrust in an extremely efficient manner. Given a good power source and propulsion system, the aerodynamics for MAVs don't look too bad," he said. "Of course, developing an airplane without a power plant is a fairly risky business. But the tiny machines we're considering have a lift-to-drag ratio between 3 and 10, so you can calculate how much energy is needed to make it fly."

According to the Lincoln Labs engineers, a 6-inch propeller-driven vehicle with a lift-to-drag ratio of 5 will require about 2.5 watts of shaft power for cruising and double that for climbing, turning, or hovering. This low power regime means standard model-airplane engines are four or five times too big, according to Davis. Of the three general classes of available power systems (mechanical-energy storage, electric drives, and thermal-cycle machines), only a few seem suitable. Internal combustion engines have the most near-term promise, Davis said. Mechanical-energy storage systems using springs, compressed gas, or flywheels are not deemed practical.

Electric propulsion is also promising. Electric motors of the required size are available, drawing on electrochemical batteries, fuel cells, microturbine generators, thermal photovoltaic generators, solar cells, and other systems. The first three sources are considered the most practical because calculations indicate that a power density of about 300 milliwatts per gram and an energy density of about 700 joules per gram are required for a robust electric system. Foch noted that new small motor designs such as the brushless neodymium-iron-boron magnet type are now running at 90-percent efficiencies. A lightweight power system comprising a high-efficiency electric motor and the best lithium batteries would run 20 to 30 minutes, he said. Although current lithium battery performance is marginal for this application, its performance should improve in the near future. Fuel cells, meanwhile, are not yet sufficiently small for the MAV application, but the technology, which should be ready in three or four years, is considered to be a good bet, according to Foch.

IGR has demonstrated the technical feasibility of small nonregenerative solid-oxide fuel cells that could provide more than two to four times the energy density (in weight and volume) of the best nonrechargeable lithium batteries, said Arnold Z. Gordon, IGR's president. Roughly the size of a 1-centimeter-tall playing card and weighing a mere 25 grams, the fuel cell "should provide all the power a MAV should need," he said. Gordon said that his firm's proprietary solid-state power unit spontaneously generates electric power with the addition of fuel and air. Almost the entire power unit, he said, is made out of steel; the sole exception is its solid ceramic electrolyte, which also serves as the permeable membrane. Gordon noted that the ceramic electrolyte "is formulated as a composite, which provides it with useful mechanical viability. Previous solid electrolytes were very brittle, while the new design can flex a bit." The system's oxidant is ambient air, so "all that's needed are two holes for air coming in and going out."
(Gordon did not reveal the type of fuel to be used.) He added that the power unit, which runs hot, fits in a heat-exchanger insulation unit that protects surrounding apparatus and preheats the incoming air. Operation would be controlled by special-purpose, low-frequency, low-power integrated circuits. Unlike most refuelable fuel cells, the IGR device would run to completion once the reaction is started (approximately 1 or 2 hours). In addition to clean, quiet operation with instant start-up and no cold-weather problems, the device is nontoxic and has an essentially infinite shelf life with no maintenance, Gordon said.

A promising but technically difficult power source is the microturbine, a microelectromechanical-systems- (MEMS-) based gas-turbine-engine/electric-generator set the size of a shirt button that weighs a mere 1 gram. The microturbine is now under development in an ambitious project at MIT led by Alan Epstein (see "Turbines on a Dime," October 1997). This technology seems at least three or four years away at best.

Thermal-cycle machines such as rockets, pulse jets, steam-cycle engines, microturbine fan jets, and Stirling and internal-combustion engines are possible MAV power sources. Internal-combustion engines seem to hold a great deal of promise. While the thermal efficiencies of internal-combustion engines at this small scale are likely to be only about 5 percent, power densities are typically about 1 watt per gram, and the engines use high-energy fuels. So far, however, truly suitable internal-combustion engines have not yet been built. Noise and reliability issues must also be overcome. The small fan jet (the M-DOT unit and a variant of the MIT microturbine) is similarly attractive. Jon Sherbeck, M-DOT's director of engineering, is leading the effort to develop a scaled-down version of a conventional jet engine that produces 1.4 pounds of thrust. Using off-the-shelf parts such as dental-drill bearings, the M-DOT group is running a 3-inch-long, 1 %-inch-diameter turbine that weighs only 85 grams.

## The Problems of Being Small

"You can't just shrink a 747 proportionally down to 6 inches and expect it to fly," said Samuel Blankenship, principal researcher at GTRI and coordinator of Georgia Tech's Focused Research Program for Microflyers. Because of their small size and low airspeed, MAVs will fly at Reynolds numbers lower than for conventional aircraft. The first challenge is to create an efficient wing design that can provide enough lift and sufficiently low drag for a vehicle in that size range, where aerodynamic behavior is different from that of larger, faster aircraft. Viscous forces are more significant when you get down to this size and airspeed range. The MAVs have proportionally larger drag compared with a larger vehicle, so they are operating at a low Reynolds number. "Before MAVs," said Foch, "it used to be that a low-Reynolds-number regime was 100,000 to 1 million; now low is 5,000 to 80,000, which is pretty much outside the current database."

In addition, boundary-layer characteristics are different. Boundary layers tend to be laminar rather than turbulent in this flight regime, he said. There are also different separation effects: The airflow tends to detach easily, causing a lot of separation in the boundary layer. These aerodynamic conditions are expected to drive designers to new wing sections and wing-body configurations to obtain optimum performance.
Model-airplane experience will undoubtedly help the design effort. Foch cautioned that it is relatively difficult to do wind-tunnel tests on the thin airfoils required at this small size. The forces being monitored are so slight that "acoustic noise and vibration tend to pollute the data," he said. "They can also trip the boundary layer." Georgia Tech, MIT, the Naval Research Laboratory, and the University of Notre Dame in Notre Dame, Ind., are said to be working on the sensitive balances needed to do this work.

The small sizes of MAVs pose another design complication in modeling airflow, according to Foch: "In conventional airplanes, we normally treat wing design as a two-dimensional problem. But because we're living with so much separated flow, and we're trying to take advantage of the vortices that form, we can no longer treat it as a 2-D problem. You have to consider three-dimensional effects in the spanwise direction. For example, the transient sideways momentum has a big effect on the stability of the vortices that are creating the extra lift you need."

Another challenge arises from limited propeller efficiencies. "At 6 inches," Foch said, "propellers are still big enough to operate reasonably efficiently," adding that 3 inches is the lower limit. "Below that size, you might have to flap wings, but that is second-generation technology that still needs a lot of basic research." Fifty-percent efficiency has been demonstrated with 5-centimeter-diameter propellers rotating at 25,000 rpm. Larger propellers could be more efficient, but increased torque and extra mass for gear reduction are needed. These constraints provide opportunities to develop new airframe configurations, including variations on wing-tail and flying-wing configurations as well as hoverers, with emphasis on trading aerodynamic performance for propulsion and payload integration requirements.

## Controlling Flight

The diminutive vehicle also needs a flight-control system that can maintain its course in the face of turbulence or sudden gusts of wind. Operations out of the line of sight mean "a soldier can't fly the vehicle like a model airplane," Davis said. His team has determined that the prototype could rely on tiny sensors that measure airspeed, acceleration, and atmospheric pressure as well as on electrical actuators for flight surfaces to execute maneuvers. A flight-control system is required to stabilize the MAV, or at least augment its natural stability, and to execute maneuver commands. It may also have to stabilize the line of sight if the vehicle has an imaging mission. Flight-control components include actuators for aerodynamic controls, motion sensors, and processing. Aerodynamic control could be achieved using conventional control surfaces with discrete actuators, distributed microflaps, or warped lifting surfaces, depending on the airframe configuration. Very small electric motors could serve as actuators for 6-inch-class vehicles. Additional candidates include piezoelectric actuators (both bulk and thin-film devices) and a large number of MEMS devices, which could be electromagnetic, electrostatic, piezoelectric "inchworm," or ultrasonic-wave devices. AeroVironment's disk-wing MAV, for example, uses a 2-gram flight-control system with a flight computer, a command receiver, and three Smoovy micromotors used as control flap actuators. The micromotors, from RMB in Switzerland, are said to be the smallest electric motors in production; each 3-millimeter-diameter device weighs 1/3 gram.
The most likely parameters to sense for MAV stabilization are inertial angular rate, differential and absolute pressure, acceleration, and the Earth's magnetic and electrostatic fields; optical sensing could be used for angular position and rate stabilization. Small pressure sensors and accelerometers, which could measure altitude and angle of attack, are available now, but inertial angular-rate sensors would produce the most robust control systems. MEMS Coriolis-force angular-rate sensors would be adequate for stability augmentation (not inertial navigation), but further work is needed to miniaturize the associated electronics. A processor will be required for the flight-control functions, and commercial microcontrollers will probably have enough capability for the first-generation MAVs. Advanced abilities, such as autonomous control, will need custom chips.

Once in the air, the MAV will need to maintain communications with its human controllers. Such links could take several forms. The simplest is a direct line-of-sight system, while a vehicle flying beyond or below the line of sight would require an overhead communications relay: another flying vehicle or satellite. Antennas tend to be a big problem for MAVs, as the small dimensions limit radio frequencies and range. On top of that, engineers must isolate the system from electromagnetic and radio-frequency interference. The communications electronics will need to be extremely mass- and power-efficient, with capabilities stripped down to the bare minimum.

Proposed means of making MAVs autonomous include using a geographic information system to provide a map of the terrain, or a Global Positioning System (GPS) satellite, which determines location by triangulating from satellite signals. GPS capability would greatly enhance a MAV's capabilities, but current small units need at least 0.5 watts of power and have antennas weighing 20 to 40 grams. In addition, GPS systems need a substantial amount of data-processing power to work. "We'd really like a GPS, but right now the electronics are too power-hungry and the antennas are too big," Davis said. "It all has to be downsized."

Furthermore, for the machines to be useful, MAVs will have to carry payloads ranging from television cameras to infrared and chemical/biological sensors in a package weighing just 15 grams. These advanced sensor systems will be the basic cost driver for the MAV systems. Now under development are 1-gram charge-coupled-device (CCD) video cameras. To provide enough resolution to classify vehicles and detect personnel from an altitude of about 100 meters, for example, these video devices will require focal planes with about 1,000 by 1,000 pixels. The best infrared sensor candidates (in the 3- to 5-micron band) are platinum silicide CCD or complementary-metal-oxide-semiconductor arrays. Biological and chemical-agent detectors will require substantial development, according to experts. Airborne chemical sensors now weigh about 5 kilograms, for example, while biological sensors are at an even earlier development stage.

Developing useful micro aerial vehicles is going to be a severe design engineering challenge, Davis said. Retaining the needed performance while meeting the Pentagon's low-cost goals will be particularly difficult: "After all, we don't want a Swiss watch but a Swatch."

Georgia Tech researchers are working with a small circulation-control (or blown) wing that uses the Coanda effect to augment lift and provide flight control without complex flight surfaces.
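The cruise-power and endurance figures quoted above hang together on a back-of-the-envelope basis. The sketch below is our own rough check, not from the article, and the cruise speed and propeller efficiency it assumes are illustrative values.

```python
# Rough check (ours, not the article's) of the quoted power and endurance
# figures, under assumed values: a 4 oz (~0.11 kg) MAV cruising at ~10 m/s
# with lift-to-drag ratio 5 and ~50% propeller efficiency.
mass_kg  = 0.113         # ~4 ounces
g        = 9.81          # m/s^2
speed    = 10.0          # m/s (~22 mph), assumed cruise speed
L_over_D = 5.0
prop_eff = 0.5           # demonstrated for 5 cm propellers, per the article

thrust_power = mass_kg * g * speed / L_over_D   # power needed to overcome drag
shaft_power  = thrust_power / prop_eff          # power at the motor shaft
print(f"shaft power ~ {shaft_power:.1f} W")     # ~4.4 W; same order as the
                                                # article's ~2.5 W cruise figure

# Endurance implied by the quoted electric-system requirements:
energy_density = 700.0   # J per gram of power system
power_density  = 0.300   # W per gram
print(f"endurance ~ {energy_density / power_density / 60:.0f} min")  # ~39 min
```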
Inductive Definition of Sequence

Theorem

Let $X$ be a set. Let $h \in \N$. Let $a_i \in X$ for all $i \in \set {1, 2, \ldots, h}$. Let $S$ be the set of all finite sequences taking values in $X$. Let $G: S \to X$ be a mapping. Then there is a unique sequence $f$ taking values in $X$ such that:

$f_i = \begin{cases} a_i & : i \in \set {1, 2, \ldots, h} \\ \map G {f_1, f_2, \ldots, f_{i - 1} } & : i \ge h + 1 \end{cases}$

Also known as

Such a definition for a sequence is also known as a recursive definition.
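A concrete rendering of the theorem's construction (our illustration, not from the page): the seed values $a_1, \ldots, a_h$ together with the rule $G$ on each initial segment determine the sequence uniquely.

```python
def inductive_sequence(seed, G, n):
    """Return [f_1, ..., f_n] where f_i = seed[i-1] for i <= len(seed)
    and f_i = G(f_1, ..., f_{i-1}) for larger i."""
    f = list(seed)
    while len(f) < n:
        f.append(G(f))          # G sees the entire initial segment
    return f[:n]

# Example: h = 2, a = (1, 1), G = sum of the last two terms -> Fibonacci.
print(inductive_sequence((1, 1), lambda f: f[-1] + f[-2], 10))
# [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```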
# Problem on Big Rudin about Fourier transform

Exercise 5: If $f\in L^1$ and $\int |t\hat{f}(t)|\,dt<\infty$, prove that $f$ coincides a.e. with a differentiable function whose derivative is $i\int_{-\infty}^{\infty}t\hat{f}(t)e^{ixt}\,dt$.

I know a theorem which claims: if $f\in L^1$ and there exists $g\in L^1$ such that $\hat{g}(t)=t\hat{f}(t)$, then $f(x)=\int_{-\infty}^{x}g(t)\,dt$ a.e. I think there may be some relation between them; could someone give me a suggestion? Thank you very much!

If $t\hat{f} \in L^{1}$, define $$g(x) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}s\hat{f}(s)ie^{isx}\,ds, \;\;\; x \in\mathbb{R}.$$ The function $g$ is continuous by the Lebesgue dominated convergence theorem. Using Fubini's theorem to switch the order of integration gives $$\int_{0}^{t}g(x)\,dx = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\hat{f}(s)(e^{ist}-1)\,ds = f(t)-C,\;\;\; \text{a.e. } t\in\mathbb{R}.$$ In other words, $f(t) = C+\int_{0}^{t}g(x)\,dx$ a.e. The function $C+\int_{0}^{t}g(x)\,dx$ is continuously differentiable on $\mathbb{R}$ with derivative $g$, which has the stated form (up to a constant multiple of $1/\sqrt{2\pi}$ that one of us has wrong).
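A numerical sanity check of the formula (our own illustration, not part of the thread): take the Gaussian $f(x) = e^{-x^2/2}$, which in the symmetric convention used in the answer is its own transform, and compare $\frac{1}{\sqrt{2\pi}}\int it\hat f(t)e^{ixt}\,dt$ against the known derivative $f'(x) = -x e^{-x^2/2}$.

```python
import numpy as np

t = np.linspace(-20, 20, 200001)
fhat = np.exp(-t**2 / 2)         # transform of exp(-x^2/2), symmetric convention

def deriv_via_transform(x):
    # (1/sqrt(2*pi)) * integral of i*t*fhat(t)*e^{ixt} dt, by the trapezoid rule
    integrand = 1j * t * fhat * np.exp(1j * t * x)
    return np.trapz(integrand, t).real / np.sqrt(2 * np.pi)

for x in (0.0, 0.5, 1.3):
    exact = -x * np.exp(-x**2 / 2)
    print(x, deriv_via_transform(x), exact)   # agree to ~1e-10
```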
# Crypto Challenge

## dias skeerG tneicna - 20

### Description:

Nothing

### Given:

1 file was given: i) decode.me

### Objective:

To decrypt the text provided in "decode.me"

### What to do:

Decrypt the text using any cipher technique

### Tools Required:

Text editor

### Solution:

Step: Check the file format -- success
Command: file decode.me
Output: Just a regular text file with some cipher text

Cipher text: 554545532245{22434223_4223_42212322_55_234234313551_34553131423344}

Step: Analyze the code manually -- success
Information:
1. Since the format of the string is "...{...}", one can easily guess that the flag is there and that a substitution technique is used
2. Comparing the cipher text with AFFCTF{...}
3. There is no 0 in the string, so maybe 0 is not included
Output: A = 55, F = 45, C = 53, T = 22

Step: Make a table of A-Z with their codes -- failure
Output: A = 55, B = 54, C = 53, D = 52, E = 51, F = 50, ...

Step: Make a table of A-Z with their codes, without using the digit 0 -- failure
Output: A = 55, B = 54, C = 53, D = 52, E = 51, F = 45, ...

Step: The previous table gives T = 21 (the cipher text implies T = 22) and leaves no valid code for Z, so make another table -- success
Information: a Playfair-style 5x5 square holds A-Z in 25 cells by combining I/J in a single cell
Output: A = 55, B = 54, ..., I/J = 42, ..., Z = 11

Step: Crack the cipher by simply substituting the values
Output: AFFCTF{THIS_IS_JUST_A_SIMPLE_MAPPING}

### Flag:

AFFCTF{THIS_IS_JUST_A_SIMPLE_MAPPING}

Original writeup (https://github.com/I-ikshvaku/writeups/tree/master/2020/affinity_ctf_lite/dias%20skeerG%20tneicna).
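The deduced mapping can be scripted. This is our reconstruction of the final table: letters A-Z with I/J sharing a cell are numbered 0-24, and each letter is encoded as two digits that count down from 5, so A=55, F=45, I/J=42, T=22, Z=11.

```python
import re

alphabet = "ABCDEFGHIKLMNOPQRSTUVWXYZ"        # 25 letters, J folded into I
code = {f"{5 - i // 5}{5 - i % 5}": c for i, c in enumerate(alphabet)}

def decode(ct):
    out = []
    for token in re.split(r"([{}_])", ct):    # keep the flag's punctuation
        if token in "{}_":
            out.append(token)
        else:
            out.extend(code[token[i:i+2]] for i in range(0, len(token), 2))
    return "".join(out)

ct = "554545532245{22434223_4223_42212322_55_234234313551_34553131423344}"
print(decode(ct))
# AFFCTF{THIS_IS_IUST_A_SIMPLE_MAPPING} -- the merged I/J cell reads as J in "JUST"
```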
# Temperature and entropy relationship

We noted that the cumulative amount of heat transfer and the cumulative amount of work done over an entire process path are given by two integrals along the path, $Q = \int \delta Q$ and $W = \int P\,dV$. The discovery of the Second Law came about in the 19th century, and involved contributions by many brilliant scientists. There have been many statements of the Second Law over the years, couched in complicated language and multi-word sentences, typically involving heat reservoirs, Carnot engines, and the like. These statements have been a source of unending confusion for students of thermodynamics for over a hundred years. What has been sorely needed is a precise mathematical definition of the Second Law that avoids all the complicated rhetoric. The sad part about all this is that such a precise definition has existed all along. The definition was formulated by Clausius back in the 19th century.

Clausius wondered what would happen if he evaluated the integral $\int \delta Q / T$ over each of the possible process paths between the initial and final equilibrium states of a closed system. He carried out extensive calculations on many systems undergoing a variety of both reversible and irreversible paths and discovered something astonishing. He found that, for any closed system, the values calculated for the integral over all the possible reversible and irreversible paths between the initial and final equilibrium states were not arbitrary; instead, there was a unique upper bound (maximum) to the value of the integral. Clausius also found that this result was consistent with all the "word definitions" of the Second Law. Clearly, if there was an upper bound for this integral, this upper bound had to depend only on the two equilibrium states, and not on the path between them. It must therefore be regarded as a point function of state. Clausius named this point function Entropy. But how could the value of this point function be determined without evaluating the integral over every possible process path between the initial and final equilibrium states to find the maximum? Clausius made another discovery. He determined that, out of the infinite number of possible process paths, there existed a well-defined subset, each member of which gave the same maximum value for the integral.

## Entropy

This is a common confusion. To specify the precise state of a classical system, you need to know its location in phase space. For a bunch of helium atoms whizzing around in a box, phase space is the position and momentum of each helium atom. Let's say you know the total energy of the gas, but nothing else. It will be the case that a fantastically huge number of points in phase space will be consistent with that energy. The entropy of a uniform distribution is the logarithm of the number of points, so that's that.
If you also know the volume, then the number of points in phase space consistent with both the energy and volume is necessarily smaller, so the entropy is smaller. This might be confusing to chemists, since they memorized a formula for the entropy of an ideal gas, and it's ostensibly objective. Someone with perfect knowledge of the system will calculate the same number on the right side of that equation, but to them, that number isn't the entropy. It's the entropy of the gas if you know nothing more than energy, volume, and number of particles.

## Temperature

The existence of temperature follows from the zeroth and second laws of thermodynamics: temperature is then defined as the thermodynamic quantity that is shared by systems in equilibrium. This is wrong as a definition, for the same reason that the ideal gas entropy isn't the definition of entropy. Probability is in the mind. Entropy is a function of probabilities, so entropy is in the mind.
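A tiny illustration of the point (ours, with made-up microstate counts): entropy is the log of the number of phase-space points consistent with what you know, so adding a constraint shrinks the count and the entropy.

```python
import math

def entropy(num_microstates):
    return math.log(num_microstates)    # in units of Boltzmann's constant k_B

states_energy_only   = 10**24           # hypothetical count consistent with E alone
states_energy_volume = 10**20           # knowing V too rules most of them out

print(entropy(states_energy_only))      # ~55.3 k_B
print(entropy(states_energy_volume))    # ~46.1 k_B -- smaller, as the text says
```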
# Is the quotient of two nonzero numbers never a rational number?

Wiki User · 2011-08-23 01:36:47

No. The quotient of two nonzero integers is precisely the definition of a rational number. There are nonzero numbers other than integers (irrational or imaginary numbers, for example) whose quotient need not be rational. But if the two nonzero numbers are themselves rational, then the quotient will be rational. (For example, 4 divided by 2 is 2: all of those numbers are rational.)
# How do you determine whether (6,-2) is on the line y=1/2x-5?

Mar 10, 2018

$(6,-2)$ is on the line.

#### Explanation:

If the point lies on the line, then it will satisfy the equation. Substitute $x = 6$ into the equation; if the result is $y = -2$, then $(6,-2)$ is on the line:

$x = 6 \to y = \left(\frac{1}{2} \times 6\right) - 5 = 3 - 5 = -2$

$\Rightarrow (6,-2)$ is on the line $y = \frac{1}{2} x - 5$
## WeBWorK Problems

### Intermediate Value theorem, custom checker

by Dick Lane - Number of replies: 2

I request comment about several aspects of a problem I am writing involving the Intermediate Value Theorem.  This post will cite fragments of a full problem which is attached.

1)  Within Interval context, I create $J = Compute( "[$L,$R]" ); in order to have a closed interval for use by a custom checker.  That checker will flag a non-closed interval with the error message "The type of interval is incorrect" (provided by pg/lib/Value/AnswerChecker.pm).  Given that preliminary type-checking, is it optional for my checker to do a string comparison (with "eq" instead of "==") for brackets rather than parentheses?

2)  I have vague notions about having my Solutions block include mention of a student answer.  I tried replacing "my" with "our" in some parts of my checker, but that did not export student stuff to global context.  If there is not a simple way to do that, then I will not pursue this low-benefit task.

3)  Experiments replacing strong with weak inequalities in my custom checker had anomalous results.  If c was the exact solution of f(x) = y0, then [c,c] would be rejected, while treatment of [c,c+0.1] or [c-0.01,c] might vary.

4)  I suspect the DB-tagging Calculus : Limits and Derivatives : Continuity is appropriate given our current taxonomy.  I hope various new versions of a Library Browser will soon provide fuzzy searching to identify plausible (& case-insensitive) keyword matches for "Intermediate Value Theorem", "Intermediate Value property", "Intermediate Value".

In reply to Dick Lane

### Re: Intermediate Value theorem, custom checker

by Davide Cervone -

Given that preliminary type-checking, is it optional for my checker to do a string comparison (with "eq" instead of "==") for brackets rather than parentheses?

It turns out that the endpoint checking is done in a post-filter, so your checker will run no matter what the types of the endpoints are. The post-filter will add the warning message only if the answer is marked wrong, so if you don't check this in your custom checker code (and mark it correct) your student will not receive that message.

I have vague notions about having my Solutions block include mention of a student answer. I tried replacing "my" with "our" in some parts of my checker, but that did not export student stuff to global context.

This is because the answer checkers don't run until after the problem is created (and that means after the solution is produced). So you don't have access to any variables from the checker since they haven't been created yet. The only way to do this would be to grab the answers from the $inputs_ref hash, and that is a bit of a pain. For example, you could use

    $student = Parser::Formula($inputs_ref->{ANS_NUM_TO_NAME(1)});

to get the student's answer from the first (unnamed) answer blank. If there is a syntax error in the student's answer, $student will be undefined, but no error will be produced. If the answer is blank, it will be a Formula object that returns a blank String object. Note that it is always a Formula object, so you could also do

    $student = $student->eval if $student->isConstant;

if you want to get Real, Interval, or other constants.

Experiments replacing strong with weak inequalities in my custom checker had anomalous results. If c was the exact solution of f(x) = y0, then [c,c] would be rejected, while treatment of [c,c+0.1] or [c-0.01,c] might vary.
The reason that [c,c] doesn't work is that this type of interval is converted automatically into the set {c}, so when you try to get its two endpoints and its delimiters, you end up with only one value (for $L, with $R being undefined), and the open and close delimiters are braces. If you look closely at the "Entered" column for this answer, you will see that it shows as a set. (If you want to show the original interval there instead, add formatStudentAnswer => "parsed" to the parameters passed to the cmp() method.)

Custom checkers that deal with intervals have to be a bit more sophisticated, since it is possible to be passed a Set or a Union (as these are all things to which an interval can be compared without a type-match error). Here is some code that handles all the situations:

    ANS($J->cmp(
      showEndpointHints => 0,
      formatStudentAnswer => "parsed",
      checker => sub {
        my ($good, $student, $ansHash) = @_;
        my ($L, $R, $open, $close);
        if ($student->class eq "Interval") {
          ($L, $R, $open, $close) = $student->value;
        } elsif ($student->class eq "Set" && $student->length == 1) {
          ($L) = $student->value;
          $R = $L; $open = "["; $close = "]";
        } else {
          return 0 if $ansHash->{isPreview};
          Value->Error("Your answer should be an Interval not a Set or Union");
        }
        my $yL = $fcn->eval(x => $L);
        my $yR = $fcn->eval(x => $R);
        ($yL, $yR) = ($yR, $yL) if $yL > $yR;
        return $open.$close eq "[]" && Interval("[$yL,$yR]")->contains($y0);
      }
    ));

Here, we handle the Interval and Set (with one element) separately to get the correct data, and produce an error message if it is anything else (unless this is a preview, in which case we just mark it wrong). Also note the check at the end. Here, we form a new interval and ask if the value of $y0 is in the interval. This makes the check a bit easier to write. Note that we swap $yL and $yR in the previous line, if they are in the wrong order for an interval.

As for the situation for [c,c+.01] and [c-.01,c], you would need to give me some specific examples in order to test out what is really going on.

Hope that helps.

Davide

### Re: Intermediate Value theorem, custom checker

by Dick Lane -

Davide wrote "Hope that helps."  YES, it does!

1)  Type of interval --- in hindsight, I should have identified that my main concern was an assurance that I could omit providing a "type error" message because the system would always do it for me.

2)  Your comments lead me to consider writing another problem with a non-monotonic function where the point estimate in the second part is required to be in the interval answer for the first part.

3)  I suspect the anomalies involved relative tolerance issues.  They have not recurred after I added the following line.

    Context()->flags->set(tolerance => 0.0000001, tolType => 'absolute');

thanks, dick
## Algebra 2 Common Core

Published by Prentice Hall

# Chapter 5 - Polynomials and Polynomial Functions - 5-2 Polynomials, Linear Factors, and Zeros - Practice and Problem-Solving Exercises: 27

#### Answer

-3, multiplicity 3

#### Work Step by Step

Equate the factor $x+3$ to zero, then solve the equation: $x+3=0 \Rightarrow x=-3$. The factor $(x+3)$ appears three times, so the zero, -3, has multiplicity 3.
# I'm calling it a Pseudo-Quadratic Equation? Algebra Level 5

$\large{\sqrt{17 + 8x - 2x^2} + \sqrt{4 + 12x - 3x^2} = x^2 - 4x + 13}$

Find the sum of all real ${x}$ for which the above equation holds.
# Complex masses for Dirac and Weyl spinors

I'm trying to understand how to rotate Dirac fields to absorb complex phases in masses. I have a few related questions:

1. With Weyl spinors, I understand, $$\mathcal{L} = \text{kinetic} + |M|e^{i\theta}\xi\chi + \textrm{h.c.}$$ The phase is removed by separate left- and right-handed rotations, e.g. $\xi \to e^{-i\theta/2}\xi$ and $\chi \to e^{-i\theta/2}\chi$. These phases cancel in the kinetic terms. Is it correct that with Dirac spinors, $$\mathcal{L} = \bar\psi |M|e^{i\theta\gamma_5} \psi = \text{Re}(M)\bar\psi\psi +i\text{Im}(M)\bar\psi\gamma_5\psi$$ and the phase is removed by $\psi \to e^{-i\theta\gamma_5/2}\psi$? The appearance of the $\gamma_5$ in the phase troubles me a little - I suppose this is telling us that Weyl spinors are a more suitable basis than Dirac spinors?

2. If the field is Majorana, $\xi = \chi$ - can the field still absorb a phase? I think I must be making a trivial mistake. For example, Majorana neutrino fields cannot absorb phases, leading to extra CP violation. And in SUSY, the gaugino Majorana soft-breaking masses are e.g. $M_1e^{i\theta}$. Can their phases be re-absorbed via a field redefinition? I don't think they can. So I must have a mistake.

• The appearance of $\gamma_5$ signifies that you're doing an axial transformation. In theories with an axial anomaly, such transformations are not a symmetry of the quantum theory, so I wonder if you're not allowed to do that. – Siva May 23 '13 at 21:46
• @Siva, you are allowed. At an extra price, however. E.g. one way to solve the Schwinger model is to cancel the interaction between $A$ and $\psi$ by making an appropriate chiral transformation, and properly accounting for the anomaly. – Peter Kravchuk May 23 '13 at 21:51
• @Siva Yes, this will ultimately lead to strong CP/chiral anomaly, something I want to eventually understand... – innisfree May 23 '13 at 21:53
• @innisfree, the equality involving $\gamma_5$ is not clear to me: where has the $\theta$ gone? Also, why do you write the 'complex phase in mass' with $\gamma_5$ at the very beginning? – Peter Kravchuk May 23 '13 at 21:55
• @PeterKravchuk $Me^{i\theta\gamma_5} = M\cos\theta + Mi\gamma_5\sin\theta = \text{Re}(M)\ldots$, because $\gamma_5^2=1$. I wrote it with $\gamma_5$ in at the beginning because of something I read in Dine's Supersymmetry book. – innisfree May 23 '13 at 22:04

Having looked at a few more sources, I think I know the answer now, but please do correct me if I'm wrong.

For 1., I still find the appearance of the $\gamma_5$ matrix in the Dirac spinor description surprising. I suppose the resolution is that the Weyl spinors are a more intuitive description in some cases.

For 2., I think the source of my confusion is some ambiguous statements in the literature. I've heard that Majorana fields "cannot absorb a phase," which I've misunderstood to mean cannot absorb a phase from a complex mass term, whereas it means that Majorana fields must be singlets (must have no non-zero quantum numbers), because the symmetry would violate the Majorana condition itself. The statement that Majorana fields cannot absorb phases from complex masses is true only in certain circumstances, namely, if there are interaction terms in the Lagrangian which are not invariant under the field redefinition. In the case of the Majorana neutrinos, because one no longer has the freedom to rotate right-handed fields only to avoid complications with left-handed weak interactions, fewer phases can be absorbed.
For the gauginos, I think there are F-terms, which prevent the total removal of the phases, leaving physical phases e.g. $\text{Arg}(M_i \mu)$.
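As a sanity check on part 1 (my own working, not from the original thread), the axial rotation does remove the phase at the classical level: using $\gamma_5^\dagger = \gamma_5$ and $\{\gamma_5, \gamma^0\} = 0$,

$$\psi \to e^{-i\theta\gamma_5/2}\psi \quad\Longrightarrow\quad \bar\psi = \psi^\dagger\gamma^0 \to \psi^\dagger e^{i\theta\gamma_5/2}\gamma^0 = \bar\psi\, e^{-i\theta\gamma_5/2},$$

so that

$$\bar\psi\,|M|e^{i\theta\gamma_5}\,\psi \;\to\; \bar\psi\, e^{-i\theta\gamma_5/2}\,|M|e^{i\theta\gamma_5}\,e^{-i\theta\gamma_5/2}\,\psi = |M|\,\bar\psi\psi,$$

since the three exponentials commute and the exponents sum to zero. (The anomaly discussed in the comments enters only at the quantum level.)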
# For the curve $\sqrt x + \sqrt y=1$, $\large \frac{dy}{dx}$ at $\bigg(\large\frac{1}{4},\frac{1}{4}\bigg)$ is ____________.

$\begin{array}{l}(a)\;-1\\(b)\;0\\(c)\;2\\(d)\;3\end{array}$

Toolbox:

• Chain Rule: If $z=f(y)$ and $y=g(x)$, then $\large\frac{dz}{dx}=\frac{dz}{dy}\cdot\frac{dy}{dx}$

$\sqrt x+\sqrt y=1$

Differentiating on both sides w.r.t $x$ we get,

$\large\frac{1}{2\sqrt x}+\frac{1}{2\sqrt y}\frac{dy}{dx}=0$

$\Rightarrow \large\frac{dy}{dx}=-\large\frac{\sqrt y}{\sqrt x}$

$\large\frac{dy}{dx}$ at $\big(\large\frac{1}{4},\frac{1}{4}\big)$ is

$\large\frac{dy}{dx}=-\frac{\sqrt{\large\frac{1}{4}}}{\sqrt{\large\frac{1}{4}}}=-1$
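A quick machine check of this result (my own sketch, not part of the original solution; it assumes SymPy and its idiff implicit-differentiation helper are available):

    import sympy as sp

    x, y = sp.symbols('x y', positive=True)
    curve = sp.sqrt(x) + sp.sqrt(y) - 1          # sqrt(x) + sqrt(y) = 1
    dydx = sp.idiff(curve, y, x)                 # implicit dy/dx = -sqrt(y)/sqrt(x)
    print(dydx.subs({x: sp.Rational(1, 4), y: sp.Rational(1, 4)}))   # prints -1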
Opened 6 years ago

Closed 6 years ago

# Cannot search if not an admin

Reported by: david.byard@…
Owned by: Roberto Longobardi
Priority: normal
Component: TestManagerForTracPlugin
Severity: critical
Keywords: crash admin search
Release: 0.11

### Description (last modified by Ryan J Ollos)

Installed the TestManager plugins globally (installed using easy_install, followed by trac-admin /path/to/trac update), and enabled in trac.ini as so:

    tracgenericclass.* = enabled
    tracgenericworkflow.* = enabled
    testmanager.* = enabled

If a non-admin makes any search with these lines uncommented (i.e., plugin is enabled), they get the attached error. This has been worked around by making all our users admins, but this is not ideal.

Changed 6 years ago by anonymous

Attachment: Screen shot 2010-11-08 at 14.14.18.png added

comment:1 Changed 6 years ago by Roberto Longobardi

Hi David, could you please retry to attach the screenshot, as it appears to be corrupted. Please try to use a short name without blanks... I think the problem may be with the old Trac version that the trac-hacks site uses, which is 0.10. In the meantime I'm trying to reproduce the problem.

Changed 6 years ago by david.byard@…

Attachment added: the error message (re-uploaded)

comment:2 Changed 6 years ago by david.byard@…

Re-attached the error. Thanks much for the swift response :)

comment:3 Changed 6 years ago by Roberto Longobardi

:) I live in a control room with all of my trac-hacks tickets flashing on a giant wall, I was out at the moment but I have trac-hacks page me on each ticket ;-) (kidding of course... I was just casually checking my mail :-))

Changed 6 years ago by Roberto Longobardi

Attachment added: fixed code

comment:4 Changed 6 years ago by Roberto Longobardi

Status: new → assigned

I have fixed the problem; please find attached the file api.py containing the fix, to be replaced into:

    tracgenericclass/trunk/tracgenericclass

Please, let me know if this works.

Ciao, Roberto

comment:5 Changed 6 years ago by Roberto Longobardi

To download the file in original text format, use the following link: api.py

Last edited 4 years ago by Ryan J Ollos (previous) (diff)

comment:6 Changed 6 years ago by Roberto Longobardi

Resolution: → fixed
Status: assigned → closed

Fixed with 1.3.6.

comment:7 Changed 4 years ago by Ryan J Ollos

Description: modified (diff)
# Tips for golfing in PARI/GP

PARI/GP is a free computer algebra system. It is designed for (algebraic) number theory, not golfing, but that's part of the attraction. Unsurprisingly, it fares best at mathematical tasks; its support for string manipulation is primitive. (Having said that, advice for golfing non-mathematical tasks is also welcome.)

These tips should be at least somewhat specific to PARI/GP; advice which applies to 'most' languages belongs at Tips for golfing in <all languages>.

Some general tips which don't need to be included:

1. Compress whitespace. (In GP all whitespace outside of strings is optional.)
2. Use single-character variables. (There are 50 choices, a-z and A-Z except I and O.)
3. Use < and > rather than <= and >= when possible.
4. Chain assignments. (a=b=c or a+=b*=c)

Use operators to replace commands (or longer operators!) when possible.

length(v) → #v (saves 5-7 bytes)
matsize(M)[2] → #M (saves 9-11 bytes)
matsize(M)[1] → #M~ (saves 8-10 bytes)
floor(x) → x\1 (saves 3-5 bytes)
round(x) → x\/1 (saves 2-4 bytes)*
shift(x,n) → x>>n or x<<n (saves 2-6 bytes)
sqr(x) → x^2 (saves 1-3 bytes)
deriv(x) → x' (saves 4-6 bytes)
mattranspose(M) → M~ (saves 11-13 bytes)
factorial(n) → n! (saves 8-10 bytes)
&& → & (saves 1 byte; note that this syntax is deprecated)
|| → | (saves 1 byte; only works in older versions)

* Prior to 2.8.1 (2016) the \/ operator did not work correctly on t_REAL numbers—update if you haven't!

• x\/1: is that designed to be a round operator, or just some sort of magic? Apr 23, 2016 at 5:41
• @primo: Yes, it's a round operator. Not very well-known, though, so I thought I'd mention it. Of course round(x/6) to x\/6 is an even bigger win... Apr 23, 2016 at 23:14

• In a string context (e.g. the arguments of print or Str), commas are unnecessary:

    i=99
    print(i" bottles of beer on the wall")

This is even true if one of the arguments is an assignment. The following is equivalent to the above:

    print(i=99" bottles of beer on the wall")

• Unassigned variables evaluate to their name in a string context:

    for(i=1,2,print(i" bottl"e" of beer on the wall.");e=es)

produces:

    1 bottle of beer on the wall.
    2 bottles of beer on the wall.

• In a multi-line context { ... }, the closing brace is unnecessary.

    print({"this is line one
    this is line two")

Documentation

In REPL:

?* - show all commands.
?<command> - show brief documentation for a command.

Online documentation: http://pari.math.u-bordeaux.fr/dochtml/html.stable/

Some replacements:

• poldegree(f) -> #f'
• subst(f,x,y) -> x=y;eval(f) if x is not used elsewhere
• subst(f,x,0) -> f%x but its type is t_POL instead of t_INT
• polcoeff(s+O(x^(n+1)),n) -> Pol(s+O(x^n*x))\x^n but its type is t_POL
• polcoeff(s+O(x^(n+1)),n) -> Vec(s+O(x^n++))[n] if the constant term of s is nonzero
• Vecrev(v)~ -> Colrev(v)
• Pol(s+O(x^n)) -> s%x^n if s is a rational function (e.g., 1/(1-x)) and n > 0
• (1-x^n)/(1-x) -> 1/(1-x)%x^n if n > 0
• matrix(n,n,i,j,e) -> matrix(n,,i,j,e)
• a[2..#a] / a[2..-1] -> a[^1]
• a[1..-2] -> a[^#a]
• if(cond0,a,if(cond1,b,c)) -> if(cond0,a,cond1,b,c)
• ceil(x) -> -x\-1

Use set-builder notation to replace apply and select.

apply(x->thing1(x)&&thing2(x),select(condition,v)) → [thing1(x)&&thing2(x)|x<-v,condition(x)]

or

apply(x->thing1(x)&&thing2(x),v) → [thing1(x)&&thing2(x)|x<-v]

but if you're applying a named function, apply is better, since the arguments are implicit:

[functionName(x)|x<-v] → apply(functionName,v)

Use specialized loops.
GP provides (at least) the following loops: for, forcomposite, fordiv, forell, forpart, forprime, forstep, forsubgroup, forvec, prod, prodeuler, prodinf, sum, sumalt, sumdiv, sumdivmult, suminf, sumnum, sumnummonien, sumpos.

So don't write s=0;for(i=1,9,s+=f(i));s when you could write sum(i=1,9,f(i)), don't write for(n=1,20,f(prime(n))) when you could write forprime(p=2,71,f(p)), and certainly don't write apply(v->print(v),partitions(4)); instead of forpart(v=4,print(v)).

prodeuler can be useful as a product over the primes, but note that it returns a t_REAL rather than a t_INT, so rounding at the end may be necessary depending on the task. sumdiv can be a real lifesaver compared to a simple loop over divisors.

forvec is a replacement for a whole collection of loops. You can replace (ungolfed)

    {
      for(i=0,9,
        for(j=i,99,
          for(k=i,200,
            f(i,j,k)
          )
        )
      );
    }

with

    forvec(v=[[0,9], [0,99], [0,200]],
      call(f, v)
    , 1 \\ increasing sequences only
    )

Like other computer algebra systems, PARI/GP also has a lot of built-ins for number theory, linear algebra, polynomials, and other branches of algebra. The easy way to find a built-in is to look at the reference card. Many built-ins have a prefix to indicate the types they work on. For example:

• Built-ins for matrices usually have the prefix mat
• Built-ins for polynomials usually have the prefix pol
• Built-ins for power series usually have the prefix ser

When you are looking for a built-in, you can type the prefix in the REPL and press tab. The command-line completion will tell you what functions it has. To see the help message of a built-in in the REPL, just type ? functionname. For a more detailed help message, sometimes with examples, type ?? functionname.

Avoid return when possible.

The final value in the function is returned, so don't write ...;return(x) but just ...;x. If you must use return, note that a bare return yields the PARI value gnil, which is falsy (coerced into a GP 0 or PARI gen_0 as needed), so you can usually write return rather than return(0). And (almost?) needless to say, but if you have two values to choose from, it is much better to return if(condition,x,y) than if(condition,return(x));y.

# Truthy and Falsy

There isn't a Boolean type in PARI/GP. Truthy and falsy are defined as follows:

• Values that equal 0 are falsy. This includes the real number 0.0, the complex number 0.0+0.0*I, the polynomial 0*x, the power series 0+O(x^3), the modulo object Mod(0,3), and many others.
• Vectors and matrices that are empty or contain only falsy values are falsy. This includes nested falsy vectors like [0, [0, 0, [[]]]].
• Everything else is truthy. In particular, lists, maps, strings, and Vecsmalls are always truthy, even if they are empty or contain only falsy values.

The simplest way to convert a to the "default" truthy/falsy value (i.e., 1/0) is !!a.

When testing equality with ==, every falsy value is equal to 0. If you need to distinguish between different falsy values, use ===.

## Use - to test equality

For a and b that can be subtracted, we can do:

• if(a==b,c,d) => if(a-b,d,c)
• a==b&&c => a-b||c if we only need the side effect of c.
• a==b||c => a-b&&c if we only need the side effect of c.

Be careful when testing vector equality: only vectors that have the same length can be subtracted.

## Any and All

There are no built-in any or all functions in PARI/GP. When checking whether a predicate is true for all or any integers in a range, we can just use sum or prod.
When checking if any item in a vector is truthy, we don't need a function: the vector itself is truthy if and only if any item in it is truthy.

When checking whether all items in a vector are truthy, the following two have the same length:

    vecprod(a)
    ![!i|i<-a]

## Initialize with truthy/falsy

When using functions like for, sum, prod, we usually initialize some variable to 0 or 1. If the previous expression returns the opposite truthy/falsy value, we can just use it. For example:

a=some_truthy_value;for(i=0,n,do_something()) → for(i=!a=some_truthy_value,n,do_something())

# String replace

PARI/GP doesn't have a string-replace built-in, but when you want to replace every occurrence of a in s by b, you can do strjoin(strsplit(s,a),b). strjoin and strsplit were introduced in PARI/GP 2.13.0, so they are not supported on TIO.
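For instance (my own usage example, not from the original tips), replacing every occurrence of "an" in "banana":

    strjoin(strsplit("banana","an"),"AN")  \\ returns "bANANa"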
# Mean

In statistics, the mean is one of the measures of central tendency, apart from the mode and median. Mean is nothing but the average of the given set of values. It denotes the equal distribution of values for a given data set. The mean, median and mode are the three commonly used measures of central tendency.

To calculate the mean, we need to add the total values given in a datasheet and divide the sum by the total number of values. The median is the middle value of a given data set when all the values are arranged in ascending order, whereas the mode is the number in the list which is repeated the maximum number of times.

In this article, you will learn the definition of mean, the formula for finding the mean for ungrouped and grouped data, along with the applications and solved examples.

## Definition of Mean in Statistics

Mean is the average of the given numbers and is calculated by dividing the sum of given numbers by the total number of numbers.

Mean = (Sum of all the observations/Total number of observations)

Example: What is the mean of 2, 4, 6, 8 and 10?

Solution:

2 + 4 + 6 + 8 + 10 = 30

Now divide by 5 (total number of observations).

Mean = 30/5 = 6

In the case of a discrete probability distribution of a random variable X, the mean is equal to the sum over every possible value weighted by the probability of that value; that is, it is computed by taking the product of each possible value x of X and its probability P(x) and then adding all these products together.

### Mean Symbol (X Bar)

The mean is usually denoted by the symbol 'x̄'. The bar above the letter x represents the mean of x number of values.

X̄ = (Sum of values ÷ Number of values)

X̄ = (x1 + x2 + x3 + … + xn)/n

## Mean Formula

The basic formula for the mean depends on the given data set. Each term in the data set is considered while evaluating the mean. The general formula for the mean is given by the ratio of the sum of all the terms and the total number of terms. Hence, we can say:

Mean = Sum of the Given Data/Total number of Data

To calculate the arithmetic mean of a set of data we must first add up (sum) all of the data values (x) and then divide the result by the number of values (n). Since ∑ is the symbol used to indicate that values are to be summed (see Sigma Notation), we obtain the following formula for the mean (x̄):

x̄ = ∑x/n

## How to Find Mean?

As we know, data can be grouped or ungrouped, so to find the mean of given data we need to check whether the given data is grouped or ungrouped. The formulas to find the mean for ungrouped data and grouped data are different. In this section, you will learn the method of finding the mean for both of these cases.

### Mean for Ungrouped Data

The example given below will help you in understanding how to find the mean of ungrouped data.

Example: In a class there are 20 students and they have secured a percentage of 88, 82, 88, 85, 84, 80, 81, 82, 83, 85, 84, 74, 75, 76, 89, 90, 89, 80, 82, and 83. Find the mean percentage obtained by the class.

Solution:

Mean = Total of percentage obtained by 20 students in class/Total number of students

= [88 + 82 + 88 + 85 + 84 + 80 + 81 + 82 + 83 + 85 + 84 + 74 + 75 + 76 + 89 + 90 + 89 + 80 + 82 + 83]/20

= 1660/20 = 83

Hence, the mean percentage of each student in the class is 83%.

### Mean for Grouped Data

For grouped data, we can find the mean using either of the following formulas.
Direct method:

$$\overline{x}=\frac{\sum_{i=1}^{n}f_ix_i}{\sum_{i=1}^{n}f_i}$$

Assumed mean method:

$$\overline{x}=a+\frac{\sum f_id_i}{\sum f_i}$$

Step-deviation method:

$$\overline{x}=a+h\cdot\frac{\sum f_iu_i}{\sum f_i}$$

Go through the example given below to understand how to calculate the mean for grouped data.

Example: Find the mean for the following distribution.

| xi | 11 | 14 | 17 | 20 |
|----|----|----|----|----|
| fi | 3  | 6  | 8  | 7  |

Solution: For the given data, we can find the mean using the direct method.

| xi | fi | fixi |
|----|----|------|
| 11 | 3 | 33 |
| 14 | 6 | 84 |
| 17 | 8 | 136 |
| 20 | 7 | 140 |
|  | ∑fi = 24 | ∑fixi = 393 |

Mean = ∑fixi/∑fi = 393/24 = 16.375 ≈ 16.4

## Mean of Negative Numbers

We have seen examples of finding the mean of positive numbers till now. But what if the numbers in the observation list include negative numbers? Let us understand with an instance.

Example: Find the mean of 9, 6, -3, 2, -7, 1.

Solution:

Total: 9 + 6 + (-3) + 2 + (-7) + 1 = 8

Now divide the total by 6 to get the mean.

Mean = 8/6 ≈ 1.33

## Types of Mean

There are majorly three different types of mean value that you will be studying in statistics.

1. Arithmetic Mean
2. Geometric Mean
3. Harmonic Mean

### Arithmetic Mean

When you add up all the values and divide by the number of values, it is called the arithmetic mean. To calculate it, just add up all the given numbers, then divide by how many numbers are given.

Example: What is the mean of 3, 5, 9, 5, 7, 2?

Now add up all the given numbers: 3 + 5 + 9 + 5 + 7 + 2 = 31

Now divide by how many numbers are provided in the sequence: 31/6 ≈ 5.17

### Geometric Mean

The geometric mean of two numbers x and y is $\sqrt{xy}$. If you have three numbers x, y, and z, their geometric mean is $\sqrt[3]{xyz}$.

$$Geometric\;Mean=\sqrt[n]{x_{1}x_{2}x_{3}\cdots x_{n}}$$

Example: Find the geometric mean of 4 and 3.

Geometric Mean = $\sqrt{4 \times 3} = 2\sqrt{3} \approx 3.46$

### Harmonic Mean

The harmonic mean is used to average ratios. For two numbers x and y, the harmonic mean is 2xy/(x + y). For three numbers x, y, and z, the harmonic mean is 3xyz/(xy + xz + yz).

$$Harmonic\;Mean\;(H) = \frac{n}{\frac{1}{x_{1}}+\frac{1}{x_{2}}+\frac{1}{x_{3}}+\cdots+\frac{1}{x_{n}}}$$

### Root Mean Square

The root mean square is used in many engineering and statistical applications, especially when there are data points that can be negative.

$$X_{rms}=\sqrt{\frac{x_{1}^{2}+x_{2}^{2}+x_{3}^{2}+\cdots+x_{n}^{2}}{n}}$$

### Contraharmonic Mean

The contraharmonic mean of x and y is (x² + y²)/(x + y). For n values,

$$\frac{x_{1}^{2}+x_{2}^{2}+\cdots+x_{n}^{2}}{x_{1}+x_{2}+\cdots+x_{n}}$$

### Real-life Applications of Mean

In the real world, when there is huge data available, we use statistics to deal with it. Suppose, in a data table, the price values of 10 clothing materials are mentioned. If we have to find the mean of the prices, then add the prices of each clothing material and divide the total sum by 10. It will result in an average value. Another example is that if we have to find the average age of students of a class, we have to add the age of individual students present in the class and then divide the sum by the total number of students present in the class.

### Practice Problems

Q.1: Find the mean of 5, 10, 15, 20, 25.
Q.2: Find the mean of the given data set: 10, 20, 30, 40, 50, 60, 70, 80, 90.
Q.3: Find the mean of the first 10 even numbers.
Q.4: Find the mean of the first 10 odd numbers.
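As a quick illustration (my own sketch in Python, using the table values from the grouped-data example above), the three grouped-data formulas give the same mean:

```python
x = [11, 14, 17, 20]      # class marks x_i
f = [3, 6, 8, 7]          # frequencies f_i

# Direct method: sum(f_i * x_i) / sum(f_i)
direct = sum(fi * xi for fi, xi in zip(f, x)) / sum(f)

# Assumed-mean method with assumed mean a (any value works; a = 17 here)
a = 17
d = [xi - a for xi in x]                         # deviations d_i = x_i - a
assumed = a + sum(fi * di for fi, di in zip(f, d)) / sum(f)

# Step-deviation method with class width h = 3: u_i = (x_i - a) / h
h = 3
u = [(xi - a) / h for xi in x]
step = a + h * sum(fi * ui for fi, ui in zip(f, u)) / sum(f)

print(direct, assumed, step)   # all three print 16.375 (~16.4)
```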
## Frequently Asked Questions – FAQs

### What is mean in statistics?

In statistics, the mean is the ratio of the sum of all the observations to the total number of observations in a data set. For example, the mean of 2, 6, 4, 5, 8 is:

Mean = (2 + 6 + 4 + 5 + 8)/5 = 25/5 = 5

### How is mean represented?

Mean is usually represented by x-bar or x̄.

X̄ = (Sum of values ÷ Number of values in the data set)

### What is median in Maths?

Median is the central value of the data set when the values are arranged in order. For example, to find the median of 3, 7, 1, 4, 8, 10, 2:

Arrange the data set in ascending order: 1, 2, 3, 4, 7, 8, 10

Median = middle value = 4

### What are the types of Mean?

In statistics, we mainly learn three types of mean: Arithmetic Mean, Geometric Mean and Harmonic Mean.

### What is the mean of the first 10 natural numbers?

The first 10 natural numbers are: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10

Sum of the first 10 natural numbers = 1+2+3+4+5+6+7+8+9+10 = 55

Mean = 55/10 = 5.5

### What is the relationship between mean, median and mode?

The relationship between mean, median and mode is given by: 3 Median = Mode + 2 Mean.

### What is the mean of the first 5 even natural numbers?

As we know, the first 5 even natural numbers are 2, 4, 6, 8, and 10. Hence,

Mean = (2 + 4 + 6 + 8 + 10)/5 = 6

Thus, the mean of the first 5 even natural numbers is 6.

### What is the mean of the first 5 composite numbers?

The first 5 composite numbers are 4, 6, 8, 9 and 10. Thus,

Mean = (4 + 6 + 8 + 9 + 10)/5 = 37/5 = 7.4

Hence, the mean of the first 5 composite numbers is 7.4.
# How to find the last non-zero digit of $n!$

I want to know how to find the last non-zero digit of $n!$. For example, take $n = 100$.

My try: First I have to know how many zeros $100!$ ends with, so I did this:

$$E_{5}(100) = \sum_{k \geq 1} \left\lfloor\frac{100}{5^{k}}\right\rfloor = \left\lfloor\frac{100}{5}\right\rfloor + \left\lfloor\frac{100}{25}\right\rfloor = 20 + 4 = 24$$

So $100!$ ends with $24$ zeros, which means that the last digit of $\frac{100!}{10^{24}}$ is the number that I'm looking for. So if $x = \frac{100!}{10^{24}}$, I need to find $x \pmod{10}$ to get it, but here is where I got stuck...

• First multiply digits 1 to 9 and note the last non-zero number; this might be your answer, which will be 4. This is how you will get the last number. – Jasser Sep 22 '14 at 7:50
• @user291957 actually, $9! = 362880$, whose last non-zero digit is 8. – symmetricuser Sep 22 '14 at 8:42
• Yes yes it's 8 @user125084. Apologies. – Jasser Sep 22 '14 at 8:45

    int digit = 1;
    int tmp, i;
    int cnt_2 = 0;              /* net count of spare factors of 2 (each 5 consumes one 2) */
    for (i = 1; i <= n; i++) {  /* n is the input, assumed declared and set elsewhere */
        tmp = i;
        while (tmp % 2 == 0) {  /* strip factors of 2, counting them */
            cnt_2++;
            tmp = (tmp >> 1);
        }
        while (tmp % 5 == 0) {  /* strip factors of 5; each pairs with a 2 into a trailing 0 */
            cnt_2--;
            tmp = tmp / 5;
        }
        digit = (digit * tmp) % 10;   /* multiply in the 2- and 5-free part, mod 10 */
    }
    if (cnt_2 >= 0) {
        for (i = 1; i <= cnt_2; i++) {
            digit = ((digit << 1) % 10);   /* multiply the spare 2s back in, mod 10 */
        }
    }
    if (cnt_2 < 0) {
        digit = 5;              /* more 5s than 2s (cannot happen for n!): digit is 5 */
    }

This is the code in C which I wrote to find the last nonzero digit of n!

• Can you provide a formal mathematical description of your algorithm in addition to the programming implementation? – Vlad Aug 20 '15 at 17:14
• Sorry! I am not so good at mathematics. But I can try to explain what I did here. – nhimran Aug 22 '15 at 4:57
## Description

Unique Binary Search Trees

Given n, how many structurally unique BSTs (binary search trees) are there that store values 1 … n?

Example:

    Input: 3
    Output: 5
    Explanation: Given n = 3, there are a total of 5 unique BSTs:

       1         3     3      2      1
        \       /     /      / \      \
         3     2     1      1   3      2
        /     /       \                 \
       2     1         2                 3

## Solution

Take n = 3 as an example, with the numbers 1, 2, 3, and write the answer as F(3) = f(1..3).

Decompose F(3) by the choice of root:

1. If 1 is the root, the right subtree holds 2..3, contributing F(2) = f(2..3) solutions.
2. If 2 is the root, the left subtree has F(1) = f(1..1) solutions and the right has F(1) = f(3..3), so F(1)*F(1) in total.
3. If 3 is the root, the left subtree has F(2) = f(1..2) solutions.

$$F(n) = \sum_{i=0}^{n-1}F(i) \cdot F(n-i-1), \qquad F(0) = 1.$$

```cpp
class Solution {
public:
    int numTrees(int n) {
        int *dp = new int[n+1]{1, 1};   // {F(0), F(1)}
        for (int i = 2; i <= n; ++i) {
            dp[i] = 0;
            for (int k = 0; k < i; ++k) {
                // left subtree of size k, right subtree of size i-k-1
                dp[i] += (dp[k] * dp[i - k - 1]);
            }
        }
        int r = dp[n];
        delete []dp;
        return r;
    }
};
```
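For reference (an addition, not part of the original write-up): this recurrence generates the Catalan numbers, which also have a closed form,

$$F(n) = \frac{1}{n+1}\binom{2n}{n},$$

so $F(3) = \frac{1}{4}\binom{6}{3} = \frac{20}{4} = 5$, matching the example output.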
THERMO Spoken Here! ~ J. Pohl © ~ 2018 www.ThermoSpokenHere.com (B1150-079)

2.04 Hydrostatic Equation Solutions

The mathematics of the hydrostatic equation is that of the Momentum Equation (Newton's 2nd Law) applied to a small "element" of static fluid. Static does not mean "not moving." It means "not disturbed." It means the sum of forces acting on it equals zero. The sum of forces is zero with no motion and with uniform motion.

When the size of an "element" is imagined to become smaller and smaller... until its size vanishes, this limit requires the calculus definition of a derivative. This is the term, dp(z)/dz, in the equation below, which is commonly called the Hydrostatic Equation. Here z (called the independent variable) is a measure of distance vertical-to-Earth with values increasing upward. Also, since most applications are for fluids on Earth and with but slight differences of elevation, the acceleration of gravity is expressed not as a function of z but as the Earth-surface constant, g0,Earth.

$$0 = -\rho\, g_{0} - \frac{dp(z)}{dz} \tag{1}$$

The above form, though correct, is not used. The equation leads with "0", which means simply that the rate of change of momentum of every fluid particle is zero, d(mV)/dt = 0. Right of the equality, the equation expresses that the sum of forces (gravity force plus gradient of pressure-forces) equals zero. Almost always, the equation is algebraically manipulated. In usual developments of the hydrostatic equation, that it is a consequence of Newton's Second Law is obscured. Also it is presented in "integrated" form:

$$p_2 - p_1 = -\rho\, g_{0}\,(z_2 - z_1) \tag{2}$$

To start a study by use of a differential equation is superior; in the diff-eq, averaging has not been applied. It is always best to write a differential equation as explicitly as possible. An equation equivalent to the above, which shows the dependence of variables, is our preference:

$$\frac{dp(z)}{dz} = -\rho(z)\, g_{0} \tag{3}$$

Equation (3) above is an ordinary, first-order differential equation. There are a few distinctions in the solution of this equation. The solutions are important to learn, but much less important than the manner of solution. Also (but less so) are the resultant solutions which predict realities of fluid behaviors. Full understanding of the mathematics of solution is a path, a learning directly applicable to the mathematics of other physical systems. Therefore please take these solutions, and how they are accomplished, seriously.

Steps of Solution: We have experience in solving first-order differential equations. It is simply a matter of separation of variables, then integration of the equation. All equations are solved by the same set of steps. We begin with equation (3) written above.

Separate Variables of the Differential Equation. When equation (3) is multiplied by the differential quantity, dz, the variables are separated:

$$dp(z) = -\rho(z)\, g_{0}\, dz \tag{4}$$

Apply the integration operator: No thinking for this step. Integration is a "linear" operation. Apply integral signs to the terms left and right of the equality:

$$\int dp(z) = -\int \rho(z)\, g_{0}\, dz \tag{5}$$

Our next task is to specify the limits for the integrals. The limits of an integral are related to its differential. Thus for dp(z) and for dz the limits will be a pair of pressures and a pair of elevations, respectively. For a specific situation, limits could be specified "explicitly" with numbers with dimensions. Here, to proceed in general, we use symbols to obtain an implicit solution.

Left-of-Equality:  The lower limit is the pressure at a location, "1." The upper limit of that integral is the pressure at location "2."
Right-of-Equality:  The lower and upper limits of this integral are the elevations, z1 and z2, respectively.

$$\int_{p_1}^{p_2} dp(z) = -\int_{z_1}^{z_2} \rho(z)\, g_{0}\, dz \tag{6}$$

We are developing an implicit solution. The next step is to integrate both sides of our equation.

Integration ~ Left-of-Equality:  This integral is accomplished "by definition."

Integration ~ Right-of-Equality:  The Mean Value Theorem of Calculus is applied to this integral. The effect is to assume the average value of the density over the limits is known. That constant is brought outside the integral, leaving a simple integration. Integration of both sides yields the result below.

$$p_2 - p_1 = -\bar{\rho}\, g_{0}\,(z_2 - z_1) \tag{7}$$
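As a quick numerical illustration (my own, assuming a constant density, which makes the average density exact), equation (7) applied to water reproduces the familiar rule of thumb of roughly one atmosphere of gauge pressure per ten meters of depth:

    rho_avg = 998.0        # average density of water, kg/m^3 (assumed constant)
    g0 = 9.81              # m/s^2
    z1, z2 = 0.0, -10.0    # surface and a point 10 m below it; z increases upward

    # Equation (7): p2 - p1 = -rho_avg * g0 * (z2 - z1)
    dp = -rho_avg * g0 * (z2 - z1)
    print(dp)              # ~97,900 Pa, i.e. about 0.97 atm of gauge pressure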
# Repulsion - Electrocratic

## Introduction

Definition:

• electrocratic: noting a colloid that owes its stability to the electric charge of the particles on its surface

Clarification on similar terminology:

• electrostatic: of or pertaining to static electricity. Electrostatics is the branch of science that deals with the phenomena arising from what seem to be stationary electric charges.
• lyocratic: noting a colloid owing its stability to the affinity of its particles for the liquid in which they are dispersed
• electrophoretic: relating to charged colloidal particles or molecules in a solution under the influence of an applied electric field, usually provided by immersed electrodes
• rheology: the science of flow and deformation of matter. Its study has been very important to a good understanding of colloidal systems

The display for this electronic book is reflective, not emissive. It contains small, electrically-charged particles suspended in oil whose position is controlled by electrophoretic motion. We could call it an "electrocratic" device - one "ruled" by electronic charge. Amazon "Kindle" 11/08

This "electronic ink" is made of millions of tiny microcapsules, each about the diameter of a human hair. Each microcapsule contains positively charged white particles and negatively charged black particles suspended in a clear fluid. When an electric field of the appropriate polarity is applied, the white particles move to the top of the microcapsule where they become visible to the user, making the surface appear white at that spot. At the same time, an opposite-polarity electric field pulls the black particles to the bottom of the microcapsules where they are hidden. By reversing this process, the black particles appear at the top of the capsule, which now makes the surface appear dark at that spot. How Electronic Ink Works. Figure from the E Ink Corporation

To form such a display, the ink is printed onto a sheet of plastic film that is then laminated to a layer of circuitry. The circuitry forms a pattern of pixels that can then be controlled by a display driver. These microcapsules are suspended in a liquid "carrier medium" allowing them to be printed using existing screen printing processes onto virtually any surface, including glass, plastic, fabric and even paper. (Adapted from the E Ink Corporation)

## Nonpolar, electrocratic repulsion

The free ion concentration (ionic strength) is vanishingly small for many nonpolar solvents. Hence the electrostatic repulsion is determined by Coulombic forces between the charged particles:

$$\Delta G^{R}=\frac{\pi D\varepsilon _{0}d^{2}\zeta ^{2}}{d+H}$$

where $\zeta$ is the surface potential, d is the particle diameter and H is the distance between the particle surfaces.

The total energy of interaction is the sum of the electrostatic repulsion and the dispersion energy of attraction:

$$\Delta G^{total}=\frac{\pi D\varepsilon _{0}d^{2}\zeta ^{2}}{d+H}-\frac{Ad}{24H}$$

For the conditions:

$$\zeta = -105\,\text{mV}\ (8\text{ charges/particle}),\quad d = 100\,\text{nm},\quad A_{121} = 4.05\times10^{-20}\,\text{J (titania in oil)},\quad \lambda = 50\,\text{pS/m}$$

where $\lambda$ is the solution conductivity (a measure of ionic strength).

The same calculation can be used to estimate the surface (zeta) potential sufficient to disperse particles as a function of size. As is seen in aqueous dispersions, the larger the particle, the lower the surface potential needed to stabilize it.
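To get a feel for the magnitudes, here is a rough sketch (my own, in Python) that evaluates $\Delta G^{total}$ for the conditions listed above; the relative permittivity D = 2 assumed for the oil is not stated in the text:

    import math

    eps0 = 8.854e-12       # vacuum permittivity, F/m
    D = 2.0                # relative permittivity of the oil (assumed)
    d = 100e-9             # particle diameter, m
    zeta = -0.105          # surface potential, V
    A121 = 4.05e-20        # Hamaker constant for titania in oil, J
    kT = 1.38e-23 * 298    # thermal energy, J

    def dG_total(H):
        """Coulombic repulsion minus dispersion attraction, in J."""
        rep = math.pi * D * eps0 * d**2 * zeta**2 / (d + H)
        att = A121 * d / (24 * H)
        return rep - att

    for H in (1e-9, 5e-9, 20e-9):
        print(H, dG_total(H) / kT)   # interaction energy in units of kT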
## Stern model of isolated, charged surface

The loosely held countercharges form "electric double layers." The electrostatic repulsion results from the interpenetration of the double layer around each charged particle.

Stern's model for a charged surface with an electrical double layer. (From the lecture on "Charged Surfaces".)

As before, we have the zeta potential, $\zeta$, and the decay of potential with distance, x (in the simplest case: $\text{Potential} = \zeta \exp(-\kappa x)$).

The decay constant, $\kappa$, and the ionic strength, I, are defined below; the Debye length is ${1}/{\kappa}$:

$$\kappa =\sqrt{\frac{e^{2}\sum\limits_{i}{c_{i}z_{i}^{2}}}{D\varepsilon _{0}kT}}$$

$$I=\frac{1}{2}\sum\limits_{i}{c_{i}z_{i}^{2}}$$

## Electrostatic component of disjoining pressure

Internal field gradients between two flat plates. External surfaces are assumed to have the same potential as the internal surface. Derjaguin, 1987, Fig. 6.1.

(1) The disjoining pressure is the excess of the Maxwell stresses between the inside field gradients (Eh) and the outside field gradients (E0). But the field gradients are not known!

$$\Pi \left( h \right)=\frac{\varepsilon }{2}\left( E_{h}^{2}-E_{o}^{2} \right)$$

(2) A thermodynamic argument gives:

$$\left.\frac{\partial \Pi}{\partial \psi_{1}}\right|_{h,\psi _{2}}=\left. \frac{\partial \sigma _{1}}{\partial h} \right|_{\psi _{1},\psi _{2}}$$

(3) And the Poisson-Boltzmann equation must apply:

$$\frac{d^{2}\psi }{dx^{2}}=-\frac{1}{\varepsilon }\sum\limits_{i}{z_{i}en_{i0}\exp \left( -\frac{z_{i}e\psi }{kT} \right)}$$

Solving the differential equations (2) and (3) from the previous slides, using the boundary values and some partial-differential identities, with the restriction of just two types of ions, gives:

$$\Pi \left( h \right)=kT\left[ n_{1}\left( \exp\left( \frac{z_{1}e\psi \left( h \right)}{kT} \right)-1 \right) + n_{2}\left( \exp\left( -\frac{z_{2}e\psi \left( h \right)}{kT} \right)-1 \right) \right]-\frac{\varepsilon }{2}\left( \frac{d\psi \left( h \right)}{dx} \right)^{2}$$

First, try for simplicity: assume the same potential on each plate and a binary electrolyte.

$$\Pi \left( h \right)=2kTn\left( \cosh \left[ \varphi _{m}\left( h \right) \right]-1 \right)$$

$$\text{where }\varphi _{m}=\frac{ze}{kT}\psi _{m}\text{ (at the midplane)}$$

This can be transformed to an elliptic integral of the first kind:

$$\Pi =4kTn\left( \frac{1}{k^{2}}-1 \right)$$

$$\frac{\kappa h}{2}=k\int\limits_{0}^{\omega _{1}}{\frac{d\omega }{\sqrt{1-k^{2}\sin ^{2}\omega }}}$$

$$k=\frac{1}{\cosh \left( \frac{\varphi _{m}}{2} \right)};\qquad \cos\omega =\frac{\sinh \left( \frac{\varphi _{m}}{2} \right)}{\sinh \left( \frac{\varphi }{2} \right)};\qquad \cos \omega _{1}=\frac{\sinh \left( \frac{\varphi _{m}}{2} \right)}{\sinh \left( \frac{\varphi _{0}}{2} \right)}$$

Solution: For a given $\varphi _{0}$ and h, the integral equation can be solved for k. From k, $\Pi$ can be calculated. Repeat for all necessary values of h.

## Constant potential or constant charge?

If two surfaces approach each other and the surface potential remains constant, the charge per unit area must decrease. Ions must either adsorb or desorb!

If two surfaces approach each other and the surface charge remains constant (no ion adsorption or desorption), the electric potential must increase!

Disjoining pressure as a function of $\kappa h$ in a symmetrical electrolyte at constant potential (lower curve) and constant surface charge (upper curve). Derjaguin, 1987, Fig. 6.2.
The difference between the two is huge! Probably much larger than the differences due to small changes in electrocratic stabilization theories. The derivation is given in detail by Derjaguin, 1987, pp. 181 – 183.

## Linear model

The usual method to solve for the interaction between two charged surfaces (particles or flat plates) is to assume a linear model - that is, when the double layers overlap, the local ion concentrations just add. Langmuir thought of this as an osmotic pressure calculation, so that the total osmotic pressure (at the midplane between the particles) increases. It is that increase in osmotic pressure that is claimed to be the source of the repulsion. Derjaguin is disdainful of this approach. However it is illuminating, at least to first order.

The repulsive energy due to the overlap of the electrical double layers (given in any textbook) is:

$$\Delta G^{r}=\frac{32n_{0}kT\pi d\Phi ^{2}}{\kappa ^{2}}\exp (-\kappa H)$$

where $n_0$ is the ion concentration far from the charged surfaces, H is the distance between the charged surfaces, d is the diameter of the particles, and $\Phi$ is a function of the zeta potential:

$$\Phi =\tanh \frac{ze\zeta }{4kT}$$

The sum (linear model) of the dispersion energy for the interaction of two spheres and the electrostatic repulsion of their overlapping double layers is:

$$\Delta G^{T}=\frac{32n_{0}kT\pi d\Phi ^{2}}{\kappa ^{2}}\exp (-\kappa H)-\frac{A_{121}d}{24H}$$

This is the DLVO theory of electrostatic stabilization (Derjaguin-Landau-Verwey-Overbeek).

A "typical" plot of a DLVO curve shows the primary minimum of particles at close distance - this usually corresponds to irreversible flocculation; a positive maximum in the total energy of interaction, which provides a kinetic barrier to flocculation; and a secondary minimum at longer distances, whose presence indicates a weak floc structure, often broken with modest shear stress.

The effect of added electrolyte on an oil/water emulsion: Morrison, Fig. 20.4

The effect of added electrolyte on a titania-in-water dispersion: Morrison, Fig. 20.5 (corrected)

## Schulze-Hardy rule - The Critical Coagulation Concentration

Clearly the addition of electrolyte diminishes the stability of electrocratic dispersions. Well before DLVO theory was developed, Schulze and Hardy (independently) discovered a remarkable fact: the critical coagulation concentration of an electrocratic dispersion varies as the inverse sixth power of the valence of the oppositely-charged counterion. However, experimental data from some colloidal systems may differ from the Schulze-Hardy rule. This model does not account for adsorption when colloids are destabilized by metal coagulants. If a colloid is well described by Gouy-Chapman theory, then the Schulze-Hardy rule will probably apply.

That discovery can be compared to the prediction of the DLVO theory. What is the concentration of salt, $n_0$, necessary to eliminate the repulsive barrier completely? This rule assumes rigid particles and uniform surface conditions (several scientists have worked to extend this rule to other conditions). The idea is to calculate the salt concentration that removes the repulsive barrier. The mathematical criteria are that the maximum is zero, i.e., both the curve and its derivative are zero:

$$\left.\Delta G^{t}\right|_{H=H_{0}}=0\quad\text{and}\quad\left. \frac{d\Delta G^{t}}{dH} \right|_{H=H_{0}}=0$$

At short distances, hard spheres may be better described by flat plates. However, one assumes that these particles are still hard spheres to obtain this result.
A little algebra produces the hoped-for result:

$$n_{0}\text{ (molecules/cm}^{3}\text{)}=\frac{\left( 4\pi \varepsilon _{0}DkT \right)^{3}2^{11}3^{2}\Phi ^{4}}{\pi \exp \left( 4 \right)e^{6}A_{121}^{2}z^{6}}\propto \frac{1}{z^{6}}$$

It is the surprising agreement of the DLVO theory with the Schulze-Hardy rule that established the DLVO theory. One finds that coagulation occurs when the secondary minimum in the figure has a depth greater than kT. In the Schulze-Hardy rule, this happens when the maximum of the potential energy (the place where the arrow points) is equal to zero. That is what is being specified in the second box, where the value of the potential and its first derivative are set equal to zero for a concentration that causes coagulation. (Even though Derjaguin, in later years, thought it too simple.)

What is the current formulation of the DLVO theory? This excerpt from everyone's favorite infallible source, wikipedia, and Alberty's p-chem book describe it well:

The DLVO theory is named after Boris Derjaguin, Lev Davidovich Landau, Evert Johannes Willem Verwey and Theo Overbeek, who developed it in the 1940s. The theory describes the force between charged surfaces interacting through a liquid medium. It combines the effects of the van der Waals attraction and the electrostatic repulsion due to the so-called double layer of counterions. The electrostatic part of the DLVO interaction is computed in the mean field approximation in the limit of low surface potentials - that is, when the potential energy of an elementary charge on the surface is much smaller than the thermal energy scale, $k_B T$. For two spheres of radius $a$ with constant surface charge $Z$ separated by a center-to-center distance $r$ in a fluid of dielectric constant $\epsilon$ containing a concentration $n$ of monovalent ions, the electrostatic potential takes the form of a screened-Coulomb or Yukawa repulsion,

$$\beta U(r) = Z^2 \lambda_B \, \left(\frac{\exp(\kappa a)}{1 + \kappa a}\right)^2 \, \frac{\exp(-\kappa r)}{r},$$

where $\lambda_B$ is the Bjerrum length, $\kappa^{-1}$ is the Debye-Hückel screening length, which is given by $\kappa^2 = 4 \pi \lambda_B n$, and $\beta^{-1} = k_B T$ is the thermal energy scale at absolute temperature $T$.

## Electrocratic stability and the phase diagram

Phase diagrams of three hydrophobic sols, showing stability domains as a function of Al(NO3)3 or AlCl3 concentration and pH: styrene-butadiene rubber (SBR) latex (left); silver iodide sol (middle); and benzoin sols prepared from powdered Sumatra gum (right). Matijevic', JCIS, 43, 217, 1973.

The chemistry, and hence the charge on complex ions in solution, changes with concentration and pH. Since the sign and magnitude of the ion charge change, so does the stability of any electrocratic surfaces. Too often these effects are ignored. Matijevic', JCIS, 43, 217, 1973

## "Patterned Colloidal Deposition Controlled by Electrostatic and Capillary Forces"

J. Aizenberg, PRL, Volume 84, Number 13, 2000

An ink-stamp method was used to produce anionic and cationic regions on a surface. Charged colloidal particles were then deposited on the surface. The colloids first attached to electrostatically preferred regions, and then assembled upon drying due to capillary forces.

## Effect of particle size

A surprising prediction of the DLVO theory is the decreasing stability of smaller particles at the same surface potential and solution ionic strength. The decrease is a consequence of the scaling of the DLVO theory with particle size.
The (linear) DLVO theory for two similarly charged particles in suspension is:

$$\Delta G^{T}=\frac{32n_{0}kT\pi d\Phi ^{2}}{\kappa ^{2}}\exp (-\kappa H)-\frac{A_{121}d}{24H}$$

The equation is linear in particle size, d. Therefore the smaller the particles, the lower the barrier to flocculation. Morrison, Fig. 20.3
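To make the size scaling concrete, here is a rough numerical sketch (my own; the electrolyte concentration, zeta potential and Hamaker constant below are assumed order-of-magnitude values, not Morrison's) that evaluates the linear DLVO expression and reports the barrier height in units of kT for two particle sizes:

    import math

    kT = 1.38e-23 * 298           # thermal energy, J
    e = 1.602e-19                 # elementary charge, C
    eps = 78.5 * 8.854e-12        # permittivity of water, F/m
    n0 = 6.022e23                 # 1 mM 1:1 electrolyte, ions/m^3 (assumed)
    kappa = math.sqrt(2 * n0 * e**2 / (eps * kT))   # inverse Debye length, 1/m
    zeta = 0.030                  # 30 mV zeta potential (assumed)
    Phi = math.tanh(e * zeta / (4 * kT))
    A121 = 1e-20                  # Hamaker constant, J (assumed)

    def dG_total(H, d):
        """Linear DLVO: double-layer repulsion minus dispersion attraction."""
        rep = 32 * n0 * kT * math.pi * d * Phi**2 / kappa**2 * math.exp(-kappa * H)
        att = A121 * d / (24 * H)
        return rep - att

    for d in (100e-9, 1000e-9):   # 100 nm and 1 um particles
        barrier = max(dG_total(i * 1e-10, d) for i in range(1, 2000))
        print(d, barrier / kT)    # the barrier scales linearly with d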
# Model

We implement a standard averaged perceptron model, as documented in the assignment specification. We also repeated several of the experiments used throughout this analysis with a multinomial Naive Bayes classifier. As we were encouraged to use the Perceptron model, and considering the limited amount of space available, the results of this comparison have not been included. It is worth noting that the Naive Bayes model seemed to perform no worse than the Perceptron in terms of accuracy, and often beat it. Furthermore, in terms of model run-time the Naive Bayes was very competitive.

# Validation Dataset

We validated our model on the Internet Advertisement Data Set. We found that with a single pass we achieved a 10-fold average accuracy of 0.949. This is reasonably competitive with other benchmarks on this dataset, so we are satisfied with our implementation. (See http://archive.ics.uci.edu/ml/datasets/Internet+Advertisements for benchmarks.) The best accuracy (0.951) was found by grid-searching on the number of training iterations -- the best value being three passes through.

# Wikipedia Text Classification

## Experiments

All 'mean' values are evaluated as average model accuracy across all ten folds. This is equivalent to using an F1-measure with micro-averaging. Our experiments can be divided into bag-of-words approaches, semantic approaches and combined approaches. Bag-of-word (BOW) approaches include both the standard BOW model as well as the TF-IDF transformation. Our semantic approaches attempted to use POS-taggers and polarity/objectivity measures to predict text class. The combined approaches simply took two or more different feature sets and attempted to combine them, through changing the weighting of each feature set and using $$\chi^2$$-feature selection/reduction.
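For concreteness, a minimal sketch of an averaged perceptron of the kind we describe (illustrative only; the variable names and toy data below are not from our actual implementation):

```python
import numpy as np

def train_averaged_perceptron(X, y, epochs=1):
    """Binary averaged perceptron; X: (n, d) features, y: labels in {-1, +1}."""
    n, d = X.shape
    w = np.zeros(d)        # current weights
    w_sum = np.zeros(d)    # running sum of weights after every example
    b = b_sum = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (w @ xi + b) <= 0:   # mistake-driven update
                w += yi * xi
                b += yi
            w_sum += w                   # accumulate for averaging
            b_sum += b
    t = epochs * n
    return w_sum / t, b_sum / t          # averaged weights reduce variance

# Toy usage on a linearly separable set
X = np.array([[1.0, 2.0], [2.0, 1.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
w, b = train_averaged_perceptron(X, y, epochs=3)
print(np.sign(X @ w + b))   # [ 1.  1. -1. -1.]
```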
tubeplot - Maple Help

tubeplot - three-dimensional tube plotting

Calling Sequence

tubeplot(C, options)

Parameters

C - set of space curves

Description

• The tubeplot function defines a tube about one or more three-dimensional space curves. A given space curve is a list of three or more components. The initial three components define parametrically the x, y, and z components. Additional components of a given space curve specify various local attributes of the curve.

• Remaining components of an individual space curve are interpreted as local options which are specified as equations of the form option = value. These include equations of the form numpoints = n or tubepoints = m with n and m integers. These allow the user to designate the number of points evaluated on the space curve and the number of points on the tube, respectively. The default values used by Maple are numpoints=50 and tubepoints=10. An equation of the form radius = f, where f is some expression, defines the radius of the tube about the given space curve. If no radius is specified, then the default used is radius=1. An equation of the form t=a..b, where a and b evaluate to constants, specifies the range of the parameter of the curve.

• Remaining arguments to tubeplot include such specifications as numpoints = n, tubepoints = m, t = a..b, and radius = f. These are to be used in the case where an individual space curve does not have the option specified.

• Additional options are the same as those found in spacecurve (and similar to options for plot3d). For example, the option axes = boxed specifies that the tubeplot is to include a boxed axis bounding the plot. See also plot3d/option.

• The result of a call to tubeplot is a PLOT3D structure which can be rendered by the plotting device. You can assign a PLOT3D value to a variable, save it in a file, then read it back in for redisplay. See plot3d/structure.

• tubeplot may be defined by with(plots) or with(plots,tubeplot). It can also be used by the name plots[tubeplot].

• For more examples, including ones demonstrating the use of additional plot options, see examples/knots.

Examples

> with(plots):
> tubeplot([cos(t), sin(t), 0], t = 0..2*Pi, radius = 0.5)
> tubeplot([cos(t), sin(t), 0, t = Pi..2*Pi, radius = 0.25*(t - Pi)])
> tubeplot([3*sin(t), t, 3*cos(t)], t = -3*Pi..4*Pi, radius = 1.2 + sin(t), numpoints = 80)
> tubeplot([sin(t), t, exp(t)], t = -1..1, radius = cos(t), tubepoints = 20)
> tubeplot([-10*cos(t) - 2*cos(5*t) + 15*sin(2*t), -15*cos(2*t) + 10*sin(t) - 2*sin(5*t), 10*cos(3*t)], t = 0..2*Pi, radius = 3*cos(t*Pi/3))

Multiple tubeplots are also allowed.
> tubeplot({[0, sin(t) - 1, cos(t)], [cos(t), sin(t), 0]}, t = 0..2*Pi, radius = 1/4);
> tubeplot({[0, sin(t) - 1, cos(t)], [cos(t), sin(t), 0]}, t = 0..2*Pi, radius = t/10);

You can specify the color option as a two-argument procedure:
> F := (x, y) -> sin(x):
> tubeplot({[0, sin(t) - 1, cos(t)], [cos(t), sin(t), 0]}, t = 0..2*Pi, radius = 1/4, color = F);

The command to create the plot from the Plotting Guide is
> tubeplot({[0, cos(t) - 1, sin(t), t = 0..Pi, numpoints = 45, radius = 0.25], [cos(t), sin(t), 0, t = Pi..2*Pi, numpoints = 15, radius = 0.25*(t - Pi)]});
The Alkali Metals (Group 1 Except Hydrogen)

The group 1 elements react with oxygen from the air to make metal oxides. When the alkali metals are cut, they initially appear shiny grey but quickly become dull and white as they react with oxygen in the air: a layer of dull oxide, called tarnish, covers the surface of the bare metal. Lithium tarnishes slowly, sodium quickly and potassium very quickly; the speed at which the alkali metals react with oxygen in the air increases as you go down group 1. Because they are so reactive, the alkali metals are stored under oil to prevent oxygen from reaching the surface of the bare metal. They also react vigorously with cold water, which explains the "keep away from humidity" safety regulation.

Reaction with oxygen

The general equation for a metal burning in air is:

metal + oxygen → metal oxide

For the alkali metals (M), heated in oxygen, this becomes:

4M(s) + O2(g) → 2M2O(s)

Lithium burns with a bright red flame (molten lithium ignites in oxygen to form Li2O), sodium burns with an orange flame, and potassium burns with a lilac flame:

4Li(s) + O2(g) → 2Li2O(s)
4Na(s) + O2(g) → 2Na2O(s)
4K(s) + O2(g) → 2K2O(s)

The oxides formed are white powders which dissolve in water to form strongly alkaline hydroxides; the oxides are much less reactive than the pure metals. Lithium is unique in group 1 because it reacts with nitrogen in the air as well as with oxygen, so burning lithium in air gives a mixture of the metal oxide and the metal nitride.

Oxides, peroxides and superoxides

The alkali metals in fact react with oxygen to form several different compounds: suboxides, oxides (containing the O^2- ion), peroxides (O2^2-) and superoxides (O2^-). Lithium forms only the monoxide, sodium forms the monoxide and the peroxide, and the other elements form the monoxide, peroxide and superoxides. With excess oxygen, the alkali metals can form peroxides, M2O2, or superoxides, MO2. Sodium superoxide (NaO2) can be prepared with high oxygen pressures, whereas the superoxides of potassium, rubidium and caesium can be prepared directly by combustion in air; by contrast, no superoxides have been isolated in pure form for lithium or the alkaline earth metals. Lithium oxide (Li2O) is the lightest alkali metal oxide, a white solid. Sodium oxide (Na2O) is a white solid that melts at 1132 °C and decomposes at 1950 °C; it is a component of glass.

Alkali metal hydrides react with proton donors (water, alcohols, ammonia and alkynes) to eliminate hydrogen gas. Hydrogen itself burns in oxygen: 2H2 + O2 → 2H2O.

The Alkaline Earth Metals (Group 2)

The alkaline earth metals also react with oxygen, though not as rapidly as the group 1 metals; these reactions generally require heating. When burned in air they give the corresponding oxides. They also react with the halogens to form ionic halides, such as calcium chloride (CaCl2), as well as with oxygen to form oxides such as strontium oxide (SrO), and they react (quite violently!) with acids to produce hydrogen gas and the corresponding salt. The alkaline earth metals other than beryllium react with even cold water to liberate hydrogen. If the acid is relatively dilute, reaction with nitric acid produces nitrogen monoxide, although this immediately reacts with atmospheric oxygen, forming nitrogen dioxide.

Due to the formation of a thin film of oxide, beryllium and magnesium do not react continuously with oxygen; aluminium likewise develops a thin oxide layer on exposure to air. Powdered beryllium, however, burns to give beryllium oxide (BeO) and beryllium nitride (Be3N2), and magnesium burns brightly: 2Mg + O2 → 2MgO.

Carbides of these metals react with water to liberate acetylene gas and are hence used as a source for the gas:

M + 2C → MC2
MC2 + 2H2O → M(OH)2 + C2H2

A note on beryllium

Originally named "glucinium" because some of its salts taste sweet, beryllium and its simpler salts are actually highly toxic. Now named "beryllium" for one of its most abundant minerals, beryl, it can be quite valuable when contaminated with the right impurities: chromium-containing beryl is emerald. Beryllium is much smaller than the other alkaline earth metals, so its valence electrons are more strongly attracted to the nucleus; the charge of a Be^2+ ion would be distributed over a very small volume, and in practice beryllium is not ionic (its electrons are covalently bonded). It does not react with water even when heated, and it only reacts with halogens on heating. Many of lithium's properties, in turn, are more similar to those of magnesium than to those of the other elements in its own group.

Notes for teachers

Your learners will enjoy watching the experiments in this lesson, which show how the alkali metals react in air and how they burn in pure oxygen, generally by limiting the supply of oxygen. We suggest that your learners draw up a blank table before watching the lesson; after they have seen each experiment, you could pause the video to give them a chance to record their observations. (Incidentally, the word halogen itself means "salt former" in Greek; and for orientation on the periodic table, magnesium is in group 2, iron in group 8 and copper in group 11.)
# The National Vaccine Information Center estimates that 90% of Americans have had chickenpox

The National Vaccine Information Center estimates that 90% of Americans have had chickenpox by the time they reach adulthood.

(a) Suppose we take a random sample of 100 American adults. Is the use of the binomial distribution appropriate for calculating the probability that exactly 97 out of 100 randomly sampled American adults had chickenpox during childhood? Explain.
(b) Calculate the probability that exactly 97 out of 100 randomly sampled American adults had chickenpox during childhood.
(c) What is the probability that exactly 3 out of a new sample of 100 American adults have not had chickenpox in their childhood?
(d) What is the probability that at least 1 out of 10 randomly sampled American adults have had chickenpox?
(e) What is the probability that at most 3 out of 10 randomly sampled American adults have not had chickenpox?

Jillian Edgerton

Step 1
Given: let $X$ be the number of American adults who have had chickenpox by the time they reach adulthood, and let $p$ be the probability that an American adult has had chickenpox by the time they reach adulthood, so $p = 0.90$ and $X \sim \mathrm{Binomial}(n, p)$. Likewise, let $Y$ be the number of American adults who have not had chickenpox by the time they reach adulthood, and let $q$ be the probability that an American adult has not had chickenpox, so $q = 1 - p = 0.10$ and $Y \sim \mathrm{Binomial}(n, q)$.

Step 2
Solution:
(d) With $n = 10$ and $X \sim \mathrm{Binomial}(10, 0.9)$:
$$P(X \ge 1) = 1 - P(X = 0) = 1 - \binom{10}{0}(0.9)^{0}(0.1)^{10} = 0.9999\ldots \approx 1$$
(e) With $Y \sim \mathrm{Binomial}(10, 0.1)$:
$$P(Y \le 3) = P(Y = 0) + P(Y = 1) + P(Y = 2) + P(Y = 3)$$
$$= \binom{10}{0}(0.1)^{0}(0.9)^{10} + \binom{10}{1}(0.1)^{1}(0.9)^{9} + \binom{10}{2}(0.1)^{2}(0.9)^{8} + \binom{10}{3}(0.1)^{3}(0.9)^{7} = 0.9872$$
Serita Dewitt

Step 1
a) Yes. It fits all 4 criteria: a fixed number of trials ($n = 100$); the trials are independent (randomly selected); each trial is either a "success" (had chickenpox) or a "failure" (did not); and the probability of a "success" ($p = 0.90$) is the same for all trials.
Let $C_{100}$ be the number of people out of 100 that have had chickenpox.
b) Using the formula for binomial random variables:
$$P(C_{100} = 97) = \binom{100}{97} \times 0.90^{97} \times (1 - 0.90)^{3} = 0.0059$$
c) Same as b, since "exactly 3 out of 100 have not had chickenpox" is the same event as "exactly 97 out of 100 have had chickenpox".
d) Let $C_{10}$ be the number of people out of 10 that have had chickenpox. We want $P(C_{10} \ge 1)$. Using the binomial distribution formula:
$$P(C_{10} \ge 1) = 1 - P(C_{10} = 0) = 1 - \binom{10}{0} \times 0.90^{0} \times (1 - 0.90)^{10} = 1 - 10^{-10} \approx 1$$
e) "At most 3 out of 10 have not had chickenpox" means "at least 7 out of 10 have had it", so we want $P(C_{10} \ge 7)$. Using the binomial distribution formula:
$$P(C_{10} \ge 7) = \sum_{c=7}^{10} \binom{10}{c} \times 0.90^{c} \times (1 - 0.90)^{10-c} = 0.9872$$

karton

Step 1
Given: $p = 0.90$, the probability that an American adult has had chickenpox by adulthood. Therefore $q = 1 - p = 0.10$, the probability that an adult has not.
Part (a): Because there are only two answers, "Yes" or "No", to whether an adult has had chickenpox, and the sample is random with a fixed size, the use of the binomial distribution is justified.
Part (b): Calculate the probability that exactly 97 out of 100 sampled adults have had chickenpox. The probability is
$$P_{1} = {}_{100}C_{97}(0.9)^{97}(0.1)^{3} = 0.0059$$
Part (c): This is the same event as in part (b), so the probability is again $0.0059$.
Part (d): Calculate the probability that at least 1 out of 10 randomly selected adults has had chickenpox. The probability is
$$P_{3} = 1 - {}_{10}C_{0}(0.9)^{0}(0.1)^{10} = 1 - 10^{-10} \approx 1$$
Part (e): Calculate the probability that at most 3 out of 10 randomly selected adults have not had chickenpox. The probability is
$$P_{4} = {}_{10}C_{0}(0.1)^{0}(0.9)^{10} + {}_{10}C_{1}(0.1)^{1}(0.9)^{9} + {}_{10}C_{2}(0.1)^{2}(0.9)^{8} + {}_{10}C_{3}(0.1)^{3}(0.9)^{7} = 0.9872$$
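These values are easy to double-check numerically; the SciPy snippet below is a verification aid, not part of the original answers:

```python
# Sanity-check the binomial answers above with scipy.stats.
from scipy.stats import binom

print(binom.pmf(97, 100, 0.9))    # (b) and (c): exactly 97 of 100 had chickenpox -> ~0.0059
print(1 - binom.pmf(0, 10, 0.9))  # (d): at least 1 of 10 had chickenpox -> ~1.0
print(binom.cdf(3, 10, 0.1))      # (e): at most 3 of 10 have NOT had chickenpox -> ~0.9872
```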
# How to apply a gamma correction to a Graphics Draw (GD) image in PHP?

imagegammacorrect() is an inbuilt function in PHP that is used to apply a gamma correction to a given Graphics Draw (GD) input image, given an input gamma and an output gamma.

bool imagegammacorrect(resource $image, float $inputgamma, float $outputgamma)

## Parameters

imagegammacorrect() takes three different parameters: $image, $inputgamma and $outputgamma.

• $image − Specifies the image to be worked on.
• $inputgamma − Specifies the input gamma.
• $outputgamma − Specifies the output gamma.

## Return Values

imagegammacorrect() returns True on success and False on failure.

## Example 1

<?php
   // Load an image from the local drive folder
   $img = imagecreatefrompng('C:\xampp\htdocs\Images\img58.png');

   // Change the image gamma by using imagegammacorrect
   imagegammacorrect($img, 15, 1.5);

   // Output the image to the browser
   header('Content-Type: image/png');
   imagepng($img);
   imagedestroy($img);
?>

## Output

Input image before using the imagegammacorrect() PHP function

Output image after using the imagegammacorrect() PHP function

Explanation − In this example, we loaded the image from the local drive folder by using the imagecreatefrompng() function (we could also use the URL of an image). After that, we applied imagegammacorrect() with the input gamma 15 and the output gamma 1.5. We can see the difference between the two images in the output.
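As background (a gloss of the general idea, not taken from the PHP manual): under the usual convention, a gamma correction remaps each normalized channel value $v \in [0, 1]$ as

$$v_{\text{out}} = v_{\text{in}}^{\,\gamma_{\text{in}}/\gamma_{\text{out}}}$$

so the strength of the effect is governed by the ratio of the two gamma arguments; under this convention, ratios above 1 darken the mid-tones and ratios below 1 brighten them.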
## Section: New Results

### Hamilton-Jacobi approach

#### Hamilton-Jacobi equations in singular domains

Participants : Zhiping Rao, Hasnaa Zidani.

A good deal of attention has been devoted to the analysis of Hamilton-Jacobi equations adapted to unconventional domains, particularly in view of applications to control problems and traffic models. The topic is new and capable of interesting developments; the results obtained so far have clarified, under reasonable assumptions, basic items such as the right notion of viscosity solution to be adopted and the validity of comparison principles.

• The work [19], co-authored with C. Imbert (LAMA, U. Paris-Est) and R. Monneau (Cermics, ENPC), focuses on a Hamilton-Jacobi approach to junction problems with applications to traffic flows. More specifically, the paper is concerned with the study of a model case of first order Hamilton-Jacobi equations posed on a junction, that is to say the union of a finite number of half-lines with a unique common point. The main result is a comparison principle. We also prove existence and stability of solutions. The two challenging difficulties are the singular geometry of the domain and the discontinuity of the Hamiltonian. As far as discontinuous Hamiltonians are concerned, these results seem to be new. They are applied to the study of some models arising in traffic flows. The techniques developed here provide new powerful tools for the analysis of such problems.

• This work deals with deterministic control problems where the dynamics can be completely different in complementary domains of the space $\mathbb{R}^d$. As a consequence, the dynamics present discontinuities at the interfaces of these domains. This leads to a complex interplay among transmission conditions, which has to be analyzed in order to "glue" the propagation of the value function across the interfaces. Several questions arise: how should the value function be properly defined, and what is the right Bellman equation associated with this problem? In the case of finite horizon problems without running cost, a junction condition is derived on the interfaces, and a precise viscosity notion is provided in a paper in progress. Moreover, a uniqueness result for the viscosity solution is shown.

#### A general Hamilton-Jacobi framework for nonlinear state-constrained control problems

Participants : Olivier Bokanowski, Hasnaa Zidani.

This work [10], co-authored with Albert Altarovici, deals with a deterministic optimal control problem with state constraints and nonlinear dynamics. It is known for such a problem that the value function is in general discontinuous, and its characterization by means of an HJ equation requires some controllability assumptions involving the dynamics and the set of state constraints. Here, we first adopt the viability point of view and look at the value function through its epigraph. Then, we prove that this epigraph can always be described by an auxiliary optimal control problem free of state constraints, for which the value function is Lipschitz continuous and can be characterized, without any additional assumptions, as the unique viscosity solution of a Hamilton-Jacobi equation.
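Schematically, the epigraph description rests on an auxiliary function: one introduces a Lipschitz continuous $w$ on the augmented state space satisfying

$$w(x, z) \le 0 \iff v(x) \le z,$$

where $v$ is the (possibly discontinuous) constrained value function, so that the epigraph of $v$ is recovered as the zero sublevel set $\{(x, z) : w(x, z) \le 0\}$, and it is $w$ that solves an unconstrained Hamilton-Jacobi equation.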
The idea introduced in this paper bypasses the regularity issues of the value function of the constrained control problem and leads to a constructive way to compute its epigraph by a large panel of numerical schemes. Our approach can be extended to more general control problems. We study in this paper the extension to the infinite horizon problem as well as to the two-player game setting. Finally, an illustrative numerical example is given to show the relevance of the approach.

#### State-constrained optimal control problems of impulsive differential equations

Participants : Nicolas Forcadel, Zhiping Rao, Hasnaa Zidani.

The research report [35] presents a study of optimal control problems governed by measure-driven differential systems in the presence of state constraints. The first result shows that, using the graph completion of the measure, the optimal solutions can be obtained by solving a reparametrized control problem of absolutely continuous trajectories, but with time-dependent state constraints. The second result shows that it is possible to characterize the epigraph of the reparametrized value function by a Hamilton-Jacobi equation without assuming any controllability condition.

#### Level-set approach for reachability analysis of hybrid systems under lag constraints

Participants : Giovanni Granato, Hasnaa Zidani.

The study in [36] aims at characterizing the reachable set of a hybrid dynamical system with a lag constraint in the switch control. The setting does not make any controllability assumptions and uses a level-set approach. The approach consists in the introduction of an adequate hybrid optimal control problem with lag constraints on the switch control, whose value function allows a characterization of the reachable set. The value function is in turn characterized by a system of quasi-variational inequalities (SQVI). We prove a comparison principle for the SQVI which shows uniqueness of its solution. A class of numerical finite difference schemes for solving the system of inequalities is proposed, and the convergence of the numerical solution towards the value function is studied using the comparison principle. Some numerical examples illustrating the method are presented. Our study is motivated by an industrial application, namely that of range extender electric vehicles. This class of electric vehicles uses an additional module (the range extender) as an extra source of energy, in addition to its main source (a high voltage battery). The methodology presented in [36] is used to establish the maximum range of a hybrid vehicle; see [22].
# fraction with 2 variables, and radicals in numerator

I arrived at the following solution to a problem in pre-calculus: $$\frac{2xh + h^2 + \sqrt{x+h} - \sqrt{x}}{h}$$ However, this can be simplified further to: $$2x+h+\frac{1}{\sqrt{x+h}+\sqrt{x}}$$ The steps to simplify were not provided. I substituted in $x = 9, h = 16$ to confirm and I've searched a few places, but am at a loss as to how the term $$\frac{\sqrt{x+h} - \sqrt{x}}{h}$$ can be simplified to: $$\frac{1}{\sqrt{x+h}+\sqrt{x}}$$ What are the steps and relevant rules?

• difference of two squares Jun 26 '18 at 4:14

This is an example of the very useful "multiply by the conjugate" trick. The trick is based on the fact that $$(a+b)(a-b)=a^2-b^2,$$ which is easy to verify by direct multiplication — but it's so useful that it's worth remembering! In your example, the conjugate of the numerator $\left(\sqrt{x+h}-\sqrt{x}\right)$ is the expression $\left(\sqrt{x+h}+\sqrt{x}\right)$, so we're going to multiply the numerator and denominator simultaneously by this conjugate: $$\frac{\sqrt{x+h}-\sqrt{x}}{h}=\frac{\left(\sqrt{x+h}-\sqrt{x}\right)\left(\sqrt{x+h}+\sqrt{x}\right)}{h\left(\sqrt{x+h}+\sqrt{x}\right)}=\frac{\left(\sqrt{x+h}\right)^2-\left(\sqrt{x}\right)^2}{h\left(\sqrt{x+h}+\sqrt{x}\right)}=\frac{x+h-x}{h\left(\sqrt{x+h}+\sqrt{x}\right)}=\frac{h}{h\left(\sqrt{x+h}+\sqrt{x}\right)}=\frac{1}{\sqrt{x+h}+\sqrt{x}}.$$

Note that $$h = x+h-x = (\sqrt{x+h}-\sqrt{x})(\sqrt{x+h}+\sqrt{x})$$ and therefore, $$\frac{h}{\sqrt{x+h}+\sqrt{x}} = \sqrt{x+h}-\sqrt{x}$$
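A quick symbolic check of the identity (a verification aid using SymPy, rather than part of the algebraic argument above):

```python
import sympy as sp

# Declare positive symbols so the square-root simplifications are valid.
x, h = sp.symbols('x h', positive=True)
lhs = (sp.sqrt(x + h) - sp.sqrt(x)) / h
rhs = 1 / (sp.sqrt(x + h) + sp.sqrt(x))

print(sp.simplify(lhs - rhs))            # 0: the two forms agree symbolically
print(lhs.subs({x: 9, h: 16}).evalf())   # 0.125, matching the x = 9, h = 16 spot check
```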
# Blackboard Shots with Prefix "12_267"

12_267 is 2012 MAT 267 - Advanced ODEs.

121204-100949: The amplitudes when $q\to L$ (3). 121204-100307: The amplitudes when $q\to L$ (2). 121204-095516: The amplitudes when $q\to L$. 121204-095148: Bounding amplitudes on the other side. 121204-094026: The basic amplitudes theorem (2). 121204-093507: The basic amplitudes theorem. 121204-093124: Changing the independent variable (2). 121204-092337: Changing the independent variable. 121204-092329: Notes. 121203-100051: Changing the independent variable (2). 121203-095636: Changing the independent variable. 121203-094744: The Sturm comparison theorem - comparing with Euler (3). 121203-094157: The Sturm comparison theorem - comparing with Euler (2). 121203-093704: The Sturm comparison theorem - comparing with Euler. 121203-093037: The Sturm comparison theorem - studying Bessel. 121203-092424: The Sturm comparison theorem - self comparisons (2). 121203-091617: The Sturm comparison theorem - self comparisons. 121203-091524: Notes and riddle. 121130-100324: More on $y''+x^\alpha y=0$. 121130-095719: The Sturm Comparison Theorem (3). 121130-095336: The Sturm Comparison Theorem (2). 121130-095035: The Sturm Comparison Theorem. 121130-094519: $y''+x^\alpha y=0$. 121130-094107: The non-oscillation theorem (5). 121130-093715: The non-oscillation theorem (4). 121130-093107: The non-oscillation theorem (3). 121130-092351: The non-oscillation theorem (2). 121130-091916: The non-oscillation theorem. 121130-091418: Reminders. 121130-091002: Today's Catalan. 121127-100434: Changing the dependent variable (3). 121127-095841: Changing the dependent variable (2). 121127-095243: Changing the dependent variable. 121127-094605: The basic oscillation theorem (3). 121127-094055: The basic oscillation theorem (2). 121127-093543: The basic oscillation theorem. 121127-093006: Restoring forces, the case of $q<0$. 121127-091026: Airy's equation - why? 121127-091017: Announcements. 121126-100200: The hardest case - $\alpha_1-\alpha_2\in{\mathbb N}_{>0}$. 121126-095126: The case of a double root (2). 121126-094748: The case of a double root. 121126-094123: Faking the graph of $x^{1/2}\cos(\frac12\log x)$. 121126-093828: The easy case with complex numbers. 121126-093307: The easy case. 121126-092606: The fundamental series of $J_{1/3}$. 121126-091701: Reminders, the fundamental series. 121123-100409: RSP at order 2 (5). 121123-100024: RSP at order 2 (4). 121123-095637: RSP at order 2 (3). 121123-095031: RSP at order 2 (2). 121123-094756: RSP at order 2. 121123-094145: RSP at order 1 (4). 121123-093629: RSP at order 1 (3). 121123-093057: RSP at order 1 (2). 121123-092500: RSP at order 1. 121123-091716: Today's topics. 121123-091132: Riddle along. 121120-100237: Proof of Fuchs' theorem (3). 121120-095532: Proof of Fuchs' theorem (2). 121120-094617: Proof of Fuchs' theorem. 121120-093509: Fuchs' theorem. 121120-092913: The Airy equation by power series (4). 121120-091425: The Airy equation by power series (3). 121119-100238: The Airy equation by power series (2). 121119-095656: The Airy equation by power series. 121119-095030: Examples for functions given by a formula (2). 121119-094324: Examples for functions given by a formula. 121119-093656: On functions given by a formula. 121119-093305: The radius of convergence of a series (2). 121119-092529: The radius of convergence of a series. 121119-092128: $\pi$ is irrational. 121119-091004: A proposition by Samer Seraj. 121116-100323: A bit about convergence of series (2).
121116-095425: A bit about convergence of series. 121116-094936: Solving using power series (2). 121116-094011: Solving using power series. 121116-092615: Power series - motivation. 121116-091938: Wronskians and $\cos^2 x + \sin^2 x$. 121116-091126: Riddle Along. 121109-100032: The case of 2nd order linear ODEs. 121109-095425: Differentiating the Wronskian. 121109-094907: Differentiating derivatives (3). 121109-094707: Differentiating derivatives (2). 121109-094123: Differentiating derivatives. 121109-093513: The Wronskian. 121109-093219: Global existence for linear systems (2). 121109-092640: Global existence for linear systems. 121109-091529: Claims and Debts of systems of ODEs. 121106-215845: Challenges. 121106-215829: A differential equation for the generating function of the $A_n$ (2). 121106-215243: A differential equation for the generating function of the $A_n$. 121106-215031: A recursion for $A_n$. 121106-214712: The generating function of $C_n$ (2). 121106-214107: The generating function of $C_n$. 121106-213547: A recursive formula for the Catalan numbers $C_n$. 121106-212459: $A_n$ and $C_n$. 121106-212016: Debts on systems. 121106-210901: Riddles Along. 121105-110135: Proof of the invertibility claim (2). 121105-105750: Proof of the invertibility claim. 121105-105456: The non-homogeneous case using a Fundamental Matrix (4). 121105-105104: The non-homogeneous case using a Fundamental Matrix (3). 121105-104645: The non-homogeneous case using a Fundamental Matrix (2). 121105-104144: The non-homogeneous case using a Fundamental Matrix. 121105-103615: The non-homogeneous case by diagonalization (5). 121105-103301: The non-homogeneous case by diagonalization (4). 121105-102738: The non-homogeneous case by diagonalization (3). 121105-102024: The non-homogeneous case by diagonalization (2). 121105-101538: The non-homogeneous case by diagonalization. 121030-095809: Example with a repeated eigenvalue (2). 121030-095415: Example with a repeated eigenvalue. 121030-095102: Exponentiating a Jordan block. 121030-094321: The Jordan form theorem (2). 121030-093500: The Jordan form theorem. 121030-093236: Example with distinct eigenvalues (3). 121030-092306: Example with distinct eigenvalues (2). 121030-091915: Example with distinct eigenvalues. 121030-091559: Reminders. 121030-091036: Announcements. 121029-100048: Properties of matrix exponentiation (6). 121029-095744: Properties of matrix exponentiation (5). 121029-095326: Properties of matrix exponentiation (4). 121029-094547: Properties of matrix exponentiation (3). 121029-094025: Properties of matrix exponentiation (2). 121029-093558: Properties of matrix exponentiation. 121029-092858: Convergence. 121029-091928: Exponentiation via the Taylor series. 121029-091107: Announcements. 121023-100040: Matrix exponentiation (2). 121023-095938: Matrix exponentiation. 121023-095149: A baby version. 121023-094809: Systems of linear equations. 121023-093831: Undetermined coefficients (4). 121023-092351: Undetermined coefficients (3). 121023-091741: Undetermined coefficients (2). 121023-091718: Pre-exam office hours. 121022-100052: Undetermined coefficients. 121022-095206: Reduction of order. 121022-094433: Multiple roots (5). 121022-093955: Multiple roots (4). 121022-093804: Multiple roots (3). 121022-093504: An aside on the Leibniz rule for higher derivatives. 121022-092921: Multiple roots (2). 121022-092104: Multiple roots. 121022-091332: The case of distinct roots. 121022-090631: TT, Read Along, Riddle Along. 121019-095816: From complex back to real.
121019-095806: Distinct real roots, complex root. 121019-094930: Differential operator language. 121019-094408: The guessing method. 121019-094033: Constant coefficients homogeneous high order ODEs (2). 121019-094024: Constant coefficients homogeneous high order ODEs. 121019-093128: Numerical Integration (3). 121019-093058: Numerical Integration (2). 121019-091840: Numerical Integration. 121016-093936: Runge-Kutta. 121016-093634: A general scheme. 121016-093057: Local analysis of improved Euler (2). 121016-092401: Local analysis of improved Euler. 121016-091448: Euler and improved Euler. 121016-090737: Term test info and riddle. 121012-095603: Numerical methods, starting from the silly (3). 121012-095555: Numerical methods, starting from the silly (2). 121012-094800: Numerical methods, starting from the silly. 121012-093338: Lagrange multipliers in CoV. 121012-092651: The Lagrange Multipliers Theorem (3). 121012-092050: The Lagrange Multipliers Theorem (2). 121012-091426: The Lagrange Multipliers Theorem. 121012-090625: Read along and riddle along. 121009-095640: Directional derivatives. 121009-095113: The isoperimetric inequality (4). 121009-094819: The isoperimetric inequality (3). 121009-094131: The isoperimetric inequality (2). 121009-093631: Lagrange multipliers in ${\mathbb R}^2$ (4). 121009-093049: Lagrange multipliers in ${\mathbb R}^2$ (3). 121009-092523: Lagrange multipliers in ${\mathbb R}^2$ (2). 121009-092511: Lagrange multipliers in ${\mathbb R}^2$. 121009-091240: The isoperimetric inequality. 121005-094811: The brachistochrone, again. 121005-094413: Conservation of energy (2). 121005-094106: Conservation of energy. 121005-092726: Conservation of momentum. 121005-092109: Reminder of Euler-Lagrange. 121005-090823: Notes and riddles. 121002-103950: Properly writing Euler-Lagrange and the brachistochrone. 121002-103247: $F=ma$ (2). 121002-102822: Deriving Euler-Lagrange (5), $F=ma$. 121002-102506: Deriving Euler-Lagrange (4). 121002-101904: Deriving Euler-Lagrange (3). 121002-101423: Deriving Euler-Lagrange (2). 121002-095520: Deriving Euler-Lagrange. 121002-095243: Example: The brachistochrone. 121002-094521: Example: Classical mechanics. 121002-094030: Example: Power lines. 121002-092726: The basic calculus of variations problem. 121002-092032: Back to the chain rule (2). 121002-091704: Back to the chain rule. 121002-090739: Today's riddle. 121001-095749: Calculus of variations (2). 121001-095740: Higher order equations, calculus of variations. 121001-094905: The fundamental theorem: higher order equations. 121001-094116: The fundamental theorem: systems (2). 121001-093509: The fundamental theorem: systems. 121001-093010: The fundamental theorem: uniqueness (2). 121001-092946: The fundamental theorem: uniqueness. 121001-091541: Review of the fundamental theorem. 121001-090909: Computing $(x^x)'$. 120928-095846: The Fundamental Theorem: Uniform Convergence. 120928-094804: The Fundamental Theorem: $\phi_n-\phi_{n-1}$ is well-bounded (2). 120928-094102: The Fundamental Theorem: $\phi_n-\phi_{n-1}$ is well-bounded. 120928-093027: The Fundamental Theorem: $\phi_n$ is well-defined. 120928-092238: The Fundamental Theorem: the $y'=y$ example. 120928-091714: The Fundamental Theorem: Statement. 120925-095319: The Fundamental Theorem (3). 120925-094906: The Fundamental Theorem (2). 120925-094303: The Fundamental Theorem. 120925-093754: The Lipschitz Condition. 120925-092443: Wishful thinking (3). 120925-092029: Wishful thinking (2). 120925-091230: Wishful thinking. 
120925-090721: Riddle Along. 120924-095914: Integrating factors (3). 120924-095800: Integrating factors (2). 120924-095319: Integrating factors. 120924-095019: Exact equations (5). 120924-094422: Exact equations (4). 120924-093936: Exact equations (3). 120924-093412: Exact equations (2). 120924-092906: Exact equations. 120924-092111: Partial derivatives commute (2). 120924-091545: Partial derivatives commute. 120924-090840: Show and tell (2). 120924-090831: Show and tell. 120921-095745: Notes for September 21 (6). 120921-095252: Notes for September 21 (5). 120921-095245: Notes for September 21 (4). 120921-093121: Notes for September 21 (3). 120921-092710: Notes for September 21 (2). 120921-092353: Notes for September 21. 120918-095852: Homogeneous Equations (2). 120918-095416: Homogeneous Equations. 120918-094717: Autonomous Equations (2). 120918-094527: Autonomous Equations. 120918-093915: Changing source and target coordinates (2). 120918-093421: Changing source and target coordinates. 120918-091850: Escape Velocities (2). 120918-091728: Escape Velocities. 120918-090713: Riddle Along. 120917-095813: Escape velocities (2). 120917-095323: Escape velocities. 120917-094437: Separable equations: the easy to justify way (2). 120917-093948: Separable equations: the easy to justify way. 120917-093027: Separable equations: the easy to remember way. 120917-091853: The general problem, separable equations. 120914-095607: First order linear, non-homgeneous (5). 120914-095435: First order linear, non-homgeneous (4). 120914-095224: First order linear, non-homgeneous (3). 120914-094700: First order linear, non-homgeneous (2). 120914-093953: First order linear, non-homgeneous. 120914-093309: First order linear homogeneous (2). 120914-092628: First order linear homogeneous. 120914-091936: $y'=f$ and first order linear homogeneous. 120914-090342: Read along and riddle along. 120911-094639: This is a cycloid (2). 120911-093941: This is a cycloid. 120911-093629: Solving the equation (2). 120911-093043: Solving the equation. 120911-092453: Brachistochrone review. 120910-100221: Deriving the brachistochrone equation (2). 120910-095542: Deriving the brachistochrone equation. 120910-094435: Fermat's principle and Snell's law. 120910-093559: The Brachistochrone problem. 120910-092814: A messy example. 120910-092025: What's a differential equation?
# Help with series math problem

1. Dec 24, 2004

I'm just not so sure on how to approach this problem. Well, here it goes:
$$\sum _{n=1} ^{\infty} \left[ \tan ^{-1} (n+1) - \tan ^{-1} (n) \right] = \frac{\pi}{2}$$
I know that
$$\tan ^{-1} x = \sum _{n=0} ^{\infty} \left( -1 \right) ^n \frac{x^{2n+1}}{2n+1}$$
but I don't know if it can be useful to get to the answer above. I just need some tips. Any help is highly appreciated.

Last edited: Dec 24, 2004

2. Dec 24, 2004

### Hurkyl

Staff Emeritus

Try writing out the first few terms.

P.S. I get $\pi/4$.

3. Dec 24, 2004

Oh... I see. By the way, you're right about the $\pi/4$.
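For the record, the hint works because the sum telescopes: the $N$-th partial sum collapses to

$$\sum_{n=1}^{N} \left[ \tan^{-1}(n+1) - \tan^{-1}(n) \right] = \tan^{-1}(N+1) - \tan^{-1}(1) \;\longrightarrow\; \frac{\pi}{2} - \frac{\pi}{4} = \frac{\pi}{4} \quad \text{as } N \to \infty,$$

so the series converges to $\pi/4$, not $\pi/2$ as stated in the original post.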
# Differential map between smooth manifolds is smooth

Given a smooth map $f:M\to N$ between smooth manifolds, how do you show that the differential map $df:TM\to TN$ is smooth?

Express it locally, as usual. –  Mariano Suárez-Alvarez Mar 26 '12 at 16:58
I'm confused as to how. I have local charts for $TM$ and $TN$ but I don't know what to do with them. –  09867 Mar 26 '12 at 17:16
If $x$ is a chart for $M$, $y$ one for $N$, then you have $Tx$ a chart for $TM$, $Ty$ a chart for $TN$. Untangle the definitions and write down $Ty \circ df \circ (Tx)^{-1}$. –  Blah Mar 26 '12 at 20:07
All this chart business is confusing me. Could anyone just write a specific proof so that I can see exactly what goes on? –  09867 Mar 27 '12 at 15:15

This is proved in Proposition $3.21$ of Lee's Introduction to Smooth Manifolds (second edition), for example. If $v \in TM$, then choosing a coordinate chart $(U, \varphi)$ containing $p = \pi(v)$, where $\pi : TM \to M$ is the projection, one obtains a coordinate neighbourhood $(\pi^{-1}(U), \widetilde{\varphi})$ containing $v$. More precisely, in local coordinates, $$v = \left.v^1\frac{\partial}{\partial x^1}\right|_p + \dots + \left.v^m\frac{\partial}{\partial x^m}\right|_p$$ and $\widetilde{\varphi}(v) = (x^1(p), \dots, x^m(p), v^1, \dots, v^m)$. Once you have such charts, the local expression for $df$ becomes $$df(x^1, \dots, x^m, v^1, \dots, v^m) = \left(f^1(x), \dots, f^n(x), \frac{\partial f^1}{\partial x^i}(x)v^i, \dots, \frac{\partial f^n}{\partial x^i}(x)v^i\right).$$ As $f$ is smooth, all the components of $df$ (in these local coordinates) are smooth, and therefore $df : TM \to TN$ is smooth.
# Modeling the Math

I used Excel to enter the power calculations on a simulated data set (voltage and current). I later wanted something more responsive to what-ifs and designed for mathematical computation. I stumbled on a MATLAB-like open source tool called GNU Octave. I decided to download and install the tool to enter basic power calculations. What I like about this tool, as with MATLAB, is the ability to manipulate matrices and vectors. I can set up calculations for my power without resorting to annoying for loops. With my short attention span, I liked this.

Earlier I described the basic math and reduced the following salient equations:

$P_{avg}=\frac{1}{T}\int^{t_0+T}_{t_0}p(t)\,dt\qquad(1)$

$I_{equiv}=I_{rms}=\sqrt{\frac{1}{T}\int^{t_0+T}_{t_0}i^2(t)\,dt} \qquad(2)$

$P_{avg} = V_{rms}I_{rms}\cos(\Theta_v-\Theta_i) \qquad(3)$

For giggles, I wanted to use Octave to calculate and plot power curves. I know from sampling theory that we need to sample at least at the Nyquist rate (twice the highest frequency present) to be able to reconstruct the signal. The reality is we don't live in an ideal world with perfect filters. More about sampling rate later.

I created functions in Octave to generate a waveform. I can also import a text file with data values and compute the various types of powers as well. I wanted to test a couple of sunny day scenarios to ensure that my calculations were correct. I took two approaches. In one, I actually defined the function and let Octave integrate it. In the other, I sampled the function like I would in the software. Both yielded the same results. The table below outlines the expected and actual results.
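For readers without Octave handy, here is a rough NumPy equivalent of the sampled-data approach. This is a sketch of my own, not the original Octave code; the 60 Hz signal, the 30-degree phase lag and the sample rate are made-up test values:

```python
import numpy as np

f, fs = 60.0, 10_000.0                            # test signal frequency and sample rate (assumed)
t = np.arange(0.0, 1.0 / f, 1.0 / fs)             # exactly one period, T = 1/f
v = 170.0 * np.sin(2 * np.pi * f * t)             # voltage samples
i = 10.0 * np.sin(2 * np.pi * f * t - np.pi / 6)  # current lagging voltage by 30 degrees

p_avg = np.mean(v * i)                            # discrete version of equation (1)
v_rms = np.sqrt(np.mean(v ** 2))                  # discrete version of equation (2), for voltage
i_rms = np.sqrt(np.mean(i ** 2))                  # ... and for current

# Equation (3) should reproduce the sampled average power:
print(p_avg, v_rms * i_rms * np.cos(np.pi / 6))
```

Both printed values should agree (about 736 W for these test amplitudes), mirroring the sunny day checks described above.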
# Audio Classification with Deep Learning

This is a tutorial for conducting auditory classification within a Gradient Notebook using TensorFlow. Readers can expect to learn about the essential basic concepts of signal processing and some of the best techniques for audio classification to achieve the desired outcomes.

Want to run the code in this article? Follow this link to run this code in a Gradient Notebook, or create your own by using this repo as the Notebook's "Workspace URL" in the advanced options of the Notebook creation page.

Visuals and sounds are two of the more common things that humans perceive. Both of these senses seem quite trivial for most people to analyze and develop an intuitive understanding of. Just as with problems related to natural language processing (NLP), these tasks are straightforward enough for humans to deal with, but the same cannot be said for machines, which have struggled to achieve desirable results in the past. However, with the introduction and rise of deep learning models and architectures over the last decade, we have been able to tackle complex computations and projects with a much higher success rate.

## Introduction:

Audio classification is the process of analyzing and identifying any type of audio, sound, noise, musical notes, or other similar data in order to classify it accordingly. The audio data available to us can occur in numerous forms, such as sound from acoustic devices, musical chords from instruments, human speech, or even naturally occurring sounds like the chirping of birds in the environment. Modern deep learning techniques allow us to achieve state-of-the-art results for tasks and projects related to audio signal processing.

In this article, our primary objective is to gain a definitive understanding of the audio classification project while learning about the essential basic concepts of signal processing and some of the best techniques utilized to achieve the desired outcomes. Before diving into the contents of this article, I would first recommend getting more familiar with deep learning frameworks and other essential basic concepts. You can check out more information on the TensorFlow (link) and Keras (link) libraries, which we will utilize for the construction of this project. Let us understand some of the basic concepts of audio classification.

## Exploring the Basic Concepts of Audio Classification:

In this section of the article, we will try to understand some of the useful terms that are essential for understanding audio classification with deep learning. We will explore some of the basic terminologies that we may come across during our work on audio processing projects. Let us get started by analyzing some of these key concepts in brief.

### Waveform:

Before we analyze the waveform and its numerous parameters, let us understand what sound is. Sound is the vibration produced by an object when the air particles around it oscillate; the corresponding changes in air pressure create sound waves. Sound is a mechanical wave in which energy is transferred from one source to another. A waveform is a schematic representation that helps us to analyze the displacement of sound waves over time, along with some of the other essential parameters that are required for a specific task. The frequency of a waveform is the number of times that the waveform repeats itself within a one-second time period.
The peak of the waveform at the top is called a crest, whereas the bottom point is called the trough. Amplitude is the distance from the center line to the top of a crest or to the bottom of a trough. With a brief understanding and grasp of these basic concepts, we can proceed to visit some of the other essential topics required for audio classification.

### Spectrograms:

Spectrograms are visual representations of the spectrum of frequencies in an audio signal. Other technical terms for spectrograms are sonographs, voiceprints, or voicegrams. Spectrograms are used extensively in the fields of signal processing, music generation, audio classification, linguistic analysis, speech detection, and so much more. We will also use spectrograms in this article for the task of audio classification. For further information on this topic, I would recommend checking out the following link.

### Audio Signal Processing:

Audio signal processing is the field that deals with audio signals, sound waves, and other manipulations of audio frequencies. When talking specifically about deep learning for audio signal processing, there are numerous applications that we can work on in this large field. In this article, we will cover the topic of audio classification in greater detail. Some of the other major applications include speech recognition, audio denoising, sound information retrieval, music generation, and so much more. The combination of deep learning and audio signal processing opens up numerous possibilities and is worth exploring. Let us proceed to understand the audio classification project in the next section before moving on to its implementation from scratch.

## Understanding the Audio Classification Project:

Audio classification is one of the best basic introductory projects for getting started with audio deep learning. The objective is to understand the waveforms that are available in raw format and convert the existing data into a form that is usable by developers. By converting the raw waveform of the audio data into spectrograms, we can pass it through deep learning models to interpret and analyze the data. In audio classification, we normally perform a binary classification in which we determine if the input signal is our desired audio or not.

In this project, our objective is to detect an incoming sound made by a bird. The incoming noise signal is converted into a waveform that we can utilize for further processing and analysis with the help of the TensorFlow deep learning framework. Once the waveform is obtained successfully, we can proceed to convert it into a spectrogram, which is a visual representation of the available waveform. Since these spectrograms are visual images, we can make use of convolutional neural networks to analyze them accordingly by creating a deep learning model to compute a binary classification result.

Bring this project to life

## Implementation of the Audio Classification and Recognition project with Deep Learning:

As discussed previously, the objective of our project is to read the incoming sounds from a forest and interpret whether the received data belongs to a specific bird (the Capuchin bird) or is some other noise that we are not really interested in acknowledging. For the construction of this entire project, we will make use of the TensorFlow and Keras deep learning frameworks. You can check out the following article to learn more about TensorFlow, and the Keras article here.
The other additional installation required for this project is the TensorFlow I/O library, which will grant us access to file systems and file formats that are not available in TensorFlow's built-in support. The pip command provided below can be used to install the library in your working environment.

pip install tensorflow-io[tensorflow]

### Importing the essential libraries:

In the next step, we will import all the essential libraries required for the construction of the following project. For this project, we will use a Sequential type model that will allow us to construct a simple convolutional neural network to analyze the spectrograms produced and achieve a desirable result. Since the architecture of the model developed is quite simple, we do not really need to make use of the functional model API or the custom modeling functionality. We will make use of convolutional layers for the architecture as well as some Dense and Flatten layers.

As discussed earlier, we will also utilize the TensorFlow input/output library for handling a larger number of file systems and formats, such as the .wav and .mp3 formats. The operating system (os) library import will help us access all the required files in their respective formats.

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, Dense, Flatten
import tensorflow_io as tfio
from matplotlib import pyplot as plt
import os

The dataset for this project is obtainable through the Kaggle Challenge for Signal Processing - Z by HP Unlocked Challenge 3, which you can download from this link.

1. Get a Kaggle account.
2. Create an API token by going to your Account settings, and save kaggle.json. Note: you may need to create a new API token if you have already created one.
3. Either run the cell below or run the following commands in a terminal (this may take a while).

Terminal:

mv kaggle.json ~/.kaggle/
pip install kaggle
unzip z-by-hp-unlocked-challenge-3-signal-processing.zip

Cell:

!mv kaggle.json ~/.kaggle/
!pip install kaggle
!unzip z-by-hp-unlocked-challenge-3-signal-processing.zip

Once the dataset is downloaded and extracted, we notice three directories in the data folder: three-minute clips of the sounds produced in the forest, three-second clips of Capuchin bird recordings, and three-second clips of sounds not produced by Capuchin birds. In the next code snippet, we will define variables to set these paths accordingly.

CAPUCHIN_FILE = os.path.join('data', 'Parsed_Capuchinbird_Clips', 'XC3776-3.wav')
NOT_CAPUCHIN_FILE = os.path.join('data', 'Parsed_Not_Capuchinbird_Clips', 'afternoon-birds-song-in-forest-0.wav')

In the next step, we will define the data loading function that will be useful for creating the required waveforms in the desired format for further computation. The function defined in the code snippet below will allow us to read the data and convert it into a mono (single) channel for easier analysis. We will also resample the signal to a lower sample rate, giving us smaller data samples for the overall analysis.
def load_wav_16k_mono(filename):
    # Load the encoded wav file
    file_contents = tf.io.read_file(filename)
    # Decode wav (tensors by channels)
    wav, sample_rate = tf.audio.decode_wav(file_contents, desired_channels=1)
    # Removes trailing axis
    wav = tf.squeeze(wav, axis=-1)
    sample_rate = tf.cast(sample_rate, dtype=tf.int64)
    # Resamples the signal from 44100 Hz down to 16000 Hz
    wav = tfio.audio.resample(wav, rate_in=sample_rate, rate_out=16000)
    return wav

The above image represents the waveform plot of the Capuchin and non-Capuchin signals.

### Preparing the dataset:

In this section of the article, we will define the positive and negative paths for the Capuchin bird clips. The positive paths variable stores the path to the directory containing the clip recordings of the Capuchin birds, while the negative paths are stored in another variable. We will link the files in these directories to the .wav format and add their respective labels. The labels are for binary classification and are either 0 or 1. The positive labels are assigned a value of one, which means the clip contains the audio signal of a Capuchin bird. The negative labels, with zeros, indicate that the audio signals are random noises that do not contain clip recordings of Capuchin birds.

# Defining the positive and negative paths
POS = os.path.join('data', 'Parsed_Capuchinbird_Clips/*.wav')
NEG = os.path.join('data', 'Parsed_Not_Capuchinbird_Clips/*.wav')

# Creating the Datasets
pos = tf.data.Dataset.list_files(POS)
neg = tf.data.Dataset.list_files(NEG)

positives = tf.data.Dataset.zip((pos, tf.data.Dataset.from_tensor_slices(tf.ones(len(pos)))))
negatives = tf.data.Dataset.zip((neg, tf.data.Dataset.from_tensor_slices(tf.zeros(len(neg)))))
data = positives.concatenate(negatives)

# Analyzing the average wavelength of a Capuchin bird
lengths = []
for file in os.listdir(os.path.join('data', 'Parsed_Capuchinbird_Clips')):
    tensor_wave = load_wav_16k_mono(os.path.join('data', 'Parsed_Capuchinbird_Clips', file))
    lengths.append(len(tensor_wave))

The minimum, mean, and maximum waveform lengths (in samples at 16 kHz), respectively, are provided below.

<tf.Tensor: shape=(), dtype=int32, numpy=32000>
<tf.Tensor: shape=(), dtype=int32, numpy=54156>
<tf.Tensor: shape=(), dtype=int32, numpy=80000>

### Converting Data to Spectrograms:

In the next step, we will create the function to complete the pre-processing steps required for audio analysis. We will convert the previously acquired waveforms into spectrograms. These visualized audio signals in the form of spectrograms will be used by our deep learning model in the upcoming steps to analyze and interpret the results accordingly. In the code block below, we load each waveform, trim or zero-pad it to a fixed length of 48000 samples, and compute the Short-time Fourier Transform of the signal with the TensorFlow library to acquire a visual representation, as shown in the image provided below.

def preprocess(file_path, label):
    # Load the clip and fix its length to 48000 samples (3 seconds at 16 kHz)
    wav = load_wav_16k_mono(file_path)
    wav = wav[:48000]
    zero_padding = tf.zeros([48000] - tf.shape(wav), dtype=tf.float32)
    wav = tf.concat([zero_padding, wav], 0)
    spectrogram = tf.signal.stft(wav, frame_length=320, frame_step=32)
    spectrogram = tf.abs(spectrogram)
    spectrogram = tf.expand_dims(spectrogram, axis=2)
    return spectrogram, label

filepath, label = positives.shuffle(buffer_size=10000).as_numpy_iterator().next()
spectrogram, label = preprocess(filepath, label)

### Building the deep learning Model:

Before we start constructing the deep learning model, let us create the data pipeline by loading the data. We will load in the spectrogram data elements that are obtained from the pre-processing step function.
We can cache and shuffle this data using TensorFlow's built-in functionality, as well as create a batch size of sixteen to load the data elements accordingly. Before we proceed to construct the deep learning model, we can create partitions for the training and testing samples, as shown in the below code snippet.

# Creating a Tensorflow Data Pipeline
data = data.map(preprocess)
data = data.cache()
data = data.shuffle(buffer_size=1000)
data = data.batch(16)
data = data.prefetch(8)

# Split into Training and Testing Partitions
train = data.take(36)
test = data.skip(36).take(15)

In the next step, we will build a Sequential type model. (We could equally develop the architecture with the functional API or a custom model, but that is not needed for a task this simple.) We then add two blocks of convolutional layers, each with sixteen filters and a kernel size of (3, 3), with the first layer specifying the input shape of the spectrogram samples. The ReLU activation function is utilized in the construction of the convolutional layers. We can then proceed to flatten the acquired output from the convolutional layers to make it suitable for further processing. Finally, we add the fully connected layer with the Sigmoid activation function and one output node to receive the binary classification output. The code snippet and the summary of the model constructed are shown below.

model = Sequential()
model.add(Conv2D(16, (3, 3), activation='relu', input_shape=(1491, 257, 1)))
model.add(Conv2D(16, (3, 3), activation='relu'))
model.add(Flatten())
model.add(Dense(1, activation='sigmoid'))

model.summary()

Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d (Conv2D)              (None, 1489, 255, 16)     160
conv2d_1 (Conv2D)            (None, 1487, 253, 16)     2320
flatten (Flatten)            (None, 6019376)           0
dense (Dense)                (None, 1)                 6019377
=================================================================
Total params: 6,021,857
Trainable params: 6,021,857
Non-trainable params: 0
_________________________________________________________________

Once we have completed building the model architecture, we can proceed to compile and train it. For the compilation of the model, we use the Adam optimizer, the binary cross-entropy loss function for the binary classification, and additional recall and precision metrics for the model analysis. We then fit the model on the training data, with the test data for validation, for a few epochs. The code snippet and the results obtained after this step are shown below.

# Compiling and fitting the model
model.compile('Adam', loss='BinaryCrossentropy',
              metrics=[tf.keras.metrics.Recall(), tf.keras.metrics.Precision()])
model.fit(train, epochs=4, validation_data=test)

Epoch 1/4
36/36 [==============================] - 204s 6s/step - loss: 1.6965 - recall: 0.8367 - precision: 0.8483 - val_loss: 0.0860 - val_recall: 0.9254 - val_precision: 0.9688
Epoch 2/4
36/36 [==============================] - 200s 6s/step - loss: 0.0494 - recall: 0.9477 - precision: 0.9932 - val_loss: 0.0365 - val_recall: 1.0000 - val_precision: 0.9846
Epoch 3/4
36/36 [==============================] - 201s 6s/step - loss: 0.0314 - recall: 0.9933 - precision: 0.9801 - val_loss: 0.0228 - val_recall: 0.9821 - val_precision: 1.0000
Epoch 4/4
36/36 [==============================] - 201s 6s/step - loss: 0.0126 - recall: 0.9870 - precision: 1.0000 - val_loss: 0.0054 - val_recall: 1.0000 - val_precision: 0.9861

Once we have constructed the model and trained it successfully, we can analyze and validate the results. The metrics obtained in the results show good progress.
And hence, we can deem the constructed model suitable for making relatively successful predictions on the bird calls to identify the noise frequency of the Capuchin birds. In the next section, we will look into the steps for this procedure.

### Making the required Predictions:

In the final step of this project, we will analyze how to make the appropriate predictions on all the existing files in the forest recordings. Before that step, let us look at how to make a prediction on a single batch, as shown in the code snippet below.

# Prediction for a single batch
X_test, y_test = test.as_numpy_iterator().next()
yhat = model.predict(X_test)

# converting logits to classes
yhat = [1 if prediction > 0.5 else 0 for prediction in yhat]

Now that we have looked at how to make predictions for a single batch, it is essential to note how we can make predictions on the files present in the forest recordings directory. Each of the clips in the forest recordings is about three minutes long. Since our predictions are made on three-second clips for detecting the Capuchin bird calls, we can segment these longer clips into windowed spectrums. We can divide the three-minute-long clips (180 seconds) into sixty smaller fragments to perform the analysis. We will detect the total Capuchin bird calls in this section, where each window receives a score of zero or one. Once we determine the calls for every windowed spectrum, we can compute the total count for the entire clip by adding all the individual values. The total count tells us the number of times a Capuchin bird sound was heard throughout the audio clip.

In the code snippet below, we will build a function similar to the one discussed in the previous section, except that the forest recording clips are in the form of .mp3 files as opposed to the .wav format. The function below takes the mp3 input and converts it into a tensor. We then compute the average of the multi-channel input to convert it into a mono channel and obtain the desired frequency signal.

def load_mp3_16k_mono(filename):
    """ Load an audio file, convert it to a float tensor, and resample to 16 kHz single-channel audio. """
    res = tfio.audio.AudioIOTensor(filename)
    # Convert to tensor and combine the two channels into one by averaging
    tensor = res.to_tensor()
    tensor = tf.math.reduce_sum(tensor, axis=1) / 2
    # Extract sample rate and cast
    sample_rate = res.rate
    sample_rate = tf.cast(sample_rate, dtype=tf.int64)
    # Resample to 16 kHz
    wav = tfio.audio.resample(tensor, rate_in=sample_rate, rate_out=16000)
    return wav

mp3 = os.path.join('data', 'Forest Recordings', 'recording_00.mp3')
wav = load_mp3_16k_mono(mp3)

audio_slices = tf.keras.utils.timeseries_dataset_from_array(wav, wav, sequence_length=48000, sequence_stride=48000, batch_size=1)
samples, index = audio_slices.as_numpy_iterator().next()

In the next code snippet, we will construct a function that will help us segregate the individual fragments into windowed spectrograms for further computation. We will map the data accordingly and create the appropriate slices for making the required predictions, as shown below.
# Build Function to Convert Clips into Windowed Spectrograms
def preprocess_mp3(sample, index):
    sample = sample[0]
    zero_padding = tf.zeros([48000] - tf.shape(sample), dtype=tf.float32)
    wav = tf.concat([zero_padding, sample], 0)
    spectrogram = tf.signal.stft(wav, frame_length=320, frame_step=32)
    spectrogram = tf.abs(spectrogram)
    spectrogram = tf.expand_dims(spectrogram, axis=2)
    return spectrogram

audio_slices = tf.keras.utils.timeseries_dataset_from_array(wav, wav, sequence_length=48000, sequence_stride=48000, batch_size=1)
audio_slices = audio_slices.map(preprocess_mp3)
audio_slices = audio_slices.batch(64)

yhat = model.predict(audio_slices)
yhat = [1 if prediction > 0.5 else 0 for prediction in yhat]

In the final code snippet of this article, we will run the above process for all the files in the forest recordings and obtain a total computed result. The results will contain, for each clip, a list of zeros and ones, where the total of the ones is output to compute the overall score of the clip. We can find out the total number of Capuchin bird calls in the audio recordings, as required in this project, with the code provided below.

results = {}
class_preds = {}
for file in os.listdir(os.path.join('data', 'Forest Recordings')):
    FILEPATH = os.path.join('data', 'Forest Recordings', file)

    wav = load_mp3_16k_mono(FILEPATH)
    audio_slices = tf.keras.utils.timeseries_dataset_from_array(wav, wav, sequence_length=48000, sequence_stride=48000, batch_size=1)
    audio_slices = audio_slices.map(preprocess_mp3)
    audio_slices = audio_slices.batch(64)

    yhat = model.predict(audio_slices)
    results[file] = yhat

for file, logits in results.items():
    class_preds[file] = [1 if prediction > 0.99 else 0 for prediction in logits]
class_preds

The two primary references for this project are the notebook from Kaggle and the following GitHub link. Most of the code is taken from these references, and I would highly recommend checking them out. The code for this blog post is also hosted on Gradient AI. Create a Notebook with this URL as the Workspace URL to load this code as an .ipynb directly into a Notebook.

There are several additional improvements that can be made to this project to achieve better results. The complexity of the network can be increased, and other innovative methods can be utilized to obtain more precision in the analysis of the Capuchin bird patterns. We will also look at some other projects related to audio signal processing in future articles.

## Conclusion:

Audio signal processing with deep learning has garnered high traction due to its high rate of success in interpreting and accomplishing a wide array of complex projects. Most complicated signaling projects, such as acoustic music detection, audio classification, environmental sound classification, and so much more, can be achieved through deep learning techniques. With further improvements and advancements in these fields, we can expect greater feats of accomplishment.

In this article, we were introduced to audio classification with deep learning. We explored and analyzed some of the basic and essential components required to thoroughly understand the concept of audio classification. We then had a brief summary of the particular project of this blog before proceeding to the implementation section of the task. We made use of the TensorFlow framework for the conversion of waveforms, used the spectrograms for analysis, and constructed a simple convolutional neural network capable of binary classification of audio data. There are several improvements that could be added to this project to achieve better results.
In the upcoming articles, we will look at more intriguing projects related to audio signal processing with deep learning. We will also analyze some music generation projects and continue our work with Generative adversarial networks and neural networks from scratch. Until then, have fun exploring and building new projects!
## G = D4.Dic6, order 192 = 2^6·3

### 1st non-split extension by D4 of Dic6 acting via Dic6/Dic3=C2

Series:

Derived series: C1 — C2×C12 — D4.Dic6
Chief series: C1 — C3 — C6 — C2×C6 — C2×C12 — C4×Dic3 — D4×Dic3 — D4.Dic6
Lower central: C3 — C6 — C2×C12 — D4.Dic6
Upper central: C1 — C22 — C2×C4 — D4⋊C4

Generators and relations for D4.Dic6

G = < a,b,c,d | a^4=b^2=c^12=1, d^2=c^6, bab=cac^-1=a^-1, ad=da, cbc^-1=a^-1b, bd=db, dcd^-1=a^2c^-1 >

Subgroups: 280 in 102 conjugacy classes, 41 normal (37 characteristic) C1, C2, C2, C3, C4, C4, C22, C22, C6, C6, C8, C2×C4, C2×C4, D4, D4, C23, Dic3, C12, C12, C2×C6, C2×C6, C42, C22⋊C4, C4⋊C4, C4⋊C4, C2×C8, C2×C8, C22×C4, C2×D4, C3⋊C8, C24, C2×Dic3, C2×Dic3, C2×C12, C2×C12, C3×D4, C3×D4, C22×C6, D4⋊C4, D4⋊C4, C4⋊C8, C4.Q8, C2.D8, C4×D4, C42.C2, C2×C3⋊C8, C4×Dic3, Dic3⋊C4, C4⋊Dic3, C4⋊Dic3, C6.D4, C3×C4⋊C4, C2×C24, C22×Dic3, C6×D4, D4.Q8, C6.Q16, Dic3⋊C8, C8⋊Dic3, D4⋊Dic3, C3×D4⋊C4, C4.Dic6, D4×Dic3, D4.Dic6

Quotients: C1, C2, C22, S3, D4, Q8, C23, D6, C2×D4, C2×Q8, C4○D4, Dic6, C22×S3, C22⋊Q8, C4○D8, C8⋊C22, C2×Dic6, S3×D4, D4⋊2S3, D4.Q8, Dic3.D4, D8⋊S3, Q8.7D6, D4.Dic6

Smallest permutation representation of D4.Dic6

On 96 points

Generators in S96

(1 17 25 74)(2 75 26 18)(3 19 27 76)(4 77 28 20)(5 21 29 78)(6 79 30 22)(7 23 31 80)(8 81 32 24)(9 13 33 82)(10 83 34 14)(11 15 35 84)(12 73 36 16)(37 55 88 68)(38 69 89 56)(39 57 90 70)(40 71 91 58)(41 59 92 72)(42 61 93 60)(43 49 94 62)(44 63 95 50)(45 51 96 64)(46 65 85 52)(47 53 86 66)(48 67 87 54)

(1 80)(2 8)(3 82)(4 10)(5 84)(6 12)(7 74)(9 76)(11 78)(13 27)(14 77)(15 29)(16 79)(17 31)(18 81)(19 33)(20 83)(21 35)(22 73)(23 25)(24 75)(26 32)(28 34)(30 36)(37 94)(38 63)(39 96)(40 65)(41 86)(42 67)(43 88)(44 69)(45 90)(46 71)(47 92)(48 61)(49 55)(50 89)(51 57)(52 91)(53 59)(54 93)(56 95)(58 85)(60 87)(62 68)(64 70)(66 72)

(1 2 3 4 5 6 7 8 9 10 11 12)(13 14 15 16 17 18 19 20 21 22 23 24)(25 26 27 28 29 30 31 32 33 34 35 36)(37 38 39 40 41 42 43 44 45 46 47 48)(49 50 51 52 53 54 55 56 57 58 59 60)(61 62 63 64 65 66 67 68 69 70 71 72)(73 74 75 76 77 78 79 80 81 82 83 84)(85 86 87 88 89 90 91 92 93 94 95 96)

(1 65 7 71)(2 51 8 57)(3 63 9 69)(4 49 10 55)(5 61 11 67)(6 59 12 53)(13 89 19 95)(14 37 20 43)(15 87 21 93)(16 47 22 41)(17 85 23 91)(18 45 24 39)(25 52 31 58)(26 64 32 70)(27 50 33 56)(28 62 34 68)(29 60 35 54)(30 72 36 66)(38 76 44 82)(40 74 46 80)(42 84 48 78)(73 86 79 92)(75 96 81 90)(77 94 83 88)
33 conjugacy classes

class | 1  2A 2B 2C 2D 2E  3  4A 4B 4C 4D 4E 4F 4G 4H 4I 6A 6B 6C 6D 6E 8A 8B 8C 8D 12A 12B 12C 12D 24A 24B 24C 24D
order | 1   2  2  2  2  2  3   4  4  4  4  4  4  4  4  4  6  6  6  6  6  8  8  8  8  12  12  12  12  24  24  24  24
size  | 1   1  1  1  4  4  2   2  2  6  6  8 12 12 12 24  2  2  2  8  8  4  4 12 12   4   4   8   8   4   4   4   4

33 irreducible representations

image    dim  type  kernel       # reps
C1        1    +    D4.Dic6        1
C2        1    +    C6.Q16         1
C2        1    +    Dic3⋊C8        1
C2        1    +    C8⋊Dic3        1
C2        1    +    D4⋊Dic3        1
C2        1    +    C3×D4⋊C4       1
C2        1    +    C4.Dic6        1
C2        1    +    D4×Dic3        1
S3        2    +    D4⋊C4          1
D4        2    +    C2×Dic3        2
Q8        2    -    C3×D4          2
D6        2    +    C4⋊C4          1
D6        2    +    C2×C8          1
D6        2    +    C2×D4          1
C4○D4     2         C12            2
Dic6      2    -    D4             4
C4○D8     2         C6             4
C8⋊C22    4    +    C6             1
D4⋊2S3    4    -    C4             1
S3×D4     4    +    C22            1
D8⋊S3     4         C2             2
Q8.7D6    4         C2             2
Matrix representation of D4.Dic6 in GL4(𝔽73) generated by:

a = [  1 71  0  0 ]   b = [ 72  2  0  0 ]   c = [ 61 12  0  0 ]   d = [ 27  0  0  0 ]
    [  1 72  0  0 ]       [  0  1  0  0 ]       [ 67 12  0  0 ]       [  0 27  0  0 ]
    [  0  0  1  0 ]       [  0  0 72  0 ]       [  0  0 66 66 ]       [  0  0 10 12 ]
    [  0  0  0  1 ]       [  0  0  0 72 ]       [  0  0  7 59 ]       [  0  0 22 63 ]

G:=sub<GL(4,GF(73))| [1,1,0,0,71,72,0,0,0,0,1,0,0,0,0,1],[72,0,0,0,2,1,0,0,0,0,72,0,0,0,0,72],[61,67,0,0,12,12,0,0,0,0,66,7,0,0,66,59],[27,0,0,0,0,27,0,0,0,0,10,22,0,0,12,63] >;

D4.Dic6 in GAP, Magma, Sage, TeX

D_4.{\rm Dic}_6 % in TeX

G:=Group("D4.Dic6"); // GroupNames label

G:=SmallGroup(192,322); // by ID

G=gap.SmallGroup(192,322); # by ID

G:=PCGroup([7,-2,-2,-2,-2,-2,-2,-3,56,926,219,58,851,438,102,6278]); // Polycyclic

G:=Group<a,b,c,d|a^4=b^2=c^12=1,d^2=c^6,b*a*b=c*a*c^-1=a^-1,a*d=d*a,c*b*c^-1=a^-1*b,b*d=d*b,d*c*d^-1=a^2*c^-1>; // generators/relations
#### johk

• Jr. Member
• Posts: 107

« on: August 04, 2008, 12:59:09 pm »

Hi,

I don't seem to be able to add any taxes. I get this error message (see below). Any suggestions on what could be wrong are much appreciated. I have tried both on localhost and on a "real" host.

Thanks

jonas

Quote

500 - An error has occurred.

JDatabaseMySQL::query: 1054 - Unknown column 'country_id' in 'field list' SQL=SELECT `country_id`, `country_name`, `country_2_code`, `country_3_code` FROM `nut_vm_tax_rate` WHERE `country_3_code` = 'AUS'

Call stack
# Function Location
## R Constant

$PV=nRT$

Leslie Almaraz 4G
Posts: 99
Joined: Fri Aug 02, 2019 12:16 am

### R Constant

How do you know which variation of the R constant to use?

Jacey Yang 1F
Posts: 101
Joined: Fri Aug 09, 2019 12:17 am

### Re: R Constant

It depends on the units for pressure and volume given in the problem.

Bryan Chen 1H
Posts: 58
Joined: Mon Jun 17, 2019 7:24 am

### Re: R Constant

On the equations sheet, look at the units for R and make sure they match up with those given in the problem, or those that you may convert to and use.

RRahimtoola1I
Posts: 102
Joined: Fri Aug 09, 2019 12:15 am

### Re: R Constant

Here are the different gas constants. It depends on the units you are using in the equation. For PV=nRT you would usually use 0.082 L*atm/(mol*K).

Edmund Zhi 2B
Posts: 118
Joined: Sat Jul 20, 2019 12:16 am

### Re: R Constant

The R constant will vary based on the other units given to us within a problem. Our formula sheet should always have the value of R that we will need to use in any problem on an exam.

Rory Simpson 2F
Posts: 106
Joined: Fri Aug 09, 2019 12:17 am

### Re: R Constant

Just make sure that you choose the right R constant depending on the units for pressure and volume, or convert all the units so that you can use 0.082 L*atm*mol^-1*K^-1.

Jasmine Vallarta 2L
Posts: 102
Joined: Sat Aug 17, 2019 12:18 am

### Re: R Constant

It depends on the units of pressure given to you in the problem.

Michelle N - 2C
Posts: 117
Joined: Wed Sep 18, 2019 12:19 am

### Re: R Constant

Leslie Almaraz 4G wrote: How do you know which variation of the R constant to use?

I know that Test 1 is already over, but on the equations sheet that Dr. Lavelle provided, there were many variations of it. To answer your question, the best way to know which R constant to use is to see the units used in the problem. If it mentions atm, then it'll be the number with atm in the units. The same applies for J, and so forth.

Shail Avasthi 2C
Posts: 101
Joined: Fri Aug 30, 2019 12:17 am

### Re: R Constant

The value of R you use depends on the units given to you in the problem and the units which your answer needs to be in. The units of R will cancel out with all of the given terms to give you your final answer's units.

Maria Poblete 2C
Posts: 102
Joined: Wed Sep 18, 2019 12:15 am

### Re: R Constant

In the equation sheet we're provided with, there are a number of different values to use for the R constant. It's important to look at which units are being used, which will tell you which one is appropriate to use.

SarahCoufal_1k
Posts: 102
Joined: Thu Jul 25, 2019 12:17 am

### Re: R Constant

It depends on the units of the other variables provided. Choose the R with the matching units, and all units but those of the variable you are looking for should cancel out.

Areli C 1L
Posts: 95
Joined: Wed Nov 14, 2018 12:19 am

### Re: R Constant

Depending on the units given, use the R constant that uses the same units in order for them to cancel and give the desired answer.

RichBollini4G
Posts: 100
Joined: Wed Sep 18, 2019 12:18 am

### Re: R Constant

Leslie Almaraz 4G wrote: How do you know which variation of the R constant to use?

I would say look at the specific units of the problem.

RichBollini4G
Posts: 100
Joined: Wed Sep 18, 2019 12:18 am

### Re: R Constant

Shail Avasthi 3C wrote: The value of R you use depends on the units given to you in the problem and the units which your answer needs to be in. The units of R will cancel out with all of the given terms to give you your final answer's units.
Thank you

Mandeep Garcha 2H
Posts: 100
Joined: Sat Aug 24, 2019 12:17 am

### Re: R Constant

It depends on the units given in the problem.

Anushka Chauhan2B
Posts: 51
Joined: Sat Aug 24, 2019 12:16 am

### Re: R Constant

Use the one whose units match those in the problem, so that the units can cancel.

Indy Bui 1l
Posts: 99
Joined: Sat Sep 07, 2019 12:19 am

### Re: R Constant

The R constant is dependent on the units of pressure you are given. Looking up a table is helpful; just pay attention to the units you are given in the question. You don't need to memorize this, just know which values correspond to each unit of pressure.

kendal mccarthy
Posts: 109
Joined: Wed Nov 14, 2018 12:22 am

### Re: R Constant

To determine which R to use, look at the givens you have; then you can determine which R's units would cancel out the appropriate units to get the value you desire. For most calculations, like in the PV=nRT equation, it will be 0.08206.

vanessas0123
Posts: 100
Joined: Wed Sep 11, 2019 12:17 am

### Re: R Constant

Look at the units you have and match them with the R constant's units!

Micah3J
Posts: 100
Joined: Tue Oct 08, 2019 12:16 am

### Re: R Constant

Just make sure that the R constant and your given units will cancel out and match accordingly.

Orrin Zhong 4G
Posts: 51
Joined: Sat Jul 20, 2019 12:16 am

### Re: R Constant

There are a lot of gas constants, so just fit the units that are involved with the units of the correct gas constant. Generally, I use 0.0821 L*atm/(mol*K) when I'm trying to find volume or pressure and 8.314 J/(mol*K) when I'm trying to find energy.

Joanne Lee 1J
Posts: 100
Joined: Thu Jul 25, 2019 12:15 am

### Re: R Constant

You can decide which form of R to use based on what the units of the given values are.

Charlene Datu 2E
Posts: 62
Joined: Wed Sep 11, 2019 12:16 am

### Re: R Constant

When looking at the units, it's good to note which ones will cancel out, and to keep in mind the units that your answer is supposed to be in. For the ideal gas law, you will typically be using any value with a unit of pressure (atm, bar, Torr). When calculating anything related to energy, the values with units of J or L*atm (1 L*atm = 101.325 J) are helpful.

Jasmine W 1K
Posts: 49
Joined: Sat Sep 07, 2019 12:18 am

### Re: R Constant

It depends on the units given in the problem. All of the R constants mean the same thing; the numbers just change due to unit conversions.

Jose Robles 1D
Posts: 100
Joined: Fri Aug 02, 2019 12:15 am

### Re: R Constant

Read the problem and see the units being used in the question; look at the pressure usually.

Gurmukhi Bevli 4G
Posts: 49
Joined: Wed Nov 14, 2018 12:20 am

### Re: R Constant

It will normally depend on the units given for the other quantities being used in the problem, and the variations of R and their values/units will usually be given on the formula sheet, so it is easier to assess the units on a case-by-case basis and see which one is the best fit.

Tahlia Mullins
Posts: 105
Joined: Thu Jul 25, 2019 12:15 am

### Re: R Constant

I always just look at the units given, and it's usually pretty easy to tell. Sometimes, temperature might need to be changed to Kelvin though!
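As a quick worked example of why the units matter (example values assumed): for n = 1.00 mol of an ideal gas at T = 273.15 K and P = 1.00 atm, choosing R = 0.08206 L*atm/(mol*K) makes atm, mol, and K cancel, leaving litres. A minimal sketch:

# PV = nRT solved for V; R is chosen so that atm, mol, and K cancel and L remains
R = 0.08206                    # L*atm/(mol*K)
n, T, P = 1.00, 273.15, 1.00   # mol, K, atm (assumed example values)
V = n * R * T / P
print(V)                       # about 22.4 L, the familiar molar volume at STP

Had the pressure been given in kPa, the same calculation would instead use R = 8.314 J/(mol*K) with volume in m^3, or a unit conversion first.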
# Jacobian determinant

Hello, I am trying to derive the formula for converting the differential area $dx\,dy$ into $du\,dv$, where $x$ and $y$ are given as functions of $u$ and $v$. I saw a derivation (or motivation) in a calculus book which uses transformations and images, but I wasn't fully convinced, as the motivation was not rigorous. I tried the following, but it does not seem to work out:

$dx = \frac{\partial x}{\partial u} du + \frac{\partial x}{\partial v} dv$

$dy = \frac{\partial y}{\partial u} du + \frac{\partial y}{\partial v} dv$

When I multiplied them, I did not get the Jacobian. I am getting expressions involving $du^2$ and $dv^2$, which I do not know what to do with. Any ideas? Please tell me what's wrong, and it would be great if you could send a link if you find a rigorous derivation/proof. Thanks.

### Re: Jacobian determinant

I already did! But this is not a derivation. As I mentioned, the only derivation I saw was using transformations and mappings. My question is: why doesn't the straightforward way of finding $dx$ and $dy$ in terms of $u$ and $v$ and multiplying them yield the Jacobian? This is why I got confused.

### Re: Jacobian determinant

The above site does have a description of how the determinant of the Jacobian shows up in the change of variables. It is a little long, but I think it will clarify it. The essential point is that it shows how this determinant is used to change $dA$ from one coordinate system to another. Click on "The Jacobian" to get to the general derivation.

### Re: Jacobian determinant

Thanks for the proof. But one should still get the answer when using the equations:

$dx = \frac{\partial x}{\partial u} du + \frac{\partial x}{\partial v} dv$

$dy = \frac{\partial y}{\partial u} du + \frac{\partial y}{\partial v} dv$

My problem is how to play around and get rid of $du^2$ and $dv^2$.

### Re: Jacobian determinant

The image of the rectangle $(0,0)$-$(0,dv)$-$(du,dv)$-$(du,0)$ in the (tangent space of the) $u$-$v$ plane is a parallelogram in the (tangent space of the) $x$-$y$ plane with two opposite vertices being $(0,0)$ and $(dx,dy)$. However, the area of such a parallelogram is not necessarily $dx\,dy$. In other words, the calculus in your argument was fine, but you started off with the wrong expression for the area of a parallelogram.

### Re: Jacobian determinant

I understand that an infinitesimal rectangular area $dx\,dy$ in the $x$-$y$ plane may not be a rectangle in the $u$-$v$ plane. But I am simply trying to find a formula relating $dx\,dy$ to $du\,dv$ (of course they are not equal). It is just that I am using differentials (instead of transformations and images) to establish this formula, which should end up being the Jacobian determinant.

### Re: Jacobian determinant

There are two ways of looking at it.

Firstly, there is the intuitive way that I was referring to before. Let $dx$, $dy$, $du$, $dv$ be the relevant components of tangent vectors. Then your expressions for $dx$ and $dy$ are correct, but the key point is that $dx\,dy$ is not $J\,du\,dv$: you have a rectangle in the (tangent space of the) $u$-$v$ plane with area $du\,dv$. This maps to a parallelogram in the (tangent space of the) $x$-$y$ plane. You can easily check that the area of this parallelogram is $J\,du\,dv$ and that it has an opposite pair of vertices $(0,0)$ and $(dx,dy)$. Clearly the area $J\,du\,dv$ is in general less than $dx\,dy$. I really recommend you draw out this parallelogram and label the vertices to see what is going on.

Now the other way: let $dx$, $dy$, $du$, $dv$ be the relevant 1-forms. Then $dx \wedge dy = J\, du \wedge dv$ (the $\wedge$'s are usually not written down). Your expressions for $dx$ and $dy$ are still correct (basic exercise in dual vector spaces).
However, when you multiply out $dx \wedge dy$, you get terms $du \wedge du = 0$ and $dv \wedge dv = 0$, by definition of $\wedge$.

### Re: Jacobian determinant

http://tutorial.math.lamar.edu/Classes/CalcIII/ChangeOfVariables.aspx

Try the above. Also google "Jacobian determinant" for more references.

### Re: Jacobian determinant

You are not supposed to multiply them together. You are looking for an area, and assuming you are in flat space, you want to take the determinant:

$\frac{\partial x}{\partial u} du \cdot \frac{\partial y}{\partial v} dv - \frac{\partial x}{\partial v} dv \cdot \frac{\partial y}{\partial u} du,$

because in a sense $du\,du = 0$, as does $dv\,dv$. The reason for the sign change, as in the second term of the determinant, is a little obscure and has something to do with right-handed or oriented coordinate systems, but in a sense it may be thought of as a CROSS product: according to the orientation of, say, two straight lines crossing one into the other, it is the $x$ part of line 1 times the $y$ part of line 2 MINUS the $y$ part of line 1 times the $x$ part of line 2, and we could reverse the sign if we crossed line 2 into line 1. Often we don't have to worry about the sign and just assume that area is always positive, so we can take the absolute value; then the expression fronting $du\,dv$ is non-negative.
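To see the $du \wedge du = 0$ bookkeeping from the answers above happen mechanically, here is a minimal SymPy sketch; polar coordinates $x = r\cos\theta$, $y = r\sin\theta$ are my illustrative choice of map, and the sketch recovers the familiar $dx\,dy = r\,dr\,d\theta$.

import sympy as sp

r, th = sp.symbols('r theta', positive=True)
x = r * sp.cos(th)
y = r * sp.sin(th)

# Route 1: the Jacobian determinant of (x, y) with respect to (r, theta)
J = sp.Matrix([x, y]).jacobian(sp.Matrix([r, th])).det()
print(sp.simplify(J))       # r

# Route 2: expand (x_r dr + x_th dth) ^ (y_r dr + y_th dth) by hand.
# dr^dr = dth^dth = 0 and dth^dr = -dr^dth leave (x_r*y_th - x_th*y_r) dr^dth.
coef = sp.diff(x, r)*sp.diff(y, th) - sp.diff(x, th)*sp.diff(y, r)
print(sp.simplify(coef))    # also r, as the wedge-product argument predicts

Both routes give the same coefficient, which is exactly why the naive product (which keeps the $du^2$ and $dv^2$ terms) fails while the antisymmetric product succeeds.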
## anonymous 5 years ago

hey guys i need help with a derivative problem. Using the definition of the derivative, get f'(x) for f(x) = 2x^2 - x + 5

1. anonymous

Using the definition of a derivative: $\lim_{h\to0}{\frac{2(x+h)^2-(x+h)+5-(2x^2-x+5)}{h}}$ $\lim_{h\to0}{\frac{2(x^2+2xh+h^2)-(x+h)+5-(2x^2-x+5)}{h}}$ $\lim_{h\to0}{\frac{4xh+2h^2-h}{h}}$ $\lim_{h\to0}{(4x+2h-1)}$ $4x-1$

2. anonymous

$\lim_{h\to0}{(4x+2h-1)}$ Ooops. That second to last line should be the above.

3. anonymous

i dnt know what grade r u in

4. anonymous
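For checking work like this, the limit in the definition can also be evaluated symbolically; a minimal SymPy sketch of the same difference quotient:

import sympy as sp

x, h = sp.symbols('x h')
f = 2*x**2 - x + 5

# The definition of the derivative: limit of the difference quotient as h -> 0
fprime = sp.limit((f.subs(x, x + h) - f) / h, h, 0)
print(fprime)   # 4*x - 1, matching the hand computation above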
# A Dimension-5 interaction beyond the standard model 1. Apr 9, 2017 ### spaghetti3451 Consider the following dimension-5 interaction: $$\bar{\psi}D^{2}\psi.$$ Why is this interaction not consistent with either Lorentz invariance, the standard model field content or gauge invariance? 2. Apr 14, 2017 ### PF_Help_Bot Thanks for the thread! This is an automated courtesy bump. Sorry you aren't generating responses at the moment. Do you have any further information, come to any new conclusions or is it possible to reword the post? The more details the better.
Journal article Open Access

# REVIEW STUDIES ON BIO-DIESEL PRODUCTION FROM PHYSIC NUT (JATROPHA CURCUS) OIL

Prof. Dhanapal Venkatachalam, Samuel Thavamani B, Alphy Joseph

### Dublin Core Export

<?xml version='1.0' encoding='utf-8'?>
<oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
  <dc:creator>Prof. Dhanapal Venkatachalam, Samuel Thavamani B, Alphy Joseph</dc:creator>
  <dc:date>2019-03-05</dc:date>
  <dc:description>Objective: Jatropha curcas, belonging to the family Euphorbiaceae, is commonly known as physic nut. It is a well-known herb all over the world. J. curcas oil is not edible and is traditionally used for manufacturing soap and for other medicinal applications. It is an alternative fuel for diesel engines. This review focuses on biodiesel production from the plant Jatropha curcas. Methods: Production of biodiesel from Jatropha curcas seed oil involves three steps: extraction of oil from the seed, acid-catalyzed transesterification, and base-catalyzed transesterification, each of which is well known and widely utilized in today's biodiesel industry. The produced biodiesel was characterized to obtain its physicochemical parameters, such as flash point, pour point, cloud point, viscosity and density. Results: The results obtained from the calculation of the yield of extracted oil revealed that 54% of oil could be obtained from the Jatropha seeds used. According to the results, the values obtained from the analysis of the oil, especially the free fatty acid content, density and kinematic viscosity, were found to compare well with the standard (ASTM), which was an indication that the extracted oil was good and suitable for biodiesel production. The parameters considered were oil content, iodine value, peroxide value, saponification value and acid value. These parameters were measured in order to study the oil properties of J. curcas L. that make the oil most suitable for biodiesel production. Conclusions: This review shows that biodiesel has become more attractive as an alternative to fossil diesel because of its environmental benefits and the fact that it is made from a renewable resource. J. curcas L. is a promising source of biodiesel since its seeds contain a high amount of oil and the species has good agronomic traits.</dc:description>
  <dc:identifier>https://zenodo.org/record/2583560</dc:identifier>
  <dc:identifier>10.5281/zenodo.2583560</dc:identifier>
  <dc:identifier>oai:zenodo.org:2583560</dc:identifier>
  <dc:relation>doi:10.5281/zenodo.2583559</dc:relation>
  <dc:relation>url:https://zenodo.org/communities/iajpr</dc:relation>
  <dc:rights>info:eu-repo/semantics/openAccess</dc:rights>
  <dc:subject>Jatropha Curcus, Biodiesel Production, Physiochemical Parameters.</dc:subject>
  <dc:title>REVIEW STUDIES ON BIO-DIESEL PRODUCTION FROM PHYSIC NUT (JATROPHA CURCUS) OIL</dc:title>
  <dc:type>info:eu-repo/semantics/article</dc:type>
  <dc:type>publication-article</dc:type>
</oai_dc:dc>
# Initial Part of WFF of PropLog is not WFF

## Theorem

Let $\mathbf A$ be a WFF of propositional logic.

Let $\mathbf S$ be an initial part of $\mathbf A$.

Then $\mathbf S$ is not a WFF of propositional logic.

## Proof

The proof proceeds by strong induction on the length of a WFF of propositional logic.

Let $l \left({\mathbf Q}\right)$ denote the length of a string $\mathbf Q$.

For all $n \in \N_{> 0}$, let $P \left({n}\right)$ be the proposition:

No initial part of a WFF $\mathbf A$ such that $l \left({\mathbf A}\right) = n$ is a WFF of propositional logic.

By definition, $\mathbf S$ is an initial part of $\mathbf A$ if and only if $\mathbf A = \mathbf {S T}$ for some non-null string $\mathbf T$.

Thus we note that $l \left({\mathbf S}\right) < l \left({\mathbf A}\right)$.

### Basis for the Induction

Let $\mathbf A$ be a WFF such that $l \left({\mathbf A}\right) = 1$.

Then for an initial part $\mathbf S$:

$l \left({\mathbf S}\right) < 1$

That is, $l \left({\mathbf S}\right) = 0$, so $\mathbf S$ must be the null string, which is not a WFF.

So the result holds for WFFs of length $1$.

That is, $P \left({1}\right)$ is true.

This is our basis for the induction.

### Induction Hypothesis

Now we need to show that for $k \ge 1$, if $P \left({j}\right)$ is true for all $j \le k$, then it logically follows that $P \left({k + 1}\right)$ is true.

So this is our induction hypothesis:

For all $j \le k$: no initial part of a WFF $\mathbf A$ such that $l \left({\mathbf A}\right) = j$ is a WFF of propositional logic.

Then we need to show:

No initial part of a WFF $\mathbf A$ such that $l \left({\mathbf A}\right) = k + 1$ is a WFF of propositional logic.

### Induction Step

This is our induction step:

Let $\mathbf A$ be a WFF such that $l \left({\mathbf A}\right) = k + 1$.

Suppose $\mathbf D$ is an initial part of $\mathbf A$ which happens to be a WFF.

That is, $\mathbf A = \mathbf{D T}$ where:

$\mathbf D$ is a WFF

$\mathbf T$ is non-null.

There are two cases:

$(1): \quad \mathbf A = \neg \mathbf B$ where $\mathbf B$ is a WFF of length $k$.

Thus $\mathbf D$ is a WFF starting with $\neg$. So:

$\mathbf D = \neg \mathbf E$

where $\mathbf E$ is also a WFF.

We remove the initial $\neg$ from $\mathbf A = \mathbf{D T}$ to get:

$\mathbf B = \mathbf{E T}$

But then $\mathbf B$ is a WFF of length $k$ which has $\mathbf E$ as an initial part which is itself a WFF.

This contradicts the induction hypothesis.

Therefore no initial part of $\mathbf A = \neg \mathbf B$ can be a WFF.

$\Box$

$(2): \quad \mathbf A = \left({\mathbf B \circ \mathbf C}\right)$ where $\circ$ is one of the binary connectives.

In this case, $\mathbf D$ is a WFF starting with $($, so:

$\mathbf D = \left({\mathbf E * \mathbf F}\right)$

for some binary connective $*$ and some WFFs $\mathbf E$ and $\mathbf F$.

Thus:

$\left({\mathbf B \circ \mathbf C}\right) = \left({\mathbf E * \mathbf F}\right) \mathbf T$

Both $\mathbf B$ and $\mathbf E$ are WFFs of length less than $k + 1$.

Therefore, by the induction hypothesis, neither $\mathbf B$ nor $\mathbf E$ can be an initial part of the other.

But since both $\mathbf B$ and $\mathbf E$ start at the same place in $\mathbf A$, they must be the same:

$\mathbf B = \mathbf E$

Therefore:

$\left({\mathbf B \circ \mathbf C}\right) = \left({\mathbf B * \mathbf F}\right) \mathbf T$

So $\circ = *$ and, cancelling the common prefix of these strings:

$\mathbf C) = \mathbf F) \mathbf T$

But then the WFF $\mathbf F$ is an initial part of the WFF $\mathbf C$, which has length less than $k + 1$.

This contradicts our induction hypothesis.

Therefore no initial part of $\mathbf A = \left({\mathbf B \circ \mathbf C}\right)$ can be a WFF.
$\Box$

As $\mathbf A$ is arbitrary, it follows that no initial part of any WFF of length $k + 1$ can be a WFF.

So $P \left({j}\right)$ for all $j \le k$ together imply $P \left({k + 1}\right)$, and the result follows by strong induction.

Therefore, for all $n \in \N_{> 0}$, no initial part of a WFF $\mathbf A$ such that $l \left({\mathbf A}\right) = n$ is a WFF of propositional logic.

Hence the result.

$\blacksquare$
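The theorem can also be road-tested mechanically: parse a WFF with a recursive-descent reader and confirm that no proper initial part parses. Below is a minimal Python sketch under an assumed toy grammar (atoms p, q, r; ASCII - for $\neg$ and ^ for a single binary connective); the grammar and all names are illustrative assumptions, not part of the source page.

def parse(s, i=0):
    """Return the index just past one WFF starting at s[i], or raise ValueError."""
    if i >= len(s):
        raise ValueError("unexpected end of string")
    if s[i] in "pqr":                       # atoms
        return i + 1
    if s[i] == "-":                         # negation: -A
        return parse(s, i + 1)
    if s[i] == "(":                         # binary: (A^B)
        j = parse(s, i + 1)
        if j >= len(s) or s[j] != "^":
            raise ValueError("expected connective")
        k = parse(s, j + 1)
        if k >= len(s) or s[k] != ")":
            raise ValueError("expected )")
        return k + 1
    raise ValueError("bad symbol")

def is_wff(s):
    try:
        return parse(s) == len(s)
    except ValueError:
        return False

w = "(-p^(q^r))"
assert is_wff(w)
# No proper initial part of a WFF is a WFF, as the theorem asserts:
assert not any(is_wff(w[:n]) for n in range(1, len(w)))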
# Uncorrelating correlated $\chi^2$ distribution

This question is related to my previous question here.

So I was trying to simulate correlated $\chi^2(1)$ random variables given a desired covariance matrix. However, it seems like the only possible route is the following.

Given the covariance matrix $R$, we perform the Cholesky decomposition, so that

$$R=LL^t$$

Then we simulate independent vectors $A$ with i.i.d. $N(0,1)$ entries; we can then simulate $B$ using

$$LA=B$$

where $B$ should be correlated $N(0,1)$, and then we can obtain the correlated $\chi^2(1)$ variables by squaring.

When playing around with the variables, I thought that, given $LA=B$ and that the correlated $\chi^2$ variables are simply the squares of the entries of $B$, then by squaring the equation I should be able to get the corresponding independent $\chi^2$ variables, e.g.

$$LAA^tL^t=BB^t$$

However, I note that $AA^t$ and $BB^t$ are both square matrices, so I am not sure whether they are both $\chi^2$ distributed.

What interests me most is this: given a set of correlated random variables that are normally distributed, we can simply use the Cholesky decomposition to orthogonalize, i.e. un-correlate, the variables. Are there similarly simple equations that we can use to perform the same trick in orthogonalizing $\chi^2$ or even non-central $\chi^2$ variables?

Thank you

• For simulating correlated $\chi^2(1)$ variates, why not just square a bivariate Normal distribution? – whuber Feb 23 '15 at 18:01
• I can; however, I am interested in whether there is any method that can simulate correlated $\chi^2(1)$ from independent $\chi^2(1)$ variables. – Sam Feb 24 '15 at 4:09
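For concreteness, here is a minimal NumPy sketch of the simulation route described in the question, with an assumed 2×2 correlation matrix for the underlying normals:

import numpy as np

rng = np.random.default_rng(0)

# Target correlation matrix R for the underlying standard normals (assumed)
R = np.array([[1.0, 0.6],
              [0.6, 1.0]])
L = np.linalg.cholesky(R)               # R = L L^T

A = rng.standard_normal((2, 100_000))   # independent N(0,1) columns
B = L @ A                               # correlated N(0,1)
C = B**2                                # correlated chi-square(1) marginals

print(np.corrcoef(C))                   # off-diagonal is about 0.6**2 = 0.36

Note what the last line shows: squaring changes the dependence structure. For a standard bivariate normal with correlation $\rho$, the squares have correlation $\rho^2$, so the Cholesky factor controls the correlation of the normals, not directly that of the resulting $\chi^2(1)$ variables.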
# Christoffel symbols

1. Nov 11, 2013

### tuggler

1. The problem statement, all variables and given/known data

I am learning Christoffel symbols, and I want to know how to compute them for a surface parameterized by $g(u,v) = (u\cos v, u \sin v, u)$ by using the definition.

2. Relevant equations

Christoffel symbols

3. The attempt at a solution

Is this website http://www.math.uga.edu/~clayton/courses/660/660_4.pdf [Broken] on page 3 the same example as mine? I noticed that the $u$ and $v$ are switched. Should I use that example as a reference, or is it exactly like my question?

Last edited by a moderator: May 6, 2017

2. Nov 11, 2013

### Dick

I think you should try and work it out on your own. But yes, that's essentially the same problem as yours. How much it's going to look like your solution depends on how you defined the Christoffel symbols. I'm not used to the definition in terms of the first fundamental form, so I find the middle part pretty confusing.

Last edited by a moderator: May 6, 2017

3. Nov 11, 2013

### tuggler

Thank you! The first fundamental form I can do. Thanks!
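One way to check a hand computation like this is to grind out the metric-based definition with a computer algebra system. A minimal SymPy sketch for this parameterization (my illustration, using the formula $\Gamma^k_{ij} = \frac{1}{2} g^{kl} (\partial_i g_{jl} + \partial_j g_{il} - \partial_l g_{ij})$):

import sympy as sp

u, v = sp.symbols('u v', positive=True)
g = sp.Matrix([u*sp.cos(v), u*sp.sin(v), u])   # the surface g(u, v)

# First fundamental form g_ij from the tangent vectors
Xu, Xv = g.diff(u), g.diff(v)
G = sp.Matrix([[Xu.dot(Xu), Xu.dot(Xv)],
               [Xv.dot(Xu), Xv.dot(Xv)]])
Ginv = G.inv()
coords = (u, v)

# Christoffel symbols of the second kind from the metric
def christoffel(k, i, j):
    return sp.simplify(sum(Ginv[k, l] * (sp.diff(G[j, l], coords[i])
                                         + sp.diff(G[i, l], coords[j])
                                         - sp.diff(G[i, j], coords[l])) / 2
                           for l in range(2)))

for k in range(2):
    for i in range(2):
        for j in range(2):
            print(f"Gamma^{k}_{i}{j} =", christoffel(k, i, j))

Here the metric comes out to $E = 2$, $F = 0$, $G = u^2$, and the only nonzero symbols should be $\Gamma^u_{vv} = -u/2$ and $\Gamma^v_{uv} = \Gamma^v_{vu} = 1/u$, which is a useful target to compare a hand calculation against.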
# Indefinite double integral In calculus we've been introduced first with indefinite integral, then with the definite one. Then we've been introduced with the concept of double (definite) integral and multiple (definite) integral. Is there a concept of double (or multiple) indefinite integral? If the answer is yes, how is its definition, and why we don't learn that? If the answer is no, why it is so? • This is a very good question, I would be surprised if you get a simple answer for it! – NoChance Sep 9 '15 at 20:10 The answer is affirmative. If it is assumed in the sequel that all functions are "neat", then we have: $$u(p,q) = \iint f(p,q)\, dp\, dq = \iint f(p,q)\, dq\, dp \\ \Longleftrightarrow \quad \frac{\partial^2}{\partial q \, \partial p} u(p,q) = \frac{\partial^2}{\partial p \, \partial q} u(p,q) = f(p,q)$$ In particular, if the cross partial derivatives are zero: $$\frac{\partial^2}{\partial q \, \partial p} u(p,q) = \frac{\partial^2}{\partial p \, \partial q} u(p,q) = 0$$ Do the integration: $$u(p,q) = \iint 0 \, dq\, dp = \int \left[ \int 0 \, dq \right] dp = \int f(p) \, dp = F(p)$$ On the other hand: $$u(p,q) = \iint 0 \, dp\, dq = \int \left[ \int 0 \, dp \right] dq = \int g(q) \, dq = G(q)$$ Because $\;\partial f(p)/\partial q = \partial g(q)/\partial p = 0\,$ : that's the meaning of "independent variables". We conclude that the general solution of the PDE $\;\partial^2 u/\partial p \partial q = \partial^2 u/\partial q \partial p = 0\;$ is given by: $$u(p,q) = F(p) + G(q)$$ This result is more interesting than it might seem at first sight. Lemma. Let $a\ne 0$ and $b\ne 0$ be constants (complex eventually) , then: $$\frac{\partial}{\partial (ax+by)} = \frac{1}{a}\frac{\partial}{\partial x} + \frac{1}{b}\frac{\partial}{\partial y} = \frac{\partial}{\partial ax} + \frac{\partial}{\partial by}$$ Proof with a well known chain rule for partial derivatives (for every $u$): $$\frac{\partial u}{\partial (ax+by)} = \frac{\partial u}{\partial x}\frac{\partial x}{\partial (ax+by)} + \frac{\partial u}{\partial y}\frac{\partial y}{\partial (ax+by)}$$ Where: $$\frac{\partial x}{\partial (ax+by)} = \frac{1}{\partial (ax+by)/\partial x} = \frac{1}{a} \\ \frac{\partial y}{\partial (ax+by)} = \frac{1}{\partial (ax+by)/\partial y} = \frac{1}{b}$$ Now consider the following partial differential equation (wave equation): $$\frac{1}{c^2}\frac{\partial^2 u}{\partial t^2} - \frac{\partial^2 u}{\partial x^2} = 0$$ With a little bit of Operator Calculus, decompose into factors: $$\left[ \frac{\partial}{\partial c t} - \frac{\partial}{\partial x} \right] \left[ \frac{\partial}{\partial c t} + \frac{\partial}{\partial x} \right] u = \left[ \frac{\partial}{\partial c t} + \frac{\partial}{\partial x} \right] \left[ \frac{\partial}{\partial c t} - \frac{\partial}{\partial x} \right] u = 0$$ With the above lemma, this is converted to: $$\frac{\partial}{\partial (x-ct)}\frac{\partial}{\partial (x+ct)} u = \frac{\partial}{\partial (x+ct)}\frac{\partial}{\partial (x-ct)} u = 0$$ With $p = (x-ct)$ and $q = (x+ct)$ as new independent variables. Now do the integration and find that the general solution of the wave equation is given by: $$u(x,t) = F(p) + G(q) = F(x-ct) + G(x+ct)$$ Interpreted as the superposition of a wave travelling forward and a wave travelling backward. 
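The claim that $u = F(x-ct) + G(x+ct)$ solves the wave equation for arbitrary (neat) $F$ and $G$ is easy to verify symbolically; a minimal SymPy sketch:

import sympy as sp

x, t, c = sp.symbols('x t c')
F, G = sp.Function('F'), sp.Function('G')

# General solution built from a forward and a backward travelling wave
u = F(x - c*t) + G(x + c*t)

# Plug into the wave equation (1/c^2) u_tt - u_xx and simplify
residual = sp.diff(u, t, 2) / c**2 - sp.diff(u, x, 2)
print(sp.simplify(residual))   # 0, for arbitrary F and G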
Very much the same can be done for the 2-D Laplace equation:

$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0$$

Decompose into factors (and beware of complex solutions):

$$\left[ \frac{\partial}{\partial x} + i \frac{\partial}{\partial y} \right] \left[ \frac{\partial}{\partial x} - i \frac{\partial}{\partial y} \right] u = \left[ \frac{\partial}{\partial x} - i \frac{\partial}{\partial y} \right] \left[ \frac{\partial}{\partial x} + i \frac{\partial}{\partial y} \right] u = 0$$

This is converted to:

$$\frac{\partial}{\partial (x+iy)}\frac{\partial}{\partial (x-iy)} u = \frac{\partial}{\partial (x-iy)}\frac{\partial}{\partial (x+iy)} u = 0$$

With $\;z=x+iy\;$ and $\;\overline{z}=x-iy\;$ as new, complex, independent variables. Now do the integration:

$$u(x,y) = F(z) + G(\overline{z})$$

The solutions are related to holomorphic functions in the complex plane.

IMHO, this question is rather deep, but admits a positive answer. Rather than attempting to answer it, though, I'll try to give some intuitions and point the interested people in the right direction.

One variable case. An indefinite integral $\int f(x) \, dx$ is understood as a function $F$ which helps evaluate the definite integral over an interval $[a,b]$ in the following way: given the numbers $a$ and $b$,

$$\int_a^b f(x) \, dx = F(b) - F(a).$$

The operation on the RHS of the last equation is significantly simpler than the operation on the left (which is a limit operation). Thus, knowledge of the indefinite integral $F$ is of great help when evaluating integrals.

Notions for generalization. This concept admits a certain generalization to multivariate calculus in the context of Stokes' theorem. (I will be handwavy in this part, but I will point to a rigorous source at the end.) This time, though, there won't be a function as magical as the one from before, which you could evaluate at two points to get an answer. Rather, the generalization attempts to imitate the following behavior: if $f=F'$, by the fundamental theorem of calculus,

$$\int_a^b F'(x) \, dx = F(b) - F(a).$$

Notice that the points $a$ and $b$ form the border of the interval $[a,b]$, so you could say that integrating $F'$ over an interval amounts to evaluating $F$ over the border of that interval. Note also that the signs obey a rule: the border-point which lies towards the "positive" direction of the interval gets the plus sign, and the other direction gets the minus sign.

Now imagine a 3-D context, where you want to integrate a three-variable function $f(x,y,z)$ over the unit ball. Even if you find a "function" $F$ which in some way satisfies a proper generalization of "$f = F'$", you now have an infinite number of points in the border. How is $F$ used in this case? This difficulty is overcome by somehow integrating the values of $F$ along the border of the ball (that is, the unit sphere). Special attention must be given to the "signs" which correspond to each point too, much like in the 1-dimensional case. These should be considered inside the integral along the border.

The theorems. So, with these ideas in mind, you can check the divergence theorem, a special case of Stokes' theorem for the three-variable case.
Continuing with our 3-D example, if $B$ is the unit ball and $\partial B$ is its border: $$\int_B \nabla \cdot \mathbf{F}(\mathbf{x}) \, dx\, dy\, dz = \int_{\partial B} \mathbf{F}(\mathbf{x})\cdot\mathbf{n}(\mathbf{x})\, dS(\mathbf{x}).$$ Here, the right generalization of realizing that $f$ is the derivative of some function $F$ (the indefinite integral from the 1-D case) is realizing that $f$ is the divergence of some vector field $\mathbf{F}$, that is, $f = \nabla \cdot \mathbf{F}$. Similarly, the right analogues for the "signs" depending on "positive/negative ends of the interval" that weigh the points $\mathbf{x}$ turn out to be the "directions normal to the surface", denoted by $\mathbf{n}(\mathbf{x})$, which project the values of the vector field $\mathbf{F}(\mathbf{x})$, "weighing" them in the appropriate direction.

Important difference. Now, this identity successfully states that the evaluation of the triple integral on the LHS amounts to evaluating a surface integral (double integral) on the RHS. However, nothing guarantees that the operation on the right will be easier to carry out. Whether or not this conversion is helpful or computationally convenient will depend on context, and you could even use it the other way round if that is more convenient.

I hope to have convinced you that there is much more to this than what can be covered in a single answer. If you want to learn about these topics in a rigorous way, I recommend reading a book on "calculus on manifolds", like Bachman's. You'll learn about integrating differential forms, and about exact differential forms, which are the forms that admit this kind of generalization of the "indefinite integral".

Now I realize that for the indefinite double integral we use the concept of partial differential equations (where $z=z(x,y)$)! When we find the solution of these equations, we do just the same as when we want to find the primitive of a one-variable function! Also, when we solve an indefinite integral of a one-variable function, the solution contains a constant; by the same logic, when we solve a PDE of a two-variable function, we get two constants $c_1$ and $c_2$.

• More precisely, we get two functions $c_1,c_2$ that act as constants with respect to partial differentiation. Like in my answer. – Han de Bruijn Sep 14 '15 at 16:42
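To make the divergence theorem from the second answer concrete, here is a small numerical illustration (my own addition, not from the thread): for the field $\mathbf{F}(\mathbf{x}) = \mathbf{x}/3$ we have $\nabla\cdot\mathbf{F} = 1$, so both sides of the identity over the unit ball should equal its volume, $4\pi/3$:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

# Volume side: div F = 1, so the integral over the unit ball B is vol(B).
# Monte Carlo estimate over the bounding cube [-1, 1]^3:
pts = rng.uniform(-1.0, 1.0, size=(N, 3))
inside = (pts**2).sum(axis=1) <= 1.0
volume_side = 8.0 * inside.mean()

# Surface side: flux of F = x/3 through the unit sphere.
# On the sphere, F.n = |x|^2 / 3 = 1/3, times the sphere's area 4*pi:
dirs = rng.normal(size=(N, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
surface_side = 4.0 * np.pi * ((dirs / 3.0) * dirs).sum(axis=1).mean()

print(volume_side, surface_side, 4.0 * np.pi / 3.0)  # all approximately 4.19
```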
# Finding out the remainder of $\frac{11^{10}-1}{100}$ using modulus [duplicate]

If $$11^{10}-1$$ is divided by $$100$$, then solve for $$x$$ in $$11^{10}-1 \equiv x \pmod{100}$$

## What I tried:

$$11^{2} \equiv 21 \pmod{100}$$ .....(1)

$$(11^{2})^{2} \equiv (21)^{2} \pmod{100}$$

$$11^{4} \equiv 441 \pmod{100}$$

$$11^{4} \equiv 41 \pmod{100}$$

$$(11^{4})^{2} \equiv (41)^{2} \pmod{100}$$

$$11^{8} \equiv 1681 \pmod{100}$$

$$11^{8} \equiv 81 \pmod{100}$$

$$11^{8} × 11^{2} \equiv (81×21) \pmod{100}$$ ......{from (1)}

$$11^{10} \equiv 1701 \pmod{100} \implies 11^{10} \equiv 1 \pmod{100}$$

Hence, $$11^{10} -1 \equiv (1-1) \pmod{100} \implies 11^{10} - 1 \equiv 0 \pmod{100}$$ and thus we get the value of $$x$$: it is $$x = 0$$, and $$11^{10}-1$$ is divisible by $$100$$.

But this approach takes a long time in any competitive exam or math contest without a calculator. Is there an easier way to determine the remainder in the above problem quickly? That would be very helpful for me. Thanks in advance.

## marked as duplicate by Bill Dubuque, Feb 22 at 14:26

• I don't understand why, from whatever source I take a problem concerning number theory, it always becomes a duplicate. I think the Bangladesh Math Olympiad is a duplicate and a fraud. Shame on it. – Anirban Niloy Feb 22 at 14:36

$$11^{10}=(10+1)^{10}=10^{10}+10×10^9+\frac{10×9}{2}×10^8+\cdots+(10×10)+1$$ (using the binomial expansion). Now note that every term except the last one is a multiple of $$100$$.

$$11^{10} = (10+1)^{10} = 10^{10} + k_1\cdot 10^9 + k_2 \cdot 10^8 + ... + 10\cdot 10^1 + 1$$ where the $$k$$'s represent various combinatorial constants. The values are unimportant. What's important is that when we take the whole thing modulo $$100$$, the expression reduces to $$1$$. Subtracting one, we get the required result $$11^{10} - 1 \equiv 0 \pmod{100}$$.
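For a quick machine check (my addition, not from the thread), Python's built-in three-argument pow does modular exponentiation directly:

```python
print(pow(11, 10, 100))         # 1
print((pow(11, 10) - 1) % 100)  # 0, so 100 divides 11**10 - 1
```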
# What are the local extrema and inflection points for y = 2sin (x) - cos^2 (x) for [0, 2π]?

Apr 30, 2015

Let $f \left(x\right) = y$

$y = f \left(x\right) = 2 \sin \left(x\right) - {\cos}^{2} \left(x\right)$ for $\left[0 , 2 \pi\right]$

To find local extrema: Find critical numbers for $f$

$y ' = 2 \cos x + 2 \cos x \sin x$

$y '$ never fails to exist, and $y ' = 0$ when $2 \cos x + 2 \cos x \sin x = 2 \cos x \left(1 + \sin x\right) = 0$

In $\left[0 , 2 \pi\right]$ this happens at $\frac{\pi}{2}$ and at $\frac{3 \pi}{2}$

Testing: $y ' = 2 \cos x \left(1 + \sin x\right)$, and the factors $2$ and $1 + \sin x$ are always non-negative, so the sign of $y '$ matches the sign of $\cos x$

On $\left[0 , \frac{\pi}{2}\right)$, $y '$ is positive (because cosine is)

On $\left(\frac{\pi}{2} , \frac{3 \pi}{2}\right)$, $y '$ is negative (because cosine is)

On $\left(\frac{3 \pi}{2} , 2 \pi\right)$, $y '$ is positive (because cosine is)

So $f \left(\frac{\pi}{2}\right) = 2$ is a local maximum and $f \left(\frac{3 \pi}{2}\right) = - 2$ is a local minimum.

To find inflection points: Investigate the sign of $y ' '$

$y ' ' = - 2 \sin \left(x\right) + \left(- 2 {\sin}^{2} \left(x\right) + 2 {\cos}^{2} x\right)$

$y ' ' = - 2 \sin x + \left(- 2 {\sin}^{2} x + 2 \left(1 - {\sin}^{2} x\right)\right)$

$y ' ' = 2 - 2 \sin x - 4 {\sin}^{2} x$

$y ' ' = - 2 \left(2 {\sin}^{2} x + \sin x - 1\right) = - 2 \left(2 \sin x - 1\right) \left(\sin x + 1\right)$

$y ' ' = 0$ when $- 2 \left(2 \sin x - 1\right) \left(\sin x + 1\right) = 0$

And that happens at $x = \frac{\pi}{6} , \frac{5 \pi}{6} , \frac{3 \pi}{2}$

Among the factors of $y ' '$: $- 2$ is always negative and $\sin x + 1$ is never negative, so the sign of $y ' '$ will be the opposite of the sign of $2 \sin x - 1$

On $\left[0 , \frac{\pi}{6}\right)$, $y ' '$ is positive ($2 \sin x - 1$ is negative)

On $\left(\frac{\pi}{6} , \frac{5 \pi}{6}\right)$, $y ' '$ is negative ($2 \sin x - 1$ is positive)

On $\left(\frac{5 \pi}{6} , \frac{3 \pi}{2}\right)$, $y ' '$ is positive ($2 \sin x - 1$ is negative)

On $\left(\frac{3 \pi}{2} , 2 \pi\right)$, $y ' '$ is positive ($2 \sin x - 1$ is negative)

The concavity changes at $\frac{\pi}{6}$ and $\frac{5 \pi}{6}$, so there are inflection points: $\left(\frac{\pi}{6} , f \left(\frac{\pi}{6}\right)\right)$ and $\left(\frac{5 \pi}{6} , f \left(\frac{5 \pi}{6}\right)\right)$. At $\frac{3 \pi}{2}$, $y ' ' = 0$ but its sign does not change (the factor $\sin x + 1$ touches zero without changing sign), so that point is not an inflection point.
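The analysis above can be double-checked symbolically (my addition; assumes sympy):

```python
import sympy as sp

x = sp.symbols('x')
y = 2*sp.sin(x) - sp.cos(x)**2
y1, y2 = sp.diff(y, x), sp.diff(y, x, 2)

# Critical numbers and zeros of y'' on [0, 2*pi]
print(sp.solveset(sp.Eq(y1, 0), x, sp.Interval(0, 2*sp.pi)))  # {pi/2, 3*pi/2}
print(sp.solveset(sp.Eq(y2, 0), x, sp.Interval(0, 2*sp.pi)))  # {pi/6, 5*pi/6, 3*pi/2}

# y'' keeps the same sign on both sides of 3*pi/2, so there is no inflection there
eps = sp.Rational(1, 10)
print(y2.subs(x, 3*sp.pi/2 - eps).evalf(),
      y2.subs(x, 3*sp.pi/2 + eps).evalf())  # both positive
```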
Cancellative commutative monoids

Abbreviation: CanCMon

Definition

A \emph{cancellative commutative monoid} is a cancellative monoid $\mathbf{M}=\langle M,\cdot ,e\rangle$ such that $\cdot$ is commutative: $x\cdot y=y\cdot x$

Morphisms

Let $\mathbf{M}$ and $\mathbf{N}$ be cancellative commutative monoids. A morphism from $\mathbf{M}$ to $\mathbf{N}$ is a function $h:M\rightarrow N$ that is a homomorphism: $h(x\cdot y)=h(x)\cdot h(y)$, $h(e)=e$

Examples

Example 1: $\langle\mathbb{N},+,0\rangle$, the natural numbers, with addition and zero.

Basic results

All free commutative monoids are cancellative. All finite commutative (left or right) cancellative monoids are reducts of abelian groups.

Properties

Classtype: quasivariety

Finite members

$\begin{array}{lr} f(1)= &1\\ f(2)= &1\\ f(3)= &1\\ f(4)= &2\\ f(5)= &1\\ f(6)= &1\\ f(7)= &1 \end{array}$
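The basic result above can be illustrated by brute force for order 4 (my own sketch, not part of the page): enumerate all commutative Cayley tables on a 4-element set with identity 0, keep the associative and cancellative ones, and confirm that every element in each surviving table has an inverse, i.e. the monoid is an abelian group.

```python
from itertools import product

n, e = 4, 0  # monoid size; fix element 0 as the identity

def is_assoc(T):
    rng = range(n)
    return all(T[T[a][b]][c] == T[a][T[b][c]]
               for a in rng for b in rng for c in rng)

def is_cancellative(T):
    # In a finite monoid, cancellativity means every row is a permutation;
    # commutativity then gives the same for columns.
    return all(len({T[a][b] for b in range(n)}) == n for a in range(n))

found = groups = 0
cells = [(i, j) for i in range(1, n) for j in range(i, n)]  # upper triangle
for vals in product(range(n), repeat=len(cells)):
    T = [[0] * n for _ in range(n)]
    for k in range(n):
        T[e][k] = T[k][e] = k       # identity row and column
    for (i, j), v in zip(cells, vals):
        T[i][j] = T[j][i] = v       # commutativity by construction
    if is_assoc(T) and is_cancellative(T):
        found += 1
        # every element has an inverse => the monoid is a group
        groups += all(any(T[a][b] == e for b in range(n)) for a in range(n))

print(found, groups)  # the two counts agree: every such monoid is a group
```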
# Part 5: Equilibrium problems

This file is part of BurnMan - a thermoelastic and thermodynamic toolkit for the Earth and Planetary Sciences

Copyright (C) 2012 - 2021 by the BurnMan team, released under the GNU GPL v2 or later.

## Introduction

This ipython notebook is the fifth in a series designed to introduce new users to the code structure and functionalities present in BurnMan.

Demonstrates

1. burnman.equilibrate, an experimental function that determines the bulk elemental composition, pressure, temperature, phase proportions and compositions of an assemblage subject to user-defined constraints.

Everything in BurnMan and in this tutorial is defined in SI units.

## Phase equilibria

### What BurnMan does and doesn't do

Members of the BurnMan Team are often asked whether BurnMan does Gibbs energy minimization. The short answer to that is no, for three reasons:

1) Python is ill-suited to such computationally intensive problems.

2) There are many pieces of software already in the community that do Gibbs energy minimization, including but not limited to: PerpleX, HeFESTo, Theriak Domino, MELTS, ENKI, FactSAGE (proprietary), and MMA-EoS.

3) Gibbs minimization is a hard problem. The brute-force pseudocompound/simplex technique employed by Perple_X is the only globally robust method, but clever techniques have to be used to make the computations tractable, and the solution found is generally only a (very close) approximation to the true minimum assemblage. More refined Newton / higher order schemes (e.g. HeFESTo, MELTS, ENKI) provide an exact solution, but can get stuck in local minima or even fail to find a solution.

So, with those things in mind, what does BurnMan do? Well, because BurnMan can compute the Gibbs energy and analytical derivatives of composite materials, it is well suited to solving the equilibrium relations for fixed assemblages. This is done using the burnman.equilibrate function, which acts in a similar (but slightly more general) way to the THERMOCALC software developed by Tim Holland, Roger Powell and coworkers. Essentially, one chooses an assemblage (e.g. olivine + garnet + orthopyroxene) and some equality constraints (typically related to bulk composition, pressure, temperature, entropy, volume, phase proportions or phase compositions) and the equilibrate function attempts to find the remaining unknowns that satisfy those constraints.

In a sense, then, the equilibrate function is simultaneously more powerful and more limited than Gibbs minimization techniques. It allows the user to investigate and plot metastable reactions, and quickly obtain answers to questions like "at what pressure does wadsleyite first become stable along a given isentrope?". However, it is not designed to create P-T tables of equilibrium assemblages. If a user wishes to do this for a complex problem, we refer them to other existing codes. BurnMan also contains a useful utility material called burnman.PerplexMaterial that is specifically designed to read in and interrogate P-T data from PerpleX.

There are a couple more caveats to bear in mind. Firstly, the equilibrate function is experimental and can certainly be improved. Equilibrium problems are highly nonlinear, and sometimes solvers struggle to find a solution. If you have a better, more robust way of solving these problems, we would love to hear from you!
Secondly, the equilibrate function is not completely free from the curse of multiple roots - sometimes there is more than one solution to the equilibrium problem, and BurnMan (and indeed any equilibrium software) may find a metastable root.

## Equilibrating at fixed bulk composition

Fixed bulk composition problems are most similar to those asked by Gibbs minimization software like HeFESTo. Essentially, the only difference is that rather than allowing the assemblage to change to minimize the Gibbs energy, the assemblage is instead fixed. In the following code block, we calculate the equilibrium assemblage of olivine, orthopyroxene and garnet for a mantle composition in the system NCFMAS at 10 GPa and 1500 K.

[1]:
import numpy as np
import matplotlib.pyplot as plt

import burnman
from burnman import equilibrate
from burnman.minerals import SLB_2011

# Set the pressure, temperature and composition
# (these two variables are not used below; the equality constraints
# set P and T directly)
pressure = 3.e9
temperature = 1500.
composition = {'Na': 0.02, 'Fe': 0.2, 'Mg': 2.0, 'Si': 1.9,
               'Ca': 0.2, 'Al': 0.4, 'O': 6.81}

# Create the assemblage
gt = SLB_2011.garnet()
ol = SLB_2011.mg_fe_olivine()
opx = SLB_2011.orthopyroxene()

assemblage = burnman.Composite(phases=[ol, opx, gt],
                               fractions=[0.7, 0.1, 0.2],
                               name='NCFMAS ol-opx-gt assemblage')

# The solver uses the current compositions of each solution as a starting guess,
# so we have to set them here
ol.set_composition([0.93, 0.07])
opx.set_composition([0.8, 0.1, 0.05, 0.05])
gt.set_composition([0.8, 0.1, 0.05, 0.03, 0.02])

equality_constraints = [('P', 10.e9), ('T', 1500.)]

sol, prm = equilibrate(composition, assemblage, equality_constraints)

print(f'It is {sol.success} that equilibrate was successful')
print(sol.assemblage)

# The total entropy of the assemblage is the molar entropy
# multiplied by the number of moles in the assemblage
entropy = sol.assemblage.S*sol.assemblage.n_moles

Warning: No module named 'cdd'. For full functionality of BurnMan, please install pycddlib.

It is True that equilibrate was successful
Composite: NCFMAS ol-opx-gt assemblage
  P, T: 1e+10 Pa, 1500 K
Phase and endmember fractions:
  olivine: 0.4971
    Forsterite: 0.9339
    Fayalite: 0.0661
  orthopyroxene: 0.2925
    Enstatite: 0.8640
    Ferrosilite: 0.0687
    Mg_Tschermaks: 0.0005
    Ortho_Diopside: 0.0668
  garnet: 0.2104
    Pyrope: 0.4458
    Almandine: 0.1239
    Grossular: 0.2607
    Mg_Majorite: 0.1258
    Jd_Majorite: 0.0437

Each equality constraint can be a list of constraints, in which case equilibrate will loop over them. In the next code block we change the equality constraints to be a series of pressures which correspond to the total entropy obtained from the previous solve.

[2]:
equality_constraints = [('P', np.linspace(3.e9, 13.e9, 21)),
                        ('S', entropy)]

sols, prm = equilibrate(composition, assemblage, equality_constraints)

The object sols is now a 1D list of solution objects. Each one of these contains an equilibrium assemblage object that can be interrogated for any properties:

[3]:
data = np.array([[sol.assemblage.pressure,
                  sol.assemblage.temperature,
                  sol.assemblage.p_wave_velocity,
                  sol.assemblage.shear_wave_velocity,
                  sol.assemblage.molar_fractions[0],
                  sol.assemblage.molar_fractions[1],
                  sol.assemblage.molar_fractions[2]]
                 for sol in sols if sol.success])

The next code block plots these properties.
[4]:
fig = plt.figure(figsize=(12, 4))
ax = [fig.add_subplot(1, 3, i) for i in range(1, 4)]

P, T, V_p, V_s = data.T[:4]
phase_proportions = data.T[4:]
ax[0].plot(P/1.e9, T)
ax[1].plot(P/1.e9, V_p/1.e3)
ax[1].plot(P/1.e9, V_s/1.e3)
for i in range(3):
    ax[2].plot(P/1.e9, phase_proportions[i],
               label=sol.assemblage.phases[i].name)

for i in range(3):
    ax[i].set_xlabel('Pressure (GPa)')
ax[0].set_ylabel('Temperature (K)')
ax[1].set_ylabel('Seismic velocities (km/s)')
ax[2].set_ylabel('Molar phase proportions')
ax[2].legend()
plt.show()

From the above figure, we can see that the proportion of orthopyroxene is decreasing rapidly and is exhausted near 13 GPa. In the next code block, we determine the exact pressure at which orthopyroxene is exhausted.

[5]:
equality_constraints = [('phase_fraction', [opx, 0.]),
                        ('S', entropy)]

sol, prm = equilibrate(composition, assemblage, equality_constraints)

print(f'Orthopyroxene is exhausted from the assemblage at {sol.assemblage.pressure/1.e9:.2f} GPa, {sol.assemblage.temperature:.2f} K.')

Orthopyroxene is exhausted from the assemblage at 13.04 GPa, 1511.64 K.

## Equilibrating while allowing bulk composition to vary

[6]:
# Initialize the minerals we will use in this example.
ol = SLB_2011.mg_fe_olivine()
wad = SLB_2011.mg_fe_wadsleyite()  # used below; this line was missing from the source excerpt
rw = SLB_2011.mg_fe_ringwoodite()

# Set the starting guess compositions for each of the solutions
ol.set_composition([0.90, 0.10])
wad.set_composition([0.85, 0.15])  # starting guess (value assumed)
rw.set_composition([0.80, 0.20])

First, we find the compositions of the three phases at the univariant.

[7]:
T = 1600.
composition = {'Fe': 0.2, 'Mg': 1.8, 'Si': 1.0, 'O': 4.0}
assemblage = burnman.Composite([ol, wad, rw], [1., 0., 0.])
equality_constraints = [('T', T),
                        ('phase_fraction', (ol, 0.0)),
                        ('phase_fraction', (rw, 0.0))]
free_compositional_vectors = [{'Mg': 1., 'Fe': -1.}]

sol, prm = equilibrate(composition, assemblage, equality_constraints,
                       free_compositional_vectors,
                       verbose=False)

if not sol.success:
    raise Exception('Could not find solution for the univariant using '
                    'provided starting guesses.')

P_univariant = sol.assemblage.pressure
phase_names = [sol.assemblage.phases[i].name for i in range(3)]
x_fe_mbr = [sol.assemblage.phases[i].molar_fractions[1] for i in range(3)]

print(f'Univariant pressure at {T:.0f} K: {P_univariant/1.e9:.3f} GPa')
print('Fe2SiO4 concentrations at the univariant:')
for i in range(3):
    print(f'{phase_names[i]}: {x_fe_mbr[i]:.2f}')

Univariant pressure at 1600 K: 12.002 GPa
Fe2SiO4 concentrations at the univariant:
olivine: 0.22
ringwoodite: 0.50

Now we solve for the stable sections of the three binary loops

[8]:
output = []
for (m1, m2, x_fe_m1) in [[ol, wad, np.linspace(x_fe_mbr[0], 0.001, 20)],
                          [ol, rw, np.linspace(x_fe_mbr[0], 0.999, 20)],
                          # the third (wad-rw) entry was truncated in the
                          # source excerpt; reconstructed here
                          [wad, rw, np.linspace(x_fe_mbr[1], 0.001, 20)]]:
    assemblage = burnman.Composite([m1, m2], [1., 0.])

    # Reset the compositions of the two phases to have compositions
    # close to those at the univariant point
    m1.set_composition([1.-x_fe_mbr[1], x_fe_mbr[1]])
    m2.set_composition([1.-x_fe_mbr[1], x_fe_mbr[1]])

    # Also set the pressure and temperature
    assemblage.set_state(P_univariant, T)

    # Here our equality constraints are temperature,
    # the phase fraction of the second phase,
    # and we loop over the composition of the first phase.
    equality_constraints = [('T', T),
                            ('phase_composition',
                             (m1, [['Mg_A', 'Fe_A'],
                                   [0., 1.], [1., 1.], x_fe_m1])),
                            ('phase_fraction', (m2, 0.0))]

    sols, prm = equilibrate(composition, assemblage,
                            equality_constraints,
                            free_compositional_vectors,
                            verbose=False)

    # Process the solutions
    out = np.array([[sol.assemblage.pressure,
                     sol.assemblage.phases[0].molar_fractions[1],
                     sol.assemblage.phases[1].molar_fractions[1]]
                    for sol in sols if sol.success])
    output.append(out)

output = np.array(output)

Finally, we do some plotting

[9]:
fig = plt.figure()
ax = [fig.add_subplot(1, 1, 1)]  # this line was missing from the source excerpt

color='purple'
# Plot the line connecting the three phases
ax[0].plot([x_fe_mbr[0], x_fe_mbr[2]],
           [P_univariant/1.e9, P_univariant/1.e9], color=color)

for i in range(3):
    if i == 0:
        ax[0].plot(output[i,:,1], output[i,:,0]/1.e9, color=color,
                   label=f'{T} K')
    else:
        ax[0].plot(output[i,:,1], output[i,:,0]/1.e9, color=color)

    ax[0].plot(output[i,:,2], output[i,:,0]/1.e9, color=color)
    ax[0].fill_betweenx(output[i,:,0]/1.e9, output[i,:,1], output[i,:,2],
                        color=color, alpha=0.2)

# the bbox kwargs were left dangling in the source; folded into the text call
ax[0].text(0.1, 6., 'olivine', horizontalalignment='left',
           bbox=dict(facecolor='white', edgecolor='white'))

ax[0].set_xlabel('p(Fe$_2$SiO$_4$)')
# How to physically print python code in color from IDLE?

I have searched for an answer to this but the related solutions seem to concern 'print'ing in the interpreter. I am wondering if it is possible to print (physically on paper) python code in color from IDLE? I have gone to: File > Print Window in IDLE and it seems to just print out a black and white version without prompting whether to print in color etc.

Edit: It seems like this might not be available, so the option is to copy the code to a text editor like SciTE and print from there - I quite like the default IDLE syntax highlighting though.

-

IDLE can't do it, but you can do it in an indirect way: Use LaTeX to format your script with the IDLE colors. As of late August 2013, the listings package supports IDLE-like colorizing for python. The following almost-minimal example will turn your script myscript.py into color PDF in the IDLE colors.

\documentclass{article}
\usepackage{listings}
\input{listings-python.prf}  % defines the python-idle-code style
\usepackage{textcomp}        % Needed for straight quotes

\lstset{
  basicstyle=\normalsize\ttfamily, % size of the fonts for the code
  language={Python},
  style={python-idle-code},
  showstringspaces=false,
  tabsize=4,
  upquote=true, % Requires textcomp
}

\oddsidemargin=-0.5cm
\textwidth=7in % This just fits 80 characters at 10pt

\begin{document}
\lstinputlisting{myscript.py}
\end{document}

You could also include code snippets in a real LaTeX document. There are two catches to using the above code:

1. You need at least listings version 1.5b (2013/08/26). It's not in Texlive 2013, but it can be downloaded from CTAN (also with the texlive manager, tlmgr).

2. At the moment there seems to be a problem with the CTAN version: The preferences file listings-python.prf is in the documentation folder for the package, not in tex/latex/listings where TeX can find it. You'll need to move it manually for the inclusion to work.

-

Here is a better answer. Use the IDLE extension called IDLE2HTML.py (search for this). This lets IDLE print to an HTML file that has color in its style sheet. Then save the HTML file to a PDF (the color will still be there).

-
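Another route the thread doesn't mention (my addition; assumes the Pygments package is installed): Pygments can colorize a script into a self-contained HTML file, which any browser can then print in color. Note that Pygments uses its own color scheme rather than IDLE's unless you define a matching style:

```python
from pygments import highlight
from pygments.lexers import PythonLexer
from pygments.formatters import HtmlFormatter

with open('myscript.py') as f:
    code = f.read()

# full=True emits a complete HTML document with the CSS inlined
html = highlight(code, PythonLexer(), HtmlFormatter(full=True))

with open('myscript.html', 'w') as f:
    f.write(html)
```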
# Combinations of multisets with finite multiplicities

The question may be of little interest to most people here on MathOverflow, but after browsing a pile of books in combinatorics, I had to ask it somewhere:

What are the most efficient formulae for calculating the number of $k$-combinations (and $k$-permutations) of multisets with finite multiplicities (i.e. combinations and permutations with repetition, but with restrictions on the number of repetitions)?

I know that generating functions are often used for solving this kind of problem, but a number of formulae have been used for such counting, such as Percy MacMahon's one ($m_i$ denotes the multiplicities of the $n$ different elements in the multiset): $$C(k;m_{1},m_{2},\ldots,m_{n})=\sum_{p=0}^{n}(-1)^{p}\sum_{1\le i_{1}< i_{2}<\cdots< i_{p}\le n}{n+k-m_{i_{1}}-m_{i_{2}}-\ldots-m_{i_{p}}-p-1 \choose n-1}$$ Are you aware of other formulae for it, or useful references in the literature?

EDIT: Clearing up the statement: a $k$-combination means simply picking $k$ elements from the multiset (order not important). A $k$-permutation is basically the same, but order is important. In the example above, the multiset is $\{ m_1\cdot a_1,m_2\cdot a_2,\ldots m_n\cdot a_n\}$, $a_i$ being the elements, $m_i$ being the multiplicities.
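MacMahon's inclusion–exclusion formula is easy to check mechanically against brute-force enumeration (my own sketch; the multiplicities below are an arbitrary toy choice):

```python
from itertools import combinations, product
from math import comb

def count_bruteforce(k, m):
    # pick x_i copies of element i, with 0 <= x_i <= m_i and sum k
    return sum(1 for xs in product(*(range(mi + 1) for mi in m))
               if sum(xs) == k)

def count_macmahon(k, m):
    n = len(m)
    total = 0
    for p in range(n + 1):
        for idx in combinations(range(n), p):
            top = n + k - sum(m[i] + 1 for i in idx) - 1
            total += (-1)**p * (comb(top, n - 1) if top >= 0 else 0)
    return total

m = (2, 3, 1)  # toy multiplicities
for k in range(sum(m) + 1):
    assert count_bruteforce(k, m) == count_macmahon(k, m)
print("MacMahon's formula agrees with brute force for m =", m)
```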
## Algebra 1 $(-2)^3$ means we are repeating the multiplication of $-2$ three times. $(-2)(-2)(-2)=4(-2)=-8$ Since $-8$ is a negative number, the value of this expression is negative.
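A one-line check (my addition):

```python
print((-2) ** 3)  # -8
```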
# Proof of Derivative of ln(x)

The proof of the derivative of the natural logarithm $\ln(x)$ is presented using the definition of the derivative. The derivative of a composite function of the form $\ln(u(x))$ is also included and several examples with their solutions are presented.

## Proof of the Derivative of $\ln(x)$ Using the Definition of the Derivative

The definition of the derivative $f'$ of a function $f$ is given by the limit $f'(x) = \lim_{h \to 0} \dfrac{f(x+h)-f(x)}{h}$
Let $f(x) = \ln(x)$ and write the derivative of $\ln(x)$ as $f'(x) = \lim_{h \to 0} \dfrac{\ln(x+h)- \ln(x) }{h}$
Use the formula $\ln(a) - \ln(b) = \ln(\dfrac{a}{b})$ to rewrite the derivative of $\ln(x)$ as $f'(x) = \lim_{h \to 0} \dfrac{\ln(\dfrac{x+h}{x})}{h} = \lim_{h \to 0} \dfrac{1}{h} \ln(\dfrac{x+h}{x})$
Use the power rule of logarithms ( $a \ln y = \ln y^a$ ) to rewrite the above limit as $f'(x) = \lim_{h \to 0} \ln \left(\dfrac{x+h}{x}\right)^{\dfrac{1}{h}} = \lim_{h \to 0} \ln \left(1+\dfrac{h}{x}\right)^{\dfrac{1}{h}}$
Let $y = \dfrac{h}{x}$ and note that $\lim_{h \to 0} y = 0$
We now express h in terms of y: $h = y x$
With the above substitution, we can write $\lim_{h \to 0} \ln \left(1+\dfrac{h}{x}\right)^{\dfrac{1}{h}} = \lim_{y \to 0} \ln \left(1+y\right)^{\dfrac{1}{y x}}$
Use the power rule of logarithms ( $\ln y^a = a \ln y$ ) to rewrite the above limit as $= \lim_{y \to 0} \dfrac{1}{x} \ln \left(1+y\right)^{\dfrac{1}{y}}$
One of the definitions of the Euler constant e is $e = \lim_{m \to 0} ( 1 + m) ^{\dfrac{1}{m}}$
Hence the limit we are looking for is given by $\lim_{h \to 0} \ln \left(1+\dfrac{h}{x}\right)^{\dfrac{1}{h}} = \lim_{y \to 0} \dfrac{1}{x} \ln \left(1+y\right)^{\dfrac{1}{y}} = \dfrac{1}{x} \ln e = \dfrac{1}{x}$
Conclusion: $\dfrac{d}{dx} \ln(x) = \dfrac{1}{x}$

## Derivative of the Composite Function $y = \ln(u(x))$

We now consider the composite natural logarithm of another function u(x). Use the chain rule of differentiation to write $\displaystyle \dfrac{d}{dx} \ln(u(x)) = \dfrac{d}{du} \ln(u(x)) \dfrac{d}{dx} u$
Simplify $= \dfrac {1}{u} \dfrac{d}{dx} u$
Conclusion $\displaystyle \dfrac{d}{dx} \ln(u(x)) = \dfrac{1}{u} \dfrac{d}{dx} u$

Example 1
Find the derivative of the composite natural logarithm functions
1. $f(x) = \ln\left(\dfrac{x^2}{x-2}\right)$
2. $g(x) = \ln (\sqrt{x^3+1})$
3. $h(x) = \ln ( x^2+2x-5 )$

Solution to Example 1
1. Let $u(x) = \left(\dfrac{x^2}{x-2}\right)$ and therefore $\dfrac{d}{dx} u = \dfrac{d}{dx} \left(\dfrac{x^2}{x-2}\right) = \dfrac{x^2-4x}{\left(x-2\right)^2}$
Apply the rule for the composite natural logarithm function found above $\displaystyle \dfrac{d}{dx} f(x) = \dfrac{1}{u} \dfrac{d}{dx} u = \dfrac{1}{\dfrac{x^2}{x-2}} \times \dfrac{x^2-4x}{(x-2)^2}$ $= \dfrac{x-2}{x^2} \times \dfrac{x^2-4x}{\left(x-2\right)^2} = \dfrac{(x-2)x(x-4)}{x^2(x-2)^2}$
Cancel common factors from numerator and denominator $= \dfrac{(x-4)}{x(x-2)}$
2. Let $u(x) = \sqrt{x^3+1}$ and therefore $\dfrac{d}{dx} u = \dfrac{d}{dx} \sqrt{x^3+1} = \dfrac{3x^2}{2\sqrt{x^3+1}}$. Apply the above rule of differentiation for the composite natural logarithm function $\displaystyle \dfrac{d}{dx} g(x) = \dfrac{1}{u} \dfrac{d}{dx} u = \dfrac{1}{\sqrt{x^3+1}} \times \dfrac{3x^2}{2\sqrt{x^3+1}}$ $= \dfrac{3x^2}{2(x^3+1)}$
3. Let $u(x) = x^2+2x-5$ and therefore $\dfrac{d}{dx} u = 2x+2$
Apply the rule of differentiation for the composite natural logarithm function obtained above $\displaystyle \dfrac{d}{dx} h(x) = \dfrac{1}{u} \dfrac{d}{dx} u = \dfrac{1}{x^2+2x-5} \times ( 2x+2)$ $= \dfrac{2x+2}{x^2+2x-5}$

## More References and links
derivative
definition of the derivative
Chain Rule of Differentiation in Calculus.
# Shear Modulus

Through most of this section I have discussed Hooke's Law and how it applies to the Young's Modulus of a material. Young's Modulus, however, is only used for normal stresses and strains. What if the object is in shear? In that case, instead of Young's Modulus, the Shear Modulus is used. The shear modulus follows the same principles as Young's Modulus; it is based on the same kind of stress–strain curve. Except that instead of the normal stresses and strains, which would typically come from pushing or pulling forces, the stress and strain here are shear stress and shear strain. The elastic region of the curve is then represented by the following equation.

(Eq 1)  $τ=Gγ$

τ = shear stress

γ = shear strain

G = Shear Modulus

Now, what if you know the Young's Modulus of the material but you don't know the Shear Modulus? You can use Poisson's Ratio to find the Shear Modulus. To do this use the following equation.

(Eq 2)  $G=\frac{E}{2(1+ν)}$

E = Young's Modulus

ν = Poisson's Ratio

G = Shear Modulus

Each material has a unique Poisson's Ratio, which can be found by looking up the material's properties. However, if you don't have access to this, Poisson's ratio is normally around 0.3 for most metals.

Finally, what exactly is Poisson's Ratio? Simply, Poisson's ratio is the ratio of transverse strain over longitudinal (or axial) strain. See the equation below.

(Eq 3)  $ν=\frac{-ε_t}{ε_l}$
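A quick worked example of Eq 2 (my addition; the material values are assumed, typical of structural steel):

```python
E = 200e9    # Young's modulus, Pa (assumed value for structural steel)
nu = 0.3     # Poisson's ratio (typical for metals, as noted above)

G = E / (2.0 * (1.0 + nu))        # Eq 2
print(f"G = {G / 1e9:.1f} GPa")   # G = 76.9 GPa
```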
# Geometry ## The Geometric Proof of Infinite Primes I was recently wondering why Euclid, the geometer, published a proof that there is an infinite number of primes. I should have known that his proof is geometric. It is: “Let A, B, and C be distinct lengths that cannot be… ## Inscribed Right Triangle Here’s a fun puzzle (via Brilliant.org): What is the area of the square $$ABCD$$? There may be a simpler approach; my solution wound up being more complicated than I expected. Since $$\Delta AEF$$ is a right triangle, $$AE = 5$$… Geometry ## Naming Variables First of all, let me get this out of the way: “Hey, you kids! Get off my lawn!” In this post, I comment on the notational shifts from what I was trained in back in the 1980s and what textbooks… ## Right Triangle Similarity Today’s lesson in my Geometry class was on the use of the geometric mean when finding missing values of right triangles. For every right triangle, two of its altitudes are the legs and the third is perpendicular to the hypotenuse…. ## Al-Jabr (continued) In my previous post, I looked at the first two detailed examples provided by al-Khowarizmi in his compendium, the title of which gives us the word “algebra”. Al-Khowarizmi discussed three types of mathematical objects: Numbers (N, constants), roots (R, unknowns),…
auctex-devel
[Top][All Lists]

## Re: [AUCTeX-devel] Questions about cleveref.el

From: Mosè Giordano
Subject: Re: [AUCTeX-devel] Questions about cleveref.el
Date: Tue, 3 Jan 2017 00:13:54 +0100

Hi Arash,

2016-12-30 11:34 GMT+01:00 Arash Esbati <address@hidden>:
> Hi all,
>
> I've just fixed 2 typos in cleveref.el and 2 things occurred to me if
> you're a `C-c RET' centric, RefTeX-plugged-into-AUCTeX user:
>
> 1) This style uses `TeX-arg-label' and not `TeX-arg-ref'. It means an
> extra RET in the "SELECT A REFERENCE FORMAT" window, which is useless
> since you've already selected a format.

You're right, `TeX-arg-ref' should be used here.

> 2) `TeX-arg-cleveref-multiple-labels' does not let you use RefTeX to
> select label(s) which is much more convenient.
>
> Item 1) is fixed easily, 2) should also be doable with a check against
> `reftex-label' and such. My question is if there was a particular
> reason to implement the code this way?

Can `reftex-label' query for more than one label? I had a similar problem in biblatex: there are macros taking more than one reference, but this isn't particularly easy to do with RefTeX, so I defined a hand-made function (`LaTeX-arg-biblatex-cites').

> And I think `(font-latex-set-syntactic-keywords)' in the fontification
> part is also not needed.

Because there are no verbatim-like macros, right? I think you're right.

While looking at cleveref.el, I remembered that in tex-style.el a custom option is defined, `LaTeX-reftex-ref-style-auto-activate', that controls whether a RefTeX reference style (for \ref & friends) should be automatically enabled. Maybe you could define a similar option (t by default) for \cite & friends, to be used for example in biblatex.

Bye,
Mosè
Tuesday, January 12, 2021

## how to find tangent line without a point

General steps to find the vertical tangent in calculus and the gradient of a curve:

Well, tangent planes to a surface are planes that just touch the surface at a point and are "parallel" to the surface at that point. Preview Activity $$\PageIndex{1}$$ will refresh these concepts through a key example and set the stage for further study.

Find the tangent equation for a curve which is perpendicular to a line. You'll need to find the derivative, and evaluate it at the given point.

How to find the vertical tangent: if you graph the parabola and plot the point, you can see that there are two ways to draw a line that goes through (1, –1) and is tangent to the parabola: up … How to find the point where the graph has a horizontal tangent line: you can also use this method to find the point of contact of a tangent to a curve when given the equation of the curve and the gradient of the tangent.

Without eliminating the parameter, find an equation of the line tangent to the curve defined parametrically by {eq}x = 2 + \ln t,\enspace y = 4e^{1-t} {/eq} at the given point.

Tangent line: how to find the tangent line at a given point, without the equation. Determine the points of tangency of the lines through the point (1, –1) that are tangent to the parabola. I just started playing with this this morning. The equation I'm using is y = x^2 - 4x - 2 and I'm looking for the equation of the tangent line at the point (4, -2). The slope of the tangent line is the instantaneous slope of the curve.

A tangent of a curve is a line that touches the curve at one point. It has the same slope as the curve at that point. Note that this gives us a point that is on the plane. On a graph, a vertical tangent runs parallel to the y-axis.

So if we increase the value of the argument of a function by an infinitesimal amount, then the resulting change in the value of the function, divided by the infinitesimal, will give the slope (modulo taking the standard part by discarding any remaining infinitesimals). When a problem asks you to find the equation of the tangent line, you'll always be asked to evaluate at the point where the tangent line intersects the graph. By knowing both a point on the line and the slope of the line, we are thus able to find the equation of the tangent line. Tangent line parallel to another line. Why can a solution to a differential equation have horizontal asymptotes?

A tangent line to a curve is a line that just touches the curve at that point and is "parallel" to the curve at the point in question.
How can I find an equation for a line tangent to a point on a parabola without using calculus?
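For the concrete curve mentioned above, a short symbolic computation (my addition) finds the tangent line at (4, −2):

```python
import sympy as sp

x = sp.symbols('x')
f = x**2 - 4*x - 2   # the curve mentioned above
x0 = 4               # point of tangency (4, -2)

slope = sp.diff(f, x).subs(x, x0)           # 2*x0 - 4 = 4
tangent = slope * (x - x0) + f.subs(x, x0)  # point-slope form
print(sp.expand(tangent))                   # 4*x - 18
```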
# Suppose that $H_1/H_2$ is Abelian. Show that $H_1 N / H_2 N$ is Abelian.

Suppose $$G$$ is a group and $$H_1$$, $$H_2$$, $$N$$ are subgroups of $$G$$. $$N$$ is a normal subgroup of $$G$$ and $$H_2$$ is a normal subgroup of $$H_1$$. Suppose that $$H_1/H_2$$ is Abelian. Show that $$H_1N/H_2N$$ is Abelian.

My current thought is to use the third isomorphism theorem and deduce that $$H_1N/H_2N$$ is a quotient of $$H_1/H_2$$. However, I just cannot prove that $$H_1N/H_2N$$ is a quotient of $$H_1/H_2$$. Is my approach wrong?

• Consider the map $H_1\to H_1N/H_2N$ defined by $h\mapsto \langle h e\rangle$. This is a surjective homomorphism whose kernel contains $H_2$... – Yu Ding Apr 22 at 3:56

This can be proven using only the definition of the quotient. Suppose $$H_1/H_2$$ is abelian; then for any $$a,b \in H_1$$ we have $$aba^{-1}b^{-1} =h \in H_2$$. Now let $$an,bm\in H_1N$$ be given. We have \begin{align} anbmn^{-1}a^{-1}m^{-1}b^{-1} & = an(a^{-1}a)bmn^{-1}a^{-1}(b^{-1}b)m^{-1}b^{-1}\\ & = n'abmn^{-1}a^{-1}b^{-1}m'\\ & = n'abmn^{-1}[(ab)^{-1}(ab)]a^{-1}b^{-1}m'\\ & = n'm''aba^{-1}b^{-1}m'\\ & = n'm''hm' \end{align} where $$n',m',m'' \in N$$ and their existence follows from the normality of $$N$$. As $$n'm''hm'\in H_2N$$ by definition, we have $$anbm(bman)^{-1} \in H_2N$$, hence $$H_1N/H_2N$$ is abelian.

Your idea is good. After you have proved that $$H_2N$$ is a normal subgroup of $$H_1N$$, you can consider the map $$H_1\to H_1N/H_2N,\qquad x\mapsto xH_2N$$ Prove this map is a surjective homomorphism. The kernel is $$H_1\cap H_2N$$. Since $$H_1\cap H_2N\supseteq H_2$$, …

A proof using the second and third isomorphism theorems goes like this: $$\frac{H_{1} N}{H_{2} N} = \frac{H_{1} H_{2} N}{H_{2} N} \cong \frac{H_{1}}{H_{1} \cap H_{2} N} \cong \frac{H_{1} / H_{2}}{(H_{1} \cap H_{2} N) / H_{2}},$$ and the latter is abelian, as a homomorphic image of the abelian group $$H_{1}/H_{2}$$. (Note that $$H_{2}$$ is indeed contained in $$H_{1} \cap H_{2} N$$.)
# A petrol tank is in the shape of a cylinder with hemispheres of the same radius attached to both ends. If the total length of the tank is $6m$ and the radius is $1m$, what is the capacity of the tank in litres?

Hint: The petrol tank is in the shape of a cylinder with two hemispheres of the same radius attached to both ends. The capacity of the tank is equal to the total volume of the tank. The volume of the tank is equal to the sum of the volumes of the cylinder and the two hemispheres.

Let us note down the given data. A petrol tank is in the shape of a cylinder with a hemisphere of the same radius attached to both ends. The total length of the tank is $6m$. The radius is $1m$. Here the radius refers to the radius of both the hemispheres and the cylinder.

So, let us draw the diagram as per the above data. As the total length of the tank is $6m$ and the radius of each hemisphere is $1m$, the length of the cylinder is $6 - 2\left( 1 \right) = 4m$.

Now the volume of the tank is equal to the sum of the volumes of the cylinder and the two hemispheres.

Volume of the cylinder $= \pi {r^2}h$. As we know, the value of $r$ is $1$ and $h$ is $4$. Substituting in the above formula, Volume of cylinder $= \pi \times {1^2} \times 4 = 4\pi$

Now, volume of a hemisphere $= \dfrac{2}{3}\pi {r^3}$. Substituting the values, we get Volume of hemisphere $= \dfrac{2}{3}\pi {\left( 1 \right)^3} = \dfrac{{2\pi }}{3}$

Now, the capacity of the tank $=$ Volume of the tank $=$ Volume of cylinder $+$ $2 \times$ Volume of hemisphere $= 4\pi + 2 \times \dfrac{{2\pi }}{3} = \dfrac{{12\pi + 4\pi }}{3} = \dfrac{{16\pi }}{3}$

We know the value of $\pi$ as $\dfrac{{22}}{7}$; if we substitute that in the above, we get Volume of the tank $= \dfrac{{16}}{3} \times \dfrac{{22}}{7} = \dfrac{{352}}{{21}} = 16.761{m^3}$

And we are asked to find the capacity of the tank in litres. We know that $1{m^3} = 1000$ litres. Applying this to the above answer, we get Volume of the tank or Capacity of the tank $= 16.761 \times 1000 = 16761$ litres.

Hence the capacity of the tank is $16761$ litres.

Note: This type of mensuration problem can be solved easily when we get a clear idea about the shapes of the solids that are involved in the body. We are able to solve this problem in other ways also. Here we calculated the volume of a hemisphere and multiplied it by two to get the volume of the two hemispheres, but since those two hemispheres are identical we can directly find the volume of a sphere instead. It will give the same answer.
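As a quick numeric cross-check (my addition), note that the figure of 16761 litres comes from taking π ≈ 22/7; with the exact value of π the capacity is about 16755 litres:

```python
from math import pi

r = 1.0          # radius of the cylinder and hemispheres, m
L = 6.0          # total length of the tank, m
h = L - 2 * r    # length of the cylindrical part, m

# cylinder + two hemispheres (together one full sphere)
volume = pi * r**2 * h + (4.0 / 3.0) * pi * r**3
print(volume)         # ~16.755 m^3 (exactly 16*pi/3)
print(volume * 1000)  # ~16755 litres; using 22/7 for pi gives the 16761 above
```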
# All Questions 497 views ### Additive ElGamal cryptosystem using a finite field I'm trying to implement a modified version of the ElGamal cryptosystem as specified by Cramer et al. in "A secure and optimally efficient multi-authority election scheme", which possesses additive ... 274 views ### Recovering state of modified RC4 key scheduling algorithm Consider this algorithm: ... 285 views ### Why is a MAC needed? I agree that for certain encryption systems or modes of operation, a MAC is indispensible. The best example are probably stream ciphers (and therefore also block ciphers in OFB or CTR mode) that ... 310 views ### Memory-hard operations in work-factor hash functions I'm playing around with work-factor hash functions, and I'm looking for a memory-hard operation to make it resistant to GPU / parallel hardware attacks. I considered a very large (i.e. 64K) s-box that ... 151 views ### Trying to find an algorithm to share portions of a key with multiple people Sorry if this question's a bit basic, but I don't know of any way to ask it concisely enough for a search... What I'm looking for is a proven means of doing the following: Create x number of ... 1k views ### Why should the RSA private exponent have the same size as the modulus? Consider the generation of an RSA key pair with a given modulus size $n$ and a known, small public exponent $e$ (typically $e = 3$ or $e = 65537$). A common method is to generate two random primes ... 489 views ### Is there a standard for OpenSSL-interoperable AES encryption? Many AES-encrypted things (files, strings, database entries, etc.) start with "Salted__" ("U2FsdGVkX1" in base64). I hear it's some sort of OpenSSL interoperability thing a b c. Is there some ... 3k views ### How does a birthday attack on a hashing algorithm work? A "normal", brute-force attack on a cryptographic hashing algorithm $H$ should have a complexity of about $2^{n}$ for a hash algorithm with an output length of $n$ bits. That means it takes about ... 900 views ### Is this a secure implementation of password reset email? I am redesigning a password reset email mechanism because the existing implementation scares the hell out of me. My goal is to generate reset codes that are: Expired Tamper Resistant Single Use ... 1k views ### Can you create a strong blockcipher with small blocksize, given a strong blockcipher of conventional blocksize? Suppose I want a strong 20-bit blockcipher. In other words, I want a function that takes a key (suppose the key is 128 bits), and implements a permutation from 20 bits to 20 bits. The set of ... 357 views ### Is truncating a hashed private key with SHA-1 safe to use as the symmetric key for AES for data at rest? I realize this is mixing the purposes between asymmetric and symmetric crypto, but I was wondering if it is safe to use a hashed, truncated private key (asymmetric) as the symmetric key for encrypting ... 285 views ### Is there any recent cryptographic algorithm especially designed for low-level processors? Most modern algorithms require relatively large amount of resources. Is there any recent (and freely usable) encryption/decryption algorithm which is specially designed for low-level microcontrollers ... 2k views ### Use of salt to hash a password In a few implementations of hashed passwords, I have seen that the length of the random salt is chosen to be, say, 10 or "some constant". Is there any specific reason why the salt is chosen to have a ... 
331 views ### Is there difference between Algebraic Homomorphic Encryption and Fully Homomorphic Encryption Schemes? Is there difference between Algebraic Homomorphic Encryption and Fully Homomorphic Encryption Schemes? 187 views ### RSA Signature - Multiple Use Weakness I cite from Fundamentals of Computer Security (Chapter 7 on Digital Signature, Paragraph 7.3 on RSA Signatures, page 289): Multiple uses of the RSA Signature scheme tend to weaken it. The way out ... 1k views ### How can I break REDSHIRT / REDSHIRT2 encryption? Recently, a user on Gaming.SE asked a question about whether the user password in the video game Uplink could be modified after being initially set. The game does not contain an option to change the ... 339 views ### Other than brute force, are there any attacks on Threefish-512 using only a single known plaintext block? As per title, other than brute force, are there any attacks on Threefish-512 using only a single plaintext block? Are there any attacks like this in any other cipher? 1k views ### What is an efficient random number generation algorithm I have been looking for the algorithm that generates random number and this algorithm has to be more secure. I am going to use this algorithm to generate the salt that will be used in PBKDF2. ... 589 views ### Is bcrypt better than GnupPG's iterated+salted hashing method? GnuPG has slow hash built-in in form of iterated+salted S2K. Does it have disadvantages in comparance with bcrypt or scrypt? Is GnuPG's slow hash method easily automated in GPUs? 330 views ### Looking for cryptographic secure hash algorithm(s) that produces identical root hash for differently sliced hash list I have a scenario similar to the one described in Wikipedia: hash list, but with a twist. I'm looking for a cryptographically secure hash function that would create the same root hash for the same ... 2k views ### Is MAC better than digital signature? MACs differ from digital signatures in the sense that MAC values are both generated and verified using a shares secret key. Does this in any way put MAC on a disadvantage as compared to digital ... 239 views ### Conforming Randomness To An Alphabet Imagine that we're trying to create a function to generate a random string conforming to a user-supplied alphabet. That way, users can generate random strings with given characters. Something like: ... 359 views ### Using “Additional Authenticated Data” as a secondary key In implementing a cipher in GCM or CCM mode, you are provided the option to add "Additional Authenticated Data" (AAD). This AAD is required for decrypting the cipher text, and seems to be used when ... 145 views ### Is it possible to ensure security with zero pre-shared information? Is it possible to secure a communications channel against both passive (sniffing) and active (injecting / MitM) attackers without either legitimate party knowing any pre-shared information? I know ... 204 views ### Signing 14 bytes of data for an embedded device I need to sign a 14-byte string and want to verify that string on the device. Since there is already an AES-Library on the device, I thought about using the following scheme: ... 338 views ### RC4 S-Box and Keystream I'm studying the RC4 algorithm and I have the following questions: On all questions assume that an expanded (2048-bit) key is used, and that the first 4096 bytes of the KeystreamIm are discarded. ... 784 views ### AES-CMAC passes every test except two i wrote this code: ... 3k views ### why do we need Diffie Hellman? 
my question from stackoverflow: http://stackoverflow.com/questions/11374592/why-do-we-need-diffie-hellman Diffie–Hellman offers secure key exchange only if sides are authenticated. for ... 185 views ### How to communicate authentication tag for GCM? I have written some code to do AES in GCM. I currently manually append the tag property to the ciphertext, is this the proper way to communicate the authentication tag? 2k views ### Padding methods for block ciphers - PKCS7 vs ANSI X.923 I was looking through block cipher padding methods, and found two good candidates: ANSI X.923 - pad with zeros, then a final byte for the padding length, e.g. ... 1k views ### Modern integer factorization software What are the modern software packages that can be used to factoring large numbers into primes. By modern I mean developed and made public within the last 5 years. I'm interested in things that are ... 9k views ### Signatures: RSA compared to ECDSA I'm signing very small messages using RSA, and the signature and public key are added to every message, which requires a lot of space compared to the actual content. I'm considering switching to ... 734 views ### AES GCM implementation in c# I am implementing an AES cipher in GCM mode in c# and have a few questions. My code is based on the code found here for reference. I'll copy in the relevant portions. ... 137 views ### Combatting traffic shape analysis with spurious packets I was reading a question about combatting traffic analysis, and a thought occured. If I send random junk messages periodically, would that defeat traffic analysis? The messages would contain some ... 3k views ### RC4 Keylength Limits While reading the Wiki page on RC4 I noticed that the key size must be in the range of 40–2,048 bits. Why is that? Is there a reason it can't have a lower or higher length? 361 views ### What differentiates a password hash from a cryptographic hash besides speed? I understand that password hashes like bcrypt have the principal property of taking a long time to run, but I'm wondering what if anything about password hashes make them superior to merely running a ... 277 views ### Untraceable communication protocol I am doing a research about secure communication protocols. I would be interested to know whether a protocol exists such that it grants that the two end-points taking part to the communication cannot ... 319 views ### Encryption with private key? we normally always encrypt by public key and decrypt with private key. If i encrypt with private key, then its still secure as normal PKI ? i mean known-plain-text will not take private key on the ... 367 views ### RSA algorithm's license free or paid? I checked RSA's patent application, which was registered in 1983. As patents don't last more than 20 years, it seems to me it should be free. But my friend said to use RSA I have to buy a license from ... 309 views ### Using a derived key for CMAC Consider the following authenticate-and-encrypt scheme that uses AES-128 in CBC mode for encryption and AES-128 - based CMAC for authentication: Two keys are derived from the master key k (16 byte): ... 1k views ### Is this design of client side encryption secure? I want to build a secure file storage web application. Users should be sure that server doesn't know how to decrypt files so encryption should take place at client side (i.e. in Javascript) and TLS ... 
182 views ### Proper formatting of symmetric algorithm secret key Given this description from RFC 4880 sec 5.1: The value "m" in the above formulas is derived from the session key as follows. First, the session key is prefixed with a one-octet algorithm ... 1k views ### Why does CBC decryption with a wrong IV still give readable results? While developing some code that uses the .NET AesManaged algorithm, I made some mistakes but was surprised at the results. My encryption was correct. I was generating a random IV block and writing ... 4k views ### How can one securely generate an asymmetric key pair from a short passphrase? Background info: I am planning on making a filehost with which one can encrypt and upload files. To protect the data against any form of hacking, I'd like not to know the encryption key ($K$) used for ... 328 views ### How can one share information using the 'host-proof' paradigm? I am attempting to make a web-based secure password management and sharing utility, both as an academic exercise and to fully understand and feel safe about using it. I really like the idea of a ... 855 views ### Encryption scheme for social-network-like data sharing data via untrusted server? I am thinking quite a lot lately abut the problem of secure, privacy-preserving social networking. Distributing the network among trusted, preferably self-hosted servers (like Diaspora, GNU Social ... 414 views ### Multiple Hash Functions that work in either nesting Are there any hashing functions that, if two are used in conjunction (with the same salts) will return the same response regardless of ordering? I.e. are there hash-functions $H_1$, $H_2$ such that ... 324 views ### I need an opinion of encryption method I thought of in High school First, I'm really not into cryptography, but have some basic knowledge. This was a thought experiment (and later exercise for my programming skills), but even though it was long time ago and I tried ... Background One issue with modern security proofs is that they are usually asymptotic. In other words, such proofs are usually formulated as follows: For any polynomial-time adversary $\mathcal A$, we ...
# Intro

New research in the journal Nature has demonstrated that many of the drugs we take every day may have hitherto unknown microbiome effects. Using high-throughput techniques, researchers measured the growth rate of 38 species of common gut bacteria after exposure to over 1,000 drugs at low concentrations. These drugs included many common antipsychotics and even OTC medications like aspirin. They found that almost a quarter of the compounds surveyed inhibited at least one species of bacteria and noted that this is probably an underestimate of their effects.

I must stress that this data is preliminary, in-vitro, and has possible conflicts of interest. So don't freak out and stop taking your prescription medications because of one study with relatively small effect sizes. Consult your doctor before you do anything with these data!

# The Study

Researchers simulated the effect of small concentrations of common drugs on gut bacteria by measuring their growth rate in-vitro. Thirty-eight species of bacteria were chosen to represent the diversity of the human microbiome within the constraints of high-throughput testing.

All 38 species are found in the gut of healthy individuals and are part of a larger strain resource panel for the healthy human gut microbiome. 1

The species chosen included some disease-causing species such as Clostridium difficile and Fusobacterium nucleatum, which cause "C. diff" infections and contribute to periodontal disease, respectively. A common probiotic, Lactobacillus paracasei, was also tested. Similarly, the compounds were chosen to represent a broad array of drug classes (anti-diabetics, antipsychotics, NSAIDs, etc.).

Most compounds are administered to humans (1,079), and they cover all main therapeutic classes (Supplementary Table 1).1

Unfortunately, only off-patent drugs were screened here, which limits direct comparison of these results to some of the most commonly prescribed drugs today. Furthermore, some of the drugs were tested at lower concentrations than a typical dose would produce, due to technique constraints.

In summary, we probed human-targeted drugs largely within physiologically relevant concentrations and our data are likely to under-report the impact of human-targeted drugs on gut bacteria.1

For instance, researchers estimated that the compound Fluvastatin (a cholesterol-lowering statin; trade name Lescol) reaches approximately 30uM concentration in the human gut, while they were only able to test it at 20uM.

Researchers measured the change in growth rate of typical human gut bacteria upon exposure to over 1,000 common drugs – everything from acetaminophen to Zuclopenthixol.

# Results

A most shocking statistic is the sheer number of drugs that could inhibit the growth of gut bacteria despite being classified as non-antibiotic.

Notably, 27% of the non-antibiotic drugs were also active in our screen.1

Somewhat expectedly, compounds that are used to treat viruses, parasites, and the like are more likely to be antibiotic.

More than half of the anti-infectives against viruses or eukaryotes exhibited anticommensal activity (47 drugs; Fig.
1a, b).1

Even though many common drugs weren't tested due to patent laws, we can draw some conclusions about the possibility of microbiome interactions from chemical similarity:

Drugs from all major ATC indication areas exhibited anticommensal activity, with antineoplastics, hormones and compounds that target the nervous system inhibiting gut bacteria more than other medications (Extended Data Figs 9a, 10).1

All these data point to some fairly alarming consequences, so the researchers attempted to match their in-vitro data with the small amount of in-vivo data in the literature.

Nonetheless, we find high concordance between the effects of drugs in vitro and in humans, confirming clinical relevance and direct anticommensal activity for the aforementioned cases.1

Two notable drugs that correlate well with the in-vivo data are metformin (for type-II diabetes) and omeprazole (for heartburn). Twenty-seven per cent of the tested drugs slowed the growth of at least one species, and all classes of drugs showed at least some activity.

# Discussion

While the results are notable in and of themselves, the widespread implications are even more so. The researchers comment on the pharmaceutical industry as a whole:

Moreover, one could speculate that pharmaceuticals, used regularly in our times, may be contributing to a decrease in microbiome diversity in modern Western societies.1

And on the possibility that bacterial inhibition from antipsychotics may not be a bug, but a feature:

This raises the possibility that direct bacterial inhibition may not only manifest as side effect of antipsychotics, but also be part of their mechanism of action.1

Even more alarming is the implication that the development of antibiotic-resistant superbugs may not be entirely due to antibiotics themselves.

All of these results point to an overlap between resistance mechanisms against antibiotics and against human-targeted drugs, implying a hitherto unnoticed risk of acquiring antibiotic resistance by consuming non-antibiotic drugs.1

It's not all bad, however. C. diff is a common hospital-acquired infection and it is notoriously difficult to get rid of. Perhaps this research might open up new ways to combat some antibiotic-resistant infections by combining the typical antibiotic course with one of the promising, non-antibiotic drugs found in this study.

These previously unknown drug interactions with the microbiome might be positively contributing to some medications' therapeutic benefits. On the other hand, they might also be adding to the decline of Western microbiome diversity, and even increasing antibiotic resistance.

# Some Caveats

As I said in the intro, don't go running off to dump all your drugs down the toilet. First of all, that's a poor way to dispose of your pharmaceuticals. Secondly:

However, before any translational application can be pursued, our in vitro findings need to be tested rigorously in vivo (in animal models, pharmacokinetic studies and clinical trials) and understood better mechanistically.1

One note on the potential conflicts of interest: the lab where the research was conducted has filed for patents directly profiting from the results of this study. While this is not uncommon (many professors and researchers sign agreements with their employers giving away intellectual property rights), it is worth noting.
[European Molecular Biology Laboratory] has filed two patent applications on repurposing compounds identified in this study for the treatment of infections and for modulating the composition of the gut microbiome, and on the use of the in vitro model of the human gut microbiome to study the impact of xenobiotics…1

A lot of research is still needed to determine the real impact of these drugs on your microbiome.

# My Conclusions

This is quite exciting research. It could explain some of the common side effects in clinical trials with unknown pathophysiology. This data may give us some much needed insight on the variability of drug reactions and new avenues of research to pursue in the future.

As for me, I've been considering eating more resistant starch2 lately and this research just convinced me. We really have a lot to learn about the bacterial friends we carry with us our whole lives.

So what do you think about all this? Ready to throw away your drugs? Is this a whole lot of nonsense? Let me know your thoughts in the comments!

# Bibliography

1 Maier, Lisa, Mihaela Pruteanu, Michael Kuhn, Georg Zeller, Anja Telzerow, Exene Erin Anderson, Ana Rita Brochado, et al. "Extensive Impact of Non-Antibiotic Drugs on Human Gut Bacteria." Nature, March 19, 2018. https://doi.org/10.1038/nature25979.

2 Yang, Xiaoping, Kwame Oteng Darko, Yanjun Huang, Caimei He, Huansheng Yang, Shanping He, Jianzhong Li, Jian Li, Berthold Hocher, and Yulong Yin. "Resistant Starch Regulates Gut Microbiota: Structure, Biochemistry and Cell Signalling." Cellular Physiology and Biochemistry 42, no. 1 (2017): 306–18. https://doi.org/10.1159/000477386.
Robocopy Explained

Robocopy
• Developer: Microsoft
• Operating System: Windows NT, Windows 2000, Windows XP, Windows Server 2003, Windows Vista, Windows 7, Windows Server 2008
• Latest Release Version: 6.1
• License: Proprietary

Robocopy, or "Robust File Copy", is a command-line directory replication command. It has been available as part of the Windows Resource Kit starting with Windows NT 4.0, and was introduced as a standard feature of Windows Vista, Windows 7 and Windows Server 2008. The command is robocopy.

Features

Robocopy is notable for capabilities above and beyond the built-in Windows copy and xcopy commands, including the following:

• Ability to tolerate network interruptions and resume copying. (Incomplete files are marked with a date stamp of 1980-01-01 and contain a recovery record so Robocopy knows where to continue from.)
• Ability to skip junction points, which can cause copying to fail in an infinite loop (/XJ).
• Ability to copy file data and attributes correctly, and to preserve original timestamps, as well as NTFS ACLs, owner information, and audit information, using command line switches (/COPYALL or /COPY:). Copying folder timestamps is also possible in later versions (/DCOPY:T).
• Ability to assert the Windows NT "backup right" (/B) so an administrator may copy an entire directory, including files denied readability to the administrator.
• Persistence by default, with a programmable number of automatic retries if a file cannot be opened.
• A "mirror" mode, which keeps trees in sync by optionally deleting files out of the destination that are no longer present in the source.
• Ability to skip files that already appear in the destination folder with identical size and timestamp.
• A continuously updated command-line progress indicator.
• Ability to copy file and folder names exceeding 256 characters — up to a theoretical limit of 32,000 characters — without errors.[1]
• Multithreaded copying. (Windows 7 only) [2]
• Return code[3] on program termination for batch file usage.

Notably, Robocopy will fail to copy open files. The so-called Backup mode is sometimes mistaken for an ability to copy open files, which it is not. Backup mode is an administrative privilege that allows Robocopy to override permissions settings (specifically, NTFS ACLs) for the purpose of making backups. The Windows Volume Shadow Copy service is the only Windows subsystem that can copy open files, which it does by snapshotting them for point-in-time consistency. Robocopy does not use the Volume Shadow Copy service in any way, limiting its usefulness as a stand-alone backup utility for volumes that may be in use. However, one can use a separate utility, such as VSHADOW or DISKSHADOW (included with Windows Server 2008), to create a shadow copy of a given volume, which Robocopy can then be directed to back up.

On the other hand, by design, the original Robocopy version is not able to replicate security attributes of files which have had their security permissions changed after an initial mirroring.[4] This behavior was changed in the Robocopy versions included with Windows Server 2008 and Windows Vista. The downside is that Robocopy does not behave consistently between platforms.[5]

Robocopy cannot exclude files using a wildcard that includes a directory: e.g. /XF pictures\*.jpg generates an error. Robocopy also cannot exclude folders from the root only: e.g. /XD Music excludes both \Music and \Users\Name\Music, while /XD \Music excludes nothing.
Common usage scenarios

• Copy the directory contents of C:\A to C:\B (including file data, attributes and timestamps), recursively, including empty directories (/E):

Robocopy C:\A C:\B /E

• Copy the directory recursively (/E), copy all file information (/COPYALL, equivalent to /COPY:DATSOU, D=Data, A=Attributes, T=Timestamps, S=Security=NTFS ACLs, O=Owner info, U=aUditing info), do not retry locked files (/R:0; the default number of retries on failed copies is 1 million), and preserve the original directories' timestamps (/DCOPY:T - requires version XP026 or later):

Robocopy C:\A C:\B /COPYALL /E /R:0 /DCOPY:T

• Mirror A to B, destroying any files in B that are not present in A (/MIR), and copy files in restartable mode (/Z) in case the network connection is lost:

Robocopy C:\A \\backupserver\B /MIR /Z

For the full reference, see the Microsoft TechNet Robocopy page. Note that using the /Z switch results in a marked slowdown of copy operations; see the community content section of the TechNet reference.

Folder copier, not file copier

Robocopy syntax is markedly different from standard copy commands, as it accepts only folder names as its source and destination arguments. File names and wild-card characters are not valid source or destination arguments. Files may be selected or excluded using the optional filespec filtering argument. Filespecs can only refer to the filenames relative to the folders already selected for copying. Fully qualified path names are not supported.

For example, in order to copy the file foo.txt from directory c:\bar to c:\baz, one could use the following syntax:

Robocopy c:\bar c:\baz foo.txt

Bandwidth throttling

Robocopy's "inter-packet gap" (IPG) option allows some control over the network bandwidth utilized in a session. In theory, the following formula expresses the delay (D, in milliseconds) required to simulate a desired bandwidth (BD, in kilobits per second) over a network link with an available bandwidth of BA kbps:

D = ((BA − BD) / (BA × BD)) × 512 × 1000

In practice, however, some experimentation is usually required to find a suitable delay, due to factors such as the nature and volume of other traffic on the network. The methodology employed by the IPG option may not offer the same level of control provided by some other bandwidth throttling technologies, such as BITS (which is utilized by Windows Update and BranchCache).

GUI front-end

Although Robocopy itself is a command-line tool, Microsoft TechNet has provided a GUI front-end. The GUI requires the installation of the .NET Framework 2.0 (40 MB), if it is not already installed. It was developed by Derk Benisch, a systems engineer with the MSN Search group at Microsoft.[6] The Microsoft Robocopy GUI also includes version XP026 of Robocopy. When downloaded from the TechNet link below, the version reported is "Microsoft Robocopy GUI 3.1.1."

There are other non-Microsoft GUIs for Robocopy:

• "WinRoboCopy", revision 108, released on August 10, 2011.[7]
• A program by SH-Soft, also called "Robocopy GUI", v1.0.0.24 (October 8, 2005).[9]

A copying program with a GUI, RichCopy, is also available on Microsoft's TechNet.
While it is not based on Robocopy, it offers similar features, and it does not require the installation of the .NET 2.0 framework.[10]

Versions

Product version | File version | Year | Origin | Other
1.70 | - | 1997 | Windows NT Resource Kit |
1.71 | 4.0.1.71 | 1997 | Windows NT Resource Kit |
1.95 | 4.0.1.95 | 1999 | Windows 2000 Resource Kit |
1.96 | 4.0.1.96 | 1999 | Windows 2000 Resource Kit | (c) 1995-1997
XP010 | 5.1.1.1010 | 2003 | Windows 2003 Resource Kit |
XP027 | 5.1.10.1027 | 2008 | Bundled with Windows Vista, Server 2008 and later | (c) 1995-2004
6.1 | 6.1.7601 | 2009 | Bundled with Windows 7 | (c) 2009
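The inter-packet-gap arithmetic from the bandwidth throttling section above is easy to get wrong by hand, so here is a small sketch of the calculation. This is my own illustration, not a Microsoft tool; the function and variable names are mine.

```python
# Sketch of the IPG delay formula described above (illustration only):
# delay in milliseconds needed to simulate bandwidth B_D (kbps) on a
# link with available bandwidth B_A (kbps), per 512-byte block.
def ipg_delay_ms(available_kbps: float, desired_kbps: float) -> float:
    b_a, b_d = available_kbps, desired_kbps
    return (b_a - b_d) / (b_a * b_d) * 512 * 1000

# e.g. throttle a 100 Mbit/s link down to roughly 10 Mbit/s:
print(round(ipg_delay_ms(100_000, 10_000)))  # ~46 ms per block
```

As the article notes, in practice some experimentation is usually required; treat the computed value only as a starting point for Robocopy's IPG setting.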
• anonymous

A ball is thrown across a playing field from a height of h = 5 ft above the ground at an angle of 45° to the horizontal at the speed of 20 ft/s. It can be deduced from physical principles that the path of the ball is modeled by the function

y = −(32/20²)x² + x + 5

where x is the distance in feet that the ball has traveled horizontally.

(a) Find the maximum height attained by the ball. (Round your answer to three decimal places.)
(b) Find the horizontal distance the ball has traveled when it hits the ground. (Round your answer to one decimal place.)

Mathematics
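The thread ends without a worked answer. As a sketch (my own addition, not from the original poster): with a = −32/20² = −0.08, the vertex of y = ax² + x + 5 sits at x = −1/(2a), and the landing point is the positive root of y = 0.

```python
# Sketch of the computation (my addition, not part of the original thread).
import math

a, b, c = -32 / 20**2, 1.0, 5.0           # y = a*x^2 + b*x + c, with a = -0.08

# (a) Maximum height: the parabola peaks at x = -b / (2a).
x_peak = -b / (2 * a)
y_peak = a * x_peak**2 + b * x_peak + c
print(f"max height = {y_peak:.3f} ft")     # 8.125 ft (reached at x = 6.25 ft)

# (b) Horizontal distance at landing: the positive root of y = 0.
x_land = (-b - math.sqrt(b**2 - 4 * a * c)) / (2 * a)
print(f"lands after {x_land:.1f} ft")      # ~16.3 ft
```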
# Repeated DB connection at max user connections

I am trying to create a class that connects to a MySQL database. If the maximum number of connections has been reached, I want to wait and try again. I figured out how it can work, but I am not sure if it's the right way of doing it. So, my code looks like this:

<?php
class DbConnect
{
    private $attemps = 1;
    private $errorCode;
    private $maxAttemps = 10;
    private $pdo;
    private $dsn = "mysql:host=localhost;dbname=...;charset=utf8";
    private $options = [
        PDO::ATTR_EMULATE_PREPARES => false, // turn off emulation mode for "real" prepared statements
        PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION, // turn on errors in the form of exceptions
        /*PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC,*/ // make the default fetch be an associative array
    ];

    public function __construct()
    {
        $this->tryConnect();
        while ($this->errorCode == 1040 && $this->attemps <= $this->maxAttemps) {
            usleep(pow(2, $this->attemps) * 10000);
            $this->tryConnect();
        }
    }

    protected function tryConnect()
    {
        try {
            $this->pdo = new PDO($this->dsn, "root", "", $this->options);
        } catch (Exception $e) {
            error_log($e->getMessage());
            $this->errorCode = $e->getCode();
            $this->attemps++;
        }
    }
}
?>

Is there anything I should change? Thank you very much.

• The class has no methods to work with the open connection. And if the connection fails 10 times, it claims to have succeeded but actually it is not connected. So I would say the code does not work correctly, and the question is probably missing some context... – slepic Nov 22 at 6:18
• Anyway, if the connection pool is full you probably don't want PHP request processes to stack up and make an even larger queue for open DB connections. The 10 attempts can take up to circa 20 seconds; if the client closes the connection, the server does not, and so the PHP process keeps waiting for the DB although nobody is waiting for the data any more. This might be a good thing in a background task that never runs multiple times simultaneously. But not for a client request handler. At least not in general. – slepic Nov 22 at 6:27
• I can add a public property which will indicate whether it's connected or not, and cut the connection by setting the Db object to null. I understand what you mean, but I have no idea how to make a queue on the server and then tell clients to connect, because it's normal web hosting.. – Tomáš Kretek Nov 22 at 9:40
• You had better just inform the client that you cannot fulfill their request (with a 5xx response status) and have them retry later at their own will. Most users probably won't wait for the page to load after a few seconds of nothing happening. They will just hit F5 or navigate away not knowing what's wrong, probably thinking the page is broken and never coming back. – slepic Nov 22 at 9:56
• You think 509 Bandwidth Limit Exceeded (Apache)? – Tomáš Kretek Nov 22 at 10:01

Your code has several problems. Specifically, it breaks at least two of the SOLID principles, namely the single responsibility and dependency inversion principles.

Single responsibility is violated because the class is responsible for:

• connecting to the DB
• retrying the connection
• whatever other methods provided by the class you did not show us

Dependency inversion is violated because the class uses hardcoded values (although stored in instance properties).

Let me first separate out the first responsibility. We don't really need OOP for that, so let me do it in FP style.
function createMyPdo(): \PDO
{
    $dsn = ...;
    $user = ...;
    $password = ...;
    $options = ...;
    return new \PDO($dsn, $user, $password, $options);
}

This function now opens a connection with hardcoded credentials. Although the credentials are hardcoded (we would do better to pull those values from environment variables), it at least has just one responsibility, and that is to open the specific database connection.

Now let's implement the retry logic as a separate thing that only depends on something that creates a PDO instance.

function createPdoWithRetry(callable $factory, int $attempts): \PDO
{
    $attemptsMade = 0;
    while ($attempts > $attemptsMade) {
        try {
            return $factory();
        } catch (\PDOException $e) {
            if ($e->getCode() === 1040) {
                \usleep(\pow(2, $attemptsMade) * 10000);
                ++$attemptsMade;
            } else {
                throw $e;
            }
        }
    }
    throw new \RuntimeException("All $attemptsMade retry attempts exhausted.");
}

Now I can easily open the connection with retry logic:

$pdo = createPdoWithRetry(fn () => createMyPdo(), 20);

And I can also open a different connection using the same thing:

$pdo2 = createPdoWithRetry(fn () => createOtherPdo(), 20);

This approach still has some caveats:

• where createMyPdo could only throw PDOException, createPdoWithRetry can also throw a generic RuntimeException.
• the $factory parameter has a very vague type that we have to describe in a docblock and that cannot be enforced by the PHP runtime.
• createPdoWithRetry knows about error code 1040, which is MySQL-specific, but the knowledge that MySQL is used belongs only to the createMyPdo function.

Let's make it better by going back to OOP style and introducing an interface for the connection factory, including specific exceptions to be thrown (which will abstract away the MySQL-specific 1040 error code).

class DatabaseConnectionException extends \Exception
{
}

class TooManyConnectionsException extends DatabaseConnectionException
{
}

interface DatabaseConnector
{
    /**
     * @return \PDO
     * @throws DatabaseConnectionException
     * @throws TooManyConnectionsException
     */
    public function createPdo(): \PDO;
}

As you can see, I added some @throws annotations to:

• inform implementors of the interface which exceptions they may throw
• inform consumers of the interface which exceptions they can catch

Although PHP cannot enforce the actual type of exceptions thrown from the implementations, documenting the expectations makes it less likely that someone will implement it wrong.

Now let's make an implementation for our specific connection:

class MyDatabaseConnector implements DatabaseConnector
{
    public function createPdo(): \PDO
    {
        $dsn = ...;
        $user = ...;
        $password = ...;
        $options = ...;
        try {
            return new \PDO($dsn, $user, $password, $options);
        } catch (\PDOException $e) {
            if ($e->getCode() === 1040) {
                throw new TooManyConnectionsException($e->getMessage(), (int) $e->getCode(), $e);
            } else {
                throw new DatabaseConnectionException($e->getMessage(), (int) $e->getCode(), $e);
            }
        }
    }
}

Now maybe you see why the credentials should not be hardcoded: the same code would work for any credentials. By hardcoding the credentials, we are doomed to repeat some code if we want to open a different connection. But I leave it up to you to find your way to the dependency inversion principle. Basically, the constructor of a class should accept the things that the class instances need to know, rather than having the instances decide on their own what those values should be (i.e. pass the credentials through the constructor, rather than hardcoding the credentials within the class).
Anyway, to implement the retry logic we will use the same interface:

class RetryingDatabaseConnector implements DatabaseConnector
{
    private DatabaseConnector $factory;
    private RetryDelayStrategy $delayStrategy;
    private int $maxAttempts;

    public function __construct(DatabaseConnector $factory, RetryDelayStrategy $delayStrategy, int $maxAttempts)
    {
        $this->factory = $factory;
        $this->delayStrategy = $delayStrategy;
        $this->maxAttempts = $maxAttempts;
    }

    public function createPdo(): \PDO
    {
        $attemptsMade = 0;
        while ($this->maxAttempts > $attemptsMade) {
            try {
                return $this->factory->createPdo();
            } catch (TooManyConnectionsException $e) {
                ++$attemptsMade;
                $this->delayStrategy->wait($attemptsMade);
            }
        }
        throw new TooManyConnectionsException("All $attemptsMade retry attempts exhausted.");
    }
}

As you can see, I have also extracted the delay logic into a separate interface, because exponentially growing gaps between attempts is just one of many possible strategies. And again, you don't want to implement a new connector that would repeat a lot of the logic just to change the delay strategy.

interface RetryDelayStrategy
{
    public function wait(int $attemptsMade): void;
}

class ExponentialRetryDelayStrategy implements RetryDelayStrategy
{
    private int $coefficient;
    private int $base;

    public function __construct(int $coefficient, int $base = 2)
    {
        $this->coefficient = $coefficient;
        $this->base = $base;
    }

    public function wait(int $attemptsMade): void
    {
        \usleep(\pow($this->base, $attemptsMade) * $this->coefficient);
    }
}

And finally some consumer code: a controller that opens a database connection (to do something with it) and gives a user-friendly message if the connection fails, and a more specific one if the connection fails specifically because of too many open connections.

class MyController extends BaseController
{
    private DatabaseConnector $connector;

    public function __construct(DatabaseConnector $connector)
    {
        $this->connector = $connector;
    }

    public function myControllerAction(Request $request): Response
    {
        try {
            $pdo = $this->connector->createPdo();
        } catch (TooManyConnectionsException $e) {
            return $this->send(503, "Too many database connections.");
        } catch (DatabaseConnectionException $e) {
            return $this->send(503, "Database unavailable.");
        }
        // do something with PDO here
        return $this->send(200, "All done");
    }
}

Now, you see the controller just depends on a DatabaseConnector. It doesn't really care if and how many times the connector will retry. It just cares that it can fail in a specific (TooManyConnectionsException) or a further unspecified (DatabaseConnectionException) way, or, if it succeeds, that it returns a PDO instance. And it will work no matter what connector you pass to it, as long as the interface is implemented correctly. So you can change the implementation without touching the controller code, again thanks to the single responsibility and dependency inversion principles. No matter which is used in the DI setup:

$myConnector = new MyDatabaseConnector();
// or maybe, if we pull it to envs:
// $myConnector = new MyDatabaseConnector($_ENV['DB_DSN'], $_ENV['DB_USER'], $_ENV['DB_PASS']);
$connector = $myConnector;

or

$delayStrategy = new ExponentialRetryDelayStrategy(10000, 2);
$retryingConnector = new RetryingDatabaseConnector($myConnector, $delayStrategy, 20);
$connector = $retryingConnector;

the controller code stays untouched:

$controller = new MyController($connector);
...
$controller->run();

Also notice how I avoided setting an errorCode and checking it in the controller. Instead we just throw exceptions.
This forces the caller to actually handle the error or let it bubble up the stack and eventually stop execution of the program. This is the preferred behaviour, because if you forgot to check the error code, it really would not make sense to try to send SQL queries over a "not opened connection".

By the way, if you are interested in design patterns, you can notice that:

• DatabaseConnector and its implementations are the factory method pattern
• RetryingDatabaseConnector and the interface follow the decorator pattern
• RetryDelayStrategy and its implementations are the strategy of the strategy design pattern, and RetryingDatabaseConnector is the strategy consumer

• Thank you very much for your loooong reply. But now I don't really know what to do, because this seems like a whole new level of programming. I went through interfaces and extending classes with classes etc. in the past, but this whole logic is something.. I don't know how to say it. It seems like it comes from big real-world practice, which I don't have and have no options to get. – Tomáš Kretek Nov 23 at 10:39
• @TomášKretek yeah, I thought it might be too high-level for you. Take your time, read it several times and try to understand the individual parts... The big message is that you should split your code into reusable pieces with single responsibilities. Don't try to stick everything in one class. The program flow comes in stages (gather config, create objects, work with objects); don't be shy to represent each stage as a separate interface/class. Think what a class needs to do to fulfill its purpose, then outsource the rest: ask for it via the constructor, but let the instantiator decide what to pass in – slepic Nov 23 at 11:05
• And only ask for what you really need. If you ask for something with a dozen methods and only use one of them, then something is probably wrong. Your class was doing a lot of stuff, and probably has methods that you didn't include. But what consumers really want is to just get a PDO instance so they can work with it. That's the reason why you don't see any wrapper around the PDO instance in my implementation; it simply is not necessary, because the other stages of the flow have already been resolved elsewhere... – slepic Nov 23 at 11:06
Volume 398 - The European Physical Society Conference on High Energy Physics (EPS-HEP2021) - T10: Searches for New Physics

Precision predictions for scalar leptoquark pair production at the LHC

C. Borschensky*, B. Fuks, A. Kulesza and D. Schwartländer

Abstract

We present precision predictions for scalar leptoquark pair production at the LHC. Apart from QCD contributions, included are the lepton $t$-channel exchange diagrams relevant in the light of the recent $B$-flavour anomalies. All contributions are evaluated at next-to-leading order in QCD and improved by resummation, in the threshold regime, of the corrections from soft-gluon radiation at the next-to-next-to-leading-logarithmic accuracy. All corrections are found equally relevant. Furthermore, the impact of different sets of parton distribution functions is discussed. These predictions constitute the most precise leptoquark cross section calculations available to date and are necessary for the best exploitation of leptoquark LHC searches.
Galaxy Clustering Statistics of Medium-Deep Survey WFPC1 and WFPC2 Images

Session 62 -- Large Scale Structures
Display presentation, Thursday, 2, 1994, 10:00-11:30
[62.01]

L. W. Neuschaefer, S. C. Casertano, R. E. Griffiths, K. U. Ratnatunga (Johns Hopkins University), R. A. Windhorst (Arizona State University), R. S. Ellis, G. Gilmore (IoA, Cambridge), R. F. Green (NOAO), J. P. Huchra (CfA), G. D. Illingworth, D. C. Koo (UCSC), J. A. Tyson (Bell Labs)

We extend the analysis of the clustering properties of faint galaxies observed in WFPC1 and WFPC2 images obtained by the Medium-Deep Survey Key project. MDS fields are more than 90\% complete for limiting V magnitudes from 22 to 25 mag. The angular two-point correlation function is measured down to 0.5 arcsec scales, and Nth-nearest neighbor statistics are measured from 50 WFPC1 fields and 20 WFPC2 fields. This information is used to place limits on the rate of galaxy merging at redshifts in the range 0.3--0.7. Preliminary results show: 1) very weak clustering at V$\sim$25 mag, consistent with earlier deep surveys, which may suggest contraction of galaxy clusters in proper coordinates; and 2) evidence for a small excess of galaxy pairs at $\leq 100 h^{-1}$ kpc, which suggests only a moderate merging rate out to z$\leq$0.7. Using 35 WFPC1 fields with imaging in both V (F555W) and I (F785LP), we examine trends of clustering versus color. For a subset of these fields, measured galaxy parameters, determined from maximum-likelihood fits of disk and bulge models, are used to examine trends of clustering versus galaxy type. We also examine differences between WFPC1 and WFPC2 imaging as it relates to distinguishing interacting galaxies from HII regions within galaxies.
# Multiplying Numbers in Scientific Notation – Technique & Examples

Extremely small and large numbers can be difficult to record and compute, so they are often written in a shorter form known as scientific notation.

To write a number in scientific notation, if the given number is greater than or equal to 10, the decimal point is moved to the left, and so the power of 10 becomes positive. For example, the speed of light is said to be 300,000,000 meters per second. This number can be represented in scientific notation as 3.0 × 10^8.

Writing numbers in scientific notation not only simplifies them, but also makes them easier to multiply. In this article, we are going to learn how to carry out the multiplication operation with numbers in scientific notation.

## How to Multiply Scientific Notation?

Numbers written in scientific notation can be multiplied simply by taking advantage of the associative and commutative properties of multiplication. The associative property is the rule of grouping where, for instance, a × (b × c) = (a × b) × c. The commutative property states that a × b = b × a.

To multiply numbers in scientific notation, these are the steps:

• If the numbers are not in scientific notation, convert them.
• Regroup the numbers using the commutative and associative properties.
• Now, to multiply the two numbers written in scientific notation, work out the coefficients and the exponents separately.
• Use the product rule, b^m × b^n = b^(m + n), to combine the powers of 10.
• Join the new coefficient to the new power of 10 to get the answer.
• If the product of the coefficients is greater than 9, convert it to scientific notation and multiply by the new power of 10.

Example 1

Multiply (3 × 10^8) (6.8 × 10^-13)

Explanation

• Regroup the numbers using the associative and commutative properties:
(3 × 10^8) (6.8 × 10^-13) = (3 × 6.8) (10^8 × 10^-13)
• Multiply the coefficients and, using the product rule, add the exponents:
(3 × 6.8) (10^8 × 10^-13) = (20.4) (10^(8 − 13)) = (20.4) (10^-5)
• The product of the coefficients is 20.4, which is greater than 9, therefore convert it again to scientific notation and multiply by the power of 10:
(2.04 × 10^1) × 10^-5
• Multiply using the product rule: 2.04 × 10^(1 + (−5))
• The answer is 2.04 × 10^-4

Example 2

Multiply (8.2 × 10^6) (1.5 × 10^-3) (1.9 × 10^-7)

Explanation

• Regroup using the commutative and associative properties:
(8.2 × 1.5 × 1.9) (10^6 × 10^-3 × 10^-7)
• Multiply the coefficients and use the product rule to add the exponents:
(8.2 × 1.5 × 1.9) (10^6 × 10^-3 × 10^-7) = (23.37) (10^(6 + (−3) + (−7))) = (23.37) (10^-4)
• The product of the coefficients, 23.37, is greater than 9, therefore convert it into scientific notation by moving the decimal point one place to the left and multiplying by 10^1:
(23.37) (10^-4) = (2.337 × 10^1) × 10^-4
• Multiplying using the product rule, add the exponents: 2.337 × 10^(1 + (−4))
• Therefore, the answer is 2.337 × 10^-3

Example 3

Multiply: (3.2 × 10^5) × (2.67 × 10^3)

Solution

(3.2 × 10^5) × (2.67 × 10^3) = (3.2 × 2.67) × (10^5 × 10^3)
= (8.544) × (10^(5+3))
= 8.544 × 10^8

Therefore, (3.2 × 10^5) × (2.67 × 10^3) = 8.544 × 10^8

Example 4

Evaluate: (2.688 × 10^6) / (1.2 × 10^2)

Solution

(2.688 × 10^6) / (1.2 × 10^2) = (2.688 / 1.2) × (10^6 / 10^2)
= (2.24) × (10^(6−2))
= 2.24 × 10^4

Therefore, (2.688 × 10^6) / (1.2 × 10^2) = 2.24 × 10^4

### Practice Questions

1. Which of the following shows the product of $3 \times 10^4$ and  $2 \times 10^5$ in scientific notation?

2.
Which of the following shows the product of $5 \times 10^3$ and  $6 \times 10^3$ in scientific notation? 3. Which of the following shows the product of $5 \times 10^4$ and  $2.5 \times 10^3$ in scientific notation? 4. Which of the following shows the product of $2.2 \times 10^4$ and  $7.1 \times 10^5$ in scientific notation? 5. Which of the following shows the product of $7 \times 10^4$, $5 \times 10^6$, and $3 \times 10^2$ in scientific notation?
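As a quick cross-check of the procedure above, here is a small sketch (my own illustration, not part of the original article) that multiplies two numbers given in scientific notation and renormalizes the coefficient back into [1, 10):

```python
# Sketch (illustration only): compute (c1 x 10^e1) * (c2 x 10^e2) and
# renormalize so the coefficient lands back in [1, 10). Assumes c1, c2 != 0.
def multiply_sci(c1: float, e1: int, c2: float, e2: int) -> tuple:
    coeff = c1 * c2              # step 1: multiply the coefficients
    exp = e1 + e2                # step 2: product rule, add the exponents
    while abs(coeff) >= 10:      # step 3: renormalize the coefficient
        coeff /= 10
        exp += 1
    while abs(coeff) < 1:
        coeff *= 10
        exp -= 1
    return coeff, exp

print(multiply_sci(3, 8, 6.8, -13))   # ~(2.04, -4), matching Example 1
# (the coefficient may show tiny float rounding, e.g. 2.0399999999999996)
```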
# Invariance and conservation

Why, in a collision between particles, is the four-momentum conserved within a frame of reference but not invariant between frames of reference?

• I'm not really sure what you're asking. Four-momentum transforms like any four-vector under Lorentz transformations. The magnitude of the four-momentum vector is the invariant mass, which is clearly invariant between frames. – Evan Rule Oct 8 '15 at 1:17

If the 4-momentum were invariant then it would be a scalar. 4-vectors are defined by the way their components mix when we change coordinates. In particular, when we apply a Lorentz transformation to our coordinates, the inverse transformation is applied to the vector. As a simple example, consider what happens to the energy when we boost. If we start in the rest frame of a particle and then boost to a frame in which it is moving, we get
$$E_{\text{rest}}=mc^2 \rightarrow E_{\text{moving}} = \gamma mc^2,$$
and if $E$ were invariant then a moving particle would have no kinetic energy.

It's important to keep conserved and invariant separate in your mind. The total 4-momentum will not change over time in any given inertial frame, but you can't change frames and expect it to stay the same. What you can bring with you from frame to frame is the magnitude of the total 4-momentum, i.e., the invariant mass. This is both conserved and invariant.

Conservation and invariance are fundamentally different things. Conservation means "doesn't change with respect to time", while invariance means "doesn't change under Lorentz transformations". Components of four-momentum transform like vector components and are thus NOT invariant under Lorentz transformations. But that doesn't prevent them from being conserved. Suppose the four-momentum is conserved in one frame. If you switch to a different frame, the four-momentum components will all be different, but the conservation is preserved.
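A numerical illustration of the distinction (my own addition, not from the thread): a boost changes the components (E, p) of a four-momentum, but preserves its Minkowski norm, which equals the invariant mass squared.

```python
# Sketch (my addition): boost a four-momentum and check that the components
# change while E^2 - p^2 = m^2 does not (units with c = 1, one space dim).
import math

m, v = 1.0, 0.6                          # rest mass, boost speed (fraction of c)
gamma = 1 / math.sqrt(1 - v**2)          # ~1.25 for v = 0.6

E_rest, p_rest = m, 0.0                  # four-momentum (E, p) in the rest frame
E_mov = gamma * (E_rest + v * p_rest)    # Lorentz-boosted components
p_mov = gamma * (p_rest + v * E_rest)

print(E_rest, p_rest)                    # ~1.0, 0.0   -> components differ...
print(E_mov, p_mov)                      # ~1.25, 0.75
print(E_rest**2 - p_rest**2)             # ~1.0        -> ...but E^2 - p^2 = m^2
print(E_mov**2 - p_mov**2)               # ~1.0           in both frames
```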
# The Amazing Walking Molecule

A single molecule has been made to walk on two legs. Ludwig Bartels and his colleagues at the University of California at Riverside, guided by theorist Talat Rahman of Kansas State University, created a molecule---called 9,10-dithioanthracene (DTA)---with two "feet" configured in such a way that only one foot at a time can rest on the substrate. Activated by heat or the nudge of a scanning tunneling microscope tip, DTA will pull up one foot, put down the other, and thus walk in a straight line across a flat surface. The planted foot not only supplies support but also keeps the body of the molecule from veering or stumbling off course. In tests on a standard copper surface, such as the kind used to manufacture microchips, the molecule has taken 10,000 steps without faltering. According to Bartels ([email protected], 951-827-2041), possible uses of an atomic-sized walker include guidance of molecular motion for molecule-based information storage or even computation. DTA moves along a straight line as if placed onto railroad tracks without the need to fabricate any nano-tracks; the naturally occurring copper surface is sufficient. The researchers now aim at developing a DTA-based molecule that can convert thermal energy into directed motion like a molecular-sized ratchet. (Kwon et al., Physical Review Letters, upcoming article; text at www.aip.org/physnews/select; see movie at www.chem.ucr.edu/groups/bartels/)

Next up, molecular-sized whippets, flat-caps and walking leads. The goal to fit all of Yorkshire in a microchip is almost over.

Gokul43201 Staff Emeritus Gold Member
Really neat stuff !!!

Ivan Seeking Staff Emeritus Gold Member
"The researchers now aim at developing a DTA-based molecule that can convert thermal energy into directed motion like a molecular-sized ratchet."
Maxwell's demon walks.

Evo Mentor
Incredible.

hypnagogue Staff Emeritus Gold Member
Maybe the walking molecules can ride around in molecular cars if they get tired.

Scientists Build Tiny Vehicles for Molecular Passengers

Scientists at Rice University have built molecular vehicles so small that more than 20,000 of them could sit side-by-side on a human hair. The fleet consists of nanocars, nanotrucks capable of carrying small-molecule payloads, and trimers that pivot on their three axes. All of them roll on buckyballs, which are 60-atom, soccer-ball-shaped spheres of pure carbon. Each axis pivots up and down independently to allow the vehicles to negotiate atomic potholes and mounds. The work, which was first described earlier this month in the online version of the journal Nano Letters, is the fruit of more than eight years of research led by Prof. James M. Tour into systems that could be used to build structures molecule-by-molecule. "This is it, you can't make anything smaller to transport atoms around," Professor Tour said.

http://www.nytimes.com/2005/10/21/technology/21cnd-nano.html

Ivan Seeking Staff Emeritus Gold Member
"...if they get tired"
Booooooo booooooo bah! [throws tomato] booooooo ooooooo booooooo bah! [throws much larger and excessively heavy fruit such as a pumpkin or a watermelon] booooooo

Archon
Mk said: "ooooooo booooooo bah! [throws much larger and excessively heavy fruit such as a pumpkin or a watermelon] booooooo"
Pumpkins are a fruit? What?
matthyaouw Gold Member
I wonder how fast they go... Molecular races... that's what the world needs. We can bet on the winner.

Danger Gold Member
What's a microgram of gas cost these days?

rachmaninoff
Danger said: "What's a microgram of gas cost these days?"
Probably 6\$ or more. (if by "gas" you mean $^3He$ gas, of course)

Danger Gold Member
I meant regular unleaded, but I think you got the price about right. :grumpy:

cronxeh Gold Member
Well that's great. Nature has had dynein, kinesin, and myosin for some time now. It walks alright
### inTemp's blog

By inTemp, history, 7 weeks ago,

Is it just me, or is anyone else noticing that nowadays Codechef Cook-offs and Lunchtimes are becoming better than Codeforces contests? They are more diverse, involve a lot of thinking, and are more fun to solve. There are questions on all topics like probability and combinatorics (which are becoming a rarity in Codeforces contests nowadays) as well as data structures. Fewer server issues, no queue, and good quality questions.

I liked today's contest (Codeforces Round #660 (Div. 2)) as it was diverse, but in recent contests like Educational Codeforces Round 92 (Rated for Div. 2), A-E are mostly ad-hoc problems, which are fun sometimes but not when overdone. What are your thoughts on this?

P.S.: These are the views from the perspective of a Div2 participant on Codeforces and Div1 on Codechef. I feel that Div1 Codeforces participants might not agree, as they get diverse contests frequently in comparison to Div2 participants.

• +70

» 7 weeks ago, # | ← Rev. 2 →   +45
I also don't like ad-hoc problems. However, they might be the easiest type of problem to prepare for a contest. Coordinators quite often reject problems requiring other techniques as too standard. It is hard to come up with an original problem requiring the use of traditional algorithms.

• » » 7 weeks ago, # ^ | ← Rev. 2 →   0
Why would ad hoc be the easiest to come up with?

• » » » 7 weeks ago, # ^ |   -8
Well, not the easiest to think of (of course standard types are easier to think of), but they are better for contests, because over time ideas have already been taken and problems need to become increasingly creative.

» 7 weeks ago, # |   +5
What are ad-hoc problems?

• » » 7 weeks ago, # ^ |   +21
They are problems which don't require knowledge of any specific data structure or algorithm (you can't put them into some category). And, in most cases, their solution can't really help you in other problems (the logic you apply to an ad-hoc problem is usually useful only for that problem).

• » » » 7 weeks ago, # ^ |   +1
Got it, thanks.

• » » 7 weeks ago, # ^ |   -40
In the simplest terms it means: either you get some weird thought/guess that this might be the solution and try it. If it's AC, you are good to go; else you scratch your head for the whole contest.

• » » 7 weeks ago, # ^ |   +157
Problems I can't solve.

• » » 7 weeks ago, # ^ |   +7
Problems that decrease my rating.

» 7 weeks ago, # |   +17
I agree, the only thing wrong with Codechef is its long challenges...

• » » 7 weeks ago, # ^ |   +41
The comment section of Codechef annoys in form and content, too.

• » » » 7 weeks ago, # ^ |   +5
Yes, they need to improve their forums.

• » » 7 weeks ago, # ^ |   -17
Yes, I also see a lot of hate given to Codechef because of frequent cheating in long challenges. But in short contests it's comparatively difficult to cheat, plus the quality of questions is good and more diverse.

• » » 7 weeks ago, # ^ |   +4
Also, in recent contests the test cases were weak in one of the problems. Codechef needs good testers.

• » » 7 weeks ago, # ^ |   +84
What is wrong with long challenges? Unethical behaviour of participants isn't the fault of CodeChef.

• » » » 7 weeks ago, # ^ |   +35
At least someone's smart here.

• » » » 7 weeks ago, # ^ |   -17
It's like saying that the existence of hackers is not Google's fault when asked what is wrong with Android security.. I never said long challenges are unnecessary, but the implementation is a problem.
Also, cheating is not the only issue here: people with too much time to spare come out on top, and when we are talking about 5+ days, it matters. My suggestion: give 2 ratings, one for long contests and one for the others.

• » » » » 7 weeks ago, # ^ |   +37
"with too much time to spare come at top"
In my opinion, that is an advantage of long. Have you ever spent a day thinking about a single problem? I bet you rarely do this on Codeforces. But long forces you to do this. It's an important thing required for further development. I skipped the last 10 longs and I regret it, because that was the reason for the rise in my development. When did you last come across a problem which needs a whole day just for coding? Instead of just hating it, look at its advantages. Even if long has 1000 disadvantages, there are 1000 advantages of long, which I will choose over those 1000 disadvantages. No one is forcing you to participate in longs. You can just choose short contests only. In the long run, your rating will remain the same (at an equilibrium) even if you participate in 2N/3 contests instead of N contests.
"My suggestion- give 2 ratings one for long one for others."
You never gave any suggestion. You just wrote a simple statement that everything is wrong with Codechef longs.

» 7 weeks ago, # | ← Rev. 2 →   +14
Since CodeChef organises just 2 official short contests, they have to be of good quality. I hate the forum thing on CodeChef. They should maybe use the posts/announcement section of CodeChef for the contest to keep the participants updated about the editorials, rather than the forum.

• » » 7 weeks ago, # ^ |   +5
They do, actually. They post on Codeforces blogs for the announcement of contests, but since it is not an "official" announcement it is not shown on the home page of Codeforces. One thing that I feel is Codechef could start organizing more than 2 official contests. That way they would have more active participants and we would also get good contests.

» 7 weeks ago, # |   -114
CF is out of ideas. So they make stupid AdHoc/constructive problems. You won't learn shit from Codeforces these days... Better switch to Atcoder and Codechef. It's much better.

• » » 7 weeks ago, # ^ |   -32
They are not shit. They have just become too monotonous, with the same types of questions coming in every contest.

• » » » 7 weeks ago, # ^ | ← Rev. 2 →   -68
Yeah, don't take it the wrong way... I think changing the coordinator should help with this problem. Other people should also get a chance.

• » » » » 7 weeks ago, # ^ |   +1
bruh your username LMFAO. don't hate the game tho my guy, maybe codeforces being more adhoc makes it unique and harder purposefully

• » » 7 weeks ago, # ^ |   -20
I don't think CF is out of ideas. I think ad-hoc problems are harder to come up with (for authors) compared to more topic-based problems.

» 7 weeks ago, # |   -16
Agree!!!

» 7 weeks ago, # |   -13
Agree, but from the last few contests it seems like Codeforces is improving. Even yesterday they had 2 graph problems, etc. I agree Codechef short contests are much more educational, whereas Codeforces is competitive.

» 7 weeks ago, # | ← Rev. 3 →   +2
When Codechef Div-1 contests are hard, people say they're unbalanced, and when we are able to solve a few questions in Div-1, we say now it's good and balanced.

• » » 7 weeks ago, # ^ |   +4
You could be right. But I don't think this is actually the case. I didn't see much change in the number of questions I am able to solve.
Earlier it was 2, now it's 2 (mostly) and 3 (sometimes). My point is that generally I find Codechef contests to be better because of more diverse questions. Seeing the same sort of questions again and again can be frustrating. Cook-offs and Lunchtimes don't have repetitive concepts.

Proof: recent Lunchtime
Problem 1: Greedy (total submissions: 428)
Problem 2: Binary Search + Math (total submissions: 216)
Problem 3: Trees + Math + Greedy (total submissions: 57)

I managed to solve 1 and 3 and didn't get a better rank, but I still loved the contest and enjoyed every bit of it.

• » » » 7 weeks ago, # ^ |   +8
I agree that Codechef problems are new, really based on data structures, and cover a vast range of topics. I feel that solving Codeforces requires practice and brains, because ad-hoc problems require a lot of brainstorming.

» 7 weeks ago, # |   +14
In Educational Round 92, B was DP and E was maths.

• » » 7 weeks ago, # ^ |   -9
You might have done B using DP, but the intended solution was greedy and it was an ad-hoc problem. 88566966

• » » » 7 weeks ago, # ^ |   0
All the problems need some observation; for example, problem B of EDU Round 92 could be done with DP (not ad-hoc) or greedy (which is also not ad-hoc). Normally all problems in Codeforces rounds need some kind of observation. Problem B had a greedy solution, which is a well-known strategy. So what are you complaining about? If you consider every problem which needs some kind of observation ad-hoc, and you dislike that, then just quit. All the problems are like that.

• » » » 7 weeks ago, # ^ |   +8
Lol, greedy != adhoc. Sorry, but I guess you don't know what ADHOC really means; you just use this new hype word without knowing its meaning.

• » » » » 7 weeks ago, # ^ |   +3
I might be wrong, but aren't ad-hoc problems the types of problems which require observations that are only applicable to that particular problem? Often those observations are implemented greedily.

» 7 weeks ago, # |   +104
yeah, CF contests are full of adhoc problems! (except the last round) but you can't say anything because Um_nik and antontrygubO_o will say NO, the adhoc is really great, and "I'm the coordinator and I will say which problem is good and which one is bad" (anton said that in his blog). and his high-rating friends and idiot users that don't read the comment!!! and only upvote a blog/comment that has lots of upvotes or was written by a high-rating person will support him. and it's really, really disgusting that so many users don't give their opinion because they think that they will get downvotes

• » » 7 weeks ago, # ^ |   +24
What I have observed is that Div1 participants, especially those above grandmaster, don't bother with this issue much because they get to see more diverse contests (in comparison to Div2), as it is relatively easy to prepare a hard Div1 problem consisting of DS and good math. As antontrygubO_o said in his blog, it is difficult to create easy C-E problems which require DS and math, which I agree with, but why create an easy contest in the first place? Nowadays D's are getting 2000+ submissions, which could be reduced to 800-1000 if we just created more diverse contests of moderate level (definitely not EASY). Please don't say: "If you don't like Div2 then reach Div1".

• » » » 7 weeks ago, # ^ |   +3
"mostly Div1 participants especially above grand masters, they don't bother with this issue much because they get to see much diverse contests"
Or maybe when problemset is full of problems in their weak zone, instead of complaining and demanding more diverse problemset, they want to improve their weak point (I fall in this category). • » » » » 7 weeks ago, # ^ | ← Rev. 2 →   +8 I think you are failing to understand my point. I like ad-hoc problems. They are fun to solve. I am trying to focus on the point of having multiple consecutive contests full of ad-hoc problems. You are assuming that I am whining because they are my weak zone. They aren't. I am not extremely good at them but I am not bad either.I am focusing about having questions involving other topics as well. You tell me which was the last recent contest where we had a Probability/Combinatorics/Good DP problem from C to E and how frequent are they? • » » » » » 7 weeks ago, # ^ |   0 You are assuming that I am whining because they are my weak zone. I never did so. But I think most complainers whine because of this reason. I am focusing about having questions involving other topics as well. My point is instead of focusing on diversity of problemset, focus should be more on problem quality. Feel free to complain if the problem is repeated / bad (easy idea, boring and long implementation), otherwise focus should be on doing better regardless of diversity of problemset. • » » » 7 weeks ago, # ^ |   +21 Do you count me as above grand master? I'm bothered by it as well since I'm bad at guessing these solutions leading to things like being able to solve D-E but not B-C because I failed to do the correct observations once in a while. These problems are kinda hit or miss and if there many such problems in a contest it becomes gambling. • » » 7 weeks ago, # ^ | ← Rev. 2 →   +18 True! In one of the contest blog, Errichto came and said that "comment section is full of shit" and people really got crazy and upvoted like hell.No offence. • » » » 7 weeks ago, # ^ |   +13 That contest blog was actually shit bro! Full of memes and unnecessary comparisons between tourist and MiFaFaOvO. • » » » » 7 weeks ago, # ^ |   0 Yeah, I know it was shit. But when I saw people upvoting the comments, it was crazy for me. That's what called an achievement. • » » 7 weeks ago, # ^ |   -23 Calling people idiots who agree with their views is idiotic or maybe you are too frustrated because you are failing to increase your rating without "your-type" problems. • » » 7 weeks ago, # ^ |   +58 Except that you can say anything, and coordinators are reading your complaints. But I guess my blog shows that it is not that one-sided as comments under round announcement make it seem. There are people who like ad-hoc problems, and there are people who like boring standard data structure problems.And yes, significant part of coordinator job is "say which problem is good and which one is bad" so... what are you dissatisfied with? Do you want a system where anyone can come to coordinator and say "here are my 5 problems, they are great, balanced and not well-known, you prepare them"? I guess you are saying that coordinator's taste is very different from yours. Well, you can't do anything about it. You can try to became a coordinator yourself, but I somehow doubt that it will be a huge success.I'm not a coordinator, have no affiliation with CF and certainly don't have a say in choosing problems for rounds except those I prepare or test (to some very small extent). • » » » 7 weeks ago, # ^ | ← Rev. 3 →   -6 I do like ad-hoc problems but I am just finding them a lot more frequently than before. 
Making rounds with the same pattern over and over just makes them boring now. And while grandmasters find data structure problems boring, we haven't had a probability/combinatorics/good DP problem in a long time. Being a coordinator is a difficult task and I can't be one; I know too little. But saying to become a coordinator oneself to solve this issue is, IMO, merely ignoring it. Sorry for my poor English!

• » » » » 7 weeks ago, # ^ |   0
"same pattern every now and then"
Can you show us some adhoc problems with a repeated pattern, except Div2 A, B? I see similar patterns in probability/combinatorics/DP/DS much more frequently than in adhoc problems. Solve 300 good problems in each of these topics and you will see similar patterns too. That's part of the reason why many experienced contestants (including me) are especially weak in adhoc compared to other topics (and many people want a more diverse problemset mostly because they don't like too many problems in their weak zone).

• » » » » » 7 weeks ago, # ^ |   -8
If I solved 300 good problems of each topic, then why would I be in Div2? Div2 rounds would be prepared in such a way that problems standard for Div2 would be eliminated. But, if you look, antontrygubO_o most probably eliminates problems standard for Div1 too. A Div2 round has nothing wrong with adhoc problems, nor with some problems standard for reds, as long as it doesn't just say "find the diameter of a tree".

• » » » » » » 7 weeks ago, # ^ |   0
"If I solve 300 good problems of each topic then why would I be in div2?"
That's a bold assumption. 1082G - Petya and Graph is an example of a repeated problem; the problem is just a rephrasing of an existing problem (read the editorial), but not many people solved it in contest. I doubt problemsetters want to create / give repeated problems on CF except for educational rounds.
"if you see antontrygubO_o most probably eliminates standard for div1 problems too."
How do you know that? I have no idea which types of problems Anton rejects and why, but I doubt that he rejects without a valid reason; otherwise we would probably see problemsetters complaining more.

• » » » » » 7 weeks ago, # ^ |   0
There is a misunderstanding. By "same pattern" I didn't mean questions having a repeated concept; I am talking about the same type of contest, i.e., multiple consecutive contests full of adhoc problems.

• » » 7 weeks ago, # ^ |   -9
I like adhoc; I hated the last non-adhoc round.

• » » » 7 weeks ago, # ^ |   0
"/ ok

» 7 weeks ago, # |   +31
According to me, Codeforces problems are more brain-twisting. Just compare A of Codechef with A of Codeforces; you will find A of Codeforces harder than the Codechef A type. Also, the range of questions varies a lot in Codeforces (problemsetters are always very good coders).

• » » 7 weeks ago, # ^ |   0
If you have been doing Codeforces for a long time, then you really can't say this, because you and I both know how to solve A within a few minutes.

• » » » 7 weeks ago, # ^ |   0
Yes, you are right, but if you compare Codeforces and Codechef, you will find people who became 3 or 4 star on Codechef and are still newbies on Codeforces.

• » » » » 7 weeks ago, # ^ |   0
Yeah, I agree. Maybe because they give up early.

» 7 weeks ago, # |   0
I think I am the only person in this world whose Codechef rating is less than their Codeforces rating.
(I am 3 star on codechef ;_;)

• » » 7 weeks ago, # ^ |   +6
No brother, you are not the only one

• » » 7 weeks ago, # ^ |   +28
No brother, you are not the only one

• » » » 7 weeks ago, # ^ |   -13
Please reveal your secret tricks and tips to achieve this feat!!

• » » » » 7 weeks ago, # ^ | ← Rev. 4 →   0
Asking queries like this worked for me.

» 7 weeks ago, # | ← Rev. 3 →   -43
The only reason CF has a bigger audience than any other platform is the number of contests. If Codechef also ran 10 short contests per month, it wouldn't be behind Codeforces. Understood, my friend antontrygubO_o?

• » » 7 weeks ago, # ^ |   0
Making 6-7 contests in a month alone is a challenge, in my opinion.

» 7 weeks ago, # |   -27
Yes, I agree Codechef is better than Codeforces in terms of the quality of questions.

» 7 weeks ago, # | ← Rev. 2 →   0
Don't know which one is better... but I think Codeforces is far ahead of Codechef. Still, Cook-Off and Lunchtime are very fun to solve and, yes, they cover many topics. But as many top coders participate in Codeforces, it is very hard to maintain a good rating on Codeforces, whereas in Codechef Div 2 it is not hard to increase your rating. People should try to participate in those two contests. Another good thing about Codechef is that it doesn't mark plagiarized submissions as skipped; it penalizes the user.

» 7 weeks ago, # |   +9
Codeforces is far better. In Codechef you immediately get what to do even in medium problems. In Codeforces, you have to think and find out an approach.

• » » 7 weeks ago, # ^ |   0
Just a question: what rating range on Codeforces do you think Codechef medium problems would belong to?

» 7 weeks ago, # |   +178
"[---] Div1 Codeforces particpants [---] get [---] contests frequently [---]."
?????

» 7 weeks ago, # |   +26
In an ideal world, I think the CodeChef Long Contest is the superior format. I learned a lot of techniques by participating in CodeChef Long, and I was rewarded for persevering, brainstorming, and researching. I'm not a fan of how the most difficult problems can sometimes boil down to "According to this research paper...", but it's not like I'm going to solve the Div1E in Codeforces anyway. There's also the huge issue they face with cheating and plagiarism and---well, yeah, I did say "In an ideal world".

Short-format contests can sometimes feel like luck of the draw on Codeforces. I can tell you that I only reached Master because of speed during one of the Global Rounds. I think the time limit is too short and ends up rewarding memorization more than it necessarily should, and it also stops us from seeing nice but difficult implementation/data structure problems.

• » » 7 weeks ago, # ^ | ← Rev. 2 →   +13
I loved your editorials, they are just great!! The polynomial operations tutorial is mind-blowing and very well explained!!

• » » » 7 weeks ago, # ^ |   0
Can you share the link? Seems interesting.

• » » » » 7 weeks ago, # ^ |   0

» 7 weeks ago, # |   -11
I believe this is mostly due to the number of contests CF is pushing. I see it this way — we get two rounds to practice our ideas and general thinking (with 2-3 algo problems, mostly hard ones) and then we get a round with some math + algorithm.

I see this as a good thing for beginners, but I don't know about more experienced coders. E.g. I have participated in only 3 rounds (2 of which ended up unrated) and solved 3 problems in each of them. Out of those 3 problems, all were construction problems which required me to think and use paper.
Problems which I didn't solve were DP/graph algorithm problems that are generally popular, so I had a chance to brush up my common algorithm skills. I believe this makes the contests less overwhelming and much easier for beginners to perform well in (you won't gain rating, but at least you will solve some number of problems (4-5/7-8 or 3/6) and not end up having solved 1 or at most 2). I hope my points are clear.

• » » 7 weeks ago, # ^ | ← Rev. 2 →   +35
I don't understand why solving 2 problems as the normal amount is so bad. For most of my yellow life, 2 problems was the normal amount to solve during a round (I mean Div. 1-only rounds). I actually dislike the contests where I'm expected to solve 6 or whatever problems, because it's so easy to get tripped up by some annoying easy problem.

Furthermore, the number of interesting (appropriate for my level) problems has never been more than 2 (and often it's 1 or 0). Other problems are either too easy or too hard.

• » » » 7 weeks ago, # ^ | ← Rev. 2 →   0
I was referring to the Div 2/3 contests, not Div 1, so I hope that clears things up a bit. I think it is too harsh for a beginner to see they solved only 2/7 problems in a round. It's even worse if they see "no progress" for months, which is not a result of their poor progress but of the variability of the third or fourth problem, so they end up thinking they have stayed at the same level.

I agree with your second point. A problem is either too easy or too hard in most cases.

• » » » » 7 weeks ago, # ^ |   +6
"I was referring to the Div 2/3 contests, not Div 1, so I hope it clears things up a bit."
IMO it doesn't really matter. I just don't think it's an inherently bad thing that most people only solve a small number of problems in a contest.

• » » » 7 weeks ago, # ^ |   +45
"the amount of interesting (appropriate for my level) problems has never been more than 2 (and often it's 1 or 0)"
Couldn't agree more with this; for me it's 90% of the time just 1 problem, the rest are either trivial to me or impossible for me. It has been this way since my day 1 till now.

» 7 weeks ago, # | ← Rev. 2 →   +28
Why are we comparing these two??? I know I am new on this platform... but maybe it's not necessary to compare these two. Maybe both of them have many complaints from their users. Someone says Codeforces is better, someone says Codechef is better. But the fact is, both Codechef and Codeforces are helping our skill development. Thanks to both Codeforces and Codechef. Just expressing what I thought (sorry if I wrote something wrong or stupid).

» 7 weeks ago, # |   +6
Registered at Codechef after this blog. Need to check out their problems.

» 7 weeks ago, # |   0
I don't understand: is there any rule that says "You can only play contests on Codechef or Codeforces but not both"? Both of the platforms are very good. Both have their own pros and cons. I have used both of these platforms for 1 year and I have learned a lot from Codechef and Codeforces together. The coordinators always try to set the best, unique, conceptual questions. We should give respect to them.
# nLab twisted Chern character

The ordinary Chern character for K-theory sends K-classes to ordinary cohomology with real coefficients. Over a smooth manifold the de Rham theorem makes this equivalently take values in de Rham cohomology. The twisted Chern character analogously goes from twisted K-theory to twisted de Rham cohomology.
# V2. Interactive dictionary with inexact look ups (updated)

Recently I published a post in which three people made pretty good suggestions. I've learnt a lot since then thanks to them! Among many things, they encouraged me to use classes. And that's cool because it is the first time I do object-oriented programming... Even though the use I gave it was very basic. Once again, I'd like to receive reviews of my new code. It's uploaded to GitHub, in case someone wants to check it out. I'm building this dictionary in Python, using data.json. Any suggestions are warmly welcome! So, how can I improve my code?

import json, time, re
from difflib import get_close_matches


class fuzzydict(object):
    def __init__(self, json_file):
        self.file = json_file
        # keep the parsed JSON around for all look-ups
        with open(json_file) as file:
            self.data = json.load(file)

    def find_word(self, word):
        for version in [word.lower(), word.capitalize(), word.title(), word.upper()]:
            if version in self.data:
                return version
        simils = get_close_matches(word, self.data, 1, 0.7)
        if len(simils) > 0:
            return simils[0]
        else:
            return None

    def output_word(self, word):
        # check if 'keyword' has no values (definitions)
        if not self.data[word]:
            print('"%s" is yet to be defined.' % word)
        # print in a cool format
        else:
            print('· ' + word + ':')
            # re.split('--|;', self.data[word]) in case the
            # definitions were not given inside a list
            for index, definition in enumerate(self.data[word]):
                print(str(index + 1) + '.', definition)

    def input_word(self, word, definition):
        operation = 0
        if word in self.data and definition not in self.data[word]:
            self.data[word] += [definition]
            operation = 1
        # in case it's a new word
        elif word not in self.data:
            self.data.update({word: [definition]})
            operation = 1
        # updates the file when necessary
        if operation:
            with open(self.file, 'w') as file:
                json.dump(self.data, file)
        return '\nDone!'

    def remove_word(self, word):
        if word != None:
            self.data.pop(word, None)
            with open(self.file, 'w') as file:
                json.dump(self.data, file)
            return '\nDone!'
        else:
            return "\nHmm... how can you remove something that doesn't exist? Huh!"

    def remove_def(self, word, index):
        for i, definition in enumerate(self.data[word]):
            if i == int(index) - 1:
                self.data[word].remove(definition)
                with open(self.file, 'w') as file:
                    json.dump(self.data, file)
                return '\nDone!'
        return "\nHmm... how can you remove something that doesn't exist? Huh!"


# new object
mydict = fuzzydict('data.json')
# can either access the data through 'archives' or 'mydict.data'

while True:
    choice = input('0. Quit\n'
                   '1. Search\n'
                   '2. Add word\n'
                   '3. Remove word\n'
                   '4. Remove definition\n\n'
                   '} '
                   'What would you like to do? ')

    # '0' to exit
    if choice == '0':
        break

    # '1' to look up a word
    if choice == '1':
        search = input('\nType in a word: ')
        if mydict.find_word(search) == None:
            print('"' + search + '"' + " isn't available at the moment.")
            yes = input('Would you like to add "' + search + '" to the dictionary? ')
            if yes.lower() == 'yes':
                meaning = input('Type in the meaning of ' + search + ', please: ')
                while meaning == '':
                    meaning = input('Type in a valid definition, please:')
                print(mydict.input_word(search, meaning))
        else:
            mydict.output_word(mydict.find_word(search))

    # '2' to add or remove a new word or definition
    elif choice == '2':
        print('~ You are now editing the dictionary ~')
        new_word = input('\tWord: ')
        new_def = input('\tDefinition: ')
        if mydict.find_word(new_word) == None:
            print(mydict.input_word(new_word, new_def))
        else:
            print(mydict.input_word(mydict.find_word(new_word), new_def))

    # '3' to remove an existing word
    elif choice == '3':
        print('~ You are now editing the dictionary ~')
        rm_word = input('\tType in the word you want to remove from the dictionary: ')
        print(mydict.remove_word(mydict.find_word(rm_word)))

    # '4' to remove an existing definition using its ID
    elif choice == '4':
        print('~ You are now editing the dictionary ~')
        obj_word = input('\tWord: ')
        mydict.output_word(obj_word)
        id_def = input("\nWhich definition do you want to remove? ")
        print(mydict.remove_def(obj_word, id_def))

    # 5 seconds delay, good for UX
    print('Loading...')
    time.sleep(5)

User experience:

1. The Quit option is usually the last one in the list.
2. Some words' definitions are so long that I had to scroll horizontally.
3. Going back to the main menu after a few seconds seems not very user-friendly. When there are many definitions you just don't have enough time to read all of them. I think it's better to go back to the menu if some button is pressed.
4. Also, while I'm shown a list of definitions there is this Loading... printed. It's a bit confusing. I was expecting more definitions to be printed out. But in fact there is no loading at all. Just waiting for some time.
5. Printing out similar words seems like a good idea, but sometimes it gives strange results. For example, if I wanted to look for loan but instead typed loqn, it will give me long.
6. If I made a mistake and typed a word incorrectly when I wanted to delete it, the program wouldn't warn me and would just delete the most similar word. There should be a warning about what word exactly is going to be deleted.
7. Also, what if I change my mind about deleting a word? I think there should be a way to go back to the main menu.
8. If I search for a word that doesn't exist, the program asks if I want to add it to the dictionary. I typed y and expected it to be saved. But it didn't, because you are checking only for yes.

1. Imports should be on separate lines. Also, re is never used.
2. Class definitions should be surrounded by 2 blank lines.
3. Class names are usually in CamelCase.
4. if len(simils) > 0: should be replaced by simply if simils:. Also, why not rename it to similar_words?

output_word:

1. It's possible to avoid unnecessary nesting here like this:

    if not self.data[word]:
        print(f'"{word}" is yet to be defined.')
        return
    print('· ' + word + ':')
    for index, definition in enumerate(self.data[word]):
        print(str(index + 1) + '.', definition)

2. Note the f-string above. It's a new feature of Python 3.6.

input_word:

1. There is no need for a flag-variable operation. It is possible to avoid it here after some refactoring.
2. Don't return strings like Done!. In your case you just want to print them and return nothing.
3. It is possible to significantly reduce the code using get and setdefault methods like this:

    if definition in self.data.get(word, []):
        print('\nOUPS! Apparently the definition you attempted to add '
              'already exists.')
        return
    self.data.setdefault(word, []).append(definition)
    with open(self.file, 'w') as file:
        json.dump(self.data, file)
    print('\nDone!')

remove_word:

1. Same as in output_word. First, check if word is None; then, without nesting with else, remove the word from the dictionary.

remove_def:

1. There is no need to iterate over definitions here. Just remove it by index and catch exceptions:

    try:
        self.data[word].pop(index - 1)
    except IndexError:
        print("\nHmm... how can you remove something that doesn't exist? "
              "Huh!")
        return
    with open(self.file, 'w') as file:
        json.dump(self.data, file)
    print("\nDone!")

Other notes:

1. Everything that you put outside of a class should be put in functions. And use if __name__ == '__main__':
2. Now you are opening, recording, and closing your json file for every operation. Consider applying changes locally, and recording everything at once only at the very end, when you finish working with your dictionary.

• That was knowledge enriching! Thank you so much, Georgy!! – user157281 Jan 10 '18 at 10:00

You are particular about supporting various forms of the input word. I would suggest that you create a normalized key dictionary that maps lowercased versions of the key to a list of forms stored in the main dict. That is:

    def find_word(self, word):
        lcword = word.lower()
        stored_cases = self.get_keys[lcword]  # Map 'cat' -> ['Cat', 'CAT', 'cat']
        for key in stored_cases:
            yield key

I would also suggest that you utilize None instead of your 'N0t_F0uNd' string. This case is pretty much why it exists: to express the idea that nothing is available. Except for the next suggestion... I would also suggest that you write your code expecting find_word to return multiple values. Actually, to generate multiple values as an iterator. So your code wouldn't check for a sentinel value meaning "I got nothing." Instead, it would iterate over all possible values, and possibly special-case the empty sequence:

    for w in my_dict.find_word('cat'):
        ... etc ...

Finally, I would suggest that instead of dropping into insert mode when a match is not found, you print a message saying "Select option 2 to add it" or something. For extra points, you can remember the last searched word and make that the default during add/remove operations.
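Pulling the reviewers' last points together, here is a minimal sketch of how the class could defer writing until the end (the FuzzyDict/save names are my own, not from the post):

```python
import json

class FuzzyDict:
    """Sketch of two review points: CamelCase name, single save at the end."""

    def __init__(self, json_file):
        self.file = json_file
        with open(json_file) as f:
            self.data = json.load(f)

    def save(self):
        # One write, instead of re-dumping the file after every edit.
        with open(self.file, 'w') as f:
            json.dump(self.data, f)

# usage: edit mydict.data freely, then call mydict.save() once on exit.
```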
Question

# An excess of $AgNO_3$ is added to 100 ml of a 0.01 M solution of dichlorotetraaquachromium(III) chloride. The number of moles of AgCl precipitated would be: A. 0.001 B. 0.002 C. 0.003 D. 0.01

Hint: $\left[ Cr\left( H_2O \right)_4 Cl_2 \right]$ is a coordination compound. Here the overall charge on the complex ion is +1. We can calculate the number of moles by the formula: $\text{Number of moles} = \text{Volume}\times \text{Molarity}$

Formula used: $\text{Number of moles} = \text{Volume}\times \text{Molarity}$

- We can write the formula of the given complex ion as $\left[ Cr\left( H_2O \right)_4 Cl_2 \right]$. Now, we will first find the charge on it. We know that the charge on chromium is +3, the charge on water is zero, and the charge on chlorine is -1. So, we get the overall charge as: \begin{align} & \text{charge}=3+0\times \left( 4 \right)+2\times \left( -1 \right) \\ & =+1 \\ \end{align} So, we can see that the overall charge present is +1. Therefore, there will be one chloride counter-ion required.
- We can say that if one mole of the complex is present then one mole of chloride ion will be available to form AgCl. In simple words we can say that 1 mole of complex will give 1 mole of AgCl.
- Now let's calculate the number of moles; we can use the formula: $\text{Number of moles} = \text{Volume}\times \text{Molarity}$
- We are being provided with the volume and molarity; we should first convert the volume given in ml into litres. So, 100 ml = 0.1 l
- So, we will put the values in the equation, \begin{align} & \text{Number of moles}=0.1\times 0.01 \\ & =0.001\text{ moles} \\ \end{align} Hence, we can conclude that the correct option is (A), that is, the number of moles of AgCl precipitated would be 0.001 moles.

Note: Here, we are being provided with a 0.01 M solution; that is, M is the molarity of the solution. One should not be confused between the units M and m. M is the unit of molarity and m is the unit of molality.
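A one-off numeric check of the arithmetic above (plain Python, illustrative only):

```python
# Values from the question: 100 ml of a 0.01 M solution.
moles_complex = (100 / 1000) * 0.01   # volume in litres times molarity
print(moles_complex)                  # 0.001 mol of AgCl (1:1 with the complex)
```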
# Question #09099

Sep 14, 2015

Element $\text{X}$ is silicon.

#### Explanation:

The idea here is that you need to use the atomic masses of the isotopes and their respective abundances to find the average atomic mass of element $\text{X}$.

Even without doing any calculations, you can guess that the average atomic mass of element $\text{X}$ will be very close to 28.00 amu due to the very high abundance of the ${}^{28}\text{X}$ isotope. Its high abundance tells you that it contributes most to the average atomic mass, so you can expect the atomic mass of element $\text{X}$ to be close to that value.

You can use fractional abundances to calculate the contribution each isotope has to the average atomic mass

For ${}^{28}\text{X}$: $\;92.23/100 \cdot 27.977 \text{ amu} = 25.80319 \text{ amu}$

For ${}^{29}\text{X}$: $\;4.67/100 \cdot 28.976 \text{ amu} = 1.35318 \text{ amu}$

For ${}^{30}\text{X}$: $\;3.10/100 \cdot 29.974 \text{ amu} = 0.92919 \text{ amu}$

The average atomic mass of the element will be the sum of each isotope's contribution

$\text{avg. atomic mass} = \sum \text{isotope contribution}$

This means that you have

$25.80319 + 1.35318 + 0.92919 = 28.08556 \text{ amu}$

A quick look in the periodic table will reveal that element $\text{X}$ is silicon.
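For completeness, the same weighted average done in a couple of lines of Python (just a re-check of the numbers above):

```python
# (mass in amu, abundance in %) for the three isotopes listed above
isotopes = [(27.977, 92.23), (28.976, 4.67), (29.974, 3.10)]
average = sum(mass * abundance / 100 for mass, abundance in isotopes)
print(average)   # 28.08556 amu, i.e. silicon
```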
# Pintool Regions

Published: 02/10/2015 Last Updated: 02/10/2015

## What is a "Region"?

A "region" is a slice of execution of a program. It is demarcated by dynamic "start" (EVENT_START) and "stop" (EVENT_STOP) triggers as a program is running. E.g., the SDE-PinPlay logger starts recording a pinball on an EVENT_START and stops recording it on an EVENT_STOP.

### Specifying "Region of interest" for Pin and SDE based analyses

Pintools can include $PIN_ROOT/source/tools/InstLib/control_manager.H to define a "CONTROL_MANAGER". See the example usage in $PIN_ROOT/source/tools/InstLibExamples/control.cpp. Similar use of the "CONTROL_MANAGER" in SDE can be seen in $SDE_BUILD_KIT/pinkit/sde-example/example/controller-example.cpp. CONTROL_MANAGER makes available a number of switches to the pintool (with an optional prefix) that allow specifying "EVENT_START" and "EVENT_STOP" event callbacks to the pintool, and the pintool can then choose to take some action (such as 'enable analysis' or 'disable analysis').

### Region triggers from a pintool

A pintool can start/stop regions programmatically by calling the following CONTROL_MANAGER method:

    //trigger all registered control handlers
    //eventID - the Id of the event
    //bcast - whether this event affects all threads
    VOID Fire(EVENT_TYPE eventID, CONTEXT* ctx, VOID * ip, THREADID tid, BOOL bcast);

E.g. the 'gdb_record' script invokes the underlying pinplay-driver tool with "-log:controller_default_start 0", and then the pinplay-driver tool triggers EVENT_START/EVENT_STOP whenever "(gdb) pin record on" and "(gdb) pin record off" commands are issued by the user.

### What is the controller

The controller is a flexible, powerful way for users to define a slice of execution which they want to be profiled by SDE or an SDE based tool. The tool is responsible for supporting the controller events and defining its behavior regarding incoming events. See details about SDE based tools' support for the controller below. The user can define a set of alarms which will trigger the events to be fired by the controller. The tool will receive these events and act accordingly. The controller provides a set of predefined alarms which include:

• icount: instruction count
• itext: sequence of raw bytes which is interpreted as an instruction
• ssc: a code sequence consisting of 2 instructions, the first of which has an immediate that identifies this marker
• int3: an embedded int3 instruction
• isa-extension/isa-category: the execution of an instruction which belongs to this XED group (ISA extension or ISA category)
• cpuid: a cpuid instruction with a special input (as defined by the input registers)
• magic: a code sequence also used by other simulation tools which has two input values that identify the marker
• pcontrol: entering the MPI_Pcontrol function with a specific string argument that identifies the marker
• enter_func: entering a function with this name
• exit_func: returning from a function with this name
• interactive: the user interactively sends the event to the process from another window on the same machine
• timeout: number of seconds to trigger the event

### Details

The controller exposes a knob, "-control", which gets the actual definition in a string argument to this knob. This is a flexible way for the user to control the region being profiled by the tool. The following will describe the behavior for single-threaded applications. An additional paragraph below describes multi-threaded applications.
Each instance of the "-control" knob defines an alarm-chain as: "-control <event>:<alarm>:<value>[:count<int>][:tid<int>][:bcast]"

A full definition of the syntax, and the available alarm-types, can be found below. This defines that the <event> will be fired by the controller once the <alarm> identified by the <value> is executed. If ":count<int>" is used, the event will be fired only when the alarm identified by the <value> is executed for the <int>th time.

Examples:

-control start:icount:100
The controller will fire a start event once 100 instructions are executed.

The controller will fire a stop event once we reach the symbol 'foo' for the 3rd time.

Repeat - Repeating an alarm multiple times

By default the alarm is "armed" only once for each thread. Once the event is fired, this thread won't trigger an additional event for this alarm. If one adds ",repeat:<int>" to the end of the alarm, it will be activated <int> times. A repeat token without an argument means that the alarm will be fired every time it is executed.

Example:

-control start:icount:100
The controller will fire a start event once 100 instructions are executed. No more events will be fired.

-control start:icount:100,repeat:3
The controller will fire a start event once 100 instructions are executed. Then it will "re-arm" the icount alarm, and fire a start event after an additional 100 instructions are executed. This is repeated again. So 3 'start' events are fired, one every 100 instructions. As opposed to using the ":count3" syntax, which will fire a single start event only once the condition of 100 instructions is reached for the 3rd time.

Omitting the repeat count:

-control start:icount:100,repeat
The controller will fire a start event every 100 instructions that are executed.

Multiple alarms in a chain

Each instance of the "-control" knob defines an "alarm chain" (see description below). An alarm chain is a sequence of alarms, separated by the ',' character, where each alarm is activated only after the previous one was fired. So the ',' character applies an order between the preceding alarm and the following alarm.

Example:

The controller will fire a start event for each thread which reaches icount 100. Then it will 'arm' the next alarm for that thread. So once a thread reaches the symbol foo for the 3rd time (after reaching icount 100), the controller will fire a stop event for that thread. This will be repeated twice, for each thread.

In addition, one can set the entire chain to start only after some other chain has finished, using the ",name:<name>" and the ",waitfor:<name>" syntax.

Pre-condition

In some cases, we want the event to be fired only after a certain condition. For example: we want the start event to be fired when a function foo is called from the function bar. This can be done with the pre-condition event type. This event doesn't actually call into the tool (i.e., fire an event) but only arms the next alarm in the chain.

Example:

The controller will arm the start event after calling the bar function. Now, when foo is called the start event will be fired and the region starts. The stop event will be triggered after 100000 instructions have been executed.

The alarm-chains are handled separately for each thread. The controller "arms" the <alarm-type> separately for each thread, and the alarm's value is counted separately for each thread. If the ":tid<int>" syntax is used, the preceding alarm is armed only for the thread with the defined thread ID.
If the ":global" token is used the alarm will be counted in all threads. Going back to the example above: -control start:icount:100 For each thread that reaches an icount of 100, the controller will fire a start event to the tool. The the event will be delivered with the tid of the triggering thread. -control start:icount:100:tid3 Only when thread with pin thread ID 3 will reach the icount of 100, the controller will fire the start event to the tool. Adding the ":bcast" syntax, will cause the controller to specify that the event is marked with the broadcast attribute (upon the arrival of the event). The tool can use this information to decide if to profile all the threads based on this event or only the triggering thread. The effect of adding the "bcast" syntax depends on the way the tool handles this information. In Addition, an alarm defined with bcast, which is notated with the "bcast" token, will 'arm' the following alarm in the chain, for all threads. ### The full definition of an alarm chain: -control <alarm-chain>[,repeat[:<int>]][,name:<name>][,waitfor:<name>] alarm-chain ::= <alarm>[,<alarm-chain>] alarm ::= <event>:<type>:<value>[:count<int>][:tid<int>][:bcast][:global] Values per type icount: int <name>+<offset> [image + offset] <name>                 [symbol/function name] ssc: hex (SSC markers are special no-OP instructions built into the binary) itext: hex (raw bytes of the instruction) isa-extension: string (the instructions extension as specified by XED) isa-category: string (the instructions category as specified by XED) int3: no argument (the int3 instruction is not really executed) magic: int.int (the instruction xchg ebx,ebx and input/output values are defined by the numbers) pcontrol: string (the second argument to the MPI_Pcontrol function called by the application) enter_func: string (bare function name - without namespace and params) exit_func: string (bare function name - without namespace and params) interactive: no argument (see below) timeout: int (number of seconds) Note: Using the address alarm with image and offset, the '+' sign is the key to distinguish between a function name and an image name. The image name can be full path or only the base name of the image. Using the interactive controller requires two windows: one to run sde with the application and the interactive controller, the second to specify the start/stop event by using the controller client. Here is an example: Application > sde –hsw –mix –control start:interactive:bcast,stop:interactive:bcast –interactive-file /tmp/ctrl_file –early-out -- myapp Main process pid: 32564   ** using file: /tmp/ctrl_file.32564   ** listening to port: 34106 Control Start event > python /misc/cntrl_client.py /tmp/ctrl_file.32564 Stop event > python /misc/cntrl_client.py /tmp/ctrl_file.32564 Alarms accuracy The controller will fire the event once the alarm reaches the triggering condition. The instruction count alarm specifically works in a basic block granularity. The event will be fired at the beginning of a basic-block, when the current icount + the number of instructions within the basic-block, exceed the value defined in the controller, for triggering the event. Some events were modified to be more accurate, and they are triggers in the specific instructions and not in the basic block: address, ssc, isa-extension, isa-category. Please note that using the controller for specifying region of interest for tracing with pinplay has special handling. The start and stop events always act on all the threads. 
Due to implementation limitations, there is a small delay between the exact instruction on which the event is fired and when it actually starts or stops the region.

Special alarm - Uniform: uniform:<period>:<length>:<count>

period: number of instructions before the next sampling starts.
length: number of instructions to sample.
count: number of samples.

The alarm can be used to define multiple regions based on instruction count.

Tokens

repeat[<int>]: number of iterations of the chain (when no number is provided - execute in an endless loop).
count<int>: delay firing the event to only the Nth execution of the alarm; the counting happens per thread unless global is also specified.
bcast: inform the tool that the event should be processed for all threads (this behavior is tool specific, and it is the tool's responsibility).
name<string>: the name of the chain; other chains can "wait" for this chain to finish before they start.
waitfor<string>: start the chain only after the chain with the specified name has finished.
global: count the alarm across all threads and not in a specific thread. This token cannot appear with the tid token.

Note: If no start event is defined by the user on the command line, a default start event is armed for each thread.

### Controller support in SDE tools

As mentioned above, the controller is a self-contained component within SDE. All the details above concern when and what events are fired by the controller. This section will discuss which SDE tools support using the controller, and how they handle events fired by the controller.

List of tools which support the controller API:

• analysis tools: mix, footprint, align-check, chip-check, debugtrace, dynamic-mask-profiler, icount, memory-area-cross, sse-checker
• tracing tool

Analysis tools: All analysis tools behave similarly: the tool collects data per thread, based on the arrival of a start event triggered by that thread. The tool will stop the collection of data for that thread when a stop event arrives for that thread. If the tool receives an event with "bcast" as the tid, rather than the triggering tid, the tool will apply this event on all threads. These events are effective right at the point of arrival.

Tracing tool: The tracing tool handles a global region. Meaning, at each given time, we are either in a region, and tracing all threads, or we are outside a region, and not tracing any thread. Another special behavior of the tracing tool is the transition into/out of a region. The effect of the event arrival isn't immediate. Following is a description of what happens in the tracing tool upon the arrival of a start event:

1. Once the thread which "caught" the event reaches the end of its current Basic-Block (BBL), it stops and calls all other threads to stop too.
2. Each one of the threads will stop at the end of its current BBL.
3. Once all threads are stopped, we change mode to "in-region", meaning we'll start generating the trace.

The same goes for stopping the trace generation. Please notice that the tracing tool ignores any event that arrives between steps 1 and 3. The controller behavior is orthogonal to the way the tool handles the events. So, the controller will continue firing events based on the alarms defined by the user on the command line. The tracing tool will ignore them if a previous event arrived and its processing hasn't been completed yet. By default, sde analysis tools follow the same behavior if you run an sde tool in addition to the tracing tool.
You can cancel this by adding '-pinplay-control 0'; this will keep the behavior of the sde analysis tools described above.

### Usage Examples

Here are some examples of command line usage for defining the alarms/events.

start at symbol foo, stop at symbol bar

>sde -control start:icount:1000000,stop:icount:100000 -- <app>
Fire a 'start' event after 1M instructions, fire a 'stop' after an additional 100K instructions. Equivalent to:
>sde -skip 1000000 -length 100000 -- <app>

start at symbol foo, only after ssc mark 0x11223344, stop at symbol bar (The precond means we don't fire any event, but 'arm' the address:foo alarm only after we reach ssc:11223344)

start at symbol foo, fire an event with 'bcast' as the triggering thread; stop at symbol bar, fire the event with 'bcast' as the triggering thread. * It's the tool's decision/responsibility to handle the 'bcast' differently than the default case.

start at symbol foo, stop at symbol bar but monitor only tid0 (The controller ignores the fact that tid1 reaches the symbol foo)

>sde -control precond:icount:200,uniform:800:500:5 -- <app>
start uniform sampling after 200 instructions. Equivalent to the old usage:
>sde -uniform-skip 200 -uniform-period 500 -uniform-length 500 -uniform-count 5

>sde -control precond:ssc:11223344,uniform:800:500:5 -- <app>
start uniform sampling after ssc mark 0x11223344.

start at symbol boo only after the sequence foo,bar,foo,bar

>sde -control start:enter_func:foo,stop:exit_func:foo -- <app>
start at the beginning of function foo, stop when exiting from function foo; for this call sequence: A->B->foo->C->D, capturing functions foo, C, D

>sde -control start:enter_func:foo,stop:exit_func:bar -- <app>
for this call sequence: A->bar->foo->C->bar, start at the beginning of function foo, stop when exiting the first bar function. * For the exit function event we always refer to the top-most caller.

>sde -control start:enter_func:foo,stop:exit_func:foo -- <app>
for this call sequence: A->foo->foo->foo->foo->C, start at the beginning of the first function foo, stop when exiting the first foo function
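To make the alarm grammar above concrete, here is a toy validator for a single alarm (not Intel code, just a sketch of the `<event>:<type>:<value>[:count<int>][:tid<int>][:bcast][:global]` shape; the real controller also accepts chains joined with ',' and the repeat/name/waitfor tokens):

```python
import re

# Toy check of one alarm against the grammar quoted in this document.
ALARM = re.compile(
    r'^(start|stop|precond)'
    r':(icount|address|ssc|itext|isa-extension|isa-category|int3'
    r'|magic|pcontrol|enter_func|exit_func|interactive|timeout)'
    r'(:[^:,]+)?'                              # optional <value> (int3 has none)
    r'((?::(?:count\d+|tid\d+|bcast|global))*)$'
)

for s in ('start:icount:100', 'stop:address:foo:count3:tid0', 'start:oops'):
    print(s, '->', 'ok' if ALARM.match(s) else 'rejected')
```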
# RMS 29/360 casing

#### jenget ##### Member

Hello, I am looking for an RMS 29/360 casing. If you have a seal disk and closures that would be an added bonus; however, I do not want the closures without the casing. Brand does not matter: Dr. Rocket, Aerotech, Rouse-Tech, etc. Any will work so long as they are the Aerotech style. I live near Dallas, TX. Can do cash if it's nearby; otherwise I have PayPal or can do a money order or something that works better for you, whatever works best. Thank you!

#### tbonerocketeer TRF Supporter

Mine is new, $55.25 plus shipping. Should be $6.10 for shipping.

#### pyrobob ##### Well-Known Member

Nice, a fellow Texan and near Dallas no less! Rest assured you're not the only one :wink:. We should connect sometime.
# Algebra Symbols

## Math Algebra Symbols

| Symbol | Name | Meaning / definition | Example |
| --- | --- | --- | --- |
| x | variable | unknown value to find | when 2x = 4, then x = 2 |
| ≡ | equivalence | identical to | |
| ≜ | equal by definition | equal by definition | |
| := | equal by definition | equal by definition | |
| ~ | approximately equal | weak approximation | 11 ~ 10 |
| ≈ | approximately equal | approximation | sin(0.01) ≈ 0.01 |
| ∝ | proportional to | proportional to | f(x) ∝ g(x) |
| ∞ | lemniscate | infinity symbol | |
| ≪ | much less than | much less than | 1 ≪ 1000000 |
| ≫ | much greater than | much greater than | 1000000 ≫ 1 |
| ( ) | parentheses | calculate expression inside first | 2 * (3+5) = 16 |
| [ ] | brackets | calculate expression inside first | [(1+2)*(1+5)] = 18 |
| { } | braces | set | |
| ⌊x⌋ | floor brackets | rounds number to lower integer | ⌊4.3⌋ = 4 |
| ⌈x⌉ | ceiling brackets | rounds number to upper integer | ⌈4.3⌉ = 5 |
| x! | exclamation mark | factorial | 4! = 1*2*3*4 = 24 |
| \| x \| | single vertical bar | absolute value | \| -5 \| = 5 |
| f (x) | function of x | maps values of x to f(x) | f (x) = 3x+5 |
| (f ∘ g) | function composition | (f ∘ g)(x) = f (g(x)) | f (x)=3x, g(x)=x-1, so (f ∘ g)(x)=3(x-1) |
| (a,b) | open interval | (a,b) = {x \| a < x < b} | x ∈ (2,6) |
| [a,b] | closed interval | [a,b] = {x \| a ≤ x ≤ b} | x ∈ [2,6] |
| ∆ | delta | change / difference | ∆t = t1 - t0 |
| ∆ | discriminant | Δ = b^2 - 4ac | |
| ∑ | sigma | summation - sum of all values in range of series | ∑ xi = x1+x2+…+xn |
| ∑∑ | sigma | double summation | |
| ∏ | capital pi | product - product of all values in range of series | ∏ xi = x1∙x2∙…∙xn |
| e | e constant / Euler's number | e = 2.718281828… | e = lim (1+1/x)^x , x→∞ |
| γ | Euler-Mascheroni constant | γ = 0.5772156649… | |
| φ | golden ratio | golden ratio constant | |
| π | pi constant | π = 3.141592654… | c = π·d = 2·π·r |

## Linear Algebra Symbols

| Symbol | Symbol Name | Meaning / definition | Example |
| --- | --- | --- | --- |
| · | dot | scalar product | a · b |
| × | cross | vector product | a × b |
| A⊗B | tensor product | tensor product of A and B | A ⊗ B |
| ⟨x, y⟩ | inner product | | |
| [ ] | brackets | matrix of numbers | |
| ( ) | parentheses | matrix of numbers | |
| \| A \| | determinant | determinant of matrix A | |
| det(A) | determinant | determinant of matrix A | |
| \|\| x \|\| | double vertical bars | norm | |
| A^T | transpose | matrix transpose | (A^T)ij = (A)ji |
| A† | Hermitian matrix | matrix conjugate transpose | (A†)ij = (A̅)ji |
| A* | Hermitian matrix | matrix conjugate transpose | (A*)ij = (A̅)ji |
| A^-1 | inverse matrix | A A^-1 = I | |
| rank(A) | matrix rank | rank of matrix A | rank(A) = 3 |
| dim(U) | dimension | dimension of matrix A | dim(U) = 3 |
# Deriving Maclaurin series for $\frac{\arcsin x}{\sqrt{1-x^2}}$.

Intrigued by this brilliant answer from Ron Gordon, I was attempting to find the Maclaurin series for $$f(x)=\frac{\arcsin x}{\sqrt{1-x^2}}=g(x)G(x)$$ with $g(x)=\frac{1}{\sqrt{1-x^2}}$ and $G(x)$ its primitive. So I attempted to multiply series, which yielded this: $$f(x)=\sum_{n=0}^{\infty}x^{2n+1} (-1)^n\sum_{k=1}^{n}\frac{1}{k+1} { -\frac{1}{2}\choose n-k}{ -\frac{1}{2}\choose k},$$ which I'm unable to simplify further. How to proceed? Or is this approach doomed?

• Maybe try expanding $\sqrt{1 - x^2}$ via the binomial theorem, and then dividing. – Bitrex Nov 2 '13 at 23:24
• Since $f(x)=\frac12(\arcsin^2x)'$, perhaps you should try to express this new function as a Taylor-Maclaurin series, and then differentiate it with regard to x. – Lucian Nov 23 '13 at 17:59

## 2 Answers

Note that $$\int_0^1\frac{dt}{1-x^2+x^2t^2}=\frac{1}{x\sqrt{1-x^2}}\arctan\left(\frac{x}{\sqrt{1-x^2}}\right)=\frac{\arcsin(x)}{x\sqrt{1-x^2}}$$ so we can write $$\frac{\arcsin(x)}{\sqrt{1-x^2}}=\sum_{n=0}^\infty\left(\int_0^1(1-t^2)^n\,dt\right)x^{2n+1}.$$ But $$\int_0^1(1-t^2)^n\,dt=\int_0^1\sum_{k=0}^n(-1)^k\binom{n}{k}t^{2k}\,dt=\sum_{k=0}^n\frac{(-1)^k\binom{n}{k}}{2k+1}=\frac{(2n)!!}{(2n+1)!!}.$$ Hence, $$\frac{\arcsin(x)}{\sqrt{1-x^2}}=\sum_{n=0}^\infty\frac{(2n)!!}{(2n+1)!!}x^{2n+1}.$$ Also see here for a proof of $\sum_{k=0}^n\frac{(-1)^k\binom{n}{k}}{2k+1}=\frac{(2n)!!}{(2n+1)!!}$.

• Hey, can you explain the transition from the first line to the second? I really looked hard at this, yet frustratingly enough I can't see how you did that. $\frac{\arcsin(x)}{\sqrt{1-x^2}} = \sum_{n=0}^\infty\left(\int_0^1(1-t^2)^n\,dt\right)x^{2n+1}.$ Where does the integral appear from and how does it relate to the previous one? – Victor Sep 27 '15 at 3:39
• @Victor Note that $\sum_{n=0}^\infty(1-t^2)^nx^{2n}$ is a geometric series, so in fact,$$\sum_{n=0}^\infty(1-t^2)^nx^{2n}=\frac1{1-(1-t^2)x^2}=\frac1{1-x^2+x^2t^2}$$ Now, take a look once again! – user91500 Sep 27 '15 at 11:37
• That is so slick! I know this might be really simple for you, but how did you come up with this? I basically mean the first integral and the geometric series part. – Victor Sep 27 '15 at 15:12
• @Victor Oh no, I think the problem is that you like to read from left to right, while you can also read from right to left! – user91500 Sep 28 '15 at 9:32
• In fact, you want to find the Maclaurin series for $\frac{\arcsin(x)}{\sqrt{1-x^2}}$; among all the things you may know about this function, one can be the following equality $$\frac{\arcsin(x)}{\sqrt{1-x^2}}=\frac{1}{\sqrt{1-x^2}}\arctan\frac{x}{\sqrt{1-x^2}}$$ So if you try to find an integral representation for $\frac{1}{\sqrt{1-x^2}}\arctan\left(\frac{x}{\sqrt{1-x^2}}\right)$, this can be useful, and really useful when the integrand also has a geometric series representation! – user91500 Sep 28 '15 at 9:33

Another but similar proof which does not need the summation formula above is this one. Start with defining $$I(t)= \frac{1}{\sqrt{1-x^2}}\arctan{\frac{x\sin{t}}{\sqrt{1-x^2}}}$$ Then by the Fundamental theorem of calculus $$\frac{\arcsin{x}}{\sqrt{1-x^2}}=I\left(\frac{\pi}{2}\right)-I(0)=\int_{0}^{\pi/2} \frac{\partial I}{\partial t}\mathrm{d}t=\int_{0}^{\pi/2}\frac{x\cos t}{1-x^2\cos^2 t }\mathrm{d}t$$ Ergo $$\frac{\arcsin{x}}{\sqrt{1-x^2}}=\sum_{n=0}^{\infty}x^{2n+1}\int_0^{\pi/2}\cos^{2n+1}\! t\,\mathrm{d}t$$ Denote $J_n:=\int_0^{\pi/2}\cos^{2n+1}\! t\,\mathrm{d}t$; by per partes (integration by parts) we have $$J_n = \int_0^{\pi/2}\cos^{2n+1}\! t\,\mathrm{d}t = 2n\int_0^{\pi/2}\cos^{2n-1} t\,\sin^2 t\,\mathrm{d}t=2n\left(J_{n-1}-J_{n}\right)$$ So $$J_n = \frac{2n}{2n+1}J_{n-1} =\frac{2n}{2n+1}\,\frac{2n-2}{2n-1}J_{n-2}=\dots = \frac{(2n)!!}{(2n+1)!!}J_0=\frac{(2n)!!}{(2n+1)!!}$$ since $J_0 = \int_0^{\pi/2}\cos\! t\,\mathrm{d}t =1$. Overall we get the desired result $$\frac{\arcsin{x}}{\sqrt{1-x^2}}=\sum_{n=0}^{\infty}\frac{(2n)!!}{(2n+1)!!}x^{2n+1}$$
Note: A similar useful integral would have been $$\int_{0}^{\pi/2}\frac{\mathrm{d}t}{1-x\sin t}$$
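A quick numerical sanity check of the final series (plain Python, 40 terms at x = 0.5):

```python
import math

x = 0.5
exact = math.asin(x) / math.sqrt(1 - x * x)

total, term = 0.0, x                # n = 0 term: (0!!/1!!) * x = x
for n in range(40):
    total += term
    term *= (2 * n + 2) / (2 * n + 3) * x * x   # ratio a_{n+1}/a_n
print(exact, total)                 # both ~ 0.6045997881
```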
# How much faster is "D-Wave Two" compared to its predecessor?

I don't have any specific task or algorithm in mind, so, depending on how they were tested: is there any research which shows just how much faster the D-Wave Two computer was (in terms of computation performance) than its predecessor (D-Wave One)?

## 2 Answers

As Troyer and Lidar saw no speed increase with the D-Wave 1 compared to classical computers, the D-Wave 2 benchmark figure reported in 2013 of 3600 times as fast as CPLEX (the best algorithm on a conventional machine) suggests the D-Wave 2 is 3600 times as fast as the D-Wave 1. However: • the results are in a pretty restricted set of parameters, so this may not be relevant for other parameters. (as an example, the benchmark figures for the D-Wave 2000Q only take constant factor performance gains into account) • the configuration of CPLEX may not compare directly to the classical computers used to benchmark the D-Wave 1

As far as I know the closest answer to your question for applications is given in the recent (still unpublished) work presented at the March meeting by Bibek Pokharel, where he compares graph 3-coloring instances on D-Wave Two, D-Wave 2X and D-Wave 2000Q, all other things staying reasonably equal. The short answer is that all the performance increase is essentially due to the possibility of running single anneals at shorter anneal times. (e.g. 1$\mu$s instead of 5$\mu$s indeed gives about a 5X performance increase in terms of the time-to-solution (TTS) metric. With respect to the 20$\mu$s of D-Wave Two the scaling is different.) I can also spoil that on Sherrington-Kirkpatrick instances we observed no substantial improvement from D-Wave Two to D-Wave 2000Q either. Results will be published soon in collaboration with Stanford.
# SIP-15: SNX liquidation mechanism

Status: Implemented | Type: Governance | Network: Ethereum | Created: 2019-08-20

## Simple Summary

This SIP proposes a liquidation mechanism for SNX collateral. Synths can be redeemed for staked SNX at a discount if the collateral ratio of a staker falls below the liquidation ratio for two weeks.

## Abstract

Create a liquidation mechanism that allows under-collateralised SNX collateral to be redeemed with Synths at a discounted price (a liquidation penalty fee). Instead of instant liquidations for positions below the liquidation ratio, a delay will be applied, so it will only be possible to liquidate SNX collateral if a staker's collateral ratio is not fixed by the time the delay expires.

## Motivation

In a crypto-backed stablecoin system such as Synthetix, the issued stablecoin (synth) tokens should represent claims on the underlying collateral. The current design of the Synthetix system doesn't allow holders of synths to directly redeem the underlying collateral unless they are burning and unlocking their own SNX collateral. The value of the synths issued in the Synthetix system is derived from incentivising minters to be over-collateralised on the debt they have issued and from other economic incentives such as exchange fees and SNX rewards.

If a minter's collateral value falls below the required collateral ratio, there is no direct penalty for being under-collateralised, even in the unlikely event where the value of their locked collateral (SNX) is less than the debt they owe. Stakers and synth holders should be able to liquidate under-collateralised minters at a discounted price to restore the network collateral ratio. Liquidation encourages minters to remain above the required collateral ratio and creates strong economic incentives for stakers to burn synths to restore their collateral ratio if they are at risk of being liquidated.

Liquidation gives SNX holders and issuers of synths these benefits:

1. Provides intrinsic value to Synths by enabling direct redemption into the underlying collateral (SNX).
2. Any Synth holder can ensure the stability of the system by liquidating stakers below the liquidation ratio, restoring the network collateral ratio.
3. Incentivises a healthy network collateral ratio by providing a discount on the liquidated SNX collateral.

## Specification

Liquidation Target Ratio

Liquidations are capped at the Liquidation Target ratio, which is the current Issuance Ratio. This is the ratio at which SNX collateral can issue debt to provide the system with sufficient capital to buffer price shocks, and which active stakers are required to maintain to claim fees and rewards. Modeling shows that at a liquidation ratio of 200%, if liquidators were to repay and fix the staker's collateral ratio to 800%, then about 44% of the staker's SNX collateral will be liquidated to repair the under-collateralised position.

The amount of sUSD required to fix a staker's collateral to the target issuance ratio is calculated based on the formula:

• V = Value of SNX
• D = Debt Balance
• t = Target Collateral Ratio
• S = Amount of sUSD debt to burn
• P = Liquidation Penalty %

$S = \frac{t \cdot D - V}{t - (1 + P)}$

Liquidation Ratio

The liquidation ratio to initiate the liquidation process will be set at 200% initially and adjustable by an SCCP. This ensures there is a sufficient buffer for the staker's collateral. The lower bound for the liquidation ratio would be 100% plus any liquidation penalty to pay for liquidations.
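For illustration only (not part of the SIP), plugging the test-case numbers used later into the formula above:

```python
def susd_to_burn(snx_value, debt, target_ratio, penalty):
    # S = (t*D - V) / (t - (1 + P)), as defined above
    return (target_ratio * debt - snx_value) / (target_ratio - (1 + penalty))

# Alice from the test cases: 800 SNX at $1, 533.33 sUSD of debt,
# target c-ratio 800% (t = 8), 10% penalty.
print(round(susd_to_burn(800, 533.33, 8, 0.10), 2))   # ~502.41 sUSD
```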
Liquidation Penalty

The liquidation penalty is paid to the liquidator as a bonus on top of the SNX amount being redeemed. For example, given the liquidation penalty is 10%, when 100 SNX is liquidated, then 110 SNX is transferred to the liquidator. The maximum liquidation penalty will be capped at 25%.

### Liquidations Contract

Liquidations contract to mark an SNX staker for liquidation, with a time delay to allow the staker to fix their collateral ratio.

Parameters

• Liquidation Delay: Time before liquidation of under-collateralised collateral.
• Liquidation Penalty: % penalty on SNX collateral liquidated.
• Liquidation Ratio: Collateral ratio at which liquidation can be initiated.

Interface

    pragma solidity >=0.4.24;

    interface ILiquidations {
        // Views
        function isOpenForLiquidation(address account) external view returns (bool);

        // Mutative Functions
        // Restricted: used internally to Synthetix contracts

        // owner only
        function setLiquidationDelay(uint time) external;
        function setLiquidationRatio(uint liquidationRatio) external;
        function setLiquidationPenalty(uint penalty) external;
    }

Events

• AccountFlaggedForLiquidation(address indexed account, uint deadline)
• AccountRemovedFromLiquidation(address indexed account, uint time)

### Synthetix contract

Updates to the Synthetix contract interface

    pragma solidity >=0.4.24;

    interface ISynthetix {
        // Mutative Functions
        function liquidateDelinquentAccount(address account, uint susdAmount) external returns (bool);
    }

### Escrowed SNX

Currently escrowed SNX tokens in the RewardsEscrow will require a planned upgrade to the RewardsEscrow contract as per SIP-60 to be included as part of the redeemable SNX when liquidating SNX collateral. The escrowed SNX tokens will be transferred to the liquidator and appended to the RewardsEscrow. Mitigating this issue is the fact that in order to unlock all transferrable SNX a minter would have to repay all of their debt and re-issue debt at the issuance ratio (currently 800%).

### Insurance fund for liquidations

In the scenario where a staker's collateral ratio falls below 100% + liquidation penalty (i.e. 110%), the staker's collateral will not fully cover the repayment of all their debt and the liquidation penalty. Liquidators should still be able to partially liquidate the debt until there is not enough collateral to repay all the remaining debt and also provide the liquidation penalty incentive. In the next iteration of Synthetix's liquidation mechanism, an SNX insurance fund will be set up to cover under-collateralised liquidations, where any shortfall in SNX collateral will come out of the insurance fund to pay liquidators. This would allow liquidators to repay all the debt of stakers who have no remaining SNX collateral after being liquidated.

## Rationale

The reasoning behind implementing a direct redemption liquidation mechanism with a delay function is to provide a mechanism to purge positions for which the primary incentives have failed. Under most circumstances we have observed that the majority of stakers maintain a healthy ratio to ensure they can claim staking rewards, or in extreme cases they exit the system by burning all debt and selling their SNX. Even in the case of a major price shock, the majority of wallets have more collateral value than their Synth debt, so the optimal strategy is still to burn debt and recover the collateral. In the case where this does not happen, a fallback incentive to remove these under-collateralised positions is required.
However, given these wallets are likely to be edge cases so long as the collateral ratio remains above 500% (currently 800%), it is important to not open an attack vector that would enable a malicious party to attempt to manipulate the price of SNX to liquidate positions. Due to the time delay implemented in the mechanism, the cost of attack far exceeds the potential reward, making it unlikely a rational actor would pursue this strategy. The tension in this implementation is therefore between the time it takes to remove an under-collateralised position and the risk that liquidations are used as an attack vector against stakers. The default thresholds and delays implemented err on the side of protecting stakers and can therefore be reduced over time if liquidations are deemed too inefficient.

The rationale for these liquidation mechanisms is:

• Time Delay: A time delay increases the cost to a malicious actor who attempts to manipulate the SNX price to trigger liquidations and reduces the risk of black swan events.
• Liquidation Penalty: A liquidation penalty payable to the liquidator provides incentives for liquidators and for minters to fix their collateral ratio. Liquidators can burn synths and claim SNX at a discounted rate.
• Partial liquidations: Partial liquidation of under-collateralised SNX reduces the risk of minters losing all their staked collateral from liquidation and allows a proportion of their debt to be paid back to fix their collateral ratio. Multiple liquidators can benefit from burning any amount of the sUSD synth until the c-ratio is above the liquidation target ratio.
• Liquidation target ratio: A liquidation target ratio works alongside partial liquidations, allowing a proportion of SNX to be liquidated until the staker's collateral ratio is above the liquidation target ratio. At this ratio there would be enough buffer in the collateral to again fully back the debt issued.

## Test Cases

Given Alice has issued synths with 800 SNX, with a debt balance of 533.33 sUSD, and now has a collateralised ratio of 150%, and Bob acts as a liquidator with sUSD, given the following preconditions:

• liquidation ratio is 200%
• liquidation penalty is 10%
• and liquidation delay is set to 2 weeks.

When
• Bob flags Alice's address for liquidation
Then
• ✅ It succeeds and adds a liquidation entry for Alice as flagged
• ✅ It sets the deadline as block.timestamp + the liquidation delay of 2 weeks.
• ✅ It emits an event AccountFlaggedForLiquidation(account, deadline)

When
• Bob or anyone else tries to flag Alice's address for liquidation again
Then
• ❌ It fails, since Alice's account is already flagged for liquidation

Given
• Alice does not fix her c-ratio by burning sUSD back to the issuance target ratio
• and two weeks have elapsed
• and SNX is priced at USD 1.00
When
• Bob calls liquidateSynthetix and burns 100 sUSD to liquidate SNX
Then
• ✅ Bob's sUSD balance is reduced by 100 sUSD, and Alice's SNX is transferred to Bob's address. The amount of SNX transferred is:
• 100 sUSD / Price of SNX = 100 sUSD / \$1 = 100 SNX redeemed + liquidation penalty 100 * 10% = 110 SNX transferred to Bob.
• Alice's debt is reduced by 100 sUSD to 433.33 sUSD and she has 690 SNX remaining.

Given
• After Bob liquidates 100 sUSD worth of SNX, Alice's collateral ratio, at about 159.2%, is still below the issuance target ratio.
When
• Chad tries to liquidate Alice's SNX collateral with 50 sUSD
• and the resulting collateral ratio, after reducing the debt by 50 sUSD, is less than the target issuance ratio
Then
• ✅ Chad's sUSD balance is reduced by the 50 sUSD
• 50 SNX + 10% SNX = 55 SNX is transferred to Chad
• Alice's debt is reduced by a further 50 sUSD to 383.33 sUSD and she has 635 SNX remaining.

When
• Bob now tries liquidating a larger amount of sUSD (1000 sUSD) against Alice's debt.
• 1000 sUSD takes Alice's collateral ratio above the issuance target ratio (800%)
Then
• ✅ Bob's liquidation transaction only partially liquidates the 1000 sUSD to reach the 800% target
• ✅ Alice's liquidation entry is removed and isOpenForLiquidation returns false
• ✅ An event is emitted that the liquidation flag is removed for her address

When
• Alice has been flagged for liquidation
• and the price of SNX increases so her collateral ratio is now above the issuance target ratio
• and she calls checkAndRemoveAccountInLiquidation
Then
• ✅ Her account is removed from liquidation
• and the liquidation entry is removed

When
• Alice has been flagged for liquidation
• and the price of SNX doesn't change, so she is still below the issuance target ratio
• and she calls checkAndRemoveAccountInLiquidation
Then
• ❌ it fails

When
• Alice has not been flagged for liquidation
• and she calls checkAndRemoveAccountInLiquidation
Then
• ❌ it fails and reverts with error 'Account has no liquidation set'

When
• Alice has been flagged for liquidation
• and the liquidation deadline has passed
• and her collateral ratio is above the issuance target ratio
• and Bob tries to liquidate Alice by calling liquidateSynthetix()
Then
• ❌ It fails and reverts with account not open for liquidation.
• and no sUSD or debt is burned by Bob
• and no SNX is liquidated and transferred to Bob.
• Alice's account remains flagged for liquidation.

When
• Alice has been flagged for liquidation
• and she burns sUSD debt to fix her collateral ratio above the issuance target ratio
Then
• ✅ Her account is removed from liquidation within the burn synths transaction

## Configurable Values (Via SCCP)

Please list all values configurable via SCCP under this implementation.

• liquidationDelay
• liquidationRatio
• liquidationPenalty

## Implementation

https://github.com/Synthetixio/synthetix/releases/tag/v2.22.4

Copyright and related rights waived via CC0.
Leigh-Strassler deformation of N=4 SYM Theory

### Introduction

There is an enormous body of literature on the AdS-CFT correspondence. The first and the most well-studied example in this correspondence is the one that relates type IIB string theory on $AdS_5\times S^5$ and the four-dimensional superconformal field theory (SCFT), the $\mathcal{N}=4$ supersymmetric Yang-Mills theory. The SCFT has maximal supersymmetry in four dimensions and thus is somewhat unrealistic. Theories with $\mathcal{N}=1$ supersymmetry have a richer behaviour and show phases that are confining, Coulomb-like, Higgs-like and so on. However, non-supersymmetric Yang-Mills theories such as pure QCD are not conformal; there is a dynamically generated mass in the quantum theory. The scale of the mass is $\Lambda_{\textrm{QCD}}$. Needless to say, people are pursuing more general non-conformal situations, and this is usually called the gauge-gravity correspondence.

It is known from the work of Leigh and Strassler that the $\mathcal{N}=4$ supersymmetric Yang-Mills theory has two marginal deformations that preserve $\mathcal{N}=1$ supersymmetry; let us call this theory the LS theory. The marginality of the deformation implies that the LS theory remains conformal and thus provides the CFT side of the AdS-CFT correspondence.

### The gravity dual

The LS theory is expected to be dual to type IIB string theory compactified on $AdS_5\times X^{(5)}$, where $X^{(5)}$ is expected to be a 'deformation' of $S^5$ preserving a $U(1)$ isometry. The precise details of $X$ are not known beyond this. However, when one of the deformations, the so-called beta-deformation, is turned on, the answer has been provided by Lunin and Maldacena. To third order in perturbations, Aharony, Kol and Yankielowicz have worked out the details of the fields that get non-zero background values due to the two marginal deformations in the CFT. However, what one is looking for is the finite, all-orders version.

### The SCFT

The LS theory has the same spectrum as that of $\mathcal{N}=4$ SYM theory: it contains one $\mathcal{N}=1$ vector multiplet and three chiral multiplets that we will denote by $\Phi_1,\Phi_2,\Phi_3$, each of which transforms in the adjoint of $SU(N)$ (not $U(N)$). The superpotential for $\mathcal{N}=4$ SYM theory is

(1) \begin{align} W_0=h\ \mathrm{Tr}\left(\Phi_1 \big[\Phi_2,\Phi_3\big]\right)=h\ \mathrm{Tr}\Big(\Phi_1 \Phi_2\Phi_3-\Phi_1 \Phi_3\Phi_2\Big) \end{align}

The superpotential for the LS theory is of the form

(2) \begin{align} W=W_0 + \frac1{3!}c^{ijk}\ \mathrm{Tr}\left(\Phi_i \Phi_j\Phi_k\right)\ , \end{align}

where $c^{ijk}$ is totally symmetric in its indices. It is useful to think of the three chiral fields as complex coordinates on $\mathbb{C}^3$. The symmetric tensor $c^{ijk}$ has 10 independent components; using simple linear redefinitions acting on the fields ($SL(3,\mathbb{C})$, with 8 parameters, acting on the three fields), we find only $10-8=2$ non-trivial deformations. These are, effectively, the two marginal deformations of Leigh and Strassler. Note that we need $N>2$, as the term that we add to $W_0$ vanishes for $SU(2)$.

### Anomalous dimensions and the spin-chain

In the context of $\mathcal{N}=4$ SYM, it was realised that the spectrum of (planar) anomalous dimensions of single-trace chiral operators can be obtained as the spectrum of a spin-chain, the length of the spin-chain being related to the number of fields appearing in the chiral operator. This has been generalised in several ways.
Mathematics Te Tari Pāngarau me te Tatauranga Department of Mathematics & Statistics

## Upcoming seminars in Mathematics

### Russell Higgs

School of Mathematics and Statistics, University College Dublin

Date: Tuesday 5 March 2019 Time: 11:00 a.m. Place: Room 241, 2nd floor, Science III building

This will be a survey talk discussing three open conjectures concerning the degrees of irreducible projective representations of finite groups. First a review of ordinary representations will be given with illustrative examples, before considering projective representations. A projective representation of a finite group $G$ with 2-cocycle $\alpha$ is a function $P:G \rightarrow GL(n, \mathbb{C})$ such that $P(x)P(y) = \alpha(x, y)P(xy)$ for all $x, y\in G$, where $\alpha(x, y)\in \mathbb{C}^*.$ One of the conjectures is whether one can conclude that $G$ is solvable given that the degrees of all its irreducible projective representations are equal.

Pattern formation in reaction-diffusion systems on time-evolving domains

### Robert van Gorder

Department of Mathematics and Statistics, University of Otago

Date: Tuesday 12 March 2019 Time: 11:00 a.m. Place: Room 241, 2nd floor, Science III building

The study of instabilities leading to spatial patterning for reaction-diffusion systems defined on growing or otherwise time-evolving domains is complicated, since there is a strong dependence of spatially homogeneous base states on time, and the resulting structure of the linearized perturbations used to determine the onset of stability is inherently non-autonomous. We obtain fairly general conditions for the onset and persistence of diffusion-driven instabilities in reaction-diffusion systems on manifolds which evolve in time, in terms of the time-evolution of the Laplace-Beltrami spectrum for the domain and the growth rate functions, which result in sufficient conditions for diffusive instabilities phrased in terms of differential inequalities. These conditions generalize a variety of results known in the literature, such as the algebraic inequalities commonly used as sufficient criteria for the Turing instability on static domains, and approximate or asymptotic results valid for specific types of growth, or for specific domains.

Folding, surprise and playing games: deep learning at the CS department

### Lech Szymanski

Department of Computer Science, University of Otago

Date: Tuesday 19 March 2019 Time: 11:00 a.m. Place: Room 241, 2nd floor, Science III building

This talk will give an overview of the research done by the deep learning group at the Department of Computer Science. Specifically, I will talk about the work in three different areas: theoretical analysis of deep architectures using folding transformations, reinforcement learning with surprise, and teaching a deep network to play Atari games without catastrophic forgetting.

Projective Characters of Metacyclic p-Groups

### Conor Finnegan

University College Dublin

Date: Tuesday 26 March 2019 Time: 11:00 a.m. Place: Room 241, 2nd floor, Science III building

The projective characters of a group provide us with important information regarding the structure and properties of the group. The purpose of this research was to find the projective character tables of metacyclic p-groups. This aim was achieved for metacyclic p-groups of positive type, but not in the negative type case.
In this talk, I will give an introductory overview of some of the fundamental methods and results in projective representation theory. I will then discuss the application of these methods to metacyclic p-groups of positive type, using the previously understood abelian case as an example.
# Evaluate $\int_0^\pi \frac{\sin^2 nx}{\sin^2 x}dx$

How do I evaluate this definite integral? I'm not even getting the slightest idea of how to approach this. I tried converting it into cos using the double angle identity, but that didn't help. $$\int_0^\pi \frac{\sin^2 nx}{\sin^2 x}dx$$

$$\frac{\sin nx}{\sin x}=\frac{e^{inx}-e^{-inx}}{e^{ix}-e^{-ix}} =\sum_{r=1}^ne^{i(n+1-2r)x}$$ $$\frac{\sin^2 nx}{\sin^2 x} =n+\sum_{(r,s)\in A}e^{i(2n+2-2r-2s)x} =n+\sum_{(r,s)\in A}\cos(2n+2-2r-2s)x$$ where $A$ is the set of $(r,s)$ with $r$, $s\in\{1,2,\ldots,n\}$ with $r+s\ne n+1$. The details of this are not important as $\int_0^\pi \cos mx\,dx=0$ for nonzero integers $m$. Therefore $$\int_0^\pi\frac{\sin^2 nx}{\sin^2 x}\,dx=n\pi.$$

Let $$I_{n}=\int_{0}^{\pi}\frac{\sin^2nx}{\sin^2x} dx$$ $$I_{n+1}-2I_{n}+I_{n-1}=0$$ (a derivation of this recurrence is sketched below). Thus $$I_{1},I_{2},I_{3},...$$ are in an A.P. $$I_{1}=\pi$$ $$I_{2}=2\pi$$ So, $$I_{n}=n\pi$$

Hint: De Moivre's Formula: https://en.wikipedia.org/wiki/De_Moivre%27s_formula turns this integral into a rational function of sines and cosines. Then you can use Weierstrass' tangent half-angle substitution: https://en.wikipedia.org/wiki/Tangent_half-angle_substitution.
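The recurrence quoted in the second answer can be justified in a few lines. Using $\sin^2\theta=\tfrac{1}{2}(1-\cos 2\theta)$ and the sum-to-product identity $\cos A+\cos B=2\cos\tfrac{A+B}{2}\cos\tfrac{A-B}{2}$:

$$\sin^2(n+1)x+\sin^2(n-1)x-2\sin^2 nx =\cos 2nx-\tfrac{1}{2}\big(\cos 2(n+1)x+\cos 2(n-1)x\big) =\cos 2nx\,(1-\cos 2x)=2\cos 2nx\,\sin^2 x.$$

Dividing by $\sin^2 x$ and integrating over $[0,\pi]$ gives, for $n\ge 1$,

$$I_{n+1}+I_{n-1}-2I_n=\int_0^\pi 2\cos 2nx\,dx=0.$$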
# Calculation of value of $\int_{0}^{n}\cos\left(\lfloor x \rfloor\cdot \{x\}\right)dx\;,$

$(1)$ Calculation of the value of $\displaystyle \int_{0}^{n}\cos\left(\lfloor x \rfloor\cdot \{x\}\right)dx\;,$ where $\lfloor x \rfloor$ is the floor function of $x$, $\{x\} = x-\lfloor x \rfloor,$ and $n$ is a positive integer.

$(2)\;$ Calculation of the least positive integer $n$ for which $\displaystyle \int_{1}^{n}\lfloor x \rfloor \cdot \lfloor \sqrt{x} \rfloor dx>60\;,$ where $\lfloor x \rfloor$ is the floor function of $x$

$\bf{My\; Try::}$ For the $(1)$ one: We can write $\displaystyle \int_{0}^{n}\cos\left(\lfloor x \rfloor\cdot \{x\}\right)dx = \displaystyle \int_{0}^{n}\cos\left(\lfloor x \rfloor\cdot \left(x-\lfloor x \rfloor\right)\right)dx$

So $\displaystyle \int_{0}^{n}\cos\left(\lfloor x \rfloor\cdot \left(x-\lfloor x \rfloor\right)\right)dx = \int_{0}^{1}\cos(0)dx+\int_{1}^{2}\cos(1\cdot (x-1))dx+\int_{2}^{3}\cos(2\cdot(x-2))dx+\cdots+\int_{n-1}^{n}\cos((n-1)(x-(n-1)))dx$

Now how can I solve it from here? Help me, thanks.

$\bf{My\; Try}::$ For the $(2)$ one: We can write $\displaystyle \int_{1}^{n}\lfloor x \rfloor \cdot \lfloor \sqrt{x} \rfloor dx = \int_{1}^{2}1\cdot 1\,dx+\int_{2}^{3}2\cdot 1\,dx+\int_{3}^{4}3\cdot 1\,dx+\cdots+\int_{n-1}^{n}(n-1)\lfloor \sqrt{n-1}\rfloor\, dx$

Now how can I solve it from here? Help me, thanks.

• Have you tried substituting $y=k(x-k)$ and evaluating the integrals? Nov 2, 2014 at 16:53

Hint: For $k \geq 1$, $$\int_k^{k+1} \cos(k(x-k)) \ dx = \int_0^1 \cos(kx) \ dx = \frac{1}{k} \sin(k)$$
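Following the hint, both sums close up (a sketch, with $n$ a positive integer throughout). For $(1)$, the $k=0$ piece contributes $\int_0^1 \cos(0)\,dx = 1$, so

$$\int_{0}^{n}\cos\left(\lfloor x \rfloor\cdot \{x\}\right)dx = 1+\sum_{k=1}^{n-1}\frac{\sin k}{k}.$$

For $(2)$, each piece is constant, so $\displaystyle\int_{1}^{n}\lfloor x \rfloor \lfloor \sqrt{x} \rfloor\,dx=\sum_{k=1}^{n-1}k\lfloor\sqrt{k}\rfloor$. The partial sums are $1+2+3=6$ (through $k=3$), $6+2(4+5+6+7)=50$ (through $k=7$), and $50+2\cdot 8=66$ (through $k=8$). Since $50\le 60<66$, the least positive integer is $n=9$.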
## C Specification

The VkRenderingFragmentShadingRateAttachmentInfoKHR structure is defined as:

// Provided by VK_KHR_dynamic_rendering with VK_KHR_fragment_shading_rate
typedef struct VkRenderingFragmentShadingRateAttachmentInfoKHR {
    VkStructureType    sType;
    const void*        pNext;
    VkImageView        imageView;
    VkImageLayout      imageLayout;
    VkExtent2D         shadingRateAttachmentTexelSize;
} VkRenderingFragmentShadingRateAttachmentInfoKHR;

## Members

• sType is the type of this structure.
• pNext is NULL or a pointer to a structure extending this structure.
• imageView is the image view that will be used as a fragment shading rate attachment.
• imageLayout is the layout that imageView will be in during rendering.
• shadingRateAttachmentTexelSize specifies the number of pixels corresponding to each texel in imageView.

## Description

This structure can be included in the pNext chain of VkRenderingInfoKHR to define a fragment shading rate attachment. If imageView is VK_NULL_HANDLE, or if this structure is not specified, the implementation behaves as if a valid shading rate attachment was specified with all texels specifying a single pixel per fragment.

Valid Usage

• If imageView is not VK_NULL_HANDLE, layout must be VK_IMAGE_LAYOUT_GENERAL or VK_IMAGE_LAYOUT_FRAGMENT_SHADING_RATE_ATTACHMENT_OPTIMAL_KHR
• If imageView is not VK_NULL_HANDLE, it must have been created with VK_IMAGE_USAGE_FRAGMENT_SHADING_RATE_ATTACHMENT_BIT_KHR
• If imageView is not VK_NULL_HANDLE, shadingRateAttachmentTexelSize.width must be a power of two value
• If imageView is not VK_NULL_HANDLE, shadingRateAttachmentTexelSize.width must be less than or equal to maxFragmentShadingRateAttachmentTexelSize.width
• If imageView is not VK_NULL_HANDLE, shadingRateAttachmentTexelSize.width must be greater than or equal to minFragmentShadingRateAttachmentTexelSize.width
• If imageView is not VK_NULL_HANDLE, shadingRateAttachmentTexelSize.height must be a power of two value
• If imageView is not VK_NULL_HANDLE, shadingRateAttachmentTexelSize.height must be less than or equal to maxFragmentShadingRateAttachmentTexelSize.height
• If imageView is not VK_NULL_HANDLE, shadingRateAttachmentTexelSize.height must be greater than or equal to minFragmentShadingRateAttachmentTexelSize.height
• If imageView is not VK_NULL_HANDLE, the quotient of shadingRateAttachmentTexelSize.width and shadingRateAttachmentTexelSize.height must be less than or equal to maxFragmentShadingRateAttachmentTexelSizeAspectRatio
• If imageView is not VK_NULL_HANDLE, the quotient of shadingRateAttachmentTexelSize.height and shadingRateAttachmentTexelSize.width must be less than or equal to maxFragmentShadingRateAttachmentTexelSizeAspectRatio

Valid Usage (Implicit)

• sType must be VK_STRUCTURE_TYPE_RENDERING_FRAGMENT_SHADING_RATE_ATTACHMENT_INFO_KHR
• If imageView is not VK_NULL_HANDLE, imageView must be a valid VkImageView handle
• imageLayout must be a valid VkImageLayout value
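A minimal usage sketch (not part of the specification; shadingRateView and the 16×16 texel size are assumptions for illustration) showing how the structure is chained into VkRenderingInfoKHR:

// Assumes shadingRateView was created elsewhere with
// VK_IMAGE_USAGE_FRAGMENT_SHADING_RATE_ATTACHMENT_BIT_KHR.
VkRenderingFragmentShadingRateAttachmentInfoKHR fsrInfo = {
    .sType = VK_STRUCTURE_TYPE_RENDERING_FRAGMENT_SHADING_RATE_ATTACHMENT_INFO_KHR,
    .pNext = NULL,
    .imageView = shadingRateView,
    .imageLayout = VK_IMAGE_LAYOUT_FRAGMENT_SHADING_RATE_ATTACHMENT_OPTIMAL_KHR,
    // Must satisfy the min/max and aspect-ratio limits listed above.
    .shadingRateAttachmentTexelSize = { .width = 16, .height = 16 },
};

VkRenderingInfoKHR renderingInfo = {
    .sType = VK_STRUCTURE_TYPE_RENDERING_INFO_KHR,
    .pNext = &fsrInfo,  // attach the shading rate attachment info
    // renderArea, layerCount, colorAttachmentCount, pColorAttachments, ...
    // are filled in as for any dynamic rendering pass.
};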
Thread: point of intersection of two vectors

1. point of intersection of two vectors

Hi all, I have the lines:
A: x = 1 + 2t, y = 2 + 3t, z = 3 + 4t
B: x-2 = (y-4)/2 = (-z -1)/4

I am having trouble determining the coordinates of the point of intersection of A and B. Well, I found B in parametric form first: x = 2 + t, y = 4 + 2t, z = -4 - 4t, and then I equated each component... e.g. 1 + 2t = 2 + t; 2 + 3t = 4 + 2t. The t values differed for each component and thus I thought there was no point of intersection. However, in the answer there is a point of intersection at (1,2,3). So I am lost here...
ArTiCk

2. Originally Posted by ArTiCK
I have the lines:
A: x = 1 + 2t, y = 2 + 3t, z = 3 + 4t
B: x-2 = (y-4)/2 = (-z -1)/4
x = 2 + t, y = 4 + 2t, z = -4 - 4t

Mistake #1. Always use different parameters for different lines. So B: x=2+s, y=4+2s, & z=-4-4s. Solve for t & s.
1+2t=2+s
2+3t=4+2s
(Be sure to check to see if the solution works for the z value!)
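For completeness, a worked check. Note that the parametrisation of z in the original post appears to be off: from $(-z-1)/4 = s$ one gets $z = -1 - 4s$, not $z = -4 - 4s$. With that correction, solving
$$1+2t=2+s,\qquad 2+3t=4+2s$$
gives $t=0$, $s=-1$, and the z values then agree: $z_A = 3+4(0)=3$ and $z_B = -1-4(-1)=3$. So the lines meet at $(1,2,3)$, matching the book's answer.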
Characterizing the Slice Chart of a Subbundle

A rank-$k$ subbundle $F$ of a rank-$n$ smooth vector bundle $E$ is a vector bundle which is smoothly embedded in $E$, whose intersection with a given fiber of $E$ is a $k$-dimensional subspace of that fiber. Can we show that, since the subbundle is a smooth embedded submanifold, it has an atlas of slice charts $(\pi^{-1}(U_\alpha),\Phi_\alpha)$ in the ambient bundle satisfying $$\Phi_\alpha(V)=(\widehat{\pi(V)},V^1,\cdots,V^n)$$ where $V^{k+1}=\cdots=V^n=0$ if and only if $V\in F$?
# Thread: Domain and signum of the following function

1. ## Domain and signum of the following function

Hello, I was having some trouble with this function here... $\frac {\log{|x^2-x+2|}}{\sqrt x}$

Domain: We have an absolute value in the logarithm, so all we need to do is the following:
$x^2-x+2 \ne 0$
$x \ge 0$
$\sqrt x \ne 0$
The last two can be merged into... $x > 0$
The first one has no solution, which means I have no problem up there, so the domain is: $(0;+ \infty)$

Now I have to study the signum... $\frac {\log{|x^2-x+2|}}{\sqrt x} \ge 0$
I was used to distinguishing the two cases and studying two separate functions when I have the absolute value. But the stuff inside the absolute value is always positive, so the function with or without the absolute value is pretty much the same thing? So studying the function WITHOUT the absolute value is the same thing? Thanks!

2. ## Re: Domain and signum of the following function

I agree with your domain. As for the signum, what you must study is the following two cases: $0<|x^{2}-x+2|<1,$ and $1\le|x^{2}-x+2|.$ The logarithm function is negative in the first case, and non-negative in the second. When do these cases occur? That is, for which values of $x$ do these cases occur (if they occur at all)?

3. ## Re: Domain and signum of the following function

Well, looks to me that it's never in between 0 and 1 and it is always $\ge 1$.

4. ## Re: Domain and signum of the following function

Originally Posted by dttah
Well, looks to me that it's never in between 0 and 1 and it is always $\ge 1$.

Yes. In fact, $x^{2}-x+2=x^{2}-x+\frac{1}{4}-\frac{1}{4}+2$ $=\left(x-\frac{1}{2}\right)^{\!\!2}+\frac{7}{4}$ $\ge\frac{7}{4} \quad \forall x\in\mathbb{R}.$ Conclusion?

5. ## Re: Domain and signum of the following function

That's always positive so the absolute value is well.. useless? :P

6. ## Re: Domain and signum of the following function

Originally Posted by dttah
That's always positive so the absolute value is well.. useless? :P

Well, it is, but I was talking about what conclusion you can achieve towards solving the problem. What can you conclude about the signum of the function?

7. ## Re: Domain and signum of the following function

Always positive!
# Equivalence of Definitions of Convergence in Normed Division Rings

## Theorem

Let $\struct {R, \norm {\, \cdot \,} }$ be a normed division ring. Let $\sequence {x_n}$ be a sequence in $R$. The following definitions of the concept of Convergent Sequence in Normed Division Ring are equivalent:

### Definition 1

The sequence $\sequence {x_n}$ converges to $x \in R$ in the norm $\norm {\, \cdot \,}$ if and only if: $\forall \epsilon \in \R_{>0}: \exists N \in \R_{>0}: \forall n \in \N: n > N \implies \norm {x_n - x} < \epsilon$

### Definition 2

The sequence $\sequence {x_n}$ converges to $x \in R$ in the norm $\norm {\, \cdot \,}$ if and only if: $\sequence {x_n}$ converges to $x$ in the metric induced by the norm $\norm {\, \cdot \,}$

### Definition 3

The sequence $\sequence {x_n}$ converges to $x \in R$ in the norm $\norm {\, \cdot \,}$ if and only if: the real sequence $\sequence {\norm {x_n - x} }$ converges to $0$ in the reals $\R$

## Proof

### Definition 1 iff Definition 2

By definition, the metric induced by the norm $\norm {\, \cdot \,}$ is the mapping $d: R \times R \to \R_{\ge 0}$ defined as: $\map d {x, y} = \norm {x - y}$

By definition of a convergent sequence in a metric space, $\sequence{x_n}$ converges to $x \in R$ if and only if: $\forall \epsilon \in \R_{>0}: \exists N \in \R_{>0}: \forall n \in \N: n > N \implies \map d {x_n, x} < \epsilon$

That is, if and only if: $\forall \epsilon \in \R_{>0}: \exists N \in \R_{>0}: \forall n \in \N: n > N \implies \norm {x_n - x} < \epsilon$

The result follows. $\Box$

### Definition 1 iff Definition 3

Let $x \in R$. By the definition of a norm on a division ring, $\norm {\, \cdot \,}$ is a mapping $\norm {\, \cdot \,}:R \to \R_{\ge 0}$. Then: $\forall n \in \N: \size {\norm{x_n - x} - 0} = \size {\norm{x_n - x}} = \norm{x_n - x}$

By definition of convergence of a real sequence, the real sequence $\sequence {\norm {x_n - x} }$ converges to $0$ in the reals $\R$ if and only if: $\forall \epsilon \in \R_{>0}: \exists N \in \R_{>0}: \forall n \in \N: n > N \implies \size {\norm{x_n - x} - 0} < \epsilon$

That is, if and only if: $\forall \epsilon \in \R_{>0}: \exists N \in \R_{>0}: \forall n \in \N: n > N \implies \norm{x_n - x} < \epsilon$

The result follows. $\blacksquare$
Hardcover | ISBN: 9780262195393 | 250 pp. | 7 x 9 in | 44 illus. | March 2006
eBook | ISBN: 9780262253536 | 250 pp. | 44 illus. | March 2006

## Group Cognition

Computer Support for Building Collaborative Knowledge

## Overview

Innovative uses of global and local networks of linked computers make new ways of collaborative working, learning, and acting possible. In Group Cognition Gerry Stahl explores the technological and social reconfigurations that are needed to achieve computer-supported collaborative knowledge building--group cognition that transcends the limits of individual cognition. Computers can provide active media for social group cognition where ideas grow through the interactions within groups of people; software functionality can manage group discourse that results in shared understandings, new meanings, and collaborative learning. Stahl offers software design prototypes, analyzes empirical instances of collaboration, and elaborates a theory of collaboration that takes the group, rather than the individual, as the unit of analysis.

Stahl's design studies concentrate on mechanisms to support group formation, multiple interpretive perspectives, and the negotiation of group knowledge in applications as varied as collaborative curriculum development by teachers, writing summaries by students, and designing space voyages by NASA engineers. His empirical analysis shows how, in small-group collaborations, the group constructs intersubjective knowledge that emerges from and appears in the discourse itself. This discovery of group meaning becomes the springboard for Stahl's outline of a social theory of collaborative knowing. Stahl also discusses such related issues as the distinction between meaning making at the group level and interpretation at the individual level, appropriate research methodology, philosophical directions for group cognition theory, and suggestions for further empirical work.
# BlendED Best Practices-Elementary Multiplication

### Title and Description:

This unit is titled: Multiplication Mania
Aimee Parde, Co-Author/Original Unit Provider

The description of the purpose of the unit is:

## Content Area Skill:

Skill #1-Math-Multiplication

## Digital Age Skill(s):

Skill #1-Knowledge Constructor

One Week

## Overview of Unit:

In this unit, students will practice and apply several different ways to represent basic multiplication facts.

## Empower Learners:

### Content Area Skills (NE and ISTE Standards):

MA.3.1.2.c Represent Multiplication
3A, 3B, 3C, and 3D-Knowledge Constructor

### Student Friendly Objectives:

I can show how to multiply using arrays.
I can show how to multiply using repeated addition.
I can show how to multiply with equal groups.
I can show how to multiply using a number line and skip counting.
I can make creative resources using many different tools.

## Empower Learner Activity:

### Detailed Description:

Students will self-assess, come up with a goal, and decide on an action plan to see these goals get completed.

## Knowledge Application:

The purpose of the knowledge application is for students to have a way to show what they know.

## Artifact Profile:

### Title of the Artifact:

Representing Multiplication

### Detailed Description:

Using drawings, words, arrays, symbols, repeated addition, equal groups, and number lines to explain the meaning of multiplication. This may be completed in Google Slides, Screencastify, Adobe Spark, or with another creation tool.

Understanding of Multiplication

### Digital Age Skills

Knowledge Constructor

## Knowledge Deepening:

### Detailed Description:

Using the choice board, students will work through three different types of activities to practice representing multiplication facts as arrays.
Using the choice board, students will work through three different types of activities to practice representing multiplication facts as repeated addition.
Using the choice board, students will work through three different types of activities to practice representing multiplication facts as equal groups.
Using the choice board, students will work through three different types of activities to practice representing multiplication facts using a number line.

## Direct Instruction:

### Detailed Description

Students will have a teacher-taught lesson at the beginning and will later be going into their choice board groups.
What are fixed points of the Fourier Transform (MathOverflow)
http://mathoverflow.net/questions/12045/what-are-fixed-points-of-the-fourier-transform

Question (pavpanchekha, 2010-01-16): The obvious ones are 0 and $e^{-x^2}$ (with annoying factors), and someone I know suggested hyperbolic secant. What other fixed points (or even eigenfunctions) of the Fourier transform are there?

Answer by Andy Putman (2010-01-16): The following is discussed in a little more detail on pages 337-339 of Frank Jones's book "Lebesgue Integration on Euclidean Space" (and many other places as well).

Normalize the Fourier transform so that it is a unitary operator $T$ on $L^2(\mathbb{R})$. One can then check that $T^4=1$. The eigenvalues are thus $1$, $i$, $-1$, and $-i$. For $a$ one of these eigenvalues, denote by $M_a$ the corresponding eigenspace. It turns out then that $L^2(\mathbb{R})$ is the direct sum of these $4$ eigenspaces!

In fact, this is easy linear algebra. Consider $f \in L^2(\mathbb{R})$. We want to find $f_a \in M_a$ for each of the eigenvalues such that $f = f_1 + f_{-1} + f_{i} + f_{-i}$. Using the fact that $T^4 = 1$, we obtain the following 4 equations in 4 unknowns:

$f = f_1 + f_{-1} + f_{i} + f_{-i}$

$T(f) = f_1 - f_{-1} + i f_{i} - i f_{-i}$

$T^2(f) = f_1 + f_{-1} - f_{i} - f_{-i}$

$T^3(f) = f_1 - f_{-1} - i f_{i} + i f_{-i}$

Solving these four equations yields the corresponding projection operators. As an example, for $f \in L^2(\mathbb{R})$, we get that $\frac{1}{4}(f + T(f) + T^2(f) + T^3(f))$ is a fixed point for $T$.

Answer by Yemon Choi (2010-01-17): Following on a little from Andy's comment (http://mathoverflow.net/questions/12045/what-are-fixed-points-of-the-fourier-transform/12047#12047), Hermite polynomials (multiplied by a Gaussian factor) give a basis of eigenvectors for the FT as an operator on $L^2({\mathbb R})$ (see http://en.wikipedia.org/wiki/Hermite%5Fpolynomials#Hermite%5Ffunctions%5Fas%5Feigenfunctions%5Fof%5Fthe%5FFourier%5Ftransform).

Answer by Darsh Ranjan (2010-09-18): A very important fixed point of the Fourier transform that isn't in $L^2$ is the Dirac comb distribution, informally $$D(x) = \sum_{n\in Z} \delta(x-n),$$ or more properly, defined by its pairing on smooth functions of sufficient decay by $$\langle D, f\rangle = \sum_{n\in Z} f(n).$$ The fact that $D$ is equal to its Fourier transform is really just the Poisson summation formula.
(I wrote an argument explaining why $D$ should be its own Fourier transform in an answer to another question: http://mathoverflow.net/questions/14568/truth-of-the-poisson-summation-formula/14580#14580)
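As a sanity check on the hyperbolic secant suggested in the question (a standard fact, stated here with the convention $\hat f(\xi)=\int_{\mathbb R} f(x)e^{-2\pi i x\xi}\,dx$):

$$\widehat{\operatorname{sech}(\pi x)}(\xi) = \operatorname{sech}(\pi\xi),$$

so $\operatorname{sech}$, suitably scaled, is indeed a fixed point, i.e. it lies in the $+1$ eigenspace of the decomposition in Andy Putman's answer.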
Hardcover | ISBN: 9780262025959 | 432 pp. | 6 x 9 in | 91 illus. | April 2006
eBook | ISBN: 9780262254670 | 432 pp. | 6 x 9 in | 91 illus. | April 2006

Living Standards and the Wealth of Nations

Successes and Failures in Real Convergence

Overview

The question of convergence, or under what conditions the per capita income levels of developing countries can catch up to those found in advanced economies, is critical for understanding economic growth and development. Convergence has happened in many countries and appears to be taking place now in China and India--yet in general per capita income levels in the poorer countries do not converge towards those of richer countries as uniformly as the analytical models predict. Living Standards and the Wealth of Nations, which grew out of a 2003 conference on convergence hosted by the National Bank of Poland, offers detailed theoretical and empirical examinations of what makes for successful convergence.

After general discussions of the theoretical requirements for "rapid catch up" and the possible link between democracy and growth, the book presents global case studies of both non-EU and EU countries, including a provocative comparison of growth in the transition economies of the CEE (Central and Eastern Europe) nations and the 12 non-Baltic states of the former Soviet Union. It then considers nominal as opposed to real convergence in the European Monetary Union. Taken together, the chapters present a consistent argument that reliance on market forces within an open economy in a stable macroeconomic environment, with assured property rights, is the key to rapid economic growth.

Contributors: Anders Åslund, Leszek Balcerowicz, Manuel Balmaseda, Iain Begg, John Bradley, Vittorio Corbo L., Stanley Fischer, Leonardo Hernández T., Philip E. Keefer, Olle Krantz, Abel Moreira Mateus, Thomas O'Connell, Stephen L. Parente, Edward C. Prescott, Jacek Rostowski, Isaac Sabethai, Miguel Sebastián, Diarmaid Smyth, Athanasios Vamvakidis, José Maria Viñals, Wing Thye Woo, Nikolai Zoubanov
# Comparing the effects of climate change labelling on reactions of the Taiwanese public

## Abstract

Scientists and the media are increasingly using the terms ‘climate emergency’ or ‘climate crisis’ to urge timely responses from the public and private sectors to combat the irreversible consequences of climate change. However, whether the latest trend in climate change labelling can result in stronger climate change risk perceptions in the public is unclear. Here we used survey data collected from 1,892 individuals across Taiwan in 2019 to compare the public’s reaction to a series of questions regarding climate change beliefs, communication, and behavioural intentions under two labels: ‘climate change’ and ‘climate crisis.’ The respondents had very similar responses to the questions using the two labels. However, we observed labelling effects for specific subgroups, with some questions using the climate crisis label actually leading to backlash effects compared with the response when using the climate change label. Our results suggest that even though the two labels provoke similar reactions from the general public, on a subgroup level, some backlash effects may become apparent. For this reason, the label ‘climate crisis’ should be strategically chosen.

## Introduction

Although some people and organisations have referred to anthropogenic climate change as an emergency or a crisis in the past1,2,3, increasing scientific evidence4 has convinced numerous scientists that our planet is facing a climate emergency5. Media outlets, such as the Guardian, have also announced that they prefer the terms ‘climate emergency’, ‘crisis’, or ‘breakdown’ over ‘climate change’ to reflect the current scientific reality more accurately6. The call from scientists and the media to change the terms used is aimed at urging the public, businesses, organisations, and governments to respond as quickly as possible to prevent the Earth’s systems from reaching irreversible tipping points as a consequence of climate change6,7.

The effects of using different terminologies, referred to as labelling effects, and the tailoring of information for specific purposes such as the promotion of public engagement, referred to as framing effects, are not new topics in climate change research. Many studies have shown that people react differently to the terms ‘climate change’ and ‘global warming’8,9, and several means of framing climate change, such as by referring to it as a public health issue10 or a local issue11, have been reported to significantly promote positive emotions or engagement with climate change. In terms of labelling effects, numerous studies have examined how the terms ‘climate change’ and ‘global warming’ influence people’s perceptions and reactions to climate change. However, these studies have mainly been conducted in Western contexts12,13,14,15. Partisan differences in the usage and reactions to the abovementioned two terms in the United States have prompted many discussions16,17,18. In addition, racial and ethnic differences in labelling effects in the context of the United States have also been investigated19.
Research on the labelling effects of increasingly-used terms, including ‘climate emergency’ and ‘climate crisis’, has been scant, with only one pilot study examining the labelling effects of the terms ‘climate change’, ‘global warming’, ‘climate crisis’, and ‘climate disruption’ among college students in the United States20. The results of that study suggested that ‘climate crisis’ not only evoked the worst reactions but also showed backlash effects, and the differences in responses to these questions were statistically significant. By contrast, ‘climate disruption’ evoked the most favourable reactions, with ‘climate change’ and ‘global warming’ responses located between the two extremes. Despite these informative results, our understanding of the labelling effects of these emerging terminologies on the broader population is limited. In addition, the labelling effects of climate change–related terms outside of the Western cultural context remain under-investigated.

Using the Taiwanese population as an example, this study compared the labelling effects between the terms ‘climate change’ and ‘climate crisis’ on multiple dimensions of climate change belief and behavioural intentions. We examined the labelling effects for ‘climate crisis’, not ‘climate emergency’, because ‘climate crisis’ is a more common term in the Taiwanese context and the Taiwanese public is more familiar with it. The literal translation of ‘climate crisis’ in Chinese (chihouweiji) has been used in Taiwan for over 10 years. For example, this expression first appeared in a major local newspaper, the United Daily News, in 2007. By contrast, that newspaper only started to use the term ‘climate emergency’ (chihoujinji or chihoujinjijuangtai) in 2019. Also, we chose ‘climate change’ instead of ‘global warming’ to be compared with ‘climate crisis’ because textbooks in Taiwanese school systems, including primary and secondary education, use ‘climate change’ as the thematic term to describe the changing global climate21.

The current study systematically examined the labelling effects of the term ‘climate crisis’ on the public and provides empirical evidence to elucidate the labelling effects of climate change–related terms in a non-Western cultural context. Taiwanese society is heavily influenced by Confucian values22 and has traditionally been characterised by expressing high collectivistic values23. Furthermore, Taiwan is among the top countries in terms of per capita CO2 emissions, both in East Asia and the world24, and the Taiwanese government has set a goal of reducing the country’s 2005 greenhouse gas emissions by 50% by 205025. To achieve this goal, understanding the public’s climate change risk perceptions and support of climate policy in Taiwan is imperative.

In addition to examining the labelling effects on the general population, we also divided our sample according to several key characteristics, which are central to the discussion of risk perception, as follows: gender (male and female), age (20–49 years and 50 years and above), educational attainment (high school diploma or below and some college or above), and cultural worldview (hierarchical, egalitarian, individualistic, and fatalistic values). Differences among these groups could improve understanding of how sociocultural factors affect labelling effects26,27. Our data were derived from a large telephone survey (N = 1892) conducted across Taiwan in November 2019.
Among the questions asked, participants answered 13 questions, revised or adopted from previous studies28,29,30,31, regarding climate change beliefs, issue involvement, behavioural intentions, moral obligations, communication, and preferred societal responses with the random use of either the ‘climate change’ or ‘climate crisis’ label. The between-subject research design ensured that each participant received the same questions, except for the assigned label. In addition, participants’ choice among the four statements regarding their views on nature indicated their cultural worldview32. Finally, basic demographic information was collected. The items used in the analysis are described in the methods section.

Among the 1892 participants, 934 (49.4%) received questions using ‘climate change’ (hereafter ‘climate change label’ sample) and the remaining participants (958, 50.6%) were asked questions using ‘climate crisis’ (hereafter ‘climate crisis label’ sample). No differences were observed between the climate change label sample and the climate crisis label sample in terms of gender distribution, age structure or regional distribution.

## Results

We conducted a series of two-sided independent-samples t-tests to identify differences in labelling effects between the climate change label sample and the climate crisis label sample for both the full sample and subgroups (gender, age, educational attainment and cultural worldview). The 13 dependent variables were not normally distributed; however, it has been suggested that the results of t-tests are still robust under non-normally-distributed data, especially when the samples are large33,34. As a result, our discussion below was based on the results of t-tests, but we reported two cases that show significantly different results between t-tests and the Mann–Whitney tests in the methods section.

For the full sample, the results revealed no statistically significant differences between the two label groups for all 13 questions (all p ≥ 0.067, Table 1), suggesting that the terms ‘climate change’ and ‘climate crisis’ elicited similar responses in people for several aspects of climate change beliefs and engagement. We then considered the labelling effects for sociocultural factors. No differences were found in labelling effects between the two labels in terms of educational attainment, age or the holding of egalitarian values. By contrast, significant differences were observed for several questions in terms of gender and the holding of hierarchical, individualistic or fatalistic worldviews (Table 2; the full results are reported in supplementary information).

Male respondents reported a significantly lower mean frequency for three communication-related questions when the ‘climate crisis’ label was used than when the ‘climate change’ label was used (discussion with family members, crisis label M = 2.15 vs. change label M = 2.31, p < 0.05; discussion with friends, crisis label M = 2.32 vs. change label M = 2.5, p < 0.01; information received, crisis label M = 2.67 vs. change label M = 2.88, p < 0.01). Female respondents reported a significantly higher mean for behavioural intention to engage in mitigation behaviour when the ‘climate crisis’ label was used than when the ‘climate change’ label was used (crisis label M = 4.42 vs. change label M = 4.31, p < 0.05).
For cultural worldview, significant differences were observed for the following cases: (1) people with hierarchical values had a significantly lower mean sense of collective efficacy when the ‘climate crisis’ label was used than when the ‘climate change’ label was used (crisis label M = 4.18 vs. change label M = 4.33, p < 0.05) and (2) people with individualistic values reported a significantly lower mean frequency of discussing climate change–related issues when the ‘climate crisis’ label was used than when the ‘climate change’ label was used (discussion with family members, crisis label M = 1.88 vs. change label M = 2.65, p < 0.001; discussion with friends, crisis label M = 1.68 vs. change label M = 2.35, p < 0.01). Because male respondents also reported a significantly lower frequency in climate change communication when the ‘climate crisis’ label was used than when the ‘climate change’ label was used, we suspected that more men have individualistic values than do women. This suspicion was confirmed by an X2 test, which revealed a significant association between gender and cultural worldview (X2 (3) = 11.172, p = 0.011), and significantly more men (64%) than women (36%) expressed individualist values. Finally, people with fatalistic values were significantly more worried (as reflected in the mean) about the influence of climate change when the ‘climate crisis’ label was used than when the ‘climate change’ label was used (crisis label M = 3.94 vs. change label M = 3.38, p < 0.05).

## Discussion

Similar to the results of a study on the labelling effects of the terms ‘climate change’ and ‘global warming’ in Americans and Europeans13, our study revealed no major differences in labelling effects between the ‘climate change’ and ‘climate crisis’ labels. However, labelling effects were observed in male and female participants and in those with hierarchical, individualistic, and fatalistic worldviews, suggesting that the term ‘climate crisis’ should be used strategically16.

Our finding that men less frequently engage in climate communication and obtain climate change information for the ‘climate crisis’ label than for the ‘climate change’ label is consistent with risk communication research that has indicated that men are less likely to act under negative frames35. Identifying the reasons for this behaviour among men requires further investigation, but identity-protective cognition36, in which men express risk scepticism because of their individualistic worldviews, provides a possible explanation. In addition, the finding that women exhibited greater behavioural intention to mitigate climate change when the ‘climate crisis’ label was used than when the ‘climate change’ label was used is similar to the gender differences observed for climate change27,37, which revealed positive effects for using the term ‘climate crisis’.

In terms of labelling effect differences according to worldview, a backlash effect was observed in people with hierarchical worldviews when the ‘climate crisis’ label20 was used because they had a lower sense of collective efficacy for the ‘climate crisis’ label. The backlash effects are possibly attributable to a tendency of people with hierarchical worldviews to not trust others38, which would result in a lower sense of collective efficacy. These backlash effects are particularly relevant in the high-collectivism Taiwanese society39.
Collective framing, such as collective efficacy or collective responsibility, is a critical factor for Taiwanese people’s motivation and engagement24,40. Similar to the aforementioned effects in men, people with individualistic values reported less frequent climate communication when the ‘climate crisis’ label was used, which could also be explained by identity-protective cognition36 because they questioned the scientific labelling of ‘crisis’. Finally, contrary to the findings of other studies that people with fatalistic values are typically unconcerned with matters outside of their control and show lower risk perceptions41,42, our results indicated that they were actually more worried about the influence of climate change when the ‘climate crisis’ label was used. This may be because fatalistic views manifest differently in Asian and Western cultures43, but it could also be because the ‘climate crisis’ label heightens some aspects of climate change perception (compared with the ‘climate change’ label), which enhances climate change engagement in people with fatalistic values. This finding illuminates an area that requires further research.

Overall, our results on the labelling effects between the terms ‘climate change’ and ‘climate crisis’ might disappoint people such as scientists and climate change activists because the ‘climate crisis’ label might not help to change the public’s attitudes towards and engagement with climate change. Although we did find some positive labelling effects for specific subgroups, such as women, we also observed negative, backlash effects for other subgroups, such as people with hierarchical worldviews. Because no single terminology has identical effects on all people, identifying meaningful subgroups of the population19,44 and understanding the labelling effects for these groups are paramount for audience-specific climate change risk communication.

Some critics of the ‘climate crisis’ and ‘climate emergency’ labels1 have argued that these terms might imply that climate change solutions are led by governments instead of the public, that climate change concerns are prioritised over other social, cultural, and environmental issues, and that they might be counterproductive because they could reduce people’s sense of efficacy. We did not find evidence of these disadvantages in our full sample because the public’s reactions to the two labels did not differ in terms of governmental priorities, behavioural intentions, moral obligations, collective efficacy, or voting behaviours. We did observe the backlash effect in our subgroup samples, which further highlights the importance of targeted and tailored labelling and framing in climate change communication14,45.

This study contributes to the understanding of the labelling effects of the term ‘climate crisis’ in an under-investigated non-Western cultural context. Although place-specific differences exist, our study in Taiwan provides a critical understanding of labelling effects of climate-related terms in East Asian cultural contexts or other collectivist societies. In addition to exploring the affective dimensions of labelling effects, as was done in this study, more studies should investigate the cognitive dimensions and how the increasingly used terms influence people’s certainty of the existence of climate change18. Also, the relations among people’s experiences of extreme weather events, the availability heuristic, and the labelling effects are worth investigating46,47,48.
Furthermore, future research could examine labelling effects among people in different cultural contexts. In countries such as the United States, examining the labelling effects of terms such as ‘climate crisis’ or ‘climate emergency’ in people with different political affiliations should yield valuable insights. Finally, the comparison of the increasingly used climate-related terms to ‘global warming’ deserves more attention in future studies.

## Methods

### Ethics statement

This study was approved by the Research Ethics Committee of National Taiwan Normal University (No.: 201810HS018). Participants provided verbal consent before trained interviewers initiated each survey.

### Sampling method

Our sample was derived from a large landline-based survey, conducted between 7 and 14 November, 2019, that targeted Taiwanese citizens who were at least 20 years old. The survey was performed by the Global View Survey Research Centre, a Taiwanese research company. We used stratified sampling based on the regions of Taiwan (22 cities and counties). Participants were randomly selected to answer questions that used either the ‘climate change’ or ‘climate crisis’ label. Overall, 1892 surveys were completed. Among the full sample, 934 (49.4%) participants answered questions that used the ‘climate change’ label and 958 participants (50.6%) answered questions that used the ‘climate crisis’ label. The response rate was 37.82%.

The two sample groups were not different in terms of gender (X2(1) = 1.6, p = 0.205), age (X2(12) = 8.94, p = 0.71), educational attainment (X2(6) = 1.74, p = 0.94), or regional distribution (X2(21) = 2.41, p = 1). The two groups also did not statistically differ from the Taiwanese population in terms of gender structure (climate change label: X2(1) = 0.06, p = 0.81; climate crisis label: X2(1) = 2.41, p = 0.12) or regional distribution (climate change label: X2(21) = 1.5, p = 1; climate crisis label: X2(21) = 3.25, p = 1) but did significantly differ from the Taiwanese population in terms of age structure (climate change label: X2(12) = 62116.08, p < 0.001; climate crisis label: X2(12) = 80725.11, p < 0.001). In comparison with the Taiwanese population, the sample contained fewer young people and more older adults, a phenomenon often seen in telephone surveys49. We used unweighted samples for analysis.

### Questionnaire development

This study used 13 questions in the survey that was specifically designed to examine the labelling effects for the terms ‘climate change’ and ‘climate crisis’. All participants were asked the same questions in the same order, except for random use of the terms ‘climate change’ and ‘climate crisis’. We used a between-subject design, which means each participant answered questions phrased with the same label (i.e., ‘climate change’ or ‘climate crisis’) throughout his or her survey. The survey questions were adopted or revised versions of questions from related studies28,29,30,31. Shortened versions of the 13 questions are cited in the main text. The survey included the following questions:

(1) Communication (discussion with family members). How often do you talk to your family members about issues related to climate change (the climate crisis)?

(2) Communication (discussion with friends). How often do you talk to your friends about issues related to climate change (the climate crisis)?

(3) Communication (information received). How often do you inform yourself regarding the harm to Taiwan from climate change (the climate crisis)?
(4) Belief (personal harm). How much do you think climate change (the climate crisis) will harm you personally?

(5) Belief (harm future generations). How much do you think climate change (the climate crisis) will harm future generations?

(6) Belief (worry). How worried are you about the influence of climate change (the climate crisis)?

(7) Belief (personal importance). How important is the issue of climate change (the climate crisis) to you personally?

(8) Behavioural intention. I plan to take steps to reduce my influence on climate change (the climate crisis). Do you agree?

(9) Involvement (priority for governments). Do you think that climate change (the climate crisis) should be a low, medium, high, or very high priority for the central government and Executive Yuan?

(10) Belief (personal moral obligations). Some people say ‘I think I have the moral obligation to reduce my impact on climate change (the climate crisis)’. Do you agree?

(11) Belief (moral obligation to future generations). Some people say ‘I think I have the moral obligation to reduce my impact on climate change (the climate crisis) for future generations’. Do you agree?

(12) Belief (collective efficacy). Some people say ‘If we work together, we can reduce the threat of climate change (the climate crisis) to humans.’ Do you agree?

(13) Involvement (climate issue relevance to voting decision). Taiwan will elect its president on 11 January, 2020. Do you agree that the climate change (the climate crisis) issue is an important consideration for your decision regarding which candidate to vote for?

All questions, except for question nine, were measured using a five-point Likert scale, where “1” signifies the least favourable response and “5” signifies the most favourable response. For question nine, a 4-point scale was used, following the same scoring logic.

For questions regarding cultural worldview, we adopted a question used in a previous study32 that asks respondents to choose the statement that most accurately represents their views regarding environmental problems from the following four statements: (1) environmental problems can only be controlled by enforcing radical changes in human behaviour and in society as a whole (egalitarian values); (2) environmental problems are not out of control, but the government should dictate clear rules regarding what is and what is not allowed (hierarchical values); (3) we do not need to worry about environmental problems because these problems will ultimately always be resolved through technological solutions (individualistic values); and (4) we do not know whether environmental problems will intensify (fatalistic values). The four choices for cultural worldview were presented in a random order.

The translation between Chinese and English was performed by the first author and was reviewed by survey experts from the Global View Survey Research Center. The sample sizes, means, and standard deviations for each of the variables for the ‘climate change’ and ‘climate crisis’ labels are listed in Table 1.

### Data management and analysis

The survey data were managed using SPSS version 23 (ref. 50). We used two-sided independent-samples t-tests to investigate the labelling effects between the two groups. We calculated the effect sizes for t-tests by the following equation51: $$r = \sqrt {\frac{{t^2}}{{t^2 + df}}}$$. The results of the t-tests that examined the labelling effects on the full data set are provided in Table 1.
The full results of the t-tests that examined the labelling effects by gender, age, educational attainment and cultural worldview are provided in Tables 1, 2, 3, and 4, respectively, in the supplementary information.

As discussed in the main text, we were aware that the 13 outcome variables were not normally distributed. Because it has been suggested that the application of t-tests does not require the assumption of normal distribution, especially for large samples33,34, we kept the parametric method in our analysis. Nevertheless, we conducted Mann–Whitney tests for all the analyses, and found two cases where the statistical significance was different, at the p = 0.05 level, between the two tests. The first case was behavioural intention for female subgroups, where the t-test was statistically significant (p = 0.018, r = 0.08), but the Mann–Whitney test was not significant (asymptotic significance p = 0.061, r = 0.06). The second case was collective efficacy for people with egalitarian worldviews, where the t-test was not significant (p = 0.059, r = 0.07), but the Mann–Whitney test was significant (asymptotic significance p = 0.049, r = 0.07). The statistically significant result suggested that for people with egalitarian worldviews, the climate crisis label results in a stronger sense of collective efficacy than the climate change label (crisis label M = 4.44, Mdn = 5 vs. change label M = 4.32, Mdn = 5, U = 82,900.5, z = 1.971, p = 0.049, r = 0.07). The effect sizes for Mann–Whitney tests were calculated by the following equation51: $$r = \frac{Z}{{\sqrt N }}$$. The effect sizes for two-sided independent-samples t-tests and Mann–Whitney tests were calculated in Microsoft Excel version 2016.

### Reporting summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.

## Data availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.

## References

1. Hodder, P. & Martin, B. Climate Crisis? The Politics of Emergency Framing. Econ Political Wkly. 44, 53–60 (2009).

2. Archer, D. & Rahmstorf, S. The climate crisis: An introductory guide to climate change. (Cambridge University Press, 2010).

3. Crist, E. Beyond the climate crisis: a critique of climate change discourse. Telos 141, 29–55 (2007).

4. IPCC. Global Warming of 1.5 oC. (IPCC, 2018).

5. Lenton, T. M. et al. Climate tipping points — too risky to bet against. Nature 575, 592–595 (2019).

6. Carrington, D. Why the Guardian is changing the language it uses about the environment. The Guardian (2019). https://www.theguardian.com/environment/2019/may/17/why-the-guardian-is-changing-the-language-it-uses-about-the-environment.

7. Ripple, W. J., Wolf, C., Newsome, T. M., Barnard, P. & Moomaw, W. R. World Scientists’ Warning of a Climate Emergency. BioScience 70, 8–12 (2020).

8. Weber, E. U. What shapes perceptions of climate change? Wiley Interdiscip. Rev.: Clim. Change 1, 332–342 (2010).

9. Schuldt, J. P., Enns, P. K. & Cavaliere, V. Does the label really matter? Evidence that the US public continues to doubt “global warming” more than “climate change”. Clim. Change 143, 271–280 (2017).

10. Myers, T. A., Nisbet, M. C., Maibach, E. W. & Leiserowitz, A. A. A public health frame arouses hopeful emotions about climate change. Clim. Change 113, 1105–1112 (2012).

11. Scannell, L. & Gifford, R.
Personally relevant climate change: The role of place attachment and local versus global message framing in engagement. Environ. Behav. 45, 60–85 (2013).

12. Whitmarsh, L. What’s in a name? Commonalities and differences in public understanding of “climate change” and “global warming”. Public Underst. Sci. 18, 401–420 (2009).

13. Villar, A. & Krosnick, J. A. Global warming vs. climate change, taxes vs. prices: Does word choice matter? Clim. Change 105, 1–12 (2011).

14. Lorenzoni, I., Leiserowitz, A., Doria, M. D. F., Poortinga, W. & Pidgeon, N. F. Cross-National Comparisons of Image Associations with “Global Warming” and “Climate Change” Among Laypeople in the United States of America and Great Britain. J. Risk Res. 9, 265–281 (2006).

15. Leiserowitz, A. et al. What’s in a name? Global warming vs. climate change. (Yale Project on Climate Change Communication, 2014).

16. Benjamin, D., Por, H.-H. & Budescu, D. Climate change versus global warming: who is susceptible to the framing of climate change? Environ. Behav. 49, 745–770 (2017).

17. Schuldt, J. P., Konrath, S. H. & Schwarz, N. “Global warming” or “climate change”? Whether the planet is warming depends on question wording. Public Opin. Q 75, 115–124 (2011).

18. Schuldt, J. P. “Global Warming” versus “Climate Change” and the Influence of Labeling on Public Perceptions. Oxford Research Encyclopedia of Climate Science https://doi.org/10.1093/acrefore/9780190228620.013.309 (2016).

19. Schuldt, J. P. & Pearson, A. R. The role of race and ethnicity in climate change polarization: evidence from a U.S. national survey experiment. Clim. Change 136, 495–505 (2016).

20. Jaskulsky, L. & Besel, R. Words That (Don’t) Matter: an Exploratory Study of Four Climate Change Names in Environmental Discourse. Appl. Environ. Educ. Commun. 12, 38–45 (2013).

21. Kao, T.-S., Kao, H.-F. & Tsai, Y.-J. The context, status and challenges of environmental education in formal education in Taiwan. Jpn. J. Environ. Educ. 26, 15–20 (2017).

22. Zhang, Y. B., Lin, M.-C., Nonaka, A. & Beom, K. Harmony, Hierarchy and Conservatism: a Cross-Cultural Comparison of Confucian Values in China, Korea, Japan, and Taiwan. Commun. Res. Rep. 22, 107–115 (2005).

23. Chiou, J.-S. Horizontal and Vertical Individualism and Collectivism Among College Students in the United States, Taiwan, and Argentina. J. Soc. Psychol. 141, 667–678 (2001).

24. Lavallee, J. P., Di Giusto, B. & Yu, T.-Y. Collective responsibility framing also leads to mitigation behavior in East Asia: a replication study in Taiwan. Clim. Change 153, 423–438 (2019).

25. Giusto, B. D., Lavallee, J. P. & Yu, T.-Y. Towards an East Asian model of climate change awareness: a questionnaire study among university students in Taiwan. PLOS ONE 13, e0206298 (2018).

26. van der Linden, S. The social-psychological determinants of climate change risk perceptions: towards a comprehensive model. J. Environ. Psychol. 41, 112–124 (2015).

27. Pearson, A. R., Ballew, M. T., Naiman, S. & Schuldt, J. P. Race, Class, Gender and Climate Change Communication. in Oxford Research Encyclopedia of Climate Science (Oxford University Press, 2017). https://doi.org/10.1093/acrefore/9780190228620.013.412.

28. Brody, S., Grover, H. & Vedlitz, A. Examining the willingness of Americans to alter behaviour to mitigate climate change. Clim. Policy 12, 1–22 (2012).

29. Maibach, E. W., Leiserowitz, A., Roser-Renouf, C., Mertz, C. K. & Akerlof, K.
Global Warming’s Six Americas screening tools: Survey instruments; instructions for coding and data treatment; and statistical program scripts. (Yale University and George Mason University. Yale Project on Climate Change Communication, 2011).

30. Wood, M. M. et al. Communicating Actionable Risk for Terrorism and Other Hazards: Communicating Actionable Risk. Risk Anal. 32, 601–615 (2012).

31. Goldberg, M. H., van der Linden, S., Maibach, E. & Leiserowitz, A. Discussing global warming leads to greater acceptance of climate science. PNAS 116, 14804–14805 (2019).

32. Steg, L. & Sievers, I. Cultural theory and individual perceptions of environmental risks. Environ. Behav. 32, 250–269 (2000).

33. Lumley, T., Diehr, P., Emerson, S. & Chen, L. The Importance of the Normality Assumption in Large Public Health Data Sets. Annu. Rev. Public Health 23, 151–169 (2002).

34. Poncet, A., Courvoisier, D. S., Combescure, C. & Perneger, T. V. Normality and Sample Size Do Not Matter for the Selection of an Appropriate Statistical Test for Two-Group Comparisons. Methodology 12, 61–71 (2016).

35. Huang, Y. & Wang, L. Sex differences in framing effects across task domain. Personal. Individ. Differ. 48, 649–653 (2010).

36. Kahan, D. M., Braman, D., Gastil, J., Slovic, P. & Mertz, C. K. Culture and identity-protective cognition: explaining the white-male effect in risk perception. J. Empir. Leg. Stud. 4, 465–505 (2007).

37. Hung, L.-S. & Bayrak, M. M. Wives influence climate change mitigation behaviours in married-couple households: insights from Taiwan. Environ. Res. Lett. 14, 124034 (2019).

38. West, J., Bailey, I. & Winter, M. Renewable energy policy and public perceptions of renewable energy: a cultural theory approach. Energy Policy 38, 5739–5748 (2010).

39. Wu, M. Hofstede’s cultural dimensions 30 years later: a study of Taiwan and the United States. Intercultural Commun. Stud. 15, 33–42 (2006).

40. Wilson, R. S., Herziger, A., Hamilton, M. & Brooks, J. S. From incremental to transformative adaptation in individual responses to climate-exacerbated hazards. Nat. Clim. Chang. 1–9 (2020) https://doi.org/10.1038/s41558-020-0691-6.

41. Şimşekoğlu, Ö. et al. Risk perceptions, fatalism and driver behaviors in Turkey and Iran. Saf. Sci. 59, 187–192 (2013).

42. Xue, W., Hine, D. W., Loi, N. M., Thorsteinsson, E. B. & Phillips, W. J. Cultural worldviews and environmental risk perceptions: a meta-analysis. J. Environ. Psychol. 40, 249–258 (2014).

43. Xue, W., Hine, D. W., Marks, A. D. G., Phillips, W. J. & Zhao, S. Cultural worldviews and climate change: a view from China. Asian J. Soc. Psychol. 19, 134–144 (2016).

44. Maibach, E. W., Leiserowitz, A., Roser-Renouf, C. & Mertz, C. K. Identifying Like-Minded Audiences for Global Warming Public Engagement Campaigns: an Audience Segmentation Analysis and Tool Development. PLOS ONE 6, e17571 (2011).

45. Moser, S. C. Communicating climate change: history, challenges, process and future directions. Wiley Interdiscip. Rev.: Clim. Change 1, 31–53 (2010).

46. Whitmarsh, L. Are flood victims more concerned about climate change than other people? The role of direct experience in risk perception and behavioural response. J. Risk Res. 11, 351–374 (2008).

47. Demski, C., Capstick, S., Pidgeon, N., Sposato, R. G. & Spence, A. Experience of extreme weather affects climate change mitigation and adaptation responses. Clim. Change 140, 149–164 (2017).

48. Kahneman, D. Thinking, fast and slow.
(FSG, 2011).

49. Tucker, C., Brick, J. M. & Meekins, B. Household Telephone Service and Usage Patterns in the United States in 2004: implications for Telephone Samples. Public Opin. Q 71, 3–22 (2007).

50. IBM Corp. IBM SPSS Statistics for Windows. (IBM Corp., 2015).

51. Field, A. Discovering statistics using IBM SPSS statistics. (Sage, 2013).

## Acknowledgements

This study is supported by the Ministry of Science and Technology (MOST) of Taiwan under Grant No. MOST 108-2636-H-003-004 and MOST 109-2636-H-003-004.

## Author information

### Contributions

L.S.H. designed the survey used within the work presented, with M.M.B. providing suggestions to the survey. L.S.H. and M.M.B. organised and managed the data. L.S.H. analysed the data. L.S.H. and M.M.B. interpreted the results. L.S.H. led in writing the paper, developing this with input from M.M.B.

### Corresponding author

Correspondence to Li-San Hung.

## Ethics declarations

### Competing interests

The authors declare no competing interests.

Peer review information: Nature Communications thanks Cristian Rogério Foguesatto and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Hung, LS., Bayrak, M.M. Comparing the effects of climate change labelling on reactions of the Taiwanese public. Nat Commun 11, 6052 (2020). https://doi.org/10.1038/s41467-020-19979-0
# Thevenin's Theorem

This page is built around these simple lines: "If two connections lead away from an arbitrary connection of batteries and resistors, then the electrical effects of the circuit in the box on whatever is connected to the box are just the same as if the box contained merely a single battery connected in series with a single resistance."

To illustrate it, consider any combination of linear bilateral circuit elements and active sources, regardless of the connection or complexity, connected to a given load $R_L$. By this theorem we can replace the whole complicated circuit with a simple circuit that has a single voltage source of $V_{TH}$ volts in series with a single impedance $R_{TH}$, connected across the two terminals of the load $R_L$. Let's see how it is done.

## What is Thevenin's Theorem?

Thevenin's theorem is widely used for analysing power systems and other complicated circuits where we need the voltage across and current through a given load resistance without re-solving the whole network. It states that "Any linear bilateral network of circuit elements and active sources, regardless of complexity, applied across a given load $R_L$, may be replaced by a simple two-terminal network consisting of a single voltage source $V_{TH}$ in series with a single impedance $R_{TH}$, connected across the two terminals of the load $R_L$. $V_{TH}$ is the open-circuit voltage measured at the two terminals of interest, with the load impedance $Z_L$ removed. The voltage $V_{TH}$ is the Thevenin equivalent voltage and $R_{TH}$ is the Thevenin equivalent impedance."

## How to Find Thevenin's Equivalent Circuit?

The Thevenin equivalent circuit is the electrical equivalent of the network seen across the load resistance. To obtain it, first remove the load, then deactivate the sources in the original circuit: voltage sources are short-circuited and current sources are open-circuited. The total resistance measured between the open connection points is $R_{TH}$. The equivalent circuit reduces the entire network to a single voltage source, a series resistance and the series-connected load.

## Thevenin Voltage

When the terminals are open-circuited, i.e. not connected to anything, no current flows, but there is still a voltage across the terminals, called the open-circuit voltage. The Thevenin voltage is an ideal voltage source equal to this open-circuit voltage at the terminals.

## Thevenin's and Norton Equivalent Circuit

In the two equivalent circuits, the Thevenin voltage and the Norton current are related by

$V_{TH} = I_N R_N$ or $V_{TH} = I_N R_{TH}$

where $V_{TH}$ is the Thevenin equivalent voltage, $I_N$ is the Norton current and $R_{TH} = R_N$ ($R_N$ is the Norton resistance). From the Thevenin and Norton equivalent circuits we can determine:

• The Thevenin resistance is equal to the Norton resistance ($R_{TH} = R_N$).
• The Thevenin voltage $V_{TH}$ is equal to the Norton current $I_N$ times the Norton resistance $R_N$ ($V_{TH} = I_N R_N$).
• The Norton current is equal to the Thevenin voltage divided by the Thevenin resistance ($I_N = \frac{V_{TH}}{R_{TH}}$).
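These relations translate directly into code. The snippet below is a minimal Python sketch of the two-way Thevenin/Norton conversion; the function names and the sample values (20 V behind 3 $\Omega$) are illustrative choices of ours, not taken from this page.

```python
# Minimal sketch of the Thevenin/Norton relations above.
# Helper names and sample values are illustrative only.

def thevenin_to_norton(v_th, r_th):
    """Thevenin source (V_TH in series with R_TH) to Norton (I_N, R_N)."""
    i_n = v_th / r_th  # I_N = V_TH / R_TH
    r_n = r_th         # R_N = R_TH
    return i_n, r_n

def norton_to_thevenin(i_n, r_n):
    """Norton source (I_N in parallel with R_N) to Thevenin (V_TH, R_TH)."""
    v_th = i_n * r_n   # V_TH = I_N * R_N
    r_th = r_n         # R_TH = R_N
    return v_th, r_th

if __name__ == "__main__":
    # Round trip: 20 V behind 3 ohms, to 20/3 A alongside 3 ohms, and back.
    i_n, r_n = thevenin_to_norton(20.0, 3.0)
    print(i_n, r_n)                      # 6.666... 3.0
    print(norton_to_thevenin(i_n, r_n))  # (20.0, 3.0)
```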
## How to Simplify into Thevenin and Norton Circuit?

Let's see how to reduce a simple circuit to its Thevenin and Norton equivalents:

Step 1: Open the terminals A-B, where the current or voltage is required, and calculate $V_{AB}$ by the mesh method. This gives $V_{AB} = V_{TH}$.

Step 2: Short the terminals A-B and determine the short-circuit current ($I_{SC} = I_N$).

Step 3: Calculate $R_{TH} = \frac{V_{TH}}{I_{SC}} = R_N$. The equivalent resistance across A-B is $R_{AB} = R_{TH}$.

Step 4: Draw the Thevenin or Norton equivalent circuit as required and calculate the load current.

## Thevenin's Examples

Let's see some examples on Thevenin's circuit (a numerical check in code follows Example 2):

Example 1: Calculate the equivalent voltage for the given circuit.

Solution: Open the terminals A-B and calculate the voltage across them:

$-V_{AB} + 18 + 3 \times 0 + 2 \times 6 = 0$

$V_{AB} = 30 \ V = V_{TH}$

Example 2: Determine $V_{TH}$, $R_{TH}$ and the load current $I_L$ from both the Thevenin and the Norton equivalent circuits.

Solution: From the open-circuit analysis of the circuit (the figure is not reproduced here), the Thevenin voltage is $V_{TH} = 20 \ V$. Now short the terminals A-B and apply mesh analysis:

$-20 + 2I + 2(I - I_{SC}) = 0$

$4I - 2I_{SC} = 20$

$2I - I_{SC} = 10$ ...... (1)

$2(I_{SC} - I) + 2I_{SC} - 10 = 0$

$2I_{SC} - I = 5$ ...... (2)

From equations (1) and (2) we get $I_{SC} = \frac{20}{3} \ A = I_N$.

Now $R_{TH} = R_N = \frac{V_{TH}}{I_{SC}} = \frac{20 \times 3}{20} = 3 \ \Omega$.

From the Thevenin equivalent circuit, the load current is

$I_L = \frac{V_{TH}}{R_{TH} + R_L} = \frac{20}{3 + 1} = 5 \ A$

From the Norton equivalent circuit, by the current-division formula,

$I_L = \frac{R_N}{R_N + R_L} \times I_N = \frac{3}{3 + 1} \times \frac{20}{3} = 5 \ A$
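As a sanity check on Example 2, here is a short Python sketch that solves the two mesh equations and recomputes $R_{TH}$ and $I_L$ both ways. The values $V_{TH} = 20 \ V$ and $R_L = 1 \ \Omega$ are read off from the worked arithmetic above, not from the missing circuit figure; numpy is used only for the 2-by-2 solve.

```python
# Numerical check of Example 2; V_TH = 20 V and R_L = 1 ohm are taken
# from the worked numbers above, not from the (missing) circuit figure.
import numpy as np

# Mesh equations with the terminals shorted:
#   2*I - 1*I_sc = 10   ... (1)
#  -1*I + 2*I_sc = 5    ... (2)
A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])
b = np.array([10.0, 5.0])
I, I_sc = np.linalg.solve(A, b)
print(f"I_sc = I_N = {I_sc:.4f} A")       # 6.6667 A (= 20/3)

V_th = 20.0                               # open-circuit (Thevenin) voltage
R_th = V_th / I_sc                        # Thevenin = Norton resistance
print(f"R_th = R_N = {R_th:.4f} ohm")     # 3.0000 ohm

R_L = 1.0
I_L_thev = V_th / (R_th + R_L)            # from the Thevenin equivalent
I_L_nort = R_th / (R_th + R_L) * I_sc     # current division, Norton form
print(f"I_L = {I_L_thev:.4f} A (Thevenin) = {I_L_nort:.4f} A (Norton)")
```

Both routes give the same 5 A load current, which is exactly the point of the Thevenin/Norton equivalence.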