# Solar flux on (not at) Earth
I'm trying to find out if there are reliable values for the solar flux received on Earth. I realise this will change with location, time of day, season, weather, etc., but I'm surprised that I can't find anything at all. For space calculations of solar flux on solar panels I typically use a value of 1372 W/m² at 1 AU from the Sun. I understand that this value will be significantly reduced once the light has travelled through the atmosphere, but I can't find a lookup table or anything.
You can use an online calculator like the one at Meteo Exploration. Input parameters are classic lat / lon / alt / time and:
Visibility (km): the maximum distance in km at which large objects can be distinguished on the horizon. The default value (50 km) is for a clean atmosphere. This is a proxy for atmospheric turbidity; it is chosen because many airports report visibility, while other parameters, such as Linke turbidity, are more difficult to obtain. Airport reports (METAR) are given here (choose 'Decoded').
Temperature °C: temperature in degrees centigrade; use CT to convert from Fahrenheit.
RH (0-100): Relative Humidity in percentage, values from 0 to 100. Airport reports (METAR) normally include RH.
Ozone thickness: ozone layer thickness in cm. Divide Dobson Units by 1000 to get the equivalent in cm. Check the TOMS (Total Ozone Mapping Spectrometer) pages for current values.
Albedo ground (0-1): albedo of the surrounding terrain, e.g. 0.8 to 0.95 for fresh snow, 0.17 for deciduous forest, 0.35 for sand, etc.
Timezone (timezone map): the timezone; a good approximation is (-1)·longitude/15, with longitude in degrees and west negative.
Slope Orientation (0-360): orientation of the surface, solar panel, roof, etc. If looking north it will be 0°; if looking south it will be 180°. Range is 0 to 360 degrees.
Slope Tilt (0-90): inclination of the surface or panel with respect to the horizontal, 0° is flat 90° is completely vertical. Range is from 0 to 90 degrees.
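To make the unit conversions above concrete, here is a hypothetical input set in Python; the field names are purely illustrative and do not correspond to the calculator's actual form fields:

# Hypothetical input set for such a calculator, applying the conversions above.
params = {
    "visibility_km": 50,            # clean-atmosphere default mentioned above
    "temperature_C": 20,
    "relative_humidity": 40,        # percent, 0-100
    "ozone_cm": 300 / 1000,         # 300 Dobson Units -> 0.3 cm
    "ground_albedo": 0.17,          # e.g. deciduous forest
    "timezone_hours": 1,
    "slope_orientation_deg": 180,   # facing south
    "slope_tilt_deg": 30,           # 0 is flat, 90 is vertical
}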
Note that the Earth receives energy from the Sun, but also radiates energy (infrared). Some of this radiated energy is absorbed and re-emitted by the atmosphere (the greenhouse effect) and is reabsorbed by the Earth. The infrared energy radiated is indeed usable. See Climate and Earth's Energy Budget at NASA.
To convert between the energy radiated by the Sun at 1AU and the insolation, see this article on Wikipedia. Extract:
The Earth receives a total amount of radiation determined by its cross section (π·RE²), but as it rotates this energy is distributed across the entire surface area (4·π·RE²). Hence the average incoming solar radiation, taking into account the angle at which the rays strike and that at any one moment half the planet does not receive any solar radiation, is one-fourth the solar constant (approximately 340 W/m²). The amount reaching the Earth's surface (as insolation) is further reduced by atmospheric attenuation, which varies. At any given moment, the amount of solar radiation received at a location on the Earth's surface depends on the state of the atmosphere, the location's latitude, and the time of day.
• -1. Seriously? After I submitted my answer, you edited your initial answer (which was very different and rather non-responsive) to incorporate exactly what I wrote. – David Hammen Feb 19 '15 at 13:37
• @DavidHammen. Request: "I'm trying to find out if there are reliable values for the solar flux received on Earth. I realise this will change with location, time of day, season, weather etc. but I'm surprised that I can find anything at all." I don't think your answer actually covers this, mine yes, from the initial post. Conversion irradiance --> insolation, Wikipedia is clearer and mentions Earth rotation and night. For the Earth budget (which is secondary to the question), I added it after you, you are right. Is this a bad practice? – mins Feb 19 '15 at 18:45
You are asking about the Earth's radiation budget. (Google that phrase. There's a lot of information out there on this topic.) NASA and NOAA have been using remote sensing to study the Earth's radiation budget for at least forty years.
Note that the above has the incoming solar radiation at 340.4 W/m². Compare that with your value of 1372 (which is a bit high). Your value and the NASA value differ by a factor of four. The reason is simple: the flux intercepted by the Earth is proportional to $\pi {R_e}^2$, but the surface area of the Earth is $4 \pi {R_e}^2$. Dividing the solar constant by four yields the incoming solar flux at the top of the atmosphere, averaged over ten years and over the surface of the Earth. About 29.3% of the incoming solar radiation is reflected back into space, mostly by clouds. The atmosphere isn't perfectly clear; it absorbs another 22.6% of the incoming solar radiation. A bit less than half (48.0%) makes its way all the way to the surface.
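As a quick sanity check of the factor-of-four argument and the budget fractions above, a minimal sketch in Python (1361 W/m² is the commonly quoted modern value of the solar constant, slightly below your 1372):

# Average the solar constant over the whole sphere and apply the budget fractions.
solar_constant = 1361.0                 # W/m^2 at 1 AU (measured total solar irradiance)
toa_average = solar_constant / 4        # spread over 4*pi*R^2 instead of pi*R^2 -> ~340 W/m^2

reflected = 0.293                       # mostly clouds
absorbed_by_atmosphere = 0.226
reaching_surface = 0.480                # the three fractions sum to ~1

print(f"average top-of-atmosphere flux: {toa_average:.1f} W/m^2")
print(f"average flux reaching the surface: {toa_average * reaching_surface:.1f} W/m^2")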
|
12:13 AM
A new tag . The tag-creator Al Jebr also created a tag-excerpt.
If $f: X \to Y$ is a quasi-isometry between geodesic spaces, and $\gamma: [a,b] \to X$ is a geodesic segment in $X$, is $f\circ \gamma$ a geodesic segment in $Y$? I know that $f \circ \gamma$ is a quasi-geodesic segment since $f$ is a quasi-isometric embedding. But since $f$ is actually a qu...
BTW it seems that the tag existed shortly in 2014: math.stackexchange.com/posts/655358/revisions But this is the only occurrence I was able to find: data.stackexchange.com/math/query/927958/…
11 hours later…
10:57 AM
|
# Regression using circular variable (hour from 0~23) as predictor
My question originally arises from reading this post Use of circular predictors in linear regression.
Right now, I'm trying to construct a linear regression using the "Bike Sharing" dataset from https://archive.ics.uci.edu/ml/datasets/bike+sharing+dataset, which basically regresses bike rental count on different variables.
One of the predictors I have a question about is the "Hour" at which the rental occurred, which takes values from 0 to 23. The original post suggests transforming the circular data (time of day) using a sine function to maintain the circular characteristic.
I was trying to apply the same methodology to my situation to transform the Hour variable. However, transforming 0~23 using sin(π hour/180) gives both 00:00 and 12:00 a value of 0. But I think people will certainly display different behavior when renting a bike at midnight (00:00) versus noon (12:00).
In this case, is it better to just use 23 dummy variables to account for hour or am I misunderstanding the concept of circular regression?
• I doubt bike sharing data of any sort are well represented by a simple sine wave. For more flexibility consider using a circular spline instead.
– whuber
Mar 25, 2018 at 19:04
Circular regression most often would refer to regression with a circular outcome.
Here, however, we have linear regression with a circular predictor. In that case, we would add both the sine and the cosine of the angle to the regression, so that we predict the outcome as $\hat{y} = \beta_1\cos(\pi \cdot \text{hour} / 12) + \beta_2\sin(\pi \cdot \text{hour} / 12).$ Adding both the sine and the cosine naturally resolves the issue you mention. Note that here, unlike in your formula, I've assumed that hour is expressed in hours rather than degrees.
For a more elaborate answer on how to do this and what it means, please see the answer to this SO question.
You want to map the interval $$(0,24)$$ to the interval $$(0,2\pi)$$ (a full cycle); the function to do so is
$$2\pi \frac{\mathrm{hour}}{24}$$
You then need two terms in your linear model (recall that an equivalent non-linear parametrization uses phase & amplitude):
$$\beta_1 \sin\left(2\pi\frac{\mathrm{hour}}{24}\right) + \beta_2 \cos\left(2\pi \frac{\mathrm{hour}}{24}\right)$$
Noon & midnight aren't constrained to result in equal predicted values because the phase is estimated from your data. Noon might be at the peak and midnight at the trough of the wave.
(And you can continue with harmonics in an analogous way to higher-order polynomial terms: $$\ldots +\beta_3 \sin\left(2\times 2\pi\frac{\mathrm{hour}}{24}\right) + \beta_4 \cos\left(2\times 2\pi \frac{\mathrm{hour}}{24}\right)+\ldots$$)
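To make this concrete, a minimal sketch in Python/scikit-learn; the column names hr and cnt follow the UCI hourly file layout, so adjust them if your copy differs:

import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# hour.csv is the hourly file from the UCI Bike Sharing dataset;
# `hr` is the hour of day (0-23) and `cnt` the rental count.
df = pd.read_csv("hour.csv")

angle = 2 * np.pi * df["hr"] / 24            # map (0, 24) onto (0, 2*pi)
X = np.column_stack([
    np.sin(angle), np.cos(angle),            # fundamental daily cycle
    np.sin(2 * angle), np.cos(2 * angle),    # first harmonic, for a sharper daily shape
])
y = df["cnt"].to_numpy()

model = LinearRegression().fit(X, y)
print(model.intercept_, model.coef_)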
• harmonic regression Mar 25, 2018 at 17:23
|
## My experience with W3M, a console-based web browser
W3M, an entirely console-based web browser I mentioned briefly in a previous post, can, despite a series of limitations in the context of a full web browsing experience, play just the necessary role in minimizing your workflow to the essential, non-distracting components.
Having reached a point of frustration with fully-featured graphical web browsers, I proceeded to perform serious meta-analysis on how I best operate.
I've been through a series of graphical browsers, from the heavier traditional variants to the smaller-footprint alternatives. The problem with each is the ease of slipping into the rabbit hole. This is fantastic for 'web browsing' in the strict sense of the term, but fatal for single-tasking and content production.
I don't want pleasantly designed web pages opened in a series of tabs waiting to be consumed with ease. I don't want attractive visual elements arranged conveniently for ease of navigation and scanning. I don't want unnecessary images cluttering real estate, when most images across my work contribute nothing but decoration. I don't want quick access to email or Youtube. I don't want any superfluous content to crawl its way to my attention.
I don’t like to discipline myself to cultivate or eliminate habits. This has hardly been successful in the past. I prefer to eliminate malignant habits by their roots, and make positive habits essential and irreplaceable.
With respect to web browsers, I needed to regress to the roots of the World Wide Web, before the epoch of HTML 5.0 and rich web content. I don't even know everything that's out there, and don't much care; not for lack of value and innovation, but because it would deviate my attention from the essential.
A series of limited-function text-based browsers exists, used largely by enthusiasts or in the less frequently encountered console-only environment. I focused on W3M, but don't have compelling arguments for why I favored it over Links, Lynx, the Emacs browser, or others.
W3M actually provides support for images in an otherwise entirely terminal-based interface, never mind by what means. The support is terminal-dependent, however, and incompatible with my terminal, which I didn't much object to. Moreover, I entirely disabled image rendering attempts and even the automatic image downloads, which means web page rendering has become a function of pure text. However, image viewing is possible on demand: the image placeholders still render among the content, and open in an externally configured viewer at the invocation of the proper shortcut.
Web page rendering also becomes largely color oblivious, coloring only the link, image-placeholder, form, and title text differently from the remainder of the document. Otherwise, the default color scheme involves light gray text on a black background, consistent with my terminal, and far more forgiving on the eyes. (I utilized a variation of the 'dark reader' plugins in full web browsers, but found it a nuisance having to toggle its status between contrastingly lit pages.)
W3M provides no JavaScript support, and renders text-based elements in a layout that attempts to respect the graphical counterpart. The functionality suffices to browse most simple, text-oriented content. W3M also provides support for tabs.
I have not completely eliminated the possibility of distractions and the such. The advantage, however, lies precisely in the lack of appeal to reach superfluous content. I’ll explain how.
Navigation is entirely keystroke-based, which makes content that isn't purely text-oriented, such as frames, forms, and tables, a bit of a burden to navigate. Looking at the rendering of such pages, I lose further motivation to explore any but the essential content.
Functions to visualize all accessible links and open them in a new tab are present. The ability to search is handled in a similar way to other Unix text buffers, with partially VIM-like shortcuts. I can also download documents or images via respective shortcuts.
The virtue of such an interface lies not in the inability to consume unnecessary content, but in the sufficient discomfort of carrying out the task. Such a web browser is best suited for pure, simple text content, without rich elements.
W3M doesn't provide global session support, the ability to save open tabs, or the ability to interface with a running W3M instance from outside the application. This limitation further discourages the urge for tab harvesting. Once the running instance is closed or terminates for any reason, the tab layout is irrecoverable.
W3M does enable the opening of any link in an external web browser, which, arguably, could entirely perturb the text-based workflow. But I rationalize this as less of a danger and more of a backup measure for the infrequent yet critical web content not fit for a terminal. Once I invest in a particular working environment, I generally refrain from interchanging between multiple alternatives.
This flexibility to pass a URL to an external application I’ve actually leveraged for a useful purpose. I don’t much find the presence of many tabs appealing, but recognize the need to queue necessary (not bookmark-worthy) content for later viewing, not deviating the mind from the task at hand. For this, I created a shortcut to simply append the highlighted URL to an external text file, taking no further action on it. In addition to not cluttering the workspace with provocative tabs, it also serves the benefit of maintaining the URLs externally saved and not subject to loss. I’ve also experimented with passing the URL to a console-based journaling application, adding the appropriate tag, and making the retrieval of such saved URLs equally simple.
I’ll include the relevant bit from my configuration file ~/.w3m/config that dictates this external ‘browser’ behavior:
extbrowser sensible-browser %s &
extbrowser2 surf %s &
extbrowser3 url=%s out_file=~/saved_links.txt && echo $url >> $out_file && echo $url saved to $out_file && read s
extbrowser4 url=%s && jrnl now: @link $url && echo $url saved to journal && read s
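For reference, if I remember the defaults correctly, the numbered alternatives are invoked by prefixing the external-browser keystroke with the number (e.g. 2 M or 3 M passes the current page to extbrowser2 or extbrowser3, with an EXTERN_LINK variant for the link under the cursor); the exact keys naturally depend on your ~/.w3m/keymap.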
Naturally, cases requiring usage of a full web browser will arise on occasion. My hopes lie in the 80/20 rule as always, that such cases will comprise a small minority, yielding room for optimizing the workflow for the common remainder.
I've only touched on the basic W3M functionality. It's entirely customizable with regard to keystrokes, MIME types, rendering, etc. Ultimately, I've transitioned from heavy focus on a rich graphical application to the rendering of pure text in a console. In addition to the potential for a more goal-oriented workflow and a much decreased memory footprint, I find console applications visually minimalist and retro in their appeal. Pure content over visual flourish.
Another huge benefit is the extra step one must take with regard to conducting searches. There is no smart search function incorporated into the URL bar. In fact, there is no URL bar. One must invoke a shortcut to enter a desired URL, but it neither autocompletes nor offers a smart search function. I must resort to the traditional method of loading the search engine page, entering the query into the text form (to which I must navigate via the keyboard), and invoking the submit button.
The above hurdle causes me to reconsider the importance of a search, and makes me less likely to outsource a minor cognitive challenge to a search engine. Alternatively, I'm more likely to bundle multiple searches into one session, rather than interrupting focus for intermittent and largely distracting search operations.
With regard to email, I also outsourced this element to the console-based neomutt, which interacts with a local Maildir synchronized with Gmail via offlineimap. It may sound convoluted, but I’m finding this workflow subject to less distraction, and reminiscent of the days before heavy reliance on rich web-based email. I might explore Neomutt in a separate post if I have anything exceptional to add. I perceive, however, that the console-based email client userbase far outnumbers the console web client users.
I find that, in general, technological innovation, in its totality, disproportionately surpasses the respective need, much of which demands a mere application of traditionally available simple tools with a small touch of imagination.
|
# 13.2 Active circuit elements
Page 2 / 7
If the diode is reverse-biased, the $+$ terminal of the battery is connected to the n-type semiconductor. This makes it even more negatively charged. It also removes even more of the free electrons near the depletion band. At the same time, the $-$ terminal of the battery is connected to the p-type silicon. This will supply free electrons and fill in more of the holes next to the depletion band. Both processes cause the depletion band to get wider. The resistance of the diode (which was already high) increases. This is why a reverse-biased diode does not conduct.
Another explanation for the increased resistance is that the battery has made the p-type semiconductor more negative than it used to be, making it repel any electrons from the n-type semiconductor which attempt to cross the depletion band.
On the other hand, if the diode is forward biased, the depletion band is made narrower. The negative charge on the p-type silicon is cancelled out by the battery. The greater the voltage used, the narrower the depletion band becomes. Eventually, when the voltage is about 0,6 V (for silicon) the depletion band disappears. Once this has occurred, the diode conducts very well.
## The diode
1. What is a diode?
2. What is a diode made of?
3. What is the term which means that a diode is connected the `wrong way' and little current is flowing?
4. Why is a diode able to conduct electricity in one direction much more easily than the other?
## The light-emitting diode (led)
A light-emitting diode (LED) is a diode device that emits light when charge flows in the correct direction through it. If you apply a voltage to force current to flow in the direction the LED allows, it will light up.
## Circuit symbols
This notation of having two small arrows pointing away from the device is common to the schematic symbols of all light-emitting semiconductor devices. Conversely, if a device is light-activated (meaning that incoming light stimulates it), then the symbol will have two small arrows pointing toward it. It is interesting to note, though, that LEDs are capable of acting as light-sensing devices: they will generate a small voltage when exposed to light, much like a solar cell on a small scale. This property can be gainfully applied in a variety of light-sensing circuits.
The colour depends on the semiconducting material used to construct the LED, and can be in the near-ultraviolet, visible or infrared part of the electromagnetic spectrum.
## Interesting fact
Nick Holonyak Jr. (born 1928) of the University of Illinois at Urbana-Champaign developed the first practical visible-spectrum LED in 1962.
## Light emission
The wavelength of the light emitted, and therefore its colour, depends on the materials forming the p-n junction. A normal diode, typically made of silicon or germanium, emits invisible far-infrared light (so it can't be seen), but the materials used for an LED can emit light corresponding to near-infrared, visible or near-ultraviolet frequencies.
how is ester formed
how is n ester formed
Aubrey
Alcohol reacts with a carboxylic acid
Texas
an athlete with a mass of 70kg runs at a velocity of 45km . determine the athlete's momentum
Is that a velocity or something else
msawenkosi
45km/h i guess
Texas
Change to m/s
Texas
45 km/h = 12.5 m/s; p = mv = 70 × 12.5 = 875 kg·m/s
Thato
what are the measures of the rates of reaction
Volume, concentration, temperature, pressure, surface area
Thato
the principle of superposition of waves
what is work
is this a group chat
Hey can y'all define newton's 2nd law
mthebzification
If a resultant force acts on an object, the object will accelerate in the direction of the resultant force; the acceleration of the object is directly proportional to the net force and inversely proportional to the mass of the object
mosa
how do you calculate tension force
Bulumko
use the formula Fnet=ma if there is tension connecting two objects
Sboniso
To calculate tension, usually calculate acceleration first. Draw separate free-body diagrams for each body, then apply Fnet = ma to solve for the tension
Kevin
Hi people
Paul
how does temperature affect the equilibrium position
An increased temperature increases the average kinetic energy, which in turn increases the number of effective collisions
Lwando
so...which reaction is favored between endothermic and exothermic .when temperature is increased..?
Blessing
Exothermic reaction, because energy is released to the surroundings as heat and light energy... graphically, so much energy is released as reactants form products, and because the temperature is high the rate of reaction is fast, which means there are successful collisions
Code
INTEMENDO - INCREASE IN TEMPERATURE FAVOURS ENDOTHERMIC DETEMEXO - DECREASE IN TEMPERATURE FAVOURS EXOTHERMIC
Thato
an object will continue in a state of rest unless it is acted upon by an unbalanced force
Newton's Law 1
Code
First Newton's Law
Azola
Newton's first law
Surprise
newton first law
Thinavhuyo
Newton's first law
Blessing
when pressure is increased what happen to volume
decreases
Code
care to explain?
Mpati
if pressure is applied to a piston, the volume will decrease and particles will collide more frequently with the walls of the container. Each time they collide with the walls they exert a force on them. More collisions mean more force and the pressure will increase; that's Boyle's Law
Code
Because the volume has decreased, the particles will collide more frequently with the walls of the container, and each time they collide with the walls they exert a force on them. More collisions mean more force, so the pressure will increase; that's Boyle's Law
Code
what is the difference between momentum and a change in momentum?
How to name a branched molecule from right or left?
What's Free Fall
Free Fall means there is no acting force on that object.
Dingaletu
only gravitational force
Dingaletu
no external force acting on an object
Sphiwe
by only force of gravite
Sello
but gravitational force
Sphiwe
true
Lucky
a motion in which the only force acting is gravitational force
Blessing
and an object experiencing free fall is referred as a projectile
Blessing
Do polymers form only if the compound is saturated?
what is a free fall?
is when The Only Force acting On an Object is Gravitational Force
Thats right
Beyanca
She's just helping those who forgot it...bro
Thato
guys I need help on Getting ready for a last minute test
Neil
Well, I'm in grade 12, so we're doing this topic about the IUPAC thing
Kenelioe
on What?
the organic molecule section
Kenelioe
IUPAC NAMING WHICH FUNCTIONAL GROUP YOU CANNOT NAME?SO I COULD HELP YOU
ester
Sboniso
You should also look at structural isomers. It's crucial, as they might add that one. Also try to write down the structural formulae of all the given compounds in the table
milani
|
### Security Analysis of an Ultra-lightweight RFID Authentication Protocol for M-commerce
Seyed Farhad Aghili and Hamid Mala
##### Abstract
Over the last few years, more people perform their social activities on mobile devices, such as mobile payment or mobile wallets. Mobile commerce (m-commerce) refers to carrying out electronic commerce (e-commerce) using mobile devices and wireless networks. Radio frequency identification (RFID) is a technology which can be employed to complete payment functions in m-commerce. As RFID subsystems are applied in m-commerce and supply chains, the related security concerns are very important. Recently, Fan et al. proposed an ultra-lightweight RFID authentication scheme for m-commerce (ULRAS) and claimed that their protocol is efficient enough and provides a high level of security. In this paper, we show that their protocol is vulnerable to secret disclosure and reader impersonation attacks. Finally, we improve the Fan et al. protocol to present a new one, which is resistant to the attacks presented in this paper as well as other known attacks in the context of RFID authentication. Our proposed improvement does not impose any additional workload on the RFID tag.
Category
Cryptographic protocols
Publication info
Preprint. MINOR revision.
Keywords
Mobile commerce, RFID, Ultra-lightweight, Secret disclosure, Impersonation
Short URL
https://ia.cr/2017/547
CC BY
BibTeX
@misc{cryptoeprint:2017/547,
author = {Seyed Farhad Aghili and Hamid Mala},
title = {Security Analysis of an Ultra-lightweight RFID Authentication Protocol for M-commerce},
howpublished = {Cryptology ePrint Archive, Paper 2017/547},
year = {2017},
note = {\url{https://eprint.iacr.org/2017/547}},
url = {https://eprint.iacr.org/2017/547}
}
|
# Difference between \href{xxx.pdf}{} and \href{run:xxx.pdf}{}
What are the differences between these two types of links?
\href{xxx.pdf}{My PDF}
\href{run:xxx.pdf}{run:My PDF}
Both open the same file in Acrobat (but at different zoom levels?). However, in TeXShop, only the run: version opens the file. I have always used the second style, but as both seem to be valid, I am wondering if I need to change things?
## Questions:
1. What are the difference between these two types of links?
2. When should I use the first style versus the second style?
## Notes:
• At this time, I am only asking about opening external PDF files. Having answers regarding non-PDF files (in case there are different issues involved) is fine too, as that may be useful to others.
## Code:
\documentclass{article}
\usepackage{hyperref}
\begin{document}
\href{run:xxx.pdf}{run:My PDF}
\end{document}
• run is intended to launch an application rather than open a file. I think it just happens to work: if you give it a PDF file, it opens it in your default PDF viewer. Mar 5, 2020 at 22:07
Here is a slightly modified version of your document:
\documentclass{article}
\pdfobjcompresslevel=0
\pdfcompresslevel=0
\usepackage{hyperref}
\begin{document}
\href{xxx.pdf}{My PDF}
\href{run:xxx.pdf}{run:My PDF}
\end{document}
You get an uncompressed PDF and therefore a readable PDF (by opening it directly in your favorite editor).
1. The first link is coded as a GoToR action (a remote go-to action) similar to an ordinary go-to action but jumps to a destination in another PDF file instead of the current file:
/Subtype/Link/A<</F(xxx.pdf)/S/GoToR/D[0/Fit]>>
2. The second link is coded as a Launch action (to launch an external application):
/Subtype/Link/A<</F(xxx.pdf)/S/Launch>>
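In practical terms (my reading of the PDF actions involved, not a hyperref rule): a GoToR action is resolved by the PDF viewer itself, which opens the target document, optionally at a given destination, whereas a Launch action asks the viewer to hand the file off to an external application, and many viewers restrict or prompt before executing it. So for simply opening another PDF, the plain \href{xxx.pdf}{...} form is the intended one; run: is better reserved for actually starting a program.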
|
# Shrinking drill hole size in Eagle
The last time I made a PCB, the drill hole size relative to the component pad size was so large that, when I reached the soldering stage, the component wires were not connecting to the pads correctly.
What I want to do in Eagle is change the drill holes on every device (with small pins) on my PCB so that they are about 1/64 of an inch instead of Eagle's preferred size. That way, when I drill, I remove only a small amount of copper, the copper still meets the hole, and the odds of a strong connection are higher.
Is there a quick way I can make the drill holes smaller in eagle without having to manually edit each part one-by-one?
• The general idea is that a CAD package represents the manufactured result. If you want to change the resulting hole sizes, you can do that with the wrench tool, scripts, or even sed on the raw XML files. But if you don't want to change the intended size of the hole, and merely want to generate your etch images with smaller holes, that's the role for something like drill-aid.ulp. By contrast, if you actually shrink the defined (as-manufactured) sizes of the holes to facilitate your drilling, and then order boards from a real board house without changing them back, you might find header pins, etc. won't fit. – Chris Stratton Jul 24 '16 at 7:29
There is a User Language Program (ULP) for this in the standard distribution. It is called drill-aid.ulp.
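If it helps, ULPs can typically be started from the board editor's command line with the RUN command (e.g. RUN drill-aid) or via the Run ULP entry in the File menu; the exact menu wording varies a little between EAGLE versions, so treat this as a pointer rather than a precise recipe.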
|
Orals
Rose Wang · Esin Durmus · Noah Goodman · Tatsunori Hashimoto
Modern language models can generate high-quality short texts. However, they often meander or are incoherent when generating longer texts. These issues arise from the next-token-only language modeling objective. To address these issues, we introduce Time Control (TC), a language model that implicitly plans via a latent stochastic process. TC does this by learning a representation which maps the dynamics of how text changes in a document to the dynamics of a stochastic process of interest. Using this representation, the language model can generate text by first implicitly generating a document plan via a stochastic process, and then generating text that is consistent with this latent plan. Compared to domain-specific methods and fine-tuning GPT2 across a variety of text domains, TC improves performance on text infilling and discourse coherence. On long text generation settings, TC preserves the text structure both in terms of ordering (up to +40% better) and text length consistency (up to +17% better). Human evaluators also prefer TC's output 28.6% more than the baselines.
Nicolas Papernot · Thomas Steinke
For many differentially private algorithms, such as the prominent noisy stochastic gradient descent (DP-SGD), the analysis needed to bound the privacy leakage of a single training run is well understood. However, few studies have reasoned about the privacy leakage resulting from the multiple training runs needed to fine tune the value of the training algorithm’s hyperparameters. In this work, we first illustrate how simply setting hyperparameters based on non-private training runs can leak private information. Motivated by this observation, we then provide privacy guarantees for hyperparameter search procedures within the framework of Renyi Differential Privacy. Our results improve and extend the work of Liu and Talwar (STOC 2019). Our analysis supports our previous observation that tuning hyperparameters does indeed leak private information, but we prove that, under certain assumptions, this leakage is modest, as long as each candidate training run needed to select hyperparameters is itself differentially private.
Haobo Wang · Ruixuan Xiao · Yixuan Li · Lei Feng · Gang Niu · Gang Chen · Junbo Zhao
Partial label learning (PLL) is an important problem that allows each training example to be labeled with a coarse candidate set, which well suits many real-world data annotation scenarios with label ambiguity. Despite the promise, the performance of PLL often lags behind the supervised counterpart. In this work, we bridge the gap by addressing two key research challenges in PLL---representation learning and label disambiguation---in one coherent framework. Specifically, our proposed framework PiCO consists of a contrastive learning module along with a novel class prototype-based label disambiguation algorithm. PiCO produces closely aligned representations for examples from the same classes and facilitates label disambiguation. Theoretically, we show that these two components are mutually beneficial, and can be rigorously justified from an expectation-maximization (EM) algorithm perspective. Extensive experiments demonstrate that PiCO significantly outperforms the current state-of-the-art approaches in PLL and even achieves comparable results to fully supervised learning. Code and data available: https://github.com/hbzju/PiCO.
Yusong Wu · Ethan Manilow · Yi Deng · Rigel Swavely · Kyle Kastner · Timotheus Cooijmans · Aaron Courville · Anna Huang · Jesse Engel
Musical expression requires control of both what notes are played and how they are performed. Conventional audio synthesizers provide detailed expressive controls, but at the cost of realism. Black-box neural audio synthesis and concatenative samplers can produce realistic audio, but have few mechanisms for control. In this work, we introduce MIDI-DDSP, a hierarchical model of musical instruments that enables both realistic neural audio synthesis and detailed user control. Starting from interpretable Differentiable Digital Signal Processing (DDSP) synthesis parameters, we infer musical notes and high-level properties of their expressive performance (such as timbre, vibrato, dynamics, and articulation). This creates a 3-level hierarchy (notes, performance, synthesis) that affords individuals the option to intervene at each level, or utilize trained priors (performance given notes, synthesis given performance) for creative assistance. Through quantitative experiments and listening tests, we demonstrate that this hierarchy can reconstruct high-fidelity audio, accurately predict performance attributes for a note sequence, independently manipulate the attributes of a given performance, and as a complete system, generate realistic audio from a novel note sequence. By utilizing an interpretable hierarchy, with multiple levels of granularity, MIDI-DDSP opens the door to assistive tools to empower individuals across a diverse range of musical experience.
Mia Chiquier · Chengzhi Mao · Carl Vondrick
Automatic speech recognition systems have created exciting possibilities for applications; however, they also enable opportunities for systematic eavesdropping. We propose a method to camouflage a person's voice from these systems without inconveniencing the conversation between people in the room. Standard adversarial attacks are not effective in real-time streaming situations because the characteristics of the signal will have changed by the time the attack is executed. We introduce predictive adversarial attacks, which achieve real-time performance by forecasting the attack vector that will be the most effective in the future. Under real-time constraints, our method jams the established speech recognition system DeepSpeech 3.9x more than online projected gradient descent as measured through word error rate, and 6.6x more as measured through character error rate. We furthermore demonstrate our approach is practically effective in realistic environments with complex scene geometries.
Nicholas Carlini · Andreas Terzis
Multimodal contrastive learning methods like CLIP train on noisy and uncurated training datasets. This is cheaper than labeling datasets manually, and even improves out-of-distribution robustness. We show that this practice makes backdoor and poisoning attacks a significant threat. By poisoning just 0.01% of a dataset (e.g., just 300 images of the 3 million-example Conceptual Captions dataset), we can cause the model to misclassify test images by overlaying a small patch. Targeted poisoning attacks, whereby the model misclassifies a particular test input with an adversarially-desired label, are even easier, requiring control of 0.0001% of the dataset (e.g., just three out of the 3 million images). Our attacks call into question whether training on noisy and uncurated Internet scrapes is desirable.
Ye Yuan · Yuda Song · Zhengyi Luo · Wen Sun · Kris Kitani
An agent's functionality is largely determined by its design, i.e., skeletal structure and joint attributes (e.g., length, size, strength). However, finding the optimal agent design for a given function is extremely challenging since the problem is inherently combinatorial and the design space is prohibitively large. Additionally, it can be costly to evaluate each candidate design which requires solving for its optimal controller. To tackle these problems, our key idea is to incorporate the design procedure of an agent into its decision-making process. Specifically, we learn a conditional policy that, in an episode, first applies a sequence of transform actions to modify an agent's skeletal structure and joint attributes, and then applies control actions under the new design. To handle a variable number of joints across designs, we use a graph-based policy where each graph node represents a joint and uses message passing with its neighbors to output joint-specific actions. Using policy gradient methods, our approach enables joint optimization of agent design and control as well as experience sharing across different designs, which improves sample efficiency substantially. Experiments show that our approach, Transform2Act, outperforms prior methods significantly in terms of convergence speed and final performance. Notably, Transform2Act can automatically discover plausible …
Boris Oreshkin · Florent Bocquelet · Felix G. Harvey · Bay Raitt · Dominic Laflamme
Our work focuses on the development of a learnable neural representation of human pose for advanced AI assisted animation tooling. Specifically, we tackle the problem of constructing a full static human pose based on sparse and variable user inputs (e.g. locations and/or orientations of a subset of body joints). To solve this problem, we propose a novel neural architecture that combines residual connections with prototype encoding of a partially specified pose to create a new complete pose from the learned latent space. We show that our architecture outperforms a baseline based on Transformer, both in terms of accuracy and computational efficiency. Additionally, we develop a user interface to integrate our neural model in Unity, a real-time 3D development platform. Furthermore, we introduce two new datasets representing the static human pose modeling problem, based on high-quality human motion capture data, which will be released publicly along with model code.
Benjamin Eysenbach · Ruslan Salakhutdinov · Sergey Levine
How can a reinforcement learning (RL) agent prepare to solve downstream tasks if those tasks are not known a priori? One approach is unsupervised skill discovery, a class of algorithms that learn a set of policies without access to a reward function. Such algorithms bear a close resemblance to representation learning algorithms (e.g., contrastive learning) in supervised learning, in that both are pretraining algorithms that maximize some approximation to a mutual information objective. While prior work has shown that the set of skills learned by such methods can accelerate downstream RL tasks, prior work offers little analysis into whether these skill learning algorithms are optimal, or even what notion of optimality would be appropriate to apply to them. In this work, we show that unsupervised skill discovery algorithms based on mutual information maximization do not learn skills that are optimal for every possible reward function. However, we show that the distribution over skills provides an optimal initialization minimizing regret against adversarially-chosen reward functions, assuming a certain type of adaptation procedure. Our analysis also provides a geometric perspective on these skill learning methods.
Sagar Vaze · Kai Han · Andrea Vedaldi · Andrew Zisserman
The ability to identify whether or not a test sample belongs to one of the semantic classes in a classifier's training set is critical to practical deployment of the model. This task is termed open-set recognition (OSR) and has received significant attention in recent years. In this paper, we first demonstrate that the ability of a classifier to make the 'none-of-above' decision is highly correlated with its accuracy on the closed-set classes. We find that this relationship holds across loss objectives and architectures, and further demonstrate the trend both on the standard OSR benchmarks as well as on a large-scale ImageNet evaluation. Second, we use this correlation to boost the performance of the maximum softmax probability OSR 'baseline' by improving its closed-set accuracy, and with this strong baseline achieve state-of-the-art on a number of OSR benchmarks. Similarly, we boost the performance of the existing state-of-the-art method by improving its closed-set accuracy, but the resulting discrepancy with the strong baseline is marginal. Our third contribution is to present the 'Semantic Shift Benchmark' (SSB), which better respects the task of detecting semantic novelty, as opposed to low-level distributional shifts as tackled by neighbouring machine learning fields. On this new evaluation, we again …
Yonathan Efroni · Dipendra Misra · Akshay Krishnamurthy · Alekh Agarwal · John Langford
Many real-world applications of reinforcement learning (RL) require the agent to deal with high-dimensional observations such as those generated from a megapixel camera. Prior work has addressed such problems with representation learning, through which the agent can provably extract endogenous, latent state information from raw observations and subsequently plan efficiently. However, such approaches can fail in the presence of temporally correlated noise in the observations, a phenomenon that is common in practice. We initiate the formal study of latent state discovery in the presence of such exogenous noise sources by proposing a new model, the Exogenous Block MDP (EX-BMDP), for rich observation RL. We start by establishing several negative results, by highlighting failure cases of prior representation learning based approaches. Then, we introduce the Predictive Path Elimination (PPE) algorithm, that learns a generalization of inverse dynamics and is provably sample and computationally efficient in EX-BMDPs when the endogenous state dynamics are near deterministic. The sample complexity of PPE depends polynomially on the size of the latent endogenous state space while not directly depending on the size of the observation space, nor the exogenous state space. We provide experiments on challenging exploration problems which show that our approach works empirically.
Kyle Hsu · Moo Kim · Rafael Rafailov · Jiajun Wu · Chelsea Finn
We study how the choice of visual perspective affects learning and generalization in the context of physical manipulation from raw sensor observations. Compared with the more commonly used global third-person perspective, a hand-centric (eye-in-hand) perspective affords reduced observability, but we find that it consistently improves training efficiency and out-of-distribution generalization. These benefits hold across a variety of learning algorithms, experimental settings, and distribution shifts, and for both simulated and real robot apparatuses. However, this is only the case when hand-centric observability is sufficient; otherwise, including a third-person perspective is necessary for learning, but also harms out-of-distribution generalization. To mitigate this, we propose to regularize the third-person information stream via a variational information bottleneck. On six representative manipulation tasks with varying hand-centric observability adapted from the Meta-World benchmark, this results in a state-of-the-art reinforcement learning agent operating from both perspectives improving its out-of-distribution generalization on every task. While some practitioners have long put cameras in the hands of robots, our work systematically analyzes the benefits of doing so and provides simple and broadly applicable insights for improving end-to-end learned vision-based robotic manipulation.
Floris Geerts · Juan L. Reutter
Characterizing the separation power of graph neural networks (GNNs) provides an understanding of their limitations for graph learning tasks. Results regarding separation power are, however, usually geared at specific GNNs architectures, and tools for understanding arbitrary GNN architectures are generally lacking. We provide an elegant way to easily obtain bounds on the separation power of GNNs in terms of the Weisfeiler-Leman (WL) tests, which have become the yardstick to measure the separation power of GNNs. The crux is to view GNNs as expressions in a procedural tensor language describing the computations in the layers of the GNNs. Then, by a simple analysis of the obtained expressions, in terms of the number of indexes used and the nesting depth of summations, bounds on the separation power in terms of the WL-tests readily follow. We use tensor language to define Higher-Order Message-Passing Neural Networks (or k-MPNNs), a natural extension of MPNNs. Furthermore, the tensor language point of view allows for the derivation of universality results for classes of GNNs in a natural way. Our approach provides a toolbox with which GNN architecture designers can analyze the separation power of their GNNs, without needing to know the intricacies of the WL-tests. We also …
Jake Topping · Francesco Di Giovanni · Benjamin Chamberlain · Xiaowen Dong · Michael Bronstein
Most graph neural networks (GNNs) use the message passing paradigm, in which node features are propagated on the input graph. Recent works pointed to the distortion of information flowing from distant nodes as a factor limiting the efficiency of message passing for tasks relying on long-distance interactions. This phenomenon, referred to as 'over-squashing', has been heuristically attributed to graph bottlenecks where the number of $k$-hop neighbors grows rapidly with $k$. We provide a precise description of the over-squashing phenomenon in GNNs and analyze how it arises from bottlenecks in the graph. For this purpose, we introduce a new edge-based combinatorial curvature and prove that negatively curved edges are responsible for the over-squashing issue. We also propose and experimentally test a curvature-based graph rewiring method to alleviate the over-squashing.
Steeven Janny · Fabien Baradel · Natalia Neverova · Madiha Nadri · Greg Mori · Christian Wolf
Learning causal relationships in high-dimensional data (images, videos) is a hard task, as they are often defined on low dimensional manifolds and must be extracted from complex signals dominated by appearance, lighting, textures and also spurious correlations in the data. We present a method for learning counterfactual reasoning of physical processes in pixel space, which requires the prediction of the impact of interventions on initial conditions. Going beyond the identification of structural relationships, we deal with the challenging problem of forecasting raw video over long horizons. Our method does not require the knowledge or supervision of any ground truth positions or other object or scene properties. Our model learns and acts on a suitable hybrid latent representation based on a combination of dense features, sets of 2D keypoints and an additional latent vector per keypoint. We show that this better captures the dynamics of physical processes than purely dense or sparse representations. We introduce a new challenging and carefully designed counterfactual benchmark for predictions in pixel space and outperform strong baselines in physics-inspired ML and video prediction.
X.Y. Han · Vardan Papyan · David Donoho
The recently discovered Neural Collapse (NC) phenomenon occurs pervasively in today's deep net training paradigm of driving cross-entropy (CE) loss towards zero. During NC, last-layer features collapse to their class-means, both classifiers and class-means collapse to the same Simplex Equiangular Tight Frame, and classifier behavior collapses to the nearest-class-mean decision rule. Recent works demonstrated that deep nets trained with mean squared error (MSE) loss perform comparably to those trained with CE. As a preliminary, we empirically establish that NC emerges in such MSE-trained deep nets as well through experiments on three canonical networks and five benchmark datasets. We provide, in a Google Colab notebook, PyTorch code for reproducing MSE-NC and CE-NC: https://colab.research.google.com/github/neuralcollapse/neuralcollapse/blob/main/neuralcollapse.ipynb. The analytically-tractable MSE loss offers more mathematical opportunities than the hard-to-analyze CE loss, inspiring us to leverage MSE loss towards the theoretical investigation of NC. We develop three main contributions: (I) We show a new decomposition of the MSE loss into (A) terms directly interpretable through the lens of NC and which assume the last-layer classifier is exactly the least-squares classifier; and (B) a term capturing the deviation from this least-squares classifier. (II) We exhibit experiments on canonical datasets and networks demonstrating that term-(B) is negligible during training. …
Albert Gu · Karan Goel · Christopher Re
A central goal of sequence modeling is designing a single principled model that can address sequence data across a range of modalities and tasks, particularly on long-range dependencies. Although conventional models including RNNs, CNNs, and Transformers have specialized variants for capturing long dependencies, they still struggle to scale to very long sequences of $10000$ or more steps. A promising recent approach proposed modeling sequences by simulating the fundamental state space model (SSM) $$x'(t) = Ax(t) + Bu(t), y(t) = Cx(t) + Du(t)$$, and showed that for appropriate choices of the state matrix $$A$$, this system could handle long-range dependencies mathematically and empirically. However, this method has prohibitive computation and memory requirements, rendering it infeasible as a general sequence modeling solution. We propose the Structured State Space sequence model (S4) based on a new parameterization for the SSM, and show that it can be computed much more efficiently than prior approaches while preserving their theoretical strengths. Our technique involves conditioning $$A$$ with a low-rank correction, allowing it to be diagonalized stably and reducing the SSM to the well-studied computation of a Cauchy kernel. S4 achieves strong empirical results across a diverse range of established benchmarks, …
Lixu Wang · Shichao Xu · Ruiqi Xu · Xiao Wang · Qi Zhu
As Artificial Intelligence as a Service gains popularity, protecting well-trained models as intellectual property is becoming increasingly important. There are two common types of protection methods: ownership verification and usage authorization. In this paper, we propose Non-Transferable Learning (NTL), a novel approach that captures the exclusive data representation in the learned model and restricts the model generalization ability to certain domains. This approach provides effective solutions to both model verification and authorization. Specifically: 1) For ownership verification, watermarking techniques are commonly used but are often vulnerable to sophisticated watermark removal methods. By comparison, our NTL-based ownership verification provides robust resistance to state-of-the-art watermark removal methods, as shown in extensive experiments with 6 removal approaches over the digits, CIFAR10 & STL10, and VisDA datasets. 2) For usage authorization, prior solutions focus on authorizing specific users to access the model, but authorized users can still apply the model to any data without restriction. Our NTL-based authorization approach instead provides data-centric protection, which we call applicability authorization, by significantly degrading the performance of the model on unauthorized data. Its effectiveness is also shown through experiments on aforementioned datasets.
Minghao Guo · Veronika Thost · Beichen Li · Payel Das · Jie Chen · Wojciech Matusik
The problem of molecular generation has received significant attention recently. Existing methods are typically based on deep neural networks and require training on large datasets with tens of thousands of samples. In practice, however, the size of class-specific chemical datasets is usually limited (e.g., dozens of samples) due to labor-intensive experimentation and data collection. Another major challenge is to generate only physically synthesizable molecules. This is a non-trivial task for neural network-based generative models since the relevant chemical knowledge can only be extracted and generalized from the limited training data. In this work, we propose a data-efficient generative model that can be learned from datasets with orders of magnitude smaller sizes than common benchmarks. At the heart of this method is a learnable graph grammar that generates molecules from a sequence of production rules. Without any human assistance, these production rules are automatically constructed from training data. Furthermore, additional chemical knowledge can be incorporated into the model by further grammar optimization. Our learned graph grammar yields state-of-the-art results on generating high-quality molecules for three monomer datasets that contain only ${\sim}20$ samples each. Our approach also achieves remarkable performance in a challenging polymer generation task with only 117 training samples and …
Meng Qu · Huiyu Cai · Jian Tang
This paper studies node classification in the inductive setting, i.e., aiming to learn a model on labeled training graphs and generalize it to infer node labels on unlabeled test graphs. This problem has been extensively studied with graph neural networks (GNNs) by learning effective node representations, as well as traditional structured prediction methods for modeling the structured output of node labels, e.g., conditional random fields (CRFs). In this paper, we present a new approach called the Structured Proxy Network (SPN), which combines the advantages of both worlds. SPN defines flexible potential functions of CRFs with GNNs. However, learning such a model is nontrivial as it involves optimizing a maximin game with high-cost inference. Inspired by the underlying connection between joint and marginal distributions defined by Markov networks, we propose to solve an approximate version of the optimization problem as a proxy, which yields a near-optimal solution, making learning more efficient. Extensive experiments on two settings show that our approach outperforms many competitive baselines.
Rachid Riad · Olivier Teboul · David Grangier · Neil Zeghidour
Convolutional neural networks typically contain several downsampling operators, such as strided convolutions or pooling layers, that progressively reduce the resolution of intermediate representations. This provides some shift-invariance while reducing the computational complexity of the whole architecture. A critical hyperparameter of such layers is their stride: the integer factor of downsampling. As strides are not differentiable, finding the best configuration either requires cross-validation or discrete optimization (e.g. architecture search), which rapidly become prohibitive as the search space grows exponentially with the number of downsampling layers. Hence, exploring this search space by gradient descent would allow finding better configurations at a lower computational cost. This work introduces DiffStride, the first downsampling layer with learnable strides. Our layer learns the size of a cropping mask in the Fourier domain, that effectively performs resizing in a differentiable way. Experiments on audio and image classification show the generality and effectiveness of our solution: we use DiffStride as a drop-in replacement to standard downsampling layers and outperform them. In particular, we show that introducing our layer into a ResNet-18 architecture allows keeping consistent high performance on CIFAR10, CIFAR100 and ImageNet even when training starts from poor random stride configurations. Moreover, formulating strides as learnable variables allows …
Marine Schimel · Ta-Chu Kao · Kristopher Jensen · Guillaume Hennequin
Understanding how neural dynamics give rise to behaviour is one of the most fundamental questions in systems neuroscience. To achieve this, a common approach is to record neural populations in behaving animals, and model these data as emanating from a latent dynamical system whose state trajectories can then be related back to behavioural observations via some form of decoding. As recordings are typically performed in localized circuits that form only a part of the wider implicated network, it is important to simultaneously learn the local dynamics and infer any unobserved external input that might drive them. Here, we introduce iLQR-VAE, a novel control-based approach to variational inference in nonlinear dynamical systems, capable of learning latent dynamics, initial conditions, and ongoing external inputs. As in recent deep learning approaches, our method is based on an input-driven sequential variational autoencoder (VAE). The main novelty lies in the use of the powerful iterative linear quadratic regulator algorithm (iLQR) in the recognition model. Optimization of the standard evidence lower-bound requires differentiating through iLQR solutions, which is made possible by recent advances in differentiable control. Importantly, having the recognition model be implicitly defined by the generative model greatly reduces the number of free parameters …
Asiri Wijesinghe · Qing Wang
We propose a new perspective on designing powerful Graph Neural Networks (GNNs). In a nutshell, this enables a general solution to inject structural properties of graphs into a message-passing aggregation scheme of GNNs. As a theoretical basis, we develop a new hierarchy of local isomorphism on neighborhood subgraphs. Then, we theoretically characterize how message-passing GNNs can be designed to be more expressive than the Weisfeiler Lehman test. To elaborate this characterization, we propose a novel neural model, called GraphSNN, and prove that this model is strictly more expressive than the Weisfeiler Lehman test in distinguishing graph structures. We empirically verify the strength of our model on different graph learning tasks. It is shown that our model consistently improves the state-of-the-art methods on the benchmark tasks without sacrificing computational simplicity and efficiency.
Yifei Wang · Jonathan Lacotte · Mert Pilanci
We prove that finding all globally optimal two-layer ReLU neural networks can be performed by solving a convex optimization program with cone constraints. Our analysis is novel, characterizes all optimal solutions, and does not leverage duality-based analysis which was recently used to lift neural network training into convex spaces. Given the set of solutions of our convex optimization program, we show how to construct exactly the entire set of optimal neural networks. We provide a detailed characterization of this optimal set and its invariant transformations. As additional consequences of our convex perspective, (i) we establish that Clarke stationary points found by stochastic gradient descent correspond to the global optimum of a subsampled convex problem (ii) we provide a polynomial-time algorithm for checking if a neural network is a global minimum of the training loss (iii) we provide an explicit construction of a continuous path between any neural network and the global minimum of its sublevel set and (iv) characterize the minimal size of the hidden layer so that the neural network optimization landscape has no spurious valleys. Overall, we provide a rich framework for studying the landscape of neural network training loss through convexity.
Shoufa Chen · Enze Xie · Chongjian GE · Runjian Chen · Ding Liang · Ping Luo
This paper presents a simple MLP-like architecture, CycleMLP, which is a versatile backbone for visual recognition and dense predictions. As compared to modern MLP architectures, e.g., MLP-Mixer, ResMLP, and gMLP, whose architectures are correlated to image size and thus are infeasible in object detection and segmentation, CycleMLP has two advantages. (1) It can cope with various image sizes. (2) It achieves linear computational complexity with respect to image size by using local windows. In contrast, previous MLPs have $O(N^2)$ computations due to fully spatial connections. We build a family of models which surpass existing MLPs and even state-of-the-art Transformer-based models, e.g. Swin Transformer, while using fewer parameters and FLOPs. We expand the MLP-like models’ applicability, making them a versatile backbone for dense prediction tasks. CycleMLP achieves competitive results on object detection, instance segmentation, and semantic segmentation. In particular, CycleMLP-Tiny outperforms Swin-Tiny by 1.3% mIoU on ADE20K dataset with fewer FLOPs. Moreover, CycleMLP also shows excellent zero-shot robustness on ImageNet-C dataset.
Chulhee Yun · Shashank Rajput · Suvrit Sra
In distributed learning, local SGD (also known as federated averaging) and its simple baseline minibatch SGD are widely studied optimization methods. Most existing analyses of these methods assume independent and unbiased gradient estimates obtained via with-replacement sampling. In contrast, we study shuffling-based variants: minibatch and local Random Reshuffling, which draw stochastic gradients without replacement and are thus closer to practice. For smooth functions satisfying the Polyak-Łojasiewicz condition, we obtain convergence bounds (in the large epoch regime) which show that these shuffling-based variants converge faster than their with-replacement counterparts. Moreover, we prove matching lower bounds showing that our convergence analysis is tight. Finally, we propose an algorithmic modification called synchronized shuffling that leads to convergence rates faster than our lower bounds in near-homogeneous settings.
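As a hedged, single-worker sketch of the sampling difference the analysis is about (the distributed minibatch and local variants add periodic model averaging on top; `grad_fn` is a hypothetical stochastic-gradient oracle, not from the paper):

```python
import numpy as np

def sgd_random_reshuffling(grad_fn, w, n, epochs, batch_size, lr):
    """Minibatch SGD with without-replacement sampling (Random Reshuffling):
    each epoch visits every sample exactly once, in a fresh random order."""
    rng = np.random.default_rng(0)
    for _ in range(epochs):
        perm = rng.permutation(n)                   # reshuffle once per epoch
        for start in range(0, n, batch_size):
            idx = perm[start:start + batch_size]
            w = w - lr * grad_fn(w, idx)            # gradient averaged over idx
    return w

def sgd_with_replacement(grad_fn, w, n, iters, batch_size, lr):
    """Baseline: every minibatch is drawn i.i.d. with replacement."""
    rng = np.random.default_rng(0)
    for _ in range(iters):
        idx = rng.integers(0, n, size=batch_size)
        w = w - lr * grad_fn(w, idx)
    return w
```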
Alex Rogozhnikov
Tensor computations underlie modern scientific computing and deep learning. A number of tensor frameworks emerged, varying in execution model, hardware support, memory management, model definition, etc. However, tensor operations in all frameworks follow the same paradigm. Recent neural network architectures demonstrate demand for higher expressiveness of tensor operations. The current paradigm is not suited to write readable, reliable, or easy-to-modify code for multidimensional tensor manipulations. Moreover, some commonly used operations do not provide sufficient checks and can break a tensor structure. These mistakes are elusive as no tools or tests can detect them. Independently, API discrepancies complicate code transfer between frameworks. We propose einops notation: a uniform and generic way to manipulate tensor structure that significantly improves code readability and flexibility by focusing on the structure of input and output tensors. We implement einops notation in a Python package that efficiently supports multiple widely used frameworks and provides a framework-independent minimalist API for tensor manipulations.
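For readers unfamiliar with the notation, a minimal illustration of the einops API (the package is the one the abstract describes; the example itself is not from the paper):

```python
# pip install einops
import numpy as np
from einops import rearrange, reduce

images = np.random.rand(32, 3, 64, 64)   # a batch of CHW images

# "flatten the spatial dims and move channels last", stated in one readable line
tokens = rearrange(images, 'b c h w -> b (h w) c')                # (32, 4096, 3)

# 2x2 average pooling written as a reduction over explicit sub-axes
pooled = reduce(images, 'b c (h h2) (w w2) -> b c h w', 'mean', h2=2, w2=2)  # (32, 3, 32, 32)

# the pattern doubles as a shape check: a wrong input rank fails loudly
# rearrange(images, 'b h w -> b (h w)')   # raises an error instead of silently broadcasting
```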
Huiqi Deng · Qihan Ren · Hao Zhang · Quanshi Zhang
This paper explores the bottleneck of feature representations of deep neural networks (DNNs), from the perspective of the complexity of interactions between input variables encoded in DNNs. To this end, we focus on the multi-order interaction between input variables, where the order represents the complexity of interactions. We discover that a DNN is more likely to encode both too simple and too complex interactions, but usually fails to learn interactions of intermediate complexity. Such a phenomenon is widely shared by different DNNs for different tasks. This phenomenon indicates a cognition gap between DNNs and humans, and we call it a representation bottleneck. We theoretically prove the underlying reason for the representation bottleneck. Furthermore, we propose losses to encourage/penalize the learning of interactions of specific complexities, and analyze the representation capacities of interactions of different complexities. The code is available at https://github.com/Nebularaid2000/bottleneck.
Zongze Wu · Yotam Nitzan · Eli Shechtman · Dani Lischinski
In this paper, we perform an in-depth study of the properties and applications of aligned generative models. We refer to two models as aligned if they share the same architecture, and one of them (the child) is obtained from the other (the parent) via fine-tuning to another domain, a common practice in transfer learning. Several works already utilize some basic properties of aligned StyleGAN models to perform image-to-image translation. Here, we perform the first detailed exploration of model alignment, also focusing on StyleGAN. First, we empirically analyze aligned models and provide answers to important questions regarding their nature. In particular, we find that the child model's latent spaces are semantically aligned with those of the parent, inheriting incredibly rich semantics, even for distant data domains such as human faces and churches. Second, equipped with this better understanding, we leverage aligned models to solve a diverse set of tasks. In addition to image translation, we demonstrate fully automatic cross-domain image morphing. We further show that zero-shot vision tasks may be performed in the child domain, while relying exclusively on supervision in the parent domain. We demonstrate qualitatively and quantitatively that our approach yields state-of-the-art results, while requiring only simple fine-tuning and inversion.
Kohei Miyaguchi · Takayuki Katsuki · Akira Koseki · Toshiya Iwamori
We are concerned with the problem of distributional prediction with incomplete features: The goal is to estimate the distribution of target variables given feature vectors with some of the elements missing. A typical approach to this problem is to perform missing-value imputation and regression, simultaneously or sequentially, which we call the generative approach. Another approach is to perform regression after appropriately encoding missing values into the feature, which we call the discriminative approach. In comparison, the generative approach is more robust to feature corruption while the discriminative approach is more favorable for maximizing prediction performance. In this study, we propose a hybrid method to take the best of both worlds. Our method utilizes the black-box variational inference framework so that it can be applied to a wide variety of modern machine learning models, including variational autoencoders. We also confirm the effectiveness of the proposed method empirically.
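A minimal sketch of what the discriminative-style handling of missing values can look like (a generic zero-impute-plus-mask encoding; this is background for the distinction drawn above, not the paper's hybrid method):

```python
import numpy as np

def encode_missing(x, mask):
    """Impute a constant for missing entries and append the observation mask,
    so a downstream regressor can condition on which features were observed.
    x: feature vector with NaN for missing entries; mask: True where observed."""
    x_filled = np.where(mask, x, 0.0)
    return np.concatenate([x_filled, mask.astype(float)])

x = np.array([1.2, np.nan, -0.7])
mask = ~np.isnan(x)
print(encode_missing(x, mask))   # [ 1.2  0.  -0.7  1.   0.   1. ]
```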
Divyam Madaan · Jaehong Yoon · Yuanchun Li · Yunxin Liu · Sung Ju Hwang
Continual learning (CL) aims to learn a sequence of tasks without forgetting the previously acquired knowledge. However, recent CL advances are restricted to supervised continual learning (SCL) scenarios. Consequently, they are not scalable to real-world applications where the data distribution is often biased and unannotated. In this work, we focus on unsupervised continual learning (UCL), where we learn the feature representations on an unlabelled sequence of tasks and show that reliance on annotated data is not necessary for continual learning. We conduct a systematic study analyzing the learned feature representations and show that unsupervised visual representations are surprisingly more robust to catastrophic forgetting, consistently achieve better performance, and generalize better to out-of-distribution tasks than SCL. Furthermore, through qualitative analysis of the learned representations, we find that UCL achieves a smoother loss landscape and learns meaningful feature representations. Additionally, we propose Lifelong Unsupervised Mixup (Lump), a simple yet effective technique that interpolates between the current task and previous tasks' instances to alleviate catastrophic forgetting for unsupervised representations.
Sebastian Flennerhag · Yannick Schroecker · Tom Zahavy · Hado van Hasselt · David Silver · Satinder Singh
Meta-learning empowers artificial intelligence to increase its efficiency by learning how to learn. Unlocking this potential involves overcoming a challenging meta-optimisation problem. We propose an algorithm that tackles this problem by letting the meta-learner teach itself. The algorithm first bootstraps a target from the meta-learner, then optimises the meta-learner by minimising the distance to that target under a chosen (pseudo-)metric. Focusing on meta-learning with gradients, we establish conditions that guarantee performance improvements and show that the metric can be used to control meta-optimisation. Meanwhile, the bootstrapping mechanism can extend the effective meta-learning horizon without requiring backpropagation through all updates. We achieve a new state of the art for model-free agents on the Atari ALE benchmark and demonstrate that it yields both performance and efficiency gains in multi-task meta-learning. Finally, we explore how bootstrapping opens up new possibilities and find that it can meta-learn efficient exploration in an epsilon-greedy Q-learning agent - without backpropagating through the update rule.
Ananya Kumar · Aditi Raghunathan · Robbie Jones · Tengyu Ma · Percy Liang
When transferring a pretrained model to a downstream task, two popular methods are full fine-tuning (updating all the model parameters) and linear probing (updating only the last linear layer---the "head"). It is well known that fine-tuning leads to better accuracy in-distribution (ID). However, in this paper, we find that fine-tuning can achieve worse accuracy than linear probing out-of-distribution (OOD) when the pretrained features are good and the distribution shift is large. On 10 distribution shift datasets (BREEDS-Living17, BREEDS-Entity30, DomainNet, CIFAR $\to$ STL, CIFAR-10.1, FMoW, ImageNetV2, ImageNet-R, ImageNet-A, ImageNet-Sketch), fine-tuning obtains on average 2% higher accuracy ID but 7% lower accuracy OOD than linear probing. We show theoretically that this tradeoff between ID and OOD accuracy arises even in a simple setting: fine-tuning overparameterized two-layer linear networks. We prove that the OOD error of fine-tuning is high when we initialize with a fixed or random head---this is because while fine-tuning learns the head, the lower layers of the neural network change simultaneously and distort the pretrained features. Our analysis suggests that the easy two-step strategy of linear probing then full fine-tuning (LP-FT), sometimes used as a fine-tuning heuristic, combines the benefits of both fine-tuning and linear probing. Empirically, LP-FT outperforms both …
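A hedged PyTorch-style sketch of the two-step LP-FT recipe described above (the wrapper class, optimizers, learning rates, and epoch counts are illustrative assumptions, not the paper's settings):

```python
import torch
from torch import nn

class Classifier(nn.Module):
    """A pretrained feature extractor plus a linear head (hypothetical wrapper)."""
    def __init__(self, backbone, feat_dim, num_classes):
        super().__init__()
        self.backbone = backbone
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        return self.head(self.backbone(x))

def lp_ft(model, loader, loss_fn, lp_epochs=5, ft_epochs=5):
    # Stage 1: linear probing -- freeze the backbone, train only the head.
    for p in model.backbone.parameters():
        p.requires_grad = False
    opt = torch.optim.SGD(model.head.parameters(), lr=1e-2)
    for _ in range(lp_epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

    # Stage 2: full fine-tuning, starting from the probed head, at a smaller lr.
    for p in model.backbone.parameters():
        p.requires_grad = True
    opt = torch.optim.SGD(model.parameters(), lr=1e-3)
    for _ in range(ft_epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model
```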
Anirudh Goyal · Aniket Didolkar · Alex Lamb · Kartikeya Badola · Nan Rosemary Ke · Nasim Rahaman · Jonathan Binas · Charles Blundell · Michael Mozer · Yoshua Bengio
Deep learning has seen a movement away from representing examples with a monolithic hidden state towards a richly structured state. For example, Transformers segment by position, and object-centric architectures decompose images into entities. In all these architectures, interactions between different elements are modeled via pairwise interactions: Transformers make use of self-attention to incorporate information from other positions and object-centric architectures make use of graph neural networks to model interactions among entities. We consider how to improve on pairwise interactions in terms of global coordination and a coherent, integrated representation that can be used for downstream tasks. In cognitive science, a global workspace architecture has been proposed in which functionally specialized components share information through a common, bandwidth-limited communication channel. We explore the use of such a communication channel in the context of deep learning for modeling the structure of complex environments. The proposed method includes a shared workspace through which communication among different specialist modules takes place but due to limits on the communication bandwidth, specialist modules must compete for access. We show that capacity limitations have a rational basis in that (1) they encourage specialization and compositionality and (2) they facilitate the synchronization of otherwise independent specialists.
S Chandra Mouli · Bruno Ribeiro
Generalizing from observed to new related environments (out-of-distribution) is central to the reliability of classifiers. However, most classifiers fail to predict label $Y$ from input $X$ when the change in environment is due to a (stochastic) input transformation $T^\text{te} \circ X'$ not observed in training, as in training we observe $T^\text{tr} \circ X'$, where $X'$ is a hidden variable. This work argues that when the transformations in train $T^\text{tr}$ and test $T^\text{te}$ are (arbitrary) symmetry transformations induced by a collection of $m$ known equivalence relations, the task of finding a robust OOD classifier can be defined as finding the simplest causal model that defines a causal connection between the target labels and the symmetry transformations that are associated with label changes. We then propose a new learning paradigm, asymmetry learning, that identifies which symmetries the classifier must break in order to correctly predict $Y$ in both train and test. Asymmetry learning performs a causal model search that, under certain identifiability conditions, finds classifiers that perform equally well in-distribution and out-of-distribution. Finally, we show how to learn counterfactually-invariant representations with asymmetry learning in two physics tasks.
Huaxiu Yao · Linjun Zhang · Chelsea Finn
Meta-learning enables algorithms to quickly learn a newly encountered task with just a few labeled examples by transferring previously learned knowledge. However, the bottleneck of current meta-learning algorithms is the requirement of a large number of meta-training tasks, which may not be accessible in real-world scenarios. To address the challenge that available tasks may not densely sample the space of tasks, we propose to augment the task set through interpolation. By meta-learning with task interpolation (MLTI), our approach effectively generates additional tasks by randomly sampling a pair of tasks and interpolating the corresponding features and labels. Under both gradient-based and metric-based meta-learning settings, our theoretical analysis shows MLTI corresponds to a data-adaptive meta-regularization and further improves the generalization. Empirically, in our experiments on eight datasets from diverse domains including image recognition, pose prediction, molecule property prediction, and medical image classification, we find that the proposed general MLTI framework is compatible with representative meta-learning algorithms and consistently outperforms other state-of-the-art strategies.
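As a rough sketch of the interpolation idea (mixup-style mixing of a sampled task pair; the paper's gradient-based and metric-based instantiations differ in exactly what gets mixed, so treat this only as the core mechanism):

```python
import numpy as np

def interpolate_tasks(task_a, task_b, alpha=0.5, rng=None):
    """Create an extra training task by mixing two sampled tasks.

    task_a, task_b: tuples (features, one_hot_labels) with matching shapes.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)                 # mixing coefficient in (0, 1)
    (xa, ya), (xb, yb) = task_a, task_b
    x_new = lam * xa + (1.0 - lam) * xb
    y_new = lam * ya + (1.0 - lam) * yb
    return x_new, y_new
```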
Olivia Wiles · Sven Gowal · Florian Stimberg · Sylvestre-Alvise Rebuffi · Ira Ktena · Krishnamurthy Dvijotham · Ali Taylan Cemgil
Robustness to distribution shifts is critical for deploying machine learning models in the real world. Despite this necessity, there has been little work in defining the underlying mechanisms that cause these shifts and evaluating the robustness of algorithms across multiple, different distribution shifts. To this end, we introduce a framework that enables fine-grained analysis of various distribution shifts. We provide a holistic analysis of current state-of-the-art methods by evaluating 19 distinct methods grouped into five categories across both synthetic and real-world datasets. Overall, we train more than 85K models. Our experimental framework can be easily extended to include new methods, shifts, and datasets. We find, unlike previous work (Gulrajani & Lopez-Paz, 2021), that progress has been made over a standard ERM baseline; in particular, pretraining and augmentations (learned or heuristic) offer large gains in many cases. However, the best methods are not consistent over different datasets and shifts. We will open source our experimental framework, allowing future work to evaluate new methods over multiple shifts to obtain a more complete picture of a method's effectiveness. Code is available at github.com/deepmind/distributionshiftframework.
Shuxiao Chen · Koby Crammer · Hangfeng He · Dan Roth · Weijie J Su
In this paper, we introduce Target-Aware Weighted Training (TAWT), a weighted training algorithm for cross-task learning based on minimizing a representation-based task distance between the source and target tasks. We show that TAWT is easy to implement, is computationally efficient, requires little hyperparameter tuning, and enjoys non-asymptotic learning-theoretic guarantees. The effectiveness of TAWT is corroborated through extensive experiments with BERT on four sequence tagging tasks in natural language processing (NLP), including part-of-speech (PoS) tagging, chunking, predicate detection, and named entity recognition (NER). As a byproduct, the proposed representation-based task distance allows one to reason in a theoretically principled way about several critical aspects of cross-task learning, such as the choice of the source data and the impact of fine-tuning.
António Farinhas · Wilker Aziz · Vlad Niculae · Andre Martins
Neural networks and other machine learning models compute continuous representations, while humans communicate mostly through discrete symbols. Reconciling these two forms of communication is desirable for generating human-readable interpretations or learning discrete latent variable models, while maintaining end-to-end differentiability. Some existing approaches (such as the Gumbel-Softmax transformation) build continuous relaxations that are discrete approximations in the zero-temperature limit, while others (such as sparsemax transformations and the Hard Concrete distribution) produce discrete/continuous hybrids. In this paper, we build rigorous theoretical foundations for these hybrids, which we call "mixed random variables." Our starting point is a new "direct sum" base measure defined on the face lattice of the probability simplex. From this measure, we introduce new entropy and Kullback-Leibler divergence functions that subsume the discrete and differential cases and have interpretations in terms of code optimality. Our framework suggests two strategies for representing and sampling mixed random variables, an extrinsic ("sample-and-project") and an intrinsic one (based on face stratification). We experiment with both approaches on an emergent communication benchmark and on modeling MNIST and Fashion-MNIST data with variational auto-encoders with mixed latent variables.
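For reference, the Gumbel-Softmax relaxation mentioned above as one of the existing approaches can be sketched as follows (this illustrates the baseline the paper contrasts with, not the mixed-random-variable framework itself):

```python
import numpy as np

def gumbel_softmax(logits, tau=0.5, rng=None):
    """Continuous relaxation of a categorical sample over a 1-D logits vector;
    it approaches a discrete one-hot sample as tau -> 0."""
    rng = rng or np.random.default_rng()
    u = rng.uniform(low=1e-12, high=1.0, size=logits.shape)
    g = -np.log(-np.log(u))                 # Gumbel(0, 1) noise
    y = (logits + g) / tau
    y = np.exp(y - y.max())                 # numerically stable softmax
    return y / y.sum()
```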
Omri Puny · Matan Atzmon · Edward Smith · Ishan Misra · Aditya Grover · Heli Ben-Hamu · Yaron Lipman
Many machine learning tasks involve learning functions that are known to be invariant or equivariant to certain symmetries of the input data. However, it is often challenging to design neural network architectures that respect these symmetries while being expressive and computationally efficient. For example, Euclidean motion invariant/equivariant graph or point cloud neural networks. We introduce Frame Averaging (FA), a highly general purpose and systematic framework for adapting known (backbone) architectures to become invariant or equivariant to new symmetry types. Our framework builds on the well known group averaging operator that guarantees invariance or equivariance but is intractable. In contrast, we observe that for many important classes of symmetries, this operator can be replaced with an averaging operator over a small subset of the group elements, called a frame. We show that averaging over a frame guarantees exact invariance or equivariance while often being much simpler to compute than averaging over the entire group. Furthermore, we prove that FA-based models have maximal expressive power in a broad setting and in general preserve the expressive power of their backbone architectures. Using frame averaging, we propose a new class of universal Graph Neural Networks (GNNs), universal Euclidean motion invariant point cloud networks, and …
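As a hedged sketch of the operator the abstract refers to: for a group $G$ acting on inputs, full group averaging of a backbone $\phi$,

$$\phi_G(x)=\frac{1}{|G|}\sum_{g\in G}\rho(g)\,\phi\!\left(g^{-1}\cdot x\right),$$

is invariant or equivariant by construction but intractable for large or infinite groups; frame averaging takes the same average only over a small input-dependent subset $\mathcal F(x)\subset G$ (the frame), and when the frame transforms consistently with the group action the averaged model inherits the same exact symmetry at a fraction of the cost. The precise conditions and the construction of good frames are the subject of the paper.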
Sabri Eyuboglu · Maya Varma · Khaled Saab · Jean-Benoit Delbrouck · Christopher Lee-Messer · Jared Dunnmon · James Y Zou · Christopher Re
Machine learning models that achieve high overall accuracy often make systematic errors on important subsets (or slices) of data. Identifying underperforming slices is particularly challenging when working with high-dimensional inputs (e.g. images, audio), where important slices are often unlabeled. In order to address this issue, recent studies have proposed automated slice discovery methods (SDMs), which leverage learned model representations to mine input data for slices on which a model performs poorly. To be useful to a practitioner, these methods must identify slices that are both underperforming and coherent (i.e. united by a human-understandable concept). However, no quantitative evaluation framework currently exists for rigorously assessing SDMs with respect to these criteria. Additionally, prior qualitative evaluations have shown that SDMs often identify slices that are incoherent. In this work, we address these challenges by first designing a principled evaluation framework that enables a quantitative comparison of SDMs across 1,235 slice discovery settings in three input domains (natural images, medical images, and time-series data). Then, motivated by the recent development of powerful cross-modal representation learning approaches, we present Domino, an SDM that leverages cross-modal embeddings and a novel error-aware mixture model to discover and describe coherent slices. We find that Domino accurately identifies 36% …
Shiori Sagawa · Pang Wei Koh · Tony Lee · Irena Gao · Sang Michael Xie · Kendrick Shen · Ananya Kumar · Weihua Hu · Michihiro Yasunaga · Henrik Marklund · Sara Beery · Etienne David · Ian Stavness · Wei Guo · Jure Leskovec · Kate Saenko · Tatsunori Hashimoto · Sergey Levine · Chelsea Finn · Percy Liang
Machine learning systems deployed in the wild are often trained on a source distribution but deployed on a different target distribution. Unlabeled data can be a powerful point of leverage for mitigating these distribution shifts, as it is frequently much more available than labeled data and can often be obtained from distributions beyond the source distribution as well. However, existing distribution shift benchmarks with unlabeled data do not reflect the breadth of scenarios that arise in real-world applications. In this work, we present the WILDS 2.0 update, which extends 8 of the 10 datasets in the WILDS benchmark of distribution shifts to include curated unlabeled data that would be realistically obtainable in deployment. These datasets span a wide range of applications (from histology to wildlife conservation), tasks (classification, regression, and detection), and modalities (photos, satellite images, microscope slides, text, molecular graphs). The update maintains consistency with the original WILDS benchmark by using identical labeled training, validation, and test sets, as well as identical evaluation metrics. We systematically benchmark state-of-the-art methods that use unlabeled data, including domain-invariant, self-training, and self-supervised methods, and show that their success on WILDS is limited. To facilitate method development, we provide an open-source package that automates …
Qing Jin · Jian Ren · Richard Zhuang · Sumant Hanumante · Zhengang Li · Zhiyu Chen · Yanzhi Wang · Kaiyuan Yang · Sergey Tulyakov
Neural network quantization is a promising compression technique to reduce memory footprint and save energy consumption, potentially leading to real-time inference. However, there is a performance gap between quantized and full-precision models. To reduce it, existing quantization approaches require high-precision INT32 or full-precision multiplication during inference for scaling or dequantization. This introduces a noticeable cost in terms of memory, speed, and required energy. To tackle these issues, we present F8Net, a novel quantization framework consisting of only fixed-point 8-bit multiplication. To derive our method, we first discuss the advantages of fixed-point multiplication with different formats of fixed-point numbers and study the statistical behavior of the associated fixed-point numbers. Second, based on the statistical and algorithmic analysis, we apply different fixed-point formats for weights and activations of different layers. We introduce a novel algorithm to automatically determine the right format for each layer during training. Third, we analyze a previous quantization algorithm—parameterized clipping activation (PACT)—and reformulate it using fixed-point arithmetic. Finally, we unify the recently proposed method for quantization fine-tuning and our fixed-point approach to show the potential of our method. We verify F8Net on ImageNet for MobileNet V1/V2 and ResNet18/50. Our approach achieves comparable and better performance, when compared not …
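As a generic illustration of what "only fixed-point multiplication" means (arbitrary bit widths and formats chosen for the example; this is not F8Net's format-selection algorithm):

```python
import numpy as np

def to_fixed(x, frac_bits):
    """Quantize a float to a signed 8-bit fixed-point value with `frac_bits`
    fractional bits (illustrative only)."""
    scaled = np.round(x * (1 << frac_bits))
    return int(np.clip(scaled, -128, 127))

def fixed_mul(a_fx, b_fx, frac_a, frac_b, frac_out):
    """Multiply two fixed-point numbers; rescaling is a pure bit shift, so no
    float or high-precision INT32 scaling factor is needed
    (assumes frac_a + frac_b >= frac_out)."""
    return (a_fx * b_fx) >> (frac_a + frac_b - frac_out)

a = to_fixed(0.71, frac_bits=6)          # a weight-like value
b = to_fixed(1.30, frac_bits=5)          # an activation-like value
c = fixed_mul(a, b, 6, 5, frac_out=5)
print(c / (1 << 5))                      # ~0.91, i.e. 0.71 * 1.30 up to quantization error
```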
Vadim Popov · Ivan Vovk · Vladimir Gogoryan · Tasnima Sadekova · Mikhail Kudinov · Jiansheng Wei
Voice conversion is a common speech synthesis task which can be solved in different ways depending on a particular real-world scenario. The most challenging one often referred to as one-shot many-to-many voice conversion consists in copying target voice from only one reference utterance in the most general case when both source and target speakers do not belong to the training dataset. We present a scalable high-quality solution based on diffusion probabilistic modeling and demonstrate its superior quality compared to state-of-the-art one-shot voice conversion approaches. Moreover, focusing on real-time applications, we investigate general principles which can make diffusion models faster while keeping synthesis quality at a high level. As a result, we develop a novel Stochastic Differential Equations solver suitable for various diffusion model types and generative tasks as shown through empirical studies and justify it by theoretical analysis.
Fan Bao · Chongxuan Li · Jun Zhu · Bo Zhang
Diffusion probabilistic models (DPMs) represent a class of powerful generative models. Despite their success, the inference of DPMs is expensive since it generally needs to iterate over thousands of timesteps. A key problem in the inference is to estimate the variance in each timestep of the reverse process. In this work, we present a surprising result that both the optimal reverse variance and the corresponding optimal KL divergence of a DPM have analytic forms w.r.t. its score function. Building upon it, we propose \textit{Analytic-DPM}, a training-free inference framework that estimates the analytic forms of the variance and KL divergence using the Monte Carlo method and a pretrained score-based model. Further, to correct the potential bias caused by the score-based model, we derive both lower and upper bounds of the optimal variance and clip the estimate for a better result. Empirically, our analytic-DPM improves the log-likelihood of various DPMs, produces high-quality samples, and meanwhile enjoys a $20\times$ to $80\times$ speed up.
Evan Hernandez · Sarah Schwettmann · David Bau · Teona Bagashvili · Antonio Torralba · Jacob Andreas
Some neurons in deep networks specialize in recognizing highly specific perceptual, structural, or semantic features of inputs. In computer vision, techniques exist for identifying neurons that respond to individual concept categories like colors, textures, and object classes. But these techniques are limited in scope, labeling only a small subset of neurons and behaviors in any network. Is a richer characterization of neuron-level computation possible? We introduce a procedure (called MILAN, for mutual information-guided linguistic annotation of neurons) that automatically labels neurons with open-ended, compositional, natural language descriptions. Given a neuron, MILAN generates a description by searching for a natural language string that maximizes pointwise mutual information with the image regions in which the neuron is active. MILAN produces fine-grained descriptions that capture categorical, relational, and logical structure in learned features. These descriptions obtain high agreement with human-generated feature descriptions across a diverse set of model architectures and tasks, and can aid in understanding and controlling learned models. We highlight three applications of natural language neuron descriptions. First, we use MILAN for analysis, characterizing the distribution and importance of neurons selective for attribute, category, and relational information in vision models. Second, we use MILAN for auditing, surfacing neurons sensitive to human …
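In symbols (a hedged paraphrase of the search objective described in the abstract, with $E$ the exemplar image regions in which the neuron is active and $d$ a candidate description):

$$d^{\star}=\arg\max_{d}\ \mathrm{PMI}(d;E)=\arg\max_{d}\ \big[\log p(d\mid E)-\log p(d)\big],$$

so a good description is one that is much more likely given the neuron's exemplars than it is a priori.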
Shengjia Zhao · Abhishek Sinha · Yutong He · Aidan Perreault · Jiaming Song · Stefano Ermon
Measuring the discrepancy between two probability distributions is a fundamental problem in machine learning and statistics. We propose a new class of discrepancies based on the optimal loss for a decision task -- two distributions are different if the optimal decision loss is higher on their mixture than on each individual distribution. By suitably choosing the decision task, this generalizes the Jensen-Shannon divergence and the maximum mean discrepancy family. We apply our approach to two-sample tests, and on various benchmarks, we achieve superior test power compared to competing methods. In addition, a modeler can directly specify their preferences when comparing distributions through the decision loss. We apply this property to understanding the effects of climate change on different social and economic activities, evaluating sample quality, and selecting features targeting different decision tasks.
Jason Wei · Maarten Bosma · Vincent Zhao · Kelvin Guu · Wei Yu · Brian Lester · Nan Du · Andrew Dai · Quoc V Le
This paper explores a simple method for improving the zero-shot learning abilities of language models. We show that instruction tuning—finetuning language models on a collection of datasets described via instructions—substantially improves zero-shot performance on unseen tasks. We take a 137B parameter pretrained language model and instruction tune it on over 60 NLP datasets verbalized via natural language instruction templates. We evaluate this instruction-tuned model, which we call FLAN, on unseen task types. FLAN substantially improves the performance of its unmodified counterpart and surpasses zero-shot 175B GPT-3 on 20 of 25 datasets that we evaluate. FLAN even outperforms few-shot GPT-3 by a large margin on ANLI, RTE, BoolQ, AI2-ARC, OpenbookQA, and StoryCloze. Ablation studies reveal that number of finetuning datasets, model scale, and natural language instructions are key to the success of instruction tuning.
Bo Wan · Wenjuan Han · Zilong Zheng · Tinne Tuytelaars
We introduce a new task, unsupervised vision-language (VL) grammar induction. Given an image-caption pair, the goal is to extract a shared hierarchical structure for both image and language simultaneously. We argue that such structured output, grounded in both modalities, is a clear step towards the high-level understanding of multimodal information. Besides challenges existing in conventional visually grounded grammar induction tasks, VL grammar induction requires a model to capture contextual semantics and perform a fine-grained alignment. To address these challenges, we propose a novel method, CLIORA, which constructs a shared vision-language constituency tree structure with context-dependent semantics for all possible phrases in different levels of the tree. It computes a matching score between each constituent and image region, trained via contrastive learning. It integrates two levels of fusion, namely at feature-level and at score-level, so as to allow fine-grained alignment. We introduce a new evaluation metric for VL grammar induction, CCRA, and show a 3.3% improvement over a strong baseline on Flickr30k Entities. We also evaluate our model via two derived tasks, i.e., language grammar induction and phrase grounding, and improve over the state-of-the-art for both.
Xuechen Li · Florian Tramer · Percy Liang · Tatsunori Hashimoto
Differentially Private (DP) learning has seen limited success for building large deep learning models of text, and straightforward attempts at applying Differentially Private Stochastic Gradient Descent (DP-SGD) to NLP tasks have resulted in large performance drops and high computational overhead. We show that this performance drop can be mitigated with (1) the use of large pretrained language models; (2) non-standard hyperparameters that suit DP optimization; and (3) fine-tuning objectives which are aligned with the pretraining procedure. With the above, we obtain NLP models that outperform state-of-the-art DP-trained models under the same privacy budget and strong non-private baselines---by directly fine-tuning pretrained models with DP optimization on moderately-sized corpora. To address the computational challenge of running DP-SGD with large Transformers, we propose a memory saving technique that allows clipping in DP-SGD to run without instantiating per-example gradients for any linear layer in the model. The technique enables privately training Transformers with almost the same memory cost as non-private training at a modest run-time overhead. Contrary to conventional wisdom that DP optimization fails at learning high-dimensional models (due to noise that scales with dimension), empirical results reveal that private learning with pretrained language models tends to not suffer from dimension-dependent performance degradation. Code to reproduce results …
Pingchuan Ma · Tao Du · Joshua B Tenenbaum · Wojciech Matusik · Chuang Gan
This work considers identifying parameters characterizing a physical system's dynamic motion directly from a video whose rendering configurations are inaccessible. Existing solutions require massive training data or lack generalizability to unknown rendering configurations. We propose a novel approach that marries domain randomization and differentiable rendering gradients to address this problem. Our core idea is to train a rendering-invariant state-prediction (RISP) network that transforms image differences into state differences independent of rendering configurations, e.g., lighting, shadows, or material reflectance. To train this predictor, we formulate a new loss on rendering variances using gradients from differentiable rendering. Moreover, we present an efficient, second-order method to compute the gradients of this loss, allowing it to be integrated seamlessly into modern deep learning frameworks. We evaluate our method in rigid-body and deformable-body simulation environments using four tasks: state estimation, system identification, imitation learning, and visuomotor control. We further demonstrate the efficacy of our approach on a real-world example: inferring the state and action sequences of a quadrotor from a video of its motion sequences. Compared with existing methods, our approach achieves significantly lower reconstruction errors and has better generalizability among unknown rendering configurations.
Minkai Xu · Lantao Yu · Yang Song · Chence Shi · Stefano Ermon · Jian Tang
Predicting molecular conformations from molecular graphs is a fundamental problem in cheminformatics and drug discovery. Recently, significant progress has been achieved with machine learning approaches, especially with deep generative models. Inspired by the diffusion process in classical non-equilibrium thermodynamics where heated particles will diffuse from original states to a noise distribution, in this paper, we propose a novel generative model named GeoDiff for molecular conformation prediction. GeoDiff treats each atom as a particle and learns to directly reverse the diffusion process (i.e., transforming from a noise distribution to stable conformations) as a Markov chain. Modeling such a generation process is however very challenging as the likelihood of conformations should be roto-translational invariant. We theoretically show that Markov chains evolving with equivariant Markov kernels can induce an invariant distribution by design, and further propose building blocks for the Markov kernels to preserve the desirable equivariance property. The whole framework can be efficiently trained in an end-to-end fashion by optimizing a weighted variational lower bound to the (conditional) likelihood. Experiments on multiple benchmarks show that GeoDiff is superior or comparable to existing state-of-the-art approaches, especially on large molecules.
Hangbo Bao · Li Dong · Songhao Piao · Furu Wei
We introduce a self-supervised vision representation model BEiT, which stands for Bidirectional Encoder representation from Image Transformers. Following BERT developed in the natural language processing area, we propose a masked image modeling task to pretrain vision Transformers. Specifically, each image has two views in our pre-training, i.e., image patches (such as 16 x 16 pixels), and visual tokens (i.e., discrete tokens). We first "tokenize" the original image into visual tokens. Then we randomly mask some image patches and feed them into the backbone Transformer. The pre-training objective is to recover the original visual tokens based on the corrupted image patches. After pre-training BEiT, we directly fine-tune the model parameters on downstream tasks by appending task layers upon the pretrained encoder. Experimental results on image classification and semantic segmentation show that our model achieves competitive results with previous pre-training methods.
Liu Shi Zhan · Hang Yu · Cong Liao · Jianguo Li · Weiyao Lin · Alex Liu
Accurate prediction of the future given the past based on time series data is of paramount importance, since it opens the door for decision making and risk management ahead of time. In practice, the challenge is to build a flexible but parsimonious model that can capture a wide range of temporal dependencies. In this paper, we propose Pyraformer by exploring the multiresolution representation of the time series. Specifically, we introduce the pyramidal attention module (PAM) in which the inter-scale tree structure summarizes features at different resolutions and the intra-scale neighboring connections model the temporal dependencies of different ranges. Under mild conditions, the maximum length of the signal traversing path in Pyraformer is a constant (i.e., $\mathcal O(1)$) with regard to the sequence length $L$, while its time and space complexity scale linearly with $L$. Extensive numerical results show that Pyraformer typically achieves the highest prediction accuracy in both single-step and long-range forecasting tasks with the least amount of time and memory consumption, especially when the sequence is long.
Shuming Kong · Yanyan Shen · Linpeng Huang
The performance of supervised learning methods easily suffers from the training bias issue caused by train-test distribution mismatch or label noise. Influence function is a technique that estimates the impact of a training sample on the model’s predictions. Recent studies on \emph{data resampling} have employed influence functions to identify \emph{harmful} training samples that will degrade the model's test performance. They have shown that discarding or downweighting the identified harmful training samples is an effective way to resolve training biases. In this work, we move one step forward and propose an influence-based relabeling framework named RDIA for reusing harmful training samples toward better model performance. To achieve this, we use influence functions to estimate how relabeling a training sample would affect the model's test performance and further develop a novel relabeling function R. We theoretically prove that applying R to relabel harmful training samples allows the model to achieve lower test loss than simply discarding them for any classification tasks using cross-entropy loss. Extensive experiments on ten real-world datasets demonstrate that RDIA outperforms the state-of-the-art data resampling methods and improves the model's robustness against label noise.
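For background, the classical influence-function approximation this line of work builds on (Koh & Liang, 2017) estimates the effect of up-weighting a training point $z$ on the loss at a test point $z_{\text{test}}$ as

$$\mathcal{I}(z, z_{\text{test}})\approx-\nabla_\theta L(z_{\text{test}},\hat\theta)^{\top}H_{\hat\theta}^{-1}\,\nabla_\theta L(z,\hat\theta),$$

where $H_{\hat\theta}$ is the Hessian of the training loss at the fitted parameters; estimates of this kind are what the relabeling criterion above is built from.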
|
# Calculating the components of angular momentum of a rigid body
You have a rigid body with 1 fixed point in space (the origin).
It's self-explanatory how to get the following equation for the angular momentum:
$\vec L = \sum_n m_n\vec r_n\times\vec v_n$
where the sum runs over all point masses of the body, indexed by "$n$".
This can be transformed into:
$\vec L = \sum_n m_n\left(r_n^2\,\vec\omega-(\vec\omega\cdot\vec r_n)\,\vec r_n\right)$
Now, out of this, I have to get the following:
For component "$i$" of this equation, you get:
$L_i = \sum_n m_n(r_n^2\omega_i-x_{ni}\sum_j\omega_jx_{nj})$
I understand that for component "$i$", the scalars $m_n$ and $r_n^2$ are the same as for the other components, I also understand that for component "$i$" I need to take the $\omega_i$ component.
But what is meant by everything that follows after that? What does the "$x$" indicate, for example? Does the "$j$" index the other components?
• How is $L_i$ defined in relation to $\vec{L}$? Is this a transformation from the pivot center, to the center of mass? A diagram would help here in order to define the quantities. – John Alexiou Aug 10 '14 at 15:57
While typing this out it clicked for me and I figured it out.
Might as well type the full explanation after typing the question:
Starting from left to right for component "$i$":
• The variables $m_n$ and $r_n^2$ are scalars and so they are the same as for the other components.
• The $\omega_i$ component is self-explanatory as it has the same direction as $L_i$.
• The $x_{ni}$ is to be interpreted as follows: "$x$" is a coordinate of the position vector (it does not by itself mean the x-component, which had me confused earlier), "$n$" indicates it belongs to mass point "$n$", and "$i$" indicates it is the "$i$"-th component.
• The $\sum_j$ indicates a sum over ALL components (e.g. if you have $x$, $y$ and $z$ components and $i$ is $x$, then $j$ still runs over $x$, $y$, and $z$).
• The variables inside that summation are to be interpreted in the same way as above; a compact worked expansion is sketched below.
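For completeness, a hedged sketch of the component expansion using the notation above (this is just the standard inertia-tensor identity): writing $\vec r_n=(x_{n1},x_{n2},x_{n3})$ so that $\vec\omega\cdot\vec r_n=\sum_j\omega_j x_{nj}$, the $i$-th component of $\vec L = \sum_n m_n\left(r_n^2\,\vec\omega-(\vec\omega\cdot\vec r_n)\,\vec r_n\right)$ is

$$L_i=\sum_n m_n\Big(r_n^2\,\omega_i-x_{ni}\sum_j\omega_j x_{nj}\Big)=\sum_j\Big[\sum_n m_n\big(r_n^2\,\delta_{ij}-x_{ni}x_{nj}\big)\Big]\omega_j\equiv\sum_j I_{ij}\,\omega_j,$$

i.e. the bracketed quantity is the inertia tensor $I_{ij}$ acting on $\vec\omega$.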
|
## Probability Seminar: Xun Yu Zhou
• Date: 04/25/2012
• Time: 15:00
Lecturer(s):
Xun Yu Zhou
Location:
University of British Columbia
Topic:
Optimal stopping under probability distortion
Description:
Abstract
We formulate an optimal stopping problem for a geometric Brownian motion where the probability scale is distorted by a general non-linear function. The problem is inherently time inconsistent due to the Choquet integration involved. We develop a new approach, based on a reformulation of the problem where one optimally chooses the probability distribution or quantile function of the stopped state. An optimal stopping time can then be recovered from the obtained distribution/quantile function, either in a straightforward way for several important cases or in general via the Skorokhod embedding. This approach enables us to solve the problem in a fairly general manner with different shapes of the payoff and probability distortion functions. We also discuss economic interpretations of the results. In particular, we justify several liquidation strategies widely adopted in stock trading, including those of “buy and hold”, “cut loss or take profit”, “cut loss and let profit run”, and “sell on a percentage of historical high”.
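For readers unfamiliar with the setup, a hedged sketch of what "probability distortion" usually means here (standard background, not taken from the talk): given a distortion function $w:[0,1]\to[0,1]$ with $w(0)=0$ and $w(1)=1$, the distorted expectation of a nonnegative payoff $X$ is the Choquet integral

$$\mathbb{E}_w[X]=\int_0^{\infty} w\big(\mathbb{P}(X>x)\big)\,\mathrm{d}x,$$

which reduces to the ordinary expectation when $w$ is the identity; the nonlinearity of $w$ is what makes the stopping problem time inconsistent, as noted in the abstract.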
Other Information:
Location: WMAX 110
|
# American Institute of Mathematical Sciences
September 2010, 9(5): 1391-1397. doi: 10.3934/cpaa.2010.9.1391
## Planar ACL-homeomorphisms : Critical points of their components
1 Dipartimento di Matematica e Appl. “R. Caccioppoli”, Via Cintia - Monte S. Angelo, 80126 Napoli, Italy
Received September 2009 Revised October 2009 Published May 2010
We study planar homeomorphisms $f$ of $\Omega\subset \mathbb{R}^2$ onto $\Omega' \subset \mathbb{R}^2$, $f=(u,v)$, which are absolutely continuous on lines parallel to the axes (ACL) together with their inverse $f^{-1}$. The main result is that $u$ and $v$ have almost everywhere the same critical points. This generalizes a previous result ([6]) concerning bi-Sobolev mappings. Moreover, we construct an example of a planar ACL-homeomorphism not belonging to the Sobolev class $W_{loc}^{1,1}$.
Citation: Gioconda Moscariello, Antonia Passarelli di Napoli, Carlo Sbordone. Planar ACL-homeomorphisms : Critical points of their components. Communications on Pure & Applied Analysis, 2010, 9 (5) : 1391-1397. doi: 10.3934/cpaa.2010.9.1391
|
# Why would a government support organizations with completely opposing goals?
The SCP (Secure, Contain, Protect) Foundation is an organization devoted to the containment of SCPs: dangerous supernatural anomalies or individuals that pose a threat to humanity's existence. It functions as a privately owned entity, receiving funding and support from wealthy donors and governments around the world. However, it operates without governmental oversight or restriction, allowing it great freedom without accountability.
In recent years, another group has risen to prominence, referred to as the GOC (Global Occult Coalition). While similar to the SCP Foundation in its decree to protect humanity, it has fundamentally different methods and goals. The entity is devoted to the destruction and termination of all supernatural anomalies and individuals, regardless of threat level or necessity. This stems from a "just in case" mentality, reasoning that these anomalies can never be fully understood and as such will always pose a hazard. Unlike the Foundation, it is subject to governmental oversight by the United Nations, making it far more organized but restricted in its operations and procedures.
These two groups have often clashed with each other due to their opposing outlooks and goals, which has occasionally led to one stepping on the other's turf. This has resulted in violent confrontations at times, which can make a situation even worse. Given the situation, it would make the most sense for governments such as the U.S. to pick one over the other.
What would make governments continue to support groups with competing goals when the stakes are so high?
• CIA favors one, NSA another, DoD yet another and the Security Service just a fourth. So you are missing two underground organizations already. – Adrian Colomitchi Mar 22 at 12:41
• Why does the US have the EPA on one hand, and on the other all sorts of support for fossil fuel industries? – jamesqf Mar 22 at 18:10
• It feels a little strange to me that a question is being asked on WB about a well-established fiction that has answered this very question a number of times. I don’t really know if it’s better suited to another SE site (maybe Scifi?), but I’m fairly sure it doesn’t belong here. – Vermilingua Mar 23 at 0:14
• @jamesqf The SCP Foundation is an offshoot of the /x/ creepypasta scene. There is no established canon. – Vermilingua Mar 23 at 4:47
• In the real world, Vladislav Surkov has done exactly that: short version long version – Aaron F Mar 23 at 9:06
A Government Is Not A Single Unified Entity, Particularly Elected Governments
Government priorities change over time, often swinging back and forth between different ends of a particular country's political range. Not every policy put in place under one administration is repealed under the next, even if that policy would not have been passed by the new administration.
Additionally, at any given time there may be a faction opposed to a policy, a faction supporting the policy, and factions indifferent to it. If around 30% of legislators want policy A but are opposed to policy B, 30% could be persuaded to vote for either or both A and B in exchange for votes on unrelated policies, and 30% actively oppose policy A but support policy B, then both policies can be passed with greater than 50% support, even if A and B are not consistent.
Even in countries where policy is largely determined by a single executive, the executive will usually have advisors with differing opinions, and the full implications of the policies each advisor is pushing may not be made clear to the executive by the advisor pushing it (and indeed, may not be made entirely clear to the advisor, who is also informed by a set of people under him, etc.).
In practice, a single government is not, and cannot be, entirely consistent
And that's ignoring the unconscious inconsistency of individuals, like the atheist who insists there is no supernatural, but knocks on wood when someone talks about having an accident.
• I think you misunderstand both atheism and the idea of "no supernatural", An atheist doesn't believe in god(s), but - given evidence - would have no problem accepting the existence of various "supernatural" creatures/forces. Those things, if they exist, are just natural that we don't understand, just as we might not understand say quantum mechanics. – jamesqf Mar 23 at 3:30
• @jamesqf It was meant as an illustration only, not a general commentary on the ideal definition of atheism (or on your atheism?). My understanding is that the grammatical construction "The puppy who refuses to eat alone..." implies I'm talking about only the subset specifically described, perhaps only an individual as actually encountered, not the general population of puppies. I'm sorry if it sounds like a general commentary on atheism, rather than a illustration of individual inconsistency. – Jedediah Mar 23 at 10:51
• @jamesqf an atheist is somebody who doesn't believe in god(s). That's it. They don't really need to be motivated by rational reasons for that - some atheists are due to lack of interest, rather than active rejection of the ideas and/or application of the scientific method. Some may instead happily believe in fairies or even a talking mongoose called Gef (pronounced like "Jeff") without needing scientific proof for that. Point is that atheism isn't science. Literally, it's only the absence of a deity. Many seem to think "atheists" are all "scientists" and vice versa, for some reason. – VLAZ Mar 23 at 12:07
• If you just change the word "atheist" to "skeptic" nobody will complain. – Tin Wizard Mar 24 at 19:15
Old "divide et impera".
As long as the two factions fight each other they will:
• depend on external suppliers of goods and services, a trade in which the government can play a role and take a cut
• waste their resources on the mutual fight rather than dedicating them to some other aim
• Especially important when the groups in question have access to supernatural / otherworldly powers and tech. – Willk Mar 22 at 16:37
They are offering potential funders the exact same goal: protection of humanity from supernatural entities.
While their approach to achieving this and the philosophical outlook behind their decisions are entirely different, this is not particularly important to the governments.
1. First, they care about the result. Not the philosophy.
2. Second, as a general rule politicians should avoid dictating or choosing the practical approach, for the simple reason that it is surprisingly difficult for them to do so. It is much better to fund based on the goals and the estimated odds of success and leave the details to people with expertise in those practical details. So being the favoured approach would mean extra funding, and not being favoured would mean reduced funding. Making a yes-or-no policy decision should be avoided.
3. Third, they have no idea which approach is better, and the only practical way to find out is to try both with reasonable funding. If they knew one approach worked better they wouldn't fund the other, but they do not know, so they fund both.
4. Fourth, they do not know whether one or both of these projects will fail spectacularly. Both approaches taken have significant unavoidable political risks attached. Politicians do not want to be responsible if those risks materialize. If they committed to a single approach, they would be responsible for that decision and thus politically responsible for issues with the chosen approach. As long as they fund both, they are simply experimenting and not committed to, or responsible for, either.
And lastly ... from the long discussion in the comments because I probably should have mentioned this...
The thing is, which of these reasons is most important depends on the person and on which way you are looking at it at the moment. The others are then complementary to that. Different people in the same government will see it differently. Different people evaluating the same decision will see it differently. The same person looking at it in a different context will see it differently. I specifically do not want to make this decision in my answer because I do not know the context in which the OP explains the reason.
• a general rule politicians should avoid dictating or choosing the practical approach chosen for the simple reason that they are utter garbage at doing so and this has been demonstrated repeatedly in the past. sounds more like an ideological rant. Many countries trust their govt, and some govts are doing it exceptionally well. Now, of course, if you want to make a plot detail out of this, OK, but it wouldn't hurt to say it explicitly. – Adrian Colomitchi Mar 22 at 11:59
• @AdrianColomitchi I probably need to clarify this (please suggest an edit), the issue is not related to the competence of the government in question, it is related to the type of decision that is being made. Political concerns and implementation concerns are two separate things and should be decided separately by people qualified for the correct type of decision. – Ville Niemi Mar 22 at 12:10
• @AdrianColomitchi The specific issue here is that since the politicians are also the bosses their concerns will dominate any decision making process they take part in. Since implementation level decisions are actually separate concerns from those the politicians focus on and are qualified to make this means policy concerns will dominate practical concerns even when there is no actual reason other than the superior position of the politician. Separating the practical decisions allows them to be made based on actual practical considerations within the already established policy. – Ville Niemi Mar 22 at 12:22
• @AdrianColomitchi Practical examples of this (positive to avoid rants) are how most governments have separated control of things such as central banks or investment funds from direct political control because the temptation to use the control to solve short term political issues was problem or sensitive matters such as elections or school systems have complex processes to protect them from the ruling party meddling to gain political edge. And yes, both Norway and Finland do these things that is one reason their people tend to trust them. – Ville Niemi Mar 22 at 12:38
• I probably need to clarify this (please suggest an edit) - perhaps the they are utter garbage at doing so and this has been demonstrated repeatedly in the past., which seems disputable and flamebait-y. The two examples I provided show two govts that managed to not be utter garbage in the way they solved two particular problems - which means it is not impossible. – Adrian Colomitchi Mar 22 at 12:49
Why would a government support organizations with completely opposing goals?
Because while the goals are philosophically in direct opposition, in practice both of their services are necessary. Even if neither side is willing to admit it, the government sees this.
• The GOC destroys objects they get their hands on that can be destroyed, and they're better at learning how to destroy them than the SPC is, but they've probably had their fair share of destructions gone wrong involving items that could have been successfully contained.
• The SPC contains dangerous objects that no one has any idea how to destroy (they've tried), and they're much better at containing the objects than the GOC is, but they've had their fair share of containments gone wrong involving items that could have been successfully destroyed.
With both of these methodologies, they would both conduct research that the other side couldn't, or wouldn't, in furtherance of their goal. This is research that might even prove beneficial towards protection of the planet outside the scope of their own goals and the subject of the supernatural anomalies.
As to how this is read from the government side of things? Perhaps the governing body is split with half of them acknowledging one side or the other, all of them half right and securing funding. Or maybe the governing body just uniformly realizes they're both needed. Either way works, or maybe some other similar way. That part is pretty flexible in how you want to interpret it.
# They Are Special Interests
The two groups here more or less fit the definition of a special interest group. They have their own philosophy, and they push the government to align with their interests as much as possible. To a certain extent, the governments' interests will align with the interest groups', but they will rarely overlap entirely.
Any government always finds itself pulled in multiple directions at once. Witness the real struggle between the content industries (copyright maximalists) and the large coalition that opposes Big Content's constant land grabbing.
# Having Ties With Both Helps Keep the Peace
The government is going to want to have influence with both of these groups. The carrot is usually a better go-to tool than the stick, and that means co-operating with them to a certain degree is the order of the day.
It simply is not true that it would make the most sense for the government to pick one group or the other. One of the main purposes of government is to keep society functioning and civil. These two groups have a culture clash. Left unchecked, it will probably be strong enough to turn into outright war.
On the one hand, the GOC will want to destroy SCP. They will view it as an unacceptable risk. They will view SCP's even limited tolerance towards the supernatural as an existential threat. SCP, in turn, will regard GOC as an existential threat (because SCP management aren't idiots).
It is the job of government to make sure that these two groups' feuding does not turn into an honest-to-goodness shooting war. Therefore, it is in their best interests to develop a hybrid policy.
When something like The Hiss shows up, the government will call GOC, because come on. When dealing with less obviously sinister goings-on, they'll call SCP, because it keeps their options open.
More importantly, funding and backing both groups gives the government pull. They can call GOC top brass and tell them to back off or they'll cut funding and co-operation. They can use their ties to GOC to pressure SCP to agree to give up custody of Dangerous Things the government decides are too big a risk.
A good compromise leaves everyone pissed off and feeling like they got the raw end of the deal.
The government will support organisations with completely opposing goals because the electorate supports organisations with completely opposing goals.
What a government cares most about is staying in power. To do this it needs to appeal to radically different demographics.
How?
It will face one set of voters and say "No-one cares more about protecting the fatherland than I, the proof is that my government has consistently supported the GOC. People like you should therefore vote for me" Then it will turn to another demographic and without blinking say "No-one cares more about the rights of all beings and the sanctity of life than I, the proof is that my government has consistently supported the SCP. People like you should therefore vote for me".
In reality the government doesn't particularly care about the fatherland OR the sanctity of life, it just wants to court a full rainbow of voters, because if it doesn't, a rival political party with broader (more inconsistent) appeal might boot them out at the next election...
The easiest way to defeat your enemy is to be the one leading it.
Suppose the government wishes, in fact, neither to contain nor to eradicate these dangerous anomalies. However, for some reason or another, the time to release them has not yet come. Suppose the people find out the government is hiding something and want to put a stop to it.
In order to ensure that this will never happen unless the government has full control of the situation, the government fabricates a story through a medium which is believable, for example, an "experiment" pushing the boundaries of science at a local university has gone "wrong" and weaker versions of these anomalies have been produced. Weak enough that collaborative effort from regular people will suffice to handle them.
The government sets up a resistance organization, or maybe two, or as many as the narrative will fit, with the sole purpose of containing these anomalies whenever they're spotted. People wishing to fight, or study, said anomalies will join these organizations, not realizing that any "new anomalies" are in fact orchestrated by the government to keep them busy, while the government pretends it is working in the best interest of the people.
The government is not monolithic. Powerful donors back both groups. Both the GOC and SCP work to get their own politicians in places of power. The politicians may or may not care one way or the other, but need someone to back them, and need an enemy to run against.
The more cynical outlook. The powerful donors are all in collusion. By supporting both groups the government can keep either side from winning and becoming too powerful. The conflict keeps the populace weak and helps the powerful stay in control.
A few possible explanations:
• Ambiguity: The government does not recognise the key philosophical differences between GOC and SCP. This may be especially the case if the SCP is fine with killing the supernatural in self-defence, which may occur regularly if the SCP are constantly in contact with the supernatural. This could lead to the government labeling the two groups as effectively the same with slightly different modus operandi.
• No concern: The government only cares about the end goal of containing the supernatural and doesn't care about the means. In such a case, all the government may care about is keeping the peace, which it may or may not be able to back up with force.
• Politics: some people in government, say the President, may clearly favour one or the other. However, he will have to deal with other groups who may not see eye to eye: the courts, congress, corporations, lobbyists, public opinion, factions within his own party, the UN, etc. The motivations for each of these groups could be completely different.
• Farce: if the situation continues to change rapidly, it will be hard for the government to definitively pick a side. For example, the govt may be on the verge of backing SCP, until supernatural forces go on a human-killing spree. What now?
• Uneven support for both: the government prefers one (say, the GOC) but allows the SCP to operate as long as it doesn't interfere with the activities of the GOC.
• Lack of information: The government has very little information about the occult to make a definitive choice, opting to let things play out further.
• Hidden motive: The government (or parties therein) have a hidden motive for allowing the SCP and GOC to play off against each other, or (alternatively) allowing both to co-exist. One less sinister example might be: a particular politician was responsible for a bill which founded the SCP, but the GOC has been much more effective in practice. While the politician supports the GOC, he cannot denounce the SCP as it would be an admission of his own bill's failure.
Mostly for temporary goals. It's what Hitler did with the Soviet Union during WWII: he partnered with Stalin to help defeat the Poles. They collaborated for temporary benefit, but each knew that they would soon turn against the other.
Nature conservation and environmental protection, for example, can often have opposed goals. It seems common that different government agencies have different goals and sometimes clash (although that normally means paperwork rather than violence), and they are in fact designed in a way that ensures they will, and have to, clash.
|
## Preprints
1. Wei-Xi Li and Tong Yang
Well-posedness in Gevrey function space for the three-dimensional Prandtl equations
– In the paper, we study the three-dimensional Prandtl equations, and prove that if one component of the tangential velocity field satisfies the monotonicity assumption in the normal direction, then the system is locally well-posed in the Gevrey function space with Gevrey index in ]1, 2]. The proof relies on some new cancellation mechanism in the system in addition to those observed in the two-dimensional setting.
arXiv:1708.08217
2. Wei-Xi Li, Alberto Parmeggiani and Yan-Lin Wang
Global Gevrey hypoellipticity for the twisted Laplacian on forms
– We study in this paper the global hypoellipticity property in the Gevrey category for the generalized twisted Laplacian on forms. Different from the 0-form case, where the twisted Laplacian is a scalar operator, this is a system of differential operators when acting on forms, each component operator being elliptic locally and degenerate globally. We obtain here the global hypoellipticity in anisotropic Gevrey space.
arXiv:1708.03095
3. Wei-Xi Li
Compactness of the resolvent for the Witten Laplacian
– In this paper we consider the Witten Laplacian on 0-forms and give sufficient conditions under which the Witten Laplacian admits a compact resolvent. These conditions are imposed on the potential itself, involving the control of high order derivatives by lower ones, as well as the control of the positive eigenvalues of the Hessian matrix. This compactness criterion for resolvent is inspired by the one for the Fokker-Planck operator. Our method relies on the nilpotent group techniques developed by Helffer-Nourrigat [Hypoellipticité maximale pour des opérateurs polynômes de champs de vecteurs, 1985].
arXiv:1707.04745
4. Wei-Xi Li, Van-Sang Ngo and Chao-Jiang Xu
Boundary layer analysis for the fast horizontal rotating fluids.
– It is well known that, for fast rotating fluids with the axis of rotation perpendicular to the boundary, the boundary layer is of Ekman type, described by a linear ODE system. In this paper we consider fast rotating fluids with the axis of rotation parallel to the boundary. We show that the corresponding boundary layer is described by a nonlinear, degenerate PDE system which is similar to the 2-D Prandtl system. Finally, we prove the well-posedness of the governing system of the boundary layer in the space of analytic functions with respect to the tangential variable.
arXiv:1611.04896
5. Radjesvarane Alexandre, Frédéric Hérau and Wei-Xi Li
Global hypoelliptic and symbolic estimates for the linearized Boltzmann operator without angular cutoff.
– In this article we provide global subelliptic estimates for the linearized inhomogeneous Boltzmann equation without angular cutoff, and show that some global gain in the spatial direction is available although the corresponding operator is not elliptic in this direction. The proof is based on a multiplier method and the so-called Wick quantization, together with a careful analysis of the symbolic properties of the Weyl symbol of the Boltzmann collision operator.
arXiv:1212.4632
## Accepted/published papers
1. Wei-Xi Li and Tong Yang
Well-posedness in Gevrey function space for the Prandtl equations with non-degenerate critical points.
Accepted by Journal of the European Mathematical Society (JEMS)
2. Feng Cheng, Wei-Xi Li and Chao-Jiang Xu
Vanishing viscosity of Navier-Stokes flow to ideal flow in Gevrey space.
Mathematical Methods in the Applied Sciences 40 (2017), 5161-5176
3. Feng Cheng, Wei-Xi Li and Chao-Jiang Xu
Gevrey regularity with weight for incompressible Euler equation in the half plane.
Acta Mathematica Scientia, 37 (2017), no. 4, 1115-1132
4. Wei-Xi Li
Compactness criteria for the resolvent of the Fokker-Planck operator.
Ann. Sc. Norm. Super. Pisa Cl. Sci. (doi: 10.2422/2036-2145.201511_008)
5. Wei-Xi Li, Peng Luo and Shuying Tian
$L^2$-regularity of kinetic equations with external potential.
Journal of Differential Equations 260 (2016), 5894-5911
6. Wei-Xi Li, Di Wu and Chao-Jiang Xu
Gevrey Class Smoothing Effect for the Prandtl Equation.
SIAM J. Math. Anal. 48 (2016),1672–1726
7. Wei-Xi Li
Global hypoelliptic estimates for fractional order kinetic equation.
Mathematische Nachrichten 287(2014), 610-637
8. Wei-Xi Li and Alberto Parmeggiani
Gevrey-hypoellipticity for twisted Laplacians.
Journal of Pseudo-Differential Operators and Applications 4(2013), 279-296
9. Hua Chen, Wei-Xi Li and Ling-Jun Wang
Regularity of traveling free surface water waves with vorticity.
Journal of Nonlinear Science 23(2013), 1111-1142
10. Frédéric Hérau and Wei-Xi Li
Global hypoelliptic estimates for Landau-type operator with external potential.
Kyoto J. Math 53 (2013), 533-565
11. Renjun Duan and Wei-Xi Li
Hypocoercivity for the linear Boltzmann equation with confining forces.
Journal of Statistical Physics 148(2012), 306-324
12. Wei-Xi Li
Global hypoellipticity and compactness of resolvent for Fokker-Planck operator.
Ann. Sc. Norm. Super. Pisa Cl. Sci. Vol. XI(2012), 789-815.
13. Hua Chen, Wei-Xi Li and Chao-Jiang Xu
Gevrey regularity of subelliptic Monge-Ampère equations in the plane.
14. Hua Chen, Wei-Xi Li and Chao-Jiang Xu
Gevrey hypoellipticity for a class of kinetic equations.
Communications in Partial Differential Equations 36 (2011) 693-728.
15. Hua Chen, Wei-Xi Li and Chao-Jiang Xu
Analytic smoothness effect of solutions for spatially homogeneous Landau equation.
Journal of Differential Equations 248 (2010) 77-94.
16. Hua Chen, Wei-Xi Li and Chao-Jiang Xu
Gevrey hypoellipticity for linear and non-linear Fokker-Planck equations.
Journal of Differential Equations 246 (2009), 320- 339.
17. Hua Chen, Wei-Xi Li and Chao-Jiang Xu
Gevrey regularity for solution of the spatially homogeneous Landau equation.
Acta Mathematica Scientia 29 (2009), 673-686.
18. Hua Chen, Wei-Xi Li and Chao-Jiang Xu
Propagation of Gevrey regularity for solutions of Landau equations.
Kinetic and Related Models 1(2008), 355- 368.
19. Shaohua Wu, Hua Chen and Wei-Xi Li
The local and global existence of the solutions of hyperbolic-parabolic system modeling biological phenomena.
Acta Mathematica Scientia 28 (2008), 101-116.
## Collaborators & Mentors
Radjesvarane Alexandre Hua Chen Feng Cheng Nils Dencker Renjun Duan Frédéric Hérau Nicolas Lerner Peng Luo Van-Sang Ngo Alberto Parmeggiani Shuying Tian Ling-Jun Wang Xue Ping Wang Yan-Lin Wang Di Wu Shaohua Wu Chao-Jiang Xu Tong Yang
|
# Import a file that could be in one of three directories
I want to import a file called image.png. I know it is either in dir1, dir2 or dir3.
On unix I can easily open it using a Kleene star.
xdg-open ~/dirA/*/image.png
On mathematica I tried to add a Kleene star to the Import argument string.
Import[$HomeDirectory <> "/dirA/*/image.png"]

But this does not work.

## Question: How to import a file that is in $$1$$ of $$n$$ directories?

• comment to my future self: FileNames["image.png", {$HomeDirectory}, Infinity] Feb 10 at 22:32
Assuming dir1, dir2, etc are strings representing the directories, use
file = First[FileNames["image.png", {dir1, dir2, dir3}], $Failed] The 2- and 3-argument forms of FileNames are pretty useful. If the $$dir_i$$ all live in the same parent directory, then you could use file = First[FileNames["image.png", {parentDir}, 2],$Failed]
• Interestingly, FileNames will accept the Kleene star because it looks for files matching a string pattern, so FileNames[$HomeDirectory <> "/dirA/*/image.png"] works too! Feb 4 at 10:25
|
Problem 2
# $A B C D$ is a parallelogram. Find the value of each ratio. $A B : C D$
## Video Transcript
Hi, everyone. Today we're gonna talk a little bit about parallelograms. This is question two. We have a parallelogram that looks like this: A B C D. Now, what we want to remember is that the sides of a parallelogram are parallel; lots of properties we have to go over. Okay. And in this question, it just asks for the ratio of AB to CD. This is the symbol for ratio right here, ratio, right? Okay. You can also write that as AB over CD, or read it as AB to CD. Okay, so now it wants the ratio again of AB, of this side here, AB, to CD. We know that AB is given as 15, and we know that opposite sides of a parallelogram are equal, so this is also 15 as well. Therefore, the ratio of 15 to 15 is just one to one. So, again,
|
# Aligned multiline equations
I'm using an align environment in a long derivation. Each equality is aligned at the equal sign. Individual lines are too long to fit the page, so I need to wrap them. However, I don't just want them to continue in the next line as I usually do and maybe indent them with a \quad; I want them to align with the opening bracket which contains all the terms. I've been looking into the aligned environment, which sort of does what I want, but now the aligned lines are vertically centered with the beginning of the equality:
\begin{align*}
Z &= Tr_\text{el, ph}\bigl[\exp(-\beta H)\bigr]\\
&= \int\mathcal{D}q Tr_\text{el}\biggl[T_\tau\exp\biggl(-\int_0^\beta d\tau\sum_j\Bigl[
\begin{aligned}
&-t\sum_\sigma\bigl(c_{j\sigma}^\dag(\tau)c_{j+1,\sigma}(\tau)+\text{h.c.}\bigr)\\
&+\tfrac{M}{2}\bigl(\dot q_j(\tau)^2+\omega_0^2q_j(\tau)^2\bigr)\\
&-g\sqrt{2M\omega_0}\sum_\sigma n_{j\sigma}(\tau)q_j(\tau)\Bigr]\biggr)\biggr]
\end{aligned}\\
&= ...
\end{align*}
This is what I would like it to look like (photoshopped):
I imagine, I'm not the first one to have this problem. I did a thorough search before posting this question, so before you mark this as a duplicate, please consider carefully if the alleged duplicate really addresses my issue.
• You are missing the [t] option for aligned. You might also want to add a \! before \begin{aligned} – daleif Jul 24 '15 at 13:49
• Very nice, I missed that indeed. Is there maybe a way to center the equation number vertically? It was, before I added [t], but now it is at the same height as the equal sign. Maybe that's more reasonable anyway... – Jonas Jul 24 '15 at 13:54
• No (AFAIK), the [t] changes the baseline, thus I only add it to constructions that are not individually numbered. You could add a single equation number to the entire calculation instead. – daleif Jul 24 '15 at 13:56
• I'll write something a little longer, there are stuff here you should never do. – daleif Jul 24 '15 at 13:57
To get aligned to line up with the first line remember the [t] option.
Don't use \text for anything but textual comments in display math. This _\text{el} is not a textual comment. Better to use another construction.
\documentclass[a4paper]{memoir}
\usepackage{mathtools}
\DeclareMathOperator\Tr{Tr}
% for text only subscripts
\newcommand\tsub[1]{_\textup{#1}}% or \textnormal
% never use \text for anything but textual comments
\begin{document}
\begin{align*}
Z &= \Tr\tsub{el,ph}\bigl[\exp(-\beta H)\bigr]\\
&= \int\mathcal{D}q
\Tr\tsub{el}\biggl[T_\tau\exp\biggl(-\int_0^\beta d\tau\sum_j\Bigl[
\!
\begin{aligned}[t]
&-t\sum_\sigma\bigl(c_{j\sigma}^\dag(\tau)c_{j+1,\sigma}(\tau)+\text{h.c.}\bigr)\\
&+\tfrac{M}{2}\bigl(\dot q_j(\tau)^2+\omega_0^2q_j(\tau)^2\bigr)\\
&-g\sqrt{2M\omega_0}\sum_\sigma n_{j\sigma}(\tau)q_j(\tau)\Bigr]\biggr)\biggr]
\end{aligned}\\
&= ...
\end{align*}
The above wastes a bit too much space IMO. Here is another
\begin{align*}
Z &= \Tr\tsub{el,ph}\bigl[\exp(-\beta H)\bigr]\\
&= \int\mathcal{D}q
\Tr\tsub{el}
\begin{aligned}[t]
\biggl[&T_\tau\exp\biggl(-\int_0^\beta d\tau\sum_j
\Bigl[
-t\sum_\sigma\bigl(c_{j\sigma}^\dag(\tau)c_{j+1,\sigma}(\tau)+\text{h.c.}\bigr)
\\
&+\tfrac{M}{2}\bigl(\dot q_j(\tau)^2+\omega_0^2q_j(\tau)^2\bigr)
-g\sqrt{2M\omega_0}\sum_\sigma
n_{j\sigma}(\tau)q_j(\tau)\Bigr]\biggr)\biggr]
\end{aligned}
\\
&= ...
\end{align*}
\end{document}
• Thanks for the additional comments. el and ph are short for "electronic" and "phononic", so I would in fact consider them textual comments. Placing the three additive terms in separate lines make referring to them in the surrounding text easier. – Jonas Jul 24 '15 at 14:21
• A different question, but I can't help but notice: you used \DeclareMathOperator to define the trace, I usually use \operatorname. Is there a reason why one is preferable? – Jonas Jul 24 '15 at 14:24
• Less typing. Also \DeclareMathOperator* easily makes a \lim-like operator (where limits go above and below in displayed math) – daleif Jul 24 '15 at 14:26
• As for your comments on el and ph: those are not textual comments, they are textual indices. Big difference. Textual indices should always be upright (just as operator names), but \text is italic when the surrounding text is italic! – daleif Jul 24 '15 at 14:28
• Great, the \lim-like behavior is actually what I want! – Jonas Jul 24 '15 at 14:31
|
# Acceleration
• August 2nd 2006, 09:01 AM
bret80
Acceleration
If an object moves according to a function of time that is defined by the equation:
y= -4t^3 + 20t^2 + 80t + 100.
What is the acceleration when time t=1?
(The acceleration is the rate of change of velocity with respect to time, that is, a= d^2y/dt^2)
• August 2nd 2006, 09:21 AM
ThePerfectHacker
Quote:
Originally Posted by bret80
If an object moves according to a function of time that is defined by the equation:
y= -4t^3 + 20t^2 + 80t + 100.
What is the acceleration when time t=1?
(The acceleration is the rate of change of velocity with respect to time, that is, a= d^2y/dt^2)
In America acceleration is the second derivative of distance.
Thus,
$y'=-12t^2+40t+80$
$y''=-24t+40$
Evaluate function at $t=1$,
$y''(1)=-24(1)+40=16 \frac{\mbox{units}}{\mbox{sec}^2}$
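As a quick cross-check of the arithmetic above, the same derivatives can be computed symbolically (a small SymPy sketch):

import sympy as sp

t = sp.symbols('t')
y = -4*t**3 + 20*t**2 + 80*t + 100   # position as a function of time

v = sp.diff(y, t)      # velocity: -12*t**2 + 40*t + 80
a = sp.diff(y, t, 2)   # acceleration: -24*t + 40
print(a.subs(t, 1))    # 16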
• August 2nd 2006, 10:13 AM
topsquark
Quote:
Originally Posted by ThePerfectHacker
In America acceleration is the second derivative of distance.
Actually bret80 is correct. Unless otherwise stated the term acceleration refers to a vector, rather than a scalar. Acceleration is always the second time derivative of the displacement function and first time derivative of the velocity function. Unfortunately even professors get lazy in their terminology and students tend to forget the difference. In fact, I've even found some students (not mine!) at the end of the semester who don't recall that the term displacement was ever used in the course! :(
-Dan
• August 2nd 2006, 10:24 AM
ThePerfectHacker
The way I understand it is:
Displacement (is a vector function that assigns position as well). Its derivative is velocity (a vector function that assigns the direction as well).
Distance (is a real function that assigns a number which corresponds to the length traveled). Its derivative is speed (a real function that assigns a number which corresponds only to its current speed, nothing more).
However, I do not know how to think of acceleration. Is it the derivative of a real or vector function?
• August 2nd 2006, 10:34 AM
topsquark
Quote:
Originally Posted by ThePerfectHacker
The way I understand it is:
Displacement (Is a vector function to assigns position as well). Its derivative is Velocity (a vector function that assigns the direction as well).
Distance (Is a real function that assigns a number which corresponds to the length traveled). Its derivative is speed (a real function that assigns a number which correspond to only its current speed, nothing more).
However, I do not know how to think of acceleration. Is it the derivative of a real or vector function?
By definition, acceleration is the derivative of velocity...ie. it is a vector function.
However, just to confuse the issue it is often convenient when doing problems in one dimension to "drop" the vector from the acceleration. This is why, when asked what "g" is, people often answer with "9.8 m/s^2" rather than the more correct "9.8 m/s^2 toward the center of the Earth" (or more simply "downward.") Really we should always be including a unit vector in the direction of the + coordinate when reporting an acceleration in a 1-D problem. To further confuse the issue is the term "deceleration," which is nothing more than an acceleration in a direction opposite to the velocity, which students often confuse with either always being negative (ie. in a direction opposite the unit vector) or decreasing in value.
This is why professors often lose the vector nature of the acceleration. (And usually don't bother to correct it, since it doesn't "need" to be corrected until the student gets into Advanced (undergrad) Mechanics.) A similar bad habit goes for confusing velocity and speed. I personally think it's a lazy habit and bad for the students.
-Dan
|
# Integral (Antiderivative) Calculator with Steps
This online calculator will find the indefinite integral (antiderivative) of the given function, with steps shown (if possible).
Please write without any differentials such as dx, dy etc.
For definite integral, see definite integral calculator.
Some integrals may take much time. Be patient!
If the integral hasn't been calculated or it took too much time, please write it in comments. The algorithm will be improved.
If the calculator did not compute something or you have identified an error, or you have a suggestion/feedback, please write it in the comments below.
## Solution
Your input: find $$\int{x \cos{\left(x^{2} \right)} d x}$$.

Let $$u=x^{2}$$.

Then $$du=\left(x^{2}\right)^{\prime }dx = 2 x dx$$ (steps can be seen here), and we have that $$x dx = \frac{du}{2}$$.
The integral becomes
$$\color{red}{\int{x \cos{\left(x^{2} \right)} d x}} = \color{red}{\int{\frac{\cos{\left(u \right)}}{2} d u}}$$
Apply the constant multiple rule $$\int c f{\left(u \right)}\, du = c \int f{\left(u \right)}\, du$$ with $$c=\frac{1}{2}$$ and $$f{\left(u \right)} = \cos{\left(u \right)}$$:

$$\color{red}{\int{\frac{\cos{\left(u \right)}}{2} d u}} = \color{red}{\left(\frac{\int{\cos{\left(u \right)} d u}}{2}\right)}$$

The integral of the cosine is $$\int{\cos{\left(u \right)} d u} = \sin{\left(u \right)}$$:
$$\frac{\color{red}{\int{\cos{\left(u \right)} d u}}}{2} = \frac{\color{red}{\sin{\left(u \right)}}}{2}$$
Recall that $$u=x^{2}$$:

$$\frac{\sin{\left(\color{red}{u} \right)}}{2} = \frac{\sin{\left(\color{red}{x^{2}} \right)}}{2}$$

Therefore,

$$\int{x \cos{\left(x^{2} \right)} d x} = \frac{\sin{\left(x^{2} \right)}}{2}$$

Add the constant of integration:

$$\int{x \cos{\left(x^{2} \right)} d x} = \frac{\sin{\left(x^{2} \right)}}{2}+C$$

Answer: $$\int{x \cos{\left(x^{2} \right)} d x}=\frac{\sin{\left(x^{2} \right)}}{2}+C$$
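The result can also be confirmed symbolically; a minimal SymPy sketch (note that SymPy omits the constant of integration):

import sympy as sp

x = sp.symbols('x')
print(sp.integrate(x*sp.cos(x**2), x))   # sin(x**2)/2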
|
# Introduction¶
The main principle underlying PennyLane is to make the interface between the quantum and classical worlds seamless. A quantum computing device should not be viewed as a competitor to a classical computer, but rather as an accelerator. Integrating both types of information processing gives rise to hybrid computation.
In PennyLane both classical and quantum computers are used in the same basic way: as computational devices which we program to evaluate mathematical functions. We call such functions nodes, since they feed information into each other like nodes in a directed graph. Quantum nodes are abstract representations of quantum circuits which take classical information as their input and produce classical information as their output.
Each quantum node executes a variational circuit — a parametrized quantum computation — on a quantum device.
In optimization and machine learning, models learn by computing gradients of trainable variables. A central feature of PennyLane is the ability to compute the gradients of quantum nodes, or quantum gradients. This enables the end-to-end differentiation of hybrid computations.
Four concepts are central to PennyLane: hybrid computation, quantum nodes, variational circuits, and quantum gradients.
### Hybrid computation
See the main Hybrid computation page for more details.
Hybrid quantum algorithms are algorithms that integrate both classical and quantum processing. In many proposed hybrid algorithms, quantum devices are used to evaluate quantum subroutines, and a classical co-processor is used primarily to post-process circuit outputs. But in principle, hybrid computation can be expanded to much more complex procedures.
In a true hybrid computational model, both the classical and the quantum devices are responsible for arbitrary parts of an overall computation, subject to the rules of quantum nodes. This allows quantum and classical devices to be used jointly, each forming an integral and inseparable part of a larger computation.
### Quantum nodes
See the main Quantum nodes page for more details.
Quantum information is fragile — especially in near-term devices. How can we integrate quantum devices seamlessly and scalably with classical computations?
This question leads to the notion of a quantum node or QNode: a basic computational unit, programmed on a quantum circuit, which carries out a subroutine of quantum information processing. Only classical data can enter or exit a quantum node.
To a classical device, a quantum node is a black box which can evaluate functions. A quantum device, however, resolves the finer details of the circuit.
### Variational circuits
See the main Variational circuits page for more details.
Variational circuits are quantum algorithms that depend on tunable variables, and can therefore be optimized. In PennyLane, a variational circuit consists of three ingredients:
1. Preparation of a fixed initial state (e.g., the vacuum state or the zero state).
2. A quantum circuit, parameterized by both the input $$x$$ and the function parameters $$\boldsymbol\theta$$.
3. Measurement of an observable $$\hat{B}$$ at the output. This observable may be made up from local observables for each wire in the circuit, or just a subset of wires.
Variational circuits provide the internal workings of a QNode, and can be evaluated by running a quantum hardware or simulator device.
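As a minimal sketch of what such a circuit looks like in code (the device name, gates, and observable below are illustrative choices, not anything prescribed by the text):

import pennylane as qml

# Illustrative two-qubit QNode on PennyLane's built-in simulator.
dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def circuit(x, theta):
    # 1. the fixed initial state |00> is prepared implicitly by the device
    # 2. a circuit parameterized by the input x and the parameters theta
    qml.RX(x, wires=0)
    qml.RY(theta[0], wires=0)
    qml.RY(theta[1], wires=1)
    qml.CNOT(wires=[0, 1])
    # 3. measurement of an observable at the output
    return qml.expval(qml.PauliZ(0) @ qml.PauliZ(1))

print(circuit(0.3, [0.1, 0.2]))  # classical numbers in, a classical number out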
### Quantum gradients
See the main Quantum gradients page for more details.
Automatic computation of gradients and the backpropagation algorithm are core elements of modern deep learning software. PennyLane extends this key functionality to quantum and hybrid computations.
Evaluating quantum nodes is inefficient on classical computers, so we might expect the gradients of quantum nodes to be similarly intractable. Fortunately, we can often compute the gradient of a quantum node $$\nabla f(x;\boldsymbol\theta)$$ exactly using a linear combination of two quantum nodes, where one variable is shifted.
We can thus use the same quantum device to compute both quantum nodes and also gradients of quantum nodes. This is accomplished with minor assistance of a classical coprocessor, which combines the terms.
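Continuing the sketch above (again an illustrative snippet, assuming PennyLane's default autograd interface), the gradient of a QNode is requested like any other automatic-differentiation gradient:

import pennylane as qml
from pennylane import numpy as np  # PennyLane's autograd-wrapped NumPy

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def qnode(theta):
    qml.RX(theta, wires=0)
    return qml.expval(qml.PauliZ(0))

theta = np.array(0.5, requires_grad=True)
grad_fn = qml.grad(qnode)                # the quantum gradient as a function
print(qnode(theta), grad_fn(theta))      # cos(0.5) and -sin(0.5), respectively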
|
# perturbR
perturbR is an R package for evaluating cluster solutions. perturbR can be used to compare (1) the solution obtained from the original matrix with solutions obtained from matrices that have had their edge weights incrementally perturbed; and (2) a relative quality score obtained from the original solution with the distribution of quality values obtained from random matrices.
# Getting Started with perturbR
When describing the use of the package we utilize publicly available benchmark data that have previously been evaluated (see Gates et al., 2016). For the purposes of demonstration here we selected an example that almost perfectly recovered the data-generating pattern: data set number 60 of Simulation 5. Here, there were 75 nodes, moderate average node strength, and approximately equally sized groups.
fort5_60 <- as.matrix(read.csv("fort5_60.csv", header = F))
Here we demonstrate use of the perturbR() function. The user must supply a matrix to the sym.matrix argument. This matrix must be symmetric and can reflect a number of phenomena studied in psychology. For instance, sym.matrix can be a similarity matrix of symptom counts among pairs of individuals, a matrix of fiber counts between pairs of brain regions, or even counts of how often two pairs of individuals speak to each other. The plot in Figure 1 is produced by using the default arguments while also setting the errbars argument to TRUE. The errbars argument is a logical where users can indicate if they would like error bars in the plots. The error bars indicate a range of two standard errors above and below the mean values obtained across repetitions at that given $$\alpha$$ level.
perturbR(sym.matrix = fort5_60, plot = TRUE, errbars = TRUE, resolution = 0.01, reps = 100)
The black circles in Figure 1 indicate the average value for the comparison between the cluster assignments from the original matrix and the cluster assignments from the rewired matrix at each level of edge perturbation $$\alpha$$. The red circles indicate the same comparisons made on a matrix that is completely random but has the same properties as the original matrix, thus providing an appropriate null matrix for comparison. This is important for scaling purposes. To further aid in interpreting the results, two horizontal lines are provided. They indicate the values of similarity found between the original matrix and a matrix where 10% and 20% of nodes (not edges, as in the rewiring phase) were randomly assigned to different clusters. As noted in Karrer and colleagues' work (2008), identifying at which $$\alpha$$ the rewired results cross these lines provides insight into interpretation. In a series of empirical examples, Karrer (2008) considered cluster solutions to be robust if the matrix had 20% or more of its edges perturbed (i.e., $$\alpha\geq 0.20$$) before intersecting with the line representing 20% of the nodes being in different clusters.
# Exploring perturbR Output
One can also look directly at the perturbR output and obtain values related to what is seen visually.
## Approach 1: Comparison of Cluster Assignments
In this approach we compare the cluster assignments from the original matrix to the solutions obtained from increasingly perturbed matrices. We begin by demonstrating how the first approach described above can be explored with the output. One might want to quantify the point at which the average ARI or VI crosses the 20% line described above. The following commands arrive at this point for the VI values:
Run perturbr:
fit5_60 <- perturbR(sym.matrix = fort5_60, plot = FALSE)
Obtain the VI value when 20% of the community assignments are randomly swapped:
fit5_60$vi20mark

## [1] 1.514176

Identify the index for the alpha level for the first time the average VI is greater than this value:

min(which(colMeans(fit5_60$VI)>fit5_60$vi20mark))

## [1] 36

Find alpha that corresponds with this index:

fit5_60$percent[36]
## [1] 0.3531532
In this example, about 35% of the edges need to be perturbed before the cluster solution for the rewired matrix is as different as when 20% of the nodes are randomly placed into different solutions. This can be seen both by looking at the point where the black circles intersect with the lower horizontal line in the VI figure and by using the code above. By contrast, a random matrix drops far below this line with only 2% of the edges perturbed in the rewired matrix. The figures provide an immediate evaluation of cluster solutions, whereas the code and output allow the user to further investigate the results. It is also possible to examine the average VI and ARI at the 20% perturbation point and see if it is larger than these marks. Here we provide an example for the ARI values:
fit5_60$ari20mark

## [1] 0.621858

mean(fit5_60$ARI[,which(round(fit5_60$percent, digits = 2) == .20)])

## [1] 0.7756071

We see that the distribution of ARI values at $$\alpha = 0.20$$ is significantly higher than the ARI value obtained when 20% of the cluster assignments for nodes are randomly changed.

## Approach 2: Comparison of Modularity Values

In this approach we compare the modularity values from the original matrix to modularity values obtained from the increasingly perturbed matrices. The output also provides a value called cutoff, which is the modularity value that marks the upper 5% percentile in the distribution of modularity results obtained from random matrices that have the same properties as the original matrix.

fit5_60$cutoff
## numeric(0)
### The modularity value from the solution on the original matrix:
fit5_60$modularity[1,1]

## [1] 0.7047275

In this example, the cutoff (note that values may differ slightly for each run due to the random generation of matrices) was $$Q_{.95}=0.31$$ and the modularity obtained in this simulated data set was $$Q_{orig}=0.70$$. Hence the modularity in the original solution was well above the upper 5% threshold obtained from the random matrix simulation results. Figure 2 depicts a histogram of the modularity values obtained for solutions in the random matrices simulated to have properties similar to the original matrix. Researchers can easily obtain similar histograms from the output provided if they would like to explore how the distribution of modularity from the random matrices compares to the modularity obtained in the original matrix.

hist(fit5_60$modularity[,which(round(fit5_60$percent, digits = 2) ==1.00)], xlim = c(0,1))
abline(v = fit5_60$modularity[1,1], col = "red")
# References
Gates, K. M., Henry, T., Steinley, D., & Fair, D. A. (2016). A monte carlo evaluation of weighted community detection algorithms. Frontiers in Neuroinformatics, 10.
Karrer, B., Levina, E., & Newman, M. E. (2008). Robustness of community structure in networks. Physical Review E, 77 (4), 046119.
|
## Spekkens Toy Theory
There is about a week until I'll attend "The New Directions" conference in Washington DC, and after the conference I'll have plenty of fresh new ideas to present. In the meantime I'll devote this and next post to discuss the so-called Psi-Epistemic point of view of Quantum Mechanics.
The basic intuition originates from phase space, where particles have a well-defined position and momentum, and a probability distribution in phase space corresponds to genuine lack of knowledge. In the realist epistemic point of view, the wavefunction corresponds to knowledge about an underlying ontic reality. This ontic reality is left unspecified: it could be classical physics with hidden variables, it could be the wavefunction itself, or it could be something completely new and undiscovered.
The key question is this: can the ontic state exist in more than one epistemic state? If yes, then a measurement in quantum mechanics simply reveals the pre-existing reality. There are a lot of roadblocks to constructing such an epistemic model, but the point of view taken by Spekkens was different: let's not construct a model which completely recovers the predictions of quantum mechanics, but let's construct a simple epistemic toy theory and see which unintuitive quantum phenomena get a simple explanation.
The basic idea is that of simulating spin 1/2 particle measurements on 3 axes: x, y, z. Here is a picture from the excellent review paper by Matt Leifer: http://arxiv.org/pdf/1409.1570v2.pdf
In Spekkens' toy model there are 2 x states, + and -, and 2 y states, + and -. The system is at any point in one of the 4 possible states: ++, +-, -+, --, but here is the catch: you cannot measure both at the same time. Moreover, the measurement disturbs the system: the unmeasured coordinate is randomized, so the particle may jump from its current unmeasured state to the other one.
The spin x measurement corresponds to measuring the x coordinate, the spin y measurement correspond to measuring the y coordinate, while the measurement of spin z corresponds to measuring the "sameness of x and y" coordinates.
Repeatable measurements always yield the same outcome, just as in the quantum case, and a measurement of "spin x" followed by a measurement of "spin y" perturbs the system (remember the disturbance of the unmeasured coordinate during measurement), so a third measurement of "spin x" gives a random outcome.
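A minimal sketch of the x and y measurements described above (plain Python; the state encoding and the uniform re-randomization of the unmeasured coordinate are modelling assumptions):

import random

def measure(state, axis):
    # return the outcome of measuring 'x' or 'y' and the disturbed ontic state
    x, y = state
    if axis == 'x':
        return x, (x, random.choice([+1, -1]))   # y is randomized
    return y, (random.choice([+1, -1]), y)       # x is randomized

random.seed(0)
state = (+1, -1)                     # one of the four ontic states

o1, state = measure(state, 'x')      # first x measurement
o2, state = measure(state, 'x')      # repeat x: always equals o1
oy, state = measure(state, 'y')      # intervening y measurement
o3, state = measure(state, 'x')      # now only 50/50 equal to o1
print(o1, o2, oy, o3)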
Now here are the successes of the toy model:
• nonorthogonal pure states cannot be perfectly distinguished by a single measurement
• no-cloning
• non-uniqueness of decomposition of mixed states
Given such impressive successes, a lot of people in quantum foundations fell in love with the realistic epistemic point of view. No full-blown realistic epistemic model for quantum mechanics was ever developed, and the PBR theorem, which I'll talk about next time, crushed any hopes for it (or so is my opinion).
Of course there is the other possibility of having a non-realistic epistemic interpretation, like Copenhagen and neo-Copenhagen and this possibility is alive and well.
## The number systems of quantum mechanics
Now we have reached the end of the series on the number systems for quantum mechanics. Quantum mechanics can be expressed over any real Jordan algebra (including spin factors), but which one is picked by nature? The simplest case is complex quantum mechanics because you can construct the tensor product and the number system is commutative. There is a theorem by Soler which restricts the number system to real numbers, complex numbers, and quaternions, but the starting assumptions are too restrictive. There is no need to force the inner product to generate only positive numbers.
When the number system is the real numbers, then this can exist only as an embedding in complex quantum mechanics, so we want to build our number system from matrices of complex numbers and quaternions. The existence of the tensor product is not a requirement in general. Two fermions together do not form another fermion. But what is the meaning of a number system besides complex numbers? Basically this adds internal degrees of freedom. Do we know of additional degrees of freedom? Yes. They are the gauge symmetries.
The natural framework for discussing the number system for quantum mechanics is Connes' spectral triple. The number system is the algebra $$A$$ in the spectral triple, while the unitary time evolution or the Zovko equation of continuity for quantions gives rise to the Dirac operator $$D$$ in the spectral triple. The standard model arises in this formalism by a judicious pick of the algebra which gives the internal degrees of freedom. The selection of $$A$$ is now done ad-hoc to simply recover the Lagrangian of the Standard Model.
One may imagine different universes where the algebra is different. Quantionic quantum mechanics does not describe our universe because we do not see a long distance Yang-Mills field with the gauge group SU(2)xU(1). Instead the electroweak field is subtler and there is a mixing of a U(1) with SU(2)xU(1) with the Weinberg angle so you may say that our universe resulted in part from a coupling between complex and quantionic quantum mechanics.
Still, regardless of the number system picked by nature for quantum mechanics, everything reduces to complex quantum mechanics when the internal degrees of freedom are ignored. This is because there are only two number systems which respect the tensor product, and in the non-relativistic limit they are identical. In complex quantum mechanics, the additional degrees of freedom form superselection domains, and C* algebras are compatible with superselection rules.
## Quantionic Quantum Mechanics and Dirac's Theory of the Electron
Now we will present the relationship between quantions and spinors. They are basically two methods of taking the square root of the d'Alembertian: while spinors work in any dimension, quantions are related to the Hodge decomposition, and this works only in 4 dimensions because of the interplay between 2-forms and their duals.
But let's start at the beginning. Nikola Zovko from the Ruder Boskovic institute in Croatia was following Emile Grgin's work very closely and wanted to relate it to known physics. To this aim he proposed an interpretation of quantionic quantum mechanics which generalizes the Born rule: instead of probabilities, the inner product produces a 4-vector probability density current. Grgin calls this "the Zovko interpretation" and everything follows from it. In regular complex-number quantum mechanics, the wavefunction of, say, the electron in the hydrogen atom attaches a complex number to each point in space. In quantionic quantum mechanics each point in spacetime has a quantion attached, and we know from last time that $$q^{\dagger} q$$ (the "algebraic norm") is a future-oriented 4-vector. Summing all the complex-number or quantion algebraic norms over the entire space yields either a positive scalar or a future-oriented 4-vector, and this is the Born rule. For quantions, if $$q^{\dagger} q = j$$ is a 4-vector current then we must have an equation of continuity:
$${\partial}_{\mu} j^{\mu} = {\partial}_{\mu} (q^{\dagger} q)^{\mu} = 0$$
So now suppose we have a "quantionic field": $$q(x) = (q_1 (x), q_2 (x), q_3 (x), q_4 (x))$$ with x the usual 4-vector in relativity. Then the continuity equation can be written as:
$${\partial}_{\mu} j^{\mu} = \frac{1}{2} [q^{\dagger} D(q) + {D(q)}^{\dagger} q ]= 0$$
where
$$D = \left( \begin{array}{cc} \partial_0 + \partial_3 & \partial_1 + i \partial_2 \\ \partial_1 - i \partial_2 & \partial_0 - \partial_3 \end{array}\right)$$
and so the real part of $$q^{\dagger} D q$$ must vanish. If we split $$D q$$ into:
$$D q = i H q + i A q$$
with H a hermitian matrix and A an anti-hermitian matrix. If we interpret H as an outside potential, then for a free particle we have D q = -iAq, and "A" can be expressed as:
$$A = m e^{i\psi} [cos \theta \gamma^1 + sin \theta cos \phi \gamma^3 + i sin \theta sin \phi \gamma^0 \gamma^5]$$
This is more generic than the usual Dirac equation because quantionic quantum mechanics describes an SU(2)xU(1) gauge theory. If we restrict however to the case $$A = m \gamma^1$$ we recover Dirac's theory completely. In this case there is a one-to-one correspondence between the 4 quantionic components $$q$$ and Dirac's spinors $$\Psi$$:
$$q = \left( \begin{array}{c} q_1 \\ q_2 \\ q_3 \\ q_4 \end{array}\right) = \sqrt{2} \left( \begin{array}{c} -\Psi_2 \\ {\Psi}_3^{*} \\ \Psi_1 \\{\Psi}_4^{*} \end{array}\right)$$
$$\Psi = \left( \begin{array}{c} \Psi_1 \\ \Psi_2 \\ \Psi_3 \\ \Psi_4 \end{array}\right) = \frac{1}{\sqrt{2}} \left( \begin{array}{c} q_3 \\ -q_1 \\ q_2^* \\q_4^* \end{array}\right)$$
and the quantionic current is Dirac's current:
$$j^{\mu} = {(q^{\dagger} q)}^{\mu} = \Psi^{\dagger} \gamma^0 \gamma^{\mu} \Psi$$
But how come nobody else noticed an SU(2)xU(1) gauge theory before? Actually... this was discovered independently by David Hestenes.
David Hestenes
He calls it the spacetime algebra. Quantionic algebra is nothing but the spacetime algebra. Next time we'll talk about the physics of quantionic quantum mechanics and see to what degree it can represent nature.
## Quantionic algebra
Last time we introduced the structural form of a quantion:
$$p = \left( \begin{array}{c} P \\ \vec{p} \end{array}\right)$$
and its product is defined as:
$$pq = \left( \begin{array}{c} P \\ \vec{p} \end{array}\right)\left( \begin{array}{c} Q \\ \vec{q} \end{array}\right) = \left( \begin{array}{c} PQ + \vec{p}\cdot \vec{q}\\ P\vec{q} + Q\vec{p} + i \vec{p} \times \vec{q} \end{array}\right)$$
Given the Pauli matrix multiplication rule one can easily check that
$$PI + \vec{p}\cdot \vec{\sigma}$$
respects the same multiplication rule:
$$(PI + \vec{p}\cdot \vec{\sigma}) (QI + \vec{q}\cdot \vec{\sigma}) = (PQ + \vec{p} \cdot \vec{q}) + ( P\vec{q} + Q \vec{p} +i \vec{p}\times \vec{q}) \cdot \vec{\sigma}$$
and hence a quantion is nothing but a $$2\times 2$$ complex matrix subject to the usual matrix multiplication.
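This identification is easy to check numerically. The sketch below (plain NumPy, with randomly chosen structural components, purely for illustration) compares the structural product with ordinary 2x2 matrix multiplication:

import numpy as np

# Pauli matrices and the identity
I = np.eye(2, dtype=complex)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),     # sigma_1
         np.array([[0, -1j], [1j, 0]], dtype=complex),  # sigma_2
         np.array([[1, 0], [0, -1]], dtype=complex)]    # sigma_3

def to_matrix(P, p):
    # map the structural form (P, p) to P*I + p . sigma
    return P * I + sum(p[k] * sigma[k] for k in range(3))

def structural_product(P, p, Q, q):
    # (PQ + p.q, P q + Q p + i p x q) -- the structural product above
    return P * Q + np.dot(p, q), P * q + Q * p + 1j * np.cross(p, q)

rng = np.random.default_rng(0)
P, Q = rng.normal(size=2) + 1j * rng.normal(size=2)
p, q = rng.normal(size=(2, 3)) + 1j * rng.normal(size=(2, 3))

W, w = structural_product(P, p, Q, q)
print(np.allclose(to_matrix(P, p) @ to_matrix(Q, q), to_matrix(W, w)))   # True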
A real quantion $$A$$ with $$A = A^{\dagger}$$ is defined as:
$$A = \left( \begin{array}{cc} r & z \\ z^* & s \end{array}\right) = \left( \begin{array}{cc} p^0 + p^3 & p^1 + i p^2 \\ p^1 - i p^2 & p^0 - p^3 \end{array}\right)$$
For a general quantion
$$p = \left( \begin{array}{cc} a & c \\ b & d \end{array}\right)$$
in matrix representation we can define three discrete transformations:
$$Cp = \left( \begin{array}{cc} a^* & c^* \\ b^* & d^* \end{array}\right) = p^*$$
$$Pp = \left( \begin{array}{cc} d & -c \\ -b & a \end{array}\right) = p^{-1} det(p) = p^{\sharp}$$
$$Tp = \left( \begin{array}{cc} d^* & -b^* \\ -c^* & a^* \end{array}\right) = CPp$$
And quantions respect the CPT=1 symmetry. In the structural formalism this can be represented as follows:
and we can see that P corresponds to parity as it inverts the spatial components while keeping the structural vector $$\Omega$$ unchanged because $$\Omega$$ corresponds to the local time direction.
If we introduce the metric dual: $$p^{\sharp} = P(p)$$ then quantions have two kinds of "norms":
Algebraic: $$A(q) = q^{\dagger} q$$
Metric: $$M(q) = q^{\sharp} q$$
and we have the fundamental property:
$$AM q = MA q$$
Why is this important? Because it all comes together in the diagram below (which I crudely adapted from Grgin's book).
"$$A$$" comes from the usual inner product of quantum mechanics while "$$M$$" extracts a particular reference frame in relativity. The remarkable fact is that $$A(q)$$ is always a future oriented timelike vector and in quantionic quantum mechanics the predictions are 4-vectors: current probability densities. The commutation of "A" with "M" shows that the theory is inherently relativistic. Relativity was not postulated but it comes out naturally. Moreover because SO(2,4) was the unique non unitary group compatible with the observable-generator duality of quantum mechanics, quaternionic quantum mechanics is the only possible generalization of quantum mechanics which is compatible with relativity. This only works in the SO(3,1) space. Is this why there are 3+1 spacetime dimensions?
Next time we'll discuss the physics of quantionic quantum mechanics and its relationship with Dirac's theory. Please stay tuned.
## From SO(2,4) to quantions
Let me first say a bit more about SO(2,4). Physics in SO(2,4) is called by Itzhak Bars: "two-time physics":
Then SO(2,4) corresponds to conformal compactification of the Minkowski space. Via the isomorphism SO(2,4)~SU(2,2) this is related with Penrose's twistor theory
So how do we get from SO(2,4) to quantionic quantum mechanics? Last time we were after obtaining a unique element which would play the role of sqrt(-1). That element turns out to be a bivector $$e_i \wedge e_j$$ where $$e_i$$ is from the 2 part of SO(2,4), and $$e_j$$ is from the 4 part of SO(2,4). Then we want to find out all the elements of SO(2,4) which commute with this.
Skipping the lengthy derivation, one arrives at a complex linear Minkowski space where the four-vectors are no longer real numbers but complex numbers. Here we follow again the amazing book of Grgin.
We need to arrive at an associative algebra which describes the new quantum mechanics number system. We need associativity because quantum systems can be combined in an associative manner using the tensor product. We also need to recover the usual complex-number formulation of quantum mechanics in the appropriate non-relativistic limit. Is there a linear functional $$\Omega$$ which takes a complex Minkowski 4-vector into a complex number? If so, this would seem to violate the spirit of relativity. However, in the C* algebraic formulation of quantum mechanics any C* algebra can be made unital, and $$\Omega$$ turns out to be the unit of the new quantum mechanics number system. Physically this corresponds to specifying the direction of time at each Minkowski space-time point. A distinguished direction of time seems to contradict relativity, but in the end it will all work out nicely when we add the interpretation of the inner product, so please have patience on this.
Now given $$\Omega$$, we can construct the following combinations which can enter the definition of an algebraic product $$B^{\mu}_{\alpha \beta}u^{\alpha} v^{\beta}$$:
$$(\Omega, u) v^{\mu}$$
$$(\Omega, v) u^{\mu}$$
$$(u,v) \Omega^{\mu}$$
$$\eta^{\mu \nu} \epsilon_{\nu \alpha \beta \gamma} \Omega^{\alpha} u^{\beta} v^{\gamma}$$
where
$$(f, g) = f_{\mu} g^{\mu}$$
From unitality and associativity the product is uniquely determined as:
$${(u \beta v)}^{\mu} = (\Omega, u) v^{\mu} + (\Omega, v) u^{\mu} - (u,v) \Omega^{\mu} - i \eta^{\mu \nu} \epsilon_{\nu \alpha \beta \gamma} \Omega^{\alpha} u^{\beta} v^{\gamma}$$
or by using the Hodge duality *:
$$u\beta v = (\Omega, u) v + (\Omega, v) u - (u, v) \Omega - i * (\Omega \wedge u \wedge v)$$
If we decompose a quantion as:
$$u = U\Omega + \vec{u}$$
$$v = V\Omega + \vec{v}$$
we have $$w = u\beta v$$ with $$w = W\Omega + \vec{w}$$ given by:
$$W = UV + \vec{u}\cdot \vec{v}$$
$$\vec{w} = U\vec{v} + V\vec{u} + i \vec{u}\times\vec{v}$$
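As a quick numerical sanity check of this decomposition, here is a small sketch in R (the random complex components are arbitrary; all it verifies is that the product defined above is associative, which is the property we demanded):
# quantionic product in the (scalar, 3-vector) split: w = u beta v
cross3 <- function(a, b) c(a[2]*b[3] - a[3]*b[2],
                           a[3]*b[1] - a[1]*b[3],
                           a[1]*b[2] - a[2]*b[1])
qprod <- function(p, q) list(S = p$S*q$S + sum(p$v*q$v),                  # W = UV + u.v
                             v = p$S*q$v + q$S*p$v + 1i*cross3(p$v, q$v)) # w = Uv + Vu + i (u x v)
rquantion <- function() list(S = complex(real = rnorm(1), imaginary = rnorm(1)),
                             v = complex(real = rnorm(3), imaginary = rnorm(3)))
set.seed(1)
u <- rquantion(); v <- rquantion(); w <- rquantion()
lhs <- qprod(qprod(u, v), w)   # (u beta v) beta w
rhs <- qprod(u, qprod(v, w))   # u beta (v beta w)
max(Mod(c(lhs$S - rhs$S, lhs$v - rhs$v)))  # ~1e-15, i.e. associative up to rounding
Up to rounding error the two bracketings agree, which is exactly the associativity needed to compose quantum systems with the tensor product.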
Next time we'll see how to arrive at the matrix formulation and SL(2,C). We'll also see that the above decomposition is related to the CPT theorem. Please stay tuned.
## The amazing SO(2,4)
Continuing the quantionic discussions, let's see how the Lie algebra so(2,4) arises naturally in this framework. First in quantum mechanics the anti-Hermitean operators form a Lie algebra and the classification of Lie algebras was obtained by Elie Cartan. But in quantum mechanics there are additional relationships obeyed by the anti-Hermitean operators because if we multiply them with the imaginary unit we get a Jordan algebra and between them there is a compatibility relationship. This compatibility relationship restricts the possible Lie algebras and as expected one gets the unitary algebras su(n). However there is also an exceptional non-unitary solution: so(2,4).
It is too complicated to follow this line of argument, and instead I want to present a more elementary argument (also due to Emile Grgin) which arrives at so(2,4). With a bit of hindsight we start from a general SO(p,q) space which as a linear space is spanned by p positive basis vectors: $$e_1 , e_2 , \dots , e_p$$ and by q negative basis vectors: $$e_{p+1} , e_{p+2} , \dots , e_{p+q}$$. Then we want to investigate arbitrary reflections. Why? Because we are after obtaining non-standard ways to represent sqrt(-1) using the elements of SO(p,q), which is the key to the Hermitian-anti-Hermitian duality in quantum mechanics. In complex numbers, if we consider complex conjugation, we can represent it as a reflection about the real axis, and this is a good hint.
We are interested in continuous transformations only to the extent that they can undo a reflection. If we can find a unique reflection this can form a realization of the observables-generators duality in quantum mechanics.
Now consider a reversal of $$r$$ arbitrary basis vectors in $$e_1 , e_2 , \dots , e_p$$. If $$r$$ is even the transformation can be undone by rotations because the determinant of the transformation is positive. Similarly all reversals for $$r$$ odd are equivalent.
In general we can have $$r$$ inversions among the positive basis vectors and $$s$$ inversions among the negative basis vectors: $$J=s+r$$. Therefore, writing $$n = p+q$$, there are in general $$K=n-J$$ invariant basis vectors. Let us now rename the basis vectors as: $$R_1, R_2, \dots , R_J , S_1, S_2, \dots , S_K$$ (R for reverse, and S for same). Then there are 3 kinds of bivectors:
$$R_i \wedge R_j$$
$$S_i \wedge S_j$$
$$R_i \wedge S_j$$
The first two kinds do not change sign, but the last kind does.
Let us introduce two more numbers:
N = number of bivectors of kind $$R_i \wedge S_j$$
P = number of bivectors of kind $$R_i \wedge R_j$$ + number of bivectors of kind $$S_i \wedge S_j$$ + 1 for the identity transformation
N-for negative, P-for positive
Then the following relationships hold:
$$N = JK$$
$$P = \tfrac{1}{2}J(J-1) + \tfrac{1}{2}K(K-1) + 1 = \tfrac{1}{2}n(n-1) - N + 1$$
Now r and s must be odd numbers (all even number reflections can be undone by a rotation):
$$r = 2k+1 < p$$
$$s = 2l+1 < q$$
and introducing m=k+l as an auxiliary notation we get:
$$N(m) = JK = 2(m+1)(n-2m-2)$$
(since $$J = 2(m+1)$$ and $$K = n-J = n-2m-2$$)
Now we need to require that the complex conjugation is uniquely defined. This means that N(m) must have the same value for all the allowed values of m:
$$N(0) =N(1)=...=N(m_{max})$$
Because N(m) is quadratic in m, it can take the same value for at most two values of m, so at most m = 0 and m = 1 are allowed ($$m_{max} = 1$$), and from N(0)=N(1) we get:
$$2(n-2) = 2 \cdot 2 \,(n-4)$$
and so n=6
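If you want to double-check this step numerically, here is a tiny R sketch (the range of n scanned below is arbitrary; it just confirms that n = 6 is the only solution of N(0) = N(1)):
N <- function(m, n) 2 * (m + 1) * (n - 2 * m - 2)  # number of sign-changing bivectors
n <- 4:12
n[N(0, n) == N(1, n)]  # returns 6, and N(0, 6) = N(1, 6) = 8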
Therefore we can have the solutions: SO(1,5), SO(2,4), SO(3,3), SO(4,2), SO(5,1)
In the 1,5 and 3,3 cases $$m_{max} = 2$$ and we cannot have a unique way to define complex conjugation!!! The only remaining case is SO(2,4) (which is isomorphic with SO(4,2)).
If we want to generalize the number system for quantum mechanics in a way that respects the tensor product and obtain a non-unitary solution, the only possibility is SO(2,4).
Why is this remarkable? Because SO(2,4) is the conformal group of the compactification of the Minkowski space. This is the first hint that ultimately we will get a relativistic quantum mechanics.
## Quantionic Quantum Mechanics
In the prior posts I presented the alternatives to complex quantum mechanics: real and quaternionic quantum mechanics. It turned out that real and quaternionic quantum mechanics are constrained versions of complex quantum mechanics, and as such they have several problems: lack of a tensor product (because the constraints do not scale nicely with the tensor product), lack of a de Finetti representation, etc. Are there other possible number systems for quantum mechanics?
If quantum mechanics predicted only probabilities, there would be no other possibility, and Adler proves why in his quaternionic quantum mechanics monograph. But this is absurd! How can quantum mechanics predict something other than probabilities? How about generalizing probabilities to probability current densities? This is actually what one gets in Dirac's theory of the electron! It turns out that there exists a completely equivalent formulation of the Dirac theory of spinors in terms of a new quantum mechanics number system called the quantions, and this corresponds to a different factorization of the d'Alembertian. This was discovered by Emile Grgin and I will present this beautiful theory in this and subsequent posts.
Grgin's quantions book
So let us start from the beginning. The algebraic structure of quantum mechanics corresponds to two algebras: the Jordan algebra of the observables, the Lie algebra of the generators, and their compatibility relationship (see definition 2.1 on page 7 on this).
Now it turns out that the compatibility relationship allows the introduction of an associative product which in the case of complex quantum mechanics is the usual complex number multiplication. Associativity has an experimental interpretation: the ability to compose two experiments sequentially.
The map J ($$J^2 = -1$$) between observables and generators can be introduced in two ways:
• the usual way as sqrt(-1)
• an internal way as an element of the algebra of observables
In the second case let this special element be $$O_J$$; to preserve invariance under tensor composition we demand that all other observables commute with $$O_J$$. So what we get is a sub-algebra of the two-product Jordan-Lie algebra, and now we have a constrained quantum mechanics compatible with the tensor product (unlike quaternionic quantum mechanics). In fact this is the BRST theory in disguise.
Now if we want to find something other than complex-number quantum mechanics we need to look at non-unitary representations. This means that the associative product is no longer a division number system, and hence we can get negative probabilities or "ghosts". But isn't this nonphysical? Sure it is. However, restricting the observables to be only the ones commuting with $$O_J$$ restores sanity. The only price to pay is the generalization of the inner product.
It turns out that the unphysical quantum mechanics corresponds to quantum mechanics over $$SO(2,4)$$, which is isomorphic with $$SU(2,2)$$, and this contains two positive probability modes and two negative probability modes. However, restricting the observables to commute with $$O_J$$ restores positivity, and what results is a non-division algebra $$SL(2,C)\oplus SL(2,C)$$ which Grgin calls the quantionic algebra.
Quantionic quantum mechanics is actually the gauge theory of electroweak interaction, is a BRST theory, corresponds to a Hodge decomposition of the d'Alembertian, and is fully equivalent with the spinor Dirac theory. All we have to accept is the fact that experiments do not result in probabilities but in the natural generalization of them in the special theory of relativity context: Dirac's probability current densities. Quantionic quantum mechanics is the natural unification of quantum mechanics and relativity. Moreover, this can only work in 3+1 spacetime dimensions! Spin, the theory of relativity, and space-time dimensionality emerge naturally in this theory and they do not have to be assumed. We'll explore this theory in depth in the next posts.
## PulseAudio Restart Bug – Solved
I enjoy my Linux computers, and one reason is the fact that many technical issues can be resolved without having to reboot endlessly. However, in my past usage, there has been an irritating exception to this pattern. It’s common under Linux that we can simply restart the PulseAudio server from the command line, using one of several methods, and that should be the end of it. But alas, every time I have ever restarted PulseAudio in this way, or the server restarted on its own, afterwards, when looking at the Plasma 5 -generated status display (which is actually referred to as “Phonon”), I’d be missing a Devices list, like so:
This type of display can be interpreted to mean several things, such as that the PulseAudio server did restart, but perhaps simply failed to connect to the inter-process, session-unique message bus. Therefore, in the past, whenever I had such a display, I eventually did what I thought I had to do, which was to log out and back in again, or to reboot. On my system, PulseAudio is configured such that it runs as one user-name, and not as root.
In fact, a peculiar side effect of this bug was, that the list of available output devices was still being displayed, within ‘pavucontrol‘.
But this ordeal has now become even more inconvenient than it ever was, because on the computer which I name ‘Phosphene’, the need ‘just to restart the PulseAudio server’ may recur more frequently.
However, I have finally found the true cause for this malfunction, which was, that when PulseAudio is restarted from within an existing session, it simply fails to load one module, which is also the module that it needs, in order to be able to list the available devices:
module-device-manager
In fact, there exists a script in ‘/usr/bin‘, that loads a series of X11-related modules.
Therefore, after a restart of this service, I can simply give the following command now:
/usr/bin/start-pulseaudio-x11
And Eureka! I can now obtain a list of available devices, without ever having to log out and back in, or, without ever having to reboot:
In fact, I have created a shell-script, which I can click on as user, and which carries out this task, safely.
I’ve also discovered that the ‘ProjectM’ music visualization application still works, and detects the beat of the playing music as before. What this means is that theoretically, after ‘ProjectM’ was installed, instead of rebooting, I could have just restarted the PulseAudio server as described here, to get that working.
( Edited 2019/10/29, 23h35 … )
I know that there exists an unrelated problem, that just happens to give the same appearance within ‘Phonon’, but that cannot be resolved in this way…
## Technical Impediment In Getting Sound From My Linux Tablet (Solved)
One of the facts which I’ve blogged about is, that I have a Linux Guest System installed on the Android Tablet, that’s a Google Pixel C.
Another fact which I blogged about a long time ago was, that I am able to share the PulseAudio Sound Server that resides on the computer now named ‘Phosphene’, for use by the client computer I name ‘Klexel’.
A basic limitation to my Linux Tablet remains, that it isn’t suited to play back audio streams, or video streams that have audio, because inherently, I’m just running its Linux Guest as a VNC Session. And so a logical thought on that would be:
‘Why not specify the Sound-Providing Server, as the place that the Linux Guest System streams its sound to, at least as long as I am on my own LAN?’
And while in theory this sounds like a good idea, in practice the implementation is still some distance away.
The main problem? While ‘Klexel’ is connected to this Sound Server, it ties up the only TCP Port, which is therefore unable to accept new connections, say from my Linux Tablet. Now, I can tell ‘Klexel’ to relinquish its session on the Sound Server, but doing so has an unexpected consequence. This corrupts the module on the PulseAudio Server, that was listening for remote connections. I need to unload the module, and reload it with the same parameters as before, just so that ‘Klexel’ can reconnect.
The long-term effect of this will be, that the Linux Tablet may be able to obtain one session on ‘Phosphene’ for sound, but that every time this tablet disconnects, again, that module on Phosphene’s PulseAudio Server will go into a corrupted state.
Hence, I have not yet worked this into a practical solution. But if I ever did, I’d be able to expand the applications of the Linux Guest System – on the Tablet – into audiovisual applications.
Update:
I am now one step closer to permitting Linux audiovisual applications on my tablet to access the sound server on my LAN. What I have discovered is, that the module in question can be loaded more than once on the PulseAudio Server, as long as each instance of it listens on a different port number. I.e., the second instance can be configured to listen on port 4318 instead of the default, port 4317. The configuration lines which accomplish this are as follows:
I realize that the legacy Port Number which the PulseAudio Server listens on by default, is 4713. But in Computing, it’s generally impossible for two programs to be listening on the same Port. Therefore this module listens on a different Port Number, just because PulseAudio is already running.
The command ‘pactl list modules‘ confirms that both instances are loaded and stable. Further, when the video-player ‘xine’ is finished with its connection to the server, it closes the TCP Port in a way that does not corrupt the module, so that ‘xine’ can be started a second time and will cause sound to play for the second time.
What this last observation seems to suggest is that the so-called relinquishing of the Local Sound Sink by ‘Klexel’, a Debian 9.11 computer, is what is actually broken, and not the behaviour of the module itself on the PulseAudio Server, also running on a Debian 9.11 computer.
This is good news.
In Canada and the USA, a relatively recent practice in FM radio has been to piggy-back a digital audio stream onto the carriers of some existing, analog radio stations. This is referred to as “HD Radio”. A receiver that lives up to the broadcasting standard should cost slightly more than \$200. This additional content isn’t audible to people who have standard, analog receivers, but can be decoded by people who have the capable receivers. I like to try evaluating how well certain ‘Codecs’ work, which is short for “Compressor-Decompressor”. Obviously, the digital audio has been compressed, so that it will take up a narrower range of radio frequencies than it offers audio frequencies. In certain cases, either a poor choice or an outdated choice of Codec can, in itself, leave the sound quality impaired.
There was an earlier blog posting, in which I described the European Standard for ‘DAB’ this way. That uses ‘MPEG-1, Layer 2’ compression (:1). The main difference between ‘DAB’ and ‘HD Radio’ is the fact that, with ‘DAB’ or ‘DAB+’, a separate band of VHF frequencies is being used, while ‘HD Radio’ uses existing radio stations and therefore the existing band of frequencies.
The Codec used in HD Radio is proprietary, and is owned by a company named ‘iBiquity’. Some providers may reject the format, over an unwillingness to enter a contractual relationship with one commercial undertaking. But what is written is that the Codec used here resembles AAC. One of the things which I will not do is to provide my opinion about a lossy audio Codec without ever having listened to it. Apple and iTunes have been working with AAC for many years, but I’ve neither owned an iPhone, nor an OS/X computer.
What I’ve done in recent days was to buy an HD Radio -capable Receiver, and this provides me with my first hands-on experience with this family of Codecs. Obviously, when trying to assess the levels of quality for FM radio, I use my headphones and not the speakers in my echoic computer-room. But, it can sometimes be more relaxing to play the radio over the speakers, despite the loss of quality that takes place, whenever I do so. (:2)
What I find is that the quality of HD Radio is better than that of analog, FM radio, but still not as good as that of lossless, 44.1kHz audio (such as, with actual Audio CDs). Yet, because we know that this Codec is lossy, that last part is to be expected.
(Updated 8/01/2019, 19h00 … )
## Just Installed Kanotix Steelfire on one of my Boxes
For more than a week, I was worried about Kanotix, because their Web-site was down. But after just checking today, I found it was up again! It has been a habit of mine to install initial Debian systems, from Kanotix Live Disks.
I already possess a powerful computer which I name ‘Plato’, onto which I installed Debian / Stretch by way of an experimental Live Disk from Kanotix, but I cannot fully say that that one is a Kanotix computer, because at the time, Kanotix didn’t have an official Debian / Stretch release yet. What I did have was two systems running the slightly older Debian / Jessie, and the official Kanotix release with that is called “Kanotix Spitfire”.
But what I also had for some time, was a weaker PC that still had Debian / Lenny on it, which was an antique system, that required its own security measures, just not to pose a vulnerability to me.
My special security measure for that computer, was just never to turn it on. In fact, it had no eligible Web-browser. But like that, because the hardware was still good, this represented wasted hardware, just sitting in my computer room.
So, now that the Kanotix site is back up, what I did was to download a 32-bit, LXDE Disk Image of “Kanotix Steelfire”, which is by now their official Debian / Stretch release. In principle many people, including Kanotix experts, would agree that it makes more sense to use Plasma 5 as the desktop manager, but as it happens, the computer that just received a new O/S is so weak in terms of RAM and graphics chip-set that I didn’t think it could handle Plasma 5.
The newly-set-up computer used to be named ‘Walnut’, but is now to be named ‘Klexel’. It has as graphics acceleration, an old Intel chip-set, which Kanotix distributions actually support, in the form of ‘i915 support’. This is neither an Nvidia, nor an AMD/ATI chip-set. But amazingly, I do have some level of direct-rendering with it, and, in addition, I have Compiz Fusion on that box now, and at least, the 3D desktop-switching belonging to Compiz works!
So now, with ‘Klexel’ wiped, I can take my time with it, and install what I think it should have. But what will slow me down a bit, is the fact that I’m not used to LXDE as a main window-manager. In the past I goofed around with LXDE a bit, but now, this is going to be Klexel’s window manager, under which the GUI is arranged differently, from what I’m used to.
(Update 09/09/2019, 15h50 … )
(As of 09/01/2018, 21h35 : )
29 Mar 2022
# ADF establishes New Defence Space Command Branch
A new space division of the Australian Defence Department has been announced, which will be led by Air Vice-Marshal Catherine Roberts, AM, CSC.
As global demands on space infrastructure grow, the Australian Defence Force (ADF) has announced the establishment of Defence Space Command, a new organisation aimed at supporting defence space power across the portfolio.
Space has become congested and contested as more systems and infrastructure become reliant upon space-borne assets. From timekeeping for bank transactions to navigation and communications, space infrastructure has become critical to the world we live in today. This also means it has become a target for belligerent actors looking to impact states in a number of ways.
The announcement was made last week at the Air and Space Power Conference, which also included Chief of Space Operations for the United States Space Force, General William 'Jay' Raymond who addressed the assembled guests saying “We can no longer take space for granted.”
Chief of the Defence Force, General Angus Campbell said space was critical to the ADF warfighting effectiveness, situational awareness, and the delivery of real-time communications in the current geostrategic environment.
“We must be able to generate space power across the Defence portfolio, supporting the joint force, whole of government, allies and international partners. We must also protect billions of dollars worth of commercial and military assets against space debris, collisions and destructive acts,” General Campbell said.
“The decision to create a single organisation to coordinate and manage Defence’s endeavours in space is significant. Defence Space Command brings members of Navy, Army, Air Force, the Australian Public Service and contractors together under an integrated headquarters reporting to the Chief of Air Force as the Space Domain Lead,” added General Campbell.
## Defence Space Strategy
As space has become critical to actions on the ground, there has been a push to make the laws around who can use space, and in what capacity, clearer. Some members of the Australian space industry say the current laws and treaties are enough and that space should be used for peaceful purposes only. Yet it is clear that the boundaries between military, commercial, and scientific use are becoming more and more blurred, particularly in near-Earth orbit. As our technological capabilities develop, so will our reach to the Moon, asteroids, and other planets.
Whilst Space Command has been established to assure Australia’s access to space, to defend Australia and our national interests, and to promote global security and stability, a US-style “Space Force” has not been ruled out by the federal government. Yet some have questioned the need for a “Space Command”, suggesting that it has the potential to escalate tensions with other nation-states.
Space Command will be headed by Air Vice-Marshal Catherine Roberts, AM, CSC who joined the Royal Australian Air Force in 1983 to study Aerospace Engineering. Her most recent role prior to joining Space Command was as the head of Air Force Capability, where she was involved in future-proofing the Australian Air Force.
According to Defence Minister Peter Dutton, the new organisation would “secure Australia's place in the cosmos”.
The new organisation was announced at the same time as a new defence space strategy and space power manual, which outlines how reliant upon space-borne assets Australia has become. The strategy concentrates on five key areas, including enhancing capability, integration, supporting sovereign capability, and increasing awareness of the space domain. The multi-billion-dollar projects JP9102 (satellite communications) and JP9360 (Space Domain Awareness), which are currently being assessed, are testaments to the government's commitment to sovereign space capability.
## The ultimate high ground
The new organisation will incorporate personnel from all three services (Army, Air Force, Navy), industry contractors, and public servants. The organisation will also work with other industry partners such as the Australian Space Agency and research organisations such as CSIRO.
“While space is primarily a civil domain – to support navigation, communication networks, financial systems, scientific enterprises, weather forecasting, and disaster response – it will undoubtedly become a domain which takes on greater military significance in this century,” Minister Dutton said.
“Space is the ultimate high ground. What we see from space gives us an unsurpassed advantage in surveillance and intelligence. It is central to how we will fight and win in the future across multi-domain operations, using advanced hypersonics, precision strike missiles, and guided weapons,” added Air Vice-Marshal Roberts.
As head of Space Command, Air Vice-Marshal Roberts will lead a team of over 100 and will look to use her role to “increase the national understanding” of the importance of the space domain. However, there is some concern that setting up a Space Command will be seen as a provocative act, and that we should not be viewing space as a warfighting domain. Last year the federal government announced they would be investing \$7 billion in space capabilities over the next 10 years, with the latest update suggesting that \$17 billion will be invested by 2036 to address key gaps in space capability.
With the increasing commercialisation of space, space assets are going to become more critical, and as such nations will need to take a more proactive approach to managing their space industry. However, there is an increasing discussion around space and military/contested activities that perhaps is drowning out the discussion about collaboration and research.
With the space strategy report suggesting that the new division will “identify the space capabilities of greatest national importance and help to shape Australian industry focus” we hope that we can continue to see space as more than just the “ultimate high ground”.
# FRAME work ( statics)
1. May 8, 2012
### stupif
1. every member is 0.45m and all the angle is 45degree....
C is fixed support, D is roller support
3. Ʃfx = 0 , Cx = 0
Ʃfy = 0 , Cy + Dy = 20N
ƩMc = 0 , 20(0.45) + Dy(0.45) = 0
Dy = -20N
OR
Ʃfx = 0 , Cx - Dx = 0
Cx = Dx
Ʃfy = 0, Cy -20N = 0
Cy = 20N
ƩMc = 0 , 20(0.45) + Dx(0.45) = 0
Dx= -20N
may i know which one is correct??
2. May 8, 2012
### PhanthomJay
The second one is correct.
I think you meant to say that C is pinned (it can take forces in the x and y directions), and that D is a roller support (it can only take a load in one direction at a right angle to the rollers (in the negative x direction in this case)).
3. May 8, 2012
### stupif
i dont understand about roller support there....
how to know the force is in which direction?
i thought the roller support 's force is always oppose to the force applied(20N)......
thank you
4. May 8, 2012
### PhanthomJay
No, not always, and this problem is an example: the roller must rest on the side wall, not the floor; otherwise, the frame would be unstable, since there would be no way to balance the horizontal force at the other support. When you assumed this in your first incorrect solution, you made an error when summing moments about C: you said ƩMc = 0, 20(0.45) + Dy(0.45) = 0, but a Dy force cannot produce a moment about C (there is no moment arm), so your equation should have said 20(0.45) = 0, which is nonsensical, and therefore incorrect.
5. May 9, 2012
### stupif
ok.....i got it...thank you
now these are my answer......could you help me check and are they correct?
C joint
(vertical)
Cy - Fce sin45 = 0
20 - Fce sin 45= 0
Fce = 28.28N
(horizontal)
Cx + Fbc - Fce cos 45 = 0
-20 + Fbc - 28.28cos 45 = 0
Fbc= 40N
joint D(horizontal)
Dx = Fde
Fde = -20N
joint E(horizontal)
Fde + Fce cos45 -Fae = 0
-20 + 28.28cos45 - Fae = 0
Fae = 0
(vertical)
Fbe + Fce cos45 -20 = 0
Fbe = 0
joint A
Fab =0
i know they are wrong....but i dont know which one is wrong and how come will be wrong.........help me....i'm lost in statics!!! thanks
6. May 9, 2012
### PhanthomJay
Fce = 28.28N: Yes, this value is correct, good work, but you must also indicate whether it is tension force or compression force. Member tension forces always pull away from the joints on which they act, and member compression forces always push toward the joint on which they act.
Fbc = 40N: No, this is not correct. You have to be very careful of force and force reaction directions and the use of the plus and minus sign. Which way does Cx reaction force on the joint point, to the left or right, and is it thus plus or minus? You can sum moments about D to get its direction.
Fde = -20N: Yes, tension or compression?
Fae = 0: excellent, now you are cookin'
Fbe = 0: Yeahhh, buddy!
Fab = 0: Yes, again!
Actually, you did yourself proud...the only errors are whether the forces are T or C, and the force in BC was wrong. Try that one again.
7. May 9, 2012
### stupif
thank you....
Fbc = 20N??
Fbc = Fce cos 45
Fbc = 20N
but why just now what i did is wrong? i feel like is correct because
Fx = 0
Fx contain Cx, Fbc, Fce cos 45, that's why i formed a equation.
Fab = tension
Fae= tension
Fbe = compression
Fde= compression
Fce= tension
Fbc = compression
8. May 9, 2012
### PhanthomJay
Yes, but you formed it wrong. You wrote: -20 + Fbc - 28.28 cos 45 = 0,
when you should have written, per my earlier hint on signage:
$C_x + F_{bc} - F_{ce} cos 45 = 0$
$+20 + F_{bc} - F_{ce} cos 45 = 0$
Now solve for F_bc. The plus and minus sign will bite you every time if you let it. Do you see your error?
Fab, Fae, Fbe: but these you have identified as all zero force members, neither in tension nor compression
Fde = compression: Yes!
Fce = tension: Yes!
Fbc = compression: NO!
You only got 2 out of 6 correct
(I need to work on my LateX)
9. May 9, 2012
### stupif
Fbc should be tension.....i see it.......
my understanding about zero force members......
zero force members are a member which cannot carry load. hence there is no force in these members. these members do not have compression force and tension force.
the existence of zero force members are to stabilise other members.
my understanding is correct or no?
10. May 9, 2012
### PhanthomJay
NO, that is not right. Did you work out the equation?
+20 + F_{bc} - F_{ce} cos 45 = 0
+ 20 + F_{bc} - 20 = 0
F_{bc} = ____???___
Not quite. The zero force members in this problem are not stabilizing anything. You can take them out of the frame as if they didn't exist, and it doesn't change the solution or stability in any way.
11. May 9, 2012
### stupif
Fbc = 0N??
is not tension and compression...
then why this phenomenon happenned? i mean zero force members.....
12. May 10, 2012
### PhanthomJay
They only have no force in them because of the way the truss is loaded with only a force applied at E... If a load was applied downward at A, all the members would have forces in them. In other words, for the given problem and loading, the members are not needed, they are just extras in case some day you wished to apply a load at A... right now, they just go along for a ride .....
13. May 10, 2012
### stupif
but when i doing this in lab, my result from lab is quite big difference to the theoretical result. the aim of the lab is to determine the forces in members and the model is same as the diagram. the force applied is also same. What is the possible source of errors?
Do we need to consider the weight of the members?
14. May 10, 2012
### PhanthomJay
My assumption was that the weight of the members could be neglected. However, if the member weights are significant (and they might be since you are only applying a 20N load), then yes, you would have to consider their weight. For the purposes of the calculation, you may assume each member's weight is distributed 1/2 to each of its end joints, and applied as a load at those joints.
15. May 10, 2012
### stupif
thank you very much......you help me a lot......appreciate your help.......(bow)
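Recap of the numbers established in this thread (magnitudes only, with the 20N load applied at E; the signs depend on the directions assumed): the reactions are $C_y = 20N$ and horizontal reactions of magnitude $20N$ at both C and D, and the member forces are $F_{CE} \approx 28.3N$ (tension), $F_{DE} = 20N$ (compression), with $F_{AB} = F_{AE} = F_{BE} = F_{BC} = 0$ for this loading.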
# 20 Vectors
## 20.1 Introduction
So far this book has focussed on tibbles and packages that work with them. But as you start to write your own functions, and dig deeper into R, you need to learn about vectors, the objects that underlie tibbles. If you’ve learned R in a more traditional way, you’re probably already familiar with vectors, as most R resources start with vectors and work their way up to tibbles. I think it’s better to start with tibbles because they’re immediately useful, and then work your way down to the underlying components.
Vectors are particularly important as most of the functions you will write will work with vectors. It is possible to write functions that work with tibbles (like ggplot2, dplyr, and tidyr), but the tools you need to write such functions are currently idiosyncratic and immature. I am working on a better approach, https://github.com/hadley/lazyeval, but it will not be ready in time for the publication of the book. Even when complete, you’ll still need to understand vectors, it’ll just make it easier to write a user-friendly layer on top.
### 20.1.1 Prerequisites
The focus of this chapter is on base R data structures, so it isn’t essential to load any packages. We will, however, use a handful of functions from the purrr package to avoid some inconsistencies in base R.
library(tidyverse)
## 20.2 Vector basics
There are two types of vectors:
1. Atomic vectors, of which there are six types: logical, integer, double, character, complex, and raw. Integer and double vectors are collectively known as numeric vectors.
2. Lists, which are sometimes called recursive vectors because lists can contain other lists.
The chief difference between atomic vectors and lists is that atomic vectors are homogeneous, while lists can be heterogeneous. There’s one other related object: NULL. NULL is often used to represent the absence of a vector (as opposed to NA which is used to represent the absence of a value in a vector). NULL typically behaves like a vector of length 0. Figure 20.1 summarises the interrelationships.
Every vector has two key properties:
1. Its type, which you can determine with typeof().
typeof(letters)
#> [1] "character"
typeof(1:10)
#> [1] "integer"
2. Its length, which you can determine with length().
x <- list("a", "b", 1:10)
length(x)
#> [1] 3
Vectors can also contain arbitrary additional metadata in the form of attributes. These attributes are used to create augmented vectors which build on additional behaviour. There are three important types of augmented vector:
• Factors are built on top of integer vectors.
• Dates and date-times are built on top of numeric vectors.
• Data frames and tibbles are built on top of lists.
This chapter will introduce you to these important vectors from simplest to most complicated. You’ll start with atomic vectors, then build up to lists, and finish off with augmented vectors.
## 20.3 Important types of atomic vector
The four most important types of atomic vector are logical, integer, double, and character. Raw and complex are rarely used during a data analysis, so I won’t discuss them here.
### 20.3.1 Logical
Logical vectors are the simplest type of atomic vector because they can take only three possible values: FALSE, TRUE, and NA. Logical vectors are usually constructed with comparison operators, as described in comparisons. You can also create them by hand with c():
1:10 %% 3 == 0
#> [1] FALSE FALSE TRUE FALSE FALSE TRUE FALSE FALSE TRUE FALSE
c(TRUE, TRUE, FALSE, NA)
#> [1] TRUE TRUE FALSE NA
### 20.3.2 Numeric
Integer and double vectors are known collectively as numeric vectors. In R, numbers are doubles by default. To make an integer, place an L after the number:
typeof(1)
#> [1] "double"
typeof(1L)
#> [1] "integer"
1.5L
#> [1] 1.5
The distinction between integers and doubles is not usually important, but there are two important differences that you should be aware of:
1. Doubles are approximations. Doubles represent floating point numbers that can not always be precisely represented with a fixed amount of memory. This means that you should consider all doubles to be approximations. For example, what is square of the square root of two?
x <- sqrt(2) ^ 2
x
#> [1] 2
x - 2
#> [1] 4.440892e-16
This behaviour is common when working with floating point numbers: most calculations include some approximation error. Instead of comparing floating point numbers using ==, you should use dplyr::near() which allows for some numerical tolerance.
2. Integers have one special value: NA, while doubles have four: NA, NaN, Inf and -Inf. All three special values NaN, Inf and -Inf can arise during division:
c(-1, 0, 1) / 0
#> [1] -Inf NaN Inf
Avoid using == to check for these other special values. Instead use the helper functions is.finite(), is.infinite(), and is.nan():
|                | 0 | Inf | NA | NaN |
|----------------|---|-----|----|-----|
| is.finite()    | x |     |    |     |
| is.infinite()  |   | x   |    |     |
| is.na()        |   |     | x  | x   |
| is.nan()       |   |     |    | x   |
### 20.3.3 Character
Character vectors are the most complex type of atomic vector, because each element of a character vector is a string, and a string can contain an arbitrary amount of data.
You’ve already learned a lot about working with strings in strings. Here I wanted to mention one important feature of the underlying string implementation: R uses a global string pool. This means that each unique string is only stored in memory once, and every use of the string points to that representation. This reduces the amount of memory needed by duplicated strings. You can see this behaviour in practice with pryr::object_size():
x <- "This is a reasonably long string."
pryr::object_size(x)
#> Registered S3 method overwritten by 'pryr':
#> method from
#> print.bytes Rcpp
#> 152 B
y <- rep(x, 1000)
pryr::object_size(y)
#> 8.14 kB
y doesn’t take up 1,000x as much memory as x, because each element of y is just a pointer to that same string. A pointer is 8 bytes, so 1000 pointers to a 152 B string is 8 * 1000 + 152 = 8.14 kB.
### 20.3.4 Missing values
Note that each type of atomic vector has its own missing value:
NA # logical
#> [1] NA
NA_integer_ # integer
#> [1] NA
NA_real_ # double
#> [1] NA
NA_character_ # character
#> [1] NA
Normally you don’t need to know about these different types because you can always use NA and it will be converted to the correct type using the implicit coercion rules described next. However, there are some functions that are strict about their inputs, so it’s useful to have this knowledge sitting in your back pocket so you can be specific when needed.
### 20.3.5 Exercises
1. Describe the difference between is.finite(x) and !is.infinite(x).
2. Read the source code for dplyr::near() (Hint: to see the source code, drop the ()). How does it work?
3. A logical vector can take 3 possible values. How many possible values can an integer vector take? How many possible values can a double take? Use google to do some research.
4. Brainstorm at least four functions that allow you to convert a double to an integer. How do they differ? Be precise.
5. What functions from the readr package allow you to turn a string into logical, integer, and double vector?
## 20.4 Using atomic vectors
Now that you understand the different types of atomic vector, it’s useful to review some of the important tools for working with them. These include:
1. How to convert from one type to another, and when that happens automatically.
2. How to tell if an object is a specific type of vector.
3. What happens when you work with vectors of different lengths.
4. How to name the elements of a vector.
5. How to pull out elements of interest.
### 20.4.1 Coercion
There are two ways to convert, or coerce, one type of vector to another:
1. Explicit coercion happens when you call a function like as.logical(), as.integer(), as.double(), or as.character(). Whenever you find yourself using explicit coercion, you should always check whether you can make the fix upstream, so that the vector never had the wrong type in the first place. For example, you may need to tweak your readr col_types specification.
2. Implicit coercion happens when you use a vector in a specific context that expects a certain type of vector. For example, when you use a logical vector with a numeric summary function, or when you use a double vector where an integer vector is expected.
Because explicit coercion is used relatively rarely, and is largely easy to understand, I’ll focus on implicit coercion here.
You’ve already seen the most important type of implicit coercion: using a logical vector in a numeric context. In this case TRUE is converted to 1 and FALSE converted to 0. That means the sum of a logical vector is the number of trues, and the mean of a logical vector is the proportion of trues:
x <- sample(20, 100, replace = TRUE)
y <- x > 10
sum(y) # how many are greater than 10?
#> [1] 38
mean(y) # what proportion are greater than 10?
#> [1] 0.38
You may see some code (typically older) that relies on implicit coercion in the opposite direction, from integer to logical:
if (length(x)) {
# do something
}
In this case, 0 is converted to FALSE and everything else is converted to TRUE. I think this makes it harder to understand your code, and I don’t recommend it. Instead be explicit: length(x) > 0.
It’s also important to understand what happens when you try and create a vector containing multiple types with c(): the most complex type always wins.
typeof(c(TRUE, 1L))
#> [1] "integer"
typeof(c(1L, 1.5))
#> [1] "double"
typeof(c(1.5, "a"))
#> [1] "character"
An atomic vector can not have a mix of different types because the type is a property of the complete vector, not the individual elements. If you need to mix multiple types in the same vector, you should use a list, which you’ll learn about shortly.
### 20.4.2 Test functions
Sometimes you want to do different things based on the type of vector. One option is to use typeof(). Another is to use a test function which returns a TRUE or FALSE. Base R provides many functions like is.vector() and is.atomic(), but they often return surprising results. Instead, it’s safer to use the is_* functions provided by purrr, which are summarised in the table below.
|                | lgl | int | dbl | chr | list |
|----------------|-----|-----|-----|-----|------|
| is_logical()   | x   |     |     |     |      |
| is_integer()   |     | x   |     |     |      |
| is_double()    |     |     | x   |     |      |
| is_numeric()   |     | x   | x   |     |      |
| is_character() |     |     |     | x   |      |
| is_atomic()    | x   | x   | x   | x   |      |
| is_list()      |     |     |     |     | x    |
| is_vector()    | x   | x   | x   | x   | x    |
### 20.4.3 Scalars and recycling rules
As well as implicitly coercing the types of vectors to be compatible, R will also implicitly coerce the length of vectors. This is called vector recycling, because the shorter vector is repeated, or recycled, to the same length as the longer vector.
This is generally most useful when you are mixing vectors and “scalars”. I put scalars in quotes because R doesn’t actually have scalars: instead, a single number is a vector of length 1. Because there are no scalars, most built-in functions are vectorised, meaning that they will operate on a vector of numbers. That’s why, for example, this code works:
sample(10) + 100
#> [1] 107 104 103 109 102 101 106 110 105 108
runif(10) > 0.5
#> [1] FALSE TRUE FALSE FALSE TRUE TRUE TRUE TRUE TRUE TRUE
In R, basic mathematical operations work with vectors. That means that you should never need to perform explicit iteration when performing simple mathematical computations.
It’s intuitive what should happen if you add two vectors of the same length, or a vector and a “scalar”, but what happens if you add two vectors of different lengths?
1:10 + 1:2
#> [1] 2 4 4 6 6 8 8 10 10 12
Here, R will expand the shortest vector to the same length as the longest, so called recycling. This is silent except when the length of the longer is not an integer multiple of the length of the shorter:
1:10 + 1:3
#> Warning in 1:10 + 1:3: longer object length is not a multiple of shorter object
#> length
#> [1] 2 4 6 5 7 9 8 10 12 11
While vector recycling can be used to create very succinct, clever code, it can also silently conceal problems. For this reason, the vectorised functions in tidyverse will throw errors when you recycle anything other than a scalar. If you do want to recycle, you’ll need to do it yourself with rep():
tibble(x = 1:4, y = 1:2)
#> Error: Tibble columns must have compatible sizes.
#> * Size 4: Existing data.
#> * Size 2: Column y.
#> ℹ Only values of size one are recycled.
tibble(x = 1:4, y = rep(1:2, 2))
#> # A tibble: 4 x 2
#> x y
#> <int> <int>
#> 1 1 1
#> 2 2 2
#> 3 3 1
#> 4 4 2
tibble(x = 1:4, y = rep(1:2, each = 2))
#> # A tibble: 4 x 2
#> x y
#> <int> <int>
#> 1 1 1
#> 2 2 1
#> 3 3 2
#> 4 4 2
### 20.4.4 Naming vectors
All types of vectors can be named. You can name them during creation with c():
c(x = 1, y = 2, z = 4)
#> x y z
#> 1 2 4
Or after the fact with purrr::set_names():
set_names(1:3, c("a", "b", "c"))
#> a b c
#> 1 2 3
Named vectors are most useful for subsetting, described next.
### 20.4.5 Subsetting
So far we’ve used dplyr::filter() to filter the rows in a tibble. filter() only works with tibbles, so we’ll need a new tool for vectors: [. [ is the subsetting function, and is called like x[a]. There are four types of things that you can subset a vector with:
1. A numeric vector containing only integers. The integers must either be all positive, all negative, or zero.
Subsetting with positive integers keeps the elements at those positions:
x <- c("one", "two", "three", "four", "five")
x[c(3, 2, 5)]
#> [1] "three" "two" "five"
By repeating a position, you can actually make a longer output than input:
x[c(1, 1, 5, 5, 5, 2)]
#> [1] "one" "one" "five" "five" "five" "two"
Negative values drop the elements at the specified positions:
x[c(-1, -3, -5)]
#> [1] "two" "four"
It’s an error to mix positive and negative values:
x[c(1, -1)]
#> Error in x[c(1, -1)]: only 0's may be mixed with negative subscripts
The error message mentions subsetting with zero, which returns no values:
x[0]
#> character(0)
This is not useful very often, but it can be helpful if you want to create unusual data structures to test your functions with.
2. Subsetting with a logical vector keeps all values corresponding to a TRUE value. This is most often useful in conjunction with the comparison functions.
x <- c(10, 3, NA, 5, 8, 1, NA)
# All non-missing values of x
x[!is.na(x)]
#> [1] 10 3 5 8 1
# All even (or missing!) values of x
x[x %% 2 == 0]
#> [1] 10 NA 8 NA
3. If you have a named vector, you can subset it with a character vector:
x <- c(abc = 1, def = 2, xyz = 5)
x[c("xyz", "def")]
#> xyz def
#> 5 2
Like with positive integers, you can also use a character vector to duplicate individual entries; a short example follows this list.
4. The simplest type of subsetting is nothing, x[], which returns the complete x. This is not useful for subsetting vectors, but it is useful when subsetting matrices (and other high dimensional structures) because it lets you select all the rows or all the columns, by leaving that index blank. For example, if x is 2d, x[1, ] selects the first row and all the columns, and x[, -1] selects all rows and all columns except the first.
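For example (the 2 × 3 matrix m below is just a throwaway object for illustration):
x <- c(abc = 1, def = 2, xyz = 5)
x[c("xyz", "xyz", "def")]   # character subscripts can repeat entries
#> xyz xyz def
#>   5   5   2
m <- matrix(1:6, nrow = 2)
m[1, ]    # first row, every column
#> [1] 1 3 5
m[, -1]   # every row, all columns except the first
#>      [,1] [,2]
#> [1,]    3    5
#> [2,]    4    6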
There is an important variation of [ called [[. [[ only ever extracts a single element, and always drops names. It’s a good idea to use it whenever you want to make it clear that you’re extracting a single item, as in a for loop. The distinction between [ and [[ is most important for lists, as we’ll see shortly.
### 20.4.6 Exercises
1. What does mean(is.na(x)) tell you about a vector x? What about sum(!is.finite(x))?
2. Carefully read the documentation of is.vector(). What does it actually test for? Why does is.atomic() not agree with the definition of atomic vectors above?
3. Compare and contrast setNames() with purrr::set_names().
4. Create functions that take a vector as input and returns:
1. The last value. Should you use [ or [[?
2. The elements at even numbered positions.
3. Every element except the last value.
4. Only even numbers (and no missing values).
5. Why is x[-which(x > 0)] not the same as x[x <= 0]?
6. What happens when you subset with a positive integer that’s bigger than the length of the vector? What happens when you subset with a name that doesn’t exist?
## 20.5 Recursive vectors (lists)
Lists are a step up in complexity from atomic vectors, because lists can contain other lists. This makes them suitable for representing hierarchical or tree-like structures. You create a list with list():
x <- list(1, 2, 3)
x
#> [[1]]
#> [1] 1
#>
#> [[2]]
#> [1] 2
#>
#> [[3]]
#> [1] 3
A very useful tool for working with lists is str() because it focusses on the structure, not the contents.
str(x)
#> List of 3
#>  $ : num 1
#>  $ : num 2
#>  $ : num 3
x_named <- list(a = 1, b = 2, c = 3)
str(x_named)
#> List of 3
#>  $ a: num 1
#>  $ b: num 2
#>  $ c: num 3
Unlike atomic vectors, list() can contain a mix of objects:
y <- list("a", 1L, 1.5, TRUE)
str(y)
#> List of 4
#>  $ : chr "a"
#>  $ : int 1
#>  $ : num 1.5
#>  $ : logi TRUE
Lists can even contain other lists!
z <- list(list(1, 2), list(3, 4))
str(z)
#> List of 2
#>  $ :List of 2
#>   ..$ : num 1
#>   ..$ : num 2
#>  $ :List of 2
#>   ..$ : num 3
#>   ..$ : num 4
### 20.5.1 Visualising lists
To explain more complicated list manipulation functions, it’s helpful to have a visual representation of lists. For example, take these three lists:
x1 <- list(c(1, 2), c(3, 4))
x2 <- list(list(1, 2), list(3, 4))
x3 <- list(1, list(2, list(3)))
I’ll draw them as follows:
There are three principles:
1. Lists have rounded corners. Atomic vectors have square corners.
2. Children are drawn inside their parent, and have a slightly darker background to make it easier to see the hierarchy.
3. The orientation of the children (i.e. rows or columns) isn’t important, so I’ll pick a row or column orientation to either save space or illustrate an important property in the example.
### 20.5.2 Subsetting
There are three ways to subset a list, which I’ll illustrate with a list named a:
a <- list(a = 1:3, b = "a string", c = pi, d = list(-1, -5))
• [ extracts a sub-list. The result will always be a list.
str(a[1:2])
#> List of 2
#>  $ a: int [1:3] 1 2 3
#>  $ b: chr "a string"
str(a[4])
#> List of 1
#>  $ d:List of 2
#>   ..$ : num -1
#>   ..$ : num -5
Like with vectors, you can subset with a logical, integer, or character vector.
• [[ extracts a single component from a list. It removes a level of hierarchy from the list.
str(a[[1]])
#>  int [1:3] 1 2 3
str(a[[4]])
#> List of 2
#>  $ : num -1
#>  $ : num -5
• $ is a shorthand for extracting named elements of a list. It works similarly to [[ except that you don’t need to use quotes.
a$a
#> [1] 1 2 3
a[["a"]]
#> [1] 1 2 3
The distinction between [ and [[ is really important for lists, because [[ drills down into the list while [ returns a new, smaller list. Compare the code and output above with the visual representation in Figure 20.2.
### 20.5.3 Lists of condiments
The difference between [ and [[ is very important, but it’s easy to get confused. To help you remember, let me show you an unusual pepper shaker. If this pepper shaker is your list x, then, x[1] is a pepper shaker containing a single pepper packet:
x[2] would look the same, but would contain the second packet. x[1:2] would be a pepper shaker containing two pepper packets.
x[[1]] is:
If you wanted to get the content of the pepper package, you’d need x[[1]][[1]]:
### 20.5.4 Exercises
1. Draw the following lists as nested sets:
   1. list(a, b, list(c, d), list(e, f))
   2. list(list(list(list(list(list(a))))))
2. What happens if you subset a tibble as if you’re subsetting a list? What are the key differences between a list and a tibble?
## 20.6 Attributes
Any vector can contain arbitrary additional metadata through its attributes. You can think of attributes as named list of vectors that can be attached to any object. You can get and set individual attribute values with attr() or see them all at once with attributes().
x <- 1:10
attr(x, "greeting")
#> NULL
attr(x, "greeting") <- "Hi!"
attr(x, "farewell") <- "Bye!"
attributes(x)
#> $greeting
#> [1] "Hi!"
#>
#> $farewell
#> [1] "Bye!"
There are three very important attributes that are used to implement fundamental parts of R:
1. Names are used to name the elements of a vector.
2. Dimensions (dims, for short) make a vector behave like a matrix or array.
3. Class is used to implement the S3 object oriented system.
You’ve seen names above, and we won’t cover dimensions because we don’t use matrices in this book. It remains to describe the class, which controls how generic functions work. Generic functions are key to object oriented programming in R, because they make functions behave differently for different classes of input. A detailed discussion of object oriented programming is beyond the scope of this book, but you can read more about it in Advanced R at http://adv-r.had.co.nz/OO-essentials.html#s3.
Here’s what a typical generic function looks like:
as.Date
#> function (x, ...)
#> UseMethod("as.Date")
#> <bytecode: 0x7fd372936678>
#> <environment: namespace:base>
The call to “UseMethod” means that this is a generic function, and it will call a specific method, a function, based on the class of the first argument. (All methods are functions; not all functions are methods). You can list all the methods for a generic with methods():
methods("as.Date")
#> [1] as.Date.character as.Date.default as.Date.factor
#> [4] as.Date.numeric as.Date.POSIXct as.Date.POSIXlt
#> [7] as.Date.vctrs_sclr* as.Date.vctrs_vctr*
#> see '?methods' for accessing help and source code
For example, if x is a character vector, as.Date() will call as.Date.character(); if it’s a factor, it’ll call as.Date.factor(). You can see the specific implementation of a method with getS3method():
getS3method("as.Date", "default")
#> function (x, ...)
#> {
#>     if (inherits(x, "Date"))
#>         x
#>     else if (is.null(x))
#>         .Date(numeric())
#>     else if (is.logical(x) && all(is.na(x)))
#>         .Date(as.numeric(x))
#>     else stop(gettextf("do not know how to convert '%s' to class %s",
#>         deparse1(substitute(x)), dQuote("Date")), domain = NA)
#> }
#> <bytecode: 0x7fd3747d4220>
#> <environment: namespace:base>
getS3method("as.Date", "numeric")
#> function (x, origin, ...)
#> {
#>     if (missing(origin)) {
#>         if (!length(x))
#>             return(.Date(numeric()))
#>         if (!any(is.finite(x)))
#>             return(.Date(x))
#>         stop("'origin' must be supplied")
#>     }
#>     as.Date(origin, ...) + x
#> }
#> <bytecode: 0x7fd374b10400>
#> <environment: namespace:base>
The most important S3 generic is print(): it controls how the object is printed when you type its name at the console. Other important generics are the subsetting functions [, [[, and $.
## 20.7 Augmented vectors
Atomic vectors and lists are the building blocks for other important vector types like factors and dates. I call these augmented vectors, because they are vectors with additional attributes, including class. Because augmented vectors have a class, they behave differently to the atomic vector on which they are built. In this book, we make use of four important augmented vectors:
• Factors
• Dates
• Date-times
• Tibbles
These are described below.
### 20.7.1 Factors
Factors are designed to represent categorical data that can take a fixed set of possible values. Factors are built on top of integers, and have a levels attribute:
x <- factor(c("ab", "cd", "ab"), levels = c("ab", "cd", "ef"))
typeof(x)
#> [1] "integer"
attributes(x)
#> $levels
#> [1] "ab" "cd" "ef"
#>
#> $class
#> [1] "factor"
### 20.7.2 Dates and date-times
Dates in R are numeric vectors that represent the number of days since 1 January 1970.
x <- as.Date("1971-01-01")
unclass(x)
#> [1] 365
typeof(x)
#> [1] "double"
attributes(x)
#> $class
#> [1] "Date"
Date-times are numeric vectors with class POSIXct that represent the number of seconds since 1 January 1970. (In case you were wondering, “POSIXct” stands for “Portable Operating System Interface”, calendar time.)
x <- lubridate::ymd_hm("1970-01-01 01:00")
unclass(x)
#> [1] 3600
#> attr(,"tzone")
#> [1] "UTC"
typeof(x)
#> [1] "double"
attributes(x)
#> $class
#> [1] "POSIXct" "POSIXt"
#>
#> $tzone
#> [1] "UTC"
The tzone attribute is optional. It controls how the time is printed, not what absolute time it refers to.
attr(x, "tzone") <- "US/Pacific"
x
#> [1] "1969-12-31 17:00:00 PST"
attr(x, "tzone") <- "US/Eastern"
x
#> [1] "1969-12-31 20:00:00 EST"
There is another type of date-times called POSIXlt. These are built on top of named lists:
y <- as.POSIXlt(x)
typeof(y)
#> [1] "list"
attributes(y)
#> $names
#> [1] "sec" "min" "hour" "mday" "mon" "year" "wday" "yday"
#> [9] "isdst" "zone" "gmtoff"
#>
#> $class
#> [1] "POSIXlt" "POSIXt"
#>
#> $tzone
#> [1] "US/Eastern" "EST" "EDT"
POSIXlts are rare inside the tidyverse. They do crop up in base R, because they are needed to extract specific components of a date, like the year or month. Since lubridate provides helpers for you to do this instead, you don’t need them. POSIXct’s are always easier to work with, so if you find you have a POSIXlt, you should always convert it to a regular date-time with lubridate::as_datetime().
### 20.7.3 Tibbles
Tibbles are augmented lists: they have class “tbl_df” + “tbl” + “data.frame”, and names (column) and row.names attributes:
tb <- tibble::tibble(x = 1:5, y = 5:1)
typeof(tb)
#> [1] "list"
attributes(tb)
#> $names
#> [1] "x" "y"
#>
#> $row.names
#> [1] 1 2 3 4 5
#>
#> $class
#> [1] "tbl_df"     "tbl"        "data.frame"
The difference between a tibble and a list is that all the elements of a data frame must be vectors with the same length. All functions that work with tibbles enforce this constraint.
Traditional data.frames have a very similar structure:
df <- data.frame(x = 1:5, y = 5:1)
typeof(df)
#> [1] "list"
attributes(df)
#> $names
#> [1] "x" "y"
#>
#> $class
#> [1] "data.frame"
#>
#> $row.names
#> [1] 1 2 3 4 5
The main difference is the class. The class of a tibble includes “data.frame”, which means tibbles inherit the regular data frame behaviour by default.
### 20.7.4 Exercises
1. What does hms::hms(3600) return? How does it print? What primitive type is the augmented vector built on top of? What attributes does it use?
2. Try and make a tibble that has columns with different lengths. What happens?
3. Based on the definition above, is it ok to have a list as a column of a tibble?
|
# How to get the center and the axes of an ellipse
Get the center and the semimajor/semiminor axes of the following ellipses:
$$x^2-6x+4y^2=16$$
$$2x^2 - 4x+3y^2+6y=7$$
How would one get these? I have no clue. I have a problem with merely rewriting these in the traditional ellipse equation.
-
## 2 Answers
$$x^2-6x+4y^2=16$$
$$(x-3)^2-9+4y^2=16$$
$$(x-3)^2+4y^2=25$$
$$\frac{(x-3)^2}{25}+\frac{4y^2}{25}=1$$
$$\frac{(x-3)^2}{5^2}+\frac{y^2}{(\frac {5}{2})^2}=1$$
The center is $O(3,0)$ and the semi-axes are $a=5$, $b=5/2$. For the second ellipse you can proceed similarly.
-
The second one is really tough, I am stuck at $2(x-1)^2 + 3(y+1)^2 =12$ Am I on the right path or not? – JohnPhteven Nov 2 '12 at 23:23
yes. then divide by 12 and go further – Adi Dani Nov 2 '12 at 23:28
Ok, I'll continue – JohnPhteven Nov 2 '12 at 23:29
$M(1,-1)$ and $a=\sqrt{6}$ and $b=2$? – JohnPhteven Nov 2 '12 at 23:31
that is correct – Adi Dani Nov 2 '12 at 23:34
I will do the first one: $$x^2 - 6 x + 4 y^2 = 16 \Rightarrow \left(x - 3\right)^2 + 4 y^2 = 25 \Rightarrow \frac{\left(x-3\right)^2}{5^2} + \frac{y^2}{\left(5/2\right)^2} = 1$$
Now compare with equation 12 here.
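For completeness, the second ellipse can be handled the same way (a sketch by completing the square in both variables, agreeing with the values confirmed in the comments above):
$$2x^2-4x+3y^2+6y=7 \Rightarrow 2(x-1)^2-2+3(y+1)^2-3=7 \Rightarrow 2(x-1)^2+3(y+1)^2=12$$
$$\frac{(x-1)^2}{6}+\frac{(y+1)^2}{4}=1,$$
so the center is $M(1,-1)$ and the semi-axes are $a=\sqrt{6}$ and $b=2$.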
-
|
## WPS flaw on routers allows WPA protected WIFI networks to be cracked
This post discusses how one could use Reaver to exploit a flaw in WPS and recover a WPA password. The Tomato and DD-WRT firmwares don’t support WPS, so my network is safe. Lesson: buy and use a router that you can flash with Tomato or DD-WRT.
On another note, WEP passwords could also be compromised using BackTrack.
## OpenDNS on Tomato router for faster web experience
I saw this post and decided to use OpenDNS as my DNS server instead of my ISP’s. I followed these instructions to set it up. Basically, I activated a second DDNS service (Basic > DDNS) and entered my login information for OpenDNS. When activating it, I set it to update my dynamic IP and to use it as my DNS server. It was as simple as that.
## Google Voice on a telephone without a server
I already discussed how one can make use of Google Voice with Asterisk – the possibilities are limitless. However, all of this requires a server running Asterisk. I recently explored other ways to use Google Voice (or other VoIP services) without a server.
Since I own routers running Tomato and DD-WRT, I can exploit Optware to run Asterisk on an embedded device. Installation is quite easy. You can buy a cheap router like the Asus WL-520GU to get things going. However, it might be kind of slow for Asterisk. I own an Asus RT-N16, which, based on my readings, has plenty of power for Asterisk. However, I only want to use the router as a router, dedicated to that one task, to keep a stable home network. I don’t want to run Asterisk or an embedded web server, for the sake of stability. Still, knowing I have that option feels quite good.
I recently discovered the OBi100 and OBi110 ATAs, released in late 2010, which can connect to Google Voice (and other SIP providers) natively. Based on this review and the reviews on Amazon, the product seems quite good. I went ahead and ordered the OBi110 to try it out, and I might post an update once I do.
Setup is outlined here. The drawback with GV is the inability to dial 911 in an emergency. The end of the post illustrates how you can get around this. I called my Verizon home phone service and confirmed that once the line is disconnected, 911 service is not retained. I might pay for another VoIP service with E911 (local 911 operator + transmission of phone number and physical address) capabilities just for peace of mind, even though we all own cell phones. Another possibility is routing 911 to the local police station, but then E911 capabilities will not be available.
I just might port my home phone number to GV soon.
## Be on my home network when I’m away from home via OpenVPN
In my previous employments, I remember co-workers having to use a VPN when they worked from home. They could access everything at the company as if they were physically on-site. I haven’t tried configuring it on my home network since, if I ever needed anything, I ssh’d into my home NAS and grabbed stuff from there. I guess a VPN can be useful in that everything I do on the remote machine will seem like I’m at home, meaning all my mounted directories on the NAS, access to the router, etc., are available while I’m away.
I’ve been wanting to play around with VPN for a while since I know both DD-WRT and Tomato routers have OpenVPN bundled in them.
Instructions are clearly documented at the USB Tomato wiki (look here to get the easy-rsa files in newer versions (14.04) of Ubuntu). Note that when pasting stuff into the web browser, include the BEGIN and END lines. Also note that in order to generate the files, you have to do so as root; sudo doesn’t cut it. On Ubuntu, do sudo -i to get a root shell (like su).
Keep the generated files in a safe place. The files that I keep on my laptop (client) to VPN into my home network are ca.crt, Client1.crt, and Client1.key. Then create this Client1 file:
##########################################
# Tomato OpenVPN client configuration
##########################################
# The hostname/IP and port of the server. You can have multiple remote entries to load balance between the servers.
remote server.dyndns.org 1194
# Specify that we are a client and that we will be pulling certain config file directives from the server.
client
ns-cert-type server
# On most systems, the VPN will not function unless you partially or fully disable the firewall for the TUN/TAP interface.
dev tun21
# Are we connecting to a TCP or UDP server?
proto udp
# Keep trying indefinitely to resolve the host name of the OpenVPN server. Useful for machines which are not permanently connected to the internet such as laptops.
resolv-retry infinite
# Most clients don't need to bind to a specific local port number.
nobind
# The persist options will try to avoid accessing certain resources on restart that may no longer be accessible because of the privilege downgrade.
persist-key
persist-tun
float
# SSL/TLS parms.
ca ca.crt
cert Client1.crt
key Client1.key
# Enable compression on the VPN link.
comp-lzo
# Set log file verbosity.
;verb 3
# Silence repeating messages
mute 20
When I need to VPN, just do
sudo openvpn Client1 ## do this in directory where the 3 files are stored
Thank you open source community!
I wanted to add a password feature to my VPN since I’m afraid someone might get access to my key files. I asked how to do so on the Tomato forum, and was referred to this post. It is quite easy to implement. 10/25/2014: Did more research to see if it’s better to implement a passphrase for the key instead of what I implemented before, but this post confirms that the auth-user-pass-verify method is indeed the recommended way to implement authentication.
In the tomato web config, add the following:
echo '#!/bin/sh
user1="user1name"
pass1="user1pass"
test "$user1" = "${username}" && test "$pass1" = "${password}" && exit 0
exit 1' > /tmp/quickAuth.sh
chmod 755 /tmp/quickAuth.sh
Restart the router or, better yet, execute the above code on the “System” page under “Tools”.
Under the “Advanced” tab on the VPN Server page, enter the following under “Custom Configuration”:
script-security 3
auth-user-pass-verify /tmp/quickAuth.sh via-env
Now, on my Client1 file above, add the line auth-user-pass somewhere (I placed it after comp-lzo).
Now when I vpn to the network, I have to enter a username and password. This is awesome.
## UPDATE 1/1/2011: Issue with PeerGuardian/MoBlock
I have issues connecting to a computer on the local network through OpenVPN. See this post for more details. To connect to it, just turn off PeerGuardian (sudo pglcmd stop).
## UPDATE 10/6/2011: Channel all internet traffic through VPN
The above method allows me to access computers on my home network. To direct all internet traffic from my current device to the VPN network (so that the IP the world would see is the VPN’s network), check the Direct clients to redirect Internet traffic checkbox in the Advanced Tab when setting up VPN in Tomato (according to this post). That way, I can use the internet securely when on a public network. I will only turn this feature on when I truly need it.
Unfortunately, DNS names don’t resolve (only IP addresses work). I sought help here and obtained a solution there and here. To fix the DNS issue, I added the following three lines to the end of the client config we created earlier:
script-security 2
up /etc/openvpn/update-resolv-conf
down /etc/openvpn/update-resolv-conf
push "dhcp-option DNS 8.8.8.8"
to the “custom configuration” field under the “Advance” tab of the VPN server page on Tomato. The latter just says to use Google’s DNS server.
## UPDATE 10/25/2014: Use on Android
I can VPN on Android via the OpenVPN app. It should work after I copy all my files (Client1.ovpn, Client1.key, Client1.crt, and ca.crt) into a single directory on Android and import Client1.ovpn in the app. However, I don’t want to leave my keys on my phone like that for security reasons, so the Help file in the OpenVPN app suggests creating a pkcs12 file and adding that to the Android keychain. To do so, first remove the 3 lines referencing Client1.key, Client1.crt, and ca.crt in the Client1.ovpn file. Import this ovpn file instead. On Linux, do
openssl pkcs12 -export -in Client1.crt -inkey Client1.key -certfile ca.crt -out Client1.p12
to generate Client1.p12. Enter an export password (it will be asked for when importing into the Android keychain). Transfer the file to the phone and import it via the OpenVPN app (so only the Client1.ovpn and Client1.p12 files are needed); enter the export password. Now one should be able to connect to the VPN after entering the username and password from the auth-user-pass-verify method. This is cool!
## Tomato on Asus RT-N16 router
Recently I’ve been playing with DD-WRT as my firmware of choice for my main router at home and the one I use as a wireless bridge. I recently purchased an Asus RT-N16 for a variety of reasons:
1. Gigabit ethernet,
2. DD-WRT,
3. 2 usb ports (for NAS and printers),
4. Wireless N, and
5. Great with bittorrent.
Reason 1 was the main reason I wanted a new router, since I have a NAS connected to it via ethernet, and I plan on getting an HTPC soon (connected either wirelessly or through ethernet) and/or some net-top boxes that can connect to the NAS (I’m tired of copying things to USB). Reason 3 wasn’t too much of a concern anymore since I recently bought an Acer NAS with Ubuntu Server loaded on it (this deserves its own post). I’ve been hearing about this thing called Tomato that is supposedly even better than DD-WRT. I’ve been wanting to try it, especially since it is supposed to work well on the Asus router and to get USB support (I don’t think USB is supported in DD-WRT, but that’s a guess, since DD-WRT is great and has a large community supporting it). I decided to load this (currently beta) mod of Tomato (don’t use this one since it does not support the RT-N16). I had trouble loading it after flashing the router to DD-WRT; it turns out I needed an exact version of DD-WRT loaded first. Follow this guide to get it going.
Note: I had a problem getting wireless working with my MacBook. Things worked when I flashed the OpenVPN version of Tomato with TKIP/AES encryption in WPA/WPA2 (I think this part is the answer).
Also: to do a factory reset (erase NVRAM?) on the Asus, all I have to do is unplug the router, press and hold the WPS button, plug the router back in, and release the WPS button. I don’t think I have to do the 30-30-30 reset (I don’t even know if that works on here).
|
# If m and n are positive integers is m/n an integer?
Manager
Joined: 13 Oct 2013
Posts: 136
Concentration: Strategy, Entrepreneurship
If m and n are positive integers is m/n an integer? [#permalink]
13 Dec 2014, 13:55
If m and n are positive integers is m/n an integer?
(1) m is a multiple of 14
(2) n is a divisor of 14
how c is the answer ?
say m =14, n=28.
m/n=1/2, am i missing anything?
M01-06
Intern
Joined: 10 Feb 2013
Posts: 14
Location: India
Concentration: General Management, Finance
GMAT Date: 05-16-2015
GPA: 3.67
WE: Programming (Computer Software)
Re: If m and n are positive integers is m/n an integer? [#permalink]
13 Dec 2014, 15:46
sunita123 wrote:
If m and n are positive integers is m/n an integer?
(1) m is a multiple of 14
(2) n is a divisor of 14
how c is the answer ?
say m =14, n=28.
m/n=1/2, am i missing anything?
Statement 2 says n is a divisor of 14, so n can only be 1, 2, 7, or 14; you treated n as a multiple of 14 (n = 28), which is why you got a fraction.
For example, write m = 14a and n = 14/a, where a is a positive integer dividing 14; then m/n = a², which is an integer. I hope you got it!
Manager
Joined: 13 Oct 2013
Posts: 136
Concentration: Strategy, Entrepreneurship
Re: If m and n are positive integers is m/n an integer? [#permalink]
13 Dec 2014, 15:53
ah yes my bad... Thank you!!
jayanth7290 wrote:
sunita123 wrote:
If m and n are positive integers is m/n an integer?
(1) m is a multiple of 14
(2) n is a divisor of 14
how c is the answer ?
say m =14, n=28.
m/n=1/2, am i missing anything?
Statement 2 says n is a divisor of 14, so n can only be 1, 2, 7, or 14; you treated n as a multiple of 14 (n = 28), which is why you got a fraction.
For example, write m = 14a and n = 14/a, where a is a positive integer dividing 14; then m/n = a², which is an integer. I hope you got it!
Math Expert
Joined: 02 Sep 2009
Posts: 54369
Re: If m and n are positive integers is m/n an integer? [#permalink]
15 Dec 2014, 07:51
If $$m$$ and $$n$$ are positive integers is $$\frac{m}{n}$$ an integer?
(1) $$m$$ is a multiple of 14. Not sufficient as no info about $$n$$.
(2) $$n$$ is a divisor of 14. Not sufficient as no info about $$m$$.
(1)+(2) As from (2) $$n$$ is a divisor of 14 then it must be a divisor of every multiple of 14, therefore it's a divisor of $$m$$ too. Sufficient.
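A quick brute-force check (just a sketch for intuition, not part of the official solution) confirms the combined statements:

```python
# Check: if m is a positive multiple of 14 and n is a divisor of 14, is m/n always an integer?
divisors_of_14 = [d for d in range(1, 15) if 14 % d == 0]    # [1, 2, 7, 14]
multiples_of_14 = [14 * k for k in range(1, 101)]            # first 100 positive multiples

assert all(m % n == 0 for m in multiples_of_14 for n in divisors_of_14)
print("m/n was an integer in every tested case")
```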
M01-06
Director
Joined: 12 Nov 2016
Posts: 725
Location: United States
Schools: Yale '18
GMAT 1: 650 Q43 V37
GRE 1: Q157 V158
GPA: 2.66
Re: If m and n are positive integers is m/n an integer? [#permalink]
17 Jul 2017, 20:14
Bunuel wrote:
If $$m$$ and $$n$$ are positive integers is $$\frac{m}{n}$$ an integer?
(1) $$m$$ is a multiple of 14. Not sufficient as no info about $$n$$.
(2) $$n$$ is a divisor of 14. Not sufficient as no info about $$m$$.
(1)+(2) As from (2) $$n$$ is a divisor of 14 then it must be a divisor of every multiple of 14, therefore it's a divisor of $$m$$ too. Sufficient.
M01-06
This question asks: is m a multiple of n?
St 1
m is a multiple of 14 – obviously insufficient, no info about n. We just know m / (7 × 2) = some integer, therefore m = 7 × 2 × k for some integer k.
St 2
n is a divisor of 14, which just means n is a factor of 14 – obviously insufficient, no info about m.
St 1 and St 2
(7 × 2 × k) / (any factor of 14) = integer; for example (7 × 2 × k) / 14 = integer, as Bunuel explained.
C
|
# Magnetic Field Strength: Force on a Moving Charge in a Magnetic Field
### Learning Objectives
By the end of this section, you will be able to:
• Describe the effects of magnetic fields on moving charges.
• Use right hand rule 1 to relate the direction of a charge’s velocity, the direction of the magnetic field, and the direction of the magnetic force on the moving charge.
• Calculate the magnetic force on a moving charge.
What is the mechanism by which one magnet exerts a force on another? The answer is related to the fact that all magnetism is caused by current, the flow of charge. Magnetic fields exert forces on moving charges, and so they exert forces on other magnets, all of which have moving charges.
## Right Hand Rule 1
The magnetic force on a moving charge is one of the most fundamental known. Magnetic force is as important as the electrostatic or Coulomb force. Yet the magnetic force is more complex, in both the number of factors that affects it and in its direction, than the relatively simple Coulomb force. The magnitude of the magnetic force F on a charge q moving at a speed v in a magnetic field of strength B is given by
F = qvB sin θ,
where θ is the angle between the directions of v and B. This force is often called the Lorentz force. In fact, this is how we define the magnetic field strength B—in terms of the force on a charged particle moving in a magnetic field. The SI unit for magnetic field strength B is called the tesla (T) after the eccentric but brilliant inventor Nikola Tesla (1856–1943). To determine how the tesla relates to other SI units, we solve F = qvB sin θ for B.
$B=\frac{F}{qv \sin\theta}\\$
Because sin θ is unitless, the tesla is
$1\text{ T}=\frac{1\text{ N}}{\text{ C}\cdot\text{ m/s}}=\frac{1\text{ N}}{\text{A}\cdot\text{ m}}\\$
(note that C/s = A). Another smaller unit, called the gauss (G), where 1 G = 10−4 T, is sometimes used. The strongest permanent magnets have fields near 2 T; superconducting electromagnets may attain 10 T or more. The Earth’s magnetic field on its surface is only about 5 × 10−5 T, or 0.5 G.
The direction of the magnetic force F is perpendicular to the plane formed by v and B, as determined by the right hand rule 1 (or RHR-1), which is illustrated in Figure 1. RHR-1 states that, to determine the direction of the magnetic force on a positive moving charge, you point the thumb of the right hand in the direction of v, the fingers in the direction of B, and a perpendicular to the palm points in the direction of F. One way to remember this is that there is one velocity, and so the thumb represents it. There are many field lines, and so the fingers represent them. The force is in the direction you would push with your palm. The force on a negative charge is in exactly the opposite direction to that on a positive charge.
Figure 1. Magnetic fields exert forces on moving charges. This force is one of the most basic known. The direction of the magnetic force on a moving charge is perpendicular to the plane formed by v and B and follows right hand rule–1 (RHR-1) as shown. The magnitude of the force is proportional to q, v, B, and the sine of the angle between v and B.
### Making Connections: Charges and Magnets
There is no magnetic force on static charges. However, there is a magnetic force on moving charges. When charges are stationary, their electric fields do not affect magnets. But, when charges move, they produce magnetic fields that exert forces on other magnets. When there is relative motion, a connection between electric and magnetic fields emerges—each affects the other.
### Example 1. Calculating Magnetic Force: Earth’s Magnetic Field on a Charged Glass Rod
With the exception of compasses, you seldom see or personally experience forces due to the Earth’s small magnetic field. To illustrate this, suppose that in a physics lab you rub a glass rod with silk, placing a 20-nC positive charge on it. Calculate the force on the rod due to the Earth’s magnetic field, if you throw it with a horizontal velocity of 10 m/s due west in a place where the Earth’s field is due north parallel to the ground. (The direction of the force is determined with right hand rule 1 as shown in Figure 2.)
Figure 2. A positively charged object moving due west in a region where the Earth’s magnetic field is due north experiences a force that is straight down as shown. A negative charge moving in the same direction would feel a force straight up.
#### Strategy
We are given the charge, its velocity, and the magnetic field strength and direction. We can thus use the equation F = qvB sin θ to find the force.
#### Solution
The magnetic force is
F = qvB sin θ
We see that sin θ = 1, since the angle between the velocity and the direction of the field is 90º. Entering the other given quantities yields
$\begin{array}{lll}F& =& \left(20\times{10}^{-9}\text{ C}\right)\left(10\text{ m/s}\right)\left(5\times{10}^{-5}\text{ T}\right)\\ & =& 1\times {10}^{-11}\left(\text{C}\cdot\text{ m/s}\right)\left(\frac{N}{\text{ C}\cdot \text{ m/s}}\right)=1\times {10}^{-11}\text{ N}\end{array}\\$
.
#### Discussion
This force is completely negligible on any macroscopic object, consistent with experience. (It is calculated to only one digit, since the Earth’s field varies with location and is given to only one digit.) The Earth’s magnetic field, however, does produce very important effects, particularly on submicroscopic particles. Some of these are explored in Force on a Moving Charge in a Magnetic Field: Examples and Applications.
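The same numbers can be checked with the vector form F = q v × B, which also gives the direction predicted by RHR-1. The snippet below is only a sketch; the coordinate convention (x = east, y = north, z = up) is an assumption made here, not part of the example.

```python
import numpy as np

# Example 1 values: 20 nC charge, 10 m/s due west, Earth's field 5e-5 T due north.
q = 20e-9                          # charge in coulombs
v = np.array([-10.0, 0.0, 0.0])    # m/s, due west (negative x = west)
B = np.array([0.0, 5e-5, 0.0])     # T, due north (positive y)

F = q * np.cross(v, B)             # magnetic part of the Lorentz force
print(F)                           # [ 0.  0. -1e-11] -> about 1 x 10^-11 N, pointing straight down
print(np.linalg.norm(F))           # magnitude agrees with F = qvB sin(90 deg)
```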
## Section Summary
• Magnetic fields exert a force on a moving charge q, the magnitude of which is
F = qvB sin θ,
where θ is the angle between the directions of v and B.
• The SI unit for magnetic field strength B is the tesla (T), which is related to other units by
$1\text{ T}=\frac{1\text{ N}}{\text{ C}\cdot\text{ m/s}}=\frac{1\text{ N}}{\text{A}\cdot\text{ m}}\\$
• The direction of the force on a moving charge is given by right hand rule 1 (RHR-1): Point the thumb of the right hand in the direction of v, the fingers in the direction of B, and a perpendicular to the palm points in the direction of F.
• The force is perpendicular to the plane formed by v and B. Since the force is zero if v is parallel to B, charged particles often follow magnetic field lines rather than cross them.
### Conceptual Questions
1. If a charged particle moves in a straight line through some region of space, can you say that the magnetic field in that region is necessarily zero?
### Problems & Exercises
1. What is the direction of the magnetic force on a positive charge that moves as shown in each of the six cases shown in Figure 3?
2. Repeat Exercise 1 for a negative charge.
3. What is the direction of the velocity of a negative charge that experiences the magnetic force shown in each of the three cases in Figure 4, assuming it moves perpendicular to B?
4. Repeat Figure 4 for a positive charge.
5. What is the direction of the magnetic field that produces the magnetic force on a positive charge as shown in each of the three cases in the figure below, assuming B is perpendicular to v?
6. Repeat Exercise 5 for a negative charge.
7. What is the maximum force on an aluminum rod with a 0.100-μC charge that you pass between the poles of a 1.50-T permanent magnet at a speed of 5.00 m/s? In what direction is the force?
8. (a) Aircraft sometimes acquire small static charges. Suppose a supersonic jet has a 0.500-μC charge and flies due west at a speed of 660 m/s over the Earth’s south magnetic pole, where the 8.00 × 10−5-T magnetic field points straight up. What are the direction and the magnitude of the magnetic force on the plane? (b) Discuss whether the value obtained in part (a) implies this is a significant or negligible effect.
9. (a) A cosmic ray proton moving toward the Earth at 5.00 × 107 m/s experiences a magnetic force of 1.70 × 10−16 N. What is the strength of the magnetic field if there is a 45º angle between it and the proton’s velocity? (b) Is the value obtained in part (a) consistent with the known strength of the Earth’s magnetic field on its surface? Discuss.
10. An electron moving at 4.00 × 103 m/s in a 1.25-T magnetic field experiences a magnetic force of 1.40 × 10−16 N. What angle does the velocity of the electron make with the magnetic field? There are two answers.
11. (a) A physicist performing a sensitive measurement wants to limit the magnetic force on a moving charge in her equipment to less than 1.00 × 10−12 N. What is the greatest the charge can be if it moves at a maximum speed of 30.0 m/s in the Earth’s field? (b) Discuss whether it would be difficult to limit the charge to less than the value found in (a) by comparing it with typical static electricity and noting that static is often absent.
## Glossary
right hand rule 1 (RHR-1):
the rule to determine the direction of the magnetic force on a positive moving charge: when the thumb of the right hand points in the direction of the charge’s velocity v and the fingers point in the direction of the magnetic field B, then the force on the charge is perpendicular and away from the palm; the force on a negative charge is perpendicular and into the palm
Lorentz force:
the force on a charge moving in a magnetic field
tesla:
T, the SI unit of the magnetic field strength;
$1\text{ T}=\frac{1 \text{ N}}{\text{ A}\cdot \text{ m}}\\$
magnetic force:
the force on a charge produced by its motion through a magnetic field; the Lorentz force
gauss:
G, the unit of the magnetic field strength; 1 G = 10−4 T
### Selected Solutions to Problems & Exercises
1. (a) Left (West) (b) Into the page (c) Up (North) (d) No force (e) Right (East) (f) Down (South)
3. (a) East (right) (b) Into page (c) South (down)
5. (a) Into page (b) West (left) (c) Out of page
7. 7.50 × 10−7 N perpendicular to both the magnetic field lines and the velocity
9. (a) 3.01 × 10−5 T (b) This is slightly less than the magnetic field strength of 5 × 10−5 T at the surface of the Earth, so it is consistent.
11. (a) 6.67 × 10−10 C (taking the Earth’s field to be 5.00 × 10−5 T) (b) Less than typical static, therefore difficult
|
# Square Root of -10077696
The square root of -10077696 is the number, which multiplied by itself 2 times, is -10077696. In other words, this number to the power of 2 equals -10077696.
Besides the complex values of \sqrt[2]{-10077696} along with an explanation, on this page you can also find what the elements of the square root of -10077696 are called.
In addition to the terminology, we have a calculator you don’t want to miss:
## Square Root Calculator
\sqrt[2]{-10077696} = \pm3174.538706647i
If you have been looking for the square root of negative ten million seventy-seven thousand six hundred ninety-six, then you are right here, too.
The term can be written as \sqrt[2]{-10077696} \hspace{3 mm}or\hspace{3 mm} -10077696^{1/2}.
As the index 2 is even and -10077696 is less than 0, -10077696 has two complex square roots \in \mathbb{C}:
\sqrt[2]{-10077696}, which is positive and called principal square root of -10077696, and −\sqrt[2]{-10077696}, which is negative.
Together, they are denominated as ±\sqrt[2]{-10077696}.
Although the principal square root of negative ten million seventy-seven thousand six hundred ninety-six is only one of the two square roots, the term “square root of -10077696” usually refers to the principal one.
Make sure to understand that -10077696 has no real square roots \in \mathbb{R}!
Next, we have a look at the inverse function.
### Inverse of Square Root of -10077696
Extracting the square root is the inverse operation of raising to the power 2:
\underbrace{ {\rm \sqrt[2]{-10077696} \times\thinspace ... \times\thinspace \sqrt[2]{-10077696}} }_{\rm 2 \thickspace times} = \sqrt[2]{-10077696}^{2}= -10077696
In the following paragraph, we are going to name the elements of this √.
## What is the Square Root of -10077696?
You already have the answer to that question, and you also know about the inverse operation of -10077696 square root.
Keep reading to learn what the parts are called.
• \sqrt[2]{-10077696} is the square root of -10077696 symbol
• 2 is the index
• -10077696 = radicand; the radicand is the number below the radical sign
• Square root = ±3174.538706647i
• √ is called radical symbol or radical only
Second root of -10077696 = ±3174.538706647i
As a sidenote: All values on this page have been rounded to ten decimal places.
Now you really know all about \sqrt[2]{-10077696}, including its values, parts and the inverse.
If you need to extract the 2nd root of any other real or complex number use our calculator above.
Simply insert the number of which you want to find the square root (e.g. -10077696); the calculation is done automatically.
If you like our information about \sqrt[2]{-10077696}, then a similar square root you may be interested in is, for example: square root of negative 3.
In the following table you can find some imaginary square roots
## Table
The aim of this table is to provide you with an overview of the complex square roots close to -10077696.
| Radicand | Square Root | Value |
|---|---|---|
| -10077700 | \sqrt[2]{-10077700} | ±3174.5393366597i |
| -10077699 | \sqrt[2]{-10077699} | ±3174.5391791566i |
| -10077698 | \sqrt[2]{-10077698} | ±3174.5390216534i |
| -10077697 | \sqrt[2]{-10077697} | ±3174.5388641502i |
| -10077696 | \sqrt[2]{-10077696} | ±3174.538706647i |
| -10077695 | \sqrt[2]{-10077695} | ±3174.5385491438i |
| -10077694 | \sqrt[2]{-10077694} | ±3174.5383916406i |
| -10077693 | \sqrt[2]{-10077693} | ±3174.5382341374i |
| -10077692 | \sqrt[2]{-10077692} | ±3174.5380766341i |
A few lines down from here we review the FAQs.
## Square Root of Negative Ten Million Seventy-Seven Thousand Six Hundred Ninety-Six
If you have been searching for what's the square root of negative ten million seventy-seven thousand six hundred ninety-six or 2nd root of -10077696, then you are reading the right post as well.
The same is true if you typed 2 root of -10077696 or -10077696 2 root in the search engine of your preference, just to name a few similar terms.
Right below you can find the frequently asked questions in the context.
### FAQs About the Square Root of -10077696
How Many Real Square Roots Does -10077696 Have?
-10077696 has no real square roots, because the radicand -10077696 is negative. However, -10077696 does have the two complex square roots ±3174.538706647i.
What to the Second Power Equals -10077696?
The square root of -10077696 to the power of 2 equals -10077696.
How Do You Find the Square Root of -10077696?
Start with an initial guess g such that g × g is approximately 10077696, then keep improving the guess until you have the required precision. Prepend ± to the value and append “i”.
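As a sketch of that procedure, the value can also be obtained directly in Python (the numbers in the comments are rounded):

```python
import cmath, math

x = -10077696
root = cmath.sqrt(x)        # principal complex square root
print(root)                 # approximately 3174.538706647j
print(root ** 2)            # approximately (-10077696+0j), up to floating-point rounding

# Equivalently: take the real square root of |x| and append "i".
print(math.sqrt(-x))        # approximately 3174.538706647
# Note: 10077696 = 6**9, so the exact value is 1296*sqrt(6).
```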
What is -10077696 to the Square Root?
-10077696 to the square root = -10077696^1/2 = ±3174.538706647i.
What Number is the Square Root of -10077696?
The square root of -10077696 = ±3174.538706647i.
How Do You Solve the Square Root of -10077696?
To compute the 2nd root of -10077696, use a calculator and/or employ the Newton–Raphson method on 10077696, then append “i”.
What is the Value of 2 Root -10077696?
The value of 2 root -10077696 is ±3174.538706647i.
Ahead is the wrap-up of our information.
## Summary
To sum up, the complex square roots of -10077696 are ±3174.538706647i.
Finding the second root of the number -10077696 is the inverse operation of raising \sqrt[2]{-10077696} to the power of 2. That is to say, (±3174.538706647i)² = -10077696.
Note that you can also locate roots like \sqrt[2]{-10077696} by means of the search form in the menu and in the sidebar of this site.
If our article about the square √ -10077696 has been useful to you, then press some of the share buttons, and make sure to install our app.
If you have any questions about the 2nd root of -10077696, fill in the comment form below.
Thanks for your visit.
Posted in Square Roots
|
# MRMR on CRAN
September 27, 2013
By
(This article was first published on PirateGrunt » R, and kindly contributed to R-bloggers)
MRMR version 0.1.3 is now available on CRAN. This is (almost) the same version that was discussed at the CLRS two weeks ago.
MRMR – Multivariate Regression Models for Reserving – is a tool for non-life actuaries to estimate liability reserves. The emphasis is on exploratory data analysis, visualization and model diagnostics. At present, the framework is a linear model with a normal error term. A weighting parameter may be used to account for heteroskedasticity in the error terms.
MRMR supports three S4 objects as follows:
• Triangle – A triangle object houses reserving data. MRMR is a slight departure from the traditional storage of reserving data in several respects.
• First, the time periods must all be explicit. It’s common for reserving data sets to use integers such as 12, 24, etc. to refer to development lags and 2010, 94, etc. to denote calendar periods. MRMR uses lubridate values for this purpose so that the interpretation of temporal variables is clear and unambiguous.
• Second, MRMR distinguishes between temporal variables, static measures and stochastic measures, all of which are housed within the triangle. Temporal variables describe the origin and evaluation periods, and measures refer to the non-temporal, measurable phenomena under observation. A static variable is one whose value is known with certainty. Premium or other exposure elements are examples of such. A stochastic variable is one whose value varies over time. Loss measures are stochastic. This split permits us to house all information in the same place without any confusion.
• Finally (and a bit trivially) the information is stored in the “long” format. Columns of the underlying data frame refer to variables only and not development lags or any other such information.
• The function plotTriangle shows reserving data along three dimensions. A stochastic response (the y variable) is plotted against either a temporal or measure variable (static or stochastic) and then grouped along another dimension. Several examples are presented below, but a common result is the classic line graph of cumulative losses by origin period measured against development age. The functional form allows one to easily switch from cumulative to incremental response, from development age to evaluation date, etc. Further, plotTriangle can be used to plot fit lines by group. This provides a ready visual interpretation of a model.
• TriangleModel – This object stores a linear model with a single response and one grouped predictor. At present, the grouping element is always the development lag. (This is not strictly enforced, but using any other variable will likely lead to a mysterious error.) In a future release, this assumption will be relaxed and additional grouping elements will be permitted. Further, the use of glm’s will be introduced. The TriangleModel object facilitates several diagnostics:
• The coefficients of the model are displayed. This is akin to having the “loss development factors” plotted as a probability density function. This allows one to see which factors have greater variability.
• A residual plot of residuals against predicted and also grouped by origin period, development lag and calendar period. This is the classic set of four graphs shown in Zehnwirth’s paper (and likely elsewhere).
• Serial correlation across a calendar period is also displayed. Here the residuals for comparable development lags are matched to residuals in prior calendar year periods. This allows for a statistical test of the correlation of residuals from one period to the next.
• TriangleProjection – This object uses a TriangleModel to project to a future point in time. The future point is stated either as a specific date, or a specific development interval. It’s most common for actuaries to project through a development interval.
Here’s a quick example. I’m using a triangle from the Friedland paper, which is on the CAS syllabus for the reserving exam. This data is taken from page 65.
install.packages("MRMR")
library(MRMR)
demo(Friedland)
This will produce a number of cool plots.
The classic cumulative by origin period:
The same using incremental data:
Incremental data by calendar period:
A model with best fit lines. (Note that this almost corresponds to the standard notion of a link ratio. At present, the default is to include an intercept. This will get cleaned up in the next release.)
Confidence intervals around model factors:
The classic four-square residual plot:
|
# A caloritronics-based Mott neuristor
## Abstract
Machine learning imitates the basic features of biological neural networks at a software level. A strong effort is currently being made to mimic neurons and synapses with hardware components, an approach known as neuromorphic computing. While recent advances in resistive switching have provided a path to emulate synapses at the 10 nm scale, a scalable neuron analogue is yet to be found. Here, we show how heat transfer can be utilized to mimic neuron functionalities in Mott nanodevices. We use the Joule heating created by current spikes to trigger the insulator-to-metal transition in a biased VO2 nanogap. We show that thermal dynamics allow the implementation of the basic neuron functionalities: activity, leaky integrate-and-fire, volatility and rate coding. This approach could enable neuromorphic hardware to take full advantage of the rapid advances in memristive synapses, allowing for much denser and complex neural networks.
## Introduction
Machine learning has experienced an unprecedented growth in recent years, often referred to as an “artificial intelligence revolution”1,2. Its fundamental approach is inspired by biological systems: using neural networks to classify large amounts of data into sorting categories. Classic examples are speech and image recognition1,2. Neural networks are composed of two basic elements: neurons and synapses. Current machine learning schemes implement these elements at a software level: neurons and synapses are simulated on standard computers based on a von Neumann architecture1,2. This approach is inefficient in terms of computation speed and energy consumption, motivating a search for hardware-based systems that imitate the brain. This idea was initially proposed more than fifty year ago3,4,5, and attained widespread popularity with the works of Carver Mead6. Since then, CMOS-based circuitry has been successfully used to realize neuromorphic systems, allowing to build tuneable and efficient neural networks7,8,9. Unfortunately, CMOS-based components rely on combinations of multiple transistors and capacitors that make them complex and large9. This limits circuit scalability and, hence, poses a limitation to achieve dense neural networks which could eventually rival the brain.
A solution to this problem might be found in “neuromorphic materials”, whose intrinsic properties mimic those of neurons and synapses10,11. Resistive switching (RS), a phenomenon in which an applied electric field modifies the resistance of a material12,13,14, offers a unique opportunity to achieve this goal. RS can be volatile15,16,17,18,19,20,21 or non-volatile22,23, which can be used to emulate neuron or synapse behaviours, respectively. Multiple groups have used RS to achieve synaptic functionalities24,25, and memristor crossbar arrays have already been used to perform pattern recognition26,27,28. These synapse realizations, however, still rely on traditional electronics to play the role of neurons (neuristors). This approach does not take full advantage of the scalability and simplicity offered by memristive synapses, and motivates the search for a simpler and more scalable neuristor.
A neuristor must feature the most basic functionalities of real neurons29: i) leaky integrate-and-fire, ii) activity (outputting a current), iii) volatility (resetting after a firing), and iv) rate coding of the external stimuli. Performing leaky integrate-and-fire is one of the key functionalities of a neuristor. It must sum all the input stimuli coming from previous neurons and fire an output spike when the excitation is above a certain threshold29. In the case of biological neurons, the cell membrane acts as a capacitor that integrates incoming ionic currents. The firing mechanism, based on voltage gated sodium and potassium channels, is activated once the membrane potential exceeds a certain threshold. CMOS neuristors use a similar approach (Fig. 1a): a capacitor plays the role of cell membrane by integrating current from incoming pulses9, while the CMOS circuitry produces the firing events. Pickett et al.30, Ignatov et al.31 and Yi et al.32 also use capacitive integration, but in their case volatile RS in a Mott insulator is utilized in the firing stage. While this parallelism with biological systems is appealing33, the use of capacitors to store the internal state of the neuron limits the circuit scalability. In order to avoid malfunctioning, their capacitance must be much larger than the parasitic capacitance of the electrode lines. Integration capacitors cover a large area in current CMOS neurons, leading to typical sizes in the order of 10–100 μm9. Capacitor downscaling is one of the most challenging issues in other technologies such as DRAM, where the industry has dedicated intense effort towards developing complex 3D capacitive structures to circumvent this problem34.
Wang et al.35, Tuma et al.36 and Stoliar et al.37 successfully implement integrate and fire dynamics without the use of capacitors by utilizing diffusive memristors, phase-change materials and Mott insulators, respectively. However, these systems are not active as they do not generate a current. Moreover, they are not volatile i.e. do not reset automatically, a characteristic needed for spiking dynamics. This limits their autonomy and their practical implementation as standalone neurons. Yajima et al.38 introduced a dedicated circuit to reset a Mott integration device. Although their neuristor is capable of performing all basic neuron tasks, the use of multiple operational amplifiers makes this implementation rather complex. A fully autonomous and scalable neuristor is yet to be found.
Instead of electrical currents, we propose using heat flow to perform computing tasks, an approach known as caloritronics. Temperature substitutes charge as the integrating variable, as depicted in Fig. 1b. Current spikes coming from previous neurons induce Joule heating while passing through a resistive element (heater), increasing local temperature with every spike. The heater is thermally coupled to a firing element that is very sensitive to temperature changes and fires once a threshold temperature is exceeded. In this work, we use VO2, a well-known correlated oxide with a sharp insulator-to-metal transition (IMT) around 340 K39 (Supplementary Figure S1a), as the firing element. We realize leaky integrate-and-fire using the thermal dynamics, which are governed by similar equations to those describing the charge dynamics of a leaky capacitor (see Fig. 1a,b). Adopting local temperature as the internal state allows building simple-design neuristors that can be downsized to the nanoscale, as thermal dynamics equations preserve the same form independent of the system size.
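A minimal way to write down that analogy (the notation here is assumed for illustration, not taken from the paper: $C_{th}$ is the device heat capacity, $R_{th}$ its thermal resistance to the bath at temperature $T_0$, $P(t)$ the Joule heating delivered by input spikes, and $I_{in}(t)$ the input current of the electrical neuron):
$$C\,\frac{dV}{dt} = I_{in}(t) - \frac{V}{R} \qquad\longleftrightarrow\qquad C_{th}\,\frac{dT}{dt} = P(t) - \frac{T - T_{0}}{R_{th}}$$
Temperature integrates input power with leak time constant $R_{th}C_{th}$, just as the capacitor voltage in Fig. 1a integrates input current with time constant $RC$; firing occurs when $T$ crosses the IMT temperature rather than when $V$ crosses a voltage threshold.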
We fabricated and tested a proof-of-concept neuristor that performs all basic neuronal functionalities. It consists of a VO2 thin film on top of which two layers of electrodes are patterned (the detailed fabrication process can be found in methods section and Supplementary Figure S2). The first layer consists of two Ti/Au electrodes (running vertically in Fig. 2a) separated by a 50 nm gap. These electrodes are used to apply voltage to the VO2, and provide the source of the neuristor’s active output. If the voltage is high enough, a transition into the metallic phase can be electrically triggered15,16,40 (Supplementary Figure S1b). Figure 2b shows the current as a function of time when a voltage pulse is applied to the gap. It illustrates the threshold nature of the voltage triggered IMT: the device becomes metallic once a threshold voltage (VTh) is exceeded40. The second electrode layer is a Ti/Au nanowire (running horizontally in Fig. 2a) which acts as a heater. It is separated from the bottom electrodes by a 70-nm-thick Al2O3 layer which provides electrical insulation (resistance larger than 20 MΩ), but ensures thermal coupling between the nanowire heater and VO2 gap.
Figure 2c shows VTh of the VO2 gap as a function of temperature. Two cases are shown: zero (black) and 12.5 mA (red) current flowing through the nanowire heater (horizontal electrode). In the second case, Joule heating locally increases the temperature of the gap reducing its VTh. To work as a neuristor, the gap is kept under a DC bias just below its threshold voltage (VDC < VTh), as represented in Fig. 2c by a green dot. When a high enough current pulse IInput is passed through the heater, it lowers VTh below VDC, and the gap turns metallic, generating an output current through the bottom electrodes (IOutput). This situation is presented in Fig. 2d, where a 30 ns current pulse is applied to the heater, triggering the IMT in the gap. We must emphasize that the output and input electrodes are electrically isolated, and an output current is generated when the neuristor fires, making it an active element (as it must be powered to operate, and outputs power during operation). Our device releases energy when stimulated, a crucial property to avoid a re-amplification stage after each neural layer. This goes one step beyond previous RS-based neuristors30,31,32, which become conducting after performing integrate-and-fire but do not create an output on their own. Furthermore, electrically decoupled input and output of our neuristor would allow building a multilayer neural network without using buffer circuits to prevent current backflow from post-synaptic to pre-synaptic neuristors.
Volatility is a necessary feature to implement spiking dynamics. As presented so far, our device would remain conductive once triggered. To reset the VO2 gap after the firing event, we added a resistor (RLoad) in series with the gap (inset Fig. 2e). The role of this resistor is to lower the voltage across the gap once the VO2 becomes metallic41,42,43, which in turn reduces heat dissipation and decreases local temperature after the firing event. As a result, the VO2 returns to its insulating state after firing (Fig. 2e) giving the desired effect: spike in – spike out.
Leaky integrate-and-fire (LIF) dynamics are governed by the characteristic thermal times of our device, given by its specific heat and thermal resistance to the substrate. Figure 3a shows the warm up times of the neuristor as a function of IInput, that is, how long it takes the device to warm and fire once the current flows through the heater. Typical times are on the order of 10–100 ns. Using pulse widths and rates around that timescale allow us to implement LIF dynamics. Figure 3b shows the response of the neuristor (IOutput) when a train of current pulses is sent to the heater (IInput). The first pulse does not raise the temperature enough to fire the device, but the cumulative effect of several pulses adds up to trigger the IMT after an integration period. The number of pulses necessary to produce a firing event depends on the pulse amplitude. Figure 3c shows the probability of the neuristor firing after a certain number of pulses are applied to the input. Several current amplitudes are shown, offering a clear visualization of the LIF dynamics. For high current pulses, the device is always triggered with just one pulse. For lower currents, the probability of it firing with just one pulse decreases, and more pulses are necessary to induce an output spike. The overall mean integration time is shifted to longer values as the current amplitude is decreased. When the input current is too low, heat leakage into the environment overcomes the dissipated power, and the device does not fire for any number of pulses.
Another basic feature of biological neurons is rate coding: the frequency at which a neuron spikes depends on the amplitude of its stimulus. Strong stimuli produce high frequency spiking, while weak stimuli yield slower patterns29,44. Our neuristor reproduces that feature, as shown in Fig. 4a. A constant current is passed through the heater, resulting in a repetitive spiking output. The frequency of the output increases with the input current (Fig. 4b). After the neuristor fires, both the temperature and the voltage across the gap drop, leaving the system in a refractory period until they increase back to their initial values.
The duration of the refractory period depends on the interplay between neuristor’s thermal and electrical properties. The electrical charging time is determined by the RC constant, being R and C the resistance and capacitance of the system. While we do not use external capacitive elements, some intrinsic capacitance will always be present due to the experimental set up. The warm up time is given by RthCth, where Rth is the thermal resistance and Cth the thermal capacitance. Whether the refractory dynamics are dominated by thermal or electrical effects depends on the RC/RthCth ratio. Considering the geometry and materials of our device, we estimate RthCth to be around 10−8 s (See methods). Since R is in the 104 Ω range, electric parasitic capacitance is expected to be dominant for C above 10−12 F.
To gain a better understanding of the interplay between electrical and thermal properties, we performed lumped-element simulations of the neuristor operation. Figure 5a,b show the electrical equivalent circuit and a schematic of the thermal model used in our simulations (see methods section for more details). In the equivalent circuit, we explicitly include the parasitic capacitance associated with the device. The thermal model takes into account the heating provided by both the nanowire heater and the Vdc bias, and the heat loss into the environment. Figure 5c shows IOutput vs t when a dc IInput is applied at t = 0, for a system with a relatively large electric capacitance C = 10−10 F. Repetitive spiking, similar to the experimental result is observed. This suggests that in our particular devices, recovery dynamics during the refractory period are determined mainly by the parasitic capacitance. However, such capacitance is not necessary to produce spiking dynamics. Figure 5d shows IOutput vs t when a dc IInput is applied at t = 0, for a system with no parasitic capacitance: spiking behaviour is still observed, although with clear differences in the shape and time separation between the individual spikes. The mechanism behind the spiking dynamics can be understood by considering the stability points of the system45. Figure 5e shows time derivative of the temperature, ∂T/∂t, vs T for different IInput. Sharp discontinuities in ∂T/∂t are present due to the IMT, resulting in a hysteresis curve. When there is no current through the heater, the system stabilizes at a certain temperature and does not oscillate. Adding an input shifts the curve in a way in which the hysteresis jumps discontinuously between ∂T/∂t > 0 and ∂T/∂t < 0, so ∂T/∂t is never equal to zero. This traps the system in a persistent oscillation state, purely due to thermal dynamics. In this way, the spiking behaviour of the VO2 gap can be externally controlled with a heat current. Both electrically, RC > RthCth, and thermally dominated, RC < RthCth, systems produce the rate coding property, as shown in Fig. 5f, where the spiking rate is plotted as a function of IInput for several values of C.
Many of the relevant parameters of the proposed neuristor depend on the particular device design, as well as on the intrinsic properties of the chosen materials. This gives plenty of room to explore and improve its functionalities. Different substrates, insulating spacers or geometric designs will strongly change the device properties. For instance, a smaller device and less thermally conductive substrate and contact pads would reduce heat leakage to the environment, allowing for the use of lower currents. For instance, using Ti instead of Au as metallic contact would drastically reduce heat loss into the pads. Similarly, using TiO2 instead of Al2O3 as a substrate can decrease thermal conductance by a factor of 20. A simple calculation (see methods), shows that by changing the materials, thermal conductance and hence Joule heating can be reduced by more than two orders of magnitude.
Although the currents used in our proof-of-concept device are large, the short duration of each spike (~30 ns) yields an energy consumption of 3 × 10−9 J/spike. By reducing parasitic capacitance and optimizing materials and geometry, this number could be brought down to around 10−11 J/spike, which is comparable to the performance of biological neurons. Other design strategies, such as extremely localized Joule heating46, could also be used to further decrease energy consumption. Regarding size, our device occupies less than 1 μm2, reducing neuron area by more than an order of magnitude compared to biological neurons, and almost four orders compared to the most compact silicon neuron circuits9.
Another attractive feature of our approach is the potential of signal amplification without needing further elements: since the input and output are electrically isolated, it is possible for the output current to be larger than the input current. This is of fundamental importance; real neural networks propagate signals across several neuron layers, and the signal must be amplified after each layer. Previous neuristor implementations are passive, and therefore the output is always smaller than the input. This makes the use of CMOS based electronics mandatory, partially defeating the purpose of building a purely oxide electronics. We must note that extensive device optimization must be done before this is experimentally possible in a caloritronics-based device. Nevertheless, our lumped-element simulations show that this is a feasible scenario if heat conductance between the heater and the gap is improved. In fact, the results presented earlier in Fig. 5d also demonstrate self-amplification: the device generates a ~2 mA output current out of a 1.3 mA input.
The use of a heat transfer-based device may also come with some drawbacks that must be considered. One of them is crosstalk between neurons which could limit device density. Let’s consider two neuristors placed next to each other. With our current device dimensions, the distance between the two VO2 gaps could be as low as 1 μm, comparable to the typical pitch of memristor crossbar arrays26. A simple estimation shows that firing one of the devices can locally rise temperature up to 5 K in the other one, enough to make it fire too. By optimizing device dimensions and materials this problem could be largely avoided, and such temperature increment could be limited to a few mK (see methods). Another potential drawback is that, due to the proximity to a phase transition, temperature must be precisely controlled when working with IMT-based neuristors. Although this could be hard to implement in very large circuits, it could also be a positive feature. The human brain operates close to criticality and it is only functional in a very narrow temperature range47. It has been argued that this critical behaviour is what allows to perform the complex cognitive task of an intelligent system48. In this sense, working at the edge of a phase transition might be an ideal platform to explore new and more complex phenomena in neuromorphic computing.
Caloritronics and resistive switching can be combined to create scalable and autonomous neuristors. We demonstrated four basic neural functionalities: activity, volatility, leaky integrate-and-fire dynamics and rate coding using simple devices that can be downscaled well below the μm scale. Combined with the fast advances in memristor technology, this could pave the way to develop dense neuromorphic hardware, allowing for deeper and more complex neural networks. Our approach could be generalized to other physical phenomena. Other systems at the edge of a phase transition are very sensitive to external stimuli and might show a similar behaviour. On a broader scope, we show that, although often regarded as an undesirable consequence, power dissipation might actually enable new ways of computing, by taking advantage of the rich phenomenology of correlated systems.
## Methods
### Sample preparation
A 70 nm VO2 film was grown by reactive sputtering on top of an R-cut Al2O3 substrate. A 4 mtorr argon/oxygen mix (8% O2) was used during deposition. The substrate temperature was kept at 520 °C, and the sample was cooled down after sputtering at a rate of 12 °C/min. X-ray diffraction shows textured orientation along 〈100〉 for the VO2. Transport measurements show an IMT with a resistance change of four orders of magnitude, confirming the high quality of the film. The device was fabricated in two lithographic steps (layers). In the first layer, e-beam lithography and e-beam evaporation were used to pattern two Ti (20 nm)/Au (30 nm) electrodes. A small gap (~50 nm) was left between the electrodes, so that large electric fields could be generated by applying a few volts. The second layer consists of an Al2O3 (70 nm)/Ti (20 nm)/Au (30 nm) nanowire, patterned on top of the gap and running perpendicular to the first-layer electrodes. E-beam lithography and e-beam evaporation were used for this purpose as well. Several such devices are patterned on a single sapphire substrate. Optical lithography and reactive ion etching were used to remove the VO2 outside of the gap area and isolate the different devices from each other. More information on the device fabrication process can be found in Supplementary Figure S2.
### Fast transport measurements
Measurements were carried out in a TTPX Lakeshore cryogenic probe station. The station is equipped with high-speed (20 GHz) probes with ground/line/ground geometry and 50 Ω characteristic impedance. In order to avoid reflections (the insulating-state resistance of the device is in the 10⁴ Ω range), a 50 Ω termination to ground was installed before the sample. A 240 MHz Tektronix function generator was used to create the voltage pulses, and a 50 Ω-terminated Tektronix broadband oscilloscope (20 GHz) was used to monitor the current. The electrical setup ensured a rise time of around 5 ns.
### Simulations of the device dynamics
A simple, lumped-element model was used to investigate the electro-thermal dynamics of the device (Fig. 5a,b). The electrical part of the model treats the VO2 gap as a resistor R in parallel with a capacitor C, which plays the role of the parasitic capacitance of the circuit. The charge accumulated in this capacitor is Q. A load resistor RL is placed in series, and a constant bias voltage VDC is applied across the gap and the load. We label the total current as I, while Igap and IC are the currents flowing through the VO2 and the capacitor, respectively. The heating resistor RHeater is electrically isolated from the rest of the circuit, with an input current IInput passing through it.
The value of R depends on the VO2 state, metallic or insulating: R = Rmet = 200 Ω in the metallic state, while R = Rins = α·exp(β/T) in the insulating state. The values α = 0.0178 and β = 4500 are chosen such that Rins = 10 kΩ at 339 K and Rins = 58.4 kΩ at 300 K. Whether the VO2 is metallic or insulating depends on the current device temperature as well as on its thermal history: a hysteresis is set between 335 K and 339 K to mimic the first-order nature of the IMT.
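As a quick numerical check (our own sketch, not part of the original methods), the quoted α and β indeed reproduce the two stated resistance values:

```python
# Check that R_ins = alpha * exp(beta / T) gives ~10 kOhm at 339 K and ~58.4 kOhm at 300 K.
import math

alpha, beta = 0.0178, 4500.0
for T in (339.0, 300.0):
    print(f"T = {T} K  ->  R_ins = {alpha * math.exp(beta / T) / 1e3:.1f} kOhm")
# Approximate output: 10.4 kOhm at 339 K and 58.2 kOhm at 300 K
```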
At each simulation step i, R[i] is evaluated based on the thermal history, and the currents through the gap and the capacitor are calculated:
$${I}_{gap}[i]=Q[i]/(R[i]\cdot C)$$
$${I}_{C}[i]=({V}_{DC}-{I}_{gap}[i]\cdot R[i])/{R}_{L}-{I}_{gap}[i]$$
With this, we can calculate the evolution of Q using the Euler method.
$$Q[i+1]=Q[i]+{I}_{C}[i]\cdot \delta t$$
where δt is the simulation step.
The temperature T is governed by the thermal part of the model. The model treats the VO2, the Al2O3 barrier and the heating element as a single system with a homogeneous temperature. Although simple, it accurately mimics the experimental results. There are two heat sources: Joule heating in the gap ($${Q}_{gap}=R\cdot {I}_{gap}^{2}$$) and in the heater ($${Q}_{Heater}={R}_{Heater}\cdot {I}_{Input}^{2}$$). Heat is evacuated from the device into the environment, consisting of the metallic pads and the sapphire substrate. This heat loss depends on the temperature difference between the device and the base temperature of the environment, TBase.
The temperature evolution is calculated using the Euler method:
$$T[i+1]=T[i]+\frac{1}{{C}_{th}}\cdot (R\cdot {I}_{gap}^{2}+{R}_{Heater}\cdot {I}_{Input}^{2}-{S}_{th}\cdot (T[i]-{T}_{Base}))\cdot \delta t$$
where Cth and Sth are the thermal capacitance and conductance of the system respectively.
We must note that for simplicity we consider the powers dissipated in the gap and the heater to contribute equally to the temperature change in the VO2. We treat the whole neuristor as a single thermal element with the same temperature.
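For readers who want to reproduce the qualitative behaviour, a minimal Python sketch of this lumped-element model could look as follows. It uses the values listed in the "Parameters used in the simulation" subsection below; the specific choices IInput = 5 mA and C = 10 pF are picked from within the quoted variable ranges, and the exact bookkeeping of the hysteresis is our own assumption rather than a detail given in the text.

```python
# Minimal sketch of the lumped-element electro-thermal model (Euler integration).
import math

V_DC, R_L = 5.5, 5.0e3            # bias (V) and load resistor (ohm), Fig. 5c values
R_met, R_heater = 200.0, 20.0     # metallic-state and heater resistances (ohm)
alpha, beta = 0.0178, 4500.0      # insulating state: R_ins = alpha * exp(beta / T)
C = 1e-11                         # parasitic capacitance (F), within the 0-1e-10 F range
C_th, S_th = 10e-13, 10e-5        # thermal capacitance (J/K) and conductance (W/K)
T_base = 325.0                    # base temperature (K)
T_up, T_down = 339.0, 335.0       # hysteretic transition temperatures (K)
I_input = 5e-3                    # heater current (A), within the 0-5 mA range
dt = 1e-13                        # time step (s)

T, Q, metallic, spikes = T_base, 0.0, False, 0

for _ in range(2_000_000):        # 200 ns of simulated time
    # VO2 phase, including thermal hysteresis
    if T > T_up and not metallic:
        metallic, spikes = True, spikes + 1
    elif T < T_down:
        metallic = False
    R = R_met if metallic else alpha * math.exp(beta / T)

    # Electrical Euler step
    I_gap = Q / (R * C)                      # current through the VO2 gap
    I_C = (V_DC - I_gap * R) / R_L - I_gap   # current charging the capacitor
    Q += I_C * dt

    # Thermal Euler step: Joule heating in gap and heater minus losses to the bath
    P = R * I_gap**2 + R_heater * I_input**2
    T += (P - S_th * (T - T_base)) / C_th * dt

print(f"firing events in 200 ns: {spikes}")
```

With these illustrative parameters the model charges the parasitic capacitor, heats the gap through the IMT and produces repeated firing events; the exact spike rate and amplitude depend on the chosen values, as discussed in the main text.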
By considering the individual thermal capacitances of the VO2, Ti/Au pads and Al2O3 barrier enclosed in the 400 nm × 400 nm area of the device, we estimated $${C}_{th}\approx 1.3\cdot {10}^{-13}J/K$$. For this calculation we used the following density d and specific heat c values49,50: $$\,{d}_{V{O}_{2}}=4.34\cdot {10}^{3}\frac{kg}{{m}^{3}}$$, $${c}_{V{O}_{2}}=690\frac{J}{K\cdot kg}$$, $${d}_{Au}=19.3\cdot {10}^{3}\frac{kg}{{m}^{3}}$$, $${c}_{Au}=129\frac{J}{K\cdot kg}$$, $${d}_{Ti}=4.54\cdot {10}^{3}\frac{kg}{{m}^{3}}$$, $${c}_{Ti}=523\frac{J}{K\cdot kg}$$, $${d}_{A{l}_{2}{O}_{3}}=3.97\cdot {10}^{3}\frac{kg}{{m}^{3}}$$ and $${c}_{A{l}_{2}{O}_{3}}=854\frac{J}{K\cdot kg}$$
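The estimate above can be roughly reproduced with the sketch below. Treating each layer as a full 400 nm × 400 nm slab with the thicknesses from the sample-preparation section is our simplifying assumption, so the result only matches the quoted value to within order of magnitude.

```python
# Rough reproduction of the C_th estimate from layer volumes and material data.
area = 400e-9 * 400e-9   # m^2

# (density kg/m^3, specific heat J/(K kg), assumed layer thickness m)
layers = {
    "VO2":   (4.34e3, 690.0, 70e-9),
    "Al2O3": (3.97e3, 854.0, 70e-9),
    "Ti":    (4.54e3, 523.0, 20e-9),
    "Au":    (19.3e3, 129.0, 30e-9),
}

C_th = sum(d * c * area * t for d, c, t in layers.values())
print(f"C_th ~ {C_th:.1e} J/K")   # ~1e-13 J/K, same order as the quoted 1.3e-13 J/K
```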
To estimate the thermal conductance, we took into account two contributions: the thermal conductance of the metallic pads and the vertical thermal conductance through the substrate. The most thermally resistive part of the pads is the 1.5 μm long stretch closest to the device center. Considering only this part of the pads, we estimated $${S}_{pads}\approx 1.3\cdot {10}^{-5}\,W/K$$. Vertical heat transport goes through the VO2 into the sapphire substrate, and we estimate it to be $${S}_{vertical}\approx 8.5\cdot {10}^{-6}W/K$$. This gives a total conductance of $${S}_{th}\approx 2.2\cdot {10}^{-5}W/K$$.
For this calculation we used the following thermal conductivity values49,50: $${\sigma }_{V{O}_{2}}=6\frac{W}{m\cdot K}$$, $${\sigma }_{Au}=310\frac{W}{m\cdot K}$$, $${\sigma }_{Ti}=21.9\frac{W}{m\cdot K}$$ and $${\sigma }_{A{l}_{2}{O}_{3}}=30\frac{W}{m\cdot K}$$.
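The quoted conductances can be approximately recovered with the sketch below. The geometry used here (four metallic leads of ~400 nm width and 1.5 μm length, and a circular-contact spreading term for heat entering the sapphire) is our own assumption; the text only quotes the resulting numbers, so this is an order-of-magnitude reconstruction rather than the authors' exact calculation.

```python
# Order-of-magnitude sketch of the thermal-conductance estimate.
w, L = 400e-9, 1.5e-6             # assumed lead width and length (m)
t_Au, t_Ti = 30e-9, 20e-9         # lead thicknesses (m)
sigma_Au, sigma_Ti = 310.0, 21.9  # W/(m K)
sigma_VO2, sigma_Al2O3 = 6.0, 30.0

# Lateral conduction along the leads (parallel Ti and Au layers); 4 leads assumed
S_one_lead = (sigma_Au * t_Au + sigma_Ti * t_Ti) * w / L
S_pads = 4 * S_one_lead
print(f"S_pads ~ {S_pads:.1e} W/K")         # ~1e-5 W/K (quoted: 1.3e-5)

# Vertical path: 70 nm VO2 film in series with spreading into the substrate
a = w / 2
S_film = sigma_VO2 * w * w / 70e-9
S_spread = 4 * sigma_Al2O3 * a              # circular-contact spreading conductance
S_vertical = 1.0 / (1.0 / S_film + 1.0 / S_spread)
print(f"S_vertical ~ {S_vertical:.1e} W/K") # ~9e-6 W/K (quoted: 8.5e-6)
print(f"S_total ~ {S_pads + S_vertical:.1e} W/K")  # ~2e-5 W/K (quoted: 2.2e-5)
```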
The effect of downsizing and material choice can be explored by calculating the thermal conductance of a similar device in which Au is replaced by Ti as the metallic contact, a TiO2 substrate is used, the pad width is reduced by one order of magnitude (down to 40 nm), and the thickness is halved. Using50 $${\sigma }_{Ti{O}_{2}}=9\frac{W}{m\cdot K}$$, we get:
$${S}_{vertical}\approx 9.4\cdot {10}^{-8}\,W/K,\quad {S}_{pads}\approx 3.7\cdot {10}^{-7}\,W/K,\quad {S}_{Total}\approx 4.6\cdot {10}^{-7}\,W/K$$
This is a two orders of magnitude reduction in heat leakage towards the environment, which allows reducing the Joule heating.
Such changes would also decrease the thermal capacitance of the system to $${C}_{th}\approx 1.3\cdot {10}^{-13}J/K$$. This would increase the ratio $${S}_{th}/{C}_{th}$$, and hence the speed of the system dynamics, by a factor of 4.5.
#### Parameters used in the simulation
Despite the simplicity of the model, we observe spiking patterns very similar to the experiments. Although the parameters were adjusted to observe oscillatory behavior, they were kept as close as possible to the device characteristics:
TBase = 325 K, VDC = 4.4 V (Fig. 5d)–5.5 V (Fig. 5c), Rins(339 K) = 10 kΩ, Rmet = 200 Ω, RL = 2.0 kΩ (Fig. 5d)–5.0 kΩ (Fig. 5c), IInput = 0–5 mA (variable), C = 0–10⁻¹⁰ F (variable), RHeater = 20 Ω, Cth = 10·10⁻¹³ J/K, Sth = 10·10⁻⁵ W/K, T[0] = 325 K, Q[0] = 0 C and δt = 10⁻¹³ s.
### Estimation of temperature rise outside of the neuristor
An estimation of the temperature rise can be obtained by considering the heat flow into a substrate coming from a point source at the surface. In this case, the point source is the proposed neuristor. In an isotropic case, the temperature at a distance r will be given by:
$$T={T}_{Base}+\,Q/(2\pi r{\sigma }_{substrate})$$
where $$Q={S}_{Th}\cdot ({T}_{Neuristor}-{T}_{Base})$$ is the total heat flow coming from the device.
In our experimental device, we estimated $${S}_{Th,vertical}\approx 8.5\cdot {10}^{-6}\,W/K$$. According to our simulations, for a parasitic capacitance C = 10⁻¹⁰ F, TNeuristor rises by 100 K during a spike. Such a spike would increase the temperature by 5 K at a point 1 μm away from the device. With the optimization proposed in the simulations part of this Methods section, $${S}_{Th,vertical}\approx 9.4\cdot {10}^{-8}\,W/K$$. For a system with no parasitic capacitance, the simulations show that TNeuristor rises by just 10 K during a spike. Such a spike would increase the temperature by only 15 mK at a point 1 μm away.
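A sketch of this point-source estimate is given below. The choice of substrate conductivity for each case (sapphire for the experimental device, TiO2 for the optimized one, both taken from the values listed above) is our reading of the text, so the output only approximately matches the quoted 5 K and 15 mK figures.

```python
# Crosstalk estimate: T = T_base + Q / (2*pi*r*sigma_substrate), with Q = S_vertical * dT.
import math

def delta_T(S_vertical, dT_neuristor, sigma_substrate, r=1e-6):
    Q = S_vertical * dT_neuristor          # heat flow out of the firing device (W)
    return Q / (2 * math.pi * r * sigma_substrate)

# Experimental device on sapphire (sigma ~ 30 W/(m K)), 100 K spike amplitude
print(f"{delta_T(8.5e-6, 100.0, 30.0):.2f} K")     # ~4.5 K (quoted: ~5 K)

# Optimized device on TiO2 (sigma ~ 9 W/(m K)), 10 K spike amplitude
print(f"{delta_T(9.4e-8, 10.0, 9.0) * 1e3:.0f} mK")  # ~17 mK (quoted: ~15 mK)
```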
We must point out that this calculation is approximate and does not take into account factors such as cooling from the contact pads, which will depend on the particular device design. We must also note that this estimation is an upper limit to the temperature increment, since it considers a steady state in which the device is constantly at the maximum temperature that it reaches during firing. We expect the actual temperature change to be lower.
## Data availability
The data supporting the plots and claims of this manuscript are available from the corresponding authors upon reasonable request.
## References
1. Lecun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature 521, 436–444 (2015).
2. Editorial. More than machines. Nat. Mach. Intell. 1, 1–1 (2019).
3. Crane, H. Neuristor-A Novel Device and System Concept. Proc. IRE 50, 2048–2060 (1962).
4. Mattson, R. H. A neuristor realization. Proc. IEEE 52, 618–619 (1964).
5. Nishizawa, J. I. & Hayasaka, A. Two-line neuristor with active element in series and in parallel. Int. J. Electron. 26, 437–469 (1969).
6.
7. Merolla, P. A. et al. A million spiking-neuron integrated circuit with a scalable communication network and interface. Science 345, 668–673 (2014).
8. Furber, S. Large-scale neuromorphic computing systems. J. Neural Eng. 13, (2016).
9. Indiveri, G. et al. Neuromorphic Silicon Neuron Circuits. Front. Neurosci. 5, 1–23 (2011).
10. Zhou, Y. & Ramanathan, S. Mott Memory and Neuromorphic Devices. Proc. IEEE 103, 1289–1310 (2015).
11. Romera, M. et al. Vowel recognition with four coupled spin-torque nano-oscillators. Nature 563, 230–234 (2018).
12. del Valle, J., Ramírez, J. G., Rozenberg, M. J. & Schuller, I. K. Challenges in materials and devices for resistive-switching-based neuromorphic computing. J. Appl. Phys. 124, 211101 (2018).
13. Waser, R. & Aono, M. Nanoionics-based resistive switching memories. Nat. Mater. 6, 833–840 (2007).
14. Yang, J. J., Strukov, D. B. & Stewart, D. R. Memristive devices for computing. Nat. Nanotechnol. 8, 13–24 (2013).
15. Zimmers, A. et al. Role of Thermal Heating on the Voltage Induced Insulator-Metal Transition in VO2. Phys. Rev. Lett. 110, 056601 (2013).
16. Brockman, J. S. et al. Subnanosecond incubation times for electric-field-induced metallization of a correlated electron oxide. Nat. Nanotechnol. 9, 453–458 (2014).
17. Wang, Z. et al. Memristors with diffusive dynamics as synaptic emulators for neuromorphic computing. Nat. Mater. 16, 101–108 (2017).
18. Wang, Z., Kumar, S., Nishi, Y. & Wong, H.-S. P. Transient dynamics of NbOX threshold switches explained by Poole-Frenkel based thermal feedback mechanism. Appl. Phys. Lett. 112, 193503 (2018).
19. Jiang, R. et al. Total-Ionizing-Dose Response of Nb2O5-Based MIM Diodes for Neuromorphic Computing Applications. IEEE Trans. Nucl. Sci. 65, 78–83 (2018).
20. Pergament, A. et al. Vanadium Dioxide: Metal-Insulator Transition, Electrical Switching and Oscillations. A Review of State of the Art and Recent Progress. EMN Meeting on Computation and Theory, Energy Materials and Nanotechnology, Istanbul (2015).
21. Pergament, A., Stefanovich, G., Malinenko, V. & Velichko, A. Electrical Switching in Thin Film Structures Based on Transition Metal Oxides. Adv. Cond. Matt. Phys. 2015, 1–26 (2015).
22. Strukov, D. B., Snider, G. S., Stewart, D. R. & Williams, R. S. The missing memristor found. Nature 453, 80–83 (2008).
23. Beck, A., Bednorz, J. G., Gerber, C., Rossel, C. & Widmer, D. Reproducible switching effect in thin oxide films for memory applications. Appl. Phys. Lett. 77, 139 (2000).
24. Jo, S. H. et al. Nanoscale memristor device as synapse in neuromorphic systems. Nano Lett. 10, 1297–1301 (2010).
25. Ohno, T. et al. Short-term plasticity and long-term potentiation mimicked in single inorganic synapses. Nat. Mater. 10, 591–595 (2011).
26. Prezioso, M. et al. Training and operation of an integrated neuromorphic network based on metal-oxide memristors. Nature 521, 61–64 (2015).
27. Pi, S. et al. Memristor crossbar arrays with 6-nm half-pitch and 2-nm critical dimension. Nat. Nanotechnol. 14, 35–40 (2018).
28. Boybat, I. et al. Neuromorphic computing with multi-memristive synapses. Nat. Commun. 9, 2514 (2018).
29. Koch, C. Biophysics of Computation. (Oxford University Press, 1999).
30. Pickett, M. D., Medeiros-Ribeiro, G. & Williams, R. S. A scalable neuristor built with Mott memristors. Nat. Mater. 12, 114–117 (2013).
31. Ignatov, M., Ziegler, M., Hansen, M., Petraru, A. & Kohlstedt, H. A memristive spiking neuron with firing rate coding. Front. Neurosci. 9, 376 (2015).
32. Yi, W. et al. Biological plausibility and stochasticity in scalable VO2 active memristor neurons. Nat. Commun. 9, 4661 (2018).
33. Feali, M. S. & Ahmadi, A. Realistic Hodgkin–Huxley Axons Using Stochastic Behavior of Memristors. Neural Process. Lett. 45, 1–14 (2017).
34. Kim, S. K. et al. Capacitors with an Equivalent Oxide Thickness of <0.5 nm for Nanoscale Electronic Semiconductor Memory. Adv. Funct. Mater. 20, 2989–3003 (2010).
35. Wang, Z. et al. Fully memristive neural networks for pattern classification with unsupervised learning. Nat. Electron. 1, 137–145 (2018).
36. Tuma, T., Pantazi, A., Le Gallo, M., Sebastian, A. & Eleftheriou, E. Stochastic phase-change neurons. Nat. Nanotechnol. 11, 693–699 (2016).
37. Stoliar, P. et al. A Leaky-Integrate-and-Fire Neuron Analog Realized with a Mott Insulator. Adv. Funct. Mater. 27, 1604740 (2017).
38. Yajima, T., Nishimura, T. & Toriumi, A. Analog spike processing with high scalability and low energy consumption using thermal degree of freedom in phase transition materials. 2018 IEEE Symposium on VLSI Technology (2018).
39. Imada, M., Fujimori, A. & Tokura, Y. Metal-insulator transitions. Rev. Mod. Phys. 70, 1039–1263 (1998).
40. del Valle, J. et al. Subthreshold firing in Mott nanodevices. Nature 569, 388–392 (2019).
41. Lee, Y. W. et al. Metal-insulator transition-induced electrical oscillation in vanadium dioxide thin film. Appl. Phys. Lett. 92, 162903 (2008).
42. Lepage, D. & Chaker, M. Thermodynamics of self-oscillations in VO2 for spiking solid-state neurons. AIP Adv. 7, 055203 (2017).
43. Driscoll, T. et al. Current oscillations in vanadium dioxide: Evidence for electrically triggered percolation avalanches. Phys. Rev. B 86, 094203 (2012).
44. Benda, J. & Herz, A. V. M. A Universal Model for Spike-Frequency Adaptation. Neural Comput. 15, 2523–2564 (2003).
45. Kumar, S., Strachan, J. P. & Williams, R. S. Chaotic dynamics in nanoscale NbO2 Mott memristors for analogue computing. Nature 548, 318–321 (2017).
46. Bohaichuk, S. M. et al. Fast Spiking of a Mott VO2-carbon nanotube composite device. Nano Lett. 19, 6751–6755 (2019).
47. Trastoy, J. & Schuller, I. K. Criticality in the brain: evidence and implications for neuromorphic computing. ACS Chem. Neurosci. 9, 1254–1258 (2018).
48. Chialvo, D. R. Emergent complex neural dynamics. Nat. Phys. 6, 744–750 (2010).
49. Stefanovich, G., Pergament, A. & Stefanovich, D. Electrical switching and Mott transition in VO2. J. Phys. Condens. Matter. 12, 8837 (2000).
50. CRC Handbook of Chemistry and Physics, 84th Edition (Ed. Lide, D. R. et al.) Sections 4 and 12. (CRC Press, Boca Raton, 2003).
## Acknowledgements
This work was supported as part of the Quantum Materials for Energy Efficient Neuromorphic Computing (Q-MEEN-C) Energy Frontier Research Center (EFRC), funded by the U.S. Department of Energy, Office of Science, Basic Energy Sciences under Award # DE-SC0019273. Part of the fabrication process was done at the San Diego Nanotechnology Infrastructure (SDNI) of UCSD, a member of the National Nanotechnology Coordinated Infrastructure (NNCI), which is supported by the National Science Foundation under grant ECCS-1542148. J. del Valle thanks Fundación Ramón Areces for the support with a postdoctoral fellowship. The authors thank Marcelo J. Rozenberg, Juan Trastoy and George Kassabian for helpful discussions.
## Author information
### Contributions
J.d.V. and I.K.S. conceived the idea. J.d.V, and Y.K. designed and fabricated the devices. J.d.V. and P.S. performed the transport measurements and analyzed the data. J.d.V. performed the lumped-element simulations. J.d.V. and I.K.S. wrote the manuscript. All authors participated in the discussion of the results and corrected multiple iterations of the manuscript.
### Corresponding author
Correspondence to Javier del Valle.
## Ethics declarations
### Competing interests
The authors declare no competing interests.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
del Valle, J., Salev, P., Kalcheim, Y. et al. A caloritronics-based Mott neuristor. Sci Rep 10, 4292 (2020). https://doi.org/10.1038/s41598-020-61176-y
|
# stable matching graph theory
This page collects fragments of several questions and answers about stable matchings in bipartite graphs. A matching in a graph is a set of edges that share no vertices; the two endpoints of a matching edge are said to be matched. In the stable marriage setting there are n boys and n girls (a complete bipartite graph), each person ranks everyone on the other side in a strict preference order (a total order, no ties), and for every person being unmatched is the least preferred outcome. A matching is stable if it contains no blocking pair, that is, no boy and girl who are not matched to each other but who both prefer each other to their current partners; such a pair could improve its situation by eloping.
Gale and Shapley (1962) proved that a stable matching exists for every such instance, and their deferred-acceptance algorithm constructs one: in each round every unengaged boy proposes to the most preferred girl he has not yet proposed to, and every girl answers "maybe" to the best proposal she has received so far and rejects the rest. The algorithm terminates as soon as every girl has received a proposal, since single girls must accept any proposal and no single boys then remain. The resulting matching is optimal for the proposing side and pessimal for the other side, which is why the outcome is "boy optimal" only when the boys propose. The two sides genuinely differ in general: if boy b_i's favourite girl is g_i while girl g_i's favourite boy is b_{n+1-i} for i = 1, ..., n, then the boy-optimal and girl-optimal stable matchings are distinct. A related exercise asks to show that in a boy-optimal stable matching at most one boy ends up with his worst choice; the key observation is that a boy can only be matched to his least favourite girl after having proposed to, and been rejected by, all the others, and once every girl has received a proposal the algorithm stops. A minimal code sketch of the deferred-acceptance procedure is given below.
The surrounding thread also touches on related notions: the matching number of a bipartite graph equals |L| minus the deficiency D_L(G) of its left vertex class L (and likewise |R| minus D_R(G) for the right class); Chvátal defines a hole as a chordless cycle of length at least four; and the convex hull of the stable matchings of an instance admits a linear-inequality (polyhedral) description, first given by Vande Vate, which underlies approximation algorithms for NP-hard generalizations such as stable matchings in hypergraphs. Stable matching is also the model behind practical assignment schemes, for example matching medical interns to hospital residency programs and organizing kidney exchanges (the US waiting list for kidneys has about 100,000 people on it).
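The deferred-acceptance procedure described above is short enough to sketch in code. The following Python illustration was written for this overview and is not taken from any of the cited sources; the preference lists at the bottom are invented for the example.

```python
# Sketch of the Gale-Shapley deferred-acceptance algorithm (boy-proposing version).
def gale_shapley(boy_prefs, girl_prefs):
    """Return a boy-optimal stable matching as a dict girl -> boy."""
    # rank[g][b] = position of boy b in girl g's list (lower = more preferred)
    rank = {g: {b: i for i, b in enumerate(prefs)} for g, prefs in girl_prefs.items()}
    next_choice = {b: 0 for b in boy_prefs}   # index of the next girl to propose to
    engaged_to = {}                           # girl -> boy
    free_boys = list(boy_prefs)

    while free_boys:
        b = free_boys.pop()
        g = boy_prefs[b][next_choice[b]]      # best girl b has not yet proposed to
        next_choice[b] += 1
        if g not in engaged_to:
            engaged_to[g] = b                 # g accepts her first proposal
        elif rank[g][b] < rank[g][engaged_to[g]]:
            free_boys.append(engaged_to[g])   # g trades up; her old partner is free again
            engaged_to[g] = b
        else:
            free_boys.append(b)               # g rejects b; he will propose again later

    return engaged_to

boy_prefs = {"b1": ["g1", "g2", "g3"], "b2": ["g1", "g3", "g2"], "b3": ["g2", "g1", "g3"]}
girl_prefs = {"g1": ["b3", "b2", "b1"], "g2": ["b1", "b3", "b2"], "g3": ["b2", "b1", "b3"]}
print(gale_shapley(boy_prefs, girl_prefs))
```

Since each boy proposes to each girl at most once, the loop performs at most n² proposals, and the standard argument shows the result is stable and boy-optimal.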
|
Mathematical Induction with Inequalities
$P(n) = n < 3^n - 4$ for all $n \ge 2$
Base case: $2 < 3^2 - 4$
$2 < 5$
Inductive step: Assume true for $n = k$, show true for $n = k + 1$
That is, assume $k < 3^{k} - 4$, and show $k + 1 < 3^{k + 1} - 4$
So,
(This is where I might be wrong)
$k + 1 < 3^k + 1 - 4$ (by IH) $\le 3^k + 3^k - 4 = 3^{k + 1} - 4$
Is this a valid proof? I guess I don't understand induction with inequalities very well.
• $3^k+3^k=2\cdot3^k<3\cdot3^k=3^{k+1}$ – M. Strochyk Jun 11 '13 at 6:28
• Check your last equality: it's wrong. Also, IH gives you the first inequality, not the second one, which follows from the trivial inequality $\,1<3^k\,$ – DonAntonio Jun 11 '13 at 6:28
• @Don: I think it was meant to be read (verbally) as "$k+1$ is less than $3^k+1-4$ (by IH), which is less than or equal to...." Still, I agree it is ambiguous notation. – Cameron Buie Jun 11 '13 at 6:42
You're very close! Your last equality was incorrect, though. Instead, $$3^k+1-4\le3^k+3^k+3^k-4=3^k\cdot 3-4=3^{k+1}-4.$$
$$k + 1 < 3^k + 1 - 4\lt 3(3^k + 1 - 4)=3^{k+1}-9\le3^{k+1}-4$$
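As a quick numerical sanity check of the statement being proved (not a substitute for the induction argument), the inequality can be tested for a range of values; this snippet is just an illustration.

```python
# Check n < 3**n - 4 for n = 2, ..., 14 (the induction proves it for all n >= 2).
for n in range(2, 15):
    assert n < 3**n - 4, n
print("holds for n = 2 .. 14")
```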
|
# Nanjing Airport Transport to Nearby Cities
There are many buses running directly to nearby cities from Nanjing Lukou International Airport, so it is very convenient for passengers who want to transfer from the airport to Wuhu, Wuxi, Yixing, Yangzhou, Xuancheng, Changzhou, Zhenjiang and other nearby cities.
| Destination | City Stop | Airport → City | City → Airport | Duration | Fare |
|---|---|---|---|---|---|
| Huai'an | No. 55, Jiankang East Road, Huai'an City | 11:00, 12:30, 14:00, 15:30, 17:00, 18:30, 20:20, 22:30 | 06:40, 08:00, 09:30, 11:00, 12:30, 13:30, 15:00, 16:30 | 3h | CNY90 |
| Yangzhong | Yangzhong General Bus Station (No. 999, Huancheng South Road) | 11:05, 13:35, 16:30, 19:30 | 06:30, 08:10, 13:40, 15:50 | 2h | CNY60 |
| Shuyang | Zixin Garden, Shenzhen West Road, Shuyang | 12:10, 14:35, 15:30, 18:10, 20:10 | 06:30, 08:00, 09:20, 10:20, 13:30 | 3h30m | CNY95 |
| Wuhu | Wuhu Liangzhan Square | 10:35, 12:05, 13:30, 14:30, 15:30, 16:30, 17:30, 19:00, 21:00 | 07:00, 08:00, 09:00, 10:00, 11:00, 13:00, 14:30, 16:00, 17:30 | 2h | CNY60 |
| Yixing | No. 88, Jiubin South Road, Yixing | 10:30, 12:00, 14:00, 16:00, 18:00, 20:00, 22:00 | 05:30, 07:30, 09:30, 12:00, 14:00, 16:00, 18:00 | 1h30m | CNY50 |
| Tongling | Tongling City Terminal | 11:00, 15:10, 17:30, 20:30 | 06:30, 09:30, 12:30, 14:30 | 3h | CNY90 |
| Xuancheng | Jinling Mansion, Meixi Road, Xuancheng | 11:30, 13:30, 18:00, 20:00 | 07:00, 10:00, 14:30, 16:30 | 2h30m | CNY60 |
| Jiangyin | Jiangyin Haobo International Hotel (No. 189, Qishan Road) | 11:10, 13:00, 15:30, 17:40, 19:40, 22:00 | 07:00, 09:00, 11:00, 13:30, 15:30, 17:30 | 2h | CNY70 |
| Wuxi | No. 88, Wuxi Bus Station Square | 11:10, 13:00, 15:00, 17:40, 19:40, 22:00 | 07:00, 09:00, 11:00, 13:30, 15:30, 17:30 | 2h | CNY60 |
| Changzhou | Guangdong Hotel-Changzhou (No. 38, Guanhe Middle Road); 102, Qingyuan Road, Changzhou | 10:40, 12:10, 14:10, 16:40, 18:00, 20:10 | Guangdong Hotel: 04:45, 07:00, 09:40, 11:10, 13:10, 15:10; Qingyuan Road: 05:00, 07:30, 10:00, 11:30, 13:30, 15:30 | 2h | CNY55 |
| Ma'anshan | Ma'anshan City Terminal (Ma'anshan Yushan Lake Hotel) | 09:30, 10:40, 12:05, 13:40, 15:00, 16:40, 18:10, 19:30, 21:30 | 06:00, 08:00, 09:00, 11:00, 12:00, 13:30, 15:00, 17:00, 19:00 | 1h | CNY40 |
| Liyang | Jinlv Ticket Business Ltd. (No. 18, Yanshan Middle Road, Liyang) | 11:30, 14:05, 16:35, 19:05 | 08:00, 11:00, 14:00, 16:00 | 70m | CNY45 |
| Suqian | Suqian City Terminal (Chengyu Building, 86, Xihu Lu) | 12:10, 14:35, 15:30, 18:10, 20:10 | 07:50, 09:40, 10:20, 11:40, 14:50 | 4h | CNY95 |
| Chuzhou | No. 1, Huancheng Road (South Lake Park) | 11:05, 13:05, 15:05, 17:00, 19:05, 21:00 | 07:00, 09:00, 11:00, 13:30, 15:30, 17:30 | 1h30m | CNY60 |
| Danyang | Jet-Speed Air Ticket Office (1F, Cultural Building) | 11:00, 13:40, 16:10, 19:20 | 07:20, 10:30, 13:30, 16:00 | 2h | CNY60 |
| Tianchang | Tianchang City Terminal | 10:00, 11:30, 13:05, 14:35, 16:05, 18:05, 20:05, 22:05 | 07:00, 08:30, 10:00, 11:30, 13:00, 14:30, 16:00, 17:30 | 2h | CNY70 |
| Zhenjiang | Zhenjiang City Terminal (South Square of the New Railway Station) | 10:30, 12:00, 13:30, 14:35, 15:30, 16:30, 18:05, 19:30, 21:00, 22:00 | 05:30, 07:00, 08:00, 09:30, 11:00, 12:30, 14:00, 15:30, 17:00, 18:30 | 1h30m | CNY60 |
| Yangzhou | No. 617, Yangzijiang Middle Road, Yangzhou; Yangzhou City Terminal (No. 201, Wenhuadong Lu) | 10:00, 11:00, 12:00, 13:00, 14:00, 15:00, 16:00, 17:00, 18:00, 19:00, 20:00, 21:00, 22:00, 23:30 | 05:00, 06:00, 07:00, 08:00, 09:00, 10:00, 11:30, 12:30, 13:30, 14:30, 15:30, 16:30, 17:30, 18:30 | 2h | CNY65 |
| Taizhou | Taizhou South Bus Station | 10:35, 12:30, 14:30, 16:35, 20:05 | 06:00, 08:30, 10:30, 13:30, 15:30 | 2h30m | CNY80 |
| Lianyungang | No. 48, Chaoyang West Road | 15:05, 19:00 | 08:00, 13:00 | 5h | CNY120 |
Questions & Answers on Nanjing Airport Transport to Nearby Cities
Airport bus from Jurong to Nanjing International Airport (NKG)
Hi, I've been told there's an airport shuttle bus between Jurong and Lukou International Airport? Is this correct? Can you help me with the following info, please?
1. where in Jurong can I buy a ticket?
2. where in Jurong does the airport bus depart from?
3. how much is a ticket?
I will need to be at the airport by about 18:00 hours on a Wednesday.
Xie xie
Answered by Lily from AUSTRALIA | Jul. 15, 2019 01:49
You can take a bus and buy the ticket at the Jurong Bus Station. It costs about CNY30 per person to the airport and the trip takes about an hour. You can take the bus at 16:00 or 17:00.
I am arriving on the 15th of October 2018 at Xiamen airport at 3:30pm.
Is there any bus to get to Taxia village (Shuyan town, Nanjing county)?
Jerome
Answered by David from USA | Oct. 11, 2018 23:53
You can take the airport bus at T3 and get off at Jinhu Road station. Then walk about 3 minutes and you will get to the Fanghu bus station. Take the bus to Nanjing at Fanghu bus station. The buses depart from 07:30 to 17:50, so you may be able to catch the bus at 16:40 or the last one at 17:50. The ticket fare is about CNY36 and the travel time is about 2.5 hours. On arrival at Nanjing Bus Station, you can take the special bus to Tulou and then take the shuttle bus at the Tulou tourist service center to Taxia Village.
Answered by Jerome from FRANCE | Oct. 12, 2018 02:48
Hello David,
Thank you very much for your very complete answer ! It helps a lot !
The best for you and your future trips
;-)
Take care
Jerome
I'm arriving in Nanjing Airport Saturday 17th March and I need to get to Yangzhou.
Then I will need to come back from Yangzhou to Lukou Airport on Friday 23rd March.
Is it possible to know the approximate cost, book the bus and pay it online?
Thank you
Well, you can take the shuttle bus to Yangzhou and back to the airport. The bus fare is CNY65 for a single trip, and the duration is about 2h. But I don't think it can be booked in advance; you can buy a ticket after arriving at the airport. The returning bus departs from where it drops you off.
Nanjing Lukou Airport to Wuxi
Hello, I'll arrive at NKG around 2:30pm. Is there any shuttle bus to Wuxi? I have two large suitcases and one carry-on luggage; can I purchase a ticket onsite? Thank you!
Answered by Emma from USA | Oct. 11, 2017 02:37
Hey, direct coaches for Wuxi are available at the airport. The daily schedules are 11:10, 13:00, 15:00, 17:40, 19:40 and 22:00, so you can catch one. It takes around 1.5h and CNY60/person. It is fine to get a ticket on the spot.
I want to go to Singapore from Nanjing. What is the easiest, fastest and cheapest way to get there?
Answered by Gary from AUSTRALIA | May. 14, 2016 20:50
|
# cyanogen sigma and pi bonds
A sigma (σ) bond is the strongest type of covalent bond. It is formed when two atomic orbitals overlap head-on, directly along the axis connecting the two nuclei; any kind of atomic orbitals (s–s, s–p, p–p or hybrid orbitals) can overlap in this way, and the first bond formed between two atoms is always a sigma bond. A pi (π) bond is formed by the side-on (lateral) overlap of two p orbitals; its electron density lies above and below a nodal plane that contains the internuclear axis. Because the lateral overlap is smaller than the head-on overlap, a sigma bond is stronger than a pi bond. A common misconception is that sigma bonds arise from lateral overlap; lateral overlap gives pi bonds, which appear in alkenes, alkynes and other multiply bonded species.
Counting is straightforward: every single bond is one sigma bond, a double bond is one sigma plus one pi bond, and a triple bond is one sigma plus two pi bonds. Some examples discussed in the text (a short counting sketch follows the list):
- Formaldehyde (H2CO): two C–H sigma bonds, one C–O sigma bond and one C–O pi bond, i.e. 3 σ + 1 π.
- Ethene (C2H4): the C=C double bond consists of a sigma bond from the head-on overlap of two sp² orbitals and a pi bond from the side-on overlap of two 2p orbitals; together with the four C–H bonds this gives 5 σ + 1 π, and the molecule is planar.
- HCN: one C–H sigma bond, one C–N sigma bond and two C–N pi bonds.
- CO2: 2 σ and 2 π (one of each per C=O bond); the molecule is linear, with a 180° bond angle.
- Cyanogen, N≡C–C≡N: three sigma bonds (two C≡N and one C–C), four pi bonds and two lone pairs (one on each nitrogen). In a molecular-orbital picture, the four aligned p orbitals in each pi plane mix into four molecular orbitals that hold the four pi electrons of that plane.
- Tetracyanomethane, C(CN)4: 16 bonds in total, of which 8 are sigma (four C–C and four C≡N sigma bonds) and 8 are pi.
- Cyanogen bromide (BrCN): the C≡N triple bond contributes one sigma and two pi bonds, and the C–Br single bond is one sigma bond. Cyanogen bromide cleaves peptide bonds at the C-terminus of methionine residues, a reaction used to cut polypeptides into smaller segments for identification and sequencing.
Bond on a like a elementary level zero electron density with a shared nodal plane among two nuclei... Nov 30, 2020 7:27 am types of covalent bonds in a molecule a mathematical π.. Game lies in guessing the total number of sigma and pi bonds are the differences between and. Bromide is the inorganic compound with the formula ( CN ) Br or BrCN consists of 1 . Or BrCN gebildet s und p Orbitale von zwei Atomen, die der. Model helps explain molecules with double or triple bonds ( double and triple bonds while... Are 3 bonds ( eg N2N2 ), one must be covalent and 2., ionic bond and sigma bond is formed by the overlapping of atomic orbitals sigma 8! Are formed by the head-on overlap of two # # 2p # # double bond of. Second and THIRD bonds to be made between two atoms head-on overlap two! They contain eight electrons in their valence shell diagrams allow us to view the configuration... 33 Joined: Thu Oct 01, 2020 4:31 … Fig 1: Formation of sigma! Bond consists of 1 sigma - bond and sigma bond and 2. Addition to the first '' /sigma bond are those bonds which formed! And how can you recognize them in examples or pictures explains the hybridization atomic! Of six sigma bonds are formed from each carbon atom for a total of sigma... Linear molecule pi bonds- 8 ( between carbon and Nitrogen i.e 2 … it ’ simple... A elementary level ways molecules … Summary: sigma and pi ( π ) bonds form in covalent when. Different structures Mon Nov 30, 2020 4:47 am bonds cyanogen sigma and pi bonds be made Orbitale zwei! That a sigma bond and sigma bond and a pi bond and pi!, I really didnt understand the concept behind Br or BrCN be able to tell sigma... To tell what sigma bonds in organic chemistry, pi bond bonds that differ in structure. Orbital overlap and THIRD bonds to be made between two atoms are first! 2 pi - bond and 2 pi bonds in a molecule 1 )! Compound with the formula ( CN ) Br or BrCN with a nodal... Are derived from the Greek letters and the promise bonds and they not... With a shared nodal plane among two bonded nuclei bond, coordinate cyanogen sigma and pi bonds, contains...: sigma and pi ( π ) bonds form in covalent substances when orbitals... S simple are there covalent and other 2 pi the two bonds 180. Try to explain the concept of pi bond ( side-on overlap of two # # orbitals this. ) and pi bonds are formed from each carbon atom for a total of six sigma bonds it is single! I really didnt understand the concept of pi bond ( side-on overlap of two # sp^2. Triple covalent bond, it contains only sigma bond 3 ways molecules … Summary: sigma and bonds.
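The counting rule above is easy to turn into a few lines of code. The following is a purely illustrative sketch in Julia (the function name and input convention are invented for this example): it takes a list of bond orders (1 = single, 2 = double, 3 = triple) and returns the sigma and pi counts.
```julia
# Illustrative only: count sigma and pi bonds from a list of bond orders.
# Each bond contributes one sigma bond; every unit of bond order beyond the
# first is a pi bond.
function sigma_pi(bond_orders)
    nsigma = length(bond_orders)          # one sigma per bond
    npi    = sum(bond_orders) - nsigma    # remaining bond order comes from pi bonds
    return nsigma, npi
end

sigma_pi([3, 1, 3])   # cyanogen N≡C–C≡N gives (3, 4)
```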
|
Browse Questions
# What is the concentration of $A$ at time $45\;s$ if $[A]_o = 1\;M$, $[B]_o = 45\;M$, and 2nd order rate constant is $0.6\;M^{-1}s^{-1}$?
$\begin{array}{1 1} 1.88 \times 10^{-12}\;M \\ 3.76 \times 10^{-12}\;M \\ 0.94 \times 10^{-12}\;M \\ 4.7 \times 10^{-12}\;M \end{array}$
Answer: $1.88 \times 10^{-12}\;M$
Given $[A]_o = 1\;M$, $[B]_o = 45\;M$, and the 2nd order rate constant $k' = 0.6\;M^{-1}s^{-1}$.
Since $[B]_0 \gt \gt [A]_0$ we can treat this as a pseudo-first order reaction.
We can use the rate equation $[A] = [A]_0 e^{-k't[B]_0}$
$\Rightarrow [A] = 1\;M\;e^{-0.6M^{-1}s^{-1} \times 45\;s \times 45\;M} =1.88 \times 10^{-12}\;M$
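For reference, the pseudo-first-order form used above follows from treating $[B]$ as approximately constant at $[B]_0$: $-\frac{d[A]}{dt} = k'[A][B] \approx k'[B]_0[A] \Rightarrow [A] = [A]_0\;e^{-k'[B]_0 t}$, where $k'$ is the second-order rate constant.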
answered Jul 25, 2014
|
# What is the main purpose of blood vessels?
Nov 17, 2016
#### Answer:
The main purpose of blood vessels is to transport blood around the body.
#### Explanation:
The blood carries oxygen and nutrients to the cells and carries away carbon dioxide and waste products.
|
Knet
237
Koç University deep learning framework.
Introduction to Knet
Knet (pronounced "kay-net") is the Koç University deep learning framework implemented in Julia by Deniz Yuret and collaborators. It supports GPU operation and automatic differentiation using dynamic computational graphs for models defined in plain Julia. This document is a tutorial introduction to Knet. Check out the full documentation and Examples for more information. If you need help or would like to request a feature, please consider joining the knet-users mailing list. If you find a bug, please open a GitHub issue. If you would like to contribute to Knet development, check out the knet-dev mailing list and Tips for developers. If you use Knet in academic work, here is a paper that can be cited:
@inproceedings{knet2016mlsys,
author={Yuret, Deniz},
title={Knet: beginning deep learning with 100 lines of Julia},
year={2016},
booktitle={Machine Learning Systems Workshop at NIPS 2016}
}
Contents
<a id='Philosophy-1'></a>
Philosophy
Knet uses dynamic computational graphs generated at runtime for automatic differentiation of (almost) any Julia code. This allows machine learning models to be implemented by defining just the forward calculation (i.e. the computation from parameters and data to loss) using the full power and expressivity of Julia. The implementation can use helper functions, loops, conditionals, recursion, closures, tuples and dictionaries, array indexing, concatenation and other high level language features, some of which are often missing in the restricted modeling languages of static computational graph systems like Theano, Torch, Caffe and Tensorflow. GPU operation is supported by simply using the KnetArray type instead of regular Array for parameters and data.
Knet builds a dynamic computational graph by recording primitive operations during forward calculation. Only pointers to inputs and outputs are recorded for efficiency. Therefore array overwriting is not supported during forward and backward passes. This encourages a clean functional programming style. High performance is achieved using custom memory management and efficient GPU kernels. See Under the hood for more details.
<a id='Tutorial-1'></a>
Tutorial
In Knet, a machine learning model is defined using plain Julia code. A typical model consists of a prediction and a loss function. The prediction function takes model parameters and some input, returns the prediction of the model for that input. The loss function measures how bad the prediction is with respect to some desired output. We train a model by adjusting its parameters to reduce the loss. In this section we will see the prediction, loss, and training functions for five models: linear regression, softmax classification, fully-connected, convolutional and recurrent neural networks. It would be best to copy paste and modify these examples on your own computer. They are also available as an IJulia notebook. You can install Knet using Pkg.add("Knet") in Julia.
<a id='Linear-regression-1'></a>
Linear regression
Here is the prediction function and the corresponding quadratic loss function for a simple linear regression model:
using Knet
predict(w,x) = w[1]*x .+ w[2]
loss(w,x,y) = mean(abs2,y-predict(w,x))
The variable w is a list of parameters (it could be a Tuple, Array, or Dict), x is the input and y is the desired output. To train this model, we want to adjust its parameters to reduce the loss on given training examples. The direction in the parameter space in which the loss reduction is maximum is given by the negative gradient of the loss. Knet uses the higher-order function grad from AutoGrad.jl to compute the gradient direction:
lossgradient = grad(loss)
Note that grad is a higher-order function that takes and returns other functions. The lossgradient function takes the same arguments as loss, e.g. dw = lossgradient(w,x,y). Instead of returning a loss value, lossgradient returns dw, the gradient of the loss with respect to its first argument w. The type and size of dw is identical to w, each entry in dw gives the derivative of the loss with respect to the corresponding entry in w.
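As a quick sanity check, here is a toy use of grad on a one-argument scalar function. This is only an illustrative sketch, assuming the grad-based Knet/AutoGrad API used throughout this tutorial:
```julia
using Knet                # grad comes from AutoGrad and is re-exported by Knet

f(x) = 3x^2 + 2x          # a toy scalar function
g = grad(f)               # g computes df/dx with respect to the first argument
g(1.0)                    # returns 8.0, since f'(x) = 6x + 2
```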
Given some training data = [(x1,y1),(x2,y2),...], here is how we can train this model:
function train(w, data; lr=.1)
for (x,y) in data
dw = lossgradient(w,x,y)
for i in 1:length(w)
w[i] -= lr * dw[i]
end
end
return w
end
We simply iterate over the input-output pairs in data, calculate the lossgradient for each example, and move the parameters in the negative gradient direction with a step size determined by the learning rate lr.
Let's train this model on the Boston Housing dataset from the UCI Machine Learning Repository.
include(Knet.dir("data","housing.jl"))
x,y = housing()
w = Any[ 0.1*randn(1,13), 0.0 ]
for i=1:10; train(w, [(x,y)]); println(loss(w,x,y)); end
# 366.0463078055053
# ...
# 29.63709385230451
The dataset has housing related information for 506 neighborhoods in Boston from 1978. Each neighborhood is represented using 13 attributes such as crime rate or distance to employment centers. The goal is to predict the median value of the houses given in 1000's. The housing() function from housing.jl downloads, splits and normalizes the data. We initialize the parameters randomly and take 10 steps in the negative gradient direction. We can see the loss dropping from 366.0 to 29.6. See the housing example for more information on this model.
Note that grad was the only function used that is not in the Julia standard library. This is typical of models defined in Knet, where most of the code is written in plain Julia.
<a id='Softmax-classification-1'></a>
Softmax classification
In this example we build a simple classification model for the MNIST handwritten digit recognition dataset. MNIST has 60000 training and 10000 test examples. Each input x consists of 784 pixels representing a 28x28 image. The corresponding output indicates the identity of the digit 0..9.
Classification models handle discrete outputs, as opposed to regression models which handle numeric outputs. We typically use the cross entropy loss function in classification models:
predict(w,x) = w[1]*mat(x) .+ w[2]
loss(w,x,ygold) = nll(predict(w,x), ygold)
lossgradient = grad(loss)
nll computes the negative log likelihood of your predictions compared to the correct answers. Here, we assume ygold is an array of N integers indicating the correct answers for N instances (we use ygold=10 to represent the 0 answer) and predict() gives us a (10,N) matrix of scores for each answer. mat is needed to convert the (28,28,1,N) x array to a (784,N) matrix so it can be used in matrix multiplication. Other than the change of loss function, the softmax model is identical to the linear regression model. We use the same predict (except for mat reshaping), train and set lossgradient=grad(loss) as before.
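For intuition, here is a minimal sketch in plain Julia of the quantity that nll computes for a (C,N) score matrix and N integer labels. This is not Knet's implementation, just the underlying idea:
```julia
# Illustrative sketch of a negative log likelihood (cross entropy) loss.
function nll_sketch(scores, ygold)            # scores: (C,N) matrix, ygold: N integer labels in 1:C
    total = 0.0
    for n in 1:length(ygold)
        col = scores[:, n]
        m = maximum(col)
        logz = m + log(sum(exp.(col .- m)))   # numerically stable log-sum-exp
        total -= col[ygold[n]] - logz         # minus the log softmax probability of the gold class
    end
    return total / length(ygold)
end
```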
Now let's train a model on the MNIST data:
include(Knet.dir("data","mnist.jl"))
xtrn, ytrn, xtst, ytst = mnist()
dtrn = minibatch(xtrn, ytrn, 100)
dtst = minibatch(xtst, ytst, 100)
w = Any[ 0.1f0*randn(Float32,10,784), zeros(Float32,10,1) ]
println((:epoch, 0, :trn, accuracy(w,dtrn,predict), :tst, accuracy(w,dtst,predict)))
for epoch=1:10
train(w, dtrn; lr=0.5)
println((:epoch, epoch, :trn, accuracy(w,dtrn,predict), :tst, accuracy(w,dtst,predict)))
end
# (:epoch,0,:trn,0.11761667f0,:tst,0.121f0)
# (:epoch,1,:trn,0.9005f0,:tst,0.9048f0)
# ...
# (:epoch,10,:trn,0.9196f0,:tst,0.9153f0)
Calling mnist() from mnist.jl loads the MNIST data, downloading it from the internet if necessary, and provides a training set (xtrn,ytrn) and a test set (xtst,ytst). minibatch is used to rearrange the data into chunks of 100 instances. After randomly initializing the parameters we train for 10 epochs, printing out training and test set accuracy at every epoch. The final accuracy of about 92% is close to the limit of what we can achieve with this type of model. To improve further we must look beyond linear models.
<a id='Multi-layer-perceptron-1'></a>
Multi-layer perceptron
A multi-layer perceptron, i.e. a fully connected feed-forward neural network, is basically a bunch of linear regression models stuck together with non-linearities in between.
We can define a MLP by slightly modifying the predict function:
function predict(w,x)
x = mat(x)
for i=1:2:length(w)-2
x = relu.(w[i]*x .+ w[i+1])
end
return w[end-1]*x .+ w[end]
end
Here w[2k-1] is the weight matrix and w[2k] is the bias vector for the k'th layer. relu implements the popular rectifier non-linearity: relu.(x) = max.(0,x). Note that if w only has two entries, this is equivalent to the linear and softmax models. By adding more entries to w, we can define multi-layer perceptrons of arbitrary depth. Let's define one with a single hidden layer of 64 units:
w = Any[ 0.1f0*randn(Float32,64,784), zeros(Float32,64,1),
0.1f0*randn(Float32,10,64), zeros(Float32,10,1) ]
The rest of the code is the same as the softmax model. We can use the same cross-entropy loss function and the same training script. However, we will use a different train function to introduce alternative optimizers:
function train(model, data, optim)
for (x,y) in data
grads = lossgradient(model,x,y)
update!(model, grads, optim)
end
end
Here the optim argument specifies the optimization algorithm and state for each model parameter (see Optimization methods for available algorithms). update! uses optim to update each model parameter and optimization state. optim has the same size and shape as model, i.e. we have a separate optimizer for each model parameter. For simplicity we will use the optimizers function to create an Adam optimizer for each parameter:
o = optimizers(w, Adam)
println((:epoch, 0, :trn, accuracy(w,dtrn,predict), :tst, accuracy(w,dtst,predict)))
for epoch=1:10
train(w, dtrn, o)
println((:epoch, epoch, :trn, accuracy(w,dtrn,predict), :tst, accuracy(w,dtst,predict)))
end
The code for this example is available in the mnist-mlp example or the knet-tutorial notebook. The multi-layer perceptron does significantly better than the softmax model:
(:epoch,0,:trn,0.10166667f0,:tst,0.0977f0)
(:epoch,1,:trn,0.9389167f0,:tst,0.9407f0)
...
(:epoch,10,:trn,0.9866f0,:tst,0.9735f0)
<a id='Convolutional-neural-network-1'></a>
Convolutional neural network
To improve the performance further, we can use a convolutional neural network (CNN). See the course notes by Andrej Karpathy for a good introduction to CNNs. We will implement the LeNet model, which consists of two convolutional layers followed by two fully connected layers.
Knet provides the conv4 and pool functions for the implementation of convolutional nets:
function predict(w,x0)
x1 = pool(relu.(conv4(w[1],x0) .+ w[2]))
x2 = pool(relu.(conv4(w[3],x1) .+ w[4]))
x3 = relu.(w[5]*mat(x2) .+ w[6])
return w[7]*x3 .+ w[8]
end
The weights for the convolutional net can be initialized as follows.
w = Any[ xavier(Float32,5,5,1,20), zeros(Float32,1,1,20,1),
xavier(Float32,5,5,20,50), zeros(Float32,1,1,50,1),
xavier(Float32,500,800), zeros(Float32,500,1),
xavier(Float32,10,500), zeros(Float32,10,1) ]
Here we used xavier instead of randn which initializes weights based on their input and output widths.
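For reference, the Glorot/Xavier idea for a dense (out, in) weight matrix can be sketched as below. This is only a sketch of the scheme; Knet's xavier may differ in its exact scaling and in how it handles convolutional weight dimensions:
```julia
# Uniform initialization in the range ±sqrt(6 / (fan_in + fan_out)).
xavier_sketch(o, i) = (2 .* rand(o, i) .- 1) .* sqrt(6 / (i + o))
```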
This model is larger and more expensive to train compared to the previous models we have seen and it would be nice to use our GPU. To perform the operations on the GPU, all we need to do is to convert our data and weights to KnetArrays. minibatch takes an extra keyword argument xtype for this purpose, and we do it manually for the w weights:
dtrn = minibatch(xtrn,ytrn,100,xtype=KnetArray)
dtst = minibatch(xtst,ytst,100,xtype=KnetArray)
w = map(KnetArray, w)
The training proceeds as before giving us even better results. The code for the LeNet example can be found under the examples directory.
(:epoch, 0, :trn, 0.10435, :tst, 0.103)
(:epoch, 1, :trn, 0.98385, :tst, 0.9836)
...
(:epoch, 10, :trn, 0.9955166666666667, :tst, 0.9902)
<a id='Recurrent-neural-network-1'></a>
Recurrent neural network
In this section we will see how to implement a recurrent neural network (RNN) in Knet. This example, like the last one, requires a GPU. An RNN is a class of neural network where connections between units form a directed cycle, which allows them to keep a persistent state over time. This gives them the ability to process sequences of arbitrary length one element at a time, while keeping track of what happened at previous elements.
As an example, we will build a character-level language model inspired by "The Unreasonable Effectiveness of Recurrent Neural Networks" from the Andrej Karpathy blog. The model can be trained with different genres of text, and can be used to generate original text in the same style.
We will use The Complete Works of William Shakespeare to train our model. The shakespeare() function defined in gutenberg.jl downloads the book and splits the data into 5M chars for training and 0.5M chars for testing.
include(Knet.dir("data","gutenberg.jl"))
trn,tst,chars = shakespeare()
map(summary,(trn,tst,chars))
# ("4925284-element Array{UInt8,1}", "525665-element Array{UInt8,1}", "84-element Array{Char,1}")
There are 84 unique characters in the data and they are mapped to UInt8 values in 1:84. The chars array can be used to recover the original text:
julia> println(string(chars[trn[1020:1210]]...))
Cheated of feature by dissembling nature,
Deform'd, unfinish'd, sent before my time
Into this breathing world scarce half made up,
And that so lamely and unfashionable
We minibatch the data into (256,100) blocks:
BATCHSIZE = 256 # number of sequences per minibatch
SEQLENGTH = 100 # sequence length for bptt
function mb(a)
N = div(length(a),BATCHSIZE)
x = reshape(a[1:N*BATCHSIZE],N,BATCHSIZE)' # reshape full data to (B,N) with contiguous rows
minibatch(x[:,1:N-1], x[:,2:N], SEQLENGTH) # split into (B,T) blocks
end
dtrn,dtst = mb(trn),mb(tst)
map(length, (dtrn,dtst))
# (192, 20)
The initmodel function below initializes the weights for an RNN language model. It returns a tuple where r,w are the RNN spec and weights, wx is the input embedding matrix, wy,by are the weight matrix and bias to produce the output from the hidden state. See rnninit for a full description of available options.
RNNTYPE = :lstm # can be :lstm, :gru, :tanh, :relu
NUMLAYERS = 1 # number of RNN layers
INPUTSIZE = 168 # size of the input character embedding
HIDDENSIZE = 334 # size of the hidden layers
VOCABSIZE = 84 # number of unique characters in data
function initmodel()
w(d...)=KnetArray(xavier(Float32,d...))
b(d...)=KnetArray(zeros(Float32,d...))
r,wr = rnninit(INPUTSIZE,HIDDENSIZE,rnnType=RNNTYPE,numLayers=NUMLAYERS)
wx = w(INPUTSIZE,VOCABSIZE)
wy = w(VOCABSIZE,HIDDENSIZE)
by = b(VOCABSIZE,1)
return r,wr,wx,wy,by
end
A character based language model needs to predict the next character in a piece of text given the current character and recent history as encoded in the internal state of the RNN. Note that LSTMs have two state variables typically called hidden and cell. The predict function below takes weights ws, inputs xs, the initial hidden and cell states hx and cx and returns output scores ys along with the final hidden and cell states hy and cy. See rnnforw for available options and the exact computations performed.
function predict(ws,xs,hx,cx)
r,wr,wx,wy,by = ws
x = wx[:,xs] # xs=(B,T) x=(X,B,T)
y,hy,cy = rnnforw(r,wr,x,hx,cx,hy=true,cy=true) # y=(H,B,T) hy=cy=(H,B,L)
ys = by.+wy*reshape(y,size(y,1),size(y,2)*size(y,3)) # ys=(V,B*T)
return ys, hy, cy
end
The loss function returns the negative-log-likelihood from the predicted scores and updates the hidden and cell states h in-place. getval is necessary to prevent AutoGrad state leaking from one minibatch to the next. We use gradloss instead of grad so that lossgradient returns both the gradient and the loss for reporting.
function loss(w,x,y,h)
py,hy,cy = predict(w,x,h...)
h[1],h[2] = getval(hy),getval(cy)
return nll(py,y)
end
lossgradient = gradloss(loss)
Here is the train and test loops. When hidden and cell values are set to nothing, rnnforw assumes zero vectors.
function train(model,data,optim)
hiddens = Any[nothing,nothing]
losses = []
for (x,y) in data
grads,loss1 = lossgradient(model,x,y,hiddens)
update!(model, grads, optim)
push!(losses, loss1)
end
return mean(losses)
end
function test(model,data)
hiddens = Any[nothing,nothing]
losses = []
for (x,y) in data
push!(losses, loss(model,x,y,hiddens))
end
return mean(losses)
end
We are ready to initialize and train our model. We report train and test perplexity after every epoch. 30 epochs take less than 10 minutes with a K80 GPU:
EPOCHS = 30
model = initmodel()
optim = optimizers(model,Adam)
@time for epoch in 1:EPOCHS
@time trnloss = train(model,dtrn,optim) # ~18 seconds
@time tstloss = test(model,dtst) # ~0.5 seconds
println((:epoch, epoch, :trnppl, exp(trnloss), :tstppl, exp(tstloss)))
end
# 17.228594 seconds (243.32 k allocations: 131.754 MiB, 0.05% gc time)
# 0.713869 seconds (208.56 k allocations: 19.673 MiB, 0.50% gc time)
# (:epoch, 1, :trnppl, 13.917706f0, :tstppl, 7.7539396f0)
# ...
# (:epoch, 30, :trnppl, 3.0681787f0, :tstppl, 3.350249f0)
# 533.660206 seconds (7.69 M allocations: 4.132 GiB, 0.03% gc time)
To generate text we sample each character randomly using the probabilities predicted by the model based on the previous character. The helper function sample takes unnormalized scores y and samples an index based on normalized probabilities based on y. The first character is initialized to newline and n characters are sampled based on the model.
function generate(model,n)
function sample(y)
p,r=Array(exp.(y-logsumexp(y))),rand()
for j=1:length(p); (r -= p[j]) < 0 && return j; end
end
h,c = nothing,nothing
x = findfirst(chars,'\n')
for i=1:n
y,h,c = predict(model,[x],h,c)
x = sample(y)
print(chars[x])
end
println()
end
generate(model,1000)
Here is a random sample of 1000 characters from the model. Note that the model has learnt to generate person names, correct indentation and mostly English words only by reading Shakespeare one letter at a time! The code for this example is available in the charlm notebook.
Pand soping them, my lord, if such a foolish?
MARTER. My lord, and nothing in England's ground to new comp'd.
To bless your view of wot their dullst. If Doth no ape;
Which with the heart. Rome father stuff
These shall sweet Mary against a sudden him
Upon up th' night is a wits not that honour,
Shouts have sure?
MACBETH. Hark? And, Halcance doth never memory I be thou what
My enties mights in Tim thou?
PIESTO. Which it time's purpose mine hortful and
is my Lord.
BOTTOM. My lord, good mine eyest, then: I will not set up.
LUCILIUS. Who shall
<a id='Benchmarks-1'></a>
Benchmarks
<a id='Knet-Benchmarks-(Sep-30,-2016)-1'></a>
Knet Benchmarks (Sep 30, 2016)
Each of the examples above was used as a benchmark to compare Knet with other frameworks. The table below shows the number of seconds it takes to train a given model for a particular dataset, number of epochs and minibatch size for Knet, Theano, Torch, Caffe and TensorFlow. Knet had comparable performance to other commonly used frameworks.
| model | dataset | epochs | batch | Knet | Theano | Torch | Caffe | TFlow |
|-------|---------|--------|-------|------|--------|-------|-------|-------|
| LinReg | Housing | 10K | 506 | 2.84 | 1.88 | 2.66 | 2.35 | 5.92 |
| Softmax | MNIST | 10 | 100 | 2.35 | 1.40 | 2.88 | 2.45 | 5.57 |
| MLP | MNIST | 10 | 100 | 3.68 | 2.31 | 4.03 | 3.69 | 6.94 |
| LeNet | MNIST | 1 | 100 | 3.59 | 3.03 | 1.69 | 3.54 | 8.77 |
| CharLM | Hiawatha | 1 | 128 | 2.25 | 2.42 | 2.23 | 1.43 | 2.86 |
The benchmarking was done on g2.2xlarge GPU instances on Amazon AWS. The code is available at github and as machine image deep_AMI_v6 at AWS N.California. See the section on Using Amazon AWS for more information. The datasets are available online using the following links: Housing, MNIST, Hiawatha. The MLP uses a single hidden layer of 64 units. CharLM uses a single layer LSTM language model with embedding and hidden layer sizes set to 256 and trained using BPTT with a sequence length of 100. Each dataset was minibatched and transferred to GPU prior to benchmarking when possible.
<a id='DyNet-Benchmarks-(Dec-15,-2017)-1'></a>
DyNet Benchmarks (Dec 15, 2017)
We implemented dynamic neural network examples from the dynet-benchmark repo to compare Knet with DyNet and Chainer. See DyNet technical report for the architectural details of the implemented examples and the github repo for the source code.
Benchmarks were run on a server with Intel(R) Xeon(R) CPU E5-2695 v4 @ 2.10GHz and Tesla K80.
| Model | Metric | Knet | DyNet | Chainer |
|-------|--------|------|-------|---------|
| rnnlm-batch | words/sec | 28.5k | 18.7k | 16k |
| bilstm-tagger | words/sec | 6800 | 1200 | 157 |
| bilstm-tagger-withchar | words/sec | 1300 | 900 | 128 |
| treenn | sents/sec | 43 | 68 | 10 |
<a id='DeepLearningFrameworks-(Nov-24,-2017)-1'></a>
DeepLearningFrameworks (Nov 24, 2017)
More recently, @ilkarman has published CNN and RNN benchmarks on Nvidia K80 GPUs, using the Microsoft Azure Data Science Virtual Machine for Linux (Ubuntu). The results are copied below. You can find versions of the Knet notebooks used for these benchmarks in the Knet/examples/DeepLearningFrameworks directory.
Training CNN (VGG-style) on CIFAR-10 - Image Recognition
| DL Library | Test Accuracy (%) | Training Time (s) |
|------------|-------------------|-------------------|
| MXNet | 77 | 145 |
| Caffe2 | 79 | 148 |
| Gluon | 76 | 152 |
| Knet(Julia) | 78 | 159 |
| Chainer | 79 | 162 |
| CNTK | 78 | 163 |
| PyTorch | 78 | 169 |
| Tensorflow | 78 | 173 |
| Keras(CNTK) | 77 | 194 |
| Keras(TF) | 77 | 241 |
| Lasagne(Theano) | 77 | 253 |
| Keras(Theano) | 78 | 269 |
Training RNN (GRU) on IMDB - Natural Language Processing (Sentiment Analysis)
| DL Library | Test Accuracy (%) | Training Time (s) | Using CuDNN? |
|------------|-------------------|-------------------|--------------|
| MXNet | 86 | 29 | Yes |
| Knet(Julia) | 85 | 29 | Yes |
| Tensorflow | 86 | 30 | Yes |
| Pytorch | 86 | 31 | Yes |
| CNTK | 85 | 32 | Yes |
| Keras(TF) | 86 | 35 | Yes |
| Keras(CNTK) | 86 | 86 | Not Available |
Inference ResNet-50 (Feature Extraction)
| DL Library | Images/s GPU | Images/s CPU |
|------------|--------------|--------------|
| Knet(Julia) | 160 | 2 |
| Tensorflow | 155 | 11 |
| PyTorch | 130 | 6 |
| MXNet | 130 | 8 |
| MXNet(w/mkl) | 129 | 25 |
| CNTK | 117 | 8 |
| Chainer | 107 | 3 |
| Keras(TF) | 98 | 5 |
| Caffe2 | 71 | 6 |
| Keras(CNTK) | 46 | 4 |
<a id='Under-the-hood-1'></a>
Under the hood
Knet relies on the AutoGrad package and the KnetArray data type for its functionality and performance. AutoGrad computes the gradient of Julia functions and KnetArray implements high performance GPU arrays with custom memory management. This section briefly describes them.
<a id='KnetArrays-1'></a>
KnetArrays
GPUs have become indispensable for training large deep learning models. Even the small examples implemented here run up to 17x faster on the GPU compared to the 8 core CPU architecture we use for benchmarking. However GPU implementations have a few potential pitfalls: (i) GPU memory allocation is slow, (ii) GPU-RAM memory transfer is slow, (iii) reduction operations (like sum) can be very slow unless implemented properly (See Optimizing Parallel Reduction in CUDA).
Knet implements KnetArray as a Julia data type that wraps GPU array pointers. KnetArray is based on the more standard CudaArray with a few important differences: (i) KnetArrays have a custom memory manager, similar to ArrayFire, which reuse pointers garbage collected by Julia to reduce the number of GPU memory allocations, (ii) contiguous array ranges (e.g. a[:,3:5]) are handled as views with shared pointers instead of copies when possible, and (iii) a number of custom CUDA kernels written for KnetArrays implement element-wise, broadcasting, and scalar and vector reduction operations efficiently. As a result Knet allows users to implement their models using high-level code, yet be competitive in performance with other frameworks as demonstrated in the benchmarks section.
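A small illustrative snippet of KnetArray usage is shown below. It assumes a CUDA-capable GPU and the Knet version described in this document:
```julia
using Knet

a = KnetArray(rand(Float32, 10, 10))   # copy a host Array to the GPU
b = a[:, 3:5]                          # contiguous columns: a shared-pointer view, not a copy
c = relu.(a * a) .+ 1f0                # matrix multiply, broadcasting and relu run as GPU kernels
d = Array(c)                           # copy back to the host when CPU access is needed
```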
As we have seen, many common machine learning models can be expressed as differentiable programs that input parameters and data and output a scalar loss value. The loss value measures how close the model predictions are to desired values with the given parameters. Training a model can then be seen as an optimization problem: find the parameters that minimize the loss. Typically, a gradient based optimization algorithm is used for computational efficiency: the direction in the parameter space in which the loss reduction is maximum is given by the negative gradient of the loss with respect to the parameters. Thus gradient computations take a central stage in software frameworks for machine learning. In this section I will briefly outline existing gradient computation techniques and motivate the particular approach taken by Knet.
Computation of gradients in computer models is performed by four main methods (Baydin et al. 2015):
• manual differentiation (programming the derivatives)
• numerical differentiation (using finite difference approximations)
• symbolic differentiation (using expression manipulation)
• automatic differentiation (detailed below)
Manually taking derivatives and coding the result is labor intensive, error-prone, and all but impossible with complex deep learning models. Numerical differentiation is simple: $f'(x)=(f(x+\epsilon)-f(x-\epsilon))/(2\epsilon)$ but impractical: the finite difference equation needs to be evaluated for each individual parameter, of which there are typically many. Pure symbolic differentiation using expression manipulation, as implemented in software such as Maxima, Maple, and Mathematica is impractical for different reasons: (i) it may not be feasible to express a machine learning model as a closed form mathematical expression, and (ii) the symbolic derivative can be exponentially larger than the model itself leading to inefficient run-time calculation. This leaves us with automatic differentiation.
Automatic differentiation is the idea of using symbolic derivatives only at the level of elementary operations, and computing the gradient of a compound function by applying the chain rule to intermediate numerical results. For example, pure symbolic differentiation of $\sin^2(x)$ could give us $2\sin(x)\cos(x)$ directly. Automatic differentiation would use the intermediate numerical values $x_1=\sin(x)$, $x_2=x_1^2$ and the elementary derivatives $dx_2/dx_1=2x_1$, $dx_1/dx=\cos(x)$ to compute the same answer without ever building a full gradient expression.
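These approaches can be compared directly on the $\sin^2(x)$ example. The snippet below is a small sketch assuming the grad-based Knet/AutoGrad API used in this document:
```julia
using Knet                                  # provides grad via AutoGrad

f(x) = sin(x)^2
g = grad(f)                                 # automatic differentiation
x, h = 0.7, 1e-6
numeric  = (f(x + h) - f(x - h)) / (2h)     # numerical (finite difference) derivative
symbolic = 2 * sin(x) * cos(x)              # hand-derived symbolic derivative
(g(x), numeric, symbolic)                   # the three values agree to about 1e-9
```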
To implement automatic differentiation the target function needs to be decomposed into its elementary operations, a process similar to compilation. Most machine learning frameworks (such as Theano, Torch, Caffe, Tensorflow and older versions of Knet prior to v0.8) compile models expressed in a restricted mini-language into a static computational graph of elementary operations that have pre-defined derivatives. There are two drawbacks with this approach: (i) the restricted mini-languages tend to have limited support for high-level language features such as conditionals, loops, helper functions, array indexing, etc. (e.g. the infamous scan operation in Theano) (ii) the sequence of elementary operations that unfold at run-time needs to be known in advance, and they are difficult to handle when the sequence is data dependent.
There is an alternative: high-level languages, like Julia and Python, already know how to decompose functions into their elementary operations. If we let the users define their models directly in a high-level language, then record the elementary operations during loss calculation at run-time, a dynamic computational graph can be constructed from the recorded operations. The cost of recording is not prohibitive: The table below gives cumulative times for elementary operations of an MLP with quadratic loss. Recording only adds 15% to the raw cost of the forward computation. Backpropagation roughly doubles the total time as expected.
| op | secs |
|----|------|
| a1=w1*x | 0.67 |
| a2=w2.+a1 | 0.71 |
| a3=max.(0,a2) | 0.75 |
| a4=w3*a3 | 0.81 |
| a5=w4.+a4 | 0.85 |
| a6=a5-y | 0.89 |
| a7=sum(abs2,a6) | 1.18 |
| +recording | 1.33 |
| +backprop | 2.79 |
This is the approach taken by the popular autograd Python package and its Julia port AutoGrad.jl used by Knet. Recently, other machine learning frameworks have been adapting dynamic computational graphs: Chainer, DyNet, PyTorch, TensorFlow Fold.
In Knet g=grad(f) generates a gradient function g, which takes the same inputs as the function f but returns the gradient. The gradient function g triggers recording by boxing the parameters in a special data type and calls f. The elementary operations in f are overloaded to record their actions and output boxed answers when their inputs are boxed. The sequence of recorded operations is then used to compute gradients. In the Julia AutoGrad package, derivatives can be defined independently for each method of a function (determined by argument types) making full use of Julia's multiple dispatch. New elementary operations and derivatives can be defined concisely using Julia's macro and meta-programming facilities. See AutoGrad.jl for details.
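To see this run-time recording in action, here is a small sketch of grad differentiating through ordinary Julia control flow (a loop with indexing). It assumes the grad-based Knet/AutoGrad API described here:
```julia
using Knet

function poly(w)                 # w: a vector of coefficients
    s = 0.0
    for i in 1:length(w)
        s += w[i]^i              # loops and indexing are recorded at run time
    end
    return s
end

gpoly = grad(poly)
gpoly([1.0, 2.0, 3.0])           # returns [1.0, 4.0, 27.0], i.e. the partials d(w_i^i)/dw_i
```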
<a id='Contributing-1'></a>
Contributing
Knet is an open-source project and we are always open to new contributions: bug reports and fixes, feature requests and contributions, new machine learning models and operators, inspiring examples, benchmarking results are all welcome. If you would like to contribute to Knet development, check out the knet-dev mailing list and Tips for developers.
Current contributors:
• Deniz Yuret
• Ozan Arkan Can
• Onur Kuru
• Emre Ünal
• Erenay Dayanık
• Ömer Kırnap
• İlker Kesen
• Emre Yolcu
• Meriç Melike Softa
• Ekrem Emre Yurdakul
• Enis Berk
• Can Gümeli
• Carlo Lucibello
• 张实唯 (@ylxdzsw)
|
# Help center
#### Coupler labeling and terminology
The organ contains couplers that are named, for example, I + II. This means that the 2nd manual will be added to the 1st manual, i.e. manual II is played from manual I. Some couplers also have the labels ‘Sub’ or ‘Super’, for example I + II Sub. ‘Sub’ means that the 2nd manual, transposed one octave down, is added to the 1st manual. (On some organs these are labeled as, for example, I+II 16’.) Similarly, the label ‘Super’ means coupling with transposition one octave up. (On some organs this is noted as, for example, I+II 4’.)
|
### Abstract
In this meeting we will continue the install of Gentoo on the machine. We will learn more about kernel configuration and finalizing the system.
Gentoo
# Continuing with Gentoo Install
I believe we were up to the chroot command in the installation, so that is where I am going to pick up this lesson.
One issue I did not explain last session was the idea of keeping a log file while you work. So let's start off by discussing online documentation and how to create a log.
But before we start discussing documentation and log files, we need to talk about both virtual terminals and the gpm service.
## Virtual Terminals
How many of you remember the old days of the DOS command line? Linux also supports a similar look, but because Linux is a multi-user and multi-tasking system, it can support multiple terminals. Any one of these looks similar to the old DOS windows. Linux supports multiple terminals on the main display; by default there are 6 pre-defined, and you can increase the number as high as you want.
The default method of switching between the 6 predefined virtual terminals is ALT-FN, where N is a number from 1 to 6.
For this install we are going to use the following pattern:
• ALT-F1 will be the chroot terminal
• ALT-F5 will be links http://www.gentoo.org for the manual.
• ALT-F6 will be nano for a log file.
## Console Mouse
Another useful function that Linux supplies is the ability to copy and paste from one virtual terminal to another.
The service called gpm is most useful when you want to copy information from one virtual terminal to another, or into your log file.
Unfortunately, the command-line web browser links does not support the copy/paste operation. I wish it did, since it would make the setup easier.
The easy way to use the command is to highlight some text using the left mouse button. Move to where you want the highlighted text to appear, and click the mouse wheel.
## Online Documentation
One of the reasons for enabling the network at the very beginning is to allow us to read the install manual while we are working on the system. Since Gentoo includes a command-line web browser named links, we can use it to read the documentation while performing the installation. Normally, if you wanted to read documentation while performing an install, you would need another computer.
So we are going to change to virtual terminal 5 and enter the command links http://www.gentoo.org. This will take us to the gentoo site where we can open Gentoo Linux x86 Handbook.
## Log file
One interesting trick I have learned is to keep a log of the decisions I make during an installation. I don’t know about you, but often during an installation of an OS I need to make decisions. Some of these don’t matter later, but some are important and can cause you grief later.
So let us stop and think about what each virtual terminal sees after the file systems are mounted. We booted from a bootable CD, so the root file system is on the RAM disk used by the CD. We then mounted the new Gentoo partition on /mnt/gentoo. After that we did a chroot to change the root on virtual terminal one to the new partition. This means that we will execute the install commands in virtual terminal one. But it also means that the other terminals can see the files in the new partition by going to /mnt/gentoo/ and treating that as root.
For example, after the change root command, chroot /mnt/gentoo /bin/bash, the directory /root holds the startup files for the root user. Now if we switch to virtual terminal 2 we can see the same files but they are at /mnt/gentoo/root instead of /root.
# Compiling a Kernel
One of the first tasks in the new Gentoo system is compiling a kernel. We are going to do it the easy way first and then the manual way second.
## Genkernel
The Gentoo system is fond of scripts, so it is no surprise that they have a script to automate the process of compiling a new kernel.
For this install we will use the command genkernel --splash --install --menuconfig --save-config all.
## Manual Kernel Compile
The other method of compiling the kernel is to issue the commands yourself. Let's be clear that the manual method is only slightly more complex than using genkernel. Here are the steps you need to perform. I will assume you already did an emerge gentoo-sources.
• cd /usr/src/linux
• make && make modules_install
• cp arch/i386/boot/bzImage /boot/kernel-2.6.31-gentoo-r6
Change the kernel version above to match your kernel.
# Gentoo Documentation
OK that is about all I am going to write up this time. I recommend you spend some time going over the Gentoo documentation available at Gentoo Documentation Resources.
In the next advanced meeting we will dig down into the Gentoo system to see how it is constructed. What tools does it provide, where are the files located, and how the configuration files interact. This will be valuable when we start building the Linux System from Scratch.
Written by John F. Moore
Last Revised: Wed Oct 18 11:01:32 EDT 2017
|
Author
# Hemant Kumar Nashine
Bio: Hemant Kumar Nashine is an academic researcher from VIT University. The author has contributed to research in topics: Fixed-point theorem & Metric space. The author has an h-index of 1 and has co-authored 9 publications receiving 8 citations.
##### Papers
More filters
Journal ArticleDOI
Tran Thanh Binh
Abstract: In this paper, we study an inverse source problem for the Rayleigh–Stokes problem for a generalized second-grade fluid with a fractional derivative model. The problem is severely ill-posed in the sense of Hadamard. To regularize the unstable solution, we apply the Tikhonov method regularization solution and obtain an a priori error estimate between the exact solution and regularized solutions. We also propose methods for both a priori and a posteriori parameter choice rules. In addition, we verify the proposed regularized methods by numerical experiments to estimate the errors between the regularized and exact solutions.
3 citations
Hemant Kumar Nashine
01 Jan 2015
Abstract: In the present paper, we derive a common fixed point theorem for a hybrid pair of occasionally coincidentally idempotent mappings satisfying a closed multi-valued F-contraction condition introduced by Wardowski (Fixed Point Theory Appl. 2012:94) via the common limit range property in the frame of complete metric spaces. Also, hybrid mappings which satisfy an F-contractive condition of Hardy-Rogers type are considered. Our results improve several results from the existing literature. Two applications are presented: the proofs of existence of solutions for a certain system of functional equations arising in dynamic programming, as well as for a certain Volterra integral inclusion.
1 citations
Journal ArticleDOI
Hemant Kumar Nashine
24 Jun 2019
Abstract: We study the solvability of a fractional Cauchy problem based on new development of fixed point theorem, where the operator is suggested to be non-compact on its domain. Moreover, we shall prove that the solution is bounded by a fractional entropy (entropy solution). For this purpose, we establish a collection of basic fixed point results, which generalizes and modifies some well known results. Our attention is toward the concept of a measure of non-compactness to generalize $\mu$-set contractive condition, using three control functions.
1 citations
Journal ArticleDOI
1 citations
Journal ArticleDOI
Abstract: In this paper, we study the solution of the fractal energy integral equation for one-dimensional compressible flows without body force using a measure of noncompactness. We also discuss the solution of the local fractal equation of a losing-energy system using the notion of the local fractal differential idea. For this, a new notion of χ-Δ-set contraction condition under a simulation function is defined and two main fixed point and coupled fixed point results are obtained.
1 citations
##### Cited by
More filters
Dissertation
01 Jan 2010
Abstract: We consider the equivalence of the existence of fixed points of single-valued mappings and multivalued mappings for some classes of mappings by proving some equivalence theorems for the completeness of metric spaces.
50 citations
Journal ArticleDOI
07 Feb 2020
Abstract: In this paper, we set up an adequate condition for the presence of a solution of the nonlinear matrix equation. To do so, we prove the existence of fixed points for multi-valued modified F-contractions in the context of complete metric spaces, which generalize, refine, and extend several existing results in the literature. An example is accompanies the obtained results to show that derived results are a proper generalization.
5 citations
Journal ArticleDOI
Abstract: Emerging Trends in the use of smart portable accessories, particularly within the context of the Internet of Things (IoT), where smart sensor devices are employed for data gathering, require advancements in energy management mechanisms. This study aims to provide an intelligent energy management mechanism for wearable/portable devices through the use of predictions, monitoring, and analysis of the performance indicators for energy harvesting, majorly focusing on the hybrid PV-wind systems. To design a robust and precise model, prediction algorithms are compared and analysed for an efficient decision support system. Levenberg–Marquardt (LM), Bayesian Regularization (BR), and Scaled Conjugate Gradient (SCG) prediction algorithms are used to develop a Shallow Neural Network (SNN) for time series prediction. The proposed SNN model uses a closed-loop NARX recurrent dynamic neural network to predict the active power and energy of a hybrid system based on the experimental data of solar irradiation, wind speed, wind direction, humidity, precipitation, ambient temperature and atmospheric pressure collected from Jan 1st 2015 to Dec 26th 2015. The historical hourly metrological data set is established using calibrated sensors deployed at Middle East Technical University (METU), NCC. The accessory considered in this study is called as Smart Umbrella System (SUS), which uses a Raspberry Pi module to fetch the weather data from the current location and store it in the cloud to be processed using SNN classified prediction algorithms. The results obtained show that using the SNN model, it is possible to obtain predictions with 0.004 error rate in a computationally efficient way within 20 s. With the experiments, we are able to observe that for the period of observation, the energy harvested is 178 Wh/d, where the system estimates energy as 176.5 Wh/d, powering the portable accessories accurately.
3 citations
Book ChapterDOI
01 Jan 2021
Abstract: In this chapter, we use the concept of local fractional calculus and measure of non-compactness to design the growth system of Covid-19. To achieve this, we establish a fixed point and coupled fixed point theorems for new $$\mu$$-set contraction condition in partially ordered Banach spaces, whose positive cone $$\mathbb {K}$$ is normal. We provide adequate examples to validate the epidemic dynamics with graphical presentations. We also use present available data to validate it.
Journal ArticleDOI
Abstract: The approximate controllability of second-order integro-differential evolution control systems using resolvent operators is the focus of this work. We analyze approximate controllability outcomes by referring to fractional theories, resolvent operators, semigroup theory, Gronwall’s inequality, and Lipschitz condition. The article avoids the use of well-known fixed point theorem approaches. We have also included one example of theoretical consequences that has been validated.
|
Credit: atibodyphoto/iStock/Thinkstock
When physicists Claudia Felser and Stuart Parkin were introduced at a conference on applied magnetics, they felt an immediate attraction. But then, standing outside the Amsterdam conference centre, they started talking shop. It did not go well.
Parkin was interested in finding materials he could use to make miniature data-storage devices. Felser espoused the benefits of her pet topic: Heusler compounds, alloys with modifiable magnetic properties. “But he was not interested!” she laughs. Parkin thought that the compounds sounded as though they would be too difficult to interface with other materials. “So this was not a successful introduction,” Felser says.
But the two kept in touch. And as Felser shared her growing knowledge about the semiconductor and quantum properties of Heusler compounds, Parkin grew more curious about the molecules — and about Felser. At the end of 2009, she decided to take a sabbatical from Johannes Gutenberg University in Mainz, Germany, to work at IBM in San Jose, California, where Parkin worked. “I invited her to stay with me,” Parkin says. They were a couple from then on. “So this was more or less how it started and we're still working together,” he says.
### Listen
Couple Claudia Felser and Stuart Parkin discuss how they mix research with romance
Felser and Parkin are one of thousands of couples who met through science. According to a 2010 survey by the US National Science Foundation, just over one-quarter of married people with doctorates had a spouse working in science or engineering1. Such partnerships are on the rise: in 1993, the proportion was one-fifth. More and more institutions are hiring couples. A 2008 survey2 of around 9,000 US researchers found that the proportion of hires that went to couples rose from 3% in the 1970s to 13% in the 2000s. And data from the online dating service PlentyOfFish reveal that users with a graduate degree are three times more likely than the average user to form a couple with someone with a similar level of education.
Collaboration is key to the scientific process, but when collaborators are romantic partners, that relationship offers some unique advantages — a deep understanding of each other's personality and motivations — as well as the risk that work will dominate conversation at the dinner table. Here Nature talks to four couples about how they have managed to blend their science and lives.
Materials and air miles
Claudia Felser and Stuart Parkin in the Gobi Desert in 2011. Credit: Mark Bajohrs/Mainz
After Felser returned from her sabbatical, she and Parkin began racking up air miles. And Parkin's practical attitude rubbed off on Felser. “As a chemist you want to understand bonding, you want to find new synthesis methods. But you don't think deeply about applications,” she says. Now, she started to also consider the material's cost and stability. As a result, companies lined up to work with her. “You really learn to think differently,” she says. In 2011, the couple published a paper3 on Heusler compounds and their potential in spintronics, a discipline that makes use of electrical fields to manipulate the spin of electrons.
Over the past few years Felser and Parkin have managed to spend up to one-quarter of their time together. Conferences and meetings became fruitful ways of meeting up. “As soon as people recognized that we are a couple, they started to invite us together to conferences. It was very good,” Felser says.
Felser's employers — she is now director of the Max Planck Institute for Chemical Physics in Dresden — even realized that they might be able to persuade Parkin to accept a position in Germany. After many years on different continents, he is finally making arrangements to move, having been appointed director of the Max Planck Institute for Microstructure Physics in Halle. In April, he was awarded the Finnish Academy of Technology's Millennium Technology Prize, and plans to put part of the €1 million (US$1.4 million) in prize money towards building a house by the river in Halle. They plan to marry in December — on Stuart's birthday, “so I won't forget”, he says. It will be their first place together. “Lufthansa and United will be very unhappy,” Parkin says. Neuronal connection Yuh-Nung and Lily Jan in their shared office in the early 1980s. Lily and Yuh-Nung Jan have made their career studying cell division. But they themselves are inseparable. They start their sentences with 'we' or 'our'. Even their labs are joined. They met in 1967 in their native Taiwan when both were studying physics. Yuh-Nung had just got his bachelor's degree and his class was taking a celebratory hiking trip in the mountains. Joining them was a student from the class below: Lily. She had jumped ahead a year, catching up with Yuh-Nung, and was applying for graduate school, too. “I have a theory that quite a lot of her classmates were intimidated by her,” says Yuh-Nung. “But I didn't know better.” Both got places studying physics at the California Institute of Technology (Caltech) in Pasadena. They were an item, but spent their first three years in separate dorms. Not long after they started work, a physicist-turned-biologist came to their department to give a seminar, and made them rethink their career choice. “Back in Taiwan we were not exposed to modern biology,” says Yuh-Nung. “At Caltech, that was our first time. I guess it was good timing because biology was getting really interesting.” Besides, he adds, tongue in cheek, they were over the hill as physicists. “All the great ones do something really important very early in their career, in their twenties, and we'd already reached that age.” In a month they had made the switch to cell biology, and after pursuing separate thesis projects, began to collaborate. In 1971, they married. It was a very low-key ceremony at the Los Angeles courthouse — costing just US$6 for the licence and parking — and they celebrated by going camping and hiking in Yosemite National Park.
In 1979, they moved to the University of California, San Francisco. And having spent several years working in the same labs on similar projects, it was natural for them to run a lab together.
There were cases of too many cooks spoiling the broth. “In the very beginning, we both would sit with a postdoc or a student, and that certainly didn't work because no two people have the same idea,” says Lily. “It very quickly evolved into an argument. The student was just looking back and forth.”
Their interests overlapped heavily, but were sufficiently different that it made sense for Lily and Yuh-Nung to take the lead on different strands of the same problem: how brain cells divide. They now run adjoining labs, supervise 29 researchers and consistently produce publications in top journals. Lily focuses on ion channels and Yuh-Nung on cell morphology and, increasingly, function.
The Jans feel that being a couple gives them benefits over and above non-romantic collaborators. “It's not the sum of two parts, it's much better than that,” says Lily. She puts their success down to “very consistent long-term camaraderie”. And the pairing is certainly convenient. “Because whenever you think of something,” she says, “it could be at home or at work, you can more easily discuss the questions.” Yuh-Nung adds, “We've been together more than 40 years and I feel very lucky to have her as a partner.”
Their relationship seems to have served as a template for their colleagues. “There were some romances that started in the lab,” says Lily. “More than one,” Yuh-Nung says. “There have been kids born during the time their parents were in our lab,” he says. “We lost track, but at some point we're going to put together an album.”
Family trees
Ruth Mace, Mark Pagel and their son Thomas in 1994.
Few researchers can claim to have established a new field of science — let alone to have done so with their spouse. But that is exactly what evolutionary biologist Mark Pagel and anthropologist Ruth Mace did. They are the pioneers of using phylogenies — evolutionary trees — in anthropology, seeking to explain human cultures and behaviour as if they were evolving species.
When they first met, in the zoology department at the University of Oxford, UK, in the late 1980s, their work had little overlap. Mace was working on animal biology and Pagel was developing ways to analyse species relatedness. Both were heavily influenced by the evolutionary biology they were studying. British evolutionists, particularly, were known for their views on the power of adaptation and natural selection to explain behaviour. “We're both out of that church,” says Mace. They first met at the department morning break, which provided ample time to discuss their ideas. “Those were the days,” Mace recalls. “The entire department would have massive amounts of coffee for an hour.”
Several years later, Pagel and Mace co-authored a paper4 that used phylogenetic methods to analyse human cultures, and argued that just as zoologists use genetics to look at species evolution, anthropologists could use languages to study human cultural evolution. The same year, their first son was born, adding a twig to their own tree of life.
Although they still collaborate on articles and research projects — Mace estimates that about 10% of their work is joint — they retain separate research identities. Both have academic interests outside their phylogeny work. But working in overlapping domains can lead to some awkward situations — especially because they have different surnames. Sometimes, one is asked to review the other's paper or a competing grant application — offers that they refuse with an explanation of the conflict of interest. Being in the midst of two fields that have a history of “robust” discussion (“for want of a better word”, he says), Pagel is grateful to have someone who is on the same side.
Dream team
Boris Worm and Heike Lotze at Maasholm field station in 1998. Credit: Courtesy of Boris Worm and Heike Lotze
Sometimes during his graduate work in marine ecology, Boris Worm would solve problems in his sleep. On waking, he would tell his partner, Heike Lotze, about his dream. A marine ecologist herself, Lotze served as a sleepy sounding board. “You know how you forget dreams in the morning. But if there's somebody next to you, you can tell them right away,” says Worm.
The ecologists believe that their relationship has helped them to shape the early phases of their work in ways that would not be possible in a non-romantic collaboration. “We can share ideas as they emerge, very raw, very unfinished and some of it not useful but still interesting,” Worm says. “I often have creative, intuitive ideas,” adds Lotze. “Then I feel like I'm handing this raw thing over to Boris and he shapes it a bit.”
Worm and Lotze met in the mid-1990s during their graduate study in Germany. Their fields overlapped, but they were pursuing different directions. Lotze was interested in the human influence on the sea and was studying nutrient pollution, thought to be the cause of algal blooms. She puts her practical mindset down to the fact that she was brought up on a farm. By looking after calves and baling hay, she routinely faced the connection between humans and the natural world, and how one changes the other. Worm's background is more analytical; his outlook more theoretical. The son of a psychologist and a professor of education, he grew up thinking a lot about relationships and communities. His PhD was on species interactions, particularly predation in ecosystems. “Heike's perspective grounded my ideas and gave them wheels, and maybe I have provided some wider context for the questions she was asking,” he says.
They worked together throughout their PhDs — even using the same study site, Maasholm field station on the Baltic Sea. “There was an old rocket-launching station from the cold war, and part had been bought by our institution as a field site. We had the whole place to ourselves,” says Worm. Because their experiments were often closely related, they had to do a little untangling before submitting their work for publication. “We had to sit down and say, OK, this is what I will publish and this is what you will publish,” Lotze says.
They published their first big paper5 together in 2002 — a grand synthesis of their PhD projects on the cumulative effects of various influences on marine ecosystems — and continue to publish together often. Perhaps their most controversial paper, produced as part of a large team in 2006, was a gloomy forecast of global fish stocks6. Worm and Lotze were dismayed by how much the media focused on “the end of seafood”; they had wanted to emphasize the rippling effects on species that are not harvested by humans. “The focus of the paper was different to what came out in the media,” Lotze remembers. The phone rang constantly, and each found it helpful to have the other for support. “You understand what the other person is going through,” says Lotze. “I'm a much more shy person, so for me to deal with those media — it was a storm, really. Boris was more riding the wave.”
They are aware that their different personalities sometimes lead to Worm getting more attention than Lotze for their joint work. “Boris was more often the first spokesperson about our ideas,” Lotze says. “For a while I kept a bit more in the background. People saw Boris more than me.” But Lotze eventually started to step forward. “I didn't like being in the shadow, I had to fight that and get out of my shell,” she says. They are occasionally told that they should differentiate their work, and have made a conscious effort not to co-author all their publications.
But last year, the couple won their first joint honour, the Peter Benchley Ocean Award for Excellence in Science. “It's not very often the connection gets recognized officially,” Worm says. “It felt really wonderful to have that highlighted.”
Official recognition is one thing, but for Lotze and Worm the greatest benefits of collaborating with a partner are less tangible. A romantic partner knows how to motivate, how to comfort when a grant proposal doesn't go your way and how to rein in the loopiest ideas. As Lotze says, “Your partner is your best critic.”
|
# Completing the square and then using trig substitution
#### KevinL
1. The problem statement, all variables and given/known data
Integral of 1/[(x^2+4x+3)^(3/2)]
3. The attempt at a solution
I tried completing the square and then using trig substitution. So its:
1/[(x+2)^2 -1] let x+2=sec(theta), dx =sec(theta)tan(theta)
After some simplifying, I get it down to integral of csc(theta) which = -ln|csc(theta) + cot(theta)|
My calc teacher has the answers online, and it doesn't have any ln in it at all, so I'm pretty sure I'm on the wrong track here. Any suggestions?
#### Gib Z
Homework Helper
Re: integral
It seems like you completely forgot about the exponent in the denominator.
#### swraman
Re: integral
1. The problem statement, all variables and given/known data
Integral of 1/[(x^2+4x+3)^(3/2)]
3. The attempt at a solution
I tried completing the square and then using trig substitution. So its:
1/[(x+2)^2 -1] let x+2=sec(theta), dx =sec(theta)tan(theta)
After some simplifying, I get it down to integral of csc(theta) which = -ln|csc(theta) + cot(theta)|
The small mistake you made after completing the square is that you forgot to copy down the ^(3/2) in the denominator. It should be:
$$\frac{1}{[(x+2)^{2} -1]^{\frac{3}{2}}}$$.
It becomes much easier when you include it ;). But aside from that mistake, you're on the right general track.
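For reference, a sketch of where the corrected substitution leads (same substitution $x+2=\sec\theta$; note there is indeed no logarithm in the answer):

$$\int \frac{dx}{\left[(x+2)^{2}-1\right]^{3/2}} = \int \frac{\sec\theta\tan\theta\,d\theta}{\tan^{3}\theta} = \int \frac{\cos\theta}{\sin^{2}\theta}\,d\theta = -\frac{1}{\sin\theta}+C = -\frac{x+2}{\sqrt{x^{2}+4x+3}}+C.$$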
|
Dave Horner's Website - Yet another perspective on things...
The purpose of computing is insight, not numbers.
Richard Hamming
\begin{bmatrix} 1 & 0 & \ldots & 0 \\ 0 & 1 & 0 & \vdots \\ \vdots & 0 & \ddots & 0\\ 0 & \ldots & 0 & 1_{n} \end{bmatrix}
|
# Multiple Authors, Multiple Affiliations without authblk
I am trying to make a latex template for reuse by me and other people in my lab. The template reference is a word doc. I have gotten most of the way using the asme2e class, but I am having trouble with the author blocks.
This is the target:
I've tried authblk to do this, but authblk will not load for me. I get an error:
authblk.sty:113: Undefined control sequence. [\xdef\AB@author{\noexpand\AB@blk@and\@author]
So, I am trying to do this without authblk, but have the solution be as simple as possible so others (and myself a year from now) can use it. Others may have more or fewer than three authors.
For example, I don't want to abuse title like here.
Here is some code that includes my preamble.
\documentclass[twocolumn,10pt]{asme2e}
\bibliographystyle{asmems4}
\usepackage{epsfig,graphics,amssymb,amsmath,graphicx,indentfirst,subfig,float}
\usepackage{expl3}
\ExplSyntaxOn
\cs_new_eq:NN \Repeat \prg_replicate:nn
\ExplSyntaxOff
\usepackage{titlesec}
\titleformat{\section}
{\bfseries\MakeUppercase}{\thesection}{1em}{}
\papernum{XXXXX-XXXXX}
\conffullname{Proceeding of the ASME Super Awesome Conference}
\confdate{xx-xx}
\confmonth{XXXX}
\confyear{XXX}
\confcity{city, state}
\confcountry{country}
%%%
%AUTHOR STUFF WOULD GO HERE....
%%%
\begin{document}
\maketitle
\section*{Introduction}
Lorem Ipsum is simply dummy text of the printing and typesetting industry.
Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book.
\end{document}
Thanks!
• Maybe something like that could help: tex.stackexchange.com/a/214425/828 – Habi Jan 9 '15 at 9:48
• Those are added as footnotes. – jmerkow Jan 9 '15 at 17:40
• I know that the affiliations are added as footnotes, but maybe you could use another pointer with that trick. – Habi Jan 12 '15 at 10:59
|
# Suds makes the soapy world less slippery
In the last post, I was whining about the bumps in the road when trying to consume a SOAP web service using Python. Thanks to Olosta's suggestion, I found Suds.
The cute yellow rubber duck makes the soapy world less slippery. There is no need to generate client code with an external tool the way wsdl.exe does for C#: just load the WSDL at runtime and the service proxy object dynamically generates the method calls for you. It is still actively developed; salute to joetel.
One thing needs tailoring to work with Microsoft Office SharePoint Server: connection persistence. As you may know, the default authentication used by the SharePoint web services is NTLM, which is proprietary and undocumented but well known to the public. NTLM authenticates the connection, so in the current suds implementation each method invocation incurs a redundant NTLM negotiate-challenge-response exchange. I will dig into this issue more; stay tuned.
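For the record, basic usage looks roughly like this. This is a minimal sketch: the WSDL URL and credentials are placeholders, and the NTLM transport line reflects my understanding of the suds/python-ntlm combination rather than tested code.

# Minimal sketch; the URL and credentials below are placeholders.
from suds.client import Client
from suds.transport.https import WindowsHttpAuthenticated  # NTLM support (needs python-ntlm)

ntlm = WindowsHttpAuthenticated(username='DOMAIN\\user', password='secret')
client = Client('http://sharepoint.example.com/_vti_bin/Lists.asmx?WSDL', transport=ntlm)
print client                                 # dumps the methods and types generated from the WSDL
lists = client.service.GetListCollection()   # call a SharePoint Lists web service method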
|
# Kinematics from Numerical Integration
Launch a particle from ground level with some speed and some angle and compute the trajectory using numerical integration. This is a very simple example for those learning to program.
import math

# Constants
g = 10.0               # gravity
v0 = 20.0              # initial speed
theta0 = math.pi/4.0   # initial angle
dt = 10.0**(-5.0)      # time step

#######################################
# Initialize simulation

t = 0.0      # time
count = 0    # count for printing

x = 0.0      # initial position
y = 0.0

xd = v0*math.cos(theta0)   # initial velocity
yd = v0*math.sin(theta0)

xdd = 0.0    # initial acceleration
ydd = -g

#######################################
# Run simulation

while y >= 0.0:   # run while object in flight

    # numerical integration
    # explicit Euler
    x = x + xd*dt      # update position based on velocity
    y = y + yd*dt

    xd = xd + xdd*dt   # update velocity based on acceleration
    yd = yd + ydd*dt

    xdd = 0.0          # constant acceleration
    ydd = -g

    t = t + dt         # advance time and count
    count = count + 1

    if count % 1000 == 0:   # print once every thousand simulation intervals
        print t, x, y
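For reference, the update rule the loop implements (explicit Euler with a constant acceleration; $\Delta t$ is the time step dt in the code):

$$x_{n+1} = x_n + \dot{x}_n \, \Delta t, \qquad \dot{x}_{n+1} = \dot{x}_n + \ddot{x}_n \, \Delta t,$$

and likewise for $y$, with $\ddot{x} = 0$ and $\ddot{y} = -g$. Since explicit Euler is a first-order method, halving dt roughly halves the integration error.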
The results were plotted in Excel
Note by Steven Chase
8 months, 1 week ago
@Steven Chase I want to learn monte Carlo, how can I learn??
- 8 months, 1 week ago
Hey! You can check out this book.
- 8 months, 1 week ago
@Aaghaz Mahajan Thanks bro for this book
- 8 months, 1 week ago
@Steven Chase sir in the last 2nd step what is the meaning of that double equal to = = ??
- 8 months, 1 week ago
Wow you're such a dedicated student. Really awesome to see passion and dedication :)
- 8 months ago
A single equals sign is used to declare the value of a quantity. A double equals sign is used to compare two quantities to see if they are equal
- 8 months, 1 week ago
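A minimal illustration (same Python 2 style as the script above; the value 5 is arbitrary):

x = 5             # single equals: assignment, gives x the value 5
print x == 5      # double equals: comparison, prints True
print x == 7      # prints False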
@Steven Chase okay! thanks , can you please give me some questions or post for practicing numerical integration .
if you post it should be of E and M.
- 8 months, 1 week ago
- 8 months, 1 week ago
There it is
- 8 months, 1 week ago
@Steven Chase yes, Thank you sir
I don't think that anyone in the universe will help me like this you have helped me.
- 8 months, 1 week ago
You're welcome. And the cool part is that nowhere in the code is any parabola explicitly specified or coded. It just comes out of the numerical integration
- 8 months, 1 week ago
@Steven Chase yeah it is interesting
- 8 months, 1 week ago
I was going to that scatter only by mistake my arrow is at (other charts)
- 8 months, 1 week ago
@Steven Chase sir can you please show a photo of your excel ,i am facing a bit of difficulty while plotting it in excel.
- 8 months, 1 week ago
You have to import the data from a text file and choose space-delimited format
- 8 months, 1 week ago
@Steven Chase i stucked here
- 8 months, 1 week ago
You have to import the data in space-delimited format, which you haven't done. Then highlight columns B and C.
Insert - Scatter - Scatter with smooth lines
- 8 months, 1 week ago
@Steven Chase which option should I choose here
- 8 months, 1 week ago
space delimited
- 8 months, 1 week ago
@Steven Chase which option
- 8 months, 1 week ago
Just hit finish
- 8 months, 1 week ago
@Steven Chase now where I have to go?
- 8 months, 1 week ago
Highlight columns B and C and then go to:
Insert - scatter chart - scatter with smooth lines
- 8 months, 1 week ago
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 print "import math #constant g = 10.0 #gravity v0 = 20.0 #initial speed theta0 = math.pi/4.0 #initial speed dt = 10.0**(-5.0) #time step ########################################## #Initial simulation t = 0.0 #time count = 0 #count for printing x = 0.0 #initial position y = 0.0 xd = v0*math.cos(theta0) #initial velocity yd = v0*math.sin(theta0) xdd = 0.0 #initial acceleration ydd = -g ############################################## #Run simulation while y >= 0.0: #run while object in flight #numerical integration #elpicit euler x = x + xd*dt #update position based on velocity y = y + yd*dt xd = xd + xdd*dt #update velocity based on acceleration yd = yd + ydd*dt xdd = 0.0 #constanrt acceleration t = t + dt #advance time and count count = count + 1 if count % 1000 == 0: #print once every thousand simulation intervals print (t,x,y)"
- 8 months, 1 week ago
@Steven Chase sir i have also typed the the whole code ,but my code is not expressed as your code is expressed in brilliant ??
- 8 months, 1 week ago
You need three ticks and then "Python", and then code, followed by three more ticks
- 8 months, 1 week ago
@Steven Chase thanks sir
- 8 months, 1 week ago
|
# Ionescu-Tulcea theorem
Not to be confused with the Ionescu-Tulcea–Marinescu ergodic theorem.
In the mathematical theory of probability, the Ionescu-Tulcea theorem, sometimes called the Ionescu-Tulcea extension theorem, deals with the existence of probability measures for probabilistic events consisting of a countably infinite number of individual probabilistic events. In particular, the individual events may be independent or dependent with respect to each other. Thus, the statement goes beyond the mere existence of countable product measures. The theorem was proved by Cassius Ionescu-Tulcea in 1949.[1][2]

## Statement of the theorem

Suppose that $(\Omega_0, \mathcal{A}_0, P_0)$ is a probability space and $(\Omega_i, \mathcal{A}_i)$, for $i \in \mathbb{N}$, is a sequence of measurable spaces. For each $i \in \mathbb{N}$, let

$$\kappa_i \colon (\Omega^{i-1}, \mathcal{A}^{i-1}) \to (\Omega_i, \mathcal{A}_i)$$

be the Markov kernel derived from $(\Omega^{i-1}, \mathcal{A}^{i-1})$ and $(\Omega_i, \mathcal{A}_i)$, where

$$\Omega^i := \prod_{k=0}^{i} \Omega_k \quad \text{and} \quad \mathcal{A}^i := \bigotimes_{k=0}^{i} \mathcal{A}_k.$$

Then there exists a sequence of probability measures

$$P_i := P_0 \otimes \bigotimes_{k=1}^{i} \kappa_k$$

defined on the product space for the sequence $(\Omega^i, \mathcal{A}^i)$, $i \in \mathbb{N}$, and there exists a uniquely defined probability measure $P$ on

$$\left( \prod_{k=0}^{\infty} \Omega_k,\ \bigotimes_{k=0}^{\infty} \mathcal{A}_k \right),$$

so that

$$P_i(A) = P\left( A \times \prod_{k=i+1}^{\infty} \Omega_k \right)$$

is satisfied for each $A \in \mathcal{A}^i$ and $i \in \mathbb{N}$. (The measure $P$ has conditional probabilities equal to the stochastic kernels.)[3]

## Applications

The construction used in the proof of the Ionescu-Tulcea theorem is often used in the theory of Markov decision processes and, in particular, the theory of Markov chains.[3]

## See also

Disintegration theorem; Regular conditional probability

## Sources

Klenke, Achim (2013). Wahrscheinlichkeitstheorie (3rd ed.). Berlin, Heidelberg: Springer-Verlag. pp. 292–294. doi:10.1007/978-3-642-36018-3. ISBN 978-3-642-36017-6.

Kusolitsch, Norbert (2014). Maß- und Wahrscheinlichkeitstheorie: Eine Einführung (2nd ed.). Berlin, Heidelberg: Springer-Verlag. pp. 169–171. doi:10.1007/978-3-642-45387-8. ISBN 978-3-642-45386-1.

## References

[1] Ionescu Tulcea, C. T. (1949). "Mesures dans les espaces produits". Atti Accad. Naz. Lincei Rend. 7: 208–211.

[2] Shalizi, Cosma. "Chapter 3. Building Infinite Processes from Regular Conditional Probability Distributions" (PDF). Lecture notes for Almost None of the Theory of Stochastic Processes, CMU Statistics, Carnegie Mellon University, stat.cmu.edu/~cshalizi/754/notes.

[3] Abate, Alessandro; Redig, Frank; Tkachev, Ilya (2014). "On the effect of perturbation of conditional probabilities in total variation". Statistics & Probability Letters 88: 1–8. arXiv:1311.3066. doi:10.1016/j.spl.2014.01.009.
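As a concrete, entirely informal illustration of the construction mentioned under Applications above: a trajectory is sampled coordinate by coordinate, with each kernel allowed to depend on the entire history so far. The kernel below is invented for illustration; nothing here is part of the theorem or its proof.

# Illustrative only: sequential sampling through history-dependent kernels.
import random

def kappa(history):
    # Markov kernel: given the history (omega_0, ..., omega_{i-1}),
    # return a sample of the next coordinate omega_i.
    drift = sum(history) / len(history)      # made-up history dependence
    return random.gauss(0.9 * drift, 1.0)

def sample_trajectory(n, omega0=0.0):
    traj = [omega0]                          # omega_0 drawn from P_0 (here a point mass)
    for _ in range(n):
        traj.append(kappa(traj))             # omega_i ~ kappa_i(history, .)
    return traj

print(sample_trajectory(10))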
|
## Boriša Kuzeljević
Department of Mathematics and Informatics,
Serbia.
borisha " at " dmi.uns.ac.rs
CV
Seminar of our group in Novi Sad
Teaching:
Papers:
1. Antichains of copies of ultrahomogeneous structures
(with Milos Kurilic) preprint.
2. Uniform Homogeneity
(with Wieslaw Kubis) preprint.
3. Positive families and boolean chains of copies of ultrahomogeneous structures
(with Milos Kurilic) Comptes Rendus Mathematique 358 (2020), no. 7, 791-796.
4. P-ideal dichotomy and a strong form of the Souslin Hypothesis
(with Stevo Todorcevic) Fundamenta Mathematicae 251 (2020), 17-33.
5. On the structure of random hypergraphs
Publications de l'Institut Mathematique (Beograd) 104(118) (2018), 43--51.
6. A long chain of P-points
(with Dilip Raghavan) Journal of Mathematical Logic 17 (2018), no. 3, 1850004, 38 pp.
7. Forcing with matrices of countable elementary submodels
(with Stevo Todorcevic) Proceedings of the American Mathematical Society 145 (2017), no. 5, 2211-2222.
8. Maximal chains of isomorphic suborders of countable ultrahomogeneous partial orders
(with Milos Kurilic) Order 32 (2015), no. 1, 83-99.
9. Maximal chains of isomorphic subgraphs of countable ultrahomogeneous graphs
(with Milos Kurilic) Advances in Mathematics 264 (2014), 762-775.
10. Maximal chains of isomorphic subgraphs of the Rado graph
(with Milos Kurilic) Acta Mathematica Hungarica 141 (2013), no. 1, 1-10.
|
ATF eLog
Message ID: 1969 Entry time: Sun Jul 12 02:00:15 2015
Author: Arjun Type: Misc Category: PD noise Subject: PD noise Update
Today, in the process of designing an intensity control servo, I took some noise readings of the PDs and also characterized how the modulation function on the Marconi FG works, as this will be essential for designing an efficient feedback loop.
Firstly, the alignment of the PDs was off, so I had to correct it, which took more time than it should have, but I finally got it done. Next, I moved on to taking noise readings to measure the 'free running' laser noise. As was pointed out to me, such a plot is meaningless by itself unless the power at which it was taken is specified; in other words, the quantity of interest is $\frac{\delta P}{P_0}$, also called the relative intensity noise (RIN).

What I measured was the voltage noise of the PDs for a fixed bias voltage (set using the power control of the laser). I did this for several bias voltages, and the plot is shown below. The weird shape is because I tried to splice three different spans and they did not combine as smoothly as I expected: I took noise at three different spans (100 Hz, 1 kHz and 50 kHz) and combined them using the splice.m program written by Koji. The change in the voltage noise with the bias voltage is very evident. We could then use the voltage noise at 1 V as a measure of RIN, $(\frac{\delta V}{V_0}\vert_{V_0=1V})$. Again, the plot looks weird because of the splicing; I will post a better plot when I take another set of readings in my next log. This plot tells us approximately how much RIN there is and how much suppression will be needed in our servo. I also measured the dark noise, but the spectra looked really weird after splicing, so I will post those in my next log as well.
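For what it's worth, the splicing step is conceptually simple. A rough Python sketch of the idea is below (the actual analysis used Koji's splice.m in MATLAB; this function and its interface are illustrative only):

import numpy as np

def splice(spectra):
    # spectra: list of (frequency, spectrum) array pairs, ordered from the
    # narrowest span to the widest. Each narrower (finer-resolution) span
    # overrides the wider ones over the band it covers.
    f_all, a_all = spectra[-1]
    for f, a in reversed(spectra[:-1]):
        keep = (f_all < f.min()) | (f_all > f.max())   # drop the overlapping band
        f_all = np.concatenate([f_all[keep], f])
        a_all = np.concatenate([a_all[keep], a])
    order = np.argsort(f_all)
    return f_all[order], a_all[order]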
## Analyzing the AOM and the Marconi FG
The Marconi RF function generator I am using has a modulation input that can be used to control the power going into the AOM, which in turn controls the power in the main beam; this is my plan for implementing the intensity feedback. So I studied the AOM and how it responds to input modulations of different kinds. This is what I learnt:
1. If the input modulation has no power in it (or its amplitude is 0), the power in the output beam is unchanged. That is, zero power at the modulation port corresponds to no change in the power.
2. If the input voltage has a negative value (which I set by using the offset function of the function generator producing the modulation signal), then the power output decreases.
3. If the voltage is positive, the power increases.
4. There are some other constraints to consider as well. The maximum power that can go into the AOM (model: Gooch & Housego R23080-2W) is 2 W (which is 33 dBm). A power RF amplifier (Mini-Circuits ZHL-1-2W) is used to amplify the signal from the FG; it has a gain of 33 dB, which I looked up from the data sheet, so the maximum power from the FG must be around 0 dBm. Just to be safe, today I only explored its features with an input of -2 dBm (a quick arithmetic check of this power budget follows below). I have attached a few photos of me toggling the carrier on/off switch and how it modulates the transmitted power; one could then send a modulation at a fixed frequency of, say, 1 kHz, which would amplitude-modulate the power. This is exactly what we want for our excess-noise detection scheme, and I have attached an image of this as well. The whole FG+AOM setup still has to be characterized properly, which is my next task.
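A quick arithmetic check of the power budget quoted above (pure unit conversion, no new lab numbers):

import math

def dbm_to_watts(dbm):
    return 10 ** (dbm / 10.0) / 1000.0

aom_max_dbm = 10 * math.log10(2.0 / 1e-3)   # 2 W expressed in dBm -> ~33 dBm
amp_gain_db = 33.0
print(aom_max_dbm)                    # ~33.0 dBm
print(aom_max_dbm - amp_gain_db)      # max safe FG output, ~0 dBm
print(dbm_to_watts(-2.0))             # the -2 dBm actually used, ~0.63 mW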
Attachment 1: fig1.pdf 27 kB
Attachment 2: IMG_20150711_161731.jpg 581 kB Uploaded Sun Jul 12 03:01:13 2015
Attachment 3: IMG_20150711_161741.jpg 671 kB Uploaded Sun Jul 12 03:01:13 2015
Attachment 4: IMG_20150711_161726.jpg 577 kB Uploaded Sun Jul 12 03:01:13 2015
Attachment 5: IMG_20150711_204122.jpg 698 kB Uploaded Sun Jul 12 03:01:13 2015
|
# rms
Root-mean-square value
## Syntax
``y = rms(x)``
``y = rms(x,"all")``
``y = rms(x,dim)``
``y = rms(x,vecdim)``
``y = rms(___,nanflag)``
## Description
`y = rms(x)` returns the root-mean-square (RMS) value of the input, `x`. If `x` is a row or column vector, then `y` is a real-valued scalar. If `x` is a matrix, then `y` is a row vector containing the RMS value for each column. If `x` is a multidimensional array, then `y` contains the RMS values computed along the first array dimension of size greater than 1. The size of `y` in this dimension is `1`, while the sizes of all other dimensions remain the same as in `x`.
`y = rms(x,"all")` returns the RMS value of all elements in `x`.
`y = rms(x,dim)` operates along dimension `dim`. For example, if `x` is a matrix, then `rms(x,2)` operates on the elements in each row and returns a column vector containing the RMS value of each row.
`y = rms(x,vecdim)` operates along the dimensions specified in the vector `vecdim`. For example, if `x` is a matrix, then `rms(x,[1 2])` operates on all the elements of `x` because every element of a matrix is contained in the array slice defined by dimensions 1 and 2.
`y = rms(___,nanflag)` specifies whether to include or omit `NaN` values in the calculation for any of the previous syntaxes. For example, `rms(x,"omitnan")` ignores `NaN` values when computing the RMS. By default, `rms` includes `NaN` values.
## Examples
Compute the RMS value of a sinusoid.
```t = 0:0.001:1-0.001; x = cos(2*pi*100*t); y = rms(x)```
```y = 0.7071 ```
Create a matrix and compute the RMS value of each column.
```x = [4 -5 1; 2 3 5; -9 1 7]; y = rms(x)```
```y = 1×3 5.8023 3.4157 5.0000 ```
Create a matrix and compute the RMS value of each row by specifying the dimension as 2.
```x = [6 4 23 -3; 9 -10 4 11; 2 8 -5 1]; y = rms(x,2)```
```y = 3×1 12.1450 8.9163 4.8477 ```
Create a 3-D array and compute the RMS value over each page of data (rows and columns).
```x(:,:,1) = [2 4; -2 1]; x(:,:,2) = [9 13; -5 7]; x(:,:,3) = [4 4; 8 -3]; y = rms(x,[1 2])```
```y(:,:,1) = 2.5000
y(:,:,2) = 9
y(:,:,3) = 5.1235 ```
Create a matrix containing `NaN` values.
`x = [1.77 -0.005 NaN -2.95; NaN 0.34 NaN 0.19];`
Compute the RMS values of the matrix, excluding `NaN` values. For matrix columns that contain any `NaN` value, `rms` computes with the non-`NaN` elements. For matrix columns that contain all `NaN` values, the RMS is `NaN`.
`y = rms(x,"omitnan")`
```y = 1×4 1.7700 0.2404 NaN 2.0903 ```
## Input Arguments
Input array, specified as a vector, matrix, or multidimensional array.
Data Types: `single` | `double` | `logical` | `char`
Complex Number Support: Yes
Dimension to operate along, specified as a positive integer scalar. If you do not specify the dimension, then the default is the first array dimension of size greater than 1.
Dimension `dim` indicates the dimension whose length reduces to `1`. The `size(y,dim)` is `1`, while the sizes of all other dimensions remain the same as `x`.
Consider an `m`-by-`n` input matrix, `x`:
• `y = rms(x,1)` computes the RMS value of the elements in each column of `x` and returns a `1`-by-`n` row vector.
• `y = rms(x,2)` computes the RMS value of the elements in each row of `x` and returns an `m`-by-`1` column vector.
Vector of dimensions to operate along, specified as a vector of positive integers. Each element represents a dimension of the input array. The length of the output in the specified operating dimensions is 1, while the other dimension lengths remain the same as the input.
For example, if `x` is a 2-by-3-by-3 array, then `rms(x,[1 2])` returns a 1-by-1-by-3 array whose elements are the RMS values over each page of `x`.
Missing value condition, specified as one of these values:
• `"includemissing"` or `"includenan"` — Include `NaN` values in `x` when computing the RMS. If any element in the operating dimension is `NaN`, then the corresponding element in `y` is `NaN`. `"includemissing"` and `"includenan"` have the same behavior.
• `"omitmissing"` or `"omitnan"` — Ignore `NaN` values in `x` when computing the RMS. If all elements in the operating dimension are `NaN`, then the corresponding element in `y` is `NaN`. `"omitmissing"` and `"omitnan"` have the same behavior.
### Root-Mean-Square Value
The root-mean-square value of an array x is
$x_{\text{RMS}} = \sqrt{\frac{1}{N}\sum_{n=1}^{N} |x_n|^2},$
with the summation performed along the specified dimension.
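For comparison outside MATLAB, here is a small NumPy sketch of the same definition (illustrative, not MathWorks code; for simplicity the default here reduces over all elements rather than over the first non-singleton dimension):

import numpy as np

def rms(x, axis=None, omitnan=False):
    x = np.asarray(x)
    mean = np.nanmean if omitnan else np.mean
    return np.sqrt(mean(np.abs(x) ** 2, axis=axis))

t = np.arange(0, 1, 0.001)
print(rms(np.cos(2 * np.pi * 100 * t)))   # approximately 0.7071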
## Version History
Introduced in R2012a
|
Example 52.27.7. The dimension bound in Proposition 52.27.6 is sharp. For example the Picard group of the punctured spectrum of $A = k[[x, y, z, w]]/(xy - zw)$ is nontrivial. Namely, the ideal $I = (x, z)$ cuts out an effective Cartier divisor $D$ on the punctured spectrum $U$ of $A$ as it is easy to see that $I_x, I_y, I_z, I_w$ are invertible ideals in $A_x, A_y, A_z, A_w$. But on the other hand, $A/I$ has depth $\geq 1$ (in fact $2$), hence $I$ has depth $\geq 2$ (in fact $3$), hence $I = \Gamma(U, \mathcal{O}_U(-D))$. Thus if $\mathcal{O}_U(-D)$ were trivial, then we'd have $I \cong \Gamma(U, \mathcal{O}_U) = A$ which isn't true as $I$ isn't generated by $1$ element.
|
# The Age-Structural Theory of State Behavior
## Summary and Keywords
Over the past three decades, economic and political demographers, using various measures, have discerned that increased age-structural maturity makes significant statistical contributions to levels of per capita income, to educational attainment, to declines in the frequency of onsets of intrastate conflict, and to the likelihood of achieving and maintaining liberal democracy. Some of the stronger statistical relationships have been used in forecasts. For example, using the United Nations Population Division (UNPD) demographic projections, political demographers have relied on the strong statistical association between age structure and stable liberal democracy to forecast the rise of democracy in North Africa more than two years in advance (in 2008)—at a time when regional experts believed that forecast to be absurd.
Whereas critics remain skeptical of the murky causal connections of age-structural theory, its proponents counter that causality in the development of state capacity is complex and is less important than the theory’s positive qualities (namely, that it is forward-looking, its statistical findings are easily repeated, its forecasts have outcompeted regional experts, and its predictive products can be readily adapted to the needs of intelligence foresight, defense planning, and foreign policy analysis). Perhaps most important, the age-structural theory of state behavior has yielded a surprising number of “novel facts”—new knowledge concerning the observed pace and timing of state political, social, and economic behaviors.
# Introduction
Rather than focusing on explanatory narratives and causality, the age-structural theory of state behavior is concerned with generating “timed expectations.” At the core of this theory lies the age-structural transition—the continuous path of cohort reconfiguration that leads from a youthful population to one numerically dominated by middle-age adults and seniors. Age-structural theory’s most tested models and functional forms are built upon this fundamental demographic process.
The body of age-structural theory is growing. At the leading edge of age-structural research are an expanding set of newly generated models, each reflecting a hypothesized relationship between the known position of states in this transition, and their corresponding likelihood of being observed in a specific political, social, or economic condition. In time, some of these hypothetical models undoubtedly will become more deeply woven into age-structural theory—but only if successful forecasts and other out-of-sample tests show them worthy of greater certainty, and if they are found analytically useful or create new insights into the relationships that they portray.
Forecasting is integral to age-structural theory. To investigate the future, age-structural models are simply repositioned one or two decades into the future—into a demographic future already described in detail by the United Nations Population Division (UNPD) series of demographic projections (UNPD, 2015). Since age-structural models were first generated in 2008, this method has yielded a string of successful forecasts, discerning a window of time for future events and oncoming conditions that regional and country experts missed or thought were improbable. Moreover, several of the early conclusions of age-structural models—that states with youthful populations are vulnerable to political violence and experience difficult political and social environments for achieving and maintaining democracy—are increasingly accepted as fact by many foreign affairs policymakers.
The principal objectives of this article are to outline the age-structural theory of state behavior in its current form and to provide insights into its methodology. The discussion begins by demonstrating age-structural theory’s predictive potential by recounting its most dramatic forecasts. It then fields a discussion of the origins and mechanics of age-structural models and reviews their graphic functional forms. Finally, the text introduces four predictive products that have been used to present statistical portraits of the future to foreign affairs, defense, and intelligence analysts: group statistical forecasts, age-structural maps, country-specific tables, and regional summaries.
# The 2008 Forecasts
While not the age-structural theory's only prescient prediction, this initial set of forecasts, made more than two years prior to the Arab Spring, remains the most dramatic display of the theory's ability to outdo the experts, and the most illustrative of the theory's yet-unexplored potential.
Based on an age-structural model, a 2008 article in Foreign Policy (Cincotta, 2008, p. 82; a similar quote appears in Cincotta, 2008–2009, p. 15) stated:
The first (and perhaps most surprising) region that promises a shift to liberal democracy is a cluster along Africa’s Mediterranean coast: Morocco, Algeria, Tunisia, Libya, and Egypt, none of which has experienced liberal democracy in the recent past. The other area is in South America: Ecuador, Colombia, Venezuela, each of which attained liberal democracy demographically “early” but was unable to sustain it. Interpreting these forecasts conservatively, we can expect there will be one, maybe two, in each group that will become stable liberal democracies by 2020.
This forecast was first presented at a U.S. State Department–sponsored expert meeting on the Middle East and North Africa (MENA) region in February 2008. In that presentation, the author (in response to a question) suggested that Tunisia, because of its sustained near-replacement fertility and the rapid maturing of that country's population age structure, was a likely launch point for democratization before 2020. Most of the nearly two dozen attending academics specializing in the MENA region (including several natives of the region), plus the government analysts in attendance, burst into raucous laughter—so much so that the meeting's chairman was forced to terminate the session.1
In October 2010, two months before demonstrations erupted across Tunisia, Cincotta submitted the following unclassified (Cincotta, 2010, unpublished) forecast to a U.S. intelligence agency requesting the submission of hypothetical low-probability, high-impact events that might occur over the next two years, affecting U.S. interests:
In this scenario, a North African state, probably Tunisia, undergoes a color revolution—a swift and non-violent transition to liberal democracy. This may bring Islamists into power—or maybe not. However, the possibilities for spreading democracy through the region and for new political dynamics to play out in an age-structurally maturing Arab state could produce both risks and opportunities for the US.2
After Tunisia’s and Egypt’s revolutions successfully upended what Middle East analysts had assumed to be rock-solid autocratic regimes, Nasim Taleb and Mark Blyth (2011) identified the North African uprisings as the culmination of an extended buildup of suppressed social forces, culminating in a politically explosive event, the nature and timing of which were impossible to predict—a “black swan.”
Yet regime change in North Africa was clearly not impossible to predict. More than two years prior to the North African revolutions, age-structural theory had been used to confront influential academic Middle East experts and U.S. government analysts with a reasonable image of this future—an image generated by associating the attainment of liberal democracy with a phase of the age-structural transition. They simply chose to believe that this image, and the method that conveyed it, were absurd.
Figure 1. The path of the age-structural transition, depicted for the world's independent states (populations of 500,000 or more) in 2015. Age-structural profiles are depicted for nine states: Yemen (median age, 19 years), Iraq (19 years), Egypt (25 years), Tunisia (31 years), China (37 years), United States (38 years), South Korea (41 years), Hungary (41 years), and Japan (47 years).
The forecast of one or more North African liberal democracies before 2020 (Cincotta, 2008–2009, p. 15)—states assigned Free status, rather than Partly Free or Not Free,3 in Freedom House’s annual global assessment of political rights and civil liberties—was realized in 2014 with Freedom House’s assessment of Tunisia as Free (FH, 2015). Since then, Colombia’s peace process has lurched haltingly forward, making a second published forecast look increasingly promising. That forecast predicts the rise of a liberal democracy before 2020 among the three-state cluster of Colombia, Venezuela, and Ecuador (Cincotta, 2008–2009). In its most recent assessment, Freedom House (2017) placed Colombia on the very borderline between Partly Free and Free—a Freedom Score of 3.0, trending upward toward Free.
# The Age-Structural Transition
Initiated by fertility decline, the age-structural transition entails gradual shifts in the relative size of age cohorts through a lengthy, relatively predictable series of configurations (Figure 1). The age-structural theory of state behavior owes much of its predictive potential to (a) the power of these configurations to influence, amplify, control, and reflect a broad range of interacting demographic, social, and economic conditions; and (b) the ability of demographers to predict future configurations using cohort component methodologies. To describe the age-structural transition with some narrative clarity, this article employs the classification system published in the U.S. National Intelligence Council’s Global Trends series of publications (NIC, 2012, 2017). Although the age-structural transition is continuous, this system intuitively divides the transition into four discrete phases, based on country-level median age (the age of the middle person, at which 50% of the population is younger): the youthful, intermediate, mature, and postmature phases.
The path of the age-structural transition can be described as a nonlinear influence on state capacity—in colloquial terms, a bad-news, good-news, bad-news story (Cincotta, 2012). Countries in the earliest, high-fertility portion of the transition experience youthful age structures that feature high childhood dependency and present obstacles to attaining high levels of institutional capacity and state legitimacy (Dyson, 2010). In the intermediate and mature phases, working-age adults proportionately dominate the population, yielding low levels of childhood and old-age dependency. Then, in the postmature phase of the transition, more than half of the age structure is occupied by the most mature portions of the working-age population (over age 45) and retirees—a condition that is bound to yield high levels of dependency and present challenges for pension systems.
With more than half of their population composed of newborns, infants, school-age children, adolescents, and women in their peak childbearing years, demand for health and educational services in youthful states (so-called youth-bulge countries, with median age less than 25.50 years) typically outstrips the state’s institutional capacity. Because annual growth rates among youth cohorts run high, children typically face school placement insufficiency and crowding, as well as low levels of societal investment per pupil (Lee & Mason, 2011). Meanwhile, young adults in these countries typically face intense competition for jobs and underemployment (Easterlin, 1968). Politically difficult to manage, youthful populations tend to feature locally powerful extended family and patronage networks (Wusu & Isiugo-Abanihe, 2006), and an elevated risk of intrastate conflict and other forms of political violence (Goldstone, 2012; Urdal, 2006; Goldstone, 2002; Mesquida & Wiener, 1999; Möller, 1968).
Countries that advance into the intermediate phase of the age-structural transition (median age of 25.50–35.49 years) experience lower proportions of their population among cohorts of dependent children and higher proportions in the productive, and taxable, working ages (a worker bulge). This transformation, typically the result of fertility decline below 2.5 children per woman, has been associated with improvements in health status, increased per-child investment in schooling (Lee & Mason, 2011), growth in savings (Higgins & Williamson, 1997), increased participation of women in the economy (Bauer, 2001), and often a faster pace of economic development and wealth accumulation (see chapters in Birdsall, Kelley, & Sinding, 2001)—a process that demographers generally term the “demographic dividend” (Bloom, Canning, & Sevilla, 2002, p. 25).
Economic growth rates tend to slow as states enter the third phase of the age-structural transition, the mature phase (median age of 35.50–45.49 years). Despite an aging workforce and a growing group of retirees in mature states, favorable economic and political conditions often prevail—a so-called second demographic dividend (Lee & Mason, 2006), a situation typically associated with states that amassed human capital during the intermediate phase.
In the final phase of the age-structural transition, states incur another challenging set of distributions: a series of postmature age structures (median age of 45.50 or greater) characterized by a large proportion of seniors and dependent elderly and declining numbers in the younger working ages. Whereas by 2016 only three states—Japan, Germany, and Italy—have entered this category, some researchers hypothesize that, as a group, future postmature states will face declining per capita productivity, fiscal imbalances (Jackson & Howe, 2008), substantial foreign debt (Eberstadt & Groth, 2010), and constrained participation in the international system (Haas, 2007).
# Age-Structural Models
The age-structural domain is the unifying feature of age-structural theory. Rather than considering relationships over chronological time, age-structural models reposition state behaviors on the age-structural domain—an x-axis that follows the continuous path of the age-structural transition, situating it as the principal independent variable. The age-structural domain, measured by median age, is the only continuous independent variable in logistic regression analysis. All controls and treatment variables are dichotomous (0,1). Likewise, the functional forms that are fitted to data by logistic regression (discussed later in this article) are displayed across a domain beginning at a median age of 15 years and ending at 55 years.
Each age-structural model begins as a hypothesis—whether generated notionally (from theory) or observationally—that a measurable state behavior is typically associated with the movement of states through the age-structural transition. It is important to stress—and periodically reiterate—that these models begin as a hypothesis that is epistemologically situated at the very edge of age-structural theory. Some age-structural models have been strengthened and brought into the more certain body of theory by repeated in-sample testing, out-of-sample testing, and some form of successful prediction. A few others—particularly hypotheses associated with the age-structural model of liberal democracy (ASM-LD)—have demonstrated their utility by outcompeting other methods of analysis and by exposing original insights into the processes under study.
In the most recent analyses by Cincotta (2015a, 2015b, 2015c), age-structural models employ the country-level median age as the only continuous independent variable. In these analyses, logistic regression has been used to determine whether or not the presence of a specific, discrete condition (the dependent variable) varies as states advance across the age-structural transition. If, indeed, it varies across this transition, then age-structural methods proceed to determine how that specific condition varies (i.e., its functional form); and the level of certainty that is associated with that variation (i.e., its confidence interval, or CI).
In age-structural models, the dependent variable (composed of data coded 1 or 0) must be discrete or discretized by creating a gradient of discrete categories. Examples of discrete conditions include the presence or absence (in a year) of a type of political regime or the presence or absence of an intrastate conflict. Examples of discretized variables include the attainment of specific levels of per capita income (e.g., the World Bank’s Upper-Middle Income Category), or levels of educational attainment (secondary school participation).
For logistic regression to produce a reasonable fit to these data, the frequency of the dependent variable, as it is sampled across the age-structural domain, must be adequately described by a logistic function—a monotonic sigmoid curve generated by the Verhulst equation (an S-shaped function, beginning low and approaching an upper asymptote), or by some segment of a logistic curve (Menard, 2001).4 Therefore, a segment of a logistic function cannot be fit adequately to frequency data that (a) rise substantially and then fall, or (b) fall substantially and then rise again across the domain.
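In generic notation (the coefficient symbols below are illustrative, not taken from the published models), the fitted probability across the median-age domain $m$ takes the familiar logistic form

$$\Pr(\text{condition} \mid m, \mathbf{d}) = \frac{1}{1 + \exp\!\left[-(\beta_0 + \beta_1 m + \boldsymbol{\gamma} \cdot \mathbf{d})\right]},$$

where $\mathbf{d}$ collects the dichotomous (0,1) control and treatment variables.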
Age-structural theory is a reductionist project. Operating on a single, continuous x-axis simplifies statistical analyses and permits the visual display of relationships on a two-dimensional graph—and, like all simplified statistical theories, it has limitations. First, it suffers from the “small number problem”—that is, predictions are most successful when considering clusters of states rather than a single state. Second, by itself, taking only median age into account, age-structural models are blind to the influence of nondemographic and subnational demographic factors. Unless these are noted through observation, studied through separate analyses, or added experimentally as discrete (0,1) variables to the statistical analysis, its reliance on the country-level scalar measure of median age as the lone continuous domain of its analysis can obscure important factors. For example, observations of the functional form of the age-structural model of liberal democracy have shown that the presence of a small population size (under 5 million) and regime types have substantial impact on outcomes (Cincotta, 2015a, 2015b; Weber, 2012).
## Median Age
While median age is not the only age-structural indicator in use as an independent variable, it was the initial preference of Timothy Dyson, and later the preference of Richard Cincotta, as the measure of the age-structural domain (see Dyson, 2013 for an earlier example, and also Cincotta, 2015b). Other age-structural indicators include various “youth bulge” measures (see Staveteig, 2005, for a review of indicators), and childhood and old-age dependency ratios. Except for median age, age-structural indicators have been focused on a specific phase of the age structural transition. Therefore, each has its own mathematical peculiarities. Whereas all are moderately to strongly correlated, among scalar measures, median age appears (to me, so far) to be relatively unbiased across the current extent of the age-structural transition—which today ranges from a median age of 15 years in Niger to today’s maximum, 47 years, for Japan’s population.
Nonetheless, as a characterization of age structure, median age is a rather unsophisticated reduction of a complex multicohort distribution. Admittedly, it can mask potentially important differences among age structures. That said, median age provides a simple and intuitive means for analysts to estimate and visualize the three most important analytical qualities in age-structural theory: a state’s position in the age-structural transition, its direction of movement, and its rate of change (Figure 2).
In the six Arab-majority states (Bahrain, Kuwait, Oman, Qatar, Saudi Arabia, and the United Arab Emirates) of the Gulf Cooperation Council (GCC), the relative sizes of male cohorts, from 25 to 40 years of age, are heavily influenced by the presence of temporary labor migrants. Rather than use the UNPD’s estimates and projections of median age for all residents in the GCC states, this article uses unpublished estimates and projections of citizen residents only (which excluded temporary labor migrants) from the U.S. Census Bureau’s International Program Center. These data are not currently available from the center’s International Data Base (USCB-IPC, 2015).5
Figure 2. The five-year rate of change in median age versus country-level median age, at midyear 2015. Age-structural profiles are depicted for five states: Nigeria (median age, 18 years), Iran (30 years), China (37 years), the United States (38 years), and Japan (47 years). Data are drawn from UN demographic estimates (UNPD, 2015).
The UNPD also publishes demographic projections, which begin at the last estimated year and proceed until 2100. For forecasting, this discussion uses future median ages that are projected by the UN medium fertility variant, which represents the statistical midpoint of simulated future trajectories.6 However, the UNPD also generates other standard projections: the UN high, low, and constant fertility variants, which together provide a broad vision of future possibilities.
How accurately does the UN medium fertility variant projection portray future median ages? Over a 15- to 20-year period, UNPD demographers have been reasonably close. While there have been a few demographic surprises over the past 50 years—Iran’s rapid fertility decline beginning in 1988, the emergence of HIV/AIDS, and an unexpected wave of migration to Israel following the breakup of the Soviet Union in the early 1990s—over a 20-year period, unexpected turnarounds have been rare.
## The Sample
The models generated in research by Cincotta and coauthors are based on data from 1972 onward—the year that Freedom House first made its annual assessment of civil liberties and political rights. This selection of years corresponds to a period after the dissolution of the remaining European empires (British, French, and Portuguese).
As a matter of consistent practice, this article uses the list of recognized independent political entities supplied by the United Nations. From this list, two types of entities have been eliminated from the UN data: any nonindependent political entity (e.g., Palestine, Western Sahara) whose state behavior may be constrained or induced by an occupying entity; and small, independent states with populations under 500,000 (including Belize, Iceland, Brunei, and numerous small island states). These states will be returned to the data pool when, and if, they obtain independence and their population grows to be larger than 500,000. Based on those criteria and the formation of new states (e.g., Eritrea, former Soviet republics, former Yugoslav republics, Slovak Republic, South Sudan, etc.), the annual data set has grown from 131 states in 1972 to 166 in 2015.
The use of states as the unit of analysis has several analytical limitations. The country-level median age may obfuscate the presence of significantly large minorities or noncitizens, who may display population dynamics differing substantially from the majority. Even when the country-level age structure has matured, minority-majority differences in demographic dynamics can be associated with ethnic tensions (Blomquist & Cincotta, 2016; Cincotta, 2011).
## Control Variables
It is useful to begin with the simplest form of the age-structural model: the naïve model, which contains no control variables (Model 1; see the example in Table 1). The naïve model can then be refined by adding dichotomous (presence or absence) control variables, singly or in combination. States with the qualities these variables mark often, but not always, perform differently from the larger group of states. Before graphing the function, running experiments with other dichotomous variables, or identifying exceptional states, it has been useful to control for the following three factors, depending on the dependent variable:
1. States with small populations. This factor identifies states with a midyear population of fewer than 5 million. The source of this estimate is the UNPD's 2015 revision of World Population Prospects (UNPD, 2015).
2. States engaged in high-intensity conflict. A state is deemed to be in high-intensity conflict if, in that year, it experienced more than 1,000 battle-related deaths, according to the current version of the UCDP/PRIO Conflict Dataset (UCDP/PRIO, 2016; Gleditsch, Wallensteen, Eriksson, Sollenberg, & Strand, 2002).
3. Resource-reliant states. A state is deemed resource-reliant if revenues from oil and mineral resources comprise more than 15% of GDP. Annual levels of oil revenue, mineral revenue, and GDP are taken from the 2015 version of the World Bank's World Development Indicators (WB, 2016).
Thus, current analyses typically rely on controlled models (Model 2; see examples in Table 1) that incorporate various combinations of these controls (e.g., Models 2a, 2ab, 2ac, 2bc, and 2abc).
Table 1. Logistic Regression for the Age-Structural Model of Liberal Democracy. Dependent variable: probability of being assessed as Free. Data source: Freedom House, Freedom in the World data (1972–2015). Scope: all states. Coefficients shown with standard errors in parentheses; statistical significance: * p < 0.05, ** p < 0.01.

| Variable | Model 1 (Naïve) | Model 2a | Model 2b | Model 2ab | Model 2c | Model 2ac (Basic 1) | Model 2abc (Basic 2) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Median age | 0.180** (0.005) | 0.183** (0.005) | 0.179** (0.005) | 0.183** (0.005) | 0.175** (0.005) | 0.179** (0.005) | 0.179** (0.005) |
| (a) Small population (< 5 million) | | −0.464** (0.070) | | −0.484** (0.071) | | −0.468** (0.072) | −0.488** (0.073) |
| (b) High-intensity conflict (> 1,000 battle-related deaths/year) | | | −0.004 (0.127) | −0.180 (0.130) | | | −0.174 (0.131) |
| (c) Resource reliance (mineral and oil revenue > 15% of GDP) | | | | | 0.718** (0.124) | 0.699** (0.124) | 0.697** (0.124) |
| Constant | −5.226 (0.121) | −5.002 (0.124) | −5.197 (0.173) | −4.819 (0.181) | −5.739 (0.155) | −5.503 (0.157) | −5.323 (0.207) |
| N | 6246 | 6246 | 6245 | 6246 | 6246 | 6246 | 6246 |
| Pseudo-r² (%) | 27.2 | 27.4 | 26.9 | 27.5 | 27.6 | 27.9 | 27.9 |
| Free50 (median age, years) | 29.0 | 29.8 | 29.0 | 29.9 | 28.6 | 29.4 | 29.7 |
As a general observation, the control for a small population tends to have the strongest statistical impact on the naïve model. The control for high-intensity conflict has occasionally been statistically significant, particularly in health- and income-related models, but it is omitted when the dependent variable is conflict-related. Oil-mineral reliance has been statistically significant in most age-structural models.
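To make the modeling setup concrete, the following is a minimal sketch (not the authors' code) of how a naïve and a controlled logistic model of the kind summarized in Table 1 could be fit. The country-year panel, its column names, and the synthetic outcome are hypothetical stand-ins for the Freedom House and UN data described above.

```python
# Minimal sketch: fitting a naive and a controlled logistic model of the kind
# summarized in Table 1. The country-year panel here is synthetic and purely
# illustrative; the column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
panel = pd.DataFrame({
    "median_age": rng.uniform(15, 50, n),
    "small_pop":  rng.integers(0, 2, n),   # 1 = population < 5 million
    "conflict":   rng.integers(0, 2, n),   # 1 = > 1,000 battle-related deaths/year
    "resource":   rng.integers(0, 2, n),   # 1 = oil/mineral revenue > 15% of GDP
})
# Synthetic outcome roughly following the published Model 2abc coefficients.
logit_p = (-5.3 + 0.18 * panel.median_age - 0.49 * panel.small_pop
           - 0.17 * panel.conflict + 0.70 * panel.resource)
panel["free"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

m_naive = smf.logit("free ~ median_age", data=panel).fit(disp=False)
m_2abc  = smf.logit("free ~ median_age + small_pop + conflict + resource",
                    data=panel).fit(disp=False)
print(m_naive.params)
print(m_2abc.params)
```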
# Age-Structural Functions
The most basic and often the most effective way to both conceptualize and communicate the results of age-structural modeling has been to display the resultant age-structural relationship in its two-dimensional standard form, where (a) the dependent variable is expressed as a probability (0.0–1.0) of a status on the vertical axis, and (b) the range of country-level median ages (15–55 years) is expressed on the horizontal axis—the age-structural domain. The function can be generated by substituting the regression coefficients from the logistic regression analysis into the standard logistic model, and then computing the probability (p) of condition Y for median age (MA) across the contemporary range (currently from 15 to 47 years):
$p_Y = \dfrac{e^{\beta_0 + \beta_1 \cdot MA}}{1 + e^{\beta_0 + \beta_1 \cdot MA}}$
Although age-structural models typically include dichotomous controls and occasionally additional experimental dichotomous variables, they are driven by a single continuous independent variable (median age). Thus, most commercial software that computes logistic regression will generate as output dependent variable probabilities (pY), their upper and lower 0.95 CIs, and a graphic depiction of the function.
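As an illustration, the published point estimates for the naïve model (Model 1 in Table 1) can be substituted into the logistic form directly; the sketch below assumes only those two coefficients and reproduces the reported Free50 of about 29 years.

```python
# Minimal sketch: evaluating the naive (Model 1) age-structural function for
# liberal democracy from the Table 1 coefficients, and recovering Free50 as
# the median age at which p = 0.50.
import math

b0, b1 = -5.226, 0.180          # constant and median-age coefficient (Model 1)

def p_free(median_age: float) -> float:
    """Probability of being assessed as Free at a given median age."""
    z = b0 + b1 * median_age
    return 1.0 / (1.0 + math.exp(-z))

for ma in (15, 20, 25, 30, 35, 40, 45):
    print(f"median age {ma:2d} -> p(Free) = {p_free(ma):.2f}")

free50 = -b0 / b1               # logistic inflection point, where p = 0.50
print(f"Free50 = {free50:.1f} years")   # ~29.0, matching Table 1
```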
## Classes of Age-Structural Functions
Not all age-structural functions are qualitatively similar. Age-structural functions can be placed in one of three classes (I, II, III), depending on their conditions, form, and fit.
Class I age-structural functions belong to a family of cumulative distribution functions (CDFs). These CDFs are generally associated with irreversible, or virtually irreversible, processes and are generated by the numerical integration of the normal or near-normal frequency distribution that discrete levels of these processes characteristically produce (e.g., the World Bank Income Categories and under-5 mortality levels). When these data are submitted to logistic regression analysis, Class I age-structural functions generally produce an ideal fit that takes the form of a complete logistic function (an S-shaped curve) that typically produces higher pseudo-r2 values and narrower CIs than functions in Classes II and III. Class I functions that rise quickly on the age-structural domain are the most appropriate for forecasting. The median age at the function’s inflection point and nearby median ages are indicative of a region where analysts should expect a higher-than-average achievement of the condition being studied.
Class II age-structural functions appear as complete, or nearly complete, logistic functions. They describe a state process that largely moves in one direction across the age-structural domain but is not irreversible, and there is no a priori assumption that every state will complete the transition (e.g., the presence of liberal democracy). Class II functions can rise sufficiently rapidly to become the basis of statistical forecasts. However, since they are fit to a reversible process, there is no guarantee that at all regions of the age-structural domain, the direction of movement is equally strong.
Class III age-structural functions appear as a portion of a logistic curve and carry no a priori assumptions that all states will undergo the hypothesized transition. Because Class III functions are typically a poor fit to the data, the underlying data deserve close inspection to determine whether a monotonic portion of a logistic curve accurately characterizes the dependent variable’s variation in the age-structural domain. Other statistical models may be more appropriate.
## A Class I Function: The World Bank’s Income Categories
The World Bank Income Category Model (ASM-GNI) is composed of four Class I age-structural functions that generate expectations of the age-structural timing of each of the World Bank’s standard income categories (Figure 3). These categories are based on gross national income per capita (GNI per capita), calculated in current-year (or other standard-year) U.S. dollars using the World Bank’s Atlas Method (WB, 2016). States rarely slip from a higher to a lower category.
Figure 3. A set of four Class I age-structural functions describing the probability of being assessed in one of the World Bank’s four per capita income categories (WB, 2016), based on GNI per capita (U.S. dollars, Atlas Method), at a median age within the age-structural domain. The four curves represent Low Income (L), Lower-Middle Income and higher categories (LM+), Upper-Middle Income and higher categories (UM+), and High Income (H). The 0.95 CIs surrounding these functions are, at their maximum, ±0.9 years of median age.
GNI per capita (Atlas Method) data were transformed into four dependent variable data sets, each composed of presence (denoted by 1) or absence data (denoted by 0). To generate the age-structural function for the World Bank’s Low Income Category, each annual quantitative datum was transformed to indicate whether the country in question was, during that year, within the Low Income Category (1), or not (0). For the three higher categories (Lower-Middle Income, Upper-Middle Income, and High Income), GNI per capita data were transformed to identify whether the state was in the chosen category (1), a higher category (also denoted as 1), or a lower category (0).
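A minimal sketch of this recoding is shown below, using the 2011 $US class boundaries listed in Table 2; the function and its thresholds are illustrative rather than a reproduction of the original data pipeline.

```python
# Minimal sketch of the recoding described above: turning GNI per capita
# (Atlas Method) into four presence/absence indicators. Thresholds are the
# 2011 $US class boundaries shown in Table 2 and are used here for illustration.
def income_indicators(gni_per_capita: float) -> dict:
    low  = gni_per_capita <= 1_025          # Low Income
    lm   = gni_per_capita >  1_025          # Lower-Middle Income or higher
    um   = gni_per_capita >  4_035          # Upper-Middle Income or higher
    high = gni_per_capita > 12_475          # High Income
    return {"L": int(low), "LM+": int(lm), "UM+": int(um), "H": int(high)}

# e.g., a state with GNI per capita of $6,550 (Iran, 2015, per Table 2's note):
print(income_indicators(6_550))   # {'L': 0, 'LM+': 1, 'UM+': 1, 'H': 0}
```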
The functions displayed in this graph are the product of Model 2ac, which uses two statistically significant controls (p < 0.05): small population size (< 5.0 million) and reliance on oil or mineral resources (> 15% of GDP). With these controls in place, each of the logistic functions is surrounded by relatively narrow 0.95 CIs, reaching a maximum of ±0.9 years on the median-age axis at low median ages.
Although still untested by forecasting and experimentation, and not yet examined in terms of the behavior of its exceptional states, the model reveals fresh aspects of the relationship between age structure and income. It appears that states routinely achieve the World Bank's Lower-Middle Income category in the youthful phase of the age-structural transition (median age of 25 years or less), but the results of modeling suggest that states must be well into the intermediate phase of the age-structural transition (thus attaining fertility levels below 2.5 children per woman) to achieve Upper-Middle Income status, a milestone on the pathway to economic development at which development donors "graduate" countries from basic sectors of development.
Notably, the demographic window of opportunity—introduced by UNPD (2004) to estimate the period of greatest potential for economic development—coincides closely with the period when most states attain Upper-Middle Income status. In its original formulation, the demographic window was calculated to open when the proportion of children 0–14 years of age dipped below 30% of the total population and seniors (65 years and older) remained below 15% of the population. In the age-structural domain, that ranges from a median age of about 26 years to about 41 years.
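The original window rule is simple enough to express directly; the sketch below encodes only the two thresholds quoted above, with hypothetical input proportions.

```python
# Minimal sketch of the UNPD (2004) demographic-window rule quoted above:
# the window is "open" when under-15s fall below 30% of the population and
# the 65-and-over share remains below 15%. Inputs are hypothetical proportions.
def demographic_window_open(share_under_15: float, share_65_plus: float) -> bool:
    return share_under_15 < 0.30 and share_65_plus < 0.15

print(demographic_window_open(0.28, 0.08))   # True  -> window open
print(demographic_window_open(0.42, 0.03))   # False -> still too youthful
print(demographic_window_open(0.13, 0.27))   # False -> already aged
```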
## A Class II Function: The Presence of Liberal Democracy
The ASM-LD generates timed expectations of the likelihood of being assessed at a high level of democracy across the age-structural axis. It is the best studied of the age-structural models, having been investigated by three independent research efforts, each of which used different measures of age structure (several variations of "youth bulge" measures, median age) and different indicators of democracy, including Freedom House's Free status (Cincotta, 2008, 2008–2009), high levels (8–10) of Polity IV regime scores (Cincotta & Doces, 2012; Weber, 2012), and high levels of voting as a proportion of eligible voters (Dyson, 2013). The conclusions were similar. Moreover, the ASM-LD has been the subject of several successful forecasts and statistical experiments, which in turn have inspired additional hypotheses and modeling (Cincotta, 2008–2009; Cincotta & Doces, 2012; Cincotta, 2015a, 2015b).
Figure 4. A Class II age-structural function, describing the probability of being assessed as Free in Freedom House’s annual survey of civil liberties and political rights (FH, 2017). The function’s 0.95 CIs are shown as dotted lines surrounding the logistic curve. Free50 is this function’s inflection point (at median age of about 29 years), and the point where the probability of being assessed as Free is 0.50.
The functional form of the ASM-LD, shown here (Figure 4), plots the timed expectation of attaining Free status in Freedom House's annual survey (Model 2ac; see Table 1) as a probability calculated across the age-structural domain (Cincotta, 2015b). The most rapid pace of shifts to Free from lower categories should be expected to occur around the theoretical inflection point, where the probability of being assessed as Free is 0.5. This point, called Free50, lies at about 29.5 (±0.5) years of median age.
## Class III Function: The Presence of Intrastate Peace
The Age-Structural Model of Intra-State Peace (ASM-ISP) predicts the probability of the absence of intrastate conflict across the age-structural domain. The model draws its data on the presence or absence of intrastate conflict (more than 25 battle-related deaths per year) from the UCDP/PRIO Conflict Dataset, maintained and published cooperatively by the Uppsala Conflict Data Program (UCDP) and the Peace Research Institute Oslo (PRIO) (UCDP/PRIO, 2016; Gleditsch et al., 2002; Themnér & Wallensteen, 2013). Its function (Figure 5) is a Class III age-structural function, estimated with controls for small population (< 5.0 million) and natural resource reliance (resource rents > 15.0% of GDP).
Figure 5. The Class III age-structural function associated with the absence of low- or high-intensity intrastate conflict. Dotted lines surrounding the function represent its 0.95 CI. Data are drawn from the UCDP/PRIO Conflict Dataset (UCDP/PRIO, 2016).
The ASM-ISP is neither a tightly fit nor a strongly predictive model; its gradual slope is not conducive to forecasting. It is nonetheless useful for mapping the states, now and over the next two decades, that are generally vulnerable to the outbreak of intrastate conflict and other forms of political violence. It is worth noting that, according to the ASM-ISP, at a median age of 15.0 years the probability of a state being free of intrastate conflict is still roughly 0.60. Further investigations of the function indicate that while civil conflicts appear almost exclusively in the youthful portion of the age-structural domain (median age of 25 years or less), ethnoreligious conflicts extend throughout the domain (Yair & Miodownik, 2016).
# Exceptional States
While age-structural analyses have demonstrated the association between age-structural configurations and the behavior of states, they have also shown that other factors—including some regime types (Urdal, 2006; Cincotta, 2008, 2008–2009, 2015a, 2015b; Cincotta & Doces, 2012), regional neighbors (Cincotta, 2015a, 2015b), and minority domestic demographics (Leuprecht, 2010; Blomquist & Cincotta, 2016)—can mediate, and even overpower, the apparent influence of country-level age structure. These factors have become apparent through the observation of two types of states: (a) those that fall short of the model's expectations, and (b) those that far exceed them. In both cases, analysts engaged in this method should ask themselves the following questions:
• What are the qualities of that exceptionalism?
• Where, along the age-structural transition, do these exceptions cluster?
• What properties do these states possess in common at the time of their exceptionalism that might permit or encourage them to behave in an exceptional manner?
Observations of exceptional state behavior have spawned new hypotheses and experimental models and helped identify control variables. For example, Cincotta (2008–2009, 2015b) indicated that while generating the age-structural model of liberal democracy, he became curious about the political dynamics of two groups of exceptional states: (a) those that were assessed as liberal democracies when youthful (median age 25 years or less); and (b) those that remained nondemocracies when their age structure became mature (median age 36–45 years). He observed youthful democracies, as a group, to be ephemeral. Most maintained Free status well under a decade before dropping to a lower status category. Of the 62 declines from Free status recorded in Freedom House’s data, 58 occurred among states with a median age of less than 26 years (Cincotta, 2015c).
Figure 6. Two Class III functions describing two distinct changes in regime status along the age-structural domain: the probability of a state that is currently not assessed as Free becoming Free during the next year (Gain Free); and the probability of a state currently assessed as Free losing that assessment in the next year (Lose Free). Together, the functions suggest that a rise to Free before α, between median age 26 and 27 years, is unlikely to be stable. Data are drawn from Freedom House data (FH, 2017).
Both Cincotta (2008–2009, 2015a) and Weber (2012) further investigated the hypothesis that youthful democracies are ephemeral regimes by fitting regression models to the probability of a state gaining liberal democracy and to the probability of a state losing it, once gained (Figure 6). They were unable to disprove these hypotheses. Cincotta (2015b) concluded that states assessed as Free were unlikely to achieve stable liberal democracy before a median age between 26 and 27 years. Among states with a population greater than 500,000, only three contemporary exceptions can be found so far—Costa Rica, Jamaica, and Botswana—all under 5 million in population.
Figure 7. Four Class III age-structural functions plotting the probability of specific authoritarian regime types, as identified by the Authoritarian Regime Data Base, version 5 (Wahman et al., 2013) across the age-structural domain: military regimes (MIL), monarchical regimes (MON), single-party regimes (SP), and multiparty regimes (MP). A fifth function (LD) plots the probability of a liberal democracy, which is Free in Freedom House’s annual assessment.
Similarly, it was observed that military rulers were notably absent among the authoritarian regimes that survived into the mature portion of the age-structural transition (median age of 36–45 years). Also, it was hypothesized that, although military regimes have a reputation for longevity, they are displaced quickly in age-structural time—deposed or voluntarily ended by the end of the intermediate phase (Cincotta, 2008, 2008–2009, 2015a, 2015b). Since the publication of that research, Cincotta has performed several preliminary tests of this observational hypothesis (Figure 7) using data from the Authoritarian Regime Data Base (Wahman, Teorell, & Hadenius, 2013; Hadenius & Teorell, 2007).
# Forecasts and Other Predictive Products
Besides the functional form, other age-structural products have been used to make statistical forecasts, explore the future (horizon scanning), build early-warning problems, and demonstrate relationships illuminated by age-structural theory. This section presents four products that have helped analysts better understand the future: group statistical forecasts, age-structural maps, country-specific statistical forecasts, and regional statistical summaries.
## Group Statistical Forecasts
Recounting a successful forecast provides the most useful description of how to organize a statistical forecast using the age-structural theory. The 2008 group forecasts advised of the high likelihood of one state in North Africa (out of five states) and one in the northwestern corner of South America (out of three states) becoming Free by 2020. The reason for choosing these two contiguous groups was simple. At the time of the forecasts, none of the states were assessed as Free. However, by 2020, several in each region were projected to pass Free50, a median age of 28–29 years—the theoretical peak-change in the probability of being assessed as Free.
Generating statistical probabilities from a large data set and applying them, for the purposes of forecasting, to small groups is a risky venture. Statisticians call it the small number problem—the smaller the group, the greater chance it has of being entirely clustered near an edge of the population’s distribution, far from its behavioral central tendency.
Therefore, to make a reasonable forecast with a small group, the probability of a behavior being observed should be very high. For the North African group, it was. In 2008, the age-structural model generated for this problem calculated the probability of observing at least one assessment of Free, by 2020, among the five-state North African cluster at 0.97.7 For the three-state South American cluster, it was not as high—that same calculation yielded a probability of 0.89.
In Freedom House’s assessment for the end of 2016 (FH, 2017), only two states are scored Free that were not similarly scored in 2008: Tunisia and Senegal. Since Freedom House’s assessment was first published in 1972, Tunisia had never been scored Free. Senegal was last scored Free from 2002–2007, dropped to Partly Free for three years, was again assessed as Free in 2012, and retains that assessment today.
The probability of successfully picking one of the two, by randomly choosing a group of five states from the field of 99 Partly Free and Not Free states in 2008, was 0.10 (9-to-1 odds against a correct pick). Choosing from eight states increases the probability to 0.15 (5.6-to-1 against a correct pick). Had both Tunisia and Colombia been assessed Free, their probability of being chosen at random in 2008 would have been 0.02.
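The first two random-pick figures can be checked with a simple hypergeometric calculation; the sketch below assumes a pool of 99 states containing the two states that eventually became Free, as described above.

```python
# Minimal check of the random-pick figures quoted above: the chance that a
# blind draw of k states from the 99 Partly Free / Not Free states of 2008
# would include at least one of the two that later became Free.
from math import comb

def p_at_least_one(pool: int = 99, hits: int = 2, draw: int = 5) -> float:
    # 1 minus the probability that the draw misses both target states.
    return 1 - comb(pool - hits, draw) / comb(pool, draw)

print(f"{p_at_least_one(draw=5):.2f}")   # 0.10, as cited above
print(f"{p_at_least_one(draw=8):.2f}")   # ~0.16, close to the 0.15 cited above
```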
## Age-Structural Maps
The most intuitive products are maps that track, in chronological time, the four discrete age-structural phases assigned to each country (Cincotta, 2015b). When exploring Class III age-structural functions, which are insufficiently steep to support definitively timed forecasts, maps of the age-structural phases—in the past, the present, and two or three decades into the future—may be the most effective and informative means to communicate age-structural changes and the possibilities that are associated with them. Moreover, maps are the least statistically presumptuous. They allow their audience to come to their own conclusions by viewing the geographic distribution of the four age-structural phases.
## Country-Specific Statistical Forecasts
When change in a political, social, or economic variable is statistically associated with movement through the age-structural transition, age-structural models can be used to generate statistical predictions of a state being in a discrete category of that variable. One example of a country-specific table, for Iran (Table 2), uses the four Class I income functions (see Figure 3) to describe the relationship between a state's position in the age-structural transition (measured by median age) and the probability of being in each of the World Bank's four income classes.
Table 2. Statistical Expectations of Being in Each of the Four World Bank Income Classes, Iran (GNI per capita, 2011 $US, Atlas Method)

| World Bank Income Class | Class Range (2011 $US) | 2015 Expectation (probability) | 2025 Expectation (probability) | 2035 Expectation (probability) |
| --- | --- | --- | --- | --- |
| Low Income | ≤ 1,025 | 0.05 | 0.01 | < 0.01 |
| Lower-Middle Income | 1,026–4,035 | 0.56 | 0.19 | < 0.01 |
| Upper-Middle Income | 4,036–12,475 | 0.26 | 0.39 | 0.25 |
| High Income | > 12,475 | 0.13 | 0.41 | 0.75 |
| Median age (years) | | 29.5 (UNPD estimate) | 35.5 (UNPD medium fertility variant) | 40.9 (UNPD medium fertility variant) |

Note: (*) The World Bank Income Class to which Iran is assigned, 2015 (GNI per capita of $6,550, 2011 $US) (WB, 2016).
In this example, probabilities are generated for 2015 using the UNPD estimate for Iran, and for 2025 and 2035 using UNPD medium fertility variant projections (UNPD, 2015).
## Regional Summaries
Age-structural models can be used to generate regional summary tables (Cincotta, 2015a, 2015b) that provide consumers with the model’s most relevant output (Table 3). In this example, the table lists the state’s name (column 1); its median age (in years) (column 2); its current age-structural category (column 3); its most recent Freedom Score and Freedom Status (column 4); the probability of that state being assessed as Free in the current year (column 5); and the year that the median age has passed, or is projected to pass, Free50—a median age of 29.0 years, according to the UNPD’s estimates or medium fertility variant projections (column 6). Free50 is the year that the state first attains a 0.50 chance of being assessed as Free, according to the age-structural model.
Table 3. Summary of the Middle East and North Africa (MENA) Region, 2016

| State | Median Age 2016 (years) | Age-Structural Category | Freedom Score 2016 | Probability of Free | Free50 (year) |
| --- | --- | --- | --- | --- | --- |
| Cyprus | 36 | Mature | 1.0 (F) | 0.79 | 1984 |
| Israel | 30 | Intermediate | 1.5 (F) | 0.57 | 2006 * |
| Tunisia | 31 | Intermediate | 2.0 (F) | 0.62 | 2010 |
| Turkey | 30 | Intermediate | 4.5 (PF) | 0.56 | 2012 |
| Iran | 30 | Intermediate | 6.0 (NF) | 0.56 | 2014 |
| Lebanon | 29 | Intermediate | 4.5 (PF) | 0.51 | 2016 |
| Morocco | 28 | Intermediate | 4.5 (PF) | 0.47 | 2018 |
| Algeria | 28 | Intermediate | 5.5 (NF) | 0.45 | 2020 |
| Libya | 28 | Intermediate | 6.5 (NF) | 0.45 | 2020 |
| Bahrain | 27 | Intermediate | 6.5 (NF) | 0.42 | 2022 |
| Saudi Arabia | 24 | Youthful | 7.0 (NF) | 0.31 | 2026 |
| Syria | 21 | Youthful | 7.0 (NF) | 0.20 | 2035–40 |
| Jordan | 23 | Youthful | 5.0 (PF) | 0.25 | 2035–40 |
| Egypt | 25 | Youthful | 5.5 (NF) | 0.33 | >2040 |
| Oman | 22 | Youthful | 5.5 (NF) | 0.23 | >2040 |
| Qatar | 21 | Youthful | 5.5 (NF) | 0.19 | >2040 |
| Kuwait | 21 | Youthful | 5.0 (PF) | 0.19 | >2040 |
| Yemen | 20 | Youthful | 6.5 (NF) | 0.16 | >2040 |
| Iraq | 19 | Youthful | 6.5 (NF) | 0.15 | >2040 |
| UAE | 19 | Youthful | 6.0 (NF) | 0.14 | >2040 |

Notes: (*) Median age of citizen-residents only; labor migrants discounted.
(**) Age-structural transition confounded by episodic immigration.
(↓) Downward trending Freedom Score.
(F) Free; (PF) Partly Free; (NF) Not Free (based on FH, 2017).
The order of states is established by chronologically sorting Free50 (low to high, in column 6). In this arrangement, states assessed as Free (F) typically cluster near the top of the table, and states experiencing intrastate conflict tend to cluster near the bottom. Ideological political monopolies (e.g., Iran) characteristically behave without deference to the order of the list. In states near or past Free50, the rise of ideological leadership (e.g., Turkey) or persistent intrastate conflict (Turkey, Libya) tends to stall democratization. Military regimes and monarchies typically do not remain in power past a median age of 35 years.
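A minimal sketch of how such a summary can be assembled and sorted is given below; the few rows are copied from Table 3, the column names are illustrative, and the open-ended ">2040" entries are coded with a placeholder year solely so they sort last.

```python
# Minimal sketch: assembling a regional summary like Table 3 and sorting it
# by the Free50 year. Rows are copied from Table 3 above; pandas is simply a
# convenient way to hold and sort the data.
import pandas as pd

rows = [
    ("Tunisia", 31, "Intermediate", "2.0 (F)",  0.62, 2010),
    ("Cyprus",  36, "Mature",       "1.0 (F)",  0.79, 1984),
    ("Iraq",    19, "Youthful",     "6.5 (NF)", 0.15, 2041),   # ">2040" coded as 2041 for sorting
    ("Iran",    30, "Intermediate", "6.0 (NF)", 0.56, 2014),
]
cols = ["state", "median_age_2016", "phase", "freedom_score_2016",
        "p_free", "free50_year"]
summary = pd.DataFrame(rows, columns=cols).sort_values("free50_year")
print(summary.to_string(index=False))
```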
# Discussion
The age-structural theory of state behavior is a work in progress, as all theories should be. Its focus has been on generating timed expectations—on providing analysts and policymakers with a statistical means to anticipate intelligence-worthy events and conditions using a set of models and forward-looking products. There are good reasons to continue this effort. Studied in chronological time, the timing of dramatic political changes has often befuddled country and regional specialists and caught diplomats by surprise. When viewed over the age-structural time domain (measured in years of median age), however, some of these shifts appear quite predictable—not at all like the unique events that current academic literature portrays them as being.
Age-structural theory is not the answer to discerning the dynamics of all state behaviors. Its predictive and heuristic potentials are, of course, limited to those aspects of state behavior that are associated with progress along the age-structural transition. Within that limited scope, however, the availability of freely accessible data and reproducible models, the potential for iterative testing and prediction, and the possibility of model rejection and reformulation give this pursuit many of the progressive epistemological qualities of scientific research programs. And while age-structural theory is limited in scope, it may be among the few theories that could help provide a much firmer theoretical foundation for defense, foreign affairs, and intelligence analysts than they currently enjoy.
Analysts need not be mathematically savvy to benefit from this theory. Nor do they need to abandon the analytical tools or perspectives they personally favor. To gain age-structural theory's insights, they need only temporarily suspend a few of the causal explanations they hold dear and step into age-structural time—into a world viewed from the perspective of the age-structural transition. Those who take that step enter an analytical environment that is far more orderly and predictable than the chronological world in which we live, and far less prejudiced by our experiences, by the politics of the analytical environment, or by the ideological biases of political theories.
## References
Bauer, J. G. (2001). Demographic change, development, and the economic status of women in East Asia. In A. Mason (Ed.), Population change and economic development in East Asia: Challenges met, opportunities seized (pp. 359–384). Stanford, CA: Stanford University Press.Find this resource:
Birdsall, N., Kelley, A. C., & Sinding, S. W. (Eds.). (2001). Population matters: Demographic change, economic growth, and poverty in the developing world. London: Oxford University Press.Find this resource:
Blomquist, R., & Cincotta, R. (2016). Myanmar’s democratic deficit: Demography and the Rohingya dilemma. New Security Beat, April 12. Available from https://www.newsecuritybeat.org/2016/04/myanmars-democratic-deficit-demography-rohingya-dilemma/.Find this resource:
Bloom, D. E., Canning, D., & Sevilla, J. (2002). The demographic dividend: A new perspective on the economic consequences of population change. Santa Monica, CA: RAND.Find this resource:
Cincotta, R. (2008). How democracies grow up: Countries with too many young people may not have a fighting chance for freedom. Foreign Policy, 165, 80–82.Find this resource:
Cincotta, R. (2008–2009). Half a chance: Youth bulges and transitions to liberal democracy. Environmental Change and Security Program Report, 13, 10–18.Find this resource:
Cincotta, R. (2010, unpublished). Memo addressed to the director of Strategic Futures, (U.S.) National Intelligence Council, in reply to a request for participation in a low-probability, high-impact-event exercise. Submitted October 20, 2010.Find this resource:
Cincotta, R. (2011). Minority youth bulges and the future of intrastate conflict. New Security Beat, October 13. Available from http://www.newsecuritybeat.org/2011/10/minority-youth-bulges-and-futureof.html.Find this resource:
Cincotta, R. (2012). Demography: A development perspective. In J. Spear & P. D. Williams (Eds.), Security and development in global politics: A critical comparison (pp. 291–310). Washington, DC: Georgetown University Press.Find this resource:
Cincotta, R. (2015a). Who’s next? Age structure and the prospects of democracy in North Africa and the Middle East. In C. Timmerman, N. Karel, S. Mels, J. Haers, & K. Matthijs (Eds.), Population change in Europe, the Middle East, and North Africa: Beyond the demographic divide (pp. 167–202). London: Ashgate.Find this resource:
Cincotta, R. (2015b). Demography as early warning: Gauging future political transitions in the age-structural time domain. Special issue: Early warning. Journal of Intelligence Analysis, 22(2), 129–148.Find this resource:
Cincotta, R. (2015c). Will Tunisia’s democracy survive? The view from political demography. New Security Beat, May 11. Available from http://www.newsecuritybeat.org/2015/05/tunisias-democracy-survive-view-political-demography/.Find this resource:
Cincotta, R., & Doces, J. (2012). The age-structural maturity thesis: The youth bulge’s influence on the advent and stability of liberal democracy. In J. A. Goldstone, E. Kaufmann, & M. D. Toft (Eds.), Political demography: How population changes are reshaping international security and national politics (pp. 98–116). Boulder, CO: Paradigm.Find this resource:
Cincotta, R., & Kaufmann, E. (2010). Unpromising demography in a promised land. NIC 2010-05. Washington, DC: National Intelligence Council.Find this resource:
Dyson, T. (2010). Population and development: The demographic transition. London: Zed Books.Find this resource:
Dyson, T. (2013). On the democratic and demographic transitions. Population and Development Review, 38(Suppl.), 83–102.Find this resource:
Easterlin, R. A. (1968). Population, labor force, and long swings in economic growth: The American experience. New York: National Bureau of Economic Research and Columbia University.Find this resource:
Eberstadt, N., & Groth, H. (2010). Demography and public debt: Time for a “demographic stress test” for the Western economies. What does it mean for Switzerland? WDA-HSG Letters on Demographic Issues, No. 2010/1. St. Gallen, Switzerland: University of St. Gallen.Find this resource:
Freedom House (FH). (2015). Freedom in the world, 2015. New York: Freedom House.Find this resource:
Freedom House (FH). (2017). Freedom in the world, 2017. New York: Freedom House.Find this resource:
Gleditsch, N. P., Wallensteen, P., Eriksson, M., Sollenberg, M., & Strand, H. (2002). Armed conflict 1946–2001: A new dataset. Journal of Peace Research, 39(5), 615–637.Find this resource:
Goldstone, J. A. (2002). Population and security: How demographic change can lead to violent conflict. Journal of International Affairs, 56(1), 3–22.Find this resource:
Goldstone, J. A. (2012). Demography: A security perspective. In J. Spear & P. S. Williams (Eds.), Security and development in global politics: A critical comparison (pp. 271–289). Washington, DC: Georgetown University Press.Find this resource:
Haas, M. L. (2007). A geriatric peace? The future of U.S. power in a world of aging populations. International Security, 32(1), 112–147.Find this resource:
Hadenius, A., & Teorell, J. (2007). Pathways from authoritarianism. Journal of Democracy, 18(1), 143–156.Find this resource:
Higgins, M., & Williamson, J. A. (1997). Age structure dynamics in Asia and dependence on foreign capital. Population and Development Review, 23(2), 261–293.Find this resource:
Jackson, R., & Howe, N. (2008). The graying of the great powers: Demography and geopolitics in the 21st century. Washington, DC: Center for Strategic and International Studies.Find this resource:
Lee, R., & Mason, A. (2006). What is the demographic dividend? Finance and Development, 43(3), 16–17.Find this resource:
Lee, R., & Mason, A. (2011). Population aging and the generational economy: A global perspective. Cheltenham, U.K.: Edward Elgar.Find this resource:
Leuprecht, C. (2010). The demographic security dilemma. Yale Journal of International Affairs, 5(2).Find this resource:
Menard, S. (2001). Applied logistic regression. Thousand Oaks, CA: SAGE.Find this resource:
Mesquida, C. G., & Wiener, N. I. (1999). Male age composition and the severity of conflicts. Politics in the Life Sciences, 18(2), 181–189.Find this resource:
Möller, H. (1968). Youth as a force in the modern world. Comparative Studies in Society and History, 10, 237–260.Find this resource:
Munck, G. L., & Verkuilen, J. (2002). Conceptualizing and measuring democracy: Evaluating alternative indices. Comparative Political Studies, 35(1), 5–34.Find this resource:
National Intelligence Council (NIC). (2012). Global trends, 2030: Alternative worlds. Office of the Director of National Intelligence, Washington, DC.Find this resource:
National Intelligence Council (NIC). (2017). Global trends: Paradox of progress. Office of the Director of National Intelligence, Washington, DC.Find this resource:
Staveteig, S. (2005). The young and the restless: Population age structure and civil war. Environmental Change and Security Program Report, 11, 12–19.Find this resource:
Taleb, N. N., & Blyth, M. (2011). The black swan of Cairo: How suppressing volatility makes the world less predictable and more dangerous. Foreign Affairs, 90(3), 33–39.Find this resource:
Themnér, L., & Wallensteen, P. (2013). Armed conflict, 1946–2012. Journal of Peace Research, 50(4), 509–521.Find this resource:
United Nations Department of Economic and Social Affairs, Population Division (UNPD). (2004). World population to 2300. New York: United Nations.Find this resource:
United Nations Department of Economic and Social Affairs, Population Division (UNPD). (2015). World population prospects: The 2015 revision. New York: United Nations. Available from http://esa.un.org/unpd/wpp/Excel-Data/population.htm.Find this resource:
Uppsala Conflict Data Program, and Center for the Study of Civil Wars, Peace Research Institute, Oslo (UCDP/PRIO). (2016). UCDP/PRIO conflict dataset. Version 4–2016. Uppsala, Sweden: Uppsala University.Find this resource:
Urdal, H. (2006). A clash of generations? Youth bulges and political violence. International Studies Quarterly, 50, 607–629.Find this resource:
U.S. Census Bureau, International Program Center (USCB-IPC). (2015). International data base. Washington, DC: U.S. Department of Commerce. Available from https://www.census.gov/population/international/data/idb/informationGateway.php.Find this resource:
Wahman, M., Teorell, J., & Hadenius, A. (2013). Authoritarian regime types revisited: Updated data in comparative perspective. Contemporary Politics, 19(1), 19–34.Find this resource:
Weber, H. (2012). Demography and democracy: The impact of youth cohort size on democratic stability in the world. Democratization, 20(2), 1–23.Find this resource:
Williamson, J. G. (2001). Demographic change, economic growth, and inequality. In N. Birdsall, A. C. Kelley, & S. W. Sinding (Eds.), Population matters: Demographic change, economic growth, and poverty in the developing world (pp. 107–136). Oxford: Oxford University Press.Find this resource:
World Bank Group (WB). (2016). World development indicators, 2015. Washington, DC: World Bank.Find this resource:
Wusu, O., & Isiugo-Abanihe, U. C. (2006). Interconnections among changing family structure, childrearing and fertility behaviour among the Ogu, southwestern Nigeria: A qualitative study. Demographic Research, 14(8), 139–156.Find this resource:
Yair, O., & Miodownik, D. (2016). Youth bulge and civil war: Why a country’s share of young adults explains only non-ethnic wars. Conflict Management and Peace Science, 33(1), 25–44.Find this resource:
## Notes:
(1.) One well-known Middle East scholar laughed until he was in tears. Because the laughter did not subside, the session’s chair ended the question-and-answer session. Later, when the group was polled by the convener, only two of the roughly two dozen scholars at the session believed that there were any lessons to be learned from this politico-demographic analysis. After Tunisia’s demonstrators had ousted President Ben Ali, I called or emailed several of the individuals who attended the meeting, inviting them to learn more about the method or to collaborate to help analysts overcome the problem of “timing.” I received no positive response from those I contacted.
(2.) This quote is extracted from an unclassified submission, attached to an email on October 20, 2010, addressed to the director of the Strategic Futures unit of the (U.S.) National Intelligence Council. The objective of the low-probability, high-impact event exercise for which this submission was generated was to create a set of early warning problems, with appropriate indicators, for the coming two years. Because, at the time, analysts did not associate demography with democratization, they rejected it.
(3.) Freedom House’s three Freedom Status categories are placed in small caps to set them apart from the text.
(4.) In its classic form (which was used to describe constrained population growth), the function begins at its lowest point, accelerates to an inflection point, decelerates, and then levels off as it approaches an upper asymptote. However, this function also can be parameterized to operate in reverse: to begin at its high point and descend to a lower asymptote. Upward or downward sections of the function can be fit to data as well.
(5.) The median age projections for the GCC states’ citizen-resident populations were obtained from the author via email or downloaded from his website.
(6.) In the UNPD’s current methods, UN demographers identify this scenario, the medium-fertility variant, as the most likely, given the range of trajectories followed by other countries during similar fertility transitions. The low- and high-fertility variants are generated by varying the end point of the fertility trajectory (by 0.5 child) downward for the low-fertility variant and upward for the high-fertility variant, as was the method in prior revisions. The constant-fertility variant is produced by maintaining fertility, during the projection period, at the last estimated level.
(7.) Since 2008, demographers have determined that prior fertility estimates for Egypt were lower than later surveys revealed. Since then, Egypt’s median age has been lowered in both estimates and projections. Therefore, Egypt’s calculated probability of being assessed as Free is considerably lower today than it was when calculated from 2008 data, and the projected year that Egypt will reach Free50 (a 0.50 probability of being assessed as Free) is now after 2040 (see Table 3).
|
# How do you solve and write the following in interval notation: x^2-2x+4>0?
$f(x) = x^2 - 2x + 4 > 0$
The discriminant is $D = b^2 - 4ac = 4 - 16 = -12 < 0$.
Since $D < 0$ and the leading coefficient $a = 1 > 0$, the parabola lies entirely above the x-axis, so $f(x) > 0$ for every real $x$. In interval notation, the solution is $(-\infty, +\infty)$.
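A quick numeric check (illustrative only): the vertex of the parabola confirms the sign analysis above.

```python
# Quick numeric check: the minimum of f is at the vertex x = -b/(2a) = 1,
# where f(1) = 3 > 0, so f(x) > 0 everywhere.
def f(x: float) -> float:
    return x**2 - 2*x + 4

vertex_x = 1.0                      # -b / (2a) with a = 1, b = -2
print(f(vertex_x))                  # 3.0 -> the minimum value is positive
print(all(f(x) > 0 for x in range(-1000, 1001)))   # True on a coarse grid
```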
|
Some Properties of Solutions of Periodic Second Order Linear Differential Equations
DOI:10.3770/j.issn:1000-341X.2011.02.011
Authors and affiliations: XIAO Lipeng, College of Mathematics and Information Science, Jiangxi Normal University, Nanchang 330022, Jiangxi, China; CHEN Zongxuan, School of Mathematical Sciences, South China Normal University, Guangzhou 510631, Guangdong, China
In this paper, we study the zeros of solutions of the periodic second order linear differential equation $$y'' + Ay = 0,$$ where $A(z)=B(e^z)$, $B(\zeta)=g(\zeta)+\sum_{j=1}^{p}b_{-j}\zeta^{-j}$, $g(\zeta)$ is a transcendental entire function of lower order not exceeding 1/2, and $p$ is an odd positive integer. We obtain the result that the exponent of convergence of the zeros of every non-trivial solution of this equation is infinite.
In this paper, the zeros of solutions of the periodic second order linear differential equation $y'' + Ay = 0$, where $A(z)=B(e^z)$, $B(\zeta)=g(\zeta)+\sum_{j=1}^{p}b_{-j}\zeta^{-j}$, $g(\zeta)$ is a transcendental entire function of lower order no more than $1/2$, and $p$ is an odd positive integer, are studied. It is shown that the exponent of convergence of the zeros of every non-trivial solution of the above equation is infinite.
|
# How to launch software on final page and stop installer ?
Currently my installer's final page offers a check box labelled 'Launch Software', and when you check it and click 'Finish', the installer starts the software as a child process and closes its graphical windows but still continues to run. Is there a way to make the installer launch the installed software as a standalone process after clicking the final page's 'Finish' button, so that the installer itself exits and stops existing as a process, but the installed software continues to run?

asked 16 Apr '13, 04:01 by Neo Cortex
On Windows, I'm appending an ampersand to the launch parameters; on Mac OS X, I'm using the open command (see x-man-page://open):

path\to\software.exe &
open path/to/software.app

(each tied to the ${msg(LaunchSoftware.Checkbox.Name)} checkbox on the final page). Also, please have a look here: http://installbuilder.bitrock.com/docs/installbuilder-userguide.html#_launching_in_the_background

answered 16 Apr '13, 04:20 by Dirk Stegemann

Comments:
Adding the ampersand as an argument in the end does not fix this on MacOSX. (26 Apr '13, 04:24) Neo Cortex
The proposed approach (appending an ampersand) works for Windows. On Mac OS X, I'm using the 'open' command. I updated the description in the answer. Hope that helps! (26 Apr '13, 06:25) Dirk Stegemann
I did the following for Mac OS X (escaping the installation directory first, then launching with open):

escaped_installdir = \\${installdir}
open ${escaped_installdir}

and, respectively, for Linux:

${installdir}/programname > /dev/null 2> /dev/null < /dev/null & disown

run as ${env(USER)} (or ${env(SUDO_USER)} under sudo on Linux), with ${installdir}/ as the working directory. Those did the trick.

answered 26 Apr '13, 11:26 by Neo Cortex
|
Leak in vacuum vessel question
1. Jul 19, 2012
Zrq
Question:
How can I estimate the amount of air that leaks into a vacuum vessel? I know the pressure outside the vessel (1020 mbar), the pressure achieved in the vessel (10 mbar), the pumping speed of the vacuum pump (400 cubic metres per hour), and the duration of the leaking (6 hours). Volume of the vessel = 4800 litres.
2. Jul 25, 2012
etudiant
The amount of leakage is measured by the pressure rise during the 6hr leak interval.
If you had a 10mBar rise during that period, you had about 1/100th of the volume leak in, so about 48 liters in 6 hrs, or about 8 liters/hr.
3. Jul 25, 2012
Zrq
Thank you for the answer. I should have made clear that there was a stable pressure of 10 mbar during the 6 hours. During this time several pumps were operating. It turns out I overestimated the pumping speed of the system; it is in fact 130 m³/hour. I now believe the estimate to be (pumping speed × duration)/100, so (130 × 6)/100 ≈ 8 m³, i.e., on the order of 1×10¹ m³.
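For reference, here is a rough sketch of the steady-state estimate being used (not a rigorous vacuum calculation): with the vessel pressure holding constant, the pump removes gas as fast as it leaks in, so the leak throughput referred to atmospheric pressure is roughly the pumping speed times the pressure ratio.

```python
# Rough steady-state estimate: at a constant 10 mbar, the leak throughput
# (expressed as air at atmospheric pressure) is approximately
# pumping_speed * (vessel pressure / atmospheric pressure).
p_atm    = 1020.0   # mbar, outside the vessel
p_vessel = 10.0     # mbar, steady pressure inside
speed    = 130.0    # m^3/h, effective pumping speed
hours    = 6.0

leak_rate_atm = speed * (p_vessel / p_atm)    # m^3/h of air at atmospheric pressure
total_air     = leak_rate_atm * hours         # m^3 leaked over the 6 h interval
print(round(leak_rate_atm, 2), round(total_air, 1))   # ~1.27 m^3/h, ~7.6 m^3
```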
4. Jul 26, 2012
etudiant
In practice, you want to be pretty cautious about pump efficiencies.
Rough pumps that are used initially to pump down the installation can't produce a good vacuum, although they should get to maybe 10^-2 mbar.
A 4800 liter vessel is good size and may have elements that outgas in a vacuum, which might become a factor.
|
# Ejection fraction: Wikis
# Encyclopedia
In cardiovascular physiology, ejection fraction (Ef) is the fraction of blood pumped out of a ventricle with each heart beat. The term ejection fraction applies to both the right and left ventricles; one can speak equally of the left ventricular ejection fraction (LVEF) and the right ventricular ejection fraction (RVEF). Without a qualifier, the term ejection fraction refers specifically to that of the left ventricle.
## Overview
By definition, the volume of blood within a ventricle immediately before a contraction is known as the end-diastolic volume. Similarly, the volume of blood left in a ventricle at the end of contraction is end-systolic volume. The difference between end-diastolic and end-systolic volumes is the stroke volume, the volume of blood ejected with each beat. Ejection fraction (Ef) is the fraction of the end-diastolic volume that is ejected with each beat; that is, it is stroke volume (SV) divided by end-diastolic volume (EDV):
$E_f = \frac{SV}{EDV} = \frac{EDV - ESV}{EDV}$
## Normal values
| Parameter | Typical value |
| --- | --- |
| End-diastolic volume (EDV) | 120 ml |
| End-systolic volume (ESV) | 50 ml |
| Stroke volume (SV) | 70 ml |
| Ejection fraction (Ef) | 58% |
| Heart rate (HR) | 70 bpm |
| Cardiac output (CO) | 4.9 L/min |
In a healthy 70-kg (154-lb) man, the SV is approximately 70 ml and the left ventricular EDV is 120 ml, giving an ejection fraction of 70/120, or 0.58 (58%).
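A tiny worked example (illustrative) using the normal values quoted above:

```python
# Worked example with the typical values from the table above.
edv = 120.0   # end-diastolic volume, ml
esv = 50.0    # end-systolic volume, ml
hr  = 70.0    # heart rate, beats per minute

sv = edv - esv                 # stroke volume, ml
ef = sv / edv                  # ejection fraction
co = sv * hr / 1000.0          # cardiac output, L/min

print(f"SV = {sv:.0f} ml, Ef = {ef:.0%}, CO = {co:.1f} L/min")   # 70 ml, 58%, 4.9 L/min
```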
Right ventricular volumes being roughly equal to those of the left ventricle, the ejection fraction of the right ventricle is normally equal to that of the left ventricle within narrow limits.
Healthy individuals typically have ejection fractions between 50% and 65%.[1] However, normal values depend upon the modality being used to calculate the ejection fraction. Damage to the muscle of the heart (myocardium), such as that sustained during myocardial infarction or in cardiomyopathy, impairs the heart's ability to eject blood and therefore reduces ejection fraction. This reduction in the ejection fraction can manifest itself clinically as heart failure.
The ejection fraction is one of the most important predictors of prognosis; those with significantly reduced ejection fractions typically have poorer prognoses. However, recent studies have indicated that a preserved ejection fraction does not mean freedom from risk.[2][3]
## Measurement
Ejection fraction is commonly measured by echocardiography, in which the volumes of the heart's chambers are measured during the cardiac cycle. Ejection fraction can then be obtained by dividing stroke volume by end-diastolic volume as described above.
Other methods of measuring ejection fraction include cardiac MRI, fast scan cardiac computed axial tomography (CT) imaging, ventriculography, Gated SPECT, and the MUGA scan. A MUGA scan involves the injection of a radioisotope into the blood and detecting its flow through the left ventricle. The historical gold standard for the measurement of ejection fraction is ventriculography.
## References
1. ^ Cotran, Ramzi S.; Kumar, Vinay; Fausto, Nelson; Nelso Fausto; Robbins, Stanley L.; Abbas, Abul K. (2005). Robbins and Cotran pathologic basis of disease. St. Louis, Mo: Elsevier Saunders. pp. 602. ISBN 0-7216-0187-1.
2. ^ Owan TE, Hodge DO, Herges RM, Jacobsen SJ, Roger VL, Redfield MM (July 2006). "Trends in prevalence and outcome of heart failure with preserved ejection fraction". N. Engl. J. Med. 355 (3): 251–9. doi:10.1056/NEJMoa052256. PMID 16855265.
3. ^ Bhatia RS, Tu JV, Lee DS, et al. (July 2006). "Outcome of heart failure with preserved ejection fraction in a population-based study". N. Engl. J. Med. 355 (3): 260–9. doi:10.1056/NEJMoa051530. PMID 16855266.
|
polysemy-1.0.0.0: Higher-order, low-boilerplate, zero-cost free monads.
Polysemy
Synopsis
Core Types
data Sem r a Source #
The Sem monad handles computations of arbitrary extensible effects. A value of type Sem r describes a program with the capabilities of r. For best results, r should always be kept polymorphic, but you can add capabilities via the Member constraint.
The value of the Sem monad is that it allows you to write programs against a set of effects without a predefined meaning, and provide that meaning later. For example, unlike with mtl, you can decide to interpret an Error effect traditionally as an Either, or instead significantly faster as an IO Exception. These interpretations (and others that you might add) may be used interchangeably without needing to write any newtypes or Monad instances. The only change needed to swap interpretations is to change a call from runError to lowerError.
The effect stack r can contain arbitrary other monads inside of it. These monads are lifted into effects via the Embed effect. Monadic values can be lifted into a Sem via embed.
A Sem can be interpreted as a pure value (via run) or as any traditional Monad (via runM). Each effect E comes equipped with some interpreters of the form:
runE :: Sem (E ': r) a -> Sem r a
which is responsible for removing the effect E from the effect stack. It is the order in which you call the interpreters that determines the monomorphic representation of the r parameter.
Order of interpreters can be important - it determines behaviour of effects that manipulate state or change control flow. For example, when interpreting this action:
>>> :{
example :: Members '[State String, Error String] r => Sem r String
example = do
put "start"
let throwing, catching :: Members '[State String, Error String] r => Sem r String
throwing = do
modify (++"-throw")
throw "error"
get
catching = do
modify (++"-catch")
get
catch @String throwing (\ _ -> catching)
:}
when handling Error first, state is preserved after error occurs:
>>> :{
example
& runError
& fmap (either id id)
& evalState ""
& runM
& (print =<<)
:}
"start-throw-catch"
while handling State first discards state in such cases:
>>> :{
example
& evalState ""
& runError
& fmap (either id id)
& runM
& (print =<<)
:}
"start-catch"
A good rule of thumb is to handle effects which should have "global" behaviour over other effects later in the chain.
After all of your effects are handled, you'll be left with either a Sem '[] a or a Sem '[ Embed m ] a value, which can be consumed respectively by run and runM.
Examples
As an example of keeping r polymorphic, we can consider the type

Member (State String) r => Sem r ()

A computation of this type has access to the

get :: Sem r String
put :: String -> Sem r ()

methods. If it also carried a

Member (Error Bool) r

constraint, it could use the

throw :: Bool -> Sem r a
catch :: Sem r a -> (Bool -> Sem r a) -> Sem r a

functions as well.
In this sense, a Member (State s) r constraint is analogous to mtl's MonadState s m and should be thought of as such. However, unlike mtl, a Sem monad may have an arbitrary number of the same effect.
For example, we can write a Sem program which can output either Ints or Bools:
foo :: ( Member (Output Int) r
, Member (Output Bool) r
)
=> Sem r ()
foo = do
output @Int 5
output True
Notice that we must use -XTypeApplications to specify that we'd like to use the (Output Int) effect.
Since: 0.1.2.0
Instances

Monad (Sem f)
Functor (Sem f)
Applicative (Sem f)
Member Fixpoint r => MonadFix (Sem r)
Member NonDet r => MonadFail (Sem r) -- Since: 0.2.1.0
Member (Embed IO) r => MonadIO (Sem r) -- This instance will only lift IO actions. If you want to lift into some other MonadIO type, use this instance, and handle it via the embedToMonadIO interpretation.
Member NonDet r => Alternative (Sem r)
Member NonDet r => MonadPlus (Sem r) -- Since: 0.2.1.0

type Member e r = MemberNoError e r

A proof that the effect e is available somewhere inside of the effect stack r.

type family Members es r :: Constraint where ...

Makes constraints of functions that use multiple effects shorter by translating a single list of effects into multiple Member constraints:

foo :: Members '[ Output Int
                , Output Bool
                , State String
                ] r
    => Sem r ()

translates into:

foo :: ( Member (Output Int) r
       , Member (Output Bool) r
       , Member (State String) r
       )
    => Sem r ()

Since: 0.1.2.0

Equations:

Members '[] r = ()
Members (e ': es) r = (Member e r, Members es r)

class MemberNoError end r => LastMember end r | r -> end

A proof that end is the last effect in the row. Minimal complete definition: decompLast. Since: 0.5.0.0

Instances:

LastMember end (end ': ([] :: [Effect]))
(LastMember end r, MemberNoError end (eff ': r)) => LastMember end (eff ': r)

Running Sem

run :: Sem '[] a -> a

Run a Sem containing no effects as a pure value.

runM :: Monad m => Sem '[Embed m] a -> m a

Lower a Sem containing only a single lifted Monad into that monad.

Interoperating With Other Monads

newtype Embed m (z :: Type -> Type) a where

An effect which allows a regular Monad m into the Sem ecosystem. Monadic actions in m can be lifted into Sem via embed.
For example, you can use this effect to lift IO actions directly into Sem:

embed (putStrLn "hello") :: Member (Embed IO) r => Sem r ()

That being said, you lose out on a significant amount of the benefits of Sem by using embed directly in application code; doing so will tie your application code directly to the underlying monad, and prevent you from interpreting it differently. For best results, only use Embed in your effect interpreters. Consider using trace and traceToIO as a substitute for using putStrLn directly. Since: 1.0.0.0

Constructors:

Embed :: { unEmbed :: m a } -> Embed m z a

embed :: Member (Embed m) r => m a -> Sem r a

Embed a monadic action m in Sem. Since: 1.0.0.0

Lifting

raise :: forall e r a. Sem r a -> Sem (e ': r) a

Introduce an effect into Sem. Analogous to lift in the mtl ecosystem.

Creating New Effects

Effects should be defined as a GADT (enable -XGADTs), with kind (* -> *) -> * -> *. Every primitive action in the effect should be its own constructor of the type. For example, we can model an effect which interacts with a tty console as follows:

data Console m a where
  WriteLine :: String -> Console m ()
  ReadLine  :: Console m String

Notice that the a parameter gets instantiated at the desired return type of the actions. Writing a line returns a '()', but reading one returns String.

By enabling -XTemplateHaskell, we can use the makeSem function to generate smart constructors for the actions. These smart constructors can be invoked directly inside of the Sem monad.

makeSem ''Console

results in the following definitions:

writeLine :: Member Console r => String -> Sem r ()
readLine  :: Member Console r => Sem r String

Effects which don't make use of the m parameter are known as "first-order effects."

Higher-Order Effects

Every effect has access to the m parameter, which corresponds to the Sem monad it's used in. Using this parameter, we're capable of writing effects which themselves contain subcomputations. For example, the definition of Error is

data Error e m a where
  Throw :: e -> Error e m a
  Catch :: m a -> (e -> m a) -> Error e m a

where Catch is an action that can run an exception handler if its first argument calls throw.

makeSem ''Error

throw :: Member (Error e) r => e -> Sem r a
catch :: Member (Error e) r => Sem r a -> (e -> Sem r a) -> Sem r a

As you see, in the smart constructors, the m parameter has become Sem r.

makeSem :: Name -> Q [Dec]

If T is a GADT representing an effect algebra, as described in the module documentation for Polysemy, $(makeSem ''T) automatically generates a smart constructor for every data constructor of T. This also works for data family instances. Names of smart constructors are created by changing the first letter to lowercase or removing the prefix : in case of operators. Fixity declarations are preserved for both normal names and operators.
Since: 0.1.2.0
makeSem_ :: Name -> Q [Dec] Source #
Like makeSem, but does not provide type signatures and fixities. This can be used to attach Haddock comments to individual arguments for each generated function.
data Output o m a where
Output :: o -> Output o m ()
makeSem_ ''Output
-- | Output the value @o@.
output :: forall o r
. Member (Output o) r
=> o -- ^ Value to output.
-> Sem r () -- ^ No result.
Because of limitations in Template Haskell, signatures have to follow some rules to work properly:
• makeSem_ must be used before the explicit type signatures
• signatures have to name the argument of Sem representing the union of effects r (e.g. Sem r ())
• all arguments of the effect's type constructor have to follow the naming scheme from the data constructor's declaration:
data Foo e m a where
FooC1 :: Foo x m ()
FooC2 :: Foo (Maybe x) m ()
should have x in type signature of fooC1:
fooC1 :: forall x r. Member (Foo x) r => Sem r ()
and Maybe x in signature of fooC2:
fooC2 :: forall x r. Member (Foo (Maybe x)) r => Sem r ()
• all of the effect's type variables and r have to be explicitly quantified using forall (the order is not important)
These restrictions may be removed in the future, depending on changes to the compiler.
Change in (TODO(Sandy): version): in the case of GADTs, signatures now only use names from the data constructor's type and not from the type constructor declaration.
Since: 0.1.2.0
Combinators for Interpreting First-Order Effects
interpret
    :: FirstOrder e "interpret"
    => (forall x m. e m x -> Sem r x)
       -- ^ A natural transformation from the handled effect to other effects already in Sem.
    -> Sem (e ': r) a
    -> Sem r a

The simplest way to produce an effect handler. Interprets an effect e by transforming it into other effects inside of r.
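As an illustration of the signature above, here is a minimal, self-contained sketch (not taken from the library's documentation) that handles the Console effect from the Creating New Effects section by delegating to Embed IO. The names runConsoleIO and echoOnce, and the exact pragma set, are illustrative assumptions.

{-# LANGUAGE DataKinds, FlexibleContexts, GADTs, LambdaCase, PolyKinds,
             RankNTypes, ScopedTypeVariables, TemplateHaskell,
             TypeApplications, TypeFamilies, TypeOperators #-}

import Polysemy

-- The Console effect from the Creating New Effects section above.
data Console m a where
  WriteLine :: String -> Console m ()
  ReadLine  :: Console m String

makeSem ''Console

-- Handle Console by translating each action into the Embed IO effect
-- already present in r.
runConsoleIO :: Member (Embed IO) r => Sem (Console ': r) a -> Sem r a
runConsoleIO = interpret $ \case
  WriteLine msg -> embed (putStrLn msg)
  ReadLine      -> embed getLine

-- A tiny program written against the effect, and one way to run it.
echoOnce :: Member Console r => Sem r ()
echoOnce = readLine >>= writeLine

main :: IO ()
main = runM (runConsoleIO echoOnce)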
intercept
    :: (Member e r, FirstOrder e "intercept")
    => (forall x m. e m x -> Sem r x)
       -- ^ A natural transformation from the handled effect to other effects already in Sem.
    -> Sem r a
       -- ^ Unlike interpret, intercept does not consume any effects.
    -> Sem r a

Like interpret, but instead of handling the effect, allows responding to the effect while leaving it unhandled. This allows you, for example, to intercept other effects and insert logic around them.
reinterpret
    :: FirstOrder e1 "reinterpret"
    => (forall m x. e1 m x -> Sem (e2 ': r) x)
       -- ^ A natural transformation from the handled effect to the new effect.
    -> Sem (e1 ': r) a
    -> Sem (e2 ': r) a

Like interpret, but instead of removing the effect e1, reencodes it in some new effect e2. This function will fuse when followed by runState, meaning it's free to reinterpret in terms of the State effect and immediately run it.
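To make that State remark concrete, here is a hedged sketch (not the library's own combinator) that reencodes the Output effect from the makeSem_ example above as State and runs it immediately. The name runOutputAsList and the pragma set are illustrative, and makeSem is used here instead of makeSem_ so the fragment stands alone.

{-# LANGUAGE DataKinds, FlexibleContexts, GADTs, LambdaCase, PolyKinds,
             RankNTypes, ScopedTypeVariables, TemplateHaskell, TypeOperators #-}

import Polysemy
import Polysemy.State

data Output o m a where
  Output :: o -> Output o m ()

makeSem ''Output

-- Reencode Output as State [o], then run the State effect immediately.
-- The collected outputs come back in reverse order of emission.
runOutputAsList :: Sem (Output o ': r) a -> Sem r ([o], a)
runOutputAsList =
  runState [] . reinterpret (\case
    Output o -> modify (o :))

Because reinterpret fuses with the runState that follows it, the intermediate State effect costs essentially nothing.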
reinterpret2
    :: FirstOrder e1 "reinterpret2"
    => (forall m x. e1 m x -> Sem (e2 ': (e3 ': r)) x)
       -- ^ A natural transformation from the handled effect to the new effects.
    -> Sem (e1 ': r) a
    -> Sem (e2 ': (e3 ': r)) a

Like reinterpret, but introduces two intermediary effects.
reinterpret3
    :: FirstOrder e1 "reinterpret3"
    => (forall m x. e1 m x -> Sem (e2 ': (e3 ': (e4 ': r))) x)
       -- ^ A natural transformation from the handled effect to the new effects.
    -> Sem (e1 ': r) a
    -> Sem (e2 ': (e3 ': (e4 ': r))) a

Like reinterpret, but introduces three intermediary effects.
Combinators for Interpreting Higher-Order Effects
interpretH
    :: (forall x m. e m x -> Tactical e m r x)
       -- ^ A natural transformation from the handled effect to other effects already in Sem.
    -> Sem (e ': r) a
    -> Sem r a

Like interpret, but for higher-order effects (i.e. those which make use of the m parameter).
See the notes on Tactical for how to use this function.
interceptH
    :: Member e r
    => (forall x m. e m x -> Tactical e m r x)
       -- ^ A natural transformation from the handled effect to other effects already in Sem.
    -> Sem r a
       -- ^ Unlike interpretH, interceptH does not consume any effects.
    -> Sem r a

Like intercept, but for higher-order effects.
See the notes on Tactical for how to use this function.
reinterpretH
    :: (forall m x. e1 m x -> Tactical e1 m (e2 ': r) x)
       -- ^ A natural transformation from the handled effect to the new effect.
    -> Sem (e1 ': r) a
    -> Sem (e2 ': r) a

Like reinterpret, but for higher-order effects.
See the notes on Tactical for how to use this function.
reinterpret2H
    :: (forall m x. e1 m x -> Tactical e1 m (e2 ': (e3 ': r)) x)
       -- ^ A natural transformation from the handled effect to the new effects.
    -> Sem (e1 ': r) a
    -> Sem (e2 ': (e3 ': r)) a

Like reinterpret2, but for higher-order effects.
See the notes on Tactical for how to use this function.
reinterpret3H
    :: (forall m x. e1 m x -> Tactical e1 m (e2 ': (e3 ': (e4 ': r))) x)
       -- ^ A natural transformation from the handled effect to the new effects.
    -> Sem (e1 ': r) a
    -> Sem (e2 ': (e3 ': (e4 ': r))) a

Like reinterpret3, but for higher-order effects.
See the notes on Tactical for how to use this function.
Combinators for Interpreting Directly to IO
withLowerToIO
    :: LastMember (Embed IO) r
    => ((forall x. Sem r x -> IO x) -> IO () -> IO a)
       -- ^ A lambda that takes the lowering function, and a finalizing IO action to mark the forked thread as being complete. The finalizing action need not be called.
    -> Sem r a
Run an effect stack all the way down to IO by running it in a new thread, and temporarily turning the current thread into an event poll.
This function creates a thread, and so should be compiled with -threaded.
Since: 0.5.0.0
Kind Synonyms
type Effect = (Type -> Type) -> Type -> Type Source #
The kind of effects.
Since: 0.5.0.0
type EffectRow = [Effect] Source #
The kind of effect rows.
Since: 0.5.0.0
Composing IO-based Interpreters
(.@) :: Monad m
     => (forall x. Sem r x -> m x)
        -- ^ The lowering function, likely runM.
     -> (forall y. (forall x. Sem r x -> m x) -> Sem (e ': r) y -> Sem r y)
     -> Sem (e ': r) z
     -> m z
infixl 8
Some interpreters need to be able to lower down to the base monad (often IO) in order to function properly --- some good examples of this are lowerError and lowerResource.
However, these interpreters don't compose particularly nicely; for example, to run lowerResource, you must write:
runM . lowerResource runM
Notice that runM is duplicated in two places here. The situation gets exponentially worse the more interpreters you have that need to run in this pattern.
Instead, .@ performs the composition we'd like. The above can be written as
(runM .@ lowerResource)
The parentheses here are important; without them you'll run into operator precedence errors.
Warning: This combinator will duplicate work that is intended to be just for initialization. This can result in rather surprising behavior. For a version of .@ that won't duplicate work, see the .@! operator in polysemy-zoo.
(.@@) :: Monad m
      => (forall x. Sem r x -> m x)
         -- ^ The lowering function, likely runM.
      -> (forall y. (forall x. Sem r x -> m x) -> Sem (e ': r) y -> Sem r (f y))
      -> Sem (e ': r) z
      -> m (f z)
infixl 8

Like .@, but for interpreters which change the resulting type --- e.g. lowerError.
Tactics
Higher-order effects need to explicitly thread other effects' state through themselves. Tactics are a domain-specific language for describing exactly how this threading should take place.
The first computation to be run should use runT, and subsequent computations in the same environment should use bindT. Any first-order constructors which appear in a higher-order context may use pureT to satisfy the typechecker.
type Tactical e m r x = forall f. Functor f => Sem (WithTactics e f m r) (f x) Source #
Tactical is an environment in which you're capable of explicitly threading higher-order effect states. This is provided by the (internal) effect Tactics, which is capable of rewriting monadic actions so they run in the correct stateful environment.
Inside a Tactical, you're capable of running pureT, runT and bindT which are the main tools for rewriting monadic stateful environments.
For example, consider trying to write an interpreter for Resource, whose effect is defined as:
data Resource m a where
Bracket :: m a -> (a -> m ()) -> (a -> m b) -> Resource m b
Here we have an m a which clearly needs to be run first, and then subsequently call the a -> m () and a -> m b arguments. In a Tactical environment, we can write the threading code thusly:
Bracket alloc dealloc use -> do
alloc' <- runT alloc
dealloc' <- bindT dealloc
use' <- bindT use
where
alloc' :: Sem (Resource ': r) (f a1)
dealloc' :: f a1 -> Sem (Resource ': r) (f ())
use' :: f a1 -> Sem (Resource ': r) (f x)
The f type here is existential and corresponds to "whatever state the other effects want to keep track of." f is always a Functor.
alloc', dealloc' and use' are now in a form that can be easily consumed by your interpreter. At this point, simply bind them in the desired order and continue on your merry way.
We can see from the types of dealloc' and use' that since they both consume an f a1, they must run in the same stateful environment. This means, for illustration, that any puts run inside the use block will not be visible inside of the dealloc block.
Power users may explicitly use getInitialStateT and bindT to construct whatever data flow they'd like; although this is usually unnecessary.
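For completeness, here is one way the threading code above could be finished into a whole interpreter. This is a purely sequential sketch with no exception safety, intended only to show how the runT and bindT results are consumed; it is not the library's own Resource interpreter, and the name runResourceSimple is made up for the example.

{-# LANGUAGE DataKinds, FlexibleContexts, GADTs, LambdaCase, PolyKinds,
             RankNTypes, ScopedTypeVariables, TypeOperators #-}

import Polysemy

-- The Resource effect exactly as defined above.
data Resource m a where
  Bracket :: m a -> (a -> m ()) -> (a -> m b) -> Resource m b

-- Allocate, use, then deallocate, in that order.  Each recovered action
-- still carries the Resource effect, so it is interpreted recursively and
-- the result is raised back into the Tactical environment.
runResourceSimple :: Sem (Resource ': r) a -> Sem r a
runResourceSimple = interpretH $ \case
  Bracket alloc dealloc use -> do
    alloc'   <- runT  alloc
    dealloc' <- bindT dealloc
    use'     <- bindT use
    resource <- raise (runResourceSimple alloc')
    result   <- raise (runResourceSimple (use' resource))
    _        <- raise (runResourceSimple (dealloc' resource))
    pure result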
type WithTactics e f m r = Tactics f m (e ': r) ': r Source #
getInitialStateT :: forall f m r e. Sem (WithTactics e f m r) (f ()) Source #
Get the stateful environment of the world at the moment the effect e is to be run. Prefer pureT, runT or bindT instead of using this function directly.
pureT :: a -> Tactical e m r a Source #
Lift a value into Tactical.
runT
    :: m a
       -- ^ The monadic action to lift. This is usually a parameter in your effect.
    -> Sem (WithTactics e f m r) (Sem (e ': r) (f a))

Run a monadic action in a Tactical environment. The stateful environment used will be the same one that the effect is initially run in. Use bindT if you'd prefer to explicitly manage your stateful environment.
bindT
    :: (a -> m b)
       -- ^ The monadic continuation to lift. This is usually a parameter in your effect. Continuations lifted via bindT will run in the same environment which produced the a.
    -> Sem (WithTactics e f m r) (f a -> Sem (e ': r) (f b))

Lift a Kleisli action into the stateful environment. You can use bindT to get an effect parameter of the form a -> m b into something that can be used after calling runT on an effect parameter m a.
getInspectorT :: forall e f m r. Sem (WithTactics e f m r) (Inspector f) Source #
Get a natural transformation capable of potentially inspecting values inside of f. Binding the result of getInspectorT produces a function that can sometimes peek inside values returned by bindT.
This is often useful for running callback functions that are not managed by polysemy code.
Example
We can use the result of getInspectorT to "undo" pureT (or any of the other Tactical functions):
ins <- getInspectorT
fa <- pureT "hello"
fb <- pureT True
let a = inspect ins fa -- Just "hello"
b = inspect ins fb -- Just True
newtype Inspector f Source #
A container for inspect. See the documentation for getInspectorT.
Constructors
Inspector { inspect :: forall x. f x -> Maybe x }
See the documentation for getInspectorT.
|
The endomorphism ring $\End(E)$ of an elliptic curve $E$ defined over a field $K$ is the ring of all endomorphisms of $E$ that are themselves defined over $K$. For endomorphisms defined over extensions of $K$, we speak of the geometric endomorphism ring of $E$.
For elliptic curves defined over $\Q$, this ring is always isomorphic to $\Z$, consisting of the multiplication-by-$m$ maps $[m] \colon E\to E$ for $m \in \Z$.
This is a special case of the endomorphism ring of an abelian variety.
|
# 1-Dimensional Heat Initial Boundary Value Problems 2: Sturm-Liouville Problems and Orthogonal Functions
Sturm-Liouville Problems
The homogeneous boundary conditions of 1D heat conduction problem are given by
\begin{align*}
-\kappa_1u_x(0,t)+h_1u(0,t)&=0,\ t>0\\
\kappa_2u_x(L,t)+h_2u(L,t)&=0,\ t>0
\end{align*}
(See here)
The homogeneous BCs for the second order linear differential equation $$\label{eq:ho}X^{\prime\prime}=kX$$ is then
\label{eq:bc}\begin{aligned}
-\kappa_1X'(0)+h_1X(0)&=0\\
\kappa_2X'(L)+h_2X(L)&=0
\end{aligned}
Finding solutions of the second order linear differential equation \eqref{eq:ho} for $k=0$, $k=\lambda^2$, and $k=-\lambda^2$ that satisfy the BCs \eqref{eq:bc} is called a Sturm-Liouville Problem. Here, we study the Sturm-Liouville Theory with the following example.
Remark. In case of homogeneous heat BVPs, the eventual temperature would be 0 as there is no heat source. So, we see that $k=-\lambda^2<0$ is the only physically relevant case.
Example. [Fixed temperature at both ends]
Consider the heat BVP:
\begin{align*}
u_t&=\alpha^2 u_{xx}\ \mbox{PDE}\\
u(0,t)&=u(1,t)=0\ \mbox{(BCs)}
\end{align*}
From the above BCs, we obtain the BCs for $X(x)$:
$$X(0)=X(1)=0$$
For $k=0$ and $k=\lambda^2>0$ we obtain only the trivial solution $X(x)=0$. For $k=-\lambda^2<0$, $X(x)=A\cos\lambda x+B\sin\lambda x$. With the BCs we find the eigenvalues
$$\lambda_n=n\pi,\ n=1,2,3,\cdots$$
and the corresponding eigenfunctions
$$X_n(x)=\sin n\pi x,\ n=1,2,3,\cdots$$
The set $\{X_n: n=1,2,3,\cdots\}$ is linearly independent, so it forms a basis for the solution space, which is infinite dimensional. The general solution to the heat BVP is given by
$$u(x,t)=\sum_{n=1}^\infty A_n e^{-n^2\pi^2\alpha^2t}\sin n\pi x$$
The undetermined coefficients $A_n$ are called Fourier coefficients. They can be determined from the initial condition (the initial temperature).
Orthogonal Functions and Solution of a Homogeneous Heat IBVP
Consider a heat distribution function $u(x,t)$ of the following form
$$u(x,t)=\sum_{n=0}^\infty A_ne^{-\lambda_n^2\alpha^2t}X_n(x)$$
where the $X_n$'s are eigenfunctions corresponding to the eigenvalues $\lambda_n$'s respectively. The eigenfunctions $X_n$ form a basis for the solution space (which is often infinite dimensional) of a given heat IBVP; furthermore they can form an orthogonal basis with respect to the inner product
$$\label{eq:innerprod}\langle X_m,X_n\rangle=\int_0^LX_mX_ndx$$
We say that eigenfunctions $X_m$ and $X_n$ are orthogonal if $\langle X_m,X_n\rangle=0$.
Example. $X_n(x)=\sin n\pi x$, $n=1,2,3,\cdots$ form an orthogonal basis with respect to \eqref{eq:innerprod}, where $0<x<1$:
\begin{align*}
\langle X_m,X_n\rangle&=\int_0^1\sin m\pi x\sin n\pi xdx\\
&=\left\{\begin{aligned}
\frac{1}{2}\ &{\rm if}\ m=n\\
0\ &{\rm if}\ m\ne n.
\end{aligned}\right.
\end{align*}
Remark. [The Gram-Schmidt Orthogonalization Process]
If $\{X_n\}$ is not an orthogonal basis, one can construct an orthogonal basis from $\{X_n\}$ using the inner product \eqref{eq:innerprod}. The standard process is called the Gram-Schmidt orthogonalization process. Details can be found in many standard linear algebra textbooks.
Now we assume that $\{X_n\}$ is an orthogonal basis for the solution space. Let $L_n:=\langle X_n,X_n\rangle=\int_0^LX_n^2dx$. Let the initial condition be given by
$u(x,0)=\phi(x)$. Then
$$\phi(x)=\sum_{n=0}^\infty A_nX_n$$
Multiply this by $X_m$ and then integrate:
$$\int_0^LX_m\phi(x)dx=\sum_{n=0}^\infty A_n\int_0^LX_nX_mdx$$
By orthogonality we obtain
$$L_mA_m=\int_0^LX_m\phi(x)dx$$
or
$$A_m=\frac{1}{L_m}\int_0^L\phi(x)X_mdx,\ m=0,1,2,\cdots$$
Example. Consider the heat BVP in the previous example with initial condition $\phi(x)=T$, a constant temperature. For $n=1,2,3,\cdots$, $X_n(x)=\sin n\pi x;\ 0<x<1$ so
$$L_n=\int_0^1\sin^2 n\pi x dx=\frac{1}{2}$$
The Fourier coefficients are then computed to be
\begin{align*}
A_n&=2\int_0^1\phi(x)\sin n\pi xdx\\
&=2T\int_0^1\sin n\pi xdx\\
&=\frac{2T}{n\pi}[1-\cos n\pi]\\
&=\frac{2T}{n\pi}[1-(-1)^n].
\end{align*}
$A_n=0$ for $n={\rm even}$ and $A_{2n-1}=\frac{4T}{(2n-1)\pi},\ n=1,2,3,\cdots$. Hence
$$u(x,t)=\sum_{n=1}^\infty\frac{4T}{(2n-1)\pi}e^{-(2n-1)^2\pi^2\alpha^2t}\sin(2n-1)\pi x.$$
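As a quick numerical sanity check of this series (not part of the original notes), the truncated sum can be evaluated directly; the constants T, alpha and the truncation order N below are illustrative choices.

import numpy as np

def u(x, t, T=1.0, alpha=1.0, N=200):
    """Partial sum of the series solution with N terms."""
    n = np.arange(1, N + 1)
    coeff = 4 * T / ((2 * n - 1) * np.pi)
    decay = np.exp(-((2 * n - 1) ** 2) * np.pi ** 2 * alpha ** 2 * t)
    return np.sum(coeff * decay * np.sin((2 * n - 1) * np.pi * x))

print(u(0.5, 0.0))    # approximately T = 1, matching the initial condition
print(u(0.5, 0.05))   # smaller: the temperature decays toward 0 over time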
References:
David Betounes, Partial Differential Equations for Computational Science with Maple and Vector Analysis, TELOS, Springer-Verlag
|
# Is the universe problem for one-counter automata with restricted alphabet size undecidable?
Consider the following universe problem.
The universe problem. Given a finite set $\Sigma$ for a class of languages, and an automaton accepting the language $L$, decide if $L=\Sigma^*$.
In [1], it is stated and proved that the universe problem is undecidable for a particular class of one-counter automata. This result then follows for the class of all non-deterministic one-counter automata. I'm wondering if it is known whether this problem is still undecidable when we restrict the size of the input alphabet of the automaton.
I think that with alphabet size 1 the problem becomes decidable, but what about size 2? And if that turns out to be decidable what is the smallest value of $n \in \mathbb{N}$ such that the problem is undecidable.
I think it's probable that the answer to this question is known but I'm having trouble finding an answer. If it is already known then I would appreciate a reference.
## 1 Answer
It must be undecidable for an alphabet with two symbols. It is possible to code any alphabet into two letters, e.g., map 16 symbols to the length-4 binary strings $aaaa, aaab, \dots, bbbb$. Then equality of the original language to $\Sigma^*$ is equivalent to equality of its coded version to the set of all codes of strings; in the 16-letter example this means equality to the set of all strings whose length is a multiple of four. Clearly that is not universality over the binary alphabet. Universality is obtained by adding the binary strings that are not codes: these form a regular set, and the resulting language can still be accepted by a one-counter automaton.
The same explanation, with $\LaTeX$ for those who appreciate it. Assume universality is undecidable for $\Sigma$. Let $h: \Sigma^* \to \{0,1\}^*$ be an injective morphism. Now $L = \Sigma^*$ iff $h(L) = h(\Sigma^*)$. This in turn is equivalent to $h(L) \cup R = \{0,1\}^*$, where $R$ is the (fixed) regular language $\{0,1\}^* - h(\Sigma^*)$. Hence we cannot decide whether the binary one-counter language $h(L) \cup R$ is universal. Note that this language is one-counter, as the family is closed under morphisms and under union with regular languages.
As you suspected ("I think that..."), I can confirm the question is decidable for a one-letter alphabet. It is decidable even for push-down automata (hence for context-free languages), since one-letter CFLs are (effectively) equivalent to regular languages.
|
# DeleteDuplicates[] does not work as expected on floating point values
Here is a simple example in which DeleteDuplicates does not work as expected.
I want to use FindRoot on my function $\chi[\nu]$, and since the result is very sensitive to the initial guess, I decided to generate many initial guesses and keep only the distinct solutions. For this purpose I want to apply DeleteDuplicates to the resulting list of solutions.
Here is the definition of my function:
χ[ν_] := 2*PolyGamma[1] - PolyGamma[1/2 + I*ν] - PolyGamma[1/2 - I*ν]
Here I generate many solution according to many random initial guesses
m = Table[v /. FindRoot[χ[v] == -1.2 - 0.2*I, {v, RandomComplex[]}], {i, 1, 10}]
And finally, I want to leave only distinct solutions:
DeleteDuplicates[m]
Unfortunately, DeleteDuplicates[m] returns the list m unchanged, although many of its values appear to be identical.
Namely:
DeleteDuplicates[m]
{1.06423 + 0.0968739 I, 1.06423 + 0.0968739 I,
1.06423 + 0.0968739 I, 1.06423 + 0.0968739 I, 0.0250407 + 1.00352 I,
1.06423 + 0.0968739 I, 1.06423 + 0.0968739 I, 1.06423 + 0.0968739 I,
0.0250407 + 1.00352 I, 1.06423 + 0.0968739 I}
I'm puzzled.
Any help or suggestions are very welcome!
Thanks!
• Try something like DeleteDuplicates[m, Abs[#1 - #2] < 0.01 &] . Mar 20, 2013 at 9:59
• Thanks for editing Mr. Wizard! It looks much better :) @b.gatessucks: Great, works like a charm. Could you explain the reason? I'm quite new to this. Thanks a lot for help! Mar 20, 2013 at 10:03
• Andrew, please see the update to my answer for an important note about performance. Mar 21, 2013 at 13:53
• Andrew, what about changing "does not work properly" to "does not work as expected"? In fact it works properly, the cause is just seemingly identical numbers... Mar 21, 2013 at 14:17
• Related: (19112) Feb 19, 2016 at 21:50
You would do well to understand the difference between tools that are intended for structural operations and those that are intended for mathematical operations. DeleteDuplicates is of the former, generally speaking. As such it is comparing the exact FullForm of the objects, or at least something close (caveat).
As b.gatessucks recommends in a comment you can use a mathematical comparison function for the equivalence test of DeleteDuplicates, e.g.:
DeleteDuplicates[m, Abs[#1 - #2] < 10^-12 &]
{1.06423 + 0.0968739 I, 0.0250407 + 1.00352 I}
Incidentally you could also use Union, but the syntax is a bit different. Note the ( ).
Union[m, SameTest -> (Abs[#1 - #2] < 10^-12 &)]
{0.0250407 + 1.00352 I, 1.06423 + 0.0968739 I}
Using InputForm to show all of the digits of your expression you can see that they are not structurally identical in the (approximate) way that Mathematica "sees" them:
m // InputForm
{1.0642275928442373 + 0.09687392021742822*I, 1.0642275928442366 + 0.09687392021742817*I,
1.0642275928442366 + 0.09687392021742797*I, 1.064227592844237 + 0.09687392021742822*I,
1.0642275928442373 + 0.09687392021742852*I, 1.0642275928442366 + 0.09687392021742793*I,
1.0642275928442368 + 0.09687392021742801*I, 0.025040728196256346 + 1.0035162552538588*I,
1.0642275928442377 + 0.0968739202174282*I, 1.0642275928442375 + 0.0968739202174283*I}
### Performance
Yves reminded me to mention something about the performance of using a custom comparison function in DeleteDuplicates or Union as I did above. For long lists this is always considerably slower than using the default method. I gave an example with timings in How to represent a list as a cycle.
To apply that method here we could Round the numbers beforehand:
Round[m, 10^-12] // DeleteDuplicates // N
{1.06423 + 0.0968739 I, 0.0250407 + 1.00352 I}
I added // N to convert back to machine precision, but the values will not be precisely the same. This probably doesn't matter if you consider numbers this close to be duplicates, but should you want the unchanged numbers you could use GatherBy and get performance not far distant.
First /@ GatherBy[m, Round[#, 10^-6] &]
Version 10.0 introduced DeleteDuplicatesBy which works similarly to the GatherBy method; it has the following syntax:
DeleteDuplicatesBy[m, Round[#, 10^-6] &]
However it may not perform as well as GatherBy.
• Thanks for the answer, I see. It's like a comparison with epsilon of two floating point numbers. I assumed, that MATHEMATICA already does it for me. Mar 20, 2013 at 10:08
• Great, I missed InputForm. Without it it looks the same. Mar 20, 2013 at 10:09
• @Andrew FullForm was a bit needlessly verbose here, but in general you should look at the FullForm when trying to understand how Mathematica will treat an expression using structural tools. There are many cases where not using it will leave you quite befuddled as it can be very different from what is shown in standard output notation. Mar 20, 2013 at 10:13
• Admonish - lovely acoustics but far too harsh. I thought of it as "gently encouraging" at most... Mar 21, 2013 at 14:13
• @AJHC Converting to plain lists using {x, y, z} /. solutions seems like a good way to start. For example with m2 = {{x -> 0.1, y -> 0.2, z -> 0.3}, {x -> 0.1, y -> 0.2, z -> 0.3}, {x -> 0.17, y -> 0.22, z -> 0.314}} then DeleteDuplicates[{x, y, z} /. m2, AllTrue[Abs[# - #2], # < 10^-5 &] &] or DeleteDuplicatesBy[{x, y, z} /. m2, Round[#, 10^-5] &] for the two fundamentally different methods described in my answer. If you need to convert back to a list of rules then something like Thread[{x, y, z} -> #] & /@ {{0.1, 0.2, 0.3}, {0.17, 0.22, 0.314}} Mar 25, 2019 at 12:05
Note also that one can use Equal[##] & but not Equal:
DeleteDuplicates[m, Equal[##] &]
(* {1.06423 + 0.0968739 I, 0.0250407 + 1.00352 I} *)
DeleteDuplicates[m, Equal] // Length
(* 10 *)
This should work the same on most or all well-behaved FindRoot results.
As already noted, using something like Equal[##] & causes the performance to degrade, significantly on very long lists, but using Equal does not cause the same problem. However, Equal does not tolerate difference, unlike Equal[##] &:
DeleteDuplicates[{1., 1. + $MachineEpsilon}, Equal]
DeleteDuplicates[{1., 1. + 64 $MachineEpsilon}, Equal[##] &]
(*
{1., 1.}
{1.}
*)
Basically DeleteDuplicates[.., Equal] is the same as DeleteDuplicates[].
|
# Homework Help: U substitution with trig
1. Feb 13, 2012
### jtt
1. The problem statement, all variables and given/known data
use substitution to evaluate the integral
2. Relevant equations
1)∫ tan(4x+2)dx
2)∫3(sin x)^-2 dx
3. The attempt at a solution
1) u = 4x+2, du = 4 dx
(1/4)∫ 4 tan(4x+2) dx
∫ (1/4) tan(4x+2) (4 dx)
∫ (1/4) tan(u) du
(1/4) ln |tan(u)| + c
2) u = sin x, du = cos x dx, or u = x, du = dx ????
2. Feb 13, 2012
### haackeDc
For your first integral, you evaluated ∫tan(u)du incorrectly.
∫tan(u) du = ∫sin(u)/cos(u) du
= -∫-sin(u)/cos(u) du
So now solve for this integral, given that ∫f'(x)/f(x) dx = ln(|f(x)|) + c
For your second, I'm not sure why you would use 'u' substitution,
because 1/sin^2(x) = csc^2(x), which has the integral of -cot(x) + c.
I'll leave that to you to find a way with u-substitution.
Last edited: Feb 13, 2012
|
# Regression: Scatterplot with low R squared and high p-values
Based on three datasets, I have produced the scatterplot below in Python:
I am trying to fit a line on each dataset, but when I check the metrics this is what I get:
• Set 1 (red): $R^2$=0.002, p-value=0.651
• Set 2 (purple): $R^2$=0.008, p-value=0.378
• Set 3 (blue): $R^2$=0.001, p-value=0.714
My question: are such data sets impossible to fit? Is there any kind of data transformation I could apply, based on the scatterplot shape?
My Values (red dataset):
X Y
72.3 109
78.34 169
80 239
82.4 550
83.49 429
84.34 162
84.78 285
85.18 1553
85.58 852
86.73 611
87.34 0
87.65 764
89.09 710
90.18 0
90.49 155
90.66 2
90.73 42
90.75 162
91.23 0
91.31 57
91.51 275
91.58 771
91.73 324
91.93 78
92.1 0
92.22 1023
92.36 223
92.49 981
93.17 978
93.17 744
93.47 162
93.75 76
93.8 163
94.12 433
94.27 472
94.59 0
94.73 1689
94.87 302
95.05 0
95.09 1100
95.26 73
95.49 1370
95.69 72
95.84 890
96.02 529
96.07 273
96.08 458
96.23 281
96.42 933
96.52 149
96.93 135
97.21 7
97.36 1912
97.38 0
97.5 1169
97.72 0
97.77 314
97.81 475
97.91 436
98.25 56
98.33 5
98.36 0
98.43 135
98.45 81
98.46 849
98.79 20
98.91 818
98.91 58
99.11 244
99.21 348
99.28 621
99.29 618
99.34 430
99.4 513
99.41 49
99.43 1543
99.46 23
99.46 62
99.57 178
99.58 50
99.58 221
99.78 179
99.83 1446
99.94 1249
99.94 9
99.94 7
99.94 10
99.97 0
99.98 228
99.99 111
99.99 711
100 976
100 2980
100 72
100 1
100 24
100 698
100 803
100 774
100 0
• What do the variables measure? e.g. are the x's percentages? Nov 30 '15 at 11:46
• Can you show the data? Nov 30 '15 at 11:51
• You could copy and paste into your question. Any code that lists the values, e.g. comma-separated, will probably be easily adaptable to most programs people might use. Nov 30 '15 at 12:13
• @KarstenW I guess that log scale for percent will not help here, as the values pile up near 100. Nov 30 '15 at 12:14
• Thanks for giving X and Y. How do the three sets come in here? At first sight the posted data are just set 1. Nov 30 '15 at 12:26
With data like these (indeed almost any data) the first step is a graphic that really helps to see what is going on. Crowding of data points on default scales makes that difficult to achieve.
The occurrence of exact zeros on $Y$ inhibits logarithmic transformation. Some would add a constant first to get round that. I would suggest here a square root scale instead.
Similarly, but not identically, the occurrence of exact $100$%s inhibits logit transformation of $X$, which is a kind of default for fractions not equal to zero or unity. I would suggest here a folded root transformation, $\sqrt{X} - \sqrt{100 - X}$ for the percents, which stretches out the high percents. (See, e.g., Tukey, J.W. 1977. Exploratory Data Analysis. Reading, MA: Addison-Wesley.)
Here's a graph for set 1 only (all posted at the time of writing). I have used transformed scales, but labelled in terms of the original values. I have to say that I see no structure here, so the essentially flat regression line does seem unsurprising.
EDIT It may be reassuring to people unfamiliar with this transformation to see how it works. Folding means that the transformation is symmetric around the middle of the range. The transformation is conservative insofar as it affects the shape of the relationship minimally, except for values near $0$ and $100$%, which are stretched out. (The curvature is useful in this example for values between about $70$ and $100$%.) A small but often useful virtue is that the transformation is defined for exact zeros and $100$s. Apart from a trivial prefactor, $\sqrt{X} - \sqrt{1 - X}$ behaves identically for $X$ now defined as proportions or fractions between $0$ and $1$.
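In Python (the tool used in the question), refitting on these transformed scales is a one-liner with scipy.stats.linregress. This is only a sketch, not code from the answer; x and y below are just the first few rows of the posted red data set, and in practice you would use the full columns.

import numpy as np
from scipy import stats

# First few (X, Y) pairs of the red data set posted above.
x = np.array([72.3, 78.34, 80.0, 82.4, 83.49, 84.34, 84.78, 85.18])
y = np.array([109., 169., 239., 550., 429., 162., 285., 1553.])

x_t = np.sqrt(x) - np.sqrt(100 - x)   # folded root of the percentages
y_t = np.sqrt(y)                      # square root of the counts

slope, intercept, r, p, se = stats.linregress(x_t, y_t)
print(r**2, p)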
• They are just plots of the functions, just as you might plot $\ln X$ versus $X$. Nov 30 '15 at 20:22
• You seem to know a lot about transformations and I like this thing, so here is a question for you. Suppose my dataset had a structure that was visible only after the folded root was applied. Would R squared have increased after having adopted this transformation, in an attempt to unravel hidden or non-visible patterns? Would the same happen to the p-values? Nov 30 '15 at 20:25
• Sorry, I really can't predict that in general, not least because the models implied are quite different. But for the data you have posted, for set 1 only, regression of $Y$ on $X$ gives a $P$-value of 0.651 while for the transformations used it is 0.913. If anything, the transformations make it easier to see what is going on, which is that nothing much is going on. At the same time, it's a toss-up between two versions of the same data with not much that is clear either way. Only you know whether there is some substantive story here, or indeed other predictors to help. Nov 30 '15 at 20:34
First, no data set is impossible to fit - in fact, you've already produced some fits, just not very good ones.
Second, because you've put all three data sets in the same image, it's hard to see any relationship that might exist in the red and purple data sets, since their $X$ values are all bunched at the high end of the scale.
Third, eyeballing the data values you gave for the red data set, it looks like the $Y$ values are all non-negative integers, with quite a few 0's. This makes me think of zero-inflated negative binomial models.
Fourth, from eyeballing the blue data set, I am very surprised that the $R^2$ is essentially 0. Are you sure your code is right? Could you post the code? I'd also try taking log of $Y$ only. In addition, you seem to have some outliers, so you could try robust regression or quantile regression.
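Regarding the third point, a rough first pass in Python could use an ordinary negative binomial GLM from statsmodels, as a simpler stand-in for the zero-inflated variant mentioned. This is an illustrative sketch only, reusing the same small subset of the red data as in the previous snippet; ideally you would use the full posted columns.

import numpy as np
import statsmodels.api as sm

# Same illustrative subset of the red data as in the previous sketch.
x = np.array([72.3, 78.34, 80.0, 82.4, 83.49, 84.34, 84.78, 85.18])
y = np.array([109., 169., 239., 550., 429., 162., 285., 1553.])

X = sm.add_constant(x)  # intercept plus the single predictor
nb_model = sm.GLM(y, X, family=sm.families.NegativeBinomial())
print(nb_model.fit().summary())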
• Thank you for your insights. The blue data set is appalling to me as well, but I can confirm, if Excel is anything to go by: the blue R squared is 0.0014. So I assume there is no mistake in the code I am using - which is, by the way, a Python built-in function called stats.linregress(x,y). Nov 30 '15 at 12:42
• As now explicit in the data posting, and as discussed in my answer, there are exact zeros on $Y$. Nov 30 '15 at 12:48
|
“…an essential part of understanding how many ties these RNGs produce is to understand how many ties one expects in 32-bit integer arithmetic.”
A sort of a birthday-problem paper for random generators by Markus Hofert on arXiv as to why they produce ties. As shown for instance in the R code (inspired by the paper):
sum(duplicated(runif(1e6)))
returning values around 100, which is indeed unexpected until one thinks a wee bit about it… There is no change when moving to an alternative to the Mersenne twister generator. Indeed, assuming the R random generators produce integers over 2³² possible values, the expected number of ties is actually 116 for 10⁶ simulations. Moving to 2⁶⁴, the probability of a tie is negligible, around 10⁻⁸. A side remark of further interest in the paper is that, due to a different effective gap between 0 and the smallest positive normal number, of order 10⁻²⁵⁴, and between 1 and the smallest normal number greater than 1, of order 10⁻¹⁶, "the grid of representable double numbers is not equidistant". This justifies the need for special functions such as expm1 and log1p, which evaluate exp(x)-1 and log(1+x) more accurately.
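A quick check of the birthday-problem arithmetic in R (illustrative, not from the original post):

n <- 1e6
n * (n - 1) / (2 * 2^32)   # expected number of colliding pairs: about 116
sum(duplicated(runif(n)))  # empirical count of ties, typically near 100-120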
|
# Let $f:R\rightarrow S$ be a ring homomorphism. Prove or disprove: if $f$ is one-to-one and $R$ is a field, then $S$ is a field
Let $f:R\rightarrow S$ be a ring homomorphism. Prove or disprove: if $f$ is one-to-one and $R$ is a field, then $S$ is a field.
My attempt: I believe this statement is false, since there could be an element $s\in S$ that is not of the form $f(r)$ for any $r\in R$ and that fails to be a unit. However, I'm having trouble coming up with an explicit counterexample.
Any help appreciated!
You can embed any field $k$ into the polynomial ring $k[x]$, but the latter is not a field: the element $x$ has no multiplicative inverse.
Take a field $K$ and the product ring $K\times K$, with the diagonal embedding $a \mapsto (a,a)$. This is an injective ring homomorphism, but $K\times K$ is not a field, since $(1,0)(0,1)=(0,0)$ exhibits zero divisors.
|
Find the last digit of this series, for any value of $n$ and $m$,
Find the last digit of this number:
$$({}_{4n+1} C_0 )^{4m+1} + ({}_{4n+1} C_1 )^{4m+1} +({}_{4n+1} C_2 )^{4m+1} + \cdots + ({}_{4n+1} C_{4n+1} )^{4m+1}\;,$$
where $n$, $m$ belong to the holy set of natural numbers.
-
The question as it stands is a bit hard to fix. Please write the equation in $\TeX$ properly. (You seem to know $\TeX$ and you can best fix this!) – user21436 Feb 13 '12 at 14:29
For binomial coefficients, you can use \binom{m}{n} which looks like $\binom{m}{n}$. – Aryabhata Feb 13 '12 at 14:33
@Stom: What you call "the fuss" was people trying to help you make a very badly formatted question readable so that people might be able to answer it, which I presume was your intention in asking it. It seems inappropriate to criticize them for this attempt at assistance. I've now cleaned up the formatting in the question. In case you intend to ask more questions here in the future, it might be a good idea to take a look at the edits (by clicking on the "edited ... ago" link under the question) so you can do it yourself next time. – joriki Feb 13 '12 at 15:33
@Stom, that's not how this site is supposed to operate. – lhf Feb 13 '12 at 19:39
@Stom, no, looking for alternative solutions is fine. Just be open about it. And disclosing your own proof will avoid duplication. It's perfectly ok to just add you own proof as an answer. – lhf Feb 14 '12 at 10:43
The ingredients you need to solve this are Euler's theorem (along with the value of Euler's totient function for the base of our decimal system) and the binomial theorem (applied to a power of $1+1$), or alternatively the fact that the total number of subsets of a $k$-element set is $2^k$.
-
OK so i know now the ingredients to solve this, may i also have the recipe? – Tomarinator Feb 13 '12 at 16:35
@Stom: Questions in the imperative without any indication why you're asking or what you've tried often don't get fully worked out answers here. In the present case, lhf asked you what you've tried and you haven't responded (yet). – joriki Feb 13 '12 at 16:43
I already know the answer, and am looking for alternate solutions, in fact i created the problem,myself – Tomarinator Feb 13 '12 at 17:21
I guarantee that the question is legitimate, and the solution is as relevant as the answer is elegant. – Tomarinator Feb 13 '12 at 17:40
@Stom: I find it rather bad style to pose a problem without mentioning the fact that you already know the answer. I for one am not going to put any more time into this question. – joriki Feb 13 '12 at 23:49
Here is my approach.
It is a fact that for any natural numbers $n$ and $k$, $n^{4k+1}$ has the same units digit as $n$ itself.
So, since we only need the units digit, we may drop the exponent $4m+1$ from every term of the series, which leaves exactly the sum of the binomial coefficients of $(1+x)^{4n+1}$, namely $2^{4n+1}$.
The last digit of $2^{4n+1}$ is $2$ (by the fact stated above), so the answer is $2$.
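A quick brute-force check of this claim in Python (not part of the original thread):

from math import comb

def last_digit(n, m):
    s = sum(comb(4*n + 1, k) ** (4*m + 1) for k in range(4*n + 2))
    return s % 10

print({last_digit(n, m) for n in range(1, 6) for m in range(1, 6)})  # {2}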
-
But i firmly believe that a more better solution , from number theory exists for this problem, – Tomarinator Feb 18 '12 at 14:44
Did you understand joriki's answer? – Aryabhata Feb 18 '12 at 18:53
I will one day. – Tomarinator Mar 18 '12 at 18:09
|
# The Area of a Rhombus is Equal to Half the Product of its Diagonals
Here we will prove that the area of a rhombus is equal to half the product of its diagonals.
Solution:
Given:
PQRS is a rhombus whose diagonals are PR and QS. The diagonals intersect at O.
To prove: ar(rhombus PQRS) = $$\frac{1}{2}$$ ×PR × QS.
1. ar(∆RSQ) = $$\frac{1}{2}$$ × Base × Altitude = $$\frac{1}{2}$$ × QS × RO.
   Reason: QS ⊥ PR, because the diagonals of a rhombus are perpendicular to each other.
2. ar(∆PQS) = $$\frac{1}{2}$$ × Base × Altitude = $$\frac{1}{2}$$ × QS × PO.
   Reason: As in reason 1.
3. ar(∆RSQ) + ar(∆PQS) = $$\frac{1}{2}$$ × QS × (RO + PO).
   Reason: By addition from statements 1 and 2.
4. ar(rhombus PQRS) = $$\frac{1}{2}$$ × PR × QS.
   Reason: By the addition axiom for area, since ∆RSQ and ∆PQS together make up the rhombus and RO + PO = PR.
|
# How can I draw this line figure with arrows using TikZ?
I need this figure drawn in TikZ for work. How do I do it?
So far, I have:
\documentclass[tikz,border=2mm]{standalone}
\usepackage{lmodern}
\begin{document}
\begin{tikzpicture}[y=2cm, font=\sffamily\small]
\draw[->] (0,0) -- (4.5,0) node[below] {};
\foreach \i in {1,2,3,4} \draw (\i,1mm) -- (\i,-1mm) node[below] {\i};
\end{tikzpicture}
\end{document}
• Is the unequal spacing and the displacement of the arrows intended? – Henri Menke Dec 26 '15 at 16:33
• I did it with paint. The distance of the nodes (1,2,3 ..) is the same. Sorry – Solid Dec 26 '15 at 16:38
• Welcome to Tex SE. Have you searched the site for similar drawings to replicate? – Alenanno Dec 26 '15 at 16:39
• I don't know what kind of LaTeX code this is. – Solid Dec 26 '15 at 16:41
• could you post a MWE of the code you already wrote trying to achieve this? – Rico Dec 26 '15 at 16:41
Here, without the unequal spacing and displaced arrows, but with TikZ.
\documentclass{article}
\usepackage{tikz}
\usetikzlibrary{shapes.arrows}
\begin{document}
\begin{tikzpicture}[scale=0.5,
down arrow/.style = {
single arrow, draw,
minimum height=2.5em,
transform shape,
rotate=-90,
}]
\draw (0,0) -- (10,0);
\foreach \i in {1,...,9}
\draw (\i,0.3) -- (\i,-0.3) node[below] {\i};
\foreach \i in {1,2,4,6,7,9}
\node[down arrow] at (\i,1) {};
\end{tikzpicture}
\end{document}
|
Brian Bi
## Section 11.3. Homomorphisms and Ideals
Exercise 11.3.1 Let $$I$$ be an ideal of $$R$$. By definition, $$I$$ is closed under addition. Also, either $$0 \in I$$ or $$I$$ contains some nonzero element $$x$$, whence $$0x = 0$$ is in $$I$$, so either way, $$0 \in I$$; $$I$$ contains the identity of $$R^+$$. Finally, let $$x \in I$$. Then $$(-1)x \in I$$, which is the inverse of $$x$$ in $$R^+$$. Therefore $$I$$ is a subgroup of $$R^+$$.
Exercise 11.3.2 Let $$I$$ be a nonzero ideal of $$\mathbb{Z}[i]$$ and let $$a + bi \in I$$ be nonzero. Then $$a^2 + b^2 = (a - bi)(a + bi) \in I$$, which is nonzero.
Exercise 11.3.3
1. This map sends all polynomials to their constant terms. It is therefore the set of polynomials with zero constant term. It is easy to see that this ideal is generated by $$x$$ and $$y$$.
2. Let $$I$$ denote the ideal so defined. $$I$$ is the set of real polynomials that have $$2 + i$$ as a zero. Since $$\mathbb{R}$$ is a field, Proposition 11.3.22 applies, and $$I$$ is generated by the monic polynomial of lowest degree that it contains. Clearly $$I$$ doesn't contain any linear polynomials, but it does contain the quadratic polynomial $$(x - (2 + i))(x - (2 - i)) = x^2 - 4x + 5$$. Therefore $$x^2 - 4x + 5$$ generates $$I$$.
3. Denote the given ideal by $$I$$. $$I$$ is the set of all integer polynomials that have $$1 + \sqrt{2}$$ as a zero. Let $$I_{\mathbb{Q}}$$ be the ideal of rational polynomials that have $$1 + \sqrt{2}$$ as a zero. Evidently, $$I \subseteq I_{\mathbb{Q}}$$. By Proposition 11.3.22, $$I_{\mathbb{Q}}$$ is generated by the monic polynomial of lowest degree that it contains. Clearly $$I_{\mathbb{Q}}$$ doesn't contain any linear polynomials, but it does contain the quadratic polynomial $$(x - (1 + \sqrt{2}))(x - (1 - \sqrt{2})) = x^2 - 2x - 1$$. Therefore every element $$f \in I_{\mathbb{Q}}$$ can be written as $$f = (x^2 - 2x - 1)g$$ where $$g \in \mathbb{Q}[x]$$.
Let $$I'$$ denote the ideal of $$\mathbb{Z}[x]$$ generated by $$x^2 - 2x - 1$$. Clearly $$I' \subseteq I$$.
Suppose $$f \in I$$. Then $$f \in I_{\mathbb{Q}}$$, so we can write $$f = (x^2 - 2x - 1)g$$, that is, $$x^2 - 2x - 1$$ divides $$f$$ in $$\mathbb{Q}[x]$$. Since $$f$$ has integer coefficients and $$x^2 - 2x - 1$$ is monic, Lemma 11.3.24 applies, and $$x^2 - 2x - 1$$ divides $$f$$ in $$\mathbb{Z}[x]$$, that is, $$g \in \mathbb{Z}[x]$$. This establishes that $$f \in I'$$. Since this holds for all $$f \in I$$, we see that $$I \subseteq I'$$.
We conclude that $$I = I'$$, that is, $$I$$ is generated by $$x^2 - 2x - 1$$.
4. By Proposition 11.3.10, this homomorphism must send the constant polynomials to themselves. Proposition 11.3.4 then applies, and the map in question is the map that sends the polynomial $$f$$ to the value $$f(\sqrt{2} + \sqrt{3})$$. Thus, if we can find the monic polynomial $$P \in \mathbb{Q}[x]$$ of minimum degree that has $$\sqrt{2} + \sqrt{3}$$ as a zero, and $$P$$ turns out to have integer coefficients, then $$P$$ generates the ideal in question, by the reasoning applied in part (c).
In order to find $$P$$, we first observe that $$\mathbb{Q}[\sqrt{2}, \sqrt{3}]$$ has the following endomorphisms:
• $$\varphi_2(a + b\sqrt{2} + c\sqrt{3} + d\sqrt{6}) = a - b\sqrt{2} + c\sqrt{3} - d\sqrt{6}$$
• $$\varphi_3(a + b\sqrt{2} + c\sqrt{3} + d\sqrt{6}) = a + b\sqrt{2} - c\sqrt{3} - d\sqrt{6}$$
• $$\varphi_{23}(a + b\sqrt{2} + c\sqrt{3} + d\sqrt{6}) = a - b\sqrt{2} - c\sqrt{3} + d\sqrt{6}$$
If $$\varphi$$ is an endomorphism of a ring $$R$$, it is easy to see that for all $$f \in R[x], x \in R$$, we have $$\varphi(f(x)) = \varphi(f)(\varphi(x))$$, where $$\varphi(f)$$ is defined as the polynomial obtained by replacing each coefficient $$c_i$$ of $$f$$ by $$\varphi(c_i)$$.
Suppose $$f \in \mathbb{Q}[x]$$ and $$f(\sqrt{2} + \sqrt{3}) = 0$$. If we regard $$f$$ as an element of $$\mathbb{Q}[\sqrt{2}, \sqrt{3}][x]$$, then $$0 = \varphi_2(f(\sqrt{2} + \sqrt{3})) = \varphi_2(f)(\varphi_2(\sqrt{2} + \sqrt{3})) = f(-\sqrt{2} + \sqrt{3})$$. Similar logic using $$\varphi_3$$ and $$\varphi_{23}$$ shows that $$f(\sqrt{2} - \sqrt{3}) = f(-\sqrt{2} -\sqrt{3}) = 0$$. Therefore $$P$$ must have degree at least 4. In fact, the product of the four linear factors $$x - (\pm \sqrt{2} \pm \sqrt{3})$$ is $$x^4 - 10x^2 + 1$$, which is monic and has integer coefficients; therefore this is the polynomial we are looking for. We conclude that the ideal is generated by $$x^4 - 10x^2 + 1$$. (A quick computational check of this polynomial appears after this list.)
5. Although this isn't explicitly stated, it seems we're supposed to assume that the map in question sends the constant polynomials to themselves, so that we can apply Proposition 11.3.4 as in part (d). Let $$I$$ be the ideal in question, and let $$I'$$ be the ideal of $$\mathbb{C}[x, y, z]$$ generated by $$y - x^2$$ and $$z - x^3$$. It is easy to see that $$I' \subseteq I$$.
Suppose $$f \in I$$. We can do division in the variable $$z$$ to write $$f(x, y, z) = (z - x^3)g(x, y, z) + h(x, y, z)$$ where the remainder $$h$$ must have degree 0 in $$z$$, that is, it is a polynomial of $$x$$ and $$y$$ only. We can then do division in the variable $$y$$ to write $$h(x, y) = (y - x^2)i(x, y) + j(x, y)$$ where $$j$$ must have degree 0 in $$y$$, that is, it is a polynomial in $$x$$ only. So $$f(x, y, z) = (z - x^3)g(x, y, z) + (y - x^2)i(x, y) + j(x)$$. Evidently $$f(t, t^2, t^3) = (t^3 - t^3)g + (t^2 - t^2)i + j(t) = j(t)$$, and since $$f \in I$$ means $$f(t, t^2, t^3) = 0$$, it follows that $$j = 0$$. Therefore $$f = (z - x^3)g + (y - x^2)i$$ for some $$g \in \mathbb{C}[x, y, z], i \in \mathbb{C}[x, y]$$. Therefore $$f \in I'$$. Since this holds for all $$f \in I$$, we have $$I \subseteq I'$$.
Since $$I' \subseteq I$$ and $$I \subseteq I'$$, we conclude $$I = I'$$, so $$I$$ is generated by $$z - x^3$$ and $$y - x^2$$.
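As a quick computational check of part (d) (not part of the original solutions), SymPy recovers the same minimal polynomial:

from sympy import Symbol, sqrt, minimal_polynomial

x = Symbol('x')
print(minimal_polynomial(sqrt(2) + sqrt(3), x))   # x**4 - 10*x**2 + 1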
Exercise 11.3.4 Again, although this isn't explicitly stated, I assume the intended map is the one that sends the constant polynomials to themselves, so that Proposition 11.3.4 applies, and the map in question is unique and given by substitution. Let $$P(x, y) = y - (x - 1)^3 + 1$$. Then $$\varphi(P) = t^3 - 1 - ((t + 1) - 1)^3 + 1 = 0$$. Let $$I = (P)$$. It is clear that $$I \subseteq \ker \varphi$$. On the other hand, let $$f \in \ker \varphi$$. Using polynomial division in $$y$$, write $$f(x, y) = P(x, y) g(x, y) + r(x, y)$$. Since $$P$$ has degree 1 in $$y$$, the remainder $$r$$ must have degree 0 in $$y$$, that is, $$r$$ is a polynomial in $$x$$ alone. Then $$0 = \varphi(f) = \varphi(P)\varphi(g) + r(t + 1) = r(t + 1)$$, so the polynomial $$r(t + 1)$$ is identically zero. Since the field is $$\mathbb{C}$$, this implies that $$r$$ is the zero polynomial. Therefore $$f = Pg$$. We conclude that $$\ker \varphi = \{Pg \mid g \in \mathbb{C}[x, y]\}$$.
Let $$I'$$ be an ideal that contains $$\ker \varphi$$. If $$I' = \ker \varphi$$, then as we have established, $$I'$$ is generated by the element $$P = y - (x - 1)^3 + 1$$. Suppose on the other hand that $$I'$$ is strictly larger than $$\ker \varphi$$. If $$f \in I' \setminus \ker \varphi$$, then division by $$P$$ yields $$f = Pg + r$$ where $$r$$ is a nonzero polynomial of $$x$$ alone. Since $$Pg \in I'$$, it follows that $$r \in I'$$ also. In general, if $$S$$ is the set of remainders obtained by dividing elements of $$I'$$ by $$P$$, then this reasoning shows that $$S \subseteq I'$$. Observe that:
• If $$r \in S, f \in \mathbb{C}[x]$$, that is, there exists some $$g \in \mathbb{C}[x, y]$$ such that $$Pg + r \in I'$$, then since $$f\cdot(Pg + r) \in I'$$, we have $$P\cdot(fg) + fr \in I'$$, which establishes $$fr \in S$$.
• If $$r_1, r_2 \in S$$, that is, there exist $$g_1, g_2 \in \mathbb{C}[x, y]$$ such that $$Pg_1 + r_1, Pg_2 + r_2 \in I'$$, then since $$Pg_1 + r_1 + Pg_2 + r_2 \in I'$$, we have $$P\cdot(g_1 + g_2) + (r_1 + r_2) \in I'$$, which establishes $$r_1 + r_2 \in S$$.
The above results imply that $$S$$ is an ideal of $$\mathbb{C}[x]$$. By Proposition 11.3.22, there is some polynomial $$Q(x)$$ such that $$S = \{Qh \mid h \in \mathbb{C}[x]\}$$. Obviously $$Q \in S \subseteq I'$$, so $$(P, Q) \subseteq I'$$. Conversely, let $$f \in I'$$; write $$f = Pg + r$$, and since $$r \in S$$, we can write $$r = Qh$$, so $$f = Pg + Qh$$. Therefore $$I' \subseteq (P, Q)$$. We conclude that $$I' = (P, Q)$$, so at most two elements are needed to generate $$I'$$.
Exercise 11.3.5 Consider $$F[x]$$ as a vector space over $$F$$. One important property is that the derivative operation is a linear endomorphism of this vector space. We will not prove this here since the proof is easy, but we will use it in what follows.
1. Let $$f = x^a, g = x^b$$. Then $$(fg)' = (a+b)x^{a+b-1}$$, and $$f'g + fg' = ax^{a-1} x^b + x^a bx^{b-1} = (a+b)x^{a+b-1}$$, therefore $$(fg)' = f'g + fg'$$. Now let $$f$$ and $$g$$ be arbitrary elements of $$\mathbb{C}[x]$$ and let $$h(f, g) = (fg)' - f'g - fg'$$. By linearity of differentiation, it follows that $$h$$ is bilinear in its two arguments. We already established that $$h(x^a, x^b) = 0$$ for all monomials $$x^a, x^b$$, so $$h$$ vanishes on a basis. Therefore $$h$$ vanishes identically.
2. We'll first prove the claim for a restricted class of pairs of functions, those for which $$f$$ is of the form $$f(x) = x^a$$ while $$g \in \mathbb{C}[x]$$ is arbitrary. We'll proceed by induction on the degree $$a$$.
Base case: $$a = 0$$. Then $$(f \circ g)' = (1 \circ g)' = 1' = 0$$ while $$(f' \circ g)g' = (0 \circ g)g' = 0$$ so the two are equal.
Inductive case: $$a \ge 1$$. By the induction hypothesis, $$(g^{a-1})' = ((x \mapsto x^{a-1}) \circ g)' = ((x \mapsto (a-1)x^{a-2}) \circ g)g' = (a-1)g^{a-2}g'$$. Then, using this together with the product rule: \begin{align*} (f \circ g)' &= (g^a)' \\ &= (g g^{a-1})' \\ &= g' g^{a-1} + g (g^{a-1})' \\ &= g^{a-1} g' + g (a-1) g^{a-2} g' \\ &= a g^{a-1} g' \\ &= ((x \mapsto ax^{a-1}) \circ g) g' \\ &= (f' \circ g) g' \end{align*} as required.
Now suppose $$f, g \in \mathbb{C}[x]$$ are arbitrary. Define $$h(f, g) = (f \circ g)' - (f' \circ g)g'$$. We have already established that $$h$$ vanishes whenever $$f$$ is of the form $$x^a$$. Also, $$h$$ is linear in its first argument. Since $$\{x^a \mid a \in \mathbb{N}\}$$ is a basis for $$\mathbb{C}[x]$$, it follows that $$h$$ vanishes identically.
Exercise 11.3.6 Again, although it isn't explicitly stated, I assume the map in question is intended to send the constant polynomials to themselves, so that it acts by substitution. That is, for a given $$f$$, we are dealing with the unique endomorphism $$\varphi_f$$ that is the identity on $$R$$ and that sends $$x$$ to $$x + f(y)$$ and $$y$$ to $$y$$. We simply need to show that the map $$\varphi_f$$ is bijective.
We first show that $$\varphi_f$$ is injective. Suppose $$\varphi_f(g_1) = \varphi_f(g_2)$$. Since the map $$\varphi_f$$ is a ring endomorphism, we have $$\varphi_f(g_1 - g_2) = 0$$. Denote $$g_1 - g_2$$ by $$h$$. Suppose $$h = \sum_{i=0}^m \sum_{j=0}^n c_{ij} x^i y^j = \sum_{i=0}^m x^i P_i(y)$$, where we have defined $$P_i(y) = \sum_{j=0}^n c_{ij} y^j$$. Then $0 = \varphi_f(h) = \sum_{i=0}^m (x + f(y))^i P_i(y)$ If we group terms in the RHS by powers of $$x$$, we see that the coefficient of the $$x^m$$ term is $$P_m(y)$$. Therefore $$P_m(y) = 0$$, implying $$c_{mj} = 0$$ for all $$j$$, so that $$0 = \sum_{i=0}^{m-1} (x + f(y))^i P_i(y)$$. Repeating this for $$m-1, m-2, \ldots$$, we find that all $$P_i$$'s vanish, so $$h$$ vanishes. Therefore $$g_1 = g_2$$. This completes the proof that $$\varphi_f$$ is injective.
We now show that $$\varphi_f$$ is surjective. Let $$g(x, y)$$ be given and let $$h(x, y) = g(x - f(y), y)$$. Then $$\varphi_f(h) = h(x + f(y), y) = g((x + f(y)) - f(y), y) = g(x, y)$$.
We have shown that $$\varphi_f$$ is both surjective and injective, so we are done.
Exercise 11.3.7 By Proposition 11.3.10, any endomorphism of $$\mathbb{Z}[x]$$ must send the constant polynomials to themselves. By the substitution principle, for each $$f \in \mathbb{Z}[x]$$, there is a unique endomorphism $$\varphi_f$$ on $$\mathbb{Z}[x]$$ that sends the constant polynomials to themselves and that sends $$x$$ to $$f(x)$$, which is given by substituting $$f(x)$$ for $$x$$; that is, $$\varphi_f(g) = g \circ f$$. It is clear that all endomorphisms of $$\mathbb{Z}[x]$$ are of this form, since each endomorphism must send $$x$$ to some polynomial. So we need to determine which $$\varphi_f$$ are bijective.
We can rule out all constant $$f$$, since the image of $$\varphi_f$$ would contain only constants.
In general, for nonzero $$f, g$$ the degree of $$g \circ f$$ is the product of the degrees of $$f$$ and $$g$$. Therefore if $$f$$ has degree 2 or greater, the image of $$\varphi_f$$ doesn't contain any linear polynomials. So we can rule out all such $$f$$ as well.
If $$f$$ is linear, $$f = ax + b$$, then the leading coefficient of $$g \circ f$$ will always be a multiple of $$a$$. So $$\varphi_f$$ will not be surjective unless $$a$$ is a unit. Thus, if $$\varphi_f$$ is to be an automorphism, we must have $$a = \pm 1$$.
Suppose $$f$$ is indeed of this form. If $$h \in \mathbb{Z}[x]$$ is given, then let $$g(x) = h(x/a - b/a)$$. Then $$g \in \mathbb{Z}[x]$$ (since $$a = \pm 1$$) and $$\varphi_f(g) = g(ax + b) = h((ax + b)/a - b/a) = h(x)$$. This establishes that $$\varphi_f$$ is surjective.
Suppose $$g, h \in \mathbb{Z}[x]$$ such that $$\varphi_f(g) = \varphi_f(h)$$. Then $$\varphi_f(g - h) = 0$$. Let $$i = g - h$$. Then we have that $$i(ax + b) = 0$$. If we consider $$i$$ as a polynomial over $$\mathbb{R}$$, then since $$i(ax + b) = 0$$ for all $$x \in \mathbb{Z}$$, either $$i$$ is identically zero or else $$i$$ has infinitely many zeroes, namely $$(n-b)/a$$ for each integer $$n$$. The latter is impossible, so $$i = 0$$. Therefore $$g = h$$, that is, $$\varphi_f$$ is injective. This completes the proof that $$\varphi_f$$ is bijective whenever $$f(x) = \pm x + b$$.
Thus, the endomorphisms of $$\mathbb{Z}[x]$$ are given by $$g \mapsto g \circ f$$ where $$f = \pm x + b$$ for some $$b \in \mathbb{Z}$$.
Exercise 11.3.8 Let $$\varphi$$ denote the map in question. We have the following:
• $$\varphi(1) = 1^p = 1$$
• $$\varphi(ab) = (ab)^p = a^p b^p = \varphi(a)\varphi(b)$$, where we have used the fact that multiplication in a ring is commutative
• $$\varphi(a + b) = (a + b)^p = \sum_{i=0}^p \binom{p}{i} a^i b^{p-i} = a^p + b^p = \varphi(a) + \varphi(b)$$ where we have used the fact that the binomial coefficients $$\binom{p}{i}$$ are divisible by $$p$$ whenever $$1 \le i \le p-1$$.
We conclude that $$\varphi$$ is a ring endomorphism.
Exercise 11.3.9
1. Suppose $$x^k = 0$$. Observe that $$(1 + x)(1 - x + x^2 - x^3 + \ldots + (-1)^{k-1} x^{k-1}) = 1 + (-1)^{k-1} x^k = 1$$. Therefore $$1 + x$$ is a unit.
2. Suppose $$a^k = 0$$. Let $$p^e$$ be a power of $$p$$ that is greater than or equal to $$k$$, so that $$a^{p^e} = 0$$. Observe that $$(1 + a)^p = \sum_{i=0}^p \binom{p}{i} a^i = 1 + a^p$$, where we have used the fact that the binomial coefficients $$\binom{p}{i}$$ are divisible by $$p$$ whenever $$1 \le i \le p - 1$$. Iterating this $$e$$ times, we obtain that $$(1 + a)^{p^e} = 1 + a^{p^e} = 1$$.
Exercise 11.3.10 Let $$I$$ be an ideal of $$F[[t]]$$. If $$I$$ is not the zero ideal, then choose $$P \in I \setminus \{0\}$$ such that $$P$$ has a term with nonzero coefficient of degree $$d$$ as small as possible. Thus, $$I \subseteq (t^d)$$. Observe that $$P/t^d \in F[[t]]$$ and has nonzero constant term, so according to Exercise 11.2.2, there exists $$Q \in F[[t]]$$ with $$(P/t^d)Q = 1$$. Therefore $$t^d = PQ \in I$$, so $$(t^d) \subseteq I$$. This establishes that $$I = (t^d)$$. Therefore, $$F[[t]]$$ has one ideal for each $$d \ge 0$$, namely $$(t^d)$$, together with the zero ideal.
Exercise 11.3.11 This is clearly not true; in $$\mathbb{Z}[x]$$, the principal ideal generated by $$2x$$ has least degree 1, but it doesn't contain any monic polynomial of degree 1.
Exercise 11.3.12 Suppose $$z_1, z_2 \in I +J$$, that is, $$z_1 = x_1 + y_1, z_2 = x_2 + y_2$$, where $$x_1, x_2 \in I$$ and $$y_1, y_2 \in J$$. Then $$z_1 + z_2 = (x_1 + x_2) + (y_1 + y_2)$$, but $$x_1 + x_2 \in I$$ and $$y_1 + y_2 \in J$$, so $$z_1 + z_2 \in I + J$$. Now suppose $$z \in I + J$$, so that $$z = x + y$$ with $$x \in I, y \in J$$. If $$r \in R$$, then $$rz = rx + ry$$, but $$rx \in I$$ and $$ry \in J$$, so $$rz \in I + J$$. We conclude that $$I + J$$ is an ideal of $$R$$.
Exercise 11.3.13 Suppose $$z_1, z_2 \in I \cap J$$. Then $$z_1, z_2 \in I$$, so $$z_1 + z_2 \in I$$, and $$z_1, z_2 \in J$$, so $$z_1 + z_2 \in J$$. Therefore $$z_1 + z_2 \in I \cap J$$. Also, suppose $$z \in I \cap J, r \in R$$. Then $$z \in I$$ so $$rz \in I$$, and $$z \in J$$ so $$rz \in J$$, therefore $$rz \in I \cap J$$. This establishes that $$I \cap J$$ is an ideal of $$R$$.
Let $$I$$ be the ideal of $$\mathbb{R}[w, x, y, z]$$ generated by the elements $$w, x$$ and let $$J$$ be the ideal generated by $$y, z$$. Then $$S = \{pq \mid p \in I, q \in J\}$$ contains the elements $$wy$$ and $$xz$$, but it doesn't contain their sum $$wy + xz$$, as the latter is irreducible while $$I$$ and $$J$$ don't contain any constant polynomials. So $$S$$ is not an ideal.
Suppose $$z, z' \in IJ$$. Write $$z = \sum_{i=1}^m x_i y_i, z' = \sum_{j=1}^n x_j' y_j'$$ where all $$x_i, x_j' \in I$$ and all $$y_i, y_j' \in J$$. Then $$z + z'$$ is a sum of $$m + n$$ terms that are each the product of an element of $$I$$ and an element of $$J$$, therefore $$z + z' \in IJ$$. Now suppose $$r \in R$$. Then $$rz = \sum_{i=1}^m r(x_i y_i) = \sum_{i=1}^m (rx_i) y_i$$. Since $$I$$ is an ideal, each $$rx_i$$ is in $$I$$. So $$rz$$ is a sum of $$m$$ terms that are each a product of an element of $$I$$ and an element of $$J$$, so $$rz \in IJ$$. We conclude that $$IJ$$ is an ideal of $$R$$.
In general, $$IJ \subseteq I \cap J$$; every element of $$IJ$$ is an element of $$I \cap J$$, but the converse isn't true. For example, if $$I = 4\mathbb{Z}, J = 6\mathbb{Z}$$, then $$IJ = 24\mathbb{Z}$$ and $$I \cap J = 12\mathbb{Z}$$, so $$IJ$$ is a strict subset of $$I \cap J$$. To see that $$IJ \subseteq I \cap J$$ in general, let $$z \in IJ$$ and write $$z = \sum_i x_i y_i$$ with $$x_i \in I, y_i \in J$$. Then $$z \in I$$ since $$z$$ is a combination of the $$x_i$$'s with the coefficients, namely $$y_i$$'s, being elements of $$R$$, and similar reasoning shows $$z \in J$$, therefore $$z \in I \cap J$$.
## Any notation for component-by-component vector multiplication?
I believe it can be written as
$$\vec \nabla \cdot \left( ( \vec \nabla T)^T \cdot I_3 \vec k \right)^T$$
I think this works.
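For readers who just need to compute the component-by-component (Hadamard) product rather than typeset a notation for it, most numerical libraries treat it as plain elementwise multiplication. A minimal NumPy sketch (the vectors here are illustrative placeholders, not taken from the thread):

```python
import numpy as np

# Two example 3-component vectors (arbitrary placeholder values).
u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])

# Component-by-component (Hadamard) product: the * operator on NumPy
# arrays multiplies elementwise, so hadamard[i] == u[i] * v[i].
hadamard = u * v
print(hadamard)  # [ 4. 10. 18.]
```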
# Darwinism (Wallace)/Chapter III
CHAPTER III
THE VARIABILITY OF SPECIES IN A STATE OF NATURE
Importance of variability—Popular ideas regarding it—Variability of the lower animals—The variability of insects—Variation among lizards—Variation among birds—Diagrams of bird-variation—Number of varying individuals—Variation in the mammalia—Variation in internal organs—Variations in the skull—Variations in the habits of Animals—The Variability of plants—Species which vary little—Concluding remarks.
The foundation of the Darwinian theory is the variability of species, and it is quite useless to attempt even to understand that theory, much less to appreciate the completeness of the proof of it, unless we first obtain a clear conception of the nature and extent of this variability. The most frequent and the most misleading of the objections to the efficacy of natural selection arise from ignorance of this subject, an ignorance shared by many naturalists, for it is only since Mr. Darwin has taught us their importance that varieties have been systematically collected and recorded; and even now very few collectors or students bestow upon them the attention they deserve. By the older naturalists, indeed, varieties—especially if numerous, small, and of frequent occurrence—were looked upon as an unmitigated nuisance, because they rendered it almost impossible to give precise definitions of species, then considered the chief end of systematic natural history. Hence it was the custom to describe what was supposed to be the "typical form" of species, and most collectors were satisfied if they possessed this typical form in their cabinets. Now, however, a collection is valued in proportion as it contains illustrative specimens of all the varieties that occur in each species, and in some cases these have been carefully described, so that we possess a considerable mass of information on the subject. Utilising this information we will now endeavour to give some idea of the nature and extent of variation in the species of animals and plants.
It is very commonly objected that the widespread and constant variability which is admitted to be a characteristic of domesticated animals and cultivated plants is largely due to the unnatural conditions of their existence, and that we have no proof of any corresponding amount of variation occurring in a state of nature. Wild animals and plants, it is said, are usually stable, and when variations occur these are alleged to be small in amount and to affect superficial characters only; or if larger and more important, to occur so rarely as not to afford any aid in the supposed formation of new species.
This objection, as will be shown, is utterly unfounded; but as it is one which goes to the very root of the problem, it is necessary to enter at some length into the various proofs of variation in a state of nature. This is the more necessary because the materials collected by Mr. Darwin bearing on this question have never been published, and comparatively few of them have been cited in The Origin of Species; while a considerable body of facts has been made known since the publication of the last edition of that work.
Variability of the Lower Animals.
Among the lowest and most ancient marine organisms are the Foraminifera, little masses of living jelly, apparently structureless, but which secrete beautiful shelly coverings, often perfectly symmetrical, as varied in form as those of the mollusca and far more complicated. These have been studied with great care by many eminent naturalists, and the late Dr. W. B. Carpenter in his great work—the Introduction to the Study of the Foraminifera—thus refers to their variability: "There is not a single species of plant or animal of which the range of variation has been studied by the collocation and comparison of so large a number of specimens as have passed under the review of Messrs. Williamson, Parker, Rupert Jones, and myself in our studies of the types of this group;" and he states as the result of this extensive comparison of specimens: "The range of variation is so great among the Foraminifera as to include not merely those differential characters which have been usually accounted specific, but also those upon which the greater part of the genera, of this group have been founded, and even in some instances those of its orders."[1]
Coming now to a higher group—the Sea-Anemones—Mr. P. H. Gosse and other writers on these creatures often refer to variations in size, in the thickness and length of the tentacles, the form of the disc and of the mouth, and the character of surface of the column, while the colour varies enormously in a great number of the species. Similar variations occur in all the various groups of marine invertebrata, and in the great sub-kingdom of the mollusca they are especially numerous. Thus, Dr. S. P. Woodward states that many present a most perplexing amount of variation, resulting (as he supposes) from supply of food, variety of depth and of saltness of the water; but we know that many variations are quite independent of such causes, and we will now consider a few cases among the land-mollusca in which they have been more carefully studied.
In the small forest region of Oahu, one of the Sandwich Islands, there have been found about 175 species of land-shells represented by 700 or 800 varieties; and we are told by the Rev. J. T. Gulick, who studied them carefully, that "we frequently find a genus represented in several successive valleys by allied species, sometimes feeding on the same, sometimes on different plants. In every such case the valleys that are nearest to each other furnish the most nearly allied forms; and a full set of the varieties of each species presents a minute gradation of forms between the more divergent types found in the more widely separated localities."
In most land-shells there is a considerable amount of variation in colour, markings, size, form, and texture or striation of the surface, even in specimens collected in the same locality. Thus, a French author has enumerated no less than 198 varieties of the common wood-snail (Helix nemoralis), while of the equally common garden-snail (Helix hortensis) ninety varieties have been described. Fresh-water shells are also subject to great variation, so that there is much uncertainty as to the number of species; and variations are especially frequent in the Planorbidæ, which exhibit many eccentric deviations from the usual form of the species—deviations which must often affect the form of the living animal. In Mr. Ingersoll's Report on the Recent Mollusca of Colorado many of these extraordinary variations are referred to, and it is stated that a shell (Helisonia trivolvis) abundant in some small ponds and lakes, had scarcely two specimens alike, and many of them closely resembled other and altogether distinct species.[2]
The Variability of Insects.
Among Insects there is a large amount of variation, though very few entomologists devote themselves to its investigation. Our first examples will be taken from the late Mr. T. Vernon Wollaston's book, On the Variation of Species, and they must be considered as indications of very widespread though little noticed phenomena. He speaks of the curious little carabideous beetles of the genus Notiophilus as being "extremely unstable both in their sculpture and hue;" of the common Calathus mollis as having "the hind wings at one time ample, at another rudimentary, and at a third nearly obsolete;" and of the same irregularity as to the wings being characteristic of many Orthoptera and of the Homopterous Fulgoridæ. Mr. Westwood in his Modern Classification of Insects states that "the species of Gerris, Hydrometra, and Velia are mostly found perfectly apterous, though occasionally with full-sized wings."
It is, however, among the Lepidoptera (butterflies and moths) that the most numerous cases of variation have been observed, and every good collection of these insects affords striking examples. I will first adduce the testimony of Mr. Bates, who speaks of the butterflies of the Amazon valley exhibiting innumerable local varieties or races, while some species showed great individual variability. Of the beautiful Mechanitis Polymnia he says, that at Ega on the Upper Amazons, "it varies not only in general colour and pattern, but also very considerably in the shape of the wings, especially in the male sex." Again, at St. Paulo, Ithomia Orolina exhibits four distinct varieties, all occurring together, and these differ not only in colour but in form, one variety being described as having the fore wings much elongated in the male, while another is much larger and has "the hind wings in the male different in shape." Of Heliconius Numata Mr. Bates says: "This species is so variable that it is difficult to find two examples exactly alike," while "it varies in structure as well as in colours. The wings are sometimes broader, sometimes narrower; and their edges are simple in some examples and festooned in others." Of another species of the same genus, H. melpomene, ten distinct varieties are described all more or less connected by intermediate forms, and four of these varieties were obtained at one locality, Serpa on the north bank of the Amazon. Ceratina Ninonia is another of these very unstable species exhibiting many local varieties which are, however, incomplete and connected by intermediate forms; while the several species of the genus Lycorea all vary to such an extent as almost to link them together, so that Mr. Bates thinks they might all fairly be considered as varieties of one species only.
Turning to the Eastern Hemisphere we have in Papilio Severus a species which exhibits a large amount of simple variation, in the presence or absence of a pale patch on the upper wings, in the brown submarginal marks on the lower wings, in the form and extent of the yellow band, and in the size of the specimens. The most extreme forms, as well as the intermediate ones, are often found in one locality and in company with each other. A small butterfly (Terias hecabe) ranges over the whole of the Indian and Malayan regions to Australia, and everywhere exhibits great variations, many of which have been described as distinct species; but a gentleman in Australia bred two of these distinct forms (T. hecabe and T. Æsiope), with several intermediates, from one batch of caterpillars found feeding together on the same plant.[3] It is therefore very probable that a considerable number of supposed distinct species are only individual varieties.
Cases of variation similar to those now adduced among butterflies might be increased indefinitely, but it is as well to note that such important characters as the neuration of the wings, on which generic and family distinctions are often established, are also subject to variation. The Rev. R. P. Murray, in 1872, laid before the Entomological Society examples of such variation in six species of butterflies, and other cases have been since described. The larvæ of butterflies and moths are also very variable, and one observer recorded in the Proceedings of the Entomological Society for 1870 no less than sixteen varieties of the caterpillar of the bedstraw hawk-moth (Deilephela galii).
Variation among Lizards.
Passing on from the lower animals to the vertebrata, we find more abundant and more definite evidence as to the extent and amount of individual variation. I will first give a case among the Reptilia from some of Mr. Darwin's unpublished MSS., which have been kindly lent me by Mr. Francis Darwin.
"M. Milne Edwards (Annales des Sci. Nat., I ser., tom. xvi. p. 50) has given a curious table of measurements of fourteen specimens of Lacerta muralis; and, taking the length of the head as a standard, he finds the neck, trunk, tail, front and hind legs, colour, and femoral pores, all varying wonderfully; and so it is more or less with other species. So apparently trifling a character as the scales on the head affording almost the only constant characters."
As the table of measurements above referred to would give no clear conception of the nature and amount of the variation without a laborious study and comparison of the figures, I have endeavoured to find a method of presenting the facts to the eye, so that they may be easily grasped and appreciated. In the diagram opposite, the comparative variations of the different organs of this species are given by means of variously bent lines. The head is represented by a straight line because it presented (apparently) no variation. The body is next given, the specimens being arranged in the order of their size from No. 1, the smallest, to No. 14, the largest, the actual lengths being laid down from a base line at a suitable distance below, in this case two inches below the centre, the mean length of the body of the fourteen specimens being two inches. The respective lengths of the neck, legs, and toe of each specimen are then laid down in the same manner at convenient distances apart for comparison; and we see that their variations bear no definite relation to those of the body, and not much to those of each other. With the exception of No. 5, in which all the parts agree in being large, there is a marked independence of each part, shown by the lines often curving in opposite directions; which proves that in those specimens one part is large while the other is small. The actual amount of the variation is very great, ranging from one-sixth of the mean length in the neck to considerably more than a fourth in the hind leg, and this among only fourteen examples which happen to be in a particular museum.
To prove that this is not an isolated case, Professor Milne Edwards also gives a table showing the amount of variation in the museum specimens of six common species of lizards, also taking the head as the standard, so that the comparative variation of each part to the head is given. In the accompanying diagram (Fig. 2) the variations are exhibited by means of lines of varying length. It will be understood that, however much the specimens varied in size, if they had kept the same proportions, the variation line would have been in every case reduced to a point, as in the neck of L. velox which exhibits no variation. The different proportions of the variation lines for each species may show a distinct mode of variation, or may be merely due to the small and differing number of specimens; for it is certain that whatever amount of variation occurs among a few specimens will be greatly increased when a much larger number of specimens are examined. That the amount of variation is large, may be seen by comparing it with the actual length of the head (given below the diagram) which was used as a standard in determining the variation, but which itself seems not to have varied.[4]
Variation among Birds.
Coming now to the class of Birds, we find much more copious evidence of variation. This is due partly to the fact that Ornithology has perhaps a larger body of devotees than any other branch of natural history (except entomology); to the moderate size of the majority of birds; and to the circumstance that the form and dimensions of the wings, tail, beak, and feet offer the best generic and specific characters and can all be easily measured and compared. The most systematic observations on the individual variation of birds have been made by Mr. J. A. Allen, in his remarkable memoir: "On the Mammals and Winter Birds of East Florida, with an examination of certain assumed specific characters in Birds, and a sketch of the Bird Faunæ of Eastern North America," published in the Bulletin of the Museum of Comparative Zoology at Harvard College, Cambridge, Massachusetts, in 1871. In this work exact measurements are given of all the chief external parts of a large number of species of common American birds, from twenty to sixty or more specimens of each species being measured, so that we are able to determine with some precision the nature and extent of the variation that usually occurs. Mr. Allen says: "The facts of the case show that a variation of from 15 to 20 per cent in general size, and an equal degree of variation in the relative size of different parts, may be ordinarily expected among specimens of the same species and sex, taken at the same locality, while in some cases the variation is even greater than this." He then goes on to show that each part varies to a considerable extent independently of the other parts; so that when the size varies, the proportions of all the parts vary, often to a much greater amount. The wing and tail, for example, besides varying in length, vary in the proportionate length of each feather, and this causes their outline to vary considerably in shape. The bill also varies in length, width, depth, and curvature. The tarsus varies in length, as does each toe separately and independently; and all this not to a minute degree requiring very careful measurement to detect it at all, but to an amount easily seen without any measurement, as it averages one-sixth of the whole length and often reaches one-fourth. In twelve species of common perching birds the wing varied (in from twenty-five to thirty specimens) from 14 to 21 per cent of the mean length, and the tail from 13.8 to 23.4 per cent. The variation of the form of the wing can be very easily tested by noting which feather is longest, which next in length, and so on, the respective feathers being indicated by the numbers 1, 2, 3, etc., commencing with the outer one. As an example of the irregular variation constantly met with, the following occurred among twenty-five specimens of Dendraeca coronata. Numbers bracketed imply that the corresponding feathers were of equal length.[5]
Relative Lengths of Primary Wing Feathers of Dendræca coronata.
| Longest | Second in Length | Third in Length | Fourth in Length | Fifth in Length | Sixth in Length |
| --- | --- | --- | --- | --- | --- |
| 2 | 3 | 1 | 4 | 5 | 6 |
| 3 | 2 | 4 | 1 | 5 | 6 |
| {3, 2} | 4 | 1 | 5 | 6 | 7 |
| {2, 3} | 4 | 1 | 5 | 6 | 7 |
| 2 | 1 | 3 | 4 | 5 | {6, 7, 8, 9} |
Here we have five very distinct proportionate lengths of the wing feathers, any one of which is often thought sufficient to characterise a distinct species of bird; and though this is rather an extreme case, Mr. Allen assures us that "the comparison, extended in the table to only a few species, has been carried to scores of others with similar results."
Along with this variation in size and proportions there occurs a large amount of variation in colour and markings. "The difference in intensity of colour between the extremes of a series of fifty or one hundred specimens of any species, collected at a single locality, and nearly at the same season of the year, is often as great as occurs between truly distinct species." But there is also a great amount of individual variability in the markings of the same species. Birds having the plumage varied with streaks and spots differ exceedingly in different individuals of the same species in respect to the size, shape, and number of these marks, and in the general aspect of the plumage resulting from such variations. "In the common song sparrow (Melospiza melodia), the fox-coloured sparrow (Passerella iliaca), the swamp sparrow (Melospiza palustris), the black and white creeper (Mniotilta varia), the water-wagtail (Seiurus novæboracencis), in Turdus fuscescens and its allies, the difference in the size of the streaks is often very considerable. In the song sparrow they vary to such an extent that in some cases they are reduced to narrow lines; in others so enlarged as to cover the greater part of the breast and sides of the body, sometimes uniting on the middle of the breast into a nearly continuous patch."
Mr. Allen then goes on to particularise several species in which such variations occur, giving cases in which two specimens taken at the same place on the same day exhibited the two extremes of coloration. Another set of variations is thus described: "The white markings so common on the wings and tails of birds, as the bars formed by the white tips of the greater wing-coverts, the white patch occasionally present at the base of the primary quills, or the white band crossing them, and the white patch near the end of the outer tail-feathers are also extremely liable to variation in respect to their extent and the number of feathers to which, in the same species, these markings extend." It is to be especially noted that all these varieties are distinct from those which depend on season, on age, or on sex, and that they are such as have in many other species been considered to be of specific value.
These variations of colour could not be presented to the eye without a series of carefully engraved plates, but in order to bring Mr. Allen's measurements, illustrating variations of size and proportion, more clearly before the reader, I have prepared a series of diagrams illustrating the more important facts and their bearings on the Darwinian theory.
The first of these is intended, mainly, to show the actual amount of the variation, as it gives the true length of the wing and tail in the extreme cases among thirty specimens of each of three species. The shaded portion shows the minimum length, the unshaded portion the additional length in the maximum. The point to be specially noted here is, that in each of these common species there is about the same amount of variation, and that it is so great as to be obvious at a glance.
There is here no question of "minute" or "infinitesimal" variation, which many people suppose to be the only kind of variation that exists. It cannot even be called small; yet from all the evidence we now possess it seems to be the amount which characterises most of the common species of birds.
It may be said, however, that these are the extreme variations, and only occur in one or two individuals, while the great majority exhibit little or no difference. Other diagrams will show that this is not the case; but even if it were so, it would be no objection at all, because these are the extremes among thirty specimens only. We may safely assume that these thirty specimens, taken by chance, are not, in the case of all these species, exceptional lots, and therefore we might expect at least two similarly varying specimens in each additional thirty. But the number of individuals, even in a very rare species, is probably thirty thousand or more, and in a common species thirty, or even three hundred, millions. Even one individual in each thirty, varying to the amount shown in the diagram, would give at least a million in the total population of any common bird, and among this million many would vary much more than the extreme among thirty only. We should thus have a vast body of individuals varying to a large extent in the length of the wings and tail, and offering ample material for the modification of these organs by natural selection. We will now proceed to show that other parts of the body vary, simultaneously, but independently, to an equal amount.
The first bird taken is the common Bob-o-link or Rice-bird (Dolichonyx oryzivorus), and the Diagram, Fig. 4, exhibits the variations of seven important characters in twenty male adult specimens.[6] These characters are—the lengths of the body, wing, tail, tarsus, middle toe, outer toe, and hind toe, being as many as can be conveniently exhibited in one diagram. The length of the body is not given by Mr. Allen, but as it forms a convenient standard of comparison, it has been obtained by deducting the length of the tail from the total length of the birds as given by him. The diagram has been constructed as follows:—The twenty specimens are first arranged in a series according to the body-lengths (which may be considered to give the size of the bird), from the shortest to the longest, and the same number of vertical lines are drawn, numbered from one to twenty. In this case (and wherever practicable) the body-length is measured from the lower line of the diagram, so that the actual length of the bird is exhibited as well as the actual variations of length. These can be well estimated by means of the horizontal line drawn at the mean between the two extremes, and it will be seen that one-fifth of the total number of specimens taken on either side exhibits a very large amount of variation, which would of course be very much greater if a hundred or more specimens were compared. The lengths of the wing, tail, and other parts are then laid down, and the diagram thus exhibits at a glance the comparative variation of these parts in every specimen as well as the actual amount of variation in the twenty specimens; and we are thus enabled to arrive at some important conclusions.
We note, first, that the variations of none of the parts follow the variations of the body, but are sometimes almost in an opposite direction. Thus the longest wing corresponds to a rather small body, the longest tail to a medium body, while the longest leg and toes belong to only a moderately large body. Again, even related parts do not constantly vary together but present many instances of independent variation, as shown by the want of parallelism in their respective variation-lines. In No. 5 (see Fig. 4) the wing is very long, the tail moderately so; while in No. 6 the wing is much shorter while the tail is considerably longer. The tarsus presents comparatively little variation; and although the three toes may be said to vary in general together, there are many divergencies; thus, in passing from No. 9 to No. 10, the outer toe becomes longer, while the hind toe becomes considerably shorter; while in Nos. 3 and 4 the middle toe varies in an opposite way to the outer and the hind toes.
In the next diagram (Fig. 5) we have the variations in forty males of the Red-winged Blackbird (Agelæus phœniceus), and here we see the same general features. One-fifth of the whole number of specimens offer a large amount of variation either below or above the mean; while the wings, tail, and head vary quite independently of the body. The wing and tail too, though showing some amount of correlated variation, yet in no less than nine cases vary in opposite directions as compared with the preceding species.
The next diagram (Fig. 6), showing the variations of thirty-one males of the Cardinal bird (Cardinalis virginianus), exhibits these features much more strongly. The amount of variation in proportion to the size of the bird is very much greater; while the variations of the wing and tail not only have no correspondence with that of the body but very little with each other. In no less than twelve or thirteen instances they vary in opposite directions, while even where they correspond in direction the amount of the variation is often very disproportionate.
As the proportions of the tarsi and toes of birds have great influence on their mode of life and habits and are often used as specific or even generic characters, I have prepared a diagram (Fig. 7) to show the variation in these parts only, among twenty specimens of each of four species of birds, four or five of the most variable alone being given. The extreme divergence of each of the lines in a vertical direction shows the actual amount of variation; and if we consider the small length of the toes of these small birds, averaging about three-quarters of an inch, we shall see that the variation is really very large; while the diverging curves and angles show that each part varies, to a great extent, independently. It is evident that if we compared some thousands of individuals instead of only twenty, we should have an amount of independent variation occurring each year which would enable almost any modification of these important organs to be rapidly effected.
In order to meet the objection that the large amount of variability here shown depends chiefly on the observations of one person and on the birds of a single country, I have examined Professor Schlegel's Catalogue of the Birds in the Leyden Museum, in which he usually gives the range of variation of the specimens in the museum (which are commonly less than a dozen and rarely over twenty) as regards some of their more important dimensions. These fully support the statement of Mr. Allen, since they show an equal amount of variability when the numbers compared are sufficient, which, however, is not often the case. The accompanying diagram exhibits the actual differences of size in five organs which occur in five species taken almost at random from this catalogue. Here, again, we perceive that the variation is decidedly large, even among a very small number of specimens; while the facts all show that there is no ground whatever for the common assumption that natural species consist of individuals which are nearly all alike, or that the variations which occur are "infinitesimal" or even "small."
The proportionate Number of Individuals which present a considerable amount of Variation.
The notion that variation is a comparatively exceptional phenomenon, and that in any case considerable variations occur very rarely in proportion to the number of individuals which do not vary, is so deeply rooted that it is necessary to show by every possible method of illustration how completely opposed it is to the facts of nature. I have therefore prepared some diagrams in which each of the individual birds measured is represented by a spot, placed at a proportionate distance, right and left, from the median line accordingly as it varies in excess or defect of the mean length as regards the particular part compared. As the object in this set of diagrams is to show the number of individuals which vary considerably in proportion to those which vary little or not at all, the scale has been enlarged in order to allow room for placing the spots without overlapping each other.
In the diagram opposite twenty males of Icterus Baltimore are registered, so as to exhibit to the eye the proportionate number of specimens which vary, to a greater or less amount, in the length of the tail, wing, tarsus, middle toe, hind toe, and bill. It will be noticed that there is usually no very great accumulation of dots about the median line which shows the average dimensions, but that a considerable number are spread at varying distances on each side of it.
In the next diagram (Fig. 10), showing the variation among forty males of Agelæus phœniceus, this approach to an equable spreading of the variations is still more apparent; while in Fig. 12, where fifty-eight specimens of Cardinalis virginianus are registered, we see a remarkable spreading out of the spots, showing in some of the characters a tendency to segregation into two or more groups of individuals, each varying considerably from the mean.
In order fully to appreciate the teaching of these diagrams, we must remember, that, whatever kind and amount of variations are exhibited by the few specimens here compared, would be greatly extended and brought into symmetrical form if large numbers—thousands or millions—were subjected to the same process of measurement and registration. We know, from the general law which governs variations from a mean value, that with increasing numbers the range of variation of each part would increase also, at first rather rapidly and then more slowly; while gaps and irregularities would be gradually filled up, and at length the distribution of the dots would indicate a tolerably regular curve of double curvature like those shown in Fig. 11.
The great divergence of the dots, when even a few specimens are compared, shows that the curve, with high numbers, would be a flat one like the lower curve in the illustration here given. This being the case it would follow that a very large proportion of the total number of individuals constituting a species would diverge considerably from its average condition as regards each part or organ; and as we know from the previous diagrams of variation (Figs. 1 to 7) that each part varies to a considerable extent, independently, the materials constantly ready for natural selection to act upon are abundant in quantity and very varied in kind. Almost any combination of variations of distinct parts will be available, where required; and this, as we shall see further on, obviates one of the most weighty objections which have been urged against the efficiency of natural selection in producing new species, genera, and higher groups.
Variation in the Mammalia.
Owing to the generally large size of this class of animals, and the comparatively small number of naturalists who study them, large series of specimens are only occasionally examined and compared, and thus the materials for determining the question of their variability in a state of nature are comparatively scanty. The fact that our domestic animals belonging to this group, especially dogs, present extreme varieties not surpassed even by pigeons and poultry among birds, renders it almost certain that an equal amount of variability exists in the wild state; and this is confirmed by the example of a species of squirrel (Sciurus carolinensis), of which sixteen specimens, all males and all taken in Florida, were measured and tabulated by Mr. Allen. The diagram here given shows, that, both the general amount of the variation and the independent variability of the several members of the body, accord completely with the variations so common in the class of birds; while their amount and their independence of each other are even greater than usual.
Variation in the Internal Organs of Animals.
In case it should be objected that the cases of variation hitherto adduced are in the external parts only, and that there is no proof that the internal organs vary in the same manner, it will be advisable to show that such varieties also occur. It is, however, impossible to adduce the same amount of evidence in this class of variation, because the great labour of dissecting large numbers of specimens of the same species is rarely undertaken, and we have to trust to the chance observations of anatomists recorded in their regular course of study.
It must, however, be noted that a very large proportion of the variations already recorded in the external parts of animals necessarily imply corresponding internal variations. When feet and legs vary in size, it is because the bones vary; when the head, body, limbs, and tail change their proportions, the bony skeleton must also change; and even when the wing or tail feathers of birds become longer or more numerous, there is sure to be a corresponding change in the bones which support and the muscles which move them. I will, however, give a few cases of variations which have been directly observed.
Mr. Frank E. Beddard has kindly communicated to me some remarkable variations he has observed in the internal organs of a species of earthworm (Perionyx excavatus). The normal characters of this species are—
Setæ forming a complete row round each segment.
Two pairs of spermathecæ—spherical pouches without diverticulæ—in segments 8 and 9.
Two pairs of testes in segments 11 and 12.
Ovaries, a single pair in segment 13.
Oviducts open by a common pore in the middle of segment 14.
Vasa deferentia open separately in segment 18, each furnished at its termination with a large prostate gland.
Between two and three hundred specimens were examined, and among them thirteen specimens exhibited the following marked variations:—
(1) The number of the spermathecæ varied from two to three or four pairs, their position also varying.
(2) There were occasionally two pairs of ovaries, each with its own oviduct; the external apertures of these varied in position, being upon segments 13 and 14, 14 and 15, or 15 and 16. Occasionally when there was only the normal single oviduct pore present it varied in position, once occurring on the 10th, and once on the 11th segment.
(3) The male generative pores varied in position from segments 14 to 20. In one instance there were two pairs instead of the normal single pair, and in this case each of the four apertures had its own prostate gland.
Mr. Beddard remarks that all, or nearly all, the above variations are found normally in other genera and species.
When we consider the enormous number of earthworms and the comparatively very small number of individuals examined, we may be sure, not only that such variations as these occur with considerable frequency, but also that still more extraordinary deviations from the normal structure may often exist.
The next example is taken from Mr. Darwin's unpublished MSS.
"In some species of Shrews (Sorex) and in some field-mice (Arvicola), the Rev. L. Jenyns (Ann. Nat. Hist., vol. vii. pp. 267, 272) found the proportional length of the intestinal canal to vary considerably. He found the same variability in the number of the caudal vertebræ. In three specimens of an Arvicola he found the gall-bladder having a very different degree of development, and there is reason to believe it is sometimes absent. Professor Owen has shown that this is the case with the gall-bladder of the giraffe."
Dr. Crisp (Proc. Zool. Soc., 1862, p. 137) found the gall-bladder present in some specimens of Cervus superciliaris while absent in others; and he found it to be absent in three giraffes which he dissected. A double gall-bladder was found in a sheep, and in a small mammal preserved in the Hunterian Museum there are three distinct gall-bladders.
The length of the alimentary canal varies greatly. In three adult giraffes described by Professor Owen it was from 124 to 136 feet long; one dissected in France had this canal 211 feet long; while Dr. Crisp measured one of the extraordinary length of 254 feet, and similar variations are recorded in other animals.[7]
The number of ribs varies in many animals. Mr. St. George Mivart says: "In the highest forms of the Primates, the number of true ribs is seven, but in Hylobates there are sometimes eight pairs. In Semnopithecus and Colobus there are generally seven, but sometimes eight pairs of true ribs. In the Cebidæ there are generally seven or eight pairs, but in Ateles sometimes nine" (Proc. Zool. Soc., 1865, p. 568). In the same paper it is stated that the number of dorsal vertebræ in man is normally twelve, very rarely thirteen. In the Chimpanzee there are normally thirteen dorsal vertebræ, but occasionally there are fourteen or only twelve.
Variations in the Skull.
Among the nine adult male Orang-utans, collected by myself in Borneo, the skulls differed remarkably in size and proportions. The orbits varied in width and height, the cranial ridge was either single or double, either much or little developed, and the zygomatic aperture varied considerably in size. I noted particularly that these variations bore no necessary relation to each other, so that a large temporal muscle and zygomatic aperture might exist either with a large or a small cranium; and thus was explained the curious difference between the single-crested and the double-crested skulls, which had been supposed to characterise distinct species. As an instance of the amount of variation in the skulls of fully adult male orangs, I found the width between the orbits externally to be only 4 inches in one specimen and fully 5 inches in another.
Exact measurements of large series of comparable skulls of the mammalia are not easily found, but from those available I have prepared three diagrams (Figs. 14, 15, and 16), in order to exhibit the facts of variation in this very important organ. The first shows the variation in ten specimens of the common wolf (Canis lupus) from one district in North America, and we see that it is not only large in amount, but that each part exhibits a considerable independent variability.[8]
In Diagram 15 we have the variations of eight skulls of the Indian Honey-bear (Ursus labiatus), as tabulated by the late Dr. J. E. Gray of the British Museum. For such a small number of specimens the amount of variation is very large—from one-eighth to one-fifth of the mean size,—while there are an extraordinary number of instances of independent variability. In Diagram 16 we have the length and width of twelve skulls of adult males of the Indian wild boar (Sus cristatus), also given by Dr. Gray, exhibiting in both sets of measurements a variation of more than one-sixth, combined with a very considerable amount of independent variability.[9]
The few facts now given, as to variations of the internal parts of animals, might be multiplied indefinitely by a search through the voluminous writings of comparative anatomists. But the evidence already adduced, taken in conjunction with the much fuller evidence of variation in all external organs, leads us to the conclusion that wherever variations are looked for among a considerable number of individuals of the more common species they are sure to be found; that they are everywhere of considerable amount, often reaching 20 per cent of the size of the part implicated; and that they are to a great extent independent of each other, and thus afford almost any combination of variations that may be needed.
It must be particularly noticed that the whole series of variation-diagrams here given (except the three which illustrate the number of varying individuals) in every case represent the actual amount of the variation, not on any reduced or enlarged scale, but as it were life-size. Whatever number of inches or decimals of an inch the species varies in any of its parts is marked on the diagrams, so that with the help of an ordinary divided rule or a pair of compasses the variation of the different parts can be ascertained and compared just as if the specimens themselves were before the reader, but with much greater ease.
In my lectures on the Darwinian theory in America and in this country I used diagrams constructed on a different plan, equally illustrating the large amount of independent variability, but less simple and less intelligible. The present method is a modification of that used by Mr. Francis Galton in his researches on the theory of variability, the upper line (showing the variability of the body) in Diagrams 4, 5, 6, and 13, being laid down on the method he has used in his experiments with sweet-peas and in pedigree moth-breeding.[10] I believe, after much consideration, and many tedious experiments in diagram-making, that no better method can be adopted for bringing before the eye, both the amount and the peculiar features of individual variability.
Variations of the Habits of Animals.
Closely connected with those variations of internal and external structure which have been already described, are the changes of habits which often occur in certain individuals or in whole species, since these must necessarily depend upon some corresponding change in the brain or in other parts of the organism; and as these changes are of great importance in relation to the theory of instinct, a few examples of them will be now adduced.
The Kea (Nestor notabilis) is a curious parrot inhabiting the mountain ranges of the Middle Island of New Zealand. It belongs to the family of Brush-tongued parrots, and naturally feeds on the honey of flowers and the insects which frequent them, together with such fruits or berries as are found in the region. Till quite recently this comprised its whole diet, but since the country it inhabits has become occupied by Europeans it has developed a taste for a carnivorous diet, with alarming results. It began by picking the sheepskins hung out to dry or the meat in process of being cured. About 1868 it was first observed to attack living sheep, which had frequently been found with raw and bleeding wounds on their backs. Since then it is stated that the bird actually burrows into the living sheep, eating its way down to the kidneys, which form its special delicacy. As a natural consequence, the bird is being destroyed as rapidly as possible, and one of the rare and curious members of the New Zealand fauna will no doubt shortly cease to exist. The case affords a remarkable instance of how the climbing feet and powerful hooked beak developed for one set of purposes can be applied to another altogether different purpose, and it also shows how little real stability there may be in what appear to us the most fixed habits of life. A somewhat similar change of diet has been recorded by the Duke of Argyll, in which a goose, reared by a golden eagle, was taught by its foster-parent to eat flesh, which it continued to do regularly and apparently with great relish.[11]
Change of habits appears to be often a result of imitation, of which Mr. Tegetmeier gives some good examples. He states that if pigeons are reared exclusively with small grain, as wheat or barley, they will starve before eating beans. But when they are thus starving, if a bean-eating pigeon is put among them, they follow its example, and thereafter adopt the habit. So fowls sometimes refuse to eat maize, but on seeing others eat it, they do the same and become excessively fond of it. Many persons have found that their yellow crocuses were eaten by sparrows, while the blue, purple, and white coloured varieties were left untouched; but Mr. Tegetmeier, who grows only these latter colours, found that after two years the sparrows began to attack them, and thereafter destroyed them quite as readily as the yellow ones; and he believes it was merely because some bolder sparrow than the rest set the example. On this subject Mr. Charles C. Abbott well remarks: "In studying the habits of our American birds—and I suppose it is true of birds everywhere—it must at all times be remembered that there is less stability in the habits of birds than is usually supposed; and no account of the habits of any one species will exactly detail the various features of its habits as they really are, in every portion of the territory it inhabits."[12]
Mr. Charles Dixon has recorded a remarkable change in the mode of nest-building of some common chaffinches which were taken to New Zealand and turned out there. He says: "The cup of the nest is small, loosely put together, apparently lined with feathers, and the walls of the structure are prolonged for about 18 inches, and hang loosely down the side of the supporting branch. The whole structure bears some resemblance to the nests of the hangnests (Icteridæ), with the exception that the cavity is at the top. Clearly these New Zealand chaffinches were at a loss for a design when fabricating their nest. They had no standard to work by, no nests of their own kind to copy, no older birds to give them any instruction, and the result is the abnormal structure I have just described."[13]
These few examples are sufficient to show that both the habits and instincts of animals are subject to variation; and had we a sufficient number of detailed observations we should probably find that these variations were as numerous, as diverse in character, as large in amount, and as independent of each other as those which we have seen to characterise their bodily structure.
The Variability of Plants.
The variability of plants is notorious, being proved not only by the endless variations which occur whenever a species is largely grown by horticulturists, but also by the great difficulty that is felt by botanists in determining the limits of species in many large genera. As examples we may take the roses, the brambles, and the willows as well illustrating this fact. In Mr. Baker's Revision of the British Roses (published by the Linnean Society in 1863), he includes under the single species, Rosa canina—the common dog-rose—no less than twenty-eight named varieties distinguished by more or less constant characters and often confined to special localities, and to these are referred about seventy of the species of British and continental botanists. Of the genus Rubus or bramble, five British species are given in Bentham's Handbook of the British Flora, while in the fifth edition of Babington's Manual of British Botany, published about the same time, no less than forty-five species are described. Of willows (Salix) the same two works enumerate fifteen and thirty-one species respectively. The hawkweeds (Hieracium) are equally puzzling, for while Mr. Bentham admits only seven British species, Professor Babington describes no less than thirty-two, besides several named varieties.
A French botanist, Mons. A. Jordan, has collected numerous forms of a common little plant, the spring whitlow-grass (Draba verna); he has cultivated these for several successive years, and declares that they preserve their peculiarities unchanged; he also says that they each come true from seed, and thus possess all the characteristics of true species. He has described no less than fifty-two such species or permanent varieties, all found in the south of France; and he urges botanists to follow his example in collecting, describing, and cultivating all such varieties as may occur in their respective districts. Now, as the plant is very common almost all over Europe and ranges from North America to the Himalayas, the number of similar forms over this wide area would probably have to be reckoned by hundreds if not by thousands.
The class of facts now adduced must certainly be held to prove that in many large genera and in some single species there is a very large amount of variation, which renders it quite impossible for experts to agree upon the limits of species. We will now adduce a few striking cases of individual variation.
The distinguished botanist, Alp. de Candolle, made a special study of the oaks of the whole world, and has stated some remarkable facts as to their variability. He declares that on the same branch of oak he has noted the following variations: (1) In the length of the petiole, as one to three; (2) in the form of the leaf, being either elliptical or obovoid; (3) in the margin being entire, or notched, or even pinnatifid; (4) in the extremity being acute or blunt; (5) in the base being sharp, blunt, or cordate; (6) in the surface being pubescent or smooth; (7) the perianth varies in depth and lobing; (8) the stamens vary in number, independently; (9) the anthers are mucronate or blunt; (10) the fruit stalks vary greatly in length, often as one to three; (11) the number of fruits varies; (12) the form of the base of the cup varies; (13) the scales of the cup vary in form; (14) the proportions of the acorns vary; (15) the times of the acorns ripening and falling vary.
Besides this, many species exhibit well-marked varieties which have been described and named, and these are most numerous in the best-known species. Our British oak (Quercus robur) has twenty-eight varieties; Quercus Lusitanica has eleven; Quercus calliprinos has ten; and Quercus coccifera eight.
A most remarkable case of variation in the parts of a common flower has been given by Dr. Hermann Müller. He examined two hundred flowers of Myosurus minimus, among which he found thirty-five different proportions of the sepals, petals, and anthers, the first varying from four to seven, the second from two to five, and the third from two to ten. Five sepals occurred in one hundred and eighty-nine out of the two hundred, but of these one hundred and five had three petals, forty-six had four petals, and twenty-six had five petals; but in each of these sets the anthers varied in number from three to eight, or from two to nine. We have here an example of the same amount of "independent variability" that, as we have seen, occurs in the various dimensions of birds and mammals; and it may be taken as an illustration of the kind and degree of variability that may be expected to occur among small and little specialised flowers.[14]
In the common wind-flower (Anemone nemorosa) an almost equal amount of variation occurs; and I have myself gathered in one locality flowers varying from 7/8 inch to 1 3/4 inch in diameter; the bracts varying from 1 1/2 inches to 4 inches across; and the petaloid sepals either broad or narrow, and varying in number from five to ten. Though generally pure white on their upper surface, some specimens are a full pink, while others have a decided bluish tinge.
Mr. Darwin states that he carefully examined a large number of plants of Geranium phæum and G. pyrenaicum (not perhaps truly British but frequently found wild), which had escaped from cultivation, and had spread by seed in an open plantation; and he declares that "the seedlings varied in almost every single character, both in their flowers and foliage, to a degree which I have never seen exceeded; yet they could not have been exposed to any great change of their conditions."[15]
The following examples of variation in important parts of plants were collected by Mr. Darwin and have been copied from his unpublished MSS.:—
"De Candolle (Mem. Soc. Phys. de Genève, tom. ii. part ii. p. 217) states that Papaver bracteatum and P. orientale present indifferently two sepals and four petals, or three sepals and six petals, which is sufficiently rare with other species of the genus."
"In the Primulaceæ and in the great class to which this family belongs the unilocular ovarium is free, but M. Dubury (Mem. Soc. Phys. de Genève, tom. ii. p. 406) has often found individuals in Cyclamen hederæfolium, in which the base of the ovary was connected for a third part of its length with the inferior part of the calyx."
"M. Aug. St. Hilaire (Sur la Gynobase, Mem. des Mus. d'Hist. Nat., tom. x. p. 134), speaking of some bushes of the Gomphia oleæfolia, which he at first thought formed a quite distinct species, says: 'Voilà donc dans un même individu des loges et un style qui se rattachent tantôt a un axe vertical, et tantôt a un gynobase; donc celui-ci n'est qu'un axe veritable; mais cet axe est deprimé au lieu d'être vertical." He adds (p. 151), 'Does not all this indicate that nature has tried, in a manner, in the family of Rutaceæ to produce from a single multilocular ovary, one-styled and symmetrical, several unilocular ovaries, each with its own style.' And he subsequently shows that, in Xanthoxylum monogynum, 'it often happens that on the same plant, on the same panicle, we find flowers with one or with two ovaries;' and that this is an important character is shown by the Rutaceæ (to which Xanthoxylum belongs), being placed in a group of natural orders characterised by having a solitary ovary."
"De Candolle has divided the Cruciferæ into five sub-orders in accordance with the position of the radicle and cotyledons, yet Mons. T. Gay (Ann. des Scien. Nat., ser. i. tom. vii. p. 389) found in sixteen seeds of Petrocallis Pyrenaica the form of the embryo so uncertain that he could not tell whether it ought to be placed in the sub-orders 'Pleurorhizée' or 'Notorhizée'; so again (p. 400) in Cochlearia saxatilis M. Gay examined twenty-nine embryos, and of these sixteen were vigorously 'pleurorhizées,' nine had characters intermediate between pleuro- and notor- hizées, and four were pure notorhizées."
"M. Raspail asserts (Ann. des Scien. Nat., ser. i. tom. v. p. 440) that a grass (Nostus Borbonicus) is so eminently variable in its floral organisation, that the varieties might serve to make a family with sufficiently numerous genera and tribes—a remark which shows that important organs must be here variable."
Species which vary little.
The preceding statements, as to the great amount of variation occurring in animals and plants, do not prove that all species vary to the same extent, or even vary at all, but, merely, that a considerable number of species in every class, order, and family do so vary. It will have been observed that the examples of great variability have all been taken from common species, or species which have a wide range and are abundant in individuals. Now Mr. Darwin concludes, from an elaborate examination of the floras and faunas of several distinct regions, that common, wide ranging species, as a rule, vary most, while those that are confined to special districts and are therefore comparatively limited in number of individuals vary least. By a similar comparison it is shown that species of large genera vary more than species of small genera. These facts explain, to some extent, why the opinion has been so prevalent that variation is very limited in amount and exceptional in character. For naturalists of the old school, and all mere collectors, were interested in species in proportion to their rarity, and would often have in their collections a larger number of specimens of a rare species than of a species that was very common. Now as these rare species do really vary much less than the common species, and in many cases hardly vary at all, it was very natural that a belief in the fixity of species should prevail. It is not, however, as we shall see presently, the rare, but the common and widespread species which become the parents of new forms, and thus the non-variability of any number of rare or local species offers no difficulty whatever in the way of the theory of evolution.
Concluding Remarks.
We have now shown in some detail, at the risk of being tedious, that individual variability is a general character of all common and widespread species of animals or plants; and, further, that this variability extends, so far as we know, to every part and organ, whether external or internal, as well as to every mental faculty. Yet more important is the fact that each part or organ varies to a considerable extent independently of other parts. Again, we have shown, by abundant evidence, that the variation that occurs is very large in amount—usually reaching 10 or 20, and sometimes even 25 per cent of the average size of the varying part; while not one or two only, but from 5 to 10 per cent of the specimens examined exhibit nearly as large an amount of variation. These facts have been brought clearly before the reader by means of numerous diagrams, drawn to scale and exhibiting the actual variations in inches, so that there can be no possibility of denying either their generality or their amount. The importance of this full exposition of the subject will be seen in future chapters, when we shall frequently have to refer to the facts here set forth, especially when we deal with the various theories of recent writers and the criticisms that have been made of the Darwinian theory.
A full exposition of the facts of variation among wild animals and plants is the more necessary, because comparatively few of them were published in Mr. Darwin's works, while the more important have only been made known since the last edition of The Origin of Species was prepared; and it is clear that Mr. Darwin himself did not fully recognise the enormous amount of variability that actually exists. This is indicated by his frequent reference to the extreme slowness of the changes for which variation furnishes the materials, and also by his use of such expressions as the following: "A variety when once formed must again, perhaps after a long interval of time, vary or present individual differences of the same favourable nature as before" (Origin, p. 66). And again, after speaking of changed conditions "affording a better chance of the occurrence of favourable variations," he adds: "Unless such occur natural selection can do nothing" (Origin, p. 64). These expressions are hardly consistent with the fact of the constant and large amount of variation, of every part, in all directions, which evidently occurs in each generation of all the more abundant species, and which must afford an ample supply of favourable variations whenever required; and they have been seized upon and exaggerated by some writers as proofs of the extreme difficulties in the way of the theory. It is to show that such difficulties do not exist, and in the full conviction that an adequate knowledge of the facts of variation affords the only sure foundation for the Darwinian theory of the origin of species, that this chapter has been written.
|
# Further Still On The Wealth Of Stations
My fellow students and I have spent some time investigating my suspicion that serendipity, beyond worth, might account for the relative fortune of the few over the many. To this end we have set to creating perfectly fair games mimicking the manner in which wealth accumulates amongst the populace, that we might discover whether their outcomes should elevate some small lucky band of players well above their fellows.
Thus far we have seen that games of both random returns and losses of players' funds[1] and random trade between them[2] most certainly do so, but their rules failed to take into account either the value of labour or the cost of sustenance, somewhat weakening any conclusions that we might have drawn from their study.
We have consequently spent some time creating further rules to rectify these deficiencies.
### A Rule Of Labour
In formulating a rule to reflect wealth creation through labour, we postulated that the greater one's assets, the more one might prove able to create wealth from them; the more land one owns, for example, the more food one might produce. Our rule of labour is therefore rather similar to that of our first game in that each player adds to their funds a random proportion of them somewhere between zero and an upper limit. Specifically, each player's funds are updated according to the rule
\begin{align*} u &\sim U(0, b)\\ x &\leftarrow x + u \times x \end{align*}
where $$U(0, b)$$ represents a uniformly distributed random variable with lower and upper bounds of zero and $$b$$ respectively, and $$x$$ that player's funds.
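By way of illustration, a straightforward JavaScript rendering of this rule, of our own devising for this note rather than taken from our decks, might read
function labour(x, b) {
 // add to the funds x a random proportion of them drawn uniformly from [0, b)
 var u = b*Math.random();
 return x + u*x;
}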
In adding this rule to our trading game we split its play into alternate rounds, the first of which gave every player a turn at labour and the second a turn at trade.
You will no doubt recall that in our trading game each player would in turn contract a number of trades equal to the smallest whole number no less than their funds with randomly chosen players, with each trade exchanging up to some fraction $$c$$ of the lesser of their funds by
\begin{align*} u &\sim U(-c, c)\\ x^\prime_1 &= x_1 - u \times \min\left(x_1, x_2\right)\\ x^\prime_2 &= x_2 + u \times \min\left(x_i, x_2\right) \end{align*}
where $$x_1$$ and $$x_2$$ are the players' funds beforehand and $$x^\prime_1$$ and $$x^\prime_2$$ their funds afterwards.
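In the same purely illustrative spirit, a single trade between a pair of players might be rendered in JavaScript as
function trade(x1, x2, c) {
 // exchange up to a fraction c of the lesser of the players' funds,
 // drawing u uniformly from [-c, c)
 var u = c*(2*Math.random()-1);
 var m = Math.min(x1, x2);
 return [x1 - u*m, x2 + u*m];
}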
Deck 1 demonstrates the effect of adding the labour rule, with a return of up to five and one tenth of a percent, to our trading game.
Deck 1: A Game Of Labour And Trade
Once again, the players' initial funds are marked by a red line which rapidly falls toward zero, demonstrating that the players typically have significantly more funds at the conclusion of the game than they had at its commencement.
Nevertheless, we still find that a lucky few profit far above their fellows as is emphasised by deck 2 which shows the players' funds from the greatest to the least after each turn.
Deck 2: Sorted By Decreasing Funds
To gain some sense of how the outcomes are distributed, we put together deck 3 to construct a histogram of them.
Deck 3: The Histogram Of Outcomes
That the players by and large increase their funds many times over in this game is reflected by the fact that the upper bound of the histogram is thousands of millions of times larger than their initial funds.
Even so, we can see that it is still far more likely that a player should finish the game amongst the least wealthy than amongst the most.
### A Rule Of Sustenance
In formulating a rule of sustenance, my fellow students and I thought it not unreasonable to set a hard and fast lower limit upon the funds consumed at each turn since one cannot live by breath alone!
To this end, we added a third round to each turn in which each player consumed the sum of a random proportion between zero and $$d$$ of their funds and a fixed quantity $$d_0$$
\begin{align*} u &\sim U(0, d)\\ x &\leftarrow x - \left(u \times x + d_0\right) \end{align*}
Note that, as a consequence of this rule, it is entirely possible that a player's funds might fall below zero, under which circumstances we decided to simply set them to zero.
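Again by way of illustration only, this rule might be rendered in JavaScript as
function sustenance(x, d, d0) {
 // consume a random proportion of the funds x drawn uniformly from [0, d)
 // plus the fixed quantity d0, setting any negative result to zero
 var u = d*Math.random();
 x -= u*x + d0;
 return x>0 ? x : 0;
}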
Deck 4 shows the consequences of our rule of sustenance upon the players' outcomes, with up to five percent of their current funds plus one ten thousandth of their initial funds consumed at each turn.
Deck 4: A Game Of Labour, Trade And Sustenance
Whilst the players all unsurprisingly conclude the game a good deal worse off with the addition of this rule, it most certainly seems that it has further elevated the fortunate few above the unfortunate many. To see whether or not this is in fact the case, deck 5 plots a histogram of the players' final funds.
Deck 5: The Histogram Of Outcomes
The precipitous drop between the likelihood of the players finishing amongst the least prosperous and that of their finishing amongst the more prosperous indubitably confirms that the cost of sustenance has had a deleterious effect upon many of their fortunes.
### Reckoning The Players' Chances
Once again, my fellow students and I sought to quantify this observation by figuring the chances that a player might conclude the game with greater than some multiple or less than some fraction of their original funds and to this end we put together deck 6 to play the game many times over and count such outcomes.
Deck 6: The Chances Of Losses And Gains
Evidently the rule of sustenance has led to near ruin for many of their number and great riches for very few of them, despite being applied equally to each and every one!
We also sought to again measure the chances that a player who had fared poorly at the beginning of the game might conclude it with greater funds than one who had fared well by giving halved funds to half of them and doubled funds to the rest, counting how often those amongst the former had funds exceeding those of their counterparts amongst the latter after five hundred turns, as demonstrated by deck 7.
Deck 7: The Chance Of A Comeback
As has consistently been the case with our games, the unfortunate have little chance of turning the tables upon the fortunate by ultimately profiting above them, again highlighting the advantage to be had from a lucky start!
### The Precise Implications Of Sustenance
Naturally, my fellow students and I were keen to discover precisely the effect that these rules might have upon the players' funds at each turn. Alas, we have no more been able to figure the implications of the rule of trade upon this game than we were of it upon our last.
We were, however, able to express the consequences of a single turn of just the rules of labour and sustenance with
\begin{align*} b &\in (0, \infty)\\ d &\in (0, 1)\\ \\ u_1 &\sim U(0, b)\\ u_2 &\sim U(0, d)\\ \\ x_1 &= x_0 \times (1 + u_1)\\ x_2 &= x_1 \times (1 - u_2) - d_0 \end{align*}
where $$x_0$$ represents a player's funds at the start of a turn, $$x_1$$ their funds after the rule of labour and $$x_2$$ those after the rule of sustenance.
To simplify our endeavour, we elected not to figure the outcome of a turn directly, but rather that of
\begin{align*} x_2 &= x_0 \times (1 + u_1) \times (1 - u_2) - d_0\\ z &= \frac{x_2 + d_0}{x_0} = (1 + u_1) \times (1 - u_2) \end{align*}
This we further simplified by choosing to draw from uniform random variables that embodied the addition and subtraction to and from one with
\begin{align*} u_1^\prime &\sim U(1, 1+b)\\ u_2^\prime &\sim U(1-d, 1)\\ z &= u_1^\prime \times u_2^\prime \end{align*}
Now, to figure the outcomes of such products of uniformly distributed random variables we employed logarithms to transform them into sums
\begin{align*} v_1 &= \ln u_1^\prime\\ v_2 &= \ln u_2^\prime\\ \ln z &= \ln \left(u_1^\prime \times u_2^\prime\right) = \ln u_1^\prime + \ln u_2^\prime = v_1 + v_2 \end{align*}
The properties of the logarithm of a uniformly distributed random variable can be deduced from its cumulative distribution function, or CDF
$P_v(x) = \Pr(v \leqslant x) = \Pr(\ln u \leqslant x) = \Pr\left(u \leqslant e^x\right) = P_u\left(e^x\right)$
Specifically, for
\begin{align*} u &\sim U(\alpha, \beta) \quad 0 \leqslant \alpha \leqslant \beta < \infty\\ v &= \ln u\\ \end{align*}
we have
$P_u\left(e^x\right) = \begin{cases} 0 & e^x \in [0, \alpha)\\ \dfrac{e^x - \alpha}{\beta-\alpha} & e^x \in [\alpha, \beta)\\ 1 & e^x \in [\beta, \infty)\\ \end{cases}$
where $$\in$$ means within and $$[\alpha, \beta)$$ represents an interval including the lesser value $$\alpha$$ up to but not including the greater $$\beta$$, and consequently
$P_v(x) = \begin{cases} 0 & x \in (-\infty, \ln \alpha)\\ \dfrac{e^x - \alpha}{\beta-\alpha} & x \in [\ln \alpha, \ln \beta)\\ 1 & x \in [\ln \beta, \infty)\\ \end{cases}$
From this we might find the probability density function, or PDF, by differentiation
$p_v(x) = \begin{cases} 0 & x \in (-\infty, \ln \alpha)\\ \dfrac{e^x}{\beta-\alpha} & x \in [\ln \alpha, \ln \beta)\\ 0 & x \in [\ln \beta, \infty)\\ \end{cases}$
The PDF of the sum of a pair of random variables can be expressed in terms of their PDFs by means of a convolution
$p_{v_1+v_2}(x) = \int_{-\infty}^\infty p_{v_1}(x-t) \times p_{v_2}(t) \; \mathrm{d}t$
For the product being integrated to be not equal to zero requires that both of its terms are not equal to zero, from which we can deduce that
\begin{align*} p_{v_1}(x-t) \neq 0 &\implies x-t \in [\ln 1, \ln (1+b))\\ &\implies \phantom{x}-t \in [\ln 1 - x, \ln (1+b) - x)\\ &\implies \phantom{x-}\,t \in (x - \ln (1+b), x - \ln 1] = (x - \ln (1+b), x] \end{align*}
and
$p_{v_2}(t) \neq 0 \implies t \in [\ln (1-d), \ln 1) = [\ln (1-d), 0)$
where $$\implies$$ means implies.
Putting these conditions together yields
$t \in \begin{cases} [\ln (1-d), x] & x - \ln (1+b) < \ln(1-d) \wedge x < 0 \wedge x \geqslant \ln(1-d)\\ [\ln (1-d), 0) & x - \ln (1+b) < \ln(1-d) \wedge x \geqslant 0\\ (x - \ln (1+b), x] & x - \ln (1+b) \geqslant \ln(1-d) \wedge x < 0\\ (x - \ln (1+b), 0) & x - \ln (1+b) \geqslant \ln(1-d) \wedge x \geqslant 0 \wedge x - \ln (1+b) < 0\\ \varnothing & \mathrm{otherwise} \end{cases}$
where $$\wedge$$ means and and $$\varnothing$$ represents an empty interval, which upon some small rearrangement further yields
$t \in \begin{cases} [\ln (1-d), x] & x < \ln(1-d) + \ln (1+b) \wedge x < 0 \wedge x \geqslant \ln(1-d)\\ [\ln (1-d), 0) & x < \ln(1-d) + \ln (1+b) \wedge x \geqslant 0\\ (x - \ln (1+b), x] & x \geqslant \ln(1-d) + \ln (1+b) \wedge x < 0\\ (x - \ln (1+b), 0) & x \geqslant \ln(1-d) + \ln (1+b) \wedge x \geqslant 0 \wedge x < \ln (1+b)\\ \varnothing & \mathrm{otherwise} \end{cases}$
Now, if $$\ln(1-d) + \ln (1+b)$$ is less than zero then it is trivially the case that
$x < \ln(1-d) + \ln (1+b) \implies x < 0$
and so, in such circumstances, we may simplify the extent of $$t$$ for which the product is non-zero to
$t \in \begin{cases} [\ln (1-d), x] & x < \ln(1-d) + \ln (1+b) \wedge x \geqslant \ln(1-d)\\ (x - \ln (1+b), x] & x \geqslant \ln(1-d) + \ln (1+b) \wedge x < 0\\ (x - \ln (1+b), 0) & x \geqslant 0 \wedge x < \ln (1+b)\\ \varnothing & \mathrm{otherwise} \end{cases}$
Similarly, if $$\ln(1-d) + \ln (1+b)$$ is greater than or equal to zero then
$x \geqslant \ln(1-d) + \ln (1+b) \implies x \geqslant 0$
yielding
$t \in \begin{cases} [\ln (1-d), x] & x < 0 \wedge x \geqslant \ln(1-d)\\ [\ln (1-d), 0) & x < \ln(1-d) + \ln (1+b) \wedge x \geqslant 0\\ (x - \ln (1+b), 0) & x \geqslant \ln(1-d) + \ln (1+b) \wedge x < \ln (1+b)\\ \varnothing & \mathrm{otherwise} \end{cases}$
Consequently, we have
\begin{align*} \ln(1-d) + \ln (1+b) < 0 &\implies t \in \begin{cases} [\ln (1-d), x] & x \in [\ln(1-d), \ln(1-d) + \ln (1+b))\\ (x - \ln (1+b), x] & x \in [\ln(1-d) + \ln (1+b), 0)\\ (x - \ln (1+b), 0) & x \in [0, \ln (1+b))\\ \varnothing & \mathrm{otherwise} \end{cases}\\ \\ \ln(1-d) + \ln (1+b) \geqslant 0 &\implies t \in \begin{cases} [\ln (1-d), x] & x \in [\ln(1-d), 0)\\ [\ln (1-d), 0) & x \in [0, \ln(1-d) + \ln (1+b))\\ (x - \ln (1+b), 0) & x \in [\ln(1-d) + \ln (1+b), \ln (1+b))\\ \varnothing & \mathrm{otherwise} \end{cases} \end{align*}
Furthermore, within a particular interval from $$t_1$$ to $$t_2$$, the convolution reckons to
\begin{align*} \int_{t_1}^{t_2} p_{v_1}(x-t) \times p_{v_2}(t) \; \mathrm{d}t &= \int_{t_1}^{t_2} \frac{e^{x-t}}{(b+1) - 1} \times \frac{e^t}{1 - (1-d)} \; \mathrm{d}t\\ &= \int_{t_1}^{t_2} \frac{e^x}{b \times d} \mathrm{d}t\\ &= \left[\frac{e^x \times t}{b \times d}\right]_{t_1}^{t_2}\\ &= \frac{e^x}{b \times d} \times \left(t_2 - t_1\right) \end{align*}
### Case The First
For negative $$\ln(1-d) + \ln(1+b)$$ this yields the PDF
\begin{align*} p_{v_1 + v_2}(x) &= \begin{cases} \frac{e^x}{b \times d} \times \left(x - \ln (1-d)\right) & x \in [\ln(1-d), \ln(1-d) + \ln (1+b))\\ \frac{e^x}{b \times d} \times \left(x - \left(x - \ln (1+b)\right)\right) & x \in [\ln(1-d) + \ln (1+b), 0)\\ \frac{e^x}{b \times d} \times \left(0 - \left(x - \ln (1+b)\right)\right) & x \in [0, \ln (1+b))\\ 0 & \mathrm{otherwise} \end{cases}\\ \\ &= \begin{cases} \frac{e^x}{b \times d} \times \left(x - \ln (1-d)\right) & x \in [\ln(1-d), \ln(1-d) + \ln (1+b))\\ \frac{e^x}{b \times d} \times \ln (1+b) & x \in [\ln(1-d) + \ln (1+b), 0)\\ \frac{e^x}{b \times d} \times \left(\ln (1+b) - x\right) & x \in [0, \ln (1+b))\\ 0 & \mathrm{otherwise} \end{cases} \end{align*}
To recover the CDF from this PDF we must integrate each of its cases within their intervals of applicability. For example, the integral of the first is given by
$f_1(x) = \int_{\ln(1-d)}^x \frac{e^t}{b \times d} \times \left(t - \ln (1-d)\right) \; \mathrm{d}t$
To resolve this integral we must use the rule of integration by parts which states that
$\int \frac{\mathrm{d}}{\mathrm{d}t} g(t) \times h(t) \; \mathrm{d}t = \bigg[g(t) \times h(t)\bigg] - \int g(t) \times \frac{\mathrm{d}}{\mathrm{d}t} h(t) \; \mathrm{d}t$
In particular, if we choose
\begin{align*} \frac{\mathrm{d}}{\mathrm{d}t} g(t) &= \frac{e^t}{b \times d}\\ h(t) &= t - \ln(1-d) \end{align*}
then we have
\begin{align*} g(t) &= \frac{e^t}{b \times d}\\ \frac{\mathrm{d}}{\mathrm{d}t} h(t) &= 1 \end{align*}
and consequently
\begin{align*} f_1(x) &= \bigg[\frac{e^t}{b \times d} \times \left(t - \ln (1-d)\right)\bigg]_{\ln(1-d)}^x - \int_{\ln(1-d)}^x \frac{e^t}{b \times d} \times 1 \; \mathrm{d}t\\ &= \bigg[\frac{e^t}{b \times d} \times \left(t - \ln (1-d)\right)\bigg]_{\ln(1-d)}^x - \bigg[\frac{e^t}{b \times d}\bigg]_{\ln(1-d)}^x\\ &= \bigg[\frac{e^t}{b \times d} \times \left(t - \ln (1-d) - 1\right)\bigg]_{\ln(1-d)}^x\\ &= \left(\frac{e^x}{b \times d} \times \left(x - \ln (1-d) - 1\right)\right) - \left(\frac{e^{\ln(1-d)}}{b \times d} \times \left(\ln(1-d) - \ln (1-d) - 1\right)\right)\\ &= \frac{e^x}{b \times d} \times \left(x - \ln (1-d) - 1\right) - \frac{1-d}{b \times d} \times -1\\ &= \frac{e^x}{b \times d} \times \left(x - \ln (1-d) - 1\right) + \frac{1-d}{b \times d} \end{align*}
Similarly, for the second case we have
$f_2(x) = \int_{\ln(1-d) + \ln (1+b)}^x \frac{e^t}{b \times d} \times \ln (1+b) \; \mathrm{d}t = \frac{e^x}{b \times d} \times \ln (1+b) - \frac{(1-d) \times (1+b)}{b \times d} \times \ln (1+b)$
and for the third
$f_3(x) = \int_0^x \frac{e^t}{b \times d} \times \left(\ln (1+b) - t\right) \; \mathrm{d}t = \frac{e^x}{b \times d} \times \left(\ln (1+b) - x + 1\right) - \frac{1}{b \times d} \times \left(\ln (1+b) + 1\right)$
In defining the CDF we must take care to include the integrals over every interval that falls entirely beneath any particular point of interest. To this end, if we define the constants
\begin{align*} c_1 &= f_1\left(\ln(1-d) + \ln(1+b)\right)\\ c_2 &= f_2\left(0\right)\\ c_3 &= f_3\left(\ln(1+b)\right) \end{align*}
then we may express the CDF as
$P_{v_1 + v_2}(x) = \begin{cases} 0 & x \in (-\infty, \ln(1-d))\\ f_1(x) & x \in [\ln(1-d), \ln(1-d) + \ln (1+b))\\ c_1 + f_2(x) & x \in [\ln(1-d) + \ln (1+b), 0)\\ c_1 + c_2 + f_3(x) & x \in [0, \ln (1+b))\\ c_1 + c_2 + c_3 & x \in [\ln (1+b), \infty) \end{cases}$
Expanding out $$c_1$$ yields
\begin{align*} c_1 &= \frac{e^{\ln(1-d) + \ln(1+b)}}{b \times d} \times \left(\ln(1-d) + \ln(1+b) - \ln (1-d) - 1\right) + \frac{1-d}{b \times d}\\ &= \frac{(1-d) \times (1+b)}{b \times d} \times \left(\ln(1+b) - 1\right) + \frac{1-d}{b \times d}\\ &= \frac{(1-d) \times (1+b)}{b \times d} \times \ln(1+b) - \frac{(1-d) \times (1+b)}{b \times d} + \frac{1-d}{b \times d}\\ &= \frac{(1-d) \times (1+b)}{b \times d} \times \ln(1+b) - \frac{(1-d) \times (1+b) - (1-d)}{b \times d}\\ &= \frac{(1-d) \times (1+b)}{b \times d} \times \ln(1+b) - \frac{(1-d) \times b}{b \times d}\\ &= \frac{(1-d) \times (1+b)}{b \times d} \times \ln(1+b) - \frac{1-d}{d}\\ &= \frac{(1-d) \times (1+b)}{b \times d} \times \ln(1+b) + 1 - \frac{1}{d} \end{align*}
and $$c_2$$
\begin{align*} c_2 &= \frac{e^0}{b \times d} \times \ln (1+b) - \frac{(1-d) \times (1+b)}{b \times d} \times \ln (1+b)\\ &= \frac{1}{b \times d} \times \ln (1+b) - \frac{(1-d) \times (1+b)}{b \times d} \times \ln (1+b) \end{align*}
and, finally, $$c_3$$
\begin{align*} c_3 &= \frac{e^{\ln(1+b)}}{b \times d} \times \left(\ln (1+b) - \ln (1+b) + 1\right) - \frac{1}{b \times d} \times \left(\ln (1+b) + 1\right)\\ &= \frac{1+b}{b \times d} \times 1 - \frac{1}{b \times d} \times \left(\ln (1+b) + 1\right)\\ &= \frac{1+b}{b \times d} - \frac{1}{b \times d} - \frac{1}{b \times d} \times \ln (1+b)\\ &= \frac{1}{d} - \frac{1}{b \times d} \times \ln (1+b) \end{align*}
Now, the sum of $$c_1$$ and $$c_2$$ is trivially equal to
$c_1 + c_2 = \frac{1}{b \times d} \times \ln (1+b) + 1 - \frac{1}{d}$
and that of $$c_1$$, $$c_2$$ and $$c_3$$ to
$c_1 + c_2 + c_3 = 1$
which I must say came as something of a relief upon our finally having figured it, since the CDF must take a greatest value of one!
Finally, the third and fourth cases of the CDF resolve to
$c_1 + f_2(x) = \frac{e^x}{b \times d} \times \ln (1+b) + 1 - \frac{1}{d}$
and
$c_1 + c_2 + f_3(x) = \frac{e^x}{b \times d} \times \left(\ln (1+b) - x + 1\right) + 1 - \frac{1+b}{b \times d}$
yielding the whole
$P_{v_1 + v_2}(x) = \begin{cases} 0 & x \in (-\infty, \ln(1-d))\\ \frac{e^x}{b \times d} \times \left(x - \ln (1-d) - 1\right) + \frac{1-d}{b \times d} & x \in [\ln(1-d), \ln(1-d) + \ln (1+b))\\ \frac{e^x}{b \times d} \times \ln (1+b) + 1 - \frac{1}{d} & x \in [\ln(1-d) + \ln (1+b), 0)\\ \frac{e^x}{b \times d} \times \left(\ln (1+b) - x + 1\right) + 1 - \frac{1+b}{b \times d} & x \in [0, \ln (1+b))\\ 1 & x \in [\ln (1+b), \infty) \end{cases}$
To deduce from this the CDF of our transformed outcome $$z$$ we need simply exploit the properties of the CDF once again with
\begin{align*} \ln z &= v_1 + v_2\\ P_{z}(x) &= \Pr\left(z \leqslant x\right) = \Pr\left(\ln z \leqslant \ln x\right) = \Pr\left(v_1 + v_2 \leqslant \ln x\right) = P_{v_1 + v_2}(\ln x) \end{align*}
and we therefore have
\begin{align*} P_{z}(x) &= \begin{cases} 0 & \ln x \in (-\infty, \ln(1-d))\\ \frac{e^{\ln x}}{b \times d} \times \left(\ln x - \ln (1-d) - 1\right) + \frac{1-d}{b \times d} & \ln x \in [\ln(1-d), \ln(1-d) + \ln (1+b))\\ \frac{e^{\ln x}}{b \times d} \times \ln (1+b) + 1 - \frac{1}{d} & \ln x \in [\ln(1-d) + \ln (1+b), 0)\\ \frac{e^{\ln x}}{b \times d} \times \left(\ln (1+b) - \ln x + 1\right) + 1 - \frac{1+b}{b \times d} & \ln x \in [0, \ln (1+b))\\ 1 & \ln x \in [\ln (1+b), \infty) \end{cases}\\ \\ &= \begin{cases} 0 & x \in (-\infty, 1-d)\\ \frac{x}{b \times d} \times \left(\ln x - \ln (1-d) - 1\right) + \frac{1-d}{b \times d} & x \in [1-d, (1-d) \times (1+b))\\ \frac{x}{b \times d} \times \ln (1+b) + 1 - \frac{1}{d} & x \in [(1-d) \times (1+b), 1)\\ \frac{x}{b \times d} \times \left(\ln (1+b) - \ln x + 1\right) + 1 - \frac{1+b}{b \times d} & x \in [1, 1+b)\\ 1 & x \in [1+b, \infty) \end{cases} \end{align*}
As before we must differentiate these cases to recover the PDF of $$z$$, for which we must employ the product rule, which states that
$\frac{\mathrm{d}}{\mathrm{d}x} \left(f(x) \times g(x)\right) = \left(\frac{\mathrm{d}}{\mathrm{d}x} f(x)\right) \times g(x) + f(x) \times \left(\frac{\mathrm{d}}{\mathrm{d}x} g(x)\right)$
For example, if for the second case we choose
\begin{align*} f(x) &= \frac{x}{b \times d}\\ g(x) &= \ln x - \ln (1-d) - 1 \end{align*}
then we find that
\begin{align*} \frac{\mathrm{d}}{\mathrm{d}x} f(x) &= \frac{1}{b \times d}\\ \frac{\mathrm{d}}{\mathrm{d}x} g(x) &= \frac{1}{x} \end{align*}
and so, noting that the derivative of any constant term must equal zero, its derivative is given by
\begin{align*} \frac{\mathrm{d}}{\mathrm{d}x} \left(f(x) \times g(x)\right) &= \frac{1}{b \times d} \times \left(\ln x - \ln (1-d) - 1\right) + \frac{x}{b \times d} \times \frac{1}{x}\\ &= \frac{1}{b \times d} \times \left(\ln x - \ln (1-d) - 1\right) + \frac{1}{b \times d}\\ &= \frac{1}{b \times d} \times \left(\ln x - \ln (1-d)\right) \end{align*}
and the PDF is consequently given by
$p_{z}(x) = \begin{cases} 0 & x \in (-\infty, 1-d)\\ \frac{1}{b \times d} \times \left(\ln x - \ln (1-d)\right) & x \in [1-d, (1-d) \times (1+b))\\ \frac{1}{b \times d} \times \ln (1+b) & x \in [(1-d) \times (1+b), 1)\\ \frac{1}{b \times d} \times \left(\ln (1+b) - \ln x\right) & x \in [1, 1+b)\\ 0 & x \in [1+b, \infty) \end{cases}$
### Case The Second
The reckoning of the governing distribution of $$z$$ when $$\ln(1-d) + \ln(1+b)$$ is greater than or equal to zero proceeds in much the same rather tedious fashion, ultimately yielding the CDF
$P_z(x) = \begin{cases} 0 & x \in [0, 1-d)\\ \frac{x}{b \times d} \times (\ln x - \ln(1-d) - 1) + \frac{1-d}{b \times d} & x \in [1-d, 1)\\ -\frac{x}{b \times d} \times \ln(1-d) - \frac{1}{b} & x \in [1, (1-d) \times (1+b))\\ \frac{x}{b \times d} \times (\ln(1+b) - \ln x + 1) + 1 - \frac{1+b}{b \times d} & x \in [(1-d) \times (1+b), 1+b)\\ 1 & x \in [1+b, \infty) \end{cases}$
and the PDF
$p_z(x) = \begin{cases} 0 & x \in [0, 1-d)\\ \frac{1}{b \times d} \times (\ln x - \ln(1-d)) & x \in [1-d, 1)\\ -\frac{1}{b \times d} \times \ln(1-d) & x \in [1, (1-d) \times (1+b))\\ \frac{1}{b \times d} \times (\ln(1+b) - \ln x) & x \in [(1-d) \times (1+b), 1+b)\\ 0 & x \in [1+b, \infty) \end{cases}$
### The Distribution Of Untransformed Outcomes
Given these results, we can easily recover the distribution of the untransformed outcomes by exploiting the fact that
\begin{align*} z &= \frac{x_2 + d_0}{x_0}\\ P_{x_2}(x) &= \Pr\left(x_2 \leqslant x\right) = \Pr\left(z \leqslant \frac{x + d_0}{x_0}\right) = P_z\left(\frac{x + d_0}{x_0}\right)\\ p_{x_2}(x) &= \frac{\mathrm{d}}{\mathrm{d}x} P_{x_2}(x) = \frac{\mathrm{d}}{\mathrm{d}x} P_z\left(\frac{x + d_0}{x_0}\right) = \frac{1}{x_0} p_z\left(\frac{x + d_0}{x_0}\right) \end{align*}
Noting that
$\ln(1-d) + \ln(1+b) < 0 \implies (1-d) \times (1+b) < 1$
and
$\ln(1-d) + \ln(1+b) \geqslant 0 \implies (1-d) \times (1+b) \geqslant 1$
my fellow students and I formulated script 1 to figure values of the CDF
Script 1: The CDF Of Outcomes
function cdf(x, x0, b, d, d0) {
 var m = (1-d)*(1+b); // the boundary between case the first (m<1) and case the second (m>=1)
 var z = (x+d0)/x0;   // the transformed outcome
 var c;
 if(m<1) {
  if(z<1-d) c = 0;
  else if(z<m) c = z*(Math.log(z)-Math.log(1-d)-1)/(b*d) + (1-d)/(b*d);
  else if(z<1) c = z*Math.log(1+b)/(b*d) + 1-1/d;
  else if(z<1+b) c = z*(Math.log(1+b)-Math.log(z)+1)/(b*d) + 1 - (1+b)/(b*d);
  else c = 1;
 }
 else {
  if(z<1-d) c = 0;
  else if(z<1) c = z*(Math.log(z)-Math.log(1-d)-1)/(b*d) + (1-d)/(b*d);
  else if(z<m) c = -z*Math.log(1-d)/(b*d) - 1/b;
  else if(z<1+b) c = z*(Math.log(1+b)-Math.log(z)+1)/(b*d) + 1 - (1+b)/(b*d);
  else c = 1;
 }
 return c;
}
and script 2 to figure those of the PDF
Script 2: The PDF Of Outcomes
function pdf(x, x0, b, d, d0) {
 var m = (1-d)*(1+b); // the boundary between case the first (m<1) and case the second (m>=1)
 var z = (x+d0)/x0;   // the transformed outcome
 var p;
 if(m<1) {
  if(z<1-d) p = 0;
  else if(z<m) p = (Math.log(z)-Math.log(1-d))/(b*d);
  else if(z<1) p = Math.log(1+b)/(b*d);
  else if(z<1+b) p = (Math.log(1+b)-Math.log(z))/(b*d);
  else p = 0;
 }
 else {
  if(z<1-d) p = 0;
  else if(z<1) p = (Math.log(z)-Math.log(1-d))/(b*d);
  else if(z<m) p = -Math.log(1-d)/(b*d);
  else if(z<1+b) p = (Math.log(1+b)-Math.log(z))/(b*d);
  else p = 0;
 }
 return p/x0; // scaled by 1/x0 to recover the PDF of the untransformed outcome
}
To satisfy ourselves that we had correctly reckoned the consequences of the rules of labour and sustenance we put together deck 8 to compare a histogram of their outcomes with a graph of their likelihoods as predicted by our CDF.
Deck 8: Verifying The CDF
That they align so closely we took as compelling evidence that our ratiocination had been sound!
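Should the reader wish to make a similar comparison without recourse to our decks, a rough and ready JavaScript trial of our cdf function from script 1, once more of our own illustrative devising, might simulate single turns and compare the observed frequency of outcomes no greater than some x with the value that it predicts
function checkCDF(x, x0, b, d, d0, n) {
 // simulate n turns of labour followed by sustenance, without the flooring
 // of negative funds at zero, so as to match the derivation above
 var count = 0;
 for(var i=0;i<n;++i) {
  var x1 = x0*(1 + b*Math.random());
  var x2 = x1*(1 - d*Math.random()) - d0;
  if(x2<=x) ++count;
 }
 return [count/n, cdf(x, x0, b, d, d0)];
}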
The CDF sheds some light upon the reason why many of the players fared so poorly in the game; the probability that a player should fail to profit at the conclusion of a turn is significantly greater if they began it with relatively few funds, as demonstrated by deck 9 which plots how the probability of loss decreases as funds increase.
Deck 9: The Probability Of Loss
This is further illuminated by considering the relative return that a player should expect after each turn
$\mathrm{E}\left[\frac{x_2 - x_0}{x_0}\right]$
We can express this in terms of the expected value of $$z$$ by noting that
$\mathrm{E}(z) = \mathrm{E}\left[\frac{x_2 + d_0}{x_0}\right] = \mathrm{E}\left[\frac{x_2}{x_0}\right] + \frac{d_0}{x_0} = \mathrm{E}\left[\frac{x_2 - x_0}{x_0}\right] + 1 + \frac{d_0}{x_0}$
and so
$\mathrm{E}\left[\frac{x_2 - x_0}{x_0}\right] = \mathrm{E}(z) - 1 - \frac{d_0}{x_0}$
The PDF of $$z$$ does not depend upon $$x_0$$ and so, consequently, nor does its expected value. The only term in the expected return that does is therefore the last, which grows negative without limit as $$x_0$$ approaches zero; a most unfortunate prospect for any already impoverished players!
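We might note in passing that, since the rules of labour and sustenance draw upon independent random variables, the expected value of $$z$$ is simply the product of the expected values of $$u_1^\prime$$ and $$u_2^\prime$$, so that
\begin{align*} \mathrm{E}(z) &= \mathrm{E}\left(u_1^\prime\right) \times \mathrm{E}\left(u_2^\prime\right) = \left(1 + \frac{b}{2}\right) \times \left(1 - \frac{d}{2}\right)\\ \mathrm{E}\left[\frac{x_2 - x_0}{x_0}\right] &= \left(1 + \frac{b}{2}\right) \times \left(1 - \frac{d}{2}\right) - 1 - \frac{d_0}{x_0} \end{align*}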
### In Conclusion
Once again we have seen that a game of perfectly equitable rules favours the lucky few well above the unfortunate many. The question that my fellow students and I should finally like answered is whether it is possible to introduce a rule that mitigates the capricious hand of providence and, when our studies permit, we shall be sure to address it!
$$\Box$$
### References
[1] On The Wealth Of Stations, www.thusspakeak.com, 2016.
[2] Further On The Wealth Of Stations, www.thusspakeak.com, 2016.
|
# 2.4 GHz Yagi Uda Antenna (YagiUda2p4.sdf)
Keywords:
yagiUdaArrayWireModel, yagiT, far field, radiation
## Problem description
A Yagi-Uda array is a directional antenna consisting of several parallel dipole elements. Only one of these dipole elements is driven; the other elements are parasitic. Directionality is achieved by placing one longer element, referred to as the reflector, adjacent to the driven (source) element. The remaining elements, which sit on the opposite side of the source from the reflector and are shorter than the source element, are referred to as directors. Yagi antennas are ubiquitous, and as such optimal parameters for dipole lengths and separations have been established. We use values one would typically find in any text covering the matter. This example illustrates how to obtain the far field radiation pattern of a Yagi-Uda array.
This simulation can be performed with a VSimEM license.
## Opening the Simulation
The Yagi-Uda example is accessed from within VSimComposer by the following actions:
• Select the New → From Example… menu item in the File menu.
• In the resulting Examples window expand the VSim for Electromagnetics option.
• Expand the Antennas option.
• Select 2.4 GHz Yagi Uda Antenna and press the Choose button.
• In the resulting dialog, create a New Folder if desired, and press the Save button to create a copy of this example.
All of the properties and values that create the simulation are now available in the Setup Window as shown in Fig. 160. You can expand the tree elements and navigate through the various properties, making any changes you desire. The right pane shows a 3D view of the geometry, if any, as well as the grid, if actively shown. To show or hide the grid, expand the Grids element and select or deselect the box next to Grid.
Fig. 160 Setup Window for the Yagi-Uda example.
## Simulation Properties
This file allows the modification of the antenna operating frequency, antenna dimensions, and simulation domain size.
By adjusting the dimensions any sized Yagi-Uda array can be simulated.
Note
To obtain good far field resolution, four or more antenna elements are generally desirable (one source, one reflector, and two or more directors).
## Running the Simulation
After performing the above actions, continue as follows:
• Proceed to the Run Window by pressing the Run button in the left column of buttons.
• Here you can set run parameters, including how many cores to run with.
• When you are finished setting run parameters, click on the Run button in the upper left corner of the Logs and Output Files pane. You will see the output of the run in the right pane. The run has completed when you see the output, “Engine completed successfully.” This is shown in Fig. 161.
Fig. 161 The Run Window at the end of execution.
## Analyzing the Results
• Proceed to the Analysis window by pressing the Analyze button in the left column of buttons.
• Select computeFarFieldFromKirchhoffBox.py from the list and select “Open” (Fig. 162)
• Input values for the analyzer parameters. The analyzer may be run multiple times, allowing the user to experiment with different values.
• simulationName - yagiUda2p4
• fieldLabel - E
• farFieldRadius - 1024.0
• numPeriods - 0.25
• numFarFieldTimes - 2
• frequency - 2.4e9
• numTheta - 45
• numPhi - 60
• zeroThetaDirection - (0,1,0)
• zeroPhiDirection - (0,0,1)
• incidentWaveDirection - (0,0,0)
• incidentWaveAmplitude - blank
• varyingMeshMaxRadius - 1024.0
• principalPlanesOnly - checked
• Click “Analyze”
• Depending on the values of numTheta, numPhi, and numFarFieldTimes, the script may need to run for several minutes.
Fig. 162 The Analysis Window.
## Visualizing the results
Proceed to the Visualize Window by pressing the Visualize button in the left column of buttons.
To view the near field pattern, do the following:
• Expand Scalar Data
• Expand E
• Select E_x
• Check the Set Minimum box and set the value to -0.1
• Check the Set Maximum box and set the value to 0.1
• Check the Clip Plot box
• Expand Geometries
• Select poly (YagiUda2p4PecShapes)
• Move the dump slider forward in time
Fig. 163 The electric field near-field pattern.
The far field radiation pattern can be found in the scalar data variables of the data overview tab underneath the farE field. Uncheck the E_x dataset and check the farE_magnitude box under farE.
Fig. 164 The electric field manifestation of the far field pattern.
## Further Experiments
Try adding more directors and changing their dimensions to see the effect on the far field pattern.
|
# The ratio of the peak value of a wave to its RMS value is defined as:
This question was previously asked in
SSC JE EE Previous Paper 9 (Held on: 29 Oct 2020 Evening)
1. Form factor
2. Peak factor
3. Mean value
4. Average factor
## Answer (Detailed Solution Below)
Option 2 : Peak factor
## Detailed Solution
Explanation:
The form factor is defined as the ratio of the RMS value to the average value of an alternating quantity.
F.F. (Form factor) = $$\frac{{R.M.S\;Value}}{{Average\;Value}}$$
Crest Factor ‘or’ Peak Factor is defined as the ratio of the maximum value to the R.M.S value of an alternating quantity.
C.F. ‘or’ P.F. = $$\frac{{Maximum\;Value}}{{R.M.S\;Value}}$$
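For example, for a sinusoidal quantity of peak value $$A_m$$, the R.M.S value is $$\frac{A_m}{\sqrt{2}}$$ and the half-cycle average value is $$\frac{2A_m}{\pi}$$, so the form factor works out to about 1.11 and the peak (crest) factor to $$\sqrt{2} \approx 1.414$$, in agreement with the table below.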
IMPORTANT EVALUATIONS:
| Waveform shape | Max. value | Average value | RMS value | Form factor | Crest factor |
|---|---|---|---|---|---|
| Sinusoidal wave | $$A_m$$ | $$\frac{2A_m}{\pi}$$ | $$\frac{A_m}{\sqrt{2}}$$ | $$\frac{A_m/\sqrt{2}}{2A_m/\pi} = 1.11$$ | $$\frac{A_m}{A_m/\sqrt{2}} = \sqrt{2}$$ |
| Square wave | $$A_m$$ | $$A_m$$ | $$A_m$$ | $$\frac{A_m}{A_m} = 1$$ | $$\frac{A_m}{A_m} = 1$$ |
| Triangular wave | $$A_m$$ | $$\frac{A_m}{2}$$ | $$\frac{A_m}{\sqrt{3}}$$ | $$\frac{A_m/\sqrt{3}}{A_m/2} = \frac{2}{\sqrt{3}}$$ | $$\frac{A_m}{A_m/\sqrt{3}} = \sqrt{3}$$ |
| Half-wave rectified wave | $$A_m$$ | $$\frac{A_m}{\pi}$$ | $$\frac{A_m}{2}$$ | $$\frac{A_m/2}{A_m/\pi} = \frac{\pi}{2}$$ | $$2$$ |
|
# subscript is not showing correctly in a long display formula within a comment
In the comment to this answer: https://math.stackexchange.com/a/144819/21919
the last display formula shows $...-f(x\,0)$ instead of $\ldots-f(x_0)$, though it is not a misprint. I tried to edit it, and to post the whole comment anew, but this little bug persists. It shows in Chrome (version 27.0.1453.116 m) and IE 8 under Windows 7.
Here's my original TeX code:
Nice! However, I had hard time understanding the last sentence, so maybe it is worth to supply a few more details as follows: Let $V$ denote the variation over the partition chosen within $\epsilon/2$ of $TV(f_{[x_1,x_0]})$ as described above. Then $$TV(f_{[x_1,x_0]})< V+\epsilon/2.$$ Also, $V-|f(x)-f(x_0)|$ is some variation over interval $[x_1,x]$, so we have: $$TV(f_{[x_1,x]})\ge V-|f(x)-f(x_0)|.$$ Finally, $TV(f_{[x_1,x_0]})=TV(f_{[x_1,x]})+TV(f_{[x,x_0]})$, so that: $$TV(f_{[x,x_0]})=TV(f_{[x_1,x_0]})-TV(f_{[x_1,x]})<(V+\epsilon/2)-(V-|f(x)-f(x_0)|)<\epsilon/2+\epsilon/2=\epsilon.$$
|
Dave Horner's Website - Yet another perspective on things...
"[Programmers] are attached to their programs. Indeed, their programs become extensions of themselves - a fact which is verified in the abominable practice of attaching one's name to the program itself..." --Gerald M. Weinberg, The Psychology of Computer Programming
\begin{bmatrix} 1 & 0 & \ldots & 0 \\ 0 & 1 & 0 & \vdots \\ \vdots & 0 & \ddots & 0\\ 0 & \ldots & 0 & 1_{n} \end{bmatrix}
|
# cupy.linalg.eigh¶
cupy.linalg.eigh(a, UPLO='L')[source]
Eigenvalues and eigenvectors of a symmetric matrix.
This method calculates eigenvalues and eigenvectors of a given symmetric matrix.
Note
Currently only 2-D matrices are supported.
Parameters:
• a (cupy.ndarray) – A symmetric 2-D square matrix.
• UPLO (str) – Select from 'L' or 'U'. It specifies which part of a is used. 'L' uses the lower triangular part of a, and 'U' uses the upper triangular part of a.
Returns:
A tuple (w, v). w contains eigenvalues and v contains eigenvectors. v[:, i] is an eigenvector corresponding to an eigenvalue w[i].
Return type:
tuple of ndarray
Warning
This function calls one or more cuSOLVER routine(s) which may yield invalid results if input conditions are not met. To detect these invalid results, you can set the linalg configuration to a value that is not ignore in cupyx.errstate() or cupyx.seterr().
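A minimal usage sketch (the matrix and its values below are arbitrary illustrations, not part of the reference):
import cupy

a = cupy.array([[2.0, 1.0],
                [1.0, 3.0]])
w, v = cupy.linalg.eigh(a, UPLO='L')
# w holds the eigenvalues and the columns of v the eigenvectors;
# v[:, i] corresponds to w[i]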
|
# Installing an explorer
Rumble requires the use of at least one Explorer within your environment to enable network discovery. The explorer should be installed on a system with reliable connectivity to the network you want to discover. For internal networks, Rumble works best when installed on a system with a wired (vs wireless) connection.
For external network discovery, nearly any cloud provider with a reliable connection should do. If the Rumble Explorer is installed in a container or virtualized system, ensure that it has direct access to the network (host networking in Docker, bridged networking in VMware, etc).
## Installation
To install the Rumble Explorer, log in to the Rumble Console and switch to the Organization that should be associated with the explorer. The explorer download link is specific to your active Organization and using the wrong link can result in a new explorer being associated with the wrong organization.
Download the correct binary for your system from the explorer download page. For most systems, select the 64-bit (x86_64) architecture. For embedded devices, such as the Raspberry Pi 3+, choose the ARM7 architecture. Windows binaries are signed with a valid Authenticode signature, which should be validated before the executable is launched.
The explorer installation process requires administrative privileges. On Windows, a UAC prompt may be displayed. On Linux and macOS the downloaded binary should be made executable (chmod u+x rumble-explorer.bin) and then executed with root privileges (sudo or from root shell). In either case, the explorer should install itself as a system service and start immediately, displaying a new entry in the explorers page.
## System requirements
### Windows
• Windows Server 2012 R2+ or Windows 10 Build 1604+
• Processor running at 2.0 GHz or faster
• At least 4 GB of memory (1 GB available)
• At least 1 GB of free storage space
Windows Server 2008, Windows Server 2012, Windows 7, and Windows 8 may be able to run the explorer in a pinch, but are not officially supported.
### Linux
• Kernel version 2.6.23 or later
• Processor running at 2.0 GHz or faster
• At least 2 GB of memory (1 GB available)
• At least 1 GB of free storage space
Linux ARM devices with limited processing power and memory, such as the Raspberry Pi, can run the Rumble Explorer, but may have trouble scanning larger networks.
### MacOS
• macOS 10.11 (El Capitan) or newer
• Processor running at 2.0 GHz or faster
• At least 2 GB of memory (1 GB available)
• At least 1 GB of free storage space
macOS systems running Catalina (10.15) or newer need to use the curl download method to avoid issues with the new Notary requirements.
### BSD variants
• Processor running at 2.0 GHz or faster
• At least 2 GB of memory (1 GB available)
• At least 1 GB of free storage space
## Web screenshots
Google Chrome or Chromium should be installed on the Explorer system to enable web screenshots. Please note that “snap”-based Chromium installs (Ubuntu 20.04 and newer) don’t appear to work properly in headless mode and the official Chrome packages should be used instead.
## Network communication
The explorer connects to the console.rumble.run host on TCP port 443 using TLS and two static IPv4 addresses (13.248.161.247, 76.223.34.198). This connection is used for explorer registration, job scheduling, status messages, and submission of completed scan jobs. For completely offline environments, the Rumble Scanner can be used to create scan data files that can be uploaded later via the Inventory Import action. The host console.rumble.run is used for automatic updates of the explorer executable.
Please note that certain web proxies that perform TLS inspection do not handle Websocket communication properly and TLS inspection will need to be disabled for the Rumble Explorer to successfully connect. The most popular product with this problem is the Sophos (previously Cyberoam) security appliance. Websense users may need to add a bypass rule for console.rumble.run.
Proxy support is handled automatically in most cases. On the Windows platform, proxy information is read from the registry keys (used by Chrome, Edge, and IE).
On non-Windows operating systems the proxy can be configured by setting the HTTPS_PROXY environment variable. The value of the HTTPS_PROXY environment variable should be a hostname and port (proxy:8080) or just a hostname (proxy). Environment variables are read from the file /opt/rumble/bin/.env on these platforms and apply to all installed explorers.
## Removing an explorer
The easiest way to remove an explorer is to use the Explorers page Manage menu and select the Uninstall Explorer option. This will remove the service and terminate the current explorer process. If you would like to remove the explorer without using the Rumble Console, there are a couple options.
On the Windows platform, each explorer will be listed in Programs and Features (as the Rumble Agent), and can be uninstalled like any other application.
On all platforms, including Windows, the explorer can uninstall itself if run with the uninstall argument from a root or Administrator shell:
Windows
c:\Program Files\Rumble\rumble-explorer-[oid].exe uninstall
Other Platforms
/opt/rumble/rumble-explorer-[oid] uninstall
## Configuration
The explorer can be configured by setting variables in a .env file located in the same directory as the executable. On Windows this file should be created in C:\Program Files\Rumble\.env, while other platforms should use /opt/rumble/bin/.env. The format of this file is VAR=VAL with one variable per line.
## Log management
The explorer logs to a file and to standard output by default. On Windows the default log file location is the installation directory (C:\Program Files\Rumble) while other platforms log to the files /var/log/rumble.log and /var/log/rumble.err. The default configuration limits log files to 100 MB, creates three backups, and expires logs after 90 days. These defaults can be changed by setting the following values in the .env file:
• The RUMBLE_AGENT_LOG_MAX_SIZE setting controls the maximum log size in megabytes. The default is 100.
• The RUMBLE_AGENT_LOG_MAX_BACKUPS setting controls the number of backup files created by log rotation. The default is 3.
• The RUMBLE_AGENT_LOG_MAX_AGE setting controls the maximum age in days; this applies to all files, including backups. The default is 90.
• The RUMBLE_AGENT_LOG_COMPRESS setting determines whether to gzip compress the backups. The default is false.
• The RUMBLE_AGENT_LOG_STDOUT setting determines whether to write logs to standard output (and syslog for systemd/upstart). The default is true.
The explorer must be restarted for these settings to take effect.
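For example, a .env that caps each log file at 50 MB, keeps five compressed backups, and disables logging to standard output would contain the following (the values shown are only illustrative):
RUMBLE_AGENT_LOG_MAX_SIZE=50
RUMBLE_AGENT_LOG_MAX_BACKUPS=5
RUMBLE_AGENT_LOG_COMPRESS=true
RUMBLE_AGENT_LOG_STDOUT=false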
## Restart an explorer
The quickest way is to force an update from the cloud console, otherwise you can find the service name and restart it by hand.
On Linux systems using systemd, first obtain the name of the explorer service:
$ systemctl | grep rumble-explorer

Then restart the service using this name:

$ systemctl restart rumble-explorer-[uuid-value]
A kill -9 of the explorer pid should cause a restart as well.
## Certificate Authorities (CAs)
The Rumble Explorer uses the system-installed certificate authorities to validate TLS connections in addition to an internal CA certificate bundle (derived from Debian 10). By default, both the system certificate roots, and the bundled roots are considered for all secure TLS connections. This behavior can be controlled via environment variables (set in the .env file or at the system level):
• The RUMBLE_TLS_IGNORE_SYSTEM_ROOTCA setting can be set to true to ignore the system CA roots.
• The RUMBLE_TLS_IGNORE_EMBEDDED_ROOTCA setting can be set to true to ignore the bundled CA roots.
## Manual mode
If a supported system service manager, such as systemd or upstart, is not detected, the Rumble Explorer will switch to manual mode, running in the foreground, and replacing and re-executing its own binary as new updates become available. For temporary explorer installations or to run the explorer in a container environment, the argument “manual” can be specified:
$ sudo ./rumble-explorer.bin manual

## Storage locations

The Rumble Explorer installs into %PROGRAMFILES%\Rumble on Windows and /opt/rumble on all other platforms. Temporary files are stored in the default operating system locations. These locations can be overridden using the .env file. Note that the explorer service needs to be restarted (or force updated) for these changes to take effect.

On Windows, the temporary file location is chosen from the first non-empty environment value of TMP, TEMP, or USERPROFILE, falling back to the Windows directory. To override this location, set an entry in .env like the following:

TMP=D:\Storage\Rumble

On all other platforms, the temporary file location is chosen based on the value of TMPDIR, falling back to /tmp otherwise. To override this location, set an entry in .env like the following:

TMPDIR=/home/storage/rumble

Any scans that fail to upload are stored in the Rumble Explorer installation directory and can be imported into the platform manually or using the Rumble Scanner’s --import and --upload options.

## Container installations

The Rumble Explorer can run in standard container environments, but may require additional configuration. To run as a standalone executable, the explorer can be run with the argument manual.

For non-persistent containers an explorer identifier needs to be persisted through an environment variable. This can be done by setting the variable RUMBLE_AGENT_HOST_ID to a 32-character hexadecimal string. This identifier is used to uniquely identify the explorer within an organization. To generate a suitable identifier, the openssl tool may be used:

$ openssl rand -hex 16
01b0283809b24511929d0b062bd36109
Here is a sample Containerfile you can edit and use:
#
# Sample Containerfile for running the Rumble Explorer in a container, with screenshot support.
#
FROM debian:stable-slim
WORKDIR /opt/rumble
RUN apt update && \
apt install -y chromium # add wireless-tools if you want WiFi scanning
# Set the AGENT_URL build argument to your organization's explorer download
# URL; in the Rumble Console you can click the first URL box to copy it to
# the clipboard.
ARG AGENT_URL
#
# This ID is used to track the explorer even if the container is rebuilt.
# Set it to a unique 32 character hex ID. You can generate one via:
#
#    $ openssl rand -hex 16
#
# ENV RUMBLE_AGENT_HOST_ID=112233445566778899aabbccddeeff
#
# If you need to set environment variables to change the explorer behavior,
# you can do so via the ENV directive. Example:
#
# ENV RUMBLE_AGENT_LOG_DEBUG=true
ADD ${AGENT_URL} rumble-explorer.bin
RUN chmod +x rumble-explorer.bin
# For full functionality the Rumble scanner needs to send and receive raw
# packets, which requires elevated privileges.
USER root
# The argument manual tells Rumble not to look for SystemD or upstart.
ENTRYPOINT [ "/opt/rumble/rumble-explorer.bin", "manual"]
This containerfile works with podman as well as Docker. Note that because of the requirement for root privileges, you should start the container as root.
## Automated installations
The explorer will automatically install when executed if root or administrative privileges are available.
On Linux and BSD systems, automatic installation depends on the presence of a supported init service like systemd or upstart. If no supported init service is found, the explorer will instead run in manual mode, automatically overwriting and re-executing itself with each update. To automatically deploy an explorer on systems without a supported init service, the explorer should be executed in the background and with the nohup wrapper.
On Windows systems, the explorer will automatically install when run interactively or when the updater parameter is passed to the binary. For environments where MSIs are required, the Explorer MSI wrapper can be used to deploy an explorer from the Rumble Console or a local mirror.
|
Question
# Effective capacitance of parallel combination of two capacitors and is . when these capacitors are individually connected to a voltage source of , the energy stored in the capacitor is 4 times that of . If these capacitors are connected in series, their effective capacitance will be:
A
B
C
D
Medium
JEE Mains
## Solution
Verified by Toppr
Correct option is C)
Given that the parallel combination $$C_1 + C_2$$ equals the value stated in the question ...(i). When the capacitors are connected individually to the same voltage source $$V$$, the energies stored are $$E_1 = \frac{1}{2}C_1V^2$$ and $$E_2 = \frac{1}{2}C_2V^2$$. Given that the energy stored in one capacitor is four times that stored in the other, its capacitance is four times the other's, say $$C_1 = 4C_2$$ ...(ii). From (i) and (ii) the individual capacitances follow. When $$C_1$$ and $$C_2$$ are connected in series, the equivalent capacitance of the combination is given by $$C_{eq} = \frac{C_1 C_2}{C_1 + C_2}$$.
|
## Saturday, November 03, 2018
### Complexity, simulations in cosmology are pseudoscience
Three days ago, I discussed a new paper by Susskind that promoted the idea that the quantum theory of black holes can be and should be rephrased in terms of the complexity theory – basically a branch of computer science. It seems to me that some people who defended Susskind's view were pure computer scientists who had no idea about physics – and the very meaning of the word "physics" – at all.
But Susskind's paper was probably not the best one to explain what is really so utterly irrational about the attempts to rebrand fundamental physics as a part of computer science. Meanwhile, David Brown asked me about the 2017 paper
Computational complexity of the landscape II - Cosmological considerations
by Denef, Douglas, Greene, and Zukowski. I have known the three male co-authors well and I think that they're powerful minds but writing things like that is just plain stupid. The boldly phrased paper has 8 followups after 16 months so I believe it's right to say that almost all the people in the field share my skepticism. But it's normal to express the skepticism by silence and lack of interest. However, science is really powerful in clearly proving things to be wrong – not right – and because this whole line of reasoning is wrong, it's appropriate to discuss why.
First, amusingly enough, the 2017 paper is titled as the second part following the first part of the paper. That's cute except that the first part was published in 2006, more than 11 years earlier:
Computational complexity of the landscape I (Denef, Douglas)
Those who noticed the numeral "I" in the title were waiting for a "companion paper" cited as
[48] F. Denef and M. R. Douglas, “Computational Complexity of the Landscape II: Cosmological Considerations,” to appear.
Well, it was going to appear – but 11 years later and with a doubled number of authors. I think that this unexpected delay indicates that Denef and Douglas pre-decided to write a paper with certain conclusions before they knew whether the evidence adds up. And that's just wrong.
OK, the 2006 paper shows that the problem to find a vacuum with a tiny cosmological constant in the "Bousso-Polchinski model of a discretuum of very many random flux vacua" is NP-complete, in the usual computer scientists' sense of the term, and that's important because there's a possibility that
...even if we were to find compelling evidence that some vacuum of string theory describes our universe, we might never be able to find that vacuum explicitly.
As you can see, there are two very different assertions in that paper. One of them is very technical – namely that a problem analogous to the traveling salesman problem (which is NP-complete) is indeed analogous and NP-complete, too. The second one is that we should basically give up the search for additional details about the laws of physics. They more or less claim that the first implies the second. Does it?
The implication surely doesn't exist as a solid logical one – and their suggestion that it is strong enough evidence is pure ideology.
To be sure that you understand the meaning of the key word here, the 2000 Bousso-Polchinski paper was an early toy model for the "string theory landscape". They suggested that a stringy compactification on a qualitatively realistic compactification manifold may be decorated with one hundred or so extra integers $$K_i$$ where $$i=1,2,\dots 100$$, the generalized electromagnetic fluxes through non-contractible cycles (submanifolds) of the compactification manifold.
If the fluxes $$K_i$$ may be assumed to be between $$1$$ and $$100$$, then you have about $$100^{100}$$ (squared googol, with my choice of numbers) possible values of the 100-tuplets $$\{K_i\}$$. The cosmological constant depends on the numbers $$K_i$$ in some rather generic way (it is typically increasing) but the consequences will be similar if we simplify the dependence to something like $\Lambda = -1 + \sum_{i=1}^{100} f_i K_i$ with some fixed random values of the coefficients $$f_i$$. Some of the choices of $$K_i$$ may accidentally produce $$\Lambda$$ that is extremely close to zero, imagine $$|\Lambda| \leq 10^{-122}$$. But those are basically random choices of the integers that randomly produce a physically interesting result, one with a small $$\Lambda$$, even though there is nothing fundamentally interesting about them.
If that is so and if you want to find the right vacuum with the small $$|\Lambda|$$, you basically need to go through a majority of the "googol squared" possibilities by brute force, one by one, and it can't be done in a realistic time, and that's why we can never find the right assignment of the fluxes in practice.
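To make the brute-force scaling concrete, here is a tiny Python sketch of that kind of search in a toy model of the form above; the values of $$N$$, the range of the fluxes, and the coefficients are made up for illustration and have nothing to do with the actual Bousso-Polchinski data. The only point is that the loop visits every one of the $$K_{max}^N$$ assignments, which is hopeless once $$N$$ approaches one hundred.
import itertools
import random

# Toy brute-force search over flux assignments (all numbers are illustrative):
# Lambda = -1 + sum_i f_i * K_i with each K_i in 1..K_max; keep the assignment
# whose Lambda is closest to zero. The cost is K_max**N evaluations.
N, K_max = 8, 4  # 4**8 = 65,536 assignments; N ~ 100 would be astronomically worse
random.seed(0)
f = [random.uniform(0.0, 0.05) for _ in range(N)]

best_K, best_lam = None, float("inf")
for K in itertools.product(range(1, K_max + 1), repeat=N):
    lam = -1.0 + sum(fi * Ki for fi, Ki in zip(f, K))
    if abs(lam) < abs(best_lam):
        best_K, best_lam = K, lam

print(best_K, best_lam)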
Does it mean that it has been shown that you cannot find the right vacuum in string theory? No, because:
1. It is not clear at all whether the right vacuum is a nearly generic element of some huge set of candidates – so that the number of candidates is comparable to a googol or more: the anthropic if not multiverse paradigm may be wrong and our vacuum might be rather special, e.g. the heterotic compactification to one of the simplest orbifolds
2. Even if it were an element of such a huge set, it may refuse to be a generic element and some early cosmological "vacuum selection" processes may prefer an element that is also easier to be found by physicists (just like by Nature)
3. Even if our vacuum were an element of a huge set and even if it were a generic element, there may exist special properties of the assignment of the cosmological constant – roughly speaking, special properties of the coefficients $$f_i$$ in the model above (but that model isn't an actual accurate Ansatz describing string theory precisely!) – that allow a much faster algorithm to search for the promising options. For example, some UV/IR connections may encode the small cosmological constant into some UV properties of the string vacuum
These are three huge loopholes – and none of them has really been excluded. For this reason, Denef and Douglas simply wrote a rationalization of a conclusion that was predetermined, namely that we should believe it's impossible to find the right stringy compactification. As far as I can see, this rationalization doesn't significantly differ from any sophistry – including sophistry from theologians – that argues that humans are worms too small to crack the greatest secrets of God.
Lots of such people have presented incomplete, intrinsically ideological, arguments to make you think that science is hopeless – at least the search for the truly deep insights about the Universe cannot succeed. It may be true but it may be false. As long as your "proof" is incomplete to the extent that loopholes exist and are perfectly conceivable, you simply shouldn't claim that you have made a big step towards proving one possible answer. Their paper basically tells you "you should overlook the loopholes" and has no evidence for it – so the paper is propaganda trying to manipulate, not a package of persuasive evidence.
NP-completeness is an absolutely inapplicable label for any calculation or decision problem within string theory
In 2006, Douglas and Denef were really addressing two very different problems – and the whole "apparent power" of their paper was based on the suggestion that these problems "are the same" even though they are not. One of these problems is a technical problem similar to the traveling salesman:
Decide about the number $$N$$ – it was one hundred in my example – of cities or non-contractible cycles. Find the fastest algorithm that needs at most $$T(N)$$ operations, where $$T(N)$$ is the maximum number of steps the program needs, maximized over the exponentially many possible values of the parameters such as $$f_i$$ – the coefficients in front of the fluxes, the distances between the cities, etc. Study how this maximized $$T(N)$$ scales with $$N$$ – with its powers and exponentials – as $$N\to \infty$$.
By construction, this is a standardized computer science problem which is similar to the traveling salesman problem. And indeed, it may be shown that it is "equally parametrically difficult" in the computer scientists' understanding of the equivalence. But do equivalent problems exist within string theory?
Not really. Why? Because string theory is a unique theory. Its set of vacua and their properties are completely uniquely determined. The search for a vacuum that obeys some properties is a single and specific problem. This problem isn't parameterized by any $$N$$ at all. For example, the number of cycles of a Calabi-Yau three-fold is believed to be bounded (by a thousand or so) which means that you cannot send any such hypothetical $$N\to\infty$$ and discuss the asymptotic behavior of the "complexity" for large $$N$$ at all.
On top of that, even if you decided that some value of $$N$$ is fair for the "actual problem to search for a good stringy vacuum", the definition of the complexity wouldn't involve any maximization of the time over possible values of $$f_i$$, the "distances between the cities", because all these constants $$f_i$$ are completely uniquely determined by string theory.
In fact, all the amplitudes in all string vacua should be considered elements of a class of special functions of a new stringy kind. String theory is a unique theory much like $$\zeta(s)$$ is a unique function with certain properties. So all functions that describe aspects of string theory are unique and important, obey lots of identities, and there are usually lots of simplifications and alternative ways to determine all these functions. Any suggestion that these functions simply "have to" be searched for by the stupidest brute-force method because they're just some random gibberish is bound to be wrong. To say the least, the statement about the "gibberish" hasn't been demonstrated and it seems unlikely to ever be. The properties of string vacua weren't picked by any simple random generator – so they probably disagree with the numbers that you would get by a simple random generator.
So when you want to find a compactification with some properties, it's not the search for the "worst case scenario". Instead, it's analogous to the traveling salesman problem for a single particular distribution of the cities that the salesman should visit. And to be sure, one can arrange the cities so that the shortest path through them is found very quickly – and you can even quickly prove that it is indeed the shortest one, as the sketch below illustrates.
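Here is a small R sketch of that point (mine; it assumes the combinat package is available for the brute-force check). For cities placed on a circle, the tour that visits them in angular order is provably the shortest one, and it is found by a single sort rather than by an exponential search.

```r
# Cities on a circle: the angular-order tour is the shortest, found by one
# sort. The brute-force check uses combinat::permn and a small n so that the
# full enumeration (7! tours) finishes quickly.
set.seed(1)
n     <- 8
theta <- runif(n, 0, 2 * pi)
pts   <- cbind(cos(theta), sin(theta))             # cities on the unit circle

tour_len <- function(ord, pts) {
  p <- pts[c(ord, ord[1]), ]                       # close the loop
  sum(sqrt(rowSums(diff(p)^2)))
}

easy  <- tour_len(order(theta), pts)               # visit in angular order
brute <- min(sapply(combinat::permn(2:n),          # exhaustive check
                    function(p) tour_len(c(1, p), pts)))
c(easy = easy, brute = brute)                      # the two lengths coincide
```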
Now, is the stringy problem analogous to the "worst case scenario" or one of the "easy or easier examples" of the traveling salesman problem? Douglas and Denef didn't really have any evidence for either answer to this fundamental question. They assumed that the stringy problem is close to the "worst case scenario", and then they proudly "almost proved" that the hopes are close to the "worst case scenario". Their reasoning was absolutely circular.
And I am generously overlooking the fact that even for the "worst case scenario", it hasn't really been proven that no reasonably fast algorithm exists. In particular, $$P=NP$$ is still possible. But even if you decided to believe that $$P\neq NP$$ is a safe enough assumption, my point is that they're making very many additional – and perhaps stronger – assumptions on top of that. Their conclusions almost trivially follow from these assumptions and they celebrate these conclusions as if they demonstrated something nontrivial. But they haven't.
The paper's role was an ideological one, a support for the defeatist attitude. Don't look for additional facts about the right theory of Nature or the right compactification. You're just a little germ who can't find anything. This ideology could have been used – and has been used – to discourage people from science at many moments in the past. Some people continued doing proper research and they have made huge progress, however. Of course the ideology "science would never make substantial progress again" was always based on some rationalization or predetermined pessimistic conclusions and the arguments always assumed some "worst case scenario". These pessimistic claims always assumed that there would be no new patterns and the remaining unknown facts about Nature would be impenetrable random gibberish. But there were always new patterns, disagreeing with the "random gibberish" assumptions. Science has repeatedly shown that these assumptions were way too strong – Nature has no reason to pay lip service to "worst case scenarios".
OK, the 2017 paper has two more authors and assumes that the reader buys everything the two chaps wrote in 2006. But their thinking is even more unscientific than the thinking in the 2006 paper. Among other things, it's all about "simulations of the multiverse".
You know, I translated Greene's popular book on the multiverse, and a chapter is dedicated to Ms Simulator – all of us may live in Her computer game. It's OK to include such a chapter in a popular book of this kind – but mostly for entertainment reasons. To think that this is really how research in cosmology may be done is far worse.
In the abstract, the Lady and Gentlemen announce that they incorporate complexity into the "measure factors" that are considered in many papers about the multiverse. It already sounds bad but the following sentence of the abstract must make you say "WTF":
By defining a cosmology as a space-time containing a vacuum with specified properties (for example small cosmological constant) together with rules for how time evolution will produce the vacuum, we can associate global time in a multiverse with clock time on a supercomputer which simulates it.
First, the authors decide to "define cosmology" (they really mean "redefine cosmology") as a spacetime containing a vacuum with specified properties. Why should "cosmology" – something that should represent the science about the Cosmos, something that exists independently of our desires – be "defined" by arbitrary properties that humans have specified?
If "cosmology" has some rules, it may also produce spacetime that do not obey these properties invented by humans. If the deepest known rules of cosmology that we have also produce spacetimes where the cosmological constant is never tiny, then these spacetimes are still products of cosmology according to the deepest known rules of cosmology. Saying that you can invalidate this principle – basically a tautology – by "defining cosmology" in your own way is utterly irrational.
You can't define whole disciplines of science to agree with random constraints that you invented. Instead, the purpose of disciplines of science is to decide whether your assumptions about the Cosmos and other things are correct. If there is a disagreement between the best theory and your assumptions, it's your assumptions that are wrong according to science.
So I think that this thinking about "defining cosmology" involves a misunderstanding of the basic logic of the scientific method. Like in so many other cases, the authors simply want to make up constraints that they find psychologically pleasing and dictate what properties the final laws of physics should obey.
But another problem with the first part of the sentence is that they think that research may be done by dividing objects into classes that obey or don't obey some cherry-picked properties. But this is a characteristic procedure for social sciences, not natural sciences. You know, social sciences may divide organisms into humans and non-humans – and assign vastly different rights to the humans than to the non-humans, despite the fact that the differences between pairs of humans are often comparable to the differences between some humans and some non-humans.
I am not saying that it's wrong to grant civil rights to humans, none to animals, and draw a thick line between them. It's a convention that works fine for most societies. But the thick line is a social construct. Natural scientists know that nothing like that exists at the fundamental level. When a geneticist can distinguish a chimp from a human, she can also distinguish two humans from each other. The idea that some qualitative properties that may distinguish two objects are metaphysically more important than all other parameters is a pure superstition, something that no real scientist may believe. Physics and the other natural sciences are quantitative, so they don't really categorize real-world objects into boxes by inventing arbitrary thick lines.
However, these superstitions are common among the fans of the anthropic principle. They divide Universes by thick lines into those that contain "intelligent beings" and those that don't. But the definition of an "intelligent being" contains a randomly cherry-picked subset of properties of humans or beings in our Universe. Why did you require some properties and not others, Gentlemen? This whole procedure is another social convention. It self-evidently cannot have any true physical significance.
If you use the existence (somewhere in the Universe) of beings that have some human properties as a condition to pick the vacua or Universes, it's just fine – because the existence of objects sharing some features with the humans is an experimentally proven fact. It's a fact simply because humans have been observed. But by describing the properties of humans in some "neutral language", you don't make your explanation less dependent on the empirical data. And to cherry-pick some properties of our Universe while "pretending the ignorance" of others is just utterly irrational. Once you are allowed to use the existence of animals as a criterion to pick the vacua, you're also allowed to use the value of the fine-structure constant $$\alpha\approx 1/137.036$$ or so – and all other observed facts.
You can play a game in which you challenge yourself and try to find your vacuum as accurately as you can by using just some empirically observed facts. But it's just an arbitrary game, not science. A scientist is always allowed to use all empirically known facts to refine his knowledge of the right theory and/or parameters that need to be substituted to the theory. Indeed, it's the goal of science to extract the right theory by studying the empirical facts cleverly! At the end, the set of empirical facts that are sufficient to identify the right theory may be greatly reduced. But you don't know the reduced collection of facts from the beginning – you can only determine this collection when the correct theory is found.
But I had to laugh when I read the words about "defining the cosmic time for the whole multiverse as the time shown by the simulation". What!? What can it possibly mean, and why would you write such a thing in a paper posted to a physics archive? Which simulation do they discuss? What is the exact program to simulate the Universe? Does this simulation properly reflect the actual laws of physics? If it does not, why would a random caricature of physics – some computer game – be relevant for physics? And if it does, why not discuss physics directly instead of its simulations?
In Greene's book about the multiverse, he discussed a scenario in which he is an NPC in a computer simulation and the boss – Miss Simulator (probably George Soros with a lipstick) – decides to kick him out of the computer game because Greene says something politically incorrect. That was cute but I was assuming that he was just mocking religions. When I saw this paper with Denef and others, it seems he was damn serious. He wants everyone to be an NPC who just blindly worships some hypothetical Miss Simulator who is in charge of the Universe. This is not only "like" religions. It is completely isomorphic to religions.
If a programmer writes a computer game marketed as a simulation of the multiverse which has some cosmic time, it doesn't mean that her choice of the cosmic time agrees with how cosmic time works in physics. The cosmic time may be incorporated in tons of ways – some of them are more physically realistic, others less so. In fact, just our talk about the cosmic time in a simulation doesn't even imply that it makes any sense to define a universal time in the multiverse. Different patches of the multiverse may very well be mutually exclusive. By the horizon complementarity, the quantum fields in different patches may refuse to commute with each other. They don't have to "exist simultaneously" at all.
Just because you envision a would-be authoritative "programmer who created a simulation" along certain lines doesn't mean that you have any evidence that these lines are physically correct, sensible, or realistic.
Simulations and computer games may strikingly differ from reality and in most cases, they do. NPCs in computer games don't really behave like intelligent humans because they have lots of limitations. Computer games often allow things that are prohibited in the real world – such as the superluminal motion of rockets. On the other hand, computer programs are often unable to do things that are trivial to do for Nature – such as the calculation of the energy spectrum of a complicated molecule.
If you allow imperfect simulations, the imperfections may be huge and sufficient for a sensible person to see that simulations and the reality are completely different things. You may hypothetically think about some very precise representations of the laws of physics. But if you don't know something about these laws of physics, just talking about the "equivalent simulation" won't bring you any closer to the answers.
In the end, I think that the authors think like the social pseudoscientists. They think that someone – like a coder – may be placed above physics and physicists. He or, more likely, she studies the world – including the multiverse – with some categorization that would be good enough for comparative literature, by arbitrarily defining cosmic time in some extremely stupid ways, and so on, and physicists are obliged to take this stuff seriously.
It is pretty much exactly like the postmodern sociologists or anthropologists who want to study the scientific community using similar methods they use to study savages in Polynesia. Can't a sociologist simply stand above the physicists and understand everything that is truly important about them, their community, and their activities – much more than they understand it themselves?
Well, it's not possible. A social scientist is still a relatively clueless moron. If she weren't a moron, she could become a theoretical physicist instead of a social scientist. She may be smarter than savages in Polynesia but she's not smarter than physicists, at least the bright ones. So she's simply not standing above the physicists and, by superficially looking at some people's behavioral patterns, she still completely misses the key things. The key things do depend on the validity of the theories, the strength of the evidence, and the arguments. If she understands nothing about those, she can't understand anything truly important about the interactions between physicists! She's still similar to a puppy who learns the right reaction to several words used by the owner. By learning them, the puppy doesn't become a top expert in physics or neuroscience.
Denef et al. did something analogous to those sociologists or anthropologists. They envisioned some hypothetical authority, a programmer, and made guesses about her choice how to write a program. And because She is such a divine figure in our multiverse, Her choices must be considered serious insights of physics. I am sorry, Lady and Gentlemen, but readers with the IQ above 70 still see that those are your choices, not a divinity's choices, and they see that there is no evidence that you have found any picture that makes sense. Even if that divine programmer existed, it would still be just a simulation that could give a misleading picture about physics.
The simplest point they seem not to get is that programming, categorization, social sciences and all activities like that are emergent – they cannot possibly be fundamental in the sense of fundamental physics. This statement is tautologically true, it is true by construction. We know that animals, humans, societies, their conventions, and also computer programs have evolved from the pre-existing laws of physics. So no insight about these complex things – humans, societies, programs – can teach us any reliable insights about the fundamental laws of physics. Do they really disagree with this trivial assertion?
In particular, if you pick some random conventions – basically social conventions or some conventions extracted from your arbitrary assumptions about how some simulation of a multiverse should be written – it is absolutely obvious that a measure that you "calculate" out of these conventions is just another convention. Garbage in, garbage out. In fact, you have inserted some arbitrary garbage as the starting point but you have manipulated it in some even weirder and more arbitrary way so the "measure" you ended up with must be an even greater garbage than what you assumed at the beginning.
The main verdict is that there are no justified results or conclusions backed by arguments in such papers. It's just about the transformation of some garbage into another garbage. The last paragraph of their introduction says:
Finally, we make some comments about a more abstract version of this discussion, which defines the complexity class of a cosmology. Our proposal was inspired by computational complexity theory, and particularly the idea of computational reduction. Can we give meaning to questions such as “is the problem of finding a vacuum with small cosmological constant in P, NP or some larger complexity class?”
No, you can't give a meaning to such questions. As I said, finding a string vacuum isn't a problem parameterized by an adjustable $$N$$ and adjustable parameters $$f_i$$. But more generally, you are mixing up complexity and cosmology even though you have absolutely nothing coherent to say about the union – but you know that such a mixture will be welcome by certain people for basically ideological reasons (it may be welcome e.g. by coders with a big ego who want to be told that by being coders, they indirectly know everything important about physics as well – and perhaps they are analogous to God). But this is very bad science.
The paper has 57 pages and one could write 570 pages to clarify why many detailed assertions in the paper are ludicrous. For example, by worshiping Miss Simulator, they claim to "solve the Boltzmann Brain problem", among others. But the "Boltzmann Brain problem" is just another pseudo-problem that arose from irrational ways to think about the Universe – ways that are completely analogous to this paper. We can easily empirically exclude the theory that we're Boltzmann Brains – and no theory that has actually been successful in science predicts that we should be Boltzmann Brains. Only completely flawed and irrational applications of the probability calculus and crackpot theories about cosmology suggest that we "should be" Boltzmann Brains.
Developing a theory that is free of the problem "the theory is predicting that we are the Boltzmann Brain" isn't a difficult task – you just need to throw away the stupidest possible approaches to probability and physics. Because it's not a difficult task, it's ludicrous to view the "cure for the Boltzmann Brain problem" as significant evidence that your theory of physics is valid.
|
# How do you graph x> -3?
$x > - 3$
This is said as "$x$ is greater than $- 3$."
To graph it, draw a number line, put an open circle at $- 3$, and shade (or draw an arrow over) everything to the right of it. The open circle on $- 3$ means that $- 3$ itself is not a solution (but anything greater than it is).
|
# Test of Equality Between Two Densities
Are returns this year actually different than what can be expected from a typical year? Is the variance actually different than what can be expected from a typical year? Those are fairly light, easy to answer questions. We can use tests for equality of means or equality of variances.
But how about the following question:
Is the profile/behavior of returns this year different from what can be expected in a typical year?
This is a more general and important question, since it encompasses all moments and tail behavior. And it is not as trivial to answer.
In this post I am scratching an itch I have had since I wrote Understanding Kullback – Leibler Divergence. In that post we saw how to quantify the difference between densities, exemplified using the SPY return density per year. Once I was done with it I kept thinking there must be a way to test the difference formally, rather than just quantify, visualize and eyeball it. And indeed there is. This post's aim is to show how to formally test for equality between densities.
There are in fact at least two ways in which you can test equality between two densities, or two distributions. The first is the more classic one: the Kolmogorov–Smirnov test. The other is more modern, using a permutation test (which requires simulation). We show both. Let's first pull some price data:
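The code block of the original post is not reproduced here; the following is a sketch of what the data pull could look like, assuming the quantmod package and an arbitrary date range. The ticker, dates and variable names are my choices for the illustration.

```r
# Pull SPY prices, compute daily returns, and split 2018 vs. the rest
library(quantmod)

getSymbols("SPY", from = "2010-01-01", to = "2018-12-31", auto.assign = TRUE)
r_xts <- dailyReturn(Ad(SPY))                 # daily returns on adjusted close
ret   <- as.numeric(r_xts)
yr    <- format(index(r_xts), "%Y")           # year of each observation

ret_2018 <- ret[yr == "2018"]
ret_rest <- ret[yr != "2018"]

c(mean(ret_2018), sd(ret_2018))
c(mean(ret_rest), sd(ret_rest))
```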
We can see that the mean and standard deviation of the daily returns for 2018 are a bit different from the mean and standard deviation of the rest. This is how the estimated densities look:
## Kolmogorov–Smirnov test
What we can do is compute the cumulative distribution function for each of the densities: the one for 2018 and the one excluding 2018. Say $F_{2018}(x)$ is the empirical distribution for 2018 and $F_{rest}(x)$ is the one for the rest. We compute the difference between them for each of the x's. We know how the maximum of those (absolute) differences is distributed, so we can use that maximum as a test statistic; if it turns out too far out in the tails, we decide the two distributions are different. Formally, but with a somewhat lax notation:
$D = \max_x \left| F_{2018}(x) - F_{rest}(x) \right|,$
where $D$ is between 0 and 1 (by construction, since we subtract two probabilities and take the absolute value). Under the null of equal distributions, $D$, properly rescaled by the (effective) sample size, converges to the supremum of the absolute value of a Brownian bridge. The limit itself is not super interesting; all you should care about is that the (maximum of the) difference has a known distribution. It is a limit distribution, so we need a large number of observations, n, to have confidence in this test.
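As a sanity check, the statistic can be computed by hand from the two empirical CDFs (a sketch; ret_2018 and ret_rest are the vectors from the data-pull sketch above):

```r
# The KS statistic "by hand": maximum absolute gap between the two ECDFs
F1 <- ecdf(ret_2018)
F2 <- ecdf(ret_rest)
xs <- sort(c(ret_2018, ret_rest))      # evaluate at all observed returns
D  <- max(abs(F1(xs) - F2(xs)))
D                                      # same statistic that ks.test reports
```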
#### Kolmogorov-Smirnov test – R code
Let’s compare 2018 daily return with the rest of the returns to see if the distribution is the same based on the Kolmogorov-Smirnov test:
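A sketch of the call, using the return vectors built above; for its own sample the post reports a maximum difference of 0.067 and a p-value of 0.3891.

```r
# Two-sample Kolmogorov-Smirnov test: 2018 returns vs. all other years
ks.test(ret_2018, ret_rest)
```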
Fast and painless. We see that the maximum is 0.067 and that based on the limiting distribution the p-value is 0.3891. So no evidence that the distribution of 2018 is in any way different than the rest.
Now let's look at the permutation test. The main reason to do so is that for the Kolmogorov-Smirnov test to be valid, given that it is based on a limiting distribution, we need a large number of observations. But nowadays we don't have to rely on asymptotics as much as we did in the past, because we can use computers.
## Permutation test of equality between two densities
Intuitively, if the densities are exactly the same, we can bunch them together and sample from the “bunched data”. In our example, because we gathered the returns into one vector, permuting the vector implies that the daily returns from 2018 are now scattered across the vector, so taking a difference as in the equation above is like simulating from a null hypothesis: the distribution of 2018 daily returns is exactly the same as the rest. Now for each x we would have a difference under the null. We also have the actual difference for each x, from our observed data. We can now square (or take absolute values of) the actual difference between densities (per x), and compare it to our simulation results which were generated from the “bunched data”. The p-value can be estimated by looking at which quantile the actual difference falls within the simulated differences. If the actual data falls way outside the range of the distribution (of aggregated squared differences) under the null then we would reject the hypothesis that the distributions are the same.
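Here is a hand-rolled sketch of that procedure (the sm package call shown below does this more carefully); the grid size, the number of permutations and the squared-difference statistic are choices made for the illustration.

```r
# Permutation test of equal densities, done by hand
pooled <- c(ret_2018, ret_rest)
n1     <- length(ret_2018)
lo <- min(pooled); hi <- max(pooled); ngrid <- 100

# Aggregated squared difference between two kernel density estimates on a grid
dens_diff <- function(x1, x2) {
  d1 <- density(x1, from = lo, to = hi, n = ngrid)$y
  d2 <- density(x2, from = lo, to = hi, n = ngrid)$y
  sum((d1 - d2)^2)
}

obs <- dens_diff(ret_2018, ret_rest)    # observed difference

set.seed(1)
perm <- replicate(1000, {
  idx <- sample(length(pooled))         # scatter the 2018 days across the vector
  dens_diff(pooled[idx[1:n1]], pooled[idx[-(1:n1)]])
})

mean(perm >= obs)                       # permutation p-value
```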
#### Density comparison permutation test – R code
There is a fantastic package called sm ("sm" for smoothing methods). There is also an accompanying book (see the reference below).
We use the function sm.density.compare from that package to do what was just described. The two arguments nboot and ngrid are the number of simulations you would like to run and the number of grid points across x used when computing the difference between the densities. So ngrid=100 would "chop" the support into 100 points.
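A sketch of the call, again using the return vectors built earlier; the argument values are illustrative.

```r
# sm.density.compare with model = "equal" runs the test of equal densities
# and prints the p-value; nboot and ngrid are passed through to sm.options.
library(sm)

x   <- c(ret_2018, ret_rest)
grp <- factor(c(rep("2018", length(ret_2018)),
                rep("rest", length(ret_rest))))

sm.density.compare(x, group = grp, model = "equal",
                   nboot = 1000, ngrid = 100)
```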
We can see that the p-value is not very different from what we got using the Kolmogorov-Smirnov test. This is how it looks:
Test of equal densities: p-value = 0.326
Of course, there is more nitty gritty to discuss, but the itch is gone.
## References

Bowman, A. W. and Azzalini, A. (1997). Applied Smoothing Techniques for Data Analysis: The Kernel Approach with S-Plus Illustrations. Oxford University Press.
### 2 comments on “Test of Equality Between Two Densities”
1. Chris Ryan says:
This is a very interesting package. In the plot resulting from sm.density.compare, how can you tell which line represents which group? How did you add the embellishments to the graph displayed here, like the key/legend? Thanks
|
# Question 62519
Apr 12, 2015
Here's how you'd go about solving this problem.
Your reaction will initially form liquid water at ${26}^{\circ} \text{C}$. This implies that the released energy will have to account for
• Heating the water from ${26}^{\circ} \text{C}$ to ${100}^{\circ} \text{C}$;
• Converting the water from liquid at ${100}^{\circ} \text{C}$ to vapor at ${100}^{\circ} \text{C}$ – this represents a phase change;
• Heating the water vapor from ${100}^{\circ} \text{C}$ to whatever the final temperature will be.
So, start with the balanced chemical equation for the formation of liquid water
$2 {H}_{2 \left(g\right)} + {O}_{2 \left(g\right)} \to 2 {H}_{2} {O}_{\left(l\right)}$
You know that the standard enthalpy of formation for water is $- \text{285.8 kJ/mol}$. Notice that this value is expressed per mole.
SIDE NOTE The standard enthalpy of formation is defined at 25 degrees Celsius, but since your temperature is very close to that, I'll assume it to be equal to the value measured at standard conditions.
However, your reaction doesn't produce 1 mole, it actually produces 6 moles of water. Because hydrogen gas and water have a $1 : 1$ mole ratio, and since oxygen is not acting as a limiting reagent, the number of moles of water will be equal to the number of moles of hydrogen gas.
So, 6 moles of hydrogen gas react $\to$ 6 moles of water will be produced. As a result, the total heat given off by the reaction will be
$\Delta H = 6\ \text{mol} \times \left(- \frac{285.8\ \text{kJ}}{\text{mol}}\right) = -1714.8\ \text{kJ}$
Since you have 6 moles of water, you'll have
$6\ \text{mol} \times \frac{18.015\ \text{g}}{1\ \text{mol}} = 108.1\ \text{g of water}$
So, to heat water from ${26}^{\circ} \text{C}$ to ${100}^{\circ} \text{C}$, you need
$q = m \cdot c \cdot \Delta T$
$q_1 = 108.1\ \text{g} \times 4.18\ \frac{\text{J}}{\text{g} \cdot {}^\circ\text{C}} \times (100 - 26)^\circ\text{C} = 33.44\ \text{kJ}$
To go from liquid at ${100}^{\circ} \text{C}$ to vapor at ${100}^{\circ} \text{C}$
$q_2 = 108.1\ \text{g} \times 2260\ \frac{\text{J}}{\text{g}} = 244.3\ \text{kJ}$
From this point on, you'll use the heat given off by the formation reaction to heat the vapor from ${100}^{\circ} \text{C}$ to whatever the final temperature will be.
The remaining amount of energy will be
$q_\text{remaining} = q_\text{given off} - q_1 - q_2$
$q_\text{remaining} = 1714.8 - 33.44 - 244.3 = 1437.1\ \text{kJ}$
Now solve for the final temperature of the water vapor
q_"remaining" = m * c_"vapor" * (T_"final" - 100)
$1437100 \cancel{\text{J") = 108.1cancel("g") * 2.04cancel("J")/(cancel("g") * ^@cancel("C")) * (T_"final" - 100)^@cancel("C}}$
${T}_{\text{final" = "1459152.4"/220.524 = 6616.8^@"C}}$
Rounded to two sig figs, the number of sig figs given for 6.0 and 3.0 moles, the answer will be
T_"final" = color(green)(6600^@"C")#
|
# What properties of busy beaver numbers are computable?
The busy beaver function $\text{BB}(n)$ describes the maximum number of steps that an $n$-state Turing machine can execute before it halts (assuming it halts at all). It is not a computable function because computing it allows you to solve the halting problem.
Are functions like $\text{BB}(n) \bmod 2$, or more generally $\text{BB}(n) \bmod m$ for a modulus $m$, computable? Computing these functions doesn't solve the halting problem, so the above argument doesn't apply.
• This seems like it might well depend sensitively on the details of your machine setup. – Chris Eagle Jan 7 '13 at 0:01
• Some discussion on this question: scottaaronson.com/blog/?p=46 – Dan Brumleve Jan 7 '13 at 0:44
• A variation: can it be shown that $\text{BB}(n)$ is composite infinitely often? This version is seemingly less sensitive to the encoding. – Dan Brumleve Jan 7 '13 at 4:02
• 1-D BB Turing machines are hard to visualize, so I made a page for 2-D Turing Machine BBs.. Once a 1-D Turing machine becomes predictable, it can be classified as halting or infinite. Thus, the point of predictability is the important point. This rarely happens elegantly. The champions tend to be machines that can be extended forward as they get into temporary predictable behaviors. – Ed Pegg Feb 26 '13 at 15:51
• – Andrés E. Caicedo Jul 23 '13 at 16:21
Define $\text{BB}(n)$ as the largest natural number whose Kolmogorov complexity (in a prefix-free binary language) is less than or equal to $n$ bits.
Consider $\text{BB}(n) \space \text{mod} \space 4^n$. This number has a Kolmogorov complexity less than $n + o(n)$, since it can be computed from $\text{BB}(n)$, and $K(\text{BB}(n)) \le n$.
Also consider $\lfloor \Omega \cdot 4^n \rfloor$ where $\Omega$ is Chaitin's constant. This number's Kolmogorov complexity is at least $2 \cdot n - o(n)$ bits (by definition of algorithmic randomness).
So,
$\text{BB}(n) \space \text{mod} \space 4^n \stackrel{?}{=} \lfloor \Omega \cdot 4^n \rfloor$
is computable since it is false for all but finitely many $n$.
Given the first $n$ bits of $\Omega$ it is possible to compute not just $\text{BB}(n)$ but all the $\text{BB}(i)$ for $i$ up to $n$. We can use this to turn the above statement sideways and say something about only the lower bits of each busy beaver number:
$K(\sum_{i \le n}{[4^i \cdot (\text{BB}(i) \space \text{mod} \space 4)]}) < n + o(n)$
implying that
$\sum_{i}{\frac{\text{BB}(i) \space \text{mod} \space 4}{4^i}}$
is not algorithmically random, and in particular,
$\Omega \ne \sum_{i}{\frac{\text{BB}(i) \space \text{mod} \space 4}{4^i}}$ .
A couple more observations:
There is a total computable function $\text{CC}:\mathbb{N}\rightarrow\mathbb{N}$ that inverts $\text{BB}$, i.e. $\text{CC}(\text{BB}(n)) = n$ for all $n \in \mathbb{N}$. It works like this: on input $k$, run every TM with $k$ or fewer states for $k$ steps, and return the fewest number of states of any that halted on the last step. For all $k$ there is a $k$-state machine that terminates in exactly $k$ steps, so there will be a smallest one. This implies immediately that Busy Beaver numbers have some computable properties, for example if $f$ is any computable function, then there is another computable function $g$ such that $f(n) = g(\text{BB}(n))$, namely $g(k) = f(\text{CC}(k))$. But also, we can make $f$ and $g$ be the same function: $\text{CC}$ is non-increasing so it has no cycles and at least one fixed point, call the computable function that finds it $\text{CC}^*$. So, $\text{CC}^*(\text{BB}(n)) = \text{CC}^*(n)$. For $\text{CC}^*$ to be non-trivial there need to be at least two fixed points, surely there always are, but if not just redefine $\text{CC}(k) = k$ on some particular $k$ which is not a $\text{BB}$ number.
On the other extreme, I believe there exists a total computable function $g$ such that $\sum{\frac{g(\text{BB}(n))}{2^n}}$ is algorithmically random: $g(k)$ computes the $\text{CC}(k)^{\text{th}}$ bit of $\Omega$ using the assumption that $\text{BB}(\text{CC}(k)) = k$. I think it should work to count all programs shorter than $\text{CC}(k)$ that terminate in at most $k$ steps (but more care is needed to describe this and prove that it is total).
|
Question:
# What is the molecular formula for a carbonate ion?
## The chemical formula is: HCO3- + H+ = H2O + CO2
In chemistry, an oxocarbon anion is a negative ion consisting solely of carbon and oxygen atoms, and therefore having the general formula CxOyn for some integers x, y, and n. The most common oxocarbon anions are carbonate, CO32−, and oxalate, C2O42−. There is however a large number of stable anions in this class, including several ones that have research or industrial use. There are also many unstable anions, like CO2− and CO4−, that have a fleeting existence during some chemical reactions; and many hypothetical species, like CO44−, that have been the subject of theoretical studies but have yet to be observed. Stable oxocarbon anions form salts with a large variety of cations. Unstable anions may persist in very rarefied gaseous state, such as in interstellar clouds. Most oxocarbon anions have corresponding moieties in organic chemistry, whose compounds are usually esters. Thus, for example, the oxalate moiety [–O–(C=O–)2–O–] occurs in the ester dimethyl oxalate H3C–O–(C=O–)2–O–CH3. In many oxocarbon anions each of the extra electrons responsible for the negative electric charges behaves as if it were distributed over several atoms. Some of the electron pairs responsible for the covalent bonds also behave as if they were delocalized. These phenomena are often explained as a resonance between two or more conventional molecular structures that differ on the location of those charges and bonds. The carbonate ion, for example, is considered to have an "average" of three different structures so that each oxygen has the same negative charge equivalent to 2/3 of one electron, and each C–O bond has the same average valence of 4/3. This model accounts for the observed threefold symmetry of the anion. Similarly, in a deprotonated carboxyl group –, each oxygen is often assumed to have a charge of −1/2 and each C–O bond to have valence 3/2, so the two oxygens are equivalent. The croconate anion also has fivefold symmetry, that can be explained as the superposition of five states leading to a charge of −2/5 on each oxygen. These resonances are believed to contribute to the stability of the anions. An oxocarbon anion CxOyn can be seen as the result of removing all protons from a corresponding acid CxHnOy. Carbonate CO32−, for example, can be seen as the anion of carbonic acid H2CO3. Sometimes the "acid" is actually an alcohol or other species; this is the case, for example, of acetylenediolate C2O22− that would yield acetylenediol C2H2O2. However, the anion is often more stable than the acid (as is the case for carbonate); and sometimes the acid is unknown or is expected to be extremely unstable (as is the case of methanetetracarboxylate C(COO−)4). Every oxocarbon anion CxOyn can be matched in principle to the electrically neutral (or oxidized) variant CxOy, an oxocarbon (oxide of carbon) with the same composition and structure except for the negative charge. As a rule, however, these neutral oxocarbons are less stable than the corresponding anions. Thus, for example, the stable carbonate anion corresponds to the extremely unstable neutral carbon trioxide CO3; oxalate C2O42− correspond to the even less stable 1,2-dioxetanedione C2O4; and the stable croconate anion C5O52− corresponds to the neutral cyclopentanepentone C5O5, which has been detected only in trace amounts. Conversely, some oxocarbon anions can be reduced to yield other anions with the same structural formula but greater negative charge. 
Thus rhodizonate C6O62− can be reduced to the tetrahydroxybenzoquinone (THBQ) anion C6O64− and then to benzenehexolate C6O66−. An oxocarbon anion CxOyn− can also be associated with the anhydride of the corresponding acid. The latter would be another oxocarbon with formula CxOy−n/2; namely, the acid minus n/2 water molecules H2O. The standard example is the connection between carbonate CO32− and carbon dioxide CO2. The correspondence is not always well-defined since there may be several ways of performing this formal dehydration, including joining two or more anions to make an oligomer or polymer. Unlike neutralization, this formal dehydration sometimes yields fairly stable oxocarbons, such as mellitic anhydride C12O9 from mellitate C12O126− via mellitic acid C12H6O12. For each oxocarbon anion CxOyn− there are in principle n−1 partially hydrogenated anions with formulas HkCxOy(n−k)−, where k ranges from 1 to n−1. These anions are generally indicated by the prefixes "hydrogen"-, "dihydrogen"-, "trihydrogen"-, etc. Some of them, however, have special names: hydrogencarbonate is commonly called bicarbonate, and hydrogenoxalate is known as binoxalate. The hydrogenated anions may be stable even if the fully protonated acid is not (as is the case for bicarbonate). The carbide anions, such as acetylide C22− and methanide C4−, could be seen as extreme cases of oxocarbon anions CxOyn−, with y equal to zero. The same could be said of oxygen-only anions such as oxide O2−, superoxide O2−, peroxide O22−, and ozonide O3−. Several other oxocarbon anions have been detected in trace amounts, such as a singly ionized version of rhodizonate.
Key: BVKZGUZCCUSVTD-UHFFFAOYSA-M In inorganic chemistry, bicarbonate (IUPAC-recommended nomenclature: hydrogen carbonate) is an intermediate form in the deprotonation of carbonic acid. It is an anion with the chemical formula HCO3−. Bicarbonate serves a crucial biochemical role in the physiological pH buffering system. The bicarbonate ion (hydrogen carbonate ion) is an anion with the empirical formula HCO3− and a molecular mass of 61.01 daltons; it consists of one central carbon atom surrounded by three oxygen atoms in a trigonal planar arrangement, with a hydrogen atom attached to one of the oxygens. It is isoelectronic with nitric acid . The bicarbonate ion carries a negative one formal charge and is the conjugate base of carbonic acid ; it is the conjugate acid of , the carbonate ion, as shown by these equilibrium reactions. A bicarbonate salt forms when a positively charged ion attaches to the negatively charged oxygen atoms of the ion, forming an ionic compound. Many bicarbonates are soluble in water at standard temperature and pressure, in particular sodium bicarbonate contributes to total dissolved solids, a common parameter for assessing water quality.][ Bicarbonate is alkaline, and a vital component of the pH buffering system of the human body (maintaining acid-base homeostasis). 70-75% of CO2 in the body is converted into carbonic acid (H2CO3), which can quickly turn into bicarbonate (HCO3−). With carbonic acid as the central intermediate species, bicarbonate – in conjunction with water, hydrogen ions, and carbon dioxide – forms this buffering system, which is maintained at the volatile equilibrium required to provide prompt resistance to drastic pH changes in both the acidic and basic directions. This is especially important for protecting tissues of the central nervous system, where pH changes too far outside of the normal range in either direction could prove disastrous (see acidosis or alkalosis). Bicarbonate also acts to regulate pH in the small intestine. It is released from the pancreas in response to the hormone secretin to neutralize the acidic chyme entering the duodenum from the stomach. In freshwater ecology, strong photosynthetic activity by freshwater plants in daylight releases gaseous oxygen into the water and at the same time produces bicarbonate ions. These shift the pH upward until in certain circumstances the degree of alkalinity can become toxic to some organisms or can make other chemical constituents such as ammonia toxic. In darkness, when no photosynthesis occurs, respiration processes release carbon dioxide, and no new bicarbonate ions are produced, resulting in a rapid fall in pH. The most common salt of the bicarbonate ion is sodium bicarbonate, NaHCO3, which is commonly known as baking soda. When heated or exposed to an acid such as acetic acid (vinegar), sodium bicarbonate releases carbon dioxide. This is used as a leavening agent in baking. The flow of bicarbonate ions from rocks weathered by the carbonic acid in rainwater is an important part of the carbon cycle. Bicarbonate also serves much in the digestive system. It raises the internal pH of the stomach, after highly acidic digestive juices have finished in their digestion of food. Ammonium bicarbonate is used in digestive biscuit manufacture. In diagnostic medicine, the blood value of bicarbonate is one of several indicators of the state of acid-base physiology in the body. 
The parameter standard bicarbonate concentration (SBCe) is the bicarbonate concentration in the blood at a 2COaP of 40 mmHg (5.33 kPa), full oxygen saturation and 36 °C. M: URI anat/phys/devp/cell noco/acba/cong/tumr, sysi/epon, urte proc/itvp, drug (G4B), blte, urte M: END anat/phys/devp/horm noco (d)/cong/tumr, sysi/epon proc, drug (A10/H1/H2/H3/H5) M: HRT anat/phys/devp noco/cong/tumr, sysi/epon, injr proc, drug (C1A/1B/1C/1D), blte M: DIG anat (t, g, p)/phys/devp/enzy noco/cong/tumr, sysi/epon proc, drug (A2A/2B/3/4/5/6/7/14/16), blte
Key: NKWPZUCBCARRDP-NUQVWONBAN Calcium bicarbonate, also called calcium hydrogen carbonate, has a chemical formula Ca(HCO3)2. The term does not refer to a known solid compound; it exists only in aqueous solution containing the calcium (Ca2+), bicarbonate (HCO3–), and carbonate (CO32–) ions, together with dissolved carbon dioxide (CO2). The relative concentrations of these carbon-containing species depend on the pH; bicarbonate predominates within the range 6.36-10.25 in fresh water. All waters in contact with the atmosphere absorb carbon dioxide, and as these waters come into contact with rocks and sediments they acquire metal ions, most commonly calcium and magnesium, so most natural waters that come from streams, lakes, and especially wells, can be regarded as dilute solutions of these bicarbonates. These hard waters tend to form carbonate scale in pipes and boilers and they react with soaps to form an undesirable scum. Attempts to prepare compounds such as calcium bicarbonate by evaporating its solution to dryness invariably yield the solid carbonate instead: Ca(HCO3)2(aq) → CO2(g) + H2O(l) + 3CaCO(s). Very few solid bicarbonates other than those of the alkali metals and ammonium ion are known to exist. The above reaction is very important to the formation of stalactites, stalagmites, columns, and other speleothems within caves and, for that matter, in the formation of the caves themselves. As water containing carbon dioxide (including extra CO2 acquired from soil organisms) passes through limestone or other calcium carbonate containing minerals, it dissolves part of the calcium carbonate and hence becomes richer in bicarbonate. As the groundwater enters the cave, the excess carbon dioxide is released from the solution of the bicarbonate, causing the much less soluble calcium carbonate to be deposited. Dissolved carbon dioxide (CO2) in rainwater (H2O) reacts with limestone, calcium carbonate (CaCO3) to form soluble calcium bicarbonate (Ca(HCO3)2). This soluble compound is then washed away with the rainwater. This is form of weathering is called 'Carbonation'.
180–187 °C (dec.) Acetylenedicarboxylic acid or butynedioic acid is an organic compound (a dicarboxylic acid) with the formula C4H2O4 or HO2C-C≡C-CO2H. It is a crystalline solid that is soluble in diethyl ether. The removal of two protons yields the acetylenedicarboxylate dianion C4O42−, which consists only of carbon and oxygen, making it an oxocarbon anion. Partial ionization yields the monovalent hydrogenacetylenedicarboxylate anion HC4O4−. The acid was first described in 1877 by Polish chemist Ernest Bandrowski. It can be obtained by treating α,β-dibromosuccinic acid with potassium hydroxide KOH in methanol or ethanol. The reaction yields potassium bromide and potassium acetylenedicarboxylate. The salts are separated and the latter is treated with sulfuric acid. Acetylenedicarboxylic acid is used in the synthesis of dimethyl acetylenedicarboxylate, an important laboratory reagent. Both the acid and the monobasic salt potassium hydrogenacetylenedicarboxylate KC4HO4 are commonly traded as laboratory chemicals.
Key: BVKZGUZCCUSVTD-UHFFFAOYAU Carbonic acid is the chemical compound with the formula H2CO3 (equivalently OC(OH)2). It is also a name sometimes given to solutions of carbon dioxide in water (carbonated water), because such solutions contain small amounts of H2CO3. Carbonic acid, which is a weak acid, forms two kinds of salts, the carbonates and the bicarbonates. When carbon dioxide dissolves in water it exists in chemical equilibrium producing carbonic acid: The hydration equilibrium constant at 25°C is called Kh, which in the case of carbonic acid is [H2CO3]/[CO2] ≈ 1.7×10−3 in pure water and ≈ 1.2×10−3 in seawater. Hence, the majority of the carbon dioxide is not converted into carbonic acid, remaining as CO2 molecules. In the absence of a catalyst, the equilibrium is reached quite slowly. The rate constants are 0.039 s−1 for the forward reaction (CO2 + H2O → H2CO3) and 23 s−1 for the reverse reaction (H2CO3 → CO2 + H2O). Carbonic acid is used in the making of soft drinks, inexpensive and artificially carbonated sparkling wines, and other bubbly drinks. The addition of two equivalents of water to CO2 would give orthocarbonic acid, C(OH)4, which exists only in minute amounts in aqueous solution. Addition of base to an excess of carbonic acid gives bicarbonate. With excess base, carbonic acid reacts to give carbonate salts. Carbonic acid is an intermediate step in the transport of CO2 out of the body via respiratory gas exchange. The hydration reaction of CO2 is generally very slow in the absence of a catalyst, but red blood cells contain carbonic anhydrase, which both increases the reaction rate and dissociates a hydrogen ion (H+) from the resulting carbonic acid, leaving bicarbonate (HCO3-) dissolved in the blood plasma. This catalysed reaction is reversed in the lungs, where it converts the bicarbonate back into CO2 and allows it to be expelled. This equilibration plays an important role as a buffer in mammalian blood. The oceans of the world have absorbed almost half of the CO2 emitted by humans from the burning of fossil fuels. The extra dissolved carbon dioxide has caused the ocean's average surface pH to shift by about 0.1 unit from pre-industrial levels. This process is known as ocean acidification. Carbonic acid is one of the polyprotic acids: It is diprotic - it has two protons, which may dissociate from the parent molecule. Thus, there are two dissociation constants, the first one for the dissociation into the bicarbonate (also called hydrogen carbonate) ion HCO3−: Care must be taken when quoting and using the first dissociation constant of carbonic acid. In aqueous solution, carbonic acid exists in equilibrium with carbon dioxide, and the concentration of H2CO3 is much lower than the concentration of CO2. In many analyses, H2CO3 includes dissolved CO2 (referred to as CO2(aq)), H2CO3* is used to represent the two species when writing the aqueous chemical equilibrium equation. The equation may be rewritten as follows: Whereas this apparent pKa is quoted as the dissociation constant of carbonic acid, it is ambiguous: it might better be referred to as the acidity constant of dissolved carbon dioxide, as it is particularly useful for calculating the pH of CO2-containing solutions. A similar situation applies to sulfurous acid (H2SO3), which exists in equilibrium with substantial amounts of unhydrated sulfur dioxide. 
The second constant is for the dissociation of the bicarbonate ion into the carbonate ion CO32−: At a given temperature, the composition of a pure carbonic acid solution (or of a pure CO2 solution) is completely determined by the partial pressure $\scriptstyle p_{CO_2}$ of carbon dioxide above the solution. To calculate this composition, account must be taken of the above equilibria between the three different carbonate forms (H2CO3, HCO3− and CO32−) as well as of the hydration equilibrium between dissolved CO2 and H2CO3 with constant $\scriptstyle K_h=\frac{[H_2CO_3]}{[CO_2]}$ (see above) and of the following equilibrium between the dissolved CO2 and the gaseous CO2 above the solution: The corresponding equilibrium equations together with the $\scriptstyle[H^+][OH^-]=10^{-14}$ relation and the charge neutrality condition $\scriptstyle[H^+]=[OH^-]+[HCO_3^-]+2[CO_3^{2-}]$ result in six equations for the six unknowns [CO2], [H2CO3], [H+], [OH−], [HCO3−] and [CO32−], showing that the composition of the solution is fully determined by $\scriptstyle p_{CO_2}$. The equation obtained for [H+] is a cubic whose numerical solution yields the following values for the pH and the different species concentrations: Remark Theoretical calculations show that the presence of even a single molecule of water causes carbonic acid to revert to carbon dioxide and water. In the absence of water, the dissociation of gaseous carbonic acid is predicted to be very slow, with a half-life of 180,000 years. It has long been recognized that pure carbonic acid cannot be obtained at room temperatures (about 20 °C or about 70 °F). It can be generated by exposing a frozen mixture of water and carbon dioxide to high-energy radiation, and then warming to remove the excess water. The carbonic acid that remained was characterized by infrared spectroscopy. The fact that the carbonic acid was prepared by irradiating a solid H2O + CO2 mixture may suggest that H2CO3 might be found in outer space, where frozen ices of H2O and CO2 are common, as are cosmic rays and ultraviolet light, to help them react. The same carbonic acid polymorph (denoted beta-carbonic acid) was prepared by heating alternating layers of glassy aqueous solutions of bicarbonate and acid in vacuo, which causes protonation of bicarbonate, followed by removal of the solvent. Alpha-carbonic acid was prepared by the same technique using methanol rather than water as a solvent.
An oxocarbon or oxide of carbon is an inorganic compound consisting only of carbon and oxygen. The simplest and most common oxocarbons are carbon monoxide (CO) and carbon dioxide (CO2). Many other stable or metastable oxides of carbon are known, but they are rarely encountered, such as carbon suboxide (C3O2 or O=C=C=C=O) and mellitic anhydride (C12O9). While textbooks will often list only the first three, and rarely the fourth, a large number of other oxides are known today, most of them synthesized since the 1960s. Some of these new oxides are stable at room temperature. Some are metastable or stable only at very low temperatures, but decompose to simpler oxocarbons when warmed. Many are inherently unstable and can be observed only momentarily as intermediates in chemical reactions or are so reactive that they can exist only in the gas phase or under matrix isolation conditions. The inventory of oxocarbons appears to be steadily growing. The existence of graphene oxide and of other stable polymeric carbon oxides with unbounded molecular structures suggests that many more remain to be discovered. Carbon dioxide (CO2) occurs widely in nature, and was incidentally manufactured by humans since pre-historical times, by the combustion of carbon-containing substances and fermentation of foods such as beer and bread. It was gradually recognized as a chemical substance, formerly called spiritus sylvestre ("forest spirit") or "fixed air", by various chemists in the 17th and 18th centuries. Carbon monoxide may occur in combustion, too, and was used (though not recognized) since antiquity for the smelting of iron from its ores. Like the dioxide, it was described and studied in the West by various alchemists and chemists since the Middle Ages. Its true composition was discovered by William Cruikshank in 1800. Carbon suboxide was discovered by Brodie in 1873, by passing electric current through carbon dioxide. The fourth "classical" oxide, mellitic anhydride (C12O9), was apparently obtained by Liebig and Wöhler in 1830 in their study of mellite ("honeystone"), but was characterized only in 1913, by Meyer and Steiner. Brodie also discovered in 1859 a fifth compound called graphite oxide, consisting of carbon and oxygen in ratios varying between 2:1 and 3:1; but the nature and molecular structure of this substance remained unknown until a few years ago, when it was renamed graphene oxide and became a topic of research in nanotechnology. Notable examples of unstable or metastable oxides that were detected only in extreme situations are dicarbon monoxide radical (:C=C=O), carbon trioxide (CO3), carbon tetroxide (), carbon pentoxide (), carbon hexoxide () and 1,2-dioxetanedione (C2O4). Some of these reactive carbon oxides were detected within molecular clouds in the interstellar medium by rotational spectroscopy. Many hypothetical oxocarbons have been studied by theoretical methods but have yet to be detected. Examples include oxalic anhydride (C2O3 or O=(C2O)=O), ethylene dione (C2O2 or O=C=C=O) and other linear or cyclic polymers of carbon monoxide (-CO-)n (polyketones), and linear or cyclic polymers of carbon dioxide (-CO2-)n, such as the dimer 1,3-dioxetanedione (C2O4) and the trimer 1,3,5-trioxanetrione (C3O6). Normally carbon is tetravalent while oxygen is divalent, and in most oxocarbons (as in most other carbon compounds) each carbon atom may be bound to four other atoms, while oxygen may be bound to at most two. 
Moreover, while carbon can connect to other carbons to form arbitrarily large chains or networks, chains of three or more oxygens are rarely if ever observed. Thus the known electrically neutral oxocarbons generally consist of one or more carbon skeletons (including cyclic and aromatic structures) connected and terminated by oxide (-O-, =O) or peroxide (-O-O-) groups. Carbon atoms with unsatisfied bonds are found in some oxides, such as the diradical C2O or :C=C=O; but these compounds are generally too reactive to be isolated in bulk. Loss or gain of electrons can result in monovalent negative oxygen (-), trivalent positive oxygen (≡), or trivalent negative carbon (≡). The last two are found in carbon monoxide, −C≡O+. Negative oxygen occurs in most oxocarbon anions. One family of carbon oxides has the general formula CnO2, or O=(C=)nO — namely, a linear chain of carbon atoms, capped by oxygen atoms at both ends. The first members are Some higher members of this family have been detected in trace amounts in low-pressure gas phase and/or cryogenic matrix experiments, specifically for n = 7:p.97 and n = 17, 19, and 21.:p.95 Another family of oxocarbons are the linear carbon monoxides CnO. The first member, ordinary carbon monoxide CO, seems to be the only one that is stable in the pure state at room temperature. Photolysis of the linear carbon dioxides in a cryogenic matrix leads to loss of CO, resulting in detectable amounts of even-numbered monoxides such as C2O, C4O, and C6O. The members up to n=9 have also been obtained by electrical discharge on gaseous C3O2 diluted in argon. The first three members have been detected in interstellar space. When n is even, the molecules are believed to be in the triplet (cumulene-like) state, with the atoms connected by double bonds and an unfilled orbital in the first carbon — as in :C=C=O, :C=C=C=C=O, and, in general, :(C=)n=O. When n is odd, the triplet structure is believed to resonate with a singlet (acetylene-type) polar state with a negative charge on the carbon end and a positive one on the oxygen end, as in −C≡C-C≡O+, −C≡C-C≡C-C≡O+, and, in general, −(C≡C-)(n-1)/2C≡O+. Carbon monoxide itself follows this pattern: its predominant form is believed to be −C≡O+. Another family of oxocarbons that has attracted special attention are the cyclic radialene-type oxocarbons CnOn or (CO)n. They can be regarded as cyclic polymers of carbon monoxide, or n-fold ketones of n-carbon cycloalkanes. Carbon monoxide itself (CO) can be regarded as the first member. Theoretical studies indicate that ethylene dione (C2O2 or O=C=C=O) and cyclopropanetrione C3O3 do not exist. The next three members — 4O4C, 5O5C, and 6O6C — are theoretically possible, but are expected to be quite unstable, and so far they have been synthesized only in trace amounts. On the other hand, the anions of these oxocarbons are quite stable, and some of them have been known since the 19th century. They are The cyclic oxide C6O6 also forms the stable anions of tetrahydroxy-1,4-benzoquinone (C6O64−) and benzenehexol (C6O66−), The aromaticity of these anions has been studied using theoretical methods. Many new stable or metastable oxides have been synthesized since the 1960s, such as: Many relatives of these oxides have been investigated theoretically, and some are expected to be stable, such as other carbonate and oxalate esters of tetrahydroxy-1,2-benzoquinone and of the rhodizonic, croconic, squaric, and deltic acids. 
Carbon suboxide spontaneously polymerizes at room temperature into a carbon-oxygen polymer, with 3:2 carbon:oxygen atomic ratio. The polymer is believed to be a linear chain of fused six-membered lactone rings, with a continuous carbon backbone of alternating single and double bonds. Physical measurements indicate that the mean number of units per molecule is about 5–6, depending on the formation temperature. Carbon monoxide compressed to 5 GPa in a diamond anvil cell yields a somewhat similar reddish polymer with a slightly higher oxygen content, which is metastable at room conditions. It is believed that CO disproportionates in the cell to a mixture of CO2 and C3O2; the latter forms a polymer similar to the one described above (but with a more irregular structure), that traps some of the CO2 in its matrix. Another carbon-oxygen polymer, with C:O ratio 5:1 or higher, is the classical graphite oxide and its single-sheet version graphene oxide.
Propa-1,2-diene-1,3-dione (carbon suboxide), O=C=C=C=O
InChI=1S/C3O2/c4-2-1-3-5
InChIKey: GNEVIACKFGQMHB-UHFFFAOYSA-N
Melting point: −111.3 °C (161.9 K); boiling point: 6.8 °C (280.0 K).

Carbon suboxide, or tricarbon dioxide, is an oxide of carbon with chemical formula C3O2 or O=C=C=C=O. Its four cumulative double bonds make it a cumulene. It is one of the stable members of the series of linear oxocarbons O=Cn=O, which also includes carbon dioxide (CO2) and pentacarbon dioxide (C5O2).

The substance was discovered in 1873 by Benjamin Brodie by subjecting carbon monoxide to an electric current. He claimed that the product was part of a series of "oxycarbons" with formulas Cx+1Ox, namely C, C2O, C3O2, C4O3, C5O4, ..., and to have identified the last two; however, only C3O2 is known. In 1891 Marcellin Berthelot observed that heating pure carbon monoxide at about 550 °C created small amounts of carbon dioxide but no trace of carbon, and assumed that a carbon-rich oxide was created instead, which he named "sub-oxide". He assumed it was the same product obtained by electric discharge and proposed the formula C2O. Otto Diels later stated that the more organic names dicarbonyl methane and dioxallene were also correct.

It is commonly described as an oily liquid or gas at room temperature with an extremely noxious odor. It is synthesized by warming a dry mixture of phosphorus pentoxide (P4O10) and malonic acid or the esters of malonic acid. Therefore, it can also be considered the anhydride of malonic anhydride, i.e. the "second anhydride" of malonic acid. Malonic anhydride (not to be confused with maleic anhydride) is a real molecule. Several other routes for the synthesis of carbon suboxide, and its reactions, can be found in a 1930 review by Reyerson.

Carbon suboxide polymerizes spontaneously to a red, yellow, or black solid. The structure is postulated to be poly(α-pyronic), similar to the structure in 2-pyrone (α-pyrone). In 1969, it was hypothesized that the color of the Martian surface was caused by this compound; this was disproved by the Viking Mars probes. Carbon suboxide is used in the preparation of malonates, and as an auxiliary to improve the dye affinity of furs.
A chemical formula is a way of expressing information about the proportions of atoms that constitute a particular chemical compound, using a single line of chemical element symbols, numbers, and sometimes also other symbols, such as parentheses, dashes, brackets, and plus (+) and minus (−) signs. These are limited to a single typographic line of symbols, which may include subscripts and superscripts. A chemical formula is not a chemical name, and it contains no words. Although a chemical formula may imply certain simple chemical structures, it is not the same as a full chemical structural formula. Chemical formulas are more limiting than chemical names and structural formulas.
The simplest types of chemical formulas are called empirical formulas, which use only letters and numbers indicating atomic proportional ratios (the numerical proportions of atoms of one type to those of other types). Molecular formulas indicate the simple numbers of each type of atom in a molecule of a molecular substance, and are thus sometimes the same as empirical formulas (for molecules that only have one atom of a particular type), and at other times require larger numbers than do empirical formulas. An example of the difference is the empirical formula for glucose, which is CH2O, while its molecular formula requires all numbers to be increased by a factor of six, giving C6H12O6.
|
# Pontecorvo–Maki–Nakagawa–Sakata matrix
In particle physics, the Pontecorvo–Maki–Nakagawa–Sakata matrix (PMNS matrix), Maki–Nakagawa–Sakata matrix (MNS matrix), lepton mixing matrix, or neutrino mixing matrix, is a unitary matrix[note 1] which contains information on the mismatch of quantum states of leptons when they propagate freely and when they take part in the weak interactions. It is important in the understanding of neutrino oscillations. This matrix was introduced in 1962 by Ziro Maki, Masami Nakagawa and Shoichi Sakata,[1] to explain the neutrino oscillations predicted by Bruno Pontecorvo.[2][3]
## The matrix
For three generations of leptons, the matrix can be written as:
$\begin{bmatrix} {\nu_e} \\ {\nu_\mu} \\ {\nu_\tau} \end{bmatrix} = \begin{bmatrix} U_{e 1} & U_{e 2} & U_{e 3} \\ U_{\mu 1} & U_{\mu 2} & U_{\mu 3} \\ U_{\tau 1} & U_{\tau 2} & U_{\tau 3} \end{bmatrix} \begin{bmatrix} \nu_1 \\ \nu_2 \\ \nu_3 \end{bmatrix}.$
On the left are the neutrino fields participating in the weak interaction, and on the right is the PMNS matrix along with a vector of the neutrino fields that diagonalize the neutrino mass matrix. The PMNS matrix describes the probability of a neutrino of a given flavor α being found in mass eigenstate i. These probabilities are proportional to |Uαi|².
Various parametrizations of this matrix exist;[4] however, because neutrinos are difficult to detect, determining the individual coefficients is much harder than for the equivalent matrix for the quarks (the CKM matrix). The PMNS matrix is most commonly parameterized by three mixing angles (θ12, θ23 and θ13) and a single phase δCP related to charge-parity violation (i.e. differences in the rates of oscillation between two states with opposite starting points, which makes the order in time in which events take place necessary to predict their oscillation rates).
Experimentally, the mixing angles were established to be approximately θ12 = 34 degrees, θ23 = 45 degrees, and θ13 = 9.1 ± 0.6 degrees (as of April 3, 2013).[5] The charge-parity violating phase of the PMNS matrix and the mass hierarchy of the neutrino masses have not been determined experimentally and remain open questions in physics that are the subject of multiple major ongoing experimental efforts.[6] These mixing angles are much larger than the corresponding values of the CKM matrix for quarks, which means that while quark flavors mix with each other only minimally, neutrino flavors mix nearly maximally.
Based on earlier data (as of 28 June 2012), the mixing angles are:[7]
$s_{12}^{2}=0.307\,,\; s_{23}^{2}=\begin{cases} 0.386 & (\mathrm{NH})\\ 0.392 & (\mathrm{IH}) \end{cases}\,,\ s_{13}^{2}=\begin{cases} 0.0241 & (\mathrm{NH})\\ 0.0244 & (\mathrm{IH}) \end{cases}\,,\ \delta=\begin{cases} 1.08\pi & (\mathrm{NH})\\ 1.09\pi & (\mathrm{IH}) \end{cases}$
where NH indicates $\Delta m^2>0$ normal hierarchy and IH $\Delta m^2<0$ inverted hierarchy in the mass spectrum with $\delta m^{2}=m_{2}^{2}-m_{1}^{2}>0$ and $\Delta m^{2}=m_{3}^{2}-(m_{1}^{2}+m_{2}^{2})/2$.
These values lead to the following PMNS matrices:
$\mathbf{U}_{\mathrm{NH}}= \begin{bmatrix}0.822 & 0.547 & -0.150+0.0381\mathrm{i}\\ -0.356+0.0198\mathrm{i} & 0.704+0.0131\mathrm{i} & 0.614\\ 0.442+0.0248\mathrm{i} & -0.452+0.0166\mathrm{i} & 0.774 \end{bmatrix}$
$\mathbf{U}_{\mathrm{IH}}= \begin{bmatrix}0.822 & 0.547 & -0.150+0.0429\mathrm{i}\\ -0.354+0.0224\mathrm{i} & 0.701+0.0149\mathrm{i} & 0.618\\ 0.444+0.0278\mathrm{i} & -0.456+0.0186\mathrm{i} & 0.770 \end{bmatrix}.$
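As a numerical cross-check, here is a minimal Python/numpy sketch of the standard PDG-style parametrization of the matrix in terms of the three mixing angles and δ. The sign and phase conventions below are an assumption (the article does not spell them out), but plugging in the NH best-fit values quoted above approximately reproduces the U_NH matrix:

```python
import numpy as np

def pmns(theta12, theta23, theta13, delta):
    """PMNS matrix in the standard (PDG-style) parametrization."""
    s12, c12 = np.sin(theta12), np.cos(theta12)
    s23, c23 = np.sin(theta23), np.cos(theta23)
    s13, c13 = np.sin(theta13), np.cos(theta13)
    em, ep = np.exp(-1j * delta), np.exp(+1j * delta)
    return np.array([
        [ c12 * c13,                          s12 * c13,                          s13 * em ],
        [-s12 * c23 - c12 * s23 * s13 * ep,   c12 * c23 - s12 * s23 * s13 * ep,   s23 * c13],
        [ s12 * s23 - c12 * c23 * s13 * ep,  -c12 * s23 - s12 * c23 * s13 * ep,   c23 * c13],
    ])

# Normal-hierarchy best-fit values quoted above (Fogli et al. 2012)
th12 = np.arcsin(np.sqrt(0.307))
th23 = np.arcsin(np.sqrt(0.386))
th13 = np.arcsin(np.sqrt(0.0241))
delta = 1.08 * np.pi

U = pmns(th12, th23, th13, delta)
print(np.round(U, 3))                              # compare with U_NH above
print(np.allclose(U @ U.conj().T, np.eye(3)))      # unitarity check: True
```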
## Notes
1. ^ The PMNS matrix is not unitary in the seesaw model
## References
1. ^ Z. Maki, M. Nakagawa, and S. Sakata (1962). "Remarks on the Unified Model of Elementary Particles". Progress of Theoretical Physics 28: 870. Bibcode:1962PThPh..28..870M. doi:10.1143/PTP.28.870.
2. ^ B. Pontecorvo (1957). "Mesonium and anti-mesonium". Zh. Eksp. Teor. Fiz. 33: 549–551. reproduced and translated in Sov. Phys. JETP 6: 429. 1957.
3. ^ B. Pontecorvo (1967). "Neutrino Experiments and the Problem of Conservation of Leptonic Charge". Zh. Eksp. Teor. Fiz. 53: 1717. reproduced and translated in Sov. Phys. JETP 26: 984. 1968. Bibcode:1968JETP...26..984P.
4. ^ J.W.F. Valle (2006). "Neutrino physics overview". Journal of Physics: Conference Series 53: 473. arXiv:hep-ph/0608101. Bibcode:2006JPhCS..53..473V. doi:10.1088/1742-6596/53/1/031.
5. ^ The T2K Collaboration (3 April 2013). "Evidence of Electron Neutrino Appearance in a Muon Neutrino Beam". arXiv:1304.0841.
6. ^ R. Das and João Pulido (4 February 2013). "Long baseline neutrino experiments, mass hierarchy and δCP". arXiv:1302.0779.
7. ^ Fogli et al. (2012). "Global analysis of neutrino masses, mixings and phases". arXiv:1205.5254v3.
|
The Grand Locus / Life for statistical sciences
## A tutorial on Burrows-Wheeler indexing methods (1)
This post is part of a series of tutorials on indexing methods based on the Burrows-Wheeler transform. This part describes the theoretical background, the second part shows a naive C implementation of the example below, and the third part shows a more advanced implementation with compression.
There are many resources explaining how the Burrows-Wheeler transform works, but so far I have not found anything explaining what makes it so awesome for indexing and why it is so widely used for short read mapping. I figured I would write such a tutorial for those who are not afraid of the detail.
### The problem
Say we have a sequencing run with over 100 million reads. After processing, the reads are between 20 and 25 nucleotides long. We would like to know if these sequences are in the human genome, and if so where.
The first idea would be to use grep to find out. On my computer, looking for a 20-mer such as ACGTGTGACGTGATCTGAGC takes about 10 seconds. Nice, but querying 100 million sequences would take more than 30 years. Not using any search index, grep needs to scan the whole human genome, and this takes time when the file is large.
We could build an index to speed up the search. The simplest would be a dictionary that associates each 20 to 25-mer to its locations in the human genome. The nice thing about dictionaries is that the access time is fast and does not depend on the size of the text.
The issue is space. Counting 2 bits per nucleotide, 20 to 25-mers take 40 to 50 bits of storage. The human genome contains over 3.2 billion nucleotides, and we need a key for each of the six lengths at every position, so we need at least 3.2 billion × (40 + 42 + ... + 50) bits, i.e. about 108 GB (in reality many 20 to 25-mers are repeated so this number would be lower). To this we should add the storage required for the locations and the overhead for the dictionary, for a total size well over 200 GB.
So it seems that we are between a rock and a hard place: either we run out of time, or we run out of memory.
### The suffix array
An important step towards modern algorithms was the invention of a data structure called the suffix array of a text. A suffix is the end of a text from a given position. For instance, the 1st suffix of GATGCGAGAGATG is GATGCGAGAGATG itself, and the 10th is GATG (I will use the 0-based convention, so 1st, 2nd, 3rd etc. refer to positions 0, 1, 2 etc.). The suffix array stores the positions of the suffixes sorted in alphabetical order.
Let us construct the suffix array of GATGCGAGAGATG for demonstration. We add a terminator character $, which has lower lexical order than all other characters (this avoids confusion when comparing strings of different lengths). As a consequence, the first suffix in lexical order is always $. Below are the suffixes in sorted order (written vertically) and the suffix array of GATGCGAGAGATG.
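As a concrete sketch of this construction, the suffix array above can be computed as follows (a quadratic toy implementation, not the linear-time algorithms used by real indexers):

```python
def suffix_array(text):
    """Return the positions of the suffixes of text + '$' in sorted order.
    O(n^2 log n) because of the slicing; fine for toy examples only."""
    t = text + "$"                       # '$' sorts before A, C, G and T in ASCII
    return sorted(range(len(t)), key=lambda i: t[i:])

print(suffix_array("GATGCGAGAGATG"))
# [13, 6, 8, 10, 1, 4, 12, 5, 7, 9, 0, 3, 11, 2]
```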
How can we use the suffix array of the human genome to solve the query problem above? Since the suffixes are sorted, we can proceed by bisection. We look up the middle entry of the suffix array, which points to a particular position of the human genome. We compare the query to the sequence at that position, and depending on the result we continue bisecting either on the left half or on the right half of the suffix array.
Let us use this technique to find how many occurrences of GAGA are in GATGCGAGAGATG. The suffix array has 14 entries, so we look up the entry at position 6 (remember the 0-based convention), which points to the suffix G$. Since GAGA > G$, we continue bisecting in the positions of the suffix array between 7 and 13 included. The middle entry is at position 10 and points to the suffix GATGCGAGAGATG$. Since GAGA < GATGCGAGAGATG$, we continue bisecting between positions 7 and 9 included. The middle entry is at position 8 and points to the suffix GAGATG$. This time we have a hit because GAGATG$ starts with GAGA. By continuing the bisection, we would find that suffixes at position 7 and 8 start with GAGA, so the query is present two times in the text.
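A minimal sketch of this bisection, using the suffix_array helper above (the sentinel '\x7f' for the upper bound is just any character that sorts after $, A, C, G and T):

```python
def count_occurrences(text, sa, query):
    """Count the occurrences of query in text by bisection on the suffix array sa."""
    t = text + "$"

    def first_at_least(q):
        """Index of the first suffix whose prefix of len(q) characters is >= q."""
        lo, hi = 0, len(sa)
        while lo < hi:
            mid = (lo + hi) // 2
            if t[sa[mid]:sa[mid] + len(q)] < q:
                lo = mid + 1
            else:
                hi = mid
        return lo

    lo = first_at_least(query)
    hi = first_at_least(query + "\x7f")   # past every suffix starting with query
    return hi - lo

sa = suffix_array("GATGCGAGAGATG")
print(count_occurrences("GATGCGAGAGATG", sa, "GAGA"))   # 2
```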
Why is this an improvement? The suffix array of the human genome has $N$ = 3.2 billion entries, so we need at most $\lfloor\log_2(N)\rfloor +1 = 32$ steps to find out whether any query is present or not. We would need a few extra steps to find out the number of occurrences in case it is present. Each step consists of 2 memory accesses, one in the suffix array, one in the human genome to read the suffix. Counting approximately 100 ns per memory access and ignoring the time for string comparison, this brings us around 6-7 us per query. Now what about the space requirements? We can encode every position of the genome with a 4 byte integer, and we need to store 3.2 billion entries, so we need 11.92 GB. Still a lot, but notice that this approach solves all the practical difficulties associated with dictionaries. We can easily look for sequences of any length in the suffix array.
Exercises
1. What is the first value of a suffix array?
2. What is the suffix array of CTGTGATGTCGTAG?
3. What justifies ignoring the time needed to compare strings?
4. Adding the reverse complement of a genome to the suffix array makes it possible to query both strands. How many bits are required to store the suffix array of the human genome with its reverse complement?
1. The length of the text.
2. [14,12,5,9,0,13,4,10,7,2,11,8,3,6,1]
3. The first memory access to the genome will usually be a last level cache miss (approx. 100 ns). The following nucleotides are contiguous and can be prefetched, so comparing them to the query will be much faster. No more than a few nucleotides need to be compared, so the time is dominated by the initial cache miss.
4. Adding the reverse complement brings the size of the text to 6.4 billion characters. Since 33 bits are now required to store the largest value, the total size is 33 times 6.4 billion = 211 billion bits or 24.59 GB. If your answer was 23.84 GB, you just doubled the size of the array, without accounting for the required extra bit.
### The Burrows-Wheeler transform
The Burrows-Wheeler transform of a text is a permutation of this text. To construct it, we need to sort all the suffixes, but we replace the whole suffix by the preceding letter. For the suffix equal to the text itself, we write the terminator $ instead. The previous sketch shows that the Burrows-Wheeler transform of GATGCGAGAGATG is GGGGGGTCAA$TAA, as can be verified below.
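As a cross-check, a minimal sketch deriving the transform from the suffix_array helper above:

```python
def bwt(text, sa=None):
    """Burrows-Wheeler transform: the character preceding each suffix,
    taken in suffix array order ('$' for the suffix equal to the whole text)."""
    t = text + "$"
    if sa is None:
        sa = suffix_array(text)
    return "".join(t[i - 1] if i > 0 else "$" for i in sa)

print(bwt("GATGCGAGAGATG"))   # GGGGGGTCAA$TAA
```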
Before explaining how this will help, let us highlight a fundamental property of the Burrows-Wheeler transformed text. Note that in the sketch above, the nucleotides appear in sorted order in the row immediately below the Burrows-Wheeler transform. See that the first G in the Burrows-Wheeler transformed text is the one preceding $. It is also the first G to appear in the sorted text. The second G in the Burrows-Wheeler transformed text is the one preceding AGAGATG$. It is also the second G in the sorted text. The first T in the Burrows-Wheeler transformed text is the one preceding G$. It is also the first T in the sorted text. More generally, the letters of the Burrows-Wheeler transformed text are in the same “relative” order as in the sorted text. This is called the “First-Last property” of the Burrows-Wheeler transform.

The sketch below helps understand why it holds. Let us consider the suffixes in sorted order, and more particularly those that start with A. If we remove this A, we obtain another series of suffixes. The key point is that they are still in sorted order because the A was not discriminant. These suffixes are somewhere in the set of all sorted suffixes. There can be some other suffixes between them, but their relative order does not change because they were already sorted. Now, these suffixes are preceded by A so their positions are the positions of A in the Burrows-Wheeler transformed text. Looking at the whole process, it is clear that the relative order of A in the sorted text is the relative order of A in the Burrows-Wheeler transformed text. Of course, the same holds for every character.

The First-Last property is the key to using the Burrows-Wheeler transformed text for search. The next section will explain how this is done.

Exercises

1. What is the Burrows-Wheeler transform of CTGTGATGTCGTAG?
2. What is the first character of the Burrows-Wheeler transform of a text?

Answers

1. GTGT$ATCTTGGGAC
2. It is the last character of the text (i.e. the character before $).

### The backward search

Let us illustrate how the Burrows-Wheeler transformed text can be used to look for GAGA in the text. All the occurrences of GAGA in the text end with A. Each A is also the first letter of some suffix of the text. Because such suffixes all start with A, they are stored next to each other in the suffix array, namely between positions 1 and 4. However, only those suffixes preceded by a G can potentially contain the query. This is exactly the information encoded by the Burrows-Wheeler transformed text. In this concrete example, all the As are preceded by a G, so the text contains 4 suffixes that start with GA. Since those 4 suffixes start the same way, they are stored next to each other in the suffix array, but where? The Gs preceding the As are the 2nd, 3rd, 4th and 5th Gs in the order of the Burrows-Wheeler transformed text, so they are also the 2nd, 3rd, 4th and 5th Gs in the sorted text. Knowing that the position of the first G is 6, the suffixes that start with GA are thus stored in positions 7 to 10 of the suffix array.

The process continues as we read the query backwards. The target suffixes must be preceded by an A. According to the Burrows-Wheeler transformed text, two suffixes are preceded by an A, and since their relative order is the same as in the sorted text, we know that the suffixes starting with AGA are at positions 1 and 2 of the suffix array. Finally, to complete the suffix GAGA, the target suffixes must be preceded by a G. According to the Burrows-Wheeler transformed text, both suffixes at positions 1 and 2 are preceded by G and, again, since their relative order is the same as in the sorted text, we know that the suffixes starting with GAGA are at positions 7 and 8 of the suffix array.

### The Occ table

Real world implementations of the backward search are based on a data structure called the Occ table (Occ stands for occurrence). The table contains the cumulative number of occurrences of each character in the Burrows-Wheeler transformed text. It also comes together with an array called C that stores the position of the first occurrence of each character in the sorted text. With these data structures, the backward search of GAGA goes as follows: Before we start, suffixes can be at any position of the suffix array between 1 and 13 (position 0 always corresponds to the suffix $). The first suffix starting with A is stored in the suffix array at position C[A] = 1 (by definition). The number of suffixes starting with A is equal to the number of As in the text, i.e. to Occ(A,13) = 4. So the suffixes starting with A are stored between positions 1 and 4 of the suffix array.
How many of those suffixes are preceded by a G? The number of Gs appearing in the Burrows-Wheeler transformed text before position 1 is Occ(G,1-1) = 1, and the number appearing up until position 4 is Occ(G,4) = 5, so the number of Gs preceding the suffixes between positions 1 and 4 is 5-1 = 4. There are thus 4 suffixes starting with GA. One G occurs before them (i.e. Occ(G,1-1) = 1), so they are the 2nd till 5th Gs in the Burrows-Wheeler transformed text, and also in the suffix array. Since the first suffix starting with G is stored at position C[G] = 6 of the suffix array, the 2nd till 5th suffixes are stored between positions 7 and 10.
These boundaries can be obtained directly as C[G] + Occ(G,1-1) = 6+1 = 7 and C[G] + Occ(G,4)-1 = 6+5-1 = 10. To continue the process, we find the positions of the suffixes starting with AGA between positions C[A] + Occ(A,7-1) = 1+0 = 1 and C[A] + Occ(A,10)-1 = 1+2-1 = 2. Finally, we find the positions of the suffixes starting with GAGA between positions C[G] + Occ(G,1-1) = 6+1 = 7 and C[G] + Occ(G,2)-1 = 6+3-1 = 8.
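A minimal sketch of the C array, the (uncompressed) Occ table and the backward search, following exactly the update rules used above; it reuses the suffix_array and bwt helpers from the earlier sketches:

```python
from collections import Counter
from itertools import accumulate

def fm_index(text):
    """Build the suffix array, BWT, C array and full Occ table of text."""
    sa = suffix_array(text)
    L = bwt(text, sa)
    alphabet = sorted(set(L))                      # ['$', 'A', 'C', 'G', 'T']
    counts = Counter(L)
    C, total = {}, 0
    for c in alphabet:                             # C[c] = first row starting with c
        C[c] = total
        total += counts[c]
    # Occ[c][i] = number of occurrences of c in L[0..i] (inclusive)
    Occ = {c: list(accumulate(1 if x == c else 0 for x in L)) for c in alphabet}
    return sa, L, C, Occ

def backward_search(query, C, Occ, N):
    """Inclusive range [lo, hi] of suffix array rows whose suffixes start with query."""
    def occ(c, i):
        return 0 if i < 0 else Occ[c][i]
    lo, hi = 0, N - 1
    for q in reversed(query):
        lo = C[q] + occ(q, lo - 1)
        hi = C[q] + occ(q, hi) - 1
        if lo > hi:
            return None                            # query absent from the text
    return lo, hi

sa, L, C, Occ = fm_index("GATGCGAGAGATG")
print(backward_search("GAGA", C, Occ, len(L)))     # (7, 8)
print([sa[i] for i in (7, 8)])                     # [5, 7]: where GAGA occurs in the text
```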
How does this algorithm perform? For a query sequence of length $k$, there are at most $k$ steps to perform, each with two memory accesses (the queries to Occ) for a total of $2k$ memory accesses. This number is independent of the size of the text, which is an extraordinary achievement.
What about the memory requirements? The Occ table contains one row per character of the alphabet and one column per character in the text, for a total of $\sigma N$ entries, where $N$ is the size of the text and $\sigma$ is the size of the alphabet. Each entry must be able to encode a number potentially as high as $N$, which requires $\lfloor\log_2(N)\rfloor+1$ bits, so the total size of the Occ table is $\sigma N (\lfloor\log_2(N)\rfloor + 1)$ bits. For the human genome without reverse complement, this represents 47.68 GB. Considering that we also need the 11.92 GB suffix array, the benefits of the backward search seem doubtful. But the next section will change the deal.
Exercises
1. The King James Authorized Bible has 3,116,480 characters using a 76 letter alphabet. What is the size of the Occ table for this text?
2. On average, how long is the backward search (in number of steps) if the query is absent from the human genome?
3. Say that at step $i$ of the backward search, the candidate suffixes are stored between positions $b_i$ and $e_i$ of the suffix array. If the next nucleotide of the query is $q_{i+1}$, what are $b_{i+1}$ and $e_{i+1}$?
1. We need $\lfloor\log_2(3116480)\rfloor +1 = 22$ bits per entry. Since this Occ table has 3,116,480 columns and 76 rows, we need a total of 5,210,754,560 bits or approximately 621 MB.
2. Assuming that the nucleotides of the human genome are random, the average number of occurrences of a 16-mer is approximately 0.75 < 1. So the backward search will typically stop after 16-17 steps, even if the query is longer.
3. $b_{i+1}$ = C[$q_{i+1}$] + Occ($q_{i+1}$, $b_i$ - 1), $e_{i+1}$ = C[$q_{i+1}$] + Occ($q_{i+1}$, $e_i$) - 1.
### Compression
What is truly awesome about Burrows-Wheeler indexing is that the search can be performed on compressed indexes. I will illustrate the simplest of many available options, in which the Occ table and the suffix array are merely down-sampled.
The Occ table stores the cumulative frequencies of the characters in the Burrows-Wheeler transformed text. If, instead, we stored their actual occurrences with a 0/1 encoding, we could still perform the backward search by counting the 1s up to the given position of the text. To down-sample Occ, we keep one column out of 32 and we use the binary table to compute the missing values on demand. The picture above illustrates how Occ(G, 3252) is computed in a down-sampled table. The smallest multiple of 32 after 3252 is 3264. Looking up Occ(G, 3264), we find 830. We still need to remove the Gs between positions 3253 and 3264, which we do by counting the 1s in the binary table between these positions (this is called the popcount operation). The result is 2, so Occ(G, 3252) = 830-2 = 828.
The size of the binary table is $\sigma N$ bits. For the human genome, it represents 4 times 3.2 billion = 12.8 billion bits or 1.49 GB, so the total size of the Occ table is 1.49 + 47.68/32 = 2.98 GB. It is possible to store the down-sampled values of the Occ table together with the next 32 values of the binary table in a 64 bit word, so that a single memory access is necessary for each query to the Occ table. The popcount can be computed faster than a memory access (which is usually a last level cache miss here), so the backward search runs a little slower, but with a memory footprint 16 times smaller (2.98 GB instead of 47.68 GB).
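Continuing with L and Occ from the backward-search sketch, here is one simple way to down-sample (sampling the cumulative counts at the end of every block and counting the remaining 0/1 entries on demand; a real implementation packs the sampled count and the next 32 bits into a 64-bit word and uses a hardware popcount, as described above):

```python
def build_sampled_occ(L, step=32):
    """Binary (0/1) table plus cumulative counts sampled every `step` positions."""
    alphabet = sorted(set(L))
    bits = {c: [1 if x == c else 0 for x in L] for c in alphabet}
    sampled = {c: [] for c in alphabet}
    running = {c: 0 for c in alphabet}
    for i, x in enumerate(L):
        running[x] += 1
        if (i + 1) % step == 0:                    # store counts for L[0..i]
            for c in alphabet:
                sampled[c].append(running[c])
    return bits, sampled

def occ_sampled(c, i, bits, sampled, step=32):
    """Occ(c, i) recomputed from the nearest sample below plus a popcount."""
    if i < 0:
        return 0
    block = (i + 1) // step                        # complete blocks contained in L[0..i]
    base = sampled[c][block - 1] if block > 0 else 0
    return base + sum(bits[c][block * step : i + 1])   # the "popcount" part

bits, sampled = build_sampled_occ(L, step=4)       # small step for the toy example
assert all(occ_sampled('G', i, bits, sampled, step=4) == Occ['G'][i] for i in range(len(L)))
```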
We also compress the suffix array by keeping one value out of 32. The task is now to compute the missing values on demand. To show how this is done, let us illustrate a fundamental connection between the Burrows-Wheeler transformed text, the suffix array and the Occ table using once again the text GATGCGAGAGATG.
The suffix \$ is stored at position 0 of the suffix array and contains the value 13. The Burrows-Wheeler transformed text at position 0 is G. Notice that C[G]+Occ(G,0)-1 = 6+1-1 = 6, which is the position of the suffix array that stores 12. At this new position, the Burrows-Wheeler transformed text is T. Notice that C[T]+Occ(T,6)-1 = 12+1-1 = 12, which is the position of the suffix array that stores 11. At this new position, the Burrows-Wheeler transformed text is A. Notice that C[A]+Occ(A,12)-1 = 1+3-1 = 3, which is the position of the suffix array that stores 10. More generally, if $y$ is stored at position $m$ of the suffix array, $y-1$ is stored at position C[X] + Occ(X,$m$)-1, where X is the value of the Burrows-Wheeler transformed text at position $m$.
Exercises
1. Why is this the case?
2. If $y$ is stored at position $m$ of the suffix array, where is $y-k$ stored $(k > 0)$?
1. Say that value $y$ is stored at position $m$ of the suffix array. Position $y$ of the text corresponds to some suffix. The Burrows-Wheeler transformed text at position $m$ stores the preceding nucleotide (X), which corresponds to the suffix starting at position $y-1$ of the text. We have seen previously that this suffix is stored in the suffix array at position C[X] + Occ(X,$m$)-1.
2. To find out, iterate the process above $k$ times.
This property allows us to use the Burrows-Wheeler transformed text and the Occ table to find out where the previous suffix is stored in the suffix array. To compute a value of the suffix array on demand, we just have to iterate this procedure, computing the position of the previous suffix at each step until this position is a multiple of 32. At that position the value of the suffix array is known. If it is equal to $y$ and $k$ iterations of the search were performed, then the position of the query in the text is $y+k$.
Each step of this procedure takes two memory accesses: one in the Burrows-Wheeler transformed text and one in the Occ table. Since we need 16 attempts on average to find a position that is a multiple of 32, we need on average 32 memory accesses to compute the values of the suffix array.
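Continuing with sa, L, C and Occ from the sketches above, a minimal version of this on-demand lookup (the row holding text position 0 is also stored, to keep the toy code simple; real implementations handle it specially):

```python
def sample_sa(sa, step=32):
    """Keep SA values only at sampled rows."""
    sampled = {row: val for row, val in enumerate(sa) if row % step == 0}
    sampled[sa.index(0)] = 0      # avoid walking past the start of the text
    return sampled

def sa_lookup(row, L, C, Occ, sampled):
    """Recover SA[row] by LF-stepping: if SA[m] = y, then
    SA[C[X] + Occ(X, m) - 1] = y - 1, where X = L[m]."""
    k = 0
    while row not in sampled:
        c = L[row]
        row = C[c] + Occ[c][row] - 1
        k += 1
    return sampled[row] + k

small_sa = sample_sa(sa, step=4)              # small step for the toy example
print([sa_lookup(r, L, C, Occ, small_sa) for r in range(len(sa))] == sa)   # True
```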
For the human genome, the size of the 32-fold down-sampled suffix array is 373 MB and that of the Burrows-Wheeler transformed text is 745 MB (assuming that we need two bits per character). The total size of our index is 2.98 GB + 373 MB + 745 MB = 4.07 GB. Obviously, it is possible to further down-sample the Occ table and the suffix array, at the cost of increasing memory accesses. Compressing the binary representation of the Occ table is also possible, but it requires more advanced knowledge of succinct data structures. With the index above, counting the occurrences of a query of length $k$ requires at most $2k$ memory accesses and knowing the position of each of them requires 64 memory accesses. For a unique 25-mer, this is about a hundred memory accesses or approximately 10 us (typically half in practice, because of cache effects). With this method, we can process millions of queries for an acceptable memory footprint.
### Epilogue
The most natural application of Burrows-Wheeler indexing is to perform the seeding step of the DNA alignment problem. For instance, the popular mapper BWA uses the compression methods presented above. It is interesting to note that direct bisection on the suffix array has become a competitive algorithm as computers gained memory (the STAR mapper takes this approach). It costs only 64 memory accesses to find all the occurrences of a query in the human genome, vs that many per occurrence in the algorithm with compressed indexes. The weakness of the bisection method is that it still takes 64 accesses to know that a sequence is absent vs about 32 with compressed indexes (depending on the genome).
It is also worth mentioning that the Burrows-Wheeler transform is exceptionally well adapted to indexing the human genome. It gives simple algorithms for sub-string queries, and compressing the Occ table is most efficient on small alphabets. For proteins or natural languages, one would use indexes adapted to bigger alphabets, such as wavelet trees, or word-based indexes.
|
## Clausen Formula
Clausen's ${}_4F_3$ identity

$${}_4F_3\!\left[\begin{matrix} a, & b, & c, & d \\ & e, & f, & g \end{matrix};\,1\right] = \frac{(2a)_{|d|}\,(a+b)_{|d|}\,(2b)_{|d|}}{(2a+2b)_{|d|}\,(a)_{|d|}\,(b)_{|d|}}$$

holds for $a+b+c-d=\tfrac12$, $e=a+b+\tfrac12$, $a+f=d+1=b+g$, $d$ a nonpositive integer, where $(a)_n$ is the Pochhammer Symbol (Petkovsek et al. 1996).
Another identity ascribed to Clausen, which involves the Hypergeometric Function ${}_2F_1$ and the Generalized Hypergeometric Function ${}_3F_2$, is given by

$${}_2F_1\!\left(a,\,b;\,a+b+\tfrac12;\,x\right)^2 = {}_3F_2\!\left(2a,\,2b,\,a+b;\,2a+2b,\,a+b+\tfrac12;\,x\right).$$
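As a numerical spot-check of this last identity (an illustration using mpmath, not part of the original entry):

```python
from mpmath import hyp2f1, hyp3f2, mpf, almosteq

a, b, x = mpf("0.3"), mpf("0.7"), mpf("0.25")
lhs = hyp2f1(a, b, a + b + mpf("0.5"), x) ** 2
rhs = hyp3f2(2 * a, 2 * b, a + b, 2 * a + 2 * b, a + b + mpf("0.5"), x)
print(lhs, rhs, almosteq(lhs, rhs))   # the two sides agree
```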
|
help for dba.count settings for RNAPII-ChIP using GRanges peakset
5 months ago
bertb ▴ 10
Hello,
I am trying to analyze Pol2 ChIPseq data, and the standard MACS2/peakcounting pipeline doesn't seem to capture what I'm after, since there's binding all over the promoter and coding region. That is, I'm less interested in finding the highest point and tweaking the summits factor, and more interested in the overall concentration differences (and profile changes) across these gene bodies/promoters...kind of like an RNAseq where I keep duplicates but worry more about normalization.
What I'd like to do is feed dba.count with a GRanges (transcripts) peakset, but I want to make sure I get the settings right. Would something like this work?:
DBA <- dba(sampleSheet=sample.sheet.csv)
#sampleSheet is standard with bamReads, bamControl (and macs files)
GR <- transcripts(tdxb)
counts <- dba.count(DBA, GR)
Should summits be set to FALSE? Maybe I'm missing something more fundamental.
DiffBind ChIPseq
5 months ago
Rory Stark ★ 1.6k
Supplying the annotated regions and setting 'summits=FALSE' when calling dba.count() will, in principle, work. A few things worth noting:
• The transcripts() include a lot of overlapping regions which will be merged, so the binding matrix will be constructed with fewer rows than there are regions in the transcripts().
• The default behavior of filtering regions with low read counts in all samples will likely further reduce the number of regions in the final binding matrix. This is desirable as some of the annotated regions may not be enriched in your ChIP.
• You can retrieve the regions actually interrogated by calling dba.peakset() with bRetrieve=TRUE after the call to dba.count():
merged_filtered_regions <- dba.peakset(counts, bRetrieve=TRUE)
• Specifically, you may want to confirm that the merging/filtering process doesn't alter the distribution of widths too much. You can do this by comparing summary(width(GR)) with summary(width(merged_filtered_regions))
• This can also be run using the summits parameter. This will take an enriched "sample" of each transcript region for comparison and identify regions where this sample is differentially enriched. This is less likely to be thrown off by including very large regions that may contain a larger fraction of "background" reads.
When it comes to normalization, I've reviewed the user guide, as well as your Bioconductor workshops in 2020, and I'm still just having a bit of difficulty understanding the difference between the default full library normalization ("lib"), and background normalization (normalize=DBA_NORM_NATIVE, background=TRUE). Technically, I understand the differences in that background normalization utilizes native normalization methods like taking the median of modes from large bins, rather than just scaling for depth.
But strategically, both seem to aim to "level" the background between all samples, based on the assumption that backgrounds should be largely similar across samples. I guess my question is, why might one choose background normalization instead of the default ("lib")? Running both normalizations, I do seem to be getting differences in the number of significantly bound sites in my analyses.
Is there a way to view the binding matrix after each normalization method to see how these normalization methods are affecting specific regions?
Thank you!
|
# Datasets: strombergnlp /bajer_danish_misogyny
Languages: da
Multilinguality: monolingual
Size Categories: 10K<n<100K
Language Creators: found
Annotations Creators: expert-generated
Source Datasets: original
## You need to share your contact information to access this dataset.
This repository is publicly accessible, but you have to register to access its content — don't worry, it's just one click!
By clicking on “Access repository” below, you accept that your contact information (email address and username) can be shared with the repository authors. This will let the authors get in touch for instance if some parts of the repository's contents need to be taken down for licensing reasons.
Warning: this repository contains harmful content (abusive language, hate speech, stereotypes).
You will immediately be granted access to the contents of the dataset.
# Dataset Card for "Bajer"
### THIS PUBLIC-FACING DATASET IS A PREVIEW ONLY
This is a working data reader but the data here is just a preview of the full dataset, for safety & legal reasons.
To apply to access the entire dataset, complete this form.
When you have the full data, amend _URL in bajer.py to point to the full data TSV's filename.
### Dataset Summary
This is a high-quality dataset of annotated posts sampled from social media posts and annotated for misogyny. Danish language.
See the accompanying ACL paper Annotating Online Misogyny for full details.
### Languages
Danish (bcp47:da)
## Dataset Structure
### Data Instances
#### Bajer
In this preview: 10 instances
In the full dataset:
• Size of the generated dataset: 6.57 MiB
• Total amount of disk used: 13.85 MiB
See above (or below) for how to get the full dataset.
An example of 'train' looks as follows.
{
'id': '0',
'dataset_id': '0',
'label_id': '0',
'text': 'Tilfældigt hva, din XXXXXXXXXX 🤬🤬🤬',
}
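A minimal loading sketch, assuming the standard `datasets` API and that your account has accepted the access terms for this gated repository (the authentication flag below is part of that assumption, not something specific to this dataset):

```python
from datasets import load_dataset

# Gated repository: requires a Hugging Face account that has accepted the terms,
# and an access token available to the library.
ds = load_dataset("strombergnlp/bajer_danish_misogyny", use_auth_token=True)

example = ds["train"][0]
print(example["id"], example["text"])   # fields described under "Data Fields" below
```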
### Data Fields
• id: a string feature, unique identifier in this dataset.
• dataset_id: a string feature, internal annotation identifier.
• label_id: a string feature, internal annotation sequence number.
• text: a string of the text that's annotated.
• sampling: a string describing which sampling technique surfaced this message
• subtask_A: is the text abusive ABUS or not NOT? 0: NOT, 1: ABUS
• subtask_B: for abusive text, what's the target - individual IND, group GRP, other OTH, or untargeted UNT? 0: IND, 1: GRP, 2: OTH, 3: UNT, 4: not applicable
• subtask_C1: for group-targeted abuse, what's the group - misogynistic SEX, other OTH, or racist RAC? 0: SEX, 1: OTH, 2: RAC, 3: not applicable
• subtask_C2: for misogyny, is it neosexist NEOSEX, discrediting DISCREDIT, normative stereotyping NOR, benevolent sexism AMBIVALENT, dominance DOMINANCE, or harassment HARASSMENT? 0: NEOSEX, 1: DISCREDIT, 2: NOR, 3: AMBIVALENT, 4: DOMINANCE, 5: HARASSMENT, 6: not applicable
### Data Splits
In the full dataset:
bajer: 27880 sentences (train split)
## Dataset Creation
### Curation Rationale
The goal was to collect data for developing an annotation schema of online misogyny.
Random sampling of text often results in scarcity of examples of specifically misogynistic content (e.g. (Wulczyn et al., 2017; Founta et al., 2018)). Therefore, we used the common alternative of collecting data by using predefined keywords with a potentially high search hit (e.g. Waseem and Hovy (2016)), and identifying relevant user-profiles (e.g. (Anzovino et al., 2018)) and related topics (e.g. (Kumar et al., 2018)).
We searched for keywords (specific slurs, hashtags), that are known to occur in sexist posts. These were defined by previous work, a slur list from Reddit, and from interviews and surveys of online misogyny among women. We also searched for broader terms like “sex” or “women”, which do not appear exclusively in a misogynistic context, for example in the topic search, where we gathered relevant posts and their comments from the social media pages of public media. A complete list of keywords can be found in the appendix.
Social media provides a potentially biased, but broad snapshot of online human discourse, with plenty of language and behaviours represented. Following best practice guidelines (Vidgen and Derczynski, 2020), we sampled from a language for which there are no existing annotations of the target phenomenon: Danish.
Different social media platforms attract different user groups and can exhibit domain-specific language (Karan and Šnajder, 2018). Rather than choosing one platform (existing misogyny datasets are primarily based on Twitter and Reddit (Guest et al., 2021)), we sampled from multiple platforms: Statista (2020) shows that the platform where most Danish users are present is Facebook, followed by Twitter, YouTube, Instagram and lastly, Reddit. The dataset was sampled from Twitter, Facebook and Reddit posts as plain text.
### Source Data
#### Initial Data Collection and Normalization
The dataset was sampled from Twitter, Facebook and Reddit posts as plain text. Data was gathered based on: keyword-based search (i.e. purposive sampling); topic-based search; and content from specific users.
#### Who are the source language producers?
Danish-speaking social media users
### Annotations
#### Annotation process
In annotating our dataset, we built on the MATTER framework (Pustejovsky and Stubbs, 2012) and use the variation presented by Finlayson and Erjavec (2017) (the MALER framework), where the Train & Test stages are replaced by Leveraging of annotations for one’s particular goal, in our case the creation of a comprehensive taxonomy.
We created a set of guidelines for the annotators. The annotators were first asked to read the guidelines and individually annotate about 150 different posts, after which there was a shared discussion. After this pilot round, the volume of samples per annotator was increased and every sample labeled by 2-3 annotators. When instances were ‘flagged’ or annotators disagreed on them, they were discussed during weekly meetings, and misunderstandings were resolved together with the external facilitator. After round three, when reaching 7k annotated posts (Figure 2), we continued with independent annotations maintaining a 15% instance overlap between randomly picked annotator pairs.
Management of annotator disagreement is an important part of the process design. Disagreements can be solved by majority voting (Davidson et al., 2017; Wiegand et al., 2019), by labeling content as abusive if at least one annotator has labeled it (Golbeck et al., 2017), or by a third objective instance (Gao and Huang, 2017). Most datasets use crowdsourcing platforms or a few academic experts for annotation (Vidgen and Derczynski, 2020). Inter-annotator agreement (IAA) and classification performance are established as two grounded evaluation measurements for annotation quality (Vidgen and Derczynski, 2020). Comparing the performance of amateur annotators (provided with guidelines) with expert annotators for sexism and racism annotation, Waseem (2016) shows that the quality of amateur annotators is competitive with expert annotations when several amateurs agree. Facing the trade-off between training annotators intensively and the number of involved annotators, we continued with the trained annotators and group discussions / individual revisions for flagged content and disagreements (Section 5.4).
#### Who are the annotators?
Gender: 6 female, 2 male (8 total)
Age: 5 <30; 3 ≥30
Ethnicity: 5 Danish; 1 Persian, 1 Arabic, 1 Polish
Study/occupation: Linguistics (2); Health/Software Design; Ethnography/Digital Design; Communication/Psychology; Anthropology/Broadcast Moderator; Ethnography/Climate Change; Film Artist
### Personal and Sensitive Information
Usernames and PII were stripped during annotation process by: skipping content containing these; and eliding it from the final dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The data contains abusive language. It may be possible to identify original speakers based on the content, so the data is only available for research purposes under a restrictive license and conditions. We hope that identifying sexism can help moderators. There is a possibility that the content here could be used to generate misogyny in Danish, which would place women in Denmark in an even more hostile environment, and for this reason data access is restricted and tracked.
### Discussion of Biases
We have taken pains to mitigate as many biases as we were aware of in this work.
Selection biases: Selection biases for abusive language can be seen in the sampling of text, for instance when using keyword search (Wiegand et al., 2019), topic dependency (Ousidhoum et al., 2020), users (Wiegand et al., 2019), domain (Wiegand et al., 2019), time (Florio et al., 2020) and lack of linguistic variety (Vidgen and Derczynski, 2020).
Label biases: Label biases can be caused by, for instance, non-representative annotator selection, lack in training/domain expertise, preconceived notions, or pre-held stereotypes. These biases are treated in relation to abusive language datasets by several sources, e.g. general sampling and annotators biases (Waseem, 2016; Al Kuwatly et al., 2020), biases towards minority identity mentions based for example on gender or race (Davidson et al., 2017; Dixon et al., 2018; Park et al., 2018; Davidson et al., 2019), and political annotator biases (Wich et al., 2020). Other qualitative biases comprise, for instance, demographic bias, over-generalization, topic exposure as social biases (Hovy and Spruit, 2016).
We applied several measures to mitigate biases occurring through the annotation design and execution: First, we selected labels grounded in existing, peer-reviewed research from more than one field. Second, we aimed for diversity in annotator profiles in terms of age, gender, dialect, and background. Third, we recruited a facilitator with a background in ethnographic studies and provided intense annotator training. Fourth, we engaged in weekly group discussions, iteratively improving the codebook and integrating edge cases. Fifth, the selection of platforms from which we sampled data is based on local user representation in Denmark, rather than convenience. Sixth, diverse sampling methods for data collection reduced selection biases.
### Other Known Limitations
The data is absolutely NOT a reasonable or in any way stratified sample of social media text, so class prevalence/balance here says nothing about incidences of these phenomena in the wild. That said, we hypothesise that the distribution of types of misogyny in this data (subtask C2) is roughly representative of how misogyny presents on the studied platforms.
### Dataset Curators
The dataset is curated by the paper's authors and the ethnographer-led annotation team.
### Citation Information
@inproceedings{zeinert-etal-2021-annotating,
title = "Annotating Online Misogyny",
author = "Zeinert, Philine and
Inie, Nanna and
Derczynski, Leon",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
|
# Revision history [back]
This is somewhat like the difference between actually computing a sum and just writing $\Sigma$. In the first case Sage (maxima) is able to compute the general sum directly:
sage: h(r) = sum(exp(i), i, -r, r)
sage: h(r)
(e^(2*r + 1) - 1)*e^(-r)/(e - 1)
So, when you write:
sage: h(1)
(e^3 - 1)*e^(-1)/(e - 1)
Sage does a symbolic substitution: it replaces all r by 1 in the expression.
But in the second case, it is not able to simplify the general sum:
sage: h(r) = sum(exp(i^2), i, -r, r)
sage: h(r)
sum(e^(i^2), i, -r, r)
So, the "sum" that is returned in the last line is like a mathematical $\Sigma$, "i know that it is a sum, but i am not able to compute it". Now, when you write:
sage: h(1)
sum(e^(i^2), i, -1, 1)
again, Sage does a symbolic substitution: it replaces all r by 1 in the expression. So, even if maxima is now able to compute this simpler sum, it will not, since $\Sigma$ is still a formal sum. I agree that it is bad, but this seems to be a general feature of the Symbolic Ring.
So, what we have to do is to ask Sage/maxima to try to compute the sum after the substitution. For this, we will unfold the maxima expression and refold it by inserting "Hey, can you try to simplify this formal sum now ?":
sage: from sage.interfaces.maxima_lib import max_to_sr, sr_to_max, maxima_eval, max_ratsimp, max_simplify_sum
sage: numerical_sum = lambda f : max_to_sr(maxima_eval([[max_ratsimp],[[max_simplify_sum],sr_to_max(f)]])).n()
And now you get:
sage: numerical_sum(h(1))
6.43656365691809
|
Toolbar
Microsoft Dynamics NAV 2013
Specifies whether the toolbar is displayed in the development environment. If the toolbar is displayed, then you can specify options for how it is displayed.
To open this window, on the View menu, choose Toolbar.
Options
Toolbar: Specifies if the toolbar is displayed.
Large Buttons: If the toolbar is displayed, then this option specifies if large or small buttons are displayed.
Show ToolTips: If the toolbar is displayed, then this option specifies if ToolTips are displayed when you move the pointer over a button.
With Shortcut Keys: If the toolbar is displayed and if ToolTips are displayed, then this option specifies if shortcut keys are included in the ToolTips.
Show:
|
## anonymous 4 years ago HElp me Out? 1. What is the degree measure of angle A? __________ degrees http://www.flickr.com/photos/63839927@N04/8219196862/in/photostream/lightbox/ 2. What did you use to find the measure of angle A? (1 point) A. 60 + x + A = 180 B. 180 – 150 = A C. Interior angles are equal D. Vertical angles are equal Have to copy and paste the whole link.
1. anonymous
i don't know...I can upload it somewhere else.
2. anonymous
Thanks.
3. anonymous
A should equal 60, intersection and all that
4. anonymous
[drawing]
5. anonymous
So your saying it equals 60 and number 2 is ???
6. anonymous
I'd go with the interior angle answer... if I remember correctly, it means the same thing as saying "opposite angles are equal"
7. anonymous
that's not right, saw my error
8. anonymous
vertical angles are equal
9. anonymous
That's true^
10. anonymous
meaning that A= 60 because it is an angle of intersecting lines, aka A=60 because they are vertical angles
11. anonymous
Ohh..so did you just solve those equations then subtracted it?? I'm trying to understand I'm just not good with Geometry.
12. anonymous
What I did was make an inference based off of vertical angle theorem. If there are two lines that intersect, then they will create four angles. Each angle has an opposing angle, vertical from its perspective. The two opposite or vertical angles are equal
13. anonymous
Jazy seems to have a more current handle on geometry than me, so correct me if any of that is innacurate
14. anonymous
Vertical angles are equal. ex: [drawing]
15. anonymous
Nope. You're completely right @karama (:
16. anonymous
^^^ I understand that with the pic..thanks guys. :))
17. anonymous
you're welcome...let us know if you have any other questions! :)
18. anonymous
Sure thing.
19. anonymous
Can I have help with finding the degree of x? and Which of the following can you use to find the degree measure of angle x? 180 – (60 + A) = x 150 – 60 = x x = 60 + A x = 150 – 50 180 – 90 = x These are my last questions that I need help with.
20. anonymous
The same triangle?
21. anonymous
Yes.
22. anonymous
let's see, it might be 90...because y=30, since 180-150=30...then 60+30=90...triangles must have angles that sum to 180...180-90=90
23. anonymous
ohhh..Okay I understand that.
24. anonymous
$180=\alpha+\beta +\gamma$ each symbol represents an interior (meaning inside of) angle of a triangle
25. anonymous
[drawing] as well, in case you need any reference for how I got the value of Y
26. anonymous
Yeah.. I'm understanding it..Thanks again.
|
Let $S$ be a set of $n$ elements $\left\{1, 2,\ldots, n\right\}$ and $G$ a graph with $2^{n}$ vertices, each vertex corresponding to a distinct subset of $S$. Two vertices are adjacent iff the symmetric difference of the corresponding sets has exactly $2$ elements. Note: The symmetric difference of two sets $R_{1}$ and $R_{2}$ is defined as $\left ({R_{1}}\setminus{R_{2}} \right)$ $\cup$ $\left ({R_{2}}\setminus{R_{1}} \right)$
Every vertex in $G$ has the same degree. What is the degree of a vertex in $G$?
How many connected components does $G$ have?
Good question.
Thanks to all who commented here; by reading all the solutions I created my own version of the solution!!
Every vertex in G has the same degree.
And that degree is $nC2$, which is more than $\frac{n-1}{2}$.
So how come the whole graph is not connected, given the theorem here?
https://gateoverflow.in/130614/discrete-math-2-5
Is that because n has to be strictly even?
Please have a look? @Bikram, @Arjun, @dd@Lakshman Patrel RJIT
@JashanArora here graph has $2^{n}$ vertices not $n$..
Can someone provide easy solution to this question?
Consider this: two vertices are adjacent iff the symmetric difference is 2, that is, the number of elements not in common is 2. Also, the degrees of all vertices are the same, which means the degree of the vertex for phi equals the degree of every other vertex, and the degree of phi equals the number of subsets with 2 elements, hence nC2.
$S = \{1,2,3,4,5,6,\ldots,n\}$
Let us take any two subsets $S_1$ and $S_2$. (We may even take $n(S_1 \cap S_2) = 0$ if we want to consider disjoint sets.)
Now there are three cases in which $(S_1 \backslash S_2) \cup (S_2 \backslash S_1)$, i.e. $(S_1 \oplus S_2)$, has exactly $2$ elements.
1. Both green shaded areas contain one element each, and in this case the sizes of $S_1$ and $S_2$ are the same.
2. The green area of $S_1$ contains $2$ elements and the green area of $S_2$ contains none. In this case the size of $S_1$ is $2$ more than that of $S_2$.
3. The green area of $S_2$ contains $2$ elements and the green area of $S_1$ contains none. In this case the size of $S_2$ is $2$ more than that of $S_1$.
So, if we are only interested in a particular set vertex corresponding to set $S_1$ of size $= m$, then $S_1$ is connected to three types of set vertices as shown below. We will use the words "set" and "vertices" synonymously.
In this above image, we have considered $m \geq 2$. The cases for $m = 1 \text{ and } m = 0$ will be discussed later.
Now, what we need to find is the no of set vertices in each of the above three types and sum them up to get the degree of the vertex corresponding to the set $S_1$.
For simplicity let us assume $S = \{1,2,3,4,5,6,7\}$ and set $S_1 = \{1,2,3,4\}$. Our interest will be to find $S_2$ such that vertices corresponding to $S_1$ and $S_2$ are connected.
1. CASE 1: If we try to find another set $S_2$ having $4$ elements and satisfying the constraint $n(S_1 \oplus S_2) = 2$, then we will see that the number of such sets $S_2$ is $4 \cdot (7 - 4)$. In general, if $S_1$ is an $m$-element set then the number of such $S_2$ sets with constraint $n(S_1 \oplus S_2) = 2$ will be equal to $m\cdot (n-m)$.
2. CASE 2: $S_1$ contains $4$ elements, and if we try to find $S_2$ where $S_2$ contains $2$ elements and satisfies the constraint $n(S_1 \oplus S_2) = 2$, then the number of such $S_2$ will be $4C2$. In general, for an $m$-element set $S_1$, we have $mC2$ such $S_2$ sets, all of size $(m-2)$.
3. CASE 3: $S_1$ contains $4$ elements, and if we try to find $S_2$ where $S_2$ contains $6$ elements and satisfies the constraint $n(S_1 \oplus S_2) = 2$, then the number of such $S_2$ sets will be $3C2$, i.e. $(7-4)C2$. In general, with $S_1$ being an $m$-element set, $(n-m)C2$ such $S_2$ sets are possible.
Therefore, summing all three cases:
Degree of vertex $S_1$ (assuming the general case $n(S_1) = m$)
\begin{align*} &=m\cdot (n-m) + \binom{m}{2} + \binom{n-m}{2} \\ &=m\cdot n - m^2 + \frac{m^2}{2} - \frac{m}{2} + \frac{(n-m)\cdot (n-m-1)}{2} \\ &=m\cdot n - m^2 + \frac{m^2}{2} - \frac{m}{2} + \frac{n\cdot (n-1)}{2} \\ &\qquad - \frac{n \cdot m}{2} - \frac{n \cdot m}{2} + \frac{m^2}{2} + \frac{m}{2} \\ &=\frac{n\cdot (n-1)}{2} \\ &=\binom{n}{2} \\ \end{align*}
This result is independent of $m$ for $m \geq 2$ and $m \leq n$.
For $m = 0$ and $m = 1$ also we can show that degree of $0$ and $1$ size set vertices is nothing but $nC2$ only. (fairly straight forward cases).
So we can conclude that every vertex has the same degree and the degree is $nC2$.
Now we can guess one thing by looking at the following image:
i.e. for $m \geq 2$, if $m$ is even then $S_1$ is connected only to even-cardinality sets (at least one), and if $m$ is odd then $S_1$ is connected only to odd-cardinality sets (at least one). From this, we can almost say that there are two connected components in the graph.
But there is little more argument before we can proceed and have a valid proof.
If $m = 0$ then $S_1 = \phi$, and $S_1$ will be connected to all $m = 2$ type sets, i.e. all $2$-cardinality sets.
If $m = 1$ then $S_1$ will be one of the $1$-element sets, and $S_1$ will be connected to all other $1$-cardinality sets and to at least one $3$-cardinality set.
We can argue that an $m$ (even) cardinality set is connected to at least one $(m-2)$-cardinality set. That particular $(m-2)$-cardinality set is connected to at least one $(m-4)$-cardinality set, and so on down to the $\phi$ set vertex. Therefore all even-cardinality sets are connected to $\phi$, directly or indirectly.
A similar argument holds for odd-cardinality set vertices until we reach some $1$-cardinality set. Moreover, all $1$-cardinality sets are connected to each other.
Therefore we now have the situation that all even-cardinality sets form one connected component and all odd-cardinality sets form another component.
For example, $n = 4$:
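That example (and other small cases) can be checked by brute force; the following sketch is not part of the original answer, but it confirms that every vertex has degree $nC2$ and that there are exactly two connected components:

```python
from itertools import combinations

def degree_and_components(n):
    """Brute-force check on the symmetric-difference graph for small n."""
    vertices = [frozenset(c) for k in range(n + 1)
                             for c in combinations(range(1, n + 1), k)]
    adj = {v: [] for v in vertices}
    for u, v in combinations(vertices, 2):
        if len(u ^ v) == 2:                  # symmetric difference has exactly 2 elements
            adj[u].append(v)
            adj[v].append(u)
    degrees = {len(nbrs) for nbrs in adj.values()}
    # count connected components with a simple DFS
    seen, components = set(), 0
    for v in vertices:
        if v not in seen:
            components += 1
            stack = [v]
            while stack:
                x = stack.pop()
                if x not in seen:
                    seen.add(x)
                    stack.extend(adj[x])
    return degrees, components

for n in range(2, 7):
    print(n, degree_and_components(n))      # expect degrees == {n*(n-1)//2}, components == 2
```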
formula for symmetric difference is $\left ( R_{1}\cup R_{2} \right )-\left ( R_{1}\cap R_{2} \right )$, rt?
Yes. Both formulae are same.
but why u take $S=\left \{ 1,2,3,4,5,6,7 \right \}$
and $S_{1}=\left \{ 1,2,3,4 \right \}$
Is it set difference 2?
We only want to work with symmetric difference 2, rt?
I think two edges, (i) 234 to 124 and (ii) 123 to 134, are missing in your example (n=4, first component, which is green). Though an amazing explanation for such questions. Thanks. :)
@Debashish plz chk this line
Two vertices are adjacent iff the symmetric difference of the corresponding sets has exactly 2 elements
then tell me just one thing how u r dividing set difference as m and (n-m) ?
how do we confirm that n elements set and (n-m) elements set has set diff exactly 2?
Thank You so much @Debashish Deka ji.
They have mentioned that every vertex has the same degree, so if we consider the empty set, then it will be connected to nC2 (i.e. n(n-1)/2) vertices, as the symmetric difference will be 2 with all of them; therefore the answer will be nC2.
in general if S1 is an m element set then no of such S2 sets with constraint n(S1⊕S2)=2 will be equal to m⋅(n−m).
in general, for m element set S1, we have mC2 no of S2 type sets all with (m−2) size.
Sir, how did you get this?
@ajaysoni1924 In case 1), as we can see, $(m-1)$ elements will be common between $S_1$ and $S_2$, and the sizes of both sets will also be the same.
Now we have to find the number of such $S_2$'s. For that we select $(m-1)$ out of $m$ elements (as any $(m-1)$ of them can be common), which can be done in $\left(\begin{array}{c}m\\ m-1\end{array}\right)= m$ ways, and for the remaining element we have $(n-m)$ choices. So it's $m(n-m)$.
@Punit Sharma brother, can you explain a little bit more with an example? I am not getting this.
Edited
Got it thanks
I’m not getting this... Is graph theory knowledge required here? Please answer if you have an easy approach to this problem.
Thank you @dd for such a beautiful answer to this question.
Best way to solve this for GATE is to take $n=2$ and $n=3$ and we get degree of each vertex = ${}^nC_2$ and no. of connected components = $2$.
Let's do it more formally.
It is clear that $\{\}$ should be connected to all $2$-element subsets of $S$ (and not to any others). So, the degree of the corresponding vertex is ${}^nC_2$, as we have ${}^nC_2$ ways of choosing 2 elements from $n$. So, the answer to the first part must be this, as it is given that all vertices have the same degree.
Now, for the second part: from the definition of $G$, all the vertices of cardinality $k$ will be disconnected from all the vertices of cardinality $k-1$. This is because either all $k-1$ elements must be the same in both, or $k-2$ elements must be the same in both, or else the symmetric difference will be more than $2$. Now, if $k-1$ elements are the same, the symmetric difference will just be $1$. If $k-2$ elements are the same, we have one element in one set not in the other and $2$ elements in the other set not in this one, making the symmetric difference $3$. Thus the symmetric difference won't be $2$ for any vertices of adjacent cardinality, making them disconnected.
All the vertices of the same cardinality will be connected (when just one element differs). Also, vertices with cardinality difference 2 will be connected (when 2 new elements are in one vertex). Thus we will be getting $2$ connected components in total.
@arjun sir, I am not able to understand this. What is a better way to understand it? Or which resource should I refer to for learning this topic?
@sheshang You can read the answer given by Debashish. It is described very clearly.
Set: $\{1,2,3\}$
Power set: $\{\ \{\},\ \{1\},\ \{2\},\ \{3\},\ \{1,2\},\ \{2,3\},\ \{1,3\},\ \{1,2,3\}\ \}$
According to the property, the diagram above is generated.
Now you can answer both of the questions.
### 1 comment
Hi,
Here, using the diagram, it is easy to calculate the answer for a 2- or 3-element set, but for larger sets how can we derive the degree of the vertices and the number of connected components?
This question can be approached in the following way too.
We know that we can represent a subset $S_i$ of $S$, where $|S|=n$, using an $n$-length binary string $B_i$,
where $B_i[j] = 1$ iff the $j^{th}$ element of set $S$ belongs to $S_i$.
Now, the degree of a given vertex is just the number of different $n$-length binary strings we can obtain by toggling exactly 2 bits of the binary string corresponding to the given vertex.
There are $n\choose2$ ways of selecting the 2 bits which need to be toggled, so there are $n\choose2$ neighbours of the given vertex.
Now, for the number of connected components, note that the parity of a bit string remains the same after toggling exactly 2 bits (here toggling means flipping $0$ to $1$ or $1$ to $0$, no other action).
So the odd-parity bit strings form one connected component and the even-parity bit strings form another connected component, so there are 2 connected components in total.
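As a quick sanity check of this bit-string view, here is a small brute-force sketch (an illustration of ours, not part of the original answer; the helper name is made up) that builds the graph for a few values of $n$ and verifies that every vertex has degree $\binom{n}{2}$ and that there are exactly two connected components:

def count_components(n):
    # Subsets of an n-element set encoded as n-bit masks 0 .. 2^n - 1.
    vertices = range(1 << n)
    # Two subsets are adjacent iff their symmetric difference (bitwise XOR) has exactly 2 bits set.
    adj = {v: [u for u in vertices if bin(u ^ v).count('1') == 2] for v in vertices}
    # Every vertex should have degree n*(n-1)/2 = C(n, 2).
    assert {len(nb) for nb in adj.values()} == {n * (n - 1) // 2}
    # Count connected components with an iterative DFS.
    seen, components = set(), 0
    for v in vertices:
        if v in seen:
            continue
        components += 1
        stack = [v]
        while stack:
            u = stack.pop()
            if u not in seen:
                seen.add(u)
                stack.extend(adj[u])
    return components

for n in (2, 3, 4, 5):
    print(n, count_components(n))   # prints 2 components for every n >= 2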
Given: every vertex in G has the same degree.
The best way to solve this is by looking at the degree of $\phi$, which is $^nC_2$.
For the connected components, take a small example ($n=2$ and $n=3$), draw it, and you will get the answer.
First part
Let us assume we are working with two set $A$ and $B$ and we shall work gradually on all possible sizes of set $A$. We work the cases by finding for a given cardinality of $A$, the possible sets $B$ such that
$|A\oplus B| = 2$, where $\oplus$ is the symmetric difference operator. In other words we need to have $|A\cup B|=|A\cap B| +2 \tag 1$
CASE 1: When $|A|=0$, i.e. $A=\phi$
then $|A\cap B|=0$, and we need to choose two elements from the $n$ elements to form set $B$, such that $(1)$ is satisfied. [The vertex corresponding to set $A$ is adjacent to all vertices which correspond to sets of size $2$.]
So the number of possible $B$'s $= {}^n C_2 = \frac{n(n-1)}{2}$
CASE 2: When $|A|=1$; $A=\{a\}$ (say)
(a) when $A\cap B=\phi$, then we should have only $1$ element in $B$ other than what is in $A$, and this can be done in $n-1$ ways. [The vertex corresponding to set $A$ is adjacent to all vertices which correspond to all other sets of size $1$.]
(b) when $A \cap B=A$, then $B$ should contain the element of $A$ and, apart from that, $2$ other elements. So the number of possible $B$'s is ${}^{n-1}C_2=\frac{(n-1)(n-2)}{2}$. [The vertex corresponding to set $A$ is adjacent to all vertices which correspond to sets of size $3$ containing $a$.]
So the total number of possible $B$'s $= (n-1)+\frac{(n-1)(n-2)}{2} = \frac{n(n-1)}{2}$
CASE 3: When $|A|=2$, $A=\{a,b\}$ (say)
(a) when $A\cap B=\phi$, as per the question this would mean that $B=\phi$, and there is only $1$ possible choice. [The vertex corresponding to set $A$ is adjacent to the vertex which corresponds to the null set.]
(b) when $|A\cap B|=1$, we can choose in ${}^2C_1$ ways the element of $A$ to be present in $A\cap B$, and the remaining $1$ element of $B$ should be chosen in ${}^{(n-2)}C_1$ ways. So the number of possible $B$'s $= {}^2C_1 \cdot {}^{(n-2)}C_1 = 2(n-2)$. [The vertex corresponding to set $A$ is adjacent to all vertices which correspond to sets of size $2$ containing either $a$ or $b$ but not both.]
(c) when $|A\cap B|=2$, we can choose in ${}^2C_2$ ways the elements of $A$ to be present in $A\cap B$, and the remaining $2$ elements of $B$ should be chosen in ${}^{(n-2)}C_2$ ways. So the number of possible $B$'s $= {}^2C_2 \cdot {}^{(n-2)}C_2 = \frac{(n-2)(n-3)}{2}$. [The vertex corresponding to set $A$ is adjacent to all vertices which correspond to sets of size $4$ containing both $a$ and $b$.]
So the total number of possible $B$'s $= 1+2(n-2)+ \frac{(n-2)(n-3)}{2}=\frac{n(n-1)}{2}$
CASE 4: When $|A|=x$, where $x\geq 3$
(a) when $|A\cap B|=x-2$, we can choose in ${}^xC_{x-2}$ ways the elements of $A$ to be present in $A\cap B$, and the remaining $0$ elements of $B$ should be chosen in ${}^{(n-x)}C_0$ ways. So the number of possible $B$'s $= {}^xC_{x-2} \cdot {}^{(n-x)}C_0 = \frac{x(x-1)}{2}$. [The vertex corresponding to set $A$ is adjacent to all vertices which correspond to sets of size $x-2$ containing any $x-2$ elements of $A$.]
(b) when $|A\cap B|=x-1$, we can choose in ${}^xC_{x-1}$ ways the elements of $A$ to be present in $A\cap B$, and the remaining $1$ element of $B$ should be chosen in ${}^{(n-x)}C_1$ ways. So the number of possible $B$'s $= {}^xC_{x-1} \cdot {}^{(n-x)}C_1 = x(n-x)$. [The vertex corresponding to set $A$ is adjacent to all vertices which correspond to sets of size $x$ containing any $x-1$ elements of $A$.]
(c) when $|A\cap B|=x$, we can choose in ${}^xC_{x}$ ways the elements of $A$ to be present in $A\cap B$, and the remaining $2$ elements of $B$ should be chosen in ${}^{(n-x)}C_2$ ways. So the number of possible $B$'s $= {}^xC_{x} \cdot {}^{(n-x)}C_2 = \frac{(n-x)(n-x-1)}{2}$. [The vertex corresponding to set $A$ is adjacent to all vertices which correspond to sets of size $x+2$ containing all $x$ elements of $A$.]
So the total number of possible $B$'s $= \frac{x(x-1)}{2}+ x(n-x)+\frac{(n-x)(n-x-1)}{2}=\frac{n(n-1)}{2}$
So degree of each vertex is $\frac{n(n-1)}{2}$.
Second Part
From the discussion above, we can see that all the sets of even cardinality are in one component and all the sets of odd cardinality are in another component. So there are just $2$ components in the graph.
|
# Applications and improper integrals
A somewhat counter-intuitive result from calculus is that even a function extending infinitely in one direction may enclose an area of finite size. In light of this phenomenon, it is important not to jump to any conclusions when dealing with improper integrals: integrals involving infinities.
## Intro
The ancient Greek philosophers, who were also the mathematicians of that time, held rigorous logical reasoning high and refused to accept a claim that could not be backed up by a structured geometrical proof.
As a consequence, they never found a way to justify working with infinitesimals: quantities smaller than any positive real number.
The fact that there is no limit to how small a positive real number can be seemed to give rise to a contradiction, and the concept was abandoned despite its useful properties.
Since being put to use in the seventeenth century through the development of calculus, this concept has undoubtedly been a crucial part of humanity's technological advancement. Applications of integrals, for instance, have helped put humans on the moon and provide us with reliable internet connections.
Who knows where the world would be today had the concept not been discredited for so long. All that said, the strive for logical soundness continued, and calculus has by now been formalized in a manner that avoids questionable concepts and reasoning.
## Concept
As the ancient Greeks had realized, the concept of infinity leads to all sorts of calculation problems, and it requires some alternative methods.
Infinity is not a number, and so we cannot simply plug it into our equations. Instead, we must analyze what happens as some variable grows larger and larger.
A function can grow infinitely large, and still give rise to a finite area underneath it
This is what improper integrals are all about: either the interval of integration or the integrand grows without bound. Despite this ever-increasing behavior, it turns out that the integral, or the area under the curve, may still be a finite number.
## Math
Gabriel's horn is the body formed by rotating the function $y = 1/x$ about the $x$-axis from $x = 1$ to infinity.
Gabriel's horn is called a solid of revolution, since we are forming a solid body by rotating a function about an axis.
There is a paradox with Gabriel's horn. You can fill it up with a finite volume of paint, but to paint the outside of the horn, you need an infinite amount of paint.
We will show here that the horn has finite volume. The volume is given by:
This integral is called improper since it has unbounded limits of integration. The way forward is to evaluate the limit:
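With this standard definition, the disc method gives the following reconstruction of the missing computation:
$$V = \pi\int_1^{\infty}\frac{dx}{x^2} = \pi\lim_{R\to\infty}\left[-\frac{1}{x}\right]_1^{R} = \pi\lim_{R\to\infty}\left(1-\frac{1}{R}\right) = \pi.$$
So the horn encloses a finite volume of $\pi$ cubic units, even though it extends infinitely far.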
## Improper integrals
### Example: Water flow
Imagine that we turn on a faucet and never turn it off again. The flow of water out of the faucet, measured in some unit of volume per unit time, can be given as a function.
In this case, we let the flow be described by:
The thing here is, after an infinite amount of time, we won't have an infinite amount of water that has come out of the faucet. And here is why.
During a small time interval, a small amount of water runs out of the faucet. If we sum up all of these small amounts over the infinite time interval, we get the integral:
If we let this faucet run forever, we would only end up with 1 unit of water!
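The flow function itself is not shown above. As an illustration, assume a flow of $f(t) = 1/t^2$ volume units per time unit, starting at $t = 1$; this assumed choice reproduces the stated total of exactly one unit of water:
$$\int_1^{\infty}\frac{dt}{t^2} = \lim_{R\to\infty}\left(1 - \frac{1}{R}\right) = 1.$$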
### Improper integrals
Recall the form of a definite integral: $$\int_a^b f(x)\,dx$$
where $a$ and $b$ are the limits of integration, and $f$ is an integrable function on the interval $(a, b)$.
Now $(a, b)$ is an open interval, and the definite integral will calculate the area under the curve of $f$ from $a$ to $b$. Should we not need to include the points $a$ and $b$? Not necessarily, but we must be careful, because the end points may make it an improper integral.
Improper integral
Let $f$ be continuous on the open interval $(a, b)$. The integral $$\int_a^b f(x)\,dx$$
is said to be an improper integral if it satisfies at least one of two conditions:
1. $a = -\infty$, $b = \infty$, or both.
2. $f(x) \to \pm\infty$ as $x \to a^{+}$, as $x \to b^{-}$, or both.
Integrals fulfilling the first condition are called improper integrals of type 1. Likewise, those fulfilling the second one are called improper integrals of type 2.
### Convergence of improper integrals
It might seem counterintuitive at first, since improper integrals always involve infinities, that the area calculated by this type of integral can sometimes be a finite number. If it is, we say that the integral converges.
On the other hand, if the area under the curve obtained by the integral goes to positive or negative infinity, we say that it diverges.
The term convergence is not unique to improper integrals, as we will see in future chapters, but wherever it appears it is used to describe a quantity that tends to approach some value.
In the case of improper integrals, this quantity is the area under the curve. As opposed to convergence, divergence means to grow or shrink without bound.
### Evaluating improper integrals
Considering that we are interested in whether an improper integral approaches some finite value or not, it should come as no surprise that the way we evaluate it will be through limits.
Say the following integral is improper: $$\int_a^b f(x)\,dx$$
Let's look at how we would treat it depending on what makes it improper.
Type 1
If $a = -\infty$, then: $$\int_{-\infty}^{b} f(x)\,dx = \lim_{t \to -\infty} \int_{t}^{b} f(x)\,dx$$
If $b = \infty$, then: $$\int_{a}^{\infty} f(x)\,dx = \lim_{t \to \infty} \int_{a}^{t} f(x)\,dx$$
Type 2
If $f(x) \to \pm\infty$ as $x \to a^{+}$, then: $$\int_{a}^{b} f(x)\,dx = \lim_{t \to a^{+}} \int_{t}^{b} f(x)\,dx$$
If $f(x) \to \pm\infty$ as $x \to b^{-}$, then: $$\int_{a}^{b} f(x)\,dx = \lim_{t \to b^{-}} \int_{a}^{t} f(x)\,dx$$
If the limit we equate the improper integral with exists as a finite number, we say it converges to that number. Otherwise, we say that it diverges.
In the case that the limit tends toward $\pm\infty$, we say that the improper integral diverges to positive or negative infinity.
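As a short worked example of the type 1 recipe (our own, with a concrete integrand):
$$\int_1^{\infty}\frac{dx}{x^2} = \lim_{t\to\infty}\int_1^{t}\frac{dx}{x^2} = \lim_{t\to\infty}\left(1-\frac{1}{t}\right) = 1,$$
so this integral converges to $1$. By contrast, $\int_1^{\infty}\frac{dx}{x} = \lim_{t\to\infty}\ln t$ diverges to $+\infty$.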
## The p-test for integrals
As an alternative method to calculating limits, we can in some cases use the $p$-test to determine whether an improper integral converges or diverges.
Comparing integrals
Let $f$ and $g$ be non-negative functions. Then, if $g(x) \leq f(x)$ at all points on the interval $(a, b)$: $$\int_a^b g(x)\,dx \leq \int_a^b f(x)\,dx$$
Hence, if we know that $\int_a^b f(x)\,dx$ converges, $\int_a^b g(x)\,dx$ must necessarily do so too. Similarly, if $\int_a^b g(x)\,dx$ diverges, $\int_a^b f(x)\,dx$ will diverge.
Now it turns out that the following is true about improper integrals of powers of $x$:
p-integrals
For a constant $p > 0$:
The type 1 integral $$\int_1^{\infty} \frac{dx}{x^p}$$
converges to $\frac{1}{p-1}$
if $p > 1$, otherwise it diverges to $\infty$.
The type 2 integral $$\int_0^{1} \frac{dx}{x^p}$$
converges to $\frac{1}{1-p}$
if $p < 1$, otherwise it diverges to $\infty$.
For example, it seems reasonable when looking at the picture below that a power with a small exponent, say $1/\sqrt{x}$, converges when integrated around $0$, but not from some point out to infinity:
Conversely, looking at a power with a larger exponent, say $1/x^2$, below, it could kind of make sense that integrating from some point to infinity gives a nice convergence, whereas integration around $0$ results in divergence.
In the $p$-test, we make use of these known results to determine whether an improper integral is convergent or divergent, given that we know the function to be greater or smaller than such a power of $x$ between the limits of integration.
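A typical use of the test (an illustrative example of ours): for $x \geq 1$ we have $\frac{1}{x^2 + 1} \leq \frac{1}{x^2}$, and the $p$-integral with $p = 2 > 1$ converges, so
$$\int_1^{\infty}\frac{dx}{x^2+1} \leq \int_1^{\infty}\frac{dx}{x^2} = 1,$$
and the left-hand integral converges as well, without us ever computing it exactly.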
## Piece-wise integration
This section is about integrals of functions with holes, disobediently deviant points and vertical asymptotes between the integration limits. We'll see that we can integrate them all, with some reservations.
### Improper integrals, type 2
We have seen, when looking at integrability and the properties of integrals, that we can split an integral into two without changing anything: $$\int_a^b f(x)\,dx = \int_a^c f(x)\,dx + \int_c^b f(x)\,dx$$
if $a \leq c \leq b$.
Say now that $f$ has a vertical asymptote at $x = c$. To calculate the integral, we then need to split it in two, and write:
As an example, let's calculate the integral of on .
We write:
Now, the integral is perfectly symmetric around , so we can rewrite it as:
and from the p-test for integrals we know that the integral converges, as the exponent is less than $1$. Thus, we calculate the integral and get:
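The integrand is not shown above. Assuming the example is $f(x) = 1/\sqrt{|x|}$ on $[-1, 1]$, a guess consistent with the symmetry and the exponent argument, the computation would read:
$$\int_{-1}^{1}\frac{dx}{\sqrt{|x|}} = 2\int_{0}^{1}x^{-1/2}\,dx = 2\lim_{t\to 0^{+}}\Big[2\sqrt{x}\Big]_{t}^{1} = 4.$$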
### Graphs with point-wise discontinuities
In front of you stands a function:
How do you go about integrating it? Somewhat surprisingly, you can actually integrate as usual without caring about the points of trouble:
This is valid as long as there is a finite number of points on the finite limits of integration where is either undefined or has a function value which breaks out of the continuous curve.
So let's go on: remove as many points as you want from any continuous function, and we can still integrate as if nothing happened.
The difference between these functions and the vertical asymptote example is that there, the whole continuous curve went off towards infinity.
Here, we only have individual points going their own way. They don't actually contribute anything to the integral: the area under a single point is zero, so it vanishes compared to the total area.
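A tiny illustration of this (our own): let $f(x) = 1$ for all $x$ in $[0, 2]$ except $f(1) = 5$. The single deviant point changes nothing:
$$\int_0^2 f(x)\,dx = \int_0^2 1\,dx = 2.$$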
## Arclength parametrization
Have a look at this funny-looking thing:
With some values of $x$ mapping to multiple values of $y$, the curve is certainly not a function, and it is important that we do not treat it as such.
In mathematics, an arc is a smooth curve connecting two points. For an arc in two dimensions, such as the one above, we need two numbers to describe the points along it. We represent a point by $(x, y)$, which is not an interval, but a pair of coordinates.
An arclength parametrization relates a parameter $t$ to the coordinates $x$ and $y$
Instead of $y$ being dependent on $x$ according to some function as we are used to, both of these will depend on some other variable, a parameter, usually denoted by $t$.
Now $x$ and $y$ can be expressed as functions of $t$, called the parametric equations, independent of each other.
The arclength parametrization will look like a system of equations, where only the points satisfying both equations will lie on the arc:
For an arc to arise from these equations, it is necessary that $x(t)$ and $y(t)$ are continuous functions, so that there are no gaps in the curve.
### Example: Parametrize parabola
Parametrize the curve that is described by the equation:
Here, we want to express $x$ and $y$ as functions of a single parameter $t$, that is:
This could be done by:
and the parametrization will look like:
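The curve equation is not shown above. Assuming the parabola is $y = x^2$, one natural parametrization (our reconstruction) is
$$x(t) = t,\qquad y(t) = t^2,\qquad t \in \mathbb{R},$$
and this pair of parametric equations traces out exactly the points satisfying $y = x^2$.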
### Example: Parametrize unit circle
The unit circle is described by the equation:
Now, we can describe the unit circle using only one parameter $t$.
This can be done by:
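Concretely, the standard parametrization of the unit circle $x^2 + y^2 = 1$ is
$$x(t) = \cos t,\qquad y(t) = \sin t,\qquad 0 \leq t < 2\pi,$$
which satisfies $x(t)^2 + y(t)^2 = \cos^2 t + \sin^2 t = 1$ for every $t$.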
## Arclength
Have a look at Snakey:
You're a biologist, so you'd like to know how long Snakey is. The issue is that Snakey is sleeping. If you pick him up and stretch him out in front of a ruler, he'll try to bite you. You better find some other way to measure his length.
Hmmm. How about modeling Snakey's body as a curve? Then you can actually figure out his length. Ain't that clever?
Let's say Snakey's body is expressed by the curve $(x(t), y(t))$, where $t$ ranges from $a$ to, I dunno, $b$. A tiny increase in $t$ will induce a change in $x$ and $y$. Let's call these changes $dx$ and $dy$. The overall change in length, $ds$, is given by the Pythagorean theorem:
Now try factoring out $dt$, so that:
If you'd like the total length, you'd add up all those $ds$:s, until you've moved from Snakey's head to tail. As we let $dt$ shrink, we wind up with an integral. So Snakey's length can be written as:
What if Snakey's body corresponds to a function curve? Then we could write:
Snakey's length turns out to be:
where and are the relevant integral boundaries.
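Written out, the two length formulas sketched above are the standard arclength integrals (restated here since the displayed formulas are missing from the page):
$$L = \int_a^b \sqrt{\left(\frac{dx}{dt}\right)^2 + \left(\frac{dy}{dt}\right)^2}\,dt \qquad\text{and, for a function curve } y = f(x),\qquad L = \int_a^b \sqrt{1 + \left(f'(x)\right)^2}\,dx.$$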
### Example
A hanging bridge is suspended between two points over a river. The distance between the points is 4 meters and the hanging bridge is described by the formula:
If we want to calculate the length of the bridge, we would have to calculate the arclength:
Differentiating to get $dy/dx$, our integral becomes:
## Surface- and solid of revolution
We have a curve in the $xy$-plane; what we'll do is grab the curve and rotate it around the $x$- or $y$-axis. It may not be revolutionary, but it's pretty. We'll see how we can calculate the surface area and the volume of the object that appears.
### Surface of revolution
This is some function rotated around the $x$-axis:
Say we want to know its surface area. We'll build it up step by step. Call an infinitesimally small element of the curve $ds$. From the lecture note on curve length, we know that:
As the circumference of a circle is $2\pi r$, with $r$ the radius, the surface area of the thin band appearing as we rotate $ds$ around a line is:
If the curve is rotated about the $x$-axis, the radius corresponds to the function value, i.e. $r = f(x)$. Now, to get the whole surface of revolution, we need to sum all the thin bands.
This gives the surface of revolution as we rotate the curve of $f$ around the $x$-axis:
If the curve is instead rotated about the $y$-axis, the radius is just $x$ and the rest remains the same. So the surface area is:
Note that like when dealing with curve lengths, we require that the function be continuous.
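For reference, the two surface-area formulas described in words above are the standard ones (restated here because the displayed equations are missing from the page):
$$S_x = 2\pi\int_a^b f(x)\,\sqrt{1 + \left(f'(x)\right)^2}\,dx \quad\text{(about the $x$-axis)}, \qquad S_y = 2\pi\int_a^b x\,\sqrt{1 + \left(f'(x)\right)^2}\,dx \quad\text{(about the $y$-axis)}.$$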
### Solids of revolution
Solids of revolution more or less means the volume appearing as we fill in the surface of revolution.
However, calculating a solid of revolution follows a slightly different, and, maybe surprisingly, easier method.
We'll build up the volume from thin discs. If we want to know the solid of revolution of $f$ between $a$ and $b$, we cut up the $x$-axis into small segments of length $\Delta x_i$, one for each index $i$. We use the index $i$ to refer to the interval $[x_{i-1}, x_i]$.
For the method to hold, we require $f$ to be continuous. Then, by the mean value theorem for integrals, there exists some $x$-value $c_i$ in each interval $[x_{i-1}, x_i]$, so that the volume of one disc equals:
Now, if we let the number of discs go to infinity and sum all the discs, we get:
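In the limit, the sum of the disc volumes $\pi f(c_i)^2\,\Delta x_i$ becomes the standard disc-method integral (restated here since the displayed formula is missing from the page):
$$V = \pi\int_a^b f(x)^2\,dx.$$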
### A side note on ds vs dx
You may wonder why we don't need to bother with $ds$ for solids of revolution, when we had to for the surfaces. The reason is that when dealing with the surface area, we lose too much precision if we replace $ds$ by $dx$.
Say we want to calculate the surface area of a cone made up by rotating a straight line. Then each curve element $ds$ is a constant factor larger than $dx$, which makes every thin ring that the surface area is built from larger by that same factor when we use the correct $ds$ instead of $dx$, and that's a big difference.
But, when approximating the whole volume, the difference between using $ds$ and $dx$ is negligible. Making the curve segments infinitely small will eliminate all difference between using $ds$ or $dx$ for the volume.
### Example - volume between two curves
Find the volume of the shape that is produced by rotating $f$ and $g$ around the $x$-axis, where:
The first step is to define what the cross-sectional area will be in our problem. To do this, it can be useful to note that $f(x) \geq g(x)$ in the interval, as we can see below.
The next step is to define the area that each rotated function encloses. These become $\pi f(x)^2$ and $\pi g(x)^2$.
We find the area of the cross section between them to be $\pi\left(f(x)^2 - g(x)^2\right)$. We can easily see this in the picture below.
By multiplying with $dx$ we get the volume elements $dV$. Next, we simply integrate $dV$:
Thus, the volume is .
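The two functions and the numeric answer are not shown above, so here is a worked washer-method example of our own with assumed curves $f(x) = x$ and $g(x) = x^2$ on $[0, 1]$, rotated around the $x$-axis:
$$V = \pi\int_0^1\left(f(x)^2 - g(x)^2\right)dx = \pi\int_0^1\left(x^2 - x^4\right)dx = \pi\left(\frac{1}{3} - \frac{1}{5}\right) = \frac{2\pi}{15}.$$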
|
# Deployment#
SecretFlow can be deployed on a single host or on multiple nodes.
## Standalone Mode#
Use secretflow.init directly to run secretflow in standalone mode.
>>> import secretflow as sf
>>> sf.init(['alice', 'bob', 'carol'], num_cpus=8, log_to_driver=True)
## Cluster Mode#
The following is an example showing how to build a cluster consisting of alice and bob on multiple nodes.
Start a head node on your first machine with the tag “alice”.
NOTE
1. Remember to use the real ip and port instead.
2. You can refer to Ray TLS for servercert.pem, serverkey.pem and cacert.pem.
3. The following section Suggestions for production explains RAY_SECURITY_CONFIG_PATH and config.yml.
4. It's OK to remove these environment variables for testing if you are in an intranet.
5. {"alice": 8} means that alice can run up to 8 workers at the same time. Just feel free to change it if you like.
RAY_DISABLE_REMOTE_CODE=true \
RAY_SECURITY_CONFIG_PATH=config.yml \
RAY_USE_TLS=1 \
RAY_TLS_SERVER_CERT=servercert.pem \
RAY_TLS_SERVER_KEY=serverkey.pem \
RAY_TLS_CA_CERT=cacert.pem \
ray start --head --node-ip-address="ip" --port="port" --resources='{"alice": 8}' --disable-usage-stats
Head node starts successfully if you see “Ray runtime started.” in the screen output.
Now we have a cluster with a head node only, let us start more nodes.
### Start other nodes#
Start a node with the tag “bob” on another machine. The node will connect to the head node and join the cluster.
Note
Please replace ip:port with the node-ip-address and port of the head node.
RAY_DISABLE_REMOTE_CODE=true \
RAY_SECURITY_CONFIG_PATH=config.yml \
RAY_USE_TLS=1 \
RAY_TLS_SERVER_CERT=servercert.pem \
RAY_TLS_SERVER_KEY=serverkey.pem \
RAY_TLS_CA_CERT=cacert.pem \
ray start --address="ip:port" --resources='{"bob": 8}' --disable-usage-stats
The node starts successfully if you see “Ray runtime started.” in the screen output.
You can repeat the step above to start more nodes, using other parties as the resource tag.
### Start SecretFlow#
Now you can start SecretFlow and run your code.
>>> import secretflow as sf
# Replace with the node-ip-address and port of head node.
>>> sf.init(['alice', 'bob'], address='ip:port')
>>> alice = sf.PYU('alice')
>>> bob = sf.PYU('bob')
>>> alice(lambda x : x)(2)
<secretflow.device.device.pyu.PYUObject object at 0x7fe932a1a640>
>>> bob(lambda x : x)(2)
<secretflow.device.device.pyu.PYUObject object at 0x7fe6fef03250>
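The PYU calls above return device object handles rather than plain values. For debugging in a test setup, you can fetch the plaintext with sf.reveal (assuming the standard SecretFlow helper of that name; never do this with sensitive data in production):

>>> obj = alice(lambda x: x * 2)(21)
>>> sf.reveal(obj)  # discloses the plaintext to the driver; for testing only
42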
### (optional) How to shut down the cluster#
In some cases you may want to shut down the cluster; the following command will help you. Remember to run the command on all machines.
Note that all Ray processes on the machine will be stopped, which means every Ray cluster running on it will be stopped.
ray stop
### (optional) How to setup a SPU in cluster mode#
SPU consists of multiple workers on different nodes. For performance reasons, the major part of SPU is written in C++. SPU is based on Brpc, which means it has a separate service mesh independent of Ray's networking. In short, you need to assign separate ports to the SPU for now. We are working on merging them.
A typical SPU config:
import spu
import secretflow as sf
cluster_def={
'nodes': [
        {
            'party': 'alice',
            'id': '0',
            # Please choose an unused port.
            'address': 'ip:port',
        },
        {
            'party': 'bob',
            'id': '1',
            # Use the ip and port of bob instead.
            # Please choose an unused port.
            'address': 'ip:port',
        },
],
'runtime_config': {
'protocol': spu.spu_pb2.SEMI2K,
'field': spu.spu_pb2.FM128,
'sigmoid_mode': spu.spu_pb2.RuntimeConfig.SIGMOID_REAL,
}
}
spu = sf.SPU(cluster_def=cluster_def)
For more configurations of SPU, please refer to SPU config
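As a rough usage sketch once the device exists (illustrative only; it assumes the PYU-to-SPU data movement API x.to(spu) and sf.reveal behave as in the official tutorials, so adapt it to your SecretFlow version):

# Illustrative sketch; x.to(spu) and sf.reveal are assumed to work as in the tutorials.
x = alice(lambda: 3)()                               # private value produced on alice's PYU
y = bob(lambda: 4)()                                 # private value produced on bob's PYU
z = spu(lambda a, b: a + b)(x.to(spu), y.to(spu))    # secure addition inside the SPU
print(sf.reveal(z))                                  # reveals 7 to the driver; testing only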
Note
You will see this way of setting up an SPU in many tutorials. But be careful that it works only in standalone mode, because sf.utils.testing.cluster_def uses 127.0.0.1 as the default ip.
>>> spu = sf.SPU(sf.utils.testing.cluster_def(['alice', 'bob']))
### Suggestions for production#
SecretFlow uses Ray as its distributed execution framework. You may need to do some more configuration for higher security when using it in production. The following actions can help improve security.
1. Enable TLS authentication.
Ray can be configured to use TLS on its gRPC channels; for more details, please refer to Ray TLS.
2. Forbid on-the-fly remote code.
Remote execution is one of the most important features of Ray, but it may become dangerous when unexpected functions are injected into your node without your knowledge. You can set the environment variable RAY_DISABLE_REMOTE_CODE=true to disable remote code execution.
3. Enhanced serialization/deserialization.
Ray uses pickle for serialization/deserialization, which is vulnerable. You can set the environment variable RAY_SECURITY_CONFIG_PATH=config.yml to specify an allowlist that restricts serializable objects. An example config.yml could be
pickle_whitelist:
builtins:
- type
numpy:
- dtype
numpy.core.numeric:
- '*'
You should not use this demo YAML directly. Configure it to your actual needs.
|