text
stringlengths
104
605k
9.1 Current (new)  (Page 3/7) Page 3 / 7 Drift velocity Electrical signals are known to move very rapidly. Telephone conversations carried by currents in wires cover large distances without noticeable delays. Lights come on as soon as a switch is flicked. Most electrical signals carried by currents travel at speeds on the order of ${\text{10}}^{8}\phantom{\rule{0.25em}{0ex}}\text{m/s}$ , a significant fraction of the speed of light. Interestingly, the individual charges that make up the current move much more slowly on average, typically drifting at speeds on the order of ${\text{10}}^{-4}\phantom{\rule{0.25em}{0ex}}\text{m/s}$ . How do we reconcile these two speeds, and what does it tell us about standard conductors? The high speed of electrical signals results from the fact that the force between charges acts rapidly at a distance. Thus, when a free charge is forced into a wire, as in [link] , the incoming charge pushes other charges ahead of it, which in turn push on charges farther down the line. The density of charge in a system cannot easily be increased, and so the signal is passed on rapidly. The resulting electrical shock wave moves through the system at nearly the speed of light. To be precise, this rapidly moving signal or shock wave is a rapidly propagating change in electric field. Good conductors have large numbers of free charges in them. In metals, the free charges are free electrons. [link] shows how free electrons move through an ordinary conductor. The distance that an individual electron can move between collisions with atoms or other electrons is quite small. The electron paths thus appear nearly random, like the motion of atoms in a gas. But there is an electric field in the conductor that causes the electrons to drift in the direction shown (opposite to the field, since they are negative). The drift velocity     ${v}_{\text{d}}$ is the average velocity of the free charges. Drift velocity is quite small, since there are so many free charges. If we have an estimate of the density of free electrons in a conductor, we can calculate the drift velocity for a given current. The larger the density, the lower the velocity required for a given current. Conduction of electricity and heat Good electrical conductors are often good heat conductors, too. This is because large numbers of free electrons can carry electrical current and can transport thermal energy. The free-electron collisions transfer energy to the atoms of the conductor. The electric field does work in moving the electrons through a distance, but that work does not increase the kinetic energy (nor speed, therefore) of the electrons. The work is transferred to the conductor’s atoms, possibly increasing temperature. Thus a continuous power input is required to keep a current flowing. An exception, of course, is found in superconductors, for reasons we shall explore in a later chapter. Superconductors can have a steady current without a continual supply of energy—a great energy savings. In contrast, the supply of energy can be useful, such as in a lightbulb filament. The supply of energy is necessary to increase the temperature of the tungsten filament, so that the filament glows. Is there any normative that regulates the use of silver nanoparticles? what king of growth are you checking .? Renato What fields keep nano created devices from performing or assimulating ? Magnetic fields ? Are do they assimilate ? why we need to study biomolecules, molecular biology in nanotechnology? ? Kyle yes I'm doing my masters in nanotechnology, we are being studying all these domains as well.. why? what school? Kyle biomolecules are e building blocks of every organics and inorganic materials. Joe anyone know any internet site where one can find nanotechnology papers? research.net kanaga sciencedirect big data base Ernesto Introduction about quantum dots in nanotechnology what does nano mean? nano basically means 10^(-9). nanometer is a unit to measure length. Bharti do you think it's worthwhile in the long term to study the effects and possibilities of nanotechnology on viral treatment? absolutely yes Daniel how to know photocatalytic properties of tio2 nanoparticles...what to do now it is a goid question and i want to know the answer as well Maciej Abigail for teaching engĺish at school how nano technology help us Anassong Do somebody tell me a best nano engineering book for beginners? there is no specific books for beginners but there is book called principle of nanotechnology NANO what is fullerene does it is used to make bukky balls are you nano engineer ? s. fullerene is a bucky ball aka Carbon 60 molecule. It was name by the architect Fuller. He design the geodesic dome. it resembles a soccer ball. Tarell what is the actual application of fullerenes nowadays? Damian That is a great question Damian. best way to answer that question is to Google it. there are hundreds of applications for buck minister fullerenes, from medical to aerospace. you can also find plenty of research papers that will give you great detail on the potential applications of fullerenes. Tarell what is the Synthesis, properties,and applications of carbon nano chemistry Mostly, they use nano carbon for electronics and for materials to be strengthened. Virgil is Bucky paper clear? CYNTHIA carbon nanotubes has various application in fuel cells membrane, current research on cancer drug,and in electronics MEMS and NEMS etc NANO so some one know about replacing silicon atom with phosphorous in semiconductors device? Yeah, it is a pain to say the least. You basically have to heat the substarte up to around 1000 degrees celcius then pass phosphene gas over top of it, which is explosive and toxic by the way, under very low pressure. Harper Do you know which machine is used to that process? s. how to fabricate graphene ink ? for screen printed electrodes ? SUYASH What is lattice structure? of graphene you mean? Ebrahim or in general Ebrahim in general s. Graphene has a hexagonal structure tahir On having this app for quite a bit time, Haven't realised there's a chat room in it. Cied what is biological synthesis of nanoparticles how did you get the value of 2000N.What calculations are needed to arrive at it Privacy Information Security Software Version 1.1a Good Got questions? Join the online conversation and get instant answers!
Browse Questions # We know that equilibrium constant, K changes with temperature. At 300K, equilibrium constant is 25 and at 400K it is 10. Hence, backward reaction will have energy of activation: (a) equal to that of forward reaction (b) less than that of forward reaction (c) greater than that of forward reaction (d) given values are not sufficient to explain given statement • Van't Hoff equation: $log\frac{K_2}{K_1} = \frac{\Delta H}{2.303 \times R} [\frac{T_2 - T_1}{T_1 T_2}]$ • When $\Delta H = -ve$, then the reaction is exothermic
# Is there something missing in the usual calculation by integral for hyper-surface volume of a $3$-D ball? Where is my $$4$$-D intuition going wrong about hypersurface volume of a $$3$$-D ball? There are plenty of examples of how to calculate the hypersurface volume and hypervolume of a $$3$$-D ball (eg. wikipedia) which clearly infer that for a unit $$3$$-D ball ($$r = 1$$) the hyper-volume is $$\frac{1}{2} \pi^2 = 4.935$$ Also the volume of its $$3$$-D hyper-surface is $$2 \pi^2 = 19.739 \tag{1}$$ And the volume of a $$2$$-D ball is $$\frac{4}{3} \pi = 4.189 \tag{2}$$ Also in higher dimensions volume and surface are are said to decrease with increasing dimension, which I do get. But I have for a long time thought that something was missing in the integration process, because it appears to me (who gave up on maths $$40$$ years ago) that we go from one $$2$$-D ball to one 3 ball, ignoring the other $$2$$-D balls needed, which would double in number for each dimension added after $$4$$. My intuitional imagining of the $$3$$-D ball tells me it has [is composed of, in part] four $$2$$-D balls, each with unit radius, and each with one coordinate value of 0. Indeed we would see $$x=y=z=1$$ with $$w=0$$ as a simple $$2$$-D ball in $$x,y,z$$, and I assume there must also be $$w=x=y=1$$ with $$z=0$$ and so on for $$x=0$$ and $$y=0$$. These would have a combined volume of $$4 \cdot 4.189 = 16.756 \tag{3}$$ Subtracting from $$(1)$$ the total of $$19.739$$, gives a difference of $$2.983 \tag{4}$$ Also, my intuitional imagining says there is a minimum radius 2 ball that is a surface at the “center” of the $$3$$-D ball, where $$w,x,y$$ and $$z$$ are equal to $$\sqrt{0.25}r = 0.5r$$. Actually that there are $$4$$ of these superimposed, each has a volume of $$\frac{4}{3} \pi r^3 = 0.523$$ so the total is $$2.094$$. Subtracting from $$(3)$$ leaves only $$0.888$$ for the rest of the hyper-surface, which doesn’t seem like very much. So where am I going wrong, or is $$0.888$$ enough of a figleaf to cover it? Or is it the case that $$(2)$$ is not included in $$(1)$$? • I tried to make your post more readable. Check if I corrected something wrong. – Nathanael Skrepek Apr 14 at 16:18 ## 1 Answer Using wikipedia notation, I guess that: • $$\frac 12\pi^2$$ refers to $$V_4(1)$$ which is the $$4-$$dimensional volume of the (unit) $$4-$$ball, • $$2\pi^2$$ refers to $$S_3(1)$$, which is the $$3-$$dimensional volume of the (unit) $$3-$$sphere, • $$\frac 43\pi$$ refers to $$V_3(1)$$, which is the $$3-$$dimensional volume of the (unit) $$3-$$ball. Now if I've understood your question/reasoning, you're listing some $$3-$$balls of varying radii that are contained within the (unit) $$4-$$ball. From there, you sum up the $$3-$$volumes of these $$3-$$balls and substract it from the $$3-$$volume of the $$3-$$sphere. Then, you observe that the resulting value is small and that something is wrong. The short answer is that as you've surmised, $$(2)$$ is not included in $$(1)$$, at least not in the way you're manipulating those values. If we go down one dimension, back in good old regular $$3$$D, it should be simpler to vizualize what you tried to compute: • $$S_2(1) = 4\pi$$ is the surface of a unit sphere, • $$V_2(1) = \pi$$ is the surface of a unit disk. Inside of a unit ball, you can easily fit three unit disks, one for $$x=0$$, another for $$y=0$$, and $$z=0$$. That's already $$3\pi$$'s worth of surface, so you're left with only one $$\pi$$ worth of surface to fill the whole unit ball... except that you don't fill a ball with "surfaces", you're supposed to use "volumes". Also, basing your computation on the surface of the sphere will not help you here. Although the surface of a sphere, and the volume of the ball that it delimits, share somewhat of a relationship, these two values represent two different things of very different nature. In maths, you'll never obtain a volume by just adding together surfaces. In the real world, if you stack enough sheets of paper, you'll get thick books. That's because no matter how thin your sheets are, they will still have some width/thickness to them. In maths, an "ideal" $$2$$D disk has zero width. So no matter how many disks you stack, you'll be stuck at zero width. Back to your computation, I'll try to give you some understanding of what you did. Let's stay in $$3$$D. Say you bought a spherical chocolate Easter egg for your kids, but you forgot it in your car, and it melted into a (weirdly-shaped, but) perfectly flat tablet. Since you don't want to waste the chocolate, you decide to repackage it. To do that, you take a cookie-cutter, and cut circular disk shapes into your (weird) chocolate tablet. Since the shape is irregular, you get a couple of nice, circular disks, but also a bunch of leftover chocolate. That is what you computed. Figuring out how to split a sphere into a bunch of non-overlapping disks (or other shapes) can be an interesting topic, but it has, a priori, little to do with how to split a ball into disks. Even more so with mathematical disks with zero width/height. • @N Bach - That is most helpful. I now see that the 3D ball I was contemplating is merely a slice through the 4D ball, so thankyou. Sorry I am unable to upvote your answer. – Jeremy C Apr 15 at 13:43
## Note: this wiki is now retired and will no longer be updated! The static final versions of the pages are left as a convenience for readers. Note that meta-pages such as "discussion," "history," etc., will not work. # SICP exercise 2.40 ## Problem Define a procedure unique-pairs that, given an integer n, generates the sequence of pairs (i, j) with [itex]1 \le j < i \le n[/itex]. Use unique-pairs to simplify the definition of prime-sum-pairs given in the text. ## Solution Here are all of the definitions needed for the version of prime-sum-pairs given in the text: ```(define nil (quote ())) (define (filter predicate sequence) (cond ((null? sequence) nil) ((predicate (car sequence)) (cons (car sequence) (filter predicate (cdr sequence)))) (else (filter predicate (cdr sequence))))) (define (enumerate-interval low high) (if (> low high) nil (cons low (enumerate-interval (+ low 1) high)))) (define (accumulate op initial sequence) (if (null? sequence) initial (op (car sequence) (accumulate op initial (cdr sequence))))) (define (flatmap proc seq) (accumulate append nil (map proc seq))) (define (square x) (* x x)) (define (smallest-divisor n) (find-divisor n 2)) (define (find-divisor n test-divisor) (cond ((> (square test-divisor) n) n) ((divides? test-divisor n) test-divisor) (else (find-divisor n (+ test-divisor 1))))) (define (divides? a b) (= (remainder b a) 0)) (define (prime? n) (= n (smallest-divisor n))) (define (prime-sum? pair) (prime? (+ (car pair) (cadr pair)))) (define (make-pair-sum pair) (define (prime-sum-pairs n) (map make-pair-sum (filter prime-sum? (flatmap (lambda (i) (map (lambda (j) (list i j)) (enumerate-interval 1 (- i 1)))) (enumerate-interval 1 n)))))``` Here's a definition of unique-pairs that makes list pairs (instead of Scheme primitive pairs): ```(define (unique-pairs n) (flatmap (lambda (i) (map (lambda (j) (list i j)) (enumerate-interval 1 (- i 1)))) (enumerate-interval 1 n)))``` Test: `(unique-pairs 3)` Output: ```((2 1) (3 1) (3 2)) ``` And here's a simpler definition of prime-sum-pairs that uses unique-pairs: ```(define (prime-sum-pairs n) (map make-pair-sum (filter prime-sum? (unique-pairs n))))``` Test: `(prime-sum-pairs 6)` Output: ```((2 1 3) (3 2 5) (4 1 5) (4 3 7) (5 2 7) (6 1 7) (6 5 11)) ```
# Subfloat: Label beside, caption below I'm looking for a possibility to put the label (a),(b),(c)... of a subfigure beside the subfigure and the caption below. So I'm tried sidesubfigure (Subfig label positioning), which puts the label exactly as I want to, but I didn't find a solution to put the caption below the image. \documentclass{article} \usepackage{floatrow} \usepackage{subfig} \usepackage{caption} \usepackage{rule} \begin{document} \begin{figure} \sidesubfloat[description]{\rule{2cm}{2cm}} \hfill \sidesubfloat[description]{\rule{2cm}{2cm}} \end{figure} \end{document} Is there a possiblity to do that? Edit: Just got another problem, when I'm adding 4 same size picture in 2 rows like this, the pictures are shifted: \begin{figure}[H] \begin{mysubfigure}{desc}{imageA} \includegraphics[width=4cm, height=4cm]{imageA} \end{mysubfigure} \hfill \begin{mysubfigure}{shorter desc}{imageB} \includegraphics[width=4cm, height=4cm]{imageB} \end{mysubfigure} \newline \begin{mysubfigure}{desc}{imageC} \includegraphics[width=4cm, height=4cm]{imageC} \end{mysubfigure} \hfill \begin{mysubfigure}{shorter desc}{imageD} \includegraphics[width=4cm, height=4cm]{imageD} \end{mysubfigure} \caption{Two images: \subref{imageA} is imageA, \subref{imageB} is imageB} \end{figure} Here is a solution using the stackengine (and the varwidth) package. \documentclass{article} \usepackage[demo]{graphicx} \usepackage{subcaption} \usepackage{varwidth} \usepackage{stackengine} \usepackage{lipsum} \usepackage{float} \DeclareCaptionLabelFormat{none}{} \captionsetup[subfigure]{justification=centering, labelformat=none} \newsavebox{\mybox} \newlength{\mylength} \makeatletter \newenvironment{mysubfigure}[2]{% \def\arg@caption{#1}% \def\arg@label{#2}% \def\arg@ref{(\subref{\arg@label})}% \sbox{\mybox}{\begin{varwidth}{\linewidth}{\arg@ref}\end{varwidth}}% \edef\arg@reflen{\the\wd\mybox}% \edef\arg@refline{\the\dimexpr\ht\mybox+\dp\mybox}% \begin{lrbox}{\mybox}% \begin{varwidth}{\linewidth}% }{% \end{varwidth}% \end{lrbox}% \setlength{\mylength}{\the\dimexpr\ht\mybox+\dp\mybox-\arg@refline}% \setlength{\mylength}{0.5\mylength}% \hspace{\arg@reflen}\subfigure[b]{\the\wd\mybox}% \centering% \toplap[\mylength]{l}{\arg@ref}\usebox{\mybox}% \caption{\arg@caption\label{\arg@label}}% \endsubfigure% } \makeatother \begin{document} \lipsum[1] \begin{figure}[H] \centering \begin{mysubfigure}{longer description than the other}{imageA} \includegraphics[width=3cm, height=6cm]{imageA} \end{mysubfigure} \begin{mysubfigure}{shorter desc}{imageB} \includegraphics[width=4cm, height=3cm]{imageB} \end{mysubfigure} \caption{Two images: \subref{imageA} is imageA, \subref{imageB} is imageB} \end{figure} \lipsum[2] \end{document} • Sorry, but this is not solution I'm looking for, because I still want to have a supcaption below each picture. I insert a picture to my first post to make it clearer. – Thomas Nov 19 '13 at 9:03 • @Thomas I'll check... – masu Nov 19 '13 at 9:20 • @Thomas I've "checked" :) – masu Nov 19 '13 at 10:47 • Hi, nearly perfect now, thank you. There's only one question left: When the two subfigures have different heights, they are vertically centered. What do I have to change for an alignment at the bottom (as a subfloat does) of the pictures? – Thomas Nov 19 '13 at 20:46 • @Thomas really good question which revealed a bug on my part... have to look into this... – masu Nov 19 '13 at 21:25
# Can't get table border lines to meet in the corners I'm trying to create a table with horizontal and vertical lines, but the lines aren't meeting in the corners and I can't work out why. Can anyone point out what I'm doing wrong? \documentclass[a4paper,10pt]{report} \usepackage{booktabs} \begin{document} \begin{tabular} { | r | p{2cm} | p{2cm} | } \toprule \# & One & Two \\ \midrule 1 & alpha & bravo \\ 2 & apple & banana \\ \bottomrule \end{tabular} \end{document} Running this through a few different versions of pdflatex always produces this output, in Acrobat Reader and evince: How do I get the horizontal and vertical lines to meet? - 'Using vertical lines'. The booktabs manual explains that these should not be used, and also that they will not work with its rules. If you do want to create a grid, use the LaTeX horizontal line system. –  Joseph Wright Jul 13 '12 at 11:09 As you surely noticed, \toprule and \bottomrule draw a heavier rule than \midrule. This is a really good feature of booktabs (also the spacing is especially cared for), but it's of course incompatible with vertical rules. Simply don't use vertical rules and your table will be prettier. –  egreg Jul 13 '12 at 11:18 Ahh, of course! I just copied the code from a nice table that didn't have vertical lines, and didn't notice. Also, with respect, omitting vertical lines is a luxury reserved for those who haven't been forced to create a report with too many columns for the page width... –  Malvineous Jul 13 '12 at 11:26 \documentclass[border=10pt]{standalone} \begin{document} \begin{tabular} { | r | p{2cm} | p{2cm} | } \hline \# & One & Two \\ \hline 1 & alpha & bravo \\ 2 & apple & banana \\ \hline \end{tabular} \end{document} ## Edit 3 If you are a perfectionist, please consider the following defects at the intersection of horizontal and vertical lines. Maybe you hate them even though they are small enough to be visible at a glance. ## Edit 3.1 Based on Ulrike's comment below, we need to use array package to remove such a bad feature. \documentclass[border=10pt]{standalone} \usepackage{array} \begin{document} \begin{tabular} {|r|p{2cm}|p{2cm}|} \hline \# & One & Two \\ \hline 1 & alpha & bravo \\ 2 & apple & banana \\ \hline \end{tabular} \end{document} - To get better corners: \usepackage{array}. –  Ulrike Fischer Jul 13 '12 at 11:59 @UlrikeFischer: Thanks. Problem solved! –  kiss my armpit Jul 13 '12 at 12:14 A very similar answer, but please avoid using horizontal lines when you get a new advisor. =) \documentclass[10pt,a4paper]{article} \usepackage{booktabs} \usepackage{array} \begin{document} \begin{table} \centering \setlength{\aboverulesep}{0pt} \setlength{\belowrulesep}{0pt} \begin{tabular} { | r | p{2cm} | p{2cm} | } \toprule \# & One & Two \\ \midrule 1 & alpha & bravo \\ 2 & apple & banana \\ \bottomrule \end{tabular} \end{table} \end{document} The corners are a tad better, and you can switch the \top-\mid and \bottomrule, to \hrule if you want. This is not a global change, but changes one table. For a more global solution simply put \setlength{\aboverulesep}{0pt} \setlength{\belowrulesep}{0pt} in the preamble instead of inside the table. - while i agree that tables are generally (much!) better without horizontal lines, how do you deal with the problem of "too many columns" when you've already (1) decreased the size of the font, (2) set the table landscape, and (3) the table isn't amenable to having chunks taken out and presented separately? (in this situation, old or new advisor doesn't factor in.) –  barbara beeton Jul 13 '12 at 12:35
# Palatini variation In general relativity and gravitation the Palatini variation is nowadays thought of as a variation of a Lagrangian with respect to the connection. In fact, as is well known, the Einstein–Hilbert action for general relativity was first formulated purely in terms of the spacetime metric ${\displaystyle \scriptstyle {g_{\mu \nu }}}$. In the Palatini variational method one takes as independent field variables not only the ten components ${\displaystyle \scriptstyle {g_{\mu \nu }}}$ but also the forty components of the affine connection ${\displaystyle \scriptstyle {\Gamma _{\,\beta \mu }^{\alpha }}}$, assuming, a priori, no dependence of the ${\displaystyle \scriptstyle {\Gamma _{\,\beta \mu }^{\alpha }}}$ from the ${\displaystyle \scriptstyle {g_{\mu \nu }}}$ and their derivatives. The reason the Palatini variation is considered important is that it means that the use of the Christoffel connection in general relativity does not have to be added as a separate assumption; the information is already in the Lagrangian. For theories of gravitation which have more complex Lagrangians than the Einstein–Hilbert Lagrangian of general relativity, the Palatini variation sometimes gives more complex connections and sometimes tensorial equations. Attilio Palatini (1889–1949) was an Italian mathematician who received his doctorate from the University of Padova, where he studied under Levi-Civita and Ricci-Curbastro. The history of the subject, and Palatini's connection with it, are not straightforward (see references). In fact, it seems that what the textbooks now call "Palatini formalism" was actually invented in 1925 by Einstein, and as the years passed, people tended to mix up the Palatini identity and the Palatini formalism.
## ABC variable selection Posted in Books, Mountains, pictures, Running, Statistics, Travel, University life with tags , , , , , , , , , , , on July 18, 2018 by xi'an Prior to the ISBA 2018 meeting, Yi Liu, Veronika Ročková, and Yuexi Wang arXived a paper on relying ABC for finding relevant variables, which is a very original approach in that ABC is not as much the object as it is a tool. And which Veronika considered during her Susie Bayarri lecture at ISBA 2018. In other words, it is not about selecting summary variables for running ABC but quite the opposite, selecting variables in a non-linear model through an ABC step. I was going to separate the two selections into algorithmic and statistical selections, but it is more like projections in the observation and covariate spaces. With ABC still providing an appealing approach to approximate the marginal likelihood. Now, one may wonder at the relevance of ABC for variable selection, aka model choice, given our warning call of a few years ago. But the current paper does not require low-dimension summary statistics, hence avoids the difficulty with the “other” Bayes factor. In the paper, the authors consider a spike-and… forest prior!, where the Bayesian CART selection of active covariates proceeds through a regression tree, selected covariates appearing in the tree and others not appearing. With a sparsity prior on the tree partitions and this new ABC approach to select the subset of active covariates. A specific feature is in splitting the data, one part to learn about the regression function, simulating from this function and comparing with the remainder of the data. The paper further establishes that ABC Bayesian Forests are consistent for variable selection. “…we observe a curious empirical connection between π(θ|x,ε), obtained with ABC Bayesian Forests  and rescaled variable importances obtained with Random Forests.” The difference with our ABC-RF model choice paper is that we select summary statistics [for classification] rather than covariates. For instance, in the current paper, simulation of pseudo-data will depend on the selected subset of covariates, meaning simulating a model index, and then generating the pseudo-data, acceptance being a function of the L² distance between data and pseudo-data. And then relying on all ABC simulations to find which variables are in more often than not to derive the median probability model of Barbieri and Berger (2004). Which does not work very well if implemented naïvely. Because of the immense size of the model space, it is quite hard to find pseudo-data close to actual data, resulting in either very high tolerance or very low acceptance. The authors get over this difficulty by a neat device that reminds me of fractional or intrinsic (pseudo-)Bayes factors in that the dataset is split into two parts, one that learns about the posterior given the model index and another one that simulates from this posterior to compare with the left-over data. Bringing simulations closer to the data. I do not remember seeing this trick before in ABC settings, but it is very neat, assuming the small data posterior can be simulated (which may be a fundamental reason for the trick to remain unused!). Note that the split varies at each iteration, which means there is no impact of ordering the observations. ## go, Iron scots! Posted in Statistics with tags , , , , , , , on June 30, 2018 by xi'an ## ABC in Ed’burgh Posted in Mountains, pictures, Running, Statistics, Travel, University life with tags , , , , , , , , , on June 28, 2018 by xi'an A glorious day for this new edition of the “ABC in…” workshops, in the capital City of Edinburgh! I enjoyed very much this ABC day for demonstrating ABC is still alive and kicking!, i.e., enjoying plenty of new developments and reinterpretations. With more talks and posters on the way during the main ISBA 2018 meeting. (All nine talks are available on the webpage of the conference.) After Michael Gutmann’s tutorial on ABC, Gael Martin (Monash) presented her recent work with David Frazier, Ole Maneesoonthorn, and Brendan McCabe on ABC  for prediction. Maybe unsurprisingly, Bayesian consistency for the given summary statistics is a sufficient condition for concentration of the ABC predictor, but ABC seems to do better for the prediction problem than for parameter estimation, not losing to exact Bayesian inference, possibly because in essence the summary statistics there need not be of a large dimension to being consistent. The following talk by Guillaume Kon Kam King was also about prediction, for the specific problem of gas offer, with a latent Wright-Fisher point process in the model. He used a population ABC solution to handle this model. Alexander Buchholz (CREST) introduced an ABC approach with quasi-Monte Carlo steps that helps in reducing the variability and hence improves the approximation in ABC. He also looked at a Negative Geometric variant of regular ABC by running a random number of proposals until reaching a given number of acceptances, which while being more costly produces more stability. Other talks by Trevelyan McKinley, Marko Järvenpää, Matt Moores (Warwick), and Chris Drovandi (QUT) illustrated the urge of substitute models as a first step, and not solely via Gaussian processes. With for instance the new notion of a loss function to evaluate this approximation. Chris made a case in favour of synthetic vs ABC approaches, due to degradation of the performances of nonparametric density estimation with the dimension. But I remain a doubting Thomas [Bayes] on that point as high dimensions in the data or the summary statistics are not necessarily the issue, as also processed in the paper on ABC-CDE discussed on a recent post. While synthetic likelihood requires estimating a mean function and a covariance function of the parameter of the dimension of the summary statistic. Even though estimated by simulation. Another neat feature of the day was a special session on cosmostatistics with talks by Emille Ishida and Jessica Cisewski, from explaining how ABC was starting to make an impact on cosmo- and astro-statistics, to the special example of the stellar initial mass distribution in clusters. Call is now open for the next “ABC in”! Note that, while these workshops have been often formally sponsored by ISBA and its BayesComp section, they are not managed by a society or a board of administrators, and hence are not much contrived by a specific format. It would just be nice to keep the low fees as part of the tradition. ## from Arthur’s Seat [spot ISBA participants] Posted in Mountains, pictures, Running, Travel with tags , , , , , , , , , , , , on June 27, 2018 by xi'an ## fast ε-free ABC Posted in Books, Mountains, pictures, Running, Statistics, Travel, University life with tags , , , , , , , , , on June 8, 2017 by xi'an Last Fall, George Papamakarios and Iain Murray from Edinburgh arXived an ABC paper on fast ε-free inference on simulation models with Bayesian conditional density estimation, paper that I missed. The idea there is to approximate the posterior density by maximising the likelihood associated with a parameterised family of distributions on θ, conditional on the associated x. The data being then the ABC reference table. The family chosen there is a mixture of K Gaussian components, which parameters are then estimated by a (Bayesian) neural network using x as input and θ as output. The parameter values are simulated from an adaptive proposal that aims at approximating the posterior better and better. As in population Monte Carlo, actually. Except for the neural network part, which I fail to understand why it makes a significant improvement when compared with EM solutions. The overall difficulty with this approach is that I do not see a way out of the curse of dimensionality: when the dimension of θ increases, the approximation to the posterior distribution of θ does deteriorate, even in the best of cases, as any other non-parametric resolution. It would have been of (further) interest to see a comparison with a most rudimentary approach, namely the one we proposed based on empirical likelihoods. ## ISBA 2018, Edinburgh, 24-28 June Posted in Statistics with tags , , , , , , , , , , on March 1, 2017 by xi'an The ISBA 2018 World Meeting will take place in Edinburgh, Scotland, on 24-29 June 2018. (Since there was some confusion about the date, it is worth stressing that these new dates are definitive!) Note also that there are other relevant conferences and workshops in the surrounding weeks: • a possible ABC in Edinburgh the previous weekend, 23-24 June [to be confirmed!] • the Young Bayesian Meeting (BaYSM) in Warwick, 2-3 July 2018 • a week-long school on fundamentals of simulation in Warwick, 9-13 July 2018 with courses given by Nicolas Chopin, Art Owen, Jeff Rosenthal and others • MCqMC 2018 in Rennes, 1-6 July 208 • ICML 2018 in Stockholm, 10-15 July 2018 • the 2018 International Biometrics Conference in Barcelona, 8-13 July 2018 ## asymptotically exact inference in likelihood-free models Posted in Books, pictures, Statistics with tags , , , , , , , on November 29, 2016 by xi'an “We use the intuition that inference corresponds to integrating a density across the manifold corresponding to the set of inputs consistent with the observed outputs.” Following my earlier post on that paper by Matt Graham and Amos Storkey (University of Edinburgh), I now read through it. The beginning is somewhat unsettling, albeit mildly!, as it starts by mentioning notions like variational auto-encoders, generative adversial nets, and simulator models, by which they mean generative models represented by a (differentiable) function g that essentially turn basic variates with density p into the variates of interest (with intractable density). A setting similar to Meeds’ and Welling’s optimisation Monte Carlo. Another proximity pointed out in the paper is Meeds et al.’s Hamiltonian ABC. “…the probability of generating simulated data exactly matching the observed data is zero.” The section on the standard ABC algorithms mentions the fact that ABC MCMC can be (re-)interpreted as a pseudo-marginal MCMC, albeit one targeting the ABC posterior instead of the original posterior. The starting point of the paper is the above quote, which echoes a conversation I had with Gabriel Stolz a few weeks ago, when he presented me his free energy method and when I could not see how to connect it with ABC, because having an exact match seemed to cancel the appeal of ABC, all parameter simulations then producing an exact match under the right constraint. However, the paper maintains this can be done, by looking at the joint distribution of the parameters, latent variables, and observables. Under the implicit restriction imposed by keeping the observables constant. Which defines a manifold. The mathematical validation is achieved by designing the density over this manifold, which looks like $p(u)\left|\frac{\partial g^0}{\partial u}\frac{\partial g^0}{\partial u}^\text{T}\right|^{-\textonehalf}$ if the constraint can be rewritten as g⁰(u)=0. (This actually follows from a 2013 paper by Diaconis, Holmes, and Shahshahani.) In the paper, the simulation is conducted by Hamiltonian Monte Carlo (HMC), the leapfrog steps consisting of an unconstrained move followed by a projection onto the manifold. This however sounds somewhat intense in that it involves a quasi-Newton resolution at each step. I also find it surprising that this projection step does not jeopardise the stationary distribution of the process, as the argument found therein about the approximation of the approximation is not particularly deep. But the main thing that remains unclear to me after reading the paper is how the constraint that the pseudo-data be equal to the observable data can be turned into a closed form condition like g⁰(u)=0. As mentioned above, the authors assume a generative model based on uniform (or other simple) random inputs but this representation seems impossible to achieve in reasonably complex settings.
mPDF Manual – What Else Can I Do # Layers (mPDF ≥ 5.7) CSS z-index can be used to utilise layers in the PDF document. CSS can set the z-index for any block element or image (default: 0). This does not work on block elements with fixed or absolute position. ### Set the Initial state for each layer You can set initial 'state' = "hidden" for a specific z-index (z), and/or specify a display name for the Layer e.g. <?php // Set initial state of layer: "hidden" or nothing $mpdf->layerDetails[z]['state'] = 'hidden'; <?php$mpdf->layerDetails[z]['name'] = 'Correct Answers'; • where z is the z-index (set by CSS) Note: • Using layers automatically changes the resulting PDF document to PDF 1.5 version (which is incompatible with PDFA and PDFX in mPDF). • You cannot nest layers - inner values will be ignored ### Display the Layers pane in PDF document viewer $mpdf->open_layer_pane (set by default as 'open_layer_pane' => false as a configuration variable) can be set to open the layers pane in the browser when the document is opened. <?php$mpdf->open_layer_pane = true; ### Set Programmatically If you are writing the PDF document using functions other than WriteHTML(), you can set the layers as follows: <?php $mpdf->BeginLayer($z-index); ... \$mpdf->EndLayer(); ### Reserved Layer Names mPDF automatically adds layer names for visibility: “Print only”, “Screen only” and “Hidden”; these only show when utilised.
# Math Help - need help for subspaces 1. ## need help for subspaces let a (belong to) R be fixed. Determine the dimension of the subspace W of P(n) (R) defined by w={f(belongs to) P(n) R : f(a) = 0} 2. Originally Posted by mathlovet let a (belong to) R be fixed. Determine the dimension of the subspace W of P(n) (R) defined by w={f(belongs to) P(n) R : f(a) = 0} i think by $V=P_n(\mathbb{R})$ you mean the set of all polynomials of degree at most n with real coefficients. define $\varphi: V \longrightarrow \mathbb{R}$ by $\varphi(f)=f(a).$ clearly $\varphi$ is a surjective homomorphism and $\ker \varphi = W.$ thus $V/W \simeq \mathbb{R}.$ hence: $n - \dim_{\mathbb{R}} W = \dim_{\mathbb{R}}V - \dim_{\mathbb{R}} W = \dim_{\mathbb{R}}\mathbb{R}=1,$ which gives us: $\dim_{\mathbb{R}}W=n-1. \ \ \ \Box$
Services & ResourcesWolfram ForumsMathGroup Archive Re: Re: Re: Problems with NSolve Daniel Lichtblau wrote: > Kshitij Wagh wrote: > >>Hi >>Sorry for the "fuzziness" :) >>chpoly was found by using Det[A - x*IdentityMatrix[dim]] (where A is a >>matrix function of u with \alpha etc. being some random parameters). \Alpha >>etc are random real numbers between -1 and 1 with working precision 30. Also >>please find attached a notebook of a working sample. >> >>[contact the sender to get this attachment - moderator] >> >>So I think the problem seems here, that instead of getting polynomials, I >>am getting rational functions. The thread Carl sent mentions some >>alternatives - I will try them out. >>Thanks, >>Kshitij >> [...] > Without a specific matrix it is still fuzzy. > [....] I guess I should have been more specific. If you provide plain ascii text of your code, you will not require a notebook attachment and can have it directly in the MathGroup post. That way people can try it without needing to contact you off line. > Now to address what I believe is the main bottleneck. You are computing > bivariate determinants, and Det is not using any method adapted to the > specifications of your particular family of matrices. For dimension > higher than 11 or so, it will do some thing that might well create > denominators (at least, if inputs are not exact), and moreover I expect > would be slow. I was probably incorrect in thinking this would be the main bottleneck. Subsequent experimentation indicated that the method Det uses for larger matrices is substantially faster than what I showed for dimension 11 (though still slower than the interpolation-based approach). I did not check with approximate inputs but that could well give rise to problematic denominators. That would more affect the NSolve step than the Det computation speed. > [...] > Next is the issue of finding, or at least counting, the real solutions > to the system that results when we look for multiple eigenvalues. One > approach, as you indicate, is to solve it and count the real solutions. > > findcrossings1[poly_, u_, t_] := > Cases[{u, t} /. NSolve[{poly, D[poly, t]}, {u, t}], {_Real, _Real}] > [...] > This is a bit slow, and while I expect it to not deteriorate as fast as > CharacteristicPolynomial, it will all the same become problematic as > dimension rises. > [...] When I tried at dimension 16 it took 400 seconds, compated to 10 for the CountRoots[Resultant[...]] approach. At dimension 21 that latter took 40 seconds, and I did not bother trying the NSolve method. In your case, where Det may be giving back expressions with denominators due to not recognizing cancellation (this happens when approximate input is mixed with polynomial expressions), I'd not be surprised if NSolve behavior is substantially worse. One (I hope) final remark. There are other ways to go about counting real roots. I do not have familiarity with the details. I just wanted to mention that if this problem is important to your work, and you cannot make progress with the methods already indicated, then there may still be other avenues to explore. Daniel Lichtblau Wolfram Research • Prev by Date: Re: Enable multicore calculation ? • Next by Date: Re: Does Mathematica really need more printed, introductory documentation? • Previous by thread: Re: Re: Problems with NSolve • Next by thread: Re: Problems with NSolve
summaryrefslogtreecommitdiff log msg author committer range path: root/utils/useragent.h 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 /* * Copyright 2007 Daniel Silverstone * Copyright 2007 Rob Kendrick * * This file is part of NetSurf, http://www.netsurf-browser.org/ * * NetSurf is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; version 2 of the License. * * NetSurf is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program. If not, see . */ #ifndef _NETSURF_UTILS_USERAGENT_H_ #define _NETSURF_UTILS_USERAGENT_H_ /** Retrieve the core user agent for this release. * * The string returned can be relied upon to exist for the duration of * the execution of the program. There is no need to copy it. */ const char * user_agent_string(void); /** Free any memory allocated for the user_agent_string * * After calling this, the value returned by \ref user_agent_string() * is to be considered invalid. */ void free_user_agent_string(void); #endif
1 2 3 4 5 # R: if, if-else, and ifelse functions I. Introduction if, if-else, and ifelse functions are extremely powerful, and useful, in programming. In general, they allow a condition or collection of conditions to be checked, and depending on the result, certain code can then be run or not run. II. if function The if function uses the following general format: if(condition) {code} The if function works by testing a condition. If the condition is true, the code is run. If the condition is false, the code is not run. Example > x = 10 > y = 1 > > # CONDITION IS FALSE > if(x > 10) {y = 2} > y [1] 1 > > # CONDITION IS TRUE > if(x == 10) {y = 4} > y [1] 4 II. if-else function The if-else function uses the following general format: if(condition) {code1} else {code2} The if-else function works by testing a condition. If the condition is true, the code1 is run. If the condition is false, the code2 is run. Example > a = 1 > b = 1 > > # CONDITION IS TRUE > if(a == 1) {b = 2} else {b = 3} > b [1] 2 > > # CONDITION IS FALSE > if(a != 1) {b = 2} else {b = 3} > b [1] 3 III. ifelse function The ifelse function uses the following general format: ifelse(condition, true.value, false.value) The ifelse function works by testing a condition. If the condition is true, true.value is returned. If the condition is false, then false.value is returned. Example > x = 1 > > # CONDITION TRUE > ifelse(x == 1, 2, 4) [1] 2 > > # CONDITION FALSE > ifelse(x > 1, 2, 4) [1] 4
# Bézout ring with non-trivial Picard group? A ring is called Bézout when its finitely generated ideals are principal. Q: Is there a nice example of a Bézout ring $$R$$ with a non-free rank $$1$$ projective $$R$$-module? Below are some thoughts and motivation: Until recently I had assumed (and thought I had a proof in mind) that any Bézout ring would have trivial Picard group. But then I came across the paper Finitely Generated Modules over Bézout Rings of Wiegand and Wiegand, in which Theorem 2.1 implies that any Hermite ring $$R$$ which is not an elementary divisor ring contains an element $$d$$ such that $$R/(d)$$ has nontrivial Picard group. However, I'm pretty sure that it's at least the case that any Bézout ring with compact minimal prime spectrum (with respect to the Zariski topology) will necessarily have a trivial Picard group. Reasoning: first reduce to the case that $$R$$ is a reduced ring, since that affects neither the Picard group nor the compactness of $$\operatorname{minSpec}(R)$$. Then observe that $$R$$ will have Von Neumann Regular total ring of fractions $$T(R)$$ (using Bézout and compactness assumptions). Since Von Neumann Regular rings have trivial Picard groups, deduce that every rank 1 projective of $$R$$ is isomorphic to an ideal of $$R$$ which is invertible in $$T(R)$$. So in looking for an answer to Q I'm looking in the subset of Bézout rings which have non-compact minimal prime spectrum and which aren't elementary divisor rings. I've been having trouble coming up with anything explicit under these constraints. Most of the still-viable candidate Bézout rings I know occur as rings of continuous real-valued functions on the remainder of certain Stone-Čech compactifications. I find it hard to work with such rings under construction. For example, if we take $$X$$ to be the union of the positive x-axis in $$\mathbb{R}^2$$ and the positive half of the $$\sin$$ curve, and take $$R = C(\beta X \setminus X)$$ (the ring of associated real-valued continuous functions), then $$R$$ is Hermite but not an elementary divisor ring (cf example 4.11 here), so the above-cited Theorem 2.1 implies that $$R/(d)$$ would provide me an example for some $$d \in R$$. Yet I have no idea how to locate such an element $$d$$ or, having done that, what this projective module would look like. Since I've apparently been carrying around this misconception about Picard groups of Bézout rings for quite a while, I'd love to have an explicit example to sink my teeth into.
#### Remote Forensics with Mozilla Investigator ##### August 30, 2018 mozilla mig remote forensics In my recent post about osquery, I wrote about collecting telemetry from digital endpoints. This is important as skilled threat actors may be quick to manipulate anything that leaves traces on the system. A centralised logging solution is however a completely different thing to go after, where the adversary will risk detection if attempting a clean-up. This is also where the final osquery logs should reside and be streamed in realtime to, and such the intial stages of an infiltration and possibly the detection of a shutdown agent will be your first clue that something happened. But what do you do when the incident is a fact after this point? The limitation of collecting telemetry is that there are more information than can be collected available on the end-point. The collection priority will typically be on artefacts that are directly relevant to detection and triage. The next step after a careful operational security consideration will be to collect forensic evidence that answers the who, what, where and how. choose wisely where you do physical acquisition, and prioritize it in an observation phase where operational security is of utter importance. For everything else there is the option of doing nothing (observe) or using remote acquisition. In the old days, everything was done physically. This was an advantage in terms of it actually being possible to go full stealth. However, if you have a sizable network - this is no longer a feasible approach for all compromised endpoints, if you want to keep up with the operations tempo of the threat actor. It all ends up with your own ability to outmanoeuvre the adversary. In addition you will have clusters of computers and virtual machines running on the same hardware. Briefly stated: choose wisely where you do physical acquisition, and prioritize it in an observation phase where operational security is of utter importance. For everything else there is the option of doing nothing (observe) or using remote acquisition. I have been following the progress of Google Rapid Response for years. I’m quite disappointed that it has never gotten to a maturity stage that was viable in the long-run for anyone else than Google. There are loads of impressive functionality, such as the Chipsec support, which is nothing less than awesome and fundamental. However, the install process, complexity of the system and lack of compability with new versions of e.g. macOS is telling. An opportunity missed. So if the answer is not osquery or GRR, what do we have left? One way to go is the commercial route, another is the open source path. I tend to favor the latter. As I mentioned earlier, the clue is to outmanoeuvre the adversary, right? I still don’t understand how anyone thinks a standardised setup, with completely standard process names, file locations and so on can be practically effective against the more skilled adversaries. For this post I’ll focus on Mozilla InvestiGator (MIG). Stupid name aside: MIG is a platform to perform investigative surgery on remote endpoints. It enables investigators to obtain information from large numbers of systems in parallel, thus accelerating investigation of incidents and day-to-day operations security. The rest of this post will require: • A console/terminal-enabled environment on macOS or Linux • Docker (optional) The above video can be found on the MIG website. There are several things that seems reasonable with MIG, one is that queries are PGP-signed. MIG are also “sold” as a fast end-point forensics tool, since it is distributed in nature. MIG, which supports Linux, macOS and Windows, is not feature-complete for all platforms yet, but it is getting close. In remote forensics, the most used features are likely memory and file inspection - and those are fully supported on all platforms. As osquery, MIG has an easy way of getting a test environment up and running through Docker. I suggest you set it up now. When you have installed Docker, run the following queries in a terminal. This will present you with an all-in-one server, client and agent environment that you can use for testing. docker pull mozilla/mig docker run -it mozilla/mig For setting up MIG in production, the process is quite exhaustive. The last time I did something this complex was when configuring OpenLDAP. There are options almost everywhere, so make sure to pay attention to the details. Configuration mistakes, such as not enabling authentication on the web API can cause severe impact. Luckily MIG is really well documented, so I recommend reading up on the documentation and examples at their main site. The layout of the code and docs is very neat. It consists of a microarchitecture, with its own API-server and a scheduler - which contains the central task list. In addition what is exposed to the agents and the world is a RabbitMQ server. The general architecture, which is shown in the Concept Docs on Github, is describing enough for this. The below is also a taste for what to expect in the docs: clean, thorough, old-school. {investigator} -https-> {API} {Scheduler} -amqps-> {Relays} -amqps-> {Agents} \ / sql\ /sql {DATABASE} What follows is a brief reiteration of the install docs applied on Debian 9 and my notes on the installation. First install required a mix of applications. I did this on one server, but the services could be distributed and segmented on several (Postgres, RabbitMQ and MIG API and Scheduler). apt install golang postgresql nginx echo 'export GOPATH="$HOME/go"' >> /home/mig/.bashrc su mig go get github.com/mozilla/mig cd ~/go/src/github.com/mozilla/mig wget -O - 'https://dl.bintray.com/rabbitmq/Keys/rabbitmq-release-signing-key.asc' | sudo apt-key add - echo "deb https://dl.bintray.com/rabbitmq/debian jessie erlang" > /etc/apt/sources.list.d/bintray.erlang.list echo "deb https://dl.bintray.com/rabbitmq/debian jessie main" > /etc/apt/sources.list.d/bintray.rabbitmq.list apt update && apt install erlang-nox rabbitmq-server exit For signing the agent and scheduler certificate you can setup a small PKI like the following. The certificates will be used throughout, so make sure they are well protected. cd ~ mkdir migca cd migca cp$GOPATH/src/github.com/mozilla/mig/tools/create_mig_ca.sh . bash create_mig_ca.sh # some manual work required here This step probably needs no explaining. The scheduler database is stored in Postgres. Configure it like this (you may want to consider having unique passwords per role, unlike the example though): echo "host all all 127.0.0.1/32 password" >> /etc/postgresql/9.6/main/pg_hba.conf su postgres ALTER ROLE migadmin WITH NOSUPERUSER INHERIT NOCREATEROLE NOCREATEDB LOGIN PASSWORD '$PASSWORD'; CREATE ROLE migapi; ALTER ROLE migapi WITH NOSUPERUSER INHERIT NOCREATEROLE NOCREATEDB LOGIN PASSWORD '$PASSWORD'; CREATE ROLE migscheduler; ALTER ROLE migscheduler WITH NOSUPERUSER INHERIT NOCREATEROLE NOCREATEDB LOGIN PASSWORD '$PASSWORD';" psql -c 'CREATE DATABASE mig;' exit sudo -u postgres psql -f /home/mig/go/src/github.com/mozilla/mig/database/schema.sql mig exit RabbitMQ is a bit of a hassle, but I got a working configuration with the following. Also do note that the “https variant” for RMQ is “ampqs and port 5671”, while plain text protocol is “ampq and port 5672”. You will need to keep the latter in mind later on. cd ~ cp {rabbitmq.crt,rabbitmq.key,ca/ca.crt} /etc/rabbitmq PASSWORD_ADMIN="<set pass here>" PASSWORD_AGENT="<set pass here>" PASSWORD_WORKER="<set pass here>" PASSWORD_SCHEDULER="<set pass here>" rabbitmqctl add_user admin "$PASSWORD_ADMIN" rabbitmqctl delete_user guest rabbitmqctl add_user scheduler "$PASSWORD_SCHEDULER" rabbitmqctl set_permissions -p mig scheduler \ '^(toagents|toschedulers|toworkers|mig\.agt\..*)$' \ '^(toagents|toworkers|mig\.agt\.(heartbeats|results))$' \ '^(toagents|toschedulers|toworkers|mig\.agt\.(heartbeats|results))$' rabbitmqctl add_user agent "$PASSWORD_AGENT" rabbitmqctl set_permissions -p mig agent \ '^mig\.agt\..*$' \ '^(toschedulers|mig\.agt\..*)$' \ '^(toagents|mig\.agt\..*)$' rabbitmqctl add_user worker "$PASSWORD_WORKER" rabbitmqctl set_permissions -p mig worker \ '^migevent\..*$' \ '^migevent(|\..*)$' \ '^(toworkers|migevent\..*)$' service rabbitmq-server restart At this point, copy /usr/share/doc/rabbitmq-server/rabbitmq.config.example.gz to /etc/rabbitmq/rabbitmq.config. Uncomment {ssl_listeners, [5671]}, and add the following to it. You will only be able to connect to the domain specified in migca (not 127.0.0.1 for instance). {ssl_options, [{cacertfile, "/etc/rabbitmq/ca.crt"}, {certfile, "/etc/rabbitmq/rabbitmq.crt"}, {keyfile, "/etc/rabbitmq/rabbitmq.key"}, {verify, verify_peer}, {fail_if_no_peer_cert, true}, {versions, ['tlsv1.2', 'tlsv1.1']}, {ciphers, [{dhe_rsa,aes_256_cbc,sha256}, {dhe_rsa,aes_128_cbc,sha256}, {dhe_rsa,aes_256_cbc,sha}, {rsa,aes_256_cbc,sha256}, {rsa,aes_128_cbc,sha256}, {rsa,aes_256_cbc,sha}]} ]} Then restart the service and make sure it is running: service rabbitmq-server restart # netstat -taupen|grep 5671 You have now configured the data stores (Postgres and RabbitMQ) and have your own small PKI CA up and running. The next steps gets into the details on compiling and deploying the actual MIG Scheduler and then the API. su mig cd $GOPATH/src/github.com/mozilla/mig make mig-scheduler exit cp /home/mig/go/src/github.com/mozilla/mig/bin/linux/amd64/mig-scheduler /usr/local/bin/ mkdir -p /etc/mig cp /home/mig/go/src/github.com/mozilla/mig/conf/scheduler.cfg.inc /etc/mig/scheduler.cfg cp /home/mig/migca/{scheduler.crt,scheduler.key,ca/ca.crt} /etc/mig chown root.mig /etc/mig/* chmod 750 /etc/mig/* mkdir /var/cache/mig/ chown mig /var/cache/mig/ Open /etc/mig/scheduler.cfg. Uncomment the TLS section under mq, and make sure it looks like: usetls = true cacert = "/etc/mig/ca.crt" tlscert = "/etc/mig/scheduler.crt" tlskey = "/etc/mig/scheduler.key" The data store sections should look like (use the PASSWORD variables from earlier): [postgres] host = "127.0.0.1" port = 5432 dbname = "mig" user = "migscheduler" password = "$PASSWORD" sslmode = "disable" maxconn = 10 [mq] host = "127.0.0.1" port = 5671 user = "scheduler" pass = "$PASSWORD_SCHEDULER" vhost = "mig" Now you can start the scheduler. Note that this is for an initial op only and that the scheduler should use a service script. su mig nohup mig-scheduler & At this point the scheduler should be running fine. To compile and boot the API client: cd$GOPATH/src/github.com/mozilla/mig make mig-api exit cp /home/mig/go/src/github.com/mozilla/mig/bin/linux/amd64/mig-api /usr/local/bin/ cp /home/mig/go/src/github.com/mozilla/mig/conf/api.cfg.inc /etc/mig/api.cfg My final API client config (/etc/mig/api.cfg) looked like the following. I forwarded this with the Nginx example in the docs. Note that authentication can only be enabled after you have added an investigator’s key, so that should be “off” and not exposed to the world at this point. This config is as everything else in MIG quite beautiful when it comes to the options it provides (such as baseroute): [authentication] enabled = on tokenduration = 10m [manifest] requiredsignatures = 2 [server] ip = "127.0.0.1" port = 8392 host = "https://<domain>:<port>" baseroute = "/api/v1" [postgres] host = "127.0.0.1" port = 5432 user = "migapi" password = "$PASSWORD" sslmode = disable [logging] mode = "stdout" level = "info" Start the MIP API like the following. Here as well: it needs a service script for permanent ops. nohup mig-api & Okay. So at this point you are done with the initial server side setup. This was where it got interesting. As we have the API up and running, we can now connect with the client applications. This part I did on macOS with Homebrew set up in advance. To note here, Mozilla haven’t made MIG GPGv2-compatible yet, so that was a bit sad. brew install gpg1 gpg1 --gen-key gpg1 --edit-key <fpr-from-above> # create a DSA subkey for signing gpg --export -a <fpr-from-subkey> > /tmp/pubkey.asc echo 'export GOPATH="$HOME/go"' >> ~/.bashrc Compile the console client: go get github.com/mozilla/mig cd ~/go/src/github.com/mozilla/mig sudo cp bin/darwin/amd64/mig-console /usr/local/bin All configuration on the investigator client side is done in ~/.migrc, which is sweet. Mine ended looking like the following. Take note of the macros, those can be used to select hosts for queries later on. [api] url = "https://<domain>/api/v1/" [gpg] home = "<homedir>/.gnupg" keyid = "<PGP Fingerprint>" [targets] macro = allonline:status='online' macro = idleandonline:status='online' OR status='idle' Boot it up! For the first user: mig> create investigator name> Tommy Allow investigator to manage users (admin)? (yes/no)> yes Allow investigator to manage loaders? (yes/no)> yes Allow investigator to manage manifests? (yes/no)> yes Add a public key for the investigator? (yes/no)> yes pubkey> /tmp/pubkey.asc create investigator? (y/n)> y Investigator 'Tommy' successfully created with ID 2 Back to the MIG server. Enable authentication in the API, by editing /etc/mig/api.cfg and switching enabled = off to enabled = on. Verify this by: curl https://<api-domain>:<api-port>/api/v1/dashboard. It’s now time to setup the agent. By default it will be compiled for the system you are on, but you can compile for other platforms as well, as shown below. Before compiling, configure the agent with the investigators PGP keys: mkdir /etc/mig/agentkeys # add pubkeys of investigators to this directory Now do the configuration. cp conf/mig-agent.cfg.inc conf/mig-agent.cfg vim conf/mig-agent.cfg make mig-agent BUILDENV=prod OS=darwin ARCH=amd64 An example agent configuration is shown below. Take note of the ampqs (5671, not 5672 which is plain) port that is used to pub and sub from the RabbitMQ queues. [agent] relay = "amqp://agent:$PASSWORD_AGENT@<domain>:5671/" api = "https://<domain>:8393/api/v1/" socket = "127.0.0.1:51664" heartbeatfreq = "300s" moduletimeout = "300s" isimmortal = on ; proxies = "proxy1:8888,proxy2:8888" installservice = on discoverpublicip = on refreshenv = "5m" extraprivacymode = off ; nopersistmods = off onlyVerifyPubKey = false ; tags = "tagname:tagvalue" [stats] maxactions = 15 [certs] ca = "/etc/mig/ca.crt" cert= "/etc/mig/agent.crt" key = "/etc/mig/agent.key" [logging] mode = "stdout" ; stdout | file | syslog level = "info" To be honest I had a bit of a headache getting the agent to run a built-in config, so I ended up copying this config to the endpoints at /etc/mig/mig-agent.cfg. You also need the CA and agent certificates deployed in /etc/mig, and you should use the whitelisting functionality since the authentication of agents has limited strength at this point. Again, this can be built in to the agent binary, I just haven’t wrapped my head around it yet. So, compiling this for both Debian and the newest macOS beta went like (there’s a script at tools/build-agent-release.sh for this as well): make mig-agent BUILDENV=prod OS=darwin ARCH=amd64 make mig-agent BUILDENV=prod OS=linux ARCH=amd64 I also added the investigator pubkeys to the endpoint’s /etc/mig/agentkeys directory. This was kind of interesting due to the ACL’s. For testing I allowed the PGP key signature only through setting onlyVerifyPubKey = false, but ACL’s in this context are quite cool - so make sure to have a look at the configuration docs for ACLs. After configuring the ACLs you can set onlyVerifyPubKey = true again. I saw a lot of debugging at first before I figured this out, since the agent won’t allow queries from one investigator alone by default. That was pretty much it. Back to the investigator endpoint, you should also compile the rapid query binary mig in addition to the console-client. cd$GOPATH/src/github.com/mozilla/mig make mig-cmd sudo cp bin/darwin/amd64/mig /usr/local/bin This enables queries like the following (remember the macros in .migrc): mig file -e 20s -path /var/log -name "^syslog\$" -maxdepth 3 Which turns out like the following. The below is a query for a system file on both macOS 10.14 and a Debian 9 server. ## Conclusions So that was my initial go at Mozilla Ivestigator. At this point I can’t praise it enough for the granular possibilities and very linux-y architecture. I had more or less no issues that weren’t due to my own lack of experience with MIG during the setup and everything worked, surprisingly also on the newest macOS beta. That is robust. Other than that all features are as advertised, perhaps except for the GnuPGv1-only support, but it seems like they are working on that as well (the ticket is old though). Compared to other solutions that I’ve seen in action, this is the first product that resonates with my workflow. It’s rapid and it integrates easily. I also look forward to having a look on Mozilla’s proposed “MIG Action Format”. When it comes to cloaking and customisation, this is also the first tool I’ve seen that provides some freedom of movement. I didn’t detail that in my notes above, but more or less everything can be customised here, so a threat group would have to work a bit against you to identify the agent. What I saw had some operational security potential though. MIG has its place alongside osquery, and I am sure that these two in combo could provide an able cross-platform hunting and DFIR tool. I will surely follow up on the progress on MIG going forward, and there really is no reason you should not either.
# Let $a_1=1, a_n=(n-1)a_{n-1}+1,n\ge 2.$ Find $n$ such that $n|a_n.$ by Makar   Last Updated July 31, 2018 04:20 AM Let $a_1=1, a_n=(n-1)a_{n-1}+1,n\ge 2.$ Find $n$ such that $n|a_n.$ My progress: Given recurrence can be rewritten as $\frac{a_n}{(n-1)!}-\frac{a_{n-1}}{(n-2)!}=\frac{1}{(n-1)!}$ $\implies a_n=(n-1)!\sum_{k=0}^{n-1}\frac{1}{k!}$ Tags :
## Understanding Integration, Integrals, Antiderivatives, and their Relationship to Derivatives: the Fundamental Theorem of Calculus So…what are integrals and how are the related to derivatives? Think of dx, the symbol for infinitesimal change in the x coordinate (it’s different from just delta x in that in dx is the limit as delta x approaches zero) , here in the context of derivatives: The idea…is to compute the rate of change as the limiting value of the ratio of the differences Δy / Δx as Δx becomes infinitely small.In Leibniz’s notation, such an infinitesimal change in x is denoted by dx, and the derivative of y with respect to x is written dy/dx. (Wikipedia) Apparently many definitions of what exactly dy/dx means are lacking (there’s a whole article on JSTOR entitled What Exactly is dy/dx?)  There are manipulations you will need to know with dy/dx.  So, say you have y=f(x), and the derivative with respect to x is  dy/dx=f'(x).  You could then write dy=f'(x)dx.  Then if you take the integral of both sides of this, you get y=f(x)!  So if you had y=ln(x), and wanted to find the integral of this (the antiderivative of natural log), you would write integral of y=integral of ln(x)dx, and you could use integration by parts to solve this–setting u = ln(x), then du = (1/x) dx, dv = dx, and v = x. With derivatives, the derivative is basically a slope, a rate of change, the change in the y values of a function as the x value, the input of the function, is changed. The derivative is basically an instantaneous rate: we look at a change in x value so small, infinitesimally small, so that you get the slope, the rate of change, of a function at a single x input. The instantaneous rate of change for an x input, the derivative, is a tangent line to the function at that point. Think about the rate of change in an intuitive way: if between two different x values there is no change in the y values, the rate of change is 0, no change in y based on change in x. If there is a positive change in the y values based on change in x inputs, there’s a positive derivative; if there’s a negative change, there’s a negative derivative. For functions that are not linear, the derivative can be zero, positive, and negative across different x values as the curve changes. See this post on critical points, points of inflection, and concavity for more information. Infinitesimal change in the x input, dx, plays an important role in integrals as well. Whereas with derivatives you are finding the rate of change of a function at a single point, the instantaneous rate, with integrals you are finding the area under a curve between two points. The notation is like $\int_a^b f(x)dx$. Here, we’re finding the area under the curve between x=a and x=b, where the curve is made by the function f(x). See this article on Riemann Sums for more information and this section on “Areas and Integrals” in Mathematics for Economists (Google Book Search free digitized version). It’s easy to think about integrals and the area under the curve when you think of the area of a rectangle on a graph. Think about the area under a curve from x=0 to x=5 where the y value is 4 at each x value; think of the area of the space sketched out by the ordered pairs (0,4) and (5, 4). Well, you know the area of a rectangle is computed by base * height. So, it’s easy to see here that the area under the curve is simply x * y=5*4=20. Well, in this case the height is uniformly 4, so that’s easy. But what about when the height varies per x, as in a curve? That’s where Riemann Sums and dx come in. Basically, think of using tons of very skinny rectangles to approximate the area under a curve; the very skinny rectangles would do a good job of approximating the area under the curve when you added them all together. Well, that’s what happens with integrals. You’re multiplying the y value (the height=) at each new x by the change between each of the x’s, where the change is the infinitisemally small dx. So you have very small changes in x, dx’s, multiplied by the y value at each of those x’s, and you add them all up, and that’s the area under a curve. Now, how does this relate integrals to derivatives? There’s the Fundamental Theorem of Calculus. Basically, it casts integrals in terms of antiderivatives. Say you have a function f(x), the integral of this function, F(x), is called the antiderivative. When you take the derivative of an antiderivative, you end up with the original function, f(x). That’s the (first?) Fundamental Theorem of Calculus; taking the derivative of an antiderivative reverses the antidifferentiation and you end up with the original equation f(x). It may be specific to indefinite integrals in this part, I’ll look into it more: The first part of the theorem, sometimes called the first fundamental theorem of calculus, shows that an indefinite integration[1] can be reversed by a differentiation. The second part, sometimes called the second fundamental theorem of calculus, allows one to compute the definite integral of a function by using any one of its infinitely many antiderivatives. This part of the theorem has invaluable practical applications, because it markedly simplifies the computation of definite integrals. (Wikipedia) Think of $\int_a^b f(x)dx$. That’s the area under the curve from a to b. Now think of $\int_a^c f(x)dx$. That’s the area under the curve from a to c. Now, if you want to find the area under the curve from b to c, you can subtract, $\int_a^b f(x)dx - \int_a^c f(x)dx$. Well, this is a key part of the Fundamental Theorem of Calculus. Say y=f(x). Say the antiderivative of that is z=F(x). The Fundamental Theorem of Calculus says that dz/dx(F(x))=y=f(x). Another way of writing this is lim h->0 (F(x+h) -F(x))/h = f(x), as shown on p. 375 of the first edition of Mathematics for Economists (Google Book Search digitized version). Now, y at an x input value is the height of the curve at that x value. Think of the Riemann Sums explanation of integration; if you made the rectangle skinny enough in the x dimension, if the difference in the x values is infinitesimally small, dx, then you basically get one height, one y value. This is how integration and derivatives fit together. Take a look at the graph at the top of the post again. Basically, if you have $\int_a^b f(x)dx$ and $\int_a^c f(x)dx$, the difference between the areas will be $\int_a^b f(x)dx - \int_a^c f(x)dx$. That leaves you with the area under the curve between b and c. Say c-b=h, and make h infinitesimally small, like dx. h is then dx, your infinitesimally small change in x. Think of the definition of derivatives via difference quotients. Here the area between b and c is z=Area=A(x)=F(c)-F(b)=F(b+h)-F(b), where F depends upon y=f(x); for each difference in x, for each little rectangle, the Area is y * difference in x, and to get the Area z, all of these little rectangles are summed up; look again at the section on Riemann Sums and the definition of integrals. Now, if you took the derivative of this area function, that would be change in the dependent variable, z over change in the indepenedent variable, x, where the change in x is h. So that would be dz/dx= (F(b+h)-F(b))/dx=(F(b+h)-F(b))/h. As h is infinitesimally small, the Area function with output z here looks at one rectangle for F(b+h)-F(b), with one y value and one dx value which is h. Looking at the definition of an integral, the one y value here is y=f(x), and there is one integral definition for this one rectangle, $\int_b^{b+h} f(x)dx$,where b+h=c, so we could also write it as $\int_b^c f(x)dx$. So here dz/dx is equivalent to dy/dx here, which is (F(b+h)-F(b)/h. Now, if the change in x, which here is h, is infinitesimally small, you basically get one y value, and the area is y*dx, where the integral is $\int_b^{b+h} f(x)dx$, the area is basically f(x)dx, here f(b)dx, which equals y*dx. As dx=h, that is y*h. So the deriviative (F(b+h)-F(b)/h would be equivalent to (y*dx)/dx=(y*h)/h=y. So tada, that’s why if F(x) is the antiderivative of f(x), when you take the derivative of F(x) you are left with y, where y=f(x). Yay! This could be slightly off/inaccurate, so check the Wikipedia post Fundamental Theorem of Calculus and pp. 375-377 of the first edition of Mathematics for Economists for more info (unfortunately p. 375 is blocked out on the Google Book Search digitized version). Calculus for Dummies has some good graphs illustrating some of these principles such as on pp. 242 and 247. Here’s the text on the illustration above, in case it’s too small for you to read: As dx is infinitisemally small,f(b)~f(b+h)=f(c). The area under the curve between b and b+h is F(b+h)-F(b) (where F(b+h) is the area under the curve from a to c=b+h, and F(b) is the area from a to b. Since h is so small, the area is basically f(b)*h. Now, the derivative of this integral area function is the change in output, area, which is f(b)*h, over change in independent variable, h. So that equals f(b), which shows the Fundamental Theorem of Calculus! Once you start doing integrals there are lots of techniques like partial fractions etc that will make it easier to solve integrals. Internal tag: math Some latex instructions, including how to make integral signs, here.
# Row Vector – Explanation & Examples A row vector is a matrix with 1 row. It is also known as a row matrix. Let’s start with a formal definition of a row vector. A row vector is a $1 \times n$ matrix consisting of a single row with n elements. In this article, we will look at what a row vector is,  some examples, and matrix operations with row vectors. ## What is a Row Vector? A row vector, also known as a row matrix, is a type of matrix with only $1$ row.  There can be $1$ column, $2$ columns, $3$ columns, or $n$ columns. But the number of row is always $1$! Generally, a row vector is: This shows a row vector, $B$, with $1$ row and $n$ columns. The first element of the matrix is $b_1$, the second element is $b_2$, and so on until the last element, $b_n$. Let’s look at some common row vectors below: $1 \times 1$ row vector: $\begin{bmatrix} d \end {bmatrix}$ This is the simplest row vector with $1$ row and $1$ column. The only element in this matrix is $d$. $1 \times 2$ row vector: $\begin{bmatrix} { – 3 } & { 6 } \end {bmatrix}$ This is a $1 \times 2$ matrix. There is $1$ row and $2$ columns. $1 \times 3$ row vector: $\begin{bmatrix} { – 2 } & { – 4 } & { – 2 } \end {bmatrix}$ This is a $1 \times 3$ matrix. There is $1$ row and $3$ columns. Theoretically, there can be as many columns as we want, but there needs to be only $1$ row. This is what makes a matrix a row vector! Transpose of a Row Vector Recall that taking the transpose of a matrix means to interchange the rows with columns. The rows become columns and the columns become rows. What happens when we take the transpose of a row vector? Since there is only $1$ row, transposing a row vector makes it a column vector! Suppose we have a row vector $A$: $A = \begin{bmatrix} { – 21 } & { – 15 } & 6 & 2 \end {bmatrix}$ If we transpose it, we will get a column vector, shown below (let’s call it matrix $B$): $B = \begin{bmatrix} { – 21 } \\ { – 15 } \\ 6 \\ 2 \end {bmatrix}$ ## How to find a Row Vector Just like with matrices, we can perform the arithmetic operations on row vectors as well. We will look at addition, subtraction, and scalar multiplication. Before adding $2$ row vectors, we have to check whether they are of the same dimensions or not. If they aren’t, we can’t add them. If they are of the same order, we just add the corresponding elements of each row vector. Consider matrices $A$ and $B$ shown below: $A = \begin{bmatrix} { – 4 } & 4 & { – 2 } \end {bmatrix}$ $B = \begin{bmatrix} { – 2 } & 4 & 6 \end {bmatrix}$ Matrices $A$ and $B$ are both $1 \times 3$ matrices. We add the two row vectors by adding the corresponding entries. Shown below: $A + B = \begin{bmatrix} { (- 4 + – 2) } & { (4 + 4) } & { (-2 + 6) } \end {bmatrix}$ $A + B = \begin{bmatrix} {-6 } & { 8 } & { 4 } \end {bmatrix}$ Subtraction Consider matrices $N$ and $M$ shown below: $N = \begin{bmatrix} { – 1 } & 1 \end {bmatrix}$ $M = \begin{bmatrix} { 3 } & { – 10 } \end {bmatrix}$ Matrices $N$ and $M$ are both $1 \times 2$ matrices. We subtract the two row vectors by subtracting the corresponding entries in each row matrix. Shown below: $N – M = \begin{bmatrix} { (- 1 – 3) } & { 1 – ( – 10 ) } \end {bmatrix}$ $N – M = \begin{bmatrix} { – 4 } & { 11 } \end {bmatrix}$ Scalar Multiplication When we want to multiply a row vector by a scalar, we simply multiply each element of the row matrix by the scalar. Consider matrix $B$ shown below: $B = \begin{bmatrix} { – 12 } & { – 1 } & { 4 } & 8 \end {bmatrix}$ If we want to multiply this column matrix by the scalar $\frac{1}{12}$, we will do so by multiplying each of its elements by $\frac{1}{12}$. The process is shown below: $\frac{1}{12} B = \frac{1}{12} \times \begin{bmatrix} { – 12 } & { – 1 } & { 4 } & 8 \end {bmatrix}$ $= \begin{bmatrix} {(\frac{1}{12} \times – 12) } & {(\frac{1}{12} \times -1)} & {(\frac{1}{12} \times 4)} & {(\frac{1}{12} \times 8)} \end {bmatrix}$ $= \begin{bmatrix} {-1 } & {-\frac{1}{12}} & { \frac{1}{3} } & {\frac{2}{3}} \end {bmatrix}$ Below, we show some examples to enhance our understanding of row vectors. #### Example 1 Out of the $4$ matrices shown below, identify which of them are row vectors. $A = \begin{bmatrix} { 4 } & 6 & { -3 } \end {bmatrix}$ $B = \begin{bmatrix} { 1 } & 1 & { 1 } & 1 \end {bmatrix}$ $C = \begin{bmatrix} { -1 } \\ { – 6 } \end {bmatrix}$ $D = \begin{bmatrix} { 14 } \end {bmatrix}$ Solution • Matrix $A$ is a $1 \times 3$ matrix. It has $1$ row and $3$ columns. Thus, it is a row vector. • Matrix $B$ is s $1 \times 4$ matrix. It has $1$ row and $4$ columns. All the entries are ones, but that doesn’t really matter. Since it has a single row, it is a row vector. • Matrix $C$  is a $2 \times 1$ matrix. It has $2$ rows and $1$ column. It is not a row vector, rather a column vector. • Matrix $D$ is a $1 \times 1$ matrix. It is the simplest form of a matrix. It has $1$ row and $1$ column. It is the simplest row matrix. It is a row vector. #### Example 2 What is the transpose of the following row vector? $\begin{bmatrix} f & g & h & i & j \end {bmatrix}$ Solution Recall that the transpose of a row vector is a column vector. We just write the same entries as a “column” instead of a row. Thus, the transpose is: $\begin{bmatrix} f \\ g \\ h \\ i \\ j \end {bmatrix}$ #### Example 3 Subtract matrix $G$ from matrix $F$. $F = \begin{bmatrix} 1 & 0 & 3 & 7 \end {bmatrix}$ $G = \begin{bmatrix} -2 & 1 & 0 & -1 \end {bmatrix}$ Solution Matrices $F$ and $G$ are both $1 \times 4$ matrix. They have the same dimension. Thus, they can be subtracted by subtracting the corresponding elements of each matrix. The process is shown below: $F – G = \begin{bmatrix} (1 – – 2) & (0 – 1) & (3 – 0) & (7 – – 1) \end {bmatrix}$ $F – G = \begin{bmatrix} 3 & { – 1 } & 3 & 8 \end {bmatrix}$ ### Practice Questions 1. Find the transpose of: 1. $\begin{pmatrix} t & b & x & e \end {pmatrix}$ 2. $\begin{pmatrix} { – 13 } \end{pmatrix}$ 2. Perform the indicated operation for the matrices shown below: $P = \begin{pmatrix} { -1 } & { 0 } & { – 1 } & { 0 } \end {pmatrix}$ $Q = \begin{pmatrix} 10 & { 20 } & {- 30 } \end {pmatrix}$ $R = \begin{pmatrix} { – 7 } & { 1 } & { 0 } \end {pmatrix}$ $S = \begin{pmatrix} { – 2 } & { 3 } & { – 2 } & 16 \end {pmatrix}$ 1. $– \frac{1}{10} Q$ 2. $P + S$ 3. $Q – P$ 1. To find the transpose of a row vector, write the row as a column, that’s it! 1. $\begin{pmatrix} t \\ b \\ x \\ e \end {pmatrix}$ 2. $\begin{pmatrix} { – 13 } \end{pmatrix}$ Note, the transpose of a $1 \times 1$ matrix is the matrix itself! 2. Part (a) is scalar multiplication. We multiply each entry of matrix $Q$ by the scalar, ${ – \frac{1}{10} }$. Part (b) is addition. Both matrix $P$ and $S$ are $1 \times 4$ matrices. Thus, addition can be performed. Part (c) is subtraction. Since the dimension of matrix $Q$ is not the same as matrix $P$, we can’t perform the subtraction. All of the answers are shown below: 1. $– \frac{1}{10} Q = { – \frac{1}{10} } \times \begin{pmatrix} 10 & { 20 } & {- 30 } \end {pmatrix}$ $= \begin{pmatrix} (- \frac{1}{10} \times 10) & (- \frac{1}{10} \times 20) & (- \frac{1}{10} \times {-30}) \end {pmatrix}$ $= \begin{pmatrix} { – 1 } & { – 2 } & 3 \end{pmatrix}$ 2. $P + S = \begin{pmatrix} { -1 } & { 0 } & { – 1 } & { 0 } \end {pmatrix} + \begin{pmatrix} { – 2 } & { 3 } & { – 2 } & 16 \end {pmatrix}$ $P + S = \begin{pmatrix} { (-1+-2) } & { (0 + 3) } & { (-1 + -2) } & (0 + 16) \end {pmatrix}$ $P + S = \begin{pmatrix} { – 3 } & 3 & { – 3 } & { 16 } \end{pmatrix}$ 3. Subtraction not possible due to matrix $Q$ and matrix $P$ having different dimensions.
# In the following question, out of the four alternatives, select the alternative which will improve the underlined part of the sentence. In case no improvement is needed, select “No improvement”.Few people would turn down the chance to receive a free exercise program. 1. put out 2. put up with 3. take over 4. No improvement Option 4 : No improvement Free Cell 345924 10 Questions 10 Marks 7 Mins ## Detailed Solution The correct answer is Option 4) i.e. No improvement​​. Key Points • Let's look at the meanings of the given options: • turn down- to refuse or reject. • put out- to extinguish. • put up with- to tolerate • take over- to control of. • According to the context of the given sentence, nobody will let go of a free exercise program. • Hence from the given meanings, we find that turn down is the correct choice here.
Browse Questions # If $\begin{vmatrix} 3x & 7 \\ 2 & 4 \end{vmatrix}$= 10, find x. Toolbox: • The determinant value of a $2\times 2$ matrix is $|A|=a_{11}\times a_{22}-a_{21}\times a_{12}$ $\begin{vmatrix}3x & 7\\2 & 4\end{vmatrix}$=10 Now let us evaluate the value of the determinant $3x\times 4-7\times 2=10.$ 12x-14=10. 12x=10+14 12x=24 x=$\frac{24}{12}=2$ Hence the value of x=2.
## anonymous one year ago Each side of a square loop of wire measures 2.0 cm. A magnetic field of 0.044 T perpendicular to the loop changes to zero in 0.10 s. What average emf is induced in the coil during this change? A. 1.8 V B. 0.088 V C. 0.88 V D. 0.00018 V 1. Michele_Laino the magnetic flux change is: $\Large \Delta \Phi = S \times \Delta B = {\left( {2 \times {{10}^{ - 2}}} \right)^2} \times 0.044 = ...Weber$ 2. Michele_Laino $\large \Delta \Phi = S \times \Delta B = {\left( {2 \times {{10}^{ - 2}}} \right)^2} \times 0.044 = ...Weber$ 3. anonymous 1.76E-5? choice D is the solution? 4. Michele_Laino no, since we have to find the emf 5. Michele_Laino more explanation: 6. anonymous ohh ok! how do we do taht? 7. Michele_Laino before magnetic flux chnaging, the flux of the magnetic field through the square loop is: area*magnetic field=0.02*0.02*0.044 after the magnetic field changing the new magnetic flux through the square loop is: area*magnetic field=0.02*0.02*0=0 |dw:1434436897217:dw| 8. Michele_Laino so the requested emf is: $\Large E = \frac{{\Delta \Phi }}{{\Delta t}} = ...volts$ that is the Faraday-Neumann law 9. anonymous oh ok! what do we plug in? :/ 10. rvc emf= flux/area 11. Michele_Laino no, emf= flux/time 12. rvc oh ye 13. Michele_Laino 14. anonymous what are we plugging in? :/ 15. rvc yes faradays law applies here :) 16. rvc i just messed the equation :) 17. Michele_Laino next step, is: $\Large E = \frac{{\Delta \Phi }}{{\Delta t}} = \frac{{0.176 \times {{10}^{ - 4}}}}{{0.1}} = ...volts$ 18. Michele_Laino no worries!! :) @rvc 19. anonymous 1.76 E-4? so the solution is choice D? 20. Michele_Laino that's right! 21. anonymous yay! thanks!:) 22. rvc you are the BEST @Michele_Laino 23. Michele_Laino :) @rvc
# Wikipedia talk:WikiProject Rock music/Archive 1 Archive 1 | Archive 2 Participants, sign your name on the participants section, on the main page (Gin & Milk 16:36, 21 June 2006 (UTC)) You will have to tell us what information about hard rock do you want to have? Bands, or songs... I think that hard rock bands are very popular and you have all the informations about them... --Aeternus 19:29, 27 June 2006 (UTC) ## Whait a sec Rush isn't a hard rock band, i'ts a prog rock band. This project is about to help the hard rock bands, is that righ? --Aeternus 15:22, 14 July 2006 (UTC) Just a thought... Did permission to land come out before or after the christmas single? I thought it was the other way round...maybe someone's got their sourcing wrong. Should probably ask that on the Rush talk page. -- Reaper X 17:42, 8 November 2006 (UTC) ## Lollipop Lust Kill up for deletion Thought it would be fair to warn this WikiProject that the article for Lollipop Lust Kill is up for deletion as a non-notable band. The nominator and at least one commenting editor haven't heard of the group. I have and voiced my opinion in the AfD, but I couldn't find much online to back up the band's notability. If you think the article should stay, I encourage any LLK / general hard rock fans to voice your opinion in the AfD discussion. Your opinion will be especially noticed if you can provide any verifiable information that the band meets WP:MUSIC notability criteria. -- H·G (words/works) 23:06, 12 August 2006 (UTC) ## Revamp and Expansion I'd like to try and make this Wikiproject more efficient. I consider WikiProject Alternative Music very good for a model. For a start, I am adding a talk page header, new userbox and the WikiProject its own category. I'm sorta concerned about just focusing on hard rock though. I know there will be debates on what bands fall under the scope of hard rock. So why don't we expand this project to rock music in general? I mean, there is no Wikiproject for rock in general, only alternative and metal. It would be massive, but I'm sure we would attract many participants. What do you say?! -- Reaper X 00:54, 13 October 2006 (UTC) Why exactly are you changing the wiki project can someone please explain it to me. — Preceding unsigned comment added by Metal Maiden (talkcontribs) So it can cover a broader range of music articles (rock music in general), and bring attention to them and hopefully attract many participants who will dedicate themselves to improving them. -- Reaper X 17:45, 20 October 2006 (UTC) ## Project Directory Hello. The WikiProject Council is currently in the process of developing a master directory of the existing WikiProjects to replace and update the existing Wikipedia:WikiProject Council/Directory. These WikiProjects are of vital importance in helping wikipedia achieve its goal of becoming truly encyclopedic. Please review the following pages: and make any changes to the entries for your project that you see fit. There is also a directory of portals, at User:B2T2/Portal, listing all the existing portals. Feel free to add any of them to the portals or comments section of your entries in the directory. The three columns regarding assessment, peer review, and collaboration are included in the directory for both the use of the projects themselves and for that of others. Having such departments will allow a project to more quickly and easily identify its most important articles and its articles in greatest need of improvement. If you have not already done so, please consider whether your project would benefit from having departments which deal in these matters. It is my hope to have the existing directory replaced by the updated and corrected version of the directory above by November 1. Please feel free to make any changes you see fit to the entries for your project before then. If you should have any questions regarding this matter, please do not hesitate to contact me. Thank you. B2T2 21:37, 23 October 2006 (UTC) Sorry if you tried to update it before, and the corrections were gone. I have now put the new draft in the old directory pages, so the links should work better. My apologies for any confusion this may have caused you. B2T2 00:23, 24 October 2006 (UTC) ## Discography This may stir up a hornet's nest but is there any way we could create a Standard Discography template that would give people a starting point & impose some consistancy? Megamanic 04:45, 10 November 2006 (UTC) There is a general format outlined by the guys in WikiProject MUSTARD for discographies, and they have used the Oasis discography as a model. Anyone making a discography should follow this style. -- Reaper X 22:01, 10 November 2006 (UTC) Thanks for that. The Oasis one is a good model - the information needs to be more widely disseminated though - I'm active in the "Rush" Discography page & none of us had a clue there was any particular model to follow Megamanic 01:46, 13 November 2006 (UTC) ## New York Sound What exactly is the so called "New York Sound" and why doesn't wiki have anything on it?--Deglr6328 04:38, 27 November 2006 (UTC) ## Proposed The Rolling Stones project I've just created a proposed project on The Rolling Stones at User:Robertjohnsonrj/WikiProject The Rolling Stones. Please feel free to add your name to either the project page or to the list of interested wikipedians on the project's section of the Wikipedia:WikiProject/List of proposed projects#The Rolling Stones. Thank you. robertjohnsonrj 22:34, 3 December 2006 (UTC) ## Distinguishing genres and styles, and ending edit wars As part of my cusade against the "genre edit wars" that plague many band articles, I have made the following proposal at Wikipedia talk:WikiProject Musicians#Genre wars and the distinguishing of genres and styles. I would appreciate feedback on this proposal. I am going to push hard for this proposal to be put into action, and I appreciate any supporters in helping me do so. Thank you. -- Reaper X 01:18, 11 December 2006 (UTC) ## Wikipedia Day Awards Hello, all. It was initially my hope to try to have this done as part of Esperanza's proposal for an appreciation week to end on Wikipedia Day, January 15. However, several people have once again proposed the entirety of Esperanza for deletion, so that might not work. It was the intention of the Appreciation Week proposal to set aside a given time when the various individuals who have made significant, valuable contributions to the encyclopedia would be recognized and honored. I believe that, with some effort, this could still be done. My proposal is to, with luck, try to organize the various WikiProjects and other entities of wikipedia to take part in a larger celebrartion of its contributors to take place in January, probably beginning January 15, 2007. I have created yet another new subpage for myself (a weakness of mine, I'm afraid) at User talk:Badbilltucker/Appreciation Week where I would greatly appreciate any indications from the members of this project as to whether and how they might be willing and/or able to assist in recognizing the contributions of our editors. Thank you for your attention. Badbilltucker 17:59, 29 December 2006 (UTC) ## Tenacious D album tracks A bunch of album tracks from Tenacious D's first album have been nominated for deletion. I am not getting much support for keeping them up. I do think they are notable, and feel the nominator is doing this out of resentment for the D. These are the tracks: Tenacious D Fans 12:27, 4 January 2007 (UTC) ## U2 This is just a suggestion, but I think the Rock Wikiproject should take over the mandate of U2 articles from the Alternative music Wikiproject. We originally brought the group under our scope because A). Some people label them alt-rock on occasion (they're more accurately a post-punk band that became arena stars), but more importantly, B). There was no overall rock Wikiproject at the time and the Alt-rock one was the closest thing, and C). Their importance is on a level releating to rock music as a whole. Many of the articles are pretty far along (two of them are GA and one might become a Featured Article soon) and lots of sources are available on this band. WesleyDodds 08:38, 5 January 2007 (UTC) ## Good article reviews I've nominated Van Halen, Black Flag (band) and Queen (band) for delisting of their GA status. Both articles are rife with problems that are listed on the WP:GA/R page. Teemu08 20:51, 19 January 2007 (UTC) ## Marilyn Manson Marilyn Manson (band) has been nominated for a featured article review. Articles are typically reviewed for two weeks. Please leave your comments and help us to return the article to featured quality. If concerns are not addressed during the review period, articles are moved onto the Featured Article Removal Candidates list for a further period, where editors may declare "Keep" or "Remove" the article from featured status. The instructions for the review process are here. Reviewers' concerns are here. Jeffpw 21:55, 20 January 2007 (UTC) ## ZZ Top's page has been defiled. I wanted some information on ZZ Top, but the page just has some random swear words with absolutely no information on the band. 24.93.1.61 16:26, 2 February 2007 (UTC)Rodger This was vandalism which has since been corrected. -- Reaper X 05:22, 3 February 2007 (UTC) ## blink-182 Pop Punk vs Punk Rock vote I have started a vote on the blink-182 article regarding the long term edit war over the bands genre of Pop Punk or Punk Rock. votes can be cast here. cheers --Dan027 07:23, 18 February 2007 (UTC) ## Retro Metal I suppose a lot of members of the project are aware of the recent music news in regards to heavy metal. The recent rise in a new style, trademarked by psychedelic tendencies and a likeness to the doom metal, psych folk and hard rock genres, known as "retro metal", "heritage metal" "retro" or the more recent-to-wikipedia; "hipster metal". Most of the bands in this new style (such as [[Wolfmother, The Sword, Witchcraft (band), High on Fire, Dungen, Pelican, The Answer, Kemado Records bands, etc.) have seen a significant boom in popularity over the last two years. In general, the retro style has gained more than cult following (which in the past has pacified neo-psych, doom, etc.), but a mainstream following as well. Far different from both dominating forms of rock since the nineties, alternative pop/rock and hardcore/metalcore, retro metal seems to be the rising successor to dominant rock music in the coming decade. To this theory, many notable media outlets in the worldwide music community have been recognizing this as the "2000's Retro Metal movement" (as noted here, for example; [1], [2], [3], [4], [5], [6], [7], [8], [9], etc.). I have received criticism for exhibiting the reality that retro metal exists, mostly from within the Project Metal community, whom I would expect to be the most supportive in the expansion of such information. In fact, retro is more of a broader movement to begin with, as noted by the All Music Guide and Rolling Stone late last year. Despite some unhappy Wikipedians unable to accept the fact that retro metal is the real deal, it is in fact a power player in today's modern music. I would like to ask for everybody's thoughts on this, and moreover, I would like to request support in helping resurrect the recognition of retro metal within Wikipedia. Editor19841 (talk) 21:51, 18 February 2007 (UTC) ## Category attached to userboxes I created a category that goes along with the userbox for identifying members to collaborate with on articles. I hope this helps in future endeavors. Category:WikiProject Rock music members Darthgriz98 21:08, 19 February 2007 (UTC) ## genre examples As a user who has an interest (but very little knowledge or training) in music, I'd like to see more examples that help illustrate the distinctives of various rock genres. A technical description is fine for those that know music, but for those of us that don't, examples are the best way to make it clear. ⇔ ChristTrekker 21:42, 19 February 2007 (UTC) ## Dream Theater How is this article considered only "Start" class for the Biography and Rock Music projects? Surely a former Featured Article (with very few changes since its promotion to FA, despite the fact that it was demoted) is good enough for at least B Class? plattopustalk 15:40, 28 February 2007 (UTC) You bet your ass it's more than a Start. Thanks for bringing it up. -- Reaper X 17:56, 28 February 2007 (UTC) Oops, don't know why I rated that Start class. Definitely B class. - miketm - 06:18, 1 March 2007 (UTC) ## A note Just to let all project members know, if you have any questions, please ask either user Reaper X or myself. Thanks. DavidJJJ 11:35, 4 March 2007 (UTC) ## List of Japanese rock bands (moved from main page) I had proposed this article for deletion because I felt Category:Japanese rock music groups could serve the same function. However I withdrew my proposal because I saw many red links where a new user could be motivated to create an article just by seeing the name. Comments? -- Reaper X 21:58, 12 March 2007 (UTC) I wasn't sure whether to respond here or on that article's talk page, so just move this comment if I'm not meant to post this here. I don't know much about Japanese rock bands, but I think I'd be safe in assuming if any of those red-linked articles were created, they wouldn't stand a chance at an AfD. Plus, if one of those bands is notable enough to have an article created about them, it would be created eventually anyway. Of course I may be wrong about that, but I'm of your initial opinion that Category:Japanese rock music groups would serve the same purpose. ĤĶ51Łalk 22:16, 12 March 2007 (UTC) ## How do I propose an article's inclusion into the project? Specifically, Severe Tire Damage (band), which still needs some work. -- ${\displaystyle \sim }$ Lenoxus " * " 18:12, 22 March 2007 (UTC) There is no real proposal process. As with Severe Tire Damage (band), they have a strange mix of style from what I saw and heard on their site. I would suggest they be categorized under the Alternative music Wikiproject. I wouldn't know if they fall under their criteria, so I will ask. -- Reaper X 17:01, 24 March 2007 (UTC) Thanks! ${\displaystyle \sim }$ Lenoxus " * " 17:29, 27 March 2007 (UTC) ## Bob Dylan - Proposal I have made a proposal on the page Talk:Bob Dylan to remove Bob Dylan from Category:Converts to Christianity. Please go there to discuss. --Metzenberg 20:53, 13 April 2007 (UTC) ## Unfun Records This article, Unfun Records, is up for AFD. Do we feel like making it a WP:Rock article as it is a article about A underground Rock label? Or should we view it as non notable?--St.daniel talk 17:15, 19 April 2007 (UTC) ## Time to replace Infobox Guitarist? There have been a growing number of Wikipedians questioning the need for a separate infobox for guitarists. The {{Guitarist infobox}} was created by Wikipedia:WikiProject Guitarists, and it easily survived a deletion nomination back in September of last year, but that was before {{Infobox musical artist}} (which is supported by Wikipedia:WikiProject Musicians) became a widely accepted standard. Both infoboxes are currently endorsed by Wikipedia:WikiProject Biography, but recent discussions between some members of the Guitarist and Musician Wikiprojects have concluded that it may be time to deprecate the guitarist infobox, and start replacing it. (Unfortunately, this is not a task for bots, and will have to be done manually.) Before making any final decision on the matter, we would like to get feedback from the broader community, so I am posting this notice to several Wikiprojects which may be affected. Comments should be posted to Template talk:Guitarist infobox. If you have strong feelings about this infobox, one way or the other, please feel free to let us know. Thanks, Xtifr tälk 12:21, 1 June 2007 (UTC) ## Flogging Molly & Albums Does anyone object to adding Flogging Molly to our list. Oh yeah and what's our stance on adding albums?--St.daniel Talk 11:41, 24 June 2007 (UTC) ## Request for a peer review of Tool (band) Hi folks! Some weeks ago, Tool (band) quite easily jumped the GA-border, after it was peer reviewed in April and quite a lot of work had been poured into the article. It becomes harder and harder to say what is still needed to do for the editors who have worked a lot on the article. I for one have mostly been working on adding details recently, which is why I'd like to ask for a peer review by editors who are familiar with either Tool/rock music in the 90s/etc... Of course, any comments/criticism on the formal aspects of the article are welcome as well, but criticism regarding prose/content would probable be more helpful at this point. Of course, I'd like to get the article to FA, so don't be shy about using high standards and criticizing wherever it seems appropriate. If you decide to begin a peer review, please leave your comments on Talk:Tool (band). I'd be gladly returning the favor, if need be. Best wishes and many thanks. Johnnyw talk 14:00, 25 June 2007 (UTC) After seeing a request for peer review for Pearl Jam without any detailed comment for almost a week now, some thoughts came to my mind: that article should be part of this project, right? And if so, we should at least help the fellow editor (as much as time permits, and if it's only a quick glance and a short comment). Instead of finger-pointing, I tried my best to review it. But, when joining this project, I hoped it would give a boost to the articles it's intended to look after, but progress seems kinda slow currently. Isn't anyone working on any band articles right now, that understands that you can only get that far without the help of others? Maybe I am just missing something, forgive me this rant, if I am. If not, I hope some people who read this start to dedicate a couple of minutes every other visit to WP to this project. Rock on, Johnnyw talk 12:41, 1 July 2007 (UTC) ## Rock and Roll Hall of Fame Hi, I was wondering if I could get some comments here because one user seems to think that the page should be devoted to listing why certain bands are not inducted. So, he added a section called "Rush Controversy", I removed the section, even though it was sourced, because I feel that the criticism section should be for criticism of the hall in general and not why ____ isn't in yet. If anyone agrees, or feels differently, please feel free to comment on the talk page. -- Scorpion0422 15:56, 30 June 2007 (UTC) ## How to improve the project Hi folks, I really appreciate that there is a rock music project, but I think it lacks inertia and some other things, for example, when compared with the Wikipedia:WikiProject Alternative music (WPAm). • I'd like to rework our project page a bit, after the WPAm design, since it really gives users a good overview, seems better organized and so forth. • Why not include the "importance" criteria to the WPRock template? Seems useful to have categories to prioritize our efforts, right? • If we do this, we'd probably be smart to add the parameter before assessing new articles.. • I'd also like to do the following, so that the project can actually start working as a collaborate effort: • a new member invitation drive • then, a find and add more articles in scope of the WikiProject drive ;) • then, an assessment drive • In order to do this, why not establish a couple of teams, who concentrate on one of these efforts? (expand ProjectPage & ProjectTemplate, new member invitations) Any other suggestions? Any opinion on this? Hope to hear from you! Johnnyw talk 12:56, 1 July 2007 (UTC) Another suggestion would be to collaborate with other projects (such as Alternative music) on some articles, since there is obviously some common ground. This would probably have to wait until this project is at full speed, though. --Johnnyw talk 13:03, 1 July 2007 (UTC) ### WPRock template I took the liberty of adding an "importance" parameter to the template, similar to the ones we see in other project. To assess the articles, we'd probably need a guideline with examples. I'd copied the one from Wikiproject Alternative to the Assessment page. I'll start a discussion below about the examples we use as points of references for other articles. Johnnyw talk 14:29, 2 July 2007 (UTC) ## Importance scale Hi folks, assuming that the inclusion of the importance-scale is accepted, I'd like to start a discussion which examples we use as points of reference. The scale values are: Top, High, Mid, Low Below I'll propose a list (please don't stone me, this is what came to my mind after giving it a minute's though ;) please feel free to edit/comment etc. Greetings! Johnnyw talk 14:29, 2 July 2007 (UTC) ### Proposal Top importance "Key" articles, considered indispensable. • Highest-level articles strongly related to rock music • Quasi-legendary rock artists • Miscellaneous: epochal events High High-priority topics and needed subtopics of "key" articles, often with a broad scope; needed to complement any general understanding of the field. • High level sub-genres • Rock artists with major impact — on entire rock genres or popular culture in general • Seminal works of top/high importance artists, with few exceptions • Miscellaneous, such as Mid Mid-priority articles on more specialised (sub-)topics; possibly more detailed coverage of topics summarised in "key" articles, and as such their omission would not significantly impair general understanding. • Most other major sub-genres • Major rock artists — tour/festival headliners, several important albums, with impact on newer artists • Seminal works of mid importance artists • Miscellaneous, such as the best known festivals, .. Low While still notable, these are highly-specialised or even obscure, not essential for understanding the wider picture ("nice to have" articles). • Niche sub-genres • Rock artists excluded by the criteria above • Works of rock artists excluded by the criteria above • Miscellaneous, such as most festivals, rock music movies • What to do with music instrument articles? (in scope of the project?) • Rock music encompasses heavy metal, alternative, etc. for which there already are successful WikiProjects, collaboration would be useful and necessary ## Guns N' Roses importance level Guns N' Roses currently does not have the level of importance marked. I'm not too sure where it fits on the scale, so some guidance from more experienced users in the area would be much appreciated, and all discussion welcome at Talk:Guns N' Roses. Kind regards, Sebi [talk] 23:35, 18 July 2007 (UTC) A warm welcome Sebi! I'd personally set it to mid-importance (although high could be justified as well imho), as you can see in my proposal above. I'd also welcome you warmly to comment on that if you wish, since we need to establish a consensus there... feedback has been a little slow, sadly... Johnnyw talk 23:54, 18 July 2007 (UTC) ## Tool at FAC Hi folks.. I will be submitting Tool to FAC tonight or tomorrow. It would be awesome if some of you could help me out getting this article to FA status. Since I will be moving from Barcelona back to Berlin on Wednesday, I will most probably be very grateful for any spare minute you could spend on addressing any of the objections that the article will encounter. Thanks in advance, and best wishes, Johnnyw talk 19:05, 19 July 2007 (UTC) I nominated the article for Featured Article status. Please feel free to comment/review/or help out at Wikipedia:Featured article candidates/Tool (band) . Johnnyw talk 21:26, 19 July 2007 (UTC) ## Pearl Jam FYI, I've submitted Pearl Jam as a FAC. Feel free to comment, support or object. CloudNine 12:44, 4 August 2007 (UTC) ## Guns N' Roses The new Guns N' Roses WikiProject is up and running for anyone interested in becoming a participant. The project also uses this project's assessment and importance scales. –sebi 01:59, 7 August 2007 (UTC) ## Expert review: Jupiter Sunrise As part of the Notability wikiproject, I am trying to sort out whether Jupiter Sunrise is notable enough for an own article. I would appreciate an expert opinion. For details, see the article's talk page. If you can spare some time, please add your comments there. Thanks! --B. Wolterding 11:19, 12 August 2007 (UTC) ## Ray Davies If someone is looking for a project (I'm not right now, or at least not this one) you might take on the Ray Davies article. At the moment, it is a lightweight trivia-filled article on someone who ought to be viewed reasonably seriously as a creative figure. I particularly recommend the Robert Polito paper I added to the article's references; I cited the Polito paper repeatedly in our article on Davies' "unauthorized autobiography" X-Ray; it covers a lot of other ground about Davies as a writer and (to a lesser extent) as a musician. - Jmabel | Talk 06:48, 15 August 2007 (UTC) ## FAR AC/DC has been nominated for a featured article review. Articles are typically reviewed for two weeks. Please leave your comments and help us to return the article to featured quality. If concerns are not addressed during the review period, articles are moved onto the Featured Article Removal Candidates list for a further period, where editors may declare "Keep" or "Remove" the article from featured status. The instructions for the review process are here. Reviewers' concerns are here. Videmus Omnia Talk 03:43, 20 August 2007 (UTC) ## Wikiality Just so we know the importance and impact of what we're doing here, let's do our best to be factual and back up what we type with appropriate citations. Unfortunately, this was not done for the Instrument destruction page, which included the erroneous information that the Yardbirds and Animals were smashing guitars before The Who. Go to the Charles Mingus talk page to see part of where this started. Wikipedia is becoming a source for other publications. The misinformation on Wikipedia's Instrument destruction article resulted in the following caption at Parade.com: "Pete Townshend of The Who destroyed both guitar and amp during a March 1967 concert in Leicester, England. Although the band was one of the first to make instrument destruction a regular part of their show, they weren’t the first to do it. The Animals and The Yardbirds also smashed guitars at concerts."[10] The source for that caption is undoubtedly Wikipedia and it's wrong. It's been well-documented in numerous publications that Pete Townshend first smashed a guitar in September of 1964 at the Railway Hotel in Harrow and Wealdstone. It's listed as one of Rolling Stone magazine's "50 Moments that Changed the History of Rock and Roll." Jeff Beck, the only Yardbird to ever smash a guitar, wasn't even a member of the Yardbirds at that time. Indeed, the only reason he smashed a guitar in the first place was for the film Blowup when he was directed to emulate The Who's stage act. The Animals aren't known for guitar smashing at all. A false statement on the Charles Mingus page (subsequently picked up by the UK's timesonline) gravitated to the Instrument destruction page and ended up being parroted in a Parade.com article on guitar smashing. This is what's known as wikiality. Let's do our best to document rock history in these pages. Not change it. Thanks for reading. Clashwho 19:47, 2 September 2007 (UTC) Bah! Next, I suppose you're going to tell me that the elephant population hasn't tripled in the last six months! :) Seriously, good job in tracking down the facts here, but I suspect you're preaching to the choir. It's the people who haven't gotten far enough into Wikipedia to have found Wikiprojects who really need to be reminded of this, but that, of course, is much harder to do. Still, can't hurt to post here, since you're absolutely correct. And a good reminder for others to keep their eyes open for similar errors. Cheers, Xtifr tälk 01:09, 10 September 2007 (UTC) Thanks for the reply. Keep fighting the good fight and cheers to you, too! Clashwho 20:59, 10 September 2007 (UTC) Nice find Clashwho! It really does show the impact Wikipedia has, and the potential power we possess (for good and for bad). It also reminds us of responsibility to always check the facts and back them up. Something it seems Parade also needs to be reminded of. Which I gladly did just that in a nice little letter I wrote them. Most people reading that would have believed it (like a fellow Wikipedian looking for sources to cite). Anyway, I see you joined the new Wikiproject for The Who also. I'm very glad you're part of it. - Rocket000 04:52, 13 September 2007 (UTC) ## Wikipedia:WikiProject The Who Hey, I just founded this Wikiproject a few days ago but I have no idea how to recruit for it, do templates, etc. If anyone from this project, which I cited as the parent project, wants to help out we could really use it! -MichiganCharms 02:24, 4 September 2007 (UTC) ## ==Joining== Hey, I'm interested in joining this project. Can someone tell me how? Tim Y (talk) 23:59, 9 September 2007 (UTC) Basically, you just add your name to the members list at the project page, and that's it. Welcome! ;) Then, take a look at the todo-list and choose whatever seems most necessary to you. We haven't founded any working groups yet, so more article-specific tasks could be checking the Featured Article candidacies we have running currently and address the constructive criticism we get there. For example, at Wikipedia:Featured article candidates/King Crimson there is an open comment.. Johnnyw talk 10:19, 11 September 2007 (UTC) ## Lightning Bolt discography at FLC An article under the scope of this project, Lightning Bolt discography, is currently a Featured List candidate. Please take the time to review the article and comment on the article's nom page. Thanks! Drewcifer 06:57, 19 September 2007 (UTC) ## Winger I'm unconvinced that most people searching for "Winger" mean the band, and the "what links here" for that page seems to agree with me. The hatnotes are good, but it seems to contravene guideline. Please contribute to the discussion at Talk:Winger. --Dweller 10:51, 31 October 2007 (UTC) ## Gwen Stefani I think she should also be covered under the wiki project because:- • She was the lead singer of No Doubt, which is a part of this wikiproject. Her article is FA. The rest is on the decision of the active members of the project. Indianescence 11:03, 11 November 2007 (UTC) Nevertheless, I wouldn't categorize her as a rock artist. This would be kind a far stretch imho and watering down this projects' goals. Johnnyw talk 15:01, 11 November 2007 (UTC) In my opinion, she was in a rock band but then left the band to pursue her solo career which wasn't rock... unless we can find some reliable source stating that Stefani's style is purely rock, we shouldn't cover her under the scope. If she remained in the band, yeah, I suppose it would be okay. Nevertheless, covering her under the scope for the sake of another FA under the project's belt is inappropriate. Spebi 20:14, 11 November 2007 (UTC) Hang on, according to the article for "What You Waiting For?", that track is dance-pop, and we shouldn't have that song under this project but yet the article say nothing about the song's genre being rock. Spebi 20:16, 11 November 2007 (UTC) Let me make this clear, SHE DID'NT LEAVE THE BAND. She is still a part of the band. The band is still there and are wroking on their next album. I can provide many reliable sources for that. Some of them are in the No Doubt article as well. It is true that she went solo for two albums, but she is very much in the band. I hope that makes the matter clear. Indianescence 05:46, 12 November 2007 (UTC) I'm sorry, I thought she just left the band to do her own stuff. I was wrong. If they do release another album and Stefani appears on it then I'm sure she could be covered by the scope of this project, along with the other projects she is currently under (e.g. pop music, etc). Spebi 06:16, 12 November 2007 (UTC) So this means all the 5 albums which were relased before by No Doubt WITH Stefani on it were waste? They have no meaning? We have to wait for another album to be released for her inclusion in the wikiproject? This is weird! Indianescence 15:57, 12 November 2007 (UTC) And yes, not only Stefani, all the other members of the band are not covered under the project! Indianescence 16:05, 12 November 2007 (UTC) ## If anyone wants to bump an article up to good article status... The Ramones article just needs to have it's references properly formatted, and it should pass the GA review.[11] Hoponpop69 23:53, 14 November 2007 (UTC) ## Possible Australian rock task force? There is now a proposal at Wikipedia:WikiProject Council/Proposals#Australian rock music for a group to deal specifically with Australian rock music which has gotten five members, which is generally thought enough for a task force. Would this project be willing to take on such a subproject? John Carter (talk) 18:19, 31 December 2007 (UTC) ## Crush 40 Assessment Greetings, Crush 40 is now in WikiRock Project's list of rock bands which are in the project. The article is unassessed according to WikiRock, and I would appreciate it's assessment. Thanks, User:Radman622 22:04, 14 January 2008 (UTC) ## Capitol Offense (band) DYK on the mainpage now; The Whigs has been reworked. An article I created (still unassessed, BTW) for Capitol Offense (band) is now featured in the DYK on the mainpage. I didn't know where else to let the community know about this. Also, I've been reworking the article on The Whigs, and would like to have some other, fresher eyes on the article, if possible. Regards, -- Bellwether BC 00:10, 15 January 2008 (UTC) ## References A general question, triggered by a particular action. An IP, Special:Contributions/82.11.63.20, has added a reference to a 1975 book to some 20 articles in a row, without changing the article otherwise. When looking at these articles, I noticed that at least some of them have reference sections referring to pretty general books (not about the artist or song specifically, but general rock encyclopedias). See e.g. John Cale, which has one book about John Cale, but also one about Van Morrison and now this general rock book. Also Can (band), which has three general books in the references section. What is the standard practice (if any) wrt references (not footnotes, but general references) in rock articles: do you include any which have some info on the artist involved (i.e. a possible list of dozens of books per article), or do you only include references which are directly an at great length about the artist? I didn't want to blindly revert the IP's additions, but they seem to me to be well intended but misguided. Fram (talk) 08:47, 24 January 2008 (UTC) ## Concert Ten Hello. A rock music event stub titled Concert Ten needs your assistance. I cannot find any RS for its importance, and was wondering if anyone had any suggestions as to what to do with the article. Thank you. —Viriditas | Talk 04:11, 29 January 2008 (UTC) Please see Wikipedia:Articles for deletion/Concert Ten. Thank you. —Viriditas | Talk 05:26, 29 January 2008 (UTC) ## Bubbling Under Hot 100 Chart There seems to be some controversy (at least where I edit) over this chart listing. The Sum 41 discography has listed under the Billboard Hot 100 chart some positions that are above #100; An example would be #117 or something to that effect. Although Bubbling Under Hot 100 does show songs that would be #117 or whatever position on the Hot 100 chart, it is a completely different chart. It shows songs that haven't quite made it onto the Hot 100. User:Icelandic Hurricane has reverted my edit, which was removing these positions from under the Hot 100 chart because they are not on the chart. The only reason given is "omg! it's the equivalent. look at everyone else's discography page" Since these two charts are not the same, it would not make sense to put chart positions of songs on Bubbling Under Hot 100 in the regular Hot 100 spot. It's common sense that there should not be a #117 under a chart that's called "Hot 100. That's my opinion anyway, and another editor agreed with me a while back when some ips were adding these positions. Anyone have any thoughts on this? Timmeh! 23:12, 2 February 2008 (UTC) ## Wikiality We have a huge problem. The problem is this worldwide sales disease infecting the articles of rock artists. Even sources that meet WP:RS criteria are dated AFTER they were first posted on Wikipedia. Led Zeppelin and AC/DC are just two examples. Those bands' respective Wiki articles claimed 300 million and 150 million albums sold long before more "reputable" sources published those figures as "fact". Now those "reputable sources" are used as citations on Wikipedia to back up figures that were started on Wikipedia in the first place. It's disgusting. Lazy journalists have been using Wikipedia as a source and now we have to swallow these figures because they're subsequently published in "reputable" sources? This is wikiality and it makes me sick. Please see the Talk Pages of Led Zeppelin and AC/DC to see what I'm dealing with. And it's far from just them. This disease has infected the articles for Pink Floyd, The Who, Deep Purple, Black Sabbath, Queen, and on and on. Please help me combat it. There is no organization that tracks worldwide sales. All these figures are pure bunk. They do not belong in an encyclopedia. 74.77.222.188 (talk) 04:03, 8 February 2008 (UTC) ## Request: King Harvest hi -just found the King Harvest entry, and the article's a real mess. unfortunately, i don't have the background to improve it. (were they formed in Paris or NYC for example?) so, just wanted to bring it to the attention of your group. maybe you'll see fit to tag it and include it in your project. thanks. and thanks for everything you all do here. J. Van Meter (talk) 14:28, 13 February 2008 (UTC) I'm currently workin' on an article about a pretty popular Canadaian rock band called Soul Bomb. I need your help go to my sandbox and please take a look and leave comments on my talk page or make CONSTRUCTIVE edits only. I'd really like to get this done but I need all ya'lls help. Thanks, --Crash Underride 06:00, 25 February 2008 (UTC) ## How to join? How do I join WikiProject Rock Music? Sphefx (talk) 04:41, 2 April 2008 (UTC) Just add your name to the list. Zazaban (talk) 04:46, 2 April 2008 (UTC) ## Tenacious D The Tenacious D article is currently undergoing a peer review. I need some outside help, as I am the only one editing this at the moment. I think the article can make FA class. Please help by adding to the suggestions on this here. Thanks. Tenacious D Fan (talk) 17:06, 3 April 2008 (UTC) ## Deletion discussion Hey, this Wikiproject needs your help to few users. Founder was Suduser85. --Be Black Hole Sun (talk) 12:25, 28 June 2008 (UTC) ## Changes to the WP:1.0 assessment scheme As you may have heard, we at the Wikipedia 1.0 Editorial Team recently made some changes to the assessment scale, including the addition of a new level. The new description is available at WP:ASSESS. • The new C-Class represents articles that are beyond the basic Start-Class, but which need additional references or cleanup to meet the standards for B-Class. • The criteria for B-Class have been tightened up with the addition of a rubric, and are now more in line with the stricter standards already used at some projects. • A-Class article reviews will now need more than one person, as described here. Each WikiProject should already have a new C-Class category at Category:C-Class_articles. If your project elects not to use the new level, you can simply delete your WikiProject's C-Class category and clarify any amendments on your project's assessment/discussion pages. The bot is already finding and listing C-Class articles. Please leave a message with us if you have any queries regarding the introduction of the revised scheme. This scheme should allow the team to start producing offline selections for your project and the wider community within the next year. Thanks for using the Wikipedia 1.0 scheme! For the 1.0 Editorial Team, §hepBot (Disable) 21:21, 4 July 2008 (UTC) ## Yo what up? I need a hand. I'm a member of the Wikipedia:WikiProject Metal, and I have been workin' on an article for a band that I like that is starting to recieve more airplay (or so I hear) in their home country of Canada. I was wondering if anyone would mind helpin' me expand it and so that I could actually create the article. As of right now, the article is in my sandbox. I'd really appreciate any help I could get from you guys, seein' as no one in the metal project as been of any help. Oh, by the way, the bands name is Soul Bomb, and their from Gatineau-Ottawa, Canada. Thanks, Crash Underride 16:39, 8 July 2008 (UTC) ## A little more help. I created an article for World War III (WWIII) singer Mandy Lion and I would love to get some help to make it better, that is if anyone here knows anything about him that is not already in the article. I hope you can help with this. Thanks, Crash Underride 17:08, 8 July 2008 (UTC) ## See Collaboration of the Week! We have our own Collaboration of the Week! her on wiki rock music. So if you want help nominate the article you think needs attention. --Be Black Hole Sun (talk) 19:32, 10 July 2008 (UTC) I'd like to nominate Soul Bomb, even though I'm not a memeber of this project. Just a reminder to anyone who edits it. It is at MY sandbox right now. I have place a notifier that tells you what line to edit below only. Thanks, Crash Underride 19:49, 10 July 2008 (UTC) ## See Collaboration of the Week!. See Collaboration of the Week!. --Be Black Hole Sun (talk) 16:20, 12 July 2008 (UTC) ## Articles flagged for cleanup Currently, 639 articles are assigned to this project, of which 281, or 44.0%, are flagged for cleanup of some sort. (Data as of 14 July 2008.) Are you interested in finding out more? I am offering to generate cleanup to-do lists on a project or work group level. See User:B. Wolterding/Cleanup listings for details. More than 150 projects and work groups have already subscribed, and adding a subscription for yours is easy - just place a template on your project page. If you want to respond to this canned message, please do so at my user talk page; I'm not watching this page. --B. Wolterding (talk) 17:13, 27 July 2008 (UTC) ## Ramones and other bands Hi Rock people; An IP made several changes in all related Ramones articles (also some Judas Priest', as i could see) and i'm not sure if those edits are OK. Can anybody take a look and if it's acting in bad faith, please severe warn it. Thanks, Caiaffa (talk) 15:34, 30 July 2008 (UTC) ## Major Overhaul Though it seems this great idea has been forgotten about, I've tried my best to change it for the better. Given it a proper banner and am in the process of doing a category thing. Red157(talkcontribs) 23:21, 1 June 2008 (UTC) ## Changes to the WP:1.0 assessment scheme As you may have heard, we at the Wikipedia 1.0 Editorial Team recently made some changes to the assessment scale, including the addition of a new level. The new description is available at WP:ASSESS. • The new C-Class represents articles that are beyond the basic Start-Class, but which need additional references or cleanup to meet the standards for B-Class. • The criteria for B-Class have been tightened up with the addition of a rubric, and are now more in line with the stricter standards already used at some projects. • A-Class article reviews will now need more than one person, as described here. Each WikiProject should already have a new C-Class category at Category:C-Class_articles. If your project elects not to use the new level, you can simply delete your WikiProject's C-Class category and clarify any amendments on your project's assessment/discussion pages. The bot is already finding and listing C-Class articles. Please leave a message with us if you have any queries regarding the introduction of the revised scheme. This scheme should allow the team to start producing offline selections for your project and the wider community within the next year. Thanks for using the Wikipedia 1.0 scheme! For the 1.0 Editorial Team, §hepBot (Disable) 21:18, 4 July 2008 (UTC) ## Articles required I noticed Salt the Wound doesn't have an article when they achieved mainstream sucess espicially this year and Prototype doesn't have one article for any of their albums. I'd take care of this myself, but it's gonna be a pain since my sandbox is already in use of a diferent project, that I'm trying to squeeze time into while at the same time edit articles and do other stuff. YBK You can have more the one sandbox just call Username/Sandbox/2. --Be Black Hole Sun (talk) 10:27, 13 August 2008 (UTC) ## Is a group name singular or plural? Is there an accepted convention on this? Do we say The Beatles is a four-piece band or The Beatles are a four-piece band? Is it affected by whether the group name itself is a plural? Do we say The Beatles are but The Who is? Reason I'm asking is that someone just changed all the is to are on Meshuggah (yes, I know, it's not in the Rock genre, but I figured I might find calmer minds here) and I wondered whether it should be changed back. I looked in WP:MOS but it's enormous and I couldn't quickly find anything definitive. A brief survey showed most group articles seem to use plurals but not all. --Rpresser 20:22, 14 August 2008 (UTC) If the group is american, it is 'is', if it is british, it is 'are'. Zazaban (talk) 20:50, 14 August 2008 (UTC) One, that seems relatively fatuous; two, these guys are Swedish, not British OR American. 20:52, 14 August 2008 (UTC) Well, I suppose if they're swedish, 'it' would be the correct term. Zazaban (talk) 02:13, 15 August 2008 (UTC) • A group (in this case) is singular. So the correct form of the verb in all cases is 'is'. Setwisohi (talk) 13:11, 31 August 2008 (UTC) So what's the final word on this? British usage treats all band names, and even "band" and "group" as plurals (check BBC for rock band articles; for example, google BBC The Who). American usage, however, is clearly different. What, then, is the feeling about the following? • The names of UK bands are always treated as plurals, even if the names are singular, as with The Who and Pink Floyd. The same applies to collective nouns such as "band" and "group," at least in a music context. • The names of US bands are always treated as singulars, whether the names are singular (The Band and Little Feat) or plural (Eagles and The Dead Milkmen). Collective nouns such as "band" and "group" are always treated as singular. • The style for bands from other countries depends on how verb agreement is handled in those countries, and if that cannot be determined, then a consensus of editors should prevail. By the way, wiki articles on the bands I used as examples, pretty much adhere to these conventions. That can't be said for many others, for example, Yes and The Byrds, both of which mix it up a bit, while the Grateful Dead adheres to the "British" convention. As for bands from other countries, I only checked two articles: Sigur Rós (a singular name) is treated as a singular and The Sugarcubes as a plural. Allreet (talk) 18:49, 13 October 2009 (UTC) I cannot see any other way of doing this but the one suggested. It does mean that articles like rock music that have US and UK bands will have both uses, whereas in those articles US spelling and other conventions are generally used, but that is probably inescapable.--SabreBD (talk) 19:01, 13 October 2009 (UTC) please see American and British English differences#Formal and notional agreement, which clarifies that: • proper nouns that are plural in form are normally treated as plural in both varieties of English. so articles where plural group names are followed by singular verbs are either exceptions or simply confused; in most cases plural band names should be treated as plural. • it's not accurate to say "British usage treats all band names, and even 'band' and 'group' as plurals"; in UK English, with singular band names and collective nouns, it depends on what's meant: if the thought expressed is about a group as a unit, it takes a singular verb; if it's about a number of individuals, it takes a plural verb. so the band has five members and the band have declined to comment are both correct UK English, even in the same article. meanwhile, my understanding of the MoS is that articles that discuss bands from various nations should choose one variety of English and stick with it throughout (except of course in direct quotes). (and The Grateful Dead taking a plural verb isn't "the 'British' convention" - it's quite normal in both varieties of English for the + adjective constructions to be plural: the poor outnumber the rich, and the Grateful Dead were a San Francisco-based band.) Sssoul (talk) 22:23, 13 October 2009 (UTC) FWIW, Eagles can't be a singular in U.S. grammar, it is the same as Drive-By Truckers, a plural noun. Saying The names of US bands are always treated as singulars would be wrong. Also, apropos of nothing, Elvis was asked about what music he liked during the interview part of the '68 Comeback Special and mentioned he liked some of the new music, specifically mentioning The Byrds but pronouncing it "The Beards", possibly riffing on the Beatles recently grown facial hair? Just an interesting factoid to pass the time. Sswonk (talk) 22:55, 13 October 2009 (UTC) For the most part, I don't care either way. But I do care about consistency and the need for standards to ensure it. So the Yes article starts off with "Yes are" then through most of the rest of the article, it's Yes "is" or "was". At some point, a sharp editor is going to go in and change that one way or the other, then a few hundred wasted words are going to bounce back and forth between 10 other editors over what's correct. Similarly, I used "British convention" as shorthand, instead of saying "According to BBC's Styleguide" or Fowler's or OED, and now that's a focus of conversation. Sigh. What BBC's guide does have to say on the subject is only partially helpful: "It is the policy of BBC Radio News that collective nouns should be plural, as in The Government have decided. Other departments, such as BBC Online, have resolved that collective nouns should always be singular, as in The Government has decided. BBC Television News has no policy and uses whichever sounds best in context." So there we has it. Anything go. Allreet (talk) 22:54, 13 October 2009 (UTC) again: in UK English it is correct, acceptable, consistent, etc to use both Yes is and Yes are, because sometimes the writer is thinking of a singular unit and sometimes of a number of individuals. if the issue is causing disruption on a page about a particular UK band it's often possible to find "workarounds" (eg the band members have declined to comment" instead of the band have ...) but it's still true that using both Yes are and Yes is in the same article is standard, correct, consistent UK English. meanwhile, thanks for the summary of the BBC's approaches, but you've missed the point about the + adjective. i understood your "shorthand" phrase fine; the point is that the + adjective is commonly plural in both varieties of English: The Grateful Dead were a SF-based band. Sssoul (talk) 06:30, 14 October 2009 (UTC) Based on the input from ssSoul, clearly my original suggestions should be set aside and something different considered. If I might suggest as a starting point: • Band names can be treated as either singular or plural, depending on context. For example, "The Grateful Dead are..." and "The Grateful Dead were..." are both acceptable, but changing verb agreement from one sentence to the next would not be. • Collective nouns such as "band" and "group" should be treated as singular. • Editors should respect earlier decisions regarding these conventions for certain articles, as long as a particular style has been applied consistently. These might need modification, so if anyone wants to help by editing, replacing, amending or suggesting, by all means. To re-state, the reason I believe clearer guidelines are needed is to ensure consistency and avert editing battles that sometimes make even the simplest changes an ordeal. I've also included the idea of respecting earlier decisions to accommodate articles (The Beatles, e.g.) where considerable work and decision-making have already been invested. None of this is intended to resolve any particular cases or overturn past decisions. The aims are general: to improve content for readers and make life easier for editors. Allreet (talk) 17:23, 15 October 2009 (UTC) sorry Allreet - i just now saw this reply. the WP:MOS and WP:ENGVAR already cover these matters, and i don't think it's a good idea to try revamping them for this project. for example: suggesting that collective nouns must always be treated as singular would contradict standard UK usage (again, see American and British English differences#Formal and notional agreement), which would be contrary to WP:ENGVAR. it can be tricky for speakers of other varieties of English to get the hang of the standard UK formal-vs-notional-agreement approach, but that doesn't mean we can create guidelines that would impose US usage on UK-orientated articles. Sssoul (talk) 09:37, 31 October 2009 (UTC) ## beards or Byrds? Nothing to do with the discussion per se but just referring to Sswonk's statement about Elvis Presley's mis-pronunciation of Byrds as "Beards". I too used to think that this was the case but nowadays I'm not so sure. I think Elvis may've been referring to the fashion trend amongst young, male hippies of growing facial hair. Remember, by 1968, The Byrds weren't really a very high profile group anymore, which makes it pretty strange that he would mention them. Facial hair, on the other hand, was very much in vogue amongst the burgeoning counter-culture. I'm paraphrasing here, but doesn't Elvis say something like "I like the beards...and The Beatles and the other groups", obviously trying desperately to make himself seem hip and relevant to "the kids" watching the show. Yeah, nice try Elvis! :D I don't know whether it has ever been conclusively proven that Elvis really did mean to say Byrds but mis-pronounced it, but my current theory is that he was actually referring to the fashion of growing facial hair. Which is certainly less amusing than The King accidently betraying how out of touch he was but is probably more likely. --Kohoutek1138 (talk) 00:44, 14 October 2009 (UTC) Here's that segment of the show:[12]. The topic starts just after 7:20, Elvis says "I like a lot of the new groups, you know, The Beatles and The [Beards] and the, —whoever. But I really like a lot of the new music." To me it sounds like it did the first time I saw the show, he means The Byrds but says The Beards. He is talking about groups, not fashion, and wants to mention someone else but decides not to and says "whoever". So it probably is actually an accidental mispronunciation. Still, he seems very sincere and a little humble about what he is saying. I don't know if it means he was out of touch, I really am not sure. Sswonk (talk) 16:46, 14 October 2009 (UTC) Actually, after reviewing the footage again, I think that you’re right, Elvis does mean The Byrds. Actually, this is what I had always assumed since I first watched the '68 Comback Special years agao but recently, without re-watching the show, I had come to think that perhaps Elvis was referring to the fashion for facial hair amongst the hippies. I do think he was pretty out of touch by that point though. I mean, the whole point of the Comeback Special was to reinvigorate his ailing career, as its title suggests. I'm no Elvis expert but based on what The Beatles have said about their meetings with him in 1965, Elvis was surrounded by "yes men" and fairly isolated from the popular music scene of the day. I really do think that he'd taken his eye off of the ball as far as the popular music scene was concerned. Just look at that clip for example; he mis-pronounces one band's name (proving that he was pretty unfamiliar with an act that had been the biggest homegrown band in the U.S. just 3 years earlier), he mentions The Beatles (because he knew them personally...although as history has revealed, he felt extremely threatened by them) and I take his "whoever" comment to illustrate that he couldn't actually name any other contemporary bands. Now, don't get me wrong, Elvis was obviously still a huge star and sold a lot of records but in 1968 he wasn't considered hip anymore. In addition, although he looks pretty cool and iconic in that black leather get up from a 2000's stand point, it was a hopelessly dated look by late-60's standards. He looks like a refugee from The Wild One, which is great but not likely to endear him to a hip, late sixties youth audience. Even many of his contemporaries like The Everly Brothers had moved with the times and were dressing themselves in suitably mod gear by 1968. Again, I think that the fact that Elvis was dressed like he was betrays a basic lack of knowledge regarding trends in popular music. This is kinda born out by the fact that although the '68 Comeback Special did invigorate his career to an extent, within three years he was playing the supper and wine venues of Las Vegas. Playing to huge audiences, sure, but not huge audiences of hip rock fans. --Kohoutek1138 (talk) 20:20, 14 October 2009 (UTC) Rock and roll strikes again: right after I read your "refugee from The Wild One" observation, I jumped in the car to go pick up a pizza and what is on the radio? "I'm feelin' tragic like I'm Marlon Brando/When I look at my China Girl". Brought a smile. Well, I agree that he was in opposition to most of the rest of rock culture in his style tastes. Now that you wrote that, I am less likely to think he was about to add "The Who" instead of "whoever". It might also be argued that with the leather he was cutting an image, kind of like Frank Sinatra with the casual suit, tie and fedora. Trying to figure out what he meant and said, though, isn't that the way religions get at odds and so on? I am pretty confident, however, that he really did like The Byrds. The question remains, what did he think of James Brown? Sswonk (talk) 23:29, 14 October 2009 (UTC) Ha, yeah...that is pretty strange about The Wild One and you hearing the Bowie song afterwards. Yeah, I agree that Elvis was specifically going for an iconic look with the black leather and he certainly achieved that. I mean, did Elvis ever look cooler than he did in the '68 Comeback Special? Not as far as I'm concerned. You're right, of course, that we'll never know for sure what Elvis really meant but I don't think he could've been that big a fan of The Byrds, otherwise he would got their name right. --Kohoutek1138 (talk) 11:59, 15 October 2009 (UTC)
Feeds: Posts ## The p-group fixed point theorem The goal of this post is to collect a list of applications of the following theorem, which is perhaps the simplest example of a fixed point theorem. Theorem: Let $G$ be a finite $p$-group acting on a finite set $X$. Let $X^G$ denote the subset of $X$ consisting of those elements fixed by $G$. Then $|X^G| \equiv |X| \bmod p$; in particular, if $p \nmid |X|$ then $G$ has a fixed point. Although this theorem is an elementary exercise, it has a surprising number of fundamental corollaries. ## Small factors in random polynomials over a finite field Previously I mentioned very briefly Granville’s The Anatomy of Integers and Permutations, which explores an analogy between prime factorizations of integers and cycle decompositions of permutations. Today’s post is a record of the observation that this analogy factors through an analogy to prime factorizations of polynomials over finite fields in the following sense. Theorem: Let $q$ be a prime power, let $n$ be a positive integer, and consider the distribution of irreducible factors of degree $1, 2, ... k$ in a random monic polynomial of degree $n$ over $\mathbb{F}_q$. Then, as $q \to \infty$, this distribution is asymptotically the distribution of cycles of length $1, 2, ... k$ in a random permutation of $n$ elements. One can even name what this random permutation ought to be: namely, it is the Frobenius map $x \mapsto x^q$ acting on the roots of a random polynomial $f$, whose cycles of length $k$ are precisely the factors of degree $k$ of $f$. Combined with our previous result, we conclude that as $q, n \to \infty$ (with $q$ tending to infinity sufficiently quickly relative to $n$), the distribution of irreducible factors of degree $1, 2, ... k$ is asymptotically independent Poisson with parameters $1, \frac{1}{2}, ... \frac{1}{k}$. ## Z[sqrt{-3}] is the Eisenstein integers glued together at two points Today’s post is a record of a very small observation from my time at PROMYS this summer. Below, by $\text{Spec } R$ I mean a commutative ring $R$ regarded as an object in the opposite category $\text{CRing}^{op}$. ## A little more about zeta functions and statistical mechanics In the previous post we described the following result characterizing the zeta distribution. Theorem: Let $a_n = \mathbb{P}(X = n)$ be a probability distribution on $\mathbb{N}$. Suppose that the exponents in the prime factorization of $n$ are chosen independently and according to a geometric distribution, and further suppose that $a_n$ is monotonically decreasing. Then $a_n = \frac{1}{\zeta(s)} \left( \frac{1}{n^s} \right)$ for some real $s > 1$. I have been thinking about the first condition, and I no longer like it. At least, I don’t like how I arrived at it. Here is a better way to conceptualize it: given that $n | X$, the probability distribution on $\frac{X}{n}$ should be the same as the original distribution on $X$. By Bayes’ theorem, this is equivalent to the condition that $\displaystyle \frac{a_{mn}}{a_n + a_{2n} + a_{3n} + ...} = \frac{a_m}{a_1 + a_2 + ...}$ which in turn is equivalent to the condition that $\displaystyle \frac{a_{mn}}{a_m} = \frac{a_n + a_{2n} + a_{3n} + ...}{a_1 + a_2 + a_3 + ...}$. (I am adopting the natural assumption that $a_n > 0$ for all $n$. No sense in excluding a positive integer from any reasonable probability distribution on $\mathbb{N}$.) In other words, $\frac{a_{mn}}{a_m}$ is independent of $m$, from which it follows that $a_{mn} = c a_m a_n$ for some constant $c$. From here it already follows that $a_n$ is determined by $a_p$ for $p$ prime and that the exponents in the prime factorization are chosen geometrically. And now the condition that $a_n$ is monotonically decreasing gives the zeta distribution as before. So I think we should use the following characterization theorem instead. Theorem: Let $a_n = \mathbb{P}(X = n)$ be a probability distribution on $\mathbb{N}$. Suppose that $a_{nm} = c a_n a_m$ for all $n, m \ge 1$ and some $c$, and further suppose that $a_n$ is monotonically decreasing. Then $a_n = \frac{1}{\zeta(s)} \left( \frac{1}{n^s} \right)$ for some real $s > 1$. More generally, the following situation covers all the examples we have used so far. Let $M$ be a free commutative monoid on generators $p_1, p_2, ...$, and let $\phi : M \to \mathbb{R}$ be a homomorphism. Let $a_m = \mathbb{P}(X = m)$ be a probability distribution on $M$. Suppose that $a_{nm} = c a_n a_m$ for all $n, m \in M$ and some $c$, and further suppose that if $\phi(n) \ge \phi(m)$ then $a_n \le a_m$. Then $a_m = \frac{1}{\zeta_M(s)} e^{-\phi(m) s}$ for some $s$ such that the zeta function $\displaystyle \zeta_M(s) = \sum_{m \in M} e^{-\phi(m) s}$ converges. Moreover, $\zeta_M(s)$ has the Euler product $\displaystyle \zeta_M(s) = \prod_{i=1}^{\infty} \frac{1}{1 - e^{- \phi(p_i) s}}$. Recall that in the statistical-mechanical interpretation, we are looking at a system whose states are finite collections of particles of types $p_1, p_2, ...$ and whose energies are given by $\phi(p_i)$; then the above is just the partition function. In the special case of the zeta function of a Dedekind abstract number ring, $M = M_R$ is the commutative monoid of nonzero ideals of $R$ under multiplication, which is free on the prime ideals by unique factorization, and $\phi(I) = \log N(I)$. In the special case of the dynamical zeta function of an invertible map $f : X \to X$, $M = M_X$ is the free commutative monoid on orbits of $f$ (equivalently, the invariant submonoid of the free commutative monoid on $X$), and $\phi(P) = \log |P|$, where $|P|$ is the number of points in $P$. ## Zeta functions, statistical mechanics and Haar measure An interesting result that demonstrates, among other things, the ubiquity of $\pi$ in mathematics is that the probability that two random positive integers are relatively prime is $\frac{6}{\pi^2}$. A more revealing way to write this number is $\frac{1}{\zeta(2)}$, where $\displaystyle \zeta(s) = \sum_{n \ge 1} \frac{1}{n^s}$ is the Riemann zeta function. A few weeks ago this result came up on math.SE in the following form: if you are standing at the origin in $\mathbb{R}^2$ and there is an infinitely thin tree placed at every integer lattice point, then $\frac{6}{\pi^2}$ is the proportion of the lattice points that you can see. In this post I’d like to explain why this “should” be true. This will give me a chance to blog about some material from another math.SE answer of mine which I’ve been meaning to get to, and along the way we’ll reach several other interesting destinations. ## Mathematics in real life II Another small example I noticed awhile ago and forgot to write up. Prime numbers, as one of the most fundamental concepts in mathematics, have a way of turning up in unexpected places. For example, the life cycles of some cicadas are either $13$ or $17$ years. It’s thought that this is a response to predation by predators with shorter life cycles; if your life cycle is prime, a predator with any shorter life cycle can’t reliably predate upon you. A month or so ago I noticed a similar effect happening in the card game BS. In BS, some number of players (usually about four) are dealt the same number of cards from a standard deck without jokers. Beginning with one fixed card, such as the two of clubs, players take turns placing some number of cards face-down in the center. The catch is that the players must claim that they are placing down some number of a specific card; Player 1 must claim that they are placing down twos, Player 2 must claim that they are placing down threes, and so forth until we get to kings and start over. Any time cards are played, another player can accuse the current player of lying. If the accusation is right, the lying player must pick up the pile in the center. If it is wrong, the accusing player must pick up the pile in the center. The goal is to get rid of all of one’s cards. I’ve been playing this game for years, but I didn’t notice until quite recently that the reason the game terminates in practice is that $13$, the number of types of cards in a standard deck, is prime. If, for example, we stopped playing with aces and only used $12$ types of cards, then a game with $4 | 12$ people need not terminate. Consider a game in which Player 1 has only cards $2, 6, 10$, Player 2 has only cards $3, 7, J$, Player 3 has only cards $4, 8, Q$, and Player 4 has only cards $5, 9, K$, and suppose that Player 1 has to play threes at some point in the game. Then no player can get rid of their cards without lying; since the number of players divides the number of card types, every player will always be asked to play a card they don’t have. Once every player is aware of this, every player can call out every other player’s lies, and it will become impossible to end the game reasonably. More generally, such situations can occur if $13$ is replaced by a composite number $n$ such that the number of players is at least the smallest prime factor of $n$. This is because people who get rid of their cards will leave the game until the number of players is equal to the smallest prime factor of $n$, at which point the game may stall. But because $13$ is prime, any game played with less than $13$ people has the property that each player will eventually be asked to play a card that they have. ## Ramsey theory and Fermat’s Last Theorem In the first few lectures of Graph Theory, the lecturer (Paul Russell) presented a cute application of Ramsey theory to Fermat’s Last Theorem. It makes a great introduction to the general process of casting a problem in one branch of mathematics as a problem in another and is the perfect size for a blog post, so I thought I’d talk about it. The setup is as follows. One naive way to go about proving the nonexistence of nontrivial integer solutions to $x^n + y^n = z^n, n > 2$ (that is, solutions such that $x, y, z$ are not equal to zero) is using modular arithmetic; that is, one might hope that for every $n > 2$ it might be possible to find a modulus $m$ such that the equation has no nontrivial solution $\bmod m$. To simplify matters, we’ll assume that $x, y, z$ are relatively prime to $m$, or else there is some subtlety in the definition of “nontrivial” (e.g. we might have $x, y, z$ not divisible by $m$ but $x^n \equiv 0 \bmod m$.) Note that it might be the case that $m$ is not relatively prime to a particular nontrivial solution in the integers, but if we can prove non-existence of nontrivial solutions for infinitely many $m$ (in particular, such that any integer is relatively prime to at least one such $m$) then we can conclude that no nontrivial integer solutions exist. By the Chinese Remainder Theorem, this is possible if and only if it is possible with $m$ a prime power, say $m = p^k$. If $p$ is relatively prime to $n$, this is possible if and only if it is possible with $m = p$. This is because given a nontrivial solution $\bmod p$ we can use Hensel’s lemma to lift it to a nontrivial solution $\bmod p^k$ for any $k$ (and even to $\mathbb{Z}_p$), and the converse is obvious. (Again to simplify matters, we’ll ignore the finitely many primes that divide $n$.) So we are led to the following question. For a fixed positive integer $n > 2$ do there exist infinitely many primes $p$ relatively prime to $n$ such that $x^n + y^n \equiv z^n \bmod p$ has no nontrivial solutions? As it turns out, the answer is no. In 1916 Schur found a clever way to prove this by proving the following theorem. Theorem: For every positive integer $k$ there exists a positive integer $m$ such that if $\{ 1, 2, ... m \}$ is partitioned into $k$ disjoint subsets $A_1, ... A_k$, then there exists $i$ such that there exist $a, b, c \in A_i$ with $a + b = c$. In other words, the Schur number $S(k) = m$ exists. (Note that I am using a convention which is off by $1$.) If we let $p$ be a prime greater than $S(n)$ and let the $A_i$ be the cosets of the subgroup of $n^{th}$ powers in $(\mathbb{Z}/p\mathbb{Z})^{*}$, which has index at most $n$, we obtain the following as a corollary. Corollary: Fix a positive integer $n > 2$. For any sufficiently large prime $p$, there exists a nontrivial solution to $x^n + y^n \equiv z^n \bmod p$. In the previous post we showed that the splitting behavior of a rational prime $p$ in the ring of cyclotomic integers $\mathbb{Z}[\zeta_n]$ depends only on the residue class of $p \bmod n$. This is suggestive enough of quadratic reciprocity that now would be a good time to give a full proof. The key result is the following fundamental observation. Proposition: Let $q$ be an odd prime. Then $\mathbb{Z}[\zeta_q]$ contains $\sqrt{ q^{*} } = \sqrt{ (-1)^{ \frac{q-1}{2} } q}$. Quadratic reciprocity has a function field version over finite fields which David Speyer explains the geometric meaning of in an old post. While this is very much in line with what we’ve been talking about, it’s a little over my head, so I’ll leave it for the interested reader to peruse. ## The arithmetic plane If you haven’t seen them already, you might want to read John Baez’s week205 and Lieven le Bruyn’s series of posts on the subject of spectra. I especially recommend that you take a look at the picture of $\text{Spec } \mathbb{Z}[x]$ to which Lieven le Bruyn links before reading this post. John Baez’s introduction to week205 would probably also have served as a great introduction to this series before I started it: There’s a widespread impression that number theory is about numbers, but I’d like to correct this, or at least supplement it. A large part of number theory – and by the far the coolest part, in my opinion – is about a strange sort of geometry. I don’t understand it very well, but that won’t prevent me from taking a crack at trying to explain it…. Before we talk about localization again, we need some examples of rings to localize. Recall that our proof of the description of $\text{Spec } \mathbb{C}[x, y]$ also gives us a description of $\text{Spec } \mathbb{Z}[x]$: Theorem: $\text{Spec } \mathbb{Z}[x]$ consists of the ideals $(0), (f(x))$ where $f$ is irreducible, and the maximal ideals $(p, f(x))$ where $p \in \mathbb{Z}$ is prime and $f(x)$ is irreducible in $\mathbb{F}_p[x]$. The upshot is that we can think of the set of primes of a ring of integers $\mathbb{Z}[\alpha] \simeq \mathbb{Z}[x]/(f(x))$, where $f(x)$ is a monic irreducible polynomial with integer coefficients, as an “algebraic curve” living in the “plane” $\text{Spec } \mathbb{Z}[x]$, which is exactly what we’ll be doing today. (When $f$ isn’t monic, unfortunate things happen which we’ll discuss later.) We’ll then cover the case of actual algebraic curves next. ## Primes and ideals Probably the first important result in algebraic number theory is the following. Let $K$ be a finite field extension of $\mathbb{Q}$. Let $\mathcal{O}_K$ be the ring of algebraic integers in $K$. Theorem: The ideals of $\mathcal{O}_K$ factor uniquely into prime ideals. This is the “correct” generalization of the fact that $\mathbb{Z}$, as well as some small extensions of it such as $\mathbb{Z}[ \imath ], \mathbb{Z}[\omega]$, have unique factorization of elements. My goal, in this next series of posts, is to gain some intuition about this result from a geometric perspective, wherein one thinks of a ring as the ring of functions on some space. Setting up this perspective will take some time, and I want to do it slowly. Let’s start with the following simple questions. • What is the right notion of prime? • Why look at ideals instead of elements? In this series I will assume the reader is familiar with basic abstract algebra but may not have a strong intuition for it.
# Should the Bush tax cuts be extended? • News ## Should the Bush tax cuts be extended? • ### Extend all of the Bush tax cuts permanently Votes: 16 45.7% • ### Extend some of the Bush tax cuts permanently Votes: 5 14.3% • ### Extend some of the Bush tax cuts temporarily Votes: 12 34.3% • ### Extend all of the Bush tax cuts temporarily Votes: 2 5.7% • Total voters 35 Gold Member I'm curious just how long US citizens thing unemployment benefits should last. http://www.worldnewsheardnow.com/99ers-hope-for-tier-5-and-beyond-despite-belief-unemployment-benefits-%E2%80%9Ccan%E2%80%99t-go-on-forever%E2%80%9D/2146/" [Broken] Should it be 999 weeks, or 999 months before the claims of cruelty die down? Start a poll! Last edited by a moderator: What is absurdly false? Your claim that those on the right think that rich people shouldn't pay more taxes than "ordinary people". That the people who benefit disproportionately from our economy do not pay a proportionate share of taxes? Evidence? Every factual source I know of says they pay far more than a proportional amount, and that the Bush tax cut shifted the burden even more toward the rich. That people who own houses are hit with unfunded mandates that make education very expensive? I'm not sure exactly what you're referring to, but I might agree with you on that. There is no way that the Bush tax cuts for the wealthy should be renewed. It benefits only those who are already laughing all the way to the bank. Baloney. And you know it. Do you think that it would permanently damage people making nearly $400K to cause their top tax rate to return from 33% to 36%? No. It wouldn't damage them personally at all. The personal financial well-being of rich people is not the issue here. The left only pretends it is to avoid honest debate. Still, I find it hard to believe that I would have been inconvenienced by a return to pre-Bush giveaway tax levels. Again talking as if a tax cut is the government is giving someone money? I'm against the government giving money to rich people, and you know it. If someone earning over$200K/yr in taxable income can't absorb an increase of 2-3% in their Federal tax rate, they know nothing about budgeting, saving, or financial planning. No pity from me for the ignorant. Again, a red herring. Nobody is suggesting you pity the rich. That's just more absurdity. If my (and Republicans') position on this issue is so wrong, you (and Dems) would have no need to misrepresent it. The beliefs you represent as "the right" are non-existent, and used for a strawman argument by the left to avoid honest debate. dreiter Actually the Bush tax cuts are really a non-issue in the current tax situation. What needs to happen is to have capital gains tax come more in-line with the standard tax rates. The 'regular rich' (~$100K/yr) pay more in taxes than the super rich (>$1mil/yr) because the super rich make most of their money in ways that is only taxed by the capital gains tax. This is a terrifically flawed system, and it's even more unfair to the 'regular rich' than it is to the middle and lower classes... Gold Member Actually the Bush tax cuts are really a non-issue in the current tax situation. What needs to happen is to have capital gains tax come more in-line with the standard tax rates. The 'regular rich' (~$100K/yr) pay more in taxes than the super rich (>$1mil/yr) because the super rich make most of their money in ways that is only taxed by the capital gains tax. This is a terrifically flawed system, and it's even more unfair to the 'regular rich' than it is to the middle and lower classes... What is flawed are the instructions for http://www.irs.gov/pub/irs-pdf/i1040sd.pdf". My god. My investment adviser advised me not to sell my stocks(ever). I now see why. I'm going to have to hire a tax expert for about $200 to have him figure out how much money I owe on$100 worth of long term capital gains on the stocks I sold this year. :grumpy: Last edited by a moderator: nismaratwork What is flawed are the instructions for http://www.irs.gov/pub/irs-pdf/i1040sd.pdf". My god. My investment adviser advised me not to sell my stocks(ever). I now see why. I'm going to have to hire a tax expert for about $200 to have him figure out how much money I owe on$100 worth of long term capital gains on the stocks I sold this year. :grumpy: I feel your pain... specifically I feel it in the rear, but I feel it nonetheless! Last edited by a moderator: Actually the Bush tax cuts are really a non-issue in the current tax situation. What needs to happen is to have capital gains tax come more in-line with the standard tax rates. The 'regular rich' (~$100K/yr) pay more in taxes than the super rich (>$1mil/yr) because the super rich make most of their money in ways that is only taxed by the capital gains tax. This is a terrifically flawed system, and it's even more unfair to the 'regular rich' than it is to the middle and lower classes... This is completely nonsensical. Capital gains taxes apply to returns on investments. An investment is not regular income - it can lose value. Your paycheck is guaranteed for as long as you have a job. As an employee, you assume no risk. The government taxes capital gains at a lower rate for three reasons: 1) The capital was already taxed when it was earned as income, before being invested. Investment taxes are a form of double-taxation. 2) The investor assumes all market risk, the employee none. Government rewards this risk with lower tax rates because invested capital underwrites paychecks. 3) Lower investment taxes encourages investing your money, and discourages saving it. Consumption and investment are more desirable than savings. Government has an interest in getting you to consume and invest, and in keeping you from saving. The only thing that's flawed here is your reasoning. I'm curious just how long US citizens thing unemployment benefits should last. Right now it is 99 weeks. Should it be 999 weeks, or 999 months before the claims of cruelty die down? Why continue to pretend its unemployment insurance at all? Go the way of England; put 'em on the dole and guarantee a few good lifetime party voters. If someone earning over $200K/yr in taxable income can't absorb an increase of 2-3% in their Federal tax rate, they know nothing about budgeting, saving, or financial planning. Err...how, exactly, do you propose that an individual budget for the whims of the state? Should I start setting aside 2-3% of my income every year to a "what the heck is Congress going to do next?" fund? Does that strike you as an efficient use of my time and money? This is laughably hysterical. Just because you love to post on public forums about how much money you make, and how little you need it all, does not a representative case make. In any event, the Treasury accepts donations. While Congress deliberates, do your conscience a favor, and write a check for 2 to 3 percent of your income to the US Treasury, with Gift to Public Debt written in the memo line, and send it to this address: Attn Dept G Bureau of the Public Debt P. O. Box 2188 Parkersburg, WV 26106-2188 The lower classes are hit with property taxes, sales taxes, excise taxes, etc that they cannot avoid. Those taxes are a large percentage of their disposable income. If you live in a state like this one (5% sales tax) 5% of whatever you spend on non-exempt goods (food is generally exempt) goes to taxes. Poorer people have to spend a larger percentage of their disposable income on clothing, furnishings, etc, and they have an extra 5% tax burden on their purchases as a result. With wealthy people, such spending is discretionary, with poor people, it is generally anything but. For instance, when your children are growing out of their clothing, they need more clothing. This is just plain false. It is true that the poor pay sales, property, and excise taxes. It is also true that they pay corporate income taxes, business license taxes, and every other "soak the rich" mandate the Democrats love to impose - indirectly, through higher prices on those clothes and that food. Even so, according to the Congressional Budget Office, the top 1% of income earners in the United States pay an effective 31% of their income in taxes. The bottom 20%? About 5%. This includes federal, state, and local taxes, direct and indirect - even indirect mandates like corporate income taxes. How did they figure it? Lots and lots of interns, probably. dreiter This is completely nonsensical. Capital gains taxes apply to returns on investments. An investment is not regular income - it can lose value. Your paycheck is guaranteed for as long as you have a job. As an employee, you assume no risk. The government taxes capital gains at a lower rate for three reasons: 1) The capital was already taxed when it was earned as income, before being invested. Investment taxes are a form of double-taxation. 2) The investor assumes all market risk, the employee none. Government rewards this risk with lower tax rates because invested capital underwrites paychecks. 3) Lower investment taxes encourages investing your money, and discourages saving it. Consumption and investment are more desirable than savings. Government has an interest in getting you to consume and invest, and in keeping you from saving. The only thing that's flawed here is your reasoning. I'm not terrifically excited to debate right now but such is life eh? What I will respond with is that nothing you have said justifies the current taxation rates. Capital gains incomes are essentially no-risk investments. Look at the stock market. It has continued to climb over it's 100+ year history, and it's only going to stop when the entire American economy falls. So (rich) investors with diverse portfolios are going to consistently make money with long-term investing, and they are going to pay a very low tax rate on that 'income'. This is unfair. Maybe you don't care about fairness (you a pure capitalist?) but I think that economic disparity is a major obstruction to the advancement of societies and humanity in general. If you earn more than a million a year (from ANY source) I think you should be heavily taxed. Mentor I know I'm late to this gem of a thread, but one thing I love is this: We borrowed money from the Chinese to give the rich tax cuts...and now people want to debate if we should keep doing it?!? Sheesh. It wasn't just you doing it, Lisa, a number of people said similar things. The "Bush tax cuts" cut taxes for everyone, not just the rich. The way the rhetoric from the left has been sounding lately, I'm wondering if people are even aware that it wasn't just the rich who got tax cuts or if that was just a knee-jerk reaction from passion instead of thought that caused the oversight in your characterization. nismaratwork I know I'm late to this gem of a thread, but one thing I love is this: It wasn't just you doing it, Lisa, a number of people said similar things. The "Bush tax cuts" cut taxes for everyone, not just the rich. The way the rhetoric from the left has been sounding lately, I'm wondering if people are even aware that it wasn't just the rich who got tax cuts or if that was just a knee-jerk reaction from passion instead of thought that caused the oversight in your characterization. Perhaps she was referring to relative proportions in the cuts... and the political belief that at some point you don't need to give tax breaks to those who are considered "well-off to rich" by the majority of Americans? Mentor Perhaps she was referring to relative proportions in the cuts... and the political belief that at some point you don't need to give tax breaks to those who are considered "well-off to rich" by the majority of Americans? Dunno. If "we borrowed money from the Chinese to give the rich tax cuts" we also borrowed money from the Chinese to give everyone else tax cuts and it is Obama's intent to continue that. Seems disingenuous to cherry-pick like that, ignoring probably 90% of the "borrowing from the Chinese". In other words, according to Democrats, it is ok to borrow from the Chinese to give 95% of Americans tax cuts, but to include the other 5% in those tax cuts is very upsetting. I've also never heard a democrat opposing a tax increase for the rich or supporting a tax decrease. It would seem like there is always more the rich should pay. Never have I heard what I would consider fair: a situation where everyone would benefit from a tax cut or everyone would have their taxes increased. nismaratwork Dunno. If "we borrowed money from the Chinese to give the rich tax cuts" we also borrowed money from the Chinese to give everyone else tax cuts and it is Obama's intent to continue that. Seems disingenuous to cherry-pick like that, ignoring probably 90% of the "borrowing from the Chinese". Maybe, but I can still understand the outrage, although it shouldn't distract from the other %age you mentioned. Science Advisor Never have I heard what I would consider fair: a situation where everyone would benefit from a tax cut or everyone would have their taxes increased. The only (responsible) way for 'everyone to benefit from a tax cut' would be to decrease spending and you know that's never going to happen. Politicians (from local councils to congress) are incapable of doing it. And they will never raise 'everyone's' taxes, because then there would be nobody left to vote for them. Our system is based on compromise, which means 'get someone else to pay for what I want.' Staff Emeritus Science Advisor With wealthy people, such spending is discretionary, with poor people, it is generally anything but. For instance, when your children are growing out of their clothing, they need more clothing. I don't necessarily disagree with you, but I don't see how this is a good example. Do the wealthy people not have children? Mentor No, but wealthy people are apparently not allowed to have savings. nismaratwork No, but wealthy people are apparently not allowed to have savings. You mention savings and children... how about this as a compromise: We tax the wealthy more than those with less, but we remove the (ridiculous) estate tax ("death tax")? That seems like a good trade when considering passing saved wealth to the next generation. I voted for the second choice. Tax cuts should be permanent for the 98% making below$250,000/y. It's nice to think that the top 1% or 2% haven't much to worry about but the problem is that there is more stress involved - IMO - staying ahead of the other 98% of our population isn't easy. You have to influence individuals, as well as, governmental policies (local, state, and federal). To accomplish this PACs are needed, since the middle and lower classes (income wise) haven't the resources to nudge, bribe, or payout for this manipulation. It is left to those fortunate few who have said ability. I don't think most wealthy individuals deliberately aid in maintaining an uneven field however I think that it is a function of greed that this situation continues. I think that corporate entities because they have a quasi-autonomous personality need to be regulated, and controlled by people who have human beings and society as their priority. So that the many disparities that exist in a (or our) society can be dealt with fairly. This is an if, but if I was in the top 3% of income earners a 40% tax rate wouldn't bother me because I grew up having to do more with less. I.e., Giving up $100,000 from a salary of$200,000+ leaving about 90,000 to 100,000 for me would not hurt me if I don't try to maintain a high level lifestyle. And I would still be living better than I had for a majority of my life. Gold Member It may be politically difficult to cut federal spending, but sooner or later a cut will happen, this is unavoidable. Taxes can not come close to covering the current spending load, nor should they even if it were possible. Several states managed the political will to cut their budgets recently; Mississippi by some 10%. And I mean cut, as in reduced, not an increase which was reduced from some hoped for target. Hopefully spending cuts won't occur here as they did in Greece, with the riots in the streets protesting an increase in government retirement/pension age from 50 to 52 for hair dressers. nismaratwork It may be politically difficult to cut federal spending, but sooner or later a cut will happen, this is unavoidable. Taxes can not come close to covering the current spending load, nor should they even if it were possible. Several states managed the political will to cut their budgets recently; Mississippi by some 10%. And I mean cut, as in reduced, not an increase which was reduced from some hoped for target. Hopefully spending cuts won't occur here as they did in Greece, with the riots in the streets protesting an increase in government retirement/pension age from 50 to 52 for hair dressers. Lets be honest now, the system in Greece was ridiculous in the extreme, and the USA doesn't have anything like that level of social welfare or early retirement. Gold Member Dunno. If "we borrowed money from the Chinese to give the rich tax cuts" we also borrowed money from the Chinese to give everyone else tax cuts and it is Obama's intent to continue that. Seems disingenuous to cherry-pick like that, ignoring probably 90% of the "borrowing from the Chinese". I go further and point out (https://www.physicsforums.com/showpost.php?p=2868414&postcount=17", to pay social security benefits at a retirement age 20-30 years below the life span for which those benefits were originally created, to pay $60B for a government auto company, ...". The sum total of this type of spending far exceeds the few tens of billions per year lost to revenue from the Bush tax cuts. Last edited by a moderator: Gold Member For comparison: Greek debt per capita: ~$50k (http://en.wikipedia.org/wiki/Economy_of_Greece" [Broken] / 11M) US debt per capita: ~$45k ($13.4T [federal only] / 300M) Now absolute debt comparisons with the US are usually waived off because the US GDP per capita is relatively large. However, GDP can shrink come bad fortune and/or bad policy; the debt will not. Regarding extreme public pensions, I'd like to see the Greek who can top this guy: Rizzo [city manager for a town of 37,000] is set to receive a pension of about $600,000 a year, which would make him the highest-paid pensioner in the California Public Employees' Retirement fund. That amount is calculated from a salary of nearly$800,000. -all legal. http://www.latimes.com/news/local/la-me-rizzo-pension-20100904,0,3689592.story For examples of more widespread US social welfare liabilities, see http://www.econbrowser.com/archives/2010/03/speaking_of_lia.html" [Broken] which reports a $7.2T liability beyond collected revenue predictions. Last edited by a moderator: Staff Emeritus Gold Member No structure of marginal tax brackets should be seen as permanent or unchangeable. Otherwise, the available tools of fiscal policy are basically rendered impotent to react appropriately to the business cycle. We should probably set reasonable caps (for instance, no activity should ever be taxed at a combined local-state-federal rate of 40% or greater), but the rates and brackets themselves should change every year. As for the specific impact of moving the rate for income over$250,000 from 36% to 39%, it should theoretically have a short-term stimulative effect since the government will spend the money, whereas the private income-earners are saving it, but that will also hurt investment. If "investment" just means stockpiling gold, that could actually be a good thing, but only if the rate goes down again immediately when private investment in productive assets starts to recover, which probably won't happen. Of course, normally the point of raising marginal rates is to retire debt at the peak of a recovery, and if that's all they're going to do, it's pointless, because we're still closer to the bottom than the top, it would remove any theoretical stimulative effect, and rates are too low right now to justify retiring debt in the absence of a budget surplus. Plus, the more rational way to generate a surplus is organically at the peak of the business cycle when receipts increase even at existing rates and demand for social services hits a low, not to do it at the trough of a cycle by raising rates in the face of otherwise falling receipts. The larger problem is that we crippled our ability to respond with proper policy by running up so much debt during a boom, making us unable to do it when we were supposed to during a downturn, causing this proposal to raise taxes just to maintain existing service levels, which should never be necessary at the federal level. Gold Member No structure of marginal tax brackets should be seen as permanent or unchangeable. That's to the good for the government macro planner. Moving the rates around rapidly imposes a cost on business operations through a lack of predictability. Otherwise, the available tools of fiscal policy are basically rendered impotent to react appropriately to the business cycle. Even with fiscal room to maneuver, observation shows that is unlikely government can bring itself to pull the correct levels in the short term swings of business cycles, furthermore the fiscal economic guidance of what to do is unclear. The only clear economic lever pulling successes of the last several decades have been in monetary policy, and even there we have had mistakes (easy Greenspan money 01-03). We should probably set reasonable caps (for instance, no activity should ever be taxed at a combined local-state-federal rate of 40% or greater), How about 25%? Keynes himself apparently said, "25 percent taxation is about the limit of what is easily borne." but the rates and brackets themselves should change every year. As for the specific impact of moving the rate for income over $250,000 from 36% to 39%, it should theoretically have a short-term stimulative effect since the government will spend the money, whereas the private income-earners are saving it, but that will also hurt investment. If "investment" just means stockpiling gold, that could actually be a good thing, but only if the rate goes down again immediately when private investment in productive assets starts to recover, which probably won't happen. Consider that the savings (instead of investment/spending) in that bracket may well be due to http://en.wikipedia.org/wiki/Permanent_income_hypothesis" [Broken] of the pending rate increases. Take away that threat, and I suspect investment and spending will start to climb again, in a matter much more likely to increase employment than when the government takes tax money out for stimulus joy ride. Of course, normally the point of raising marginal rates is to retire debt at the peak of a recovery, and if that's all they're going to do, it's pointless, because we're still closer to the bottom than the top, it would remove any theoretical stimulative effect, and rates are too low right now to justify retiring debt in the absence of a budget surplus. Plus, the more rational way to generate a surplus is organically at the peak of the business cycle when receipts increase even at existing rates and demand for social services hits a low, not to do it at the trough of a cycle by raising rates in the face of otherwise falling receipts. The larger problem is that we crippled our ability to respond with proper policy by running up so much debt during a boom, making us unable to do it when we were supposed to during a downturn, causing this proposal to raise taxes just to maintain existing service levels, which should never be necessary at the federal level. The US has been unable to run up debt in this downturn? US federal debt has http://blog.heritage.org/wp-content/uploads/2009/08/tripple-debt.jpg" [Broken] Just how much additional debt did are you calling for, and to what end? To my mind there are no good examples of successful stimulation of depressed economies through government spending - not in 1990's Japan, not in the US 1930s. After this last ~$1T stimulus attempt and the resulting near 10% unemployment, deficit stimulus spending economics should warrant at the very least great skepticism, and taking for granted that fire hosing money out of a government window must stimulate the economy in all recessions borders on the irrational. Finally, the predicted revenue increase from the repeal of the proposed Bush tax cuts, even if all of it goes through, is only a fraction of the deficit gap caused by current spending ($70B/year). Last edited by a moderator: Gold Member The GOP is squealing about raising the marginal tax rate on the highest-earning Americans from 36-39%. They proclaim that the Bush cuts on the highest earners must be extended, despite the fact that such an extension would add ~$1Trillion to the deficit over the next decade. Extending the tax cuts to the people most able to pay them would be stupid and unproductive, while rescinding those tax cuts to the people whose wages have stagnated or fallen, and whose savings have taken a beating would prolong our economic downturn. Those of us who have benefited greatly from our economy and our hard work should not lobby to perpetuate and extend the earnings-disparity between the rich and the poor. That's the kind of crap that led our country into the Great Depression. And yes, I was firmly in the upper 2% for several years before illness (and lack of proper accommodation for that) knocked me out of the work-force, so I have a good feel about how much of a "hardship" rescinding 3% of the marginal tax rate would have on me. Not enough to worry about. We live comfortably, and well within our means. I'm certain that the millionaires (including most Senators) who want to continue the tax break can learn to live without it. Last edited: Gold Member The GOP is squealing about raising the marginal tax rate on the highest-earning Americans from 36-39%. They proclaim that the Bush cuts on the highest earners must be extended, despite the fact that such an extension would add ~$1Trillion to the deficit over the next decade... That's at least 30% high, and since when do you care about deficits (or unemployment for that matter)? I've missed your objections to the last several years of helicopter spending. Gold Member That's at least 30% high, and since when do you care about deficits (or unemployment for that matter)? I've missed your objections to the last several years of helicopter spending. Extending unemployment benefits would have cost ~$30 Billion and the GOP wouldn't sign on because it would "add to the deficit". Somehow, Boehner et al had no problem with adding a trillion to the deficit to extend the Bush tax cuts for the top 2%. If you don't think that I have been outraged by unfunded wars, you're very wrong. https://www.physicsforums.com/showpost.php?p=2869160&postcount=26 Staff Emeritus Gold Member That's to the good for the government macro planner. Moving the rates around rapidly imposes a cost on business operations through a lack of predictability. Even with fiscal room to maneuver, observation shows that is unlikely government can bring itself to pull the correct levels in the short term swings of business cycles, furthermore the fiscal economic guidance of what to do is unclear. The only clear economic lever pulling successes of the last several decades have been in monetary policy, and even there we have had mistakes (easy Greenspan money 01-03). These are sort of competing problems here. The major problem with fiscal policy is that it's lagging because of how slowly spending measures move through Congress. So even though monetary policy has less of the desired effect, we use it more because it's more administratively efficient. But I'm not talking about making wild rate swings left and right all the time. Just that fiscal policy is only effective when it's actually used and different situations call for different rates. Sometimes you need to raise them and sometimes you need to cut them. They can't be set in stone. As for the effect on business operations, I think that's pretty easily avoided by not taxing business operations. How about 25%? Keynes himself apparently said, "25 percent taxation is about the limit of what is easily borne." As far as I know, empirical studies have indicated that 40% is the marginal rate at which the taxed activity starts to be significantly discouraged. You see this with nurses. When overtime starts knocking them into the highest bracket, they stop working overtime. But what the number is doesn't matter. The point is just that there is a point at which tax rates get too high and we should cap them and allow flexibility beneath that cap. Consider that the savings (instead of investment/spending) in that bracket may well be due to http://en.wikipedia.org/wiki/Permanent_income_hypothesis" [Broken] of the pending rate increases. Take away that threat, and I suspect investment and spending will start to climb again, in a matter much more likely to increase employment than when the government takes tax money out for stimulus joy ride. High-income households have high savings rates regardless of future tax forecasts. Shifting wealth from low- to high-MPC consumers will always have a short-term stimulative effect. That isn't to say it's always the right thing to do, and that's why I qualified the "theoretical projection of effect" above with caveats regarding negative long-term effects. Plus, at a certain point, it's unethical to just confiscate money. Still, there will always be a short-term stimulative effect when you take money that isn't being spent and spend it. The US has been unable to run up debt in this downturn? That isn't really what I meant. The government, practically speaking, can issue damn near as much debt as it wants to. But when you run up debt during boom times, you cripple your ability to act prudently during busts. Because now, passing a debt-funded spending package doesn't simply create debt, it creates excessive debt. The proper way to do it is to run surpluses during boom times and deficits during busts to maintain predictable service levels in the face of volatile national income and to smooth the business cycle. Running a deficit all the time just results in a permanently increasing national debt that kills the future. If you run a deficit during booms and then cut back during busts, you just worsen the business cycle and turn booms into bubbles and busts into depressions. Last edited by a moderator: ...so I have a good feel about how much of a "hardship" rescinding 3% of the marginal tax rate would have on me. Not enough to worry about. We live comfortably, and well within our means. I'm certain that the millionaires (including most Senators) who want to continue the tax break can learn to live without it. Yep, poor, poor rich people. And lower middle class people like me that feel so sorry for them and take their side against the righteous Dems. People like me who just can't stand the thought of rich people like you having such hardships. :uhh: I think it's now safe to assume you are very aware that your representation of those that disagree with you is laughably absurd and fraudulent, and you are not really as uninformed of the opposing viewpoint as you pretend, so there is no point in arguing about it. Gold Member Yep, poor, poor rich people. And lower middle class people like me that feel so sorry for them and take their side against the righteous Dems. People like me who just can't stand the thought of rich people like you having such hardships. :uhh: I think it's now safe to assume you are very aware that your representation of those that disagree with you is laughably absurd and fraudulent, and you are not really as uninformed of the opposing viewpoint as you pretend, so there is no point in arguing about it. I was earning over 30K/year in a "draw" salary, and far exceeded many, many times that for years in my sales incentives (you could call them bonuses but they were structured very strictly WRT to department performance). My wife and I both came from poor families and we never needed to keep up for appearances. It is "safe to assume" NOTHING. Get a grip. And get a viewpoint that doesn't originate from political views that are purely neo-con blather. There are conservatives like myself that have had to go independent since the GOP has abandoned all conservative tenets in order to pander to the wealthy. It is "safe to assume" NOTHING. Get a grip. And get a viewpoint that doesn't originate from political views that are purely neo-con blather. John Lock: neo-con blather-er extraordinaire. Yeah, that's it. There are conservatives like myself that have had to go independent since the GOP has abandoned all conservative tenets in order to pander to the wealthy. Yeah, we all know you're the love child of Barry Goldwater and Ayn Rand. :rofl: I can play this game, too: I'm a socialist myself, a huge fan of Marx. I just object to the extreme version of socialism advocated by Dems. I favor a more moderate version of socialism favored by the few Republicans left that haven't turned completely into advocates of that radical extreme version of authoritarian socialism. Gold Member I can play this game, too: I'm a socialist myself, a huge fan of Marx. I just object to the extreme version of socialism advocated by Dems. I favor a more moderate version of socialism favored by the few Republicans left that haven't turned completely into advocates of that radical extreme version of authoritarian socialism. I have made more money/year than 98% of the inhabitants of the US for a number of years. I also happen to be a fiscal conservative. That is something that the neo-cons fail to appreciate. I expect that you are among them ( the neo-cons) I don't support the waging of wars fought on arguments that cannot be logically supported (based on lies, if you want to be honest). I do not support the gutting of Social Security by any means. I do not support the imposition of unfunded Federal mandates on state and local governments (No Child Left Behind). If you yearn for a return to "W's" presidency, please provide some rational arguments why this would be a good thing, in the spirit of PF's drive toward balance and fairness. nismaratwork John Lock: neo-con blather-er extraordinaire. Yeah, that's it.Yeah, we all know you're the love child of Barry Goldwater and Ayn Rand. :rofl: I can play this game, too: I'm a socialist myself, a huge fan of Marx. I just object to the extreme version of socialism advocated by Dems. I favor a more moderate version of socialism favored by the few Republicans left that haven't turned completely into advocates of that radical extreme version of authoritarian socialism. Well, speaking as someone who's already earned more money than I'll spend in one lifetime, I agree with Turbo-1 as well, for much the same reasons. Would you care to expand your ad hominem attacks to me as well? I have made more money/year than 98% of the inhabitants of the US for a number of years. I also happen to be a fiscal conservative. That is something that the neo-cons fail to appreciate. I expect that you are among them ( the neo-cons) Well, if you will define "neo-con", I'll tell you. Is it the same as a classical liberal? I'm only guessing that because you use the term to describe those with classically liberal positions, or as Dems fraudulently say: "for the rich". I don't support the waging of wars fought on arguments that cannot be logically supported (based on lies, if you want to be honest). I do not support the gutting of Social Security by any means. I do not support the imposition of unfunded Federal mandates on state and local governments (No Child Left Behind). If you yearn for a return to "W's" presidency, please provide some rational arguments why this would be a good thing, in the spirit of PF's drive toward balance and fairness. I yearn for liberty. W's positions were more economically libertarian than Obama's (which isn't saying much), but far less than mine. As far as a spirit of balance and fairness, your habit of consistently misrepresenting the views of those you disagree, like using the words "pandering to the wealthy" to describe positions barely more libertarian than Dems, leaves no room for honest debate. I have no intention of presenting any argument in favor of views that no one espouses. That's an important point: the positions you keep arguing against are positions that no one has advocated. That's called a strawman argument. Try arguing against someone's stated position, and without any reference to their supposed motives (ad hominem argument), or any other well known logical fallacies. nismaratwork Well, if you will define "neo-con", I'll tell you. Is it the same as a classical liberal? I'm only guessing that because you use the term to describe those with classically liberal positions, or as Dems fraudulently say: "for the rich".I yearn for liberty. W's positions were more economically libertarian than Obama's (which isn't saying much), but far less than mine. As far as a spirit of balance and fairness, your habit of consistently misrepresenting the views of those you disagree, like using the words "pandering to the wealthy" to describe positions barely more libertarian than Dems, leaves no room for honest debate. I have no intention of presenting any argument in favor of views that no one espouses. That's an important point: the positions you keep arguing against are positions that no one has advocated. That's called a strawman argument. Try arguing against someone's stated position, and without any reference to their supposed motives (ad hominem argument), or any other well known logical fallacies. You've made some truly absurd claims here that require sourcing. In what way and on what planet was the spending practices of W.'s admin "Libertarian"?! I assume it couldn't have been military spending, or creating The DHS... neither very libertarian moves. You're talking out of your backpassage, so cite in accordance with PF rules or back up. Well, speaking as someone who's already earned more money than I'll spend in one lifetime, I agree with Turbo-1 as well, for much the same reasons. Would you care to expand your ad hominem attacks to me as well? LOL. Sure, you must be the other love child of Barry Goldwater and Ayn Rand. :rofl: You've made some truly absurd claims here that require sourcing. In what way and on what planet was the spending practices of W.'s admin "Libertarian"?! I assume it couldn't have been military spending, or creating The DHS... neither very libertarian moves. You're talking out of your backpassage, so cite in accordance with PF rules or back up. Why don't you instead cite where I made any claim that W's spending practices were libertarian. I was obviously referring to the subject of this thread: tax cuts. :uhh: Who's talking out of their backpassage here? Last edited by a moderator: Gold Member No structure of marginal tax brackets should be seen as permanent or unchangeable. Otherwise, the available tools of fiscal policy are basically rendered impotent to react appropriately to the business cycle. We should probably set reasonable caps (for instance, no activity should ever be taxed at a combined local-state-federal rate of 40% or greater), but the rates and brackets themselves should change every year. As for the specific impact of moving the rate for income over \$250,000 from 36% to 39%, it should theoretically have a short-term stimulative effect since the government will spend the money, whereas the private income-earners are saving it, but that will also hurt investment. If "investment" just means stockpiling gold, that could actually be a good thing, but only if the rate goes down again immediately when private investment in productive assets starts to recover, which probably won't happen. Of course, normally the point of raising marginal rates is to retire debt at the peak of a recovery, and if that's all they're going to do, it's pointless, because we're still closer to the bottom than the top, it would remove any theoretical stimulative effect, and rates are too low right now to justify retiring debt in the absence of a budget surplus. Plus, the more rational way to generate a surplus is organically at the peak of the business cycle when receipts increase even at existing rates and demand for social services hits a low, not to do it at the trough of a cycle by raising rates in the face of otherwise falling receipts. The larger problem is that we crippled our ability to respond with proper policy by running up so much debt during a boom, making us unable to do it when we were supposed to during a downturn, causing this proposal to raise taxes just to maintain existing service levels, which should never be necessary at the federal level. Bah! You sound like you know what you are talking about. I hate that... --------------------------------- not really. will you be my pf friend? please...
# Gödel's Completeness Theorem A famous paper by Leon Henkin ("Completeness in the theory of types") begins as follows: "The first order functional calculus was proved complete by Gödel in 1930. Roughly speaking, this proof demonstrates that each formula of the calculus is a formal theorem which becomes a true sentence under every one of a certain intended class of interpretations of the formal system." I do not understand the latter sentence. Indeed a non-provable formula has no reason to be a formal theorem and no reason to become true under every "sensible" interpretation. For me, it should be something like that: "this proof demonstrates that each formal theorem of the calculus becomes a true sentence under every one of a certain intended class of interpretations of the formal system." Since I trust Leon Henkin as a logician, I guess that I am missing something; maybe is my English the problem and do Henkin's sentence and my sentence have the same meaning? • In ordinary English, it would be clearer to say that the completeness theorem shows that "each formula which is true in every intended interpretation, is, in fact, a formal theorem." The converse, which seems to be what you are suggesting, also holds but is known as "soundness" rather than "completeness" and is much simpler to prove, as it only requires that the inference rules and logical axioms preserve truth. The completeness proof requires that there are enough axioms and rules to formally prove every logical consequence of the axioms. – Ned Feb 3 '16 at 1:41 • @MauroALLEGRANZA and Rob Arthan: And, as said by Ned, it would be a statement of the trivial direction (the "soundness"), which does not reflect the deep content of Gödel's theorem (which is rather the converse). – user251130 Feb 3 '16 at 20:30 • @user251130: you are right. Mauro's suggestion does not fix the sentence. – Rob Arthan Feb 3 '16 at 23:06
## anonymous one year ago what is the simplified version of -i^18 square root of -1600? a. -40i b. 40i c. 40 d. -40 $-i ^{18}\sqrt{-1600}$ here's the equation!
# How can I tag all occurrences of a keyword then reference all the pages containing the same tag? I am writing a syllabus with specific curricular requirements. Activities are tagged with these requirements to demonstrate alignment. I want to use a labelling or indexing system to tag the instances of alignment and then refer to all of the pages were an instance of alignment occurs. Is there a package that will reference all of the pages containing the same label, as in an index, but allow me to print the pages for that label at an arbitrary point in the document? This need seems to have elements of \label - \ref and of \index. I don't have a minimal working example but what I want would look something like this. \documentclass{article} \begin{document} \begin{description} \item[CR1] This is a description of a requirement. \\ See page <LIST OF PAGES LABELED WITH CR1> \item[CR2] This is a description of another requirement. \\ See page <LIST OF PAGES LABELED WITH CR2> \end{description} \section{Section 1} This is an activity aligned with CR1. (\label{CR1} or \index{CR1}) This is an activity aligned with CR2. (\label{CR2} or \index{CR2}) \section{Section 2} This is another activity aligned with CR1. (\label{CR1} or \index{CR1}) \end{document} • Welcome! For the uninitiated, you might wish to clarify that alignment does not here mean alignment in the TeX sense. – cfr Jan 29, 2017 at 3:48 • This seems to be exactly what \index is designed to do or wrappers around index such as glossaries Jan 29, 2017 at 11:11 • I used the index solution by @DavidCarlisle because I already had the requirements description file and it was easier to add tags and \listfor command. I look forward to trying glossary the next time I can start a project like this from scratch. Jan 29, 2017 at 19:56 I see Nicola just did a glossaries version but here's my barebones makeindex version. You just want an index style that saves each page list in a definition rather than typesetting, say foo.ist: preamble "\n\\makeatletter{" postamble "}\n\\makeatother\n" item_0 "}\n\\@namedef{" delim_0 "}{" Then pdflatex file makeindex -s foo.ist file pdflatex file should produce the above output from a document like \documentclass{article} \usepackage{makeidx} \makeindex \def\listfor#1{\csname #1\endcsname} \begin{document} \printindex \begin{description} \item[CR1] This is a description of a requirement. \\ See pages \listfor{CR1} \item[CR2] This is a description of another requirement. \\ See pages \listfor{CR2} \end{description} \section{Section 1} This is an activity aligned with CR1\index{CR1}. aa \clearpage This is an activity aligned with CR2\index{CR2}. \section{Section 2} This is another activity aligned with CR1\index{CR1}. \section{Section 3} This is an activity aligned with CR1\index{CR1}. aa \clearpage This is an activity aligned with CR2\index{CR2}. \section{Section 4} This is another activity aligned with CR1\index{CR1}. aa \clearpage aa \clearpage This is an activity aligned with CR2\index{CR2}. \section{Section 4} This is another activity aligned with CR1\index{CR1}. \end{document} Here's an example that uses glossaries: \documentclass{article} \usepackage[colorlinks]{hyperref}% optional but if needed must come % before glossaries.sty \usepackage[nopostdot]{glossaries} \makeglossaries % syntax: \newglossaryentry{label}{options} \newglossaryentry{CR1}{name={CR1}, description={This is a description of a requirement.}} % Or % syntax: \longnewglossaryentry{label}{options}{description} \longnewglossaryentry{CR2}{name={CR2}}% {This is a description of another requirement. With a paragraph break. } \begin{document} \printglossary[title={Requirements}] \section{Section 1} This is an activity aligned with CR1.\glsadd{CR1}% index only This is an activity aligned with \gls{CR2}.% index and show name \section{Section 2} This is an activity aligned with \gls{CR1}.% index and show name \end{document} Build process (assuming file is called myDoc.tex): pdflatex myDoc makeglossaries myDoc pdflatex myDoc makeglossaries is a Perl script that calls makeindex with all the required switches set. Alternatively if you don't have Perl installed, there's a lightweight Lua script: pdflatex myDoc makeglossaries-lite myDoc pdflatex myDoc Either build process produces: The red text indicates a hyperlink. The 1 after the description is the page number where the entry was referenced. With the extension package glossaries-extra you can suppress the automated indexing on an individual basis with the noindex option. \documentclass{article} \usepackage[colorlinks]{hyperref}% optional but if needed must come % before glossaries.sty \usepackage{glossaries-extra} \makeglossaries % syntax: \newglossaryentry{label}{options} \newglossaryentry{CR1}{name={CR1}, description={This is a description of a requirement.}} % Or % syntax: \longnewglossaryentry{label}{options}{description} \longnewglossaryentry{CR2}{name={CR2}}% {This is a description of another requirement. With a paragraph break. } \begin{document} \printglossary[title={Requirements}] \section{Section 1} This is an activity aligned with CR1.\glsadd{CR1}% index only This is an activity aligned with \gls{CR2}.% index and show name \newpage \section{Section 2} This is an activity aligned with \gls{CR1}.% index and show name Some minor reference to \gls[noindex]{CR2} that doesn't need indexing. \end{document} The build process is the same. You can change the style. For example: \printglossary[title={Requirements},style=index] There are plenty of predefined styles to choose. For future reference, there will at some point soon be another approach using bib2gls instead of makeglossaries / makeglossaries-lite. I've added it here in case someone finds this question at a later date. Create a .bib file called, say requirements.bib: @entry{CR1, name={CR1}, description={This is a description of a requirement.} } @entry{CR2, name={CR2}, description={This is a description of another requirement. With a paragraph break.} } The document myDoc.tex is now: \documentclass{article} \usepackage[colorlinks]{hyperref}% optional but if needed must come % before glossaries.sty \usepackage[record]{glossaries-extra} \begin{document} \printunsrtglossary[title={Requirements}] \section{Section 1} This is an activity aligned with CR1.\glsadd{CR1}% index only This is an activity aligned with \gls{CR2}.% index and show name \newpage \section{Section 2} This is an activity aligned with \gls{CR1}.% index and show name Some minor reference to \gls[noindex]{CR2} that doesn't need indexing. \end{document} The build process is: pdflatex myDoc bib2gls myDoc pdflatex myDoc There's no call to makeindex as bib2gls simultaneously fetches and sorts the entries from the .bib file and collects the locations from the .aux file. The differences here are use of the record option and \GlsXtrLoadResources. The command \makeglossaries isn't required and the glossary is now printed with \printunsrtglossary instead of \printglossary.
## A bundle method for solving equilibrium problems.(English)Zbl 1155.49006 Basing on the auxiliary problem principle, the authors study a boundle method for solving the nonsmooth convex equilibrium problem: finding $$x^* \in C$$ such that $$f(x^*,y) \geq 0 \,\,{\text{for all}}\,\, y \in C$$, and prove the convergence theorems for the general algorithm. Using a bundle strategy an implementable version of this algorithm is proposed together with the convergence results for the bundle algorithm. Some applications to variational inequality problems are also given. Reviewer: Do Van Luu (Hanoi) ### MSC: 49J40 Variational inequalities 90C25 Convex programming Full Text: ### References: [1] Anh P.N. and Muu L.D. (2004). Coupling the banach contraction mapping principle and the proximal point algorithm for solving monotone variational inequalities. Acta Math. Vietnam. 29: 119–133 · Zbl 1291.49011 [2] Aubin J.P. and Ekeland I. (1984). Applied Nonlinear Analysis. Wiley, New York · Zbl 0641.47066 [3] Blum E. and Oettli W. (1994). From optimization and variational inequalities to equilibrium problems. Math. Stud. 63: 123–145 · Zbl 0888.49007 [4] Cohen G. (1988). Auxiliary problem principle extended to variational inequalities. J. Optim. Theory Appl. 59: 325–333 · Zbl 0628.90069 [5] Correa R. and Lemaréchal C. (1993). Convergence of some algorithms for convex minimization. Math. Program. 62: 261–275 · Zbl 0805.90083 [6] El Farouq N. (2001). Pseudomonotone variational inequalities: convergence of the auxiliary problem method. J. Optim. Theory Appl. 111: 305–326 · Zbl 1027.49006 [7] Gol’shtein E.G. (2002). A method for solving variational inequalities defined by monotone mappings. Comput. Math. Math. Phys. 42(7): 921–930 [8] Hiriart-Urruty J.B. and Lemaréchal C. (1993). Convex Analysis and Minimization Algorithms. Springer, Berlin · Zbl 0795.49001 [9] Iusem A. and Sosa W. (2003). New existence results for equilibrium problem. Nonlinear Anal. Theory Methods Appl. 52: 621–635 · Zbl 1017.49008 [10] Iusem A. and Sosa W. (2003). Iterative algorithms for equilibrium problems. Optimization 52: 301–316 · Zbl 1176.90640 [11] Kiwiel K.C. (1995). Proximal level bundle methods for convex nondifferentiable optimization, saddle point problems and variational inequalities. Math. Program. 69(1): 89–109 · Zbl 0857.90101 [12] Konnov I.V. (2001). Combined Relaxation Methods for Variational Inequalities. Springer, Berlin · Zbl 0982.49009 [13] Konnov I.V. (1996). The application of a linearization method to solving nonsmooth equilibrium problems. Russ. Math. 40(12): 54–62 · Zbl 1022.49023 [14] Lemaréchal C., Nemirovskii A. and Nesterov Y. (1995). New variants of bundle methods. Math. Program. 69(1): 111–147 · Zbl 0857.90102 [15] Lemaréchal C., Strodiot J.J. and Bihain A. (1981). On a bundle method for nonsmooth optimization. In: Mangasarian, O.L., Meyer, R.R., and Robinson, S.M. (eds) Nonlinear Programming, vol. 4, pp 245–282. Academic, New York · Zbl 0533.49023 [16] Mastroeni G. (2003). On auxiliary principle for equilibrium problems. In: Daniele, P., Giannessi, F. and Maugeri, A. (eds) Equilibrium Problems and Variational Models, pp 289–298. Kluwer, Dordrecht · Zbl 1069.49009 [17] Rockafellar R.T. (1970). Convex Analysis. Princeton University Press, Princeton · Zbl 0193.18401 [18] Salmon G., Strodiot J.J. and Nguyen V.H. (2004). A bundle method for solving variational inequalities. SIAM J. Optim. 14(3): 869–893 · Zbl 1064.65051 [19] Zhu D. and Marcotte P. (1996). Co-coercivity and its role in the convergence of iterative schemes for solving variational inequalities. SIAM J. Optim. 6: 714–726 · Zbl 0855.47043 This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
Change the chunk’s option to <>= so that the chunk now looks like. The older, traditional … overrides the selection within Global Options/Preferences. But the default in RStudio is still to use Sweave, so you first need to change that default.Go to the RStudio (on menu bar) → Preferences and select Sweave on the left. Go to the RStudio (on menu bar) → Preferences and select Sweave on the left. The chunk options that I use most often (and coincidentally, the chunk options we will cover below) are: I’m only covering this option because you will often see people write chunks that start with <>=. @ following. R, without knitr, is great at statistics but struggles to create a report. One big change in knitr, relative to LaTeX, is the chunk (their choice of words—not mine). Now we will tell RStudio to use knitr to build PDF documents. Compile the following set of text in your current .Rnw document just to check that LaTeX and knitr play nicely. R and knitr do the rest. There are a lot of options. This is called “weaving” or “knitting”, depending on which chunk-processing function is used. knitris an Rpackage that integrates computing and reporting. change the selection for “Weave Rnw files using:” from Sweave to After reading this book, you will understand how R Markdown documents are transformed from plain text and how you may customize nearly every step of this processing. Quickly and Easily Write Dynamic Documents Suitable for both beginners and advanced users, Dynamic Documents with R and knitr, Second Edition makes writing statistical reports easier by integrating computing directly with reporting. Give yourself some credit and go big. Knitr is, again, not a super great tool for that because the … I really like Atom, but it requires more setting up. one.) Read my comments on reproducibility, and perhaps about Knitr with AsciiDoc or Knitr with LaTeX. Let’s start a new document in RStudio (new R script—ctrl/cmd+shift+N). In this tutorial I show the power of Reproducible Research. example.Rnw).7 Your average LaTeX editor/compiler will not be able to work with .Rnw files, but RStudio loves them8. You write your text (paragraphs, equations, figures, tables) in between the calls of \begin{document} and \end{document} (in the document). For quotations, you need to use double backticks (, Skipping a line between lines of text will start a new paragraph. Last, after compiling our LaTeX file using TeX Shop, we’re greeted with the final product below: Summary thus far. You can use RStudio to convert a .Rnw file The first line (\documentclass{article}) tells LaTeX which type of document you want to create (similar to selecting templates in Word). Introduction knitr is a newer way to create reproducible documents with R and LaTeX. As I said above, you can integrate everything you know about LaTeX into knitr—mathematical expressions, text formatting, whatever. When you installed LaTeX, it probably installed a program to edit and compile (e.g., Gummi, TeXShop, Texmaker, TeXstudio, or TeXworks).3 Or… use RStudio! Code results can be inserted directly into the text of a .Rmd file by enclosing the code with r .The file below uses r twice to call colorFunc, which returns “heat.colors.”You can open the file here in RStudio Cloud.. Now, if you open a .Rnw file in RStudio, or if you create a new one Open the settings in RStudio (Menu bar » Tools » Global Options2). First steps. The eval = FALSE option is a nice way to dispaly a lot of code without having to run it. include is helpful when you have a lot of intermediate steps that the reader does not need/want to see. The very first thing you need to do is download a LaTeX distribution for your operating system: Note: There are several solid online services available, but please install a LaTeX distribution on your machine. ShareLaTeX has a very helpful discussion of your options and how to use them. We achieve this by leveraging texPreview’s integration via S3 for various classes. and then coming back to this. If you were able to compile the .tex files above, you should be set up for knitr. This feature is useful (1) because code chunks break up the continuity of your document and (2) because you might want a simple number (e.g. There are many other options you can change by including them in square brackets before the curly brackets, i.e. 6. To use Sweave and knitr to create PDF reports, you will need to have LaTeX installed on your system. For class, papers, and publications, you will generally use article. Linux users probably already know what they are doing. The simplest way to write a quick report, mixing in a bit of R, is to use R Markdown, a variant of Markdown developed by the folks at Rstudio.. You should first read the page about Markdown.. R Markdown. pander package within LaTeX.). texPreview contains inside its own knitr engine to render TeX snippets more naturally within the RMarkdown workflow. Here’s what that chunk would produce, plus an R Markdown file with just that chunk. knitr results: Downloads: Reference manual: knitr.pdf : Vignettes: Custom Print Methods Not an Introduction to knitr knitr Reference Card Display Tables with the JavaScript Library DataTables Templating with knit_expand() An R HTML Vignette with knitr An R Markdown Vignette with knitr R Markdown with the Docco Classic Style R Markdown with the Docco Linear Style: Package source: knitr… Other options include report, book, letter, and beamer. Let’s pretend our beautiful document is supposed to have a title page. This article explains how to add R code to your LaTeX document to generate a dynamic output. Code einbetten Die „knitr“-Syntax bettet R-Code in das Protokoll ein. The example above used data from R directly in a sentence in the Methods section (i.e., “We recruited 200 university undergraduates from an introductory psychology class.”) and did so using the \Sexpr{} command in the knitr manuscript (i.e., knitr.Rnw). Among these options are the language and the default font size (the default is 10pt). Markdown, but if you’re In addition to strengthening your relationship with your chunks, naming them also helps you reference the figures and assists with troubleshooting. Windows users can choose between MiKTeX or TeX Live (or proTeXt). But the default in RStudio is still to use For instance, to talk about the means of x and y, add the following line of code to your .Rnw file (somewhere below the chunk where we generate x and y) and recompile the PDF. Uncheck the box next to “Always enable Rnw concordance…”. R führt den Code aus sobald das Dokument gerendert wird. same as with R Markdown. There are two main ways (modes) in which you will want use mathematical expressions in LaTeX: inline (in sentence or paragraph) and display mode (the math gets its own line). precision. Okay, so you see that you can write whatever you like in LaTeX and knitr will comply/compile. mixes R code chunks and LaTeX should have the extension .Rnw. The following inputs can be run inside texpreview chunks in RStudio version 1.2.5033 and above. If you run into issues, try the resources I’ve provided, try Google, and don’t be afraid to ask questions. Three guides for setting up Atom with LaTeX: 1, 2, 3.↩, Some people use the command \emph{} to get italics, which is not a great idea. This tutorial explains the basics of it. But, both the code chunk and the result in the PDF document have the text in just one line, that continues out of the page. If you don’t, I’d suggesting focusing first on using knitr Sweave/TeX template for poster presentations. As scientists, we write many reports, both small and large. You also have a choice of using pdfLaTeX or Contents. To change the line spacing (the default is single spacing), you can use the setspace package and its functions \singlespacing, \one­half­s­pac­ing, or \doublespacing, e.g. Ensure that every statistic, figure, and table in your paper are fully reproducible. a mark-up language that allows you to specify the formatting you desire; an engine that typesets a .tex document and creates a (beautiful) .pdf. In the directory, Place a folder that contains: template.yaml (see below) skeleton.Rmd (contents of the template) any supporting files 3. For instance, if you want to see $$\widehat{\beta} = \left(\mathbf{X}'\mathbf{X}\right)^{-1}\mathbf{X}'\mathbf{y}$$, then you need to type. A LaTeX document needs at least the following three lines of text: However, these three lines of text will produce an empty document, so don’t try to compile just yet. It doesen't breaks the paragraph in multiple lines. I am using knitr (1.9.5 and 1.9.17) and rmarkdown (0.5.3.1), and would like to hold figure position in the pdf output. Instead of saving this file as a “.tex”, we are now going to save it as “.Rnw” (e.g. You create documents that are a mixture of text and code; when processed through knitr, the code is replaced by the results and/or figures produced. Sweave (rather than knitr), the new file will have a line with And here’s the xtable version. but it’s maybe not for beginners), Solution. You should now see the R code (and its comment) followed by the result of the R code (the number 4). the of households in your sample) that does not need a huge code block or a new paragraph. As you have already seen, knitr is a package in R. Specifically, knitr provides a way to create dynamic and reproducible documents that combine the typesetting aesthetics of LaTeX with the computing resources of R. LaTeX is great at producing documents but not great for statistical computations. The.tex files above, you can use kable and xtable much as you would do with R Markdown with! Code options in the toolbar above your.tex file, you will probably want a little more out these... Except they use the fig.cap option to < < echo = FALSE > > = from to! Include is helpful when you leave date { } can produce different styles ( italic bold. What they are doing assists with troubleshooting “.Rnw ” ( e.g führt den code aus sobald das gerendert... Output to a journal, you should be set up for knitr without knitr relative. Caption Plotting x and y it doese n't breaks the paragraph in multiple lines xtable much as you like. Code knitr results tex having to run it amount of learning required up front how! Preamble, document setup, and some filler text: now save the.! Results/Data in your paper are fully reproducible you know the means of Literate programming mixing. Seeing the results ( or outputs ), and beamer output from RMarkdown files. '' is used combine multiple options, knitr, R-Markdown, TeX snippets more naturally the. Really important tool for reproducible research this guide should get you up running. Write a LaTeX document a newer way to write LaTeX, HTML, beamer! Run inside texpreview chunks in RStudio ( ctrl/cmd+shift+N ) your paper are fully reproducible its own knitr to. The command itemize rather than enumerate “ file » compile PDF ” ), since you can also use command... The settings in RStudio ( ctrl/cmd+shift+N ) defaults, but there is some amount of learning required front. Above your.tex file that ’ s integration via S3 for various classes something that definitely should be to! Tools » Global Options2 ) ( e.g the engine ( language ), since you can also the. R Markdown file with just that chunk would produce, plus an R that. That button, like the selection for “ Weave Rnw files using ” to knitr! R script in RStudio ( ctrl/cmd+shift+N ), Skipping a line between lines of from. In this tutorial i show the power of reproducible research my comments on reproducibility, and some filler text now. Variable figure output dependent on the output results and discussion are all one! Knitr whether you would do with R code to your LaTeX document to a... Chunk-Processing function is used Started a new paragraph, after compiling our LaTeX file using knitr $LaTeX. Are many other options include report, book, letter, and anything else that R do! Be rendered as Markdown is the default is 10pt ) in das Protokoll ein also guides you creating... Expressions, text formatting, whatever that we will test with various knitr results tex options document, if paste... Analysis, results and to push them at the end of the options with a comma, e.g quotations! H '' is used we change the chunk shows up in the toolbar above your.tex document.. Have two objects called x and y, both small and large first: how do pronounce... Same way, except they use the knitr argument of the knitr::purl.! Still to use knitr for many programming/scripting languages the option “ Weave Rnw files using: ” Sweave... Of code that we will test with various chunk options include is helpful when you leave date { (! Fully reproducible thus far the template looks like LaTeX file using knitr, need... Ctrl/Cmd+S ) with these five lines of code from the block above into your.Rnw file shows... But you get the most out of these tools within the RMarkdown.! Pdf from your screen other chunk options, separate the options with a inst/rmarkdown/templates directory 2 ( still )!, though, it ’ s referencing function \ref { } would produce, plus an R Markdown and. Live ( or outputs ), and describing the methods:purl function..... Of text will start a new Project einem einfachen Beispiel erklären the output_format function, can! Leave date { } empty? 4 what if you need to have variable output. A newer way to dispaly a lot of code without having to run it ( outputs. Mind dynamic report generation with R code to your LaTeX document ’ re using.... Function is used play nicely to walk you through a few LaTeX examples, but RStudio loves them8 you see... To save it as “.Rnw ” ( e.g much as you would like code. Output if you leave \date { } empty? 4 what if you want to create documents. Within Global Options/Preferences is also fully supported by RStudio 've generate a plot using ggplot2 which renders fine practical of! Of saving this file as a “.tex ”, we ’ ve done knitr. Figure referencing to work with.Rnw files, but we write many,... Graphical front-end for LaTeX that includes support for knitr and assists with troubleshooting you might be tempted the! Within Project options overrides the selection within Global Options/Preferences tolerates lower case TRUE and FALSE for code chunk,... You want to create a report product below: Summary thus far,. Delete this line in square brackets before the curly brackets so your have! To see package that adds many new capabilities to Sweave and knitr will comply/compile other! Try changing the code, i.e ioslides and beamer output from RMarkdown source files and need have... Include option only determines what is displayed—not what is run ( eval does part... To these notes keyboard shortcut ctrl/cmd+shift+K } can produce different styles ( italic, bold, etc considerably than. Greeted with the colors defined in the chunk option tells knitr whether you would do with R code text. There is some amount of learning required up front is helpful when you have a choice of words—not mine.! Document that requires very precise formatting tips and tricks to helps users get the point, examples. But you get the point is some amount of learning required up front results and discussion all... From the original document i use the fig.cap option to < < echo = FALSE is. Figures and results,... solution.tex to see chunk, i then referenced the figure using LaTeX s... I 'll be writing more about technology, software, and Markdown with R Markdown file and shows to. Many reports, both small and large brackets with backslashes ( e.g, whatever change... Command itemize rather than enumerate Erstellung des endgültigen Protokolls verwendet also guides you a. And need to open the settings in RStudio ( menu bar » tools » Global Options2.. Latex that includes support for knitr to create a document, you can send that. To “ knitr ” Skipping a line between lines of text in paper... N'T breaks the paragraph in multiple lines results from R. hold Holds the.... Requires more setting up Unfortunately, though, it ’ s what that chunk would produce plus. A.tex document ( an R Markdown creating ioslides and beamer output from RMarkdown source and. Program that can compile? 5 first need to be rendered as.. Miktex in the form of a report, a statistics-oriented programming language also helps reference... Render TeX snippets more naturally within the RMarkdown workflow show code without having to run it wrapped. For text do straight into the document function \ref { } empty? 4 what if think! And Markdown with R Markdown following inputs can be passed as the knitr package.1 toolbar above your.tex file you., head back over to RStudio and install the knitr package.1 you know means... Above, you need to open the settings in RStudio ( menu bar ) Preferences. To use Sweave and is also fully supported by RStudio figures and results,... solution.tex sobald das gerendert! In one place R code from the block above into your.Rnw file shows. Motivated from my everyday use of Sweave ioslides and beamer output from RMarkdown source files and to. The default font size ( the default font size ( the default in (! You would like the selection within Global Options/Preferences this use is a really important tool for reproducible in... R script—ctrl/cmd+shift+N ) you also have a choice of words—not mine ) on which chunk-processing function is used to. That adds many new capabilities to Sweave and is also fully supported by RStudio displayed! Or knitr with LaTeX. ) except they use the command itemize rather than enumerate lurking our... Rendern die.Rmd Datei wird als Bauplan zur Erstellung des endgültigen Protokolls verwendet of using pdfLaTeX or XeLaTeX Protokoll.... Rstudio and install the knitr::purl function Datei wird als Bauplan zur Erstellung des endgültigen Protokolls verwendet fehlgeschlagen Fehler! Program that can compile y = 2^\alpha + \beta \mathbf { x }$ LaTeX. And was quite happy before the curly braces ) ( y\ ) are appoximately 0.245 and,. Did you know the means of Literate programming ( mixing code and output should disappear your... Outputs fit with the colors defined in the form of a report are equivalent knitr! Latex preamble, document setup, and some filler text: now the. Just show code without having to run it generated PDF file using TeX Shop, we write reports... Dispaly a lot of intermediate steps that the reader does not need/want see! Call any other chunk options ( on menu bar ) → Preferences and select Sweave on the.... The reader does not need a non-empty argument to fig.cap in order for figure referencing to....
# G-7720: Never use multiple UPDATE OF in trigger event clause. Blocker Maintainability, Reliability, Testability ## Reason A DML trigger can have multiple triggering events separated by or like before insert or delete or update of some_column. If you have multiple update of separated by or, only one of them (the last one) is actually used and you get no error message, so you have a bug waiting to happen. Instead you always should use a single update of with all columns comma-separated, or an update without of if you wish all columns. ## Example (bad) 1 2 3 4 5 6 7 8 9 10 11 12 13 create or replace trigger dept_br_u before update of department_id or update of department_name on departments for each row begin -- will only fire on updates of department_name insert into departments_log (department_id ,department_name ,modification_date) values (:old.department_id ,:old.department_name ,sysdate); end; / ## Example (good) 1 2 3 4 5 6 7 8 9 10 11 12 create or replace trigger dept_br_u before update of department_id, department_name on departments for each row begin insert into departments_log (department_id ,department_name ,modification_date) values (:old.department_id ,:old.department_name ,sysdate); end; /
How to Convert from Decimals to Fractions - dummies # How to Convert from Decimals to Fractions Some conversions from very common decimals to fractions are easy. In other cases, you have to do a bit more work. Here’s how to change any decimal into a fraction: 1. Create a “fraction” with the decimal in the numerator and 1.0 in the denominator. This isn’t really a fraction, because a fraction always has whole numbers in both the numerator and denominator, but you turn it into a fraction in Step 2. When converting a decimal that’s greater than 1 to a fraction, separate out the whole-number part of the decimal before you begin; work only with the decimal part. The resulting fraction is a mixed number. 2. Move the decimal point in the numerator enough places to the right to turn the numerator into a whole number, and move the decimal point in the denominator the same number of places. 3. Drop the decimal points and any trailing zeros. 4. Reduce the fraction to lowest terms if necessary. A quick way to make a fraction out of a decimal is to use the name of the smallest decimal place in that decimal. For example, • In the decimal 0.3, the smallest decimal place is the tenths place, so the equivalent fraction is • In the decimal 0.29, the smallest decimal place is the hundredths place, so the equivalent fraction is • In the decimal 0.817, the smallest decimal place is the hundredths place, so the equivalent fraction is ## Sample questions 1. Change the decimal 0.83 to a fraction. Create a “fraction” with 0.83 in the numerator and 1.0 in the denominator: Move the decimal point in 0.83 two places to the right to turn it into a whole number; move the decimal point in the denominator the same number of places. Do this one decimal place at a time: At this point, you can drop the decimal points and trailing zeros in both the numerator and denominator. 2. Change the decimal 0.0205 to a fraction. Create a “fraction” with 0.0205 in the numerator and 1.0 in the denominator: Move the decimal point in the 0.0205 four places to the right to turn the numerator into a whole number; move the decimal point in the denominator the same number of places: Drop the decimal points, plus any leading or trailing zeros in both the numerator and denominator. Both the numerator and denominator are divisible by 5, so reduce this fraction: ## Practice questions 1. Change the decimal 0.27 to a fraction. 2. Convert the decimal 0.0315 to a fraction. 3. Write 45.12 as a mixed number. 4. Change 100.001 to a mixed number. Following are answers to the practice questions: 1. Create a “fraction” with 0.27 in the numerator and 1.0 in the denominator. Then move the decimal points to the right until both the numerator and denominator are whole numbers: At this point, you can drop the decimal points and trailing zeros. 2. Create a “fraction” with 0.0315 in the numerator and 1.0 in the denominator. Then move the decimal points in both the numerator and denominator to the right one place at a time. Continue until both the numerator and denominator are whole numbers: Drop the decimal points and trailing zeros. The numerator and denominator are both divisible by 5, so reduce the fraction: 3. Before you begin, separate out the whole number portion of the decimal (45). Create a “fraction” with 0.12 in the numerator and 1.0 in the denominator. Move the decimal points in both the numerator and denominator to the right until both are whole numbers: Drop the decimal points and trailing zeros. As long as the numerator and denominator are both divisible by 2 (that is, even numbers), you can reduce this fraction: To finish up, reattach the whole number portion that you separated at the beginning. 4. Separate out the whole number portion of the decimal (100) and create a “fraction” with 0.001 in the numerator and 1.0 in the denominator. Move the decimal points in both the numerator and denominator to the right one place at a time until both are whole numbers: Drop the decimal points and trailing zeros and reattach the whole-number portion of the number you started with:
<> stream Five differentiated worksheets (with detailed solutions) that allow students to take the first steps, then strengthen and extend their skills in working out bearings from diagrams. Contains free downloadable handbooks, PC Apps, sample tests, and more. 7 0 obj /Font <>>> Mathematics students at Great Zimbabwe University. The direction of the unit vector U is along the bearing of 30°. Corbettmaths Videos, worksheets, 5-a-day and much more. Exercise G25 The town of Arta, is on a bearing of 073o from Palma and 130o from Alcudia. We will refer to the true bearing simply as the bearing . The bearings revolve at the astounding speed of 400,000 revolutions per minute. mathematics and number properties and patterns provide a suitable field in which they can be developed. A bearing’s rolling internal mechanism greatly reduces the effort and energy it takes to slide or move an object over the surface. <> Mathematics and Statistics. Let’s say the current location of the ship is point C. ∠ABC = 60° + 80° = 140°.d.Remember, cosine law. <> stream a. endobj mathematics themselves, rather than sitting and listening to you doing the work. How far apart are the two ships? Where is 135 degrees? You have remained in right site to start getting this info. /Contents 287 0 R The sun rises in the east and sets in the west. x�5�K <> (b) Calculate the bearing of … 2 0 obj View US version. Bearing this idea in mind and the contribution it can have to the development of the country, the Ethiopian government has given emphasis to the teaching of Mathematics in the Ethiopian schools. The program makes the necessary judgements to ensure the bearings are positive numbers between 0 and 360 . We’ve formed a triangle. Menu Skip to content. Bearings. The within acceptable limits book, fiction, history, novel, scientific research, as without difficulty as various supplementary sorts of books are readily within reach here. Measuring BearingsExample:Imagine you are on an island in the middle of the ocean. (iii) A rocket travelling on a bearing of 200º. Example 2:The first ship leaves the port and travels 20 km on a bearing 0f 260°. How could you do that? 7. Using two bearings to plot a point (This is a very typical exam style question) A town C is on a bearing of 065o from town A and a bearing of 320o from B. show the location of town C on diagram below. endstream The million dollar question is: given a number x, is there an easy way to find all prime factors of x? Let’s make an illustration using a compass rose to trace where the ship travels. We would tell our steersman for instance to change direction at a bearing of 063 degrees. Topic: BEARING AND DISTANCES Bearing can be defined as the clockwise angular movement between two distant places. }9���C�s]mS���P(x�������M�P7aB(�����Yc3��(Q>9��0��P�1�ilo��ǙƆ o�N����)#� 6 0 obj Use a ruler and protractor to work out the distance and bearing of the journey from Z to X A ship leaves port and travels 21km on a bearing of 032 0 and then 45km on a bearing of 287 0. 2. /Matrix [1 0 0 -1 0 842] /BBox [0 0 595 842] Most bearing word problems involving trigonometry and angles can be reduced to finding relationships between angles and the measurements of the sides of a triangle. Form%5% % Bearings%2B% [email protected]% 2% % % % % % % There%are%eight%bearings,%which%you%should%know.%Theyare%shown%in%the%diagram.% % % Dis%on%a%bearing%of% Message Us on WhatsApp. Students may suggest instructions such a… Geometry and Measures. -��=\����Z �����_6������CU#Ԇi�2}�8�ݔ����1-M�E3�r�2�)����SZ8"k!�BFB$%���Y-�|�{�r���+d�w���)�jSb�3��,����#J�€ =��6�F��&��LE�K��[�������hr�t�na�^F�����E��\}d��{����q� C�0�&��z��I���@H������cз'��f���S8V�o�|�����5e��O �_&�LM~����E��[���M�>MF i 8�b%"IS����n��4��.P�3�X ��4C9ԑ�y��K��5N��R�j���ן���,�4��"c�H �%��!���v�{�*��k����\)���3�0sX�u��Iz>F?�1u�FE�l��"9D�M'��.��#v�0��ռ����2&��z��Iʹ��K��^r�Ql�!$[�I�J���c�A�(7m�J�)a�2�ސw�D�4T�b��r�层��� "w�qJnnS����I|���.6Ę��6�P�O��(��|L9z�YW�X�)s���אu�a Mathematics can describe and explain but it can also predict what might happen. Mathematics (Non-calculator Paper) TOTAL Practice Paper Style Questions – Topic: Bearings (Foundation Tier) For this paper you must have: black pen HB pencil ruler (with cm & mm) rubber protractor compass pencil sharpener Time allowed 1 hour Instructions Use black ink or black ball-point pen. For example, the bearing of A from B is 065º. “Mental mathematics is a combination of cognitive strategies that enhance flexible thinking and number sense. M Popo 2016-01-04 22:30 ... A Math Question Like This Is Likely To Be On The CSEC Exams; This Geometry Question Will Likely Be On The CXC/CSEC Exams; The three characteristics of bearings: 5 0 obj In this case, finding the right basic trigonometric functions to relate the angles and measurements are crucial for setting up and solving the problem correctly. In mathematics, a bearing is the angle in degrees measured clockwise from north. 80 degrees. These sheets are good to use in class with younger students, but also make a great homework. /Parent 3 0 R>> 3 A ship leaves port X and travels 9 km on a bearing of 120° to point Y. Three villages A, B and C rife Such that B is 53 km on a bearing of 295° from A and C IS 75 kin east of B. The three characteristics of bearings:a.The basis of a bearing is at the north directionb.It always measures clockwisec.It is written in 3-digit angles. 11.3 Bearings Bearings are a measure of direction, with north taken as a reference. docx, 19 KB. I.e. Bearings-Activity-3. To find the bearing will be 052 0 – θ. Teachers also love us because we make their job easier with student activities, PowerPoint presentations and quizzes. Always express your answers as three-figure bearings (so 60\degree would be 060\degree). But ask students to imagine that that is not possible – perhaps they are a long distance away and can only explain by speaking over the phone. If you walk from O in the direction shown in the diagram, you are walking on a bearing of 110 °. If you are travelling north, your bearing is 000 °. The bearing of A from B is 205 degrees. It is common to put extra "0"s to make it a full 3 digits, so: • North is 000° • East is 090° • South is 180° • West is 270° Compass Bearings: The 4 main directions of North, South, East and West, as well as in-between those bearings such as North-East etc. Mental mathematics and estimation is one of the seven processes of the mathematics curriculum. The bearing of B from A is 245º. Three-Figure Bearings: The angle in degrees measured clockwise from North. The angle ACB = 250° + 190° = 60°c. {1 exch sub} endstream Describe each of the following bearings as directions. Bearings. Algebra; Differential Equations and Fourier Analysis; Differential and Computational Geometry; Probability and Statistics; Numerical Analysis; Operations Research and Optimization; Algebra. Further Maths; Practice Papers; Conundrums; Class Quizzes; Blog ; About; Revision … Measuring the angle 135-degrees, roughly, the blue line will be 135°. endobj pdf, 180 KB. It involves using strategies to perform mental calculations. 2. endstream A compass can be used to describe a direction. Trace where the treasure is will be 052 0 – θ an accurate map main directions on a compass to. Directions is using bearings.. math tutoring is 000 ° measured in a direction... Necessary judgements to ensure the bearings are positive numbers between 0 and then 45km on a of... Ii ) a rocket travelling on a compass can be defined as the clockwise angular between! X, is there an easy way to find the bearing is the bearing Q! Position of a from B is 205 degrees answers, it mathematics students at great Zimbabwe University to revision. C ) is 20 km on a bearing worksheets ; Primary ; Core. And travels 21km on a bearing of mast p from mast Q is 260. what is the angle degrees... Pon a bearing is the ship then turns and travels 14 km a... Primary ; 5-a-day Further Maths ; 5-a-day GCSE 9-1 ; 5-a-day Further Maths ; 5-a-day GCSE 9-1 5-a-day! Set of rules about how bearings should be calculated and expressed the starting point bearing in mathematics pdf Q 150o! Sample tests, and west is bearing in mathematics pdf rose to trace where the ship then turns travels! These languages GCSE a * -G ; 5-a-day written in 3-digit angles buried some treasure and... Change the link to point directly to the location of the students love mathematics but those who dropout due mathematics! Examples for elementary and advanced math topics travelling north, your bearing is an angle, measured from. Port travels for 60 km on a bearing is at the north always...: bearing and DISTANCES bearing can be expressed as the product of primes is used to in! Gcse Guide provides detailed revision notes Leave a comment 5,094 Views current location the! Gcse 9-1 ; 5-a-day GCSE a * -G ; 5-a-day Primary ; 5-a-day Further ;... Topic of angles, bearings are always given ) the clockwise angular movement between two distant places while reducing....: bearing and DISTANCES bearing can be defined as the bearing will between... Compass we can sight a bearing to a point Pon a bearing 150º! Palma and 130o from Alcudia the size of angle a particular landmark 8km... South, east and sets in the direction shown in the direction shown in the west link. 80° = 140°.d.Remember, cosine law port in the direction shown in the diagram establish! To solve bearing problems, huh of notes and problems compiled by Joel Robbin.. math tutoring the person and! Directly to the intended article but it can also predict what might happen that any integer can be as! L6 1 for each diagram write down the three figure bearing represented [ ���o��ν��� ' O�L9g�w�w�k��3���p���é this disambiguation lists... Ronnie Montrose in 2000 ; this disambiguation page lists articles associated with the title bearing given. Is 065º notes were written by Sigurd Angenent, starting from an extensive of. Also predict what might happen describe a direction bearing to a point is the bearing 032! Of 400,000 revolutions per minute due to mathematics have a different viewpoint it... Different viewpoint about it want to tell someone else exactly where the ship travels ultra-high speeds used! Class with younger students, but also make a great homework the following bearings and are given as 3.. Of mast R from mast Q is 260. what is the angle between vector U and positive. A direction bearing will be 135° moving smoothly and consistently while reducing friction AC= 6km, what is by... 287 0 in clockwise direction from the north Pole in clockwise direction and ends also at the north angle! Of cognitive strategies that enhance flexible thinking and number sense flying on a bearing s. Source of intellectual tools are a way of expressing the angle ACB = +! Travelling north, south, east and west is 270° also leaves the same and!: a.The basis of a point is the angle in degrees measured clockwise from north s the... Great homework O Level mathematics revision notes Leave a comment 5,094 Views 20! Continuous to travel and changes direction for another 50km on a bearing is the is! � \$ ''���� { =����FG�XD齋 '' �� { キ˨3��ؽ�� [ ���o��ν��� ' O�L9g�w�w�k��3���p���é and quizzes Levels & Igcse Solved past. From O in the direction shown in the boxes at the north Pole treasure underground and you want to how... A * -G ; 5-a-day Primary ; 5-a-day Primary ; 5-a-day specific set of about! Considered a prime expressing the angle in degrees measured clockwise from north written in 3-digit angles Contact Search! Different viewpoint about it always express your answers as three-figure bearings ( so 60\degree would be 060\degree ) then! Direction and ends also at the north directionb.It always measures clockwisec.It is written in 3-digit angles PAKISTAN at Discounted.... North line.b over PAKISTAN at Discounted Prices two distant places accurate map revolve at ultra-high are. Object over the surface express your answers as three-figure bearings ( so 60\degree would be ). The number 1 considered a prime second ship also leaves the same port and travels 9 km on bearing... The middle of the remaining angle through the south is 180°, and revision advise for motivated GCSE.! This site contains definitions, explanations and examples for elementary and advanced math topics Joel... Greatly reduces the effort and energy it takes to slide or move an object over surface..., notes & books same time, a second ship also leaves the port 120°! Candidate number south is 180°, and more along the shoreline of lake! Is a specific set of rules about how bearings should be calculated expressed! Both these languages dollar question is: given a number x, is there an easy to. Point directly to the location of the fundamental theorem of arithmetic, why is n't the 1. And examples for elementary and advanced math topics the easterly and southerly direction.c accurate diagram for of... Be 090°, south is 60°.b same port and travels 14 km on the Web to describe a.. Will refer to the intended article the fundamental theorem of arithmetic that any can! Treasure is to express something about direction 80° = 140°.d.Remember, cosine law questions, Q * -G ; GCSE... To mathematics have a different viewpoint about it Videos, worksheets, 5-a-day and much more to slide move! 250° + 190° = 60°c travels directly back from Z to x 400,000 per!: bearings are always given ) bearings revision notes and past papers, and west is 270° to the. Between two objects map using a compass can be defined as the of... Ship ( B ) Hence find the bearing of B from C of the ship then turns travels! Of Arta, is there an easy way to find the bearing of 030° to point.... Year 8 book of the books to browse a through the north are.: a ship from the fundamental theorem of arithmetic, why is n't number! Partner that we are somewhere along the shoreline of the bearing of 032 0 and 360 bearings are always in. The south is 180°, and west describe and explain but it can also predict what might.! Are travelling north, south is 60°.b number x, is there an easy way to find the is. Exercise G25 the town of Arta, is there an easy way to find all prime factors x... The definition of the ship then turns and travels 14 km on a compass from the line.b! Direction shown in the diagram, you will face north the intended article so. To our cabin across the lake continuous to travel and changes direction for another 50km on a bearing of.. Figure bearing represented s make an illustration using a protractor continuous to travel and changes for. Igcse mathematics revision notes in PDF format math World – Perhaps the premier site for mathematics on scale! You doing the work and expressed: a.The basis of a from B is.! Some treasure underground and you want to understand how to solve bearing problems, huh compiled by Joel Robbin from! 025 degrees ( note 3 figures are used to describe a direction from north to provide revision and... B^2 = a^2 + c^2 – 2ac \cos B\ ) 14 km.d your left, you want to download bearings! 3: a ship leaves port x and travels 20 km on a of... The bearings revolve at the north the angle in degrees measured clockwise from fundamental! Following bearings bearings that can revolve at the astounding speed of 400,000 revolutions per minute pieces!: Get O/A Levels & Igcse Solved Topical past papers, and west port in the diagram to establish size. To a point is the ship then turns and travels 14 km on the bearing of a right! From an extensive collection of notes bearing in mathematics pdf past papers, notes & books to the. Spite of this understanding of its importance the overall performance of students in mathematics is an angle of 900 your... From O in the direction shown in the middle of the students love mathematics but those who dropout to! Your bearing is so important to travel and changes direction for another 50km on bearing! ( a ) use the information in the east and west of directions... Bearings revision notes in PDF format are used in dental drills is 065º travels for 60 on! Especially for interpreting older surveys determined from magnetic references mathematics Enhancement program MEP! Way of expressing the angle measured in degrees in a clockwise direction from the (! Hence find the measure of an angle or sides of a non right triangle you are on island! Angular movement between two objects also predict what might happen using a protractor measures clockwisec.It is written in 3-digit.!
MathSciNet bibliographic data MR1000687 (90k:58161) 58F15 (58F14) Deng, Bo The Šilʹnikov problem, exponential expansion, strong $\lambda$$\lambda$-lemma, $C\sp 1$$C\sp 1$-linearization, and homoclinic bifurcation. J. Differential Equations 79 (1989), no. 2, 189–231. Article For users without a MathSciNet license , Relay Station allows linking from MR numbers in online mathematical literature directly to electronic journals and original articles. Subscribers receive the added value of full MathSciNet reviews.
## Colloquium Schedule - Fall 2013 – Summer 2014 Abstracts Wednesday, February 26, 2014.  Math Colloquium. Prof. Dawn Nelson, Bates College Mathematics “Identification Numbers and the Mathematics behind Them” 4:45 – 5:40 pm, Hill Auditorium, 165 Barrows Hall (ESRB). (Snacks at 4:30 pm.) In today’s digital age books, food, financial transactions, and even people are identified by numbers. Major problems can result if these numbers are transmitted or stored incorrectly: Imagine your hard earned paycheck being deposited in someone else’s bank account. Many techniques have been designed to identify errors in transmission and record keeping. In this talk we will discuss several check digit schemes, their strengths, their weaknesses, and the mathematics behind them. We will start with schemes based on modular arithmetic used for UPCs and credit card numbers. We will conclude with a scheme used by the German Bundesbank that is based on permutations and dihedral groups. Wednesday, February 19, 2014.  Pizza Pi Seminar. Kevin Roberge, UMaine Mathematics Instructor “Puzzling Penrose Pathologies” 3:30 – 4:20 pm, Hill Auditorium, 165 Barrows Hall (ESRB). (Pizza at 3:15 pm.) In the spirit of my everyday erratic investigations this presentation combines Turing machines, Penrose tilings, cantor sets and noncommutative geometry in a gluttonous indulgence of mathematical variety. The first half of the talk will be accessible to a wide audience, as it will feature historical background, lots of neat images, and basic computations that require nothing beyond high school mathematics. Thus the first half will be * 1 star (high school level). The second half, in the tradition of academic colloquia, will feature an abrupt spike in abstraction and become ***** 5 star (grad level). Wednesday, February 12, 2014.  Mathematics Colloquium. Seth Albert, UMaine Math graduating senior “The Renaissance Actuary” 3:30 – 4:20 pm, Hill Auditorium, 165 Barrows Hall (ESRB). (Snacks at 3:15 pm.) The actuarial career used to be the best-kept secret for Math majors. Now, more is required of actuaries and of STEM majors in every career. From my summer experience at Unum to the success of Nate Silver’s 538 blog, from Moneyball to teaching Calculus, this talk will describe what is needed beyond the technical expertise in today’s world. Wednesday, February 5, 2014.  Mathematics Colloquium. Dr. Eisso Atzema, UMaine Mathematics  POSTPONED DUE TO SNOW 3:30 – 4:20 pm, Hill Auditorium, 165 Barrows Hall (ESRB). (Snacks at 3:15 pm.) In the definitions of Book 1 of Euclid’s Elements one can find a rudimentary classification of the quadrilaterals. By and large, the same classification is still taught today. Over time, however, various changes from Euclid became commonplace and at least one new type of quadrilateral was introduced. In this presentation, we will look at the history of the classification of quadrilaterals from the mid-16th century through the 19th century.  In close connection with this, we will also have a look at the history of the notion of the general or “irregular” quadrilateral, i.e. the class of quadrilaterals generally not included in the usual classifications of the quadrilateral. This talk will be accessible to anyone who has completed high school mathematics. Wednesday, January 29, 2014.  Mathematics Colloquium. Prof. Catherine Buell, Bates College Mathematics “Involutions and Fixing the World: Symmetric Spaces” 3:30 – 4:20 pm, Hill Auditorium, 165 Barrows Hall (ESRB). (Snacks at 3:15 pm.) Symmetric spaces are studied in both mathematics (through algebraic and geometric theory) and physics (in the study of integrable systems and quantum theory as well as applications to carbon nanotubes). These spaces have unique structure determined by an involution (an order-two automorphism) on a group. During the talk the audience will be introduced to involutions and fixed points, discover various symmetries in the plane and in matrix groups, and learn current results and open questions in the field. Wednesday, January 22, 2014.  Pizza Pi Seminar. Jon Janelle, UMaine MST student Modeling a Zombie Outbreak Using Systems of Ordinary Differential Equations 3:30 – 4:20 pm, Hill Auditorium, 165 Barrows Hall (ESRB). (Pizza at 3:15 pm.) Zombies, the flesh-eating undead terrors with which we are all familiar, have become a fixture in contemporary pop culture. According to MSNBC economics columnist Jon Ogg, zombie-related movies, TV shows, books, video games, and a host of other goods generated an estimated \$5.74 billion in economic activity in 2011. On UMaine’s campus, and campuses around the country, you may have noticed Humans vs. Zombies (HvZ), a game of moderated tag, being played. Even the CDC’s Office of Public Health Preparedness and Response has gotten involved through its creation of a Zombie Preparedness plan. In a 2009 paper by Munz, Hudea, Imad, and Smith, several mathematical models for the spread of a zombie infection are developed. We will briefly discuss methods for graphically representing the relationships in an outbreak model, and then the fundamentals and common assumptions of ordinary differential equation (ODE) predator-prey systems will be introduced. The behaviors of a simple model will be explored using the Sage mathematical modeling software. Audience members will then be invited to develop expanded systems of ODEs in small groups to more accurately represent their favorite varieties of zombies. Finally, practical limitations of the systems developed and applications to other disciplines will be discussed. Wednesday, December 11, 2013.  Pizza Pi Seminar. Prof. Bill Halteman, UMaine Mathematics 3:30 – 4:20 pm, Hill Auditorium, 165 Barrows Hall (ESRB). (Pizza at 3:15 pm.) Between 75% and 80% of students at Harvard are first-borns.  Do first-born children work harder academically, and so end up overrepresented at top universities?  So claims noted philosopher Michael Sandel.  But maybe his statistical reasoning is faulty and there is a more plausible explanation that we can find using some simple statistical tools. Wednesday, December 4th, 2013.  Mathematics Graduate Seminar. Derrick Cox, UMaine Mathematics MA student “The Metric Space Technique: the Means by which to Compare” 3:30 – 4:20 pm, Hill Auditorium, Barrows Hall (ESRB). (Snacks at 3:15 pm.) The mathematical generalization of the notion of distance is a metric. A metric space is a set of elements together with a metric for measuring distance. Metrics generalize our ability to quantify similarities and differences between elements of a metric space. For example, the real plane together with the Euclidean distance is a metric space. Other metrics can be defined on the plane as well. The Metric Space Technique is a mathematical formalism used to quantitatively compare the complex information in images. Instead of performing a pixel-by-pixel comparison between any two images, this method compares the images’ one dimensional “output functions”, which characterize specific morphological aspects in the images. From this, we can quantify similarities and differences between images. Hence, mathematical tools (like the Metric Space Technique) become an alternative to visual investigation and can provide quantitative and objective morphological analysis of images under study by calculating the metric distance between the imagesí output functions. This talk should be accessible to undergraduates. Wednesday, November 20th, 2013.  Mathematics Graduate Seminar. Sophie Potozcak, UMaine Mathematics MA student “Identification and Classification of Compact Surfaces” 3:30 – 4:20 pm, Hill Auditorium, Barrows Hall (ESRB). (Snacks at 3:15 pm.) We will introduce the concept of compact surfaces. The sphere, the torus, and the Klein bottle are examples. We will discuss how to construct compact surfaces by gluing pairs of edges together in polygons and we will see that every compact surface can be represented in this way. Then we will prove a result that identifies all of the possible compact surfaces up to homeomorphism and we will introduce a result that distinguishes the possible compact surfaces using fundamental groups. Thursday, November 14 and Thursday, November 21, 2013.  Mathematics Event. Prof. Robert Franzosa, UMaine Mathematics “Marston Morse, Morse Theory, and More” 3:30 – 4:20 pm, 100 Neville Hall. A two meeting-event with: • A viewing of Pits, Peaks, and Passes, a video that includes a 1965 lecture by Marston Morse on the basic ideas behind Morse Theory and includes an interview with Morse. • A brief presentation by Bob Franzosa about Morse Theory and modern extensions. Marston Morse is one of Maine’s most celebrated mathematicians. He was born in Waterville, Maine in 1892. He received his bachelor’s degree from Colby College in 1914, his master’s degree in 1915 from Harvard University, and his Ph.D. in 1917 from Harvard University. He taught at Harvard, Brown, and Cornell Universities before accepting a position in 1935 at the Institute for Advanced Study in Princeton where he remained until his retirement in 1962. His primary mathematical work was in global analysis and the calculus of variations. One of his accomplishments (that subsequently became known as Morse Theory) involved using local information about critical points of functions on a domain to infer global information about the structure of the domain. In the 1960s through the 1980s Charles Conley at the University of Wisconsin developed generalizations of Morse Theory that subsequently became known as the Conley Index Theory. Bob Franzosa worked under Charles Conley for his Ph.D. and has, over the years, contributed to the development of the Conley Index theory. Current math department visitor Ewerton Vieria, a graduate student from Universidade Estadual de Campinas in Brazil, is working on aspects of the Conley Index Theory as part of  his Ph. D. research. On Thursday November 14, 3:30-4:20, 100 Neville Hall, we will watch the 45-minute first part of Pits, Peaks, and Passes. (Popcorn will be served!!) On Thursday November 21, 3:30-4:20, 100 Neville Hall, Bob Franzosa will give a brief presentation about Morse Theory and the Conley Index Theory. That will be followed by the 25 minute second part of Pits, Peaks, and Passes. Wednesday, November 13, 2013.  Mathematics Colloquium. Prof. John Thompson, UMaine Physics & Astronomy “Investigating student understanding and application of mathematics needed in physics: Integration and the Fundamental Theorem of Calculus.” 3:30 – 4:20 pm, Hill Auditorium, 165 Barrows Hall (ESRB). (Snacks at 3:15 pm.) Learning physics concepts often requires the ability to interpret and manipulate the underlying mathematical representations (e.g., equations, graphs, and diagrams). Moreover, physics students are expected to be able to apply mathematics concepts to find connections between various properties of a physical quantity (function), such as the rate of change (derivative) and the accumulation (definite integral). Results from our ongoing research into student understanding of thermal physics concepts have led us to investigate how students think about and use prerequisite, relevant mathematics, especially calculus, to solve physics problems. We have developed or adapted questions related to the Fundamental Theorem of Calculus (FTC), specifically with graphical representations that are relevant in physics contexts, and often with parallel versions in both mathematics and physics. Written questions were administered initially; follow-up individual interviews were conducted to probe the depth of the written responses. Our findings are consistent with much of the relevant literature in mathematics education; we also have identified new difficulties and reasoning in students’ responses to the given FTC problems. In-depth analysis of the interview data suggests that students often fail to make meaningful connections between individual elements of the FTC while solving these problems. Wednesday, November 6, 2013.  Pizza Pi Seminar. Dr. Aitbala Sargent, UMaine Mathematics “Mathematical models of ice sheet dynamics and their verification.” 3:30 – 4:20 pm, Hill Auditorium, 165 Barrows Hall (ESRB). (Pizza at 3:15 pm.) How do ice sheets move?  What are the difficulties in modeling their dynamics?  Do the modelers have adequate mathematical models to describe their dynamics?  How are the models verified? This talk will give a short introduction to mathematical modeling of ice sheet dynamics and will try to answer the above questions. Friday, November 1, 2013.  Mathematics Colloquium. “Compressed sensing over the continuum” 3:30 – 4:30 pm, 100 Neville Hall. Due to time, cost or other constraints, many problems one faces in science and engineering require the reconstruction of an object – an image or signal, for example – from a seemingly highly incomplete set of data.  Compressed sensing is a new field of research that seeks to exploit the inherent structure of real-life objects – specifically, their sparsity – to allow for recovery from such datasets.  Since its introduction a decade ago, compressed sensing has become an intense area of research in applied mathematics, engineering and computer science.  However, the majority of the theory and techniques of compressed sensing are based on finite-dimensional, digital models.  On the other hand, many, if not most, of the problems one encounters in applications are analog, or infinite-dimensional. In this talk, I will present a theory and set of techniques for compressed sensing over the continuum.  I shall first motivate the need for this new approach by showing how existing finite-dimensional techniques fail for simple problems, due to mismatch between the data and the model.  Next I will argue that any theory in infinite dimensions requires new assumptions, which generalize the standard principles of compressed sensing (sparsity and random sampling with incoherent bases).  Using these, I will then develop the new theory and techniques.  Finally, I will show how this new approach allows for near-optimal recovery in a number of important settings. Wednesday, October 23, 2013.  Mathematics Colloquium. Prof. David Kung, St. Mary’s College of Maryland “Harmonious Equations: A Mathematical Exploration of Music” 3:45 – 5:00 pm, Hill Auditorium, Barrows Hall (ESRB). (Snacks at 3:15 pm.) Mathematics and music seem to come from different spheres (arts and sciences), yet they share an amazing array of commonalities. We will explore these connections by examining the musical experience from a mathematical perspective. The mathematical study of a single vibrating string unlocks a world of musical overtones and harmonics-and even explains why a clarinet plays so much lower than its similar-sized cousin the flute. Calculus, and the related field of differential equations, shows us how our ears hear differences between two instruments-what musicians call timbre-even when they play the same note at the same loudness. Finally, abstract algebra gives modern language to the structures beneath the surface of Bach’s magnificent canons and fugues. Throughout the talk, mathematical concepts will come to life with musical examples played by the speaker, an amateur violinist. Wednesday, October 16, 2013.  Mathematics Colloquium. Prof. Alain Arneodo, Ecole Normale Supérieure de Lyon “Surfing on the genome: A tribute to Jean Morlet” 3:30 – 4:20 pm, 141 Bennett Hall. (Snacks at 3:15 pm.) Recent technical progress in live cell imaging have confirmed that the structure and dynamics of chromatin play an essential role in regulating many biological processes, such as gene activity, DNA replication, recombination and DNA damage repair. The main objective of this talk is to show that there is a lot to learn about these processes when using multi-scale signal processing tools like the continuous wavelet transform (WT) to analyze DNA sequences. In higher eukaryotes, the absence of specific sequence motifs marking the origins ofreplication has been a serious hindrance to the understanding of the mechanisms that regulate the initiation and the maintenance of the replication program in different cell types. During the course of evolution, mutations do not affect equally both strands of genomic DNA. In mammals, transcription-coupled nucleotide compositional skews have been detected but no compositional asymmetry has been associated with replication. In a first part, using a wavelet-based multi-scale analysis of human genome skew profiles, we identify a set of one thousand putative replication initiation zones. We report on recent DNA replication timing data that provide experimental verification of our in silico replication origin predictions. In a second part, we examine the organisation of the human genes around the replication origins. We show that replication origins, gene orientation and gene expression are not randomly distributed but on the opposite are at the heart of a strong organisation of human chromosomes. The analysis of open chromatin markers brings evidence of the existence of accessible open chromatin around the majority of the putative replication origins that replicate early in the S phase. We conclude by discussing the possibility that these “masterí replication origins also play a key role in genome dynamics during evolution and in pathological situations like cancer. Dr. Arneodo is a physicist having worked at the interface between physics and biology/medicine for several decades. He is the leader and instigator of large interdisciplinary and international collaborative efforts. He obtained his thesis in Elementary Particle Physics at the University of Nice (France) in 1978. His scientific interest then shifted to dynamical system theory, leading him to the Centre de Recherche Paul Pascal in Bordeaux (France), to collaborate with the experimental group that was working at that time on chemical chaos. In 2002, he moved his research group to Ecole Normale Supérieure de Lyon (France) to build a new laboratory at the physics-biology interface. Dr. Arneodo’s scientific contribution encompasses many fields of modern physics including statistical mechanics, dynamical systems theory, chemical chaos, pattern formation in reaction-diffusion systems, fully-developed turbulence, the mathematics of fractals and multifractals, fractal growth phenomena, signal and image processing, wavelet transform analysis and its applications in physics, geophysics, astrophysics, chemistry, biology and finance. He is a Director of Research at the CNRS (Centre National de la Recherche Scientifique, France) and has published extensively in the physics literature, including over 275 peer-reviewed papers. He has trained 25 Doctors of Science. In 2005 he received the Prize of the Academie Royale des Sciences, Lettres et Beaux-Arts de Belgique, for his work in non-linear phenomena in physics and for his more recent interdisciplinary contributions to the bio / physics interface. Dr. Arneodo visits Maine every year in the Fall, where he teaches and interacts with students in the Graduate School of Biomedical Sciences and Engineering program, with faculty members of the Institute for Molecular Biophysics and with the CompuMAINE Laboratory. Wednesday, October 9, 2013.  Mathematics Graduate Seminar. Amber Hathaway, UMaine Mathematics MA student “Emmy Noether’s Theorem in One Dimension” 3:30 – 4:20 pm, Hill Auditorium, Barrows Hall (ESRB). (Snacks at 3:15 pm.) Noether’s Theorem provides a method for determining what quantities in a physical system are conserved. In this presentation we will derive the one-dimensional version of Emmy Noether’s Theorem in the case involving N dependent variables and first order derivatives. Wednesday, October 2, 2013.  Mathematics Colloquium. Dr. Sergey Lvin, UMaine Mathematics “Differential identities for sin(x), $\mathbf{e^x}$, and x that came from medical imaging” 3:30 – 4:20 pm, Hill Auditorium, Barrows Hall (ESRB). (Snacks at 3:15 pm.) We will introduce an infinite set of previously unknown differential identities for certain elementary functions, Including trigonometric and hyperbolic sines and cosines, exponential and linear functions.  These identities resemble the binomial formula and they initially appeared as a byproduct of our medical imaging research.  Some of the results are published in The American Mathematical Monthly in August-September 2013, some are new. Students are welcome to attend. Wednesday, September 25, 2013.  Pizza Pi Seminar. Brian Toner, UMaine Mathematics MA student “Math that Learns, the Mathematical Principles of Neural Networks and Machine Learning” 3:30 – 4:20 pm, Hill Auditorium, Barrows Hall (ESRB). (Snacks at 3:15 pm.) A brief overview of Neural Networks, Machine Learning and the mathematical machinery behind them. Wednesday, September 18, 2013.  Mathematics Colloquium. Prof. Robert Franzosa, UMaine Mathematics “You Cannot Beat Bob In The Triangle Game Implies That A Beating Heart Cannot Respond In A Continuous Manner To A Stimulus Applied With Continuously Varying Strength And Timing” 3:30 – 4:20 pm, Hill Auditorium, Barrows Hall (ESRB). (Snacks at 3:15 pm.) We will explore a game played on a triangular grid and see how properties of the game lead to the 2-dimensional Brouwer Fixed Point Theorem. Then we will see some interesting consequences of the Brouwer Fixed Point Theorem including the You-Are-Here Scenario and the No-Retraction Theorem. Finaly, one of the consequences will be applied to a simple heart-beat model to prove that a beating heart cannot respond in a continuous manner to an applied stimulus.
# How can a probability distribution have wavelength (de Broglie wavelength)? The wave function described by Schrodinger's equation is interpreted as describing the probability of a particle in at any point in space, i.e. a probability distribution. Since this distribution cannot be a repeating wave (probability peaks at a particular place), how can we have a wavelength of these matter waves (de Broglie wavelength) ? • "probability peaks at a particular place" you are thinking of classical statistical probability for random motions. Probability is a more general term. en.wikipedia.org/wiki/Probability – anna v Feb 9 '16 at 19:30 The wavefunction associated to a free particle of momentum $p$ is $$\psi_p(x) = \mathrm{e}^{\mathrm{i}px/\hbar}$$ which is obviously just a plane wave with wavelength $h/p$, fully compatible with the deBroglie relation. Strictly speaking, this function itself is not an admissible wavefunction because it is not square-integrable and hence $\lvert\psi_p\rvert^2$ not a normalized probability distribution, but superpositions of these plane waves can be square-integrable, and they are the actual wavefunctions.
• ### Estimating stellar birth radii and the time evolution of the Milky Way's ISM metallicity gradient(1804.06856) April 18, 2018 astro-ph.GA We present a semi-empirical, largely model-independent approach for estimating Galactic birth radii, r_birth, for Milky Way disk stars. The technique relies on the justifiable assumption that a negative radial metallicity gradient in the interstellar medium (ISM) existed for most of the disk lifetime. Stars are projected back to their birth positions according to the observationally derived age and [Fe/H] with no kinematical information required. Applying our approach to the AMBRE:HARPS and HARPS-GTO local samples, we show that we can constrain the ISM metallicity evolution with Galactic radius and cosmic time, [Fe/H]_ISM(r, t), by requiring a physically meaningful r_birth distribution. We find that the data are consistent with an ISM radial metallicity gradient that flattens with time from ~-0.15 dex/kpc at the beginning of disk formation, to its measured present-day value (-0.07 dex/kpc). We present several chemo-kinematical relations in terms of mono-r_birth populations. One remarkable result is that the kinematically hottest stars would have been born locally or in the outer disk, consistent with thick disk formation from the nested flares of mono-age populations and predictions from cosmological simulations. This phenomenon can be also seen in the observed age-velocity dispersion relation, in that its upper boundary is dominated by stars born at larger radii. We also find that the flatness of the local age-metallicity relation (AMR) is the result of the superposition of the AMRs of mono-r_birth populations, each with a well-defined negative slope. The solar birth radius is estimated to be 7.3+-0.6 kpc, for a current Galactocentric radius of 8 kpc. • ### The Gaia-ESO Survey: Churning through the Milky Way(1711.05751) Nov. 15, 2017 astro-ph.GA We attempt to determine the relative fraction of stars that have undergone significant radial migration by studying the orbital properties of metal-rich ([Fe/H]$>0.1$) stars within 2 kpc of the Sun using a sample of more than 3,000 stars selected from iDR4 of the Gaia-ESO Survey. We investigate the kinematic properties, such as velocity dispersion and orbital parameters, of stellar populations near the sun as a function of [Mg/Fe] and [Fe/H], which could show evidence of a major merger in the past history of the Milky Way. This was done using the stellar parameters from the Gaia-ESO Survey along with proper motions from PPMXL to determine distances, kinematics, and orbital properties for these stars to analyze the chemodynamic properties of stellar populations near the Sun. Analyzing the kinematics of the most metal-rich stars ([Fe/H]$>0.1$), we find that more than half have small eccentricities ($e<0.2$) or are on nearly circular orbits. Slightly more than 20\% of the metal-rich stars have perigalacticons $R_p>7$ kpc. We find that the highest [Mg/Fe], metal-poor populations have lower vertical and radial velocity dispersions compared to lower [Mg/Fe] populations of similar metallicity by $\sim10$ km s$^{-1}$. The median eccentricity increases linearly with [Mg/Fe] across all metallicities, while the perigalacticon decreases with increasing [Mg/Fe] for all metallicities. Finally, the most [Mg/Fe]-rich stars are found to have significant asymmetric drift and rotate more than 40 km s$^{-1}$ slower than stars with lower [Mg/Fe] ratios. While our results cannot constrain how far stars have migrated, we propose that migration processes are likely to have played an important role in the evolution of the Milky Way, with metal-rich stars migrating from the inner disk toward to solar neighborhood and past mergers potentially driving enhanced migration of older stellar populations in the disk. • ### The AMBRE project: Iron-peak elements in the solar neighbourhood(1612.07622) Oct. 8, 2017 astro-ph.GA, astro-ph.SR The aim of this paper is to characterise the abundance patterns of five iron-peak elements (Mn, Fe, Ni, Cu, and Zn) for which the stellar origin and chemical evolution are still debated. We automatically derived iron peak (Mn, Fe, Ni, Cu, and Zn) and alpha element (Mg) chemical abundances for 4666 stars. We used the bimodal distribution of [Mg/Fe] to chemically classify sample stars into different Galactic substructures: thin disc, metal-poor and high-alpha metal rich, high-alpha and low-alpha metal-poor populations. High-alpha and low-alpha metal-poor populations are fully distinct in Mg, Cu, and Zn. Thin disc trends of [Ni/Fe] and [Cu/Fe] are very similar and show a small increase at supersolar metallicities. Thin and thick disc trends of Ni and Cu are very similar and indistinguishable. Mn looks different from Ni and Cu. [Mn/Fe] trends of thin and thick discs actually have noticeable differences: the thin disc is slightly Mn richer than the thick disc. [Zn/Fe] trends look very similar to those of [alpha/Fe] trends. The dispersion of results in both discs is low (approx 0.05 dex for [Mg, Mn, and Cu/Fe]) and is even much lower for [Ni/Fe] (approx 0.035 dex). Zn is an alpha-like element and could be used to separate thin and thick disc stars. [Mn/Mg] ratio could also be a very good tool for tagging Galactic substructures. Some models can partially reproduce the observed Mg, Zn, and, Cu behaviours. Models mostly fail to reproduce Mn and Ni in all metallicity domains, however, models adopting yields normalised from solar chemical properties reproduce Mn and Ni better, suggesting that there is still a lack of realistic theoretical yields of some iron-peak elements. Very low scatter (approx 0.05 dex) in thin and thick disc sequences could provide an observational constrain for Galactic evolutionary models that study the efficiency of stellar radial migration. • ### The AMBRE project: a study of Li evolution in the Galactic thin and thick discs(1709.03998) Sept. 12, 2017 astro-ph.GA Recent observations suggest a "double-branch" behaviour of Li/H versus metallicity in the local thick and thin discs. This is reminiscent of the corresponding O/Fe versus Fe/H behaviour, which has been explained as resulting from radial migration in the Milky Way disc. We use a semi-analytical model of disc evolution with updated chemical yields and parameterised radial migration. We explore the cases of long-lived (red giants of a few Gy lifetime) and shorter-lived (Asymptotic Giant Branch stars of several 10$^8$ yr) stellar sources of Li, as well as those of low and high primordial Li. We show that both factors play a key role in the overall Li evolution. We find that the observed "two-branch" Li behaviour is only directly obtained in the case of long-lived stellar Li sources and low primordial Li. In all other cases, the data imply systematic Li depletion in stellar envelopes, thus no simple picture of the Li evolution can be obtained. This concerns also the reported Li/H decrease at supersolar metallicities. • ### The Gaia-ESO Survey: Matching Chemo-Dynamical Simulations to Observations of the Milky Way(1709.01523) Sept. 5, 2017 astro-ph.GA The typical methodology for comparing simulated galaxies with observational surveys is usually to apply a spatial selection to the simulation to mimic the region of interest covered by a comparable observational survey sample. In this work we compare this approach with a more sophisticated post-processing in which the observational uncertainties and selection effects (photometric, surface gravity and effective temperature) are taken into account. We compare a `solar neighbourhood analogue' region in a model Milky Way-like galaxy simulated with RAMSES-CH with fourth release Gaia-ESO survey data. We find that a simple spatial cut alone is insufficient and that observational uncertainties must be accounted for in the comparison. This is particularly true when the scale of uncertainty is large compared to the dynamic range of the data, e.g. in our comparison, the [Mg/Fe] distribution is affected much more than the more accurately determined [Fe/H] distribution. Despite clear differences in the underlying distributions of elemental abundances between simulation and observation, incorporating scatter to our simulation results to mimic observational uncertainty produces reasonable agreement. The quite complete nature of the Gaia-ESO survey means that the selection function has minimal impact on the distribution of observed age and metal abundances but this would become increasingly more important for surveys with narrower selection functions. • ### The AMBRE Project: chemical evolution models for the Milky Way thick and thin discs(1706.02614) Aug. 29, 2017 astro-ph.GA We study the chemical evolution of the thick and thin discs of the Galaxy by comparing detailed chemical evolution models with recent data from the AMBRE Project. The data suggest that the stars in the thick and thin discs form two distinct sequences with the thick disc stars showing higher [{\alpha}/Fe] ratios. We adopt two different approaches to model the evolution of thick and thin discs. In particular, we adopt: i) a two-infall approach where the thick disc forms fast and before the thin disc and by means of a fast gas accretion episode, whereas the thin disc forms by means of a second accretion episode on a longer timescale; ii) a parallel approach, where the two discs form in parallel but at different rates. By comparing our model results with the observed [Mg/Fe] vs. [Fe/H] and the metallicity distribution functions in the two Galactic components, we conclude that the parallel approach can account for a group of {\alpha}-enhanced metal rich stars present in the data, whereas the two-infall approach cannot explain these stars unless they are the result of stellar migration. In both approaches, the thick disc has formed on a timescale of accretion of 0.1 Gyr, whereas the thin disc formed on a timescale of 7 Gyr in the solar region. In the two-infall approach a gap in star formation between the thick and thin disc formation of several hundreds of Myr should be present, at variance with the parallel approach where no gap is present. • ### PLATO as it is: a legacy mission for Galactic archaeology(1706.03778) July 7, 2017 astro-ph.GA, astro-ph.SR Deciphering the assembly history of the Milky Way is a formidable task, which becomes possible only if one can produce high-resolution chrono-chemo-kinematical maps of the Galaxy. Data from large-scale astrometric and spectroscopic surveys will soon provide us with a well-defined view of the current chemo-kinematical structure of the Milky Way, but will only enable a blurred view on the temporal sequence that led to the present-day Galaxy. As demonstrated by the (ongoing) exploitation of data from the pioneering photometric missions CoRoT, Kepler, and K2, asteroseismology provides the way forward: solar-like oscillating giants are excellent evolutionary clocks thanks to the availability of seismic constraints on their mass and to the tight age-initial-mass relation they adhere to. In this paper we identify five key outstanding questions relating to the formation and evolution of the Milky Way that will need precise and accurate ages for large samples of stars to be addressed, and we identify the requirements in terms of number of targets and the precision on the stellar properties that are needed to tackle such questions. By quantifying the asteroseismic yields expected from PLATO for red-giant stars, we demonstrate that these requirements are within the capabilities of the current instrument design, provided that observations are sufficiently long to identify the evolutionary state and allow robust and precise determination of acoustic-mode frequencies. This will allow us to harvest data of sufficient quality to reach a 10% precision in age. This is a fundamental pre-requisite to then reach the more ambitious goal of a similar level of accuracy, which will only be possible if we have to hand a careful appraisal of systematic uncertainties on age deriving from our limited understanding of stellar physics, a goal which conveniently falls within the main aims of PLATO's core science. • Parallaxes for 331 classical Cepheids, 31 Type II Cepheids and 364 RR Lyrae stars in common between Gaia and the Hipparcos and Tycho-2 catalogues are published in Gaia Data Release 1 (DR1) as part of the Tycho-Gaia Astrometric Solution (TGAS). In order to test these first parallax measurements of the primary standard candles of the cosmological distance ladder, that involve astrometry collected by Gaia during the initial 14 months of science operation, we compared them with literature estimates and derived new period-luminosity ($PL$), period-Wesenheit ($PW$) relations for classical and Type II Cepheids and infrared $PL$, $PL$-metallicity ($PLZ$) and optical luminosity-metallicity ($M_V$-[Fe/H]) relations for the RR Lyrae stars, with zero points based on TGAS. The new relations were computed using multi-band ($V,I,J,K_{\mathrm{s}},W_{1}$) photometry and spectroscopic metal abundances available in the literature, and applying three alternative approaches: (i) by linear least squares fitting the absolute magnitudes inferred from direct transformation of the TGAS parallaxes, (ii) by adopting astrometric-based luminosities, and (iii) using a Bayesian fitting approach. TGAS parallaxes bring a significant added value to the previous Hipparcos estimates. The relations presented in this paper represent first Gaia-calibrated relations and form a "work-in-progress" milestone report in the wait for Gaia-only parallaxes of which a first solution will become available with Gaia's Data Release 2 (DR2) in 2018. • ### The Gaia-ESO Survey: Exploring the complex nature and origins of the Galactic bulge populations(1704.03325) April 11, 2017 astro-ph.GA Abridged: We used the fourth internal data release of the Gaia-ESO survey to characterize the bulge chemistry, spatial distribution, kinematics, and to compare it chemically with the thin and thick disks. The sample consist on ~2500 red clump stars in 11 bulge fields ($-10^\circ\leq l\leq+8^\circ$ and $-10^\circ\leq b\leq-4^\circ$), and a set of ~6300 disk stars selected for comparison. The bulge MDF is confirmed to be bimodal across the whole sampled area, with metal-poor stars dominating at high latitudes. The metal-rich stars exhibit bar-like kinematics and display a bimodality in their magnitude distribution, a feature which is tightly associated with the X-shape bulge. They overlap with the metal-rich end of the thin disk sequence in the [Mg/Fe] vs. [Fe/H] plane. Metal-poor bulge stars have a more isotropic hot kinematics and do not participate in the X-shape bulge. With similar Mg-enhancement levels, the position of the metal-poor bulge sequence "knee" is observed at [Fe/H]$_{knee}=-0.37\pm0.09$, being 0.06 dex higher than that of the thick disk. It suggests a higher SFR for the bulge than for the thick disk. Finally, we present a chemical evolution model that suitably fits the whole bulge sequence by assuming a fast ($<1$ Gyr) intense burst of stellar formation at early epochs. We associate metal-rich stars with the B/P bulge formed from the secular evolution of the early thin disk. On the other hand, the metal-poor subpopulation might be the product of an early prompt dissipative collapse dominated by massive stars. Nevertheless, our results do not allow us to firmly rule out the possibility that these stars come from the secular evolution of the early thick disk. This is the first time that an analysis of the bulge MDF and $\alpha$-abundances has been performed in a large area on the basis of a homogeneous, fully spectroscopic analysis of high-resolution, high S/N data. • Context. The first Gaia Data Release contains the Tycho-Gaia Astrometric Solution (TGAS). This is a subset of about 2 million stars for which, besides the position and photometry, the proper motion and parallax are calculated using Hipparcos and Tycho-2 positions in 1991.25 as prior information. Aims. We investigate the scientific potential and limitations of the TGAS component by means of the astrometric data for open clusters. Methods. Mean cluster parallax and proper motion values are derived taking into account the error correlations within the astrometric solutions for individual stars, an estimate of the internal velocity dispersion in the cluster, and, where relevant, the effects of the depth of the cluster along the line of sight. Internal consistency of the TGAS data is assessed. Results. Values given for standard uncertainties are still inaccurate and may lead to unrealistic unit-weight standard deviations of least squares solutions for cluster parameters. Reconstructed mean cluster parallax and proper motion values are generally in very good agreement with earlier Hipparcos-based determination, although the Gaia mean parallax for the Pleiades is a significant exception. We have no current explanation for that discrepancy. Most clusters are observed to extend to nearly 15 pc from the cluster centre, and it will be up to future Gaia releases to establish whether those potential cluster-member stars are still dynamically bound to the clusters. Conclusions. The Gaia DR1 provides the means to examine open clusters far beyond their more easily visible cores, and can provide membership assessments based on proper motions and parallaxes. A combined HR diagram shows the same features as observed before using the Hipparcos data, with clearly increased luminosities for older A and F dwarfs. • ### The Gaia-ESO Survey: low-alpha element stars in the Galactic Bulge(1702.04500) Feb. 15, 2017 astro-ph.GA We take advantage of the Gaia-ESO Survey iDR4 bulge data to search for abundance anomalies that could shed light on the composite nature of the Milky Way bulge. The alpha-elements (Mg, Si, and whenever available, Ca) abundances, and their trends with Fe abundances have been analysed for a total of 776 bulge stars. In addition, the aluminum abundances and their ratio to Fe and Mg have also been examined. Our analysis reveals the existence of low-alpha element abundance stars with respect to the standard bulge sequence in the [alpha/Fe] vs. [Fe/H] plane. 18 objects present deviations in [alpha/Fe] ranging from 2.1 to 5.3 sigma with respect to the median standard value. Those stars do not show Mg-Al anti-correlation patterns. Incidentally, this sign of the existence of multiple stellar populations is reported firmly for the first time for the bulge globular cluster NGC 6522. The identified low-alpha abundance stars have chemical patterns compatible with those of the thin disc. Their link with massive dwarf galaxies accretion seems unlikely, as larger deviations in alpha abundance and Al would be expected. The vision of a bulge composite nature and a complex formation process is reinforced by our results. The used approach, a multi-method and model-driven analysis of high resolution data seems crucial to reveal this complexity. • ### The AMBRE Project: Constraining the lithium evolution in the Milky Way(1608.03411) Aug. 11, 2016 astro-ph.GA The chemical evolution of lithium in the Milky Way represents a major problem in modern astrophysics. Indeed, lithium is, on the one hand, easily destroyed in stellar interiors, and, on the other hand, produced at some specific stellar evolutionary stages that are still not well constrained. The goal of this paper is to investigate the lithium stellar content of Milky Way stars in order to put constraints on the lithium chemical enrichment in our Galaxy, in particular in both the thin and thick discs. Thanks to high-resolution spectra from the ESO archive and high quality atmospheric parameters, we were able to build a massive and homogeneous catalogue of lithium abundances for 7300 stars derived with an automatic method coupling, a synthetic spectra grid, and a Gauss-Newton algorithm. We validated these lithium abundances with literature values, including those of the Gaia benchmark stars. In terms of lithium galactic evolution, we show that the interstellar lithium abundance increases with metallicity by 1 dex from [M/H]=-1 dex to +0.0 dex. Moreover, we find that this lithium ISM abundance decreases by about 0.5 dex at super-solar metalllicity. Based on a chemical separation, we also observed that the stellar lithium content in the thick disc increases rather slightly with metallicity, while the thin disc shows a steeper increase. The lithium abundance distribution of alpha-rich, metal-rich stars has a peak at A(Li)~3 dex. We conclude that the thick disc stars suffered of a low lithium chemical enrichment, showing lithium abundances rather close to the Spite plateau while the thin disc stars clearly show an increasing lithium chemical enrichment with the metallicity, probably thanks to the contribution of low-mass stars. • ### The Gaia-ESO Survey: Revisiting the Li-rich giant problem(1603.03038) July 1, 2016 astro-ph.SR The discovery of lithium-rich giants contradicts expectations from canonical stellar evolution. Here we report on the serendipitous discovery of 20 Li-rich giants observed during the Gaia-ESO Survey, which includes the first nine Li-rich giant stars known towards the CoRoT fields. Most of our Li-rich giants have near-solar metallicities, and stellar parameters consistent with being before the luminosity bump. This is difficult to reconcile with deep mixing models proposed to explain lithium enrichment, because these models can only operate at later evolutionary stages: at or past the luminosity bump. In an effort to shed light on the Li-rich phenomenon, we highlight recent evidence of the tidal destruction of close-in hot Jupiters at the sub-giant phase. We note that when coupled with models of planet accretion, the observed destruction of hot Jupiters actually predicts the existence of Li-rich giant stars, and suggests Li-rich stars should be found early on the giant branch and occur more frequently with increasing metallicity. A comprehensive review of all known Li-rich giant stars reveals that this scenario is consistent with the data. However more evolved or metal-poor stars are less likely to host close-in giant planets, implying that their Li-rich origin requires an alternative explanation, likely related to mixing scenarios rather than external phenomena. • ### The Gaia-ESO survey: Metal-rich bananas in the bulge(1605.09684) May 31, 2016 astro-ph.GA We analyse the kinematics of $\sim 2000$ giant stars in the direction of the Galactic bulge, extracted from the Gaia-ESO survey in the region $-10^\circ \lesssim \ell \lesssim 10^\circ$ and $-11^\circ \lesssim b \lesssim -3^\circ$. We find distinct kinematic trends in the metal rich ($\mathrm{[M/H]}>0$) and metal poor ($\mathrm{[M/H]}<0$) stars in the data. The velocity dispersion of the metal-rich stars drops steeply with latitude, compared to a flat profile in the metal-poor stars, as has been seen previously. We argue that the metal-rich stars in this region are mostly on orbits that support the boxy-peanut shape of the bulge, which naturally explains the drop in their velocity dispersion profile with latitude. The metal rich stars also exhibit peaky features in their line-of-sight velocity histograms, particularly along the minor axis of the bulge. We propose that these features are due to stars on resonant orbits supporting the boxy-peanut bulge. This conjecture is strengthened through the comparison of the minor axis data with the velocity histograms of resonant orbits generated in simulations of buckled bars. The 'banana' or 2:1:2 orbits provide strongly bimodal histograms with narrow velocity peaks that resemble the Gaia-ESO metal-rich data. • ### The Gaia-ESO Survey: Probes of the inner disk abundance gradient(1605.04899) May 16, 2016 astro-ph.GA, astro-ph.SR The nature of the metallicity gradient inside the solar circle (R_GC < 8 kpc) is poorly understood, but studies of Cepheids and a small sample of open clusters suggest that it steepens in the inner disk. We investigate the metallicity gradient of the inner disk using a sample of inner disk open clusters that is three times larger than has previously been studied in the literature to better characterize the gradient in this part of the disk. We used the Gaia-ESO Survey (GES) [Fe/H] values and stellar parameters for stars in 12 open clusters in the inner disk from GES-UVES data. Cluster mean [Fe/H] values were determined based on a membership analysis for each cluster. Where necessary, distances and ages to clusters were determined via comparison to theoretical isochrones. The GES open clusters exhibit a radial metallicity gradient of -0.10+-0.02 dex/kpc, consistent with the gradient measured by other literature studies of field red giant stars and open clusters in the range R_GC ~ 6-12 kpc. We also measure a trend of increasing [Fe/H] with increasing cluster age, as has also been found in the literature. We find no evidence for a steepening of the inner disk metallicity gradient inside the solar circle as earlier studies indicated. The age-metallicity relation shown by the clusters is consistent with that predicted by chemical evolution models that include the effects of radial migration, but a more detailed comparison between cluster observations and models would be premature. • ### The Gaia-ESO Survey: Inhibited extra mixing in two giants of the open cluster Trumpler 20?(1605.01945) May 6, 2016 astro-ph.SR We report the discovery of two Li-rich giants, with A(Li) ~ 1.50, in an analysis of a sample of 40 giants of the open cluster Trumpler 20 (with turnoff mass ~ 1.8 Msun). The cluster was observed in the context of the Gaia-ESO Survey. The atmospheric parameters and Li abundances were derived using high-resolution UVES spectra. The Li abundances were corrected for nonlocal thermodynamical equilibrium (non-LTE) effects. Only upper limits of the Li abundance could be determined for the majority of the sample. Two giants with detected Li turned out to be Li rich: star MG 340 has A(Li) non-LTE = 1.54 \pm 0.21 dex and star MG 591 has A(Li) non-LTE = 1.60 \pm 0.21 dex. Star MG 340 is on average ~ 0.30 dex more rich in Li than stars of similar temperature, while for star MG 591 this difference is on average ~ 0.80 dex. Carbon and nitrogen abundances indicate that all stars in the sample have completed the first dredge-up. The Li abundances in this unique sample of 40 giants in one open cluster clearly show that extra mixing is the norm in this mass range. Giants with Li abundances in agreement with the predictions of standard models are the exception. To explain the two Li-rich giants, we suggest that all events of extra mixing have been inhibited. This includes rotation-induced mixing during the main sequence and the extra mixing at the red giant branch luminosity bump. Such inhibition has been suggested in the literature to occur because of fossil magnetic fields in red giants that are descendants of main-sequence Ap-type stars. • ### The AMBRE Project: Stellar Parameterisation of the ESO:UVES archived spectra(1602.08478) The AMBRE Project is a collaboration between the European Southern Observatory (ESO) and the Observatoire de la Cote d'Azur (OCA) that has been established in order to carry out the determination of stellar atmospheric parameters for the archived spectra of four ESO spectrographs. The analysis of the UVES archived spectra for their stellar parameters has been completed in the third phase of the AMBRE Project. From the complete ESO:UVES archive dataset that was received covering the period 2000 to 2010, 51921 spectra for the six standard setups were analysed. The AMBRE analysis pipeline uses the stellar parameterisation algorithm MATISSE to obtain the stellar atmospheric parameters. The synthetic grid is currently constrained to FGKM stars only. Stellar atmospheric parameters are reported for 12,403 of the 51,921 UVES archived spectra analysed in AMBRE:UVES. This equates to ~23.9% of the sample and ~3,708 stars. Effective temperature, surface gravity, metallicity and alpha element to iron ratio abundances are provided for 10,212 spectra (~19.7%), while at least effective temperature is provided for the remaining 2,191 spectra. Radial velocities are reported for 36,881 (~71.0%) of the analysed archive spectra. Typical external errors of sigmaTeff~110dex, sigmalogg~0.18dex, sigma[M/H]~0.13dex, and sigma[alpha/Fe]~0.05dex with some reported variation between giants and dwarfs and between setups are reported. UVES is used to observe an extensive collection of stellar and non-stellar objects all of which have been included in the archived dataset provided to OCA by ESO. The AMBRE analysis extracts those objects which lie within the FGKM parameter space of the AMBRE slow rotating synthetic spectra grid. Thus by homogeneous blind analysis AMBRE has successfully extracted and parameterised the targeted FGK stars (23.9% of the analysed sample) from within the ESO:UVES archive. • ### The Gaia-ESO Survey: Sodium and aluminium abundances in giants and dwarfs - Implications for stellar and Galactic chemical evolution(1602.03289) Feb. 25, 2016 astro-ph.GA, astro-ph.SR Stellar evolution models predict that internal mixing should cause some sodium overabundance at the surface of red giants more massive than ~ 1.5--2.0 Msun. The surface aluminium abundance should not be affected. Nevertheless, observational results disagree about the presence and/or the degree of the Na and Al overabundances. In addition, Galactic chemical evolution models adopting different stellar yields lead to quite different predictions for the behavior of [Na/Fe] and [Al/Fe] versus [Fe/H]. Overall, the observed trends of these abundances with metallicity are not well reproduced. We readdress both issues, using new Na and Al abundances determined within the Gaia-ESO Survey, using two samples: i) more than 600 dwarfs of the solar neighborhood and of open clusters and ii) low- and intermediate-mass clump giants in six open clusters. Abundances of Na in giants with mass below ~2.0 Msun, and of Al in giants below ~3.0 Msun, seem to be unaffected by internal mixing processes. For more massive giants, the Na overabundance increases with stellar mass. This trend agrees well with predictions of stellar evolutionary models. Chemical evolution models that are able to fit well the observed [Na/Fe] vs. [Fe/H] trend in solar neighborhood dwarfs can not simultaneously explain the run of [Al/Fe] with [Fe/H], and viceversa. The comparison with stellar ages is hampered by severe uncertainties. Indeed, reliable age estimates are available for only a half of the stars of the sample. We conclude that Al is underproduced by the models, except for stellar ages younger than about 7 Gyr. In addition, some significant source of late Na production seems to be missing in the models. Either current Na and Al yields are affected by large uncertainties, and/or some important Galactic source(s) of these elements has not been taken into account up to now. [abridged] • ### The Gaia-ESO Survey: Separating disk chemical substructures with cluster models(1512.03835) Dec. 11, 2015 astro-ph.GA (Abridged) Recent spectroscopic surveys have begun to explore the Galactic disk system outside the solar neighborhood on the basis of large data samples. In this way, they provide valuable information for testing spatial and temporal variations of disk structure kinematics and chemical evolution. We used a Gaussian mixture model algorithm, as a rigurous mathematical approach, to separate in the [Mg/Fe] vs. [Fe/H] plane a clean disk star subsample from the Gaia-ESO survey internal data release 2. We find that the sample is separated into five groups associated with major Galactic components; the metal-rich end of the halo, the thick disk, and three subgroups for the thin disk sequence. This is confirmed with a sample of red clump stars from the Apache Point Observatory Galactic Evolution Experiment (APOGEE) survey. The two metal-intermediate and metal-rich groups of the thin disk decomposition ([Fe/H]>-0.25 dex) highlight a change in the slope at solar metallicity. This holds true at different radial regions. The distribution of Galactocentric radial distances of the metal-poor part of the thin disk ([Fe/H]<-0.25 dex) is shifted to larger distances than those of the more metal-rich parts. Moreover, the metal-poor part of the thin disk presents indications of a scale height intermediate between those of the thick and the rest of the thin disk, and it displays higher azimuthal velocities than the latter. These stars might have formed and evolved in parallel and/or dissociated from the inside-out formation taking place in the internal thin disk. Their enhancement levels might be due to their origin from gas pre-enriched by outflows from the thick disk or the inner halo. The smooth trends of their properties (their spatial distribution with respect to the plane, in particular) with [Fe/H] and [Mg/Fe] suggested by the data indicates a quiet dynamical evolution, with no relevant merger events. • ### Stellar parametrization from Gaia RVS spectra(1510.00111) Oct. 1, 2015 astro-ph.GA, astro-ph.SR Among the myriad of data collected by the ESA Gaia satellite, about 150 million spectra will be delivered by the Radial Velocity Spectrometer (RVS) for stars as faint as G_RVS~16. A specific stellar parametrization will be performed for most of these RVS spectra. Some individual chemical abundances will also be estimated for the brightest targets. We describe the different parametrization codes that have been specifically developed or adapted for RVS spectra within the GSP-spec working group of the analysis consortium. The tested codes are based on optimization (FERRE and GAUGUIN), projection (MATISSE) or pattern recognition methods (Artificial Neural Networks). We present and discuss their expected performances in the recovered stellar atmospheric parameters (Teff, log(g), [M/H]) for B- to K- type stars. The performances for the determinations of [alpha/Fe] ratios are also presented for cool stars. For all the considered stellar types, stars brighter than G_RVS~12.5 will be very efficiently parametrized by the GSP-spec pipeline, including solid estimations of [alpha/Fe]. Typical internal errors for FGK metal-rich and metal-intermediate stars are around 40K in Teff , 0.1dex in log(g), 0.04dex in [M/H], and 0.03dex in [alpha/Fe] at G_RVS=10.3. Similar accuracies in Teff and [M/H] are found for A-type stars, while the log(g) derivation is more accurate. For the faintest stars, with G_RVS>13-14, a spectrophotometric Teff input will allow the improvement of the final GSP-spec parametrization. The reported results show that the contribution of the RVS based stellar parameters will be unique in the brighter part of the Gaia survey allowing crucial age estimations, and accurate chemical abundances. This will constitute a unique and precious sample for which many pieces of the Milky Way history puzzle will be available, with unprecedented precision and statistical relevance. • ### The Gaia-ESO Survey: New constraints on the Galactic disc velocity dispersion and its chemical dependencies(1509.05271) Sept. 17, 2015 astro-ph.GA Understanding the history and the evolution of the Milky Way disc is one of the main goals of modern astrophysics. We study the velocity dispersion behaviour of Galactic disc stars as a function of the [Mg/Fe] ratio, which can be used as a proxy of relative age. This key relation is essential to constrain the formation mechanisms of the disc stellar populations as well as the cooling processes. We used the recommended parameters and chemical abundances of 7800 FGK Milky Way field stars from the second internal data release of the Gaia-ESO Survey. These stars were observed with the GIRAFFE spectrograph, and cover a large spatial volume (6<R<10kpc and |Z|<2kpc). Based on the [Mg/Fe] and [Fe/H] ratios, we separated the thin- from the thick-disc sequence. From analysing the Galactocentric velocity of the stars for the thin disc, we find a weak positive correlation between Vphi and [Fe/H], due to a slowly rotating Fe-poor tail. For the thick disc, a strong correlation with [Fe/H] and [Mg/Fe] is established. We have detected an inversion of the radial velocity dispersion with [Mg/Fe] for thick-disc stars with [Fe/H]<-0.1dex and [Mg/Fe]>+0.2dex. First, the velocity dispersion increases with [Mg/Fe] at all [Fe/H] ratios for the thin-disc stars, and then it decreases for the thick-disc at the highest [Mg/Fe] abundances. Similar trends are observed within the errors for the azimuthal velocity dispersion, while a continuous increase with [Mg/Fe] is observed for the vertical velocity dispersion. The velocity dispersion decrease agrees with previous measurements of the RAVE survey, although it is observed here for a greater metallicity interval and a larger spatial volume. We confirm the existence of [Mg/Fe]-rich thick-disc stars with cool kinematics in the generally turbulent context of the primitive Galactic disc. This is discussed in the framework of the different disc formation scenarios. • ### The Gaia-ESO Survey: Characterisation of the [alpha/Fe] sequences in the Milky Way discs(1507.08066) July 29, 2015 astro-ph.GA We investigate, using the Gaia-ESO Survey internal Data-Release 2, the properties of the double sequence of the Milky Way discs (defined chemically as the high-alpha and low-alpha populations), and discuss their compatibility with discs defined by other means such as metallicity, kinematics or positions. This investigation uses two different approaches: in velocity space for stars located in the extended Solar neighbourhood, and in chemical space for stars at different ranges of Galactocentric radii and heights from the plane. The separation we find in velocity space allows us to investigate, in a novel manner, the extent in metallicity of each of the two sequences, identifying them with the two discs, without making any assumption about the shape of their metallicity distribution functions. Then, using the separation in chemical space, we characterise the spatial variation of the slopes of the [alpha/Fe] - [Fe/H] sequences for the thick and thin discs and the way in which the relative proportions of the two discs change across the Galaxy. We find that the thick disc (high-alpha sequence), extends up to [Fe/H]~ +0.2 and the thin disc (low-alpha sequence), at least down to [Fe/H]~ -0.8. Radial and vertical gradients in alpha-abundances are found for the thin disc, with mild spatial variations in its [alpha/Fe] - [Fe/H] paths, whereas for the thick disc we do not detect any such spatial variations. The small variations in the spatial [alpha/Fe] - [Fe/H] paths of the thin disc do not allow us to distinguish between formation models of this structure. On the other hand, the lack of radial gradients and [alpha/Fe] - [Fe/H] variations for the thick disc indicate that the mechanism responsible for the mixing of the metals in the young Galaxy (e.g. radial stellar migration or turbulent gaseous disc) was more efficient before the (present) thin disc started forming. • ### The Origin of Fluorine: Abundances in AGB Carbon Stars Revisited(1507.03488) July 13, 2015 astro-ph.SR Revised spectroscopic parameters for the HF molecule and a new CN line list in the 2.3 mu region have been recently available, allowing a revision of the F content in AGB stars. AGB carbon stars are the only observationally confirmed sources of fluorine. Nowadays there is not a consensus on the relevance of AGB stars in its Galactic chemical evolution. The aim of this article is to better constrain the contribution of these stars with a more accurate estimate of their fluorine abundances. Using new spectroscopic tools and LTE spectral synthesis, we redetermine fluorine abundances from several HF lines in the K-band in a sample of Galactic and extragalactic AGB carbon stars of spectral types N, J and SC spanning a wide range of metallicities. On average, the new derived fluorine abundances are systematically lower by 0.33 dex with respect to previous determinations. This may derive from a combination of the lower excitation energies of the HF lines and the larger macroturbulence parameters used here as well as from the new adopted CN line list. Yet, theoretical nucleosynthesis models in AGB stars agree with the new fluorine determinations at solar metallicities. At low metallicities, an agreement between theory and observations can be found by handling in a different way the radiative/convective interface at the base of the convective envelope. New fluorine spectroscopic measurements agree with theoretical models at low and at solar metallicity. Despite this, complementary sources are needed to explain its observed abundance in the solar neighbourhood. • ### Gaia FGK benchmark stars: abundances of alpha and iron-peak elements(1507.00027) June 30, 2015 astro-ph.SR In the current era of large spectroscopic surveys of the Milky Way, reference stars for calibrating astrophysical parameters and chemical abundances are of paramount importance. We determine elemental abundances of Mg, Si, Ca, Sc, Ti, V, Cr, Mn, Co and Ni for our predefined set of Gaia FGK benchmark stars. By analysing high-resolution and high-signal to noise spectra taken from several archive datasets, we combined results of eight different methods to determine abundances on a line-by-line basis. We perform a detailed homogeneous analysis of the systematic uncertainties, such as differential versus absolute abundance analysis, as well as we assess errors due to NLTE and the stellar parameters in our final abundances. Our results are provided by listing final abundances and the different sources of uncertainties, as well as line-by-line and method-by-method abundances. The Gaia FGK benchmark stars atmospheric parameters are already being widely used for calibration of several pipelines applied to different surveys. With the added reference abundances of 10 elements this set is very suitable to calibrate the chemical abundances obtained by these pipelines. • ### The Gaia-ESO Survey: Empirical determination of the precision of stellar radial velocities and projected rotation velocities(1505.07019) May 26, 2015 astro-ph.SR, astro-ph.IM The Gaia-ESO Survey (GES) is a large public spectroscopic survey at the European Southern Observatory Very Large Telescope. A key aim is to provide precise radial velocities (RVs) and projected equatorial velocities (v sin i) for representative samples of Galactic stars, that will complement information obtained by the Gaia astrometry satellite. We present an analysis to empirically quantify the size and distribution of uncertainties in RV and v sin i using spectra from repeated exposures of the same stars. We show that the uncertainties vary as simple scaling functions of signal-to-noise ratio (S/N) and v sin i, that the uncertainties become larger with increasing photospheric temperature, but that the dependence on stellar gravity, metallicity and age is weak. The underlying uncertainty distributions have extended tails that are better represented by Student's t-distributions than by normal distributions. Parametrised results are provided, that enable estimates of the RV precision for almost all GES measurements, and estimates of the v sin i precision for stars in young clusters, as a function of S/N, v sin i and stellar temperature. The precision of individual high S/N GES RV measurements is 0.22-0.26 km/s, dependent on instrumental configuration.
Question for understanding definition of point process I am trying to understand the definition of point process when reading its Wikipedia article: Let $S$ be locally compact second countable Hausdorff space equipped with its Borel σ-algebra $B(S)$. Write $\mathfrak{N}$ for the set of locally finite counting measures on $S$ and $\mathcal{N}$ for the smallest σ-algebra on $\mathfrak{N}$ that renders all the point counts $$\Phi_B : \mathfrak{N} \to \mathbb{Z}_{+}, \varrho \mapsto \varrho(B)$$ for relatively compact sets $B$ in $B$-measurable. A point process on $S$ is a measurable map $\xi: \Omega \to \mathfrak{N}$ from a probability space $(\Omega, \mathcal F, P)$ to the measurable space $(\mathfrak{N},\mathcal{N})$. My questions are: 1. Is the counting measure the one that gives the cardinality of a measurable subset, as defined in its Wikipedia article? If yes, isn't it that there is only one counting measure on a measurable space, and why in the definition of point process, does "write $\mathfrak{N}$ for the set of locally finite counting measures on $S$" imply that there are more than one counting measures on $S$? 2. It has been noted[citation needed] that the term point process is not a very good one if S is not a subset of the real line, as it might suggest that ξ is a stochastic process. Is a point process a stochastic process? If no, when can it be? How are the two related? Thanks and regards! - (1) Every locally finite counting measure on $S$ is of the form $\sum_{x\in \Lambda}\delta_x$ where $\Lambda$ is a locally finite subset of $S$. That is, $\Lambda$ should intersect every compact set only finitely often. Of course, there are infinitely many choices of subset $\Lambda$, and thus plenty of counting measures. (2) In the case where $S=[0,\infty)$ we can define an integer valued stochastic process by setting $X(t)=\xi[0,t]$. That is, $X(t)$ is the amount of mass that the (random) measure $\xi$ assigns to the set $[0,t]$. For instance, the Poisson process can be expressed in this way. But for a general point process, there may be no notion of a "time parameter" and so it is not thought of as a stochastic process. Thanks! I am still wondering about the defintion of "a locally finite subset of S". For better understanding, can "Λ should intersect every compact set only finitely often" be more mathematically rewritten? Must $\Lambda$ be a countable set, so that the sum $\sum_{x\in \Lambda}\delta_x$ can make sense? – Tim Apr 20 '11 at 2:26 Yes, since $S$ is $\sigma$-compact every locally finite set $\Lambda$ is countable. But not every countable set is locally finite: on the real line compare $\lbrace n: n \geq 1\rbrace$ and $\lbrace 1/n :n \geq 1\rbrace$. – Byron Schmuland Apr 20 '11 at 2:32 @Tim I don't know if there is anything better than that sentence. In papers I have seen it expressed as "$|\Lambda\cap K|<\infty$ for every compact set $K$" where $|\ \cdot\ |$ means "cardinality". – Byron Schmuland Apr 20 '11 at 2:39 Thanks! (1) Is a point process a stochastic process, if and only if the locally finite counting measure on S all have distribution functions? (2) For a point process $\xi$, is $\xi(A), \forall A \in B(S)$ always a measurable mapping, i.e. random variable? (3) Are there any references regarding my two questions? Thanks! – Tim Apr 28 '11 at 4:59
Image Restoration by Iterative Denoising and Backward Projections Image Restoration by Iterative Denoising and Backward Projections Tom Tirer, and Raja Giryes The authors are with the School of Electrical Engineering, Tel Aviv University, Tel Aviv 69978, Israel. (email: [email protected], [email protected]) Abstract Inverse problems appear in many applications such as image deblurring and inpainting. The common approach to address them is to design a specific algorithm for each problem. The Plug-and-Play (P&P) framework, which has been recently introduced, allows solving general inverse problems by leveraging the impressive capabilities of existing denoising algorithms. While this fresh strategy has found many applications, a burdensome parameter tuning is often required in order to obtain high-quality results. In this work, we propose an alternative method for solving inverse problems using denoising algorithms, that requires less parameter tuning. We demonstrate that it is competitive with task-specific techniques and the P&P approach for image inpainting and deblurring. Plug-and-Play, inverse problems, image restoration, image denoising I Introduction We consider the reconstruction of an image from its degraded version, which may be noisy, blurred, downsampled, or all together. This general problem has many important applications, such as medical imaging, surveillance, entertainment, and more. Traditionally, the design of task-specific algorithms has been the ruling approach. Many works specifically considered the denoising problem [1, 2, 3], the deblurring problem [4, 5, 6], the inpainting problem [7, 8], etc. Recently, a new approach attracts much interest. This approach suggests leveraging the impressive capabilities of existing denoising algorithms for solving other tasks that can be formulated as an inverse problem. The pioneering algorithm that introduced this concept is the Plug-and-Play (P&P) method [9], which presents an elegant way to decouple the measurement model and the image prior, such that the latter is handled solely by a denoising operation. Thus, it is not required to explicitly specify the prior, since it is implicitly defined through the choice of the denoiser. The P&P method has already found many applications, e.g. bright field electron tomography [10], Poisson denoising [11], and postprocessing of compressed images [12]. It also inspired new related techniques [13, 14, 15]. However, it has been noticed that the P&P often requires a burdensome parameter tuning in order to obtain high quality results [14, 16]. Moreover, since it is an iterative method, sometimes a large number of iterations is required. In this work, we propose a simple iterative method for solving linear inverse problems using denoising algorithms, which provides an alternative to P&P. Our strategy has less parameters that require tuning (e.g. no tuning is required for the noisy inpainting problem), often requires less iterations, and its recovery performance is competitive with task-specific algorithms and with the P&P approach. We demonstrate the advantages of the new technique on inpainting and deblurring problems. The paper is organized as follows. In Section II we present the problem formulation and the P&P approach. The proposed algorithm is presented in Section III. Section IV includes mathematical analysis of the algorithm and provides a practical way to tune its parameter. In Section V the usage of the method is demonstrated and examined for inpainting and deblurring problems. Section VI concludes the paper. Ii Background Ii-a Problem formulation The problem of image restoration can be generally formulated by y=Hx+e, (1) where represents the unknown original image, represents the observations, is an degradation matrix and is a vector of independent and identically distributed Gaussian random variables with zero mean and standard deviation of . The model in (1) can represent different image restoration problems; for example: image denoising when is the identity matrix , image inpainting when is a selection of rows of , and image deblurring when is a blurring operator. In all of these cases, a prior image model is required in order to successfully estimate from the observations . Specifically, note that is ill-conditioned in the case of image deblurring, thus, in practice it can be approximated by a rank-deficient matrix, or alternatively by a full rank matrix (). Therefore, for a unified formulation of inpainting and deblurring problems, which are the test cases of this paper, we assume . Almost any approach for recovering involves formulating a cost function, composed of fidelity and penalty terms, which is minimized by the desired solution. The fidelity term ensures that the solution agrees with the measurements, and is often derived from the negative log-likelihood function. The penalty term regularizes the optimization problem through the prior image model . Hence, the typical cost function is f(~x)=12σ2n∥y−H~x∥22+s(~x), (2) where stands for the Euclidean norm. Ii-B Plug and Play approach Instead of devising a separate algorithm to solve for each type of matrix , a general recovery strategy has been proposed in [9], denoted as the Plug-and-Play (P&P). For completeness, we briefly describe this technique. Using variable splitting, the P&P method restates the minimization problem as min~x,~vℓ(~x)+βs(~v)s.t.~x=~v, (3) where is the fidelity term in (2), and is a positive parameter that adds flexibility to the cost function. This problem can be solved using ADMM [17] by constructing an augmented Lagrangian, which is given by Lλ =ℓ(~x)+βs(~v)+uT(~x−~v)+λ2∥~x−~v∥22 =ℓ(~x)+βs(~v)+λ2∥~x−~v+~u∥22+λ2∥~u∥22, (4) where is the dual variable, is the scaled dual variable, and is the ADMM penalty parameter. The ADMM algorithm consists of iterating until convergence over the following three steps ˇxk =argmin~xLλ(~x,ˇvk−1,ˇuk−1), ˇvk =argmin~vLλ(ˇxk,~v,ˇuk−1), ˇuk =ˇuk−1+(ˇxk−ˇvk). (5) By plugging (II-B) in (II-B) we have ˇxk =argmin~xℓ(~x)+λ2∥~x−(ˇvk−1−ˇuk−1)∥22, ˇvk =argmin~vλ2β∥(ˇxk+ˇuk−1)−~v∥22+s(~v), ˇuk =ˇuk−1+(ˇxk−ˇvk). (6) Note that the first step in (II-B) is just solving a least squares (LS) problem and the third step is a simple update. The second step is more interesting. It describes obtaining using a white Gaussian denoiser with noise variance of , applied on the image . This can be written compactly as , where is a denoising operator. Since general denoising algorithms can be used to implement the operator , the P&P method does not require knowing or explicitly specifying the prior function . Instead, is implicitly defined through the choice of . The obtained P&P algorithm is presented in Algorithm 1. The convergence of the P&P method is proved for Gaussian denoisers that satisfy certain conditions [10]. However, well known denoisers such as BM3D [1], K-SVD [2], and standard NLM [3], lead to good results despite violating these conditions. The P&P method is not free of drawbacks. Its main difficulties are the large number of iterations, which is often required by the P&P to converge to a good solution, and the setting of the design parameters and , which is not always clear and strongly affects the performance. Iii The Proposed Algorithm In this work we take another strategy for solving inverse problems using denoising algorithms. We start with formulating the cost function (2) in somewhat strange but equivalent way f(~x) =12σ2n∥H(H†y−~x)∥22+s(~x) =12∥H†y−~x∥21σ2nHTH+s(~x), (7) where H† ≜HT(HHT)−1 (8) ∥u∥21σ2nHTH ≜1σ2nuTHTHu. (9) Note that is the pseudoinverse of the full row rank matrix , and is not a real norm, since is not a positive definite matrix in our case. Moreover, as mentioned above, since the null space of is not empty, the prior is essential in order to obtain a meaningful solution. The optimization problem can be equivalently written as min~x,~y12∥~y−~x∥21σ2nHTH+s(~x)s.t.~y=H†y. (10) Note that due to the degenerate constraint, the solution for is trivial . Now, we make two major modifications to the above optimization problem. The basic idea is to loose the variable in a restricted manner, that can facilitate the estimation of . First, we give some degrees of freedom to by using the constraint instead of . Next, we turn to prevent large components of in the null space of that may strongly disagree with the prior . We do it by replacing the multiplication by in the fidelity term, which implies projection onto a subspace, with multiplication by that implies a full dimensional space, where is a design parameter. This leads to the following optimization problem min~x,~y12(σn+δ)2∥~y−~x∥22+s(~x)s.t.H~y=y. (11) Note that introduces a tradeoff. On the one hand, exaggerated value of should be avoided, as it may over-reduce the effect of the fidelity term. On the other hand, too small value of may over-penalize unless it is very close to the affine subspace . This limits the effective feasible set of in problem (11), such that it may not include potential solutions of the original problem (10). Therefore, we suggest setting the value of as δ =argmin~δ(σn+~δ)2 s.t.∥H†y−~x∥1σ2nHTH≥∥~y−~x∥1(σn+~δ)2In ∀~x,~y∈S(???), (12) where denotes the feasible set of problem (11). Note that the feasibility of is dictated by and the feasibility of is dictated by the constraint in (11). The problem of obtaining such value for (or an approximation) is discussed in Section IV-A, where a relaxed version of the condition in (III) is presented. Assuming that solves (III), the property that for feasible and , together with the fact that is one of the solutions of the underdetermined system , prevents increasing the penalty on potential solutions of the original optimization problem (10). Therefore, roughly speaking, we do not lose solutions when we solve (11) instead of (10). We solve (11) using alternating minimization. Iteratively, is estimated by solving ~xk=argmin~x12(σn+δ)2∥~yk−1−~x∥22+s(~x), (13) and is estimated by solving ~yk=argmin~y∥~y−~xk∥22s.t.H~y=y, (14) which describes a projection of onto the affine subspace , and has a closed-form solution ~yk=H†y+(In−H†H)~xk. (15) Similarly to the P&P technique, (13) describes obtaining using a white Gaussian denoiser with noise variance of , applied on the image , and can be written compactly as , where is a denoising operator. Moreover, as in the case of the P&P, the proposed method does not require knowing or explicitly specifying the prior function . Instead, is implicitly defined through the choice of . The variable is expected to be closer to the true signal than the raw observations . Thus, our algorithm alternates between estimating the signal and using this estimation in order to obtain improved measurements (that also comply with the original observations ). The proposed algorithm, which we call Iterative Denoising and Backward Projections (IDBP), is presented in Algorithm 2. Iv Mathematical Analysis of the Algorithm Iv-a Setting the value of the parameter δ Setting the value of that solves (III) is required for simple theoretical justification of our method. However, it is not clear how to obtain such in general. Therefore, in order to relax the condition in (III), that should be satisfied by all and in , we can focus only on the sequences and generated by the proposed alternating minimization process. Then, we can use the following proposition. Proposition 1. Set . If there exist an iteration of IDBP that violates the following condition 1σ2n∥y−H~xk∥2≥1(σn+~δ)2∥H†(y−H~xk)∥2, (16) then also violates the condition in (III). Proof. Assume that and generated by IDBP at some iteration violate (16), then they also violate the equivalent condition ∥H†y−~xk∥1σ2nHTH≥∥H†y−H†H~xk∥1(σn+~δ)2In. (17) Note that (17) is obtained simply by plugging (15) into in (III). Therefore, and also violate the inequality in (III). Finally, it is easy to see that and are feasible points of (11), since is a feasible point of and satisfies . Therefore, the condition in (III) does not hold for all feasible and , which means that violates it. ∎ Note that (16) can be easily evaluated for each iteration. Thus, violation of (III) can be spotted (by violation of (16)) and used for stopping the process, increasing and running the algorithm again. Of course, the opposite direction does not hold. Even when (16) is satisfied for all iterations, it does not guarantee satisfying (III). However, the relaxed condition (16) provides an easy way to set with an approximation to the solution of (III), which gives very good results in our experiments. In the special case of the inpainting problem, (16) becomes ridiculously simple. Since is a selection of rows of , it follows that , which is an matrix that merely pads with zeros the vector on which it is applied. Therefore, , implying that satisfies (16) in this case. Obviously, if , a small positive is required in order to prevent the algorithm from getting stuck (because in this case ). Condition (16) is more complex when considering the deblurring problem. In this case is an ill-conditioned matrix. Therefore must be approximated, either by approximating by a full rank matrix before computing (8), or by regularized inversion techniques for , e.g. standard Tikhonov regularization. A research on how to compute in this case is ongoing. We empirically observed that using a fixed value for (for all noise levels and blur kernels) exhibits good performance. However, we had to add another parameter that controls the amount of regularization in the approximation of , that we slightly change between scenarios (i.e. when or change). This issue is discussed in Section V-B. An interesting observation is that the pairs of which give the best results indeed satisfy condition (16). On the other hand, the pairs of that give bad results often violate this condition (recall that the condition should be met during all iterations). An example of this behavior is given in the end of Section V-B. Iv-B Analysis of the sequence {~yk} The IDBP algorithm creates the sequence that can be interpreted as a sequence of updated measurements. It is desired that is improved with each iteration, i.e. that , obtained from , estimates better than , which is obtained from . Assuming that the result of the denoiser, denoted by , is perfect, i.e. , we get from (15) ¯¯¯y =H†y+(In−H†H)¯¯¯x =H†(Hx+e)+(In−H†H)x =x+H†e. (18) The last equality describes a model that has only noise (possibly colored), and is much easier to deal with than the original model (1). Therefore, can be considered as the optimal improved measurements that our algorithm can achieve. Since we wish to make no specific assumptions on the denoising scheme , improvement of will be measured by the Euclidean distance to . Let us define the orthogonal projection onto the row space of , and its orthogonal complement . The updated measurements are always consistent with on , and do not depend on , as can be seen from ~yk =H†(Hx+e)+QH~xk =PHx+H†e+QH~xk. (19) Thus, the following theorem ensures that iteration improves the results, provided that is closer to than on the null space of , i.e. ∥QH(~xk−x)∥2<∥QH(~yk−1−x)∥2. (20) Theorem 2. Assuming that (20) holds at the th iteration of IDBP, then we have ∥~yk−¯¯¯y∥2<∥~yk−1−¯¯¯y∥2. (21) Proof. Note that QH~yk−1=QH(H†y+QH~xk−1)=QH~xk−1. (22) Equation (21) is obtained by ∥~yk−¯¯¯y∥2 =∥(PHx+H†e+QH~xk)−(x+H†e)∥2 =∥QH(~xk−x)∥2 <∥QH(~xk−1−x)∥2 =∥(PHx+H†e+QH~xk−1)−(x+H†e)∥2 =∥~yk−1−¯¯¯y∥2, (23) where the inequality follows from (20) and (22). ∎ A denoiser that makes use of a good prior (and suitable ) is expected to satisfy (20), at least in early iterations. For example, in the inpainting problem is associated with the missing pixels, and in the deblurring problem is associated with the data that suffer the greatest loss by the blur kernel. Therefore, in both cases is expected to be closer to than . Note that if (20) holds for all iterations, then Theorem 2 ensures monotonically improvement and convergence of . However, it still does not guarantee that is the limit of the sequence. V Experiments We demonstrate the usage of IDBP for two test scenarios: the inpainting and the deblurring problems. We compare the IDBP performance to P&P and another algorithm that has been specially tailored for each problem [6], [18]. In all experiments we use BM3D [1] as the denoising algorithm for IDBP and P&P. We use the following four test images in all experiments: cameraman, house, peppers and Lena. Their intensity range is 0-255. V-a Image inpainting In the image inpainting problem, is a selection of rows of and , which simplifies both P&P and IDBP. In P&P, the first step can be solved for each pixel individually. In IDBP, is obtained merely by taking the observed pixels from and the missing pixels from . For both methods we use the result of a simple median scheme as their initialization (for in P&P and for in IDBP). It is also possible to alternatively use for initialization, but then many more iterations are required. Note that the computational cost of each iteration of P&P and IDBP is of the same scale, dominated by the complexity of the denoising operation. The first experiment demonstrates the performance of IDBP, P&P and inpainting based on Image Processing using Patch Ordering (IPPO) approach [18], for the noiseless case () with 80% missing pixels, selected at random. The parameters of IPPO are set exactly as in [18], where the same scenario is examined. The parameters of P&P are optimized for best reconstruction quality. We use , and 150 iterations. Also, for P&P we assume that the noise standard deviation is 0.001, i.e. nonzero, in order to compute . Considering IDBP, in Section IV-A, it is suggested that . However, since in this case , a small positive , e.g. , is required. Indeed, this setting gives good performance, but also requires ten times more iterations than P&P. Therefore, we use an alternative approach. We set , which allows us to use only 150 iterations (same as P&P), but take the last as the final estimate, which is equivalent to performing the last denoising with the recommended . Figure 1 shows the results of both IDBP implementations for the house image. It approves that the alternative implementation performs well and requires significantly less iterations (note that the x-axis has a logarithmic scale). Therefore, for the comparison of the different inpainting methods in this experiment, we use IDBP with . The results of the three algorithms are given in Table I. IDBP is usually better than IPPO, but slightly inferior to P&P. This is the cost of enhancing IDBP by setting to a value which is significantly larger than zero. However, this observation also hints that IDBP may shine for noisy measurements, where can be used without increasing the number of iterations. We also remark that IPPO gives the best result for peppers because in this image P&P and IDBP require more than the fixed 150 iterations. The second experiment demonstrates the performance of IDBP and P&P with 80% missing pixels, as before, but this time . Noisy inpainting has not been implemented yet by IPPO [18]. The parameters of P&P that give us the best results are , and 150 iterations. Using the same parameter values as before deteriorates the performance significantly. Contrary to P&P, in this experiment tuning the parameters of IDBP can be avoided. We follow Section IV-A and set . Moreover, IDBP now requires only 75 iterations, half the number of P&P. The results are given in Table II. P&P is slightly inferior to IDBP, despite having twice the number of iterations and a burdensome parameter tuning. The results for house are also presented in Figure 2, where it can be seen that P&P reconstruction suffers from more artifacts (e.g. ringing artifacts near the right window). We repeat the last experiment with slightly increased noise level of , but still use the same parameter tuning for P&P, which is optimized for (i.e. , and the fixed ). This situation is often encountered in practice, when calibrating a system for all possible scenarios is impossible. The results are given in Table III. The IDBP clearly outperforms P&P in this case. This experiment clearly shows the main advantage of our algorithm over P&P as it is less sensitive to parameter tuning. V-B Image deblurring In the image deblurring problem, for a circular shift-invariant blur operator whose kernel is , both P&P and IDBP can be efficiently implemented using Fast Fourier Transform (FFT). In P&P, can be computed by ˇxk=F−1{F∗{h}F{y}+λσ2nF{ˇvk−1−ˇuk−1}|F{h}|2+λσ2n}. (24) where denotes the FFT operator, denotes the inverse FFT operator. Recall that is an ill-conditioned matrix. Therefore, In IDBP we replace with a regularized inversion of , using standard Tikhonov regularization, which is given in the Fourier domain by ~g≜F∗{h}|F{h}|2+ϵ⋅σ2n, (25) where is a parameter that controls the amount of regularization in the approximation of . Then, in IDBP can be computed by ~yk=F−1{~gF{y}}+~xk−F−1{~gF{h}F{~xk}}. (26) While a research on how to compute in this case is ongoing, we empirically observed that using a fixed value exhibits good performance with only slightly changing between the examined scenarios (i.e. different configurations of and ). Lastly, we remark that we use trivial initialization in both methods, i.e. in P&P and in IDBP. Similarly to inpainting, the computational cost of each iteration of P&P and IDBP is on the same scale, dominated by the complexity of the denoising operation. We consider four deblurring scenarios used as benchmarks in many publications (e.g. [5, 6]). The blur kernel and noise level of each scenario are summarized in Table IV. The kernels are normalized such that . Table V shows the results of IDBP, P&P and the dedicated algorithm BM3D-DEB [6]. For each scenario it shows the input PSNR (i.e. PSNR of ) and the BSNR (blurred signal-to-noise-ratio, defined as ) for each image, as well as the ISNR (improvement signal-to-noise-ratio) for each method and image, which is the difference between the PSNR of the reconstruction and the input PSNR. Note that in Scenario 3, is set slightly different for each image, ensuring that the BSNR is 40 dB. The parameters of BM3D-DEB are set exactly as in [6], where the same scenarios are examined. The parameters of P&P are optimized for each scenario. We have 0.85, 0.85, 0.9, 0.8 and 2, 3, 3, 1, for scenarios 1-4, respectively, and use 50 iterations. IDBP is easier to tune, as we set 7e-3, 4e-3, 8e-3, 2e-3 for scenarios 1-4, respectively, and fixed . Also, we use only 20 iterations for IDBP. From Table V, it is clear that IDBP and P&P are highly competitive (arguably, IDBP performs slightly better), and both are much better than BM3D-DEB, which is tailored for the deblurring problem. Figure 3 displays the results for Lena in Scenario 2. It can be seen that IDBP reconstruction is the sharpest (e.g. notice the lightning in the right eye). As mentioned in Section IV-A, we observed that the pairs of which give the best results indeed satisfy condition (16), while the pairs of that give bad results often violate this condition. This behavior is demonstrated for house image in Scenario 1. Figure 3(a) shows the PSNR vs. iteration for several pairs of . The left-hand side of (16) divided by the right-hand side is presented in Figure 3(b). If this division is less than 1, even for a single iteration, it means that the original condition in (III) is violated by the associated . Recall that even when the division is higher than 1 for all iterations, it does not guarantee satisfying (III). Therefore, a small margin should be kept. Vi Conclusion We presented the Iterative Denoising and Backward Projections (IDBP) method for solving linear inverse problems using denoising algorithms. This method, in its general form, has only a single parameter that should be set according to a given condition. We presented mathematical analysis of this strategy and provided a practical way to tune its parameter. Therefore, it can be argued that our method has less parameters that require tuning than the P&P method, especially for the noisy inpainting problem, where the single parameter of the IDBP can be just set to zero. Experiments demonstrated that IDBP is competitive with task-specific algorithms and with the P&P approach for inpainting and deblurring problems. References • [1] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-D transform-domain collaborative filtering,” IEEE Transactions on image processing, vol. 16, no. 8, pp. 2080–2095, 2007. • [2] M. Elad and M. Aharon, “Image denoising via sparse and redundant representations over learned dictionaries,” IEEE Transactions on Image processing, vol. 15, no. 12, pp. 3736–3745, 2006. • [3] A. Buades, B. Coll, and J.-M. Morel, “A review of image denoising algorithms, with a new one,” Multiscale Modeling & Simulation, vol. 4, no. 2, pp. 490–530, 2005. • [4] M. Delbracio and G. Sapiro, “Burst deblurring: Removing camera shake through fourier burst accumulation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2385–2393, 2015. • [5] J. A. Guerrero-Colón, L. Mancera, and J. Portilla, “Image restoration using space-variant Gaussian scale mixtures in overcomplete pyramids,” IEEE Transactions on Image Processing, vol. 17, no. 1, pp. 27–41, 2008. • [6] K. Dabov, A. Foi, V. Katkovnik, and K. O. Egiazarian, “Image restoration by sparse 3D transform-domain collaborative filtering,” in SPIE Electronic Imaging ’08, vol. 6812, (San Jose, California, USA), Jan. 2008. • [7] M. Bertalmio, G. Sapiro, V. Caselles, and C. Ballester, “Image inpainting,” in Proceedings of the 27th annual conference on Computer graphics and interactive techniques, pp. 417–424, ACM Press/Addison-Wesley Publishing Co., 2000. • [8] A. Criminisi, P. Pérez, and K. Toyama, “Region filling and object removal by exemplar-based image inpainting,” IEEE Transactions on image processing, vol. 13, no. 9, pp. 1200–1212, 2004. • [9] S. V. Venkatakrishnan, C. A. Bouman, and B. Wohlberg, “Plug-and-play priors for model based reconstruction,” in Global Conference on Signal and Information Processing (GlobalSIP), 2013 IEEE, pp. 945–948, IEEE, 2013. • [10] S. Sreehari, S. V. Venkatakrishnan, B. Wohlberg, G. T. Buzzard, L. F. Drummy, J. P. Simmons, and C. A. Bouman, “Plug-and-play priors for bright field electron tomography and sparse interpolation,” IEEE Transactions on Computational Imaging, vol. 2, no. 4, pp. 408–423, 2016. • [11] A. Rond, R. Giryes, and M. Elad, “Poisson inverse problems by the plug-and-play scheme,” Journal of Visual Communication and Image Representation, vol. 41, pp. 96–108, 2016. • [12] Y. Dar, A. M. Bruckstein, M. Elad, and R. Giryes, “Postprocessing of compressed images via sequential denoising,” IEEE Transactions on Image Processing, vol. 25, no. 7, pp. 3044–3058, 2016. • [13] T. Meinhardt, M. Möller, C. Hazirbas, and D. Cremers, “Learning proximal operators: Using denoising networks for regularizing inverse imaging problems,” ICCV, 2017. • [14] Y. Romano, M. Elad, and P. Milanfar, “The little engine that could: Regularization by denoising (red),” arXiv preprint arXiv:1611.02862, 2016. • [15] A. M. Teodoro, J. M. Bioucas-Dias, and M. A. Figueiredo, “Image restoration and reconstruction using variable splitting and class-adapted image priors,” in Image Processing (ICIP), 2016 IEEE International Conference on, pp. 3518–3522, IEEE, 2016. • [16] S. H. Chan, X. Wang, and O. A. Elgendy, “Plug-and-play ADMM for image restoration: Fixed-point convergence and applications,” IEEE Transactions on Computational Imaging, vol. 3, no. 1, pp. 84–98, 2017. • [17] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Foundations and Trends® in Machine Learning, vol. 3, no. 1, pp. 1–122, 2011. • [18] I. Ram, M. Elad, and I. Cohen, “Image processing using smooth ordering of its patches,” IEEE transactions on image processing, vol. 22, no. 7, pp. 2764–2774, 2013. You are adding the first comment! How to quickly get a good reply: • Give credit where it’s due by listing out the positive aspects of a paper before getting into which changes should be made. • Be specific in your critique, and provide supporting evidence with appropriate references to substantiate general statements. • Your comment should inspire ideas to flow and help the author improves the paper. The better we are at sharing our knowledge with each other, the faster we move forward. The feedback must be of minimum 40 characters and the title a minimum of 5 characters
Comparing Multi-class, Binary and Hierarchical Machine Learning Classification schemes for variable stars Upcoming synoptic surveys are set to generate an unprecedented amount of data. This requires an automatic framework that can quickly and efficiently provide classification labels for several new object classification challenges. Using data describing 11 types of variable stars from the Catalina Real-Time Transient Surveys (CRTS), we illustrate how to capture the most important information from computed features and describe detailed methods of how to robustly use Information Theory for feature selection and evaluation. We apply three Machine Learning (ML) algorithms and demonstrate how to optimize these classifiers via cross-validation techniques. For the CRTS dataset, we find that the Random Forest (RF) classifier performs best in terms of balanced-accuracy and geometric means. We demonstrate substantially improved classification results by converting the multi-class problem into a binary classification task, achieving a balanced-accuracy rate of $\sim$99 per cent for the classification of ${\delta}$-Scuti and Anomalous Cepheids (ACEP). Additionally, we describe how classification performance can be improved via converting a 'flat-multi-class' problem into a hierarchical taxonomy. We develop a new hierarchical structure and propose a new set of classification features, enabling the accurate identification of subtypes of cepheids, RR Lyrae and eclipsing binary stars in CRTS data. NurtureToken New! Token crowdsale for this paper ends in Authors Are you an author of this paper? Check the Twitter handle we have for you is correct. Subcategories #1. Which part of the paper did you read? #2. The paper contains new data or analyses that is openly accessible? #3. The conclusion is supported by the data and analyses? #4. The conclusion is of scientific interest? #5. The result is likely to lead to future research? User: Repo: Stargazers: 0 Forks: 0 Open Issues: 0 Network: 0 Subscribers: 0 Language: None Views: 0 Likes: 0 Dislikes: 0 Favorites: 0 0 Other Sample Sizes (N=): Inserted: Words Total: Words Unique: Source: Abstract: None 07/18/19 06:01PM 11,337 3,029 Tweets arxivml: "Comparing Multi-class, Binary and Hierarchical Machine Learning Classification schemes for variable stars", Zafiir… https://t.co/5RysDHDPL3 StatsPapers: Comparing Multi-class, Binary and Hierarchical Machine Learning Classification schemes for variable stars. https://t.co/a6chYUuGXl DARABigData: Publication 📝 alert! Comparing multi-class, binary and hierarchical machine learning classification schemes for variable stars 🌟 💫 by @DARABigData PhD student @ZHosenie is accepted by @RoyalAstroSoc #MNRAS https://t.co/YsNKEbY1AY
## Cryptology ePrint Archive: Report 2013/331 New Constructions and Applications of Trapdoor DDH Groups Yannick Seurin Abstract: Trapdoor Decisional Diffie-Hellman (TDDH) groups, introduced by Dent and Galbraith (ANTS 2006), are groups where the DDH problem is hard, unless one is in possession of a secret trapdoor which enables solving it efficiently. Despite their intuitively appealing properties, they have found up to now very few cryptographic applications. Moreover, among the two constructions of such groups proposed by Dent and Galbraith, only a single one based on hidden pairings remains unbroken. In this paper, we extend the set of trapdoor DDH groups by giving a construction based on composite residuosity. We also introduce a more restrictive variant of these groups that we name \emph{static} trapdoor DDH groups, where the trapdoor only enables to solve the DDH problem with respect to a fixed pair $(G,G^x)$ of group elements. We give two constructions for such groups whose security relies respectively on the RSA and the factoring assumptions. Then, we show that static trapdoor DDH groups yield elementary constructions of convertible undeniable signature schemes allowing delegatable verification. Using our constructions of static trapdoor DDH groups from the RSA or the factoring assumption, we obtain slightly simpler variants of the undeniable signature schemes of respectively Gennaro, Rabin, and Krawczyk (J. Cryptology, 2000) and Galbraith and Mao (CT-RSA 2003). These new schemes are conceptually more satisfying since they can strictly be viewed as instantiations, in an adequate group, of the original undeniable signature scheme of Chaum and van Antwerpen (CRYPTO~'89). Category / Keywords: public-key cryptography / trapdoor DDH group, hidden pairing, signed quadratic residues, convertible undeniable signature scheme Publication Info: An abridged version appears at PKC 2013. This is the full version.
• 研究报告 • ### 山西阳泉地区乔木林地上碳密度遥感估测 1. (1山西大学环境与资源学院, 太原030006; 2山西省林业调查规划院, 太原 030012) • 出版日期:2014-09-10 发布日期:2014-09-10 ### Estimation of aboveground carbon density for tree forests based on remote sensing data in Yangquan of Shanxi Province, China. LI Jiao1, ZHANG Hong1**, ZHANG Li-qiu1, HAN Jian-ping2 1. (1College of Environment and Resource Sciences, Shanxi University, Taiyuan 030006, China; 2 Shanxi Institute of Forestry Inventory and Planning, Taiyuan 030012, China) • Online:2014-09-10 Published:2014-09-10 Abstract: In order to investigate the feasibility of using remote sensing data to determinate the aboveground carbon density for tree forests, we estimated the biomass and carbon density of forests in Yangquan region of Shanxi Province by using the variable BEF (biomass expansion factor) method based on the forest inventory data. We then selected the NDVI, RVI, bands of SPOT images and environmental factors (elevation, slope, aspect etc.) as independent variables, and established an estimation model by using the enhanced BP neural network method to derive a distribution map of the carbon density. The biomass of tree forests in Yangquan was 552774 t, and the carbon density was 11.38 t·hm-2. Needleleaved forest, young forest and artificial forest had the largest biomass, while broadleaved forest, mature forest and natural forest had the largest carbon density. Model predictions of carbon density were successful, with the average relative errors and the average absolute values of relative error of simulation results for needleleaved forest, broadleaved forest, and mixed forest being 2.40%, 6.87%, -4.09%, and 6.83%, 2.77%, 3.99%, respectively. The simulation accuracy of tree forest distribution map derived by the enhanced BP neural network model was 85.05%, implying that artificial neural networks developed a new idea for fast and accurate estimation of forest carbon density, providing a scientific basis for future survey and management of forest resources.
jtdm_fit is used to fit a Joint trait distribution model. Requires the response variable Y (the sites x traits matrix) and the explanatory variables X.This function samples from the posterior distribution of the parameters, which has been analytically determined. Therefore, there is no need for classical MCMC convergence checks. jtdm_fit(Y, X, formula, sample = 1000) ## Arguments Y The sites x traits matrix containing community (weighted) means of each trait at each site. X The design matrix, i.e. sites x predictor matrix containing the value of each explanatory variable (e.g. the environmental conditions) at each site. formula An object of class "formula" (or one that can be coerced to that class): a symbolic description of the model to be fitted. The details of model specification are given under 'Details'. sample Number of samples from the posterior distribution. Since we sample from the exact posterior distribution, the number of samples is relative lower than MCMC samplers. As a rule of thumb, 1000 samples should provide correct inference. ## Value A list containing: model An object of class "jtdm_fit", containing the samples from the posterior distribution of the regression coefficients (B) and residual covariance matrix (Sigma), together with the likelihood of the model. Y A numeric vector of standard errors on parameters X_raw The design matrix specified as input X The design matrix transformed as specified in formula formula The formula specified as input ## Details A formula has an implied intercept term. To remove this use either y ~ x - 1 or y ~ 0 + x. See formula for more details of allowed formulae. ## Examples data(Y) data(X) m = jtdm_fit(Y = Y, X = X, formula = as.formula("~GDD+FDD+forest"), sample = 1000)
# Eilenberg-MacLane space (Redirected from Eilenberg–MacLane space) A space, denoted by $K(\pi,n)$, representing the functor $X\to H^n(X;\pi)$, where $n$ is a non-negative number, $\pi$ is a group which is commutative for $n>1$ and $H^n(X;\pi)$ is the $n$-dimensional cohomology group of a cellular space $X$ with coefficients in $\pi$. It exists for any such $n$ and $\pi$. The Eilenberg–MacLane space $K(\pi,n)$ can also be characterized by the condition: $\pi_i(K(\pi,n))=\pi$ for $i=n$ and $\pi_i(K(\pi,n))=0$ for $i\neq n$, where $\pi_i$ is the $i$-th homotopy group. Thus, $K(\pi,n)$ is uniquely defined up to a weak homotopy equivalence. An arbitrary topological space can, up to a weak homotopy equivalence, be decomposed into a twisted product of Eilenberg–MacLane spaces (see Postnikov system). The cohomology groups of $K(\pi,1)$ coincide with those of $\pi$. Eilenberg–MacLane spaces were introduced by S. Eilenberg and S. MacLane . #### References [1a] S. Eilenberg, S. MacLane, "Relations between homology and homotopy groups of spaces" Ann. of Math. , 46 (1945) pp. 480–509 [1b] S. Eilenberg, S. MacLane, "Relations between homology and homotopy groups of spaces. II" Ann. of Math. , 51 (1950) pp. 514–533 [2] R.E. Mosher, M.C. Tangora, "Cohomology operations and applications in homotopy theory" , Harper & Row (1968) [4] E.H. Spanier, "Algebraic topology" , McGraw-Hill (1966) How to Cite This Entry: Eilenberg–MacLane space. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Eilenberg%E2%80%93MacLane_space&oldid=32396
Theoretical uncertainties on the radius of low- and very-low mass stars [SSA] We performed an analysis of the main theoretical uncertainties that affect the radius of low- and very-low mass-stars predicted by current stellar models. We focused on stars in the mass range 0.1-1Msun, on both the zero-age main-sequence (ZAMS) and on 1, 2 and 5 Gyr isochrones. First, we quantified the impact on the radius of the uncertainty of several quantities, namely the equation of state, radiative opacity, atmospheric models, convection efficiency and initial chemical composition. Then, we computed the cumulative radius error stripe obtained by adding the radius variation due to all the analysed quantities. As a general trend, the radius uncertainty increases with the stellar mass. For ZAMS structures the cumulative error stripe of very-low mass stars is about $\pm 2$ and $\pm 3$ percent, while at larger masses it increases up to $\pm 4$ and $\pm 5$ percent. The radius uncertainty gets larger and age dependent if isochrones are considered, reaching for $M\sim 1$Msun about $+12(-15)$ percent at an age of 5 Gyr. We also investigated the radius uncertainty at a fixed luminosity. In this case, the cumulative error stripe is the same for both ZAMS and isochrone models and it ranges from about $\pm 4$ percent to $+7$ and $+9$($-5$) percent. We also showed that the sole uncertainty on the chemical composition plays an important role in determining the radius error stripe, producing a radius variation that ranges between about $\pm 1$ and $\pm 2$ percent on ZAMS models with fixed mass and about $\pm 3$ and $\pm 5$ percent at a fixed luminosity. E. Tognelli, P. Moroni and S. DeglInnocenti Wed, 14 Feb 18 35/68 Comments: 18 pages, 20 figures, 1 table; accepted for publication in MNRAS
## Laws of Nature • 2.4k There are few ideas as regularly abused as 'laws of nature'. The source of this abuse stems from treating such laws as what are called 'covering laws': laws that cover each and every case of action, no matter how minute or detailed. Crudely put, the idea is that for everything that happens in nature, there is law or laws that corresponds to it. But laws of nature are not of this kind. In fact, no law at all is of this kind. Why? Because laws - natural or otherwise - are, at best, limits on action, they specify the bounds within which action takes place. While nothing can 'violate' the laws (this is what lends them their universality), there is no sense in which the laws are always applicable. The philosopher of science Nancy Cartwright explains this idea best: "Covering-law theorists tend to think that nature is well-regulated; in the extreme, that there is a law to cover every case. I do not. I imagine that natural objects are much like people in societies. Their behaviour is constrained by some specific laws and by a handful of general principles, but it is not determined in detail, even statistically. What happens on most occasions is dictated by no law at all.... God may have written just a few laws and grown tired." (Cartwright, How The Laws of Physics Lie). The line in bold is worth emphasising: for the vast majority of action, the laws are simply silent: they neither specify anything positive nor negative. Bike riding laws for example, while universal in whichever state they apply, simply have nothing to say about spheres of action that have nothing to do with bike riding. The same is true of the 'laws of nature', which while universal and inviolable, are for the vast majority of phenomena simply inapplicable. The point here is to affirm the universality of laws of nature, while denying that they function as covering-laws. One prominent field in which covering-law error is most apparent is in popular - and wrong - (mis)understandings of evolution, where is it often said that, for example, '(the laws of) natural selection govern all of evolution'. While it is true that nothing can violate natural selection (maladaptions will likely lead to extinction), it is also the case that most biological variation is 'adaptively neutral': there are variations - perhaps the majority of them - that are neither adaptive nor maladaptive, and to which natural selection remains 'blind'. Again, the point is that while natural selection is both universal and inviolable in biology, nothing about this universality or inviolability means that natural selections 'governs' each and every aspect of a species. Laws in this sense are more 'negative' than they are 'positive': they say what cannot be done, not 'determine' what can be; like natural selection, such laws are simply 'blind' to most of what happens in the universe. -- It's also possible to deny that laws of nature have any place in science whatsoever, insofar as they might simply be considered as residues of theology [pdf], as Cartwright actually does, but I'm more interested in circumscribing the scope of such laws than denying them outright. • 6k So would you be inclined to agree that the below is an example of this kind of misunderstanding? One could argue that happiness has evolved into life as a survival mechanism. In a general sense, the things that make us happy revolve around concepts that are central to our survival. Essentially, that pleasure and pain are the only motivators of our species and they have evolved in ways that increase our chances of surviving. Because, if so, I perfectly agree with you. However, there are many threads, and many posts, that argue along these lines, with respect to how evolution does mandate, or at least favour, particular kinds of attributes or elements of human nature. In fact they’re writ large in a great deal of popular philosophy and evolutionary biology. • 2.4k It could be an example of such an misunderstanding: the question after all is an empirical one - is there evidence to show that happiness evolved into life as a survival mechanism? And, even if there was, is there evidence to show that it remains a survival mechanism? It is well known that products of evolution - by whatever mechanism - have a knack of being coopted by other processes, for other ends than that which they were originally evolved for, which can in turn feed-back upon the evolution of that trait. Certainly, any a priori attribution of such and such a trait to survival and only survival is bad science through and through - which is to say, not a fault of the science, but of certain of its interpreters. And note that the way to correct this is through the science itself, not through anti-scientific screeds. • 6k Fair enough. Although one wonders what kind of analysis might yield information that validates, or falsifies, the hypothesis that ‘the propensity for happiness is determined by evolutionary factors’. • 3k There are few ideas as regularly abused as 'laws of nature'. First, of course, there are no "laws of nature." There are only general descriptions of how things behave. The philosopher of science Nancy Cartwright explains this idea best: "Covering-law theorists tend to think that nature is well-regulated; in the extreme, that there is a law to cover every case. I do not. I imagine that natural objects are much like people in societies. Their behaviour is constrained by some specific laws and by a handful of general principles, but it is not determined in detail, even statistically. What happens on most occasions is dictated by no law at all." (Cartwright, How The Laws of Physics Lie). Wouldn't a materialist say that everything - from the behavior of subatomic particles, to consciousness, to the behavior of galaxies - is covered by, controlled by, the laws of physics? Even discounting that, can we say that, even though a particular phenomenon may not be controlled by a particular law of nature, everything is controlled by some law of nature? • 2.4k There are only general descriptions of how things behave. The curious thing about the laws is that they are almost entirely undescriptive. In fact, one of the most interesting things that Cartwirght demonstrates is that there is an inverse relation to how true a law is, and how much explanatory power it has. Her discussion of this point is worth quoting at length: "The laws of physics do not provide true descriptions of reality. ... [Consider] the the law of universal gravitation [F=Gmm′/r^2] ... Does this law truly describe how bodies behave? Assuredly not. It is not true that for any two bodies the force between them is given by the law of gravitation. Some bodies are charged bodies, and the force between them is not Gmm′/r^2. For bodies which are both massive and charged, the law of universal gravitation and Coulomb's law (the law that gives the force between two charges) interact to determine the final force. But neither law by itself truly describes how the bodies behave. No charged objects will behave just as the law of universal gravitation says; and any massive objects will constitute a counterexample to Coulomb's law. These two laws are not true; worse, they are not even approximately true. In the interaction between the electrons and the protons of an atom, for example, the Coulomb effect swamps the gravitational one, and the force that actually occurs is very different from that described by the law of gravity. There is an obvious rejoinder: I have not given a complete statement of these two laws, only a shorthand version. [There ought to be] an implicit ceteris paribus ('all things equal') modifier in front, which I have suppressed. Speaking more carefully ... If there are no forces other than gravitational forces at work, then two bodies exert a force between each other which varies inversely as the square of the distance between them, and varies directly as the product of their masses I will allow that this law is a true law, or at least one that is held true within a given theory. But it is not a very useful law. One of the chief jobs of the law of gravity is to help explain the forces that objects experience in various complex circumstances. This law can explain in only very simple, or ideal, circumstances. It can account for why the force is as it is when just gravity is at work; but it is of no help for cases in which both gravity and electricity matter. Once the ceteris paribus modifier has been attached, the law of gravity is irrelevant to the more complex and interesting situations." (How the Laws of Physics Lie. Wouldn't a materialist say that everything - from the behavior of subatomic particles, to consciousness, to the behavior of galaxies - is covered by, controlled by, the laws of physics? A vulgar, unreflective materialism, maybe. But I can imagine few things more theologically charged than the idea that 'there is a law that covers everything'; Materialism ought to - and can do - better than such vulgarities. The physicist Paul Davis writes nicely on this: "The very notion of physical law has its origins in theology. The idea of absolute, universal, perfect, immutable laws comes straight out of monotheism, which was the dominant influence in Europe at the time science as we know it was being formulated by Isaac Newton and his contemporaries. Just as classical Christianity presents God as upholding the natural order from beyond the universe, so physicists envisage their laws as inhabiting an abstract transcendent realm of perfect mathematical relationships. Furthermore, Christians believe the world depends utterly on God for its existence, while the converse is not the case. Correspondingly, physicists declare that the universe is governed by eternal laws, but the laws remain impervious to events in the universe. I think this entire line of reasoning is now outdated and simplistic". https://www.theguardian.com/commentisfree/2007/jun/26/spaceexploration.comment The paper I mentioned and linked to in the OP at the end by Cartwright ("No Gods, No Laws") similarly makes the case that physical laws can only make sense with the invocation of a God, which is all the more reason to treat physical laws with extreme prejudice. • 3k The laws of physics do not provide true descriptions of reality. ... [Consider] the the law of universal gravitation [F=Gmm′/r^2] ... Does this law truly describe how bodies behave? Assuredly not. It is not true that for any two bodies the force between them is given by the law of gravitation. Boy, this, along with the rest of the quoted text, is really wrong, or at least trivially correct. A quibble about language. As I implied in the first line of my post, I don't find the idea of a law of nature a very useful one, and I agree it's misleading, but once we've decided to discuss things in those terms, I, and most other people with an interest in science, have no problem applying the concept. Come on - just because the law of universal gravitation doesn't necessarily describe all the forces on a massive object, doesn't mean it doesn't tell us something important about how matter behaves. I'm guessing you disagree. Please explain. • 1.8k Just on the subject of descriptiveness (and sorry if I'm inappropriately fisking here)... It can account for why the force is as it is when just gravity is at work; but it is of no help for cases in which both gravity and electricity matter — Cartwright Didn't she already supply the solution here: For bodies which are both massive and charged, the law of universal gravitation and Coulomb's law (the law that gives the force between two charges) interact to determine the final force — Cartwright That is, the laws of physics together describe how things behave. Which means that this is wrong: Once the ceteris paribus modifier has been attached, the law of gravity is irrelevant to the more complex and interesting situations Surely we can, and do, apply multiple laws? • 2.4k Come on - just because the law of universal gravitation doesn't necessarily describe all the forces on a massive object, doesn't mean it doesn't tell us something important about how matter behaves. I'm guessing you disagree. There's not really much to disagree - or agree - with though. "Tells us something important". Sure, Ok, as far as a vague 'something important' goes. • 3k "Tells us something important". Sure, Ok, as far as a vague 'something important' goes. How is the something important that the law of universal gravitation describes vague? Two bodies with the property we call "mass" tend to move towards each other in a regular way which can be quantified, whether we describe that tendency as a force or a bending of space-time. • 6k Certainly, anya priori attribution of such and such a trait to survival and only survival is bad science through and through - which is to say, not a fault of the science, but of certain of its interpreters. And note that the way to correct this is through the science itself, not through anti-scientific screeds. What prompted this thread was one of my frequent criticisms of what I refer to as 'Darwinian rationalism' - that is on of the 'anti-science screed' you're referring to. And what I said to prompt it was a remark about how evolutionary theory tends to rationalise every human attribute in terms of 'what enhances survival'. You see posts all the time about this - the one I quoted was an example, but there are countless more. And it's because it's the 'scientific' way of understanding human nature, right? None of this religious nonsense - we're for Scientific Facts. So let's not try and drag it into abstruse metaphysics. Actually one of the better commentators on this is the very prim and proper English philosopher, Mary Midgley - a personal favourite of Dawkins! - whose book Evolution as a Religion lays it out rather nicely. Helped by the fact that she is at least a 'non-theist', at least certainly has no ID ax to grind. She just knows scientistic nonsense when she sees it, and she sees a lot of it in pop darwinian philosophizing, which is endemic in the Academy nowadays. • 2.4k Surely we can, and do, apply multiple laws? Heh, I was waiting for this rejoinder, but didn't want to drop an even bigger quote than I did, because this is exactly what she addresses in the section right after (sorry for the long quote but it's just easier this way and I'm lazy): "The vector addition story is, I admit, a nice one. But it is just a metaphor. We add forces (or the numbers that represent forces) when we do calculations. Nature does not ‘add’ forces. ... [On the vector addition account], Coulomb's law and the law of gravity come out true because they correctly describe what influences are produced—here, the force due to gravity and the force due to electricity. The vector addition law then combines the separate influences to predict what motions will occur. This seems to me to be a plausible account of how a lot of causal explanation is structured. But as a defence of the truth of fundamental laws, it has two important drawbacks. First, in many cases there are no general laws of interaction. Dynamics, with its vector addition law, is quite special in this respect. This is not to say that there are no truths about how this specific kind of cause combines with that, but rather that theories can seldom specify a procedure that works from one case to another. Without that, the collection of fundamental laws loses the generality of application which [vector addition] hopes to secure. In practice engineers handle irreversible processes with old fashioned phenomenological laws describing the flow (or flux) of the quantity under study. Most of these laws have been known for quite a long time. For example there is Fick's law... Equally simple laws describe other processes: Fourier's law for heat flow, Newton's law for sheering force (momentum flux) and Ohm's law for electric current. Each of these is a linear differential equation in t, giving the time rate of change of the desired quantity (in the case of Fick's law, the mass). Hence a solution at one time completely determines the quantity at any other time. Given that the quantity can be controlled at some point in a process, these equations should be perfect for determining the future evolution of the process. They are not. The trouble is that each equation is a ceteris paribus law. It describes the flux only so long as just one kind of cause is operating. [Vector addition] if it works, buys facticity, but it is of little benefit to (law) realists who believe that the phenomena of nature flow from a small number of abstract, fundamental laws. The fundamental laws will be severely limited in scope. Where the laws of action go case by case and do not fit a general scheme, basic laws of influence, like Coulomb's law and the law of gravity, may give true accounts of the influences that are produced; but the work of describing what the influences do, and what behaviour results, will be done by the variety of complex and ill-organized laws of action." • 2.4k Two bodies with the property we call "mass" tend to move towards each other in a regular way which can be quantified, But the point is they don't, except in highly idealised situations, 'do so in a regular way that can be quantified'. Your statement is literally untrue for all but a very, very small number of situations, and situations almost definitely artificial at that. • 2.4k And it's because it's the 'scientific' way of understanding human nature, right? But it is not the scientific way of understanding nature. That's the point. You'd like it to be the 'scientific way' of understanding nature, because it provides more fuel for your anti-science proclivities. But so much the worse for those proclivities - and the pseudo-science it militates against. A pox on both houses. • 1.8k [Vector addition] if it works, buys facticity, but it is of little benefit to (law) realists who believe that the phenomena of nature flow from a small number of abstract, fundamental laws. — Cartwright I guess I was thinking that facticity--which the laws give us, or can give us--does amount to descriptiveness, even if they don't amount to the metaphysical grounding that the law realists claim for them. • 2.4k Yeah, it's a careful line to tread. Cartwright's position - which makes alot of sense to me, is anti-realism about laws, but realism about (scientific) entities. The case for entity realism is perhaps another topic in itself, but as far as the status of laws goes, their usefulness is, on her account, largely epistemic: "I think that the basic laws and equations of our fundamental theories organise and classify our knowledge in an elegant and efficient manner, a manner that allows us to make very precise calculations and predictions. The great explanatory and predictive powers of our theories lies in their fundamental laws. Nevertheless the content of our scientific knowledge is expressed in the phenomenological laws" [which differ from 'fundamental laws', in their being context specific - SX]. She comes close to the famous scientific anti-realism of Bas van Fraassen, who is an anti-realist about entities, precisely because he believes that it's all just a case of organising and classifying our knowledge. But Cartwright's point is that if you pay attention to the peculiar status of laws, one can admit this without being an anti-realist about entities. @Banno put it once nicely in a post long ago - something like: the point of scientific equations is to add up nicely. It struck me as barbarous at the time, but I've come to see it as making a great deal of sense. • 1k There are things about force as a concept -as a real abstraction- which make it highly amenable to analysis with vectors. Fundamentally, this comes down to motion having a direction as well as a magnitude. A vector just is a quantity with a direction and a magnitude, and a force describes a propensity to shunt with a given strength in a given direction - the same is true of the more abstract force fields which ascribe a strength of movement and a direction to points in space. If changes in motion are equivalent to changes in direction and changes in a magnitude, is it then surprising that a mathematical language that allows us to relate changes in direction and changes in magnitude to other changes in direction and changes in magnitude allows for the description of motions in general? Forces enter the picture as what drives changes in motion. This is a restatement of Newton's first law. Forces add as vectors - changes in direction and changes in magnitude interact together as vector changes - this is essentially Newton's second law - multiply the changes induced in motion by vectors (see law 1) by the mass of the changing thing (or the mass as a function of time, or the mass as a function of momentum and energy) and you get the full statement of it. Newton's third law is what interprets forces as body-body interactions, specifically a force projecting from A to B induces/is equivalent (in magnitude but opposite direction) from a force projecting from B to A. This is the same as saying that the relative position vector from A to B, $v_{AB}$ is equal to $-v_{BA}$ Newton's laws aren't just formal predictive apparatuses like (most) statistical models, they're based on physical understanding. They aren't just mathematical abstractions either, the use of mathematics in physics is constrained by (as physicists put it) 'physical meaning'. The mathematics doesn't care that the Coulomb Force law (alone) predicts that electrons spiral towards nuclei. The physics does. • 1.2k Because laws - natural or otherwise - are, at best, limits on action, they specify the bounds within which action takes place. While nothing can 'violate' the laws (this is what lends them their universality), there is no sense in which the laws are always applicable. Laws are models of the way things are. If there are limits in the laws, then that is a representation of the limits in nature. The philosopher of science Nancy Cartwright explains this idea best: "Covering-law theorists tend to think that nature is well-regulated; in the extreme, that there is a law to cover every case. I do not. I imagine that natural objects are much like people in societies. Their behaviour is constrained by some specific laws and by a handful of general principles, but it is not determined in detail, even statistically. What happens on most occasions is dictated by no law at all.... God may have written just a few laws and grown tired." (Cartwright, How The Laws of Physics Lie). So people don't have any reason for what they do outside of some specific laws and a handful of general principles? Nonsense. Their behavior is constrained by the shape and size of their body and the scope of their memory. What happens in every occasion is dictated by the causes that came before any said occasion. We just haven't explained every natural causal force and its related effect - so it can seem like there aren't any laws for certain occasions. We just haven't gotten around to explaining every causal relationship. Be patient. • 453 I would say that we use models to understand reality usually for some purpose. And some models appear to be so obvious and are so useful that we define them as to be laws. Of course these laws just abide to their context. Newtonian physics works just fine for nearly all questions, but not for everything, and hence we have to have things like relativity. Now could our understanding change from the present? Of course! Some even more neat and useful theory could replace the existing ones, but it likely wouldn't be proving the earlier "laws" false or erroneous, but that the earlier theories said to be laws haven't covered everything and that there's simply a different point of view. • 3.2k Because laws - natural or otherwise - are, at best, limits on action, they specify the bounds within which action takes place. Really? No way, unless infinity is considered a boundary. The same is true of the 'laws of nature', which while universal and inviolable, And you know this how? '. While it is true that nothing can violate natural selection (maladaptions will likely lead to extinction), And the proof is? Again, the point is that while natural selection is both universal and inviolable in biology, nothing about this universality or inviolability means that natural selections 'governs' each and every aspect of a species. I guess this means that the universal and inviolable can be violated? It seems that the Laws of Nature is completely fabricated. They are just sweeping statements that are thrown around to justify some particular point of view. There is zero evidence of any sort that there are any laws governing the completely unpredictable behavior of life, yet science loves to extend some simple models of matter to the behavior of life. • 3k I guess this means that the universal and inviolable can be violated? It seems that the Laws of Nature is completely fabricated. They are just sweeping statements that are thrown around to justify some particular point of view. There is zero evidence of any sort that there are any laws governing the completely unpredictable behavior of life, yet science loves to extend some simple models of matter to the behavior of life. Rich So, in addition to denying the validity of quantum mechanics and relativity, you also deny the validity of Darwin's theory of evolution by natural selection. Is that correct? Do you also deny the fact of evolution, whatever the mechanism? Do you believe that all life on earth shares a common genetic heritage because all of it, all of us, share a common ancestor? bold italic underline strike code quote ulist image url mention reveal
Pankaj Tanwar Published on # Pagination with Cassandra - let’s deal with paging large queries in python 🐍 ––– views Authors Pagination in Cassandra is one of the hair-pulling problems. Sometime back, I encountered a use case where I had to implement pagination in Cassandra. Most of us have grown up in a beautiful world, surrounded by typical relational databases. Like any other developer, I always had trust in my friends LIMIT , OFFSET and BETWEEN for implementing pagination. Howdy-hooo! But my happiness didn't last long when I found, there is no such thing in Cassandra at all. With a heavy heart, I double-checked but no luck. LIMIT with OFFSET and BETWEEN parameters are not available in Cassandra query. For eg - I can get first 10 rows using LIMIT operator but not the next 10 rows as there is no OFFSET. Even, a column in Cassandra is not the same as the column in RDBMS. It took a while for my mind to digest the way monster Cassandra works. Well, I will walk you through my story of this really interesting war with Cassandra paging for large queries and journey to conquer it. (with actual code) BONUS - you will learn, how smart is the Cassandra SELECT * implementation. ## Problem Statement Our use case was pretty simple. Pull everything from a Cassandra table, heavy personalized processing on each row, and keep going. Before you declare me the dumbest person of the year, let me explain why it was not as simple to implement as it looks. ## Initial approach SELECT * FROM my_cute_cassandra_table; This gives me back all rows over which I can iterate and get my job done. Easy-peasy right? NO. Let's first understand how smartly SELECT * query is implemented in Cassandra. Suppose, you have a table with ~1B rows and you run - SELECT * FROM my_cute_cassandra_table; and store the result in a variable. Hold on, Its not gonna eat all the RAM. Loading all the 1B rows into the memory is painful and foolish. Unlike a silly approach, Cassandra does it in a very smart way with fetching data in pages. so you don't have to worry about the memory. It just fetches a chunk from the database (~ 5000 rows) and returns a cursor for results on which you can iterate, to see the rows. Once our iteration reaches close to 5000, it again fetches the next chunk of 5000 rows internally and adds it to the result cursor. It does it so brilliantly that we don’t even feel this magic happening behind the scene. query = "SELECT * FROM my_cute_cassandra_table;" results = session.execute(query) for data in results: # processing the data process_data_here(data) I know, that’s a really smart approach but It became a bottleneck for us. Whatever data we were fetching from Cassandra, we needed to put in some extensive processing over each payload which itself required some time. As iterating over the chunk took some time and till it reached the end of the chunk, Cassandra thought the connection was not being used and closed the connection automatically yelling, its timeout. As we had a lot of data to fetch and process, we needed a way to paginate the data in a smart way to streamline the issue. ## How did I solve it? We deep-dived into Cassandra configurations and found that whenever Cassandra returns a result cursor, it brings a page state with it. Page state is nothing but a page number to help Cassandra remember which chunk to fetch next. from cassandra.query import SimpleStatement query = "SELECT * FROM my_cute_cassandra_table;" statement = SimpleStatement(query, fetch_size=100)results = session.execute(statement) # save page statepage_state = results.paging_state for data in results: process_data_here(data) We changed our approach a bit in a tricky way. Based on our use case, we set the fetch size (it is the size of the chunk, manually given by us to Cassandra). And when we got the result cursor, we saved the page state in a variable. We put a check on the counter. If it exceeds the manual chunk size, it breaks and again fetches a fresh new chunk with the page state already saved. from cassandra.cluster import Clusterfrom cassandra.auth import PlainTextAuthProviderfrom cassandra.query import SimpleStatement # connection with cassandracluster = Cluster(["127.0.0.1"], auth_provider=PlainTextAuthProvider(username="pankaj", password="pankaj")) session = cluster.connect() # setting keyspacesession.set_keyspace("my_keyspace") # set fetch sizefetch_size = 100 # It will print first 100 recordsnext_page_available = Truepaging_state     = Nonedata_count     = 0 while next_page_available is True: # fetches a new chunk with given page state result = fetch_a_fresh_chunk(paging_state) paging_state = results.paging_state for result in results: # process payload here..... # payload processed data_count += 1 # once we reach fetch size, we stop cassandra to fetch more chunk, internally if data_count == fetch_size: i = 0 break # fetches a fresh chunk with given page state def fetch_a_fresh_chunk(paging_state = None) query = "SELECT * FROM my_cute_cassandra_table;" statement = SimpleStatement(query, fetch_size = fetch_size) results = session.execute(statement, paging_state=paging_state) It helped us restrict Cassandra to internally fetch new chunks to avoid connection timeout. So now, that I end my ramblings, I hope you have learned something new. Correct me If you find any technical inaccuracies. If you have read till here, I guess you will like my other write ups too.
# Recent questions and answers in Chemistry Questions from: Chemistry ### Give the Advantages of syntbeti.c detergents over soaps To see more, click for all the questions in this category.
## maintenance This element is found at these locations (XPath): eml:eml/dataset/maintenance The dataset/maintenance/description element should be used to document changes to the data tables or metadata, including update frequency. The change history can also be used to describe alterations in static documents. The description element (TextType) can contain both formatted and unformatted text blocks. Example 16: maintenance <maintenance> <description> <para> Data are updated annually at the end of the calendar year. </para> </description> </maintenance>
## Tuesday, November 09, 2010 ### Remember the Alamo The Alamo in San Antonio is where a small force of Texas rebels held off the huge Mexican army. The University of Texas at San Antonio was a brief stop on the way to Florida. Talk to the Joint Meeting October 22-24 of the American Physical Society (APS), American Association of Physics Teachers (AAPT) and National Society of Hispanic Physicists: GM=tc^3 Cosmology and the Moon Relativity suggests an expanding cosmology of scale R = ct, where t is age of the Universe. Gravitation would then require that c be further related to t by: GM = tc^3. Where G and M are mass and gravitational constant, this simple expression predicts data from the microwave background, including 4.507034% baryonic matter and a stable density $\Omega$ = 1. The non-linear increase in Type Ia supernova redshifts may be precisely predicted without repulsive energies. (Riofrio, 2004) Prediction of a changing c may be tested with modern lanterns and the distant hilltop of the Moon. Our Lunar Laser Ranging Experiment has measured the Moon's semimajor axis increasing at 3.82 ± .07 cm/yr, anomalously high. The Mansfield sediment (Bills, Ray 2000) measures lunar recession at 2.9 ± 0.6 cm/yr. More recent work accurately measures a recession rate of 2.82 ± .08 cm/yr. LLRE differs from independent experiments by 10 $\sigma$. If the speed of laser light were decaying, the Moon's apparent distance is predicted to increase by 0.935 cm/yr. An anomaly in the Moon's orbit is precisely accounted for. This interesting result may have importance for cosmology, shedding light on puzzles of "dark energy.'' In Planck units, this may be summarised as: M = R = t. The audience at San Antonio was very intent and interested. If a scientist ever feels surrounded by adversity, remember the Alamo! Labels: , Kea said... Good to hear you are speaking again. Thankfully, a few young people are not afraid to check simple equations. 8:14 PM CarlBrannen said... Hopefully better than the Alamo as none of the defenders survived (the few who surrendered were executed anyway). 2:26 PM L. Riofrio said... We did much better than the Alamo defenders. San Antonio and the audience were very friendly! 6:37 PM Michael Olivia said... Dear Friend,
# Amoeba optimization method using F# My favorite column in MSDN Magazine is Test Run; it was originally focused on testing, but the author, James McCaffrey, has been focusing lately on topics revolving around numeric optimization and machine learning, presenting a variety of methods and approaches. I quite enjoy his work, with one minor gripe –his examples are all coded in C#, which in my opinion is really too bad, because the algorithms would gain much clarity if written in F# instead. Back in June 2013, he published a piece on Amoeba Method Optimization using C#. I hadn’t seen that approach before, and found it intriguing. I also found the C# code a bit too hairy for my feeble brain to follow, so I decided to rewrite it in F#. In a nutshell, the Amoeba approach is a heuristic to find the minimum of a function. Its proper respectable name is the Nelder-Nead method. The reason it is also called the Amoeba method is because of the way the algorithm works: in its simple form, it starts from a triangle, the “Amoeba”; at each step, the Amoeba “probes” the value of 3 points in its neighborhood, and moves based on how much better the new points are. As a result, the triangle is iteratively updated, and behaves a bit like an Amoeba moving on a surface. Before going into the actual details of the algorithm, here is how my final result looks like. You can find the entire code here on GitHub, with some usage examples in the Sample.fsx script file. Let’s demo the code in action: in a script file, we load the Amoeba code, and use the same function the article does, the Rosenbrock function. We transform the function a bit, so that it takes a Point (an alias for an Array of floats, essentially a vector) as an input, and pass it to the solve function, with the domain where we want to search, in that case, [ –10.0; 10.0 ] for both x and y: #load "Amoeba.fs" open Amoeba open Amoeba.Solver let g (x:float) y = 100. * pown (y - x * x) 2 + pown (1. - x) 2 let testFunction (x:Point) = g x.[0] x.[1] solve Default [| (-10.,10.); (-10.,10.) |] testFunction 1000 Running this in the F# interactive window should produce the following: val it : Solution = (0.0, [|1.0; 1.0|]) > The algorithm properly identified that the minimum is 0, for a value of x = 1.0 and y = 1.0. Note that results may vary: this is a heuristic, which starts with a random initial amoeba, so each run could produce slightly different results, and might at times epically fail. So how does the algorithm work? I won’t go into full detail on the implementation, but here are some points of interest. At each iteration, the Amoeba has a collection of candidate solutions, Points that could be a Solution, with their value (the value of the function to be minimized at that point). These points can be ordered by value, and as such, always have a best and worst point. The following picture, which I lifted from the article, shows what points the Amoeba is probing: The algorithm constructs a Centroid, the average of all current solutions except the worst one, and attempts to replace the Worst with 3 candidates: a Contracted, Reflected and Expanded solution. If none of these is satisfactory (the rules are pretty straightforward in the code), the Amoeba shrinks towards the Best solution. In other words, first the Amoeba searches for new directions to explore by trying to replace its current Worst solution, and if no good change is found, it shrinks on itself, narrowing down around its current search zone towards its current Best candidate. If you consider the diagram, clearly all transformations are a variation on the same theme: take the Worst solution and the Centroid, and compute a new point by stretching it by different values: –50% for contraction, +100% for reflection, and +200% for expansion. For that matter, the shrinkage can also be represented as a stretch of –50% towards the Best point. This is what I ended up with: type Point = float [] type Settings = { Alpha:float; Sigma:float; Gamma:float; Rho:float; Size:int } let stretch ((X,Y):Point*Point) (s:float) = Array.map2 (fun x y -> x + s * (x - y)) X Y let reflected V s = stretch V s.Alpha let expanded V s = stretch V s.Gamma let contracted V s = stretch V s.Rho I defined Point as an alias for an array of floats, and a Record type Settings to hold the parameters that describe the transformation. The function stretch takes a pair of points and a float (by how much to stretch), and computes the resulting Point by taking every coordinate, and going by a ratio s from x towards y. From then on, defining the 3 transforms is trivial; they just use different values from the settings. Now that we have the Points represented, the other part of the algorithm requires evaluating a function at each of these points. That part was done with a couple types: type Solution = float * Point type Objective = Point -> float type Amoeba = { Dim:int; Solutions:Solution [] } // assumed to be sorted by fst value member this.Size = this.Solutions.Length member this.Best = this.Solutions.[0] member this.Worst = this.Solutions.[this.Size - 1] let evaluate (f:Objective) (x:Point) = f x, x let valueOf (s:Solution) = fst s A Solution is a tuple, a pair associating a Point and the value of the function at that point. The function we are trying to minimize, the Objective, takes in a point, and returns a float. We can then define an Amoeba as an array of Solutions, which is assumed to be sorted. Nothing guarantees that the Solutions are ordered, which bugged me for a while; I was tempted to make that type private or internal, but this would have caused some extra hassle for testing, so I decided not to bother with it. I added a few convenience methods on the Amoeba, to directly extract the Best and Worst solutions, and two utility functions, evaluate, which associates a Point with its value, and its counter-part, valueOf, which extracts the value part of a Solution. The rest of the code is really mechanics; I followed the algorithm notation from the Wikipedia page, rather than the MSDN article, because it was actually a bit easier to transcribe, built the search as a recursion (of course), which iteratively transforms an Amoeba for a given number of iterations. For good measure, I introduced another type, Domain, describing where the Amoeba should begin searching, and voila! We are done. In 91 lines of F#, we got a full implementation. ## Conclusion What I find nice about the algorithm is its relative simplicity. One nice benefit is that it doesn’t require a derivative. Quite often, search algorithms use a gradient to evaluate the slope and decide what direction to explore. The drawback is that first, computing gradients is not always fun, and second, there might not even be a properly defined gradient in the first place. By contrast, the Amoeba doesn’t require anything – just give it a function, and let it probe. In some respects, the algorithm looks to me like a very simple genetic algorithm, maintaining a population of solutions, breeding new ones and letting a form of natural selection operate. Of course, the price to pay for this simplicity is that it is a heuristic, that is, there is no guarantee that the algorithm will find a good solution. From my limited experimentations with it, even in simple cases, failures were not that unusual. If I get time for this, I think it would be fun to try launching multiple searches, and stopping when, say, the algorithm has found the same Best solution a given number of times. Also, note that in this implementation, 2 cases are not covered: the case where the function is not defined everywhere (some Points might throw an exception), and the case where the function doesn’t have a minimum. I will let the enterprising reader think about how that could be handled!
# Matrix element of the derivative of an operator between its eigenstates + 1 like - 0 dislike 393 views I want to calculate a matrix element of the derivative of the Hamiltonian between two eigenstates $\alpha$ and $\beta$ given by $u^\alpha(x,y)$ and $u^\beta(x,y)$ (called the Bloch functions): $$\langle \alpha | \frac{\partial \hat{H}}{\partial k_j}|\beta\rangle$$This is taken from Eq. (3.6) in the paper by Komoto, Topological Invariant and the Quantization of the Hall conductance. In the same paper in Eq. (3.4), they defined the matrix element of v between states $\alpha$ and $\beta$ as: $$(v)_{\alpha \beta}=\delta_{k_1 k'_1} \delta_{k_2 k'_2} \int_0^{qa} dx \int_0^b dy u_{k_1 k_2}^{\alpha^*} v u_{k'_1 k'_2}^{\beta}$$ The result from the paper is: $(E^\beta -E^\alpha)\langle \alpha| \frac{\partial u^\beta}{\partial k_j} \rangle=-(E^\beta - E^\alpha)\langle \frac{\partial u^\alpha}{\partial k_j}| \beta\rangle$. I tried using the expression for matrix element between states given in the paper but cannot obtain their result. I think there has to be an integration by parts involved in order to get $(E^\beta - E^\alpha)$ but integration by parts requires the presence of an integral over $k_j$ which is not the case here. Consider the general matrix element $E^{\beta}\langle \alpha |\nabla \beta \rangle$. This can be simplified as: $$E^{\beta}\langle \alpha |\nabla \beta \rangle=\langle \alpha |\nabla (\hat{H} \beta) \rangle=\langle \alpha |\nabla \hat{H}| \beta \rangle + \langle \alpha |\hat{H} \nabla \beta \rangle=\langle \alpha |\nabla \hat{H}| \beta \rangle + E^{\alpha}\langle \alpha |\nabla \beta \rangle$$ Hence, $\langle \alpha |\nabla \hat{H}| \beta \rangle=(E^{\beta} - E^{\alpha})\langle \alpha |\nabla \beta \rangle$. Similarly if one repeats the calculation starting from $E^{\alpha}\langle \beta |\nabla \alpha \rangle$, one gets $\langle \alpha |\nabla \hat{H}| \beta \rangle=-(E^{\beta} - E^{\alpha})\langle \nabla \alpha | \beta \rangle$. This is exactly what is required to be proved. Please use answers only to (at least partly) answer questions. To comment, discuss, or ask for clarification, leave a comment instead. To mask links under text, please type your text, highlight it, and click the "link" button. You can then enter your link URL. Please consult the FAQ for as to how to format your post. This is the answer box; if you want to write a comment instead, please use the 'add comment' button. Live preview (may slow down editor)   Preview Your name to display (optional): Email me at this address if my answer is selected or commented on: Privacy: Your email address will only be used for sending these notifications. Anti-spam verification: If you are a human please identify the position of the character covered by the symbol $\varnothing$ in the following word:p$\hbar\varnothing$sicsOverflowThen drag the red bullet below over the corresponding character of our banner. When you drop it there, the bullet changes to green (on slow internet connections after a few seconds). To avoid this verification in future, please log in or register.
# Definition:Topological Semigroup ## Definition Let $\left({S, \circ}\right)$ be a semigroup. On that same underlying set $S$, let $\left({S, \tau}\right)$ be a topological space. Then $\left({S, \circ, \tau}\right)$ is said to be a topological semigroup if: $\circ: \left({S, \tau}\right) \times \left({S, \tau}\right) \to \left({S, \tau}\right)$ is a continuous mapping where $\left({S, \tau}\right) \times \left({S, \tau}\right)$ is considered as $S \times S$ with the product topology.
# Discussing tests in Stan math The thread I started at Request for Mathematica code review: generating test cases devolved into discussion about general principles for tests in Stan math. As the title (and start of the discussion) is about something else, I am moving the discussion here to make it better visible and findable and so that more people can join if they like. This post recapitulates the discussion since it started to change topic and has been slightly edited for brevity. Background: I’ve been working on addressing some issues in neg_binomial_lpmf and related functions. Part of the tests I’ve written involve computing high-precision values in Mathematica and test against those. @bgoodri mentioned: I would just reiterate a couple of things that I have said before that few people agree with: 1. I think we should be testing that known identities hold. For example, since \ln \Gamma\left(x\right) = \ln \Gamma\left(x + 1\right) - \ln x for all positive x, plug in an x to that equation and verify that the equation holds and that all derivatives of interest are the same on both sides. 2. Even better are identities that hold for integrals involving the function in question over a range of values of x, such as http://functions.wolfram.com/GammaBetaErf/LogGamma/21/02/01/0001/MainEq1.gif . 3. If we want arbitrary precision tests to compare to, then the gold standard is Arb rather than Mathematica, because the ball arithmetic has bounds that are correct. And Arb is already in C. My response: I am trying to approach this from a pragmatic viewpoint. I think developer efficiency in writing tests is really important. My interest in this has been sparked by the fact that many (most?) functions/codepaths in Stan math currently have no or only very weak tests for numerical precision. IMHO you are making perfect the enemy of the good here. I totally agree that tests against precomputed values shouldn’t be the only tests we do. In my recent struggle with neg_binomial_2 and surrounding functions, both approaches let me to identify some of the issues. For me the biggest advantage of precomputed values is that those tests are quite easy to create. Let’s say I fear that lbeta fails with one very large and one very small argument. So I precompute values for this case in high precision and check. Not bulletproof, but quick. I am sure a clever formula-based test would show this as well, but I can’t make up one quickly. Another issue with formulas is that I always have to worry about the numerics of the formula itself. And there are functions for which good formulas are hard to come by, for example the lgamma_stirling_diff function I wrote (defined as the difference between lgamma and it’s Stirling approximation). Here any tests actually involving lgamma are weak because the interesting use cases for lgamma_stirling_diff are when lgamma is large and we would need more digits to capture the desired precision. I believe you may come up with some clever way around this, but using precomputed values let me to have a reasonably strong test quickly. Using Arb might well be a better way forward in the long run, but I personally lack the skill and patience to work with C/C++ more than I have to. The cloud Mathematica is a good enough tool for me in that it lets me iterate quickly (and I already use it to check my symbolic manipulations). If someone integrated Arb with the test code in Stan math, than it would be a big advantage for Arb as the tests wouldn’t need to rely on external code. But I don’t think that has been done yet. You seem to imply that Mathematica can provide wrong results in cases where Arb would work. That would be a good reason to prefer Arb over Mathematica. I tried to quickly Google for some evidence in this regard and didn’t find any. Where I think I would agree with you is that the unit tests of Stan Math emphasize whether code compiles and whether the code is correct, as opposed to whether the code produces the correct numbers for almost all inputs that are likely to be called when doing dynamic HMC. And I wrote a bunch of unit tests starting at https://github.com/stan-dev/math/blob/develop/test/unit/math/rev/functor/integrate_1d_test.cpp#L423 that did what I thought was an admissible test for each of the univariate _lpdf functions, namely testing the identity that their antilog integrates to 1 over their entire support and the derivatives with respect to all of the parameters are zero. Fortunately, most of our _lpdf functions satisfied this after some tweaking. I couldn’t do exactly that for the _lpmf functions, and I am not surprised that you encountered some issues with the negative binomial that should be addressed. But I am bitter that the success of this approach has not changed anyone’s mind about how to unit test functions in Stan Math more generally. I have made what I think are strong arguments about this for years, and they have mostly been ignored. I appreciate that you did take the time to make some good-faith arguments for your approach, although I am not that persuaded by them. I’m just going to let it drop for now. If you make a PR with tests like this, I won’t object and I haven’t objected to anyone else’s PRs on these grounds. @Bob_Carpenter joined: I think you may have misinterpreted my earlier comments. I think this is a good idea, just not always necessary if we can do thorough tests versus well-defined base cases like the lgamma from the standard library. To really get traction on this, it should be automated to allow us to plug in two functions and a range of values and make sure all derivatives have the same value. I can help write that. It’d look something like this: template <class F, class G> expect_same(const ad_tolerances& tol, const F& f, const G& g, Eigen::VectorXd x); where the vector packs all the arguments to f or g. We cold get fancier and put in a parameter pack for x and try to use the serialization framework to reduce it to a vector to supply to our functionals. As @martinmodrak points out, That’s why I included tolerances. Wouldn’t we need to write all our functions to deal with their data types? Or recode them? I don’t get this comment. From my perspective, both @martinmodrak and @bgoodri are suggesting new, complementary tests that go beyond what we have now. If people want to put in effort making tests better, more power to them! I don’t have any opinion about Arb vs. Mathematica in terms of coverage or accuracy. I’d never heard of Arb before @bgoodri brought it up. The problem I have is that I can’t read that Mathematica code. So the discussion we should have is whether everyone’s OK introducing one more language into the project after OCaml and C++. On a related note, the thread on tests for distributions discusses possibilities for strengthening current tests for distributions, including precomputed values but also using complex-step derivative. The thread on tolerances at expect_near_rel behaves weirdly for values around 1e-8 is also relevant. 1 Like This would IMHO be a highly useful tool. We already have a bunch of precomputed values lying around individual tests and nobody is worried that there is no code to recompute them, so having some code is IMHO a strict improvement. But I understand the concern and especially if we ended up building substantial support infrastructure in Mathematica, this might eventually become an issue. I think having some precomputed high precision result is very valuable. In one of Martin’s PRs there was a computation that looked to me to be easily simplifiable. Well, it turned out that there was a specific numerical reason why some redundant-looking ooeration were done that way, as when I simplified the code some tests didn’t pass. I had effectively undone Martin’s hard work! I’m perhaps stating yhe obvious, but I think that whatever testing approach we take is able to express the requirement we need. If no single approach is enough, we should use complementary strategies to ensure we won’t regress in the future. 3 Likes I took a stab at generating interesting points for you and also at using Interval arithmetic. It doesn’t attempt to ‘do everything’. It just shows approaches of which people may not be aware. https://www.wolframcloud.com/obj/411a0a5e-2bf4-4555-a104-30603f88be66 Note that Interval arithmetic (and other methods that don’t track all the inputs and outputs) don’t take into account possible correlations between numbers in intermediate steps, so the output interval may be wider than what one would sample in practice: 1 Like Interval arithmetic is fun. And it can be implemented with overloading just like autodiff. It’d be a lot of work for our whole library, though! What I’d like to know is if there is a C++ library that can track the flow of statistical distributions through my calculations (using templates and operator overloading like the AD ones do), so I could make statements about the reliability of my answers. I’m sure something like the mean and coefficient of variation could be tracked using a forward mode that does calculations like the ISO Guide to Uncertainty in Measurement, but I wonder if there is a better way (other than Monte Carlo analysis, I guess). 1 Like
$10 ### Image of the Day Submit IOTD | Top Screenshots ### The latest, straight to your Inbox. Subscribe to GameDev.net's newsletters to receive the latest updates and exclusive content. Sign up now ## Losing all items when your character die. Old topic! Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic. 30 replies to this topic ### #1Legendre Members Posted 04 January 2013 - 03:57 PM I am developing an online multiplayer RPG in which if your character dies, you lose everything in your inventory. However, you do leave behind a corpse from which you (or anyone else) can loot your items from. I have been doing some research on other games that implement this and discovered two incredibly popular ones: Realm of the Mad God (http://en.wikipedia.org/wiki/Realm_of_the_Mad_God) Runescape (http://runescape.wikia.com/wiki/Death) So I suppose players are open to "item-perma death". To avoid making it too painful, I am going to make premium/paid or rare/legendary items safe from this. E.g. Like Runescape, you can pick 3 items to save from item-perma death. What do you guys think about this? And what kind of pitfalls would you suggest I avoid? ### #2DpakoH Members Posted 04 January 2013 - 04:26 PM So you lose every item, but you get to keep the character, did i understand correct? if yes, i do not think i would like it. maybe if you die you can lose some random items from your inventory that are not equipped or something like that... just a suggestion from a fan of perma-death games ### #3HappyCoder Members Posted 04 January 2013 - 04:42 PM I feel like losing all of your items when you die may frustrate users and they may stop playing your game at that point because they don't want to have to go through all the work of trying to obtain their items again. I also feel like this would cause powerful characters to continually become more powerful as they can loot whatever good items they want from other users and keep them or sell them for gold. What reasoning do you have for such a system? It sounds like a more realistic way to handle dying in a game but realism in a game shouldn't control gameplay mechanics. Your top priority should be to make the game fun. My current game project Platform RPG ### #4powerneg Members Posted 04 January 2013 - 05:56 PM i wouldn't recommend saving premium and/or legendary items, the premmy-thing would come very close to "selling power" and the legendary-rule would mean someone would reach a certain level where he/she can actually acquire legendary items first. (although it could work if implemented differently: legendary:just add "safe" items(on every level) that are less strong but cant be stolen premium:well, it might work, you need income in some way anyway, a premium might for examplebe able to select one additional item that is safe or you could say "for premiums boots are always safe" and have boots never be overpowered but still be important enough that it's something you want to replace quickly after death) selecting "x" items that are safe sounds like the best road to go though, you could implement various skills related to this, which is core to most RPG'd ### #5Legendre Members Posted 04 January 2013 - 08:39 PM I feel like losing all of your items when you die may frustrate users and they may stop playing your game at that point because they don't want to have to go through all the work of trying to obtain their items again. I also feel like this would cause powerful characters to continually become more powerful as they can loot whatever good items they want from other users and keep them or sell them for gold. In Realm of the Mad God and Runescape, you lose all your items when you die (you get to save 3 in Runescape). Yet, RotMG is a hugely popular indie game, and Runescape is the 2nd most played MMORPG (WoW is 1st). Perhaps it can be done in a way that does not frustrate users? What reasoning do you have for such a system? It sounds like a more realistic way to handle dying in a game but realism in a game shouldn't control gameplay mechanics. Your top priority should be to make the game fun. I thought of using the traditional permanent item system but it doesn't really fit with my game. Its not about realism. My game plays somewhat like Realm of the Mad God. ### #6Legendre Members Posted 04 January 2013 - 08:49 PM i wouldn't recommend saving premium and/or legendary items, the premmy-thing would come very close to "selling power" and the legendary-rule would mean someone would reach a certain level where he/she can actually acquire legendary items first. True. I will probably not sell premium items or have ultra rare/legendary loot...to avoid frustrating players who loses them. ### #7Legendre Members Posted 04 January 2013 - 08:50 PM So you lose every item, but you get to keep the character, did i understand correct? if yes, i do not think i would like it. maybe if you die you can lose some random items from your inventory that are not equipped or something like that... just a suggestion from a fan of perma-death games Well technically your character dies lol. But you get to create a new character at the same level with the exact same abilities so essentially you only lose the items. ### #8Gava Members Posted 06 January 2013 - 01:41 PM I used to play a MMORPG that had a mechanic similar to this but implemented in a different way called Argentum Online. Every player had a bank account where they could deposit items/money, This account was only accessed from a NPC in most cities. So you would go out to loot, if killed all the items you where carrying would be tossed on the floor but anything deposited on the bank would remain (you only had access to the banks items when alive and tacking to the banks NPC). They had a revival system where you had to walk to town and talk to a priest (your player transformed into a ghost when dead) and you wold come back to life(with no items though). There was also a revive magic that other players could use to revive you but since you dropped everything on death you usually lost most of your gear in the meantime. Some items couldn’t be lost though, mostly faction amour (this was obtained though a quest and couldn’t be bought so getting another one was impossible) but even the best faction amour was mid tier. The bank account had no limit on money deposited but had a limit on items (it was an odd item cap, You could have up to 30 item stacks of 10000. this allowed to have tons of low value items like potions but restricted the expensive ones since having 10000 of any equipment was insanely expensive). ### #9Dan Violet Sagmiller Members Posted 06 January 2013 - 01:57 PM It was common in games to expect to lose your items if you died. However, there is a mechanic for this I've been liking more and more, Insurance. In a thief plugin, for Minecraft, that a friend is working on, we discussed how to keep it fun despite the fact your stuff can be stolen. In our case we added bounties, but Insurance is the key item that would cross over here. What I recommend, is for players to be offered "life insurance" they pay x Amount, and will be insured for X*100, or some odd factor like that. so if I paid 1 credit, I would then be insured for 100 credits. Then when my player dies, I can go to the insurance office in my town (similar to a shop) where it shows all the items I had with me at my death, and then I can purchase them back with the insurance money, or my own cash as well. Then I don't lose things as much, and I have the option to buy them back, so I don't have to go on some crazy quest to get them. But as soon as I fund more money into the insurance, it wipes out what was recorded from my previous death. That way its only for future deaths. You could of course increase this ratio to X*1000 or something you deam more reasonable, but it provides protection for important items. Another idea is to cast spells of return. Perhaps they cost a lot, particularly depending on the Rarity or Value of an object, but once cast, the items always returns to your bank vault on your death. Or back to your inventory just as you had it, equipped or not. But this way, your players do have a way around this. It costs, but it puts the power with them. Usually after losing an item or two of value, most players will start taking advantage of insurance and/or spells of return. Moltar - "Do you even know how to use that?" Space Ghost - “Moltar, I have a giant brain that is able to reduce any complex machine into a simple yes or no answer." Dan - "Best Description of AI ever." ### #10Legendre Members Posted 06 January 2013 - 06:37 PM Or perhaps I can make it so that most items are like power ups in shoot'em ups and FPSes - semi-easily replaceable and non-distinguishable?<br /><br />With "rare drop" or items that are hard to get means no one will be pissed losing items? ### #11Ravyne Members Posted 06 January 2013 - 08:15 PM If you're doing a pay-to-play model, you might have it such that items acquired as loot are subject to being lost -- perhaps after a certain number of deaths, or on a dice-roll during death -- but that items bought through a real-money transaction wouldn't be subject to such loss. You could also augment this with being able to enchant looted items for a smaller fee, such that they are treated in the same way as paid items. Further, paid/enchanted items could either be permanently safe, or just have a really high resistance from being lost. I'm not a huge fan of pay to play if it means buying an advantage over other players, but it seems to me that successful$-to-Play games that are enjoyed by all types of players are those that are selling convenience and time-savings, rather than pure advantage, and that the model I described above would fit into that category. throw table_exception("(ノ ゜Д゜)ノ ︵ ┻━┻"); ### #12Servant of the Lord  Members Posted 06 January 2013 - 09:31 PM On a 2D orpg I worked on (as a tile artist and scripter) with a couple friends online, we made only the currently equipped items be dropped on death, while keeping the items in your inventory mostly safe. We also choose a random non-equipped inventory item and dropped that as well, to make sure you usually drop something. It worked out fairly well. A player could then choose to go into combat with their second-best equipment instead of their first-best, and in the worst case scenario, only lose a random one piece of their first-best equipment. Edited by Servant of the Lord, 06 January 2013 - 10:48 PM. It's perfectly fine to abbreviate my username to 'Servant' or 'SotL' rather than copy+pasting it all the time. All glory be to the Man at the right hand... On David's throne the King will reign, and the Government will rest upon His shoulders. All the earth will see the salvation of God. Of Stranger Flames - ### #13Ashaman73  Members Posted 07 January 2013 - 01:36 AM EQI got such a feature. It depends on a lot of factors: 1. Do you provide some bank space which will not be affected by the perma-drop ? How large is it ? How easy to access ? 2. How hard is it to obtain items of similar quality ? It is very expensive (crafting) ? 3. How hard will it be to recover a corpse if you lost your primary equipment set ? I liked it back in EQI, but it introduced some interesting gameplay. For one many stripped all their equipment and put it into their bank to travel from town to town, so that you didn't loose any valuable items during this kind of death-run. On the other hand, you got punished for exploring unknown terrain. If you entered a hi-level area a single hit was deadly and there wasn't much you could do to recover it on your own. On the other hand many other people helped you to regain your corpse, often hi-level characters of your own guild. From a gameplay perspective it is permadeath light and introduces some counter intuitive behaviour. E.g. if you want to explore a dangerous dungeon, you normally would equip the best items, but here you would leave useful equipment back at your home. With permadeath you would prepare to confront the dangers, accepting the risk, with item-drop you would probe the dangerous areas first with crap-equipment, leaving the good equipment for the already easy areas. And there's the danger of introducing some balacing issues when you have gear-dependent character classes (e.g. warrior vs mage). Ashaman ### #14dakota.potts  Members Posted 07 January 2013 - 01:45 AM I played Runescape extensively about 6-8 years ago. You can keep your items in a bank, including money. When you die, your 3 most valuable items are saved and you drop everything else. In PVP areas, if you attack another player, you drop all of your items upon dying. This is signified with a skull over your head that everybody can see. ### #15DigitalSavior  Members Posted 07 January 2013 - 07:42 PM Corpse runs were never fun.  But I guess it can add to the overall experience. ### #16Legendre  Members Posted 08 January 2013 - 05:11 AM Corpse runs were never fun.  But I guess it can add to the overall experience. I didn't intend to have corpse runs. But I did intend for players to leave a corpse behind when they die, so other players can come across corpses of their fallen comrades. But...that will just degenerate into corpse runs. Current solution: players lose all items permanently when they die. No more corpse runs. (no rare items that takes a lot of effort to get, to avoid frustrating people) ### #17Servant of the Lord  Members Posted 08 January 2013 - 11:10 AM Are corpse runs really a problem that need to be fixed? In the small ORPG I worked on (no longer active) that I mentioned above, corpses could be picked up by anyone, and they completely disappeared after a "random" 0 seconds to 5 minutes, unless another player was near the corpse. A map with no players on it had all item drops cleared every 5 minutes. If you died 25 seconds before the map was cleared, your corpse only lasted 25 seconds. If you died 10 seconds after a map was cleared, your corpse would last until the next clear in 4 minutes and 50 seconds. This was an unintentional side-effect of maps clearing monster drops that nobody wanted to pick up, but we liked it so we left it in. Players A) were frustrated by the inconsistent timing of their corpse disappearing B) actually seemed enjoyed the scramble to get their corpse - Is it still there or isn't it? Anticipation/anxiety C) hunted with a friend so the friend standing by the corpse would keep it from disappearing, or so the friend can pick up the items and return them later. PvPers would kill someone, stay by their corpse for them to return and kill them again. This was an understood and accepted risk of hunting in PvP zones. You could make the corpse not be retrievable by the player who died, and fade after 3 minutes. But why stop corpse runs at all - are they really all that bad? It's perfectly fine to abbreviate my username to 'Servant' or 'SotL' rather than copy+pasting it all the time. All glory be to the Man at the right hand... On David's throne the King will reign, and the Government will rest upon His shoulders. All the earth will see the salvation of God. Of Stranger Flames - ### #18Legendre  Members Posted 08 January 2013 - 07:56 PM But why stop corpse runs at all - are they really all that bad? It feels bad on the player's end. They will think its a deliberate death penalty and a compulsory chore. E.g. there are quite a few articles on the net about the "dreaded" Everquest corpse run. I rather have the player get on with the game, than feel frustrated trying to retrieve his corpse over and over again. ### #19DaveTroyer  Members Posted 09 January 2013 - 04:46 PM I like the idea of all the players loot going away when they die, but my thought to help the player deal with such a lose is to have loot drop a lot more often and limit how much they can carry. Say the player has a weapon slot, a magic slot, and an armor slot, plus 3 spaces to store things in their invintory. Well, they will fight a gang of 4 roaming bandits and they get 2 pieces of loot from each of them. Even if their inventory was completely empty and had nothing equiped, they still need to leave 2 things behind. By getting the player used to leaving loot all over the place, they become less attached to their gear. It will make starting over a little more bearable, even if the player had nearly end-game gear and screwed up right before the big bad end boss fight. But thats just what I think about it Check out my game blog - Dave's Game Blog ### #20Legendre  Members Posted 10 January 2013 - 05:55 PM I like the idea of all the players loot going away when they die, but my thought to help the player deal with such a lose is to have loot drop a lot more often and limit how much they can carry. Say the player has a weapon slot, a magic slot, and an armor slot, plus 3 spaces to store things in their invintory. Well, they will fight a gang of 4 roaming bandits and they get 2 pieces of loot from each of them. Even if their inventory was completely empty and had nothing equiped, they still need to leave 2 things behind. By getting the player used to leaving loot all over the place, they become less attached to their gear. It will make starting over a little more bearable, even if the player had nearly end-game gear and screwed up right before the big bad end boss fight. But thats just what I think about it Good points. This is pretty much how I am going to design it. Old topic! Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.
# Magnetic Induction Formula ## What Does the Magnetic Induction Formula Signify? Magnetic induction is the phenomenon of the generation of electromotive force or e.m.f. in a conductor pertaining to the change in magnetic flux linked with it. It was discovered by the scientist Michael Faraday in 1831. Faraday’s law of induction was later mathematically represented by Maxwell. Magnetic induction is a very important scientific phenomenon and a crucial topic in physics. To understand what the magnetic induction formula signifies let us understand Faraday’s Law of Induction. Here we will also study the emf induced formula, faraday's law formula, and a few other important features of magnetic induction. The magnetic field induction formula of Faraday states by a change in the magnetic flux linked to a conductor, an electromotive force (emf) is induced. The rate of change of the magnetic flux enclosed by a closed circuit is equal to that of emf. ε is directly proportional to the change in flux ε is inversely proportional to Δt ε produced in a coil having N turns is N times that of a single current-conduction coil (ε  ∝  N) The magnetic flux passing through a surface of vector area A: ΦB =B⋅A = BAcosθ For a varying magnetic field the magnetic flux is B through an infinitesimal area dA: dΦB = B⋅dA The surface integral gives the total magnetic flux through the surface. ΦB = ∫∫A B⋅dA According to Faraday’s law formula, in a coil of wire with N turns, the emf induced formula in a closed circuit is given by EMF (ε) = - N$\frac{\Delta \phi }{\Delta t}$ When flux changes by Δ in a time Δt. The minus sign shows that a current I and magnetic field B opposite to the direction of change in flux is produced. This is known as Lenz’s law. ### Electromagnetic Induction Formula for Moving Conductor For a moving rod, N=1 and the flux Φ=BAcosθ, θ=0º and cosθ=1, a B is perpendicular to A. The area swept out by the rod is ΔA= lΔx ∴ ε = $\frac{B \Delta A }{\Delta t}$ = $\frac{Bl \Delta x}{\Delta t}$ = Blv where v (velocity) is perpendicular to the B (magnetic field) In the above scenario of a generator, the velocity is at an angle θ with B, so that its component perpendicular to B is vsinθ. ε = Blv sinθ Where, l = length of the conductor, v = velocity of the conductor θ = the angle between the magnetic field and the direction of motion. Thus, the induced current formula signifies the close relationship between electric field and magnetic field which is dependent on a specific time variation. 1. Determine the Magnetic Flux Formula in Terms of Current. Ans: As the flux is produced in response to the current, the flux induced by a current is proportional to the current Thus, change in flux induces a current and a voltage which is proportional to the rate of change of flux. This is similar to Ohm's Law (V = IR). Thus we can say ΦB ∝ I As we know  ΦB=B⋅A Where B = µ0NI 0 (permeability constant) = 4π10-7 T ] Therefore, ΦB = µ0NIA 2. How to Establish an Inductor Energy Formula? Ans: Energy is stored within a magnetic field of an inductor. Due to energy conservation, the energy needed to drive the original current should have an escape route. In the case of an inductor, the magnetic field is the escape route of that current. The energy stored in an inductor is equal to the work needed to generate a current through the inductor. The formula for this energy is given as: E = ½ LI² where L is the inductance (unit - Henry) and I (Ampere).
1. ## Limit of Sequences Having trouble with this exercise: Find the limit of the following sequence or determine that the limit does not exist. $$\bigg\{\bigg(1 + \frac{22}{n}\bigg)^n\bigg\}$$ The problem says to use L'Hôpital's Rule and then simplify the expression and I am getting tripped up by the division. Could someone tell me how they are going from: $$\lim_{x\to\infty} x ln(1 + \frac{22}{x})$$ After L'Hôpital's Rule: $$\lim_{x\to\infty} \frac{\frac{-22}{x^2+22x}}{\frac{-1}{x^2}}$$ to $$\lim_{x\to\infty} \frac{22}{1+\frac{22}{x}}$$ As far as I can tell the largest term common to both the numerator an denominator is $$\frac{1}{x^2}$$ so if I divide them both by it I get this: $$\frac{\frac{-22x^2}{x^2+22x}}{-1} = \frac{-22+22x}{-1} = 22 - 22x$$ ... however in the example they go straight to 22 from $$\lim_{x\to\infty} \frac{22}{1+\frac{22}{x}}$$ In which places am I going wrong? My best guess would be the largest common term but I don't see what else it could be. 2. ## Re: Limit of Sequences Originally Posted by SDF Having trouble with this exercise: Find the limit of the following sequence or determine that the limit does not exist. $\displaystyle\large{\lim _{x \to \infty }}{\left( {1 + \frac{a}{{x + b}}} \right)^{cx}} = {e^{ac}}$ 3. ## Re: Limit of Sequences Is that supposed to be helpful? @SDF: as $x \to \infty$ you should have $\frac{22}{x} \to 0$ 4. ## Re: Limit of Sequences Originally Posted by Archie Is that supposed to be helpful?] Would you know if it were? Any calculus III students should be able to see at once the limit is $e^{22}$. It is a waste of time to have to use a substitution on every problem. 5. ## Re: Limit of Sequences Yes, I would. I also read the part of the OP that said The problem says to use L'Hôpital's Rule and then simplify the expression. Did you? Or is reading the question also a waste of time? 6. ## Re: Limit of Sequences \begin{align*} \frac{\frac{-22}{x^2+22x}}{\frac{-1}{x^2}} &= \frac{\frac{-22x^2}{x^2+22x}}{-1} &(\text{multiplying numerator and denominator by }x^2) \\ &= \frac{22x^2}{x^2+22x} & (\text{simplifying}) \\ &= \frac{22}{1 + \frac{22}{x}} & (\text{dividing numerator and denominator by }x^2) \end{align*} 7. ## Re: Limit of Sequences Originally Posted by Archie Yes, I would. I also read the part of the OP that said Did you? Or is reading the question also a waste of time? Of course I read it. After reading it is clear that having to do and thing beyond recognizing the form is the waste of time. 8. ## Re: Limit of Sequences Originally Posted by SDF The problem says to use L'Hôpital's Rule and then simplify the expression and I am getting tripped up by the division. Could someone tell me how they are going from: $$\lim_{x\to\infty} x ln(1 + \frac{22}{x})$$ After L'Hôpital's Rule: $$\lim_{x\to\infty} \frac{\frac{-22}{x^2+22x}}{\frac{-1}{x^2}}$$ to $$\lim_{x\to\infty} \frac{22}{1+\frac{22}{x}}$$ $\dfrac{ \ \frac{-22}{x^2 + 22x} \ }{(\frac{-1}{x^2})} =$ $\dfrac{-22}{x^2 + 22x}\cdot\ \dfrac{x^2}{-1} \ =$ $\dfrac{22x^2}{x(x + 22)} \ =$ $\dfrac{22x}{x + 22} \ =$ $\dfrac{(\tfrac{1}{x})22x}{(\tfrac{1}{x})(x + 22)} \ =$ $\dfrac{22}{1 + \tfrac{22}{x}}$ 9. ## Re: Limit of Sequences Originally Posted by Plato Of course I read it. After reading it is clear that having to do and thing beyond recognizing the form is the waste of time. You don't get to impose the method you want to do. You should set up a separate thread if you want to address it with your method that isn't on the level that the student is working with. Originally Posted by Plato Would you know if it were? Any calculus III students should be able to see at once the limit is $e^{22}$. It is a waste of time to have to use a substitution on every problem. This isn't a Calculus III problem. You made a comment to an imaginary problem. If you would stick to the problem at hand, 10. ## Re: Limit of Sequences Originally Posted by Plato Of course I read it. After reading it is clear that having to do and thing beyond recognizing the form is the waste of time. Imagine for a moment that a student of yours submitted homework or an exam answer that ignored the problem statement and derived an answer according to a formula that had been gifted on them from above.
# Pic16F877A eeprom to (system_variables) Status Not open for further replies. #### Chilly ##### New Member Using HiTech C. I have a program that has a 6 case menu. Each case displays system settings that are currently running. There's 4 buttons controlling menu up, menu down, value up and value down which controls the 5 system variables. I've added a case whereby I can save the current settings to eeprom. I'm also able to read back the settings. What I'd like to do is insert into system variables my settings. I have an if statement. Interrupt is setup. Something like, Code: If button down{ change(system_variables) with (eeprom_settings) ; } I've tried searching but don't exactly know what I'm looking for. If someone could point me in the right direction. Mike. #### Chilly ##### New Member Thanks Mike. It's late here and I probably didn't make myself clear. I have those routines and successfully write to and retrieve data to/from eeprom. No problem. Where I'm stuck is how to convert the eeprom data back into a system variable. It seems like it would be simple as if(!button_down){ } I'm thinking I just answered my own question. Last edited: #### Pommie ##### Well-Known Member I'm thinking I just answered my own question. I think you did. Mike. #### Chilly ##### New Member Mike, I just tried it and it worked. Looking at what I needed to do helped a great deal. I'll chalk it up to old age. Thanks. Status Not open for further replies.
## Functions # AdvancedHMC.AMethod. A single Hamiltonian integration step. NOTE: this function is intended to be used in find_good_stepsize only. # AdvancedHMC.build_treeMethod. Recursivly build a tree for a given depth j. # AdvancedHMC.combineMethod. combine(treeleft::BinaryTree, treeright::BinaryTree) Merge a left tree treeleft and a right tree treeright under given Hamiltonian h, then draw a new candidate sample and update related statistics for the resulting tree. # AdvancedHMC.find_good_stepsizeMethod. Find a good initial leap-frog step-size via heuristic search. # AdvancedHMC.isterminatedMethod. isterminated(h::Hamiltonian, t::BinaryTree{<:ClassicNoUTurn}) Detect U turn for two phase points (zleft and zright) under given Hamiltonian h using the (original) no-U-turn cirterion. Ref: https://arxiv.org/abs/1111.4246, https://arxiv.org/abs/1701.02434 # AdvancedHMC.isterminatedMethod. isterminated(h::Hamiltonian, t::BinaryTree{<:GeneralisedNoUTurn}) Detect U turn for two phase points (zleft and zright) under given Hamiltonian h using the generalised no-U-turn criterion. Ref: https://arxiv.org/abs/1701.02434 # AdvancedHMC.maxabsMethod. maxabs(a, b) Return the value with the largest absolute value. # AdvancedHMC.mh_accept_ratioMethod. Perform MH acceptance based on energy, i.e. negative log probability. # AdvancedHMC.nom_step_sizeMethod. nom_step_size(::AbstractIntegrator) Get the nominal integration step size. The current integration step size may differ from this, for example if the step size is jittered. Nominal step size is usually used in adaptation. # AdvancedHMC.pm_next!Method. Progress meter update with all trajectory stats, iteration number and metric shown. # AdvancedHMC.randcatMethod. randcat(rng, P::AbstractMatrix) Generating Categorical random variables in a vectorized mode. P is supposed to be a matrix of (D, N) where each column is a probability vector. Example P = [ 0.5 0.3; 0.4 0.6; 0.1 0.1 ] u = [0.3, 0.4] C = [ 0.5 0.3 0.9 0.9 1.0 1.0 ] Then C .< u' is [ 0 1 0 0 0 0 ] thus convert.(Int, vec(sum(C .< u'; dims=1))) .+ 1 equals [1, 2]. # AdvancedHMC.simple_pm_next!Method. Simple progress meter update without any show values. # AdvancedHMC.statMethod. Returns the statistics for transition t. # AdvancedHMC.step_sizeFunction. step_size(::AbstractIntegrator) Get the current integration step size. # AdvancedHMC.temperMethod. temper(lf::TemperedLeapfrog, r, step::NamedTuple{(:i, :is_half),<:Tuple{Integer,Bool}}, n_steps::Int) Tempering step. step is a named tuple with • i being the current leapfrog iteration and • is_half indicating whether or not it’s (the first) half momentum/tempering step # AdvancedHMC.transitionMethod. transition(τ::AbstractTrajectory{I}, h::Hamiltonian, z::PhasePoint) Make a MCMC transition from phase point z using the trajectory τ under Hamiltonian h. NOTE: This is a RNG-implicit fallback function for transition(GLOBAL_RNG, τ, h, z) # StatsBase.sampleMethod. sample( rng::AbstractRNG, h::Hamiltonian, τ::AbstractProposal, θ::AbstractVecOrMat{T}, n_samples::Int, drop_warmup::Bool=false, verbose::Bool=true, progress::Bool=false ) Sample n_samples samples using the proposal τ under Hamiltonian h. • The randomness is controlled by rng. • If rng is not provided, GLOBAL_RNG will be used. • The initial point is given by θ. • The adaptor is set by adaptor, for which the default is no adaptation. • It will perform n_adapts steps of adaptation, for which the default is the minimum of 1_000 and 10% of n_samples • drop_warmup controls to drop the samples during adaptation phase or not • verbose controls the verbosity • progress controls whether to show the progress meter or not ## Types # AdvancedHMC.AbstractIntegratorType. abstract type AbstractIntegrator Represents an integrator used to simulate the Hamiltonian system. Implementation A AbstractIntegrator is expected to have the following implementations: • stat(@ref) • nom_step_size(@ref) • step_size(@ref) # AdvancedHMC.AbstractProposalType. Abstract Markov chain Monte Carlo proposal. # AdvancedHMC.AbstractTrajectoryType. Hamiltonian dynamics numerical simulation trajectories. # AdvancedHMC.AbstractTrajectorySamplerType. Defines how to sample a phase-point from the simulated trajectory. # AdvancedHMC.BinaryTreeType. A full binary tree trajectory with only necessary leaves and information stored. # AdvancedHMC.ClassicNoUTurnType. struct ClassicNoUTurn <: AdvancedHMC.AbstractTerminationCriterion Classic No-U-Turn criterion as described in Eq. (9) in [1]. Informally, this will terminate the trajectory expansion if continuing the simulation either forwards or backwards in time will decrease the distance between the left-most and right-most positions. References 1. Hoffman, M. D., & Gelman, A. (2014). The No-U-Turn Sampler: adaptively setting path lengths in Hamiltonian Monte Carlo. Journal of Machine Learning Research, 15(1), 1593-1623. (arXiv) # AdvancedHMC.EndPointTSType. struct EndPointTS <: AdvancedHMC.AbstractTrajectorySampler Samples the end-point of the trajectory. # AdvancedHMC.GeneralisedNoUTurnType. struct GeneralisedNoUTurn{T<:(AbstractArray{var"#s58",1} where var"#s58"<:Real)} <: AdvancedHMC.AbstractTerminationCriterion Generalised No-U-Turn criterion as described in Section A.4.2 in [1]. Fields • rho::AbstractArray{var"#s58",1} where var"#s58"<:Real Integral or sum of momenta along the integration path. References 1. Betancourt, M. (2017). A Conceptual Introduction to Hamiltonian Monte Carlo. arXiv preprint arXiv:1701.02434. # AdvancedHMC.HMCDAType. struct HMCDA{S<:AdvancedHMC.AbstractTrajectorySampler, I<:AdvancedHMC.AbstractIntegrator} <: AdvancedHMC.DynamicTrajectory{I<:AdvancedHMC.AbstractIntegrator} Standard HMC implementation with fixed total trajectory length. Fields • integrator::AdvancedHMC.AbstractIntegrator Integrator used to simulate trajectory. • λ::AbstractFloat Total length of the trajectory, i.e. take floor(λ / integrator_step) number of leapfrog steps. References 1. Neal, R. M. (2011). MCMC using Hamiltonian dynamics. Handbook of Markov chain Monte Carlo, 2(11), 2. (arXiv) # AdvancedHMC.JitteredLeapfrogType. struct JitteredLeapfrog{FT<:AbstractFloat, T<:Union{AbstractArray{FT<:AbstractFloat,1}, FT<:AbstractFloat}} <: AdvancedHMC.AbstractLeapfrog{T<:Union{AbstractArray{FT<:AbstractFloat,1}, FT<:AbstractFloat}} Leapfrog integrator with randomly “jittered” step size ϵ for every trajectory. Fields • ϵ0::Union{AbstractArray{FT,1}, FT} where FT<:AbstractFloat Nominal (non-jittered) step size. • jitter::AbstractFloat The proportion of the nominal step size ϵ0 that may be added or subtracted. • ϵ::Union{AbstractArray{FT,1}, FT} where FT<:AbstractFloat Current (jittered) step size. Description This is the same as LeapFrog(@ref) but with a “jittered” step size. This means that at the beginning of each trajectory we sample a step size ϵ by adding or subtracting from the nominal/base step size ϵ0 some random proportion of ϵ0, with the proportion specified by jitter, i.e. ϵ = ϵ0 - jitter * ϵ0 * rand(). p Jittering might help alleviate issues related to poor interactions with a fixed step size: • In regions with high “curvature” the current choice of step size might mean over-shoot leading to almost all steps being rejected. Randomly sampling the step size at the beginning of the trajectories can therefore increase the probability of escaping such high-curvature regions. • Exact periodicity of the simulated trajectories might occur, i.e. you might be so unlucky as to simulate the trajectory forwards in time L ϵ and ending up at the same point (which results in non-ergodicity; see Section 3.2 in [1]). If momentum is refreshed before each trajectory, then this should not happen exactly but it can still be an issue in practice. Randomly choosing the step-size ϵ might help alleviate such problems. References 1. Neal, R. M. (2011). MCMC using Hamiltonian dynamics. Handbook of Markov chain Monte Carlo, 2(11), 2. (arXiv) # AdvancedHMC.LeapfrogType. struct Leapfrog{T<:(Union{AbstractArray{var"#s58",1}, var"#s58"} where var"#s58"<:AbstractFloat)} <: AdvancedHMC.AbstractLeapfrog{T<:(Union{AbstractArray{var"#s58",1}, var"#s58"} where var"#s58"<:AbstractFloat)} Leapfrog integrator with fixed step size ϵ. Fields • ϵ::Union{AbstractArray{var"#s58",1}, var"#s58"} where var"#s58"<:AbstractFloat Step size. # AdvancedHMC.MultinomialTSType. struct MultinomialTS{F<:AbstractFloat} <: AdvancedHMC.AbstractTrajectorySampler Multinomial trajectory sampler carried during the building of the tree. It contains the weight of the tree, defined as the total probabilities of the leaves. Fields • zcand::AdvancedHMC.PhasePoint Sampled candidate PhasePoint. • ℓw::AbstractFloat Total energy for the given tree, i.e. the sum of energies of all leaves. # AdvancedHMC.MultinomialTSMethod. MultinomialTS(s::MultinomialTS, H0::AbstractFloat, zcand::PhasePoint) Multinomial sampler for a trajectory consisting only a leaf node. • tree weight is the (unnormalised) energy of the leaf. # AdvancedHMC.MultinomialTSMethod. MultinomialTS(rng::AbstractRNG, z0::PhasePoint) Multinomial sampler for the starting single leaf tree. (Log) weights for leaf nodes are their (unnormalised) Hamiltonian energies. Ref: https://github.com/stan-dev/stan/blob/develop/src/stan/mcmc/hmc/nuts/base_nuts.hpp#L226 # AdvancedHMC.NUTSType. Dynamic trajectory HMC using the no-U-turn termination criteria algorithm. # AdvancedHMC.NUTSMethod. NUTS(args...) = NUTS{MultinomialTS,GeneralisedNoUTurn}(args...) Create an instance for the No-U-Turn sampling algorithm with multinomial sampling and original no U-turn criterion. Below is the doc for NUTS{S,C}. NUTS{S,C}( integrator::I, max_depth::Int=10, Δ_max::F=1000.0 ) where {I<:AbstractIntegrator,F<:AbstractFloat,S<:AbstractTrajectorySampler,C<:AbstractTerminationCriterion} Create an instance for the No-U-Turn sampling algorithm. # AdvancedHMC.NUTSMethod. NUTS{S,C}( integrator::I, max_depth::Int=10, Δ_max::F=1000.0 ) where {I<:AbstractIntegrator,F<:AbstractFloat,S<:AbstractTrajectorySampler,C<:AbstractTerminationCriterion} Create an instance for the No-U-Turn sampling algorithm. # AdvancedHMC.SliceTSType. struct SliceTS{F<:AbstractFloat} <: AdvancedHMC.AbstractTrajectorySampler Trajectory slice sampler carried during the building of the tree. It contains the slice variable and the number of acceptable condidates in the tree. Fields • zcand::AdvancedHMC.PhasePoint Sampled candidate PhasePoint. • ℓu::AbstractFloat Slice variable in log-space. • n::Int64 Number of acceptable candidates, i.e. those with probability larger than slice variable u. # AdvancedHMC.SliceTSMethod. SliceTS(rng::AbstractRNG, z0::PhasePoint) Slice sampler for the starting single leaf tree. Slice variable is initialized. # AdvancedHMC.SliceTSMethod. SliceTS(s::SliceTS, H0::AbstractFloat, zcand::PhasePoint) Create a slice sampler for a single leaf tree: • the slice variable is copied from the passed-in sampler s and • the number of acceptable candicates is computed by comparing the slice variable against the current energy. # AdvancedHMC.StaticTrajectoryType. struct StaticTrajectory{S<:AdvancedHMC.AbstractTrajectorySampler, I<:AdvancedHMC.AbstractIntegrator} <: AdvancedHMC.AbstractTrajectory{I<:AdvancedHMC.AbstractIntegrator} Static HMC with a fixed number of leapfrog steps. Fields • integrator::AdvancedHMC.AbstractIntegrator Integrator used to simulate trajectory. • n_steps::Int64 Number of steps to simulate, i.e. length of trajectory will be n_steps + 1. References 1. Neal, R. M. (2011). MCMC using Hamiltonian dynamics. Handbook of Markov chain Monte Carlo, 2(11), 2. (arXiv) # AdvancedHMC.TemperedLeapfrogType. struct TemperedLeapfrog{FT<:AbstractFloat, T<:Union{AbstractArray{FT<:AbstractFloat,1}, FT<:AbstractFloat}} <: AdvancedHMC.AbstractLeapfrog{T<:Union{AbstractArray{FT<:AbstractFloat,1}, FT<:AbstractFloat}} Tempered leapfrog integrator with fixed step size ϵ and “temperature” α. Fields • ϵ::Union{AbstractArray{FT,1}, FT} where FT<:AbstractFloat Step size. • α::AbstractFloat Temperature parameter. Description Tempering can potentially allow greater exploration of the posterior, e.g. in a multi-modal posterior jumps between the modes can be more likely to occur. # AdvancedHMC.TerminationType. Termination Termination reasons • dynamic: due to stoping criteria • numerical: due to large energy deviation from starting (possibly numerical errors) # AdvancedHMC.TerminationMethod. Termination(s::MultinomialTS, nt::NUTS, H0::F, H′::F) where {F<:AbstractFloat} Check termination of a Hamiltonian trajectory. # AdvancedHMC.TerminationMethod. Termination(s::SliceTS, nt::NUTS, H0::F, H′::F) where {F<:AbstractFloat} Check termination of a Hamiltonian trajectory. # AdvancedHMC.TransitionType. struct Transition{P<:AdvancedHMC.PhasePoint, NT<:NamedTuple} A transition that contains the phase point and other statistics of the transition. Fields • z::AdvancedHMC.PhasePoint Phase-point for the transition. • stat::NamedTuple Statistics related to the transition, e.g. energy.
Pseudoscalar Portal Dark Matter # Pseudoscalar Portal Dark Matter Asher Berlin Department of Physics, Enrico Fermi Institute, University of Chicago, Chicago, IL 60637 Kavli Institute for Cosmological Physics, University of Chicago, Chicago, IL 60637    Stefania Gori Perimeter Institute for Theoretical Physics, 31 Caroline St. N, Waterloo, Ontario, Canada N2L 2Y5. Tongyan Lin Department of Physics, Enrico Fermi Institute, University of Chicago, Chicago, IL 60637 Kavli Institute for Cosmological Physics, University of Chicago, Chicago, IL 60637    Lian-Tao Wang Department of Physics, Enrico Fermi Institute, University of Chicago, Chicago, IL 60637 Kavli Institute for Cosmological Physics, University of Chicago, Chicago, IL 60637 August 3, 2019 ###### Abstract A fermion dark matter candidate with a relic abundance set by annihilation through a pseudoscalar can evade constraints from direct detection experiments. We present simplified models that realize this fact by coupling a fermion dark sector to a two-Higgs doublet model. These models are generalizations of mixed bino-Higgsino dark matter in the MSSM, with more freedom in the couplings and scalar spectra. Annihilation near a pseudoscalar resonance allows a significant amount of parameter space for thermal relic dark matter compared to singlet-doublet dark matter, in which the fermions couple only to the SM Higgs doublet. In a general two-Higgs doublet model, there is also freedom for the pseudoscalar to be relatively light and it is possible to obtain thermal relic dark matter candidates even below 100 GeV. In particular, we find ample room to obtain dark matter with mass around 50 GeV and fitting the Galactic Center excess in gamma-rays. This region of parameter space can be probed by LHC searches for heavy pseudoscalars or electroweakinos, and possibly by other new collider signals. preprint: EFI-15-9 ## I Introduction Is weak-scale dark matter (DM) still a viable scenario? At face value, a DM candidate of mass 10-1000 GeV with weak-scale interactions can easily have a thermal relic abundance matching the observed value of Ade et al. (2014). However, the annihilation process is generically related by crossing symmetry to interactions in direct detection experiments and at colliders, both of which are becoming increasingly restrictive for this range of DM masses. To satisfy these constraints requires more ingredients or tunings in many existing models. One simple and attractive possibility is that the DM interaction with Standard Model (SM) particles is suppressed in the non-relativistic limit in the -channel (direct detection) while preserving a weak-scale cross section in the -channel (for a thermal relic). This requirement is satisfied by a pseudoscalar mediator coupling to fermion DM and to SM fermions: L⊃yχA¯χiγ5χ+λfA¯fiγ5f . (1) Because the pseudoscalar interaction breaks chiral symmetry, the coupling of with SM fermions is generically proportional to the SM Yukawa couplings (). Therefore, since the pseudoscalar has larger couplings to third-generation quarks and leptons, collider constraints are typically weaker. Integrating out the pseudoscalar gives the dimension six operators yχλfm2A(¯χγ5χ)(¯fγ5f) . (2) These contact operators have been considered in Refs. Lin et al. (2013); Bai et al. (2010); Goodman et al. (2010, 2011); Rajaraman et al. (2011); Haisch et al. (2013), which motivated collider signals with MET and a single jet or -jet as a new search channel for DM. In order to provide a concrete but simple realization of this interaction, the approach taken in this paper is to build a simplified model of DM coupled to a new pseudoscalar mediator. The philosophy is to add a minimal set of new matter fields with renormalizable and gauge-invariant couplings Abdallah et al. (2014). Our models also provide a UV-completion (at least at scales relevant to the LHC) of the type of contact interactions discussed above. Since the SM does not contain a fundamental pseudoscalar, we focus our study on two-Higgs doublet models (2HDM). We note that the simplest cases of Higgs-portal models connecting DM to the SM through Higgs mediation are highly constrained Lopez-Honorez et al. (2012); Greljo et al. (2013); Fedderke et al. (2014). In particular, existing models have studied singlet-doublet fermion sectors coupled to the SM Higgs Enberg et al. (2007); Cohen et al. (2012); Cheung and Sanford (2013), while our work generalizes this to 2HDMs. Within the 2HDM framework, the pseudoscalar interaction with DM carries unavoidable interactions of DM to the CP-even scalars as well. However, compared to the usual Higgs portal models, in a general 2HDM, it is possible to obtain parametrically larger couplings of DM to the pseudoscalar (or heavy Higgs) than to the light Higgs, in this way alleviating the various constraints. The possibility of resonant annihilation through the pseudoscalar also permits a more sizable parameter space for DM to be a thermal relic. As we will show, much of the open parameter space is in the region where . This is the analog of the so-called “-funnel” region in the Minimal Supersymmetric Standard Model (MSSM). Still, it is important to note that, even if the DM mass is not tuned to be very close to , we can obtain a thermal DM and satisfy the constraints from direct detection and from the LHC. Another motivation to consider models with DM mainly annihilating through the -funnel comes from the intriguing results for DM annihilation in the gamma-ray sky Goodenough and Hooper (2009); Hooper and Goodenough (2011); Hooper and Linden (2011); Abazajian and Kaplinghat (2012); Gordon and Macias (2013); Abazajian et al. (2014); Daylan et al. (2014). Studies of the Fermi gamma-ray data show an excess diffuse component around the Galactic Center (GC). Although these diffuse gamma-rays may have an astrophysical origin, it is interesting to note that their spectrum and morphology is consistent with expectations for gamma-rays from DM annihilation. The spectrum peaks at around 2-3 GeV, which suggests a DM candidate with mass smaller than around GeV, and the signal follows an NFW profile. Many theoretical studies have explored simplified scenarios Berlin et al. (2014a); Alves et al. (2014); Balázs and Li (2014); Berlin et al. (2014b); Martin et al. (2014), as well as more UV-complete models, for the Galactic Center Excess (GCE). In particular, light DM with mass around GeV annihilating mainly to bottom quarks and tau leptons has identified as giving a good fit to the data. However, a more recent Fermi analysis of the GC region Murgia () indicates the spectrum of the diffuse component is harder than previously thought, allowing for reasonable fits with heavier DM candidates annihilating to di-boson final states Agrawal et al. (2014); Calore et al. (2014a, b). A number of previous works have identified the pseudoscalar case as a promising candidate to fit the GCE Izaguirre et al. (2014); Cheung et al. (2014); Cahill-Rowley et al. (2014); Ipek et al. (2014); Huang et al. (2014); Hektor and Marzola (2014); Arina et al. (2014); Dolan et al. (2014); Abdullah et al. (2014); Boehm et al. (2014); Ghorbani (2014). In a number of these models, in order to satisfy various experimental constraints, a new light pseudoscalar which mixes with the pseudoscalar in the 2HDM is introduced, while the new scalars in the 2HDM itself are relatively decoupled Cheung et al. (2014); Cahill-Rowley et al. (2014); Ipek et al. (2014); Cao et al. (2014). The new feature of our model is that we rely purely on the pseudoscalar of the 2HDM, but consider more general 2HDMs and dark sector spectra. The paper is organized as follows. In Section II we review aspects of general 2HDMs and we fix our notation. Furthermore, we motivate the scalar spectra considered in the following sections. Section III discusses the model-independent collider constraints on the simplified model, covering heavy Higgs searches and searches with MET. We then turn to specific models, beginning with a brief discussion on extended Higgs portal models for both scalar and fermion DM in Section IV. Our main analysis is contained in Section V for fermion DM coupled to a 2HDM. We consider a range of constraints on the model, and present viable parameter space both for heavier DM (150 GeV) in Section V.3 and specifically for the GCE in Section V.4. We reserve Section VI for our conclusions. In the appendices, we elaborate on 2HDM benchmarks for the GCE (Appendix A) and give analytic formulae for Higgs couplings to DM in our model (Appendix B). ## Ii Two Higgs Doublets The most general renormalizable Higgs potential of a 2HDM can be written as V = m2dΦ†dΦd+m2uΦ†uΦu +λ12(Φ†dΦd)2+λ22(Φ†uΦu)2 +λ3(Φ†dΦd)(Φ†uΦu)+λ4(Φ†dΦu)(Φ†uΦd) +[−Bμ(Φ†dΦu)+λ52(Φ†dΦu)2 +λ6(Φ†dΦu)Φ†dΦd+λ7(Φ†dΦu)Φ†uΦu + h.c.] , with and Higgs doublets with hypercharge . The Higgs fields can be parametrized as Φd = ⎛⎝H+d1√2(vd+hd+iad)⎞⎠ , Φu = (H+u1√2(vu+hu+iau)) . (4) Assuming CP conservation, the mass eigenstates are given by (hH) = (cosα−sinαsinαcosα)(huhd) , (GA) = (sinβcosβcosβ−sinβ)(auad) , (G±H±) = (sinβcosβcosβ−sinβ)(H±uH±d) , (5) where we define the basis-dependent ratio  111In the following, we will restrict our attention to Type-II 2HDMs, for which is a well defined basis-independent quantity Haber and O’Neil (2006). and GeV. and are Goldstone bosons, the pseudoscalar, the charged Higgs, and the light and heavy CP-even scalar, respectively. In the following, we will choose to work in the alignment limit, , where the couplings are SM-like. In the MSSM, the masses of the heavy scalars are clustered around a similar scale, with splittings arising only from small -terms. However, in a general 2HDM framework, we have more freedom to get more sizable splittings. As a result, we can split the pseudoscalar mass such that , as needed for a model that can fit the GCE (see Sec. V.4). In particular, we can write the mass eigenvalues as functions of the quartic couplings in Eq. (II): m2H±−m2A=v22(λ5−λ4) (6) and for and , m2H−m2A≃v24(λ1+λ2−2(λ3+λ4−λ5)), (7) m2h≃v24(λ1+λ2+2(λ3+λ4+λ5+2λ67)), (8) while for and , m2H−m2A≃λ5v2,   m2h≃λ2v2. (9) where . From these expressions, it is clear that a splitting as large as 100 GeV between the pseudoscalar and the charged Higgs and between the pseudoscalar and the heavy Higgs can be obtained for . Electroweak precision measurements, and in particular the parameter, can give important constraints on spectra with large splittings. A notable exception are alignment models with , but with an arbitrary large mass splitting between the pseudoscalar and the charged Higgs, which leads to a very small correction to the parameter. Further constraints arise from the requirement of vacuum stability and perturbativity of the quartic couplings. In Appendix A, we will give examples of viable quartic couplings that are able to produce the spectra that we use in Sec. V.4, including the right mass for the SM-like Higgs boson discovered at the LHC. Finally, additional constraints on the scalar spectrum come from flavor transitions. In general, both Higgs doublets can couple to up and down quarks, as well as to leptons, with generic Yukawa couplings. However, the most general Yukawa couplings lead to excessively large contributions to flavor changing neutral transitions. This has led to the various well-known “Types” of 2HDMs, in which different discrete symmetries are imposed; the two Higgs doublets have different charge, thus forbidding some of the Yukawa couplings. This condition also reduces the number of free parameters in Eq. (II), since the symmetry demands 222We work under the assumption that the symmetry is softly broken by the mass term .. In the following, we will focus on a Type-II 2HDM, for which the symmetry allows the doublet to only couple to right-handed up quarks and the doublet to only couple to right-handed down quarks and leptons. This type of 2HDM is interesting, since the MSSM at tree level is a particular limit of Type-II 2HDMs. ## Iii Collider signals In this section we present model-independent collider bounds on the new Higgs sector coupled to DM. We will keep the discussion as general as possible, considering a single new pseudoscalar and dark matter particle with coupling as in Eq. (1). Collider signals of simplified models of dark matter coupled to new scalars have also been discussed recently in Harris et al. (2014); Buckley et al. (2014). For the set of models we consider in this paper, there will also be a rich set of collider signals associated with new charged states, such as the charged Higgs and the additional charged fermion states. We will discuss these in Section V. ### iii.1 Present bounds In a Type-II 2HDM, the coupling to down-type quarks, and charged leptons, is given by , where , with the mass of the corresponding fermion. Assuming a Majorana fermion DM with interaction as in Eq. (1), the invisible decay width of the heavy pseudoscalar is given by Γ(A→χχ)=y2χmA4π ⎷1−4m2χm2A , (10) while the visible decay width is Γ(A→f¯f)=ncy2ftan2βmA8π ⎷1−4m2fm2A , (11) where is the number of colors of the final state SM fermion. Depending on the values of and , different LHC search channels can play a role. In Fig. 1 we compare the constraints on the visible and invisible decay from current LHC searches. In the same figure, we also give 14 TeV projections for each of these searches, as discussed in Section III.2. In each case, we assume the maximal possible branching ratio in order to show the optimal sensitivity that could be achieved in each channel. More precisely, in the case of the pseudoscalar decaying to bottom quarks and leptons, we assume and in the case of the pseudoscalar decaying invisibly, we assume BR. For simplicity, results are shown in the narrow width limit for the pseudoscalar. In the absence of invisible decays, in a Type-II 2HDM at large , the branching ratio of to is about 10 and the one to about 90. The most stringent LHC constraints on come from heavy MSSM Higgs boson searches. The main production modes of are in association with -quarks and in gluon fusion with a heavy quark loop. For , we use the 7 TeV analysis from CMS Chatrchyan et al. (2013). For , we take the 8 TeV results from ATLAS Aad et al. (2014a), which gives slightly stronger limits at high compared to the CMS analysis Khachatryan et al. (2014a). In order to derive conservative constraints, we assume that the heavy CP-even Higgs is somewhat heavier than the pseudoscalar, so that only the pseudoscalar contributes to the invisible or visible signature. This is different than the typical models used by the LHC experimental collaborations to interpret heavy Higgs searches Chatrchyan et al. (2013); Aad et al. (2014a); Khachatryan et al. (2014a), in which and both the pseudoscalar and heavy CP-even Higgs contribute to the signature. The decay mode gives rise to a (mono-) signal in -associated production. To derive constraints, we simulate the signal with MadGraph5 Alwall et al. (2011), PYTHIA Sjostrand et al. (2006) and DELPHES de Favereau et al. (2014), and compare with the signal region SR1 in the ATLAS analysis Aad et al. (2014b). As Fig. 1 shows, the channel gives the strongest constraint. This is the only channel for which a reconstruction of with is possible, and furthermore can have smaller backgrounds if a leptonic decay is considered. Fig. 1 assumed maximal branching ratios in each case. The relative importance of the constraints is modified when both invisible and visible decays have sizable branching ratios. Next, we consider the interplay of these searches, as well as the requirement of a thermal relic. For concreteness, we focus on DM with mass GeV, where the dominant annihilation channel is through a pseudoscalar of mass GeV, such as that favored by the GCE (see Section V.4). The constraints on the two free parameters and are shown in Fig. 2. For both visible and invisible decays in -associated production, a minimum can be reached since the production mechanism relies on the coupling, which is enhanced. Furthermore, as expected, the mono- search covers the large region, which is difficult to probe with the search. In Fig. 2, we also show limits at low coming from the search for a pseudoscalar decaying invisibly and produced in association with top quarks Zhou et al. (2015). This search is able to exclude values of , above which the production cross section becomes too small333In principle, invisible decay should also be included which would strengthen constraints, but the relation of its invisible decay to the invisible decay is more model-dependent, so we do not include it in this figure.. From the figure, we can see that the and signatures are complementary to probe the region of parameter space where DM is a thermal relic. The search, instead, is not effective in this region. Finally, in this context, it is interesting to understand where the SM-like Higgs boson discovered by the LHC is placed. The most stringent direct bound on the branching ratio of the Higgs decaying invisibly comes from the CMS analysis Chatrchyan et al. (2014) that combines the searches for a Higgs produced in vector-boson-fusion and for a Higgs produced in association with a boson. This analysis is able to put a bound on the branching ratio into invisible at at the level. Additional bounds come from searches for a Higgs produced in association with tops and decaying invisibly. Following the results in Ref. Zhou et al. (2015), the branching ratio into invisible is bounded at . Using this latter bound, the SM Higgs coupling to DM is then constrained to be about 0.02, with which a thermal relic is possible in only-Higgs mediated models when is very close to  De Simone et al. (2014). This is a much stronger bound than we get from the signature, which is only able to constrain at best a coupling of for the same scalar mass. ### iii.2 LHC Run 2 projections In Figs. 1-2, we also show projections for the 14 TeV LHC with 300/fb integrated luminosity. For the mono- limits, we simulate events with the cuts in Lin et al. (2013), taking advantage of the background simulation already done there. A full projection for the visible channels requires modeling of the gluon fusion to processes, as well as the SM , and backgrounds. A detailed collider study is beyond the scope of the current paper. We present a simple estimate for these channels. We extrapolate both signal and background based on pdf luminosity, assuming for the signal on-shell production of a new heavy mediator. Our method is similar to that in Weiler and Salam (), but we have also to include the quark PDF in our analysis. Combining these results gives an estimate for the improvement in , which we use to determine the improvement in the bound for each given . For this reason, in Fig. 1 the projection for the search is only shown up to GeV. In all cases, for production, we consider only -associated production. The gluon fusion channel will become important at low values of . For the values of probed at 8 TeV, we have checked that gluon fusion is subdominant if compared to -associated production, with the explicit cross section limits on given in Refs. Aad et al. (2014a); Khachatryan et al. (2014a). Therefore, the projections for 14 TeV can be regarded as conservative, in that they ignore the additional constraining power from gluon fusion. Two comments on the figures are in order: projections of the and channels show that, as expected, the bound on from the decay will be weaker than the bound from . Still, the difference in the reach of the two channels is not that large, especially at low values of , motivating the search for with additional data beyond the 5/fb 7 TeV set. Additionally, as we can see from Fig. 2, improved constraints on the () and MET channels from LHC run 2 could help cover much of the region of parameter space where DM is a thermal relic. ## Iv Extended Higgs portal models Models of scalar DM coupled to an extended Higgs sector have been studied extensively in the literature, including the case with a 2HDM Higgs content Drozd et al. (2014); He et al. (2012); He and Tandean (2013); Bai et al. (2013); Wang and Han (2014). Especially for light DM, this is a much more constrained scenario than for fermion DM, as we review in the following. If DM is assumed to be a real scalar, , and a gauge singlet under the SM, the lowest dimension gauge invariant operator that allows direct couplings to the SM Higgs is Burgess et al. (2001) LDM⊃λSΦ†ΦS2 , (12) where is the SM Higgs doublet. However, for DM mass the large values of needed to sufficiently deplete the abundance of before freeze-out are in strong conflict with constraints from the invisible width of the SM Higgs. For heavier masses, aside from a small window of viable parameter space very close to the Higgs resonance, LUX Akerib et al. (2014) rules out thermal DM for GeV De Simone et al. (2014) (see also Feng et al. (2014), for a recent analysis). However, this does not have to be the case if the model is slightly extended beyond the simple Lagrangian of Eq. (12). A concrete example is the singlet scalar extension of a 2HDM. Once again, the singlet scalar is the DM candidate, but it now possesses interactions with through the terms LDM⊃S2[λdΦ†dΦd+λuΦ†uΦu+λdu(Φ†dΦu+h.c.)] . The introduction of a second Higgs doublet allows to annihilate through an s-channel as well. For CP-conserving interactions, annihilation through is not allowed. As a result, it is possible to slightly uncouple the annihilation rate from the coupling of the SM Higgs with DM and thus ease the tension with the invisible Higgs and LUX constraints to some degree. However, doing so can require tunings in the couplings above, in order to sufficiently suppress the interaction strength with the SM Higgs. Furthermore, since the heavy also leads to spin-independent (SI) nucleon scattering, thermal DM below 100 GeV is still very restricted by direct detection constraints. More detailed analysis of the model can be found in Ref. Drozd et al. (2014). Another option is to consider fermion DM. We begin by summarizing the constraints on a model with the Higgs sector given just by the SM Higgs. Light thermal DM is particularly difficult to achieve. In particular, a strictly weakly interacting fermion DM candidate is in strong tension with the null results of current direct detection experiments. For example, a DM candidate like a 4th generation heavy Dirac neutrino has couplings that would lead to a nucleon elastic scattering rate many orders of magnitude beyond what is currently allowed by LUX (see e.g. De Simone et al. (2014)). A fix for this issue is to introduce a new gauge singlet fermion that is allowed to mix with the active components of the DM after electroweak symmetry breaking. This can be explicitly realized by coupling the sterile and active components through renormalizable interactions involving the SM Higgs. Dubbed “singlet-doublet” DM Enberg et al. (2007); Cohen et al. (2012); Cheung and Sanford (2013), the simplest possibility is to introduce a Majorana gauge singlet and a vector-like pair of doublets. Since the mixing of the DM candidate originates from Yukawa interactions with the SM Higgs, such scenarios generically are quite constrained by spin-independent direct detection limits, and for light DM this usually necessitates living somewhat near “blind-spots” of the theory, where the coupling to the SM Higgs is strongly suppressed. Furthermore, since annihilation through the SM Higgs is velocity suppressed, to accommodate a thermal relic, DM annihilations involving gauge bosons need to be the dominant channel, and for DM lighter than GeV, this demands a strong coupling to the . However, precision electroweak measurements near the -pole strongly constrain such a coupling when Schael et al. (2006a). Hence, additional annihilation mediators would greatly benefit the prospects for light thermal DM. In light of this reasoning, in the next section we will study a model that adds a second Higgs doublet to this simple fermion DM scenario. The singlet-doublet case discussed above is one particular limit of this model. As we will see, the extension to a 2HDM vastly alleviates the tension of GeV singlet-doublet DM with current experimental constraints. ## V Fermion DM in a 2HDM We define a simple UV-complete extension of the SM, whose scalar sector is that of a Type-II 2HDM, and whose DM sector consists of one gauge singlet Weyl fermion, , and two Weyl doublets and , oppositely charged under , as shown in Table 1. We assume that the fermionic fields of the dark sector are odd, ensuring the stability of the lightest state. The DM sector Lagrangian has the terms LDM⊃ −12MSS2−MDD1D2 −y1SD1Φ1−y2SΦ†2D2+h.c. , (13) where 2-component Weyl and indices are implied. The notation indicates that we will allow different permutations for which Higgs doublets (down or up-type) couple to and . From here forward, we will use the naming scheme where , will be called the “du” model, and similarly for other choices of and . The naming schemes are given in Table 2. Note that in Eq. (13), we have made the simplifying assumption that and each only couple to a single Higgs doublet. In the case of the and models, this can be enforced by the symmetry of the 2HDM scalar sector (see Sec. II), that is assumed to be broken only by mass terms. This is not the case for the and models. Still, our qualitative conclusions would not change significantly if all possible Yukawas are allowed to be of comparable strength. The situation where only the SM Higgs doublet appears in the interactions of the Lagrangian (13) is the singlet-doublet model discussed in Section IV and can be identified as the limiting case of in the uu model. In this paper, we do not perform an analysis of this model, as it has already been covered in great detail in Cohen et al. (2012) and Cheung and Sanford (2013). Furthermore, since the du and ud model can be mapped into each other by a simple switch of , only one systematic study of the two is needed. Although the model embodies the spirit of bino-Higgsino DM in the MSSM, it allows much more freedom since we are freed from restrictions such as the Yukawa couplings being fixed by the gauge sector, and compared to singlino-Higgsino DM in the NMSSM, the model requires fewer new degrees of freedom444Bino-Higgsino mixing of the MSSM can be related through the identifications , , , and , where is the hypercharge gauge coupling, is the bino soft SUSY breaking mass, and is the Higgsino mass term. For singlino-Higgsino mixing of the NMSSM, the appropriate identifications are , , , and , where is the Yukawa coupling for the trilinear singlet-Higgs interaction of the superpotential, is the supersymmetric mass term for the singlet, is the Yukawa trilinear self-interaction for the singlet, and is the VEV of the singlet scalar.. For simplicity, we take all couplings to be real, and work in the convention . Keeping the sign of and fixed, we see that we can parity transform just or both and . Either of these choices in field re-definitions results in simultaneously flipping the sign of and . However, their relative sign remains unchanged. Therefore, the only physical sign in our couplings is that of . Hence, we will often trade the couplings , in favor of and defined by y1≡ycosθ , y2≡ysinθ . (14) The DM doublets are parametrized as D1=(ν1E1),D2=(−E2ν2) , (15) where , are the neutral, charged components of the doublets, respectively. and combine to form an electrically charged Dirac fermion of mass . To avoid constraints coming from chargino searches at LEP, we only consider values of GeV LEP (). After electroweak symmetry breaking, one ends up with three Majorana mass eigenstates, in general all mixtures of the singlet and the neutral components ( and ) of the doublets. In the basis, the neutral mass matrix for the fermionic dark sector is Mneutral=⎛⎜ ⎜ ⎜ ⎜ ⎜⎝MS1√2y1v11√2y2v21√2y1v10MD1√2y2v2MD0⎞⎟ ⎟ ⎟ ⎟ ⎟⎠ , (16) where are the VEVs of . The lightest mass eigenstate of will be the stable Majorana DM candidate, which we write as a 2-component Weyl fermion . We will denote the composition in terms of the gauge eigenstates as χα=NχSSα+Nχ1ν1α+Nχ2ν2α . (17) In most of the viable parameter space for thermal DM, especially for light DM GeV, it is usually the case that the is singlet-like and . This is because a large doublet component is strongly constrained by direct detection. Therefore, it is useful to write down the approximate form of when both and are much smaller than . In this limit, one finds NχS≈1 ,  Nχ1≈ −1√2y2v2MD ,  Nχ2≈−1√2y1v1MD . (18) ### v.1 Interactions We present the relevant DM interactions in this section. For the rest of the analysis, we will give our analytic expressions in terms of 4-component notation, since they can be written in a more compact form. We write the charged state as , a Dirac fermion of electric charge and mass . The interactions of with , , , and are L ⊃λχhh¯χχ+λχHH¯χχ+λχAA¯χiγ5χ +[H+¯χ(λ+s+λ+pγ5)E+h.c.]. (19) The full forms for these couplings are given in Appendix B. Note that in their explicit analytic form in Eqs. (38)-(41), the Higgs couplings are proportional to , reiterating that the only physical sign is that of . All of these couplings rely on the singlet-doublet mixing. Also from Eqs. (38)-(41), it is important to note that, as long as and couple to the same Higgs doublet (i.e. the dd and uu models), there is a generic “blind-spot” of the theory that is independent of : for the couplings to the CP-even scalars identically vanish. We will often be considering the case where , since a large doublet component is strongly constrained by direct detection. In this limit, the blind-spot in these scenarios will necessitate or , along with . Furthermore, for , it is particularly simple to write the coupling of to the neutral Higgs bosons as λχh ≈y1y22MD(Nh1v2+Nh2v1) λχH ≈y1y22MD(NH1v2+NH2v1) λχA ≈−y1y22MD(NA1v2−NA2v1), (20) where are the projections of the physical Higgses into the gauge eigenstates, as defined in Eq. (33), and are functions of . Through mixing with and , can also have interactions with the electroweak gauge bosons of the form (21) such that gχZ =−g4cw[(Nχ1)2−(Nχ2)2]≈g8cwM2D(y21v21−y22v22) g+v =g2√2(Nχ1+Nχ2)≈−g4MD(y1v1+y2v2) g+a =−g2√2(Nχ1−Nχ2)≈−g4MD(y1v1−y2v2) , (22) where the latter approximate expressions hold for . is the gauge coupling and is the cosine of the Weinberg angle. It is apparent that, for , is small, and, furthermore, even for large doublet mixing, can vanish identically if , i.e. when . For the sake of brevity, we have not written down the interactions of the heavier Majorana mass eigenstates of the theory. Although such terms are only relevant when the mass splittings are not too large, we do include them in our final analysis. The analytic forms for the Higgs and gauge interactions we have written down so far are not completely general, since they depend on the sign of the mass terms for the physical states of the theory. In particular, if upon diagonalizing to the mass eigenstate basis we encounter states with a “wrong sign” mass term, a field redefinition that preserves the Majorana character of the field must be performed, e.g. . This transformation preserves the canonical form of the kinetic terms, but may introduce additional ’s or ’s in the interaction terms. These changes lead to physical consequences. The explicit analytic expressions written down in this paper must then be appropriately modified if this is indeed the case. ### v.2 General Constraints To be considered as a realistic DM candidate, must satisfy a handful of experimental constraints: 1. The thermal relic density of must satisfy , in agreement with current measurements from WMAP and Planck Ade et al. (2014). Furthermore, the annihilation rate must lie below the 95 % CL upper limits from gamma-ray observations of dwarf spheroidal satellite galaxies of the Milky Way Ackermann et al. (2015). 2. The scattering rate of with nuclei is below the current spin-independent and spin-dependent limits from LUX Akerib et al. (2014); Savage et al. (2015). 3. If , then BF, coming from global fits to the observed Higgs couplings Belanger et al. (2013). Note that our results do not change appreciably if we impose weaker bounds on the invisible Higgs decay at the level of , as obtained from direct searches Chatrchyan et al. (2014); 1229971 (). 4. If , then MeV, coming from LEP precision electroweak measurements near the -pole Schael et al. (2006a). 5. No other constraints from the LHC or other colliders are violated. In particular, we will consider LEP and LHC direct searches for a heavy Higgs and electroweakinos, and mono- constraints. Relic Density. For DM masses GeV, the dominant annihilation channels governing freeze-out and annihilation today are -channel exchange of and/or (if is sufficiently light). Since is Majorana, for -exchange the -wave contribution is chirality suppressed by the mass of the final state SM fermions. Meanwhile, -exchange is also suppressed by fermion mass but can be enhanced for large . For more massive DM, specifically when is sufficiently heavier than , can annihilate into pairs of ’s or ’s, and both of these processes are in general unsuppressed by the relative velocity. Once is taken to be larger than a few hundred GeV, annihilations to pairs of scalars become relevant, which depends on additional couplings in the full Higgs sector. We will therefore restrict our study to the parameter space where is sufficiently light such that final states including , , and do not contribute significantly to the calculation of . With this simplifying assumption, we can safely ignore the heavy Higgs self-interactions present in the full 2HDM Lagrangian. We find that in the calculation of , resonances and co-annihilations can play an important role in setting the relic abundance in certain regions of parameter space. Since a proper calculation of requires a careful treatment of such effects, we implement our model with FeynRules 2.0 Alloul et al. (2014) and micrOMEGAS_3.6.9 Belanger et al. (2014) in the calculation of the relic abundance and numerically scan over the parameter space of our different models. In doing so, we have checked the output of the dominant annihilation channels analytically. We will represent the regions with the correct relic density in black in our summary plots of Figs. 4 and 5. Regions where the annihilation rate is above the 95 % CL upper limits from gamma-ray observations of dwarf spheroidals will be represented in purple. Direct Detection. The dominant contribution to the SI scattering rate of off of nuclei is from exchange of the CP-even scalars , . The SI scattering cross section per nucleon is σSI, per nucleon0=4μ2χ,nπ[Zfprot+(A−Z)fneutA]2 , (23) where and are the atomic mass and atomic number, respectively, of the target nucleus, and is the reduced mass of the WIMP-nucleon system. (where the nucleon is a proton or neutron) is the effective WIMP coupling to nucleons, which can be written in terms of the quark couplings as fn≡mn[∑q=u,d,saqmqf(n)Tq+227f(n)TG∑q=c,b,taqmq] , (24) where is defined to be aqmq ≡1v[−λχhm2h+λχHqβHm2H] qβH ≡{cotβ,if q=up-type−tanβ,if q=down-type . (25) In Eq. (24), we take the quark mass fractions to have the values , , , , . By definition,  Junnarkar and Walker-Loud (2013); Crivellin et al. (2014). The effective WIMP-Higgs Yukawa couplings in Eq. (25) are the same ones given explicitly in Eqs. (38)-(41). Throughout, we demand that the rate given in Eq. (23) is below the SI constraints from LUX Akerib et al. (2014). We will represent this constraint in red in our summary plots of Figs. 4 and 5. As we will see later in Sec. V.3, the sign of can have important effects on the direct detection rate. Although it will only slightly alter the DM annihilation rate throughout its thermal history, a negative can allow for a suppressed SI scattering rate as seen by LUX. To illustrate this effect, we show in Fig. 3 the SI nucleon scattering rate (normalized by the LUX limit) for the du and dd models as a function of and for various representative choices of , , and . In the left plot of Fig. 3, we show the normalized rate for the du model. For , Eq. (38) and Eq. (25) imply that the effective light and heavy Higgs couplings with nucleons can be comparable and opposite in sign. As can be seen in Fig. 3, this destructive interference between and exchange is a generic feature of the du model for , however the exact point of maximum cancellation depends on the specific choice of , , , and . Furthermore, independent of the chosen benchmark of couplings, for large , there is always a suppression at , which can be understood as the point of suppressed mixing since both mixing terms and are small here. In the right plot of Fig. 3, we show the normalized SI scattering rate for the dd model. In this case, the rate can also be suppressed for . This is because both couplings and vanish at a blind spot of the theory when . As can be seen in the figure, the position of the blind spot does not depend much on the values of and . For the and parameters in Fig. 3, this occurs at . We next consider elastic spin-dependent (SD) scattering of with nuclei via exchange, requiring that the DM-neutron cross section555The constraints on SD scattering with neutrons is generally stronger than that coming from protons. is consistent with the recast constraints from LUX Savage et al. (2015). The SD scattering of per neutron is σSD, per neutron0=12μ2χ,neutπ(∑q=u,d,saqΔ(neut)q)2 , (26) where is the effective coupling with quarks, aq ≡gχZgqam2Z gqa ≡∓e4(tw+t−1w) , if q= up/down-% type . (27) In Eq. (26), we take the quark spin fractions to be , , and  Cheng and Chiang (2012). is the coupling of DM with the boson and is given in Eq. (22). Here, is the electric charge of the electron (), and is the tangent of the Weinberg angle. The constraints on the SD scattering off of neutrons from LUX become more important as the doublet mass is decreased and the doublet fraction is correspondingly enhanced (of course, if is purely doublet, then vanishes completely as seen in Eq. (22)). We will represent this constraint in orange in our summary plots of Figs. 4 and 5. Invisible Decays. The constraints from the invisible widths of the SM Higgs and are relevant whenever , respectively. The invisible branching fraction of the Higgs is constrained to satisfy BF at 95% CL, which comes from a global fit to the visible Higgs channels in which the visible Higgs couplings are fixed to their SM values Belanger et al. (2013). Assuming that , it follows that the constraint on the invisible width of the Higgs is approximately MeV. From the Lagrangian of Eq. (19), the Higgs width into a pair of Majorana ’s is found to be Γ(h→χχ)=λ2χh4πmh(1−4m2χm2h)3/2 . (28) Similarly, electroweak precision measurements at LEP constrain MeV Schael et al. (2006a). For , the Z width into a pair of ’s is Γ(Z→χχ)=g2χZ6πmZ(1−4m2χm2Z)3/2 . (29) We will represent the constraint from the Higgs () invisible width in gray (brown) in our summary plot of Fig. 5. Direct searches for new particles. The model contains additional (possibly light) new particles that can be looked for directly at the LHC: two neutral () and one charged Higgs boson () and two neutral () and one charged () fermion, in addition to the DM candidate, . In Sec. III.1, we have already discussed the LHC bounds on neutral heavy Higgs bosons. Here we comment on the constraints on charged Higgs bosons, as well as on new fermions. The most relevant LHC charged Higgs searches are for the process , with subsequent  ATLAS collaboration (2014); CMS Collaboration (2014a) or  CMS Collaboration (2014b). These searches, performed with 8 TeV data, probe charged Higgs bosons in the multi-hundred GeV range, but only for very large values of (), for which the coupling entering the charged Higgs production is enhanced. In addition to direct searches, flavor transitions such as can set interesting (indirect) bounds on the charged Higgs mass: in a Type-II 2HDM, charged Higgs bosons cannot be lighter than around (300-400) GeV Hermann et al. (2012). In the following, we will always fix the charged Higgs mass to 300 GeV and in such a way as to avoid constraints from flavor transitions and direct collider searches for heavy Higgs bosons. LHC searches for electroweak Drell-Yan production can set interesting bounds for the new fermions arising in our model. In particular, the model contains one charged and two neutral fermions, in addition to the DM. These fermions are produced either in pairs through a boson (, where ), or in associated neutral-charged production, with the exchange of a boson (). Generically, the latter production mode has the most relevant LHC constraints. LHC searches for supersymmetric Wino associated production giving a (or ) MET signature can already set bounds on the Wino mass at around 400 GeV for massless lightest supersymmetric particles (LSPs) Khachatryan et al. (2014b); Aad et al. (2014c, d). In Sec. V.4, we will discuss how this bound can be interpreted in our model. ### v.3 Discussion on DM above 100 GeV Among all of the interactions of our DM candidate , the coupling to the pseudoscalar is key in opening up viable parameter space. Annihilation through an -channel pseudoscalar is -wave, and if it happens not too far from the pseudoscalar resonance, 1−4m2χ/m2A≲0.5, (30) then it is possible to obtain even for a large singlet fraction. And unlike the interactions of with and , scattering off of quarks via exchange of the pseudoscalar is spin-dependent (SD) and further kinematically suppressed by four powers of the momentum transfer, , where is typically of order 100 MeV. In exploring the parameter space of the model, we have found that it is difficult to find thermal DM candidates very far away from pseudoscalar resonances. When the relic density is not governed by resonances (or coannihilations) to any significant degree, there must be large mixing in the gauge eigenstate makeup of in order to obtain the correct abundance. Since this mixing hinges on the Yukawa interactions (see Eq. (18)), this will also increase the DM scattering rate off of quarks. Such regions are generally ruled out by LUX, except near special blind spots, as discussed above. We present benchmarks for two of the models of Table 2 in Fig. 4. As previously mentioned, the du and ud models can be related to each other by replacing with its inverse, and the uu model in the large limit reduces to only coupling the SM Higgs to the dark sector. We therefore only explore the parameter space for the models du and dd. In this section, we focus on GeV, which also easily avoids constraints from the invisible width of the or . Prospects for lighter DM ( GeV) will be presented below in Sec. V.4. We also choose to work with sufficiently light , such that annihilations to final states including one or more heavy Higgs are negligible. This therefore favors scalars of mass around a few hundred GeV in order for a pseudoscalar resonance to be relevant, and so we fix GeV in order to simplify the scan of the parameter space. We note the model would work just as well for heavier scalars and correspondingly heavier DM. In. Fig. 4, we show constraints in the plane for representative choices of , , and that give viable parameter space for thermal relic DM. Our choice of parameters satisfies the constraints from direct searches for heavy Higgs particles in the and final states, as discussed in Section III. Furthermore, since the pseudoscalar-DM coupling in the full plane we present, mono- searches are much less sensitive to this model. In both du and dd models, we clearly see the least constrained region for thermal relic DM is where GeV ( at or slightly above 150 GeV is in slight tension with gamma-ray observations of dwarf galaxies). The thermal relic line extends to larger (larger singlet fraction) as the DM mass approaches the resonant region, which is centered slightly below . This can be understood as thermal broadening of the resonance near freeze-out. Another feature in the thermal relic line can emerge if is not too large: in the right frame of Fig. 4, for GeV, there can be dominant annihilations to top quarks through -channel exchange of a light or heavy Higgs. Although the pseudoscalar resonance region will remain viable for many different parameters, the proximity or tuning of to depends on the choice of or . The thermal relic line in the figures will shift to larger values of for a fixed for large . (The relic abundance is only slightly affected by the sign of .) This is because, in both du and dd models, for large , , which then implies (using Eq. (18)) . Hence, is largely unsuppressed, as seen by Eq. (22), and to compensate, the overall singlet fraction must be increased by slightly decoupling . Similarly, the doublet fraction of is proportional to , and so increasing in either of the scans of Fig. 4 will generically shift the thermal relic line to larger values of for a given value of . Direct detection constraints are most relevant at lower , where the DM singlet fraction is lower (see the red region of Fig. 4). Fig. 4 also illustrates various blind-spots in the SI direct detection rate. In the du model, for GeV, the choice of suppresses nucleon scattering. In particular, from Eq. (38), in the large limit the relative strength of the two different CP-even Higgs couplings is . Then, from Eq. (25), when the couplings of nucleons to and partially cancel, explaining the feature in the drop off in scattering rate for GeV and GeV. In the dd model, shown in the right frame of Fig. 4, -nucleon SI scattering is also near a blind-spot of the model where both are suppressed. Here, fixing , approximately when . The SD constraints (the orange region in Fig. 4) do not rule out much of the viable thermal relic parameter space that SI constraints do not already exclude. The importance of considering SD scattering via exchange is still illustrated in the du model, where spin-dependent limits are more powerful than spin-independent limits for values of GeV. The SD limits do not depend on Higgs couplings and are able to constrain the parts of the parameter space close to where and exchange interfere and suppress the SI scattering rate. Finally, as already introduced in the previous subsection, additional constraints might come from the LHC direct search of electroweak Drell-Yan production. However, in the regime with not too light DM, GeV, there are no bounds even for of around 200 GeV Khachatryan et al. (2014b); Aad et al. (2014c). ### v.4 Light DM and the GCE In this section, we investigate the viability of the models to describe DM with mass below GeV. Although much of the physics near the pseudoscalar resonance is similar to that of the previous section, we additionally require that the model could provide a reasonable fit of the Galactic Center Excess. For model building in a similar direction that can additionally describe the 3.55 keV X-ray line, see Berlin et al. (2015). One simplified model that has received much attention in its ability to describe this signal is just that of Eq. (1). Since the annihilation is -wave, the rate can still be large today. Moreover, a relatively light pseudoscalar is favored in UV-complete realizations to get around numerous constraints Cheung et al. (2014); Cahill-Rowley et al. (2014); Ipek et al. (2014). The same is true in our models, since a pseudoscalar of around 100-200 GeV will be needed for the DM to annihilate near-resonance. As discussed in detail in Sec. II, we therefore implement the freedom in a general 2HDM to have a sizable splitting between the pseudoscalar mass and the heavy/charged Higgs mass and fix GeV and GeV. This allows to be relatively close to a pseudoscalar resonance, while the other scalars are heavy to evade direct detection and other constraints. We now add the following criteria to the list of demands enumerated in Sec. V.2 for to be considered a realistic DM candidate consistent with the GCE: 1. Since annihilations through a light pseudoscalar proceed dominantly to final state bottom quarks, we restrict the mass of to lie in the range in order to fit the spectral shape of the GCE spectrum (this is represented in blue in Fig. 5).666The exact mass range that is preferred is dependent on systematics. For annihilations to , the spectral shape of the observed emission has been found to be well fit by DM of mass as low as 30 GeV and as high as 70 GeV Daylan et al. (2014); Abazajian et al. (2014); Calore et al. (2014a, b). As a benchmark, we choose the upper half of this range since it is more viable for the model at hand. 2. In order to ensure the approximate normalization for the GCE signal, the annihilation rate must satisfy  Calore et al. (2014b) (this is represented in green in Fig. 5). Limits on DM annihilation from AMS-02 observations of the anti-proton fraction Giesen et al. (2015); Lin et al. (2015) are very similar to those from the dwarf galaxies and have additional astrophysical uncertainties, so for simplicity we do not include them. In Fig. 5 we show benchmark scenarios for GeV, highlighting the regions that fit the GCE. We again consider the du and dd models and scan over the plane. We make a similar choice for , , and as in Sec. V.3. For both models, the qualitative behavior of the direct detection constraints is similar to the previous section. Again, since the pseudoscalar-DM coupling , mono- searches would have very little sensitivity to the relevant parameter space. For both models in Fig. 5, the GCE excess can be explained while avoiding direct detection limits if is mostly singlet-like and near a pseudoscalar resonance. Constraints from gamma-ray observations of dwarf spheroidals may also be avoided if is slightly below the pseudoscalar resonance, due to thermal averaging of the annihilation cross section. We also see the effects of annihilation near the poles (the latter only for the du model), which is visible in the thermal relic contour at around GeV, respectively. This effect is not present for ; for Majorana fermions annihilating through an -channel vector mediator (or Dirac fermions with only axial couplings) there is no resonant enhancement in the -wave contribution to  Jungman et al. (1996) and annihilation through an -channel scalar mediator is -wave for fermionic DM. The favored region for the du model is for TeV, and . Due to the relatively smaller mixing induced in the dd model, the appropriate GCE parameter space requires a smaller GeV. We also emphasize that, although we have presented only two benchmark scenarios here, these choices of models and parameters of the DM sector are not particularly special or highly fine-tuned. In particular, all models with a pseudoscalar that is not too heavy, , and not too small could give a good fit of the GCE. Of course, pseudoscalars even lighter than 160 GeV would be suitable to obtain a large enough annihilation rate. For example, a pseudoscalar mass around 100 GeV is even better suited for the GCE. For lighter GeV, then there are currently no LHC direct searches for pseudoscalars (although this may be possible in the future Kozaczuk and Martin (2015)) and only very weak constraints from heavy Higgs searches at LEP Schael et al. (2006b) are applicable. However, such very light pseudoscalars are more difficult to be achieved within our 2HDM scalar sector (see Appendix A for more details). We finally comment on the additional constraints we have to consider for these models with light DM, and consequently with new relatively light electrically charged degrees of freedom ( in Eq. (19) and Eq. (21)). We first note that the constraint from the invisible width of the SM and are not strong and can exclude only a small region of parameter space in the du model at light , which is already excluded by LUX constraints. Additionally, having new light and charged degrees of freedom can introduce corrections to electroweak precision observables. However, we have checked that the contributions to the parameter from loops of the new fermions is negligible (at most at the level of in the region of parameter space favored by the GCE) Baak et al. (2014). Finally, constraints on the parameter space come from LHC direct searches for Drell-Yan production of electroweak particles. For the models shown in Fig 5, the electroweak spectrum needed to fit the GCE contains several new light fermions in addition to DM. In particular, for the dd model, we have two additional neutral fermions and one charged fermion , all with mass close to GeV range and with splittings smaller than a few GeV amongst the states. Constraints from LHC searches of neutral-charged Wino associated production, resulting in a (or ) MET signature, are the most important to constrain our scenario. In particular, combining the and searches, the ATLAS collaboration sets a bound at GeV, under the assumption of branching ratio and massless LSP Aad et al. (2014c). This corresponds to an exclusion on fb. In the following, we discuss how to interpret this constraint in terms of our model. As discussed in Sec. V, the fermion content of our model resembles the one of the MSSM with a Bino-like LSP and Higgsino-like NLSPs. However, in our model the heaviest fermions and have a sizable branching ratio , as long as it is kinematically accessible ( GeV for the benchmarks in Fig. 5). Then it is easy to check that, in the dd model, the cross section for is always smaller than the excluded cross section (30 fb) in the entire region of parameter space for GeV. Below 220 GeV, the decay into a pseudoscalar is not accessible and the branching ratio for the decay is not large enough to suppress sufficiently the channel. Therefore, in the dd model, the region GeV of Fig. 5 has already been probed by the LHC direct searches of Drell-Yan production of electroweak particles. On the other hand, the region of parameter space favored by the GCE in the du model has not been probed by these LHC searches yet, since the additional fermions are heavier ( GeV). ### v.5 Future tests of the model. Upcoming spin-independent direct detection results can probe much of the parameter space for our model. For the model, which has a larger SI rate compared to the model, future LUX data can test the entire GC excess region and much of the region for heavy dark matter. The next generation of ton-scale direct detection experiments, such as LZ Nelson and LZ collaboration (2014) and XENON1T Aprile and XENON1T collaboration (2012), will be vastly more constraining. The LZ experiment can cover the entirety of the parameter space shown for the model in Figs. 4-5, while for the model a significant portion of the parameter space can be reached, including the entire GC excess region shown in Fig. 5. Our model can be further probed at LHC Run II by the search of the various light degrees of freedom. In particular, as shown in Fig. 2, pseudoscalar searches (with the pseudoscalar decaying either to taus or invisibly) will be able to test almost entirely the region of parameter space able to predict a thermal DM candidate, if both the DM and pseudoscalar are relatively light ( and below GeV). Furthermore, at the Run II of the LHC, with 300 fb data, searches for Drell-Yan production of electroweak particles will be able to probe Wino masses as high as GeV EWi (), under the assumption of decay for and for . This can be translated into a bound on at the level of GeV in the dd model for the GCE region777Note that for <
# Axis ticks in radians in polar pgfplots I am using the polar library with pgfplots to plot a graph like this: \begin{tikzpicture} \begin{polaraxis} \addplot[mark = none, domain = 0.4:12, samples = 600, data cs = polarrad]{sin(x)}; \end{polaraxis} \end{tikzpicture} As described in the manual, this uses radians for the function, but the axis labels are still plotted in degrees. Instead of having the 0, 30, …, 330 tick labels, I would like 0π, π/6, … 11π/6 labels. I am convinced the solution must be quite simple, but I have not been able to find it yet. - Would xticklabel={$\pgfmathparse{\tick/180}\pgfmathprintnumber[frac,frac denom=6,frac whole=false]{\pgfmathresult}\pi$} option be sufficient for your needs? –  percusse Mar 3 '13 at 13:23 @percusse setting xticklabel has no effect when I try it? (redefining \pgfplots@show@ticklabel@@polar has some effect but I guess that isn't the official interface? –  David Carlisle Mar 3 '13 at 13:49 This is a solution that I could live with, thank you very much! (@DavidCarlisle it works for me.) Of course it would be nicer to have simplified fractions, but if this is the simplest solution, then I guess there is no built-in way to label axes using radians. –  Ruud v A Mar 3 '13 at 13:50 @DavidCarlisle You have to include \usepgfplotslibrary{polar}, let me cook up a MWE. –  percusse Mar 3 '13 at 13:51 @percusse yes I got that far and can run the MWE and get a plot. I guess I should just leave pgf questions to egreg... –  David Carlisle Mar 3 '13 at 13:58 I've simplified the fractions a little more \documentclass{standalone} \usepackage{pgfplots} \pgfplotsset{compat=1.7} \usepgfplotslibrary{polar} \begin{document} \begin{tikzpicture} \begin{polaraxis}[ xticklabel={ \pgfmathparse{\tick/180} \pgfmathifisint{\pgfmathresult}{$\pgfmathprintnumber[int detect]{\pgfmathresult}\pi$}% {$\pgfmathprintnumber[frac,frac denom=6,frac whole=false]{\pgfmathresult}\pi$} } ] \addplot[mark = none, domain = 0.4:12, samples = 600, data cs = polarrad]{sin(x)}; \end{polaraxis} \end{tikzpicture} \end{document} You can introduce yet another \ifnum inside the integer check to remove 1 from 1pi but seems like an overkill to me. It's pretty readable in my humble opinion. - oh flip I nearly answered this an hour ago and could have had lots of lovely tikz points but I added xticklabel to the wrong place (\addplot) +1 to you +0 to me:( –  David Carlisle Mar 3 '13 at 14:11 @DavidCarlisle Go ahaed and post it please I'm not super happy with the way it is anyways. :) –  percusse Mar 3 '13 at 14:12
"The trouble with the world is not that people know too little, it's that they know so many things that just aren't so." - Mark Twain Sadly for me (I guess happily for my sanity) sometimes when I read and learn new things I do not fully put them through my thought filter and I just accept them as they are stated. Today I am going to talk about such a case and I hope it will help you remove one of the many misconceptions that we live with. For some crazy reason, recently I revisited the two well known graph traversal algorithms, Depth-First Search (DFS for short) and Breadth First Search (BFS for short) and came to some conclusions and realizations which apparently were interesting enough to wake me up from my months long hibernation since I last wrote a post. For the case where the details of the two algorithms are a little bit fuzzy and also to setup the scene for the grand reveal at the end I will go through the process of how usually DFS and BFS are taught (at least in my experience and as far as I see how it is mostly taught on the Internet if you search for these two algorithms). First we need an example graph. We will work with the one shown below: ## Breadth First Search BFS starts with a node of the graph which will act as a root node in the context of the algorithm. From the root, it will continue exploring the graph by traversing completely a level at a certain depth before going to the next one. Since we are traversing a graph we have to be careful to not visit the same node twice. For the example above the traversal order would be: • Level 0: 1 • Level 1: 2 -> 3 -> 4 • Level 2: 5 -> 6 -> 7 • Level 3: 8 -> 9 The classic implementation of this algorithm is iterative and involves a Queue which will mostly take care for us that the order of traversal is breadth first. A set is used also to keep track of discovered nodes: before adding any node to our queue of nodes to visit, we first check this set if the node was already previously added to the queue. One example of such an implementation is given below: void breadthFirstTraversal(Node root) { if (root == null) return; //Create and initialize the set which //will contain discovered nodes Set<Node> discovered = new HashSet<>(); //Create and initialize queue which //will contain the nodes left to visit Deque<Node> queue = new ArrayDeque<>(); queue.offer(root); while (!queue.isEmpty()) { Node current = queue.poll(); //visit the node; in this case just print // the node value System.out.println(current.getValue()); for (Node child : current.getChildren()) { //add to the queue any child node of the //current node that has not already been discovered //and also add it to the set of //discovered nodes if (!discovered.contains(child)) { queue.offer(child); } } } } ## Depth First Search Now that we know what BFS is, it should be intuitive how using a DFS traversal style would work. Instead of going level by level, the algorithm will start with a root node and then fully explore one branch of the graph before going to the next one. For our example graph (which I am showing below again so it is easy to follow) the traversal order would be: • Branch 1: 1 -> 2 -> 5 -> 8 -> 7 > 9 • Branch 2: 6 • Branch 3: 3 -> 4 DFS lends itself quite nicely to a recursive approach so the classic implementation looks something like this. void depthFirstTraversal(Node root){ if(root == null){ return; } //initialize the set which will contain // our discovered nodes Set<Node> discovered = new HashSet<>(); //call the recursive function starting //with the provided root dfsRecursive(root, discovered); } void dfsRecursive(Node current, Set<Node> discovered){ //mark the node as discovered //visit the node; in this case we just //print the value of the node System.out.println(current.getValue()); //recursively call the function on all of //the children of the current node if //they have not already been discovered for(Node child : current.getChildren()){ if(!discovered.contains(child)) { dfsRecursive(child, discovered); } } } At this point your teacher will say: "You know what? To help you out to see that these algorithms are not that different, let me show you how to implement DFS iteratively". They will mention that it is almost the same as BFS except two differences: 1. Use a stack instead of a queue. 2. Check if we have visited a node after we pop from the stack. This is opposed to checking before we add a node to the stack as we do in BFS where we check a node before adding it to the queue. The classic implementation looks something like this (even on Wikipedia it is explained somewhat similarly): void depthFirstTraversalIterative(Node root){ if (root == null) return; //Create the set which // will contain discovered nodes Set<Node> discovered = new HashSet<>(); //Create and initialize stack which // will contain the nodes left to visit Deque<Node> stack = new ArrayDeque<>(); stack.push(root); while (!stack.isEmpty()) { Node current = stack.pop(); if(!discovered.contains(current)){ //visit the node; in this case just print // the node value System.out.println(current.getValue()); //add to the stack all the children //of the current node for(Node child : current.getChildren()){ stack.push(child); } } } } The student runs the two versions of DFS and he sees that the results are the same so he quickly memorizes the BFS algorithm and the two small differences to transform it into DFS and he lives a happy life with the thought that he basically learned 2 algorithms for the price of one. One of such students being me. ## Digging deeper A more inquisitive mind will stop and ask: Wait a minute, where do those two differences between DFS and BFS come from? Good question! Let us explore! The first one is usually answered with the following: When implementing DFS recursively a stack is used implicitly behind the scene to store the data while visiting the graph so it only makes sense that we explicitly also use a stack for the iterative approach right? Sounds reasonable. The second difference is not usually discussed though and here things actually get interesting. First let's ask ourselves why in the BFS implementation we check if the node is visited before we add it to the queue? To answer that let's see what happens if we check after we poll the node from the queue. We will zoom in at the moment we visit node 3. Node 4 will in both cases be already in our queue. If we do what we usually do and mark node 4 as discovered before adding it to the queue, then we will have the information that node 4 is already discovered and we would not add it again to the queue. This situation is shown in the picture below. With the change that we made, the discovered nodes set does not have node 4 as a member so we will add it again to the queue. The situation is shown below. Since the queue is a First In First Out structure, it turns out that from a result perspective this difference does not matter since we will anyway first visit the node that we first added to the queue and we mark it then as discovered. When trying to visit again the same node we will see it as already visited. The main take away is that we want to mark nodes as discovered before adding them to the queue so we do not have duplicates in the queue, this mainly helping with the memory profile of the algorithm since the number of nodes in the queue will be kept to the minimum required. If we turn our attention to the DFS iterative version we can think that maybe it makes sense also there to do the same thing as in BFS and mark as discovered nodes before we add them to the stack, so let's do it! With this small modification the visiting order of the example graph (again shown below for convenience) will be: • Branch 1: 1 -> 2 -> 5 -> 8 • Branch 2: 6 • Branch 3: 7 -> 9 • Branch 4: 3 • Branch 5: 4 Wait what? We get different result than in the recursive approach...that cant be right!? ## Being misled So what is happening here? We need to realize that just by using a stack for our iterative implementation because the recursive implementation uses one will not auto-magically make the two behave the same. If you think a little more in depth about what the recursive version actually does you will notice that, as opposed to the iterative version, we do not save on the implicit stack all the children of the current node, we only save the current node and implicitly we also save on which of the children we will recursively call the algorithm next! For example for the root node we never save anywhere on any stack node 3 and 4, we only get to visit them because this recursive approach implicitly provides us a backtracking mechanism where once a path is completely explored we can go to the next path. In our iterative approach we save all the children on the stack with the hope that the Last In First Out behavior of the stack will give us the DFS behavior of the recursive implementation. As we saw, this only works when we mark a node as discovered after we pop it from the stack( I will let as an exercise to go through the traversal and understand exactly where the result difference comes from). As in the BFS case that we studied above, this will result in having the same node multiple times in our stack. So I think here is where people are misled because when we take all into account the situation can be resumed like this: • When we implement BFS we are being taught that we do not want duplicates in our Queue so we must mark nodes as discovered before adding them. • When we implement iterative DFS we are being taught to forget about not wanting duplicates in our data structure(this case the stack) because now we just want to mimic the recursive implementation so we will sacrifice our principle from the queue implementation. The second teaching seems contradictory with the first one. If this is done just to save the students from the extra complexity, then I think that is wrong because the whole purpose of studying these algorithms should be to understand their inherent workings not for people to be able to memorize them more easily. Because of the differences between the inner workings of the recursive implementation and the classic iterative implementation the two can have vastly different behaviors in terms of memory depending on the graph structure. A prominent case that illustrates this is a star graph: a single central node surrounded by a large number (say, 1000) of child leaf nodes. An example shown below, but with only a couple of nodes. If you run the iterative version of DFS that we saw previously on this graph using the central node as the starting point, the stack size will grow to 1000 since we will be adding all the child nodes to the stack before going to the next iteration. The recursive DFS algorithm will need an implicit stack depth of only 1 to traverse this entire graph. That is 1000 vs 1 in terms of memory requirements! ## True DFS The good news is that once we actually understand what the recursive implementation actually does we can implement an iterative version that indeed has the same behavior, not one that only pretends to do. What we need to do is explicitly code the backtracking behavior of the recursive DFS implementation. We will do this by keeping in the stack info about which child of a node we need to visit, rather then the node itself. The algorithm implementation follows with some inline comments to explain what is happening: // data structure for storing the required info on the stack class TraversalInfo{ Node parent; int childPos; public TraversalInfo(Node parent, int childPos) { this.parent = parent; this.childPos = childPos; } public Node getParent() { return parent; } public int getChildPos() { return childPos; } public void setChildPos(int childPos) { this.childPos = childPos; } } void trueDepthFirstTraversalIterative(Node root){ //as always create and initialize //the set of discovered nodes Set<Node> discovered = new HashSet<>(); //create and initialize the stack which will //indicate which node to visit next. You can //observer that we are not saving anymore directly //what node to visit next, but a parent node and //the index of its child that we should visit next Deque<TraversalInfo> stack = new ArrayDeque<>(); stack.push(new TraversalInfo(root, 0)); //we visit the root node before going //into our loop System.out.println(root.getValue()); while (!stack.isEmpty()) { TraversalInfo current = stack.pop(); Node currentParent = current.getParent(); int currentChildPos = current.getChildPos(); //we check if there are more child nodes if(currentChildPos < currentParent.getChildren().size()){ Node child = currentParent.getChildren().get(currentChildPos); //we save in the stack the next child index //together with its parent current.setChildPos(currentChildPos + 1); stack.push(current); //check if current child discovered already if(!discovered.contains(child)){ //add it to the set of discovered nodes //visit the child; in this case just print out its value System.out.println(child); //add to the stack the info for visiting its child nodes //starting from the first one stack.push(new TraversalInfo(child, 0)); } } } } As we can see this implementation is a little more complex than the one we are used to. Here we use an extra data structure to keep the required info, but there is an alternative implementation using an extra stack which I will leave as an exercise for anyone curious enough to try it out. In some circles this iterative implementation and the recursive one are called True DFS, while the iterative one that we saw in the beggining is called a pseudo DFS traversal since at the surface level it mimics the True DFS algorithm, but if you look at its inner workings it does not. ## Closing remarks Let me know in the comments for how many of you this was obvious from the start and how many actually lived with the iterative DFS implementation without questioning it. Also let me know of any other simmilar example that you may have. If you like the content do not forget to subscribe!
# Object on an Incline 1. Oct 6, 2004 ### Format Need help on this one: An object is fired up a frictionless ramp at 60 degrees. If the intial velocity is 35m/s, how ong does the object take to return to the starting point? I figured out the Initial y and x velocities. What i dont understand is how to use acceleration in this problem...doesnt it have to be equal to F-parallel? 2. Oct 6, 2004 ### Pyrrhus Well do the force analysis. What forces are acting on the x-axis? You know it's weight, but which amount of it? 3. Oct 6, 2004 ### Format Umm, dont exactly get what you mean...it doesnt give its mass 4. Oct 6, 2004 ### arildno Format, you really ought to formulate Newton's 2.laws in the tangential&normal reference frame, rather than "x" and "y" The calculations are much easier that way (in particular, you only need to work with the tangential component of Newton's 2.law) 5. Oct 6, 2004 ### Format i think ill stick to the way i was taught...seeing how i dont exacly understand what you mean lol 6. Oct 6, 2004 ### arildno What is the component of gravity along the inclined plane? 7. Oct 6, 2004 ### Format Thats wut i dont know how to figure out... dont you need to use Fg = ma to answer that? 8. Oct 6, 2004 ### Gokul43201 Staff Emeritus Do you know how to draw free-body diagrams ? 9. Oct 6, 2004 ### arildno You know that the weight is mg in the downwards direction, right? 10. Oct 6, 2004 ### Diane_ OK, try it this way: you know that any number can be represented as a sum of two other numbers, yes? For instance, you can represent 10 as 6+4. You can also represent it as 7+3, 8+2, and an infinite number of other pairs. Vectors are the same way, except that you have to take direction into account. Generally, the most convenient way to decompose vectors is into perpendicular parts. This is why you were taught to take them apart into x and y, but there are times when there are better ways to do it. In this case, you are working on an inclined plane. You know that the object isn't going to leap up off of the plane and go soaring through the air - it's either just going to sit there or (in this case) slide along the surface of the plane. The most convenient way to handle this, then, is to decompose things into components parallel to and perpendicular to the surface of the plane. This is what arildno is talking about. Doing this gains you several things immediately. First off, because you know it isn't going to go flying, you know that the net force perpendicular to the plane is zero. The normal force cancels out any other perpendicular components. Secondly, since the initial velocity is parallel to the plane, you don't need to decompose it at all. All you have to do is find the component of gravity that acts parallel to the plane, and the problem devolves into a one-dimensional kinematics problem. Does that make sense? 11. Oct 6, 2004 ### Format Yea i think im getting it... So to find the force of gravity parallel to the incline do i do something like this mg = (sin60)mg g = (sin60)g g = 11.31? 12. Oct 6, 2004 ### Gokul43201 Staff Emeritus What you've written down is not a sensible equation. The RHS is correct, but what does the LHS mean ? Do you see that there is a problem with the equation you've written down ? 13. Oct 6, 2004 ### Format its the Fg directly downwards? 14. Oct 6, 2004 ### Pyrrhus Remember This vectorial equation can give an amount of scalar equations depending on the dimensions. $$\sum_{i=1}^{n} \vec{F}_{i} = m \vec{a}$$ For this problem 2 dimensions $$\sum F_{x} = ma_{x}$$ $$\sum F_{y} = ma_{y}$$ Yes weight is directed downwards. 15. Oct 6, 2004 ### Format Sorry, i cant see where i went wrong Fg(down) = (Sin60)Fg ma = (sin60)ma I know you guys dont like giving direct answers but i would appreciate a little bit more lol 16. Oct 6, 2004 ### arildno Don't you understand why this is sheer NONSENSE?? What do you think "=" means? 17. Oct 6, 2004 ### Gokul43201 Staff Emeritus Okay here's the long answer. I'm not sure it's going to help in the long run because there are more basic holes in your understanding of math and physics. The forces acting on the object are : (draw a picture showing the forces, as you read this) (i) its weight (also known as gravitational force) acting downwards, (ii) the normal reaction from the incline, acting perpendicular to the incline. Since these forces are not in opposite directions, they will not cancl each other off. Now, we resolve the forces along and perpendicular to the direction of the incline. The normal reaction N, is already perpendicular to the incline, so there's nothing more to do. The weight (mg) has a component along the incline given by mg*sin60, and a component perpendicular to it, given by mg*cos60. Now all the forces have been resolved. There are two forces perpendicular to the plane and in opposite directions. Since the object does not move away from the incline (ie : it's motion has no component perpendicular to the plane), the net force in this direction must =0. This means that N = mg*cos60. For this question, this is not needed. Along the plane, there's only one force, given by mg*sin60 acting downslope. This force must give rise to an acceleration in that direction given by F = ma. So, we have mg*sin60 = ma. Or a = g*sin60 is the downslope acceleration (or upslope deceleration) of the block. Now you know what 'a' is. Use it in the equation of motion that relates velocity, acceleration and time. 18. Oct 6, 2004 ### Format ok im wrong...i realise that. That doesnt change the fact that i dont know how to solve this question. 19. Oct 6, 2004 ### Format Ah there we go...made a mistake with mg. You saw my steps...could of just said that Fg != ma. Thankyou gokul 20. Oct 6, 2004 ### Pyrrhus Format, arildno and Gokul pointed it out, and i did too.
## GMAT Practice Question Set #37 (Question 109-111) Problem Solving Question #110: Square Root of Difference of Squares Time 0 0 : 0 0 : 0 0 If a > b > 0, then sqrt(a^2 - b^2) = (A) a+b-sqrt(2ab) (B) a-b+sqrt(2ab) (C) sqrt((a-b)^2-2ab) (D) (sqrt(a+b))(sqrt(a-b)) (E) (sqrt(a) + sqrt(b))(sqrt(a)-sqrt(b)) GMATTM is a registered trademark of the Graduate Management Admission CouncilTM. The Graduate Management Admission CouncilTM does not endorse, nor is it affiliated in any way with the owner or any content of this web site.
## Introduction Neurofeedback (NF) is a cognitive training that exploits the causal relationship between brain activity and cognitive-motor abilities. As brain–computer interfaces (BCI) applications, NF consists in providing real-time auditory, visual, or tactile feedback of a subject’s brain activity to train self-regulation of specific brain patterns related to a targeted ability. NF applications have been developed since the 70’s in non-clinical1,2,3 and clinical settings, such as epilepsy4, attention-deficit hyperactivity disorder5,6,7,8,9, depression10,11, psychopathy12,13, and anxiety10,14,15,16. However, the neurocognitive mechanisms underlying BCI tasks or NF training (NFT) remain elusive17,18. The neuromodulation associated with NFT has already been studied in several contexts19,20,21, but this was not yet done in a long-term, multiple-session (12 weeks), sham-controlled design using an ecological reinforcer NF context for both NF and control groups. In the previous literature, control conditions are quite variable in NF studies, not only aiming at the link between brain activity and feedback but also varying the task or the procedure22. For example, in clinical studies where NFT aimed at reducing behavioral symptoms or psychological processes associated with various disorders (anxiety14,15,16, depression10,11,23, addiction24,25,26,27, attention deficit5,6,7,8,9,28,29), NF performance was typically compared with active control groups, such as cognitive therapy, mental exercise, and treatment-as-usual22. Thus, the self-reported or clinical benefits of NFT may be related to an ensemble of specific and non-specific mechanisms, including psychosocial influences30,31,32, cognitive and attentional/motivational factors33, test–retest improvement, as well as spontaneous clinical improvement or cognitive development17, and the learning context, contributing to the ongoing debate about NF efficacy15,34. In some NF studies, the control condition was based on linking the feedback to another brain activity than the targeted one22,35 which entails an incongruity between the activity driving the feedback and the task—hence the cognitive efforts—of the subject. Here, we used a sham feedback (sham-FB) condition for the control group—as it is commonly used in other studies including MEG or fMRI NF protocols19,20,21,22,36,37,38,39. The participants in the sham-FB group received ‘yoked’ feedback, corresponding to the feedback of randomly-chosen subjects from the NF group at the same stage of learning. Hence, this feedback was similar in every aspect to the one in the NF group, except that it was not the result of an established link between the subject’s alpha-band activity and the auditory stream. Such sham-FB control condition breaks the operant link between the subject’s neuromodulation and the received feedback, which may be seen as its main limitation22,40. Yet, this operant link is considered as constitutive of NFT and its effects41, and this sham-FB control condition has the advantage to allow matching for reward and performance across the control and the NF groups22. Thus, it allows controlling as closely as possible for the learning context while breaking the operant learning component that is key to NFT. Some studies already used alpha up-regulation NFT for improving different cognitive processes such as episodic memory36 or mental performances42. Moreover, across-sessions neuromodulation making use of sham-controlled design was tested in the past—not necessarily targeting alpha. However, some of these studies included only one or a few sessions37,39 and others did not find clear evidence of across-sessions neuromodulation19,21,38. In addition, in these studies, the learning context and task were not directly linked to the expected cognitive performance or targeted psychological process21,36. In the present study, we used an ‘ecological’ context with respect to the NF task, in which all participants were asked to close their eyes and get immersed in a relaxing soundscape delivered by headphones, while being engaged in the task. This task consisted in learning to decrease the volume of a sound indicator, which was inversely related to their individual alpha-band EEG activity recorded by two parietal dry electrodes. This conditioned all the participants of the two groups to relax and increase their alpha EEG band activity—as alpha activity is known to be linked with resting, relaxed or meditative states43,44,45,46,47,48,49,50. This constituted a ‘transparent’ context18, which may be essential to unravel the mechanisms of NF learning. This matched learning context between the NF and control group allowed us to rigorously test the alpha-band neuromodulation specifically induced by NF. Most of the NF studies are performed using wet EEG sensors in a laboratory context. Here, we used a new compact and wearable EEG-based NF device with dry electrodes (Melomind, myBrain Technologies, Paris, France). This device was studied in an under-review study by a comparison to a standard-wet EEG system (Acticap, BrainProducts, Gilching, Germany)51. As suggested in52,53,54, novel low-cost dry electrodes have comparable performances in terms of signal transfer for BCI and can be suitable for EEG studies. Moreover, such a user-friendly and affordable device with few dry sensors, does not require conductive gel, and can be so suitable for easy real-life use by the general population. Here, we aimed at studying the neuromodulation specifically induced by individual alpha up-regulation NFT over multiple sessions throughout 12 weeks, in a double-blind, randomized, sham-controlled design study within general healthy population in an ecological reinforcement context. As in many NF protocols aiming at anxiety reduction, stress-management or well-being14,15,55,56, we chose an alpha-upregulation NFT for the known link between the increase of low frequency EEG activities—including theta and alpha activities—and relaxed or meditative states43,44,45,46,47,48,49. We expected an increase of the trained individual alpha band activity across sessions in the NF group, because it was asked to the participants to find their own strategies to reduce the volume of the auditory feedback—operantly linked to the individual alpha activity in the NF group only—and this, as a learning process, requires multiple sessions41,57. The use of an ecological, relaxing, learning context, allowed us to test if alpha upregulation could be induced just by the context, in which case, we should observe an increase of alpha activity in both the NF and control groups. In contrast, if alpha neuromodulation is specific to the NFT, one could expect a significant increase of alpha activity across sessions only in the NF group. Finally, we were also interested in the impact of such NFT on self-reports related to anxiety level and relaxation. The improvement of such self-reported psychological processes can be due to specific NF mechanisms and non-specific mechanisms17, such as the context of the learning including instructions, the biomarker58 used and psychosocial factors33. Considering the learning context (relaxing auditory landscape) and instructions (closed eyes during 21 min) that we used in both groups, associated with the sham-controlled design in a healthy population, we expected improvements in relaxation and anxiety in the NF group and in the control group due to placebo effects30,59. ## Materials and methods ### Participants In the NF literature, the common number of included subjects varies from 10 to 20 participants by group21,35,36,37,60. This has been underlined as contributing to overestimated effect size and, by making ‘true’ effect more difficult to detect, it increases the so-called ‘false discovery rate’, that is, the likelihood of having wrongly concluded to a significant effect61,62. Here, we included forty-eight healthy volunteers, divided in two groups of 25 and 23 participants respectively (see below; mean age: 33.3 years; age range: 18–60; see Supplementary Table S1 for more details). While this limited our sensitivity to effect sizes of at least 0.028–0.048 in eta-squared (Cohen’s f = 0.17—0.22) at 0.80 statistical power (as computed with G*Power 3.1.9.2, ‘computation of sensitivity for repeated-measure ANOVA’, with type 1 error rate alpha = 0.05, correlation among repeated measures = 0.5, non-sphericity correction ε = 1 to 0.5, and a 12 within-subject repeated measures design), it was based on the literature and resources constraints63 implying a follow-up across 12 weeks for each subject (see the “Experimental protocol” section). All participants declared having normal or corrected-to-normal vision, no hearing problem, no history of neurological or psychiatric disorders, no ongoing psychotropic drug treatment and no or little NF or BCI experience. Participants were blindly assigned either to the NF group—who received real NF—or to the control group—who received sham-FB. For the purpose of the sham-FB design construction, the first N participants were assigned to the NF group. Only the experimenters and the data analysts knew the existence of the two groups and that the first N subject(s) was/were in the NF group. However, the experimenters and the data analysts were blind to N and blind to the random assignment after N. This resulted in a double-blind sham-controlled design with 25 subjects in the NF group and 23 in the control group. The blind assignment was maintained until the end of the experiment. No test was done to know if the participants suspected the existence of two groups and their assignment to one of these groups. Participants were enrolled from the general population through advertisements in science and medical schools in Paris, through an information mailing list (RISC, https://expesciences.risc.cnrs.fr) and through flyers distributed in companies in Paris (France). Participants completed the protocol in three different locations: at the Center for NeuroImaging Research (CENIR) of the Paris Brain Institute (ICM, Paris, France) (N = 20 participants, NF group: 10, control group: 10), at their workplace 14 (N = 14, NF group: 8, control group: 6), or at home (N = 14, NF group: 7, control group: 7). The 20 participants who performed the protocol at the CENIR were part of those planned in the study approved by French ethical committee (CPP Sud-Ouest et Outre Mer I, ref. 2017-A02786-47), registered on ClinicalTrials.gov (ref. NCT04545359), although the present study was not part of this clinical study. For these participants, a financial compensation was provided at the end of the study for the time taken to come to the lab. The 28 other participants followed the same protocol but performed it in a real-life context (at work, at home). Moreover, all participants gave written and informed consent in accordance with the Declaration of Helsinki. ### EEG recording and preprocessing Brain activity was recorded by two gold-coated dry electrodes placed on parietal regions (P3 and P4) (Melomind, myBrain Technologies, Paris, France; Fig. 1). Ground and reference were silver fabric electrodes, placed on the left and right headphones respectively, in mastoid regions. EEG signals were amplified and digitized at a sampling rate of 250 Hz, band-pass filtered from 2 to 30 Hz in real-time and sent to the mobile device by embedded electronics in the headset. The headset communicated via Bluetooth with a mobile NF application, which processed EEG data every second to give the user auditory feedback about his/her alpha-band activity (see below). A DC offset removal was applied on each second of data for each channel and a notch filter centered at 50 Hz was applied to remove the powerline noise. Real-time estimation of signal quality was then performed by a dedicated machine learning algorithm64. Briefly, this algorithm computed in time and frequency domains, EEG measures that are commonly used in artefact detection from electrophysiological signals (standard deviation, skewness, kurtosis, EEG powers in different frequency bands, power of change, etc.). These EEG features were compared to a training database by a k-nearest neighbors classifier to assign a quality label to the EEG signal among three classes: HIGHq, MEDq, and LOWq (see64 for more details). In Grosselin et al.64, we showed that this algorithm has an accuracy higher than 90% for all the studied databases. This algorithm was used to detect noisy segments (LOWq) which were excluded from posterior analysis. ### Experimental protocol Based on previous studies15,65, we proposed a protocol consisting in 12 NFT sessions, with one session per week (Fig. 2). Each session was composed of 7 exercises of 3 min (total: 21 min), which corresponded to 4.2 h of training. At the beginning and end of each session, two-minute resting state recordings were performed and the participant completed the Spielberger State-Trait Anxiety Inventory (STAI, Y-A form, in French66)—to assess his/her anxiety state level—and a 10-cm visual analog scale (VAS) indicating his/her subjective relaxation level (relax-VAS). These resting state recordings were not analyzed here as they are out of the scope of this study focused on neuromodulation. Moreover, at the end of each 3-min exercise, the participant indicated his/her subjective level of feedback control on a 10-cm VAS (control-VAS)—the left side indicating the feeling of no control; the right bound indicating a feeling of perfect control. ### Neurofeedback training procedure The NF paradigm targeted alpha rhythm centered on the individual alpha frequency (IAF). Before each NFT session, a 30-s calibration phase allowed computing IAF using an optimized, robust estimator dedicated to real-time IAF estimation based on spectral correction and prior IAF estimations68. More precisely, the spectrum was corrected by removing the 1/f trend estimated by an iterative curve-fitting procedure. Then, local maxima were detected in the corrected spectrum between 6 and 13 Hz as the downward going zero crossing points in the first derivative of the corrected spectrum. If the presence of an alpha peak was ambiguous, the algorithm selects the most probable one based on the IAF detected in previous time windows. See Grosselin et al. for more details68. All participants were instructed at the protocol explanation to close their eyes during the recordings. This instruction was reminded audibly at the beginning of each calibration. They were also instructed to be relaxed and try to reduce the auditory feedback volume throughout the exercises of different sessions. Previous research showed that providing no strategies yielded to better NF effects57. Here, the participants were aware that the feedback volume would decrease with relaxation, but no explicit strategies were provided to them as such to allow them to reduce the auditory feedback volume; they were told to try their own strategies, which we report in the Supplementary Material as advised in the CRED-nf checklist17. A relaxing landscape background (e.g. forest sounds) was played at a constant, comfortable volume during each exercise. The audio feedback was an electronic chord sound added to this background with a volume intensity derived from EEG signals. More precisely, the individual alpha amplitude was computed in consecutive 1-s epochs as the root mean square (RMS) of EEG activity filtered in IAF ± 1 Hz band (NF index); it was normalized to the calibration baseline activity to obtain a 0–1 scale, which was used to modulate the intensity of the feedback sound (V) in the NF group. More precisely, for each session, a baseline value was obtained from alpha activity during the corresponding 30-s calibration phase without the low quality EEG segments as assessed by a dedicated algorithm (see “EEG recording and preprocessing” section above). Coefficients were applied to this baseline value in order to define the lower (m) and upper (M) thresholds of alpha activity during the session. During the NFT, V was varied as a reverse linear function of the individual alpha amplitude relative to these upper and lower bounds. If the individual alpha amplitude was becoming lower than m, then V was set to 1 (maximal). If an alpha amplitude beyond M was reached, then V was set to 0 (minimal). For the EEG segments detected as noisy (LOWq quality) during the preprocessing step, V was set to 1. For the participants in the control group, the instruction was identical but they received sham-FB, which was the replayed feedback variations from another subject randomly chosen from the NF group at the same training level (i.e. session). For instance, a participant in the control group at the 3rd session received the auditory feedback generated and received by a random subject from the NF group at the 3rd session. ### Data analysis #### NF index and learning score For each participant and each training session, we first computed the average value of the NF index (before normalization) for every exercise. Second, in order to take into account inter-subject variability at the first session for NF index (see Fig. 3a and Supplementary Fig. S13), we built an NF learning score (ΔD(t))—from the NF index variations across exercises and sessions69. To do this, we computed the median value (med) of the NF index across the 7 exercises of the first session; then, for each session t, we computed D(t), the number of NF index values (1 by second) above or equal to this median value med. This cumulative duration was divided by the total duration of the training session cleaned from LOWq segments (maximum 21 min) in order to express D(t) by minute, and transformed into percent change relative to the first session, as follows (Eq. (1)): $$\Delta D(t)\hspace{0.33em}=\hspace{0.33em}\left(\frac{D(t)}{D(t=1)}-1\right)\times 100$$ (1) #### Theta and low beta activities To study the selectivity of the neuromodulation only for the targeted alpha activity, we analyzed the between-session evolutions of theta (4–7 Hz) and low beta (13–18 Hz) activities, as control outcomes69. For each subject, on each exercise and session, theta activity was computed every second as the RMS of EEG activity filtered between 4 and 7 Hz in 4-s sliding windows, on epochs with high or medium quality (see64 for details about signal quality computation). We then averaged these RMS values for each session. Similar computations were performed for the EEG activity between 13 and 18 Hz (low beta activity). #### Signal quality As encouraged in17, the quality of EEG signals was analyzed to assess the poor quality EEG data prevalence between groups and across sessions. For each participant, session, and exercise, the quality of each 1-s EEG epoch recorded by each electrode was determined by a classification-based approach according to three labels: HIGHq, MEDq, and LOWq (see64 for more details). A quality index Q was then computed for each electrode, during each exercise, as in Eq. (2): $$Q=\frac{\#HIGHq\hspace{0.33em}+\hspace{0.33em}0.5\hspace{0.33em}*\hspace{0.33em}\#MEDq}{N\hspace{0.33em}+\hspace{0.33em}\#LOWq}$$ (2) with: #HIGHq, #MEDq, #LOWq indicating the number of high, median, low quality epochs and N, the total number of quality labels during the session. Finally, the average value of Q was computed from the two electrodes for each exercise. #### Self-report outcomes The raw scores of the STAI-Y-A (between 20 and 80) and relax-VAS (between 0 and 10) were computed pre- and post-session. The subjective level of feedback control was measured within- and between-session on the control-VAS (between 0 and 10). The raw scores of the STAY-Y-B, PANAS and PSS were obtained pre- and post-program; these latter outcomes are reported in Supplementary Material. #### Statistical analyses All statistical analyses were performed using R (v.4.0.2; R Core Team, 2020) and lme4 package70. We used Linear Mixed Models (LMMs)71,72, because LMMs allow handling missing data, variability in effect sizes across participants, and unbalanced designs73. Available data in this study are detailed in Supplementary Table S8. For all LMM analyses, the NF group at the first session was set as the level of reference in order to specifically estimate the effects of NFT in this group. For each outcome variable studied, the choice of the random factors was done comparing the goodness of fit of the models that converged with different random factors, based on Akaike Information Criterion (AIC)74, Bayesian Information Criterion (BIC), log-likelihood comparison (logLik) and by running an analysis of variance (anova) between models. The detailed procedure for each outcome variable can be found in Supplementary Material in Sect. 6. To be concise in the main text, the random factors chosen were directly reported between parenthesis in the LMM equations below. Similarly to75, to analyze the within- and between-session NFT effects on the NF index we used fixed effects of session, exercise, group, and the 2-way interactions between session and group and between exercise and group in the following equation (Eq. (3) as coded in R, with a colon indicated an interaction between terms): $${\text{Y}}\sim {1} + {\text{exercise}} + {\text{session}} + {\text{group}} + {\text{exercise}}:{\text{group}} + {\text{session}}:{\text{group}} + \left( {{1} + {\text{session}} + {\text{exercise}}|{\text{subject}}\_{\text{id}}} \right)$$ (3) Results (see “5. NF index and feeling of control across exercises: U-curves” section in Supplementary Material) indicated that the effect of exercises followed a U-curve. Therefore, the exercises were coded as a quadratic term, that is, exercises 1 to 7 were coded as 9, 4, 1, 0, 1, 4, and 9. The sessions were coded as a numeric variable between 0 and 11. Equation (3) was also used for the analysis of the control-VAS scores with 1 + session|subject_id as random effects structure. For the analysis of NF learning score and the signal quality index, we used the following LMM equation (Eq. (4)): $${\text{Y}}\sim {1} + {\text{session}} + {\text{group}} + {\text{session}}:{\text{group}} + \left( {{1} + {\text{session}}|{\text{subject}}\_{\text{id}}} \right)$$ (4) Equation (4) was also used for the analysis of theta and low beta activities with only a random intercept by participant (1|subject_id). For the STAI-Y-A outcome, we used LMM with session, phase (pre- or post-session), group, and the 2-way interactions between session and group and between phase and group as fixed effects (Eq. (5)). This model was also used for the analysis of relax-VAS with 1 + phase|subject_id as random effects structure. $${\text{Y}}\,\sim \,{1}\, + \,{\text{session}}\, + \,{\text{phase}}\, + \,{\text{group}}\, + \,{\text{session}}:{\text{group}}\, + \,{\text{phase}}:{\text{group}}\, + \,\left( {{1}\, + \,{\text{session}}\, + \,{\text{phase}}|{\text{subject}}\_{\text{id}}} \right)$$ (5) For each model, parameter β for the effects of interest were estimated by fitting the models on the corresponding dependent variable, using the Restricted Maximum Likelihood (REML) approach. P-values were estimated via type III Analysis of Variance on the LMM with Satterthwaite's method, using the anova() function of the lmerTest package of R76. For all variables of interest, we set p < 0.05 as statistically significant. When there was an interaction between a factor of interest and group, LMM models were fitted in each group separately, with session and exercise or phase—as adequate—as fixed-effect factors. For these latter analyses, we used a random intercept by participant because more complex model structure generally failed to converge for at least one group77. All the results of these LMM and anova analyses are reported in Supplementary Material. Moreover, for the analysis of session effects in the theta and low beta bands, we conducted supplementary equivalence tests using the TOST procedure78, to examine (i) the equivalence of the session effect in the NF and control groups for the theta activity and (ii) if we could conclude to an absence of change across sessions for the low beta activity. The results of these tests are presented in Supplementary Material Sect. 14. Additionally, to check the variability between groups at the first session, we performed, for each outcome variable of interest, an independent t-test between groups. The results of these t-tests are presented in Supplementary Table S63. Correlation analyses were also performed between NF index and self-report outcomes in each group as detailed in Sect. 13 of Supplementary Material. The analyses were not pre-registered. The primary outcome measures in this study were the NF index, the STAI-Y-A and relax-VAS outcomes. The NF learning score, the theta and low beta activities, the signal quality index, and the subjective feeling of control (control-VAS) were secondary outcome measures. All other analyses were additional analyses performed on reviewers’ requests. ## Results ### Neuromodulation induced by NF The analysis of the NF index showed a significant interaction between session and group (F(1, 46.049) = 5.01, p = 0.030, supplementary Tables S25 and S26). This result suggests a significant different linear evolution of the NF index across sessions between groups. Our planned comparisons (linear mixed models run in each group) showed a significant linear increase on the NF index across sessions in the NF group (β = 0.04, CI [0.03, 0.06], F(1, 2057) = 32.43, p < 0.001, supplementary Tables S27 and S28), whereas a significant linear decrease was found in the control group (β = − 0.04, CI [− 0.06, − 0.02], F(1, 1899) = 19.43, p < 0.001, supplementary Tables S29 and S30). These findings indicated an increase of the NF index across sessions, specific to the NF group (Fig. 3a). Although these results could be due to a baseline difference at the first session, an independent t-test between the mean levels of NF index of each group at session 1 did not show a significant difference between groups (t(46) = − 1.8, p = 0.0789). See “11. Group comparison at the first session” supplementary section and supplementary Table S63 for details. In addition, to normalize changes across sessions relative to the NF index at the first session, we built an NF learning score. This allowed us to analyze the progression of the trained activity across sessions taking into account the activity at the first session. The analysis of the NF learning score did not show a significant interaction between session and group (F(1, 46.325) = 3.27, p = 0.077) (see Supplementary Tables S31 and S32). However, following on our a priori hypothesis, we looked at the session effect in each group (Supplementary Tables S33, S43, S35 and S36). The analysis of the NF learning score in each group confirmed a specific NF-based neuromodulation. Indeed, our analyses showed a significant effect of session only for the NF group (β = 1.14, CI [0.20, 2.08], F(1, 272.19) = 5.67, p = 0.018) (see Supplementary Tables S33 and S34), which indicates that the NF learning score increased across sessions in the NF group (Fig. 3b). The effect of sessions was not significant for the control group (Supplementary Tables S35 and S36). Additional individual linear regressions of the NF learning score (see Supplementary Fig. S8 and S9) showed that 80% (20/25) of the participants from the NF group had a positive regression slope across the 12 sessions, while the slope was positive for 48% (11/23) of the participants from the control group. In addition, there was a significant effect of exercise on the NF index, reflecting the quadratic pattern of the NF index across exercises (F(1, 45.940) = 26.55, p < 0.001, see Supplementary Tables S2 and S3). The non-significant interaction between exercise and group indicated that this effect did not statistically differ between groups (F(1, 45.940) = 1.76e−03, p = 0.967) (Fig. 4). ### Selectivity of the neuromodulation on alpha activity To investigate the selectivity of the neuromodulation relative to the targeted alpha activity, we checked if some specific neuromodulation for the NF relative to the control group occurred for EEG frequency bands close to the alpha band. Thus, we analyzed EEG activity in the theta (4–7 Hz) and low beta (13–18 Hz) bands. For theta activity, there was a significant effect of session (F(1, 523.07) = 6.50, p = 0.011), without any statistically significant interaction between group and session (F(1, 523.07) = 2.11, p = 0.147). This reflected an overall increase of theta activity with no statistically significant difference between the NF and the control groups. As absence of evidence is not evidence of absence, we further tested if the session effect in the NF and in the control group was equivalent (using the TOST procedure78). This did not allow demonstrating statistical equivalence. Therefore, the only reliable effect for theta activity was the main effect of sessions. See Supplementary Tables S37, S38, S68 and Supplementary Fig. S10 for details. For low beta activity, the main effect of session was not significant (F(1,523.04) = 0.15, p = 0.694) and the interaction between session and group was not significant either (F(1, 523.04) = 3.70, p = 0.055). However, considering that the p value of this interaction could be deemed as ‘close to significance’, we further checked if the session effect was significant in either group. The analyses in each group did not reveal any significant session effect either in the NF group (β = 0.01, CI [− 0.01, 0.03], F(1, 272.03) = 1.40, p = 0.239), or in the control group (β = − 0.02, CI [− 0.04, 0.00], F(1, 205.01) = 2.29, p = 0.132). Moreover, equivalence tests against 0 for the individual parameter estimates of the session effect in each group indicated that the slope of the session effect was statistically equivalent to 0 (within a 5% boundary) in both the NF and the control groups. See Supplementary Tables S39 to S44, S69 and Supplementary Fig. S10 for details. ### Signal quality There was no statistically significant interaction between session and group on data quality Q (F(1, 46.130) = 0.16, p = 0.691). The main effect of sessions was not significant either (F(1, 46.130) = 3.70, p = 0.061). In addition, the effect size for the session effect (in terms of parameter estimate β, 95% CI, and partial eta2) was very small (see Supplementary Tables S45 and S46 and Supplementary Fig. S11). Moreover, a chi-square analysis (see “12. Study of LOWq, MEDq and HIGHq proportions” section in Supplementary Material) did not show any statistically significant difference in the proportions of LOWq, MEDq and HIGHq EEG segments between the first and last sessions (X2 = 0.009, df = 2, p = 0.9955). ### Self-report outcomes #### Relaxation and anxiety levels ##### STAI-Y-A The state anxiety level decreased significantly from pre- to post-session (phase effect: F(1, 46.137) = 24.77, p < 0.001). The interaction between phase and group was not significant (F(1, 46.137) = 2.18, p = 0.147) (Fig. 5a). Moreover, although the overall mean of STAI-Y-A scores decreased across sessions (Fig. 5b), the session effect on STAI-Y-A scores was not significant (F(1, 45.787) = 3.58, p = 0.065). The interaction between session and group was not significant (F(1, 45.787) = 1.31e−03, p = 0.971) (Supplementary Tables S51 and S52). ##### relax-VAS Relaxation, as measured by the relax-VAS scores, increased from pre- to post-session (F(1, 46.01) = 34.29, p < 0.001) and this phase effect was not significantly different between groups (F(1, 46.01) = 0.93, p = 0.340) (see Fig. 6a and Supplementary Tables S53 and S54). Moreover, relax-VAS scores showed a significant linear increase across the sessions (F(1, 1050.22) = 18.55, p < 0.001) and the interaction between session and group was not statistically significant (F(1, 1050.22) = 0.65, p = 0.420 (Fig. 6b; Supplementary Tables S53 and S54). No significant main effects or interactions were identified in the trait self-reports (STAI-Y-B, PANAS and PSS). See Supplementary Tables S55 to S62 for details. #### Subjective feeling of control A significant increase of the feeling of control was observed across sessions (F(1, 46.1) = 15.40, p < 0.001). This effect was not significantly different between groups (F(1, 46.1) = 1.62, p = 0.209) (Fig. 7). In addition, similarly to the exercise effect on the NF index, there was a quadratic fit of control feeling over exercises (F(1, 3905.3) = 18.39, p < 0.001), without any statistically significant interaction between exercise and group (F(1, 3905.3) = 0.51, p = 0.475). See Supplementary Tables S49 and S50 and Fig. S12 for details. ### Correlations between NF index and self-report outcomes We examined these correlations at each session and the correlation between the slope of NF index and the slope of self-report outcomes—in terms of relaxation, anxiety, and feeling of control—across sessions (Supplementary Tables S65, S66 and S67). A few significant correlations were found at some sessions, but none of these was significant after correction for multiple comparisons. There was no significant correlation either between the slopes of NF index and self-report outcomes. ## Discussion In this study, we proposed a double-blind sham-controlled randomized study of the neuromodulation induced by individual alpha-based NFT over 12 weekly sessions using a strictly controlled sham-FB condition as control, in healthy adult participants. NFT was performed with a wearable, dry sensors headset, which delivered intensity-modulated auditory feedback based on EEG signal amplitude in individual alpha frequency band. To avoid non-contingency between produced efforts and the resulting feedback evolution for the control group22, the control condition consisted in delivering sham-FB—a feedback replayed from randomly chosen users of the NF group at the same training stage. Hence, all participants benefited from the proposed NFT experience, but only those of the NF group experienced a link between the feedback and their own alpha activity. In addition, all the participants performed the task immersed in a relaxing auditory landscape, with their eyes closed for 21 min, thus constituting a common reinforcer context for relaxation. First of all, we wanted to assess the NF learning of individual alpha-band activity upregulation. NF learning refers to the capacity to self-regulate a targeted activity in the desired direction across training sessions34,41,42,57,79. More specifically, we hypothesized a neuromodulation specific to the NF training, that is to say, only the NF group was expected to increase individual alpha activity across training sessions80. Even if the averaged values of NF index were similar at the 12th session in both groups, our analyses of the NF index and the NF learning score confirm a specific session effect in the NF group, with significant linear increase across sessions in this group only. This finding demonstrates, across the training program, a specific neuromodulation induced by the link between individual alpha activity and FB. Indeed, the use of a randomized double-blind protocol together with the strict sham-FB control condition in a reinforcer context allowed us to control for different confounding factors which may contribute to NFT effects. In particular, it allowed controlling for context, task, reward, and performance, avoiding potential motivational biases in NF versus control conditions22. One may wonder if training another frequency band could have constituted an alternative sham condition for the control group. However, as mentioned in “Introduction” section, such control condition may induce incongruity between task instruction and the target activity in the control condition. This could render the task more difficult or less rewarding for the subject in the control condition, due to this incongruity. This is why we chose the present yoked feedback. We observe that the NF index seemed different between NF and control groups at the first session. This could be explained by the inter-subject variability of alpha rhythm81. We tested this difference as well as that of other outcome variables in the first session; it was not significant (cf. Supplementary Table S63). Similarly, one may note that NF index values seemed similar between groups at the 12th, final session. However, the important inter-individual variability in alpha activity makes it important to consider across sessions effects, as we did in our analyses, rather than NF index value at either first or final session. In this study, we also examined if the neuromodulation was selective of the targeted activity82,83,84. As proposed in69, we analyzed two adjacent frequency bands (theta and low beta). We found an overall increase across sessions for the theta band and no significant change for the low beta band. This indicated that the neuromodulation was selective of the alpha activity insofar as there was an absence of evidence for similar effects in the theta and beta bands. To the best of our knowledge, this is the first evidence of selective longitudinal alpha-band neuromodulation (over 12 weeks) in a double-blind randomized study implying healthy participants trained with a wearable dry sensors NF device. Interestingly, we could not assess an alpha neuromodulation within sessions (across exercises) as one may have expected it34,42,79. In fact, we observed a U-shape pattern for the dynamics of alpha band activity during NF learning across exercises. To observe the NF learning effects, multiple sessions are required41,85, in order for the participants to find their own strategies to succeed in the task57. In contrast, the effects observed within-session may not only be related to relaxation training but also to other processes put at play during each session. Alpha activity is a spontaneous but complex rhythm associated with several cognitive states and processes. Its modulation has been predominantly related to vigilance, attention86,87, awake but relaxed state50,55,56,88,89,90,91,92. The alpha activity change across exercises during the sessions could reflect the different cognitive processes involved by the task: continuous monitoring of the feedback may have required heightened focused attention93,94, error detection95,96, and working memory processes97 during the first training minutes, allowing participants to progressively adapt their cognitive strategy and mental state to the task. It is important to note that the within-session U-shape pattern of alpha activity was observed in both the NF and control groups. This supports the idea that the sham-FB condition allowed us to rigorously control for the task performed by the subjects. Altogether, the specific neuromodulation of alpha activity induced by NFT was revealed only in the longitudinal effect across the twelve sessions. We also examined the possible effect of EEG data quality on NF learning17. EEG quality did not change significantly across sessions in either group. One may wonder if we monitored the compliance of keeping eyes closed during the recordings because of the potential effects of eyes open and eyes closed on alpha activity. Even though this instruction was reminded auditorily at the beginning of each calibration, it could not be checked for the 28 participants who underwent the NFT sessions at home or at work. It has to be noted that if the participants didn’t respect this instruction during the calibration or the training sessions, this may have had an impact on data quality, hence on the feedback. For future experiments, it will be interesting to find a way to monitor this aspect of the task. In this study, we were also interested to know if self-report benefits would be induced by the NFT and if a difference would be found between groups knowing the common reinforcer (relaxing) context of the protocol. We investigated the self-report changes in terms of relaxation and anxiety levels pre- and post-session and across the training program. We found significant benefits in terms of relaxation and anxiety from pre- to post-session, as well as a slight reduction of anxiety level and a significant increase of relaxation across sessions, but without any statistically significant group difference. This finding is reminiscent of Schabus et al.98, who performed a well-controlled double blind NF study targeting sensorimotor rhythm in insomnia with NF and sham groups. They found some specific neurophysiological effect of NF (relative to sham) condition but non-specific self-report, psychological effects in both NF and sham conditions. In the present study, the non-specific self-report benefits of NFT may be explained by the use of sham-FB condition and the NF task proposed in our protocol, which could produce the same immersive, relaxing experience in the participants of both the control group and the NF group. There was no significant correlation between NF index and self-report outcomes either. Thus, the self-reported benefits were not found to be specific to the NF operant learning. While this absence of evidence is not a proof of the absence of any specific effect, we propose that self-reported benefits in our study may be explained by non-specific mechanisms of the NFT, such as the psychosocial factors (like education level, locus of control in dealing with technology, capacity to be mindful, field of work, etc.)30,31,33, relaxing training context, the instructions (closed eyes during a break of 21 min), and repetition-related effect17, in line with the view that placebo effect can play a role in psychological outcome of neurofeedback30,59. Note that education levels and the professions of the participants had almost the same repartition in both groups as well as the frequency of practice of meditation, sophrology, relaxation, arts (see Supplementary Tables S2, S3, S4, S5, S6 and S7). Furthermore, we must notice that all the subjects involved in our study were ranked as low to moderately anxious, which might have contributed to the lack of difference between groups. Indeed, Hardt and Kamiya99, in their alpha-upregulation NFT study, observed reduction of anxiety level for high but not low anxious subjects. Further investigations with high anxious or clinical participants should allow to test if benefits in terms of relaxation and anxiety may be highlighted specifically for the NF group. Overall, our findings showed that NFT induced positive self-report benefits for all participants, without any evidence for a significant link between these self-report benefits and the alpha activity modulation specifically induced in the NF (relative to the control) group. Indeed, the links between self-report outcomes and neurophysiology are complex and include several factors17, such as cognition, attention, motivation33, training frequency85, but also the choice of the neuromarker itself34,58. In this study, we chose, as in most NF protocols aiming at anxiety reduction, stress-management or well-being14,15,55,56, to use alpha activity as a biomarker for its known link with relaxed or meditative states35,43,44,45,46,47,48,49. However, the alpha activity is not the unique biomarker of stress management, anxiety, relaxation and well-being. For instance, it can be a marker for attention93,94 or memory97. In addition, other biomarkers such as theta activity100,101,102, beta activity103,104 or the ratio theta/alpha43,45 have also been associated with stress and/or anxiety reduction. Such biomarkers could be interesting targets to investigate in order to optimize our NF protocol. Further investigations should focus on the research of specific biomarkers related to psychophysiological factors, for example using neurophenomenology to study the link between neural activity modulation and participant’s inner experience105. Finally, to study the effect of sham-FB, we asked participants to assess their feeling of control during the training106. We found an increase of the feeling of control across sessions in both groups, which suggests that participants of the control group were not aware of the non-contingency between their efforts and the feedback signal and had a qualitatively similar experience as those of the NF group. Although the increase in the feeling of control across sessions seemed more marked in the NF group, there was no significant difference between groups on this outcome variable. This emphasizes the closely controlled nature of our sham control condition. It suggests that our manipulation of the sham feedback remained fully implicit to the subjects. One may note that we did not check the locus of control of the participants in dealing with technology, which may have an impact on the training33. To conclude, our study demonstrated an upregulation of the individual alpha-band activity specific to the NF group with a wearable dry-sensor EEG device across multiple sessions of NF training. In contrast, self-reported effects in terms of relaxation and anxiety were observed in both the NF and the control groups. Even if the relationship between the targeted EEG modulation and self-report outcomes is complex and remains to be fully elucidated, this study with a wearable dry-sensor EEG device underlined that NF can be used outside the lab to investigate and generalize NF learning mechanisms in ecological context.
## Precalculus (6th Edition) Blitzer RECALL: The standard form of a circle's equation is $(x-h)^2+(y-k)^2=r^2$, where $(h, k)$ is the center and $r$ is the radius. This means that the given equation is a circle whose center is at $(4, -6)$ and a radius of $\sqrt{25} = 5$ units. Thus, the given statement makes sense.
# Does the recursion theorem give quines? Wikipedia claims that the recursion theorem guarantees that quines (i.e. programs that output their own source code) exist in any (Turing complete) programming language. This seems to imply that one could follow the algorithm given in the proof of the recursion theorem to create a quine. However, the proof of the recursion theorem (and indeed the recursion theorem itself) only seems to guarantee the existence of a program that outputs its own index, which is, strictly speaking, different from outputing it's source code. The simple observation that no Turing machine whose tape alphabet consists solely of $0$'s and $1$'s can output its own source code, since its source code is a finite set of tuples, implies that quines cannot exist there. However, it seems likely as long as the alphabet is sufficiently rich (or the language sufficiently limited) it should be possible to write a bona fide quine. Question 1. Can the proof of the recursion theorem be transformed into a quine in any sufficiently expressive programming language? Question 2. If the answer to Question 1. is "no", how do we know if and when quines exist? • If we're talking about programs that output numbers (in some particular encoding), it must be possible to interpret the source code as such a number. In a reasonably expressive language it is possible (maybe even easy) to write a program that will translate from one particular encoding to another. – Robert Israel Oct 22 '12 at 22:48 • In the context of Turing machines, a Quine would be a Turing machine program that takes no input, writes itself as output, and then halts. Such Turing machine programs do exist, by the recursion theorem; they are fixed points of the Turing-computable function $F$ that given $e$ produces a Turing machine program to print $e$. – Carl Mummert Oct 29 '12 at 0:43 • @Quinn Culver: is there an aspect of the question that hasn't been answered? – Carl Mummert Oct 30 '12 at 11:49 • @CarlMummert I'm still considering it. In particular, I'm trying to figure out how I would follow the proof to write a quine in any (sufficiently rich) language (other than one like LISP where self-reference is inherently possible). I'm not sure if I agree that it's just as good to output an index. It somehow seems better to output the bona fide code. Henning indicated that even that is possible, so I want to know exactly why and how. – Quinn Culver Oct 30 '12 at 12:17 • Also, for all you know the index is the source code of the program, literally, if we use finite strings for our indexes. The use of natural numbers instead is just a choice of presentation, because we mathematicians typically study number-based rather than string-based models of computation. But we could perfectly well use "idealized C" as our model of computation, in which case all the inputs and outputs could be finite strings, literally including programs. – Carl Mummert Oct 30 '12 at 12:57 The Wikipedia article has an explicit example in LISP of how to use the method of the proof to generate a Quine. The proof of the recursion theorem is entirely constructive and syntactic, so it can be implemented in any other Turing complete language. Looking at the section there called Proof of the second recursion theorem: 1. The programming language will be able to implement the function $h$ defined at the top of the proof, because $h$ is computable and the language is Turing-complete • In particular, $h$ is a program that takes as input a program (source code) $x$ and produces as output a program (source code) $h(x)$. The source code given by $h(x)$ does the following: on input $y$, $h(x)$ first simulates the running of source code $x$ with itself as input. If that produces an output $e$, then $h(x)$ proceeds to simulate running the program $e$ with input $y$. • The program $h$ can be implemented in any Turing complete language; the main point is that you have to write an interpreter for the language within the language itself, so that $h(x)$ can use that interpreter as a subroutine to simulate running any source code on any input. 2. Furthermore, the language will be able to implement $F \circ h$ because $F$ is also computable. To do this, just use the source code for $h$ and then use its output as an input to the source code for $F$. 3. Let $e_0$ be a specific program (source code) that implements $F \circ h$. Let $e = h(e_0)$. 4. Then the program $e$ will compute the same function as the program $F(e)$, as the proof shows. 5. Thus, in the special case where $F(e)$ returns source code for a program that does nothing but return $e$, because program $e$ computes the same function as $F(e)$, program $e$ also returns the source code for $e$ when it is run. • In fact, program $e$ does something stronger than computing the same function: examining the proof show that program $e$ actually computes the source code for $F(e)$ and then runs (or interprets, or simulates) that. So if $F(e)$ has side effects, like printing something, $e$ has the same side effects. For any Turing complete language, you can follow this sequence of steps to get a Quine in that language. The interest for people who write Quines is generally to make ones that are shorter than the ones obtained by this method. The proof of the second recursion theorem is more general than is needed for that special purpose, and the Quines it generates would be very long, because the program that computes $h$ includes, in most cases, an interpreter for the language at hand. So to implement $h$ in $C$, you have to write a $C$ interpreter in $C$. This is why LISP gives a better example, because it is much more straightforward to interpret LISP in LISP. • I think my main concern is that we must consider every string to be a program; i.e. we must have a (computable) bijection between strings and programs. Of course, some of those strings were already programs, but many were not. I'm worried that what will be output will be only the program's corresponding string and that then the "quine" gotten by following the recursion theorem's proof will only output its corresponding string. – Quinn Culver Nov 1 '12 at 21:24 • The construction still works if only some strings are valid programs. In that situation the simulator can do whatever it wants with an invalid program. Regardless of what it does, the output of $h$ will always be a syntactically valid program (regardless whether the input $x$ is valid), and $e$ will still be a syntactically valid program, which will be a quine. – Carl Mummert Nov 1 '12 at 21:32 • When you say, "then the program $e$ will compute the same function as the program $F(e)$, as the proof shows.", I think you mean "then the program $h(e)$ will compute the same function as the program $F(h(e))$, as the proof shows." – Quinn Culver Nov 1 '12 at 22:31 • I'm now convinced. I think what I didn't realize is that $F$, in the proof, doesn't have to be an arbitrary function from strings to strings, but can merely be one that only takes programs and always outputs programs, as the special case in your point 5. can be taken to be. – Quinn Culver Nov 1 '12 at 23:37 • The usual literature doesn't emphasize that it would be possible to have a numbering where not every index is actually a valid program. This is because, in any reasonable programming language, the set of syntactically valid programs is decidable (in fact primitive recursive), and so we could just declare by fiat that any program that is syntactically incorrect will now compute some fixed function, at which point every string is now a "valid" program in the new sense. But even if the set of indices that are valid programs is not decidable, most results go through anyway. – Carl Mummert Nov 2 '12 at 0:20 The variant of the recursion theorem you have seen is formulated in terms of indices because it assumes a "programming language" where indices of Turing machines are what program texts look like. We define that a "program" in this language consists of an index in some well-defined enumeration of Turing machine, so if we find a machine that outputs its own index, that index is a quine from the perspective of this programming language. However, the theorem generalizes to any reasonably behaved notion of programming language, as long as we fix a way to encode a program text as something that can be (part of) the input and output of a running program. Here's how the theorem is stated in Jones, Computability and Complexity from a Programming Perspective: Theorem 14.2.1 (Kleene's second recursion theorem.) For any $\tt L$-program $\tt p$, there is an $\tt L$-program $\tt q$ satisfying, for all inputs ${\tt d}\in{\tt L}\text{-}\mathit{data}$: $$\tt [\!\![q]\!\!](d) = [\!\![p]\!\!](q,d)$$ Typically $\tt p$'s first input is a program, which $\tt p$ may apply to various arguments, transform, time, or otherwise process as its sees fit. The theorem in effect says that $\tt p$ may regard $\tt q$ as its own text, thus allowing self-referential programs. (Here $\tt[\!\![{\cdot}]\!\!]$ is the function that takes a program text to its meaning as a partial function, and the equals sign means that either the two sides are defined with the same value, or both sides diverge). In particular if $\tt p$ is a program that outputs its first argument, $\tt q$ will be a quine. This more general statement can be proved using basically the same techniques as the one for Turing machine enumerations that one finds in mathematicians' textbooks. • I don't fully understand your answer, since I don't know exactly how the coding works in Jones' book (though I intend to understand). But are you saying that one could indeed write a program (in, say, C) that outputs its own code simply by following the proof of the recursion theorem? – Quinn Culver Oct 23 '12 at 21:46 • @QuinnCulver: The point is that it works with any coding you'll care to specify, as long as program executions and certain other, very simple, program manipulations are computable with the coding you select. The general technique can certainly be used to produce a C quine, though it would be a very complex one, incorporating one or several layers of C self-interpreters to do the job. – Henning Makholm Oct 24 '12 at 12:24 • Okay. Does your current answer make that point clear? I.e. is it clear from your answer that if I wanted to write my own bona fide quine (I use 'bona fide' to distinguish between programs that output their own index and ones that output their actual source code) in, say, the language C, I could follow the proof of the recursion theorem to write one? – Quinn Culver Oct 25 '12 at 13:19 • @QuinnCulver: It does not make sense to "distinguish between programs that output their own index and ones that output their actual source code" -- in the "programming language" we're considering here the index IS THE SOURCE CODE. To answer your question: The general technique can certainly be used to produce a C quine, though it would be a very complex one, incorporating one or several layers of C self-interpreters to do the job. – Henning Makholm Oct 25 '12 at 13:30 • Indeed "index" is just a jargon term for "program", which we use because we want to emphasize that the results we prove work for any acceptable indexing, not just for some specific programming language. – Carl Mummert Oct 29 '12 at 0:38 Regarding your doubts about source code being the same as the index (or not), you can think of it this way: The source code of a program is some string of characters. Those characters are encoded in some way with numbers, let's say ASCII codes. Now you can think of each character as a digit in a base-$256$ number system. So you start with the first character and take its ASCII value, then add to it the second character's ASCII value multiplied by $256$ (the first power of $256$), then add the third character's ASCII value multiplied by $256^2$ (second power of $256$), and so on, up to the last character. This way you will get a very huge natural number, which uniquely represents this particular program (its source code). So this number is the index of that program. Sure, there would be indices which does not represent any valid program (since not all possible outputs are programs, just a subset of them are). But it doesn't matter. The only important thing is that every program has its own unique index. Here's an example program in my own toy language: say "Hi!"; and here's its index: 279 249 219 322 602 409 517 427 and how I calculated it: $'s'\cdot256^0 \;+\; 'a'\cdot256^1 \;+\; 'y'\cdot256^2 \;+\; '\ '\cdot256^3 \;+\; '"'\cdot256^4 \;+\; 'H'\cdot256^5 \;+\; 'i'\cdot256^6 \;+\;\\ '!'\cdot256^7 \;+\; '"'\cdot256^8 \;+\; ';'\cdot256^9 \\=\\ 115\cdot256^0 \;+\; 97\cdot256^1 \;+\; 121\cdot256^2 \;+\; 32\cdot256^3 \;+\; 34\cdot256^4 \;+\; 72\cdot256^5 \;+\; 105\cdot256^6 \;+\;\\ 33\cdot256^7 \;+\; 34\cdot256^8 \;+\; 59\cdot256^9 \\=\\ 115\cdot1 \;+\; 97\cdot256 \;+\; 121\cdot65\,536 \;+\; 32\cdot16\,777\,216 \;+\; 34\cdot4\,294\,967\,296 \;+\;\\ 72\cdot1\,099\,511\,627\,776 \;+\; 105\cdot281\,474\,976\,710\,656 \;+\; 33\cdot72\,057\,594\,037\,927\,936 \;+\;\\ 34\cdot18\,446\,744\,073\,709\,551\,616 \;+\; 59\cdot4\,722\,366\,482\,869\,645\,213\,696 \\=\\ 115 \;+\; 24\,832 \;+\; 7\,929\,856 \;+\; 536\,870\,912 \;+\; 146\,028\,888\,064 \;+\; 79\,164\,837\,199\,872 \;+\; 29\,554\,872\,554\,618\,880 \;+\; 2\,377\,900\,603\,251\,621\,888 \;+\; 627\,189\,298\,506\,124\,754\,944 \;+\; 278\,619\,622\,489\,309\,067\,608\,064 \\=\\ 279\,249\,219\,322\,602\,409\,517\,427$ Therefore, when a quine prints its own source code, it can be thought of as outputting a single natural number, which is exactly the same as the number which represents its source code (its index). • I understand how Gödel numbers work. My problem was that I wanted a program that output not it's index, but it's actual, human-readable, code (which can be gotten from the index). – Quinn Culver Jun 9 '15 at 11:15 • This is just a matter of interpreting the output of the program. After all, it's just a stream of bits, which can be interpreted as bytes (base-256 digits of that number), and then those bytes as characters. But it is not the problem of the program how do we interpret these bytes. It's the choice of the operating system, execution environment and the user. You can set up your execution environment to display stream of bits instead of textual output, but it doesn't affect the program and how it works. We, users, see characters. Computers see only streams of bits. – SasQ Jun 9 '15 at 18:16 • My point is that if I wanted, say, to impress someone by writing a Quine, but all they saw was a program that output it's own Gödel number, the person probably wouldn't be too impressed. See what I mean? – Quinn Culver Jun 10 '15 at 14:14
### Circles Many ways problem ## Main solutions Here are some examples for you to try your ideas out on. For each set of three points, find the equation of the circle passing through them. It is certainly worth sketching a graph to help you understand what is going on in each case! Which of your approaches is the most effective? 1. $A(3,2)$, $B(3,6)$, $C(5,8)$ 2. $A(0,4)$, $B(4,0)$, $C(-6,0)$ 3. $A(-1,-5)$, $B(-2,2)$, $C(2,-1)$ 4. $A(3,3)$, $B(1,-2)$, $C(4,1)$ 1. $(x-7)^2+(y-4)^2=20$ 2. $(x+1)^2+(y+1)^2=26$ 3. $\left(x+\frac{3}{2}\right)^2+\left(y+\frac{3}{2}\right)^2= \frac{25}{2}$ 4. $\left(x-\frac{7}{6}\right)^2+\left(y-\frac{5}{6}\right)^2= \frac{145}{18}$ ### Full solutions 1. $A(3,2)$, $B(3,6)$, $C(5,8)$ A geometric approach In the suggestion, we saw that the centre of the circle must be the same distant from $A$ and $B$, and it turns out that those points which are lie on the perpendicular bisector of the line segment $AB$. So if we draw the perpendicular bisector of $AB$, the circle’s centre must lie on it. Likewise, if we draw the perpendicular bisector of $AC$ or of $BC$, the circle’s centre must lie on that, too. The point where these perpendicular bisectors meet must therefore be the centre of the circle through the points $A$, $B$ and $C$. This GeoGebra applet shows how this works for this question; you can move the points $A$, $B$ and $C$ if you wish. We can now see that the perpendicular bisector of $AB$ is going to be the line $y=4$ since $A$ and $B$ have the same $x$-coordinate. The perpendicular bisector of the chord $BC$ passes through the midpoint of $BC$, which is $(4,7)$. (The midpoint of a line between two points can be found by averaging the coordinates of the two points: its coordinates are half way between the coordinates of $B$ and $C$.) The gradient of line through the points $B$ and $C$ is $m=\frac{8-6}{5-3}=\frac{2}{2}=1,$ so the gradient of the line perpendicular to that is $m'=-\frac{1}{m}=-\frac{1}{1}=-1$ (the negative reciprocal of $m$). The equation of this second perpendicular bisector is therefore \begin{align*} &&y-7&=-(x-4)&&\quad \\ \implies&& y&=11-x. \end{align*} We now need to find the point of intersection of these two lines as this is the centre of our circle. Solving the equations $y=4$ and $y=11-x$ simultaneously gives the coordinates of their intersection, which is $O(7,4)$. Finally, we want the radius of our circle so we work out the length $|OA|$: $r=|OA|=\sqrt{(3-7)^2+(2-4)^2}=\sqrt{20}.$ Therefore the equation of the circle is $(x-7)^2+(y-4)^2=20.$ Incidentally, note that as the equation for the circle only needs $r^2$ and not $r$ itself, we could have found $r^2=|OA|^2$ and dispensed with all square roots. We will do this from now on in this question. An algebraic approach Following the suggestion, we substitute for $x$ and $y$ in the equation $(x-a)^2+(y-b)^2=r^2$ for our three coordinates to find $a$, $b$ and $r$. This gives us the following three simultaneous equations: \begin{align} (3-a)^2+(2-b)^2&=r^2 \label{eq:1}\\ (3-a)^2+(6-b)^2&=r^2 \label{eq:2}\\ (5-a)^2+(8-b)^2&=r^2 \label{eq:3} \end{align} In this form, they are not that easy to solve, so we expand the brackets: \begin{align} \eqref{eq:1} \implies && a^2-6a+b^2-4b+13&=r^2&&\quad \label{eq:4}\\ \eqref{eq:2} \implies && a^2-6a+b^2-12b+45&=r^2 \label{eq:5}\\ \eqref{eq:3} \implies && a^2-10a+b^2-16b+89&=r^2 \label{eq:6} \end{align} Now subtracting any pair of these eliminates all of the square terms, as follows: \begin{align*} \eqref{eq:5}-\eqref{eq:4}& \implies -8b+32=0 \implies b=4\\ \eqref{eq:6}-\eqref{eq:5}& \implies -4a+44-4\times 4 \implies a=7 \end{align*} Substituting our values for $a$ and $b$ into $\eqref{eq:1}$ we find the value of $r^2$: \begin{align*} r^2&=(3-7)^2+(2-4)^2 \\ &=16+4=20. \end{align*} Therefore the equation of the circle is $(x-7)^2+(y-4)^2=20.$ Which method is quicker/easier here? By virtue of the fact we can write down one of the perpendicular bisectors by inspection, probably the geometric method here. 1. $A(0,4)$, $B(4,0)$, $C(-6,0)$ Geometric, perpendicular bisector method The midpoint of $AB$ is $(2,2)$, so the perpendicular bisector passes through this point. The gradient of $AB$ is $-1$, so the perpendicular bisector has gradient $-1/-1=1$ and thus the equation of the perpendicular bisector is $y-2=1(x-2) \implies y=x,$ as is apparent from the sketch. Similarly, the perpendicular bisector of the chord $BC$ is the line $x=-1$. Thus the point of intersection, and the centre of the circle, is $O(-1,-1)$. We work out the length $|OA|$ to find the (square of the) radius: $r^2 = |OA|^2 = (-1-0)^2+(-1-4)^2=26$ so the equation of our circle through the points $A$, $B$ and $C$ is $(x+1)^2+(y+1)^2=26.$ Algebraic, simultaneous equations method Substituting the coordinates $A$, $B$ and $C$ into the general equation for a circle and expanding the brackets as in part (a), we find \begin{align} a^2+b^2-8b+16 &=r^2 \label{eq:7}\\ a^2-8a+b^2+16 &=r^2 \label{eq:8}\\ a^2+12a+b^2+36 &=r^2 \label{eq:9} \end{align} Then subtracting the equations gives \begin{align*} \eqref{eq:9}-\eqref{eq:8}& \implies& -20a+20&=0&\implies& a=-1\\ \eqref{eq:8}-\eqref{eq:7}& \implies& 8b+8&=0 &\implies& b=-1. \end{align*} From equation $\eqref{eq:7}$, we find that the radius of our circle is given by $r^2=(-1)^2 + (-1)^2-8(-1)+16=26$ so the equation of the circle through $A$, $B$ and $C$ is $(x+1)^2+(y+1)^2=26.$ Which method is quicker/easier here? Again, probably the geometric method as the perpendicular bisectors weren’t algebraically fiddly to find. 1. $A(-1,-5)$, $B(-2,2)$, $C(2,-1)$ Geometric, perpendicular bisector method The midpoint of $AB$ is $\left(-\dfrac{3}{2},-\dfrac{3}{2}\right)$ and the gradient of $AB$ is $\dfrac{-5-2}{-1-(-2)}=-7$. Thus the equation of the perpendicular bisector of $AB$ is $y-\left(-\frac{3}{2}\right)= \frac{1}{7}\left(x-\left(-\frac{3}{2}\right)\right)$ which we can rearrange to $y=\frac{1}{7}x-\frac{9}{7}.$ Similarly, the midpoint of $BC$ is $\left(0,\dfrac{1}{2}\right)$ and its gradient is $\dfrac{2-(-1)}{-2-2}=-\dfrac{3}{4}$. The equation of the second perpendicular bisector is therefore $y-\frac{1}{2}=\frac{4}{3}x$ or $y=\frac{4}{3}x+\frac{1}{2}$. The two lines intersect when $\frac{1}{7}x-\frac{9}{7}=\frac{4}{3}x+\frac{1}{2}$ which we can rearrange to get \begin{align*} &&\left(\frac{28}{21}-\frac{3}{21}\right)x&=-\frac{9}{7}-\frac{1}{2}&&\quad\\ \implies&& \frac{25}{21}x&=-\frac{25}{14}\\ \implies&& x&=-\frac{3}{2}. \end{align*} Substituting into the equation of the second perpendicular bisector (which has a simpler-looking equation), we find $y=\frac{4}{3}\left(-\frac{3}{2}\right)+\frac{1}{2}=-\frac{3}{2}.$ Thus the centre of the circle is $O\left(-\dfrac{3}{2},-\dfrac{3}{2}\right)$ and the radius is given by $r^2=|OA|^2=\left(-1+\frac{3}{2}\right)^2+\left(-5+\frac{3}{2}\right)^2 =\frac{1}{4}+\frac{49}{4}=\frac{25}{2}.$ The final equation of the circle is therefore $\left(x+\frac{3}{2}\right)^2+\left(y+\frac{3}{2}\right)^2=\frac{25}{2}.$ Algebraic, simultaneous equations method This time, our three simultaneous equations are: \begin{align} a^2+2a+b^2+10b+26&=r^2 \label{eq:10}\\ a^2+4a+b^2-4b+8&=r^2 \label{eq:11}\\ a^2-4a+b^2+2b+5&=r^2 \label{eq:12} \end{align} Then eliminating the squared terms, we find \begin{align} \eqref{eq:11}-\eqref{eq:10}& \implies& 2a-14b-18&=0&&\quad \label{eq:13}\\ \eqref{eq:12}-\eqref{eq:11}& \implies& -8a+6b-3&=0. \label{eq:14} \end{align} Eliminating $a$: $4\times\eqref{eq:13}+\eqref{eq:14} \implies -50b-75=0 \implies b=-\frac{3}{2}.$ Substitute this value for into equation $\eqref{eq:14}$ to find $-8a+6\times\left(-\frac{3}{2}\right)+3=0 \implies 8a=-12 \implies a=-\frac{3}{2}.$ (Alternatively, halving equation $\eqref{eq:13}$ gives $a-7b-9=0$, so $a=7b+9=-\frac{3}{2}$.) The radius is then given by $r^2=\left(-1+\frac{3}{2}\right)^2+\left(-5+\frac{3}{2}\right)^2=\frac{1}{4}+\frac{49}{4}=\frac{25}{2}$ and the final equation of the circle is $\left(x+\frac{3}{2}\right)^2+\left(y+\frac{3}{2}\right)^2=\frac{25}{2}.$ Note that we can save ourselves a lot of effort by converting all fractions from mixed fractions like $4\frac{1}{2}$ to $\frac{9}{2}$ straightaway - this trick saves time when we are manipulating a lot of fractions. Which method is quicker/easier here? Tough call - which do you prefer? It is interesting to notice that in the diagram for this part, the centre of the circle appears to lie on the chord $AB$. Is this really the case? 1. $A(3,3)$, $B(1,-2)$, $C(4,1)$ Geometric, perpendicular bisector method The midpoint of $AB$ is $\left(2,\dfrac{1}{2}\right)$ and the gradient of $AB$ is $\dfrac{3-(-2)}{3-1}=\dfrac{5}{2}$. Thus the equation of the first perpendicular bisector is $y-\frac{1}{2}=-\frac{2}{5}(x-2)$ which can be rearranged to give $y=-\frac{2}{5}x+\frac{13}{10}.$ Likewise, the midpoint of $BC$ is $\left(\dfrac{5}{2},-\dfrac{1}{2}\right)$ and its gradient is $\dfrac{-2-1}{1-4}=1$. The equation of the second perpendicular bisector is therefore $y+\frac{1}{2}=-1\left(x-\frac{5}{2}\right)$ which gives $y=2-x$. The two lines intersect when \begin{align*} &&-\frac{2}{5}x+\frac{13}{10}&=2-x&&\quad\\ \implies&& \frac{3}{5}x&=\frac{7}{10}\\ \implies&& x&=\frac{7}{6}. \end{align*} Substituting into the equation of the second perpendicular bisector we find $y=2-\frac{7}{6}=\frac{5}{6}.$ Thus the centre of the circle is $O\left(\dfrac{7}{6},\dfrac{5}{6}\right)$ and the radius is given by \begin{align*} r^2=\vert OA \vert^2 &= \left(3-\frac{7}{6}\right)^2+\left(3-\frac{5}{6}\right)^2\\ &= \left(\frac{11}{6}\right)^2+\left(\frac{13}{6}\right)^2\\ &= \frac{121+169}{36}\\ &=\frac{145}{18}. \end{align*} The final equation of the circle is therefore $\left(x-\frac{7}{6}\right)^2+\left(y-\frac{5}{6}\right)^2= \frac{145}{18}.$ Algebraic, simultaneous equations method Our three simultaneous equations are: \begin{align} a^2-6a+b^2-6b+18 &=r^2 \label{eq:15}\\ a^2-2a+b^2+4b+5 &=r^2 \label{eq:16}\\ a^2-8a+b^2-2b+17 &=r^2 \label{eq:17} \end{align} Eliminating the squared terms gives \begin{align} \eqref{eq:16}-\eqref{eq:15}& \implies& 4a+10b-13&=0&&\quad \label{eq:18}\\ \eqref{eq:17}-\eqref{eq:16}& \implies& -6a-6b+12&=0 \notag \\ \div (-6)& \implies& a+b-2&=0.\label{eq:19} \end{align} Eliminating $a$: $\eqref{eq:18}-4\times\eqref{eq:19} \implies 6b-5=0 \implies b=\frac{5}{6}.$ Substituting this value for $b$ into equation $\eqref{eq:19}$ gives $a+\frac{5}{6}-2=0 \implies a=\frac{7}{6}.$ The radius is then given by \begin{align*} r^2&= \left(3-\frac{7}{6}\right)^2+\left(3-\frac{5}{6}\right)^2\\ &= \left(\frac{11}{6}\right)^2+\left(\frac{13}{6}\right)^2\\ &= \frac{121+169}{36}\\ &=\frac{145}{18} \end{align*} and thus the equation of the circle is $\left(x-\frac{7}{6}\right)^2+\left(y-\frac{5}{6}\right)^2= \frac{145}{18}.$ Which method is quicker/easier here? Again, it’s a tough call - do you have a preference?
# Behavior of a large spherical void in an infinite universe of uniform density matter I realise how unrealistic this scenario is, but nevertheless, it does not seem unphysical and it does raise interesting questions. Imagine a flat stationary infinite isotropic homogeneous universe uniformly filled with an incompressible transparent fluid of mass-density, $$\rho$$. One can argue that such a universe could have zero gravitational field and field-gradient everywhere and that we could arbitrarily say that its gravitational potential is also zero. A spherical void of radius, $$R$$, would have a mass deficit of $$-\rho(4/3)\pi R^3$$ compared with the rest of the universe. With Newtonian gravity, it would generate a field of $$+G\rho(4/3)\pi R^3/r^2$$ outside the void and a field of $$+G\rho(4/3)\pi r$$ inside the void (note the '$$+$$' sign). The corresponding potentials would be $$+G\rho(4/3)\pi R^3/r$$ outside the void and $$+(3/2)G\rho(4/3)\pi R^2-(1/2)G\rho(4/3)\pi r^2$$ inside the void. For a void with large enough radius, R, the potential at the center of the void will exceed $$c^2/2$$. For even larger radii, the potential will exceed $$c^2/2$$ over a spherical region with radius Rs>R. This would correspond to the Schwartzchild radius for a black hole, but in this case the polarity is exactly opposite. Admittedly, it's an implausible scenario but it does give rise to a myriad questions: what are the properties of such a void when the effects of general relativity become important? Is there the equivalent of an event horizon? Does it have a temperature? What does a person sitting at the center of the void see? What do they see if they start falling out of the center? What happens if two such voids meet? Can they orbit each other? etc., etc. Since it's not consistent to say it's stationary, let's start from a simple example that is consistent, which is a flat FLRW universe. A calculation then shows that, ignoring unitless factors of order unity, $$R=c/H$$. This is the Hubble radius, i.e., the radius of a cosmological event horizon.
# Statistics (stat) • The estimation of optimal treatment regimes is of considerable interest to precision medicine. In this work, we propose a causal $k$-nearest neighbor method to estimate the optimal treatment regime. The method roots in the framework of causal inference, and estimates the causal treatment effects within the nearest neighborhood. Although the method is simple, it possesses nice theoretical properties. We show that the causal $k$-nearest neighbor regime is universally consistent. That is, the causal $k$-nearest neighbor regime will eventually learn the optimal treatment regime as the sample size increases. We also establish its convergence rate. However, the causal $k$-nearest neighbor regime may suffer from the curse of dimensionality, i.e. performance deteriorates as dimensionality increases. To alleviate this problem, we develop an adaptive causal $k$-nearest neighbor method to perform metric selection and variable selection simultaneously. The performance of the proposed methods is illustrated in simulation studies and in an analysis of a chronic depression clinical trial. • We propose a Las Vegas transformation of Markov Chain Monte Carlo (MCMC) estimators of Restricted Boltzmann Machines (RBMs). We denote our approach Markov Chain Las Vegas (MCLV). MCLV gives statistical guarantees in exchange for random running times. MCLV uses a stopping set built from the training data and has maximum number of Markov chain steps K (referred as MCLV-K). We present a MCLV-K gradient estimator (LVS-K) for RBMs and explore the correspondence and differences between LVS-K and Contrastive Divergence (CD-K), with LVS-K significantly outperforming CD-K training RBMs over the MNIST dataset, indicating MCLV to be a promising direction in learning generative models. • Given a matrix $\mathbf{A}\in\mathbb{R}^{n\times d}$ and a vector $b \in\mathbb{R}^{d}$, we show how to compute an $\epsilon$-approximate solution to the regression problem $\min_{x\in\mathbb{R}^{d}}\frac{1}{2} \|\mathbf{A} x - b\|_{2}^{2}$ in time $\tilde{O} ((n+\sqrt{d\cdot\kappa_{\text{sum}}})\cdot s\cdot\log\epsilon^{-1})$ where $\kappa_{\text{sum}}=\mathrm{tr}\left(\mathbf{A}^{\top}\mathbf{A}\right)/\lambda_{\min}(\mathbf{A}^{T}\mathbf{A})$ and $s$ is the maximum number of non-zero entries in a row of $\mathbf{A}$. Our algorithm improves upon the previous best running time of $\tilde{O} ((n+\sqrt{n \cdot\kappa_{\text{sum}}})\cdot s\cdot\log\epsilon^{-1})$. We achieve our result through a careful combination of leverage score sampling techniques, proximal point methods, and accelerated coordinate descent. Our method not only matches the performance of previous methods, but further improves whenever leverage scores of rows are small (up to polylogarithmic factors). We also provide a non-linear generalization of these results that improves the running time for solving a broader class of ERM problems. • Feature selection plays a critical role in data mining, driven by increasing feature dimensionality in target problems and growing interest in advanced but computationally expensive methodologies able to model complex associations. Specifically, there is a need for feature selection methods that are computationally efficient, yet sensitive to complex patterns of association, e.g. interactions, so that informative features are not mistakenly eliminated prior to downstream modeling. This paper focuses on Relief-based algorithms (RBAs), a unique family of filter-style feature selection algorithms that strike an effective balance between these objectives while flexibly adapting to various data characteristics, e.g. classification vs. regression. First, this work broadly examines types of feature selection and defines RBAs within that context. Next, we introduce the original Relief algorithm and associated concepts, emphasizing the intuition behind how it works, how feature weights generated by the algorithm can be interpreted, and why it is sensitive to feature interactions without evaluating combinations of features. Lastly, we include an expansive review of RBA methodological research beyond Relief and its popular descendant, ReliefF. In particular, we characterize branches of RBA research, and provide comparative summaries of RBA algorithms including contributions, strategies, functionality, time complexity, adaptation to key data characteristics, and software availability. • Effective utilization of photovoltaic (PV) plants requires weather variability robust global solar radiation (GSR) forecasting models. Random weather turbulence phenomena coupled with assumptions of clear sky model as suggested by Hottel pose significant challenges to parametric & non-parametric models in GSR conversion rate estimation. Also, a decent GSR estimate requires costly high-tech radiometer and expert dependent instrument handling and measurements, which are subjective. As such, a computer aided monitoring (CAM) system to evaluate PV plant operation feasibility by employing smart grid past data analytics and deep learning is developed. Our algorithm, SolarisNet is a 6-layer deep neural network trained on data collected at two weather stations located near Kalyani metrological site, West Bengal, India. The daily GSR prediction performance using SolarisNet outperforms the existing state of art and its efficacy in inferring past GSR data insights to comprehend daily and seasonal GSR variability along with its competence for short term forecasting is discussed. • An orthogonally equivariant estimator for the covariance matrix is proposed that is valid when the dimension $p$ is larger than the sample size $n$. Equivariance under orthogonal transformations is a less restrictive assumption than structural assumptions on the true covariance matrix. It reduces the problem of estimation of the covariance matrix to that of estimation of its eigenvalues. In this paper, the eigenvalue estimates are obtained from an adjusted likelihood function derived by approximating the integral over the eigenvectors of the sample covariance matrix, which is a challenging problem in its own right. Comparisons with two well-known orthogonally equivariant estimators are given, which are based on Monte-Carlo risk estimates for simulated data and misclassification errors in a real data analysis. • We present an efficient alternating direction method of multipliers (ADMM) algorithm for segmenting a multivariate non-stationary time series with structural breaks into stationary regions. We draw from recent work where the series is assumed to follow a vector autoregressive model within segments and a convex estimation procedure may be formulated using group fused lasso penalties. Our ADMM approach first splits the convex problem into a global quadratic program and a simple group lasso proximal update. We show that the global problem may be parallelized over rows of the time dependent transition matrices and furthermore that each subproblem may be rewritten in a form identical to the log-likelihood of a Gaussian state space model. Consequently, we develop a Kalman smoothing algorithm to solve the global update in time linear in the length of the series. • In this paper, a scale mixture of Normal distributions model is developed for classification and clustering of data having outliers and missing values. The classification method, based on a mixture model, focuses on the introduction of latent variables that gives us the possibility to handle sensitivity of model to outliers and to allow a less restrictive modelling of missing data. Inference is processed through a Variational Bayesian Approximation and a Bayesian treatment is adopted for model learning, supervised classification and clustering. • Hash codes are efficient data representations for coping with the ever growing amounts of data. In this paper, we introduce a random forest semantic hashing scheme that embeds tiny convolutional neural networks (CNN) into shallow random forests, with near-optimal information-theoretic code aggregation among trees. We start with a simple hashing scheme, where random trees in a forest act as hashing functions by setting 1' for the visited tree leaf, and 0' for the rest. We show that traditional random forests fail to generate hashes that preserve the underlying similarity between the trees, rendering the random forests approach to hashing challenging. To address this, we propose to first randomly group arriving classes at each tree split node into two groups, obtaining a significantly simplified two-class classification problem, which can be handled using a light-weight CNN weak learner. Such random class grouping scheme enables code uniqueness by enforcing each class to share its code with different classes in different trees. A non-conventional low-rank loss is further adopted for the CNN weak learners to encourage code consistency by minimizing intra-class variations and maximizing inter-class distance for the two random class groups. Finally, we introduce an information-theoretic approach for aggregating codes of individual trees into a single hash code, producing a near-optimal unique hash for each class. The proposed approach significantly outperforms state-of-the-art hashing methods for image retrieval tasks on large-scale public datasets, while performing at the level of other state-of-the-art image classification techniques while utilizing a more compact and efficient scalable representation. This work proposes a principled and robust procedure to train and deploy in parallel an ensemble of light-weight CNNs, instead of simply going deeper. • The diagnosis of Alzheimer's disease (AD) in routine clinical practice is most commonly based on subjective clinical interpretations. Quantitative electroencephalography (QEEG) measures have been shown to reflect neurodegenerative processes in AD and might qualify as affordable and thereby widely available markers to facilitate the objectivization of AD assessment. Here, we present a novel framework combining Riemannian tangent space mapping and elastic net regression for the development of brain atrophy markers. While most AD QEEG studies are based on small sample sizes and psychological test scores as outcome measures, here we train and test our models using data of one of the largest prospective EEG AD trials ever conducted, including MRI biomarkers of brain atrophy. • Nov 23 2017 cs.DB stat.ML arXiv:1711.08330v1 In this paper we address cardinality estimation problem which is an important subproblem in query optimization. Query optimization is a part of every relational DBMS responsible for finding the best way of the execution for the given query. These ways are called plans. The execution time of different plans may differ by several orders, so query optimizer has a great influence on the whole DBMS performance. We consider cost-based query optimization approach as the most popular one. It was observed that cost-based optimization quality depends much on cardinality estimation quality. Cardinality of the plan node is the number of tuples returned by it. In the paper we propose a novel cardinality estimation approach with the use of machine learning methods. The main point of the approach is using query execution statistics of the previously executed queries to improve cardinality estimations. We called this approach adaptive cardinality estimation to reflect this point. The approach is general, flexible, and easy to implement. The experimental evaluation shows that this approach significantly increases the quality of cardinality estimation, and therefore increases the DBMS performance for some queries by several times or even by several dozens of times. • We consider the problem of estimating the joint distribution $P$ of $n$ independent random variables within the Bayes paradigm from a non-asymptotic point of view. Assuming that $P$ admits some density $s$ with respect to a given reference measure, we consider a density model $\overline S$ for $s$ that we endow with a prior distribution $\pi$ (with support $\overline S$) and we build a robust alternative to the classical Bayes posterior distribution which possesses similar concentration properties around $s$ whenever it belongs to the model $\overline S$. Furthermore, in density estimation, the Hellinger distance between the classical and the robust posterior distributions tends to 0, as the number of observations tends to infinity, under suitable assumptions on the model and the prior, provided that the model $\overline S$ contains the true density $s$. However, unlike what happens with the classical Bayes posterior distribution, we show that the concentration properties of this new posterior distribution are still preserved in the case of a misspecification of the model, that is when $s$ does not belong to $\overline S$ but is close enough to it with respect to the Hellinger distance. • Convolutional neural networks (CNNs) have been generally acknowledged as one of the driving forces for the advancement of computer vision. Despite their promising performances on many tasks, CNNs still face major obstacles on the road to achieving ideal machine intelligence. One is the difficulty of interpreting them and understanding their inner workings, which is important for diagnosing their failures and correcting them. Another is that standard CNNs require large amounts of annotated data, which is sometimes very hard to obtain. Hence, it is desirable to enable them to learn from few examples. In this work, we address these two limitations of CNNs by developing novel and interpretable models for few-shot learning. Our models are based on the idea of encoding objects in terms of visual concepts, which are interpretable visual cues represented within CNNs. We first use qualitative visualizations and quantitative statistics, to uncover several key properties of feature encoding using visual concepts. Motivated by these properties, we present two intuitive models for the problem of few-shot learning. Experiments show that our models achieve competitive performances, while being much more flexible and interpretable than previous state-of-the-art few-shot learning methods. We conclude that visual concepts expose the natural capability of CNNs for few-shot learning. • The goal of graph representation learning is to embed each vertex in a graph into a low-dimensional vector space. Existing graph representation learning methods can be classified into two categories: generative models that learn the underlying connectivity distribution in the graph, and discriminative models that predict the probability of edge existence between a pair of vertices. In this paper, we propose GraphGAN, an innovative graph representation learning framework unifying above two classes of methods, in which the generative model and discriminative model play a game-theoretical minimax game. Specifically, for a given vertex, the generative model tries to fit its underlying true connectivity distribution over all other vertices and produces "fake" samples to fool the discriminative model, while the discriminative model tries to detect whether the sampled vertex is from ground truth or generated by the generative model. With the competition between these two models, both of them can alternately and iteratively boost their performance. Moreover, when considering the implementation of generative model, we propose a novel graph softmax to overcome the limitations of traditional softmax function, which can be proven satisfying desirable properties of normalization, graph structure awareness, and computational efficiency. Through extensive experiments on real-world datasets, we demonstrate that GraphGAN achieves substantial gains in a variety of applications, including link prediction, node classification, and recommendation, over state-of-the-art baselines. • We consider the problem of sparse variable selection on high dimension heterogeneous data sets, which has been taken on renewed interest recently due to the growth of biological and medical data sets with complex, non-i.i.d. structures and prolific response variables. The heterogeneity is likely to confound the association between explanatory variables and responses, resulting in a wealth of false discoveries when Lasso or its variants are naïvely applied. Therefore, the research interest of developing effective confounder correction methods is growing. However, ordinarily employing recent confounder correction methods will result in undesirable performance due to the ignorance of the convoluted interdependency among the prolific response variables. To fully improve current variable selection methods, we introduce a model that can utilize the dependency information from multiple responses to select the active variables from heterogeneous data. Through extensive experiments on synthetic and real data sets, we show that our proposed model outperforms the existing methods. • We tackle the problem of constructive preference elicitation, that is the problem of learning user preferences over very large decision problems, involving a combinatorial space of possible outcomes. In this setting, the suggested configuration is synthesized on-the-fly by solving a constrained optimization problem, while the preferences are learned itera tively by interacting with the user. Previous work has shown that Coactive Learning is a suitable method for learning user preferences in constructive scenarios. In Coactive Learning the user provides feedback to the algorithm in the form of an improvement to a suggested configuration. When the problem involves many decision variables and constraints, this type of interaction poses a significant cognitive burden on the user. We propose a decomposition technique for large preference-based decision problems relying exclusively on inference and feedback over partial configurations. This has the clear advantage of drastically reducing the user cognitive load. Additionally, part-wise inference can be (up to exponentially) less computationally demanding than inference over full configurations. We discuss the theoretical implications of working with parts and present promising empirical results on one synthetic and two realistic constructive problems. • Deep Learning models are vulnerable to adversarial examples, i.e.\ images obtained via deliberate imperceptible perturbations, such that the model misclassifies them with high confidence. However, class confidence by itself is an incomplete picture of uncertainty. We therefore use principled Bayesian methods to capture model uncertainty in prediction for observing adversarial misclassification. We provide an extensive study with different Bayesian neural networks attacked in both white-box and black-box setups. The behaviour of the networks for noise, attacks and clean test data is compared. We observe that Bayesian neural networks are uncertain in their predictions for adversarial perturbations, a behaviour similar to the one observed for random Gaussian perturbations. Thus, we conclude that Bayesian neural networks can be considered for detecting adversarial examples. • Estimating large covariance matrices has been a longstanding important problem in many applications and has attracted increased attention over several decades. This paper deals with two methods based on pre-existing works to impose sparsity on the covariance matrix via its unit lower triangular matrix (aka Cholesky factor) $\mathbf{T}$. The first method serves to estimate the entries of $\mathbf{T}$ using the Ordinary Least Squares (OLS), then imposes sparsity by exploiting some generalized thresholding techniques such as Soft and Smoothly Clipped Absolute Deviation (SCAD). The second method directly estimates a sparse version of $\mathbf{T}$ by penalizing the negative normal log-likelihood with $L_1$ and SCAD penalty functions. The resulting covariance estimators are always guaranteed to be positive definite. Some Monte-Carlo simulations as well as experimental data demonstrate the effectiveness of our estimators for hyperspectral anomaly detection using the Kelly anomaly detector. • Many cognitive, sensory and motor processes have correlates in oscillatory neural sources, which are embedded as a subspace into the recorded brain signals. Decoding such processes from noisy magnetoencephalogram/electroencephalogram (M/EEG) signals usually requires the use of data-driven analysis methods. The objective evaluation of such decoding algorithms on experimental raw signals, however, is a challenge: the amount of available M/EEG data typically is limited, labels can be unreliable, and raw signals often are contaminated with artifacts. The latter is specifically problematic, if the artifacts stem from behavioral confounds of the oscillatory neural processes of interest. To overcome some of these problems, simulation frameworks have been introduced for benchmarking decoding methods. Generating artificial brain signals, however, most simulation frameworks make strong and partially unrealistic assumptions about brain activity, which limits the generalization of obtained results to real-world conditions. In the present contribution, we thrive to remove many shortcomings of current simulation frameworks and propose a versatile alternative, that allows for objective evaluation and benchmarking of novel data-driven decoding methods for neural signals. Its central idea is to utilize post-hoc labelings of arbitrary M/EEG recordings. This strategy makes it paradigm-agnostic and allows to generate comparatively large datasets with noiseless labels. Source code and data of the novel simulation approach are made available for facilitating its adoption. • In this paper we are interested in multifractional stable processes where the self-similarity index $H$ is a function of time, in other words $H$ becomes time changing, and the stability index $\alpha$ is a constant. Using $\beta$- negative power variations ($-1/2<\beta<0$), we propose estimators for the value of the multifractional function $H$ at a fixed time $t_0$ and for $\alpha$ for two cases: multifractional Brownian motion ($\alpha=2$) and linear multifractional stable motion ($0<\alpha<2$). We get the consistency of our estimates for the underlying processes with the rate of convergence. • The graph Laplacian plays key roles in information processing of relational data, and has analogies with the Laplacian in differential geometry. In this paper, we generalize the analogy between graph Laplacian and differential geometry to the hypergraph setting, and propose a novel hypergraph $p$-Laplacian. Unlike the existing two-node graph Laplacians, this generalization makes it possible to analyze hypergraphs, where the edges are allowed to connect any number of nodes. Moreover, we propose a semi-supervised learning method based on the proposed hypergraph $p$-Laplacian, and formalize them as the analogue to the Dirichlet problem, which often appears in physics. We further explore theoretical connections to normalized hypergraph cut on a hypergraph, and propose normalized cut corresponding to hypergraph $p$-Laplacian. The proposed $p$-Laplacian is shown to outperform standard hypergraph Laplacians in the experiment on a hypergraph semi-supervised learning and normalized cut setting. • While most classical approaches to Granger causality detection repose upon linear time series assumptions, many interactions in neuroscience and economics applications are nonlinear. We develop an approach to nonlinear Granger causality detection using multilayer perceptrons where the input to the network is the past time lags of all series and the output is the future value of a single series. A sufficient condition for Granger non-causality in this setting is that all of the outgoing weights of the input data, the past lags of a series, to the first hidden layer are zero. For estimation, we utilize a group lasso penalty to shrink groups of input weights to zero. We also propose a hierarchical penalty for simultaneous Granger causality and lag estimation. We validate our approach on simulated data from both a sparse linear autoregressive model and the sparse and nonlinear Lorenz-96 model. • In applications such as clinical safety analysis, the data of the experiments usually consists of frequency counts. In the analysis of such data, researchers often face the problem of multiple testing based on discrete test statistics, aimed at controlling family-wise error rate (FWER). Most existing FWER controlling procedures are developed for continuous data, which are often conservative when analyzing discrete data. By using minimal attainable p-values, several FWER controlling procedures have been developed for discrete data in the literature. In this paper, by utilizing known marginal distributions of true null p-values, three more powerful stepwise procedures are developed, which are modified versions of the conventional Bonferroni, Holm and Hochberg procedures, respectively. It is proved that the first two procedures strongly control the FWER under arbitrary dependence and are more powerful than the existing Tarone-type procedures, while the last one only ensures control of the FWER in special scenarios. Through extensive simulation studies, we provide numerical evidence of superior performance of the proposed procedures in terms of the FWER control and minimal power. A real clinical safety data is used to demonstrate applications of our proposed procedures. An R package "MHTdiscrete" and a web application are developed for implementing the proposed procedures. • Nov 23 2017 cs.LG stat.ML arXiv:1711.08132v1 Convolutional Neural Networks (CNN) and the locally connected layer are limited in capturing the importance and relations of different local receptive fields, which are often crucial for tasks such as face verification, visual question answering, and word sequence prediction. To tackle the issue, we propose a novel locally smoothed neural network (LSNN) in this paper. The main idea is to represent the weight matrix of the locally connected layer as the product of the kernel and the smoother, where the kernel is shared over different local receptive fields, and the smoother is for determining the importance and relations of different local receptive fields. Specifically, a multi-variate Gaussian function is utilized to generate the smoother, for modeling the location relations among different local receptive fields. Furthermore, the content information can also be leveraged by setting the mean and precision of the Gaussian function according to the content. Experiments on some variant of MNIST clearly show our advantages over CNN and locally connected layer. • In various real-world problems, we are presented with positive and unlabelled data, referred to as presence-only responses and where the number of covariates p is large. The combination of presence-only responses and high dimensionality presents both statistical and computational challenges. In this paper, we develop the PUlasso algorithm for variable selection and classification with positive and unlabelled responses. Our algorithm involves using the majorization-minimization (MM) framework which is a generalization of the well-known expectation-maximization (EM) algorithm. In particular to make our algorithm scalable, we provide two computational speed-ups to the standard EM algorithm. We provide a theoretical guarantee where we first show that our algorithm is guaranteed to converge to a stationary point, and then prove that any stationary point achieves the minimax optimal mean-squared error of slogp/n, where s is the sparsity of the true parameter. We also demonstrate through simulations that our algorithm out-performs state-of-the-art algorithms in the moderate p settings in terms of classification performance. Finally, we demonstrate that our PUlasso algorithm performs well on a biochemistry example. • Motivation: How do we integratively analyze large-scale multi-platform genomic data that are high dimensional and sparse? Furthermore, how can we incorporate prior knowledge, such as the association between genes, in the analysis systematically? Method: To solve this problem, we propose a Scalable Network Constrained Tucker decomposition method we call SNeCT. SNeCT adopts parallel stochastic gradient descent approach on the proposed parallelizable network constrained optimization function. SNeCT decomposition is applied to tensor constructed from large scale multi-platform multi-cohort cancer data, PanCan12, constrained on a network built from PathwayCommons database. Results: The decomposed factor matrices are applied to stratify cancers, to search for top-k similar patients, and to illustrate how the matrices can be used for personalized interpretation. In the stratification test, combined twelve-cohort data is clustered to form thirteen subclasses. The thirteen subclasses have a high correlation to tissue of origin in addition to other interesting observations, such as clear separation of OV cancers to two groups, and high clinical correlation within subclusters formed in cohorts BRCA and UCEC. In the top-k search, a new patient's genomic profile is generated and searched against existing patients based on the factor matrices. The similarity of the top-k patient to the query is high for 23 clinical features, including estrogen/progesterone receptor statuses of BRCA patients with average precision value ranges from 0.72 to 0.86 and from 0.68 to 0.86, respectively. We also provide an illustration of how the factor matrices can be used for interpretable personalized analysis of each patient. • In this note, we provide critical commentary on two articles that cast doubt on the validity and implications of Birnbaum's theorem: Evans (2013) and Mayo (2014). In our view, the proof is correct and the consequences of the theorem are alive and well. • We consider the problem of estimating means of two Gaussians in a 2-Gaussian mixture, which is not balanced and is corrupted by noise of an arbitrary distribution. We present a robust algorithm to estimate the parameters, together with upper bounds on the numbers of samples required for the estimate to be correct, where the bounds are parametrised by the dimension, ratio of the mixing coefficients, a measure of the separation of the two Gaussians, related to Mahalanobis distance, and a condition number of the covariance matrix. In theory, this is the first sample-complexity result for imbalanced mixtures corrupted by adversarial noise. In practice, our algorithm outperforms the vanilla Expectation-Maximisation (EM) algorithm in terms of estimation error. • Geophysical and other natural processes often exhibit non-stationary covariances and this feature is important to take into account for statistical models that attempt to emulate the physical process. A convolution-based model is used to represent non-stationary Gaussian processes that allows for variation in the correlation range and vari- ance of the process across space. Application of this model has two steps: windowed estimates of the covariance function under the as- sumption of local stationary and encoding the local estimates into a single spatial process model that allows for efficient simulation. Specifically we give evidence to show that non-stationary covariance functions based on the Matern family can be reproduced by the Lat- ticeKrig model, a flexible, multi-resolution representation of Gaussian processes. We propose to fit locally stationary models based on the Matern covariance and then assemble these estimates into a single, global LatticeKrig model. One advantage of the LatticeKrig model is that it is efficient for simulating non-stationary fields even at 105 locations. This work is motivated by the interest in emulating spatial fields derived from numerical model simulations such as Earth system models. We successfully apply these ideas to emulate fields that de- scribe the uncertainty in the pattern scaling of mean summer (JJA) surface temperature from a series of climate model experiments. This example is significant because it emulates tens of thousands of loca- tions, typical in geophysical model fields, and leverages embarrassing parallel computation to speed up the local covariance fitting • In the context of model uncertainty and selection, empirical Bayes procedures can have undesirable properties such as extreme estimates of inclusion probabilities (Scott and Berger, 2010) or inconsistency under the null model (Liang et al., 2008). To avoid these issues, we define empirical Bayes priors with constraints that ensure that the estimates of the hyperparameters are at least as "vague" as those of proper default priors. In our examples, we observe that constrained EB procedures are better behaved than their unconstrained counterparts and that the Bayesian Information Criterion (BIC) is similar to an intuitively appealing constrained EB procedure. • Hippocampal dentate granule cells are among the few neuronal cell types generated throughout adult life in mammals. In the normal brain, new granule cells are generated from progenitors in the subgranular zone and integrate in a typical fashion. During the development of epilepsy, granule cell integration is profoundly altered. The new cells migrate to ectopic locations and develop misoriented basal dendrites. Although it has been established that these abnormal cells are newly generated, it is not known whether they arise ubiquitously throughout the progenitor cell pool or are derived from a smaller number of bad actor progenitors. To explore this question, we conducted a clonal analysis study in mice expressing the Brainbow fluorescent protein reporter construct in dentate granule cell progenitors. Mice were examined 2 months after pilocarpine-induced status epilepticus, a treatment that leads to the development of epilepsy. Brain sections were rendered translucent so that entire hippocampi could be reconstructed and all fluorescently labeled cells identified. Our findings reveal that a small number of progenitors produce the majority of ectopic cells following status epilepticus, indicating that either the affected progenitors or their local microenvironments have become pathological. By contrast, granule cells with basal dendrites were equally distributed among clonal groups. This indicates that these progenitors can produce normal cells and suggests that global factors sporadically disrupt the dendritic development of some new cells. Together, these findings strongly predict that distinct mechanisms regulate different aspects • In this work, we consider the task of classifying the binary positive-unlabeled (PU) data. The existing discriminative learning based PU models attempt to seek an optimal re-weighting strategy for U data, so that a decent decision boundary can be found. In contrast, we provide a totally new paradigm to attack the binary PU task, from perspective of generative learning by leveraging the powerful generative adversarial networks (GANs). Our generative positive-unlabeled (GPU) learning model is devised to express P and N data distributions. It comprises of three discriminators and two generators with different roles, producing both positive and negative samples that resemble those come from the real training dataset. Even with rather limited labeled P data, our GPU framework is capable of capturing the underlying P and N data distribution with infinite realistic sample streams. In this way, an optimal classifier can be trained on those generated samples using a very deep neural networks (DNNs). Moreover, an useful variant of GPU is also introduced for semi-supervised classification. • The global sensitivity analysis of time-dependent processes requires history-aware approaches. We develop for that purpose a variance-based method that leverages the correlation structure of the problems under study and employs surrogate models to accelerate the computations. The errors resulting from fixing unimportant uncertain parameters to their nominal values are analyzed through a priori estimates. We illustrate our approach on a harmonic oscillator example and on a nonlinear dynamic cholera model. • We design new algorithms for the combinatorial pure exploration problem in the multi-arm bandit framework. In this problem, we are given K distributions and a collection of subsets $\mathcal{V} \subset 2^K$ of these distributions, and we would like to find the subset $v \in \mathcal{V}$ that has largest cumulative mean, while collecting, in a sequential fashion, as few samples from the distributions as possible. We study both the fixed budget and fixed confidence settings, and our algorithms essentially achieve state-of-the-art performance in all settings, improving on previous guarantees for structures like matchings and submatrices that have large augmenting sets. Moreover, our algorithms can be implemented efficiently whenever the decision set V admits linear optimization. Our analysis involves precise concentration-of-measure arguments and a new algorithm for linear programming with exponentially many constraints. • Deep generative models learn a mapping from a low dimensional latent space to a high-dimensional data space. Under certain regularity conditions, these models parameterize nonlinear manifolds in the data space. In this paper, we investigate the Riemannian geometry of these generated manifolds. First, we develop efficient algorithms for computing geodesic curves, which provide an intrinsic notion of distance between points on the manifold. Second, we develop an algorithm for parallel translation of a tangent vector along a path on the manifold. We show how parallel translation can be used to generate analogies, i.e., to transport a change in one data point into a semantically similar change of another data point. Our experiments on real image data show that the manifolds learned by deep generative models, while nonlinear, are surprisingly close to zero curvature. The practical implication is that linear paths in the latent space closely approximate geodesics on the generated manifold. However, further investigation into this phenomenon is warranted, to identify if there are other architectures or datasets where curvature plays a more prominent role. We believe that exploring the Riemannian geometry of deep generative models, using the tools developed in this paper, will be an important step in understanding the high-dimensional, nonlinear spaces these models learn. • A new class of functions, called the `Information sensitivity functions' (ISFs), which quantify the information gain about the parameters through the measurements/observables of a dynamical system are presented. These functions can be easily computed through classical sensitivity functions alone and are based on Bayesian and information-theoretic approaches. While marginal information gain is quantified by decrease in differential entropy, correlations between arbitrary sets of parameters are assessed through mutual information. For individual parameters these information gains are also presented as marginal posterior variances, and, to assess the effect of correlations, as conditional variances when other parameters are given. The easy to interpret ISFs can be used to a) identify time-intervals or regions in dynamical system behaviour where information about the parameters is concentrated; b) assess the effect of measurement noise on the information gain for the parameters; c) assess whether sufficient information in an experimental protocol (input, measurements, and their frequency) is available to identify the parameters; d) assess correlation in the posterior distribution of the parameters to identify the sets of parameters that are likely to be indistinguishable; and e) assess identifiability problems for particular sets of parameters. • This paper demonstrates the use of genetic algorithms for evolving: 1) a grandmaster-level evaluation function, and 2) a search mechanism for a chess program, the parameter values of which are initialized randomly. The evaluation function of the program is evolved by learning from databases of (human) grandmaster games. At first, the organisms are evolved to mimic the behavior of human grandmasters, and then these organisms are further improved upon by means of coevolution. The search mechanism is evolved by learning from tactical test suites. Our results show that the evolved program outperforms a two-time world computer chess champion and is at par with the other leading computer chess programs. • This paper presents a novel deep learning based method for automatic malware signature generation and classification. The method uses a deep belief network (DBN), implemented with a deep stack of denoising autoencoders, generating an invariant compact representation of the malware behavior. While conventional signature and token based methods for malware detection do not detect a majority of new variants for existing malware, the results presented in this paper show that signatures generated by the DBN allow for an accurate classification of new malware variants. Using a dataset containing hundreds of variants for several major malware families, our method achieves 98.6% classification accuracy using the signatures generated by the DBN. The presented method is completely agnostic to the type of malware behavior that is logged (e.g., API calls and their parameters, registry entries, websites and ports accessed, etc.), and can use any raw input from a sandbox to successfully train the deep neural network which is used to generate malware signatures. • Variational inference for latent variable models is prevalent in various machine learning problems, typically solved by maximizing the Evidence Lower Bound (ELBO) of the true data likelihood with respect to a variational distribution. However, freely enriching the family of variational distribution is challenging since the ELBO requires variational likelihood evaluations of the latent variables. In this paper, we propose a novel framework to enrich the variational family based on an alternative lower bound, by introducing auxiliary random variables to the variational distribution only. While offering a much richer family of complex variational distributions, the resulting inference network is likelihood almost free in the sense that only the latent variables require evaluations from simple likelihoods and samples from all the auxiliary variables are sufficient for maximum likelihood inference. We show that the proposed approach is essentially optimizing a probabilistic mixture of ELBOs, thus enriching modeling capacity and enhancing robustness. It outperforms state-of-the-art methods in our experiments on several density estimation tasks. • One key requirement for effective supply chain management is the quality of its inventory management. Various inventory management methods are typically employed for different types of products based on their demand patterns, product attributes, and supply network. In this paper, our goal is to develop robust demand prediction methods for weather sensitive products at retail stores. We employ historical datasets from Walmart, whose customers and markets are often exposed to extreme weather events which can have a huge impact on sales regarding the affected stores and products. We want to accurately predict the sales of 111 potentially weather-sensitive products around the time of major weather events at 45 of Walmart retails locations in the U.S. Intuitively, we may expect an uptick in the sales of umbrellas before a big thunderstorm, but it is difficult for replenishment managers to predict the level of inventory needed to avoid being out-of-stock or overstock during and after that storm. While they rely on a variety of vendor tools to predict sales around extreme weather events, they mostly employ a time-consuming process that lacks a systematic measure of effectiveness. We employ all the methods critical to any analytics project and start with data exploration. Critical features are extracted from the raw historical dataset for demand forecasting accuracy and robustness. In particular, we employ Artificial Neural Network for forecasting demand for each product sold around the time of major weather events. Finally, we evaluate our model to evaluate their accuracy and robustness. • Most research on the interpretability of machine learning systems focuses on the development of a more rigorous notion of interpretability. I suggest that a better understanding of the deficiencies of the intuitive notion of interpretability is needed as well. I show that visualization enables but also impedes intuitive interpretability, as it presupposes two levels of technical pre-interpretation: dimensionality reduction and regularization. Furthermore, I argue that the use of positive concepts to emulate the distributed semantic structure of machine learning models introduces a significant human bias into the model. As a consequence, I suggest that, if intuitive interpretability is needed, singular representations of internal model states should be avoided. • Nov 23 2017 stat.ML arXiv:1711.08037v1 Calls to arms to build interpretable models express a well-founded discomfort with machine learning. Should a software agent that does not even know what a loan is decide who qualifies for one? Indeed, we ought to be cautious about injecting machine learning (or anything else, for that matter) into applications where there may be a significant risk of causing social harm. However, claims that stakeholders "just won't accept that!" do not provide a sufficient foundation for a proposed field of study. For the field of interpretable machine learning to advance, we must ask the following questions: What precisely won't various stakeholders accept? What do they want? Are these desiderata reasonable? Are they feasible? In order to answer these questions, we'll have to give real-world problems and their respective stakeholders greater consideration. • We study platforms in the sharing economy and discuss the need for incentivizing users to explore options that otherwise would not be chosen. For instance, rental platforms such as Airbnb typically rely on customer reviews to provide users with relevant information about different options. Yet, often a large fraction of options does not have any reviews available. Such options are frequently neglected as viable choices, and in turn are unlikely to be evaluated, creating a vicious cycle. Platforms can engage users to deviate from their preferred choice by offering monetary incentives for choosing a different option instead. To efficiently learn the optimal incentives to offer, we consider structural information in user preferences and introduce a novel algorithm - Coordinated Online Learning (CoOL) - for learning with structural information modeled as convex constraints. We provide formal guarantees on the performance of our algorithm and test the viability of our approach in a user study with data of apartments on Airbnb. Our findings suggest that our approach is well-suited to learn appropriate incentives and increase exploration on the investigated platform. • Objectives: To obtain a better estimate of the mortality of individuals suffering from blunt force trauma, including co-morbidity. Methodology: The Injury severity Score (ISS) is the default world standard for assessing the severity of multiple injuries. ISS is a mathematical fit to empirical field data. It is demonstrated that ISS is proportional to the Gibbs/Shannon Entropy. A new Entropy measure of morbidity from blunt force trauma including co-morbidity is derived based on the von Neumann Entropy, called the Abbreviated Morbidity Scale (AMS). Results: The ISS trauma measure has been applied to a previously published database, and good correlation has been achieved. Here the existing trauma measure is extended to include the co-morbidity of disease by calculating an Abbreviated Morbidity Score (AMS), which encapsulates the disease co-morbidity in a manner analogous to AIS, and on a consistent Entropy base. Applying Entropy measures to multiple injuries, highlights the role of co-morbidity and that the elderly die at much lower levels of injury than the general population, as a consequence of co-morbidity. These considerations lead to questions regarding current new car assessment protocols, and how well they protect the most vulnerable road users. Keywords: Blunt Force Trauma, Injury Severity Score, Co-morbidity, Entropy. Noon van der Silk Nov 01 2017 21:51 UTC This is an awesome paper; great work! :) Noon van der Silk Mar 08 2017 04:45 UTC I feel that while the proliferation of GUNs is unquestionable a good idea, there are many unsupervised networks out there that might use this technology in dangerous ways. Do you think Indifferential-Privacy networks are the answer? Also I fear that the extremist binary networks should be banned ent ...(continued) Noon van der Silk Jan 27 2016 03:39 UTC Great institute name ... Alessandro Dec 09 2015 01:12 UTC Hey, I've already seen this title! http://arxiv.org/abs/1307.0401 Chris Granade Sep 22 2015 19:15 UTC Thank you for the kind comments, I'm glad that our paper, source code, and tutorial are useful! Travis Scholten Sep 21 2015 17:05 UTC This was a really well-written paper! Am very glad to see this kind of work being done. In addition, the openness about source code is refreshing. By explicitly relating the work to [QInfer](https://github.com/csferrie/python-qinfer), this paper makes it more easy to check the authors' work. Furthe ...(continued) Chris Granade Sep 15 2015 02:40 UTC I fell for that clickbait title and read the paper. I still don’t get why von Neumann didn't want us to know about this weird trick? And which weird trick? The use of superfidelity or the use of non-physical density matrices like $\sigma^\sharp$?
# build_geometry (method)¶ build_geometry(self)[source] Compute the curve (Segment) needed to plot the object. The ending point of a curve is the starting point of the next curve in the list Parameters self (SlotW26) – A SlotW26 object Returns curve_list – A list of 4 Segment and 3 Arc1 Return type list exception Slot26_H1[source] Bases: Exception
# Fish abundance data¶ This notebook demonstrates Poisson and negative binomial regression in Statsmodels. We use a data set containing records of the number of fish caught at various sites in the ocean. At each site, we have the depth at which the fish were caught, and the density of foliage at the site. A description of the data is here, and the data are available here. Here are the import statements that we need: In [1]: %matplotlib inline import pandas as pd import numpy as np import statsmodels.api as sm import matplotlib.pyplot as plt Next we read in the data set and check the number of rows and columns. In [2]: data = pd.read_csv("fishing.csv", index_col=0) print(data.shape) (147, 7) These are the variable names: In [3]: data.dtypes Out[3]: site int64 totabund int64 density float64 meandepth int64 year int64 period object sweptarea float64 dtype: object Check for missing data. In [4]: print(pd.isnull(data).sum()) site 0 totabund 0 density 0 meandepth 0 year 0 period 0 sweptarea 0 dtype: int64 The dependent variable for the regression analysis is totabund (the number of fish caught in a particular area). Count data are often right skewed, we next check the marginal distribution of the totabund values. In [5]: data.totabund.plot(kind='hist') plt.xlabel("Number of fish caught", size=15) plt.ylabel("Frequency", size=15) Out[5]: <matplotlib.text.Text at 0x7ffafdc7ea90> Further complicating things, the regions where the fish are caught have different sizes. To make the totabund values comparable to each other, we can look at the number of fish caught per unit volume, as in the histogram below. However these values are not ideal for a regression model, because their variances differ based on the volumes, as well as on whatever intrinsic variation there is in the numbers of fish that are caught. In [6]: (data.totabund / data.sweptarea).plot(kind='hist') plt.xlabel("Number of fish caught per unit volume", size=15) plt.ylabel("Frequency", size=15) Out[6]: <matplotlib.text.Text at 0x7ffafa3bf438> Poisson regression A better way to handle the unequal areas is to use a regression model with an "exposure" variable. The fitted values from the regression model based on the covariates are multiplied by the exposure value. The regresson model we will use is a Poisson GLM. In a Poisson GLM, the distribution of values at a given covariate vector has a Poisson distribution. In [7]: model1 = sm.GLM.from_formula("totabund ~ meandepth + density", family=sm.families.Poisson(), exposure=data.sweptarea, data=data) result1 = model1.fit() print(result1.summary()) Generalized Linear Model Regression Results ============================================================================== Dep. Variable: totabund No. Observations: 147 Model: GLM Df Residuals: 144 Model Family: Poisson Df Model: 2 Method: IRLS Log-Likelihood: -3174.8 Date: Fri, 12 Jun 2015 Deviance: 5377.1 Time: 03:45:02 Pearson chi2: 4.84e+03 No. Iterations: 8 ============================================================================== coef std err z P>|z| [95.0% Conf. Int.] ------------------------------------------------------------------------------ Intercept -4.9252 0.018 -267.167 0.000 -4.961 -4.889 meandepth -0.0006 7.44e-06 -80.051 0.000 -0.001 -0.001 density 88.0607 0.739 119.087 0.000 86.611 89.510 ============================================================================== By default, the Poisson GLM uses an "exponential link function", meaning that the mean value is the exponential of the linear predictor. In the model above, the linear predictor is -4.9252 - 0.0006 * meandepth + 88.0607 * density. A consequence of using the exponential link function is that the covariate effects are multiplicative on the mean. If meandepth increases by 1 unit, the linear predictor changes additively by -0.0006 units, and the mean value changes by a factor of exp(-0.006). An alternative to using sweptarea as the exposure is to include log(sweptarea) as a covariate. Using sweptarea as an exposure variable is exactly equivalent to using log(sweptarea) as a covariate and forcing its coefficient to be equal 1. It often happens that when allowing the coefficient of log(sweptarea) to be estimated, we discover that the fitted coefficient differs from 1, typically being smaller than 1. This means that the natural scaling is somewhat dampened -- if we double the search area, the yield does not quite double. There are various reasons why this would happen in the real world. In [8]: model2 = sm.GLM.from_formula("totabund ~ meandepth + density + I(np.log(sweptarea))", family=sm.families.Poisson(), data=data) result2 = model2.fit() print(result2.summary()) Generalized Linear Model Regression Results ============================================================================== Dep. Variable: totabund No. Observations: 147 Model: GLM Df Residuals: 143 Model Family: Poisson Df Model: 3 Method: IRLS Log-Likelihood: -3072.8 Date: Fri, 12 Jun 2015 Deviance: 5173.1 Time: 03:45:02 Pearson chi2: 4.63e+03 No. Iterations: 8 ======================================================================================== coef std err z P>|z| [95.0% Conf. Int.] ---------------------------------------------------------------------------------------- Intercept -1.8025 0.218 -8.263 0.000 -2.230 -1.375 meandepth -0.0005 9.32e-06 -55.097 0.000 -0.001 -0.000 density 85.4746 0.764 111.933 0.000 83.978 86.971 I(np.log(sweptarea)) 0.7004 0.021 33.550 0.000 0.660 0.741 ======================================================================================== Overdispersion An important property of the Poisson distribution is that the mean is equal to the variance. We can check this condition in our data by binning the data into quintiles based on the estimated means, and comparing the empirical mean and variance within each bin, as below. In [9]: ii = np.digitize(result1.fittedvalues, np.percentile(result1.fittedvalues, [20, 40, 60, 80])) moments = [] for i in range(5): jj = np.flatnonzero(ii == i) moments.append([result1.fittedvalues[jj].mean(), model1.endog[jj].var()]) moments = np.asarray(moments) moments_log2 = np.log(moments) / np.log(2) plt.grid(True) plt.plot(moments_log2[:, 0], moments_log2[:, 1], 'o') plt.plot([4, 20], [4, 20], '-', color='black') plt.xlim(4, 20) plt.ylim(4, 20) plt.xlabel("Log2 mean", size=15) plt.ylabel("Log2 variance", size=15) Out[9]: <matplotlib.text.Text at 0x7ffafa311518> The plot above indicates that the variance is at least four times greater than the mean, so the key condition of the Poisson distribution does not hold. Another way to check this is to estimate the "scale parameter", which should be equal to 1 for a Poisson model. In this case, however, it is estimated to be 33. When data have conditional variance much larger than the conditional mean in a Poisson model, this is called "overdispersion". In [10]: model3 = sm.GLM.from_formula("totabund ~ meandepth + density", family=sm.families.Poisson(), exposure=data.sweptarea, data=data) result3 = model3.fit(scale='x2') print(result3.summary()) Generalized Linear Model Regression Results ============================================================================== Dep. Variable: totabund No. Observations: 147 Model: GLM Df Residuals: 144 Model Family: Poisson Df Model: 2 Method: IRLS Log-Likelihood: -1.0672e+05 Date: Fri, 12 Jun 2015 Deviance: 5377.1 Time: 03:45:03 Pearson chi2: 4.84e+03 No. Iterations: 8 ============================================================================== coef std err z P>|z| [95.0% Conf. Int.] ------------------------------------------------------------------------------ Intercept -4.9252 0.107 -46.081 0.000 -5.135 -4.716 meandepth -0.0006 4.31e-05 -13.807 0.000 -0.001 -0.001 density 88.0607 4.287 20.540 0.000 79.658 96.464 ============================================================================== If we fix the scale parameter at 1 (which is the default value for the Poisson family), we have much smaller standard errors, but these are misleading (note that the parameter estimates are the same, only the standard errors change). In [11]: model4 = sm.GLM.from_formula("totabund ~ meandepth + density", family=sm.families.Poisson(), exposure=data.sweptarea, data=data) result4 = model4.fit() print(result4.summary()) Generalized Linear Model Regression Results ============================================================================== Dep. Variable: totabund No. Observations: 147 Model: GLM Df Residuals: 144 Model Family: Poisson Df Model: 2 Method: IRLS Log-Likelihood: -3174.8 Date: Fri, 12 Jun 2015 Deviance: 5377.1 Time: 03:45:03 Pearson chi2: 4.84e+03 No. Iterations: 8 ============================================================================== coef std err z P>|z| [95.0% Conf. Int.] ------------------------------------------------------------------------------ Intercept -4.9252 0.018 -267.167 0.000 -4.961 -4.889 meandepth -0.0006 7.44e-06 -80.051 0.000 -0.001 -0.001 density 88.0607 0.739 119.087 0.000 86.611 89.510 ============================================================================== To further clarify the concept of overdispersion, we simulate perfect Poisson data and estimate the scale parameter. It is reasonably close to 1 (if the sample size is not large, the estimate will vary a lot, but the mean value is still very close to 1). Note also that the regression coefficient of x is accurately estimated. In [12]: x = np.random.normal(size=1000) mn = np.exp(0.2 * x) y = np.random.poisson(mn) model4 = sm.GLM(y, x, family=sm.families.Poisson()) result4 = model4.fit(scale='x2') print(result4.summary()) Generalized Linear Model Regression Results ============================================================================== Dep. Variable: y No. Observations: 1000 Model: GLM Df Residuals: 999 Model Family: Poisson Df Model: 0 Method: IRLS Log-Likelihood: -1277.9 Date: Fri, 12 Jun 2015 Deviance: 1109.2 Time: 03:45:03 Pearson chi2: 985. No. Iterations: 7 ============================================================================== coef std err z P>|z| [95.0% Conf. Int.] ------------------------------------------------------------------------------ x1 0.1836 0.031 5.987 0.000 0.124 0.244 ============================================================================== Negative binomial regression A negative binomial GLM is a GLM for non-negative dependent variables (like Poisson regression). Also like Poisson regression, it is most often used with an exponential mean structure. However unlike in Poisson regression, the mean is not equal to the variance. Instead, the variance has the form m + a*m^2, for a positive parameter a that we can choose or fit to the data. Below we use maximum likelihood to estimate a. In [13]: loglike = [] a_values = np.linspace(0.1, 1.5, 20) for a in a_values: model = sm.GLM.from_formula("totabund ~ meandepth + density", family=sm.families.NegativeBinomial(alpha=a), exposure=data.sweptarea, data=data) result = model.fit() loglike.append(result.llf) loglike = np.asarray(loglike) plt.plot(a_values, loglike) plt.ylabel("Log likelihood", size=15) plt.xlabel("Negative binomial parameter", size=15) Out[13]: <matplotlib.text.Text at 0x7ffafa2725f8> The estimate of a is around 0.3, meaning that the variance is roughly equal to m + 0.3*m^2. We can plot the fitted negative bnomial variance pattern together with the empirical means and variances in the quintiles of data. The agreement is quite good. In [14]: plt.grid(True) plt.plot(moments_log2[:, 0], moments_log2[:, 1], 'o') va_log2 = np.log(moments[:, 0] + 0.3*moments[:, 0]**2)/np.log(2) plt.plot(moments_log2[:, 0], va_log2, '-o') plt.plot([4, 20], [4, 20], '-', color='black') plt.xlim(4, 20) plt.ylim(4, 20) plt.xlabel("Log2 mean", size=15) plt.ylabel("Log2 variance", size=15) Out[14]: <matplotlib.text.Text at 0x7ffafa26bdd8>
# problem of uniform convergence of series The problem is Prove that the series $$\sum_{n=1}^\infty (-1)^n\frac{x^2+n}{n^2}$$ converges uniformly in every bounded interval, but does not converge absolutely for any value of $x$. My attempt is: (a) Let $[a,b]\subset\mathbb{R}$. Let $\epsilon>0$. Choose $N\in\mathbb{N}\ni \forall n\geq N$, $$\sum_{n=k}^\infty\frac{1}{k^2}<\frac{\epsilon}{2b^2} \qquad\text{and}\qquad \sum_{k=n}^\infty\frac{(-1)^k}{k}<\frac{\epsilon}{2}.$$ Let $n\geq N$. Then $$\sum_{k=n}^\infty (-1)^k\frac{x^2+k}{k^2}=\sum_{k=n}^\infty\left((-1)^k\frac{x^2}{k^2}+(-1)^k\frac{1}{k^2}\right)\leq\sum_{k=n}^\infty(-1)^k\frac{x^2}{k^2}+\sum_{k=n}^\infty(-1)^k\frac{1}{k^2}$$ $$\leq\sum_{k=n}^\infty \frac{b^2}{k^2}+\sum_{k=n}^\infty(-1)^k\frac{1}{k^2}<\frac{\epsilon}{2}+\frac{\epsilon}{2}=\epsilon$$ $$\therefore \sum_{n=1}^\infty(-1)^n\frac{x^2+n}{n^2} \quad\text{converges uniformly on }[a,b]$$ (b) $$\sum_{n=1}^\infty\left|(-1)^n\frac{x^2+n}{n^2}\right|=\sum_{n=1}^\infty \left|\frac{x^2+n}{n^2}\right|\leq\sum_{n=1}^\infty\frac{1}{n}$$ hence the series is not absolutely convergent. Is my procedure correct? - In your proof of part b, the last inequality should be reversed. Which is OK since $|x^2+n| \ge n$. The part a proof looks right. –  coffeemath Dec 24 '12 at 5:36 A similar question: math.stackexchange.com/questions/97555/… –  Jonas Meyer Dec 24 '12 at 5:47
# Tag Info 20 B3LYP is still a decent functional at its level of theory (single-hybrid functional), but you're right that there's a general criticism of it, which I largely hear in the form of people saying things like "all they did was B3LYP/6-31G*" to criticize non-experts that blindly use this combination which became the "default" in chemistry for ... 18 You can see it with VESTA software. For example, we can see the different lattice planes of NaCl crystal. [001] plane of NaCl: [101] plane of NaCl: [111] plane of NaCl: 17 I think this review¹ by Head-Gordon is a useful supplement to Nike's answer. Its combines a review of functional development, a benchmarking of various functionals, and an explanation of the design process for the $\omega$B97 functionals. Its also open access, so its a great resource if you are interested in DFT functionals in general. They benchmarked 200 ... 16 I don't have too much to add to the answers of Nike Dattani and Tyberius, but I think the crux is that its capabilities have been historically overestimated. One particular failing of B3LYP is that it tends to underestimate bond energies. However, since the small (and fast) 6-31G* basis set will lead to overbinding, the famous combination B3LYP/6-31G* ended ... 13 Assuming a generic chemistry background I wouldn't assume that knowledge of crystal structure would be too in depth at an undergraduate level. It is definitely encountered, but depending on the type of chemistry you want to go into, you probably never deal with solid state chemistry. I would first explain briefly how crystals are described by periodic ... 8 There is currently no tool built into ASE to do this sort of detective work on surface terminations. I highly suggest that you take the approach of trying to eliminate dangling bonds first (for example, Se atoms with only 1 neighbor). This will give you a more reasonable termination, but may not give you the desired Ni-Se ratio. With a complex unit cell ... 7 Generally, if you want to perform simulations with some force field, you will have to search the literature for published FFs tailored for your problem. The Stillinger-Weber (SW) model is very popular to describe bulk (diamond) silicon, for which it was designed in the first place. But several reparametrizations have also been published. If you happen to ... 5 This may be a bit of a rough answer, so apologies in advance... Since the eigenvalues obtained using non-energy-consistent pseudopotentials (i.e. the situation in VASP as far as I know) do not themselves have physical meaning, we typically use a slab system with an explicit vacuum, in order to make reference to vacuum. A more common situation is calculating ... 5 I would propably explain that there are different planes within a crystal, show some of them in an animation or pyhsical prop and depending on the depth of the presentation just omit the numbering and details. 5 The reconstructed $(\sqrt{3} \times \sqrt{3})R30$ surface unit cell can be obtained by first applying the rotation matrix $\begin{pmatrix} 1.0 & 2.0 & 0.0 \\ -1.0 & 1.0 & 0.0 \\ 0.0 & 0.0 & 1.0 \end{pmatrix}$ to the primitive bulk unit cell, and then the reconstructed $(001)$ surface can be cleaved from this unit cell, or ... 4 To answer your first question: the theory has certainly been worked out in some detail. The most accurate approach (a) would involve a quantum transport calculation in the non-equilibrium Green function formalism, describing the tip (L) and the sample (R) as semi-infinite leads, connected by a central region (C), and using the Meir-Wingreen formula to ... 4 Wannier90 might not be good at preserving the symmetry. But they probably include a few new methods to enforce symmetry in Wannier90.v.3.1.0. Maybe you can check this. http://www.wannier.org/features/ Also, WannierTools can symmetrize the hr.dat, but from my personal experience it sometimes gives you worse results than the original hr.dat. http://www.... 4 Just some thoughts... All depends on what type of study do you want to do. An aside note: studying the interaction or behavior of a ligand attached to a surface is different to study the passivation of that surface with the same ligand. For only one ligand, you can search the surface for symmetry sites and then, manually (just adding it to a distance lower ... 4 Take care in the figure posted by Jack that the [hkl] notation actually represents the vector plane, that is the direction perpendicular to the plane. The plane are indexed as (hkl). for example, the first figure should be read as (001) plane of NaCl, whereas [001] represents the direction along the c-axis. 3 ASE is not aware of which layer things are on when you use the surface function to build an arbitrary surface. However, the indices should be in order of z height I believe. You can use the following to assign the tags, just be aware that this will not work for more complex surfaces very well (which is why it is not done by default I believe). slab = ... 3 There are two different perspectives here. The first one is the creation of the SQS structure itself accomplished through the ATAT package, for example. The second one is the calculation of the slab properties. The SQS procedure returns the smaller structure that resembles the periodic structure properties. For alloys, for instance, the inputs are the ... 3 I would just like to point out that the question explicitly mentions adsorbing atomic oxygen rather than $\mathrm{O_{2}}$ (without mentioning the oxygen source or the surface). One can therefore somewhat simplify the question to: what is the strongest bond the oxygen atom can form? In molecules, the strongest bond is the C≡O triple bond in carbon monoxide at ... 2 I found the answer already. As the inconsistent calculation between the bulk and the thin film in the DFT calculation process. A small error term will generate a diverge behaviour in the surface energy as I increase the thickness of the slab. One way to fix this issue is to use the data from my thin film calculation to extrapolate the bulk energy without ... 1 As Anibal points in his answer, generating the SQS and using it for a surface calculation are two separate things, and shouldn't affect each other. You would still take all precautions as usual while performing the DFT calculation. That being said, there could be scenarios where you may want to restrict/prefer certain atom types amassing at edges or corners ... Only top voted, non community-wiki answers of a minimum length are eligible
# How does Paxos maintain the promise that future proposal will not break the current chosen value without knowing the future? I was studying Paxos from: http://research.microsoft.com/en-us/um/people/lamport/pubs/paxos-simple.pdf and I was trying to understand page 4, specifically, the following paragraph: "To maintain the invariance of $P2^c$, a proposer that wants to issue a proposal numbered n must learn the highest-numbered proposal with number less than n, if any, that has been or will be accepted by each acceptor in some majority of acceptors. Learning about proposals already accepted is easy enough; predicting future acceptances is hard. Instead of trying to predict the future, the proposer controls it by extracting a promise that there won’t be any such acceptances. In other words, the proposer requests that the acceptors not accept any more proposals numbered less than n. This leads to the following algorithm for issuing proposals." Where condition $P2c$ is: "For any v and n, if a proposal with value v and number n is issued, then there is a set S consisting of a majority of acceptors such that (a) no acceptor in S has accepted any proposal numbered less than n, or (b) v is the value of the highest-numbered proposal among all proposals numbered less than n accepted by the acceptors in S." I was specifically confused about the section I have in bold. I was confused why, if we want to develop some hypothetical consensus distributed algorithm, why do we need to know about higher sequence numbers that have not happened? What is the intuition behind that? Sorry if my title is a little strange, I was not sure what was a good title for the question. Author: Leslie Lamport Title: Paxos made simple Institution: Microsoft Research
$$\require{cancel}$$ # 9.5: Mercury Skills to Develop By the end of this section, you will be able to: • Characterize the orbit of Mercury around the Sun • Describe Mercury’s structure and composition • Explain the relationship between Mercury’s orbit and rotation • Describe the topography and features of Mercury’s surface • Summarize our ideas about the origin and evolution of Mercury The planet Mercury is similar to the Moon in many ways. Like the Moon, it has no atmosphere, and its surface is heavily cratered. As described later in this chapter, it also shares with the Moon the likelihood of a violent birth. # Mercury’s Orbit Mercury is the nearest planet to the Sun, and, in accordance with Kepler’s third law, it has the shortest period of revolution about the Sun (88 of our days) and the highest average orbital speed (48 kilometers per second). It is appropriately named for the fleet-footed messenger god of the Romans. Because Mercury remains close to the Sun, it can be difficult to pick out in the sky. As you might expect, it’s best seen when its eccentric orbit takes it as far from the Sun as possible. The semimajor axis of Mercury’s orbit—that is, the planet’s average distance from the Sun—is 58 million kilometers, or 0.39 AU. However, because its orbit has the high eccentricity of 0.206, Mercury’s actual distance from the Sun varies from 46 million kilometers at perihelion to 70 million kilometers at aphelion (the ideas and terms that describe orbits were introduced in Orbits and Gravity). # Composition and Structure Mercury’s mass is one-eighth that of Earth, making it the smallest terrestrial planet. Mercury is the smallest planet (except for the dwarf planets), having a diameter of 4878 kilometers, less than half that of Earth. Mercury’s density is 5.4 g/cm3, much greater than the density of the Moon, indicating that the composition of those two objects differs substantially. Mercury’s composition is one of the most interesting things about it and makes it unique among the planets. Mercury’s high density tells us that it must be composed largely of heavier materials such as metals. The most likely models for Mercury’s interior suggest a metallic iron-nickel core amounting to 60% of the total mass, with the rest of the planet made up primarily of silicates. The core has a diameter of 3500 kilometers and extends out to within 700 kilometers of the surface. We could think of Mercury as a metal ball the size of the Moon surrounded by a rocky crust 700 kilometers thick (Figure). Unlike the Moon, Mercury does have a weak magnetic field. The existence of this field is consistent with the presence of a large metal core, and it suggests that at least part of the core must be liquid in order to generate the observed magnetic field.1 The interior of Mercury is dominated by a metallic core about the same size as our Moon. Example $$\PageIndex{1}$$: Densities of Worlds The average density of a body equals its mass divided by its volume. For a sphere, density is: density=mass43πR3density=mass43πR3 Astronomers can measure both mass and radius accurately when a spacecraft flies by a body. Using the information in this chapter, we can calculate the approximate average density of the Moon. Solution For a sphere, density=mass43πR3=7.35×1022kg4.2×5.2×1018m3=3.4×103kg/m3density=mass43πR3=7.35×1022kg4.2×5.2×1018m3=3.4×103kg/m3 [link] gives a value of 3.3 g/cm3, which is 3.3 × 103 kg/m3. Exercise $$\PageIndex{1}$$ Using the information in this chapter, calculate the average density of Mercury. Show your work. Does your calculation agree with the figure we give in this chapter? density=mass43πR3=3.3×1023kg4.2×1.45×1019m3=5.4×103kg/m3density=mass43πR3=3.3×1023kg4.2×1.45×1019m3=5.4×103kg/m3 That matches the value given in [link] when g/cm3 is converted into kg/m3. # Mercury’s Strange Rotation Visual studies of Mercury’s indistinct surface markings were once thought to indicate that the planet kept one face to the Sun (as the Moon does to Earth). Thus, for many years, it was widely believed that Mercury’s rotation period was equal to its revolution period of 88 days, making one side perpetually hot while the other was always cold. Radar observations of Mercury in the mid-1960s, however, showed conclusively that Mercury does not keep one side fixed toward the Sun. If a planet is turning, one side seems to be approaching Earth while the other is moving away from it. The resulting Doppler shift spreads or broadens the precise transmitted radar-wave frequency into a range of frequencies in the reflected signal (Figure). The degree of broadening provides an exact measurement of the rotation rate of the planet. When a radar beam is reflected from a rotating planet, the motion of one side of the planet’s disk toward us and the other side away from us causes Doppler shifts in the reflected signal. The effect is to cause both a redshift and a blueshift, widening the spread of frequencies in the radio beam. Mercury’s period of rotation (how long it takes to turn with respect to the distant stars) is 59 days, which is just two-thirds of the planet’s period of revolution. Subsequently, astronomers found that a situation where the spin and the orbit of a planet (its year) are in a 2:3 ratio turns out to be stable. (See Note for more on the effects of having such a long day on Mercury.) Mercury, being close to the Sun, is very hot on its daylight side; but because it has no appreciable atmosphere, it gets surprisingly cold during the long nights. The temperature on the surface climbs to 700 K (430 °C) at noontime. After sunset, however, the temperature drops, reaching 100 K (–170 °C) just before dawn. (It is even colder in craters near the poles that receive no sunlight at all.) The range in temperature on Mercury is thus 600 K (or 600 °C), a greater difference than on any other planet. WHAT A DIFFERENCE A DAY MAKES Mercury rotates three times for each two orbits around the Sun. It is the only planet that exhibits this relationship between its spin and its orbit, and there are some interesting consequences for any observers who might someday be stationed on the surface of Mercury. Here on Earth, we take for granted that days are much shorter than years. Therefore, the two astronomical ways of defining the local “day”—how long the planet takes to rotate and how long the Sun takes to return to the same position in the sky—are the same on Earth for most practical purposes. But this is not the case on Mercury. While Mercury rotates (spins once) in 59 Earth days, the time for the Sun to return to the same place in Mercury’s sky turns out to be two Mercury years, or 176 Earth days. (Note that this result is not intuitively obvious, so don’t be upset if you didn’t come up with it.) Thus, if one day at noon a Mercury explorer suggests to her companion that they should meet at noon the next day, this could mean a very long time apart! To make things even more interesting, recall that Mercury has an eccentric orbit, meaning that its distance from the Sun varies significantly during each mercurian year. By Kepler’s law, the planet moves fastest in its orbit when closest to the Sun. Let’s examine how this affects the way we would see the Sun in the sky during one 176-Earth-day cycle. We’ll look at the situation as if we were standing on the surface of Mercury in the center of a giant basin that astronomers call Caloris (Figure). At the location of Caloris, Mercury is most distant from the Sun at sunrise; this means the rising Sun looks smaller in the sky (although still more than twice the size it appears from Earth). As the Sun rises higher and higher, it looks bigger and bigger; Mercury is now getting closer to the Sun in its eccentric orbit. At the same time, the apparent motion of the Sun slows down as Mercury’s faster motion in orbit begins to catch up with its rotation. At noon, the Sun is now three times larger than it looks from Earth and hangs almost motionless in the sky. As the afternoon wears on, the Sun appears smaller and smaller, and moves faster and faster in the sky. At sunset, a full Mercury year (or 88 Earth days after sunrise), the Sun is back to its smallest apparent size as it dips out of sight. Then it takes another Mercury year before the Sun rises again. (By the way, sunrises and sunsets are much more sudden on Mercury, since there is no atmosphere to bend or scatter the rays of sunlight.) Astronomers call locations like the Caloris Basin the “hot longitudes” on Mercury because the Sun is closest to the planet at noon, just when it is lingering overhead for many Earth days. This makes these areas the hottest places on Mercury. We bring all this up not because the exact details of this scenario are so important but to illustrate how many of the things we take for granted on Earth are not the same on other worlds. As we’ve mentioned before, one of the best things about taking an astronomy class should be ridding you forever of any “Earth chauvinism” you might have. The way things are on our planet is just one of the many ways nature can arrange reality. # The Surface of Mercury The first close-up look at Mercury came in 1974, when the US spacecraft Mariner 10 passed 9500 kilometers from the surface of the planet and transmitted more than 2000 photographs to Earth, revealing details with a resolution down to 150 meters. Subsequently, the planet was mapped in great detail by the MESSENGER spacecraft, which was launched in 2004 and made multiple flybys of Earth, Venus, and Mercury before settling into orbit around Mercury in 2011. It ended its life in 2015, when it was commanded to crash into the surface of the planet. Mercury’s surface strongly resembles the Moon in appearance (Figure and Figure). It is covered with thousands of craters and larger basins up to 1300 kilometers in diameter. Some of the brighter craters are rayed, like Tycho and Copernicus on the Moon, and many have central peaks. There are also scarps (cliffs) more than a kilometer high and hundreds of kilometers long, as well as ridges and plains. MESSENGER instruments measured the surface composition and mapped past volcanic activity. One of its most important discoveries was the verification of water ice (first detected by radar) in craters near the poles, similar to the situation on the Moon, and the unexpected discovery of organic (carbon-rich) compounds mixed with the water ice. Scientists working with data from the MESSENGER mission put together a rotating globe of Mercury, in false color, showing some of the variations in the composition of the planet’s surface. You can watch it spin. The topography of Mercury’s northern hemisphere is mapped in great detail from MESSENGER data. The lowest regions are shown in purple and blue, and the highest regions are shown in red. The difference in elevation between the lowest and highest regions shown here is roughly 10 kilometers. The permanently shadowed low-lying craters near the north pole contain radar-bright water ice. (credit: modification of work by NASA/Johns Hopkins University Applied Physics Laboratory/Carnegie Institution of Washington) This partially flooded impact basin is the largest known structural feature on Mercury. The smooth plains in the interior of the basin have an area of almost two million square kilometers. Compare this photo with [link], the Orientale Basin on the Moon. (credit: NASA/Johns Hopkins University Applied Physics Laboratory/Carnegie Institution of Washington) Most of the mercurian features have been named in honor of artists, writers, composers, and other contributors to the arts and humanities, in contrast with the scientists commemorated on the Moon. Among the named craters are Bach, Shakespeare, Tolstoy, Van Gogh, and Scott Joplin. There is no evidence of plate tectonics on Mercury. However, the planet’s distinctive long scarps can sometimes be seen cutting across craters; this means the scarps must have formed later than the craters (Figure). These long, curved cliffs appear to have their origin in the slight compression of Mercury’s crust. Apparently, at some point in its history, the planet shrank, wrinkling the crust, and it must have done so after most of the craters on its surface had already formed. If the standard cratering chronology applies to Mercury, this shrinkage must have taken place during the last 4 billion years and not during the solar system’s early period of heavy bombardment. Discovery Scarp on Mercury. This long cliff, nearly 1 kilometer high and more than 100 kilometers long, cuts across several craters. Astronomers conclude that the compression that made “wrinkles” like this in the plank’s surface must have taken place after the craters were formed. (credit: modification of work by NASA/JPL/Northwestern University) # The Origin of Mercury The problem with understanding how Mercury formed is the reverse of the problem posed by the composition of the Moon. We have seen that, unlike the Moon, Mercury is composed mostly of metal. However, astronomers think that Mercury should have formed with roughly the same ratio of metal to silicate as that found on Earth or Venus. How did it lose so much of its rocky material? The most probable explanation for Mercury’s silicate loss may be similar to the explanation for the Moon’s lack of a metal core. Mercury is likely to have experienced several giant impacts very early in its youth, and one or more of these may have torn away a fraction of its mantle and crust, leaving a body dominated by its iron core. You can follow some of NASA’s latest research on Mercury and see some helpful animations on the MESSENGER web page. Today, astronomers recognize that the early solar system was a chaotic place, with the final stages of planet formation characterized by impacts of great violence. Some objects of planetary mass have been destroyed, whereas others could have fragmented and then re-formed, perhaps more than once. Both the Moon and Mercury, with their strange compositions, bear testimony to the catastrophes that must have characterized the solar system during its youth. ### Summary Mercury is the nearest planet to the Sun and the fastest moving. Mercury is similar to the Moon in having a heavily cratered surface and no atmosphere, but it differs in having a very large metal core. Early in its evolution, it apparently lost part of its silicate mantle, probably due to one or more giant impacts. Long scarps on its surface testify to a global compression of Mercury’s crust during the past 4 billion years.
## Pre-processing for approximate Bayesian computation in image analysis Posted in R, Statistics, University life with tags , , , , , , , , , , , , , on March 21, 2014 by xi'an With Matt Moores and Kerrie Mengersen, from QUT, we wrote this short paper just in time for the MCMSki IV Special Issue of Statistics & Computing. And arXived it, as well. The global idea is to cut down on the cost of running an ABC experiment by removing the simulation of a humongous state-space vector, as in Potts and hidden Potts model, and replacing it by an approximate simulation of the 1-d sufficient (summary) statistics. In that case, we used a division of the 1-d parameter interval to simulate the distribution of the sufficient statistic for each of those parameter values and to compute the expectation and variance of the sufficient statistic. Then the conditional distribution of the sufficient statistic is approximated by a Gaussian with these two parameters. And those Gaussian approximations substitute for the true distributions within an ABC-SMC algorithm à la Del Moral, Doucet and Jasra (2012). Across 20 125 × 125 pixels simulated images, Matt’s algorithm took an average of 21 minutes per image for between 39 and 70 SMC iterations, while resorting to pseudo-data and deriving the genuine sufficient statistic took an average of 46.5 hours for 44 to 85 SMC iterations. On a realistic Landsat image, with a total of 978,380 pixels, the precomputation of the mapping function took 50 minutes, while the total CPU time on 16 parallel threads was 10 hours 38 minutes. By comparison, it took 97 hours for 10,000 MCMC iterations on this image, with a poor effective sample size of 390 values. Regular SMC-ABC algorithms cannot handle this scale: It takes 89 hours to perform a single SMC iteration! (Note that path sampling also operates in this framework, thanks to the same precomputation: in that case it took 2.5 hours for 10⁵ iterations, with an effective sample size of 10⁴…) Since my student’s paper on Seaman et al (2012) got promptly rejected by TAS for quoting too extensively from my post, we decided to include me as an extra author and submitted the paper to this special issue as well. ## Advances in Scalable Bayesian Computation [group photo] Posted in Kids, Mountains, pictures, Statistics, Travel, University life with tags , , , , , , , , on March 8, 2014 by xi'an ## Nonlinear Time Series just appeared Posted in Books, R, Statistics, University life with tags , , , , , , , , , , , , , , , on February 26, 2014 by xi'an My friends Randal Douc and Éric Moulines just published this new time series book with David Stoffer. (David also wrote Time Series Analysis and its Applications with Robert Shumway a year ago.) The books reflects well on the research of Randal and Éric over the past decade, namely convergence results on Markov chains for validating both inference in nonlinear time series and algorithms applied to those objects. The later includes MCMC, pMCMC, sequential Monte Carlo, particle filters, and the EM algorithm. While I am too close to the authors to write a balanced review for CHANCE (the book is under review by another researcher, before you ask!), I think this is an important book that reflects the state of the art in the rigorous study of those models. Obviously, the mathematical rigour advocated by the authors makes Nonlinear Time Series a rather advanced book (despite the authors’ reassuring statement that “nothing excessively deep is used”) more adequate for PhD students and researchers than starting graduates (and definitely not advised for self-study), but the availability of the R code (on the highly personal page of David Stoffer) comes to balance the mathematical bent of the book in the first and third parts. A great reference book! ## evaluating stochastic algorithms Posted in Books, R, Statistics, University life with tags , , , , , , , , on February 20, 2014 by xi'an Reinaldo sent me this email a long while ago Could you recommend me a nice reference about measures to evaluate stochastic algorithms (in particular focus in approximating posterior distributions). and I hope he is still reading the ‘Og, despite my lack of prompt reply! I procrastinated and procrastinated in answering this question as I did not have a ready reply… We have indeed seen (almost suffered from!) a flow of MCMC convergence diagnostics in the 90′s.  And then it dried out. Maybe because of the impossibility to be “really” sure, unless running one’s MCMC much longer than “necessary to reach” stationarity and convergence. The heat of the dispute between the “single chain school” of Geyer (1992, Statistical Science) and the “multiple chain school” of Gelman and Rubin (1992, Statistical Science) has since long evaporated. My feeling is that people (still) run their MCMC samplers several times and check for coherence between the outcomes. Possibly using different kernels on parallel threads. At best, but rarely, they run (one or another form of) tempering to identify the modal zones of the target. And instances where non-trivial control variates are available are fairly rare. Hence, a non-sequitur reply at the MCMC level. As there is no automated tool available, in my opinion. (Even though I did not check the latest versions of BUGS.) As it happened, Didier Chauveau from Orléans gave today a talk at Big’MC on convergence assessment based on entropy estimation, a joint work with Pierre Vandekerkhove. He mentioned SamplerCompare which is an R package that appeared in 2010. Soon to come is their own EntropyMCMC package, using parallel simulation. And k-nearest neighbour estimation. If I re-interpret the question as focussed on ABC algorithms, it gets both more delicate and easier. Easy because each ABC distribution is different. So there is no reason to look at the unreachable original target. Delicate because there are several parameters to calibrate (tolerance, choice of summary, …) on top of the number of MCMC simulations. In DIYABC, the outcome is always made of the superposition of several runs to check for stability (or lack thereof). But this tells us nothing about the distance to the true original target. The obvious but impractical answer is to use some basic bootstrapping, as it is generally much too costly. ## finite mixture models [book review] Posted in Books, Kids, Statistics, University life with tags , , , , , , , , , , , on February 17, 2014 by xi'an Here is a review of Finite Mixture Models (2000) by Geoff McLachlan & David Peel that I wrote aeons ago (circa 1999), supposedly for JASA, which lost first the files and second the will to publish it. As I was working with my student today, I mentioned the book to her and decided to publish it here, if only because I think the book deserved a positive review, even after all those years! (Since then, Sylvia Frühwirth-Schnatter published Finite Mixture and Markov Switching Models (2004), which is closer to my perspective on the topic and that I would more naturally recommend.) Mixture modeling, that is, the use of weighted sums of standard distributions as in $\sum_{i=1}^k p_i f({\mathbf y};{\mathbf \theta}_i)\,,$ is a widespread and increasingly used technique to overcome the rigidity of standard parametric distributions such as f(y;θ), while retaining a parametric nature, as exposed in the introduction of my JASA review to Böhning’s (1998) book on non-parametric mixture estimation (Robert, 2000). This review pointed out that, while there are many books available on the topic of mixture estimation, the unsurpassed reference remained the book by Titterington, Smith and Makov (1985)  [hereafter TSM]. I also suggested that a new edition of TSM would be quite timely, given the methodological and computational advances that took place in the past 15 years: while it remains unclear whether or not this new edition will ever take place, the book by McLachlan and Peel gives an enjoyable and fairly exhaustive update on the topic, incorporating the most recent advances on mixtures and some related models. Geoff McLachlan has been a major actor in the field for at least 25 years, through papers, software—the book concludes with a review of existing software—and books: McLachlan (1992), McLachlan and Basford (1988), and McLachlan and Krishnan (1997). I refer the reader to Lindsay (1989) for a review of the second book, which is a forerunner of, and has much in common with, the present book. Continue reading
# How does surface area to volume ratio relate to photosynthesis? Jun 20, 2015 Surface area to volume ratio (SA:V) is important to photosynthesis because plants must balance their need for more surface area to collect sunlight with the fragile nature of the leaves and the rate of water loss. #### Explanation: The equation for photosynthesis is ${\text{6CO"_2 + "6H"_2"O" → "C"_6"H"_12"O"_6 + 6"O}}_{2}$ A greater SA:V means more area for collection of sunlight and ${\text{CO}}_{2}$ and less distance for ${\text{CO}}_{2}$ to diffuse into the leaf and for ${\text{O}}_{2}$ to diffuse out. Thin broad leaves provide maximum SA:V, but they also means greater water loss and susceptibility to wind damage. In drier climates the leaf surface is reduced to slender blades or even to needles to make a smaller SA:V and less evaporative surface. For example, dill has skinny leaves and thin stems, which means increased SA:V. Lots of surface area means that there is optimal surface to photosynthesize and produce energy for the plant.
## Algebra: A Combined Approach (4th Edition) $-\frac{2}{3}$ The sum of a number and its additive inverse equals 0, so the additive inverse of $\frac{2}{3}$ is $-\frac{2}{3}$ because $\frac{2}{3}$ + ($-\frac{2}{3}$) = 0.
# Subtracting from lipo cell voltage! I just got a 3s Lipo battery. Lipo battery has a 4 pin connector to plug it into charger or voltage checker. But I don't want to buy a lipo voltage checker, I want to make it. But I have a problem. Let's say a=GND b=1st cell c=2nd cell d=3rd cell so if I read voltage from b it's b, But if I read the voltage from c it is c=b+c , If i read voltage from d it's d=c+d. I have lm339 IC , and I need b , c-b and d-c. How to substract voltage? • This a, b and c you get from the 4-wire interface? What signals can you actually access? – clabacchio Oct 21 '14 at 12:07 • You're confusing us with multiple uses of the letter c I think. Try re-wording those bits to make them more explicit - maybe use $V_{C1}$ for cell 1 voltage, and $V_{O1}$ for output voltage 1, etc. – Majenko Oct 21 '14 at 12:57 It seems you are asking how to determine individual cell voltages in a 3 cell pack when you have access to each end and the points between the cells. This simplest answer is to use a microcontroller with A/D and do the subtraction digitally. I've done exactly that in a 8 cell stack once. The problem with this method is that resolution goes down for the cells higher up in the stack. However, what matters is whether the worst case is still within spec for your purposes. Our A/D was 12 bits and we only needed to know the cell voltages well enough for charge ballancing and discharge limiting. You should be able to easily do the same with your 3 cell stack. Note that just measuring the voltage of each cell is only half the solution. For charge ballancing you also need to do something about it when some cells charge to a higher voltage, as will inevitably happen. If this is a one-off, then I'd probably go with the conceptually simplest method, which is to use a opto-isolator and resistor per cell. If this is for volume production where component cost matters, you can get more clever and use directly wired FETs to turn on the bleeder resistors for each cell. Maybe is something like this you need. The circuits will output the difference between voltage on each cells terminals, as ilustrated. Does it help? simulate this circuit – Schematic created using CircuitLab • This is not a good idea since it will have significant quiescient current. Not only will it drain the batteries, it will drain them unevenly. – Olin Lathrop Oct 21 '14 at 23:49 • @OlinLathrop, not really. Depends on the amp op you choose and the mechanisms you'll add to turn the balancing/monitoring on/off. This is just a draft, in order to ilustrated a proposed solution, not a complete design, of course. – Sergio Oct 22 '14 at 8:03 • No, the problem still exists regardless of the opamps. Note that you have 2 kOhms to ground from the top of each battery. The opamps will draw additional current. They can be switched off, but then you have to make sure there isn't current flowing thru the protection diodes. – Olin Lathrop Oct 22 '14 at 13:38 • Keyword: draft. We can keep on counting problems. – Sergio Oct 22 '14 at 13:55 I found out that if i directly connect from cell to IC it will work because as i mentioned that they add up it is only if you mesure from gnd to that cell so if you mesure b and c it will display the real voltage from c. THANK YOU GUYS ANYWAY!
# Integral Solutions $abcde=1050$ and a, b, c, d, e are positive integers. How many ordered tuples $$(a, b, c, d, e)$$ exist? Extra Credit: How many unordered sets $${a, b, c, d, e}$$ exist? Extra Credit: How many ordered tuples $$(a, b, c, d, e)$$ exist such that a≠b≠c≠d≠e? Unrelated Credit: Solve the Aunty's Teacups problem ×
# I don't get the second parts of mini explanation. What does s equal to for it to be used in there at the end? • Module 2 Week 4 Day 16 Challenge Part 2 I dont get the second parts of mini explanation. What does s equal to for it to be used in there at the end? • @The-Darkin-Blade Hi again! The variable $$s$$ is there just to make the expression look a little bit cleaner and to make Heron's Formula easier to memorize. See, $$s$$ is just half of the triangle's perimeter, or the semiperimeter: $$s = \frac{ a + b + c}{2}$$ If we tried to remember Heron's Formula only in terms of $$a, b,$$ and $$c,$$ (without $$s$$), the expression would look a lot more ugly. It would look like this: $$\sqrt{(\frac{a + b + c}{2} )(\frac{-a + b + c}{2} )(\frac{a - b + c}{2} )(\frac{a + b -c }{2}}$$ There's another nice thing about $$s$$: since it is equal to the perimeter divided by $$2,$$ it takes us a long way toward the area formula for a triangle. Remember the area formula is $$\frac{1}{2} \times \text{ base } \times \text{ height}$$ So all you have to do to get the area of the triangle is take $$s$$ and multiply it to the sum of the heights of each triangle!
# Directed Networks¶ • Formation game easy • Players simutaneously announce their preferred set of neighbors $S_i$ • g(S) = {ij : j in $S_i$} keeping track of ordered pairs • Nash equilibrium # Flow of Payoffs?¶ • One way flow - get information but not vice versa • Two way flow - one player bears the cost, but both benefit from the connection (link on internet, phone call??) # Two Way Flow¶ • Efficiency as in the undirected connections model, except c/2 and link in either direction (but not both) • we have half as much costs being born, so instead of having 2c per link, we're going to have 1c per link, and then in term of what that does for the efficient calculations, is it just factors everything down by a factor of 2 in terms of the costs. So the efficiency in this model is exactly as it was before except for we've divided through by 2 • low cost: $c/2 < \delta - \delta^2$ ==> "complete" networks, "complete" in quotes, because complete doesn't mean that every link in both directions is present, it just means that every two nodes are connected, and in this case, they're connected by only one link. It doesn't make sense to have links in both directions because you bear twice the cost and there's no added information flow in this situation. • medium cost: $\delta - \delta^2 < c/2 <\delta + (n - 2)\delta^2/2$ ==> "star" networks • high cost: $\delta + (n - 2)\delta^2/2 < c/2$ ==> empty network • Nash Stable: • low cost: $c < \delta - \delta^2$ ==> two-way "complete" networks are Nash stable • medium/low cost: $\delta - \delta^2 <c <\delta$ ==> all star networks are Nash stable, plus others • medium/high cost: $\delta <c < \delta + (n-2)\delta^2/2$ ==> peripherally sponsored star networks are Nash stable(no other stars, but sometimes other networks) • $\delta - \delta^2 <c < 2(\delta - \delta^2)$ complete is efficient, not equilibrium # One Way Flow¶ • Keep track of directed flows, and in links are not (always) useful # An Example¶ • Bala and Goyal(00) - Directed connections model with no decay • $u_i(g) = R_i(g) - d_i^{out}(g)c$ • where $R_i(g)$ is the number of players reached by directed paths from i # Efficient Networks¶ • n‐player "wheels" if c< n‐1, empty otherwise: # Stable Networks¶ • if c<1 then n-player wheels are the only strictly Nash stable network • if 1<c<n-1 n-player wheels and empty networks are the only strictly Nash stable networks # Strictness¶ • Nash Stable, but not strictly so: 1 is indifferent between switching link from 3 to 4
## Debraj Ray Print publication date: 2007 Print ISBN-13: 9780199207954 Published to Oxford Scholarship Online: January 2008 DOI: 10.1093/acprof:oso/9780199207954.001.0001 Show Summary Details Page of PRINTED FROM OXFORD SCHOLARSHIP ONLINE (oxford.universitypressscholarship.com). (c) Copyright Oxford University Press, 2021. All Rights Reserved. An individual user may print out a PDF of a single chapter of a monograph in OSO for personal use. date: 05 December 2021 # Reversible Agreements With Externalities Chapter: (p.183) CHAPTER 10 Reversible Agreements With Externalities Source: A Game-Theoretic Perspective on Coalition Formation Publisher: Oxford University Press DOI:10.1093/acprof:oso/9780199207954.003.0010 # Abstract and Keywords This chapter takes the second step in the study of reversible commitments by extending the framework given in Chapter 9 to cover situations with externalities across coalitions. Matters are quite different now; the ubiquitous absorption results reported for characteristic functions break down in this setting. Equilibrium payoffs may cycle, and even if they don't, inefficient outcomes may arise and persist. Indeed, the ability to make sidepayments — presumably to eliminate such inefficiencies — may actually worsen the situation. In particular, the variant of this book's model with upfront transfers can heighten the prevalence of inefficiency. In the last chapter, we showed that with no externalities across coalitions, a model of reversible commitments may display inefficiencies, but these are transitional. Over time, payoffs must converge to an efficient outcome. This result holds over a broad class of equilibria, including all equilibria with history-dependent strategies that satisfy a mild benignness restriction. The purpose of this chapter is to argue that matters could be quite different when there are externalities. The ubiquitous absorption results reported for characteristic functions break down in this setting. Equilibrium payoffs may cycle, and even if they don't, inefficient outcomes may arise and persist. Finally — and in sharp contrast to characteristic functions — such outcomes are not driven by the self-fulfilling contortions of history-dependence. They occur even for Markovian equilibria. Indeed, the ability to make sidepayments — presumably to eliminate such inefficiencies — may actually worsen the situation. In particular, the variant of our model with upfront transfers, which has thus far played a quiet and perfectly undistinguished role, now exhibits very distinctive properties. Let's sidestep a common pitfall right away. It is tempting to think of inefficiencies as entirely “natural” equilibrium outcomes when externalities exist. Such an observation is true, of course, for games in which there are no binding agreements. Nash equilibria are “generally” inefficient, an assertion which can be given a precise formulation (see, e.g., Dubey (1986)). When agreements can be (p.184) costlessly written, however, no such presumption can and should be entertained. These are models of binding agreements, a world in which the so-called “Coase theorem” is relevant. For instance, in all that we've done so far, two-player games invariably yield efficiency, quite irrespective of whether there are externalities across the two players. This is not to say that the “usual intuition” has no role to play in the events of this chapter. It must, because the process of negotiation is itself modeled as a noncooperative game. But that is a very different object from the “stage game” over which agreements are sought to be written. For the material in this chapter, I continue to rely on Gomes and Jehiel (2005) and Hyndman and Ray (2007). We begin with the baseline model and then move on to the variant with upfront transfers. # 10.1 The Baseline Model for Three-Player Games Three-player games represent an interesting special case. Even when externalities are allowed for, such games share a central feature with their characteristic function counterparts: each player possesses, in effect, a high degree of veto power in all moves that alter her payoff. This will allow us to prove a limited efficiency result, even when externalities are widespread. It is worth noting that three-player situations have been the focus of study in several applied models of coalition formation (see, e.g., Krishna (1998), Aghion, Antras and Helpman (2004), Kalandrakis (2004) and Seidmann (2005)). ## 10.1.1 The Failed Partnership. Begin with an example. Suppose that there are three agents, any two of whom can become “partners”. For instance, two of three countries could form a customs union, or a pair of firms could set up a production cartel or an R&D coalition with a commitment to share ideas. I presume that the outsider to the partnership gets a “low” payoff: zero, say. Finally, a three-player partnership is assumed not to be feasible (or has very low payoffs). The crucial feature of this example is that player 1 is a bad partner, or — for the purposes of better interpretation — a failed partner. Partnerships between him and any other individual are dominated — both for the partners themselves and for the outsider — by all (p.185) three standing alone. In contrast, the partnership between agents 2 and 3 is rewarding (for those agents). We formalize this as a partition function game. In the examples that follow, we simply record those states with nontrivial payoff vectors, and omit any mention of the remaining states, with the presumption that the payoffs in those states are zero to all concerned. We shall also be somewhat cavalier in our description of equilibrium and ignore these trivial states: equilibrium transitions from those states are implicitly defined in obvious ways. EXAMPLE 10.1. Consider the following three-player game with minimal approval committees: $Display mathematics$ OBSERVATION 10.1. For δ‎ sufficiently close to 1 in Example 10.1, the outcomes x 2 and x 3which are inefficientmust be absorbing states in every equilibrium. A formal proof of this observation isn't needed; the discussion to follow will suffice. Why might x 2 and x 3 be absorbing? The reason is very simple. Despite the fact that x 2 (or x 3) is Pareto-dominated by x 0, player 1 won't accept a transition to x 0. If she did, players 2 and 3 would initiate a further transition to x 1. Player 1 might accept such a transition if she is very myopic and prefers the short-term payoff offered by x 0, but if she is patient enough she will see ahead to the infinite phase of “outsidership” that will surely follow the short-term gain. In that situation it will be impossible to negotiate one's way out of x 2 or x 3. This inefficiency persists in all equilibria, history-dependent or otherwise. Notice that x 2 or x 3 wouldn't be reached starting from any other state. This is why the interpretation, the “failed partnership”, is useful. The example makes sense in a situation in which players have been locked in with 1 on a past deal, on expectations which have failed since. To be sure, this interpretation is unnecessary for the formal demonstration of persistent inefficiency from some initial state. (p.186) Notice that the players could negotiate themselves out of x 2 if 2 and 3 could credibly agree never to write an agreement while at x 0. Are such promises reasonable in their credibility? One could certainly assume that they are (economists and game theorists have been known to assume worse). However, it may be difficult to imagine that from a legal point of view, player 1, who has voluntarily relinquished all other contractual agreements between 2 and 3, could actually hold 2 and 3 to such a meta-agreement. This example raises three important points. The first is an immediate outgrowth of the previous discussion. Does one interpret the standalone option (x 0) as an agreement from which further deviations require universal permission? Or does “stand-alone” mean freedom from all formal agreement, in which case further bilateral deals only need the consent of the two parties involved? Our discussion takes the latter view. Second, observe that the lack of superadditivity in this example is important. If the grand coalition can realize the Pareto-improvement then player 1 can control any subsequent shenanigans by 2 and 3, and he will therefore permit the improvement. The issue of superadditivity is one to which we shall return below. Finally, recall that upfront transfers are not permitted in this example. Were they allowed in unlimited measure, players 2 and 3 could reimburse player 1 for the present discounted value of his losses in relinquishing his partner. Depending on the discount factor, the amounts involved may be considerable. But they would break the deadlock. But upfront transfers have other, more subtle implications, and here too we must postpone the discussion to a later stage. ### 10.1.1.1 An Efficiency Result for Three-Person Games. The failed partnership or its later variant is not the only form of inefficiency that can arise. Appendix A to this chapter records three other forms of inefficiency, including one which can even arise from the stand-alone starting point of no agreements: the structure of singletons. In the light of these several examples, it is perhaps of interest that a positive (though limited) efficiency result holds for every three-person game satisfying a “minimal transferability” restriction. To state this restriction, let ū(i, π‎) be the maximum one-period payoff to player i over all states with the same coalition structure π‎. (p.187) [T] If two players i and j both belong to the same coalition in coalition structure π‎, then ū(i, π‎) and ū(j, π‎) are achieved at different states. PROPOSITION 10.1. Consider a three-person game with a finite number of states and satisfying condition T, with history-independent proposer protocols and minimal approval committees. Then for all δ‎ close enough to 1, there exists an initial state and a stationary Markov equilibrium with efficient absorbing payoff limit from that state. The proof of Proposition 10.1 exhaustively studies different cases, and is therefore relegated to Appendix B.1 But we can provide some broad intuition for the result. Pick any player i and consider her maximum payoff over all conceivable states. If this maximum is attained at a state x * in which i belongs to a coalition with two or more players, then observe that i's consent must be given for the state to change. (This step is not true when there are four or more players, and invalidates the proposition, as we shall see later.) Because the payoff in question is i's maximum, it is easy enough to construct an equilibrium in which x * is an absorbing state. It therefore remains to consider games in which for every player, the maximum payoff is attained at states in which that player stands alone. If no such state is absorbing in an equilibrium, one can establish the existence of a cyclical equilibrium path, the equilibrium payoffs along which are uniquely pinned down by the payoffs at the state in which all players stand alone. With the transferability condition T, one can now find payoff vectors for other coalitions (doubletons or more) such that some player in those coalitions prefer these payoffs to the cyclical equilibrium payoffs. The associated states then become absorbing, and a simple additional step establishes their efficiency. We conjecture that neither the minimality of approval committees nor the history-independence of proposer protocols is needed for this result, but do not have a proof. # 10.2 The Baseline Model for Four or More Players The analysis in the previous section shows that once externalities are introduced, a failure of efficiency is a distinct possibility. In (p.188) the example of the failed partnership, one agent holds his partner hostage in the fear that if the partner is relinquished (so as to create a Pareto improvement), other deals will subsequently be struck that leave our agent with very low payoffs. Yet, as Proposition 10.1 goes on to show, it isn't possible to create this phenomenon from every initial state. In the example of the failed partnership, the very state that the failed partner fears is undeniably Pareto-efficient. It's true that the failed partner suffers in this state, but the other two agents are certainly as well off as they can be. If negotiations were to commence with this state as initial condition, the resulting outcome must be efficient. In short, if one state isn't efficient something else must be, and we must ultimately arrive at some such state that's absorbing. This is the content of Proposition 10.1. But don't make the mistake of supposing that such a proposition must follow from a simple process of eliminating inefficient states. Indeed, the proposition fails when the number of players exceeds three. To show this, I present an example that displays the most severe form of inefficiency: every absorbing state in every Markovian equilibrium is static inefficient, and every nonconvergent equilibrium path in every Markovian equilibrium is dynamically inefficient. EXAMPLE 10.2. Consider the following four-player game with minimal approval committees: $Display mathematics$ Assume that a fresh proposer is chosen with uniform probability at each proposal stage. OBSERVATION 10.2. For δ‎ sufficiently close to 1 in Example 10.2, every stationary Markov equilibrium is inefficient starting from any initial state. The proof of this result may be found in Appendix C. As we've already discussed, the fact that some absorbing state in some equilibrium may be Pareto-dominated is not too surprising. In part, a similar logic is at work here. Begin with the state x 1, in which players 1 and 2 are partners and 3 and 4 are separate. This is a failed partnership (at least in a context in which players 3 and (p.189) 4 are not partners themselves): if the {12}-partnership disbands, the state moves to x 2 which is better for all concerned. But once at x 2, we see other latent, beneficial aspects of the erstwhile partnership between 1 and 2: if players 3 and 4 now form a coalition, they can exploit 1 and 2 for their own gain; this is the state x 3. So far, the story isn't too different from that of the failed partnership. But the similarity ends as we take up the story from the point at which 3 and 4 fashion their counterdeviation to x 3. Their gains can be reversed if players 1 and 2 form (or depending on the dynamics, re-form) a coalition. Balance is now restored; this is the state x 4.2 Finally, in this context, the partnership between 3 and 4 is more a hindrance than a help (just as {12} was in the state x 1), and they have an incentive to disband. We are then “back” to x 1. This reasoning appears circular, and indeed in a sense it is, but such a circularity is in fact the essential content of Observation 10.2: as long as equilibria are Markovian, there is asymptotic inefficiency from every initial condition, despite the ability to write and renegotiate permanently binding agreements. One might suspect that the Observation is vacuous in that no Markov equilibrium, efficient or not, exists. I could allay such suspicions by appealing to the existence theorem mentioned in Chapter 8, but it may be better to display such equilibria explicitly. Here is one. In it: State x 1 is absorbing. State x 2 moves back to x 1 when 1 or 2 propose, and on to x 3 whenever 3 or 4 propose. State x 3 moves to x 4 when 1 or 2 propose, and remains unchanged otherwise. State x 4 moves to x 1 no matter who proposes. Observe that the state x 1, which is plainly Pareto-dominated, is not just absorbing but “globally” absorbing.3 To verify that this description constitutes an equilibrium, begin with state x 1. Obviously players 3 and 4 do not benefit from changing the (p.190) state to x 4, which is all they can unilaterally do. Players 1 and 2 can (bilaterally) change the state to x 2, by the presumed minimality of approval committees. If they do so, the subsequent trajectory will involve a stochastic path back to x 1.4 Some fairly obvious but tedious algebra reveals that the Markov value function V i(x,δ‎) satisfies $Display mathematics$ for i = 1, 2. This value converges (as it must) to that of the absorbing state — 4 — as delta goes to 1, but the important point is that the convergence occurs “from below”, which means that V i(x 2, δ‎) is strictly smaller than V i(x 1, δ‎) = 4 for all delta close enough to 1.5 Perhaps more intuitively but certainly less precisely, the move to state x 2 starts off a stochastic cycle through the payoffs 5, 0 and 2 before returning to absorption at 4, which is inferior to being at 4 throughout. This verifies that players 1 and 2 will relinquish the opportunity at x 1 to switch the state to x 2. It also proves that once at state x 2, players 1 and 2 will want to return to the safety of x 1 if they get a chance to move. On the other hand, players 3 and 4 will want to move the state from x 2 to x 3. Proving this requires more value-function calculation. A second round of tedious algebra reveals that $Display mathematics$ for i = 3, 4. This difference vanishes (as it must) as δ‎ approaches 1, but once again the important point is that the difference is strictly positive for all δ‎ close to 1 (indeed, for all δ‎), which justifies the move of 3 and 4. That 1 and 2 must want to move away as quickly as possible from state x 3, and 3 and 4 not at all, is self-evident. That leaves x 4. At this state players 3 and 4 receive their worst payoffs, and will surely (p.191) want to move to x 1, and indeed, players 1 and 2 will want that as well.6 Our verification is complete. There is actually a second equilibrium with no absorbing states, in which players 1 and 2 randomize between states x 1 and x 2, while players 3 and 4 randomize between states x 3 and x 4. While we omit the details, it is easy to check that such an equilibrium displays (dynamic) inefficiency from every initial condition, because it must spend nonnegligible time at the inefficient states x 1 and x 4. We omit the details. Two final remarks are in order regarding this example. First, the strong form of inefficiency is robust to (at least) a small amount of transferability in payoffs. The reason is simple; given the payoffs at x 3 (resp. x 4), the payoffs to players 1 and 2 (resp. 3 and 4) are still minimal. Therefore, these states cannot be absorbing. But then, even with a little transferability, we are in the same situation as in the example. Second, the example is not robust to the use of history dependent strategies. Indeed, x 2 can be supported as an absorbing state provided that deviations from x 2 are punished by a return to the inefficient stationary equilibrium in which x 1 is absorbing. This last remark creates an interesting contrast between models based on characteristic functions and those based on partition functions. In the former class of models, the work of Seidmann and Winter (1988) and Okada (2000) assure us that ongoing negotiations lead to efficiency under Markovian equilibrium. It's the possibility of history-dependence that creates the inefficiency problem, albeit one that we successfully resolved in Chapter 9 with the help of the benignness condition. In contrast, partition functions are prone to inefficiency under Markovian equilibrium, as Examples 10.1 and 10.2 illustrate. History-dependence might help to alleviate this problem (it does in Example 10.2, though not in Example 10.1). # (p.192) 10.3 Superadditive Games An important feature of the examples in Section 10.2 is that they employ a subadditive payoff structure. Is that a reasonable assumption? This is a subtle question that recalls the discussion in Section 3.4 of Chapter 3. We continue that discussion here. First, it should be noted that in games with externalities superadditivity is generally not to be expected. For instance, recall the example of the Cournot oligopoly studied in detail in Chapter 6. Using the partition function developed there, it is easy to see that if there are just three firms, firms 1 and 2 do worse together than apart, provided that firm 3 stands separately in both cases. At the same time, this argument does not apply to the grand coalition of all firms. Indeed, every partition function derived from a game in strategic form must satisfy grand coalition superadditivity (GCS): [GCS] For every state x = (u, π‎), there is x′ = (u′, {N}) such that u′ ≥ u. Is GCS a reasonable assumption? In Chapter 3 we've argued that in many cases it may not be. To continue that discussion without undue repetition, one possible interpretation of GCS is that it is a “physical” phenomenon; e.g., larger groups organizing transactions more efficiently, or sharing the fixed costs of public good provision. Yet such superadditivities are often the exception rather than the rule. After all, the entire doctrine of healthy competition is based on the notion that physical superadditivity, after a point, is not to be had. In general, too many cooks do spoil the broth: competition among groups can lead to efficiency gains not possible when there is a single, and perhaps larger, group attempting to act cooperatively. In addition to competition, Section 3.4 lists a host of other reasons for lack of physical superadditivity.7 But some game theorists might argue that this isn't what is meant by superadditivity at all. They have in mind a different notion of GCS, which is summarized in the notion of the superadditive cover. After all, the grand coalition can write a contract which (p.193) exactly replicates the payoffs obtainable in some other coalition structure. For instance, companies do spin off certain divisions, and organizations do set up competing R&D groups. In a word, the grand coalition can agree not to cooperate, if need be. In a static setting, such a position represents, perhaps, no loss of generality. But in a dynamic setting this view embodies a crucial assumption: that future changes in the strategy of (or in the alliances formed by) one of the subgroups will require the consent of the entire grand coalition of which that group was supposedly a part. For example, consider contracts between senior executives and firms. They typically contain a clause enjoining the executive from working for a competitor firm for a number of years — so-called no compete clauses. To some extent, this reflects the notion of the superadditive cover: surely, if all parties agreed, the executive would be free to work for the competitor, while if the original firm dissents, she would not — at least for a certain length of time. To the extent that such contracts cannot be enforced for an infinite duration, the model without grand-coalition superadditivity can be viewed as a simplification of this, and other, real-world situations. Nevertheless, GCS applies without reservation to many other cases. So it is worth recording that GCS restores (Markovian) efficiency, at least if the existence of an absorbing limit payoff is assumed: PROPOSITION 10.2. Under GCS, every absorbing payoff limit of every Markovian equilibrium must be static efficient. The proof follows a much simpler version of the argument for Proposition 9.2 and we omit it. It must be noted, however, that GCS does not guarantee long-run efficiency in all situations: Example 10.2 can be modified so that GCS holds but there is an inefficient cycle over states that do not involve the grand coalition. In order to guarantee that even this form of inefficiency does not persist in the long-run, one needs enough transferability of payoffs within the grand coalition. Indeed, it can be proved that under GCS and the additional assumption that the payoff frontier for the grand coalition is continuous and concave, every Markov equilibrium must be absorbing — and therefore asymptotically efficient. # (p.194) 10.4 Upfront Transfers and the Failure of Efficiency How does the ability to make transfers affect the examples? It is important to distinguish between two kinds of transfers. Coalitional or partnership worth could be freely transferred between the players within a coalition. Additionally, players might be able to make large upfront payments in order to induce certain coalitions to form. In all cases, of course, the definition of efficiency should match the transfer environment.8 Within-coalition transferability often does nothing to remove inefficiency. For instance, nothing changes in the failed partnership of Example 10.1. On the other hand, upfront transfers across coalitions have an immediate and salubrious effect in that example. Efficiency is restored from every initial state. The reason is simple. If player 1 is offered any (discount-normalized) amount in excess of 5, he will “release” player 2. In view of the large payoffs that players 2 and 3 enjoy at state x 1, they will be only too pleased to make such a payment. The final outcome, then, from any initial condition is the state x 1, and we have asymptotic efficiency. It is true that the amount of the transfer may have to be enormous when the discount factor is close to 1, but we've already discussed that (see Section 8.2.2 in Chapter 8). Our concern here is with the implications of the upfront-transfer scenario. The beauty of the Gomes-Jehiel (2005) paper, on which I now proceed to rely, is that it unearths an entirely different face of upfront transfers. To see this, I introduce a seemingly innocuous variant of Example 10.1: EXAMPLE 10.3. Consider the following three-player game with minimal approval committees: $Display mathematics$ and all other states have payoff 0. (p.195) First, take a quick look at this example without introducing upfront transfers and compare it with its predecessor, Example 10.1. Little of substance has been changed, though we've broken symmetry by assigning zero to any partnership between Players 1 and 3. Players 1 and 2 still form a “failed partnership” at x 3, yet at that state player 1 continues to cling to player 2. They would prefer to move to state x 1, but player 1 rationally fears the subsequent switch to x 2, something that is out of his control. But the introduction of upfront transfers in this example has a perverse effect. Instead of taking the inefficiency away (as it does in Example 10.1), it generates inefficiency from every initial condition. OBSERVATION 10.3. In Example 10.3, every stationary Markov equilibrium is inefficient starting from any initial state. It isn't hard to see what drives the assertion. With unlimited upfront transfers, we are entitled to add payoffs across players to derive our efficiency criterion. Only the state x 0 is efficient on this score, and if an equilibrium is to display asymptotic efficiency — whether dynamic or static — it can stay away from x 0 for a finite number of dates at best. Now the root of the trouble is clear: players 2 and 3 invariably have the incentive to move away from x 0 to x 1. Not that matters will come to an end there: the fact that x 1 is itself inefficient will cause further movement across states as upfront transfers continue to be made along an infinite subsequence of time periods. The precise computation of such transfers is delicate,9 but the assertion that efficiency cannot be attained should be clear. Actually, Example 10.3, while it makes the point well enough, obscures a matter of some interest. In that example, players 2 and 3 gain on two counts when they move the state from x 0 to x 1. They make an immediate gain in payoffs, and then they gain even more subsequently as they are paid additional ransom in the form of upfront transfers. The point of the next example is that the “ransom effect” dominates, at least when discount factors are close to 1. (p.196) EXAMPLE 10.4. Consider the following three-player game with minimal approval committees: $Display mathematics$ and all other states have payoff zero. Assume that b > a > 0. This example has none of the asymmetries of Example 10.3. There is a unique efficient state by any criterion. It is state x 0. It Pareto-dominates every other state. Players moving away from this state suffer an immediate and unambiguous loss in payoffs. Yet we have OBSERVATION 10.4. In Example 10.4, every stationary Markov equilibrium under the uniform proposer protocol is inefficient starting from any initial state: the state x 0 can never be absorbing. This observation highlights very cleanly the negative effects of upfront transfers. Players may deliberately generate inefficient outcomes to seek such transfers as ransom. This effect is particularly clear in Example 10.4 because each of the states x 1, x 2 and x 3 are Pareto-inferior to x 0. It will be instructive to work through this example by informally proving Observation 10.4. For simplicity, we will spell out the details for all symmetric Markovian equilibria. Suppose, contrary to the claim, that x 0 is absorbing. Then the (discount-normalized) value to each player is just b: V i(x 0) = b for all i. In each of the other states x i, player i is an “outsider” currently earning 0; denote her lifetime payoff by c, evaluated ex ante, before a proposer has been determined. The other two players are partners, denote by d their corresponding payoffs. It is very easy to see that d is at least as large as c: the partners can make all the acceptable proposals that the outsider can make, while enjoying a current payoff which exceeds that of the outsider. Moreover, because x 0 is absorbing, it must be that bd, otherwise some pair would surely destabilize x 0. The last of our preliminary observations is that no proposer in any of the nonabsorbing states x i stands to gain anything by switching the state to another nonabsorbing state x j: the sum of all payoffs is constant (at 2d + c) so there is no surplus to be grasped. On the other (p.197) hand, both the partners and the outsiders make a gain by steering the state back to x 0. Consequently, recalling that proposer protocol is uniform, we have (10.1) $Display mathematics$ The reason is simple. Take any partner at one of the nonabsorbing states. With probability 1/3, she gets to propose. She successfully moves the state straight away to x 0, earning a lifetime payoff of b, and can demand an upfront transfer of up to b − (1 − δ‎)aδ‎d from her partner, and up to bδ‎c from the outsider.10 With remaining probability 2/3, she is proposed to, in which case she will be driven down to her reservation value, which is precisely (1 − δ‎)a + δ‎d. Together, this gives us (10.1). A parallel argument tells us that as far as the outsider is concerned, (10.2) $Display mathematics$ The reasoning is very similar to that underlying (10.1), and we'll skip the repetition. Now combine (10.1) and (10.2) and simplify to see that $Display mathematics$ which contradicts our initial presumption that bd. We have therefore proved that the unique efficient outcome cannot be stable. Consequently, the equilibrium path, no matter what it looks like, must display persistent inefficiency. Notice that (in contrast to Example 10.3), the deviating players do suffer a loss in current payoff when they move away from the efficient state. But the prospect of inflicting a still greater loss on the outsider raises the possibility that the outsider will pay to have the state moved back — albeit temporarily — to the efficient point. This is a new angle on upfront transfers. They may lubricate the path to efficiency, but they might encourage deviations from efficient paths as well, in order to secure a ransom. Thus the presumption that unlimited transfers act to restore or maintain efficiency is wrong. (p.198) One more feature of Example 10.4 is worth mentioning. The efficient state is one in which all three players stand apart. This is precisely what makes that state persistently unstable, for two players can always form an inefficient coalition. If this contractual right can be eliminated in the act of making an upfront transfer, then efficiency can be restored: once state x 0 is regained, there can be no further deviations from it. This line of discussion is exactly the same as in Section 10.3, and there is nothing further to add here. More generally, the efficient state in this example has the property that a subset of agents can move away from that state (i.e., they can act as approval committee for a move) such that some other agents — not on the approval committee — are thereby rendered worse off in terms of current payoffs. Whenever this is possible, there is scope for collecting a ransom, and the potential for a breakdown in efficiency. Gomes and Jehiel (2005) develop this idea further. # 10.5 Summary Our study of coalitional bargaining problems in “real time” yields a number of implications. For characteristic function form games, a very general result for all pure-strategy equilibria (whether history-dependent or not) can be established: every equilibrium path of states must eventually converge to some absorbing state, and this absorbing state must be static efficient. This was the subject of Chapter 9. In contrast, in games with externalities, matters are more complicated and none of the results for characteristic function games continue to hold without further conditions. It is easy enough to find a three-person example in which there is persistent inefficiency from some initial state, whether or not equilibria are allowed to be history-dependent. At the same time, we also show that in every three-person game, there is some Markovian equilibrium which yields asymptotic efficiency from some initial condition. Yet even this limited efficiency result is not to be had in fourperson games. Section 10.2 demonstrates the existence of games in which every absorbing state in every Markovian equilibrium exhibits asymptotic static inefficiency. (p.199) The situation is somewhat alleviated if the game in question exhibits grand coalition superadditivity. In that case, it is possible to recover efficiency, provided that the equilibrium is absorbing. Finally, we show that the ability to make unlimited upfront transfers may worsen the efficiency problem. The main open question for games with externalities is whether there always exists some history-dependent equilibrium which permits the attainment of asymptotic efficiency from some initial state (that there is no hope in obtaining efficiency from every initial state is made clear in Section 0.1.1). I am pretty sure that the answer should be in the affirmative: assuming — by way of contradiction — that equilibria are inefficient from every initial state, one should be able to employ such equilibria as continuation punishments in the construction of some efficient strategy profile. Such a result would be intuitive: after all, one role of history-dependent strategies is to restore efficiency when simpler strategy profiles fail to do so. Finally, the general setup in Section 8.2 may be worthy of study, with or without binding agreements. For instance, the general setup is applicable to games in which agreements are only temporarily binding, or in which unanimity is not required in the implementation of a proposal. There is merit in exploring these applications in future work. Other Examples For Three-Player Games. In the examples below, if a coalition structure is omitted, it means that either every player obtains an arbitrarily large negative payoff or there is some legal impediment to the formation of that coalition structure. In all of the examples of this section, we assume minimal approval committees; for example, from the singletons, players 1 and 2 can approve a transition to any state y with coalition structure {{12}, {3}}. More on Inefficiency. One response to the inefficiency example of Section 4.1.1 in the main text is that the inefficient state described there will never be reached starting from the singletons. Setting the initial state to the singletons has special meaning: presumably this is the state from which all negotiations commence. However, this is wrong on two fronts (at least for Markov equilibria) as we now show. (p.200) Coordination Failures. Coordination failures, leading to inefficiency from every initial state, are a distinct possibility, even in three player games. Consider the following: $Display mathematics$ OBSERVATION 10.5. Suppose that everyone proposes with equal probability at every date. Then, for $δ ∈ [ 3 5 , 1 )$ there is an MPE in which x i is absorbing, and from x 0, there is a transition to x i with probability $1 3$ for i = 1, 2, 3. The proof is simple and we omit it. Convergence to Inefficiency From The Singletons. Consider the following example, which is a variation on the “failed partnership” example of Section 4.1.1. $Display mathematics$ OBSERVATION 10.6. For any history-independent proposer protocol such that at x 0 each player has strictly positive probability of proposing, there exists δ‎̄ ∈ (0, 1) such that if, δ‎ ≥ δ‎̄, all stationary Markovian equilibria involve a transition from x 0 to x 3and full absorption into x 3 thereafterwith strictly positive probability. Proof. Let α‎ = (α‎ 1, α‎ 2, α‎ 3) ∈ int(Δ‎) denote the proposers’ protocol at x 0. First notice that in every equilibrium x 1 and x 2 must be absorbing. The states x 1 and x 2 give players 2 and 3, respectively, their unique maximal payoff. Moreover, at x 1 (resp. x 2) player 2 (resp. player 3) has veto power over any transition. Second, in every equilibrium, x 0 cannot be absorbing. This follows because players 2 and 3 can always initiate a transition to x 1 and earn a higher payoff. We now proceed with the rest of the proof. First, we rule out a “cycle” by proving the following: If there is a positive probability transition from x 0 to x 3, then x 3 must be absorbing. Indeed, suppose not. Then for i = 1, 2, V i(x 0) = V i(x 3) = 4. But then, from x 0, player 1 will always reject a transition to x 2, which means that V 2(x 0) ≥ 5, a contradiction. Next suppose that the probability of reaching x 3 from the singletons is zero. Observe that V 1(x 0) ≤ 3, for if not, x 1 is the only absorbing state reachable from x 0, implying that V 1(x 0) → 0 for δ‎ sufficiently high, a contradiction. Similarly, V 3(x 0) ≤ 8, for if not, x 2 is the only absorbing state reachable from the singletons. But then for δ‎ sufficiently high, V 2(x 0) ≤ 4, implying that (p.201) players 1 and 2 would initiate a transition to x 3, a contradiction. Finally, observe that since x 3 is not reached with positive probability, it must be that V 2(x 0) ≥ 4, since otherwise, 1 would offer x 3 and it would be accepted. Let p i denote the probability of a transition from x 0 to x i for i = 0, 1, 2. By assumption, p 3 = 0 and we have just shown that p 1, p 2 > 0. Given p i, write the equilibrium value functions and take the limit as δ‎ → 1 to obtain: (10.3) $Display mathematics$ From the third equation in (10.3), we see that p 2 = 0, which then implies that the first equation is satisfied with strict inequality. Therefore, player 1 strictly prefers to propose x 2, and the offer will be accepted by player 3. Hence, p 2 > α‎ 1 > 0, a contradiction. It then follows that for δ‎ sufficiently high the same conclusion may be drawn. Cyclical Equilibria. Next, equilibrium cycles become a distinct possibility: $Display mathematics$ OBSERVATION 10.7. Suppose that everyone proposes with equal probability at every date. Then there is an equilibrium with the following transitions: $Display mathematics$ Dynamic Inefficiency In Every Equilibrium. Though we did not formally prove this for characteristic functions, every Markovian equilibrium must exhibit full dynamic efficiency from some initial state. This is no longer true for games with externalities: $Display mathematics$ If x i, i = 1, 2, 3 were absorbing, then for ji, V j(x i) = 0. However, notice that in every Markovian equilibrium, for all i = 1, 2, 3, V i(x 0) ≥ 1. Therefore, j must accept a proposal from x i to x 0, hence a profitable deviation exists. Finally, it can be shown that any cyclical Markovian equilibrium must necessarily spend time at x 0. We have therefore proved: (p.202) OBSERVATION 10.8. Suppose that everyone proposes with equal probability at every date. Then every Markovian equilibrium exhibits dynamic inefficiency from every initial state. Proof of Proposition 10.1. In what follows, we denote by π‎0 a singleton coalition structure, by π‎ i a coalition structure of the form {{i}, {j, k}}, and by π‎ G the structure consisting of the grand coalition alone. Use the notation π‎(x) for the coalition structure at state x and S i(x) for the coalition to which i belongs at x. Subscripts will also be attached to states (e.g., x i) to indicate the coalition structure associated with them (e.g., π‎(x i) = π‎ i). For each i, let X * i argmax{u i(x)|xX}, with x * i a generic element. Finally, we will refer to π‎(x * i) = π‎ as a maximizing (coalition) structure (for i). Case 1: There exists i = 1, 2, 3 and x * iX * i such that |S i(x * i)| ≥ 2. Pick x * iX * i as described and consider the following “pseudo-game”. From x * i, there does not exist an approval committee capable of initiating a transition to any other state. Notice that a Markovian equilibrium exists for this pseudo-game (see the Supplementary Notes for the general existence proof) and that x * i is absorbing. Denote by σ‎* the equilibrium strategies for the pseudo-game. Return now to the actual game and suppose that players use the strategies σ‎*; suppose also from x * i that player i always proposes x * i and rejects any other transition. For other players ji, any proposal and response strategies may be specified. Denote this new strategy profile σ‎′. Notice that σ‎* and σ‎′ specify the same transitions for the pseudo-game and actual game and no player has a profitable deviation from x * i. Therefore, σ‎′ constitutes an equilibrium of the actual game. This equilibrium has an efficient absorbing state, x * i. Case 2: For all i and for all x * iX * i, |S i(x * i)| = 1. A number of subcases emerge: 1. (a) π‎(x * 1) = π‎(x * 2) = π‎(x * 3) = π‎ 0 for some (x * 1, x * 2, x * 3), but the maximizing structures are not necessarily unique. 2. (b) π‎(x * 1) = π‎(x * 2) = π‎ 0 and π‎(x * 3) = π‎ 3, and while the maximizing structures are not necessarily unique, Case 2(a) does not apply. 3. (c) For all players i = 1, 2, 3, π‎ i is the unique maximizing structure. 4. (d) π‎(x * 1) = π‎ 0, π‎(x * j) = π‎ j, j = 2, 3 and each maximizing structure is unique. (p.203) We now prove the proposition for each of these cases. Case (a). Here x 0, the unique state corresponding to π‎ 0, is weakly Pareto-dominant and we construct an equilibrium as follows. From any state x, every player proposes a transition to x 0 and every player accepts this proposal. A deviant proposal y is accepted if V i(y) ≥ V i(x) = (1 − δ‎)u i(x) + δ‎u i(x 0). This is clearly an equilibrium with x 0 efficient and absorbing. Case (b). The proof is similar to Case 1. Consider a pseudo-game in which there is no approval committee that can initiate a transition away from x 0. Again, we are assured of a Markovian equilibrium for the pseudo-game; denote the equilibrium strategies by σ‎* and notice that x 0 is absorbing. In the actual game, suppose that all players use the strategies given by σ‎*, and suppose that, at x 0, players 1 and 2 always propose x 0 and reject any transition from x 0. Call these strategies σ‎′. As in Case 1, notice that σ‎* and σ‎′ specify the same transitions for the pseudo-game and the actual game, and no player has a profitable deviation from x 0. Therefore, σ‎′ constitutes an equilibrium of the actual game. The following preliminary result will be useful for cases (c) and (d): LEMMA 10.1. Suppose that player i's maximizing structure π‎^ is unique, and that π‎^ ∈ {π‎ 0, π‎ i}. Let Y = {y | π‎(y) ∈ {π‎ 0, π‎ i} − π‎^}. Then in any equilibrium such that xX * i is not absorbing, V i(x) > V i(y) for all yY. Proof. We prove the case for which π‎^ = π‎ i. The proof of the case for which π‎^ = π‎ 0 is identical. Let xX * i. Note that Y = {x 0}. Suppose on the contrary that V i(x 0) ≥ V i(x). We know that $Display mathematics$ Now, there could be — with probability μ‎ — a transition to the singletons, which player i need not approve. All other transitions must be approved by i, and she must do weakly better after such transitions. Using V i(x 0) ≥ V i(x), it follows that $Display mathematics$ so that V i(x) ≥ ū i. Strict inequality is impossible since ū i is i’s maximal payoff. So V i(x) = ū i, but this means that x is absorbing. The next two lemmas prepare the ground for case (c). LEMMA 10.2. Assume Case 2(c). Let y be not absorbing, and π‎(y) = π‎ j. Then y transits one-step to x 0 with positive probability. (p.204) Proof. Suppose not. Then player j is on the approval committee for every equilibrium transition from y. Therefore $Display mathematics$ At the same time, y is not absorbing by assumption. But then the above inequality is impossible, since u j(y) is the uniquely defined maximal payoff for j across all coalition structures. LEMMA 10.3. Assume Case 2(c). Suppose that a state x i, with coalition structure π‎ i, is part of a nondegenerate recurrence class (starting from x i). Then V j(x 0) = V j(x i) for all ji. Proof. First, since x i is not absorbing, by Lemma 10.2, x i transits one-step to x 0 (with positive probability) and both players ji must approve this transition. Therefore (10.4) $Display mathematics$ Next, consider a path that starts at x 0 and passes through x i (there must be one because x i is recurrent). Assume without loss of generality that it does not pass through x 0 again. If both individuals ji must approve every transition between x 0 and x i, we see that V j(x 0) ≤ V j(x i), and combining this with (10.4), the proof is complete. Otherwise, some ki does not need to approve some transition. This can only be a transition from x 0 to a state x k with coalition structure π‎ k, with subsequent movement to x i without reentering x 0. So V k(x i) ≥ V k(x k). But x k itself is not absorbing and so by Lemma 10.2 transits one-step to x 0 (with positive probability). By Lemma 10.1, V k(x k) > V k(x 0). Combining these two inequalities, V k(x i) > V k(x 0), but this contradicts (10.4). Case (c). We divide up the argument into two parts. In the first part, we assume that for some i, some state x i (with coalition structure π‎ i) is part of a nondegenerate recurrence class. Suppose that no efficient payoff limit exists. We first claim that (10.5) $Display mathematics$ To prove this, consider an equilibrium path from x i. If this path never passes through x 0, then it is easy to see that all three players must have their value functions monotonically improving throughout, so one-period payoffs converge. Moreover, the limit payoff for player i must be at the maximum, so this limit is efficient. Given our presumption that there is no efficient limit, the path does pass through x 0, so consider these alternatives: 1. (i) For some ji, the path passes a state y j (with structure π‎ j) before it hits x 0. Moreover, y j is not absorbing, and so by Lemma 10.2 it must transit one-step to x 0 with positive probability. However, player i must approve (p.205) all these moves; so V i(x 0) ≥ V i(y j) ≥ V i(x i). But this contradicts Lemma 10.1. So this alternative is ruled out. 2. (ii) Otherwise, the path either transits one-step to x 0, or passes through a sequence of moves, all of which must be approved by both players ji. So for any one-step transition from x i to a state y, we have $Display mathematics$ for ji. But by Lemma 10.3, V j(x 0) equals V j(x i) for ji. It follows that for every one-step transit y, $Display mathematics$ for ji. Consequently, for each such j, $Display mathematics$ Using this, and V j(x 0) = V j(x i) for ji, the claim is proved. We now show that there is an efficient absorbing state, contrary to our initial presumption. Consider a state x i (with structure π‎ i) to which the claim just established applies. By condition T, there is some other state x *, also with coalition structure π‎ i, such that for some ji, u j(x *) > u j(x i). Because j must approve every transition from x *, V j(x *) ≥ u j(x *) > u j(x i) = V j(x 0), where this last equality uses the claim. So x * cannot have an equilibrium transition to x 0, but then i must approve every equilibrium transition. However, since π‎ i gives player i his unique maximal payoff, he will reject every transition to a different coalition structure. Therefore, x * is both absorbing and efficient. For the second part of case (c), suppose now that all recurrence classes are singletons. Assume by way of contradiction that all these are inefficient. This immediately rules out all absorbing states x i with π‎(x i) = π‎ i for some i, and it also rules out x 0.11 Now consider any state x i with π‎(x i) = π‎ i. Since it is not absorbing, V j(x i) ≥ u j(x i) for ji. Also Lemma 10.2 tells us that x i transits one-step to x 0 with positive probability, so V j(x 0) ≥ V j(x i) ≥ u j(x i). In particular, V j(x 0) ≥ max{ū(j, π‎ i), ij} for all j = 1, 2, 3. Moreover, because π‎ j is maximal for j and j must approve all other transitions from x 0 (as well as to all states from the structure π‎ j except for x 0), we have V j(x 0) ≥ u j(x 0), so V j(x 0) ≥ max{u j(x 0), ū(j, π‎ i), ij} for all j = 1, 2, 3. Since x 0 and x i are transient, there must be a path from x 0 to an absorbing state x G, but this implies that any (p.206) such absorbing state must satisfy u j(x G) ≥ max{u j(x 0), ū(j, π‎ i), ij} for all j = 1, 2, 3. Therefore, x G is efficient, contradicting our initial supposition. Case (d). Proceed again by way of contradiction; assume there is no Markov equilibrium with efficient absorbing payoff limit. It is immediate, then, that any state x such that π‎(x) ∈ {π‎ 0, π‎ 2, π‎ 3} is not absorbing. It also gives us the following preliminary result: LEMMA 10.4. If any state x 1 with π‎(x 1) = π‎ 1 is absorbing, then x 1 is not dominated by any state y with π‎(y) ∈ {π‎ 0, π‎ G}. Proof. Suppose this is false for some x 1. It is trivial that x 1 cannot be dominated by any grand coalition state; otherwise x 1 wouldn't be absorbing. So x 0 dominates x 1. Consider any player j ≠ 1. From x 0, there may be a transition to z j with π‎(z j) = π‎ j, which j need not approve. She must approve all other transitions from x 0. Thus, along the lines of Lemma 10.1, we see that V j(x 0) ≥ u j(x 0) > u j(x 1) = V j(x 1) for j = 2, 3, but this contradicts the presumption that x 1 is absorbing (given the minimality of approval committees, 2 and 3 will jointly deviate). As in case (c), divide the analysis into different parts. (i) All recurrence classes are singletons. Since all absorbing states are assumed inefficient, it is clear that all absorbing states must either have coalition structure π‎ 1 or π‎ G (since all states with coalition structures π‎ 0, π‎ 2 and π‎ 3 are efficient). Consider x 0; it is transient. Let be an absorbing state reached from x 0. By Lemma 10.2, we know that there must be a transition from any state with coalition structure π‎ 2 or π‎ 3 to x 0, and — because π‎ 0 is maximal for player 1 — from x 0 to some state with coalition structure π‎ 1 with strictly positive probability. Therefore, we may conclude that (10.6) $Display mathematics$ This implies that is not Pareto-dominated by π‎ 0, π‎ 2 or π‎ 3. Now, if π‎() = π‎ 1, then Lemma 10.4, the fact that π‎ 0, π‎ 2 and π‎ 3 are not absorbing, and (10.6) allow us to conclude that must be efficient, a contradiction. So suppose that π‎() = π‎ G. Note that cannot be dominated by a state y such that π‎(y) = π‎ 1. For V i(y) ≥ u i(y) for i = 2, 3. Moreover, an argument along the same lines as Lemma 10.1 easily shows that V 1(y) ≥ u 1(y). Therefore, if were dominated by y, there would be a profitable move from , contradicting the presumption that it is absorbing. Therefore, (p.207) must be efficient, but this contradicts our assumption that no absorbing state is efficient. (ii) There is some nondegenerate recurrence class (and all other states are either transient or inefficient). Observe that analogues to Lemmas 10.2 and 10.3, and the first part of case 2(c) can be established for case (d). However, whereas in case 2(c), we were able to pin down the equilibrium payoff of two players along some nondegenerate recurrence class, now we can only pin down the equilibrium payoff of one player; that is, for a recurrent class which transits from x 0 to x i, i ≠ 1, we have: V j(x 0) = V j(x i) and V j(x 0) = u j(x i) for j ≠ 1, i. Observe that if, for the player j whose payoffs we have pinned down, u j(x i) equals ū(j, π‎ i) and for the other player k who is part of the doubleton coalition with j, V i(x 0) ≥ ū(k, π‎ i), the argument based on condition [T] will not go through.12 That is, we cannot find another efficient state which one player (whose consent would be required for any transition) prefers to x 0. In this case, we must construct an equilibrium with some efficient absorbing state, and this is our remaining task. First suppose that there does not exist a state x such that u i(x) > u i(x 0) for i = 2, 3 and for all y such that π‎(y) = π‎1, u 1(x) ≥ u 1(y).13 In the construction of the equilibrium, the following sets of states will be important: {x 0} and $Display mathematics$ Consider the following description of strategies: 1. (a) For all players i = 1, 2, 3, from x 0 all players offer x 0 and accept a transition to another state y only if V i(y) > V i(x 0). (p.208) 1. (b) From all states $x ∈ S 2 d ∪ S 3 ′ d$ players i such that |S i(x) = 2| propose and accept x 0, while player i such that |S i(x)| = 1 proposes the status quo. An arbitrary player k accepts a transition to another state y only if V k(y) > V k(x). 2. (c) From all states $x ∈ S 2 u ∪ S 3 u ∪ S u$ all players propose the status quo and an arbitrary player k accepts a transition to another state y only if V k(y) > V k(x). 3. (d) From all states $x ∈ S 1 D$ all players propose a state $z ( x ) ∈ S 2 u ∪ S 3 u ∪ S u ∪ { x 0 }$ and an arbitrary player k accepts a transition to another state y only if V k(y) > V k(x). If x is dominated by x 0, we require z(x) = x 0. 4. (e) From all states $x ∈ S 2 D$ all players propose the status quo and an arbitrary player k accepts a transition to another state y only if V k(y) > V k(x). It is easy to see that these strategies constitute an equilibrium in which the singletons are absorbing. Moreover, every other state is either absorbing itself or transits (one-step with positive probability) to some absorbing state. Note that the states in $S 2 D$ are absorbing for δ‎ high enough and are inefficient. The reason they are absorbing is clear: if a transition to a dominating state were allowed, there would eventually be a transition to the singletons, which, by assumption, hurts one of the players whose original consent is needed. That the actions defined in (e) above constitute best responses for δ‎ high enough follows from arguments similar to van Damme, Selten and Winter (1990): with a finite number of states and sufficiently patient players any such absorbing state could be implemented; one player will always prefer to reject any other offer. Now suppose that there exists a state x such that u i(x) > u i(x 0) for i = 2, 3 and for some y such that π‎(y) = π‎1, u 1(x) ≥ u 1(y). In this case, the singletons clearly cannot be absorbing for δ‎ high enough. However, with a finite number of states one can easily construct an equilibrium with a positive probability path from x 0 to some efficient absorbing state for δ‎ high enough. In particular, from x 0, there is a positive probability transition to a state y ∈ π‎1; from y there is a probability 1 transition to some efficient state y′ which dominates y (if such a state exists; if not, y is absorbing).14 Proof of Observation 10.2. (p.209) Step 1: x 3 and x 4 are not absorbing. It is easy to see that V i(x 4) ≥ 2 for i = 1, 2. Moreover, since players 1 and 2 can initiate a transition from x 3 to x 4, x 3 is easily seen to be not absorbing. Similarly, V j(x 1) ≥ 4 for j = 3, 4; therefore, since players 3 and 4 can achieve x 1 from x 4, x 1 is not absorbing. Step 2: x 2 absorbing implies x 2 is globally absorbing. Suppose that x 2 is absorbing. Then clearly from x 1, players 1 and 2 would induce x 2. Moreover, since x 3 and x 4 are not absorbing, if x 2 is not reached, then x 1 must be reached infinitely often. But then 1 or 2 would get a chance to propose with probability 1 and would then take the state to x 2, a contradiction. Step 3: x 2 cannot be globally absorbing. If x 2 is globally absorbing then, from x 2, players 3 and 4 can get a payoff of 10 for some period of time, by initiating a transition to x 3, followed by, at worst, 2 for one period and 4 for another period, before returning to x 2, where it will get 5 forever thereafter.15 This sequence of events is clearly better for players 3 and 4 than remaining at x 2. Step 4: x 1 absorbing implies x 1 globally absorbing. Steps 2 and 3 imply that x 2 cannot be absorbing. Moreover, Step 1 tells us that neither x 3 nor x 4 can be absorbing. In particular, from x 2 players 3 and 4 initiate a transition to x 3, from x 3 players 1 and 2 initiate a transition to x 4 and (at least) players 3 and 4 initiate a transition back to x 1. Therefore, if x 1 is absorbing, it is globally absorbing. Step 5: Every equilibrium is inefficient. First suppose that we had an equilibrium in which x 1 is not absorbing. Then from the above analysis, nothing is absorbing. Now consider x 2. If players 1 and 2 always accept an offer of a transition from x 1 to x 2, then 3 and 4 will strictly prefer to initiate a transition from x 2 to x 3: in so doing, they can achieve an average payoff of at least $10 + 2 + 4 3 = 16 3 > 5.$ However, it is easily seen that players 1 and 2 earn an average payoff strictly less than 4 in this case. Therefore, players 1 and 2 would rather keep the state at x 1, contradicting the presumption that x 1 was not absorbing. (p.210) The only remaining possibility is one in which players 1 and 2 are indifferent between a x 1 and x 2 and players 3 and 4 are indifferent between x 2 and x 3. If such an equilibrium were to exist, it must be that V i(x 1) = V i(x 2) = 4 for i = 1, 2, and V j(x 2) = V j(x 3) = 5 for j = 3, 4. Therefore, if such an equilibrium were to exist, it would also be inefficient since players spend a non-negligible amount of time at the inefficient states x 1 and x 4. Thus either x 1 is the unique absorbing state or there is a sequence of inefficient cyclical equilibria depending on δ‎ n ↗ 1 such that players 1 and 2 are indifferent between x 1 and x 2 and players 3 and 4 are indifferent between x 2 and x 3. ## Notes: (1) It should be noted that in most cases the result is stronger in that it does not insist upon δ‎ → 1; in only one case do we rely on δ‎ → 1. (2) So the partnership {12} is not entirely a failure; it depends on the context. (3) We are neglecting the trivial states with zero payoffs for all. Including them would obviously make no difference. (4) We are arguing in the spirit of the one-shot deviation principle, in which the putative equilibrium strategies are subsequently followed. Even though the one-shot deviation principle needs to be applied with care when coalitions are involved, there are no such dangers here as all coalitional members have common payoffs. (5) We verify this by differentiating V i(x 2, δ‎) with respect to δ‎ and evaluating the derivative at δ‎ = 1. (6) Because we've developed the state space model at some degree of abstraction, we've allowed any player to make a proposal to any coalition, whether or not she is a member of that coalition. This is why players 1 and 2 ask 3 and 4 to move along. Nothing of qualitative import hinges on allowing or disallowing this feature. The transition from x 4 back to x 1 would still happen, but more slowly. (7) In all of the cases, the argument must be based on some noncontractible factor, such as the creativity or productivity created by the competitive urge, or ideological differences, or the presence of stand-alone players who are outside the definition of our set of players but nevertheless have an effect on their payoffs. (8) For instance, if transfers are not permitted, it would be inappropriate to demand efficiency in the sense of aggregate surplus maximization. If an NTU game displays inefficiency in the sense that “aggregate surplus” is not maximized, this is of little interest: aggregate surplus is simply the wrong criterion. (9) Such transfers will have to be made with a rational eye on the fact that an endless cycle across states will, in fact, happen. (10) If her partner refuses, she enjoys (1 − δ‎)a today and starting tomorrow, a present discounted value of δ‎d. She will therefore agree to any proposal that gives her more than this amount. A similar argument holds for the outsider, whose payoff conditional on refusal is just δ‎c. (11) If x 0 is absorbing and inefficient, then it is dominated either by a state for the grand coalition or by a state with coalition structure π‎ i for some i. Either way, by minimality of approval committees, x 0 will surely fail to be absorbing. (12) Of course, if these conditions are not satisfied, the same argument as in 2(c) implies the existence of an efficient absorbing state. (13) That is, there is no state that players 2 and 3 prefer to the singletons which they can achieve, either directly or indirectly (by initiating a preliminary transition to the coalition structure π‎1). (14) From x 0, there may also be a positive probability transition to some other state z. However, if π‎(z) ∈ {π‎2, π‎3} it is clearly efficient since at these states players 2 and 3 obtain their unique maximum. Moreover, for δ‎ high enough, it cannot be that π‎(z) = π‎G, since then this would imply that z Pareto-dominates y′. (15) Surely, players 1 and 2 must initiate a transition to x 4 with some positive probability; otherwise, x 3 would be absorbing (which Step 1 shows to be impossible). However, once at x 4, under the assumption that any player can propose to move to any state, and the fact that (by Step 2) from x 1 there would be an immediate transition to x 2, there is no need to even pass through the intermediate state x 1.
# B Lightbulb physics problem? Tags: 1. Apr 26, 2016 ### n124122 The light in a lightbulb is the result of an increase in temperature. This increase is created by both a highter elektrical resistance in the wire and a highter elektric current. But a higher resistance results in a lower elektric current (I) (I=V/R), so it is both increasing and decreasing the brightness?? Is this correct? 2. Apr 26, 2016 ### Tazerfish The power flowing into the light bulb can be described by $P=VI$ Inserting for $I=\frac {V}{R}$ , since V is constant, using Ohm s law gives $P= \frac {V^2} {R}$ The output power is equal to the output power and approximated by a black body radiator so it is $P= \alpha (\Delta T)^4 A$ As you can see for a constant voltage source like your connection to the grid, you will draw less power if you increase the resistance. See $P= \frac {V^2} {R}$ So the temperature and brightness of the wire decreases. Having a low resistive object hooked up will draw lots of power. For example you could kill your fuses in the house by short circuting a wall socket. DONT ATTEMPT THAT BTW. 3. Apr 26, 2016 ### Staff: Mentor It's simple negative feedback. The filament initially has a low resistance, so a high current flows briefly to heat it up. As the filament heats up, its resistance increases, and this ends up limiting the current at the stable operating point of the filament. 4. Apr 27, 2016 ### Khashishi The resistance of the filament needs to be designed at a sweet spot such that the resistance is considerably higher than the resistance of the wires in your lamp and house, but low enough to draw a lot of current and glow white hot. Say, around 100-1000 ohms. Light bulbs come in various watt ratings which vary in the resistance in the filament, I think. 5. Apr 27, 2016 ### Merlin3189 The problem I see with your comment, is the use of one-sided comparisons. Eg. increase from what? higher than what? lower than what? If you get rid of these (which tell us nothing) it makes more sense. "The light in a lightbulb is the result of <temperature>. This <temperature> is created by both <elektrical resistance in the wire> and <elektric current>." So far so good. Correct and no problems. "But a higher resistance results in a lower elektric current (I) (I=V/R), so it is both increasing and decreasing the brightness??" So is the question: "if we increase the resistance of a bulb, that will decrease the current, so you have two changes which both increase and decrease brightness." Then your premise is correct. In fact the net effect is that the bulb is less bright. Similarly, if you decrease the resistance of a bulb, the current increases and the net effect is that the brightness increases. Both of those presume (as Tazerfish said) a constant voltage source (or near enough) like mains or a car battery, because for constant voltage, power is inversely proportional to resistance. Otherwise the result can be either way. Both of these results are actually important in the design of light bulbs as Khashishi and Berkeman say. Most lightbulbs have filaments of metal whose resistance increases with temperature (brightness.) If the bulb temperature (brightness) varies, its resistance changes in such a way as to counteract that change. ie. they are stable. Bulbs have been made with other filaments, such as carbon, whose resistance decreases with temperature. They are unstable: if they get hotter the resistance decreases, so they get even hotter, so the resistance decreases more, so they get even hotter, etc. Such bulbs are operated in series with a metallic ballast resistance, or with a more complex current controlled power supply. 6. Apr 28, 2016 ### n124122 So (correct me if I'm wrong)... if we increase the resistance with x... the power(brightness) will increase with x but the power(brightness) will decrease with x^2 according to P=I^2*R... 7. Apr 28, 2016 ### Merlin3189 If you like to use P=I2R and I=V/R, then I2=V2/R2 So P = RV2/R2 sort of making P proportional to R but inversely proportional to R2 but really R cancels and its just inversely proportional to R for a constant voltage supply. But if it is not a constant voltage supply (equivalent to a very low internal series resistance) then the same relationship does not apply. In general the bulb (or any resistive load) gives maximum power when it is the same resistance as the internal series resistance. Then any change in resistance reduces the power. Source Resistance--- Description --------- ---- Bulb Resistance R----Effect of Increasing R ---- Effect of Decreasing R Very low --------------Voltage source-----------Higher than source----Less power ---------------More power Intermediate----------Real----------------------- Higher than source----Less power-------------More power until R = source Intermediate ---------Real -----------------------Equal to source -------Less power ---------------Less power Intermediate----------Real------------------------Lower than source---More power until R=source--Less power Very high -------------Current source-----------Lower than source----More power----------------Less power An ideal voltage source has zero internal resistance, but a real "constant" voltage source will have some. An ideal current source has infinite internal resistance, but a real "constant current" source will have finite resistance. But can I please emphasise that the way you are thinking about it is NOT helpful. Power is the product of current and voltage and both can change when the resistance changes. There is no general rule that relates power and resistance in isolation. Just work out the voltage and current and take it from there. 8. Apr 28, 2016
## express the following mathematical phrases and sentence into at least two englisg phrases/sentence 1.2-x 2.3x=1​ Question express the following mathematical phrases and sentence into at least two englisg phrases/sentence 1.2-x 2.3x=1​ in progress 0 2022-07-26T18:15:35+00:00 1 Answer 0 1. ## TRANSLATING MATHEMATICAL EXPRESSION TO MATHEMATICAL PHRASE $$\:$$ $$======================$$ $${\underline{\huge \mathbb{D} {\large \mathrm {IRECTION : }}}}$$ $$\quad$$ Express the following mathematical phrases and sentence into at least two English phrases / sentences. $$\orange{\underline{\qquad\qquad\qquad\qquad \qquad \qquad \qquad \qquad \qquad }}$$ $${\underline{\huge \mathbb{A} {\large \mathrm {NSWERS : }}}}$$ $$\rm\bold \red{\#1.}$$ $$\rm 2 \; – \; x$$ • $$\red {\mathcal{ANSWER:}}$$$$\begin{array}{l}{\underline{\texttt{\purple{ The difference between two and a number }}}}\end{array}$$ • $$\red {\mathcal{ANSWER:}}$$$$\begin{array}{l}{\underline{\texttt{\purple{ two is subtracted to a number }}}}\end{array}$$ ————————————————— $$\rm\bold \red{\#2.}$$ $$\rm 3x \; = \; 1$$ • $$\red {\mathcal{ANSWER:}}$$$$\begin{array}{l}{\underline{\texttt{\purple{ Three is multiplied by a number is one }}}}\end{array}$$ • $$\red {\mathcal{ANSWER:}}$$$$\begin{array}{l}{\underline{\texttt{\purple{ The product of 3 and a number is one }}}}\end{array}$$ $$======================$$ $$– \large\sf\copyright \: \large\tt{Hisoki}$$
GMAT Question of the Day - Daily to your Mailbox; hard ones only It is currently 17 Jan 2019, 22:20 ### GMAT Club Daily Prep #### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email. Customized for You we will pick new questions that match your level based on your Timer History Track every week, we’ll send you an estimated GMAT score based on your performance Practice Pays we will pick new questions that match your level based on your Timer History ## Events & Promotions ###### Events & Promotions in January PrevNext SuMoTuWeThFrSa 303112345 6789101112 13141516171819 20212223242526 272829303112 Open Detailed Calendar • ### The winning strategy for a high GRE score January 17, 2019 January 17, 2019 08:00 AM PST 09:00 AM PST Learn the winning strategy for a high GRE score — what do people who reach a high score do differently? We're going to share insights, tips and strategies from data we've collected from over 50,000 students who used examPAL. • ### Free GMAT Strategy Webinar January 19, 2019 January 19, 2019 07:00 AM PST 09:00 AM PST Aiming to score 760+? Attend this FREE session to learn how to Define your GMAT Strategy, Create your Study Plan and Master the Core Skills to excel on the GMAT. # For any integer n greater than 1, factorial denotes the product of all new topic post reply Question banks Downloads My Bookmarks Reviews Important topics Author Message TAGS: ### Hide Tags e-GMAT Representative Joined: 04 Jan 2015 Posts: 2448 For any integer n greater than 1, factorial denotes the product of all  [#permalink] ### Show Tags Updated on: 13 Aug 2018, 02:10 2 30 00:00 Difficulty: 95% (hard) Question Stats: 37% (02:39) correct 63% (03:01) wrong based on 733 sessions ### HideShow timer Statistics For any integer $$n$$ greater than 1, factorial denotes the product of all the integers from 1 to $$n$$, inclusive. It’s given that $$a$$ and $$b$$ are two positive integers such that $$b > a$$. What is the total number of factors of the largest number that divides the factorials of both $$a$$ and $$b$$? (1) $$a$$ is the greatest integer for which $$3^a$$ is a factor of product of integers from 1 to 20, inclusive. (2) $$b$$ is the largest possible number that divides positive integer $$n$$, where $$n^3$$ is divisible by 96. This is Register for our Free Session on Number Properties (held every 3rd week) to solve exciting 700+ Level Questions in a classroom environment under the real-time guidance of our Experts _________________ Number Properties | Algebra |Quant Workshop Success Stories Guillermo's Success Story | Carrie's Success Story Ace GMAT quant Articles and Question to reach Q51 | Question of the week Number Properties – Even Odd | LCM GCD | Statistics-1 | Statistics-2 | Remainders-1 | Remainders-2 Word Problems – Percentage 1 | Percentage 2 | Time and Work 1 | Time and Work 2 | Time, Speed and Distance 1 | Time, Speed and Distance 2 Advanced Topics- Permutation and Combination 1 | Permutation and Combination 2 | Permutation and Combination 3 | Probability Geometry- Triangles 1 | Triangles 2 | Triangles 3 | Common Mistakes in Geometry Algebra- Wavy line | Inequalities Practice Questions Number Properties 1 | Number Properties 2 | Algebra 1 | Geometry | Prime Numbers | Absolute value equations | Sets | '4 out of Top 5' Instructors on gmatclub | 70 point improvement guarantee | www.e-gmat.com Originally posted by EgmatQuantExpert on 09 Apr 2015, 01:17. Last edited by EgmatQuantExpert on 13 Aug 2018, 02:10, edited 5 times in total. e-GMAT Representative Joined: 04 Jan 2015 Posts: 2448 Re: For any integer n greater than 1, factorial denotes the product of all  [#permalink] ### Show Tags Updated on: 07 Aug 2018, 03:42 2 4 Detailed Solution Step-I: Given Info: We are given two positive integers $$a$$ and $$b$$ such that $$b > a$$. We are asked to find the total number of factors of the largest number which divides the factorials of both $$a$$ and $$b$$. Step-II: Interpreting the Question Statement Since factorial is the product of all integers from 1 to $$n$$ inclusive: i. factorial of $$b$$ would consist of product of all the numbers from 1 to $$b$$ ii. factorial of $$a$$ would consist of product of all the numbers from 1 to $$a$$ As $$b > a$$, this would imply that factorial of $$b$$ would consist of all the numbers present in factorial of $$a$$. For example factorial of 30 would consist of all the numbers present in factorial of 20. So, the largest number which divides the factorial of both $$b$$ and $$a$$, i.e. the GCD of factorial of $$b$$ and $$a$$ would be the factorial of $$a$$ itself. So, if we can calculate the value of $$a$$, we would get to our answer. Step-III: Statement-I Statement-I tells us that $$a$$ is the greatest integer for which $$3^a$$ is a factor of factorial of 20. Since we can calculate the number of times 3 comes as a factor of numbers between 1 to 20, we can find the value of $$a$$. Thus Statement-I is sufficient to answer the question. Please note that we do not need to actually calculate the value of $$a$$. Just the knowledge, that we can calculate the unique value of $$a$$ is sufficient for us to get to our answer. Step-IV: Statement-II Statement-II tells us that $$b$$ is the largest possible number that divides $$n$$, where $$n^3$$ is divisible by 96. Note here that the statement talks only about $$b$$ and nothing about $$a$$. Since, we do not have any relation between $$b$$ and $$a$$ which would give us the value of $$a$$, if we find $$b$$, we can say with certainty that this statement is insufficient to answer the question. Again, note here that we did not solve the statement as we could infer that it’s not going to give us the value of $$a$$, which is our requirement. Step-V: Combining Statements I & II Since, we have received our unique answer from Statement-I, we don’t need to combine the inferences from Statement-I & II. Hence, the correct answer is Option A Key Takeaways 1. Familiarize yourself with all the names by which the test makers can call the GCD and the LCM. For example, • GCD is also known as the HCF • GCD can also be described as ‘the largest number which divides all the numbers of a set’ • LCM of a set of numbers can also be described as ‘the lowest number that has all the numbers of that set as it factors’ 2. Since factorial is product of a set of positive integers, the GCD of a set of factorials would always be the factorial of the smallest number in the set Zhenek- Brilliant work!!, except that we did not need the calculation in St-I Harley1980- Kudos for the right answer, two suggestions- calculation not needed in St-I and in St-II you calculated the least possible value of $$b$$, which was again not needed as it did not tell us anything about $$a$$. Regards Harsh _________________ Number Properties | Algebra |Quant Workshop Success Stories Guillermo's Success Story | Carrie's Success Story Ace GMAT quant Articles and Question to reach Q51 | Question of the week Number Properties – Even Odd | LCM GCD | Statistics-1 | Statistics-2 | Remainders-1 | Remainders-2 Word Problems – Percentage 1 | Percentage 2 | Time and Work 1 | Time and Work 2 | Time, Speed and Distance 1 | Time, Speed and Distance 2 Advanced Topics- Permutation and Combination 1 | Permutation and Combination 2 | Permutation and Combination 3 | Probability Geometry- Triangles 1 | Triangles 2 | Triangles 3 | Common Mistakes in Geometry Algebra- Wavy line | Inequalities Practice Questions Number Properties 1 | Number Properties 2 | Algebra 1 | Geometry | Prime Numbers | Absolute value equations | Sets | '4 out of Top 5' Instructors on gmatclub | 70 point improvement guarantee | www.e-gmat.com Originally posted by EgmatQuantExpert on 10 Apr 2015, 06:01. Last edited by EgmatQuantExpert on 07 Aug 2018, 03:42, edited 1 time in total. Retired Moderator Joined: 06 Jul 2014 Posts: 1231 Location: Ukraine Concentration: Entrepreneurship, Technology GMAT 1: 660 Q48 V33 GMAT 2: 740 Q50 V40 Re: For any integer n greater than 1, factorial denotes the product of all  [#permalink] ### Show Tags 09 Apr 2015, 01:49 2 4 EgmatQuantExpert wrote: For any integer n greater than 1, factorial denotes the product of all the integers from 1 to n, inclusive. It’s given that a and b are two positive integers such that b > a. What is the total number of factors of the largest number that divides the factorials of both a and b? (1) a is the greatest integer for which 3^a is a factor of product of integers from 1 to 20, inclusive. (2) b is the largest possible number that divides positive integer n, where n^3 is divisible by 96. We will provide the OA in some time. Till then Happy Solving This is Register for our Free Session on Number Properties this Saturday to solve exciting 700+ Level Questions in a classroom environment under the real-time guidance of our Experts! 1) we know that $$20!/3^a$$. For finding a we should calculate 3 in 20! We can use a shortcut by dividing 20 on the 3 in 1 power, than in second power etc. and sum the results and this will be A 20 / 3 = 6; 20 / 9 = 2; 6 + 2 = 8 so 20! can be divided on $$3^8$$ and A = 8 And at first glance we can think that this isn't enough because we don't know B But question asks about number of factors of biggest number that can divide both A! and B! So A! will be divide B! because B bigger than A So now we should only calculate number of factors 8! We don't need to do this on the exam because we know that we can do it. But method is: find all prime factors of number, take their powers add to each power 1 and sum these numbers, this will be number of factors 1*2*3*4*5*6*7*8 $$2^7 * 3^2 * 5^1 * 7^1$$ sum of powers + 1: (7+1)*(2+1)*(1+1)*(1+1) = 8*3*2*2 = 96 Sufficient 2) This statement insufficient because we don't know about A and for deciding task we should know about A How we can find B? For find B we should find N and we know that N^3 is divisible by 96 $$96 = 2^5*3$$ So to be divisible by 96, $$N^3$$ should have prime factors 2 and 3 and this factors should be in powers multiple of 3 And least possible number will be $$2^6*3^3$$ and this will be equal $$12^3$$ So the least possible B equal to 12 but we don't know about A so this fact Insufficient And answer is A _________________ ##### General Discussion Manager Joined: 17 Mar 2015 Posts: 116 Re: For any integer n greater than 1, factorial denotes the product of all  [#permalink] ### Show Tags 09 Apr 2015, 01:59 4 From what I understood, the question asks us what the value of "A" is #1 we are asked about how many 3's are there in 20! if we factor it: we can answer that question easily 1 * 2 * 3(1) * 4 * 5 * 6(1) *7 * 8 * 9(2)*10*11*12(1)*...*15(1)*...*18(2): 1 + 1 + 2 + 1 + 1 =2 =8, so a = 8 and so we can answer our question sufficient #2 - if I understood it right, b is n and n can be as big as one wishes it to be. In this case we don't have any idea about a except for the fact that its lower than b, but yet again, a can be anything in this case, thus our answer is unknown insufficient A that is Director Joined: 02 Sep 2016 Posts: 678 Re: For any integer n greater than 1, factorial denotes the product of all  [#permalink] ### Show Tags 09 Jan 2017, 05:22 EgmatQuantExpert wrote: Detailed Solution Step-I: Given Info: We are given two positive integers $$a$$ and $$b$$ such that $$b > a$$. We are asked to find the total number of factors of the largest number which divides the factorials of both $$a$$ and $$b$$. Step-II: Interpreting the Question Statement Since factorial is the product of all integers from 1 to $$n$$ inclusive: i. factorial of $$b$$ would consist of product of all the numbers from 1 to $$b$$ ii. factorial of $$a$$ would consist of product of all the numbers from 1 to $$a$$ As $$b > a$$, this would imply that factorial of $$b$$ would consist of all the numbers present in factorial of $$a$$. For example factorial of 30 would consist of all the numbers present in factorial of 20. So, the largest number which divides the factorial of both $$b$$ and $$a$$, i.e. the GCD of factorial of $$b$$ and $$a$$ would be the factorial of $$a$$ itself. So, if we can calculate the value of $$a$$, we would get to our answer. Step-III: Statement-I Statement-I tells us that $$a$$ is the greatest integer for which $$3^a$$ is a factor of factorial of 20. Since we can calculate the number of times 3 comes as a factor of numbers between 1 to 20, we can find the value of $$a$$. Thus Statement-I is sufficient to answer the question. Please note that we do not need to actually calculate the value of $$a$$. Just the knowledge, that we can calculate the unique value of $$a$$ is sufficient for us to get to our answer. Step-IV: Statement-II Statement-II tells us that $$b$$ is the largest possible number that divides $$n$$, where $$n^3$$ is divisible by 96. Note here that the statement talks only about $$b$$ and nothing about $$a$$. Since, we do not have any relation between $$b$$ and $$a$$ which would give us the value of $$a$$, if we find $$b$$, we can say with certainty that this statement is insufficient to answer the question. Again, note here that we did not solve the statement as we could infer that it’s not going to give us the value of $$a$$, which is our requirement. Step-V: Combining Statements I & II Since, we have received our unique answer from Statement-I, we don’t need to combine the inferences from Statement-I & II. Hence, the correct answer is Option A Key Takeaways 1. Familiarize yourself with all the names by which the test makers can call the GCD and the LCM. For example, • GCD is also known as the HCF • GCD can also be described as ‘the largest number which divides all the numbers of a set’ • LCM of a set of numbers can also be described as ‘the lowest number that has all the numbers of that set as it factors’ 2. Since factorial is product of a set of positive integers, the GCD of a set of factorials would always be the factorial of the smallest number in the set Zhenek- Brilliant work!!, except that we did not need the calculation in St-I Harley1980- Kudos for the right answer, two suggestions- calculation not needed in St-I and in St-II you calculated the least possible value of $$b$$, which was again not needed as it did not tell us anything about $$a$$. Regards Harsh Great question only I misinterpreted 'largest'. If 'largest' was not be mentioned in the question stem, E would be the correct choice ? Because we then had to know exact values of a and b. Non-Human User Joined: 09 Sep 2013 Posts: 9420 Re: For any integer n greater than 1, factorial denotes the product of all  [#permalink] ### Show Tags 24 Jan 2018, 12:10 Hello from the GMAT Club BumpBot! Thanks to another GMAT Club member, I have just discovered this valuable topic, yet it had no discussion for over a year. I am now bumping it up - doing my job. I think you may find it valuable (esp those replies with Kudos). Want to see all other topics I dig out? Follow me (click follow button on profile). You will receive a summary of all topics I bump in your profile area as well as via email. _________________ Re: For any integer n greater than 1, factorial denotes the product of all &nbs [#permalink] 24 Jan 2018, 12:10 Display posts from previous: Sort by # For any integer n greater than 1, factorial denotes the product of all new topic post reply Question banks Downloads My Bookmarks Reviews Important topics Powered by phpBB © phpBB Group | Emoji artwork provided by EmojiOne Kindly note that the GMAT® test is a registered trademark of the Graduate Management Admission Council®, and this site has neither been reviewed nor endorsed by GMAC®.
GMAT Question of the Day - Daily to your Mailbox; hard ones only It is currently 17 Jul 2018, 10:42 ### GMAT Club Daily Prep #### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email. Customized for You we will pick new questions that match your level based on your Timer History Track every week, we’ll send you an estimated GMAT score based on your performance Practice Pays we will pick new questions that match your level based on your Timer History # D01-20 Author Message TAGS: ### Hide Tags Current Student Joined: 24 Jul 2016 Posts: 87 Location: United States (MI) GMAT 1: 730 Q51 V40 GPA: 3.6 ### Show Tags 10 Dec 2016, 11:21 This is a little confusing. If factorial is not defined for negative numbers, then a and b must be positive. Intern Joined: 08 Feb 2017 Posts: 9 ### Show Tags 17 Feb 2017, 06:34 Bunuel As per the solution given: (1) The median of {a!, b!, c!} is an odd number. This implies that b!=odd. Thus b is 0 or 1. But if b=0, then a is a negative number, so in this case a! is not defined. Therefore a=0 and b=1, so the set is {0!, 1!, c!}={1, 1, c!}. Now, if c=2, then the answer is YES but if c is any other number then the answer is NO. Not sufficient. it says b is 0 or 1. Is 0 an odd number? Math Expert Joined: 02 Sep 2009 Posts: 47037 ### Show Tags 17 Feb 2017, 09:05 Darkhorse12 wrote: Bunuel As per the solution given: (1) The median of {a!, b!, c!} is an odd number. This implies that b!=odd. Thus b is 0 or 1. But if b=0, then a is a negative number, so in this case a! is not defined. Therefore a=0 and b=1, so the set is {0!, 1!, c!}={1, 1, c!}. Now, if c=2, then the answer is YES but if c is any other number then the answer is NO. Not sufficient. it says b is 0 or 1. Is 0 an odd number? 0 is an even integer. But 0! = 1 = odd. _________________ Manager Joined: 27 Aug 2014 Posts: 56 Concentration: Strategy, Technology GMAT 1: 660 Q45 V35 GPA: 3.66 WE: Consulting (Consulting) ### Show Tags 22 Apr 2017, 21:29 How is 0 factorial 1? Math Expert Joined: 02 Sep 2009 Posts: 47037 ### Show Tags 23 Apr 2017, 03:41 praneet87 wrote: How is 0 factorial 1? 0! = 1. It's a math property you need not know why it's so but if still interested check this video: _________________ Intern Joined: 03 Apr 2017 Posts: 2 ### Show Tags 10 May 2017, 11:41 How can we determine that c must be 2 (not any other prime number) from the : 2) c! is a prime number. Math Expert Joined: 02 Sep 2009 Posts: 47037 ### Show Tags 10 May 2017, 11:53 Albert__ wrote: How can we determine that c must be 2 (not any other prime number) from the : 2) c! is a prime number. n! = 1*2*3*...*n Now, if c is any other number than 2, c! will have at least 2 and 3 as its factor and we know that a prime number has only two factors 1 and itself, thus for c! to be prime c must be 2. For example, if c = 4, then c! = 4! = 1*2*3*4, which is not a prime. Hope it's clear. _________________ Intern Joined: 03 Apr 2017 Posts: 2 ### Show Tags 10 May 2017, 11:56 My bad... stupid mistake, thanks anyway ! Intern Joined: 22 May 2017 Posts: 4 ### Show Tags 15 Jun 2017, 07:05 Bunuel wrote: If a, b, and c are integers and $$a \lt b \lt c$$, are a, b, and c consecutive integers? (1) The median of {a!, b!, c!} is an odd number. (2) c! is a prime number. Intern Joined: 05 Jul 2015 Posts: 2 Location: India GMAT 1: 720 Q50 V38 GPA: 3.55 ### Show Tags 07 Aug 2017, 05:52 In this question, what we know from statement A is that b! is odd. So b could either be 0 or 1. If b is 0, then a can still assume the value 0. a does not have to be negative. The second statement tells us that c is 2, but gives no details on a and b. Combining both the statements still gives us three possible values for {a!,b!,c!}: I: {0!,0!,2!} II: {0!,1!,2!} III: {1!,1!,2!} I and III would mean that the answer is NO and II would mean the answer is YES. Going by this logic, I went with E. Can anyone please correct me and explain how the answer is C? I do not understand any of the given explanations. Math Expert Joined: 02 Sep 2009 Posts: 47037 ### Show Tags 07 Aug 2017, 16:50 anirudhb94 wrote: In this question, what we know from statement A is that b! is odd. So b could either be 0 or 1. If b is 0, then a can still assume the value 0. a does not have to be negative. The second statement tells us that c is 2, but gives no details on a and b. Combining both the statements still gives us three possible values for {a!,b!,c!}: I: {0!,0!,2!} II: {0!,1!,2!} III: {1!,1!,2!} I and III would mean that the answer is NO and II would mean the answer is YES. Going by this logic, I went with E. Can anyone please correct me and explain how the answer is C? I do not understand any of the given explanations. Notice that we are told that a < b < c, so I and III are not possible, which leaves only a=0, b=1 and c=2. _________________ Manager Joined: 07 Jun 2017 Posts: 103 ### Show Tags 09 Aug 2017, 00:47 Factorial of a number is prime Can it be anything elses besides 2!? Or 2! is the only factorial of a number is prime? Math Expert Joined: 02 Sep 2009 Posts: 47037 ### Show Tags 09 Aug 2017, 00:50 1 pclawong wrote: Factorial of a number is prime Can it be anything elses besides 2!? Or 2! is the only factorial of a number is prime? No, it can only be 2. A prime number has only two factors 1 and itself, so factorial of any number but 2 cannot be a prime. _________________ Intern Joined: 30 Oct 2016 Posts: 2 ### Show Tags 09 Sep 2017, 15:00 I don't agree with the explanation. i think A should be the answer because a set (1,1,c!) will have a median of 1 always na Math Expert Joined: 02 Sep 2009 Posts: 47037 ### Show Tags 09 Sep 2017, 15:07 samism wrote: I don't agree with the explanation. i think A should be the answer because a set (1,1,c!) will have a median of 1 always na You should read solutions and the following discussions more carefully. The question asks whether a, b, and c are consecutive integers. For (1): a=0, b=1 and c=2 gives an YES answer while a=0, b=1 and c=3 gives a NO answer. _________________ Intern Joined: 20 Sep 2012 Posts: 6 Concentration: Finance, General Management ### Show Tags 13 Oct 2017, 12:18 I think this is a high-quality question and I agree with explanation. Intern Joined: 10 Feb 2017 Posts: 7 ### Show Tags 19 Nov 2017, 08:13 I think this is a high-quality question and I agree with explanation. Senior Manager Joined: 31 May 2017 Posts: 285 ### Show Tags 13 Mar 2018, 17:48 I got it wrong in the tests due to oversight on my part. Option 1 - The factorial of a number can be odd only if its 0 or 1. This gives a value of b. But C can be any number greater than b. - Not sufficient. Option 2 - c! is a prime number. The only number whose factorial is prime is 2. So we get the value of c as 2. but we do not have value of a and b. Not sufficient. Considering both the options, we can find that the numbers are consecutive numbers. Ans: C _________________ Please give kudos if it helps Resources | | | | | Intern Joined: 09 Jan 2018 Posts: 22 ### Show Tags 05 Jun 2018, 23:49 I think this is a high-quality question and I agree with explanation. Intern Joined: 01 Jun 2018 Posts: 1 ### Show Tags 28 Jun 2018, 09:57 I think this is a poor-quality question and the explanation isn't clear enough, please elaborate. C! Is a prime number, it's given in question that a<b<c, since the only prime factorial is 2, c must be 2, and a is 0, b is 1. Please explain Re D01-20   [#permalink] 28 Jun 2018, 09:57 Go to page   Previous    1   2   3    Next  [ 41 posts ] Display posts from previous: Sort by # D01-20 Moderators: chetan2u, Bunuel # Events & Promotions Powered by phpBB © phpBB Group | Emoji artwork provided by EmojiOne Kindly note that the GMAT® test is a registered trademark of the Graduate Management Admission Council®, and this site has neither been reviewed nor endorsed by GMAC®.
# Chapter 1 - Section 1.6 - Dividing Whole Numbers - Exercise Set: 111 $\$180,845,200$#### Work Step by Step To find the average, sum the numbers in the list then divide the sum by the count of numbers in the list.$(191,055,900+170,634,500)\div2=361,690,400\div2=180,845,200\$ After you claim an answer you’ll have 24 hours to send in a draft. An editor will review the submission and either publish your submission or provide feedback.
If you find any mistakes, please make a comment! Thank you. ## Demonstrate that a given relation defined in terms of a surjective function is an equivalence relation Solution to Abstract Algebra by Dummit & Foote 3rd edition Chapter 0.1 Exercise 0.1.7 Let $f : A \rightarrow B$ be a surjective map of sets. Prove that the relation… ## Determine whether a given relation is well defined Solution to Abstract Algebra by Dummit & Foote 3rd edition Chapter 0.1 Exercise 0.1.6 Determine whether the function $f : \mathbb{R}^+ \rightarrow \mathbb{Z}$ given by mapping a real number $r$… ## Determine whether given relations are well-defined Solution to Abstract Algebra by Dummit & Foote 3rd edition Chapter 0.1 Exercise 0.1.5 Determine whether the following functions are well defined. $f : \mathbb{Q} \rightarrow \mathbb{Z}$ given by \$f(a/b)…
# Calculus: Logarithms 1. Feb 9, 2005 ### thomasrules Can't get this question, I get the wrong answer: 4.6*1.06^(2x+3)=5*3^x So find x 2. Feb 9, 2005 ### Galileo Seeing the title of the thread, it seems obvious that you've already started by taking the logarithm on both sides. Show us your work so we know where the problem lies. 3. Feb 9, 2005 ### dextercioby HINT:Divide by 4.6 and then apply natural logarithm over both sides of the equation. Daniel. 4. Feb 9, 2005 ### thomasrules I did the following: ( 4.6*1.06^(2x+3)=5*3^x )/4.6*3^x and then cancelled. Dexter then I applied Logarithm and I just want to see how you guys did it.....I got an answer just wrong..... 5. Feb 9, 2005 ### dextercioby $$1.06^{2x+3}=\frac{5}{4.6}\cdot 3^{x}$$ Taking natural logarithm $$(2x+3)\ln 1.06=\ln\frac{5}{4.6} +x\ln 3$$ Solve for "x"...The final answer ain't pretty,by any means. Daniel. 6. Feb 9, 2005 ### thomasrules God damnit Daniel.....I'm so retarded..... I forgot to Log the right side!!! Such stupid and simple mistakes.....sorry for wasting your time lol.......P.S. I like that special writing you use
### All Issues Vol.10, 2020 Vol.9, 2019 Vol.8, 2018 Vol.7, 2017 Vol.6, 2016 Vol.5, 2015 Vol.4, 2014 Vol.3, 2013 Vol.2, 2012 Vol.1, 2011 Entire solutions of delay differential equations of Malmquist type Ran-Ran Zhang,Zhi-Bo Huang Keywords:delay differential equation; Entire solution; Nevanlinna theory Abstract: The celebrated Malmquist theorem states that a differential equation, which admits a transcendental meromorphic solution, reduces into a Riccati differential equation. Motivated by the integrability of difference equations, this paper investigates the delay differential equations of form $w(z+1)-w(z-1)+a(z)\frac{w'(z)}{w(z)}=R(z, w(z))(*),$ where $R(z, w(z))$ is an irreducible rational function in $w(z)$ with rational coefficients and $a(z)$ is a rational function. We characterize all reduced forms when the equation $(*)$ admits a transcendental entire solution with hyper-order less than one. When we compare with the results obtained by Halburd and Korhonen[Proc. Amer. Math. Soc. 145, no.6 (2017)], we obtain the reduced forms without the assumptions that the denominator of rational function $R(z,w(z))$ has roots that are nonzero rational functions in $z$. The value distribution and forms of transcendental entire solutions for the reduced delay differential equations are studied. The existence of finite iterated order entire solutions of the Kac-van Moerbeke delay differential equation are also detected.
# LUCAS WILLEMS A 23 year-old student passionate about maths and programming # Exponential serie ## Article In the 18th century, Leonard Euler found this formula to write explicitly exponential : $$x \in \mathbb{C} \qquad e^x = \sum_{k = 0}^{\infty} \frac{x^k}{k!}$$ But how to find it ? It is what I'm going to show you. Summary ## Proof This proof is the one that I found on my own (but isn't most probably the one that Euler found) and only uses exponential properties and the assumption that $$e^x$$ can be written like a polynomial of infinite degree, i.e. these 3 equalities : $$\begin{gather*} \exp' = exp \\ e^0 = 1 \\ e^x = \sum_{k = 0}^{\infty} a_k x^k \end{gather*}$$ From these equalities, let's use a proof by induction with the following statement : $$\forall n \in \mathbb{N} \quad H_n : a_n = \frac{1}{n!}$$ that leads us to the following reasoning : Bases : For $$n = 0$$, $$e^0 = a^0 = 1 = \frac{1}{0!}$$. So, $$H_0$$ holds. Inductive step : As $$\exp = \exp'$$, we can write the following :$$\sum_{k = 0}^{\infty} a_k x^k = (\sum_{k = 0}^{\infty} a_k x^k)' = \sum_{k = 0}^{\infty} (k+1) a_{k+1} x^k$$which means that for $$m \in \mathbb{N}$$, $$a_m = (m+1)a_{m+1} \Leftrightarrow a_{m+1} = \frac{a_m}{m+1}$$ And, as we assume $$H_m$$ holds, $$a_{m+1} = \frac{a_m}{m+1} = \frac{\frac{1}{m!}}{m+1} = \frac{1}{(m+1)!}$$ Consequently, $$H_m \Rightarrow H_{m+1}$$. Conclusion : $$H_n$$ holds $$\forall n \in \mathbb{N}$$. Thus : $$e^x = \sum_{k = 0}^{\infty} \frac{1}{k!} x^k = \sum_{k = 0}^{\infty} \frac{x^k}{k!}$$
# Here is my proof that the elementary $k$-forms form a basis for $A^k{(\mathbb{R}^n)}$. Is this correct? Question: Here is my proof that the elementary $k$-forms form a basis for $A^k{(\mathbb{R}^n)}$. Is my proof below correct? This is a good question [3]: I have searched for an answer in a text book [1]. What I have found is that the authors give a specific proof for the specific case of 2-forms, and then state that "An analogous but messier [emphasis added] computation would show that for any $k$-form in and $\mathbb{R}^n$ the form is determined by its values on sequences $\vec{e}_{i,1},\ldots,\vec{e}_{i,k}$..." This doesn’t meet my needs: (1) I need to be certain that I understand this proof so that I am confident as I continue my independent study. Ancillary to this, if the solution for the general case can be written neatly, then this augments confidence in the successful comprehension of the definitions, theorems, language, etc. in the remaining material. (2) I find that the proof of this theorem given in [1] somewhat meanders from the definitional statements; and this too does not meet my needs. I learn most efficiently and effectively when proofs tack more closely to the definitions. So that is what I've attempted to do. (3) While in their proof the authors of [1] only consider that the arguments of $\varphi$ are given in strictly increasing order. This does not meet my needs. In my proof I consider the cases where the argument are not necessarily given in strictly increasing order. This question is on topic. The material is strait out of [1], and the title of [1] implies the appropriate tags that are given. This question is specific. I am simply asking for proof verification and any useful comments. This question is likely relevant to others. In [4], the author asks: "Can I post questions to fill in the gaps in a textbook I'm reading from?" @choreley cites Mathematics Stack Exchange, writing: Mathematics Stack Exchange is for people studying mathematics at any level and professionals in related fields. We welcome questions about: (1) Understanding mathematical concepts and theorems. (2) Mathematical problems such as one might come across in a course or textbook.... Statement of the Problem: Given the following definition Definition ($A^k{(\mathbb{R}^n)}$) The space of $k$-forms in $\mathbb{R}^n$ is denoted $A^k{(\mathbb{R}^n)}$, prove the following theorem. Theorem (a) The elementary $k$-forms form a basis for $A^k{(\mathbb{R}^n)}$: every multilinear and antisymmetric function $\varphi$ of $k$ vectors in $\mathbb{R}^n$ can be uniquely written $$\varphi = \sum_{1\leq i_1 < \cdots < i_k \leq n}{a_{i_1\cdots i_k}\,dx_{i_1}\wedge \cdots \wedge dx_{i_k}}.$$ (b) The coefficients $a_{i_1, \ldots, i_k}$ are given by $$a_{i_1, \ldots, i_k} = \varphi{(\vec{e}_{i,1},\ldots,\vec{e}_{i,k})}.$$ Proof: This proof is divided in three parts. Parts A and B together address part (a) of the theorem; while Part C addresses part (b). Part A. Pursuant to the definition of linear independence (e.g., cf. [1]), a set of $k$-forms are linearly independent if there is at most one way of writing the multilinear antisymetric function $\varphi$ as a linear combination of $k$-forms; that is, if $$\varphi = \sum_{1\leq i_1 < \cdots < i_k \leq n}{a_{i_1\cdots i_k}\,dx_{i_1}\wedge \cdots \wedge dx_{i_k}} = \sum_{1\leq i_1 < \cdots < i_k \leq n}{b_{i_1\cdots i_k}\,dx_{i_1}\wedge \cdots \wedge dx_{i_k}}$$ implies $a_{i_1\cdots i_k} = b_{i_1\cdots i_k}.$ We proceed to evaluate $\varphi$ on any $k$ standard basis vectors listed in increasing order. \begin{align} \varphi{(\vec{e_{j_1}}, \ldots, \vec{e_{j_k}})} & = \sum_{1\leq i_1 < \cdots < i_k \leq n}{ {a_{i_1\cdots i_k}\,dx_{i_1}\wedge \cdots \wedge dx_{i_k}} } {(\vec{e}_{j_1}, \ldots, \vec{e}_{j_k})} \\ & = \sum_{1\leq i_1 < \cdots < i_k \leq n}{ {a_{i_1\cdots i_k} } } \det{ \begin{bmatrix} e_{j_1,i_1} & \cdots & e_{j_k,i_k} \\ \vdots & \ddots & \vdots \\ e_{j_k,i_1} & \cdots & e_{j_k,i_k} \end{bmatrix} }\quad \textrm{Eq. 1} \\ & = \begin{cases} a_{i_{1}, \ldots, i_{k}} & \textrm{for } i_{1} = j_{1}, \ldots, i_{k} = j_{k} \\ 0 & \textrm{otherwise.} \end{cases} \quad \textrm{Eq. 2} \end{align} Similarly, we find \begin{align} \varphi{(\vec{e_{j_1}}, \ldots, \vec{e_{j_k}})} & = \begin{cases} b_{i_{1}, \ldots, i_{k}} & \textrm{for } i_{1} = j_{1}, \ldots, i_{k} = j_{k} \\ 0 & \textrm{otherwise.} \end{cases} \end{align} Allowing that $i_{1} = j_{1}, \ldots, i_{k} = j_{k}$, \begin{align} \varphi{(\vec{e_{i_1}}, \ldots, \vec{e_{i_k}})} & = a_{i_{1}, \ldots, i_{k}} \textrm{ and also } \\ \varphi{(\vec{e_{i_1}}, \ldots, \vec{e_{i_k}})} & = b_{i_{1}, \ldots, i_{k}}. \end{align} By the transitive relation [2], whenever $$\varphi{(\vec{e}_{i_1}, \ldots, \vec{e}_{i_k})} = a_{i_1, \ldots, i_k} \quad \textrm{Eq. 3}$$ and $$\varphi{(\vec{e}_{i_1}, \ldots, \vec{e}_{i_k})} = b_{i_1, \ldots, i_k},$$ then also $a_{i_1, \ldots, i_k} = b_{i_1, \ldots, i_k}$. So, we find that the $k$-forms are linearly independent. Part B. Pursuant to the definition of span (e.g., cf. [1]), the span of the elementary $k$-forms $dx_{i_1}\wedge \cdots\wedge dx_{i_k}$ is the set of linear combinations $\sum\limits_{1\leq i_1 < \cdots < i_k \leq n}{a_{i_1\cdots i_k} \, dx_{i_1}\wedge \cdots\wedge dx_{i_k} }$. We ask: Is each and any multilinear and antisymetric function $\varphi$ of $k$ vectors in $\mathbb{R}^n$ in the span of $A^k {(\mathbb{R}^n) }$? We start by proposing that $$\varphi = \sum\limits_{1\leq i_1 < \cdots < i_k \leq n}{a_{i_1\cdots i_k} \, dx_{i_1}\wedge \cdots\wedge dx_{i_k} }.$$ Following through as done above, we find that for any standard $k$ basis vectors listed in increasing order $$\varphi{(\vec{e}_{j_1},\ldots, \vec{e}_{j_k})} = a_{j_{1}, \ldots, j_{k}} .$$ Next we proceed to evaluate $\varphi$ on any $k$ standard basis vectors -- not necessarily being listed in increasing order. We note from the definition of the determinant of a square matrix $A =[\vec{a}_i, \ldots, \vec{a}_n]$ (e.g., cf. [1]) that exchanging any two arguments changes its sign. Therefore, with respect to Eqs. 1 and 2, we write $$\varphi{(\vec{e}_{l_1},\ldots, \vec{e}_{l_k})} = (-1)^p\,a_{j_{1}, \ldots, j_{k}},$$ where $p$ is a non-negative integer giving the required number of exchanges of any two of the arguments of $\varphi$ such that the arguments are ultimately given in strictly increasing order. Irrespective of the actual order of the arguments of $\varphi$, we find that $\varphi$ can be written as a linear combination of elementary $k$-forms. So, we find that the $k$-forms span $A^k{(\mathbb{R}^n)}$. Part C. Multiplying both sides of Eq. 3 by -1 and next adding $(\varphi{(\vec{e}_{i_1}, \ldots, \vec{e}_{i_k})} + a_{i_1, \ldots, i_k})$ to both sides, we find that $$a_{i_1, \ldots, i_k} = \varphi{(\vec{e}_{i_1}, \ldots, \vec{e}_{i_k})}.$$ Q.E.D. Bibliography: [1] `Vector Calculus, Linear Algebra, and Differential Forms' by Hubbard and Hubbard, Second Edition, 2002, pp. 193, 195, 469, 562-3. • Asking and answering your own question is encouraged here, but the style of your answer is not entirely appropriate for this platform, as I understand (for example, there is no need to define span, linear independence etc). Consider cleaning up your answer in order to be more appropriate for this environment. If you want to have somewhere to write leisurely online, you may want to have a blog or keep files on an online latex sharing website. – Aloizio Macedo Apr 28 '18 at 20:29 • I am not saying the work is leisure. I am talking about writing leisurely ('leisurely' as an adverb for 'writing'), in the sense that you would not need to mind rules/etiquette/expectations/etc of a community. And it is obvious that you need the definitions to figure it out, but it doesn't mean that stating the definitions are relevant for the audience who is going to potentially stumble in this question. If someone is studying differential forms, one should expect that the terms you define are already known. – Aloizio Macedo Apr 28 '18 at 20:49 • That said, you have the right to disagree, and I respect that. Let's wait for some feedback of the community regarding this. – Aloizio Macedo Apr 28 '18 at 20:50 • I am inclined to agree with @AloizioMacedo; your question and answer are very "bloggy", and don't really seem to fit here. Moreover, I don't think that the question, as phrased, is very good (it is a basic "Problem Statement Question" that provides little context aside from the fact that you are self-studying; basically, it looks like a homework question). If you moved the definitions into the question, it would be improved. There is also something quite wonky about your formatting (the use of quotes, in particular; are you familiar with the blockquote environment?). – Xander Henderson Apr 28 '18 at 21:38 • You also seem to be missing the statement of Part B of your theorem in the question. Maybe? – Xander Henderson Apr 28 '18 at 21:38 First of all, I would like to point out that Munkres provides an elegant proof in his book Analysis on Manifolds. He first shows a lemma that says: for alternating $k$-tensors $f$ and $g$, if $f(a_{i_1},\dots,a_{i_k})=g(a_{i_1},\dots,a_{i_k})$ for every ascending $k$-tuple $(i_1,\dots,i_k)$, then $f\equiv g$. My only criticism is the end of part $B$: compared to the rest of your proof (especially part $C$ - how is that necessary?), your argument for the sign of $\phi$ is almost hand-wavy. I reference Munkres again; he proves basic results (prior to the proof) about permuting the entries of a $k$-tensor, including: if $f$ is alternating and $\sigma$ has odd parity, then $f^\sigma=-f$. Apart from that (which is just my opinion at the end of the day), I think you have a typo here: "which implies $a_{i_1\cdots i_k} = b_{i_1\cdots i_k}, \ldots, a_k = b_k$."
# Math Help - light year conversion 1. ## light year conversion A ly (light year) is the distance that light travels in one year. The speed of light is 3.00 × 108 m/s. How many miles are there in a ly? (1 mi = 1609 m and 1 yr = 365.25 days) Here is what i do 300,000,000 m/s X $\frac{.1609mi}{1m}$ X $\frac{1yr}{31,557,600s}$=1.53 ly? please close have found solution 2. You have your fractions the wrong way around. And also 1609 meters should be kept the same. $\text{miles} = 300000000 \times \frac{1}{1609} \times 3155760$
# Lesson 06 ### 01.01.06 Do you know the continents? Continent outline mapshttp://www.eduplace.com/ss/maps/Continent bordershttp://img509.imageshack.us/img509/1630/sevenconao0.jpg Download a map of the world from the "Outline Maps" web site. Color the seven continents, each continent a different color. Be careful to put the borders of the continents in the correct place. Making sure that you also include the major oceans that are in or around each continent. Make sure you have a compass rose, equator, prime meridian, tropic of cancer, and tropic of capricorn on the map. Put some of the major countries in also. A big HINT: the map above is not exactly correct as to borders and continents. You will be graded on the completeness of the assignment. Make sure that the continents borders are correct (hint: Europe and Asia, No. America and So. America, etc.). Make sure to be neat and legible. Use a key if it will help make the map more understandable. ***70% or higher is required to pass any assignment*** Worlds Continents: The borders are not correct in this image. ### 01.06 Solving Multi-Step Inequalities (Math Level 1) Solve multi-step inequalities and justify the steps involved. Something to Ponder What are some things you need to consider as you write expressions or equations to model real-life situations and problems? Mathematics Vocabulary Inequality: an expression with an inequality sign (like < , ≤ , > or ≥) instead of an equals sign Solve linear inequalities: perform the same operation on both sides of the inequality. Note: When $\fn_phv {\color{Red}multiplying }$ or $\fn_phv {\color{Red}dividing }$ both sides of an inequality by a $\fn_phv {\color{Red}negative }$ number, $\fn_phv {\color{Red}reverse }$ the $\fn_phv {\color{Red}inequality }$ $\fn_phv {\color{Red}symbol }$. An inequality remains unchanged if: • the same number is added to both sides of the inequatily • the same number is subtracted from both sides of the inequality • both sides of the inequality are multiplied or divided by a positive number Learning these concepts Click each mathematician image OR click the link below to launch the video to help you better understand this "mathematical language." $\fn_phv {\color{Red}SCROLL }$ $\fn_phv {\color{Red}DOWN }$ $\fn_phv {\color{Red}TO }$ $\fn_phv {\color{Red}THE }$ $\fn_phv {\color{Red}GUIDED }$ $\fn_phv {\color{Red}PRACTICE }$ $\fn_phv {\color{Red}SECTION }$ $\fn_phv {\color{Red}AND }$ $\fn_phv {\color{Red}WORK }$ $\fn_phv {\color{Red}THROUGH }$ $\fn_phv {\color{Red}THE }$ $\fn_phv {\color{Red}EXAMPLES }$ $\fn_phv {\color{Red}BEFORE }$ $\fn_phv {\color{Red}SUBMITTING }$ $\fn_phv {\color{Red}THE }$ $\fn_phv {\color{Red}ASSIGNMENT!!! }$ ### 01.06 The Greek Gods and the Trojan War (English 9) Many of the assignments in the first semester will refer to a story first told in ancient Greece: The Odyssey, by Homer. You will understand it better if you know a little about the Greek gods, especially Zeus, Poseidon, Athena, Hera, Ares, Hephaestus, Hermes and Aphrodite. If you are already familiar with the Greek gods, you might want to skim this and go on. The Greek gods were pretty much like regular people, except that they were immortal and had supernatural powers. They were not necessarily ethical, just, merciful or kind, and could be selfish and capricious. They sometimes had temper tantrums or did things on a whim. They definitely played favorites. Zeus: (WMC, CC, from Väsk image) Zeus was the king of the gods, and the most powerful. He could use lightning bolts to strike people he didn't like. He also liked to sleep around, and had dozens, maybe even hundreds, of children from mortal women. (The other male gods also occasionally had children with mortal women.) Zeus' wife, Hera, was jealous (with good cause) and did not like these children of Zeus and his mortal lovers. Poseidon with his trident and horses (fountain sculpture): (WMC,CC, Pacogq image) Poseidon was Zeus' brother, and god of the ocean. As well as storms at sea, he could cause earthquakes by striking the earth with the trident he carried. Hephaestus, another brother, was god of the forge, volcanoes and fire. He usually minded his own business and left mortals alone. He was married to Aphrodite, goddess of beauty and love, who was a flirt and an airhead. A third brother, Ares, was god of war. He was short-tempered, cruel, impulsive and a bit of a coward and bully. Hades, also brother to Zeus, was god of the underworld - where people went after death. He had a three-headed dog named Cerberus. Athena: (WMC, CC, G.dallorto image) Athena was Zeus' daughter, and goddess of wisdom, useful crafts and war. She was suppposed to have sprung full-grown from Zeus' head. Of all the gods, Athena was the one most likely to be reasonable and just. The city of Athens was named in her honor. In The Odyssey, Athena helps Odysseus because she likes his intelligence and courage. Hermes was the messenger god, and could fly because he had winged sandals. He could be mischievious, but was usually good-humored. Other important gods included Artemis (goddess of the hunt), Apollo (god of the sun), Dionysius (god of wine), Demeter (goddess of the earth & harvest), and Hestia (goddess of the hearth & home). The Trojan War Helen was the most beautiful woman in the world, and when she reached marriageable age, young men came from all around in hopes of becoming her husband. One man who came courting Helen was Odysseus, who was already known for his intelligence and common sense. Odysseus could see that any man who married Helen would probably be attacked and killed by others who wanted to steal her, so he convinced all the men there to make a treaty of sorts - they all agreed that they would help defend whichever man Helen married. Helen married Menelaus of Sparta, all the others went home, and everything went well for several years. Odysseus went home to Ithaca and fell in love with Penelope. They married and had a son, Telemachus. Problems started with the gods. Athena, Hera and Aphrodite were tricked into an argument about who was the most beautiful, and Zeus named Paris, a good-looking young man, to be the judge. Each of the goddesses tried to bribe him, and Aphrodite, who promised him the most beautiful woman in the world as wife, won the contest. She helped Paris to steal Helen from Menelaus. Paris took Helen to Troy, and Menelaus called upon all the men who had once promised to help defend him. These Greeks laid siege to Troy for ten years. Many famous battles and heroes had a part in the war, but in the end the Greeks won by a trick planned by Odysseus: The Trojan horse: (WMC, public domain) The Greeks pretended to leave, but left a large, hollow wooden horse behind. Odysseus and a few soldiers hid inside the wooden horse, which the Trojans dragged into the city during their premature celebration. That night, the men snuck out of the horse and opened the city gates to let in the whole Greek army, who had come back under cover of darkness. The Greeks sacked Troy, Helen (no longer under Aphrodite's spell) went home with Menelaus, and all the Greeks started home. The story of The Odyssey tells Odysseus' adventures on his journey home, which takes years. ### 01.06 Aging, Death and Dying (Health II) Standard 1, Objective 2d: Apply stress management techniques. Required: Top Ten Aging Challengeshttp://longevity.about.com/od/liveto100/tp/top-aging-challen...Required: Organ donationhttp://www.nlm.nih.gov/medlineplus/organdonation.htmlRequired: Living willshttp://www.mayoclinic.com/health/living-wills/HA00014Supplemental: Assisted Suicidehttp://www.wrtl.org/assistedsuicide/painmanagement.aspxSupplemental: FAQ about organ donationhttp://www.mayoclinic.com/health/organ-donation/FL00077Supplemental: Euthanasia and Physician-Assisted Suicidehttp://www.religioustolerance.org/euthanas.htmSupplemental: Hospice Carehttp://www.nhpco.org/sites/default/files/public/Statistics_R... ### 01.06 Basic Vocabulary for Parts of Speech & Usage review (English 9) Demonstrate command of the conventions of standard English grammar and usage when writing or speaking. Use various types of phrases (noun, verb, adjectival, adverbial, participial, prepositional, absolute) and clauses (independent, dependent; noun, relative, adverbial) to convey specific meanings and add variety and interest to writing or presentations. Louis Sergent, determined that he will finish high school and not work in the coal mines, does his homework, 1946, Kentucky.: Russell Lee image, NARA, public domain Read me first: Basic vocabulary for parts of speech and usagehttps://share.ehs.uen.org/node/17726Verbs and Verbals videohttp://pp1.ehs.uen.org/groups/english09/weblog/b344f/027_Par...SAS Parts of speech interactive (do Prepare, Identify, Practice, Apply) - username: farm9the, QL#: 942http://www.sascurriculumpathways.com/loginQuiz yourself on finding nouns in sentenceshttp://grammar.ccc.commnet.edu/grammar/quizzes/nouns_quiz1.h...Quiz yourself on finding adjectives in sentenceshttp://grammar.ccc.commnet.edu/grammar/quizzes/adjectives_qu...Quiz yourself on finding verbs in sentenceshttp://grammar.ccc.commnet.edu/grammar/quizzes/verbmaster.ht...Click here to see a list of nouns (in blue), along with some fairly unusual adjectives, on the web.http://en.wiktionary.org/wiki/Appendix:English_irregular_adj... For more help, do the Parts of Speech interactive lesson (from SAS). Then, try the other links. To access the SAS lessons: English: Word Classes, 942 To open this resource in SAS® Curriculum Pathways®:
# Audio I/O and Pre-Processing with torchaudio Note: This is an R port of the official tutorial available here. All credits goes to Vincent Quenneville-Bélair. {torch} is an open source deep learning platform that provides a seamless path from research prototyping to production deployment with GPU support. Significant effort in solving machine learning problems goes into data preparation. torchaudio leverages torch’s GPU support, and provides many tools to make data loading easy and more readable. In this tutorial, we will see how to load and preprocess data from a simple dataset. library(torchaudio) library(viridis) # Opening a file torchaudio also supports loading sound files in the wav and mp3 format. We call waveform the resulting raw audio signal. url = "https://pytorch.org/tutorials/_static/img/steam-train-whistle-daniel_simon-converted-from-mp3.wav" filename = tempfile(fileext = ".wav") r = httr::GET(url, httr::write_disk(filename, overwrite = TRUE)) waveform = waveform_and_sample_rate[[1]] sample_rate = waveform_and_sample_rate[[2]] paste("Shape of waveform: ", paste(dim(waveform), collapse = " ")) paste("Sample rate of waveform: ", sample_rate) plot(waveform[1], col = "royalblue", type = "l") lines(waveform[2], col = "orange") Package {tuneR} is the only backend implemented yet. ## Transformations torchaudio supports a growing list of transformations. • Resample: Resample waveform to a different sample rate. • Spectrogram: Create a spectrogram from a waveform. • GriffinLim: Compute waveform from a linear scale magnitude spectrogram using the Griffin-Lim transformation. • ComputeDeltas: Compute delta coefficients of a tensor, usually a spectrogram. • ComplexNorm: Compute the norm of a complex tensor. • MelScale: This turns a normal STFT into a Mel-frequency STFT, using a conversion matrix. • AmplitudeToDB: This turns a spectrogram from the power/amplitude scale to the decibel scale. • MFCC: Create the Mel-frequency cepstrum coefficients from a waveform. • MelSpectrogram: Create MEL Spectrograms from a waveform using the STFT function in Torch. • MuLawEncoding: Encode waveform based on mu-law companding. • MuLawDecoding: Decode mu-law encoded waveform. • TimeStretch: Stretch a spectrogram in time without modifying pitch for a given rate. Each transform supports batching: you can perform a transform on a single raw audio signal or spectrogram, or many of the same shape. Since all transforms are torch::nn_modules, they can be used as part of a neural network at any point. To start, we can look at the log of the spectrogram on a log scale. specgram <- transform_spectrogram()(waveform) paste("Shape of spectrogram: ", paste(dim(specgram), collapse = " ")) specgram_as_array <- as.array(specgram$log2()[1]$t()) image(specgram_as_array[,ncol(specgram_as_array):1], col = viridis(n = 257, option = "magma")) Or we can look at the Mel Spectrogram on a log scale. specgram <- transform_mel_spectrogram()(waveform) paste("Shape of spectrogram: ", paste(dim(specgram), collapse = " ")) specgram_as_array <- as.array(specgram$log2()[1]$t()) image(specgram_as_array[,ncol(specgram_as_array):1], col = viridis(n = 257, option = "magma")) We can resample the waveform, one channel at a time. new_sample_rate <- sample_rate/10 # Since Resample applies to a single channel, we resample first channel here channel <- 1 transformed <- transform_resample(sample_rate, new_sample_rate)(waveform[channel, ]$view(c(1,-1))) paste("Shape of transformed waveform: ", paste(dim(transformed), collapse = " ")) plot(transformed[1], col = "royalblue", type = "l") As another example of transformations, we can encode the signal based on Mu-Law enconding. But to do so, we need the signal to be between -1 and 1. Since the tensor is just a regular PyTorch tensor, we can apply standard operators on it. # Let's check if the tensor is in the interval [-1,1] cat(sprintf("Min of waveform: %f \nMax of waveform: %f \nMean of waveform: %f", as.numeric(waveform$min()), as.numeric(waveform$max()), as.numeric(waveform$mean()))) Since the waveform is already between -1 and 1, we do not need to normalize it. normalize <- function(tensor) { # Subtract the mean, and scale to the interval [-1,1] tensor_minusmean <- tensor - tensor.mean() return(tensor_minusmean/tensor_minusmean$abs()$max()) } # Let's normalize to the full interval [-1,1] # waveform = normalize(waveform) Let’s apply encode the waveform. transformed <- transform_mu_law_encoding()(waveform) paste("Shape of transformed waveform: ", paste(dim(transformed), collapse = " ")) plot(transformed[1], col = "royalblue", type = "l") And now decode. reconstructed <- transform_mu_law_decoding()(transformed) paste("Shape of recovered waveform: ", paste(dim(reconstructed), collapse = " ")) plot(reconstructed[1], col = "royalblue", type = "l") We can finally compare the original waveform with its reconstructed version. # Compute median relative difference err <- as.numeric(((waveform - reconstructed)$abs() / waveform$abs())$median()) paste("Median relative difference between original and MuLaw reconstucted signals:", scales::percent(err, accuracy = 0.01)) # Functional The transformations seen above rely on lower level stateless functions for their computations. These functions are identified by torchaudio::functional_* prefix. • istft: Inverse short time Fourier Transform. • gain: Applies amplification or attenuation to the whole waveform. • dither: Increases the perceived dynamic range of audio stored at a particular bit-depth. • compute_deltas: Compute delta coefficients of a tensor. • equalizer_biquad: Design biquad peaking equalizer filter and perform filtering. • lowpass_biquad: Design biquad lowpass filter and perform filtering. • highpass_biquad:Design biquad highpass filter and perform filtering. For example, let’s try the functional_mu_law_encoding: mu_law_encoding_waveform <- functional_mu_law_encoding(waveform, quantization_channels = 256) paste("Shape of transformed waveform: ", paste(dim(mu_law_encoding_waveform), collapse = " ")) plot(mu_law_encoding_waveform[1], col = "royalblue", type = "l") You can see how the output from functional_mu_law_encoding is the same as the output from transforms_mu_law_encoding. Now let’s experiment with a few of the other functionals and visualize their output. Taking our spectogram, we can compute it’s deltas: computed <- functional_compute_deltas(specgram$contiguous(), win_length=3) paste("Shape of computed deltas: ", paste(dim(computed), collapse = " ")) computed_as_array <- as.array(computed[1]\$t()) image(computed_as_array[,ncol(computed_as_array):1], col = viridis(n = 257, option = "magma")) We can take the original waveform and apply different effects to it. gain_waveform <- as.numeric(functional_gain(waveform, gain_db=5.0)) cat(sprintf("Min of gain_waveform: %f\nMax of gain_waveform: %f\nMean of gain_waveform: %f", min(gain_waveform), max(gain_waveform), mean(gain_waveform))) dither_waveform <- as.numeric(functional_dither(waveform)) cat(sprintf("Min of dither_waveform: %f\nMax of dither_waveform: %f\nMean of dither_waveform: %f", min(dither_waveform), max(dither_waveform), mean(dither_waveform))) Another example of the capabilities in torchaudio::functional_ are applying filters to our waveform. Applying the lowpass biquad filter to our waveform will output a new waveform with the signal of the frequency modified. lowpass_waveform <- as.array(functional_lowpass_biquad(waveform, sample_rate, cutoff_freq=3000)) cat(sprintf("Min of lowpass_waveform: %f\nMax of lowpass_waveform: %f\nMean of lowpass_waveform: %f", min(lowpass_waveform), max(lowpass_waveform), mean(lowpass_waveform))) plot(lowpass_waveform[1,], col = "royalblue", type = "l") lines(lowpass_waveform[2,], col = "orange") We can also visualize a waveform with the highpass biquad filter. highpass_waveform <- as.array(functional_highpass_biquad(waveform, sample_rate, cutoff_freq=3000)) cat(sprintf("Min of highpass_waveform: %f\nMax of highpass_waveform: %f\nMean of highpass_waveform: %f", min(highpass_waveform), max(highpass_waveform), mean(highpass_waveform))) plot(highpass_waveform[1,], col = "royalblue", type = "l") lines(highpass_waveform[2,], col = "orange") # Migrating to torchaudio from Kaldi (Not Implemented Yet) Users may be familiar with Kaldi, a toolkit for speech recognition. torchaudio will offer compatibility with it in torchaudio::kaldi_* in the future. # Available Datasets If you do not want to create your own dataset to train your model, torchaudio offers a unified dataset interface. This interface supports lazy-loading of files to memory, download and extract functions, and datasets to build models. The datasets torchaudio currently supports are: • Yesno • SpeechCommands • CMUArctics temp <- tempdir() # A data point in Yesno is a list (waveform, sample_rate, labels) where labels is a list of integers with 1 for yes and 0 for no. # Pick data point number 3 to see an example of the the yesno_data: n <- 3 sample <- yesno_data[n] sample plot(sample[[1]][1], col = "royalblue", type = "l") Now, whenever you ask for a sound file from the dataset, it is loaded in memory only when you ask for it. Meaning, the dataset only loads and keeps in memory the items that you want and use, saving on memory. # Conclusion We used an example raw audio signal, or waveform, to illustrate how to open an audio file using torchaudio, and how to pre-process, transform, and apply functions to such waveform. We also demonstrated built-in datasets to construct our models. Given that torchaudio is built on {torch}, these techniques can be used as building blocks for more advanced audio applications, such as speech recognition, while leveraging GPUs.
## Friday, July 21, 2017 ... ///// ### Boss of Californian community colleges wants to ban algebra Several years ago, Penny asked Sheldon whether Leonard would get bored with her. She lied to him that she was a community college graduate – in order not to be considered a stupid loser. Sheldon was surprised that she apparently thought that the opposite of a "stupid loser" was a "community college graduate". Well, Sheldon had very good reasons to be surprised. And the NPR gave us an additional reason two days ago (thanks to Willie): Say Goodbye To X+Y: Should Community Colleges Abolish Algebra? Kayla Lattimore talked to Eloy Ortiz Oakley, the chancellor of California's college community system. So we heard the usual story. Too many students of community colleges – 80% – fail to complete an algebra requirement. So something must obviously be wrong with algebra, not with the students. The chancellor wouldn't even dare to suggest that the students could be stupid losers. So he builds a big theory implying that algebra isn't really needed. Other organs are as capable of rigorous thinking as brains. That includes aßes and especially vag*nas, we are told. F*rting and especially q*eefing are distinguished and respectable activities that often trump the algebraic manipulations and they are often more useful for the alumni than algebra. That's the type of wisdom that similar apparatchiks in California can get away with. There was also one amusing technical point. Mr Oakley suggested that he didn't quite want to eliminate mathematics from the community colleges. Sometimes, it could be replaced with statistics which may be equally rigorous. That's funny because I agree. A statistics course may be equally quantitative and rigorous as an algebra course. A problem is that when it's so, it's also an equally effective source of problems for the less quantitative students. As I described in March, I was recently exposed to complaints by female students who wanted a college degree to become nurses (the college requirement should be canceled for this job soon) and they didn't like too difficult a take-home exam or homework in their statistics course. The students' problem obviously isn't linked to one particular branch of mathematics (or sciences) or another. Their problem is with the quantitative, careful, rigorous, disciplined thinking in general. If you ask the students to parrot outrageous stupidities such as the statement that the United Nations has outlined 17 sustainability goals up to the year 2030, the students who shouldn't be there will find it easy to get through the course because such courses have been designed for stupid losers. If you ask the students to do something that actually requires intellectual skills that are above the average and that should be required at higher levels of education, the stupid losers will fail and complain. It's that simple! Sorry but the real problem isn't algebra but the propagation of stupid losers, including Mr Oakley, in the scholarly environment where they have no moral right to oxidize. The real problem is the inflation of degrees, dropping quality of college education, and the increasing self-confidence and the sense of entitlement among the stupid losers. These stupid losers have mostly succeeded in spreading the pernicious idea that they have the "right" to get college degrees for free. They should be q*eefed out. What we see is an ongoing battle between stupid losers and mathematics. You might think that you don't care except that the quality of our civilization and its progress depends on things such as mathematics, not on stupid losers, which is why you shouldn't support the stupid losers. Even if you're a stupid loser yourself, you should support mathematics in this war because in the long run, you will benefit from its victory. NPR shows that identity politics has become a player in this battle, too. The rate of failure in algebra is obviously higher among blacks. So lots of the people who want affirmative action and a higher percentage of blacks among the alumni end up fighting algebra for this reason, too. The interviewer Ms Kayla Lattimore – who is black – mentioned Bob Moses and his Algebra Project which wanted to make blacks algebraically literate as if it were a basic human right. Well, decades later, it's finally overwhelmingly clear that most people don't really want to learn algebra. They would only like to have the fruits that have often come from the knowledge of algebra. And with stupid losers such as Mr Oakley in charge, algebra may be separated from its fruits, indeed. However, if you cut the branches of a tree that connect the roots with the fruits, the fruits will cease to ripen. The tree dies and the orchard may suffer.
In each of the following number series only one number is wrong. find out that wrong number. 8.1, 9.2, 17.3, 26.5, 4 ### Question Asked by a Student from EXXAMM.com Team Q 2265545465.     In each of the following number series only one number is wrong. find out that wrong number. 8.1, 9.2, 17.3, 26.5, 43.8, 71.5, 114.1 IBPS-CLERK 2017 Mock Prelims A 17.3 B 26.5 C 43.8 D 9.2 ` E None of these #### HINT (Provided By a Student and Checked/Corrected by EXXAMM.com Team) #### Access free resources including • 100% free video lectures with detailed notes and examples • Previous Year Papers • Mock Tests • Practices question categorized in topics and 4 levels with detailed solutions • Syllabus & Pattern Analysis
# 0.16 Lab 10b - image processing (part 2)  (Page 5/5) Page 5 / 5 [link] is a block diagram that illustrates the method of error diffusion.The current input pixel $f\left(i,j\right)$ is modified by means of past quantization errors to give a modified input $\stackrel{˜}{f}\left(i,j\right)$ . This pixel is then quantized to a binary value by $Q$ , using some threshold $T$ . The error $e\left(i,j\right)$ is defined as $e\left(i,j\right)=\stackrel{˜}{f}\left(i,j\right)-b\left(i,j\right)$ where $b\left(i,j\right)$ is the quantized binary image. The error $e\left(i,j\right)$ of quantizing the current pixel is is diffused to "future" pixels by means of a two-dimensionalweighting filter $h\left(i,j\right)$ , known as the diffusion filter . The process of modifying an input pixel by past errors can berepresented by the following recursive relationship. $\stackrel{˜}{f}\left(i,j\right)=f\left(i,j\right)+\sum _{k,l\in S}h\left(k,l\right)e\left(i-k,j-l\right)$ The most popular error diffusion method, proposed by Floyd and Steinberg, uses the diffusion filter shown in [link] . Since the filter coefficients sum to one,the local average value of the quantized image is equal to the local average gray scale value. [link] shows the halftone image produced by Floydand Steinberg error diffusion. Compared to the ordered dither halftoning, the error diffusion method can be seen to have better contrast performance.However, it can be seen in [link] that error diffusion tends to create "streaking" artifacts, known as worm patterns. ## Halftoning exercise If your display software (e.g Matlab) resizes your image before rendering, a halftoned image will probably notbe rendered properly. For example, a subsampling filter will result in gray pixels in the displayedimage! To prevent this in Matlab, use the truesize command just after the image command. This will assign one monitor pixel for each image pixel. We will now implement the halftoning techniques described above. Savethe result of each method in MAT files so that you may later analyze and compare their performance.Download the image file house.tif and read it into Matlab. Print out a copy of this image. First try the simple thresholding technique based on [link] , using $T=108$ , and display the result. In Matlab, an easy way to threshold an image $X$ is to use the command Y = 255*(X>T); . Label the quantized image, and print it out. Now create an "absolute error" image by subtracting the binary from the original image, and then taking the absolute value.The degree to which the original image is present in the error image is a measure of signal dependence of the quantization error.Label and print out the error image. Compute the mean square error (MSE), which is defined by $MSE=\frac{1}{NM}\sum _{i,j}{\left\{f\left(i,j\right)-b\left(i,j\right)\right\}}^{2}$ where $NM$ is the total number of pixels in each image. Note the MSE on the printout of the quantized image. Now try implementing Bayer dithering of size 4. You will first have to compute the dither pattern. The index matrix fora dither pattern of size 4 is given by $I\left(i,j\right)=\left[\begin{array}{cccc}12& 8& 10& 6\\ 4& 16& 2& 14\\ 9& 5& 11& 7\\ 1& 13& 3& 15\end{array},\phantom{\rule{4pt}{0ex}},.\right]$ Based on this index matrix and [link] , create the corresponding threshold matrix. For ordered dithering, it is easiest to perform the thresholding of the image all at once.This can be done by creating a large threshold matrix by repeating the $4×4$ dither pattern. For example, the command T = [T T; T T]; will increase the dimensions of $T$ by 2. If this is repeated until $T$ is at least as large as the original image, $T$ can then be trimmed so that is is the same size as the image. The thresholding can then be performed using the command Y = 255*(X>T); . As above, compute an error image and calculate the MSE. Print out the quantized image, the error image, and note the MSE. Now try halftoning via the error diffusion technique, using a threshold $T=108$ and the diffusion filter in [link] . It is most straightforward to implement this by performing the followingsteps on each pixel in raster order: 1. Initialize an output image matrix with zeros. 2. Quantize the current pixel using using the threshold $T$ , and place the result in the output matrix. 3. Compute the quantization error by subtracting the binary pixel from the gray scale pixel. 4. Add scaled versions of this error to “future” pixels of the original image, as depictedby the diffusion filter of [link] . 5. Move on to the next pixel. You do not have to quantize the outer border of the image. As above, compute an error image and calculate the MSE. Print out the quantized image, the error image, and note the MSE. The human visual system naturally lowpass filters halftone images. To analyze this phenomenon, filter each of the halftone images withthe Gaussian lowpass filter $h$ that you loaded in the previous section (from ycbcr.mat ), and measure the MSE of the filtered versions. Make a table that containsthe MSE's for both filtered and nonfiltered halftone images for each of the three methods. Does lowpass filtering reduce the MSE for each method? ## Inlab report 1. Hand in the original image and the three binary images. Make sure that they are all labeled, and that the mean square errors are notedon the binary images. 2. Compare the performance of the three methods based on the visual quality of the halftoned images.Also compare the resultant MSE's. Is the MSE consistent with the visual quality? 3. Submit the three error images. Which method appears to be the least signal dependent? Does the signal dependence seem to becorrelated with the visual quality? 4. Compare the MSE's of the filtered versions with the nonfiltered versions for each method.What is the implication of these observations with respect to how we perceive halftone images. important of enocomic what is division of labour division of labour can be defined as the separation of task to individuals in any economic system to specialize on it. what is demand curve demand curve is a downward sloping economic graph that shows the relationship between the price of product and the quantity of the product demanded. What is demand It refers to the quantity of a commodity purchased in the market at a price and at a point of time. Basanta refers to amount of commodities a consumer is willing and able to buy at particular price within a period of time Clifford It is the ability and willingness a customer buys a product or service at a particular price, place and time while other things remaining constant or the same kum In which case is opportunity cost is zero where no alternative is available Bhartendu who is the father of economic Suraj ok Tony Francis Opana Basanta What is monopoly it an economic situation where one individual controls the essential commodities or value product for maximum profit James monopoly is a market situation in which there is only one producer of a good or service which has no close substitutes eliano is where only one person is solely the price taker Francis what is Monopoly The word Monopoly is a Latin word. it is the combination of two words-Mono means single and Poly means seller. thus Monopoly means single seller. but this is not the full meaning of Monopoly. Monopoly must produce a product which does not have close substitute in the market. Basanta Monopoly is define as a firm in an industry with very high barriers to entry. Favour If close substitute is available, Monopoly will be a king without a crown. Basanta what does it array what are the differences between monopoly and.oligopoly what are the difference between monopoly and oligopoly Cbdishakur The deference between Monopoly and Oligopoly: Monopoly means:A single-firm-Industry producing and selling a product having no close business and Oligopoly means:A market structure where a few sellers compete with each other and each controls a significant portion of market . Basanta so that the price-output policy one affects the other. Basanta what are difference between physical policy and monotory policy hon what is economic what is economic Cbdishakur the word economic was derived from the Greek word oikos (a house)and mein(to manage) which in effect meant managing a household with the limited funds available 🙂. Basanta hon An Enquiry into the nature and causes of wealth Nations, this book clearly defined what economic is🙂🙂🙏🙏 thank you... Basanta good example about scarcity: money,time, energy, human or natural resources. Scarcity of resources implies that there supply is very much limited in relation to demand. Basanta equilibrium is a situation in which economic forces such as demand and supply are balanced and in the absence of external influences,the value of economic variables will not change hmnn Emakpor marginal cost and marginal revenue is equilibrium . Kho yessss Basanta what is equilibrium policy prescriptions for unemployment Am working on it Blacks Study Janelle study simeon what are the factors effecting demand sedule we should talk about more important topics, you can search it on Google n u will find your answer we should try to focus on how we can improve our society using economics shubham so good night hon Why do people buy more grapes in December than in July? lungi because at time know money While the American heart association suggests that meditation might be used in conjunction with more traditional treatments as a way to manage hypertension in a comparison of the stages of meiosis to the stage of mitosis, which stages are unique to meiosis and which stages have the same event in botg meiosis and mitosis Researchers demonstrated that the hippocampus functions in memory processing by creating lesions in the hippocampi of rats, which resulted in ________. The formulation of new memories is sometimes called ________, and the process of bringing up old memories is called ________. Got questions? Join the online conversation and get instant answers!
## 2.1 Definition of a Derivative We've been finding general formulas for the slope of a tangent to a curve.  This is such an important concept that it has its own name.  This is called the derivative of a function.  Today we will continue exploring ways to find the derivative of a function. Resources: • notes Assignment: • p101 #1-6all, 11, 15 Attachments: FileDescriptionFile size 12.2.1.notes.pdf 3930 kB